Artificial Intelligence (AI), particularly generative AI, has arrived in arbitration, whether we invited it in or not. Counsel is using it to summarize documents, draft outlines, and test arguments. Some arbitrators are understandably curious about whether (and how) it can assist with facilitating efficient and fair proceedings.
The anxiety surrounding AI in arbitration is often framed in dramatic terms: loss of control, compromised neutrality, or “robots deciding cases.” The risk is far more ordinary and manageable. It is confidentiality.
AI does not threaten arbitration because it is intelligent. It threatens arbitration when it is used casually, opaquely, or without regard to how information is stored, processed, or reused. Once confidentiality is properly understood, most of the fear around AI use fades and sensible boundaries emerge.
Generative AI tools do not “understand” disputes, evidence, or credibility. They predict text based on patterns in data. They are tools, not decision-makers. The arbitrator’s role (evaluating evidence, assessing witness credibility, applying the law, and exercising judgment) cannot be delegated to AI without undermining the integrity of the process. The key question is not whether AI may be used in arbitration, but how it is used and with what safeguards.
Generative Artificial Intelligence (Generative AI) refers to systems capable of producing new content, most commonly text, based on user prompts (i.e., questions or instructions). These tools may assist with drafting, summarization, or organization, but they do not exercise judgment or discretion. Common examples of Generative AI tools are Claude, ChatGPT, Gemini, Midjourney, and Adobe Firefly/Photoshop.
Large Language Models (LLMs) are the underlying technology used by most Generative AI tools. They generate language probabilistically and do not assess accuracy, fairness, bias, or procedural propriety. “GPT” (Generative Pre-trained Transformer) is a technical designation for a class of LLMs. Use of such tools does not alter the arbitrator’s non-delegable duties.
Arbitral institutions and bar association authorities avoid highly technical definitions in favor of functional descriptions. For example, the American Arbitration Association (AAA) guidance frames AI in functional, process-oriented terms, focusing on disclosure, confidentiality, procedural fairness, and enforceability, rather than on technical design. It treats AI as analogous to existing legal tools such as e-discovery platforms, research databases, and drafting software, with emphasis on preserving process integrity, party fairness, and arbitrator independence.
Arbitration rests on party expectations of transparency and confidentiality. Not all AI tools treat data the same way. Some retain user inputs or train models on them, while others are closed-loop systems designed to prevent data reuse. Understanding how a given tool handles data, and what ethical obligations its use creates toward the parties, is key.
In conclusion: do understand the tool before using it; do retain independent judgment; do verify the output; and do consider procedural guidance early. Don’t upload confidential submissions into public AI tools; don’t delegate decision-making; and don’t rely on AI output without verification. Discuss AI use early in the arbitration process, ask counsel if and how they will use AI, disclose your own use, and memorialize the agreed approach in a procedural order. AI is neither a shortcut nor a threat. With clarity, discipline, and respect for confidentiality, it can be integrated without compromising arbitral integrity, as ethical duties in arbitration turn on how a tool is used, not on its engineering design.

Jennifer Lupo
CCA Associate