SVAMC Leads the Way on AI in Arbitration

The Silicon Valley Arbitration and Mediation Center (SVAMC) has released its draft Guidelines on the Use of Artificial Intelligence (AI) in International Arbitration for public consultation.

What is SVAMC?

As explained on its website, the Silicon Valley Arbitration and Mediation Center (“SVAMC”), a non-profit foundation based in Palo Alto, California, serves the global technology sector.  SVAMC promotes efficient technology dispute resolution, including advancing the use of arbitration and mediation in technology and technology-related business disputes in Silicon Valley, throughout the U.S. and around the world.

SVAMC provides educational programming and related resources to technology companies and law firms. It does not administer cases. Rather, SVAMC collaborates with leading ADR providers, technology companies, law firms, neutrals and universities to address the merits of arbitration and mediation in resolving technology and technology-related disputes.

SVAMC publishes the annual List of the World’s Leading Technology Neutrals. “The Tech List®” is a highly acclaimed, peer-vetted list comprising exceptionally qualified arbitrators and mediators in the US and globally, all having particular experience and skill in the technology sector. 

The AI Guidelines

Recognizing the increasing role of AI, SVAMC’s Drafting Subcommittee formulated a set of best practices for the use of AI in international arbitration. The draft is being released for public consultation as SVAMC moves toward finalizing the Guidelines.

Here is a brief outline of the draft Guidelines, provided on SVAMC’s site:

PRELIMINARY PROVISIONS

Application of these Guidelines

Definition of AI

Non-derogation of any mandatory rules

CHAPTER 1: Guidelines applicable to all participants in international arbitration

1. Understanding the uses, limitations and risks of AI applications

2. Safeguarding confidentiality

CHAPTER 2: Guidelines for parties and party representatives

3. Duty of competence in the use of AI

4. Respect for the integrity of the proceedings and evidence

CHAPTER 3: Guidelines for arbitrators

5. Non-delegation of decision-making responsibilities

6. Respect for due process

7. Protection and disclosure of records

Understanding AI 

Among other things, the draft Guidelines and commentary provide a baseline for understanding AI and its role in arbitration, as well as the issues its use can create.

For example, the Guidelines make this observation: 

Generative AI tools produce natural-sounding and contextually relevant text based on speech patterns and semantic abstractions learned during their training. However, these outputs are a product of infinitely complex probabilistic calculations rather than intelligible “reasoning” (the so-called “black box” problem). Despite any appearance otherwise, AI tools lack self-awareness or the ability to explain their own algorithms. 

In response to this problem, participants may, as far as practical, use AI tools and applications that incorporate explainable AI features or otherwise allow them to understand how a particular output was generated based on specific inputs.  

They also observe that:

Large language models have a tendency to “hallucinate” or offer incorrect but plausible-sounding responses when they lack information to provide an accurate response to a particular query. Hallucinations occur because these models use mathematical probabilities (derived from linguistic and semantic patterns in their training data) to generate a fluent and coherent response to any question. However, they typically cannot assess the accuracy of the resulting output.

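To make that point concrete for non-technical readers, here is a deliberately simplified Python sketch. It is an illustration only, not code from any actual model or from the Guidelines: a language model samples its next words from probabilities learned from text patterns, and no step in that process checks whether the output is true.

```python
# A toy illustration of probabilistic text generation. Real language models
# operate at vastly larger scale, but the core mechanic is the same: sample
# the next token from a learned probability distribution, with no truth check.
import math
import random

def softmax(scores):
    """Turn raw pattern-based scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates to continue: "The leading case on this point is ..."
candidates = ["Smith v. Jones", "Acme Corp. v. Widget Co.", "In re TechCo"]
scores = [2.1, 1.7, 0.4]  # made-up fluency scores; none measures accuracy

probs = softmax(scores)
choice = random.choices(candidates, weights=probs, k=1)[0]
print("Model continues with:", choice)
# Whatever is sampled reads plausibly, but nothing above verified that the
# cited case exists -- which is exactly how a "hallucinated" citation arises.
```

Because the sampling step optimizes only for fluency, a confident-sounding but fabricated answer is a natural byproduct of the mechanism rather than a rare malfunction.
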
The Guidelines further observe that:

[E]xisting biases in the data may create, exacerbate or perpetuate any form of discrimination, racial, gender or other profiling in the search and appointment of individuals as arbitrators, experts, counsel, or any other roles in connection with arbitrations. Biases may occur when the underrepresentation of certain groups of individuals is carried over to the training data used by the AI tool to make selections or assessments.

Recognizing these and other issues with AI, the draft Guidelines require that “[a]ll participants using AI tools in connection with an arbitration should make reasonable efforts to understand each AI tool’s relevant limitations, biases, and risks and, to the extent possible, mitigate them.”

Protecting confidentiality 

The Guidelines also recognize confidentiality issues that use of AI can create.  Thus, Guideline 2 provides that “[o]nly AI tools that adequately safeguard confidentiality should be approved for uses that involve sharing confidential or legally privileged information with third parties. For this purpose, participants should review the data use and retention policies offered by the relevant AI tools and opt for more secure solutions.”   

Disclosing use of AI 

The draft Guidelines provide alternative versions of provisions governing disclosure of the use of AI tools and a request for comments as to which version is preferable.  One option  “identifies a range of factors that may be relevant in the assessment of whether disclosure is warranted, specifically whether (i) the output of an AI tool is to be relied upon in lieu of primary source material, (ii) the use of the AI tool could have a material impact on the proceeding, and (iii) the AI tool is used in a non-obvious and unexpected manner.”   

Another option makes disclosure of the use of AI mandatory “(i) when the output of AI tools is used to prepare or create materially relied-upon documents (including evidence, demonstratives, witness statements and expert reports) and (ii) when the output of that AI tool can have a material impact on the proceedings or their outcome.”

It will be interesting to see which option the public comments favor.

Guiding arbitrators

As for arbitrators’ use of AI, the most important guideline appears to be that “[a]n arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator’s decision-making function.” Further guidance is given on respecting due process and on the protection and disclosure of records.

Useful examples

The draft Guidelines also provide useful, but non-exhaustive, examples of the limits and risks of AI use by parties and arbitrators.

An ongoing process 

As noted, the draft Guidelines are being refined with public input and comment. Once completed, parties will be able to incorporate them into their arbitration agreements to govern the use of AI in an arbitration, but they are not “ready for prime time” until they have been finalized.

Have a look 

To review the draft and, if you are so inclined, comment on the Guidelines, visit SVAMC.org and follow the links to the draft Guidelines and the comment process.

SVAMC leading the way

Kudos to SVAMC for getting ahead of this issue as AI becomes ubiquitous, and for carefully thinking through what steps can be taken to account for it in arbitration.

 
