
AI and Law: Navigating the Legal Landscape of Artificial Intelligence Symposium

AI Regulation and Privacy Panel

AI & IP Panel

AI for Legal Services Panel

Opening Remarks: Dean Hari M. Osofsky

Description

Please join us at Northwestern Pritzker School of Law to delve into the multifaceted legal and ethical aspects of AI, connecting the dots between AI and intellectual property, AI regulation, AI and privacy, and AI in legal services. This symposium is co-hosted by the Northwestern Law Journal of Technology and Intellectual Property and the Northwestern University Law and Technology Initiative.

Co-organizers: Daniel W. Linna Jr. and Gianna Miller, JTIP Symposium Editor

Speakers

Sabine Brunswicker – Professor for Digital Innovation at Purdue University, Director of the Research Center for Open Digital Innovation

Bryan Choi – Associate Professor of Law at Moritz College of Law & CSE at Ohio State University

Jonathan Choi – Professor of Law at University of Southern California Gould School of Law

April Dawson – Associate Dean of Technology and Innovation and a Professor of Law at NCCU School of Law

Mehtab Khan – Fellow at the Berkman Klein Center for Internet & Society at Harvard University

Nicole Morris – Professor of Practice at Emory University School of Law, Director of the Innovation and Legal Tech Initiative (ILTI)

JJ Prescott – Henry King Ransom Professor of Law, Professor of Economics at the University of Michigan, Co-Director of Empirical Legal Studies Center and Program in Law and Economics

Pamela Samuelson – Richard M. Sherman Distinguished Professor of Law and Information at University of California, Berkeley, Co-Director of Berkeley Center for Law & Technology

Harry Surden – Professor of Law at the University of Colorado Law School

Agenda & Abstracts

Agenda:


Panelist Abstracts:

Sabine Brunswicker
Title: The Impact of Empathy in Conversational AI on Perceived Trustworthiness and Usefulness: Insights from a Behavioral Experiment with a Legal Chatbot
Abstract:
With advances in data-driven machine learning (ML) such as deep learning, and in natural language processing, artificial intelligence (AI) is transforming everyday life. The rise of large language models (LLMs), including the foundation models behind ChatGPT, has fostered the general belief that online “chatbots” can do more than support citizens in day-to-day tasks like online shopping. Indeed, proponents argue that chatbots can possess a level of “social intelligence” that allows them to render services in areas like law and healthcare, which are characterized by deeply interpersonal and empathic relationships between a human expert and a citizen. Although existing research has shown that empathy is crucial for designing chatbot conversations that are perceived as trustworthy and useful, I argue that there is a major research gap: existing research fails to disentangle a chatbot’s “cognitive” intelligence – its ability to provide factually correct answers – from the social and emotional intelligence it conveys through language. As part of collaborative research with Northwestern University, I present results of a first behavioral study within a broader research agenda on empathy in conversational AI. Guided by linguistic theories of syntax and rhetoric, we developed a first behavioral theory of empathy in language display to explain relational outcomes of human-AI conversations in terms of cognitive effort, helpfulness, and trustworthiness. Using this theory, we designed a chatbot that integrated rule-based logic for empathy in language display, using syntactic and rhetorical linguistic elements that evoke empathy, kept distinct from the chatbot’s knowledge-based legal rules. Through a randomized controlled experiment with a 2-by-3 factorial design involving 277 participants, we compared the outcomes generated by an empathetic chatbot, a non-empathetic chatbot using the same legal rules, and a non-conversational service in the form of frequently asked questions (“FAQs”). The results indicate that subtle changes in language syntax and style can have substantial implications for the outcomes of human-AI conversations in terms of perceived trustworthiness, usefulness, and cognitive effort. I will conclude by providing an overview of ongoing work that aims to align an open LLM through a neuro-symbolic architecture integrating rule-based models informed by this behavioral study with generative and foundation AI, and by discussing alternative statistical views of trustworthiness in communication relationships informed by information theory and reachability analysis.
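To make the experiment's design concrete, here is a minimal sketch of how a rule-based empathy layer can sit on top of, but stay separate from, a chatbot's knowledge-based legal rules, mirroring the three conditions compared above. This is an illustration only, not the study's implementation; every rule, function, and answer below is hypothetical.

```python
# Minimal sketch: the chatbot's knowledge-based legal rules live in one
# component, and a rule-based "empathy in language display" layer is
# applied on top. All rules and wording are hypothetical illustrations.

def legal_answer(question: str) -> str:
    """Knowledge-based legal rules: the factual content of the answer."""
    if "security deposit" in question.lower():
        # Hypothetical rule for illustration, not actual legal advice.
        return ("Your landlord must return the deposit within 30 days "
                "of the end of the tenancy.")
    return "I do not have a rule covering that question."

def add_empathy(answer: str) -> str:
    """Syntactic and rhetorical elements intended to evoke empathy,
    layered over the same legal content."""
    acknowledgment = "I understand this situation can be stressful."
    softener = "Based on what you have shared, "
    return f"{acknowledgment} {softener}{answer[0].lower()}{answer[1:]}"

question = "When do I get my security deposit back?"
plain = legal_answer(question)    # non-empathetic chatbot condition
empathetic = add_empathy(plain)   # empathetic chatbot condition
# The third condition in the study was a static FAQ page with the same
# legal content but no conversation at all.
print(plain)
print(empathetic)
```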

Bryan Choi
Title: AI Standards of Care
Abstract:
What should the law of AI safety require? Thus far, leading approaches to AI regulation have embraced an ex ante risk regulation approach. But closer examination shows that these broad, horizontal efforts are thin on substantive details and delegate much to industry self-regulation. Similarly, I have argued that ex post tort liability approaches will have to defer to “professional judgment” and self-regulation unless a consensus standard of care can be established. Conventional software work has defied efforts to define such a standard of care, and AI work may be no different. Yet, there is also greater reason for hope that AI work will prove to be more conducive to standardization.

Jonathan Choi
Title: Lawyering in the Age of Artificial Intelligence
Abstract:
We conducted the first randomized controlled trial to study the effect of AI assistance on human legal analysis. We randomly assigned law school students to complete realistic legal tasks either with or without the assistance of GPT-4. We tracked how long the students took on each task and blind-graded the results.

We found that access to GPT-4 only slightly and inconsistently improved the quality of participants’ legal analysis but induced large and consistent increases in speed. AI assistance improved the quality of output unevenly—where it was useful at all, the lowest-skilled participants saw the largest improvements. On the other hand, AI assistance saved participants roughly the same amount of time regardless of their baseline speed. In follow-up surveys, participants reported increased satisfaction from using AI to complete legal tasks and correctly guessed the tasks for which GPT-4 was most helpful.
These results have important descriptive and normative implications for the future of lawyering. Descriptively, they suggest that AI assistance can significantly improve productivity and satisfaction, and that AI tools can be selectively employed by lawyers in the areas where they are most useful. Because these tools have an equalizing effect on performance, they may also promote equality in a famously unequal profession. Normatively, our findings suggest that law schools, lawyers, judges, and clients should affirmatively embrace AI tools and plan for a future in which they will become widespread.
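To illustrate the kind of treatment-effect comparison this randomized design supports, the sketch below simulates treated and control groups and computes differences in mean quality and mean completion time. All numbers, effect sizes, and variable names are invented for illustration and are not the study's data or results.

```python
# Sketch of a treatment-effect comparison for a randomized design like
# this one. All data below are simulated; nothing here reflects the
# study's actual measurements or effect sizes.
import random

random.seed(0)
participants = []
for _ in range(200):
    treated = random.random() < 0.5            # random assignment to GPT-4
    skill = random.gauss(70, 10)               # baseline skill
    quality = skill + (2 if treated else 0) + random.gauss(0, 5)
    minutes = 60 - (15 if treated else 0) + random.gauss(0, 8)
    participants.append((treated, quality, minutes))

def mean(values):
    return sum(values) / len(values)

treated_rows = [p for p in participants if p[0]]
control_rows = [p for p in participants if not p[0]]

quality_effect = (mean([q for _, q, _ in treated_rows])
                  - mean([q for _, q, _ in control_rows]))
time_effect = (mean([m for _, _, m in treated_rows])
               - mean([m for _, _, m in control_rows]))

print(f"quality effect: {quality_effect:+.1f} points (small)")
print(f"time effect: {time_effect:+.1f} minutes (negative = faster)")
```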

April Dawson
Title: Constitutional AI and Algorithmic Adjudication: The Promise of a Better AI Decision-Making Future?
Abstract:
Algorithmic adjudication involves using AI to assist in or decide legal disputes. The models typically used in these emerging decision-making systems are traditionally trained AI systems, trained on large data sets so that the system can render a decision or prediction based on past practices. However, the resulting decisions often perpetuate existing biases and can be difficult to explain. Algorithmic decision-making models built on a constitutional AI framework (like Anthropic’s LLM Claude) may produce results that are more aligned with societal values and more explainable. I will discuss society’s movement toward algorithmic adjudication, the challenges associated with using traditionally trained AI in these decision-making models, and whether there is potential for better outcomes with constitutional AI models.
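For context, constitutional AI steers a model with an explicit, written set of principles, typically through a loop in which the model critiques and then revises its own draft against each principle. The sketch below is a schematic of that loop only; the principles, function names, and toy stand-in model are hypothetical, not Anthropic's implementation.

```python
# Schematic of a constitutional-AI-style critique-and-revise loop.
# `model` stands in for any text-generation function; the principles
# and all names here are hypothetical illustrations.

PRINCIPLES = [
    "The decision must not rely on protected characteristics.",
    "The decision must cite the rule it applies.",
]

def constitutional_decision(model, case_facts: str) -> str:
    draft = model(f"Decide this dispute: {case_facts}")
    for principle in PRINCIPLES:
        critique = model(f"Critique this draft against '{principle}': {draft}")
        draft = model(f"Revise the draft to address the critique. "
                      f"Draft: {draft} Critique: {critique}")
    return draft  # critiques could also be logged as an explanation trace

# Toy stand-in so the sketch runs end to end; a real system would call
# an actual LLM here.
toy_model = lambda prompt: prompt.split(": ", 1)[-1][:100]
print(constitutional_decision(toy_model, "tenant seeks deposit return"))
```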

Peter Henderson
Title: Foundation Models and Fair Use
Abstract:
General-purpose foundation models, large machine learning models trained on mountains of scraped data, have opened the door to a flood of copyright litigation. While model creators will argue for fair use defenses, the likelihood of their success will turn on the technical design decisions that they make when they train and deploy a model. In this talk, we discuss the technical mitigations that model creators can employ to help foundation models stay in line with fair use. We argue that more research is needed to align mitigation strategies with the current state of the law, and we suggest that the law and technical mitigations should co-evolve to achieve the goals of fair use doctrine.
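One concrete instance of such a mitigation is output filtering against near-verbatim reproduction of training text. The toy check below flags an output whose long word n-grams overlap heavily with a training document; production memorization filters are far more robust, and the n-gram length and threshold here are arbitrary choices for illustration.

```python
# Toy illustration of one mitigation: refuse model outputs that
# reproduce long verbatim spans of a training document. Real filters
# are far more sophisticated; parameters here are arbitrary.

def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_memorized(output: str, training_doc: str,
                    n: int = 8, threshold: float = 0.3) -> bool:
    out = ngrams(output, n)
    if not out:
        return False  # output too short to judge
    overlap = len(out & ngrams(training_doc, n)) / len(out)
    return overlap >= threshold

doc = ("it was the best of times it was the worst of times "
       "it was the age of wisdom")
generated = "it was the best of times it was the worst of times"
print(looks_memorized(generated, doc))  # True: near-verbatim reproduction
```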

Mehtab Khan
Title: Aligning Fair Use with AI Governance
Abstract:
Companies developing and deploying generative AI tools have been on the receiving end of multiple copyright infringement lawsuits in recent months. There are ongoing debates about how fair use may or may not apply in these cases. However, there is an underlying concern raised by these lawsuits that is currently not being given enough attention: fair use is being conflated with AI governance questions. We need a better accounting of these lurking AI governance questions while we also attempt to answer the fair use question. In this paper, I identify key AI governance questions raised by these copyright lawsuits that need attention. I offer a framework to align the fair use assessment in generative AI cases with the AI governance principles of transparency and contestability. My analysis would allow fair use to be more attentive to the rapidly evolving AI regulation landscape, while retaining a balance between copyright holders and the public’s interest in access and innovation.

Nicole Morris
Title: Navigating the Intersection of AI and Trade Secrets
Abstract:
While generative AI presents exciting opportunities for increased efficiency and productivity, companies must navigate trade-offs involving intellectual property protection and the potential for sensitive information disclosure. This presentation delves into both the positive possibilities and emerging challenges presented by these innovative generative AI tools.

JJ Prescott
Title: Using AI to Address the Pro Se Representation Gap
Abstract:
In a few short years, court-connected ODR has shown itself capable of dramatically improving access to justice by reducing or eliminating barriers rooted in the simple fact that courts have traditionally offered dispute resolution services only during certain hours, only in particular physical places, and primarily through traditional face-to-face proceedings. Given the monopoly that courthouses have long had on resolving many legal issues, too many Americans have discovered their rights are simply too difficult or costly to exercise. As court-connected ODR systems spread, offering new types of dispute resolution services everywhere and often at any time, people will soon find themselves with the law and the courts at their fingertips. But robust access to justice requires more than just the raw, low-cost opportunity to resolve disputes. Existing ODR platforms seek to replicate in-person procedures, simplifying and clarifying steps where possible, but litigants without representation still proceed without experience, expertise, guardrails, or the ability to gauge risk or likely outcomes. Injecting ODR with a dose of data science has the potential to address many of these shortfalls. Enhanced ODR is unlikely to render representation obsolete, but it can dramatically reduce the gap between the “haves” and the “have nots” and, on some dimensions—where machines outperform humans (e.g., minimizing agency costs)—next generation platforms may be a significant improvement.
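As a minimal sketch of what injecting ODR with data science could look like, the example below gives a self-represented litigant a rough estimate of their chance of prevailing, computed from similar past cases. The case features, historical records, and numbers are all invented for illustration.

```python
# Toy sketch of one "enhanced ODR" feature: a rough outcome estimate
# for a self-represented litigant, drawn from similar past cases.
# Features and historical records are made up for illustration.

past_cases = [
    # (amount_in_dispute, has_written_lease, claimant_won)
    (1200, True, True), (800, False, False), (1500, True, True),
    (500, False, False), (950, True, True), (700, True, False),
]

def estimate_win_rate(amount: float, has_lease: bool) -> float:
    similar = [won for amt, lease, won in past_cases
               if lease == has_lease and abs(amt - amount) <= 500]
    # Fall back to an uninformative 50% when no similar cases exist.
    return sum(similar) / len(similar) if similar else 0.5

# A litigant filing a $1,000 deposit claim with a written lease:
print(f"estimated chance of prevailing: {estimate_win_rate(1000, True):.0%}")
```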

Pamela Samuelson
Title: Generative AI Meets Copyright
Abstract:
Sixteen lawsuits charging generative AI developers with copyright-related violations are pending in US federal courts. Thirteen are class actions brought on behalf of visual artists, programmers, and fiction and nonfiction authors. All but one of these cases claim that the use of in-copyright works as training data infringes copyright. Most also claim that outputs generated in response to user prompts infringe authorial derivative work rights. This talk will explain why so many people are upset about generative AI and assess how plausible the plaintiffs’ claims are.

Harry Surden
Title: Advances in Artificial Intelligence and Law: ChatGPT, Large Language Models (LLMs), and Legal Practice
Abstract:
In the past two years, rapid advancements in artificial intelligence (AI) have started to influence various professions, including law. This presentation will provide an accessible overview of the latest developments in large language model (LLM) technology, such as ChatGPT, exploring how these new AI technologies work, their capabilities, and their current limitations. We will also explore the emerging role of these technologies in legal practice, examining their strengths and weaknesses in current legal tasks, as well as their potential impact on legal practice in the near future.
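As a miniature illustration of the core mechanism behind these systems, the toy bigram model below predicts the next word from word-pair counts. LLMs like ChatGPT perform the same next-token prediction task, but with large neural networks trained on vast corpora rather than a small count table.

```python
# Toy bigram "language model": predicts the next word from counts, the
# miniature ancestor of the next-token prediction that LLMs perform at
# vastly larger scale. The corpus is a made-up snippet for illustration.
from collections import Counter, defaultdict

corpus = ("the court held that the contract was valid and the court "
          "held that the claim was barred").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    return bigrams[word].most_common(1)[0][0] if bigrams[word] else "?"

print(predict_next("court"))  # -> "held"
```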

Charlotte Tschider
Title: Humans Outside the Loop (forthcoming, Yale J.L. & Tech. 2024)
Abstract:
Significant issues in negligence and products liability schemes, including contractual limitations on liability, separate the organizations that create AI products from the actual harm, obscure the origin of problems, and reduce the likelihood of plaintiff recovery. AI thus offers a unique vantage point for analyzing the limits of tort law for these types of technologies: it challenges long-held divisions and theoretical constructs and frustrates tort law’s goals. This paper explores key impediments to realizing those goals and proposes an alternative regulatory scheme that reframes liability from the human in the loop to the humans outside the loop.

Registration & Logistical Details

Please use the registration link above to join us!

Date: Friday, February 16th, 2024, 8:00 AM – 5:00 PM

Location (In-person ONLY): Thorne Auditorium, Northwestern Pritzker School of Law, 375 E. Chicago Ave.

This program is approved for 4.75 general CLE credit hours in Illinois. To receive Illinois CLE credit, please fill out and turn in a paper CLE attendance log at the program. Completed attendance logs must be turned in by March 1st, 2024; late requests for credit will not be accepted. For questions, email external-partnerships@law.northwestern.edu.
