Generative Native World: Deliberating Blame

Photos © Aditya Mohan. These views are not legal advice but business opinion based on reading some English text written by a set of intelligent people.

In the rapidly evolving digital landscape, the integration of Foundation Models such as Large Language Models (LLMs) into various sectors raises intriguing legal and ethical questions, especially when the products built on such models produce erroneous outputs. The crux of the debate is the allocation of responsibility for these mistakes. This discourse is relevant to scenarios involving legal professionals as well as the broader public, including corporations.

The Legal Framework for Lawyers: Rule 11(b) 

For attorneys and law firms, reliance on LLMs for drafting pleadings, motions, and other legal documents introduces a complex layer of accountability. Under Rule 11(b) of the Federal Rules of Civil Procedure, lawyers must ensure that any submission to the court is not only proper but also warranted by existing law or by a non-frivolous argument for extending, modifying, or reversing existing law. The rule underscores the importance of attorney diligence in verifying the information and arguments generated by LLMs. In essence, while LLMs serve as powerful tools for legal research and document preparation, the ultimate responsibility for the content rests squarely on the shoulders of the attorneys. This provision serves as a safeguard, ensuring that the deployment of advanced technologies, such as LLMs, does not compromise the integrity of legal processes.

Beyond the Legal Sphere: A Broader Perspective

The responsibility extends beyond the legal profession to encompass both individuals and corporations that deploy LLMs. The premise is straightforward: creators and users of technology must ensure it operates within ethical and legal boundaries. A case in point is Moffatt v. Air Canada, 2024 BCCRT 149, which arose from a 2022 incident involving Air Canada's AI-powered customer chatbot. The chatbot, designed to assist customers with inquiries, erroneously informed a customer, Jake Moffatt, that he could be compensated for airfare under certain conditions. When the information proved inaccurate, the ensuing legal challenge highlighted a significant principle: the entity behind the technology is accountable for its output.

The tribunal's response to Air Canada's defense—that the chatbot acted independently—was unequivocal. It emphasized that the chatbot, despite its interactive nature, formed an integral part of Air Canada's digital offerings. Consequently, the airline was held responsible for all information disseminated through its website, irrespective of the source. This ruling reinforces the notion that companies cannot absolve themselves of liability by attributing errors to the autonomous operations of their technological tools.

The Path Forward: Safeguards and Responsibility

The generative capabilities of LLMs, while innovative, introduce a degree of unpredictability. Their tendency to "hallucinate," generating creative yet sometimes factually inaccurate content, necessitates robust safeguards. These measures should span both technological solutions that improve the accuracy of outputs and legal frameworks that clearly delineate responsibility, where possible.
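One common technological safeguard is to ground a chatbot's policy claims in an approved source of truth before they ever reach a customer. The sketch below is a minimal, hypothetical illustration of that idea: the policy names, values, and function are invented for this example, not drawn from any real airline system. If a model-generated answer asserts a policy that does not match the approved record, the answer is withheld and the conversation is escalated to a human agent.

```python
# Hypothetical approved-policy store; in practice this would be the
# company's authoritative policy database, not a hard-coded dict.
APPROVED_POLICIES = {
    "refund_window_days": 30,
    "bereavement_fare_retroactive": False,
}

def grounded_or_escalate(model_answer: str, claimed_policy: str, claimed_value) -> str:
    """Return the model's answer only if its policy claim matches the
    approved record; otherwise hand off to a human agent."""
    if APPROVED_POLICIES.get(claimed_policy) == claimed_value:
        return model_answer
    return "Let me connect you with an agent who can confirm this policy."

# The model incorrectly claims retroactive bereavement refunds are available,
# so the guardrail suppresses the answer and escalates instead.
reply = grounded_or_escalate(
    "You can apply for a bereavement refund after your travel is complete.",
    "bereavement_fare_retroactive",
    True,
)
```

A check like this does not eliminate hallucinations, but it narrows the surface area for the kind of error at issue in the Air Canada dispute: statements about company policy that the company never authorized.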

The incident with Air Canada serves as a poignant reminder of the essential role that oversight and accountability play in the deployment of AI technologies. As LLMs and similar technologies become increasingly woven into the fabric of society, the collective responsibility to ensure their ethical and accurate use becomes paramount. This includes not just legal professionals, but all users and creators of AI technologies.


The deliberation over blame in the generative native world highlights the need for a balanced approach that embraces the benefits of LLMs while addressing the legal and ethical implications of their errors. Through a combination of technological innovation and legal clarity, it is possible to navigate the complexities of this new generative native world.


Further reading