Law is Not Logic: The Exponential Dilemma in Generative AI Governance
The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence sets out precise mandates for developing, auditing, and testing AI technologies. This prescriptive approach contrasts with the U.S. common law tradition, which evolves through judicial interpretation and societal norms. By specifying exact guidelines and benchmarks, the Order may inadvertently embody a static regulatory stance, stifling the fluidity and adaptability intrinsic to common law. Such a rigid framework could deter innovation and adaptability, especially among AI startups, and thus conflict with the U.S. tradition of an evolving legal system.
Law is Not Logic
In the realm of regulating AI, particularly Generative AI, Justice Oliver Wendell Holmes, Jr.'s assertion that
"The life of the law has not been logic; it has been experience"
resonates profoundly. The regulatory journey is not merely about crafting logical rules; it is about adapting to the lessons that the rapid evolution of Generative AI presents. As Generative AI creates novel scenarios, old legal frameworks may find new utility. This calls for a flexible, experience-driven approach to regulation, one that accommodates the ever-changing landscape of AI and ensures that legal doctrines evolve in tandem with technological advances and their societal impacts.
The provisions in the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence lay down stringent thresholds, such as reporting requirements for models trained with computing power greater than 10^26 integer or floating-point operations. This metric, among others, may pose significant compliance challenges for AI startups. The technical conditions and reporting mandates, such as those covering any computing cluster with a theoretical maximum computing capacity greater than 10^20 integer or floating-point operations per second for training AI, may disproportionately burden startups that lack the resources to meet them, thereby stifling innovation and tilting the playing field in favor of larger, more established players.
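To make the 10^26 figure concrete, the back-of-envelope sketch below estimates whether hypothetical training runs would cross the reporting threshold. It relies on the widely used approximation that training a dense transformer costs roughly 6 floating-point operations per parameter per token; the model sizes and token counts are illustrative assumptions, not figures drawn from the Executive Order.

```python
# Back-of-envelope check of training runs against the Executive Order's
# 10^26-operation reporting threshold. Uses the common ~6 ops per parameter
# per token approximation for dense transformer training; the example model
# sizes and token counts below are hypothetical.

EO_MODEL_THRESHOLD_OPS = 1e26  # reporting threshold stated in the Executive Order

def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Rough training compute estimate: ~6 operations per parameter per token."""
    return 6 * parameters * tokens

# Hypothetical runs: (label, parameter count, training tokens)
runs = [
    ("startup-scale model", 7e9, 2e12),     # 7B parameters, 2T tokens
    ("frontier-scale model", 1e12, 20e12),  # 1T parameters, 20T tokens
]

for name, params, tokens in runs:
    ops = estimated_training_ops(params, tokens)
    flag = "exceeds" if ops > EO_MODEL_THRESHOLD_OPS else "is below"
    print(f"{name}: ~{ops:.2e} ops, {flag} the 10^26 reporting threshold")
```

Under these assumptions, a 7-billion-parameter model trained on 2 trillion tokens lands orders of magnitude below the threshold, while a trillion-parameter run on 20 trillion tokens crosses it, which is roughly where the compliance burden begins to bite.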
Regulatory Rigidity
The specified computational thresholds like 10^26 integer or floating-point operations might quickly become outdated given the exponential pace of technological advancement in the AI domain. As hardware capabilities improve and become more accessible, what seems like a high computational threshold today could become commonplace in a matter of months. Hence, rigid numerical benchmarks may necessitate frequent updates to remain relevant and realistic. This cycle of constant revision could create an uncertain regulatory landscape, making it difficult for AI startups to stay compliant and plan for the long term, thus impeding their growth and innovation.
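As a rough illustration of how quickly a fixed numeric benchmark can age, the sketch below computes how long a training run sitting well below the 10^26 threshold today would take to cross it, assuming the effective compute available for a given budget keeps doubling at a steady rate. Both the starting compute figure and the doubling time are assumptions chosen purely for illustration, not empirical claims.

```python
# Illustration of how a fixed compute threshold ages under exponential growth.
# The starting run size and the doubling time of affordable training compute
# are assumptions for illustration only.

import math

THRESHOLD_OPS = 1e26          # fixed reporting threshold in the Executive Order
current_run_ops = 1e25        # hypothetical training run achievable on today's budget
doubling_time_months = 6      # assumed doubling time of compute per dollar

# Number of doublings, and months, until the same budget crosses the threshold.
doublings_needed = math.log2(THRESHOLD_OPS / current_run_ops)
months_to_threshold = doublings_needed * doubling_time_months

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"At a {doubling_time_months}-month doubling time, a fixed 10^26 "
      f"threshold is reached in ~{months_to_threshold:.0f} months")
```

With these assumed numbers, a run only one-tenth of the threshold today would cross it in roughly twenty months, illustrating why a hard-coded benchmark may demand repeated revision.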
Common Law
The common law tradition in the United States, grounded in evolving judicial interpretations and societal norms, stands in contrast to the rigid numeric benchmarks delineated in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Renowned jurist Sir Edward Coke, who significantly influenced the development of common law, once articulated,
"Reason is the life of the law."
This statement emphasizes a responsive legal framework, one that adapts to the nuanced fabric of societal change. Justice Oliver Wendell Holmes, Jr. conveyed similar sentiments: the law is not simply about rules and logic applied neutrally to proven facts. He believed that even outdated and seemingly illogical legal doctrines survived because they found new utility; old legal forms were adapted to new societal conditions.
In contrast, the Executive Order's rigid numeric criteria on compute power, bandwidth, and the size and scale of “dual-use foundation models” seem to embody a more static and predefined approach. This stark differentiation in regulatory ethos might present challenges in harmonizing the dynamic landscape of AI innovation with a relatively rigid regulatory framework, as outlined in the Executive Order. Such fixed numeric benchmarks may not only deter the innovative spirit but also misalign with the broader, adaptable ethos of the US legal framework, which has historically thrived on a more fluid, experience-driven legal doctrine.
The Executive Order's rigid benchmarks, such as a specific threshold of computational power or the scale of “dual-use foundation models,” may not reflect the fast-evolving nature of AI technology. As the technology continues to advance at an exponential pace, these fixed benchmarks might quickly become outdated, necessitating frequent amendments to the Order. This rigidity may inadvertently create a stifling regulatory environment, especially for AI startups that are often at the forefront of innovation yet may lack the resources to comply with stringent reporting and operational mandates.
In sum, while the intention behind the Executive Order is to foster a safe and trustworthy AI ecosystem, the fixed numeric benchmarks it employs may not resonate well with the foundational principles and the evolutionary ethos of the US common law system, which values adaptability and responsiveness to changing societal conditions over predefined, rigid benchmarks.
Further reading
From Infinite Improbability to Generative AI: Navigating Imagination in Fiction and Technology
Human vs. AI in Reinforcement Learning through Human Feedback
Generative AI for Law: The Agile Legal Business Model for Law Firms
Generative AI for Law: From Harvard Law School to the Modern JD
Unjust Law is Itself a Species of Violence: Oversight vs. Regulating AI
Generative AI for Law: Technological Competence of a Judge & Prosecutor