Imperfectionism as a Feature in Defining AI Policy
Introduction
In the evolving landscape of artificial intelligence, the concept of Imperfectionism, embracing trial and adaptation through iterative decision-making, emerges as a crucial framework for defining effective AI policy. As Judge Jeffrey S. Sutton argues in 51 Imperfect Solutions: States and the Making of American Constitutional Law, there is value in allowing states to take the first shot at resolving legal issues. If an initial local decision proves flawed, there remains an opportunity for a second shot: a higher-level intervention to correct or refine the approach. This process is especially relevant in AI policy, where societal and cultural variations make a one-size-fits-all approach difficult, if not impossible. By embracing Imperfectionism, policymakers can focus on creating flexible and adaptable strategies that evolve over time, rather than relying on rigid solutions that may fail to address the complexities of different communities.
The signing of the Mayflower Compact in 1620
This image of the signing of the Mayflower Compact in 1620 brings to life the scene within the dimly lit, narrow cabin of the Mayflower, where leaders such as John Carver, William Bradford, and Myles Standish gather around a small wooden table. Clad in Puritan attire, with heavy woolen coats and broad-brimmed hats, these men draft and sign a foundational agreement. Their serious expressions and the soft, flickering lantern light underscore the gravity of their actions as they commit to unity and order in an unfamiliar land. Women and children can be seen in the background, reinforcing the communal spirit and hope for a shared future. This moment of determination, captured here, symbolizes the enduring principles of governance and collective responsibility that continue to shape society.
The Mayflower Compact, signed on November 11, 1620, aboard the Mayflower, was a pivotal document in American history. It established a framework for self-governance among the Pilgrims and other settlers before they disembarked to establish Plymouth Colony.
Historical Context
The Mayflower, carrying 102 passengers, including 41 Pilgrims seeking religious freedom, intended to settle near the Hudson River in Virginia. However, after being blown off course, they anchored in what is now Provincetown Harbor, Massachusetts. Recognizing the need for order and unity in a land outside their original charter's jurisdiction, the male passengers drafted and signed the Mayflower Compact.
Key Elements of the Mayflower Compact
Purpose: To establish a "civil body politic" for better ordering and preservation.
Loyalty: Affirmed allegiance to King James of England.
Governance: Agreed to enact and obey laws deemed necessary for the colony's general good.
Significance
The Mayflower Compact is often regarded as a foundational step toward self-governance in the New World. It set a precedent for later democratic principles in America, emphasizing the importance of a social contract and collective responsibility.
State-Level Experimentation and the Role of Imperfectionism
Judge Sutton emphasizes the importance of state courts and state constitutions in shaping American constitutional law, complementing the federal role in protecting individual liberties. The same logic extends to AI, which is not just a technological phenomenon but a deeply cultural and societal one, influencing daily life in countless ways. AI applications can vary significantly in their ethical implications depending on local culture, economic conditions, and community values. The ethical use of AI therefore demands sensitivity to local norms and values, reflecting the diverse perspectives across different communities. This sensitivity is not just a matter of ethical obligation; it is also essential for the practical effectiveness of AI systems, which need to align with the specific needs and expectations of the people they serve.
Tackling AI as a policy challenge should not be about imposing a uniform solution across the entire U.S. population of 337 million (2024). Given AI's complexity and potential impact, experimenting with a single sweeping law is unwise. Instead, allowing local governments to innovate and experiment provides the best pathway forward. The United States is a large and diverse country, and this diversity should be leveraged to test, refine, and perfect AI regulations through localized approaches. Each state can serve as a laboratory of democracy, iterating and improving upon the rules that govern AI while contributing insights that can inform federal policy. By fostering a culture of local experimentation, we can generate a broad array of solutions that address specific challenges in different regions, ultimately creating a more robust and nuanced framework for AI governance.
Iterative Policy Development and Localized Innovation
The iterative process of trial, adaptation, and improvement is at the core of how effective AI policy should be developed. Just as technological advancements often rely on iterative improvements, so too should the policies that govern these technologies. Localized experimentation can help identify what works well and what doesn't, and successful approaches can be scaled up or adapted to suit other regions. This method not only mitigates the risks associated with untested, large-scale regulations but also encourages a dynamic policy environment that evolves in tandem with technological advances. States that succeed in crafting innovative AI regulations can serve as models, offering proven strategies that other jurisdictions can adopt or adapt, thereby promoting a collaborative approach to policy development.
Balancing Federal Guidance with Local Flexibility
AI policy requires drawing on federal guidance while adapting these frameworks to meet the specific needs and values of local jurisdictions. The federal government can provide overarching principles to ensure that fundamental rights are protected, but the details of implementation should be flexible enough to adapt to local conditions. As discussed in the article Law as an Accelerator of Technology, existing federal laws and civil rights protections can be applied to new technologies like AI, and these federal laws can then be interpreted by the states in light of local norms and societal considerations. This convergence helps strike a balance between the need for national standards and the benefits of localized control, ensuring that AI technologies are implemented in ways that maximize their benefits while minimizing potential harms.
Conclusion
Imperfectionism provides a valuable framework for defining AI policy by emphasizing the importance of flexibility, local experimentation, and iterative adaptation. By allowing states to take the lead in experimenting with AI regulations, we can harness the diversity of the United States to craft effective, context-sensitive policies that reflect the unique needs of different communities. At the same time, federal guidance can ensure that these local efforts align with broader national priorities and protect fundamental rights. The convergence of federal oversight and local innovation thus represents a powerful model for accelerating the ethical and effective adoption of AI technologies, ultimately leading to a more inclusive and resilient policy framework that benefits society as a whole.
Further reading
From Infinite Improbability to Generative AI: Navigating Imagination in Fiction and Technology
Human vs. AI in Reinforcement Learning through Human Feedback
Generative AI for Law: The Agile Legal Business Model for Law Firms
Generative AI for Law: From Harvard Law School to the Modern JD
Unjust Law is Itself a Species of Violence: Oversight vs. Regulating AI
Generative AI for Law: Technological Competence of a Judge & Prosecutor
Law is Not Logic: The Exponential Dilemma in Generative AI Governance
Generative AI & Law: I Am an American Day in Central Park, 1944
Generative AI & Law: Title 35 in 2024++ with Non-human Inventors
Generative AI & Law: Similarity Between AI and Mice as a Means to Invent
Generative AI & Law: The Evolving Role of Judges in the Federal Judiciary in the Age of AI
Embedding Cultural Value of a Society into Large Language Models (LLMs)
Lessons in Leadership: The Fall of the Roman Republic and the Rise of Julius Caesar
Justice Sotomayor on Consequence of a Procedure or Substance
From France to the EU: A Test-and-Expand Approach to EU AI Regulation
Beyond Human: Envisioning Unique Forms of Consciousness in AI
Protoconsciousness in AGI: Pathways to Artificial Consciousness
Artificial Consciousness as a Way to Mitigate AI Existential Risk
Human Memory & LLM Efficiency: Optimized Learning through Temporal Memory
Adaptive Minds and Efficient Machines: Brain vs. Transformer Attention Systems
Self-aware LLMs Inspired by Metacognition as a Step Towards AGI
The Balance of Laws with Considerations of Fairness, Equity, and Ethics
AI Recommender Systems and First-Party vs. Third-Party Speech
Building Products that Survive the Times at Robometrics® Machines
Autoregressive LLMs and the Limits of the Law of Accelerated Returns
The Power of Branding and Perception: McDonald’s as a Case Study
Monopoly of Minds: Ensnared in the AI Company's Dystopian Web
Generative Native World: Digital Data as the New Ankle Monitor
The Secret Norden Bombsight in a B-17 and Product Design Lessons
Kodak's Missed Opportunity and the Power of Long-Term Vision
The Role of Regulatory Enforcement in the Growth of Social Media Companies