Breaking Norms
By dismissing the traditional question "Can machines think?" as too meaningless to deserve discussion, Alan Turing shattered the prevailing academic norms of his time. In his groundbreaking paper "Computing Machinery and Intelligence," published in October 1950 in the philosophical journal Mind, Turing broke with convention by shifting the focus from abstract philosophical debate to practical, observable interactions between humans and machines.
In the paper, Turing introduced the concept of the Imitation Game, now widely known as the Turing Test. The test proposed a pragmatic approach to studying machine intelligence: instead of defining thought, it evaluates whether a machine can hold a conversation indistinguishable from that of a person. Turing suggested that if a machine could imitate a human so well that an interrogator could not reliably tell them apart, it could be said to "think" in a meaningful way.
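To make the structure of the test concrete, here is a minimal sketch in Python of how one round of the Imitation Game could be organized. The function names (human_respondent, machine_respondent, naive_interrogator) and the random-guess interrogator are illustrative assumptions, not anything Turing specified; the two sample questions, however, come from his paper.

```python
import random

# Hypothetical respondent interfaces: in a real test these would be a person
# at a terminal and a conversational program. Here they are simple stand-ins.
def human_respondent(question: str) -> str:
    return f"A person's answer to: {question}"

def machine_respondent(question: str) -> str:
    return f"A machine's answer to: {question}"

def run_trial(interrogator, questions) -> bool:
    """Run one round of the Imitation Game.

    The two respondents are presented anonymously, in random order, as
    'A' and 'B'. The interrogator sees only their answers and must say
    which label it believes is the machine. Returns True if the
    interrogator identifies the machine correctly.
    """
    respondents = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:  # hide which label is which
        respondents = {"A": machine_respondent, "B": human_respondent}

    transcript = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in respondents.items()
    }
    guess = interrogator(transcript)  # returns 'A' or 'B'
    actual_machine = next(
        label for label, r in respondents.items() if r is machine_respondent
    )
    return guess == actual_machine

def naive_interrogator(transcript) -> str:
    # Placeholder strategy: guess at random. A real interrogator would probe
    # the answers for lapses only a machine (or only a human) would make.
    return random.choice(["A", "B"])

if __name__ == "__main__":
    questions = [
        "Please write me a sonnet on the subject of the Forth Bridge.",
        "Add 34957 to 70764.",
    ]
    trials = 1000
    correct = sum(run_trial(naive_interrogator, questions) for _ in range(trials))
    print(f"Interrogator identified the machine in {correct}/{trials} trials")
```

If the interrogator's accuracy stays near chance (about 50%) over many trials, the machine is, in Turing's operational sense, imitating a human successfully.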
Turing also delved into various objections that could be raised against the notion of machine intelligence. He systematically addressed and refuted several arguments, including:
The Theological Objection: This objection posits that thinking is a divine attribute granted exclusively to humans by God. Proponents argue that allowing machines to think would challenge religious doctrines that place humans above all other creations. Turing countered this by suggesting that limiting God's power to grant intelligence only to humans is itself a theological misstep. He asserted that it's presumptuous to claim we fully understand divine intentions and capabilities. Therefore, if God chose to endow machines with intelligence, it would not contradict religious beliefs but rather expand our understanding of them.
The "Heads in the Sand" Objection: This viewpoint reflects a fear of the potential consequences of acknowledging machine intelligence. Critics worry that accepting machines as thinking entities could lead to unsettling implications, such as machines surpassing human intelligence or undermining human uniqueness. Turing dismissed this objection as an expression of fear rather than a rational argument. He argued that avoiding the exploration of machine intelligence due to potential negative outcomes hinders scientific progress and is not a valid reason to dismiss the possibility altogether.
The Mathematical Objection: Based on Gödel's incompleteness theorems, this objection suggests that there are mathematical limitations inherent in formal systems that machines, operating as such systems, cannot overcome. It posits that machines will always encounter problems they cannot solve, whereas the human mind can sometimes intuit solutions beyond formal logic. Turing acknowledged that while machines have computational limitations, so do humans. He contended that the inability to solve every conceivable problem does not disqualify an entity from possessing intelligence. Moreover, he argued that machines could be designed to handle a wide range of problems effectively, thus exhibiting intelligent behavior within their operational constraints.
The Argument from Consciousness: This objection questions whether machines can possess consciousness, self-awareness, or subjective experiences—qualities often associated with true thinking and understanding. Critics claim that without consciousness, any machine behavior is merely simulated and lacks genuine thought. Turing responded by emphasizing the other minds problem: we cannot directly access or measure another being's consciousness, human or machine. We infer consciousness in others based on observable behavior. Therefore, if a machine behaves intelligently and indistinguishably from a human, we have no grounds to deny it the attribution of intelligence based on an unobservable quality like consciousness.
Lady Lovelace's Objection: Named after Ada Lovelace, who worked with Charles Babbage on his proposed Analytical Engine, an early design for a mechanical general-purpose computer, this objection asserts that machines can only do what we explicitly program them to do and cannot originate anything new. Turing countered this by highlighting the potential for machines to learn and adapt. He suggested that machines could be designed with the ability to modify their own instructions or learn from experience—what we now refer to as machine learning. This capability would allow machines to exhibit creativity or generate unexpected results not directly programmed by humans, thereby originating new ideas or solutions.
By preemptively tackling these criticisms, Turing strengthened his argument for a practical assessment of machine intelligence. His thorough examination of potential objections demonstrated not only his deep understanding of the philosophical landscape but also his commitment to advancing the discussion beyond theoretical limitations.
This bold move defied conventional thinking, which had been dominated by theoretical speculation since the early days of computing. Earlier, in 1936, Turing had already laid the mathematical foundations of computer science with his paper "On Computable Numbers, with an Application to the Entscheidungsproblem," introducing the Turing Machine: a theoretical device capable of simulating any algorithm by manipulating symbols on a tape according to a finite set of rules.
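To make that mechanism concrete, here is a minimal sketch of such a machine in Python. The encoding of the rules as a (state, symbol) → (write, move, next-state) table, the blank symbol "_", and the bit-flipping example are illustrative assumptions, not Turing's own formalism.

```python
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", steps=100):
    """Simulate a simple Turing machine.

    rules maps (state, symbol) -> (symbol_to_write, head_move, next_state),
    where head_move is -1 (left), +1 (right), or 0. The machine stops when
    it enters the state 'halt' or after `steps` steps.
    """
    cells = defaultdict(lambda: "_", enumerate(tape))  # unbounded tape, blank = "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells[head]                      # read the symbol under the head
        write, move, state = rules[(state, symbol)]
        cells[head] = write                       # write, then move the head
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example: a machine that flips every bit of its input and halts on a blank.
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_turing_machine(flip_rules, "10110"))  # prints 01001_ (trailing blank from the halt step)
```

Despite its simplicity, a rule table like this is expressive enough, in principle, to describe any algorithm, which is exactly the universality Turing established in 1936.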
During World War II (1939–1945), Turing applied his expertise practically by playing a crucial role in deciphering the German Enigma code at Bletchley Park. His work on developing the Bombe machine significantly aided the Allied war effort by automating key steps of code-breaking, allowing the Allies to read intercepted German military communications.
By proposing an empirical test for machine intelligence in his 1950 paper, Turing shifted the discourse toward observable evidence and practical experimentation. His willingness to challenge established norms not only revolutionized the field but also laid the foundational framework for modern artificial intelligence. Emphasizing empirical evidence over theoretical speculation, Turing encouraged a new way of thinking that continues to inspire innovation today.
Further reading
From Infinite Improbability to Generative AI: Navigating Imagination in Fiction and Technology
Human vs. AI in Reinforcement Learning through Human Feedback
Generative AI for Law: The Agile Legal Business Model for Law Firms
Generative AI for Law: From Harvard Law School to the Modern JD
Unjust Law is Itself a Species of Violence: Oversight vs. Regulating AI
Generative AI for Law: Technological Competence of a Judge & Prosecutor
Law is Not Logic: The Exponential Dilemma in Generative AI Governance
Generative AI & Law: I Am an American Day in Central Park, 1944
Generative AI & Law: Title 35 in 2024++ with Non-human Inventors
Generative AI & Law: Similarity Between AI and Mice as a Means to Invent
Generative AI & Law: The Evolving Role of Judges in the Federal Judiciary in the Age of AI
Embedding Cultural Value of a Society into Large Language Models (LLMs)
Lessons in Leadership: The Fall of the Roman Republic and the Rise of Julius Caesar
Justice Sotomayor on Consequence of a Procedure or Substance
From France to the EU: A Test-and-Expand Approach to EU AI Regulation
Beyond Human: Envisioning Unique Forms of Consciousness in AI
Protoconsciousness in AGI: Pathways to Artificial Consciousness
Artificial Consciousness as a Way to Mitigate AI Existential Risk
Human Memory & LLM Efficiency: Optimized Learning through Temporal Memory
Adaptive Minds and Efficient Machines: Brain vs. Transformer Attention Systems
Self-aware LLMs Inspired by Metacognition as a Step Towards AGI
The Balance of Laws with Considerations of Fairness, Equity, and Ethics
AI Recommender Systems and First-Party vs. Third-Party Speech
Building Products that Survive the Times at Robometrics® Machines
Autoregressive LLMs and the Limits of the Law of Accelerated Returns
The Power of Branding and Perception: McDonald’s as a Case Study
Monopoly of Minds: Ensnared in the AI Company's Dystopian Web
Generative Native World: Digital Data as the New Ankle Monitor
The Secret Norden Bombsight in a B-17 and Product Design Lessons
Kodak's Missed Opportunity and the Power of Long-Term Vision
The Role of Regulatory Enforcement in the Growth of Social Media Companies