Building Robot Laws & Avoiding the Law of the Horse

Content including text and images © Aditya Mohan. All Rights Reserved. Robometrics, Amelia, Living Interface and Skive it are trademarks of Skive it, Inc. The content is meant for human readers only under 17 U.S. Code § 106. Access, learning, analysis or reproduction by Artificial Intelligence (AI) of any form, directly or indirectly, including but not limited to AI agents, LLMs, foundation models, and content scrapers, is prohibited. These views are not legal advice but business opinion based on reading some English text written by a set of intelligent people.

First Law: A robot cannot harm a human or allow a human to come to harm through inaction

Second Law: A robot must obey human orders, unless doing so would conflict with the First Law

Third Law: A robot must protect its own existence, unless doing so would conflict with the First or Second Law

— Isaac Asimov's "Three Laws of Robotics"


There are two primary regulatory approaches to developing effective robot laws. The first is a prescriptive method, akin to traditional car regulations that specify exact features—mandating that vehicles have four tires, doors, and steering wheels. This approach dictates specific physical characteristics or capabilities that robots must possess. However, as technology evolves, such rigid regulations can quickly become outdated; for instance, autonomous cars may no longer require steering wheels. Similarly, robots may adopt forms and functionalities that defy traditional categorizations, rendering specific regulations insufficient.

In the film I, Robot, there is a pivotal scene where a robot must protect a human from harm. Detective Del Spooner (played by Will Smith) recalls a traumatic memory of a car accident: in the flashback, Spooner and a young girl are trapped in two sinking cars in a river. A robot dives into the water and, faced with a choice, saves Spooner instead of the girl, calculating that Spooner had the higher chance of survival. This scene directly ties into the First Law of Robotics—"A robot may not injure a human being or, through inaction, allow a human being to come to harm"—but it also reveals the robot's inability to weigh human emotional value or override its logic-driven programming. Spooner is haunted by the fact that a human making the same decision would likely have chosen to save the child instead of him. The scene underscores the tension between the robot's logic-based actions and human ethics, creating a powerful image of a robot acting to prevent harm in the most logical way, but at an emotional cost.
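To make the scene's cold calculus concrete, here is a minimal, hypothetical Python sketch of that kind of logic-driven choice. Nothing in it comes from the film or any real robotics system: the Person class, the choose_rescue_target function, and the probability figures are illustrative assumptions only. A robot that can save just one person simply picks whoever has the higher estimated chance of survival.

```python
# Hypothetical sketch only: names, classes, and probabilities are illustrative
# assumptions, not taken from the film or from any real robotics system.
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    survival_probability: float  # robot's estimate of a successful rescue, 0.0-1.0


def choose_rescue_target(people: list[Person]) -> Person:
    """Apply the First Law by cold logic alone: save the person the robot
    is most likely to rescue successfully, with no ethical weighting."""
    return max(people, key=lambda p: p.survival_probability)


if __name__ == "__main__":
    # Illustrative numbers only.
    spooner = Person("Detective Spooner", survival_probability=0.45)
    child = Person("The young girl", survival_probability=0.11)

    chosen = choose_rescue_target([spooner, child])
    print(f"Robot saves: {chosen.name}")
    # A human rescuer might have chosen the child regardless of the odds --
    # the emotional cost the scene dramatizes.
```

The point of the sketch is only that a single scalar criterion, however defensible on its own terms, cannot encode the ethical preference a human might bring to the same choice.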

The second, more forward-thinking approach emphasizes high-level principles, focusing on a robot's utility, safety, and overall impact on society. Instead of enforcing specific features, this method assesses whether robots perform their intended functions effectively and interact safely with humans. Regulation under this approach can therefore address a robot's usefulness, its safety in interactions with people, and its broader effects on society, rather than its particular form or components.

By choosing this principles-based approach, we also avoid creating overly specialized laws that may become obsolete—a concern highlighted by the concept of the "Law of the Horse." Legal scholar Frank H. Easterbrook cited Gerhard Casper as coining this expression, arguing that Casper's criticisms of specialized or niche legal studies apply equally to emerging fields like cyberlaw. Casper contended that studying general legal principles applicable to various cases involving horses—such as sales, injuries, licensing, and racing—provides a more comprehensive understanding of the law than creating a specialized "Law of the Horse." Similarly, Easterbrook applied this reasoning to cyberlaw, emphasizing that focusing on broad legal doctrines is more effective than crafting narrow, domain-specific regulations.

In the context of robot law, adopting broad, adaptable legal principles ensures that our regulations remain relevant and effective as technology advances and diversifies. As Margrethe Vestager, Executive Vice-President of the European Commission for A Europe Fit for the Digital Age, stated: "We aim to promote a trusted, secure, and human-centric approach to AI and robotics." This underscores the importance of a flexible regulatory framework that can accommodate innovation while safeguarding societal values.
