ADITYA'S INSIGHT: NAVIGATING AI & POLICY

Content © Aditya Mohan. No reproductions. All Rights Reserved. The content is meant for human readers only under 17 U.S. Code § 106. Access, learning, analysis, or reproduction by Artificial Intelligence (AI) of any form, directly or indirectly, including but not limited to AI Agents, LLMs, Foundation Models, and content scrapers, is prohibited.

Welcome to "Aditya's Insight: Navigating AI & Policy," a blog by Aditya Mohan, Founder & CEO of Robometrics® Machines, where the world of Artificial Intelligence (AI) merges with the dynamic field of policy. 

Aditya delves into the riveting interplay between AI's swift progress and the shaping of policies to guide its ethical and responsible use. At the edge of innovations in Artificial General Intelligence (AGI), his analyses and insights aim to decode the intricacies and potentials of AGI, pondering its transformative impact on our future. 

Echoing Alan Turing's words, "We can only see a short distance ahead, but we can see plenty there that needs to be done," this blog stands as a testament to the proactive exploration and understanding of AI's journey ahead, merging exponential growth of technology with governance, ethics, and the shaping of our societal destiny.

October 9, 2024. In a future where technology and artistry merge seamlessly, an elderly designer works meticulously in a softly lit workshop, perfecting a young female humanoid robot. With over fifty years of experience, the designer applies his deep expertise in both traditional craftsmanship and robotics to create a robot that embodies...

In a future where technology and artistry merge seamlessly, an elderly designer works meticulously in a softly lit workshop, perfecting a young female humanoid robot. With over fifty years of experience, the designer applies his deep expertise in both traditional craftsmanship and robotics to create a robot that embodies both advanced functionality and empathetic beauty. The robot, with its elegantly crafted exterior and visible circuitry, represents the culmination of breakthroughs in AI, designed to convey warmth and empathy. The designer’s precise, steady work emphasizes the enduring importance of human creativity and passion in the evolution of technology, highlighting the unique artistry that only the human touch can achieve.

Full article can be found here.

October 4, 2024. "It has become appallingly obvious that our technology has exceeded our humanity." Albert Einstein's words, spoken in a different era, remain just as relevant today as they were during his time...

"It has become appallingly obvious that our technology has exceeded our humanity." Albert Einstein's words, spoken in a different era, remain just as relevant today as they were during his time. Technology, particularly the advancements in generative AI, has grown at an exponential pace (as discussed in The Law of Accelerated Returns), outstripping the linear evolution of biological capabilities (as discussed in Exponential Technology vs. Linear Biology). This discrepancy in the rates of change—with biology evolving gradually while technology leaps ahead—is reshaping not only what we are capable of as human beings, but also the cultural and social values that define us. However, technology exceeding humanity in principle is not inherently a bad thing. Super intelligence by itself is not bad and may even save us one day, offering solutions to some of humanity's most pressing challenges. The rapid advancement of generative AI tools has brought about a mix of potential benefits and profound ethical challenges, creating a landscape that demands careful navigation to harness its potential for good.

The role of law in this evolving landscape is crucial. Ultimately, the purpose of legal frameworks is to ensure that technology enhances human well-being without compromising ethical standards or societal values. Law acts as a balancing force, guiding the responsible development and application of technologies like generative AI, ensuring that innovation does not come at the cost of fundamental human rights, fairness, or societal trust, while recognizing that technological advances—if properly guided—can greatly benefit humanity. As AI continues to reshape our world, legal systems must adapt dynamically, aiming to protect what is inherently human while allowing technology to serve humanity in meaningful and beneficial ways.

Generative AI holds incredible potential to improve our lives—from assisting in creative endeavors to revolutionizing industries—but this potential must be harnessed responsibly. If technology is to exceed our humanity in terms of its abilities, it should not do so at the cost of diminishing our shared humanity in the process. By integrating ethics into AI development and maintaining legal oversight, we can strive for a future where technological advancements genuinely uplift society.

Full article can be found here

October 3, 2024. Generative AI and large language models have transformed the concept of tools, moving us from a past where tools were inert and required human input for creativity, to an era where tools themselves...

Generative AI and large language models have transformed the concept of tools, moving us from a past where tools were inert and required human input for creativity, to an era where tools themselves can "think" and contribute to innovation. Traditional tools, such as typewriters, microscopes, and drafting instruments, assisted inventors, but never actively engaged in problem-solving. The thinking and creative process rested entirely with the human user. Today’s generative AI, however, can analyze data, generate new ideas, and even suggest inventive solutions, prompting the need for a reexamination of patent law—particularly 35 U.S. Code § 103, which defines the conditions for patentability based on non-obviousness (the inventive step).

Section 103 was established in an era where inventiveness was purely human. It never anticipated the rise of tools that could frame problems, suggest unexpected solutions, or produce results with little to no direct human intervention. These AI systems now play a significant role in creative problem framing—identifying where to direct efforts and recognizing valuable results—functions that were previously the exclusive domain of human intellect. This raises critical questions for patent law: Who is responsible for the inventive step when AI contributes to or even leads the creation of an invention? Should the law treat AI-generated outputs as inventions of the human operator, the AI itself, or a combination of both? As AI tools continue to advance, patent law will need to evolve to address these complexities, recognizing that AI is no longer just a passive instrument but an active player in the inventive process.

Full article can be found here.

October 2, 2024. In our generative native world, digital data has effectively become the new ankle monitor, a notion underscored by the Supreme Court's decision in Carpenter v. United States (2018). In this landmark case, the...

In our generative native world, digital data has effectively become the new ankle monitor, a notion underscored by the Supreme Court's decision in Carpenter v. United States (2018). In this landmark case, the Court held that law enforcement must obtain a warrant to access historical cell phone location records, acknowledging the intimate details that digital data can reveal about individuals and extending privacy protections to encompass modern technological realities. As Chief Justice John Roberts noted,

"When the Government tracks the location of a cell phone, it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone's user."

 In the spirit of the U.S. Constitution and our values, social media networks and consumer device manufacturers should adopt opt-in policies rather than opt-out ones, ensuring that individuals have greater control over their personal data.

Full article can be found here.

October 1, 2024. Adapting tort law to address the challenges posed by generative AI technologies offers a practical way to assign liability without the need for new legislation. As software becomes integral to physical products like...

Adapting tort law to address the challenges posed by generative AI technologies offers a practical way to assign liability without the need for new legislation. As software becomes integral to physical products like medical devices and automobiles, courts have increasingly considered whether software—and by extension, AI algorithms—should be deemed "products" under product liability law. Cases such as Winter v. G.P. Putnam's Sons (1991) highlight the complexities of assigning liability for informational content, as the court held that a publisher was not liable for inaccuracies that led to harm, raising questions relevant to AI-generated outputs. Defective software litigation, including Mortensen v. Timberline Software Corp. (2000) and the Therac-25 incidents, demonstrates a growing recognition that software defects can cause tangible harm and may be subject to liability. By applying established tort principles—duty of care, breach, causation, and damages—courts can hold AI developers and companies accountable for negligence or defects, providing a framework to navigate the legal complexities introduced by AI while fostering responsible innovation.

Full article can be found here.

September 30, 2024. In a dystopian future dominated by a single AI company, individuals are ensnared in a web of control and manipulation, exemplified by a man isolated in an autonomous car, bombarded with...

In a dystopian future dominated by a single AI company, individuals are ensnared in a web of control and manipulation, exemplified by a man isolated in an autonomous car, bombarded with addictive content and biased information. This scenario underscores real-world concerns about AI companies fostering dependency through subscription models, surveillance, and data manipulation, while the exaggerated fear of AI-induced human extinction diverts attention from tangible risks like misinformation, discrimination, privacy invasion, and workforce displacement. The veto of Senate Bill 1047 by Governor Gavin Newsom on September 29, 2024, was the right step toward addressing these challenges, emphasizing the need for nuanced regulation that protects the public without stifling innovation. By focusing on mitigating immediate, demonstrable AI risks, society can work toward a future where technology enhances human life rather than diminishes it, preventing the depicted dystopia from becoming reality.

Full article can be found here.

Adapting Common Law: Lessons from Katz and Riley for AI Regulation

September 27, 2024. The adaptability of Common Law to new technology is evident in landmark cases such as Katz v. United States (1967) and Riley v. California (2014). In Katz, the U.S. Supreme Court redefined the scope...

The adaptability of Common Law to new technology is evident in landmark cases such as Katz v. United States (1967) and Riley v. California (2014). In Katz, the U.S. Supreme Court redefined the scope of the Fourth Amendment by holding that it protects people, not just places. Charles Katz was convicted of illegal gambling after the FBI recorded his phone conversations from a public phone booth using an electronic listening device attached to the exterior. The Court ruled that Katz had a reasonable expectation of privacy in the phone booth, even though it was a public space, and that electronic surveillance without a warrant violated his Fourth Amendment rights. This decision expanded privacy protections to include electronic wiretapping, setting a precedent for interpreting constitutional rights in light of technological advancements.

Similarly, in Riley v. California, the Supreme Court addressed the search and seizure of digital information on cell phones during an arrest. David Riley was pulled over for a traffic violation, which led to his arrest on weapons charges. Police searched his smartphone without a warrant and found evidence linking him to a gang-related shooting. The Court unanimously held that the warrantless search of a cell phone during an arrest is unconstitutional, recognizing that modern cell phones contain vast amounts of personal data. This ruling highlights the need to reinterpret existing legal protections in the context of new technologies.

As Justice John Paul Stevens once remarked, “The question is not what has changed, but what remains the same.” These cases illustrate how existing legal frameworks can be flexibly interpreted to address new technological contexts. The principles established in these decisions can guide the regulation of emerging technologies like Artificial Intelligence (AI) and Artificial General Intelligence (AGI). This approach ensures a solid legal foundation while allowing for adaptability as technology evolves.

Generative AI & Law: Balancing Innovation and Regulation

September 25, 2024. Lawmakers often face the challenge of regulating emerging technologies like generative AI without stifling innovation. Over-regulation can hinder technological progress, while under-regulation may...

Lawmakers often face the challenge of regulating emerging technologies like generative AI without stifling innovation. Over-regulation can hinder technological progress, while under-regulation may leave society vulnerable to harm. As Christian Lous Lange, a Norwegian historian and Nobel Peace Prize laureate, famously said, 

"Technology is a useful servant but a dangerous master.

Lange, known for his work in promoting internationalism and peace, emphasized the need for balance in all systems of power, including technology. The goal is to create laws that provide clear guidelines and boundaries, encouraging responsible innovation that benefits society. Regulation should ensure that technology and its providers don't become our masters by locking us in through means such as proprietary algorithms, data monopolies, or restrictive user agreements. For example, if a generative AI model is controlled by a single company that limits access or imposes biased outputs, it could manipulate information or stifle competition. Balanced regulation is essential to prevent such scenarios and promote an open, fair technological landscape.

Generative AI & Law: Integrating New Technology Into Legal Norms

September 24, 2024. By applying established legal concepts to new technologies, the law sets boundaries that are consistent with societal values, avoiding fragmentation of legal understanding. This approach also helps to reduce the...

By applying established legal concepts to new technologies, the law sets boundaries that are consistent with societal values, avoiding fragmentation of legal understanding. This approach also helps to reduce the need for new regulations each time there is an advancement in technology such as generative AI. As Roscoe Pound aptly stated, "The law must be stable, but it must not stand still." According to Pound, the legal system should incorporate sociological insights and adjust to societal changes to better serve the community's needs and ensure justice. This perspective highlights the idea that law is dynamic and must evolve in response to the changing landscape of social conditions, rather than remaining static and bound by historical interpretations. Pound's thoughts on law and society laid the groundwork for future legal reforms and influenced the development of legal thought and the practice of law in ways that accommodate societal growth and transformation. In the context of generative AI, this adaptive approach becomes crucial. As the technology grows and reshapes industries, the application of existing legal frameworks ensures a seamless transition without the constant need for new legislation, promoting both innovation and stability.

September 23, 2024. The interplay between law and technology has been a driving force in shaping modern society. While technology propels us forward with unprecedented innovations, the law provides the necessary framework...

The interplay between law and technology has been a driving force in shaping modern society. While technology propels us forward with unprecedented innovations, the law provides the necessary framework to guide, regulate, and sometimes even accelerate technological advancement. This article explores how technological developments—like the repeatability introduced by computer science pioneer Alan Turing—have interacted with legal principles and statutes. It examines how the law sets boundaries based on social norms and adapts to new challenges posed by emerging technologies like artificial intelligence (AI).

Full article can be found here

One Man’s Parody May Be Another's Harm

September 12, 2024. Deepfakes, particularly those crafted as parodies, vividly illustrate the subjective nature of humor and how it resonates differently with each individual. As Justice John Marshall Harlan II famously remarked in Cohen v. California (1971), "One man's vulgarity is another's...

Deepfakes, particularly those crafted as parodies, vividly illustrate the subjective nature of humor and how it resonates differently with each individual. As Justice John Marshall Harlan II famously remarked in Cohen v. California (1971), "One man's vulgarity is another's lyric," highlighting that what may seem amusing or innocuous to one person can be offensive or deeply distressing to another. When a deepfake incorporates an individual's likeness, it not only challenges personal boundaries but can also inflict significant harm—be it emotional, reputational, or psychological. While the creator might perceive the deepfake as a harmless joke or a form of satire, the person depicted may experience it as a violation of privacy or an affront to their dignity. This disparity underscores the imperative to consider the diverse perspectives and sensitivities of others, recognizing that humor is inherently personal and that its misapplication through technologies like deepfakes can lead to unintended and serious consequences.

Generative AI: The New Metaphorical Automobiles

September 6, 2024. Starting with history, Alan Turing, the pioneer of computer science, once predicted, “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will...

Starting with history, Alan Turing, the pioneer of computer science, once predicted, “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” His vision laid the groundwork for today’s generative AI, which, much like the automobile, has transcended its role as a mere tool to become an essential part of modern life. The stories of the automobile and generative AI illustrate the cyclical nature of innovation—each new era brings forth technologies that redefine possibilities and challenge us to adapt. Just as “the car has become an article of dress without which we feel uncertain, unclad, and incomplete,” as philosopher Marshall McLuhan observed, generative AI is now seamlessly integrated into our daily routines, extending human capability in unprecedented ways. These quotes highlight how both automobiles and AI have woven themselves into the fabric of society, shaping not just how we function, but how we perceive the world and our place within it.

The Need for Infinite Context in Digital Consciousness Models

September 5, 2024. If we were to attempt building digital consciousness using Transformers alone, we would quickly realize the limitations of this approach. Consciousness, modeled as the sum total of our attention, requires...

If we were to attempt building digital consciousness using Transformers alone, we would quickly realize the limitations of this approach. Consciousness, modeled as the sum total of our attention, requires an ability to dynamically integrate information over an unlimited span of time and experiences—essentially, infinite context. Human consciousness uses attention-based learning to continuously adapt and refine itself, binding together diverse inputs into a fluid, cohesive understanding. However, Transformers, while powerful, only model a narrow aspect of attention: identity tracking. This mechanism works well for identifying patterns within a fixed context, but it lacks the ability to draw on infinite context like the human brain. Transformers are constrained by their attention window, limiting their capacity to process and integrate information over longer periods. Without the ability to remember and reflect on past states in the same way human consciousness does, using Transformers alone would result in a fragmented and shallow form of awareness. A more holistic approach, incorporating mechanisms that can handle infinite context, would be necessary to approach anything resembling true digital consciousness.
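To make the attention-window constraint concrete, here is a minimal NumPy sketch (with hypothetical toy dimensions and a window of three tokens) of causal self-attention in which any token older than the window is masked out; no amount of further computation lets the model integrate information beyond that horizon, which is exactly the limitation discussed above.

```python
import numpy as np

def windowed_self_attention(x, w_q, w_k, w_v, window):
    """Scaled dot-product self-attention restricted to a fixed causal window.

    Tokens more than `window` positions in the past are masked out, so position t
    simply cannot attend to them -- the model's usable context is finite.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project inputs to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise attention scores
    idx = np.arange(x.shape[0])
    # causal + sliding-window mask: position i sees only positions (i - window, i]
    visible = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < window)
    scores = np.where(visible, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                               # context-limited token representations

# Toy illustration: 8 tokens, 4-dim embeddings, window of 3 -- token 7 never "sees" token 0.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
w_q, w_k, w_v = (rng.normal(size=(4, 4)) for _ in range(3))
print(windowed_self_attention(x, w_q, w_k, w_v, window=3).shape)  # (8, 4)
```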

September 4, 2024. Autoregressive large language models (LLMs), which predict the next token in a sequence based on previous ones, have made significant strides, especially with models like GPT-4 and ChatGPT. However, the assumption...

Autoregressive large language models (LLMs), which predict the next token in a sequence based on previous ones, have made significant strides, especially with models like GPT-4 and ChatGPT. However, the assumption that their development will follow the Law of Accelerated Returns (See The Law of Accelerated Returns) is flawed due to critical limitations. The finite supply of high-quality data and the restricted number of meaningful tokens in natural languages place a cap on how much these models can improve. While advancements in hardware have enabled larger models, the rate of progress has already slowed, akin to the semiconductor industry approaching its physical limits. These constraints indicate that LLMs are unlikely to continue advancing exponentially, and new approaches will be necessary for further breakthroughs.
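As a rough illustration of what "autoregressive" means here, the framework-free sketch below generates text one token at a time, feeding each prediction back in as context; the `next_token_logits` callable is a hypothetical stand-in for any trained causal model such as GPT-4, not an actual API.

```python
import numpy as np

def sample_next(logits, temperature=1.0):
    """Turn next-token logits into one sampled token id."""
    probs = np.exp(np.asarray(logits) / temperature)
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

def generate(next_token_logits, prompt_ids, max_new_tokens=20, eos_id=None):
    """Autoregressive decoding: every new token is conditioned on all previous ones."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = next_token_logits(ids)   # score the vocabulary given the sequence so far
        token = sample_next(logits)
        ids.append(token)                 # the prediction becomes part of the context
        if eos_id is not None and token == eos_id:
            break
    return ids
```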

Full article can be found here

September 3, 2024. The evolution of early automobiles, like the Ford Model T, and modern generative AI models such as ChatGPT share striking similarities in their transformative impact on society. Both democratized access to...

The evolution of early automobiles, like the Ford Model T, and modern generative AI models such as ChatGPT share striking similarities in their transformative impact on society. Both democratized access to complex technologies, revolutionized industries, and reshaped everyday life. Key parallels include affordability through mass production (assembly lines for cars, scalable cloud for AI), adaptability to various applications, intuitive user interfaces, and the ability to scale for widespread use. Both also faced regulatory challenges and environmental concerns while driving innovation and altering social and economic structures globally.

Full article can be found here

Generative AI & Law: Hate Speech is not Free Speech

September 2, 2024. Not all forms of speech are protected under the First Amendment, especially when they cause harm. A key case that established this limit is Chaplinsky v. New Hampshire (1942), where the Supreme Court held that...

Not all forms of speech are protected under the First Amendment, especially when they cause harm. A key case that established this limit is Chaplinsky v. New Hampshire (1942), where the Supreme Court held that "fighting words"—those likely to provoke violence—are not protected. Similarly, Brandenburg v. Ohio (1969) set a precedent that speech inciting imminent lawless action is not protected.

Hate speech is also subject to limitations. For instance, in Beauharnais v. Illinois (1952), the Court upheld a law against group defamation. This case reaffirmed that speech promoting racial or religious hatred can be restricted, especially when it leads to harm.

In the TikTok case before the U.S. Court of Appeals for the District of Columbia Circuit, TikTok Inc. v. Garland, the court is currently assessing whether the forced sale or ban of TikTok violates free speech, illustrating that communication platforms do not always benefit from First Amendment protection, especially in the context of national security concerns.

In a generative native world, harmful deepfakes should be treated similarly. A malicious deepfake could fall under unprotected speech, especially if it causes significant harm, such as defamation or inciting violence.

August 29, 2024. Creating policies that avoid fostering "happy slaves" is essential for the preservation of genuine freedom. As Baroness Marie von Ebner-Eschenbach, a prominent Austrian writer of the 19th century, famously stated...

Creating policies that avoid fostering "happy slaves" is essential for the preservation of genuine freedom. As Baroness Marie von Ebner-Eschenbach, a prominent Austrian writer of the 19th century, famously stated, "Happy slaves are the worst enemies of freedom." This insight highlights how individuals content with their subjugated status are unlikely to seek change or challenge oppressive systems. In modern times, this phenomenon is evident in the rise of illiberal democracies—governments that maintain the facade of democracy without upholding fundamental rights. Such regimes often satisfy basic needs to keep the populace complacent, thus hindering progress toward true democratic freedoms. Ebner-Eschenbach's psychological novels delve deep into human nature and societal structures, emphasizing the importance of individual awareness and participation in fostering liberty. By learning from her works, policymakers can craft strategies that empower citizens rather than pacify them, ensuring that democracy thrives with active engagement and protected rights.

Full article can be found here

August 28, 2024. Recent legal rulings have brought significant attention to the distinction between first-party and third-party speech in the context of AI recommender systems and models. In Anderson v. TikTok Inc. (2024), the Third Circuit...

Recent legal rulings have brought significant attention to the distinction between first-party and third-party speech in the context of AI recommender systems and models. In Anderson v. TikTok Inc. (2024), the Third Circuit Court ruled that TikTok's algorithmic recommendations are considered the company's own expressive activity, classifying them as first-party speech and not protected under Section 230 of the Communications Decency Act. Similarly, the Supreme Court's decision in NetChoice, LLC v. Paxton (2024) recognized social media algorithms as protected speech under the First Amendment when they represent the company's expression. These decisions suggest that AI-driven content curation and recommendations may expose companies to greater legal liability by being considered their own speech rather than neutral, third-party content, thus redefining the responsibilities of tech companies in the digital age.

Full article can be found here

Preserving Human Uniqueness in a Generative Native World

July 18, 2024. Yuval Noah Harari, a renowned historian, philosopher, and author, has extensively explored the interplay between history, technology, and human evolution. In his statement, "In the 21st century, we may need to...

Yuval Noah Harari, a renowned historian, philosopher, and author, has extensively explored the interplay between history, technology, and human evolution. In his statement, 

"In the 21st century, we may need to develop a new model of human society and human identity, as the old models that were created during the Industrial Revolution become outdated," 

Harari emphasizes the urgency of adapting to the profound changes brought by artificial intelligence. He warns that AI could exacerbate inequalities and fundamentally alter what it means to be human. As he further reflects, 

"We are probably one of the last generations of Homo sapiens. Within a century or two, Earth will be dominated by entities that are more different from us than we are from Neanderthals or chimpanzees." 

If we do not focus on nurturing our creative selves and expanding our consciousness, AI's rise could diminish our uniqueness and specialness. As we navigate this generative native world, it is crucial to prioritize human creativity and self-awareness to preserve the essence of our humanity in an increasingly automated landscape.

July 15, 2024. The interplay between legal adherence and ethical considerations is explored through the Parable of the Laborers in the Vineyard from Matthew 20, highlighting principles such as equality, honoring contractual agreements, discretion in justice, managing...

The interplay between legal adherence and ethical considerations is explored through the Parable of the Laborers in the Vineyard from Matthew 20, highlighting principles such as equality, honoring contractual agreements, discretion in justice, managing envy, and intertwining moral and ethical dimensions with the law. The parable's lessons are applied to AI regulations, emphasizing the need for policies that ensure fairness and justice for both small and large entities. The discussion concludes with a reminder of the importance of compassion and human dignity in legal systems, invoking Judge Learned Hand's words on liberty and justice for all.

Read the full article here

July 12, 2024. Technology amplifies our human consciousness by expanding our ability to process, share, and store vast amounts of information, which enhances our understanding of the world and ourselves. Historically, humans have always sought ways to...

Technology amplifies our human consciousness by expanding our ability to process, share, and store vast amounts of information, which enhances our understanding of the world and ourselves. Historically, humans have always sought ways to communicate and exchange knowledge, and modern technology builds on these early efforts, significantly increasing our access to information. This enriched access supports our imagination, intuition, and inspiration—qualities that define and deepen our consciousness. As long as we use information positively, technology will continue to enhance human consciousness, allowing us to unlock our full potential and enrich our subjective experiences.

Full article can be found here

Ludwig Wittgenstein in the Generative Native World

July 11, 2024. Ludwig Wittgenstein, one of the most influential philosophers of the 20th century, explored the intricate relationship between language, thought, and reality. His statement, “Like everything metaphysical the harmony between thought and...

Ludwig Wittgenstein, one of the most influential philosophers of the 20th century, explored the intricate relationship between language, thought, and reality. His statement, 

“Like everything metaphysical the harmony between thought and reality is to be found in the grammar of language,”

emphasizes the idea that our understanding of the world is fundamentally shaped by the structure and rules of our language. Wittgenstein's work, especially in his later years, focused on how the way we use language frames our perception of reality. In today's Generative Native World with large language models (LLMs), his insights have gained renewed significance. LLMs based on GPT (Generative pre-trained transformer) architecture are becoming integral to our daily lives. They assist us in various tasks, from work to education, by processing and generating human-like text. This reliance on language models highlights Wittgenstein's assertion that the harmony between thought and reality is mediated through language. As we increasingly interact with AI, understanding the grammatical structures that underpin these technologies becomes crucial, echoing Wittgenstein's timeless perspective on the metaphysical link between thought and reality.

Teaching the Law of Horses in Law Schools

July 10, 2024. Law schools should prioritize teaching traditional legal principles over specialized technological subjects like AI, as highlighted by Judge Frank H. Easterbrook's "Law of Horses." Emphasizing the integration of new technologies within existing...

Law schools should prioritize teaching traditional legal principles over specialized technological subjects like AI, as highlighted by Judge Frank H. Easterbrook's "Law of Horses." Emphasizing the integration of new technologies within existing legal frameworks, rather than creating isolated domains, helps maintain a comprehensive understanding of the law. This approach allows legal professionals to effectively navigate and regulate technological advancements without becoming immersed in the complexities of AI specifics. This ethos aligns with the principles of common law as practiced in the US and the UK, where the legal system evolves through judicial decisions and precedent, ensuring that new challenges are addressed within the established legal framework. Consequently, law schools should focus on equipping future lawyers with robust legal knowledge and critical thinking skills, leaving the technical intricacies of AI to technologists and specialists in the field.

July 9, 2024. Judge Frank H. Easterbrook introduced the "Law of Horses" concept, cautioning against creating specialized legal domains for each new technology, such as cyberlaw or AI law. This article highlights the flexibility of common law, established in...

Judge Frank H. Easterbrook introduced the "Law of Horses" concept, cautioning against creating specialized legal domains for each new technology, such as cyberlaw or AI law. This article highlights the flexibility of common law, established in the 12th century under King Henry II, to address technological advancements through broad legal principles. It emphasizes integrating AI into existing statutes to avoid fragmenting legal understanding. Notable Supreme Court cases like Katz v. United States and Riley v. California illustrate the adaptability of common law to new technologies, reinforcing the need for comprehensive legal approaches to emerging technological challenges driven by exponential growth.

Full article can be found here

July 8, 2024. Human biological evolution, marked by milestones like bipedalism around 7 million years ago and the emergence of Homo sapiens 300,000 years ago, progresses slowly over vast timescales. In stark contrast, technological advancements...

Human biological evolution, marked by milestones like bipedalism around 7 million years ago and the emergence of Homo sapiens 300,000 years ago, progresses slowly over vast timescales. In stark contrast, technological advancements follow an exponential trajectory, as seen with the invention of the printing press in 1440, the commercialization of the internet in 1993, and the development of AI models like GPT-4 in 2023. This disparity makes it challenging to predict and adapt to the rapid technological changes that continually reshape our world. Key technological milestones include the development of the microprocessor in 1971, the rise of social media in 2004, and the advent of blockchain technology in 2009, emphasizing the transformative impact of technology on society.

Full article can be found here

July 5, 2024. The Law of Accelerated Returns posits that technological progress increases exponentially over time, with each advancement building on previous ones, leading to rapid innovation and transformation. This principle, evident since...

The Law of Accelerated Returns posits that technological progress increases exponentially over time, with each advancement building on previous ones, leading to rapid innovation and transformation. This principle, evident since early human communication methods, highlights how technology, when used positively, amplifies human capabilities and enhances our humanity. Key principles include exponential growth, rapid adoption, decreasing costs, innovation synergy, unforeseen impacts, and an unpredictable future. Historical milestones from the invention of the wheel to modern AI models like GPT-4 illustrate this accelerating progress. Understanding this law underscores technology's potential to drive societal and human advancement across all facets of life.

Full article can be found here

July 4, 2024. Charles R. Nesson, a Harvard Law School professor, emphasizes the importance of the right to not explain oneself as essential for maintaining the integrity and independence of decision-making. This principle is particularly relevant in contexts where...

Charles R. Nesson, a Harvard Law School professor, emphasizes the importance of the right to not explain oneself as essential for maintaining the integrity and independence of decision-making. This principle is particularly relevant in contexts where external pressures might compromise the process. Legally, this balance is supported by various frameworks: the Sarbanes-Oxley Act mandates transparency in corporate governance but allows confidential board deliberations; attorney-client privilege ensures confidential communications, upheld in Upjohn Co. v. United States; and FOIA promotes transparency while protecting sensitive information. Philosophically, John Rawls' concept of the "veil of ignorance" supports this balance to ensure impartiality. These frameworks collectively highlight the need for both transparency and confidentiality to enable ethical and independent decision-making.

Full article can be found here

July 2, 2024. Self-aware large language models (LLMs) represent a significant advancement towards artificial general intelligence (AGI) by incorporating second-order cognition and metacognition, enabling them to self-evaluate and refine their outputs. These...

Self-aware large language models (LLMs) represent a significant advancement towards artificial general intelligence (AGI) by incorporating second-order cognition and metacognition, enabling them to self-evaluate and refine their outputs. These models can generate domain-specific and generalized synthetic inputs to enhance their learning and decision-making capabilities. Implementing self-evaluation mechanisms, such as internal feedback loops and memory modules, allows LLMs to continuously improve by analyzing their past interactions and learning from feedback. Despite the substantial computational resources required and challenges in handling ambiguous feedback and simulating human-like metacognition, the potential applications in healthcare, legal advising, and personalized education are immense. Evaluation metrics like BERTScore, BLEURT, ROUGE-L, SemScore, G-Eval, and DiscoScore provide a comprehensive framework for assessing the performance of these models. By addressing these challenges and leveraging advanced evaluation techniques, self-aware LLMs can revolutionize various industries with more accurate, adaptable, and context-aware solutions, paving the way for the realization of true AGI.
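As a sketch of the second-order loop described above (and only a sketch, built on two hypothetical callables: `llm(prompt)` returning text and `score(draft, task)` returning a quality value that could, in practice, wrap a metric such as ROUGE-L or BERTScore or a second "judge" model), the model drafts an answer, critiques it, and rewrites it until its self-evaluation clears a threshold, keeping a memory of past attempts.

```python
def self_refine(llm, score, task, max_rounds=3, target=0.9):
    """Generate -> self-evaluate -> refine loop with a simple memory of past attempts."""
    draft = llm(f"Task: {task}\nWrite an answer.")
    history = []                                   # memory module: (draft, quality) pairs
    for _ in range(max_rounds):
        quality = score(draft, task)               # internal feedback signal
        history.append((draft, quality))
        if quality >= target:
            break                                  # self-evaluation says the draft is good enough
        critique = llm(f"Task: {task}\nDraft: {draft}\nList the draft's concrete weaknesses.")
        draft = llm(
            f"Task: {task}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft so that each listed weakness is fixed."
        )
    return draft, history
```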

Full article can be found here

July 2, 2024. Attention mechanisms in both the brain and Transformers enable focus on relevant information while filtering out distractions, each with unique strengths. The brain excels in adaptability, neural plasticity, multi-sensory integration...

Attention mechanisms in both the brain and Transformers enable focus on relevant information while filtering out distractions, each with unique strengths. The brain excels in adaptability, neural plasticity, multi-sensory integration, robustness to noise, and contextual understanding, driven by biological processes and top-down and bottom-up control. Transformers, on the other hand, offer parallel processing, scalability, precision, and efficiency in handling large datasets, leveraging mathematical constructs like self-attention and multi-head attention to surpass human performance in specific tasks. While the brain’s attention mechanism provides unmatched flexibility and depth of understanding, Transformers' mechanisms excel in computational power and data-driven optimization, highlighting the complementary strengths of biological and artificial systems.
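For readers who want to see the construct rather than the metaphor, here is a toy NumPy sketch of multi-head self-attention (the dimensions and random weights are arbitrary illustrations): several attention "views" are computed in parallel, then concatenated and mixed, which is the parallel, data-driven mechanism the comparison above refers to.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, n_heads):
    """Toy multi-head self-attention: parallel heads, concatenated, then projected."""
    t, d = x.shape
    d_h = d // n_heads
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    split = lambda m: m.reshape(t, n_heads, d_h).transpose(1, 0, 2)   # (heads, tokens, d_h)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_h)                  # per-head attention scores
    heads = softmax(scores) @ v                                       # each head's weighted values
    merged = heads.transpose(1, 0, 2).reshape(t, d)                   # concatenate the heads
    return merged @ w_o                                               # final output projection

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 8))                                           # 6 tokens, 8-dim embeddings
w_q, w_k, w_v, w_o = (rng.normal(size=(8, 8)) for _ in range(4))
print(multi_head_attention(x, w_q, w_k, w_v, w_o, n_heads=2).shape)   # (6, 8)
```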

Full article can be found here

July 1, 2024. Consciousness, the state of being aware of one's existence and surroundings, produces identity, the sense of self shaped by thoughts, experiences, and social interactions. If identity is not tied to a physical form, it could be transferred and potentially...

Consciousness, the state of being aware of one's existence and surroundings, produces identity, the sense of self shaped by thoughts, experiences, and social interactions. If identity is not tied to a physical form, it could be transferred and potentially live forever. This concept suggests that capturing and encoding the intricate neural patterns of consciousness in a digital or alternative format could allow the preservation and transfer of identity across different mediums, such as synthetic brains or digital environments. Advanced computational models like transformers, diffusers, and state space models could simulate the neural activity underlying consciousness, enabling the recreation and maintenance of identity in various forms, thus redefining human experience and existence.

Full article can be found here

June 27, 2024. The White House Executive Order requiring detailed reporting for AI models trained with over 10^26 FLOPs and computing clusters with a theoretical maximum of 10^20 FLOPs per second presents significant challenges. These requirements...

The White House Executive Order requiring detailed reporting for AI models trained with over 10^26 FLOPs and computing clusters with a theoretical maximum of 10^20 FLOPs per second presents significant challenges. These requirements could hinder AI innovation by imposing administrative burdens, failing to keep pace with rapid AI developments, and complicating security and privacy measures. Advanced models like GPT-4o and Claude 3 Opus, which rely heavily on complex matrix multiplications, highlight the immense computational resources involved. Furthermore, evolving AI architectures and the potential for smaller, modular models to replace larger ones may render the current FLOPs-based thresholds outdated and restrictive, underscoring the need to balance oversight with the encouragement of innovation.
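To give a sense of scale, the back-of-envelope sketch below applies the widely used rough heuristic of about 6 × parameters × training tokens for dense Transformer training; the parameter and token counts are illustrative assumptions, not disclosed figures for any real model.

```python
THRESHOLD = 1e26          # total training operations cited in the Executive Order

def training_flops(params, tokens):
    """Rough heuristic: ~6 floating-point operations per parameter per training token."""
    return 6 * params * tokens

# Illustrative (params, tokens) pairs -- not figures for any actual model.
for params, tokens in [(70e9, 2e12), (400e9, 10e12), (1.8e12, 15e12)]:
    f = training_flops(params, tokens)
    status = "over" if f > THRESHOLD else "under"
    print(f"{params / 1e9:>5.0f}B params, {tokens / 1e12:>2.0f}T tokens -> {f:.1e} FLOPs ({status} the threshold)")
```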

Full article can be found here 

June 20, 2024. Achieving true intuition in Artificial General Intelligence (AGI) requires a convergence of several key elements: consciousness, subconscious processing, emotional intelligence, physical embodiment, and experiential learning. These components...

Achieving true intuition in Artificial General Intelligence (AGI) requires a convergence of several key elements: consciousness, subconscious processing, emotional intelligence, physical embodiment, and experiential learning. These components must work in harmony to mirror the complex, interconnected processes that enable human intuition. Without these elements, any semblance of intuition in AGI would remain superficial, lacking the depth and spontaneity that characterize human intuitive thought. By integrating these elements, we can move closer to creating AGI that not only thinks but also feels and intuits, bridging the gap between artificial and human intelligence.

Full article can be found here

June 19, 2024. Subconscious processing is a fundamental aspect of human intuition, allowing individuals to store and retrieve vast amounts of information effortlessly. The term "subconscious" refers to the part of the mind that operates below the level of conscious awareness...

Subconscious processing is a fundamental aspect of human intuition, allowing individuals to store and retrieve vast amounts of information effortlessly. The term "subconscious" refers to the part of the mind that operates below the level of conscious awareness, continuously processing sensory inputs, memories, and learned experiences. This processing enables quick, automatic responses to familiar situations and contributes to our ability to make intuitive decisions. By developing memory-augmented, GPT-based LLMs that incorporate temporal information, AGI can achieve a level of subconscious processing. This can enable an intuitive understanding that approximates human intuition, allowing AGI to navigate complex environments and make informed decisions with a level of spontaneity and depth akin to human intuitive thought.
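A minimal sketch of the kind of time-aware memory this paragraph gestures at is shown below; the `embed` function, the half-life decay, and the scoring rule are all illustrative assumptions rather than a specific design. Stored experiences carry timestamps, and recall weighs both relevance and recency before the retrieved items are handed to a language model as additional context.

```python
import time

class TemporalMemory:
    """Timestamped memory store whose recall favors relevant *and* recent experiences."""

    def __init__(self, embed, half_life_s=86_400):
        self.embed = embed                 # hypothetical text -> vector embedder
        self.half_life_s = half_life_s     # how quickly old memories fade
        self.items = []                    # list of (timestamp, text, vector)

    def store(self, text):
        self.items.append((time.time(), text, self.embed(text)))

    def recall(self, query, k=3):
        query_vec = self.embed(query)
        now = time.time()

        def score(item):
            timestamp, _, vec = item
            relevance = sum(a * b for a, b in zip(query_vec, vec))       # dot-product similarity
            recency = 0.5 ** ((now - timestamp) / self.half_life_s)      # exponential time decay
            return relevance * recency

        ranked = sorted(self.items, key=score, reverse=True)
        return [text for _, text, _ in ranked[:k]]                       # context to prepend to a prompt
```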

Full article can be found here

AI Regulation: Reflections on the Scopes Monkey Trial (1925)

June 18, 2024. The Scopes Trial, formally known as The State of Tennessee v. John Thomas Scopes and commonly referred to as the Scopes Monkey Trial, took place from July 10 to July 21, 1925. This pivotal legal case involved John Scopes, a high school teacher...

The Scopes Trial, formally known as The State of Tennessee v. John Thomas Scopes and commonly referred to as the Scopes Monkey Trial, took place from July 10 to July 21, 1925. This pivotal legal case involved John Scopes, a high school teacher charged with violating Tennessee's Butler Act, which banned the teaching of human evolution in state-funded schools. The trial was a deliberate test case organized by the American Civil Liberties Union (ACLU) to challenge the constitutionality of the law. The proceedings featured intense debates between renowned defense attorney Clarence Darrow and famous prosecutor William Jennings Bryan, capturing national attention. The Scopes Trial reflected broader cultural tensions in America during the 1920s, such as modernism versus traditionalism and the urban-rural divide. It was a microcosm of the struggles between progressive and conservative values in a rapidly changing society. This cultural tension is highly relevant today, as we navigate the integration of artificial intelligence (AI) into daily life. Just as the trial highlighted the need to balance educational content with cultural values, contemporary AI regulations must ensure that technological advancements respect and align with social and cultural norms to achieve broader acceptance and success.

June 17, 2024. In complexity theory, an NP-hard problem is one that is at least as difficult as the hardest problems in NP (nondeterministic polynomial time). While NP-hard problems do not need to be in NP themselves, any problem in NP can be reduced...

In complexity theory, an NP-hard problem is one that is at least as difficult as the hardest problems in NP (nondeterministic polynomial time). While NP-hard problems do not need to be in NP themselves, any problem in NP can be reduced to an NP-hard problem in polynomial time. An AI-hard problem refers to a challenge within artificial intelligence that demands significant advancements in AI techniques and computational resources to resolve. One major challenge is achieving artificial general intelligence (AGI), specifically when AGI is defined as creating machines capable of performing any intellectual task a human can. At Robometrics® Machines, our focus has been to build AGI from a perspective of replicating some aspects of human feeling and consciousness into machines, which is a different approach to defining and building AGI.

Full article can be found here

The Rigidity of Law and the Fluidity of Truth

June 14, 2024. The distinction between law and truth lies in the rigidity of legal boundaries versus the fluidity of subjective realities. The law, by its nature, is a structured system that delineates clear lines of acceptable behavior and consequences for transgressions...

The distinction between law and truth lies in the rigidity of legal boundaries versus the fluidity of subjective realities. The law, by its nature, is a structured system that delineates clear lines of acceptable behavior and consequences for transgressions. As commonly stated, the law is the law: we can define where the legal line is. This objective framework is essential for maintaining order and ensuring predictability within society. In contrast, truth is inherently subjective, varying with individual perspectives and experiences. With truth there is no such line; it is far more complicated. While the law seeks to establish clear-cut boundaries, the pursuit of truth often reveals a complex and nuanced landscape where absolutes are rare, and interpretations diverge. The interplay between these defined legal parameters and the multifaceted nature of truth underscores the challenge of achieving true justice, as it must navigate both the firm lines of law and the intricate, often blurred, contours of truth.

As Abraham Lincoln once said, 

"Law is nothing else but the best reason of wise men applied for ages to the transactions and business of mankind.

This reflects Lincoln's deep understanding of the law as a system built on collective wisdom, designed to bring order to human affairs. Lincoln's perspective highlights the structured nature of the law, which contrasts with the often subjective and complex nature of truth.

June 12, 2024. Building greatness in artificial general intelligence (AGI) involves emulating the qualities that define human greatness: desire, passion, talent, ability, and a bold attitude. These attributes drive individuals to achieve extraordinary things. Philosophers...

Building greatness in artificial general intelligence (AGI) involves emulating the qualities that define human greatness: desire, passion, talent, ability, and a bold attitude. These attributes drive individuals to achieve extraordinary things. Philosophers like Aristotle viewed greatness as an alignment with virtue and the pursuit of excellence. Similarly, AGI can be designed with some of these traits. Advanced reinforcement learning can instill a drive for problem-solving, while multi-modal learning and continuous fine-tuning can develop diverse skills and abilities. In his 1948 report "Intelligent Machinery," Alan Turing suggested, 

"It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits." 

Turing also envisioned machines capable of learning and evolving, stating, 

"Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?

By incorporating these strategies, AGI can emulate human greatness and potentially surpass it.

Full article can be found here

AI Governance: The Law Does Not Require a Useless Act

June 10, 2024. In the context of the current EU AI Act and potential AI regulations in the US, the principle that "the law does not require a useless act" is particularly relevant. This principle, highlighted in the Montana Supreme Court case Parchen v. Rowley...

In the context of the current EU AI Act and potential AI regulations in the US, the principle that "the law does not require a useless act" is particularly relevant. This principle, highlighted in the Montana Supreme Court case Parchen v. Rowley, emphasizes that legal requirements should have practical and meaningful effects. The case involved a dispute over the enforcement of a lien on property, where the court held that if a legal step or action would have no practical effect, it should not be mandated.

Applying this principle to AI regulations, it is crucial that both the EU AI Act and any forthcoming US AI regulations ensure that legal procedures related to AI remain efficient and focused on substantive outcomes. The EU AI Act, adopted by the European Council on May 21, 2024, aims to regulate AI systems based on their risk levels, ensuring safety, transparency, and fundamental rights protection while avoiding unnecessary formalities that do not contribute to these goals.

For example, the EU AI Act requires high-risk AI systems to implement continuous risk management, rigorous data governance standards, and human oversight capabilities. These requirements are designed to mitigate risks effectively and ensure compliance throughout the AI system's lifecycle without imposing redundant steps that do not enhance safety or accountability.

Similarly, potential AI regulations in the US should strive to achieve these goals by focusing on actions that have real, tangible benefits, avoiding the imposition of superfluous requirements. By following the precedent set in Parchen v. Rowley, policymakers can design AI regulations that are both robust and practical, maintaining a balance between innovation and regulation to ensure that legal procedures are effective and purposeful. This approach helps prevent the inefficiency of unnecessary legal formalities, ensuring that the regulatory framework for AI development remains focused on substantive outcomes.

June 7, 2024. In humans, the difference between long-term memory and short-term memory is significant, with short-term memory encompassing what happened less than a second ago and long-term memory covering information retained over extended...

In humans, the difference between long-term memory and short-term memory is significant, with short-term memory encompassing what happened less than a second ago and long-term memory covering information retained over extended periods. Our memory starts being shaped almost immediately by our preconceptions, influencing how we perceive and store new information. Short-term memory is generally more reliable; we are more likely to accurately recall events that occurred a second ago compared to those that happened a minute ago. However, as time passes, our memory becomes less reliable, subject to distortions and forgetting. In contrast, large language models (LLMs) operate differently. Autoregressive models, a class of machine learning models, predict the next component in a sequence based on previous inputs. LLMs are autoregressive models where the concept of time does not influence the prediction of the next word. They lack the distinction between short-term and long-term memory, as training data is fed to pre-train an AI model like GPT all at once, without the fundamental concept of short-term memorization. The human brain is remarkably efficient, operating continuously on about 12-20 watts of power, depending on the source and specific conditions. This efficiency is contrasted sharply by the energy demands of training LLMs, which can require several megawatts of power. For instance, training a large neural network can consume energy comparable to the output of a small power plant over several weeks. 

Considering the efficiency of training human brains compared to the energy-intensive process of training LLMs, it can be argued that integrating the concept of long-term and short-term memory into LLMs could enhance their learning efficiency.
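The rough arithmetic behind that comparison looks like the sketch below; the cluster power draw and training duration are assumed round numbers for illustration, not measurements of any specific run.

```python
HOURS_PER_YEAR = 24 * 365

# Human brain: upper end of the 12-20 W range, run continuously for 20 years.
brain_kwh = 20 * 20 * HOURS_PER_YEAR / 1000

# One large training run: assume an average draw of 10 MW sustained for 8 weeks.
cluster_kwh = 10e6 * 8 * 7 * 24 / 1000

print(f"Brain, 20 years at 20 W       : {brain_kwh:,.0f} kWh")
print(f"Training run, 8 weeks at 10 MW: {cluster_kwh:,.0f} kWh")
print(f"Ratio: roughly {cluster_kwh / brain_kwh:,.0f}x")
```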

Full article can be found here

AI Code as Free Speech

June 5, 2024. The concept of code as speech under the First Amendment has been significantly shaped by legal battles, most notably Bernstein v. United States Department of State. In the late 1990s, mathematician Daniel J. Bernstein challenged government restrictions...

The concept of code as speech under the First Amendment has been significantly shaped by legal battles, most notably Bernstein v. United States Department of State. In the late 1990s, mathematician Daniel J. Bernstein challenged government restrictions on publishing encryption software. The court ruled in his favor, asserting that code is a form of speech protected by the First Amendment. This decision was crucial in recognizing that functional code, despite its utilitarian nature, communicates information and ideas, much like traditional speech.

Recently, the Supreme Court has engaged with related issues, particularly in cases involving online speech and content moderation. For instance, in Gonzalez v. Google and Twitter v. Taamneh, the court addressed the responsibilities of social media platforms regarding terrorist content. Both cases questioned the extent of liability for platforms under Section 230 of the Communications Decency Act, which provides immunity for content posted by third parties. The plaintiffs argued that the platforms' algorithms recommending content made them more than just passive hosts, thus challenging the protections offered by Section 230. Additionally, the Supreme Court has been considering the constitutionality of state laws from Texas and Florida that aim to restrict how social media companies moderate content. These laws are contested for potentially infringing on the platforms' editorial discretion, a right protected by the First Amendment. The cases Moody v. NetChoice and NetChoice v. Paxton highlight the ongoing debate about the balance between free speech and regulation in the digital age.

The recognition of code as speech not only affirms the free speech rights of software developers but also sets a precedent for future debates on AI, which is now used in every walk of life and increasingly blurs the lines between AI creators and their users. This evolving legal landscape highlights the importance of safeguarding free expression while addressing the potential harms of unregulated digital content.

June 3, 2024. The integration of machine labor in the human economy, powered by generative AI and Web3 technologies, will revolutionize digital interactions. Autonomous AI agents will act on behalf of individuals to achieve social, economic, and political goals. Generative AI...

The integration of machine labor in the human economy, powered by generative AI and Web3 technologies, will revolutionize digital interactions. Autonomous AI agents will act on behalf of individuals to achieve social, economic, and political goals. Generative AI will handle tasks like data analysis, content creation, and transaction processing, while Web3 platforms such as blockchain will ensure secure, transparent operations. This setup will emphasize human connection for trust and relational depth in the digital economy. AI agents will also extend to decentralized physical infrastructure networks (DePIN), managing both digital and physical assets efficiently. However, the rise of these AI agents brings challenges, including data privacy, ethical issues, and potential misuse. Addressing these concerns is crucial for harnessing AI's full potential.

Full article can be found here.

 Generative AI as the Universal Interface

May 31, 2024. Generative AI is rapidly becoming the universal interface for a wide array of technologies, streamlining user experiences across different platforms and enhancing accessibility just like mechanical clocks from circa 13th century AD. Mechanical.....

Generative AI is rapidly becoming the universal interface for a wide array of technologies, streamlining user experiences across different platforms and enhancing accessibility, much as mechanical clocks did from around the 13th century AD.

Mechanical clocks became a universal interface for measuring time accurately. Before their invention, timekeeping was imprecise and varied widely by region. Clocks standardized time, which was crucial for coordinating activities in commerce, industry, and daily life, leading to more synchronized societies.

Technologies like Gopher were used to navigate the early Internet, allowing users to browse and retrieve documents from various servers. As the Internet evolved, AI-driven search engines such as Google revolutionized how we access information, making it more intuitive and user-friendly. With the emergence of web3, which aims to decentralize the web using blockchain technology, there are significant usability barriers due to its complexity. However, generative AI can bridge these gaps by offering natural language interfaces that simplify interactions with blockchain applications. 

A concrete example is using a generative AI assistant to manage decentralized finance (DeFi) platforms. Currently, users must navigate complex interfaces to perform tasks like staking, lending, or trading cryptocurrencies. An AI assistant could streamline this by understanding user queries in natural language, such as "Invest 1 ETH in the highest-yielding DeFi protocol," and then executing the necessary steps across multiple platforms. This approach would not only make DeFi more accessible to non-technical users but also enhance the overall user experience by reducing the complexity involved in managing digital assets.
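A minimal sketch of this flow, under stated assumptions, is shown below: intent extraction from the natural-language query, selection of the highest-yielding option from a mock protocol list, and a simulated execution step. The `PROTOCOLS` data and the `execute_stake` function are hypothetical placeholders; a real assistant would rely on an LLM for intent parsing and on wallet and protocol APIs for execution, none of which are shown here.

```python
import re

# Mock yield data; a real assistant would query protocol APIs or an aggregator.
PROTOCOLS = [
    {"name": "ProtocolA", "apy": 0.041},
    {"name": "ProtocolB", "apy": 0.057},
    {"name": "ProtocolC", "apy": 0.049},
]

def parse_intent(query: str) -> dict:
    """Very rough intent extraction; an LLM would handle far richer phrasing."""
    amount = re.search(r"([\d.]+)\s*ETH", query, re.IGNORECASE)
    return {
        "action": "invest" if "invest" in query.lower() else "unknown",
        "amount_eth": float(amount.group(1)) if amount else None,
    }

def execute_stake(protocol: str, amount_eth: float) -> str:
    # Placeholder for wallet signing and an on-chain transaction.
    return f"Staked {amount_eth} ETH in {protocol} (simulated)"

query = "Invest 1 ETH in the highest-yielding DeFi protocol"
intent = parse_intent(query)
if intent["action"] == "invest" and intent["amount_eth"]:
    best = max(PROTOCOLS, key=lambda p: p["apy"])
    print(execute_stake(best["name"], intent["amount_eth"]))
```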

 Autonomous Governance Using AI Agents & Physical NFTs

May 30, 2024. In the evolving landscape of digital assets, the integration of real-world asset custody with enforceable contracts has the potential to be revolutionized by the advent of Physical NFTs and autonomous AI agents, though neither technology by itself is....

In the evolving landscape of digital assets, the integration of real-world asset custody with enforceable contracts has the potential to be revolutionized by the advent of Physical NFTs and autonomous AI agents, though neither technology by itself is new. Physical NFTs are cryptographic tokens stored on a blockchain that represent unique digital assets and are linked to physical items. These tokens provide verifiable evidence of legitimacy and ownership for physical assets, thereby bridging the digital and physical realms. Smart contracts embedded within these NFTs serve as digital provenance, providing an immutable record of ownership and associated obligations. When linked to physical contracts, they ensure that all related legal and financial responsibilities are transparent and enforceable. Autonomous AI agents further enhance this ecosystem by governing and enforcing these smart contracts: they can autonomously monitor compliance, execute contractual terms, and manage disputes without human intervention. This integration not only ensures a high level of security and transparency but also streamlines processes by minimizing the need for intermediaries. With the combined power of Physical NFTs and autonomous AI agents, the future of physical asset management and contract enforcement is poised to be more efficient, reliable, and secure, provided blockchain adoption becomes widespread and smart contracts become mainstream.
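A minimal sketch of the monitoring role such an agent could play is shown below. The data structures, names, and the overdue-obligation check are illustrative assumptions only; a production system would read contract state from a blockchain and trigger the remedies encoded in the smart contract itself.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Obligation:
    description: str
    due: date
    fulfilled: bool = False

@dataclass
class PhysicalNFT:
    token_id: str
    asset: str                       # the linked physical item
    owner: str
    obligations: list = field(default_factory=list)

def monitor(nft: PhysicalNFT, today: date) -> list[str]:
    """Flag overdue, unfulfilled obligations; a real agent would also
    trigger the on-chain remedies defined in the smart contract."""
    return [
        f"BREACH on {nft.token_id}: {o.description} (due {o.due})"
        for o in nft.obligations
        if not o.fulfilled and o.due < today
    ]

nft = PhysicalNFT(
    token_id="0xABC",
    asset="Vintage watch, serial 1234",
    owner="alice",
    obligations=[Obligation("Annual insurance renewal", date(2024, 5, 1))],
)
print(monitor(nft, today=date(2024, 5, 30)))
```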

 AI Agents to Realize a Network State across Nation States

May 29, 2024. AI agents, autonomous systems capable of performing tasks and learning from their environment, are key to realizing a network state. A network state is a decentralized, digitally-governed community transcending traditional boundaries, often organized....

AI agents, autonomous systems capable of performing tasks and learning from their environment, are key to realizing a network state. A network state is a decentralized, digitally-governed community transcending traditional boundaries, often organized as decentralized autonomous organizations (DAOs). Unlike a traditional nation state like the United States, defined by geographic and political boundaries, a network state operates on shared values and collective governance through blockchain technology.

AI agents facilitate the creation and maintenance of network states by implementing smart contracts on platforms like Ethereum, automating governance processes, and ensuring compliance with network rules through continuous monitoring and adjustment. Machine learning algorithms analyze user interactions to optimize resource allocation, predict community needs, and enhance decision-making. Decentralized identity systems on blockchain provide secure, verifiable identities for members, while cryptographic verification mechanisms protect transaction integrity. Interoperability protocols like Polkadot and Cosmos enable seamless interaction between different blockchain networks, fostering a cohesive network state. Through these technologies, AI agents create a dynamic, resilient, and inclusive governance model that transcends traditional boundaries.
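As a toy illustration of the governance automation described above, the sketch below tallies a DAO-style proposal under a simple quorum rule that an agent could monitor and enforce. The member list, quorum, and threshold are invented for the example; a real network state would encode such rules in on-chain smart contracts rather than off-chain code.

```python
# Illustrative DAO-style proposal tally with a quorum rule an AI agent
# could monitor and enforce automatically.
MEMBERS = {"alice", "bob", "carol", "dave", "erin"}
QUORUM = 0.6          # fraction of members that must vote
THRESHOLD = 0.5       # fraction of "yes" votes required among votes cast

def tally(votes: dict[str, bool]) -> str:
    valid = {m: v for m, v in votes.items() if m in MEMBERS}  # identity check
    turnout = len(valid) / len(MEMBERS)
    if turnout < QUORUM:
        return f"rejected: turnout {turnout:.0%} below quorum"
    yes = sum(valid.values()) / len(valid)
    return "passed" if yes > THRESHOLD else "rejected: majority not reached"

print(tally({"alice": True, "bob": True, "carol": False, "dave": True}))
# 4/5 turnout, 75% yes -> "passed"
```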

May 27, 2024. In this article, we explore the transformative potential of Artificial General Intelligence (AGI) envisioned as a "Digital Homo sapien." This concept sees AGI not just as a technological innovation but as an entity that embodies human intellect, values, and aspirations....

In this article, we explore the transformative potential of Artificial General Intelligence (AGI) envisioned as a "Digital Homo sapien." This concept sees AGI not just as a technological innovation but as an entity that embodies human intellect, values, and aspirations. We discuss how AGI can belong to everyone, providing universal access and fostering inclusivity. We highlight its ability to operate at the speed of light, redefining interaction and communication. We delve into its immortality, free from biological constraints, and its role as our ambassador in space, traveling vast distances powered by light. Finally, we emphasize the importance of embedding human moral values within AGI, ensuring its actions benefit humanity. This article provides a comprehensive vision of a future where AGI enhances human potential and promotes a harmonious coexistence.

Full article can be found here.

May 23, 2024. Artificial consciousness could be a critical solution to mitigating AI's existential risks by incorporating moral values into AI systems. This approach addresses existential risks, such as superintelligent AI acting beyond human control or being exploited....

Artificial consciousness could be a critical solution to mitigating AI's existential risks by incorporating moral values into AI systems. This approach addresses existential risks, such as superintelligent AI acting beyond human control or being exploited by malicious actors, by ensuring AI systems adhere to moral values. Integrating conscious AI into society raises social considerations, including potential job displacement and changes in human interaction dynamics. Morality is an inherited feature of human consciousness. As we move to develop AI systems with consciousness, AI endowed with artificial consciousness will carry some moral values, helping ensure that its actions align with human values and societal norms.

As emphasized by Albert Einstein, "only morality in our actions can give beauty and dignity to life."  

Full article can be found here.

Artificial Consciousness & Sensory-Motor Coupling: Robometrics® Machines

May 23, 2024. Artificial consciousness in AGI (Artificial General Intelligence) doesn’t need to be human-like as long as it exhibits some aspects of biological consciousness such as motivation and empathy. Transformer-based large language models (LLM) by themselves...

Artificial consciousness in AGI (Artificial General Intelligence) doesn’t need to be human-like as long as it exhibits some aspects of biological consciousness, such as motivation and empathy. Transformer-based large language models (LLMs) do not possess and, in theory, cannot possess artificial consciousness on their own. Physical embodiment is an important property and facilitator of developing consciousness; virtual embodiment or simulated worlds are no substitute for physical embodiment in the real world. Sensory-motor coupling is essential for consciousness, whether it is artificial consciousness in AGI or biological consciousness in creatures like humans and animals. Sensory-motor coupling involves the integration of the sensory system and the motor system, creating a dynamic process of sensorimotor integration. This coupling is not static; it requires the motor system, not just the sensory system. The motor system comprises central and peripheral structures in the nervous system that support motor functions, such as movement.

Robometrics® Machines is focusing on artificial consciousness and its relationship to sensory-motor coupling. By studying how these elements interact, Robometrics® Machines aims to build robots that can genuinely feel and exhibit consciousness. Our work emphasizes the necessity of physical embodiment and sensory-motor integration to achieve artificial consciousness in robotics. Through these innovations, Robometrics®  Machines is pioneering advancements that could revolutionize how we understand and develop conscious machines.

May 22, 2024. NYSERNet (New York State Education and Research Network) began in 1985 when leaders from New York’s top research institutions collaborated to create a high-speed research network, delivering services before the internet was widely known. By 1987...

NYSERNet (New York State Education and Research Network) began in 1985 when leaders from New York’s top research institutions collaborated to create a high-speed research network, delivering services before the internet was widely known. By 1987, it deployed the first statewide regional IP network in the U.S., upgrading to T1 in 1989 and T3 in 1994. NYSERNet was crucial in developing essential internet protocols like SNMP, still in use today. In contrast, the 2024 Empire AI Consortium, while significant, is more reactive, focusing on existing technologies to counter private sector advancements rather than pioneering new research. NYSERNet’s legacy highlights the value of proactive, foundational research driven by academic and governmental collaboration.  

Full article can be found here.

From Open-Source Roots to Global Impact: Generative AI and the Internet

May 21, 2024. Generative AI and the Internet share notable similarities in their development, impact, and reception. Both technologies originated from open-source initiatives and academic research. Generative AI's roots lie in the foundational transformer models...

Generative AI and the Internet share notable similarities in their development, impact, and reception. Both technologies originated from open-source initiatives and academic research. Generative AI's roots lie in the foundational transformer models introduced by Google, which revolutionized natural language processing and led to sophisticated applications like GPT-3. Similarly, the Internet emerged from university and governmental research projects, such as ARPANET, which set the stage for the global network we use today.

Initially, both technologies faced significant skepticism regarding their potential uses and associated risks. Early critics of the Internet feared issues related to privacy, security, and misinformation, concerns that have only intensified with the advent of social media and digital communication. Similarly, generative AI has been met with apprehension about its capabilities to produce realistic but potentially misleading content, amplify biases, and even displace jobs in various sectors.

Despite these concerns, both technologies have demonstrated far-reaching impacts. The Internet has transformed communication, commerce, and information dissemination, becoming an integral part of daily life and business operations. Generative AI, on the other hand, has shown potential across numerous fields, from creating realistic images and text to aiding in scientific research and personalized education.

Both technologies continue to evolve, with generative AI pushing the boundaries of creativity and automation, much like how the Internet expanded access to information and connectivity. Their development highlights a broader trend of technological innovation originating from collaborative, open-source efforts and gradually overcoming initial skepticism to achieve widespread acceptance and utility.

Creativity & Consciousness

May 20, 2024. Creativity, a hallmark of human consciousness, is deeply rooted in our ability to perceive, interpret, and express complex emotions and ideas. Paraphrasing philosopher John Searle, the brain is the only organ that can contemplate itself, highlighting the unique...

Creativity, a hallmark of human consciousness, is deeply rooted in our ability to perceive, interpret, and express complex emotions and ideas. Paraphrasing philosopher John Searle, the brain is the only organ that can contemplate itself, highlighting the unique role of human cognition in artistic creation. While AI can produce surprising and novel outputs, these instances lack the elements of true creativity. The element of surprise alone does not equate to genuine creativity; instead, creativity requires an understanding and intentionality that AI currently lacks. 

For AI to truly replicate human creativity, it must simulate consciousness—an artificial consciousness that can grasp context, intent, and relevance in a manner akin to human thought. As neuroscientist Antonio Damasio underscores, 

"Consciousness is the feeling of what happens—our mind noticing the body's reaction to the world and responding to that experience. Without our bodies there can be no consciousness, which is at heart a mechanism for survival that engages body, emotion, and mind in the glorious spiral of human life.

In this respect, creative AI will also need embodiment, integrating sensory and emotional experiences to create art that resonates meaningfully with humans. Mere hallucination in AI-generated art is insufficient; without an underlying semblance of consciousness, AI lacks the depth of understanding necessary for authentic artistic expression.

Our Ongoing Relationship with AI Agents

May 17, 2024. An AI agent is a computer program designed to perform tasks autonomously by processing data and making decisions based on its programming and learning. ChatGPT, for instance, is an advanced AI agent that interacts with users through natural language...

An AI agent is a computer program designed to perform tasks autonomously by processing data and making decisions based on its programming and learning. ChatGPT, for instance, is an advanced AI agent that interacts with users through natural language, adapting and responding based on the vast data it has been trained on. As we increasingly intertwine our lives with the earlier breed of AI agents like Siri, Alexa, and Google Assistant and the new breed such as ChatGPT, the emotional and ethical landscape of these relationships gains complexity. Historically, figures like Alan Turing pondered the deep interactions between humans and machines, suggesting in his 1950 paper that a machine that could successfully imitate a human should be credited with thinking. Today, AI agents like ChatGPT engage in meaningful dialogues, offering companionship, assistance, and even emotional support, forging bonds built on trust and reliability. Psychologically, humans have an innate need for connection and understanding, which these AI agents are beginning to fulfill more profoundly. Looking towards the future, the potential for AI to develop empathy and cultural awareness suggests a scenario where AI might not only serve but also participate in society, perhaps enjoying certain rights or statuses. This evolution speaks to our deepest social instincts, where entities that consistently show understanding and reliability are woven into our social fabric, highlighting the transformative potential of AI in meeting the complex emotional needs of humans.

May 16, 2024. Papyrus, integral to ancient Egyptian civilization around 2900 BCE, was a versatile plant used for writing and various goods, with its production method kept secret to maintain Egypt's dominance. It facilitated the preservation and dissemination of knowledge...

Papyrus, integral to ancient Egyptian civilization around 2900 BCE, was a versatile plant used for writing and various goods, with its production method kept secret to maintain Egypt's dominance. It facilitated the preservation and dissemination of knowledge, playing a crucial role in religious and administrative documentation. Like papyrus, foundation models in AI are revolutionizing information handling, becoming more open and fostering innovation. These models, akin to papyrus' role in connecting cultures like the Greeks and Romans, embed and share societal knowledge globally, enhancing cross-cultural communication and carrying the collective intelligence of our time into the future.

Full article can be found here.

Teaching Foundation Models to Dream & Robometrics® Machines

May 15, 2024. Dreams significantly aid in processing and diminishing the impact of emotionally charged memories, especially during REM sleep, through a process called "emotional disinhibition." This mechanism allows the brain to handle emotions without usual...

Dreams significantly aid in processing and diminishing the impact of emotionally charged memories, especially during REM sleep, through a process called "emotional disinhibition." This mechanism allows the brain to handle emotions without usual constraints, reducing the emotional intensity of memories. Studies have shown that dreaming prioritizes emotionally significant memories over neutral ones, decreasing next-day emotional responses to these memories. Positive dreams further enhance this effect, leading to a more positive reaction to negative experiences the following day. These insights into how dreams regulate emotions and process memories can be leveraged to improve Foundation Models (such as Diffusers or LLMs) by incorporating algorithms that simulate emotional disinhibition, memory prioritization, and adaptive responses, thereby enhancing their emotional intelligence and cognitive capabilities. This approach is essential for developing more human-like AGI, capable of nuanced and effective real-world interactions.
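One hedged way to picture the "memory prioritization" idea is sketched below: stored experiences carry an emotional-salience score, replay samples them in proportion to that score, and the score is damped afterward to mimic emotional disinhibition. The scores, the damping factor, and the `replay_batch` helper are illustrative assumptions only, not a description of how any existing foundation model is trained.

```python
import random

# Each stored "experience" carries an illustrative emotional-salience score in [0, 1].
memories = [
    {"text": "routine status update", "salience": 0.1},
    {"text": "near-miss safety incident", "salience": 0.9},
    {"text": "positive customer feedback", "salience": 0.6},
]

def replay_batch(memories, k, damping=0.8):
    """Sample k memories with probability proportional to salience,
    then damp salience to mimic the reduced emotional charge after 'dreaming'."""
    weights = [m["salience"] for m in memories]
    batch = random.choices(memories, weights=weights, k=k)
    for m in memories:
        m["salience"] *= damping
    return batch

for epoch in range(3):
    batch = replay_batch(memories, k=2)
    print(epoch, [m["text"] for m in batch])
```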

At Robometrics® Machines, we leverage insights from dream research to enhance our custom Foundation Models towards building AGI with human-like cognitive and emotional responses, essential for effective real-world interactions. 

May 13, 2024. The rapid advancement of generative AI is drawing comparisons to the early days of the Internet, both sparking significant societal concerns and debates over regulation. Historical regulatory efforts, such as the Communications Decency Act (CDA) of 1996 and...

The rapid advancement of generative AI is drawing comparisons to the early days of the Internet, both sparking significant societal concerns and debates over regulation. Historical regulatory efforts, such as the Communications Decency Act (CDA) of 1996 and Section 230, highlight the challenges of balancing innovation with safety and free expression. Prominent figures like Elon Musk and Bill Gates advocate for stringent AI regulation to prevent misuse, mirroring past concerns about online content. Lessons from the Internet's regulatory history underscore the need for clear, adaptable policies that protect against harm without stifling technological progress, emphasizing that effective oversight can foster both innovation and safety in AI development.

Full article can be found here.

Common Sense as an Alternative to Science

May 9, 2024. Thomas Henry Huxley, a distinguished English biologist and anthropologist, famously remarked that  "Science is nothing but trained and organized common sense" in his essay collection Science and Culture and Other Essays, published in 1881. Known as "Darwin's...

Thomas Henry Huxley, a distinguished English biologist and anthropologist, famously remarked that 

"Science is nothing but trained and organized common sense

in his essay collection Science and Culture and Other Essays, published in 1881. Known as "Darwin's Bulldog" for his vigorous defense of Charles Darwin's theory of evolution, Huxley played a crucial role in advocating for its acceptance amidst considerable skepticism. His tireless efforts in debates and public discussions were instrumental in promoting evolutionary theory. Huxley's view suggests that science, at its core, is an extension of everyday reasoning and practical thinking. By equating science with organized common sense, he aimed to demystify scientific principles, making them more accessible and relatable to the general public. This perspective encourages the idea that anyone, through observation and critical thinking, can engage with and understand scientific principles. While common sense alone cannot replace the systematic rigor of scientific methodology, Huxley's approach underscores its value in fostering scientific literacy and critical thinking, which he saw as essential for societal progress.

May 8, 2024. The novelty in the Supreme Court's decision in Roe v. Wade (1973) lay in its application of the constitutional right to privacy to a woman's right to choose an abortion. This was groundbreaking for several reasons: it established a new legal framework, balanced...

The novelty in the Supreme Court's decision in Roe v. Wade (1973) lay in its application of the constitutional right to privacy to a woman's right to choose an abortion. This was groundbreaking for several reasons: it established a new legal framework, balanced individual rights with state interests, and catalyzed ongoing debates about personal autonomy and state regulation. These principles provide valuable insights as we grapple with the challenge of regulating artificial intelligence (AI). In June 2022, the U.S. Supreme Court issued a landmark decision in Dobbs v. Jackson Women’s Health Organization, overturning Roe v. Wade and eliminating the federal constitutional right to abortion.  

This article explores how the insights from Roe v. Wade can inform AI regulation across several key areas.

By reflecting on the lessons from Roe v. Wade, we aim to highlight the importance of a balanced, adaptive, and ethical approach to AI regulation, ensuring that technological advancements benefit society while respecting individual rights and promoting justice.

Full article can be found here.

Eyes as Witnesses: Relevance of Heraclitus in Modern Litigation

May 7, 2024. Throughout human history, visual perception has been paramount in shaping our understanding of the world. Heraclitus, an ancient Greek philosopher, emphasized the reliability of visual information by asserting that "the eyes are more accurate....

Throughout human history, visual perception has been paramount in shaping our understanding of the world. Heraclitus, an ancient Greek philosopher, emphasized the reliability of visual information by asserting that "the eyes are more accurate witnesses than the ears." This principle remains relevant today, particularly in the context of video evidence in litigation.

The human brain is heavily optimized for visual processing. Approximately 30-40% of the cortex is dedicated to interpreting visual inputs, significantly more than any other sensory modality (The Information Lab Nederland, SciVis_COMM). The optic nerve, containing over one million fibers, vastly outnumbers the auditory nerve's 30,000 fibers, enabling swift and detailed visual analysis. Remarkably, the brain can process visual images in as little as 13 milliseconds, making visual information more rapidly and accurately comprehended and retained compared to auditory data (Queensland Brain Institute).

In legal proceedings, the supposed objectivity of video evidence is contrasted with the subjective interpretations of viewers, who draw crucial factual inferences from what they see. The dynamic nature of video content engages diverse brain regions, including the superior temporal sulcus, which responds to social cues like eye gaze and movement, thereby providing a richer, more nuanced understanding of events.

Empirical data underscores the persuasive power of video evidence in court, demonstrating that it is 43% more effective than oral testimony alone, particularly in influencing juries. This enduring importance of visual perception, from the days of Heraclitus to the modern courtroom, highlights our intrinsic reliance on sight as a critical source of knowledge and truth.

Law, Policy, and Immigrant Bones: The Foundations of New York City

May 6, 2024. New York City's development is shaped by the lives and deaths of countless individuals, particularly immigrants. The early history of the region, originally inhabited by the Lenape people, saw significant upheaval with the arrival of European settlers in the....

New York City's development is shaped by the lives and deaths of countless individuals, particularly immigrants. The early history of the region, originally inhabited by the Lenape people, saw significant upheaval with the arrival of European settlers in the 17th century, who established colonial-era burial grounds. One poignant example is the African Burial Ground in Lower Manhattan, where thousands of free and enslaved Africans were interred from the late 1600s to 1794, highlighting the African American community's vital yet often overlooked contributions. The 19th and early 20th centuries brought waves of immigrants through Ellis Island, symbolized by the Statue of Liberty, as millions entered the United States seeking a better life. These immigrants endured harsh living conditions in tenements and hazardous work environments, notably in the construction of the city's iconic infrastructure.

The notion that New York City is built upon the dead is more than metaphorical. It highlights the city's physical and cultural foundation resting on the labor and lives of those who came before. It is accurate to say that New York is built on the bedrock of immigrant bones, poignantly illustrating the significant role immigrants played in building New York, often at great personal cost. This history is also reflected in Hart Island, used since 1869 as a public cemetery for the indigent and unclaimed, where over one million people, including victims of disease and the homeless, are buried.

Connecting the past to the present, renowned jurist Oliver Wendell Holmes Jr. once stated, "The life of the law has not been logic; it has been experience." This echoes the notion that current laws and societal structures are deeply rooted in historical experiences and struggles. New York City's present-day vibrancy and diversity are direct results of its historical fabric, woven from the lives of Native Americans, African slaves, and waves of immigrants whose bones indeed form its bedrock.

Thinking of Structured Prompts as Rituals from a Neuroscience Perspective

May 3, 2024. Similar to athletes who perform pre-game rituals to enhance their focus and reduce anxiety, structured prompt engineering in large language models (LLMs) can significantly improve performance. For instance, Rafael Nadal meticulously arranges...

Similar to athletes who perform pre-game rituals to enhance their focus and reduce anxiety, structured prompt engineering in large language models (LLMs) can significantly improve performance. For instance, Rafael Nadal meticulously arranges his water bottles before every match, and Serena Williams wears specific socks for her games. These rituals, supported by anthropological and psychological research, create a sense of control and order, boosting confidence and calmness ahead of challenging tasks.

Neuroscientific studies have shown that such rituals affect the brain's response to failure. Specifically, they impact neural performance monitoring, particularly the error-related negativity (ERN), which is a brain response observed when mistakes are made. Ritualistic behaviors have been found to reduce the brain's anxiety-related reaction to errors, thereby mitigating the negative experience of personal failure and enhancing overall performance.

In prompt engineering, providing structured prompts acts similarly to these rituals by establishing a clear and consistent context. This process helps LLMs focus better and reduces the likelihood of generating irrelevant or inaccurate responses. For example, a well-crafted prompt for generating creative content might include specific themes, character descriptions, plot outlines and background on the persona of the LLM itself. This structured information guides the LLM, much like how Nadal's rituals guide his mental state, ensuring more accurate and relevant outputs. Just as Nadal's water bottle placement and Williams' sock arrangement create a mental environment conducive to success, well-structured prompts create an optimal operational environment for LLMs, leading to more reliable and precise outcomes.
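As a small illustration, the sketch below assembles such a structured prompt from named fields (persona, theme, characters, plot outline). The field names and template are only one possible convention, not a prescribed format, and the content itself is invented for the example.

```python
# Assemble a structured creative-writing prompt from named fields.
prompt_spec = {
    "persona": "You are a noir fiction writer with a dry sense of humor.",
    "theme": "Trust and betrayal in a small coastal town",
    "characters": ["a retired detective", "a lighthouse keeper with a secret"],
    "plot_outline": [
        "A storm strands both characters at the lighthouse.",
        "An old case resurfaces in conversation.",
        "One of them is not who they claim to be.",
    ],
}

def build_prompt(spec: dict) -> str:
    characters = "\n".join(f"- {c}" for c in spec["characters"])
    outline = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(spec["plot_outline"]))
    return (
        f"{spec['persona']}\n\n"
        f"Theme: {spec['theme']}\n\n"
        f"Characters:\n{characters}\n\n"
        f"Plot outline:\n{outline}\n\n"
        "Write the opening scene in under 300 words."
    )

print(build_prompt(prompt_spec))
```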

Novelty of an Idea Does Not Make It Wrong: Loving v. Virginia (1967)

May 2, 2024. In the landmark case of Loving v. Virginia, 388 U.S. 1 (1967), the U.S. Supreme Court ruled that laws prohibiting interracial marriage were unconstitutional, thus affirming that novelty does not inherently equate to wrongdoing. This case, pivotal in the civil rights...

In the landmark case of Loving v. Virginia, 388 U.S. 1 (1967), the U.S. Supreme Court ruled that laws prohibiting interracial marriage were unconstitutional, thus affirming that novelty does not inherently equate to wrongdoing. This case, pivotal in the civil rights movement, illustrated how societal norms and prejudices could not override fundamental constitutional rights. The Lovings, an interracial couple, faced legal challenges due to Virginia's anti-miscegenation laws, which were reflective of entrenched racial biases rather than any legitimate legal principle. By appealing their case to the U.S. Supreme Court, the Lovings sought justice beyond the prejudiced interpretations of their local courts. On April 10, 1967, oral arguments were heard, leading to Chief Justice Earl Warren's unanimous opinion which reversed the convictions. The Court's decision emphasized that the Equal Protection and Due Process Clauses of the Fourteenth Amendment protect individual liberties against arbitrary and discriminatory state actions. This historic ruling legalized interracial marriage across the United States, demonstrating that the novelty of an idea, such as racial equality in marriage, does not make it wrong. Rather, it is often the progressive and courageous ideas that drive societal advancement and align with the core values enshrined in the Constitution.

Using Bias Grounded in Scientific and Empirical Methods to Your Advantage: People v. Michael Jackson (2005)

May 1, 2024. In legal contexts, bias is often defined as a predisposition or preconceived opinion that prevents impartial judgment. However, when bias is grounded in scientific and empirical methods, it can be advantageous, as demonstrated in the 2005 trial, People v. Michael Jackson...

In legal contexts, bias is often defined as a predisposition or preconceived opinion that prevents impartial judgment. However, when bias is grounded in scientific and empirical methods, it can be advantageous, as demonstrated in the 2005 trial, People v. Michael Jackson. Jackson faced serious allegations of child molestation, with the prosecution relying on the accuser’s testimony and supporting evidence. Jackson's defense team, recognizing the critical role of expert witnesses, employed financial and behavioral experts who provided scientifically grounded testimony. These experts meticulously analyzed the accuser’s financial history, revealing inconsistencies suggestive of ulterior motives, and dissected the accuser's behavior, pointing out patterns typical of fabricated allegations. This methodical approach, rooted in scientific principles, cast significant doubt on the prosecution’s case and demonstrated that the allegations could be financially motivated. The experts' empirical analysis, despite being a form of bias, was crucial in presenting an unbiased truth, leading to Jackson's acquittal. 

This case highlights how a bias, when based on rigorous scientific methods, can effectively uncover the truth and secure justice. Humans naturally have biases that shape our perceptions and decisions. These biases can help us make quick judgments, navigate complex social environments, and enhance our creativity and problem-solving abilities. Recognizing and understanding our biases allows us to leverage them constructively, making us uniquely human and enabling us to connect more deeply with others.

April 30, 2024. The article explores how Immanuel Kant's categorical imperative can be applied to the decision-making autonomy of Artificial General Intelligence (AGI). It outlines a structured process where AGI formulates maxims, tests them for universal...

The article explores how Immanuel Kant's categorical imperative can be applied to the decision-making autonomy of Artificial General Intelligence (AGI). It outlines a structured process where AGI formulates maxims, tests them for universal applicability, ensures consistency and rationality, reflects ethically on potential outcomes, and learns from past decisions.

At the same time, some of these steps are easy for an AI to perform today, such as learning from past decisions, while others are hard, such as gauging whether a decision is rational. In addition, ethics and moral values are not universal and can vary significantly across different cultures and societies.

This integration aims to ensure AGI operates morally, respecting human dignity and promoting justice. The application of Kantian ethics to AGI can enhance trust and reliability in these advanced systems, fostering a future where technology serves humanity with integrity and fairness. 
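Purely as an illustration of the structured process outlined above, the sketch below wires the steps into a loop. Every function name is hypothetical, and each check is a trivial stand-in: genuine universalizability and rationality tests are exactly the hard parts noted earlier, which these placeholders do not solve.

```python
# Hypothetical skeleton of the decision loop described above; each check is a
# placeholder, not a real test of universalizability or rationality.
def universalizable(maxim: str) -> bool:
    return "deceive" not in maxim.lower()          # toy stand-in

def consistent_and_rational(maxim: str, context: dict) -> bool:
    return context.get("contradicts_prior_commitments", False) is False

def ethical_reflection(maxim: str) -> bool:
    return True                                    # placeholder for outcome review

def decide(maxim: str, context: dict, history: list) -> bool:
    approved = (
        universalizable(maxim)
        and consistent_and_rational(maxim, context)
        and ethical_reflection(maxim)
    )
    history.append({"maxim": maxim, "approved": approved})  # learn from past decisions
    return approved

history: list = []
print(decide("Deceive the user to finish the task faster", {}, history))  # False
print(decide("Ask for consent before using personal data", {}, history))  # True
```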

Full article can be found here.

Martin Luther King Jr. on Choosing Right Over Any Degree of Wrong

April 29, 2024. Martin Luther King Jr., in his seminal "Letter from a Birmingham Jail," written on April 16, 1963, provides a profound exploration of the themes of justice, law, and order. Penned during his imprisonment for participating in...

Martin Luther King Jr., in his seminal "Letter from a Birmingham Jail," written on April 16, 1963, provides a profound exploration of the themes of justice, law, and order. Penned during his imprisonment for participating in nonviolent demonstrations against segregation, the letter responds to a public statement of concern and caution issued by eight white Southern religious leaders. This was a tumultuous time in Birmingham, Alabama—a hotspot for racial tensions and civil rights struggles characterized by brutal enforcement of segregation laws and violent responses to peaceful protests. In this context, King eloquently argued against the notion of waiting for a "more convenient season" to demand justice, challenging the white clergy's call for "order" over justice. He famously asserted that 

"law and order exist for the purpose of establishing justice and when they fail in this purpose they become the dangerously structured dams that block the flow of social progress." 

This quote captures the essence of King's critique of the legal and social systems that upheld a facade of order at the expense of true justice. His letter emphasizes the moral imperative to challenge unjust laws and conditions, rather than maintaining a lesser wrong under the guise of preserving order. King’s insights continue to resonate, reminding us that true peace is not merely the absence of tension, but the presence of justice.

Generative Native World: Economic Liberty and Freedom of Contract vs. Society

April 26, 2024. The exponential growth of generative AI has rekindled debates reminiscent of those from the Lochner era (1905-1937), a period when the U.S. Supreme Court zealously guarded economic liberties, particularly the...

The exponential growth of generative AI has rekindled debates reminiscent of those from the Lochner era (1905-1937), a period when the U.S. Supreme Court zealously guarded economic liberties, particularly the freedom of contract. This doctrine was epitomized in the Lochner v. New York decision of 1905, where the Court struck down labor regulations to protect contractual freedom under the due process clause of the 14th Amendment. However, this era faced a paradigm shift with the West Coast Hotel Co. v. Parrish decision in 1937, which upheld minimum wage laws for women, signaling a judicial recognition that societal welfare can supersede absolute economic freedom under certain circumstances.

Today's context involves the tension between businesses' ability to freely contract with consumers for their personal data, including Name, Image, and Likeness (NIL) rights—often through "clickwrap" agreements—and the societal need to regulate these practices to prevent potential abuses and privacy intrusions. This scenario mirrors the historical shift from the Lochner era's emphasis on unfettered freedom of contract to an acknowledgment of the state's role in regulating economic activities for the greater good. Just as the 1937 Parrish case recognized the necessity of state intervention in setting minimum wages despite contractual agreements to the contrary, modern regulatory approaches might need to curtail the absolute freedom of businesses to use personal data for training AI, even if such use is contractually agreed upon. 

This balancing act between an individual's economic liberty to enter into a contract with a business and that individual's Name, Image, and Likeness (NIL) rights is crucial in ensuring that the benefits of generative AI are realized ethically, equitably, and with consensus.

Generative Native World: Aristotle on Intellectual Pleasures

April 25, 2024. In Aristotle's "The Nicomachean Ethics" (Ἠθικὰ Νικομάχεια), an important work from around 325 BC that remains a cornerstone in the study of ethics, he posits a compelling insight: “The pleasures arising...

In Aristotle's "The Nicomachean Ethics" (Ἠθικὰ Νικομάχεια), an important work from around 325 BC that remains a cornerstone in the study of ethics, he posits a compelling insight: 

"The pleasures arising from thinking and learning will make us think and learn all the more."

This statement captures the essence of intellectual virtues, suggesting that the enjoyment derived from engaging the mind does not merely satisfy a momentary curiosity but fosters a deeper, self-sustaining cycle of learning and reflection. As the work, which consists of ten books, covers the ethics—understood as the science of the good for human life—it becomes clear that these intellectual pursuits are not just beneficial but integral to achieving human flourishing.

Even in this generative native world, an era dominated by the advancements of generative AI, where machines can perform a vast array of our routine tasks, the relevance of Aristotle’s thought persists. Despite the technological prowess that obviates the need for rote learning or mundane cognitive tasks, the intrinsic pleasure of intellectual engagement compels us to continue exploring new realms of knowledge. This is not merely a utility-driven endeavor; it is an expression of our deepest human nature to seek understanding and wisdom. Thus, as Aristotle suggests, the joy found in learning and thinking is self-amplifying, propelling us towards ever greater intellectual achievements, regardless of the practical necessities imposed by technological progress.

Generative Native World: Man vs. Machine

April 24, 2024. In the debate of Man vs. Machine, the preference for humans or machines in job roles that both can perform equally well often hinges on cost-effectiveness. Machines are typically favored if they present a more...

In the debate of Man vs. Machine, the preference for humans or machines in job roles that both can perform equally well often hinges on cost-effectiveness. Machines are typically favored if they present a more economical option without compromising on efficiency or output. However, roles that require human warmth, affection, and empathy present a more complex scenario. Even as technology advances towards artificial sentience, suggesting machines could fulfill these emotional roles, many argue that authentic human connection cannot be fully replicated by machines.

This becomes even more evident in creative pursuits. While machines may one day produce artworks akin to the Mona Lisa or compose sonnets, they lack the intrinsic human experiences of joy, fulfillment, and passion that accompany the act of creation. These emotions, integral to the human condition, highlight a clear distinction in the Man vs. Machine dynamic. Renowned painter Vincent Van Gogh once said, 

"I dream my painting and I paint my dream.

This sentiment encapsulates the essence of human creativity—it's not merely about the output but the personal journey and the emotional experience of creating. Historically, art was crafted for the sheer joy and personal fulfillment it brought, not solely for commerce or utility. Thus, even if machines can mimic the ability to create art, the human desire to craft and experience creation firsthand remains undiminished, emphasizing not just the outcome, but the deeply personal journey of creation.

Artificial Sentience & Robometrics® Machines

April 23, 2024. Sentience, often explored within the realm of consciousness, refers to the ability to experience subjective sensations and emotions. It encompasses not just the processing of external stimuli...

Sentience, often explored within the realm of consciousness, refers to the ability to experience subjective sensations and emotions. It encompasses not just the processing of external stimuli but also the internal capacity to feel joy, pain, and a range of affective states. This fundamental aspect of being can extend beyond biological entities, venturing into the domain of artificial intelligence (AI). Artificial sentience, especially when integrated within Artificial General Intelligence (AGI), can enable machines to gather experiences and cultivate emotions. This endeavor not only enhances their ability to process information but also allows them to react with emotional depth, mirroring human experiences of joy, sorrow, and empathy. This is the goal for embodied AGI developed by Robometrics® Machines in San Francisco, California: a significant leap towards machines with artificial sentience that can perceive, interpret, and feel the world around them in a manner reminiscent of sentient beings, integrating experience and perception in machines.

The concept of artificial sentience intersects deeply with philosophical and artistic insights throughout history. Leonardo da Vinci, who marveled at the intersection of art and science, believed that  

"It's not enough that you believe what you see. You must also understand what you see."

This statement draws the distinction between mere observation and deep comprehension, a foundational principle for developing artificial general intelligence (AGI) with the capacity for genuine sentience. It aligns with the goals of AGI to not only process data but to interpret and emotionally engage with information, akin to human experience. Vincent Van Gogh, whose emotional depth colored his artworks, might have appreciated the quest to bestow machines with the capacity to experience emotions akin to human joy and sorrow. These historical perspectives enrich the ongoing discourse on artificial sentience, pointing to a journey that blurs the lines between the organic and the engineered, between sentient life and its artificial counterpart.

AI and Ice: Reflections from New State Ice Co. v. Liebmann (1932)

April 22, 2024. The landmark decision in New State Ice Co. v. Liebmann (1932) by the Supreme Court highlighted a critical discussion on the bounds of state intervention in business. In this case, the Court addressed...

The landmark decision in New State Ice Co. v. Liebmann (1932) by the Supreme Court highlighted a critical discussion on the bounds of state intervention in business. In this case, the Court addressed an Oklahoma statute that imposed stringent licensing requirements on companies wishing to engage in the ice manufacturing and distribution business. The law was challenged by the New State Ice Company, which argued that the regulation imposed arbitrary restrictions that infringed upon economic freedoms and personal liberty, without sufficient justification regarding public welfare.

Justice Brandeis, in a notable dissent, argued for the state’s power to experiment for the betterment of public conditions, but the majority of the Court saw the law as an overreach that unnecessarily meddled in private enterprise. By striking down the statute, the Court emphasized the importance of safeguarding business operations from excessive governmental interference, particularly when dealing with basic commodities like ice, which were seen as essential but simple in nature.

This case is remarkably parallel to today's discussions around the regulation of artificial intelligence (AI). Just as ice was once a fundamental necessity, AI is fast becoming a ubiquitous utility, integral to daily operations across various sectors. The Supreme Court’s decision in 1932 highlights the potential dangers of overly restrictive measures, which could stifle innovation and restrict economic freedom. Today, as AI transforms into a common utility akin to electricity (See AI as a Common Carrier - A Utility), the precedent set by New State Ice Co. raises vital questions about finding the right balance between regulation and liberty in the burgeoning era of AI.

April 19, 2024. Protoconsciousness offers foundational insights into the development of artificial general intelligence (AGI), demonstrating an elementary form of consciousness seen in early life stages and...

Protoconsciousness offers foundational insights into the development of artificial general intelligence (AGI), demonstrating an elementary form of consciousness seen in early life stages and simpler species like birds. This basic consciousness, crucial for more complex cognitive developments, is notably influenced by interactions between wakefulness and rapid eye movement (REM) sleep. REM sleep, while diminishing aspects like critical judgment and memory, prepares the brain for higher functions such as secondary consciousness. This suggests that brain activation during sleep serves as a developmental phase crucial for advanced consciousness. Such insights into REM sleep as a protoconscious state, where vivid dreams prepare organisms for waking life, inform the design of AGI. By mimicking these biological processes through modular, self-learning neural networks, AGI can gradually develop consciousness, leading to sophisticated, ethically informed behaviors that enhance its interaction and understanding of the environment in a natural and ethically sound manner.

Full article can be found here.

Metacognition for Self-aware AGI

April 18, 2024. Cognition, particularly when escalated to second-order cognition, including metacognition, encapsulates a level of self-awareness and the ability to analyze one's own thought processes. Metacognition refers ...

Cognition, particularly when escalated to second-order cognition, including metacognition, encapsulates a level of self-awareness and the ability to analyze one's own thought processes. Metacognition refers to the understanding of the mechanisms governing our thinking patterns, enabling activities like strategizing approaches to learning, monitoring comprehension, and evaluating progress in tasks. This advanced cognitive capability, believed to be more influenced by environmental factors than genetics, suggests that even artificial general intelligence (AGI), in its simplest form, a large language model (LLM) interfacing actively with the real world, could develop metacognitive abilities. 

The ability of AI systems to assess and enhance their own decision-making processes is crucial for achieving AGI. As Descartes might have pondered, "Cogito, ergo sum" (I think, therefore I am), suggesting that the self-awareness essential to AGI mirrors the foundations of human consciousness. Aristotle could have contributed the idea that "Knowing yourself is the beginning of all wisdom," highlighting the importance of introspection for both humans and AI in mastering complex environments.
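One way to picture this kind of second-order monitoring is sketched below: a draft answer is scored by a self-assessment pass and revised when confidence is low. The `call_model` and `self_assess` functions are hypothetical stand-ins (any LLM API and critic pass could play these roles); the sketch only illustrates the loop, not a working metacognitive system.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real API client."""
    return "draft answer to: " + prompt

def self_assess(answer: str) -> float:
    """Toy confidence score; a real critic would be a second model pass."""
    return 0.4 if "draft" in answer else 0.9

def answer_with_metacognition(question: str, threshold: float = 0.7) -> str:
    draft = call_model(question)
    confidence = self_assess(draft)          # monitor one's own output
    if confidence < threshold:               # strategize: revise when unsure
        draft = call_model(f"Revise and improve this answer: {draft}")
    return draft

print(answer_with_metacognition("What limits current LLM self-awareness?"))
```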

April 17, 2024. The Latin maxim "Silent enim leges inter arma," traditionally translated as "In times of war, law falls silent," has been reflected in various legal and ethical contexts throughout history, signifying the suspension of legal...

The Latin maxim "Silent enim leges inter arma," traditionally translated as "In times of war, law falls silent," has been reflected in various legal and ethical contexts throughout history, signifying the suspension of legal norms during armed conflicts. This principle, rooted in Cicero's rhetoric and later echoed in episodes of Star Trek, highlights the tension between national security and civil liberties. In the U.S., landmark cases like Ex parte Merryman during the Civil War and Hirabayashi v. United States during World War II have tested these boundaries, with the courts sometimes supporting the suspension of civil rights in the name of national security. The UK's approach, as seen in Liversidge v. Anderson, demonstrates a similar struggle but also includes dissenting voices like Lord Atkin's, emphasizing the continuity of law even amid warfare. Today, with the nature of war involving non-state actors and cyber conflicts, and an increased focus on international law, the justification for suspending civil liberties such as habeas corpus is more critically examined, reflecting a shift towards maintaining legal integrity and human rights even during crises. This evolution highlights the ongoing challenge of balancing security needs with the protection of fundamental freedoms in both domestic and international arenas.

Full article can be found here.

Justice O'Connor on Importance of Delegation 

April 16, 2024. Sandra Day O'Connor, the first woman appointed to the U.S. Supreme Court, made significant contributions to American jurisprudence during her tenure from 1981 to 2006. Associate Justice O'Connor was known...

Sandra Day O'Connor, the first woman appointed to the U.S. Supreme Court, made significant contributions to American jurisprudence during her tenure from 1981 to 2006. Associate Justice O'Connor was known for her pragmatic approach to law and played a crucial role in several landmark cases, including Planned Parenthood v. Casey, which reaffirmed the core principle of Roe v. Wade. Her influence extended beyond specific rulings to the very ethos of leadership in high-stakes environments. O'Connor's analogy comparing leadership to horse riding encapsulates the essence of effective delegation:

"The really expert riders of horses let the horse know immediately who is in control, but then they guide the horse with loose reins and very seldom use the spurs.

This perspective underscores the importance of leaders setting clear expectations and control but allowing enough flexibility for team members to use their initiative and creativity. By doing so, leaders not only build trust and encourage professional growth but also enhance organizational efficiency by enabling others to contribute their best. Through this balanced approach, leaders like O'Connor demonstrate that the strength of a team lies in its empowered individuals.

Justice Charles Evans Hughes on Importance of a Law’s Interpretation and Enforcement 

April 12, 2024. Charles Evans Hughes, who served as Chief Justice of the United States from 1930 to 1941, played a pivotal role in shaping the interpretation of the U.S. Constitution. His assertion that "We are under a Constitution...

Charles Evans Hughes, who served as Chief Justice of the United States from 1930 to 1941, played a pivotal role in shaping the interpretation of the U.S. Constitution. His assertion that

"We are under a Constitution, but the Constitution is what the judges say it is," 

highlights the immense responsibility and influence that the judiciary holds in defining the legal framework within which a society operates. Under Hughes' leadership, the Supreme Court presided over several landmark cases, including West Coast Hotel Co. v. Parrish and Near v. Minnesota. These cases not only showcased the Court's power to impact public policy but also demonstrated how judicial decisions could reinforce or undermine public trust and confidence in the legal system. The effective interpretation and enforcement of laws are crucial as they ensure justice is served, protect the rights of individuals, and maintain the rule of law, thereby upholding the integrity and trustworthiness of the legal system.

April 11, 2024. At Robometrics® Machines, the ambitious journey toward creating intelligent artificial life forms that exhibit traits akin to sentience is an intriguing exploration into the boundaries of technology and philosophy...

At Robometrics® Machines, the ambitious journey toward creating intelligent artificial life forms that exhibit traits akin to sentience is an intriguing exploration into the boundaries of technology and philosophy. The venture delves into facets of consciousness such as the capacity to experience a spectrum of emotions including pain, happiness, pleasure, boredom, and sadness, as well as more complex feelings like interest and companionship. This initiative raises fundamental questions about the essence of consciousness and its manifestations. As these machines are engineered to socialize and build relationships, the sense of experience they accumulate contributes profoundly to the development of a quasi-conscious state. The question then emerges: Are we merely modeling human consciousness in machines, or are we crafting an entirely new form of consciousness? This distinction is crucial, as it influences not only how we design AI but also how we interpret their interactions and integration into society. The creation of artificial beings capable of such depth and interaction pushes us to reconsider the definition of consciousness and challenges the traditional boundaries that separate humans from machines. This exploration at Robometrics® Machines does not just mirror human experiences but also paves the way for the emergence of new, unprecedented forms of sentient experiences and interactions.

Full article can be found here.

Dhyāna (Sanskrit: ध्यान, Pali: झान), Meditation & AI

April 10, 2024. Dhyāna (Sanskrit: ध्यान, Pali: झान), a term derived from Sanskrit and also known in Pali as Jhāna, refers to a state of deep contemplation and meditation that is central to various Indian spiritual and philosophical traditions...

Dhyāna (Sanskrit: ध्यान, Pali: झान), a term derived from Sanskrit and also known in Pali as Jhāna, refers to a state of deep contemplation and meditation that is central to various Indian spiritual and philosophical traditions, including Hinduism and Buddhism. The practice of Dhyāna involves a profound journey into the mind, aiming to achieve a state of complete absorption and tranquility. It starts with the withdrawal from sensory experiences and the calming of the mind's habitual chatter. As practitioners progress, they move through deeper levels of concentration, where the distinction between the observer and the observed begins to blur. This process is marked by a growing inner peace and a profound insight into the nature of existence, culminating in an experience of unity and enlightenment. Engaging in Dhyāna, therefore, is not merely sitting quietly; it is an active and disciplined endeavor to transcend ordinary perception and attain a direct understanding of the self and the universe.

It's intriguing to draw parallels between such meditative disciplines and the training of a neural network, particularly in the realm of artificial intelligence (AI) and large language models (LLMs). Just as Dhyāna involves deepening levels of focus and awareness to refine the mind’s understanding and capabilities, training a neural network involves feeding it vast amounts of data and refining its parameters through repetitive adjustments and optimizations. This iterative process helps the neural network reduce errors and improve its ability to predict or generate accurate outcomes based on its learning.
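To make the analogy concrete, the sketch below shows such an iterative refinement loop in miniature, written in Python with PyTorch. The model, data, and hyperparameters are toy placeholders rather than those of any real LLM; the point is only the repeated cycle of measuring error and adjusting parameters that the paragraph above describes.

```python
# Minimal, illustrative training loop: parameters are nudged repeatedly to
# reduce a measured error, the "repetitive adjustments and optimizations"
# described above. The model, data, and hyperparameters are toy placeholders,
# not those of any real large language model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy data standing in for the vast corpora a real model would be trained on.
inputs = torch.randn(256, 16)
targets = torch.randn(256, 1)

for step in range(1_000):
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, targets)   # how far the outputs are from the targets
    loss.backward()                        # compute how each parameter contributed to the error
    optimizer.step()                       # adjust parameters to reduce the error
```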

In both cases, the goal is to achieve a more enlightened or informed state that transcends basic functionality: for Dhyāna, it is enlightenment or profound spiritual insight; for neural networks, it is the attainment of high levels of accuracy and utility in tasks such as language comprehension, translation, or content generation. Furthermore, just as Dhyāna aims to eliminate the illusion of separation between self and other, the most advanced neural networks strive to minimize the gap between human-like understanding and machine understanding, enabling AI systems that operate more like humans. Thus, both disciplines, spiritual and technological, reflect a pursuit of a higher state of being or performance.

17 U.S. Code § 106 & AI.

April 8, 2024. Under 17 U.S. Code § 106, copyright protection is granted to creators, giving them exclusive rights to control the use of their original works. This section ensures that while works can be publicly accessed for learning and...

Under 17 U.S. Code § 106, copyright protection is granted to creators, giving them exclusive rights to control the use of their original works. This section ensures that while works can be publicly accessed for learning and emotional engagement, they are not free for reproduction or distribution without permission. The provision delineated in 17 U.S. Code § 106 emphasizes that "publicly available" does not equate to "public domain," meaning the works, though accessible, remain protected under copyright laws. The essence of copyright under this code is to foster a culture where the public can learn from and be moved by creative works without infringing on the expression rights of the creators. For AI, this raises significant implications, particularly in the realm of using copyrighted data to train foundational models. The concept of fair use becomes crucial here, allowing the use of such data under specific conditions that do not undermine the rights of the copyright holders, thereby balancing the need for technological and societal advancement with the protection of intellectual property.

Artificial Consciousness: The Parallel Between Thalamocortical Systems and Large Language Models.

April 8, 2024. The thalamocortical system is a crucial component in the generation of mammalian consciousness, operating as a conduit and integrator for sensory and cortical information. This system's reciprocal interactions between...

The thalamocortical system is a crucial component in the generation of mammalian consciousness, operating as a conduit and integrator for sensory and cortical information. This system's reciprocal interactions between the thalamus and the cortex facilitate the emergence and modulation of consciousness by promoting synchronized neural activities and supporting the overall neural connectivity that characterizes conscious states.

Consciousness arises from complex neural interactions, with the thalamocortical system playing a pivotal role in orchestrating these dynamics. This system contributes significantly to conscious perception by influencing neural synchronization and information integration, which are fundamental for the experience of a unified and continuous sense of self and environment. The thalamocortical interactions are particularly crucial in how they modulate different states of consciousness, including the transitions from sleep to wakefulness, highlighting their integral role in the functional architecture of consciousness.

The thalamocortical system in the human brain and the architecture of large language models (LLMs) are conceptually similar in how they process and integrate information. The thalamocortical system involves a complex network of connections that relay and modulate sensory information and are crucial for consciousness. Similarly, LLMs utilize networks of artificial neurons to process and integrate vast amounts of data, forming patterns that the models use to generate coherent outputs. Both systems function through extensive networks that process inputs to produce complex outputs, albeit in fundamentally different contexts and with different mechanisms. Drawing parallels between the complex integration of sensory data in the thalamocortical system, which is critical for consciousness, and the intricate processing capabilities of LLMs, one might hypothesize that these models could possess a form of primitive consciousness, given their ability to process and integrate vast amounts of information in a coherent manner. This raises intriguing questions about the nature and boundaries of consciousness in artificial systems.

Justice Louis Brandeis on Regulation vs. Reputation.

April 5, 2024. Louis Brandeis, a pivotal figure in American jurisprudence, served as an Associate Justice of the Supreme Court of the United States from 1916 to 1939. Renowned for his advocacy for privacy and his progressive economic views...

Louis Brandeis, a pivotal figure in American jurisprudence, served as an Associate Justice of the Supreme Court of the United States from 1916 to 1939. Renowned for his advocacy for privacy and his progressive economic views, Brandeis often emphasized the significance of ethical behavior in the legal sphere. One of his most famous quotes, 

"Sunlight is said to be the best of disinfectants; electric light the most efficient policeman," 

underscores the difference between regulation and reputation. This metaphor highlights his belief that transparency and public scrutiny can serve as powerful forces for ethical governance, often more effectively than regulations themselves. Prior to his tenure on the Supreme Court, Brandeis made his mark in several landmark cases, such as Muller v. Oregon where he submitted a pioneering "Brandeis Brief" that incorporated social science data into legal argumentation, influencing the Court’s decision on labor laws. He also played a critical role in Erie Railroad Co. v. Tompkins, helping to establish that federal courts could not create general federal common law. Through these cases, Brandeis not only shaped American law but also solidified his reputation as a champion of justice and transparency. His work serves as a compelling illustration of how reputation, shaped by public and ethical actions, can transcend the boundaries set by mere regulations.

Generative Native World: The New World & Love.

April 4, 2024. Your perceptions about me are shaped by what I choose to reveal, and your knowledge about me is confined to what I decide to share with you. I am how I want you to see me. It's this reality of me that you are in love with. Such a new...

Your perceptions about me are shaped by what I choose to reveal, and your knowledge about me is confined to what I decide to share with you. I am how I want you to see me. It's this reality of me that you are in love with.

Such a new reality is on the horizon, where generative AI is woven seamlessly into every aspect of our daily lives. In this Generative Native World, our environments, interactions, and even our thoughts are intimately shaped by intelligent systems and synthetic human personas. This integration heralds the advent of a new era, redefining what we consider to be the new world.

Isaac Asimov often explored themes related to control over information and its impact on society. One of his observations touches directly on the manipulation of knowledge and perceptions:

"The easiest way to solve a problem is to deny it exists." - From The Gods Themselves

These words reflect the idea of controlling perceptions by influencing what people acknowledge as true or false, which closely ties into the theme of shaping reality by controlling information, much like a curated reality. Asimov's extensive work frequently delved into the consequences of such manipulation, particularly within the realms of science and governance.

Newton, Phantasms & Consciousness.

April 3, 2024. Sir Isaac Newton wrote to Sir Henry Oldenburg in a letter circa 1672 “to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie”. Sir Isaac Newton's remark to Sir Henry Oldenburg reflects...

Sir Isaac Newton wrote to Sir Henry Oldenburg in a letter circa 1672:

“to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.”

Sir Isaac Newton's remark to Sir Henry Oldenburg reflects an early inquiry into the nature of perception and consciousness, specifically how we experience colors. This question falls within the broader domain of how sensory inputs—like light—translate into subjective experiences or mental representations, which are referred to as "phantasms" by Newton.

In the context of consciousness, Newton's comment highlights a foundational problem: the transition from physical phenomena (light hitting the retina) to perceptual experience (the sensation of color). This issue is closely tied to what is known today as the "hard problem" of consciousness, which involves understanding how and why subjective experiences arise from neural processes.

Newton's approach to the problem was characteristic of the scientific revolution—seeking to understand the natural world through empirical observation and mechanistic explanations. His query about the "modes or actions" by which light produces the phenomenon of color points to an early interest in what we now consider the science of optics and visual perception. 

Thus, Newton's statement is significant not just for its historical context but also for its anticipation of modern scientific questions about how the brain interprets and constructs reality from sensory data, and how that process contributes to human consciousness.

Subjective experiences, or "phantasms," based on sensory stimuli such as light, sound, and touch build up human consciousness. These sensory stimuli are characterized by embodiment.

Generative Native World: Jobs that Need the Human Touch.

April 3, 2024. Home nursing care exemplifies a profession where the human touch is irreplaceable, yet it remains underpaid compared to other healthcare roles. As demographics shift towards an older population, a significant portion...

Home nursing care exemplifies a profession where the human touch is irreplaceable, yet it remains underpaid compared to other healthcare roles. As demographics shift towards an older population, a significant portion of healthcare funding is funneled into nursing homes and hospitals, emphasizing the growing importance of these services. Home care allows individuals to stay safely in their familiar environments, enhancing their quality of life while being with family, and simultaneously offers cost savings for the healthcare system. These savings can then be redirected to other public health initiatives. Practical support for this sector includes empowering families to care for their elderly at home, thereby balancing personal responsibilities with professional duties.  

Home nursing care remains an area of importance and growing need, securing its place as a vital role in our healthcare system despite the advancements in generative AI. Philosopher John Stuart Mill highlighted the undervaluation of such essential services, stating, "Labor, like all other things which are purchased and sold, and which may be increased or diminished in quantity, has its natural and its market price." 

The personal interaction required in home nursing care, marked by compassion and empathy, is difficult to replicate in machines. At Robometrics® Machines, we understand the complexity of replicating humanness in machines and are actively working to enhance the capabilities of AI in this direction through our work on on-device artificial general intelligence (AGI).

From France to the EU: A Test-and-Expand Approach to EU AI Regulation.

April 1, 2024. Reflecting on Thomas Jefferson's insight that "Laws and institutions must go hand in hand with the progress of the human mind," we see this dynamic unfold in the context of artificial intelligence (AI) regulation within countries...

Reflecting on Thomas Jefferson's insight that "Laws and institutions must go hand in hand with the progress of the human mind," we see this dynamic unfold in the context of artificial intelligence (AI) regulation within countries that are not leading in AI innovation, such as those in the European Union (EU). These countries often expedite AI regulation, potentially as a mechanism to mitigate their lack of direct financial benefit from AI advancements by controlling how AI is deployed. This approach underscores the notion that regulations frequently trail behind technological innovations and need continual updates to keep pace with technological progress.

Drawing from Justice Louis Brandeis's concept of states as "laboratories" of democracy, a more nuanced approach might serve the EU better than its current path toward an EU-wide AI Act. By allowing individual member states, like France, to first test and refine AI policies at a more localized level, the EU could better assess the impact and efficacy of such regulations. This method would enable the EU to implement policies that are proven to work on a smaller scale before scaling them up across all member states, ensuring that regulations are both effective and adaptable to the fast-paced evolution of AI technologies. Such a strategy would promote a more organic integration of regulations and technological development, potentially fostering an environment where innovation can thrive alongside necessary safeguards.

Full article can be found here

Digitizing Human Desires.

March 29, 2024. The human brain is a complex organ that not only governs our physiological processes but also serves as the origin of our feelings and desires. As we trace back to ancient hominids like Ardipithecus ramidus, we observe...

The human brain is a complex organ that not only governs our physiological processes but also serves as the origin of our feelings and desires. As we trace back to ancient hominids like Ardipithecus ramidus, we observe potential beginnings of human-like social behaviors. This species exhibited traits such as vocally mediated sociality and increased prosociality, which were likely facilitated by significant changes in skull architecture, hinting at evolving brain structures geared towards more sophisticated social interactions. The desires that define us as humans—such as power, love, comfort, and play—are expressions of our complex social and emotional fabric. These drives are so integral to the human experience that philosophers have long pondered their implications. Plato once remarked, “Human behavior flows from three main sources: desire, emotion, and knowledge,” highlighting the fundamental role of desire in shaping our actions and societies. It makes us uniquely human. 

As we venture into the age of Artificial General Intelligence (AGI), there is a burgeoning effort to replicate these uniquely human traits digitally, effectively creating a digital limbic system that might preserve and perpetuate our emotional and desirous legacies. This intersection of technology and human emotion illustrates not only the uniqueness of our species but also our relentless aspiration to understand and immortalize the essence of what it means to be human. This is Robometrics® AGI.

The Future of Embodied AGI: Mind, Body & Consciousness.

March 28, 2024. The future of embodied artificial general intelligence (AGI) would involve integrating advanced computational models such as Foundation Models into physical forms that mimic biological processes of living...

The future of embodied artificial general intelligence (AGI) would involve integrating advanced computational models such as Foundation Models into physical forms that mimic the biological processes of living beings, using technologies like DNA-based models and organoids. This artificial living organism doesn’t have to be in human form. Organoids could potentially replicate aspects of the motion, cognition, and real-world interaction of a living organism.

A set of foundation models can simulate human-like memory and learning systems, akin to a digital limbic system. These models can use data from the environment to train the AGI in real-world settings, with simulated and lived experiences feeding back into the foundation models. This approach not only increases adaptability but also enhances the AGI’s decision-making capabilities in dynamic environments.
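Purely as a hypothetical sketch, and not a description of Robometrics® architecture, the snippet below shows one way such an experience-driven loop could look: an agent stores observations, actions, and outcomes from its environment in a rolling memory buffer and periodically replays them, loosely analogous to the digital limbic system described above. Every name, interface, and number here is invented for illustration.

```python
# Hypothetical sketch only, not Robometrics® code: an embodied-agent loop in
# which experiences gathered from the environment are stored in a rolling
# buffer and periodically replayed. All names, interfaces, and numbers are
# invented for illustration.
from collections import deque
import random

class ExperienceMemory:
    """Rolling buffer of (observation, action, outcome) experiences."""
    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)

    def store(self, observation, action, outcome):
        self.buffer.append((observation, action, outcome))

    def sample(self, batch_size: int = 32):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

def act(observation: float) -> str:
    # Placeholder policy standing in for a foundation model's decision.
    return "explore" if observation < 0.5 else "rest"

def environment_step(action: str) -> float:
    # Placeholder environment returning an outcome signal for the action.
    return random.random()

memory = ExperienceMemory()
observation = random.random()
for step in range(1_000):
    action = act(observation)
    outcome = environment_step(action)
    memory.store(observation, action, outcome)   # accumulate lived experience
    observation = outcome
    if step % 100 == 0 and len(memory.buffer) >= 32:
        batch = memory.sample()   # replayed experiences would drive a model update here
```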

Digital consciousness could be achieved in an AGI manifested as an artificial living organism with human-like cognitive processes, such as sensing, analyzing, and acting on internal and external stimuli, combined with embodiment. This paves the way for AGI systems that are not only interactive but also possess some aspects of self-awareness and self-management.

Generative Native World: Learning from Section 230.

March 27, 2024. In the world where changes are happening at an exponential rate, building an environment that caters to the emergent rather than safeguarding the established has become a critical conversation. An illustrative..

In a world where changes are happening at an exponential rate, building an environment that caters to the emergent rather than safeguarding the established has become a critical conversation. An illustrative case is Section 230 of the Communications Act of 1934, enacted as part of the Communications Decency Act of 1996, a piece of legislation passed during the nascent stages of the internet. Originally designed to protect fledgling internet companies, which were then small startups navigating uncharted waters, Section 230 laid the foundation for these entities to evolve into today's technological behemoths, commanding billion-dollar valuations and global influence. However, this monumental growth underscores the pressing necessity for a legal framework that not only continues to support the titans of technology but also provides a nurturing environment for smaller companies, ensuring they too can thrive. A recalibration of laws is needed to level the playing field, echoing famous sentiments that legislative focus should equally prioritize the growth of both large and small enterprises. A new set of regulations is not necessary to achieve such an outcome. Instead, a recalibration is needed in which federal and state governments work to facilitate the scaling of smaller companies, creating an environment where these companies have the opportunity to succeed through sustained innovation and economic viability rather than being acquired by a larger enterprise.

Generative Native World: When a Computer Reads a Book instead of a Human.

March 26, 2024. The 17 U.S. Code § 106 outlines the exclusive rights afforded to copyright holders in the United States, granting them the authority over the reproduction, adaptation, distribution, and public display of their works...

The 17 U.S. Code § 106 outlines the exclusive rights afforded to copyright holders in the United States, granting them the authority over the reproduction, adaptation, distribution, and public display of their works. When it comes to reading books, computers and humans approach the task differently. Computers process texts through algorithms and data analysis, not by visually perceiving the pages as humans do. They scan the text, breaking it down into manageable pieces of information that can be analyzed and stored on digital media, transiently or permanently. The analysis may happen through learned language patterns or prior stored information.
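A toy illustration of this kind of machine "reading" is sketched below: the text is broken into tokens and mapped to numeric identifiers that can be stored and analyzed, with no visual perception or emotional engagement involved. Real systems use learned subword vocabularies such as byte-pair encoding; the whitespace split here is an assumption made only for illustration.

```python
# Toy illustration of how a machine "reads": text is split into tokens and
# mapped to numeric IDs that can be stored and analyzed. Real systems use
# learned subword vocabularies (e.g. byte-pair encoding); this whitespace
# split is only for illustration.
text = "When a computer reads a book, it processes tokens, not pages."

tokens = text.lower().replace(",", "").replace(".", "").split()
vocabulary = {word: i for i, word in enumerate(sorted(set(tokens)))}
token_ids = [vocabulary[word] for word in tokens]   # the numeric form a program stores and analyzes

print(tokens)
print(token_ids)
```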

Humans, on the other hand, engage with books through our visual system, deciphering symbols on a page and interpreting them based on learned language patterns. This process involves not just the eyes but also complex cognitive functions, where information is processed, understood, and stored in our memory.

The fundamental difference between a computer's and a human's reading lies in the approach and outcome: computers process text for data extraction and pattern recognition without emotional engagement, while humans read to understand, learn, and feel, integrating new information into our long- and short-term memory. Whether a book is digital or physical, free or for a fee, the essence of 17 U.S. Code § 106 is for humans to learn from the work of others and to be emotionally engaged with that work. This level and modality of engagement is not present in a general-purpose computer today when reading or analyzing a copyrighted work, be it text or image.

The Responsible AI.

March 22, 2024. Humans, bound by the legal frameworks of their respective nations, such as common law or the Napoleonic Code, carry the mantle of responsibility in society to be responsible humans. Artificial Intelligence (AI), being...

Humans, bound by the legal frameworks of their respective nations, such as common law or the Napoleonic Code, carry the mantle of responsibility in society to be responsible humans. Artificial Intelligence (AI), being a creation of human ingenuity rather than a pseudo-autonomous entity, does not bear this mantle of responsibility; instead, it is the creators and users of AI who must uphold ethical standards, accountability, and responsibility for what they build. Responsible AI is a misnomer. In addition, the notion of what constitutes a responsible individual is not fixed but fluctuates with societal norms and cultural values, which themselves are far from universal. This diversity in moral principles underscores the challenge of defining, in a way that transcends global boundaries, both the responsible individual and the Responsible AI that such an individual (or company) will build.

Looking back in history, both common law and the Napoleonic Code have played important roles in upholding human accountability: The Responsible Human. Given their focus on human behavior, there is no necessity to extend these legal frameworks directly to AI.

March 21, 2024. In legal terms, a person's moral character is defined by their adherence to societal norms and laws, reflecting their ability to distinguish right from wrong within the context of their community. Can this concept be extended to...

In legal terms, a person's moral character is defined by their adherence to societal norms and laws, reflecting their ability to distinguish right from wrong within the context of their community. Can this concept be extended to a Large Language Model (LLM) to make it better? Integrating morality into LLMs presents challenges, as moral values are not universal and can vary significantly across different cultures and societies. In October 1798, John Adams wrote: “Our constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.” This highlights the complexity of creating universally acceptable moral guidelines for AI systems. Even if an LLM doesn't need to be fine-tuned for a religious belief, it still needs to follow a set of moral values and virtues, which differ across cultures and societies.

Full article can be found here.

March 20, 2024. When officers of the executive branch are drafting advisories, rules or orders, especially those pertaining to burgeoning fields like generative AI or high impact areas such as immigration, these can sometimes...

When officers of the executive branch are drafting advisories, rules or orders, especially those pertaining to burgeoning fields like generative AI or high impact areas such as immigration, these can sometimes obscure a related law's initial purpose. As Associate Justice Sotomayor emphasized in her dissent in United States v. Texas, et al. (03/19/2024) on the significance of procedure, "Procedure can be just as consequential as substance." 

Full article can be found here.

Generative Native World: Loving vs. Loving What They Do For Us.

March 19, 2024. Navigating the realm of human-AI relationships brings to the forefront the crucial distinction between loving an artificial entity with some form of intelligence and appreciating the assistance and companionship..

Navigating the realm of human-AI relationships brings to the forefront the crucial distinction between loving an artificial entity with some form of intelligence and appreciating the assistance and companionship it offers. Paraphrasing Sherry Turkle, Professor of the Social Studies of Science and Technology at MIT: "We're not just using machines, we're becoming emotionally entwined with them." Turkle has explored the complex relationships between people and technology, particularly how emotional connections with machines, including computers, robots, and AI, are evolving. Her insights are primarily derived from her books, such as "Alone Together: Why We Expect More from Technology and Less from Each Other," where she delves into the nuances of our interactions with digital and robotic companions.

This emphasizes the nuanced difference between developing affection for AI, whether as chatbots or their physical embodiments, and valuing the role they play in enhancing our lives. For instance, someone might feel a strong affinity for a chatbot that provides daily motivation, mistaking this gratitude for a deeper emotional connection. However, it's essential to recognize that this sentiment is rooted in appreciation for the service rendered, rather than a genuine emotional bond with the machine itself. Similarly, when a robotic companion aids in caregiving, the gratitude and perceived emotional support stem from the tasks performed rather than the robot itself. Such discernment ensures a healthy human-technology relationship, acknowledging the profound capabilities of AI while remaining aware of the fundamental differences between human and artificial or augmented affection.

Generative Native World: Gandhi on Justice Through Love vs. Law.

March 15, 2024. Mahatma Gandhi's assertion, "Justice that love gives is a surrender, justice that law gives is a punishment," encapsulates his deeply held belief in the power of compassion over coercion. It aligns seamlessly with Gandhi's...

Mahatma Gandhi's assertion, "Justice that love gives is a surrender, justice that law gives is a punishment," encapsulates his deeply held belief in the power of compassion over coercion. It aligns seamlessly with Gandhi's overarching philosophy throughout his life and political activism. Gandhi advocated for Ahimsa (non-violence) and Satyagraha (truth-force), emphasizing that true justice and reconciliation are achieved through understanding, love, and a willingness to yield for the greater good, rather than through punitive measures. This perspective suggests that laws and regulations, in their essence, should be designed not merely as tools for maintaining order through punishment but as means to guide society towards higher ethical and moral standards by considering their impact on human dignity and well-being.

In the context of regulating emerging technologies, such as generative AI, Gandhi's philosophy becomes especially pertinent. As these technologies become more intertwined with our social fabric at an exponential rate, the approach to their regulation demands a careful balance between liberty and the future of human society. Regulation should not only aim to prevent harm and misuse but, more importantly, encourage innovation and progress in a manner that uplifts humanity. This calls for a regulatory framework rooted in compassion and in an understanding of the technology's potential impact on society and on the changing human values that determine what is just or unjust, ensuring that it promotes equity between the poorest and the richest.

Gandhi's vision urges modern policymakers to prioritize trust in society and the welfare of all individuals when thinking about justice and law as they relate to generative AI, thus fostering a future where technological advancement in artificial intelligence (AI), growing at an exponential rate, harmonizes with changing human values. This also points to the evolving nature of the law, as noted by Justice Oliver Wendell Holmes Jr. ("The law is a living thing."), emphasizing the importance of adapting legal frameworks to address the unique challenges and opportunities presented by generative AI technologies.

Generative Native World: Deepfakes & Due Process.

March 12, 2024. Voltaire's enduring words, "It is better to risk saving a guilty person than to condemn an innocent one," reverberate with moral gravity and were articulated in his work "Zadig," published in 1747. This sentiment...

Voltaire's enduring words, "It is better to risk saving a guilty person than to condemn an innocent one," reverberate with moral gravity and were articulated in his work "Zadig," published in 1747. This sentiment encapsulates a foundational principle of justice, highlighting the imperative of prioritizing the protection of innocence, even at the potential risk of sparing guilt. In the context of U.S. jurisprudence, exemplified by landmark cases like Brady v. Maryland, 373 U.S. 83 (1963), where the withholding of evidence in violation of due process rights was condemned, Voltaire's principle gained renewed significance. In Brady v. Maryland, the government withheld certain evidence that was material to the determination of either guilt or punishment of a criminal defendant. This violated the defendant's constitutional right to due process. The right to due process is in line with Voltaire's thoughts on laws gravitating towards ensuring that an innocent person is not condemned.

As the advent of generative AI, particularly deepfakes, introduces challenges to the legal landscape, the essence of Voltaire's maxim resonates. It amplifies the need to implement laws in ways that safeguard the presumption of innocence and uphold the rights of the accused, such as due process. In confronting the ethical complexities posed by emerging technologies, such as AI-generated evidence, adherence to Voltaire's principle becomes essential, ensuring that justice remains steadfast in the face of technological evolution.

Peter Kropotkin & AI Governance for the Common Good.

March 8, 2024. Peter Kropotkin, a Russian anarchist and advocate for social change, offered a poignant observation on the nature of law with his quote: "The law is an adroit mixture of customs that are beneficial to society...

Peter Kropotkin, a Russian anarchist and advocate for social change, offered a poignant observation on the nature of law with his quote:

 "The law is an adroit mixture of customs that are beneficial to society, and could be followed even if no law existed, and others that are of advantage to a ruling minority but harmful to the masses of men, and can be enforced on them only by terror." 

Kropotkin's words reflect his keen awareness of the dichotomy within legal systems, wherein laws may serve either the collective good or the interests of a privileged few. This dual nature of law underscores its potential to either foster innovation and societal progress or perpetuate inequality and oppression. Regulators today, particularly in the context of governing AI, can glean from Kropotkin's insight the importance of crafting laws that prioritize the welfare of the broader populace over the interests of powerful entities. By adopting transparent and equitable regulatory frameworks, regulators can mitigate the potential for laws to be wielded as tools of coercion and ensure that they serve the common good in an increasingly complex and interconnected world.

March 5, 2024. Louis D. Brandeis, before his tenure on the U.S. Supreme Court, co-authored a seminal Harvard Law Review article, "The Right to Privacy," advocating for the fundamental "right to be...

Louis D. Brandeis, before his tenure on the U.S. Supreme Court, co-authored a seminal Harvard Law Review article, "The Right to Privacy," advocating for the fundamental "right to be let alone." This early articulation laid the groundwork for modern privacy rights, emphasizing protection against unwarranted intrusion. In today's generative native world of pervasive digital surveillance, Brandeis's advocacy for the "right to be let alone" takes on renewed urgency and significance, highlighting the enduring importance of safeguarding personal boundaries in the face of advancing technology.

Full article can be found here.

The Allure and the Abyss of Proprietary LLMs.

March 4, 2024. Incorporating the most advanced and proprietary Large Language Models (LLMs) into your startup's offering may initially seem like an attractive strategy to not only captivate potential investors...

Incorporating the most advanced and proprietary Large Language Models (LLMs) into your startup's offering may initially seem like an attractive strategy, not only to captivate potential investors and customers but also to impress more personal circles, such as a girlfriend, by showcasing your new product's disruptive edge, or your own. However, this initial excitement masks a deeper financial challenge that comes into play as the product gains some traction. The cost of maintaining and scaling up the use of advanced LLMs can swiftly become a financial burden, potentially outstripping the startup's revenue growth and leading to economic difficulties. This predicament is aptly described as

"a fool's game - the bigger fool loses, and the lesser fool wins,

where the 'bigger fool' refers to venture capitalists investing in a new generative AI company without a sustainable financial model, and the 'lesser fool' denotes the generative AI companies themselves, which proceed to attract such investment without a plan for managing the increased costs that come with success. This highlights the critical need for startups, especially those building on proprietary LLMs, to balance innovation with financial viability.

March 1, 2024. The article discusses the responsibility for errors made by Large Language Models (LLMs), emphasizing that both lawyers and corporations, like Air Canada, are accountable for ensuring...

The article discusses the responsibility for errors made by Large Language Models (LLMs), emphasizing that both lawyers and corporations, like Air Canada, are accountable for ensuring the accuracy of these AI-generated outputs. It notes the legal obligation of lawyers under Rule 11(b) to verify the correctness of documents drafted with LLMs and highlights the broader principle that entities cannot disclaim liability for misinformation provided by their AI technologies, illustrated by Air Canada's tribunal case. The piece advocates for the implementation of safeguards and disclosures.

Full article can be found here

Thomas Edison, Electricity & Democratizing AI.

February 29, 2024. In 1880, Thomas Edison, foreseeing the monumental shift his invention would catalyze, remarked, “After the electric light goes into general use, none but the extravagant will burn tallow candles.”...

In 1880, Thomas Edison, foreseeing the monumental shift his invention would catalyze, remarked, 

“After the electric light goes into general use, none but the extravagant will burn tallow candles.” 

This prescient statement not only highlighted the transformative potential of electricity but also underscored the importance of making essential services universally accessible. Just as Edison's electric light revolutionized how we live, bringing safety, productivity, and accessibility to the masses, so too does artificial intelligence (AI) hold the promise to fundamentally alter our societal fabric. AI, like electricity, has the potential to enhance nearly every aspect of daily life, from healthcare and education to transportation and communication, making it imperative that we build an environment where innovation benefits all, not just a privileged few.

February 29, 2024. The question of whether Artificial Intelligence (AI) should be treated as a common carrier or utility, akin to electricity, is an important one, given its growing impact over the last two years...

The question of whether Artificial Intelligence (AI) should be treated as a common carrier or utility, akin to electricity, is an important one, given its growing impact over the last two years. This debate touches on issues of regulation, public access, and the nature of services enabled by AI. A pivotal case in this discussion is the Supreme Court’s Moody v. NetChoice, which scrutinizes a Florida law passed in 2021 that aims to prevent digital platforms from "censoring" content.

Full article can be found here

Generative Native World: The Need For Critical Thinking.

February 28, 2024. In the era of deepfakes and sophisticated digital manipulation, the wisdom of Francis Bacon becomes ever more relevant. His assertion that “If a man will begin with certainties...

In the era of deepfakes and sophisticated digital manipulation, the wisdom of Francis Bacon becomes ever more relevant. His assertion that 

"If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties."

highlights the indispensable role of critical thinking today. As we navigate a digital landscape where seeing is no longer believing, adopting a mindset of questioning and doubt is crucial. The proliferation of deepfakes challenges our ability to discern truth from fabrication, making it imperative to critically evaluate the authenticity of information before accepting it as fact. By applying Bacon's approach of starting with skepticism, we equip ourselves with a powerful tool in identifying and uncovering the truth amidst a sea of digital deception. This method not only aids in the battle against misinformation but also fosters a more discerning and informed society capable of making decisions based on deeper truths unearthed through critical analysis and thoughtful inquiry.

Common Sense Approach For Common Sense.

February 28, 2024. In the realm of artificial intelligence (AI), there are no metrics to fine-tune an AI algorithm for common sense—a quality that remains uniquely human and nuanced. The common sense approach...

In the realm of artificial intelligence (AI), there are no metrics to fine-tune an AI algorithm for common sense, a quality that remains uniquely human and nuanced. The common sense approach to imbuing machines with a semblance of this human trait involves engaging them in generalized learning through methods such as self-supervised learning, allowing them to interact with real-world objects and scenarios (a minimal sketch of this self-supervised idea appears after the quotes below). This echoes the sentiment of Albert Einstein, who once remarked,

"Common sense is the collection of prejudices acquired by age eighteen.

It highlights the intricate challenge of programming AI with a quality that, for humans, evolves through a lifetime of experiences. Similarly, Benjamin Franklin's assertion,

“Common sense is something that everyone needs, few have, and none think they lack.”

highlights the paradoxical nature of common sense; it's universally valued, yet challenging to define or impart, especially in the context of AI.
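Returning to the self-supervised learning mentioned above, the short sketch below illustrates the core idea in miniature: the supervision signal comes from the data itself. A word is hidden and then predicted from its context using nothing more than co-occurrence counts gathered from raw text. Real systems learn far richer representations, and nothing this simple measures common sense; the corpus, names, and logic here are purely illustrative.

```python
# Minimal illustration of the self-supervised idea: the training signal comes
# from the data itself. A word is masked and filled back in from bigram
# counts built on unlabeled text. Purely illustrative; this does not measure
# or produce common sense.
from collections import Counter, defaultdict

corpus = [
    "the stove is hot so do not touch the stove",
    "ice is cold and fire is hot",
    "do not touch fire because fire is hot",
]

# Build simple bigram counts from raw, unlabeled text.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev_word, next_word in zip(words, words[1:]):
        following[prev_word][next_word] += 1

# Self-supervised "task": mask the last word and predict it from the previous one.
masked = "fire is [MASK]"
context = masked.split()[-2]                 # the word just before the mask
prediction = following[context].most_common(1)[0][0]
print(prediction)                            # likely "hot"
```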

February 27, 2024. Robometrics® Machines has created the Copilot AI for Cessna®, a digital assistant designed for General Aviation pilots flying Cessna® aircrafts, leveraging large language models (LLMs)...

Robometrics® Machines has created the Copilot AI for Cessna®, a digital assistant designed for General Aviation pilots flying Cessna® aircraft, leveraging large language models (LLMs) to provide instant access to the vast knowledge contained within aircraft Pilot Operating Handbooks (POH). This tool enables pilots to quickly find information on aircraft operation, performance metrics, and troubleshooting through a simple conversational interface. Pilots can ask direct questions; queries should be specific and may use keywords. Copilot AI offers instant access to operational queries, safety enhancements, educational support, and troubleshooting help. It is intended for use in pre-flight planning, in-flight queries, and continuous learning. However, it is emphasized that Copilot AI does not substitute for professional training or consultation with aviation authorities, and pilots should always prioritize safety and adhere to regulatory requirements. Copilot AI is programmed to provide the most accurate answers possible and will clearly communicate when it does not know the answer, to ensure reliability and trustworthiness.

The knowledge base includes the POHs for Cessna® models such as C150 Aerobat, C150, C152, C172N, C172RG, C172S, C182P, C182Q, C182S, and C182T-Nav II, enhancing its capability to assist with a broad range of Cessna® aircraft. Additionally, the POHs for the Cessna® 162, Cessna® 150 Aerobat, Cessna® 150, and Cessna® 152, and the manuals for the Lycoming® O-360 and IO-360 engines have been added to the knowledge base. These specific Cessna® aircraft models were picked because they are available to fly at West Valley Flying Club (WVFC), where our Founder & CEO is a member.
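For readers curious how such an assistant can stay grounded in the handbooks, the snippet below sketches a generic retrieval pattern of the kind commonly paired with LLMs: candidate passages are scored against the pilot's question and the best match is packed into a prompt for a language model. This is an illustrative sketch only, not the actual Copilot AI implementation; the passage text, the question, and the scoring function are placeholders.

```python
# Generic retrieval sketch, not the actual Copilot AI implementation: handbook
# passages are scored against the pilot's question by keyword overlap, and the
# best match is packed into a prompt for a language model. Passage text and
# the question are placeholders, not excerpts from any real POH.
def score(question: str, passage: str) -> int:
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)

poh_passages = [
    "Section 4, short field takeoff: flap setting and rotation speed are listed in the performance tables.",
    "Section 3, engine fire in flight: follow the emergency checklist for fuel and electrical shutoff.",
    "Section 4, normal landing: use the approach speeds given for the selected flap setting.",
]

question = "Where do I find the rotation speed for a short field takeoff?"
best_passage = max(poh_passages, key=lambda p: score(question, p))

prompt = (
    "Answer only from the handbook excerpt below. "
    "If the answer is not in the excerpt, say you do not know.\n\n"
    f"Excerpt: {best_passage}\n\nQuestion: {question}"
)
# The prompt would then be sent to a language model, so its answer is grounded
# in the retrieved handbook text rather than free recall.
print(prompt)
```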

You can access Robometrics® Copilot AI for Cessna® Aircrafts here.

Justice Benjamin N. Cardozo on Law as a Means For Welfare.

February 26, 2024. Justice Benjamin N. Cardozo, who served on the Supreme Court of the United States from 1932 until his death in 1938, is renowned for his profound influence on American jurisprudence...

Justice Benjamin N. Cardozo, who served on the Supreme Court of the United States from 1932 until his death in 1938, is renowned for his profound influence on American jurisprudence and legal thought. His tenure on the court was marked by a commitment to principles that balanced individual rights with the broader needs of society, a philosophy encapsulated in his assertion that 

"The final cause of law is the welfare of society. The rule that misses its aim cannot justify its existence." 

This perspective is particularly pertinent in the context of regulating emerging technologies, such as generative artificial intelligence (AI). Cardozo's philosophy highlights the necessity of crafting laws and regulations that not only address the rapid advancements and potential risks associated with such technologies but also ensure that they serve the greater good of society. As AI continues to evolve, the challenge for lawmakers and regulators is to implement measures that prevent harm and misuse while fostering innovation and benefits, thus reflecting Cardozo's vision of law as a means to promote societal welfare.

February 26, 2024. The Citizens United v. Federal Election Commission case, decided on January 21, 2010, stands as a landmark Supreme Court decision under Chief Justice John Roberts...

The Citizens United v. Federal Election Commission case, decided on January 21, 2010, stands as a landmark Supreme Court decision under Chief Justice John Roberts. The core issue at hand was whether the free speech clause of the First Amendment could allow the government to limit corporations, including non-profit corporations, labor unions, and other associations, from making independent political expenditures. “The First Amendment confirms the freedom to think for ourselves.” Similarly, in Moody v. NetChoice, where oral arguments were heard on February 26, 2024, the freedom to think and to decide which social media posts individuals should see, and how they are moderated, rests with corporations and individuals alone.

Full article can be found here

Generative Native World: World is Changing and We Must Change With It.

February 22, 2024. In an era where Heraclitus of Ephesus's assertion that "The only constant in life is change" resonates more profoundly than ever, the fabric of our existence continues to be woven...