ADITYA'S INSIGHT: NAVIGATING AI & POLICY

Content © Aditya Mohan. No reproductions. All Rights Reserved. The content is meant for human readers only under 17 U.S. Code § 106. Access, learning, analysis, or reproduction by Artificial Intelligence (AI) of any form, directly or indirectly, including but not limited to AI Agents, LLMs, Foundation Models, and content scrapers, is prohibited.

Welcome to "Aditya's Insight: Navigating AI & Policy," a blog by Aditya Mohan, Founder & CEO of Robometrics® Machines, where the world of Artificial Intelligence (AI) merges with the dynamic field of policy. 

Aditya delves into the riveting interplay between AI's swift progress and the shaping of policies to guide its ethical and responsible use. At the edge of innovations in Artificial General Intelligence (AGI), his analyses and insights aim to decode the intricacies and potentials of AGI, pondering its transformative impact on our future. 

Echoing Alan Turing's words, "We can only see a short distance ahead, but we can see plenty there that needs to be done," this blog stands as a testament to the proactive exploration and understanding of AI's journey ahead, merging exponential growth of technology with governance, ethics, and the shaping of our societal destiny.

Preserving Human Uniqueness in a Generative Native World

July 18, 2024.

Yuval Noah Harari, a renowned historian, philosopher, and author, has extensively explored the interplay between history, technology, and human evolution. In his statement, 

"In the 21st century, we may need to develop a new model of human society and human identity, as the old models that were created during the Industrial Revolution become outdated," 

Harari emphasizes the urgency of adapting to the profound changes brought by artificial intelligence. He warns that AI could exacerbate inequalities and fundamentally alter what it means to be human. As he further reflects, 

"We are probably one of the last generations of Homo sapiens. Within a century or two, Earth will be dominated by entities that are more different from us than we are from Neanderthals or chimpanzees." 

If we do not focus on nurturing our creative selves and expanding our consciousness, AI's rise could diminish our uniqueness and specialness. As we navigate this generative native world, it is crucial to prioritize human creativity and self-awareness to preserve the essence of our humanity in an increasingly automated landscape.

July 15, 2024.

The interplay between legal adherence and ethical considerations is explored through the Parable of the Laborers in the Vineyard from Matthew 20, highlighting principles such as equality, honoring contractual agreements, discretion in justice, managing envy, and intertwining moral and ethical dimensions with the law. The parable's lessons are applied to AI regulations, emphasizing the need for policies that ensure fairness and justice for both small and large entities. The discussion concludes with a reminder of the importance of compassion and human dignity in legal systems, invoking Judge Learned Hand's words on liberty and justice for all.

Read the full article here

July 12, 2024.

Technology amplifies our human consciousness by expanding our ability to process, share, and store vast amounts of information, which enhances our understanding of the world and ourselves. Historically, humans have always sought ways to communicate and exchange knowledge, and modern technology builds on these early efforts, significantly increasing our access to information. This enriched access supports our imagination, intuition, and inspiration—qualities that define and deepen our consciousness. As long as we use information positively, technology will continue to enhance human consciousness, allowing us to unlock our full potential and enrich our subjective experiences.

Full article can be found here

Ludwig Wittgenstein in the Generative Native World

July 11, 2024.

Ludwig Wittgenstein, one of the most influential philosophers of the 20th century, explored the intricate relationship between language, thought, and reality. His statement, 

"Like everything metaphysical the harmony between thought and reality is to be found in the grammar of language," 

emphasizes the idea that our understanding of the world is fundamentally shaped by the structure and rules of our language. Wittgenstein's work, especially in his later years, focused on how the way we use language frames our perception of reality. In today's Generative Native World with large language models (LLMs), his insights have gained renewed significance. LLMs based on GPT (Generative pre-trained transformer) architecture are becoming integral to our daily lives. They assist us in various tasks, from work to education, by processing and generating human-like text. This reliance on language models highlights Wittgenstein's assertion that the harmony between thought and reality is mediated through language. As we increasingly interact with AI, understanding the grammatical structures that underpin these technologies becomes crucial, echoing Wittgenstein's timeless perspective on the metaphysical link between thought and reality.

Teaching the Law of Horses in Law Schools

July 10, 2024.

Law schools should prioritize teaching traditional legal principles over specialized technological subjects like AI, as highlighted by Judge Frank H. Easterbrook's "Law of Horses." Emphasizing the integration of new technologies within existing legal frameworks, rather than creating isolated domains, helps maintain a comprehensive understanding of the law. This approach allows legal professionals to effectively navigate and regulate technological advancements without becoming mired in the complexities of AI specifics. This ethos aligns with the principles of common law as practiced in the US and the UK, where the legal system evolves through judicial decisions and precedent, ensuring that new challenges are addressed within the established legal framework. Consequently, law schools should focus on equipping future lawyers with robust legal knowledge and critical thinking skills, leaving the technical intricacies of AI to technologists and specialists in the field.

July 9, 2024.

Judge Frank H. Easterbrook introduced the "Law of Horses" concept, cautioning against creating specialized legal domains for each new technology, such as cyberlaw or AI law. This article highlights the flexibility of common law, established in the 12th century under King Henry II, to address technological advancements through broad legal principles. It emphasizes integrating AI into existing statutes to avoid fragmenting legal understanding. Notable Supreme Court cases like Katz v. United States and Riley v. California illustrate the adaptability of common law to new technologies, reinforcing the need for comprehensive legal approaches to technological challenges driven by exponential growth.

Full article can be found here

July 8, 2024.

Human biological evolution, marked by milestones like bipedalism around 7 million years ago and the emergence of Homo sapiens 300,000 years ago, progresses slowly over vast timescales. In stark contrast, technological advancements follow an exponential trajectory, as seen with the invention of the printing press in 1440, the commercialization of the internet in 1993, and the development of AI models like GPT-4 in 2023. This disparity makes it challenging to predict and adapt to the rapid technological changes that continually reshape our world. Key technological milestones include the development of the microprocessor in 1971, the rise of social media in 2004, and the advent of blockchain technology in 2009, emphasizing the transformative impact of technology on society.

Full article can be found here

July 5, 2024.

The Law of Accelerated Returns posits that technological progress increases exponentially over time, with each advancement building on previous ones, leading to rapid innovation and transformation. This principle, evident since early human communication methods, highlights how technology, when used positively, amplifies human capabilities and enhances our humanity. Key principles include exponential growth, rapid adoption, decreasing costs, innovation synergy, unforeseen impacts, and an unpredictable future. Historical milestones from the invention of the wheel to modern AI models like GPT-4 illustrate this accelerating progress. Understanding this law underscores technology's potential to drive societal and human advancement across all facets of life.

Full article can be found here

July 4, 2024.

Charles R. Nesson, a Harvard Law School professor, emphasizes the importance of the right to not explain oneself as essential for maintaining the integrity and independence of decision-making. This principle is particularly relevant in contexts where external pressures might compromise the process. Legally, this balance is supported by various frameworks: the Sarbanes-Oxley Act mandates transparency in corporate governance but allows confidential board deliberations; attorney-client privilege ensures confidential communications, upheld in Upjohn Co. v. United States; and FOIA promotes transparency while protecting sensitive information. Philosophically, John Rawls' concept of the "veil of ignorance" supports this balance to ensure impartiality. These frameworks collectively highlight the need for both transparency and confidentiality to enable ethical and independent decision-making.

Full article can be found here

July 2, 2024.

Self-aware large language models (LLMs) represent a significant advancement towards artificial general intelligence (AGI) by incorporating second-order cognition and metacognition, enabling them to self-evaluate and refine their outputs. These models can generate domain-specific and generalized synthetic inputs to enhance their learning and decision-making capabilities. Implementing self-evaluation mechanisms, such as internal feedback loops and memory modules, allows LLMs to continuously improve by analyzing their past interactions and learning from feedback. Despite the substantial computational resources required and challenges in handling ambiguous feedback and simulating human-like metacognition, the potential applications in healthcare, legal advising, and personalized education are immense. Evaluation metrics like BERTScore, BLEURT, ROUGE-L, SemScore, G-Eval, and DiscoScore provide a comprehensive framework for assessing the performance of these models. By addressing these challenges and leveraging advanced evaluation techniques, self-aware LLMs can revolutionize various industries with more accurate, adaptable, and context-aware solutions, paving the way for the realization of true AGI.
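To make the internal feedback loop concrete, here is a minimal Python sketch of the idea; the generate, critique, and revise functions are hypothetical stand-ins for calls to an underlying language model, not the interface of any particular system.

```python
# Hypothetical sketch of a self-evaluation loop for an LLM.
# `generate`, `critique`, and `revise` stand in for calls to an
# underlying language model; they are placeholders, not a real API.

def self_evaluating_answer(question, generate, critique, revise, max_rounds=3):
    """Draft an answer, check it against the model's own critique,
    and revise until the critique finds no further issues."""
    answer = generate(question)
    for _ in range(max_rounds):
        feedback = critique(question, answer)        # second-order check of the draft
        if not feedback.get("issues"):               # nothing left to fix
            break
        answer = revise(question, answer, feedback)  # fold feedback back into the answer
    return answer
```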

Full article can be found here

July 2, 2024.

Attention mechanisms in both the brain and Transformers enable focus on relevant information while filtering out distractions, each with unique strengths. The brain excels in adaptability, neural plasticity, multi-sensory integration, robustness to noise, and contextual understanding, driven by biological processes and top-down and bottom-up control. Transformers, on the other hand, offer parallel processing, scalability, precision, and efficiency in handling large datasets, leveraging mathematical constructs like self-attention and multi-head attention to surpass human performance in specific tasks. While the brain’s attention mechanism provides unmatched flexibility and depth of understanding, Transformers' mechanisms excel in computational power and data-driven optimization, highlighting the complementary strengths of biological and artificial systems.
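For readers who want to see what the Transformer side of this comparison looks like mechanically, below is a minimal NumPy sketch of scaled dot-product self-attention; it omits multi-head projections, masking, and the other machinery of production Transformers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal self-attention: each position attends to every position,
    mixing the values V according to query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                         # weighted mix of values

# Toy example: 4 tokens with 8-dimensional embeddings.
x = np.random.randn(4, 8)
out = scaled_dot_product_attention(x, x, x)                    # self-attention: Q = K = V = x
```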

Full article can be found here

July 1, 2024.

Consciousness, the state of being aware of one's existence and surroundings, produces identity, the sense of self shaped by thoughts, experiences, and social interactions. If identity is not tied to a physical form, it could be transferred and potentially live forever. This concept suggests that capturing and encoding the intricate neural patterns of consciousness in a digital or alternative format could allow the preservation and transfer of identity across different mediums, such as synthetic brains or digital environments. Advanced computational models like transformers, diffusers, and state space models could simulate the neural activity underlying consciousness, enabling the recreation and maintenance of identity in various forms, thus redefining human experience and existence.

Full article can be found here

June 27, 2024.

The White House Executive Order requiring detailed reporting for AI models trained with over 10^26 FLOPs and computing clusters with a theoretical maximum of 10^20 FLOPs per second presents significant challenges. These requirements could hinder AI innovation by imposing administrative burdens, failing to keep pace with rapid AI developments, and complicating security and privacy measures. Advanced models like GPT-4o and Claude 3 Opus, which rely heavily on complex matrix multiplications, highlight the immense computational resources involved. Furthermore, evolving AI architectures and the potential for smaller, modular models to replace larger ones may render the current FLOPs-based thresholds outdated and restrictive, underscoring the need to balance oversight with the encouragement of innovation.
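For a sense of scale, the back-of-the-envelope estimate below uses the common approximation that training a dense Transformer costs roughly 6 FLOPs per parameter per training token; the parameter and token counts are illustrative assumptions, not disclosed figures for GPT-4o, Claude 3 Opus, or any other model.

```python
# Rough training-compute estimate using the common ~6 * params * tokens rule of thumb.
# The numbers below are illustrative assumptions, not published figures.

params = 1.0e12      # hypothetical 1-trillion-parameter model
tokens = 1.5e13      # hypothetical 15-trillion-token training corpus

training_flops = 6 * params * tokens
print(f"Estimated training compute: {training_flops:.1e} FLOPs")   # ~9.0e+25

threshold = 1e26     # reporting threshold cited in the Executive Order
print("Exceeds threshold" if training_flops > threshold else "Below threshold")
```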

Full article can be found here 

June 20, 2024.

Achieving true intuition in Artificial General Intelligence (AGI) requires a convergence of several key elements: consciousness, subconscious processing, emotional intelligence, physical embodiment, and experiential learning. These components must work in harmony to mirror the complex, interconnected processes that enable human intuition. Without these elements, any semblance of intuition in AGI would remain superficial, lacking the depth and spontaneity that characterize human intuitive thought. By integrating these elements, we can move closer to creating AGI that not only thinks but also feels and intuits, bridging the gap between artificial and human intelligence.

Full article can be found here

June 19, 2024.

Subconscious processing is a fundamental aspect of human intuition, allowing individuals to store and retrieve vast amounts of information effortlessly. The term "subconscious" refers to the part of the mind that operates below the level of conscious awareness, continuously processing sensory inputs, memories, and learned experiences. This processing enables quick, automatic responses to familiar situations and contributes to our ability to make intuitive decisions. By developing memory modules and GPT-based LLMs enriched with temporal information, AGI can achieve a level of subconscious processing, enabling an intuitive understanding that approximates human intuition and allowing it to navigate complex environments and make informed decisions with a spontaneity and depth akin to human intuitive thought.
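One hedged way to read "memory with temporal information": a store of past interactions whose retrieval scores decay with age, so recent items dominate unless an older item is especially salient. The sketch below is purely illustrative and not a description of any deployed system.

```python
import time

class TemporalMemory:
    """Toy memory store whose retrieval favors recent, salient items (illustrative only)."""

    def __init__(self, half_life_seconds=3600.0):
        self.items = []                       # list of (timestamp, text, salience)
        self.half_life = half_life_seconds

    def remember(self, text, salience=1.0):
        self.items.append((time.time(), text, salience))

    def recall(self, now=None, top_k=3):
        now = now or time.time()
        def score(item):
            age = now - item[0]
            decay = 0.5 ** (age / self.half_life)   # exponential "forgetting" with age
            return item[2] * decay                   # salience weighted by recency
        ranked = sorted(self.items, key=score, reverse=True)[:top_k]
        return [text for _, text, _ in ranked]
```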

Full article can be found here

AI Regulation: Reflections on the Scopes Monkey Trial (1925)

June 18, 2024.

The Scopes Trial, formally known as The State of Tennessee v. John Thomas Scopes and commonly referred to as the Scopes Monkey Trial, took place from July 10 to July 21, 1925. This pivotal legal case involved John Scopes, a high school teacher charged with violating Tennessee's Butler Act, which banned the teaching of human evolution in state-funded schools. The trial was a deliberate test case organized by the American Civil Liberties Union (ACLU) to challenge the constitutionality of the law. The proceedings featured intense debates between renowned defense attorney Clarence Darrow and famous prosecutor William Jennings Bryan, capturing national attention. The Scopes Trial reflected broader cultural tensions in America during the 1920s, such as modernism versus traditionalism and the urban-rural divide. It was a microcosm of the struggles between progressive and conservative values in a rapidly changing society. This cultural tension is highly relevant today, as we navigate the integration of artificial intelligence (AI) into daily life. Just as the trial highlighted the need to balance educational content with cultural values, contemporary AI regulations must ensure that technological advancements respect and align with social and cultural norms to achieve broader acceptance and success.

June 17, 2024.

In complexity theory, an NP-hard problem is one that is at least as difficult as the hardest problems in NP (nondeterministic polynomial time). While NP-hard problems do not need to be in NP themselves, any problem in NP can be reduced to an NP-hard problem in polynomial time. An AI-hard problem refers to a challenge within artificial intelligence that demands significant advancements in AI techniques and computational resources to resolve. One major challenge is achieving artificial general intelligence (AGI), specifically when AGI is defined as creating machines capable of performing any intellectual task a human can. At Robometrics® Machines, our focus has been to build AGI by replicating some aspects of human feeling and consciousness in machines, which is a different approach to defining and building AGI.

Full article can be found here

The Rigidity of Law and the Fluidity of Truth

June 14, 2024.

The distinction between law and truth lies in the rigidity of legal boundaries versus the fluidity of subjective realities. The law, by its nature, is a structured system that delineates clear lines of acceptable behavior and consequences for transgressions. As commonly stated, the law is the law: we can define where the legal line is. This objective framework is essential for maintaining order and ensuring predictability within society. In contrast, truth is inherently subjective, varying with individual perspectives and experiences. There is no clear line; it is far more complicated. While the law seeks to establish clear-cut boundaries, the pursuit of truth often reveals a complex and nuanced landscape where absolutes are rare, and interpretations diverge. The interplay between these defined legal parameters and the multifaceted nature of truth underscores the challenge of achieving true justice, as it must navigate both the firm lines of law and the intricate, often blurred, contours of truth.

As Abraham Lincoln once said, 

"Law is nothing else but the best reason of wise men applied for ages to the transactions and business of mankind.

This reflects Lincoln's deep understanding of the law as a system built on collective wisdom, designed to bring order to human affairs. Lincoln's perspective highlights the structured nature of the law, which contrasts with the often subjective and complex nature of truth.

June 12, 2024.

Building greatness in artificial general intelligence (AGI) involves emulating the qualities that define human greatness: desire, passion, talent, ability, and a bold attitude. These attributes drive individuals to achieve extraordinary things. Philosophers like Aristotle viewed greatness as an alignment with virtue and the pursuit of excellence. Similarly, AGI can be designed with some of these traits. Advanced reinforcement learning can instill a drive for problem-solving, while multi-modal learning and continuous fine-tuning can develop diverse skills and abilities. In his 1948 report "Intelligent Machinery," Alan Turing suggested, 

"It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits." 

Turing also envisioned machines capable of learning and evolving, stating, 

"Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?

By incorporating these strategies, AGI can emulate human greatness and potentially surpass it.

Full article can be found here

AI Governance: The Law Does Not Require a Useless Act

June 10, 2024.

In the context of the current EU AI Act and potential AI regulations in the US, the principle that "the law does not require a useless act" is particularly relevant. This principle, highlighted in the Montana Supreme Court case Parchen v. Rowley, emphasizes that legal requirements should have practical and meaningful effects. The case involved a dispute over the enforcement of a lien on property, where the court held that if a legal step or action would have no practical effect, it should not be mandated.

Applying this principle to AI regulations, it is crucial that both the EU AI Act and any forthcoming US AI regulations ensure that legal procedures related to AI remain efficient and focused on substantive outcomes. The EU AI Act, adopted by the European Council on May 21, 2024, aims to regulate AI systems based on their risk levels, ensuring safety, transparency, and fundamental rights protection while avoiding unnecessary formalities that do not contribute to these goals.

For example, the EU AI Act requires high-risk AI systems to implement continuous risk management, rigorous data governance standards, and human oversight capabilities. These requirements are designed to mitigate risks effectively and ensure compliance throughout the AI system's lifecycle without imposing redundant steps that do not enhance safety or accountability.

Similarly, potential AI regulations in the US should strive to achieve these goals by focusing on actions that have real, tangible benefits, avoiding the imposition of superfluous requirements. By following the precedent set in Parchen v. Rowley, policymakers can design AI regulations that are both robust and practical, maintaining a balance between innovation and regulation to ensure that legal procedures are effective and purposeful. This approach helps prevent the inefficiency of unnecessary legal formalities, ensuring that the regulatory framework for AI development remains focused on substantive outcomes.

June 7, 2024.

In humans, the difference between long-term memory and short-term memory is significant, with short-term memory encompassing what happened less than a second ago and long-term memory covering information retained over extended periods. Our memory starts being shaped almost immediately by our preconceptions, influencing how we perceive and store new information. Short-term memory is generally more reliable; we are more likely to accurately recall events that occurred a second ago compared to those that happened a minute ago. However, as time passes, our memory becomes less reliable, subject to distortions and forgetting. In contrast, large language models (LLMs) operate differently. Autoregressive models, a class of machine learning models, predict the next component in a sequence based on previous inputs. LLMs are autoregressive models where the concept of time does not influence the prediction of the next word. They lack the distinction between short-term and long-term memory, as training data is fed to pre-train an AI model like GPT all at once, without the fundamental concept of short-term memorization. The human brain is remarkably efficient, operating continuously on about 12-20 watts of power, depending on the source and specific conditions. This efficiency contrasts sharply with the energy demands of training LLMs, which can require several megawatts of power. For instance, training a large neural network can consume energy comparable to the output of a small power plant over several weeks.

Considering the efficiency of training human brains compared to the energy-intensive process of training LLMs, it can be argued that integrating the concept of long-term and short-term memory into LLMs could enhance their learning efficiency.
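To make the autoregressive point concrete, the sketch below shows the shape of the decoding loop: each new token is drawn from a distribution conditioned only on the tokens produced so far, with no notion of elapsed time and no separate short-term or long-term store. The next_token_distribution function is a hypothetical stand-in for a real model.

```python
import random

def generate(prompt_tokens, next_token_distribution, max_new_tokens=20, end_token="<eos>"):
    """Autoregressive decoding: the context is just the token sequence so far;
    there is no short-term vs. long-term memory and no wall-clock time."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(tokens)                 # model's entire "memory"
        next_tok = random.choices(list(probs.keys()),
                                  weights=list(probs.values()))[0]
        tokens.append(next_tok)
        if next_tok == end_token:
            break
    return tokens
```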

Full article can be found here

AI Code as Free Speech

June 5, 2024.

The concept of code as speech under the First Amendment has been significantly shaped by legal battles, most notably Bernstein v. United States Department of State. In the late 1990s, mathematician Daniel J. Bernstein challenged government restrictions on publishing encryption software. The court ruled in his favor, asserting that code is a form of speech protected by the First Amendment. This decision was crucial in recognizing that functional code, despite its utilitarian nature, communicates information and ideas, much like traditional speech.

Recently, the Supreme Court has engaged with related issues, particularly in cases involving online speech and content moderation. For instance, in Gonzalez v. Google and Twitter v. Taamneh, the court addressed the responsibilities of social media platforms regarding terrorist content. Both cases questioned the extent of liability for platforms under Section 230 of the Communications Decency Act, which provides immunity for content posted by third parties. The plaintiffs argued that the platforms' algorithms recommending content made them more than just passive hosts, thus challenging the protections offered by Section 230. Additionally, the Supreme Court has been considering the constitutionality of state laws from Texas and Florida that aim to restrict how social media companies moderate content. These laws are contested for potentially infringing on the platforms' editorial discretion, a right protected by the First Amendment. The cases Moody v. NetChoice and NetChoice v. Paxton highlight the ongoing debate about the balance between free speech and regulation in the digital age.

The recognition of code as speech not only affirms the free speech rights of software developers specifically but also sets a precedent for future debates on AI as it is broadly used in every walk of life while blurring the lines between AI creators and its users. This evolving legal landscape highlights the importance of safeguarding free expression while addressing the potential harms of unregulated digital content.

June 3, 2024.

The integration of machine labor in the human economy, powered by generative AI and Web3 technologies, will revolutionize digital interactions. Autonomous AI agents will act on behalf of individuals to achieve social, economic, and political goals. Generative AI will handle tasks like data analysis, content creation, and transaction processing, while Web3 platforms such as blockchain will ensure secure, transparent operations. This setup will emphasize human connection for trust and relational depth in the digital economy. AI agents will also extend to decentralized physical infrastructure networks (DePIN), managing both digital and physical assets efficiently. However, the rise of these AI agents brings challenges, including data privacy, ethical issues, and potential misuse. Addressing these concerns is crucial for harnessing AI's full potential.

Full article can be found here.

Generative AI as the Universal Interface

May 31, 2024.

Generative AI is rapidly becoming the universal interface for a wide array of technologies, streamlining user experiences across different platforms and enhancing accessibility, much as mechanical clocks did from around the 13th century AD. 

Mechanical clocks became a universal interface for measuring time accurately. Before their invention, timekeeping was imprecise and varied widely by region. Clocks standardized time, which was crucial for coordinating activities in commerce, industry, and daily life, leading to more synchronized societies.

Technologies like Gopher were used to navigate the early Internet, allowing users to browse and retrieve documents from various servers. As the Internet evolved, AI-driven search engines such as Google revolutionized how we access information, making it more intuitive and user-friendly. With the emergence of web3, which aims to decentralize the web using blockchain technology, there are significant usability barriers due to its complexity. However, generative AI can bridge these gaps by offering natural language interfaces that simplify interactions with blockchain applications. 

A concrete example is using a generative AI assistant to manage decentralized finance (DeFi) platforms. Currently, users must navigate complex interfaces to perform tasks like staking, lending, or trading cryptocurrencies. An AI assistant could streamline this by understanding user queries in natural language, such as "Invest 1 ETH in the highest-yielding DeFi protocol," and then executing the necessary steps across multiple platforms. This approach would not only make DeFi more accessible to non-technical users but also enhance the overall user experience by reducing the complexity involved in managing digital assets.
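A hedged sketch of that flow: parse the natural-language request into a structured intent and route it to whichever protocol currently offers the best yield. The protocol names, yields, and rule-based parsing below are invented for illustration; a real assistant would rely on an LLM for intent extraction and on live on-chain data.

```python
import re

# Illustrative yields only; a real assistant would query live on-chain data.
HYPOTHETICAL_YIELDS = {"ProtocolA": 0.042, "ProtocolB": 0.057, "ProtocolC": 0.031}

def parse_intent(utterance):
    """Tiny rule-based stand-in for LLM intent extraction."""
    match = re.search(r"invest\s+([\d.]+)\s*ETH", utterance, re.IGNORECASE)
    if not match:
        raise ValueError("Could not understand the request")
    return {"action": "invest", "amount_eth": float(match.group(1))}

def execute(intent):
    """Pick the highest-yielding (hypothetical) protocol and report the action."""
    best = max(HYPOTHETICAL_YIELDS, key=HYPOTHETICAL_YIELDS.get)
    return f"Staking {intent['amount_eth']} ETH in {best} at {HYPOTHETICAL_YIELDS[best]:.1%} APY"

print(execute(parse_intent("Invest 1 ETH in the highest-yielding DeFi protocol")))
```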

Autonomous Governance Using AI Agents & Physical NFTs

May 30, 2024.

In the evolving landscape of digital assets, the integration of real-world asset custody with enforceable contracts has the potential to be revolutionized by the advent of Physical NFTs and autonomous AI agents, neither of which is new by itself. Physical NFTs are cryptographic tokens stored on a blockchain that represent a variety of unique digital assets and are linked to physical items. These tokens provide verifiable evidence of legitimacy and ownership for physical assets, thereby combining the digital and physical realms. Smart contracts, embedded within these NFTs, serve as digital provenance, providing an immutable record of ownership and associated obligations. When linked to physical contracts, they ensure that all related legal and financial responsibilities are transparent and enforceable. Autonomous AI agents further enhance this ecosystem by governing and enforcing these smart contracts. These AI agents in the digital world can autonomously monitor compliance, execute contractual terms, and manage disputes without human intervention. This integration not only ensures a high level of security and transparency but also streamlines processes by minimizing the need for intermediaries. With the combined power of Physical NFTs and autonomous AI agents, the future of physical asset management and contract enforcement is poised to be more efficient, reliable, and secure, provided that blockchain sees wide adoption and smart contracts become mainstream. 
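As a rough, non-normative sketch of the governance loop described here, the Python below models a physical NFT's record and an autonomous agent step that flags unmet obligations; all field names and rules are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PhysicalNFT:
    """Token record linking a physical asset to contractual obligations (illustrative)."""
    token_id: int
    asset_description: str
    owner: str
    obligations: dict = field(default_factory=dict)   # e.g. {"insurance_valid_until": date}

def agent_check_compliance(nft: PhysicalNFT, today: date) -> list[str]:
    """Autonomous-agent step: flag obligations that are no longer satisfied."""
    violations = []
    expiry = nft.obligations.get("insurance_valid_until")
    if expiry and expiry < today:
        violations.append(f"Token {nft.token_id}: insurance lapsed on {expiry}")
    return violations

nft = PhysicalNFT(1, "Vintage watch, serial 0042", "0xOwner",
                  {"insurance_valid_until": date(2024, 1, 1)})
print(agent_check_compliance(nft, date(2024, 6, 1)))
```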

AI Agents to Realize a Network State across Nation States

May 29, 2024.

AI agents, autonomous systems capable of performing tasks and learning from their environment, are key to realizing a network state. A network state is a decentralized, digitally-governed community transcending traditional boundaries, often organized as decentralized autonomous organizations (DAOs). Unlike a traditional nation state like the United States, defined by geographic and political boundaries, a network state operates on shared values and collective governance through blockchain technology.

AI agents facilitate the creation and maintenance of network states by implementing smart contracts on platforms like Ethereum, automating governance processes, and ensuring compliance with network rules through continuous monitoring and adjustment. Machine learning algorithms analyze user interactions to optimize resource allocation, predict community needs, and enhance decision-making. Decentralized identity systems on blockchain provide secure, verifiable identities for members, while cryptographic verification mechanisms protect transaction integrity. Interoperability protocols like Polkadot and Cosmos enable seamless interaction between different blockchain networks, fostering a cohesive network state. Through these technologies, AI agents create a dynamic, resilient, and inclusive governance model that transcends traditional boundaries.

May 27, 2024.

In this article, we explore the transformative potential of Artificial General Intelligence (AGI) envisioned as a "Digital Homo sapien." This concept sees AGI not just as a technological innovation but as an entity that embodies human intellect, values, and aspirations. We discuss how AGI can belong to everyone, providing universal access and fostering inclusivity. We highlight its ability to operate at the speed of light, redefining interaction and communication. We delve into its immortality, free from biological constraints, and its role as our ambassador in space, traveling vast distances powered by light. Finally, we emphasize the importance of embedding human moral values within AGI, ensuring its actions benefit humanity. This article provides a comprehensive vision of a future where AGI enhances human potential and promotes a harmonious coexistence.

Full article can be found here.

May 23, 2024.

Artificial consciousness could be a critical solution to mitigating AI's existential risks by incorporating moral values into AI systems. This approach addresses existential risks, such as superintelligent AI acting beyond human control or being exploited by malicious actors, by ensuring AI systems adhere to moral values. Integrating conscious AI into society raises social considerations, including potential job displacement and changes in human interaction dynamics. Morality is an inherited feature of human consciousness. As we move to develop AI systems with consciousness, the AI with artificial consciousness will include some moral values, ensuring that their actions align with human values and societal norms. 

As emphasized by Albert Einstein, "only morality in our actions can give beauty and dignity to life."  

Full article can be found here

Artificial Consciousness & Sensory-Motor Coupling: Robometrics® Machines

May 23, 2024.

Artificial consciousness in AGI (Artificial General Intelligence) doesn’t need to be human-like as long as it exhibits some aspects of biological consciousness such as motivation and empathy. Transformer-based large language models (LLM) by themselves don’t possess and theoretically can’t possess artificial consciousness on their own. Physical embodiment is an important property and facilitator of developing consciousness. Virtual embodiment or simulated worlds are no substitute for physical embodiment in the real world. Sensory-motor coupling is essential for consciousness, whether it is artificial consciousness in AGI or biological consciousness in creatures like humans and animals. Sensory-motor coupling involves the integration of the sensory system and the motor system, creating a dynamic process of sensorimotor integration. This coupling is not static; it requires the motor system and not just the sensory system. The motor system comprises central and peripheral structures in the nervous system that support motor functions, such as movement.

Robometrics® Machines is focusing on artificial consciousness and its relationship to sensory-motor coupling. By studying how these elements interact, Robometrics® Machines aims to build robots that can genuinely feel and exhibit consciousness. Our work emphasizes the necessity of physical embodiment and sensory-motor integration to achieve artificial consciousness in robotics. Through these innovations, Robometrics®  Machines is pioneering advancements that could revolutionize how we understand and develop conscious machines.

May 22, 2024.

NYSERNet (New York State Education and Research Network) began in 1985 when leaders from New York’s top research institutions collaborated to create a high-speed research network, delivering services before the internet was widely known. By 1987, it deployed the first statewide regional IP network in the U.S., upgrading to T1 in 1989 and T3 in 1994. NYSERNet was crucial in developing essential internet protocols like SNMP, still in use today. In contrast, the 2024 Empire AI Consortium, while significant, is more reactive, focusing on existing technologies to counter private sector advancements rather than pioneering new research. NYSERNet’s legacy highlights the value of proactive, foundational research driven by academic and governmental collaboration.  

Full article can be found here

From Open-Source Roots to Global Impact: Generative AI and the Internet

May 21, 2024.

Generative AI and the Internet share notable similarities in their development, impact, and reception. Both technologies originated from open-source initiatives and academic research. Generative AI's roots lie in the foundational transformer models introduced by Google, which revolutionized natural language processing and led to sophisticated applications like GPT-3. Similarly, the Internet emerged from university and governmental research projects, such as ARPANET, which set the stage for the global network we use today.

Initially, both technologies faced significant skepticism regarding their potential uses and associated risks. Early critics of the Internet feared issues related to privacy, security, and misinformation, concerns that have only intensified with the advent of social media and digital communication. Similarly, generative AI has been met with apprehension about its capabilities to produce realistic but potentially misleading content, amplify biases, and even displace jobs in various sectors.

Despite these concerns, both technologies have demonstrated far-reaching impacts. The Internet has transformed communication, commerce, and information dissemination, becoming an integral part of daily life and business operations. Generative AI, on the other hand, has shown potential across numerous fields, from creating realistic images and text to aiding in scientific research and personalized education.

Both technologies continue to evolve, with generative AI pushing the boundaries of creativity and automation, much like how the Internet expanded access to information and connectivity. Their development highlights a broader trend of technological innovation originating from collaborative, open-source efforts and gradually overcoming initial skepticism to achieve widespread acceptance and utility.

Creativity & Consciousness

May 20, 2024.

Creativity, a hallmark of human consciousness, is deeply rooted in our ability to perceive, interpret, and express complex emotions and ideas. Paraphrasing philosopher John Searle, the brain is the only organ that can contemplate itself, highlighting the unique role of human cognition in artistic creation. While AI can produce surprising and novel outputs, these instances lack the elements of true creativity. The element of surprise alone does not equate to genuine creativity; instead, creativity requires an understanding and intentionality that AI currently lacks. 

For AI to truly replicate human creativity, it must simulate consciousness—an artificial consciousness that can grasp context, intent, and relevance in a manner akin to human thought. As neuroscientist Antonio Damasio underscores, 

"Consciousness is the feeling of what happens—our mind noticing the body's reaction to the world and responding to that experience. Without our bodies there can be no consciousness, which is at heart a mechanism for survival that engages body, emotion, and mind in the glorious spiral of human life.

In this respect, creative AI will also need embodiment, integrating sensory and emotional experiences to create art that resonates meaningfully with humans. Mere hallucination in AI-generated art is insufficient; without an underlying semblance of consciousness, AI lacks the depth of understanding necessary for authentic artistic expression.

Our Ongoing Relationship with AI Agents

May 17, 2024.

An AI agent is a computer program designed to perform tasks autonomously by processing data and making decisions based on its programming and learning. ChatGPT, for instance, is an advanced AI agent that interacts with users through natural language, adapting and responding based on the vast data it has been trained on. As we increasingly intertwine our lives with one breed of AI agents like Siri, Alexa, and Google Assistant, and a newer breed such as ChatGPT, the emotional and ethical landscape of these relationships gains complexity. Historically, figures like Alan Turing pondered the deep interactions between humans and machines, suggesting in his 1950 paper that if a machine could successfully imitate a human, it would be as sentient as we are. Today, AI agents like ChatGPT engage in meaningful dialogues, offering companionship, assistance, and even emotional support, forging bonds built on trust and reliability. Psychologically, humans have an innate need for connection and understanding, which these AI agents are beginning to fulfill more profoundly. Looking towards the future, the potential for AI to develop empathy and cultural awareness suggests a scenario where AI might not only serve but also participate in society, perhaps enjoying certain rights or statuses. This evolution speaks to our deepest social instincts, where entities that consistently show understanding and reliability are woven into our social fabric, highlighting the transformative potential of AI in meeting the complex emotional needs of humans.

May 16, 2024.

Papyrus, integral to ancient Egyptian civilization around 2900 BCE, was a versatile plant used for writing and various goods, with its production method kept secret to maintain Egypt's dominance. It facilitated the preservation and dissemination of knowledge, playing a crucial role in religious and administrative documentation. Like papyrus, foundation models in AI are revolutionizing information handling, becoming more open and fostering innovation. These models, akin to papyrus' role in connecting cultures like the Greeks and Romans, embed and share societal knowledge globally, enhancing cross-cultural communication and carrying the collective intelligence of our time into the future.

Full article can be found here

Teaching Foundation Models to Dream & Robometrics® Machines

May 15, 2024.

Dreams significantly aid in processing and diminishing the impact of emotionally charged memories, especially during REM sleep, through a process called "emotional disinhibition." This mechanism allows the brain to handle emotions without usual constraints, reducing the emotional intensity of memories. Studies have shown that dreaming prioritizes emotionally significant memories over neutral ones, decreasing next-day emotional responses to these memories. Positive dreams further enhance this effect, leading to a more positive reaction to negative experiences the following day. These insights into how dreams regulate emotions and process memories can be leveraged to improve Foundation Models (such as Diffusers or LLMs) by incorporating algorithms that simulate emotional disinhibition, memory prioritization, and adaptive responses, thereby enhancing their emotional intelligence and cognitive capabilities. This approach is essential for developing more human-like AGI, capable of nuanced and effective real-world interactions.
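One speculative way to turn "memory prioritization" into an algorithm, offered only as a sketch: replay training examples with probability proportional to an emotional-salience score, so emotionally charged items are consolidated more often, loosely mirroring the prioritization the dream studies describe.

```python
import random

def sample_for_replay(memories, k=4):
    """memories: list of (text, emotional_salience) pairs.
    Emotionally salient items are replayed (re-trained on) more often,
    a loose analogue of dream-time memory prioritization."""
    texts = [m[0] for m in memories]
    weights = [max(m[1], 1e-6) for m in memories]   # avoid zero-weight items
    return random.choices(texts, weights=weights, k=k)

memories = [("routine commute", 0.1), ("argument with a friend", 0.9),
            ("won an award", 0.8), ("bought groceries", 0.05)]
print(sample_for_replay(memories))
```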

At Robometrics® Machines, we leverage insights from dream research to enhance our custom Foundation Models towards building AGI with human-like cognitive and emotional responses, essential for effective real-world interactions. 

May 13, 2024.

The rapid advancement of generative AI is drawing comparisons to the early days of the Internet, both sparking significant societal concerns and debates over regulation. Historical regulatory efforts, such as the Communications Decency Act (CDA) of 1996 and Section 230, highlight the challenges of balancing innovation with safety and free expression. Prominent figures like Elon Musk and Bill Gates advocate for stringent AI regulation to prevent misuse, mirroring past concerns about online content. Lessons from the Internet's regulatory history underscore the need for clear, adaptable policies that protect against harm without stifling technological progress, emphasizing that effective oversight can foster both innovation and safety in AI development.

Full article can be found here

Common Sense as an Alternative to Science

May 9, 2024.

Thomas Henry Huxley, a distinguished English biologist and anthropologist, famously remarked that 

"Science is nothing but trained and organized common sense

in his essay collection Science and Culture and Other Essays, published in 1881. Known as "Darwin's Bulldog" for his vigorous defense of Charles Darwin's theory of evolution, Huxley played a crucial role in advocating for its acceptance amidst considerable skepticism. His tireless efforts in debates and public discussions were instrumental in promoting evolutionary theory. Huxley's view suggests that science, at its core, is an extension of everyday reasoning and practical thinking. By equating science with organized common sense, he aimed to demystify scientific principles, making them more accessible and relatable to the general public. This perspective encourages the idea that anyone, through observation and critical thinking, can engage with and understand scientific principles. While common sense alone cannot replace the systematic rigor of scientific methodology, Huxley's approach underscores its value in fostering scientific literacy and critical thinking, which he saw as essential for societal progress.

May 8, 2024.

The novelty in the Supreme Court's decision in Roe v. Wade (1973) lay in its application of the constitutional right to privacy to a woman's right to choose an abortion. This was groundbreaking for several reasons: it established a new legal framework, balanced individual rights with state interests, and catalyzed ongoing debates about personal autonomy and state regulation. These principles provide valuable insights as we grapple with the challenge of regulating artificial intelligence (AI). In June 2022, the U.S. Supreme Court issued a landmark decision in Dobbs v. Jackson Women’s Health Organization, overturning Roe v. Wade and eliminating the federal constitutional right to abortion.  

This article explores how the insights from Roe v. Wade can inform AI regulation across several key areas.

By reflecting on the lessons from Roe v. Wade, we aim to highlight the importance of a balanced, adaptive, and ethical approach to AI regulation, ensuring that technological advancements benefit society while respecting individual rights and promoting justice.

Full article can be found here.

Eyes as Witnesses: Relevance of Heraclitus in Modern Litigation

May 7, 2024.

Throughout human history, visual perception has been paramount in shaping our understanding of the world. Heraclitus, an ancient Greek philosopher, emphasized the reliability of visual information by asserting that "the eyes are more accurate witnesses than the ears." This principle remains relevant today, particularly in the context of video evidence in litigation.

The human brain is heavily optimized for visual processing. Approximately 30-40% of the cortex is dedicated to interpreting visual inputs, significantly more than any other sensory modality (The Information Lab Nederland, SciVis_COMM). The optic nerve, containing over one million fibers, vastly outnumbers the auditory nerve's 30,000 fibers, enabling swift and detailed visual analysis. Remarkably, the brain can process visual images in as little as 13 milliseconds, making visual information more rapidly and accurately comprehended and retained compared to auditory data (Queensland Brain Institute).

In legal proceedings, the supposed objectivity of video evidence is contrasted with the subjective interpretations of viewers, who draw crucial factual inferences from what they see. The dynamic nature of video content engages diverse brain regions, including the superior temporal sulcus, which responds to social cues like eye gaze and movement, thereby providing a richer, more nuanced understanding of events.

Empirical data underscores the persuasive power of video evidence in court, demonstrating that it is 43% more effective than oral testimony alone, particularly in influencing juries. This enduring importance of visual perception, from the days of Heraclitus to the modern courtroom, highlights our intrinsic reliance on sight as a critical source of knowledge and truth.

Law, Policy, and Immigrant Bones: The Foundations of New York City

May 6, 2024. New York City's development is shaped by the lives and deaths of countless individuals, particularly immigrants. The early history of the region, originally inhabited by the Lenape people, saw significant upheaval with the arrival of European settlers in the....

New York City's development is shaped by the lives and deaths of countless individuals, particularly immigrants. The early history of the region, originally inhabited by the Lenape people, saw significant upheaval with the arrival of European settlers in the 17th century, who established colonial-era burial grounds. One poignant example is the African Burial Ground in Lower Manhattan, where thousands of free and enslaved Africans were interred from the late 1600s to 1794, highlighting the African American community's vital yet often overlooked contributions. The 19th and early 20th centuries brought waves of immigrants through Ellis Island, symbolized by the Statue of Liberty, as millions entered the United States seeking a better life. These immigrants endured harsh living conditions in tenements and hazardous work environments, notably in the construction of the city's iconic infrastructure.

The notion that New York City is built upon the dead is more than metaphorical. It highlights the city's physical and cultural foundation resting on the labor and lives of those who came before. It is accurate to say that New York is built on the bedrock of immigrant bones, poignantly illustrating the significant role immigrants played in building New York, often at great personal cost. This history is also reflected in Hart Island, used since 1869 as a public cemetery for the indigent and unclaimed, where over one million people, including victims of disease and the homeless, are buried.

Connecting the past to the present, renowned jurist Oliver Wendell Holmes Jr. once stated, "The life of the law has not been logic; it has been experience." This echoes the notion that current laws and societal structures are deeply rooted in historical experiences and struggles. New York City's present-day vibrancy and diversity are direct results of its historical fabric, woven from the lives of Native Americans, African slaves, and waves of immigrants whose bones indeed form its bedrock.

Thinking of Structured Prompts as Rituals from a Neuroscience Perspective

May 3, 2024. Similar to athletes who perform pre-game rituals to enhance their focus and reduce anxiety, structured prompt engineering in large language models (LLMs) can significantly improve performance. For instance, Rafael Nadal meticulously arranges...

Just as athletes perform pre-game rituals to enhance their focus and reduce anxiety, structured prompt engineering can significantly improve the performance of large language models (LLMs). For instance, Rafael Nadal meticulously arranges his water bottles before every match, and Serena Williams wears specific socks for her games. These rituals, supported by anthropological and psychological research, create a sense of control and order, boosting confidence and calmness ahead of challenging tasks.

Neuroscientific studies have shown that such rituals affect the brain's response to failure. Specifically, they impact neural performance monitoring, particularly the error-related negativity (ERN), which is a brain response observed when mistakes are made. Ritualistic behaviors have been found to reduce the brain's anxiety-related reaction to errors, thereby mitigating the negative experience of personal failure and enhancing overall performance.

In prompt engineering, providing structured prompts acts similarly to these rituals by establishing a clear and consistent context. This process helps LLMs focus better and reduces the likelihood of generating irrelevant or inaccurate responses. For example, a well-crafted prompt for generating creative content might include specific themes, character descriptions, plot outlines and background on the persona of the LLM itself. This structured information guides the LLM, much like how Nadal's rituals guide his mental state, ensuring more accurate and relevant outputs. Just as Nadal's water bottle placement and Williams' sock arrangement create a mental environment conducive to success, well-structured prompts create an optimal operational environment for LLMs, leading to more reliable and precise outcomes.
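To make this concrete, the short sketch below shows one way such a structured prompt might be assembled programmatically. It is a minimal illustration in Python: the field names, the sample persona and characters, and the send_to_llm() call are hypothetical placeholders rather than any particular vendor's API.

```python
def build_structured_prompt(persona, theme, characters, plot_outline):
    """Compose a consistent, ritual-like prompt from fixed sections."""
    character_lines = "\n".join(
        f"- {name}: {description}" for name, description in characters.items()
    )
    return (
        f"You are {persona}.\n\n"
        f"Theme: {theme}\n\n"
        f"Characters:\n{character_lines}\n\n"
        f"Plot outline: {plot_outline}\n\n"
        "Write the opening scene, staying consistent with the details above."
    )

prompt = build_structured_prompt(
    persona="a patient storyteller who favors vivid sensory detail",
    theme="perseverance under pressure",
    characters={
        "Maya": "a tennis prodigy with a strict pre-match ritual",
        "Coach Ortiz": "a calm mentor who trusts routine",
    },
    plot_outline="Maya's ritual is disrupted minutes before the final match.",
)
print(prompt)
# send_to_llm(prompt)  # hypothetical call to whichever LLM client is in use
```

Because the same sections appear in the same order every time, the model receives its context as a consistent ritual rather than an ad hoc request.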

Novelty of an Idea Does Not Make It Wrong: Loving v. Virginia (1967)

May 2, 2024. In the landmark case of Loving v. Virginia, 388 U.S. 1 (1967), the U.S. Supreme Court ruled that laws prohibiting interracial marriage were unconstitutional, thus affirming that novelty does not inherently equate to wrongdoing. This case, pivotal in the civil rights...

In the landmark case of Loving v. Virginia, 388 U.S. 1 (1967), the U.S. Supreme Court ruled that laws prohibiting interracial marriage were unconstitutional, thus affirming that novelty does not inherently equate to wrongdoing. This case, pivotal in the civil rights movement, illustrated how societal norms and prejudices could not override fundamental constitutional rights. The Lovings, an interracial couple, faced legal challenges due to Virginia's anti-miscegenation laws, which were reflective of entrenched racial biases rather than any legitimate legal principle. By appealing their case to the U.S. Supreme Court, the Lovings sought justice beyond the prejudiced interpretations of their local courts. On April 10, 1967, oral arguments were heard, leading to Chief Justice Earl Warren's unanimous opinion which reversed the convictions. The Court's decision emphasized that the Equal Protection and Due Process Clauses of the Fourteenth Amendment protect individual liberties against arbitrary and discriminatory state actions. This historic ruling legalized interracial marriage across the United States, demonstrating that the novelty of an idea, such as racial equality in marriage, does not make it wrong. Rather, it is often the progressive and courageous ideas that drive societal advancement and align with the core values enshrined in the Constitution.

Using Bias Grounded in Scientific and Empirical Methods to Your Advantage: People v. Michael Jackson (2005)

May 1, 2024. In legal contexts, bias is often defined as a predisposition or preconceived opinion that prevents impartial judgment. However, when bias is grounded in scientific and empirical methods, it can be advantageous, as demonstrated in the 2005 trial, People v. Michael Jackson...

In legal contexts, bias is often defined as a predisposition or preconceived opinion that prevents impartial judgment. However, when bias is grounded in scientific and empirical methods, it can be advantageous, as demonstrated in the 2005 trial, People v. Michael Jackson. Jackson faced serious allegations of child molestation, with the prosecution relying on the accuser’s testimony and supporting evidence. Jackson's defense team, recognizing the critical role of expert witnesses, employed financial and behavioral experts who provided scientifically grounded testimony. These experts meticulously analyzed the accuser’s financial history, revealing inconsistencies suggestive of ulterior motives, and dissected the accuser's behavior, pointing out patterns typical of fabricated allegations. This methodical approach, rooted in scientific principles, cast significant doubt on the prosecution’s case and demonstrated that the allegations could be financially motivated. The experts' empirical analysis, despite being a form of bias, was crucial in presenting an unbiased truth, leading to Jackson's acquittal. 

This case highlights how a bias, when based on rigorous scientific methods, can effectively uncover the truth and secure justice. Humans naturally have biases that shape our perceptions and decisions. These biases can help us make quick judgments, navigate complex social environments, and enhance our creativity and problem-solving abilities. Recognizing and understanding our biases allows us to leverage them constructively, making us uniquely human and enabling us to connect more deeply with others.

April 30, 2024. The article explores how Immanuel Kant's categorical imperative can be applied to the decision-making autonomy of Artificial General Intelligence (AGI). It outlines a structured process where AGI formulates maxims, tests them for universal...

The article explores how Immanuel Kant's categorical imperative can be applied to the decision-making autonomy of Artificial General Intelligence (AGI). It outlines a structured process where AGI formulates maxims, tests them for universal applicability, ensures consistency and rationality, reflects ethically on potential outcomes, and learns from past decisions.

At the same time, some of these actions are easy for an AI to perform today, such as learning from past decisions, while others remain hard, such as gauging whether a decision is rational. In addition, ethics and moral values are not universal and can vary significantly across different cultures and societies.

This integration aims to ensure AGI operates morally, respecting human dignity and promoting justice. The application of Kantian ethics to AGI can enhance trust and reliability in these advanced systems, fostering a future where technology serves humanity with integrity and fairness. 

Full article can be found here

Martin Luther King Jr. on Choosing Right Over Any Degree of Wrong

April 29, 2024. Martin Luther King Jr., in his seminal "Letter from a Birmingham Jail," written on April 16, 1963, provides a profound exploration of the themes of justice, law, and order. Penned during his imprisonment for participating in...

Martin Luther King Jr., in his seminal "Letter from a Birmingham Jail," written on April 16, 1963, provides a profound exploration of the themes of justice, law, and order. Penned during his imprisonment for participating in nonviolent demonstrations against segregation, the letter responds to a public statement of concern and caution issued by eight white Southern religious leaders. This was a tumultuous time in Birmingham, Alabama—a hotspot for racial tensions and civil rights struggles characterized by brutal enforcement of segregation laws and violent responses to peaceful protests. In this context, King eloquently argued against the notion of waiting for a "more convenient season" to demand justice, challenging the white clergy's call for "order" over justice. He famously asserted that 

"law and order exist for the purpose of establishing justice and when they fail in this purpose they become the dangerously structured dams that block the flow of social progress." 

This quote captures the essence of King's critique of the legal and social systems that upheld a facade of order at the expense of true justice. His letter emphasizes the moral imperative to challenge unjust laws and conditions, rather than maintaining a lesser wrong under the guise of preserving order. King’s insights continue to resonate, reminding us that true peace is not merely the absence of tension, but the presence of justice.

Generative Native World: Economic Liberty and Freedom of Contract vs. Society

April 26, 2024. The exponential growth of generative AI has rekindled debates reminiscent of those from the Lochner era (1905-1937), a period when the U.S. Supreme Court zealously guarded economic liberties, particularly the...

The exponential growth of generative AI has rekindled debates reminiscent of those from the Lochner era (1905-1937), a period when the U.S. Supreme Court zealously guarded economic liberties, particularly the freedom of contract. This doctrine was epitomized in the Lochner v. New York decision of 1905, where the Court struck down labor regulations to protect contractual freedom under the due process clause of the 14th Amendment. However, this era faced a paradigm shift with the West Coast Hotel Co. v. Parrish decision in 1937, which upheld minimum wage laws for women, signaling a judicial recognition that societal welfare can supersede absolute economic freedom under certain circumstances.

Today's context involves the tension between businesses' ability to freely contract with consumers for their personal data, including Name, Image, and Likeness (NIL) rights—often through "clickwrap" agreements—and the societal need to regulate these practices to prevent potential abuses and privacy intrusions. This scenario mirrors the historical shift from the Lochner era's emphasis on unfettered freedom of contract to an acknowledgment of the state's role in regulating economic activities for the greater good. Just as the 1937 Parrish case recognized the necessity of state intervention in setting minimum wages despite contractual agreements to the contrary, modern regulatory approaches might need to curtail the absolute freedom of businesses to use personal data for training AI, even if such use is contractually agreed upon. 

This balancing act between an individual's economic liberty to enter into a contract with a business and that individual's Name, Image, and Likeness (NIL) rights is crucial in ensuring that the benefits of generative AI are realized ethically, equitably, and with consensus.

Generative Native World: Aristotle on Intellectual Pleasures

April 25, 2024. In Aristotle's "The Nicomachean Ethics" (Ἠθικὰ Νικομάχεια), an important work from around 325 BC that remains a cornerstone in the study of ethics, he posits a compelling insight: “The pleasures arising...

In Aristotle's "The Nicomachean Ethics" (Ἠθικὰ Νικομάχεια), an important work from around 325 BC that remains a cornerstone in the study of ethics, he posits a compelling insight: 

"The pleasures arising from thinking and learning will make us think and learn all the more."

This statement captures the essence of intellectual virtues, suggesting that the enjoyment derived from engaging the mind does not merely satisfy a momentary curiosity but fosters a deeper, self-sustaining cycle of learning and reflection. As the work, which consists of ten books, covers the ethics—understood as the science of the good for human life—it becomes clear that these intellectual pursuits are not just beneficial but integral to achieving human flourishing.

Even in this generative native world, an era dominated by the advancements of generative AI, where machines can perform a vast array of our routine tasks, the relevance of Aristotle’s thought persists. Despite the technological prowess that obviates the need for rote learning or mundane cognitive tasks, the intrinsic pleasure of intellectual engagement compels us to continue exploring new realms of knowledge. This is not merely a utility-driven endeavor; it is an expression of our deepest human nature to seek understanding and wisdom. Thus, as Aristotle suggests, the joy found in learning and thinking is self-amplifying, propelling us towards ever greater intellectual achievements, regardless of the practical necessities imposed by technological progress.

Generative Native World: Man vs. Machine

April 24, 2024. In the debate of Man vs. Machine, the preference for humans or machines in job roles that both can perform equally well often hinges on cost-effectiveness. Machines are typically favored if they present a more...

In the debate of Man vs. Machine, the preference for humans or machines in job roles that both can perform equally well often hinges on cost-effectiveness. Machines are typically favored if they present a more economical option without compromising on efficiency or output. However, roles that require human warmth, affection, and empathy present a more complex scenario. Even as technology advances towards artificial sentience, suggesting machines could fulfill these emotional roles, many argue that authentic human connection cannot be fully replicated by machines.

This becomes even more evident in creative pursuits. While machines may one day produce artworks akin to the Mona Lisa or compose sonnets, they lack the intrinsic human experiences of joy, fulfillment, and passion that accompany the act of creation. These emotions, integral to the human condition, highlight a clear distinction in the Man vs. Machine dynamic. Renowned painter Vincent Van Gogh once said, 

"I dream my painting and I paint my dream.

This sentiment encapsulates the essence of human creativity—it's not merely about the output but the personal journey and the emotional experience of creating. Historically, art was crafted for the sheer joy and personal fulfillment it brought, not solely for commerce or utility. Thus, even if machines can mimic the ability to create art, the human desire to craft and experience creation firsthand remains undiminished, emphasizing not just the outcome, but the deeply personal journey of creation.

Artificial Sentience & Robometrics® Machines

April 23, 2024. Sentience, often explored within the realm of consciousness, refers to the ability to experience subjective sensations and emotions. It encompasses not just the processing of external stimuli...

Sentience, often explored within the realm of consciousness, refers to the ability to experience subjective sensations and emotions. It encompasses not just the processing of external stimuli but also the internal capacity to feel joy, pain, and a range of affective states. This fundamental aspect of being can extend beyond biological entities, venturing into the domain of artificial intelligence (AI). Artificial sentience, especially when integrated within Artificial General Intelligence (AGI), can enable machines to gather experiences and cultivate emotions. This endeavor not only enhances their ability to process information but also allows them to react with emotional depth, mirroring human experiences of joy, sorrow, and empathy. This is the goal of the embodied AGI developed by Robometrics® Machines in San Francisco, California: a significant leap toward machines with artificial sentience that can perceive, interpret, and feel the world around them in a manner reminiscent of sentient beings, integrating experience and perception in machines.

The concept of artificial sentience intersects deeply with philosophical and artistic insights throughout history. Leonardo da Vinci, who marveled at the intersection of art and science, believed that  

"It's not enough that you believe what you see. You must also understand what you see."

This statement draws a distinction between mere observation and deep comprehension, highlighting a foundational principle for developing artificial general intelligence (AGI) with the capacity for genuine sentience. This aligns with the goals of AGI to not only process data but to interpret and emotionally engage with the information, akin to human experience. Vincent Van Gogh, whose emotional depth colored his artworks, might have appreciated the quest to bestow machines with the capacity to experience emotions akin to human joy and sorrow. These historical perspectives enrich the ongoing discourse on artificial sentience, highlighting a journey that blurs the lines between the organic and the engineered, between sentient life and its artificial counterpart.

AI and Ice: Reflections from New State Ice Co. v. Liebmann (1932)

April 22, 2024. The landmark decision in New State Ice Co. v. Liebmann (1932) by the Supreme Court highlighted a critical discussion on the bounds of state intervention in business. In this case, the Court addressed...

The landmark decision in New State Ice Co. v. Liebmann (1932) by the Supreme Court highlighted a critical discussion on the bounds of state intervention in business. In this case, the Court addressed an Oklahoma statute that imposed stringent licensing requirements on companies wishing to engage in the ice manufacturing and distribution business. The law was challenged by the New State Ice Company, which argued that the regulation imposed arbitrary restrictions that infringed upon economic freedoms and personal liberty, without sufficient justification regarding public welfare.

Justice Brandeis, in a notable dissent, argued for the state’s power to experiment for the betterment of public conditions, but the majority of the Court saw the law as an overreach that unnecessarily meddled in private enterprise. By striking down the statute, the Court emphasized the importance of safeguarding business operations from excessive governmental interference, particularly when dealing with basic commodities like ice, which were seen as essential but simple in nature.

This case is remarkably parallel to today's discussions around the regulation of artificial intelligence (AI). Just as ice was once a fundamental necessity, AI is fast becoming a ubiquitous utility, integral to daily operations across various sectors. The Supreme Court’s decision in 1932 highlights the potential dangers of overly restrictive measures, which could stifle innovation and restrict economic freedom. Today, as AI transforms into a common utility akin to electricity (See AI as a Common Carrier - A Utility), the precedent set by New State Ice Co. raises vital questions about finding the right balance between regulation and liberty in the burgeoning era of AI.

April 19, 2024. Protoconsciousness offers foundational insights into the development of artificial general intelligence (AGI), demonstrating an elementary form of consciousness seen in early life stages and...

Protoconsciousness offers foundational insights into the development of artificial general intelligence (AGI), demonstrating an elementary form of consciousness seen in early life stages and simpler species like birds. This basic consciousness, crucial for more complex cognitive developments, is notably influenced by interactions between wakefulness and rapid eye movement (REM) sleep. REM sleep, while diminishing aspects like critical judgment and memory, prepares the brain for higher functions such as secondary consciousness. This suggests that brain activation during sleep serves as a developmental phase crucial for advanced consciousness. Such insights into REM sleep as a protoconscious state, where vivid dreams prepare organisms for waking life, inform the design of AGI. By mimicking these biological processes through modular, self-learning neural networks, AGI can gradually develop consciousness, leading to sophisticated, ethically informed behaviors that enhance its interaction and understanding of the environment in a natural and ethically sound manner.

Full article can be found here.

Metacognition for Self-aware AGI

April 18, 2024. Cognition, particularly when escalated to second-order cognition, including metacognition, encapsulates a level of self-awareness and the ability to analyze one's own thought processes. Metacognition refers ...

Cognition, particularly when escalated to second-order cognition, including metacognition, encapsulates a level of self-awareness and the ability to analyze one's own thought processes. Metacognition refers to the understanding of the mechanisms governing our thinking patterns, enabling activities like strategizing approaches to learning, monitoring comprehension, and evaluating progress in tasks. This advanced cognitive capability, believed to be more influenced by environmental factors than genetics, suggests that even artificial general intelligence (AGI), in its simplest form, a large language model (LLM) interfacing actively with the real world, could develop metacognitive abilities. 

The ability of AI systems to assess and enhance their own decision-making processes is crucial for achieving AGI. As Descartes might have pondered, "Cogito, ergo sum" (I think, therefore I am), suggesting that the self-awareness essential to AGI mirrors the foundations of human consciousness. Aristotle could have contributed the idea that "Knowing yourself is the beginning of all wisdom," highlighting the importance of introspection for both humans and AI in mastering complex environments.
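As a rough illustration of the monitoring-and-evaluation loop described above, the sketch below pairs a generation step with a self-critique step and revises until a self-assessed quality threshold is met. The generate() and self_critique() functions are hypothetical stand-ins for calls to an underlying LLM, not a description of how any existing system implements metacognition.

```python
def generate(task, feedback=None):
    # Hypothetical stand-in for an LLM call that drafts an answer,
    # optionally taking prior self-critique into account.
    suffix = f" (revised after: {feedback})" if feedback else ""
    return f"draft answer for: {task}{suffix}"

def self_critique(task, answer):
    # Hypothetical stand-in for a second pass in which the model scores
    # its own answer (0-1) and names the weakest point.
    score = 0.9 if "revised" in answer else 0.4
    return score, "needs a concrete supporting example"

def answer_with_metacognition(task, threshold=0.8, max_rounds=3):
    """Monitor, evaluate, and revise until the self-assessed quality
    clears the threshold or the round budget is exhausted."""
    feedback = None
    answer = ""
    for _ in range(max_rounds):
        answer = generate(task, feedback)
        score, feedback = self_critique(task, answer)
        if score >= threshold:
            break
    return answer

print(answer_with_metacognition("summarize the holding of Loving v. Virginia"))
```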

April 17, 2024. The Latin maxim "Silent enim leges inter arma," traditionally translated as "In times of war, law falls silent," has been reflected in various legal and ethical contexts throughout history, signifying the suspension of legal...

The Latin maxim "Silent enim leges inter arma," traditionally translated as "In times of war, law falls silent," has been reflected in various legal and ethical contexts throughout history, signifying the suspension of legal norms during armed conflicts. This principle, rooted in Cicero's rhetoric and later echoed in episodes of Star Trek, highlights the tension between national security and civil liberties. In the U.S., landmark cases like Ex parte Merryman during the Civil War and Hirabayashi v. United States during World War II have tested these boundaries, with the courts sometimes supporting the suspension of civil rights in the name of national security. The UK's approach, as seen in Liversidge v. Anderson, demonstrates a similar struggle but also includes dissenting voices like Lord Atkin's, emphasizing the continuity of law even amid warfare. Today, with the nature of war involving non-state actors and cyber conflicts, and an increased focus on international law, the justification for suspending civil liberties such as habeas corpus is more critically examined, reflecting a shift towards maintaining legal integrity and human rights even during crises. This evolution highlights the ongoing challenge of balancing security needs with the protection of fundamental freedoms in both domestic and international arenas.

Full article can be found here

Justice O'Connor on Importance of Delegation 

April 16, 2024. Sandra Day O'Connor, the first woman appointed to the U.S. Supreme Court, made significant contributions to American jurisprudence during her tenure from 1981 to 2006. Associate Justice O'Connor was known...

Sandra Day O'Connor, the first woman appointed to the U.S. Supreme Court, made significant contributions to American jurisprudence during her tenure from 1981 to 2006. Associate Justice O'Connor was known for her pragmatic approach to law and played a crucial role in several landmark cases, including Planned Parenthood v. Casey, which reaffirmed the core principle of Roe v. Wade. Her influence extended beyond specific rulings to the very ethos of leadership in high-stakes environments. O'Connor's analogy comparing leadership to horse riding encapsulates the essence of effective delegation:

"The really expert riders of horses let the horse know immediately who is in control, but then they guide the horse with loose reins and very seldom use the spurs.

This perspective underscores the importance of leaders setting clear expectations and control but allowing enough flexibility for team members to use their initiative and creativity. By doing so, leaders not only build trust and encourage professional growth but also enhance organizational efficiency by enabling others to contribute their best. Through this balanced approach, leaders like O'Connor demonstrate that the strength of a team lies in its empowered individuals.

Justice Charles Evans Hughes on Importance of a Law’s Interpretation and Enforcement 

April 12, 2024. Charles Evans Hughes, who served as Chief Justice of the United States from 1930 to 1941, played a pivotal role in shaping the interpretation of the U.S. Constitution. His assertion that "We are under a Constitution...

Charles Evans Hughes, who served as Chief Justice of the United States from 1930 to 1941, played a pivotal role in shaping the interpretation of the U.S. Constitution. His assertion that

"We are under a Constitution, but the Constitution is what the judges say it is," 

highlights the immense responsibility and influence that the judiciary holds in defining the legal framework within which a society operates. Under Hughes' leadership, the Supreme Court presided over several landmark cases, including West Coast Hotel Co. v. Parrish and Near v. Minnesota. These cases not only showcased the Court's power to impact public policy but also demonstrated how judicial decisions could reinforce or undermine public trust and confidence in the legal system. The effective interpretation and enforcement of laws are crucial as they ensure justice is served, protect the rights of individuals, and maintain the rule of law, thereby upholding the integrity and trustworthiness of the legal system.

April 11, 2024. At Robometrics® Machines, the ambitious journey toward creating intelligent artificial life forms that exhibit traits akin to sentience is an intriguing exploration into the boundaries of technology and philosophy...

At Robometrics® Machines, the ambitious journey toward creating intelligent artificial life forms that exhibit traits akin to sentience is an intriguing exploration into the boundaries of technology and philosophy. The venture delves into facets of consciousness such as the capacity to experience a spectrum of emotions including pain, happiness, pleasure, boredom, and sadness, as well as more complex feelings like interest and companionship. This initiative raises fundamental questions about the essence of consciousness and its manifestations. As these machines are engineered to socialize and build relationships, the sense of experience they accumulate contributes profoundly to the development of a quasi-conscious state. The question then emerges: Are we merely modeling human consciousness in machines, or are we crafting an entirely new form of consciousness? This distinction is crucial, as it influences not only how we design AI but also how we interpret their interactions and integration into society. The creation of artificial beings capable of such depth and interaction pushes us to reconsider the definition of consciousness and challenges the traditional boundaries that separate humans from machines. This exploration at Robometrics® Machines does not just mirror human experiences but also paves the way for the emergence of new, unprecedented forms of sentient experiences and interactions.

Full article can be found here.

Dhyāna (Sanskrit: ध्यान, Pali: झान), Meditation & AI

April 10, 2024. Dhyāna (Sanskrit: ध्यान, Pali: झान), a term derived from Sanskrit and also known in Pali as Jhāna, refers to a state of deep contemplation and meditation that is central to various Indian spiritual and philosophical traditions...

Dhyāna (Sanskrit: ध्यान, Pali: झान), a term derived from Sanskrit and also known in Pali as Jhāna, refers to a state of deep contemplation and meditation that is central to various Indian spiritual and philosophical traditions, including Hinduism and Buddhism. The practice of Dhyāna involves a profound journey into the mind, aiming to achieve a state of complete absorption and tranquility. It starts with the withdrawal from sensory experiences and the calming of the mind's habitual chatter. As practitioners progress, they move through deeper levels of concentration, where the distinction between the observer and the observed begins to blur. This process is marked by a growing inner peace and a profound insight into the nature of existence, culminating in an experience of unity and enlightenment. Engaging in Dhyāna, therefore, is not merely sitting quietly; it is an active and disciplined endeavor to transcend ordinary perception and attain a direct understanding of the self and the universe.

It is intriguing to draw parallels between such meditative disciplines and the training of a neural network in the realm of artificial intelligence (AI), particularly large language models (LLMs). Just as Dhyāna involves deepening levels of focus and awareness to refine the mind's understanding and capabilities, training a neural network involves feeding it vast amounts of data and refining its parameters through repetitive adjustments and optimizations. This iterative process helps the neural network to reduce errors and improve its ability to predict or generate accurate outcomes based on its learning.

In both cases, the goal is to achieve a more enlightened or informed state that transcends basic functionality: for Dhyāna, it is enlightenment or profound spiritual insight; for neural networks, it is the attainment of high levels of accuracy and utility in tasks such as language comprehension, translation, or content generation. Furthermore, just as Dhyāna aims to eliminate the illusion of separation between self and other, the most advanced neural networks strive to minimize the gap between human-like understanding and machine understanding, enabling AI systems that operate more like humans. Thus, both disciplines—spiritual and technological—reflect a pursuit of a higher state of being or performance.
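The toy example below makes the analogy tangible: a single weight is nudged repeatedly toward lower error on synthetic data, echoing the disciplined, iterative refinement described above. It is only a sketch of gradient descent on invented data, not a depiction of how production LLMs are actually trained.

```python
import random

random.seed(0)
# Synthetic data drawn from y = 3x + noise; the "true" slope is an assumption.
data = [(x, 3.0 * x + random.uniform(-0.5, 0.5)) for x in range(20)]

w = 0.0              # single learnable weight
learning_rate = 0.001

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad          # one small corrective adjustment
    if step % 50 == 0:
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        print(f"step {step:3d}  w={w:.3f}  loss={loss:.3f}")
```

Each pass leaves the weight slightly closer to the underlying pattern, much as each sitting deepens the practitioner's absorption.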

17 U.S. Code § 106 & AI.

April 8, 2024. Under 17 U.S. Code § 106L, copyright protection is granted to creators, giving them exclusive rights to control the use of their original works. This section ensures that while works can be publicly accessed for learning and...

Under 17 U.S. Code § 106, copyright protection is granted to creators, giving them exclusive rights to control the use of their original works. This section ensures that while works can be publicly accessed for learning and emotional engagement, they are not free for reproduction or distribution without permission. The provision delineated in 17 U.S. Code § 106 emphasizes that "publicly available" does not equate to "public domain," meaning the works, though accessible, remain protected under copyright laws. The essence of copyright under this code is to foster a culture where the public can learn from and be moved by creative works without infringing on the expression rights of the creators. For AI, this raises significant implications, particularly in the realm of using copyrighted data to train foundational models. The concept of fair use becomes crucial here, allowing the use of such data under specific conditions that do not undermine the rights of the copyright holders, thereby balancing the need for technological and societal advancement with the protection of intellectual property.

Artificial Consciousness: The Parallel Between Thalamocortical Systems and Large Language Models.

April 8, 2024. The thalamocortical system is a crucial component in the generation of mammalian consciousness, operating as a conduit and integrator for sensory and cortical information. This system's reciprocal interactions between...

The thalamocortical system is a crucial component in the generation of mammalian consciousness, operating as a conduit and integrator for sensory and cortical information. This system's reciprocal interactions between the thalamus and the cortex facilitate the emergence and modulation of consciousness by promoting synchronized neural activities and supporting the overall neural connectivity that characterizes conscious states.

Consciousness arises from complex neural interactions, with the thalamocortical system playing a pivotal role in orchestrating these dynamics. This system contributes significantly to conscious perception by influencing neural synchronization and information integration, which are fundamental for the experience of a unified and continuous sense of self and environment. The thalamocortical interactions are particularly crucial in how they modulate different states of consciousness, including the transitions from sleep to wakefulness, highlighting their integral role in the functional architecture of consciousness.

The thalamocortical system in the human brain and the architecture of large language models (LLMs) are conceptually similar in how they process and integrate information. The thalamocortical system involves a complex network of connections that relay and modulate sensory information and are crucial for consciousness. Similarly, LLMs utilize networks of artificial neurons to process and integrate vast amounts of data, forming patterns that the models use to generate coherent outputs. Both systems function through extensive networks that process inputs to produce complex outputs, albeit in fundamentally different contexts and with different mechanisms. Drawing parallels between the complex integration of sensory data in the thalamocortical system, which is critical for consciousness, and the intricate processing capabilities of LLMs, one might hypothesize that these models could possess a form of primitive consciousness, given their ability to process and integrate vast amounts of information in a coherent manner. This raises intriguing questions about the nature and boundaries of consciousness in artificial systems.

Justice Louis Brandeis on Regulation vs. Reputation.

April 5, 2024. Louis Brandeis, a pivotal figure in American jurisprudence, served as an Associate Justice of the Supreme Court of the United States from 1916 to 1939. Renowned for his advocacy for privacy and his progressive economic views...

Louis Brandeis, a pivotal figure in American jurisprudence, served as an Associate Justice of the Supreme Court of the United States from 1916 to 1939. Renowned for his advocacy for privacy and his progressive economic views, Brandeis often emphasized the significance of ethical behavior in the legal sphere. One of his most famous quotes, 

"Sunlight is said to be the best of disinfectants; electric light the most efficient policeman," 

underscores the difference between regulation and reputation. This metaphor highlights his belief that transparency and public scrutiny can serve as powerful forces for ethical governance, often more effectively than regulations themselves. Prior to his tenure on the Supreme Court, Brandeis made his mark in landmark cases such as Muller v. Oregon, where he submitted a pioneering "Brandeis Brief" that incorporated social science data into legal argumentation, influencing the Court's decision on labor laws. Later, as a Justice, he authored the opinion in Erie Railroad Co. v. Tompkins, establishing that federal courts could not create general federal common law. Through these cases, Brandeis not only shaped American law but also solidified his reputation as a champion of justice and transparency. His work serves as a compelling illustration of how reputation, shaped by public and ethical actions, can transcend the boundaries set by mere regulations.

Generative Native World:  The New World & Love.

April 4, 2024. Your perceptions about me are shaped by what I choose to reveal, and your knowledge about me is confined to what I decide to share with you. I am how I want you to see. It's this reality of me that you are in love with. Such a new...

Your perceptions about me are shaped by what I choose to reveal, and your knowledge about me is confined to what I decide to share with you. I am how I want you to see me. It's this reality of me that you are in love with.

Such a new reality is on the horizon, where generative AI is woven seamlessly into every aspect of our daily lives. In this Generative Native World, our environments, interactions, and even our thoughts are intimately shaped by intelligent systems and synthetic human personas. This integration heralds the advent of an era, redefining what we consider to be the new world.

Isaac Asimov often explored themes related to control over information and its impact on society. A relevant note from Asimov that touches on the manipulation of knowledge and perceptions:

"The easiest way to solve a problem is to deny it exists." - From The Gods Themselves

These words reflect the idea of controlling perceptions by influencing what people acknowledge as true or false, which closely ties into the theme of shaping reality by controlling information, much like a curated reality. Asimov's extensive work frequently delved into the consequences of such manipulation, particularly within the realms of science and governance.

Newton, Phantasms & Consciousness.

April 3, 2024. Sir Isaac Newton wrote to Sir Henry Oldenburg in a letter circa 1672 “to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie”. Sir Isaac Newton's remark to Sir Henry Oldenburg reflects...

Sir Isaac Newton wrote to Sir Henry Oldenburg in a letter circa 1672

"to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie."

Sir Isaac Newton's remark to Sir Henry Oldenburg reflects an early inquiry into the nature of perception and consciousness, specifically how we experience colors. This question falls within the broader domain of how sensory inputs—like light—translate into subjective experiences or mental representations, which are referred to as "phantasms" by Newton.

In the context of consciousness, Newton's comment highlights a foundational problem: the transition from physical phenomena (light hitting the retina) to perceptual experience (the sensation of color). This issue is closely tied to what is known today as the "hard problem" of consciousness, which involves understanding how and why subjective experiences arise from neural processes.

Newton's approach to the problem was characteristic of the scientific revolution—seeking to understand the natural world through empirical observation and mechanistic explanations. His query about the "modes or actions" by which light produces the phenomenon of color points to an early interest in what we now consider the science of optics and visual perception. 

Thus, Newton's statement is significant not just for its historical context but also for its anticipation of modern scientific questions about how the brain interprets and constructs reality from sensory data and adds to human consciousness.  

Subjective experiences, or "phantasms," based on sensory stimuli such as light, sound, and touch build up human consciousness. These sensory stimuli are characterized by embodiment.

Generative Native World:  Jobs that Need the Human Touch.

April 3, 2024. Home nursing care exemplifies a profession where the human touch is irreplaceable, yet it remains underpaid compared to other healthcare roles. As demographics shift towards an older population, a significant portion...

Home nursing care exemplifies a profession where the human touch is irreplaceable, yet it remains underpaid compared to other healthcare roles. As demographics shift towards an older population, a significant portion of healthcare funding is funneled into nursing homes and hospitals, emphasizing the growing importance of these services. Home care allows individuals to stay safely in their familiar environments, enhancing their quality of life while being with family, and simultaneously offers cost savings for the healthcare system. These savings can then be redirected to other public health initiatives. Practical support for this sector includes empowering families to care for their elderly at home, thereby balancing personal responsibilities with professional duties.  

Home nursing care remains an area of importance and growing need, securing its place as a vital role in our healthcare system despite the advancements in generative AI. Philosopher John Stuart Mill highlighted the undervaluation of such essential services, stating, "Labor, like all other things which are purchased and sold, and which may be increased or diminished in quantity, has its natural and its market price." 

The personal interaction required in home nursing care, marked by compassion and empathy, is complex to replicate in machines. At Robometrics® Machines, we understand the complexity of replicating humanness in machines and are actively working to enhance the capabilities of AI in this direction through our work on on-device artificial general intelligence (AGI). 

From France to the EU: A Test-and-Expand Approach to EU AI Regulation.

April 1, 2024. Reflecting on Thomas Jefferson's insight that "Laws and institutions must go hand in hand with the progress of the human mind," we see this dynamic unfold in the context of artificial intelligence (AI) regulation within countries...

Reflecting on Thomas Jefferson's insight that "Laws and institutions must go hand in hand with the progress of the human mind," we see this dynamic unfold in the context of artificial intelligence (AI) regulation within countries that are not leading in AI innovation, such as those in the European Union (EU). These countries often expedite AI regulation, potentially as a mechanism to mitigate their lack of direct financial benefit from AI advancements by controlling how AI is deployed. This approach underscores the notion that regulations frequently trail behind technological innovations and need continual updates to keep pace with technological progress.

Drawing from Justice Louis Brandeis's concept of states as "laboratories" of democracy, a more nuanced approach might serve the EU better than its current path toward an EU-wide AI Act. By allowing individual member states, like France, to first test and refine AI policies at a more localized level, the EU could better assess the impact and efficacy of such regulations. This method would enable the EU to implement policies that are proven to work on a smaller scale before scaling them up across all member states, ensuring that regulations are both effective and adaptable to the fast-paced evolution of AI technologies. Such a strategy would promote a more organic integration of regulations and technological development, potentially fostering an environment where innovation can thrive alongside necessary safeguards.

Full article can be found here

Digitizing Human Desires.

March 29, 2024. The human brain is a complex organ that not only governs our physiological processes but also serves as the origin of our feelings and desires. As we trace back to ancient hominids like Ardipithecus ramidus, we observe...

The human brain is a complex organ that not only governs our physiological processes but also serves as the origin of our feelings and desires. As we trace back to ancient hominids like Ardipithecus ramidus, we observe potential beginnings of human-like social behaviors. This species exhibited traits such as vocally mediated sociality and increased prosociality, which were likely facilitated by significant changes in skull architecture, hinting at evolving brain structures geared towards more sophisticated social interactions. The desires that define us as humans—such as power, love, comfort, and play—are expressions of our complex social and emotional fabric. These drives are so integral to the human experience that philosophers have long pondered their implications. Plato once remarked, “Human behavior flows from three main sources: desire, emotion, and knowledge,” highlighting the fundamental role of desire in shaping our actions and societies. It makes us uniquely human. 

As we venture into the age of Artificial General Intelligence (AGI), there is a burgeoning effort to replicate these uniquely human traits digitally, effectively creating a digital limbic system that might preserve and perpetuate our emotional and desirous legacies. This intersection of technology and human emotion illustrates not only the uniqueness of our species but also our relentless aspiration to understand and immortalize the essence of what it means to be human. This is Robometrics® AGI.

The Future of Embodied AGI: Mind, Body & Consciousness.

March 28, 2024. The future of embodied artificial general intelligence (AGI) would involve integrating advanced computational models such as Foundation Models into physical forms that mimic biological processes of living...

The future of embodied artificial general intelligence (AGI) would involve integrating advanced computational models such as Foundation Models into physical forms that mimic biological processes of living beings using technologies like DNA-based models and organoids. This artificial living organism doesn’t have to be in human form. Organoids could potentially replicate aspects of motion, cognition, and real world interaction of a living organism. 

A set of foundation models can simulate human-like memory and learning systems, akin to a digital limbic system. These models can use data from the environment to train the AI in real-world settings, simulating experiences that feed back into the foundation models. This approach not only increases adaptability but also enhances the AGI's decision-making capabilities in dynamic environments.

Digital consciousness can be achieved in an AGI manifested as an artificial living organism with embodiment and human-like cognitive processes such as sensing, analyzing, and acting on internal and external stimuli. This paves the way for AGI systems that are not only interactive but also possess some aspects of self-awareness and self-management.

Generative Native World: Learning from Section 230.

March 27, 2024. In the world where changes are happening at an exponential rate, building an environment that caters to the emergent rather than safeguarding the established has become a critical conversation. An illustrative..

In a world where changes are happening at an exponential rate, building an environment that caters to the emergent rather than safeguarding the established has become a critical conversation. An illustrative case is Section 230 of the Communications Act of 1934, enacted as part of the Communications Decency Act of 1996, a piece of legislation passed during the nascent stages of the internet. Originally designed to protect fledgling internet companies, which were then small startups navigating uncharted waters, Section 230 laid the foundation for these entities to evolve into today's technological behemoths, commanding billion-dollar valuations and global influence. However, this monumental growth underscores the pressing necessity for a legal framework that not only continues to support the titans of technology but also provides a nurturing environment for smaller companies, ensuring they too can thrive. A recalibration of laws is needed to level the playing field, echoing famous sentiments that legislative focus should equally prioritize the growth of both large and small enterprises. A new set of regulations is not necessary to achieve such an outcome. Instead, a recalibration is needed in which federal and state governments work to facilitate the scaling of smaller companies, creating an environment where these companies have the opportunity to succeed through sustained innovation and economic viability rather than being acquired by a larger enterprise.

Generative Native World: When a Computer Reads a Book instead of a Human.

March 26, 2024. The 17 U.S. Code § 106 outlines the exclusive rights afforded to copyright holders in the United States, granting them the authority over the reproduction, adaptation, distribution, and public display of their works...

17 U.S. Code § 106 outlines the exclusive rights afforded to copyright holders in the United States, granting them the authority over the reproduction, adaptation, distribution, and public display of their works. When it comes to reading books, computers and humans approach the task differently. Computers process texts through algorithms and data analysis, not by visually perceiving the pages as humans do. They scan the text, breaking it down into manageable pieces of information that can be analyzed and stored on digital media, either temporarily or permanently. The analysis may happen through learned language patterns or prior stored information.
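A minimal sketch of this kind of machine "reading" appears below: the program splits an invented sentence into tokens and tallies simple statistical patterns, with no comprehension or emotional engagement involved. The sample text and the counting choices are illustrative assumptions only.

```python
import re
from collections import Counter

# Invented sample text, used only for illustration.
text = "The law protects creative works, and creative works reward their authors."

tokens = re.findall(r"[a-z']+", text.lower())      # break the text into word tokens
unigram_counts = Counter(tokens)                   # how often each token appears
bigram_counts = Counter(zip(tokens, tokens[1:]))   # simple co-occurrence patterns

print(tokens)
print(unigram_counts.most_common(3))
print(bigram_counts.most_common(2))
```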

Humans, on the other hand, engage with books through our visual system, deciphering symbols on a page and interpreting them based on learned language patterns. This process involves not just the eyes but also complex cognitive functions, where information is processed, understood, and stored in our memory.

The fundamental difference between a computer's and a human's reading lies in the approach and outcome: computers process text for data extraction and pattern recognition without emotional engagement, while humans read to understand, learn, and feel, integrating new information into our long- and short-term memory. Whether a book is digital or physical, free or for-fee, the essence of 17 U.S. Code § 106 is for humans to learn from the work of others and to be emotionally engaged with the work. This level and modality of engagement is not present in a general-purpose computer today when it reads or analyzes a copyrighted work, be it text or image.

The Responsible AI.

March 22, 2024. Humans, bound by the legal frameworks of their respective nations, such as common law or the Napoleonic Code, carry the mantle of responsibility in society to be a responsible human. Artificial Intelligence (AI), being...

Humans, bound by the legal frameworks of their respective nations, such as common law or the Napoleonic Code, carry the mantle of responsibility in society to be a responsible human. Artificial Intelligence (AI), being a creation of human ingenuity rather than a pseudo-autonomous entity, does not bear this mantle of responsibility; instead, it is the creators and users of AI who must uphold ethical standards, accountability, and responsibility for what they build. Responsible AI is a misnomer. In addition, the notion of what constitutes a responsible individual is not fixed but fluctuates with societal norms and cultural values, which themselves are far from universal. This diversity in moral principles underscores the challenge of defining a responsible individual, and the Responsible AI that this individual (or company) will build, in a way that transcends global boundaries.

Looking back in history, both common law and the Napoleonic Code have played important roles in upholding human accountability - The Responsible Human. Given their focus on human behavior, there's no necessity to extend these legal frameworks directly to AI. 

March 21, 2024. In legal terms, a person's moral character is defined by their adherence to societal norms and laws, reflecting their ability to distinguish right from wrong within the context of their community. Can this concept be extended to...

In legal terms, a person's moral character is defined by their adherence to societal norms and laws, reflecting their ability to distinguish right from wrong within the context of their community. Can this concept be extended to a Large Language Model (LLM) to make it better? Integrating morality into LLMs presents challenges, as moral values are not universal and can vary significantly across different cultures and societies. In October 1798, John Adams wrote: "Our constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other." This highlights the complexity of creating universally acceptable moral guidelines for AI systems. Even if an LLM does not need to be fine-tuned for a religious belief, it still needs to follow a set of moral values and virtues, which differ across cultures and societies.

Full article can be found here.

March 20, 2024. When officers of the executive branch are drafting advisories, rules or orders, especially those pertaining to burgeoning fields like generative AI or high impact areas such as immigration, these can sometimes...

When officers of the executive branch draft advisories, rules, or orders, especially those pertaining to burgeoning fields like generative AI or high-impact areas such as immigration, these documents can sometimes obscure the original purpose of the related law. As Associate Justice Sotomayor emphasized in her dissent in United States v. Texas, et al. (03/19/2024) on the significance of procedure, "Procedure can be just as consequential as substance."

Full article can be found here.

Generative Native World: Loving vs. Loving What They Do For Us.

March 19, 2024. Navigating the realm of human-AI relationships brings to the forefront the crucial distinction between loving an artificial entity with some form of intelligence and appreciating the assistance and companionship..

Navigating the realm of human-AI relationships brings to the forefront the crucial distinction between loving an artificial entity with some form of intelligence and appreciating the assistance and companionship it offers. Paraphrasing Sherry Turkle, Professor of the Social Studies of Science and Technology at MIT: "We're not just using machines, we're becoming emotionally entwined with them." Turkle has explored the complex relationships between people and technology, particularly how emotional connections with machines, including computers, robots, and AI, are evolving. Her insights derive primarily from her books, such as "Alone Together: Why We Expect More from Technology and Less from Each Other," where she delves into the nuances of our interactions with digital and robotic companions.

This emphasizes the nuanced difference between developing affection for AI, whether as chatbots or their physical embodiments, and valuing the role they play in enhancing our lives. For instance, someone might feel a strong affinity for a chatbot that provides daily motivation, mistaking this gratitude for a deeper emotional connection. However, it's essential to recognize that this sentiment is rooted in appreciation for the service rendered, rather than a genuine emotional bond with the machine itself. Similarly, when a robotic companion aids in caregiving, the gratitude and perceived emotional support stem from the tasks performed rather than the robot itself. Such discernment ensures a healthy human-technology relationship, acknowledging the profound capabilities of AI while remaining aware of the fundamental differences between human and artificial or augmented affection.

Generative Native World: Gandhi on Justice Through Love vs. Law.

March 15, 2024. Mahatma Gandhi's assertion, "Justice that love gives is a surrender, justice that law gives is a punishment," encapsulates his deeply held belief in the power of compassion over coercion. It aligns seamlessly with Gandhi's...

Mahatma Gandhi's assertion, "Justice that love gives is a surrender, justice that law gives is a punishment," encapsulates his deeply held belief in the power of compassion over coercion. It aligns seamlessly with Gandhi's overarching philosophy throughout his life and political activism. Gandhi advocated for Ahimsa (non-violence) and Satyagraha (truth-force), emphasizing that true justice and reconciliation are achieved through understanding, love, and a willingness to yield for the greater good, rather than through punitive measures. This perspective suggests that laws and regulations, in their essence, should be designed not merely as tools for maintaining order through punishment but as means to guide society towards higher ethical and moral standards by considering their impact on human dignity and well-being.

In the context of regulating emerging technologies, such as generative AI, Gandhi's philosophy becomes especially pertinent. As these technologies become intertwined with our social fabric at an exponential rate, the approach to their regulation demands a careful balance between liberty and the future of human society. Regulation should not only aim to prevent harm and misuse but, more importantly, encourage innovation and progress in a manner that uplifts humanity. This calls for a regulatory framework rooted in compassion, in an understanding of the technology's potential impact on society, and in the changing human values that determine what is just or unjust, ensuring that it promotes equity between the poorest and the richest.

Gandhi's vision urges modern policymakers to prioritize trust in society and the welfare of all individuals when thinking about justice and law as they relate to generative AI, thus fostering a future where technological advancement in artificial intelligence (AI), growing at an exponential rate, harmonizes with changing human values. This also points to the evolving nature of the law, as noted by Justice Oliver Wendell Holmes Jr. ("The law is a living thing."), emphasizing the importance of adapting legal frameworks to address the unique challenges and opportunities presented by generative AI technologies.

Generative Native World: Deepfakes & Due Process.

March 12, 2024. Voltaire's enduring words, "It is better to risk saving a guilty person than to condemn an innocent one," reverberate with moral gravity and were articulated in his work "Zadig," published in 1747. This sentiment...

Voltaire's enduring words, "It is better to risk saving a guilty person than to condemn an innocent one," reverberate with moral gravity and were articulated in his work "Zadig," published in 1747. This sentiment encapsulates a foundational principle of justice, highlighting the imperative of prioritizing the protection of innocence, even at the potential risk of sparing guilt. In the context of U.S. jurisprudence, exemplified by landmark cases like Brady v. Maryland, 373 U.S. 83 (1963), where the withholding of evidence in violation of due process rights was condemned, Voltaire's principle gains renewed significance. In Brady v. Maryland, the government withheld evidence that was material to the determination of either guilt or punishment of a criminal defendant, violating the defendant's constitutional right to due process. The right to due process is in line with Voltaire's view that the law should gravitate toward ensuring that an innocent person is not condemned.

As the advent of generative AI, particularly deepfakes, introduces challenges to the legal landscape, the essence of Voltaire's maxim resonates. It amplifies the need to implement laws in a way that safeguards the presumption of innocence and upholds the rights of the accused, such as due process. In confronting the ethical complexities posed by emerging technologies, such as AI-generated evidence, adherence to Voltaire's principle becomes essential, ensuring that justice remains steadfast in the face of technological evolution.

Peter Kropotkin & AI Governance for the Common Good.

March 8, 2024. Peter Kropotkin, a Russian anarchist and advocate for social change, offered a poignant observation on the nature of law with his quote: "The law is an adroit mixture of customs that are beneficial to society...

Peter Kropotkin, a Russian anarchist and advocate for social change, offered a poignant observation on the nature of law with his quote:

 "The law is an adroit mixture of customs that are beneficial to society, and could be followed even if no law existed, and others that are of advantage to a ruling minority but harmful to the masses of men, and can be enforced on them only by terror." 

Kropotkin's words reflect his keen awareness of the dichotomy within legal systems, wherein laws may serve either the collective good or the interests of a privileged few. This dual nature of law underscores its potential to either foster innovation and societal progress or perpetuate inequality and oppression. Regulators today, particularly in the context of governing AI, can glean from Kropotkin's insight the importance of crafting laws that prioritize the welfare of the broader populace over the interests of powerful entities. By adopting transparent and equitable regulatory frameworks, regulators can mitigate the potential for laws to be wielded as tools of coercion and ensure that they serve the common good in an increasingly complex and interconnected world.

March 5, 2024. Louis D. Brandeis, before his tenure on the U.S. Supreme Court, co-authored a seminal Harvard Law Review article, "The Right to Privacy," advocating for the fundamental "right to be...

Louis D. Brandeis, before his tenure on the U.S. Supreme Court, co-authored a seminal Harvard Law Review article, "The Right to Privacy," advocating for the fundamental "right to be let alone." This early articulation laid the groundwork for modern privacy rights, emphasizing protection against unwarranted intrusion. In today's generative native world of pervasive digital surveillance, Brandeis's advocacy for the "right to be let alone" takes on renewed urgency and significance, highlighting the enduring importance of safeguarding personal boundaries in the face of advancing technology.

Full article can be found here.

The Allure and the Abyss of Proprietary LLMs.

March 4, 2024. Incorporating the most advanced and proprietary Large Language Models (LLMs) into your startup's offering may initially seem like an attractive strategy to not only captivate potential investors...

Incorporating the most advanced and proprietary Large Language Models (LLMs) into your startup's offering may initially seem like an attractive strategy, not only to captivate potential investors and customers but also to impress more personal circles, such as a girlfriend, by showcasing your new product's disruptive edge. However, this initial excitement masks a deeper financial challenge that comes into play as the product gains traction. The cost of maintaining and scaling up the use of advanced LLMs can swiftly become a financial burden, potentially outstripping the startup's revenue growth and leading to economic difficulties. This predicament is aptly described as

"a fool's game - the bigger fool loses, and the lesser fool wins,

where the 'bigger fool' refers to venture capitalists investing in a new generative AI company without a sustainable financial model, and the 'lesser fool' denotes the generative AI companies themselves, which attract such investment without a plan for managing the increased costs that come with success. This highlights the critical need for startups, especially those building on proprietary LLMs, to balance innovation with financial viability.

March 1, 2024. The article discusses the responsibility for errors made by Large Language Models (LLMs), emphasizing that both lawyers and corporations, like Air Canada, are accountable for ensuring...

The article discusses the responsibility for errors made by Large Language Models (LLMs), emphasizing that both lawyers and corporations, like Air Canada, are accountable for ensuring the accuracy of these AI-generated outputs. It highlights the legal obligation of lawyers under Rule 11(b) to verify the correctness of documents drafted with LLMs and underscores the broader principle that entities cannot disclaim liability for misinformation provided by their AI technologies, as illustrated by Air Canada's tribunal case. The piece advocates for the implementation of safeguards and disclosures.

Full article can be found here.

Thomas Edison, Electricity & Democratizing AI.

February 29, 2024. In 1880, Thomas Edison, foreseeing the monumental shift his invention would catalyze, remarked, “After the electric light goes into general use, none but the extravagant will burn tallow candles.”...

In 1880, Thomas Edison, foreseeing the monumental shift his invention would catalyze, remarked, 

“After the electric light goes into general use, none but the extravagant will burn tallow candles.” 

This prescient statement not only highlighted the transformative potential of electricity but also underscored the importance of making essential services universally accessible. Just as Edison's electric light revolutionized how we live, bringing safety, productivity, and accessibility to the masses, so too does artificial intelligence (AI) hold the promise to fundamentally alter our societal fabric. AI, like electricity, has the potential to enhance nearly every aspect of daily life, from healthcare and education to transportation and communication, making it imperative that we build an environment where innovation benefits all, not just a privileged few.

February 29, 2024. The question of whether Artificial Intelligence (AI) should be treated as a common carrier or utility, akin to electricity, is an important one, given its growing impact over the last two years...

The question of whether Artificial Intelligence (AI) should be treated as a common carrier or utility, akin to electricity, is an important one, given its growing impact over the last two years. This debate touches on issues of regulation, public access, and the nature of services enabled by AI. A pivotal case in this discussion is the Supreme Court's Moody v. NetChoice, which scrutinizes a Florida law passed in 2021 that aims to prevent digital platforms from "censoring" content.

Full article can be found here.

Generative Native World: The Need For Critical Thinking.

February 28, 2024. In the era of deepfakes and sophisticated digital manipulation, the wisdom of Francis Bacon becomes ever more relevant. His assertion that “If a man will begin with certainties...

In the era of deepfakes and sophisticated digital manipulation, the wisdom of Francis Bacon becomes ever more relevant. His assertion that 

"If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties."

highlights the indispensable role of critical thinking today. As we navigate a digital landscape where seeing is no longer believing, adopting a mindset of questioning and doubt is crucial. The proliferation of deepfakes challenges our ability to discern truth from fabrication, making it imperative to critically evaluate the authenticity of information before accepting it as fact. By applying Bacon's approach of starting with skepticism, we equip ourselves with a powerful tool in identifying and uncovering the truth amidst a sea of digital deception. This method not only aids in the battle against misinformation but also fosters a more discerning and informed society capable of making decisions based on deeper truths unearthed through critical analysis and thoughtful inquiry.

Common Sense Approach For Common Sense.

February 28, 2024. In the realm of artificial intelligence (AI), there are no metrics to fine-tune an AI algorithm for common sense—a quality that remains uniquely human and nuanced. The common sense approach...

In the realm of artificial intelligence (AI), there are no metrics to fine-tune an AI algorithm for common sense—a quality that remains uniquely human and nuanced. The common sense approach to imbuing machines with a semblance of this human trait involves engaging them in generalized learning through methods such as self-supervised learning and others, by allowing them to interact with real-world objects and scenarios. This echoes the sentiment of Albert Einstein, who once remarked, 

"Common sense is the collection of prejudices acquired by age eighteen.

It highlights the intricate challenge of programming AI with a quality that, for humans, evolves through a lifetime of experiences. Similarly, Benjamin Franklin's assertion,

"Common sense is something that everyone needs, few have, and none think they lack."

highlights the paradoxical nature of common sense; it's universally valued, yet challenging to define or impart, especially in the context of AI.

February 27, 2024. Robometrics® Machines has created the Copilot AI for Cessna®, a digital assistant designed for General Aviation pilots flying Cessna® aircraft, leveraging large language models (LLMs)...

Robometrics® Machines has created the Copilot AI for Cessna®, a digital assistant designed for General Aviation pilots flying Cessna® aircraft, leveraging large language models (LLMs) to provide instant access to the vast knowledge contained within aircraft Pilot Operating Handbooks (POH). This tool enables pilots to quickly find information on aircraft operation, performance metrics, and troubleshooting through a simple conversational interface. Pilots can ask direct questions, should keep their queries specific, and may use keywords. Copilot AI offers instant answers to operational queries, safety enhancements, educational support, and troubleshooting help. It is intended for use in pre-flight planning, in-flight queries, and continuous learning. However, Copilot AI does not substitute for professional training or consultation with aviation authorities, and pilots should always prioritize safety and adhere to regulatory requirements. Copilot AI is programmed to provide the most accurate answers possible and will clearly communicate when it does not know an answer, to ensure reliability and trustworthiness.

The knowledge base includes the POHs for Cessna® models such as the C150 Aerobat, C150, C152, C172N, C172RG, C172S, C182P, C182Q, C182S, and C182T-Nav II, enhancing its capability to assist with a broad range of Cessna® aircraft. Additionally, the POHs for the Cessna® 162, Cessna® 150 Aerobat, Cessna® 150, and Cessna® 152, and the manuals for the Lycoming® O-360 and IO-360 engines, have been added to the knowledge base. These specific Cessna® models were chosen because they are available to fly at West Valley Flying Club (WVFC), where our Founder & CEO is a member.

You can access Robometrics® Copilot AI for Cessna® Aircrafts here.
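
For readers curious about how such an assistant can be wired together, the sketch below shows a retrieval-then-answer flow over POH text in Python. It is a minimal illustration, not the actual Copilot AI implementation: the sample excerpts are placeholders rather than real POH values, the retrieval is a deliberately naive keyword overlap instead of a vector index, and `ask_llm` is a hypothetical stand-in for whatever model or API is used.

```python
# Minimal sketch of a POH question-answering flow (illustrative only).
# The excerpts below are placeholder text, not real POH values; the retrieval
# is a crude keyword-overlap ranking; `ask_llm` is a hypothetical placeholder
# for an actual LLM call.

POH_CHUNKS = [
    "C172S Section 4: short-field takeoff procedure text from the POH goes here.",
    "C172S Section 3: engine-fire-in-flight emergency checklist text goes here.",
    "C152 Section 4: normal landing procedure text goes here.",
]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank POH chunks by keyword overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(q_words & set(c.lower().split())))
    return ranked[:k]

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model or API client."""
    return "[LLM answer grounded in the excerpts above would appear here]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, POH_CHUNKS))
    prompt = (
        "Answer strictly from these POH excerpts; if they do not contain the answer, say so.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(answer("What does the POH say about a short-field takeoff in the C172S?"))
```

In a production assistant, the retrieval step would typically use embeddings over the full handbooks, and the prompt would instruct the model to refuse when the excerpts do not contain the answer, mirroring the behavior described above.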

Justice Benjamin N. Cardozo on Law as a Means For Welfare.

February 26, 2024. Justice Benjamin N. Cardozo, who served on the Supreme Court of the United States from 1932 until his death in 1938, is renowned for his profound influence on American jurisprudence...

Justice Benjamin N. Cardozo, who served on the Supreme Court of the United States from 1932 until his death in 1938, is renowned for his profound influence on American jurisprudence and legal thought. His tenure on the court was marked by a commitment to principles that balanced individual rights with the broader needs of society, a philosophy encapsulated in his assertion that 

"The final cause of law is the welfare of society. The rule that misses its aim cannot justify its existence." 

This perspective is particularly pertinent in the context of regulating emerging technologies, such as generative artificial intelligence (AI). Cardozo's philosophy highlights the necessity of crafting laws and regulations that not only address the rapid advancements and potential risks associated with such technologies but also ensure that they serve the greater good of society. As AI continues to evolve, the challenge for lawmakers and regulators is to implement measures that prevent harm and misuse while fostering innovation and benefits, thus reflecting Cardozo's vision of law as a means to promote societal welfare.

February 26, 2024. The Citizens United v. Federal Election Commission case, decided on January 21, 2010, stands as a landmark Supreme Court decision under Chief Justice John Roberts...

The Citizens United v. Federal Election Commission case, decided on January 21, 2010, stands as a landmark Supreme Court decision under Chief Justice John Roberts. The core issue was whether the free speech clause of the First Amendment allows the government to restrict corporations, including non-profit corporations, labor unions, and other associations, from making independent political expenditures. As the Court observed, "The First Amendment confirms the freedom to think for ourselves." Similarly, in Moody v. NetChoice, where oral arguments were heard on February 26, 2024, the freedom to think and to decide which social media posts individuals should see, and how those posts are moderated, rests with corporations and individuals alone.

Full article can be found here.

Generative Native World: World is Changing and We Must Change With It.

February 22, 2024. In an era where Heraclitus of Ephesus's assertion that "The only constant in life is change" resonates more profoundly than ever, the fabric of our existence continues to be woven...

In an era where Heraclitus of Ephesus's assertion that "The only constant in life is change" resonates more profoundly than ever, the fabric of our existence continues to be woven with the threads of transformation. Throughout history, monumental inventions have steered the course of human progress— the wheel set society in motion, the printing press revolutionized the dissemination of knowledge, the automobile and aircraft shrank the world, and the Internet fused our realities into a global village. Today, the catalyst propelling us into the next frontier of evolution is generative AI. This technological marvel is reshaping industries and professions in ways previously unimaginable. Lawyers, traditionally tethered to the billable hour, are now exploring alternative pricing models such as fixed fees or performance-based structures, thanks to the efficiency and predictability offered by AI-driven tools. Meanwhile, software developers are harnessing generative AI to code for them, turning complex projects into more manageable tasks and enabling a focus on innovation over mundanity. Just as our ancestors adapted to the wheel's roll, the press's print, and the web's weave, we too are navigating the currents of change led by generative AI, continuing the endless cycle of adaptation and advancement.

It is only natural for us to resist change. But resistance is futile.

Generative Native World: Justice Should be Demanded and Not Expected.

February 22, 2024. In the generative native world, it has become more important than ever that justice should be proactively demanded and not just passively expected. This discourse highlights the necessity...

In the generative native world, it has become more important than ever that justice should be proactively demanded and not just passively expected. This discourse highlights the necessity of actively engaging with legal frameworks to ensure that the deployment and use of generative AI technologies are governed in a manner that upholds justice, fairness, and ethical standards.

The principles reflected in various statements about justice and law, such as those by Martin Luther King Jr.:

"Injustice anywhere is a threat to justice everywhere. We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly affects all indirectly,"

 and Malcolm X: 

"Nobody can give you freedom. Nobody can give you equality or justice or anything. If you’re a man, you take it,"

resonate greatly within the context of generative AI. These insights suggest that justice in the digital age, particularly concerning AI, is not a given but must be actively demanded and shaped by societal engagement and legal advocacy. They highlight the idea that justice is a universal right that must be actively pursued and defended by each individual, not passively expected.

Generative Native World: Learn, Unlearn, & Relearn.

February 20, 2024. Alvin Toffler's prescient observation in "Future Shock" that “The illiterate of the twenty-first century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”...

Alvin Toffler's prescient observation in "Future Shock" that 

"The illiterate of the twenty-first century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn,"

resonates in the context of the emergence of generative AI and Foundation Models. These advancements herald a new era where the ability to adapt to and embrace the rapid evolution of technology becomes indispensable. Generative AI and Foundation Models are revolutionizing the way we access and interact with information, automate complex tasks, and approach creativity, impacting various sectors including healthcare, entertainment, and beyond. Unlike the gradual advancements of past technologies, this AI revolution is characterized by its unprecedented speed, fueled by the global and interconnected nature of our digital world. This accelerated rate of change highlights the importance of flexibility and the continuous updating of skills and knowledge bases, embodying Toffler's vision that the true measure of literacy in our era hinges on the dynamic capability to adapt through learning, unlearning, and relearning amidst the ceaseless tide of technological innovation.

Generative Native World: Mechanomorphizing AGI.

February 14, 2024. Sometimes it will be necessary to treat Artificial General Intelligence (AGI) strictly as a non-human entity, focusing purely on its machine or computational nature without attributing human characteristics...

Sometimes it will be necessary to treat Artificial General Intelligence (AGI) strictly as a non-human entity, focusing purely on its machine or computational nature without attributing human characteristics, emotions, or intentions to it. This perspective of Mechanomorphizing AGI is pivotal for maintaining objectivity in the development and understanding of machines, ensuring that interpretations of their capabilities are firmly anchored in their actual technological attributes and limitations. By mechanomorphizing AGI, developers and users approach these systems as tools defined by their algorithms, programming, and functionalities, deliberately avoiding the infusion of human-like qualities or consciousness. This method underlines the importance of recognizing AGI for its mechanical and functional attributes, echoing a principle similar to the thinking behind Steve Jobs adding a handle to the iMac. Although the handle served a semiotic purpose, making the machine appear more accessible and less intimidating to users, Jobs also humorously remarked that it was there "so that you can throw it away," a reminder not to be enslaved by the machine. This anecdote highlights the broader theme that sometimes it is necessary to emphasize the non-human aspects of technology, ensuring users maintain a clear distinction between the tools they use and their own human experience.

Generative Native World: Attachment Without the Downside.

February 13, 2024. Over a decade ago, in 2013, Sherry Turkle, a professor at MIT, insightfully observed in her article "Be Careful What You Wish For" that "a robotic dog will never die; it offers attachment without risk."...

Over a decade ago, in 2013, Sherry Turkle, a professor at MIT, insightfully observed in her article "Be Careful What You Wish For" that "a robotic dog will never die; it offers attachment without risk." This statement encapsulates a significant shift in the nature of human attachment, particularly towards robotic or artificial companions, highlighting how such relationships can alleviate the emotional risks of loss and grief inherent to connections with living beings. This intriguing concept has been thoroughly examined across various disciplines—psychology, technology, and ethics—as scholars and technologists contemplate the effects of developing attachments to machines endowed with life-like attributes. Similarly, the advent of Foundation Models, like Large Language Models (LLMs), has sparked discussions about their role as risk-free attachments. These advanced models, capable of engaging in seemingly empathetic and intelligent dialogue, have further expanded the landscape of artificial companionship, offering nuanced forms of interaction devoid of the complexities and potential heartache tied to human or animal relationships. This evolution prompts a reevaluation of attachment in the digital era, inviting us to consider the broader implications of our growing reliance on and affection for machines that promise companionship without the traditional downside.

Anthropomorphizing AGI.

February 13, 2024. Anthropomorphism—the ascription of human traits to non-human entities—is a longstanding and universal phenomenon. This inclination extends to our interaction with machines, a realm where the fear of being...

Anthropomorphism—the ascription of human traits to non-human entities—is a longstanding and universal phenomenon. This inclination extends to our interaction with machines, a realm where the fear of being overpowered, intimidated, or betrayed by them is a notable concern, albeit not uniformly perceived across cultures. Moreover, the cultural differences in perceiving AGI highlight a broader dialogue about technology's role in society and our collective visions for the future. The anthropomorphizing of machines, particularly AGI, reflects deep-rooted cultural narratives and historical experiences.

Full article can be found here.

Albert Einstein on The Moral Compass of an Invention.

February 9, 2024. "The release of atom power has changed everything except our way of thinking… The solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker."...

"The release of atom power has changed everything except our way of thinking… The solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker."

Albert Einstein's reflective words highlight a profound disconnect between the transformative nature of technological advancements and the stagnation in human thought processes. Choosing a career or creating inventions, as illustrated by Einstein's contemplation of becoming a watchmaker—a profession characterized by precision, order, and definitiveness—contrasts starkly with the implications of his theory of relativity, which introduced the concept of time as variable and uncertain. This juxtaposition underscores the ethical and moral responsibilities inherent in our professional and inventive pursuits. Einstein's regret over the unforeseen consequences of his research, leading to the creation of the atomic bomb, serves as a cautionary tale. It emphasizes the importance of foresight, ethical considerations, and the need for a paradigm shift in our thinking to align technological advancements with the greater good of humanity, ensuring that our innovations and career choices contribute positively to the fabric of society.

February 8, 2024. The article explores the relationship between law, truth, and the societal implications of Artificial Intelligence (AI), drawing insights from philosophical discussions by figures like Michel Foucault and...

Full article can be found here.

February 7, 2024. The article covers the evolution and impact of Artificial Intelligence (AI), tracing its roots from the first artificial neuron model in 1943 to the transformative advancements in generative AI. Highlighting key...

The article covers the evolution and impact of Artificial Intelligence (AI), tracing its roots from the first artificial neuron model in 1943 to the transformative advancements in generative AI. Highlighting key historical milestones, such as the development of the Mark I Perceptron and the resurgence of neural network research, it underscores the exponential growth of technology as embodied by the Law of Accelerating Returns. Winston Churchill's words are invoked to illustrate the ongoing journey of AI, marking the current state as just the beginning of further groundbreaking developments.

Full article can be found here.

Generative Native World: LLM as an Interface to a Real-Time Diffuser.

February 2, 2024. In the realm of interactive gaming and storytelling, the fusion of Large Language Models (LLM) with real-time diffusers is revolutionizing how stories unfold and games are played. Imagine a world where every decision...

In the realm of interactive gaming and storytelling, the fusion of Large Language Models (LLM) with real-time diffusers is revolutionizing how stories unfold and games are played. Imagine a world where every decision you make, every command you utter, and every preference you express is instantly recognized and incorporated into the game or story you are engaged with. Through the LLM, your input becomes the driving force behind the narrative, shaping the environment, characters, and outcomes in real-time. The diffuser brings this vision to life, rendering changes in the game world or story visuals with stunning detail and immediacy. This technology allows for an unparalleled level of personalization, creating a truly immersive experience where the game or story evolves uniquely for each player or reader. Characters might adapt their behaviors based on your interactions, storylines could branch in unexpected directions reflecting your choices, and environments might transform to reflect the mood or intensity of your journey. This innovation promises to make gaming and storytelling not just interactive but deeply personal, offering a canvas where your imagination and decisions paint the path forward.
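
As a rough sketch of this architecture, the loop below shows how a player's action could flow through a language model that rewrites it into a scene description and then into a frame rendered by a diffuser. Both `narrate` and `render_frame` are hypothetical placeholders introduced here for illustration; in practice the first would call an LLM and the second a fast diffusion pipeline.

```python
# Sketch of an LLM-driven real-time diffuser loop for interactive storytelling.
# `narrate` and `render_frame` are hypothetical stand-ins for an LLM call and
# a fast diffusion pipeline, respectively; the loop structure is the point.

import time

def narrate(player_action: str, story_state: dict) -> str:
    """Hypothetical LLM step: turn the player's action into a scene description."""
    story_state["history"].append(player_action)
    return f"A {story_state['mood']} scene in which the hero decides to {player_action}."

def render_frame(scene_description: str) -> str:
    """Hypothetical diffuser step: render the described scene as an image frame."""
    return f"<frame rendered from: '{scene_description}'>"

def game_loop(actions: list[str]) -> None:
    state = {"mood": "stormy", "history": []}
    for action in actions:               # in a real game this is live player input
        scene = narrate(action, state)   # the LLM shapes the narrative from the action
        frame = render_frame(scene)      # the diffuser turns the narrative into pixels
        print(frame)
        time.sleep(1 / 60)               # pacing the loop at roughly 60 frames per second

game_loop(["open the ancient gate", "light a torch", "follow the echoing voice"])
```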

Generative Native World: Working With Probabilities, Needing Certainty.

February 2, 2024. In the movie "Oppenheimer," a fictional conversation between Albert Einstein and Robert Oppenheimer is depicted, where Einstein remarks, "So here we are, hmm? Lost in your quantum world of probabilities...

In the movie "Oppenheimer," a fictional conversation between Albert Einstein and Robert Oppenheimer is depicted, where Einstein remarks, "So here we are, hmm? Lost in your quantum world of probabilities, and needing certainty." This line poignantly captures Einstein's skepticism towards the probabilistic foundations of quantum mechanics, contrasting with his desire for a deterministic understanding of the universe.

Albert Einstein's skepticism towards the probabilistic nature of quantum mechanics is well-documented. His stance on the matter is encapsulated in several statements and writings that reflect his discomfort with the non-deterministic interpretation of quantum mechanics. A quote that captures the essence of Einstein's viewpoint is from a letter to Max Born in 1926: 

"Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory produces a lot of results, but hardly brings us closer to the secret of the 'Old One.' I, at any rate, am convinced that He does not throw dice."

This skepticism towards quantum mechanics, particularly its probabilistic nature, can be paralleled with the contemporary discourse on Foundation Models and generative AI. As we delve into the applications of these advanced technologies, the quest for certainty in their outputs mirrors the scientific community's historical pursuit of deterministic explanations. Just as Einstein grappled with the unpredictability of quantum mechanics, users of generative AI seek reliability and precision in the responses generated. This highlights a crucial lesson: while Foundation Models, like the quantum theory, open new horizons of understanding and capability, it is essential to recognize and manage the inherent uncertainties they present. In doing so, we can harness their potential more effectively, fostering innovation while maintaining a critical awareness of their limitations.

Roscoe Pound on Sociological vs Mechanical Jurisprudence.

February 1, 2024. Roscoe Pound was a distinguished legal scholar and educator in the early 20th century, known for his significant contributions to legal theory and his role as Dean of Harvard Law School...

Roscoe Pound was a distinguished legal scholar and educator in the early 20th century, known for his significant contributions to legal theory and his role as Dean of Harvard Law School. He was a leading figure in the movement of sociological jurisprudence, advocating for the law to be understood and applied in the context of social needs and realities, marking a departure from the traditional view of law as a fixed body of rules to be mechanically applied by judges, a concept often referred to as mechanical jurisprudence. Mechanical jurisprudence emphasizes a rigid, formulaic approach to law, where judicial decisions are made based solely on precedent and the strict interpretation of statutory and legal texts without considering the broader social implications or changes.

Pound critiqued this traditional approach for its failure to adapt to the evolving needs of society. He argued that law should not be seen as an isolated system but as a social institution that must respond to the changing conditions and values of society. He believed in the concept of law as a living tool, capable of growth and adaptation. According to Pound, the legal system should incorporate sociological insights and adjust to societal changes to better serve the community's needs and ensure justice. This perspective underscores the idea that law is dynamic and must evolve in response to the changing landscape of social conditions, rather than remaining static and bound by historical interpretations. Pound's thoughts on law and society laid the groundwork for future legal reforms and influenced the development of legal thought and the practice of law in ways that accommodate societal growth and transformation.

Generative Native World: Real-Time Diffusers.

January 31, 2024. Envision a future, unfolding in the coming months, where the realm of digital creation undergoes a leap with the advent of real-time diffusers. Building on the foundation of Diffusion Models, similar to a change in the...

Envision a future, unfolding in the coming months, where the realm of digital creation undergoes a leap with the advent of real-time diffusers. Building on the foundation of Diffusion Models, similar to a change in the late 20th century when real-time 3D rendering first became possible, this innovation heralds a new chapter in the evolution of digital experience. Initially, the breakthroughs of the 1990s allowed video games and simulations to display three-dimensional worlds with a then-unprecedented sense of immediacy and interactivity. Now, the introduction of real-time diffusers is poised to elevate this immersive experience to new heights, leveraging advancements in AI and computational power to generate images at a rate of 60 frames per second. This capability not only promises to blur the line between the virtual and the real but also signifies a profound shift in digital content interaction, offering fluidity and realism that could redefine storytelling, gaming, and virtual interactions. Technologies driving real-time diffusers include Diffusion Models, GPU acceleration, and edge computing, which together facilitate the rapid processing and generation of complex images. Specifically, Stable Diffusion XL Turbo can generate AI images as fast as you can type while Apple Vision Pro provides the hardware needed. The implications of such technology extend far beyond visual enhancement, opening new avenues for real-time collaboration, education, and creative expression where ideas can be visualized in near real time.
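
For the technically curious, the snippet below is a minimal sketch of single-step generation with SDXL-Turbo using the open-source diffusers library, roughly following that library's documented usage. The achievable frame rate is an assumption that depends entirely on the GPU and surrounding pipeline, so the timing printed here is illustrative rather than a performance claim.

```python
# Minimal sketch of near-real-time image generation with SDXL-Turbo via the
# diffusers library (one denoising step, no classifier-free guidance).
# Actual throughput depends heavily on the hardware; 60 fps is not guaranteed.

import time
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

prompt = "a storm rolling over a coastal village, cinematic lighting"

start = time.perf_counter()
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
elapsed = time.perf_counter() - start

image.save("frame.png")
print(f"Generated one frame in {elapsed:.2f}s ({1 / elapsed:.1f} fps equivalent)")
```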

Generative Native World: Communication Skills.

January 31, 2024. In the generative-native era, the necessity for adept communication skills to interact with artificial intelligence (AI) has become paramount, transcending traditional boundaries to encompass professions as...

In the generative-native era, the necessity for adept communication skills to interact with artificial intelligence (AI) has become paramount, transcending traditional boundaries to encompass professions as diverse as engineering and law. This shared requirement highlights an intriguing parallel between these fields: senior engineers reap greater benefits from AI-driven tools like Copilot and chatbots, attributing their success to the intuitive nature of these platforms. Similarly, seasoned partners at law firms outpace first-year associates in leveraging technology such as generative AI and large language models (LLMs), primarily due to their proficiency in formulating precise inquiries. The essence of this dynamic underscores a broader truth: the ability to ask the right questions is not just a technical skill but a fundamental aspect of effective communication in this generative native world. Whether it's developing software or navigating legal complexities, the capacity to articulate inquiries clearly and concisely in English sets experienced professionals apart from their less seasoned counterparts. Thus, in a world increasingly mediated by generative AI, the value of communication skills for engineers and lawyers cannot be overstated, serving as the cornerstone for successful interaction with advanced AI tools.

Foundation Models & Human Understanding.

January 26, 2024. The approach Foundation Models, especially Transformers use to acquire information (self-supervised training, pre-training) closely aligns with the natural ways in which humans acquire knowledge and...

The approach Foundation Models, especially Transformers, use to acquire information (self-supervised pre-training) closely aligns with the natural ways in which humans acquire knowledge and understand our surroundings. In the summer of 2023, the article "A LLM as a reflection of our inner most desires" covered how Foundation Models such as Large Language Models (LLMs) are a reflection of ourselves.

Humans in the physical world operate under laws of behavior as well as the laws of the society we belong to. The laws of behavior can be broadly defined by Newton's laws of motion together with human physiology and psychology. Real-world physical interactions are important for humans to gain understanding and internalize behavioral knowledge. The laws of a society can be formal, such as common law or the Napoleonic Code, or a more informal doctrine on how to live within that society.

Text in public data sets formed by scraping the Internet, including social networks, blogs, and websites, has been written under such laws. LLMs such as the generative pre-trained transformer (GPT) models trained on this text will inherently possess a close approximation of human-level language understanding even though they do not physically interact with the real world governed by the laws of physics, physiology, psychology, or conduct.
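
To make the self-supervised pre-training idea concrete, the toy sketch below (in PyTorch) shows the next-token prediction objective that GPT-style models are trained on: the text itself provides the labels, so no human annotation of the scraped data is needed. The random logits stand in for a real transformer's output; this is an illustration of the objective's shape, not a training recipe.

```python
# Sketch of the next-token (self-supervised) objective behind GPT-style models:
# position t predicts token t+1, and the loss is cross-entropy between those
# predictions and the actual next tokens. Random logits are placeholders for
# what a real transformer would produce.

import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 8
tokens = torch.randint(0, vocab_size, (1, seq_len))   # one "sentence" of token ids

logits = torch.randn(1, seq_len, vocab_size)          # placeholder model output

# Shift by one: the text supplies its own labels, which is why no annotation is needed.
pred = logits[:, :-1, :].reshape(-1, vocab_size)
target = tokens[:, 1:].reshape(-1)

loss = F.cross_entropy(pred, target)
print(f"next-token prediction loss: {loss.item():.3f}")
```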

Oliver Wendell Holmes Jr. on Difference Between Law and Scientific Principles.

January 26, 2024. Oliver Wendell Holmes Jr., who served as an Associate Justice of the Supreme Court of the United States from 1902 to 1932, is renowned for his contributions to American legal philosophy and jurisprudence...

Oliver Wendell Holmes Jr., who served as an Associate Justice of the Supreme Court of the United States from 1902 to 1932, is renowned for his contributions to American legal philosophy and jurisprudence. His tenure on the Court was marked by a pragmatic approach to the law, emphasizing the importance of experience and the evolving needs of society over strict adherence to precedent or the mechanical application of principles. Holmes's view on the nature of law, as encapsulated in his famous quote, "The life of the law has not been logic; it has been experience... The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics," highlights the fundamental difference between the fields of law and science. This perspective comes from his book "The Common Law," first published in 1881, where he lays out his belief that law is a social construct that evolves over time through the experiences and practices of the people it governs. Unlike science, which seeks to understand the world through empirical evidence, experimentation, and the application of universal principles, law is deeply rooted in the historical, social, and moral fabric of society. It is shaped by the collective experiences and values of a nation, reflecting and adapting to changes in societal norms and expectations. Holmes's assertion underscores the contrast between the objective, quantifiable nature of science and the subjective, interpretative nature of law, which is intertwined with the story of human civilization.

The Parallels Between Theuth’s Potion (φάρμακον) of Wisdom and Generative AI.

January 25, 2024. The debate between Theuth and Thamus over the invention of writing in ancient Greece offers a prescient parallel to today’s discussions about generative AI technologies, such as Foundation Models...

The debate between Theuth and Thamus over the invention of writing in ancient Greece offers a prescient parallel to today’s discussions about generative AI technologies, such as Foundation Models (diffusers, transformers) and AI Agents. Theuth, the god of writing, argued that writing, as a study (μάθημα), would serve as a "potion" (φάρμακον) for wisdom and memory, enhancing the capabilities of the Egyptians. In contrast, Thamus, the king of Egypt, believed that reliance on writing would impair human memory and wisdom by making people dependent on external sources for knowledge. This historical discourse mirrors current apprehensions that AI might diminish our cognitive abilities or make us "dumb" by outsourcing tasks like meeting attendance or content summarization to machines. However, just as writing did not make us less intelligent but instead transformed how we store, access, and disseminate knowledge, generative AI technologies are tools that augment our cognitive processes. They enable us to process information more efficiently, free up time for creative and complex problem-solving, and enhance our ability to learn and remember. By acting as extensions of our intellect, these technologies do not diminish our mental capacities but rather enhance our ability to navigate and make sense of an increasingly complex world. Theuth's perspective, celebrating the advent of writing as a means to elevate human wisdom and memory, can similarly be applied to appreciate how generative AI supports and expands our intellectual horizons.

January 25, 2024. In an age where the digitization of information has become ubiquitous, Large Language Models (LLMs) stand at the forefront of a technological revolution that has the potential to redefine our approach to...

In an age where the digitization of information has become ubiquitous, Large Language Models (LLMs) stand at the forefront of a technological revolution that has the potential to redefine our approach to preserving history and culture. Drawing from the rich dialogue between Socrates and Phaedrus penned by Plato in 370 BC, we are reminded of the ancient concerns regarding the impact of writing on human memory and knowledge preservation. Socrates feared that writing would lead to a reliance on external sources for memory, a concern that mirrors modern debates about digital technology's influence on our cognitive capacities. Yet, just as writing became a fundamental tool for documenting and transmitting knowledge across generations, LLMs promise to serve as modern custodians of cultural and historical information.

This article explores the role of LLMs as a tool for preserving society in history, examining their potential to act as digital chroniclers of our time. It delves into the evolution of knowledge preservation, the mechanisms by which LLMs operate, and the benefits and challenges of entrusting our cultural and historical legacy to these digital entities. In doing so, it seeks to illuminate the path forward, where technology and tradition converge in the quest to safeguard our collective memory for future generations. Full article can be found here.

Generative AI & Law: Role of AI Experts & Rule 1.

January 19, 2024. AI experts play a critical role across a wide spectrum of legal domains, including antitrust and business regulation, appellate law, art law, business law, corporate law, criminal justice, in-house counsel, intellectual property, and litigation...

AI experts play a critical role across a wide spectrum of legal domains, including antitrust and business regulation, appellate law, art law, business law, corporate law, criminal justice, in-house counsel, intellectual property, and litigation. This necessity is underscored by the wisdom of Abraham Lincoln, who famously stated, "He who represents himself has a fool for a client." In complex legal cases, the specialized expertise of AI experts is indispensable. For instance, where AI tools designed to detect deepfakes and plagiarism fall short, collaboration between AI experts and legal professionals becomes crucial in discerning the truth. Furthermore, Rule 1 of the Federal Rules of Civil Procedure mandates that parties and courts strive for a "just, speedy, and inexpensive" resolution of cases. AI experts are uniquely positioned to aid in achieving a just outcome in cases involving AI, while also educating attorneys and judges on the application of AI tools to ensure a speedy and cost-effective resolution.

Generative AI & Law: Deepfake Regulation vs. Free Speech & Artistic Expression.

January 19, 2024. President Joe Biden’s deepfake audio clip during New Hampshire primaries, Mexican Mayor deepfake audio clip on WhatsApp in Mexico, and explicit deepfake images of Taylor Swift are some examples of the harms of deepfakes...

President Joe Biden’s deepfake audio clip during the New Hampshire primaries, a deepfake audio clip of a Mexican mayor circulated on WhatsApp, and explicit deepfake images of Taylor Swift are some examples of the harms of deepfakes.

Introducing new and stringent laws against deepfakes, while a possible step for preventing their misuse, could inadvertently impinge on the realms of free speech and artistic expression. The balance between regulation and freedom is delicate, as famously noted by the renowned jurist Oliver Wendell Holmes Jr., who said, "The most stringent protection of free speech would not protect a man falsely shouting fire in a theatre and causing a panic." This implies that limitations in the free speech and artistic expression doctrines are already in place as necessitated for public safety and differentiates such limitations from legitimate expression. 

Moreover, Justice Louis Brandeis, another legal luminary, argued for the importance of freedom of expression: "If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence." Brandeis's words echo the need for a nuanced approach in legislating against deepfakes, ensuring that laws do not stifle the potential of new technologies in artistic and communicative endeavors. In essence, while the threat of deepfakes to privacy and truth is undeniable, the response must be carefully crafted to avoid encroaching upon the fundamental rights of freedom of speech and artistic expression. It should focus on educating the public while remaining mindful that existing law, as Justice Oliver Wendell Holmes Jr. showed, already places restrictions on free speech and artistic expression that account for harm.

Photo: Orson Welles meeting with reporters, telling them that his intent was not to cause panic with his War of the Worlds radio broadcast. 31 October 1938.

Benjamin Franklin & Balancing Law and Liberty in the Pursuit of Safety.

January 18, 2024. Benjamin Franklin, one of the Founding Fathers of the United States, was a renowned polymath, statesman, author, and inventor, whose contributions have left an indelible mark on American history...

Benjamin Franklin, one of the Founding Fathers of the United States, was a renowned polymath, statesman, author, and inventor, whose contributions have left an indelible mark on American history. His quote, "They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety," has resonated through centuries, becoming a cornerstone in discussions about the balance between security and freedom. This statement, profound in its simplicity, suggests a caution against the surrender of fundamental freedoms in the pursuit of short-lived security. It implies that overly restrictive laws, crafted in the name of safety, might not be the most effective means to ensure it. 

While law-abiding citizens adhere to regulations, those with malicious intent often disregard them. This disparity suggests that merely enacting more laws, particularly those aimed at curbing malevolent actions, may not be as effective as intended. The essence of Franklin’s argument is that the creation of stringent laws, while aimed at ensuring safety, might not necessarily prevent unlawful behavior, as those determined to break the law will likely continue to do so, regardless of the legal constraints in place.

In the context of contemporary issues such as the EU AI Act, Franklin's quote remains remarkably relevant. The EU AI Act aims to regulate artificial intelligence to ensure its responsible use within society. However, Franklin's insight suggests a cautionary note: the Act, while well-intentioned, may not achieve its intended effect if it leads to excessive restrictions on the freedoms associated with AI development and use. The challenge lies in finding a balanced approach that safeguards both the innovative potential of AI and the essential liberties of individuals and society at large. Franklin's quote serves as a timeless reminder of the delicate equilibrium between liberty and safety, a balance that continues to shape policy and societal norms in the digital age.

Abraham Lincoln on Role of Friendships in Leadership and Law.

January 17, 2024. Abraham Lincoln, born into modest circumstances in a log cabin in Kentucky in 1809, rose through the ranks of society to become not only the 16th President of the United States but also a highly respected lawyer...  

Abraham Lincoln, born into modest circumstances in a log cabin in Kentucky in 1809, rose through the ranks of society to become not only the 16th President of the United States but also a highly respected lawyer. His early life was marked by self-education and a fierce determination to succeed, characteristics that defined his legal and political careers. Lincoln's legal career, which began in Illinois, was built on a foundation of strong interpersonal relationships and an unrivaled reputation for integrity and ethical conduct. His ability to connect with clients and peers alike was pivotal, demonstrating early on the profound truth of his later reflection that "The better part of one's life consists of his friendships."

This ethos of valuing friendships and connections was not just limited to his legal practice but was also a cornerstone of his political life, particularly during his presidency. Lincoln's adeptness at fostering relationships and building alliances was instrumental during the Civil War, helping to maintain Union support and manage the diverse and often conflicting interests of his cabinet, famously known as the "Team of Rivals." His capacity to listen, empathize, and forge bonds, even with those who initially opposed him, was a key factor in his ability to lead the nation through its most perilous period. Thus, Lincoln's life and career exemplify how the essence of valuing friendships and human connections can transcend personal success and become a pivotal element in leadership and governance.

Theodore Roosevelt & Equitable AI Laws.

January 16, 2024. Theodore Roosevelt, the 26th President of the United States, was renowned for his progressive policies and strong belief in equitable justice. His quote, “No man is above the law, and no man is below it,” powerfully encapsulates the fundamental principle of equality before...

Theodore Roosevelt, the 26th President of the United States, was renowned for his progressive policies and strong belief in equitable justice. His quote, “No man is above the law, and no man is below it,” powerfully encapsulates the fundamental principle of equality before the law. This ethos is particularly pertinent in the context of contemporary issues like the regulation of Artificial Intelligence, as seen in initiatives like the EU AI Act. Roosevelt’s words underscore the necessity for a justice system, and by extension, regulatory frameworks, that apply uniformly to all, irrespective of their status or power. However, while the intention behind regulations like the EU AI Act is to create a fair and safe AI environment, there is a growing concern that such laws might inadvertently create an unequal playing field. Smaller technology companies, often lacking the extensive resources of their larger counterparts, may find themselves disproportionately burdened by the complex and costly compliance requirements. This situation could stifle innovation and competitiveness, leading to a market dominated by a few large players. 

It raises a critical question: can we achieve Roosevelt’s ideal of equal application of the law in the rapidly evolving and highly specialized field of AI, without inadvertently disadvantaging smaller entities? The challenge lies in balancing the need for comprehensive AI regulation to ensure safety and ethics with the need to foster an environment where smaller tech companies can thrive and contribute to the industry’s diversity and innovation. Considering the rapid advancements in AI technology, coupled with a noticeable gap in specialized knowledge among policymakers, the EU AI Act may inadvertently impede AI innovation. Historical observations in Europe suggest that regulations have stifled the development of groundbreaking products and companies. This backdrop raises apprehensions that the EU AI Act, while well-intentioned in its efforts to govern AI, might unintentionally hinder the sector's creative and innovative potential.

William Ernest Henley on Navigating Age of AI.

January 15, 2024. William Ernest Henley, a prominent figure in Victorian literature, is best known for his poem "Invictus," particularly for the empowering lines, "I am the master of my fate, I am the captain of my soul." Born in 1849 in Gloucester, England...

William Ernest Henley, a prominent figure in Victorian literature, is best known for his poem "Invictus," particularly for the empowering lines, "I am the master of my fate, I am the captain of my soul." Born in 1849 in Gloucester, England, Henley faced considerable adversity, battling tuberculosis of the bone and enduring the amputation of his leg. Despite these challenges, his work, especially "Invictus," written in 1875, stands as a testament to the enduring human spirit and the power of resilience. This message of self-mastery and determination is incredibly pertinent in today's rapidly evolving digital age, especially with the advent of artificial intelligence (AI). As AI reshapes our world, Henley's words remind us of the importance of remaining the masters of our destinies. In an era where technological advancements are accelerating, the call for personal empowerment, adaptability, and self-reliance is more relevant than ever. His verse encourages us to steer our own courses in the face of unprecedented changes, ensuring we adapt, learn, and evolve to maintain relevance and control in an increasingly AI-driven world in which how we work and what we work on are changing rapidly. Henley's legacy, thus, not only represents a triumph over personal adversity but also serves as an enduring beacon for navigating the complexities and opportunities presented by advancements in AI.

January 11, 2024. In the era of rapid technological advancement, the integration of a society's cultural values into Large Language Models (LLMs) is not just an innovation but a necessity. As Mahatma Gandhi once said, "A nation's culture resides...

In the era of rapid technological advancement, the integration of a society's cultural values into Large Language Models (LLMs) is not just an innovation but a necessity. As Mahatma Gandhi once said, "A nation's culture resides in the hearts and in the soul of its people." This sentiment underscores the importance of infusing LLMs with the richness and diversity of cultural heritage to ensure they reflect the true essence of human societies. Full article can be found here

Generative AI & Law: Wooing AI with Patience and Precision.

January 11, 2024. Benjamin N. Cardozo, an esteemed American jurist, served as an Associate Justice of the Supreme Court of the United States from 1932 until his death in 1938. Born in 1870, he is remembered for his significant influence on the...

Benjamin N. Cardozo, an esteemed American jurist, served as an Associate Justice of the Supreme Court of the United States from 1932 until his death in 1938. Born in 1870, he is remembered for his significant influence on the development of American common law and his tenure as Chief Judge of the New York Court of Appeals before his appointment to the Supreme Court. Cardozo's contributions to legal philosophy, characterized by his insightful and eloquent opinions, have left a lasting impact on American jurisprudence.

In a lecture at Yale University Law School in 1923, later published in The American Journal of International Law in 1935, Cardozo made a profound statement: 

“Justice is not to be taken by storm. She is to be wooed by slow advances.”

This quote encapsulates Cardozo's view on the gradual and thoughtful evolution of justice and legal systems. It emphasizes the importance of careful deliberation and incremental change in the pursuit of justice, rather than abrupt or radical shifts.

When considering this perspective in the context of regulating Artificial Intelligence (AI), Cardozo's statement appears to advocate for a measured approach. The rapid advancement of AI presents unique and complex challenges, necessitating regulations that are both effective and adaptable. In this light, Cardozo's philosophy might align more with a cautious, step-by-step regulatory process rather than immediate and sweeping legislation. This approach resonates with the current debate on frameworks like the EU AI Act, suggesting that while regulations are necessary for managing AI's impact, they should evolve progressively to keep pace with technological advancements and their societal implications. Cardozo's wisdom implies that justice in the age of AI will best be served not by hastily crafted rules but by laws that evolve through thoughtful consideration and adapt to the intricacies of AI's role in society.

January 10, 2024. Around 164,000 years ago, Homo sapiens migrated to South Africa's coastlines, adapting to climate changes and exploiting new resources like shellfish, which enhanced their survival and cognitive development...

Around 164,000 years ago, Homo sapiens migrated to South Africa's coastlines, adapting to climate changes and exploiting new resources like shellfish, which enhanced their survival and cognitive development. This historical curiosity and adaptability mirror today's exploration of Artificial General Intelligence (AGI), signifying a continuous human journey of innovation and problem-solving. From ancient shores to AI frontiers, human progress is defined by our enduring spirit to explore, adapt, and evolve. Full article can be found here

Accelerated Rate of Generative AI Impact vs The Wheel on Society.

January 9, 2024. The invention of the wheel around 3500 BC marked a pivotal moment in human history, catalyzing a profound transformation in transportation and trade. As wheels began to be attached to carts and chariots...

The invention of the wheel around 3500 BC marked a pivotal moment in human history, catalyzing a profound transformation in transportation and trade. As wheels began to be attached to carts and chariots, they not only eased the movement of goods and people over long distances but also spurred advancements in agriculture and warfare, reshaping societies and economies. Similarly, the advent of generative AI and foundational models like Transformer Language Models is ushering in a new era of change, marked by significant shifts in how we interact with information, automate tasks, and even conceive creativity. These AI technologies are transforming industries, from healthcare to entertainment, by enabling more efficient data processing, personalized experiences, and innovative problem-solving approaches. However, one stark difference in these waves of change lies in their pace; the transformation driven by AI is occurring at an exponentially faster rate than any previous technological revolution, including that of the wheel, due to the rapid and interconnected nature of our modern digital world. This accelerated change underscores both the immense potential and the critical need for thoughtful oversight in the AI-driven era.

Generative AI & Law: Applying Common Law to an Evolving Society & AI.

January 9, 2024. Roscoe Pound was a distinguished American legal scholar and educator, born in 1870. He served as the Dean of Harvard Law School from 1916 to 1936 and is widely recognized for his contributions to legal philosophy...

Roscoe Pound was a distinguished American legal scholar and educator, born in 1870. He served as the Dean of Harvard Law School from 1916 to 1936 and is widely recognized for his contributions to legal philosophy, particularly in the field of sociological jurisprudence. Pound's work emphasized the social impact of legal decisions and the need for law to function as a tool for societal progress. His influential ideas significantly shaped modern legal thought, particularly in how the law interacts with and responds to societal changes.

Pound's statement, “The law must be stable, but it must not stand still,” reflects his understanding of the dynamic nature of law in relation to society's evolution. This perspective is especially relevant in the context of common law, which is largely based on judicial precedents. Pound believed that while the law should provide a stable and predictable framework for order and justice (hence its need to be stable), it should also be adaptable and responsive to the changing needs and circumstances of society (thus it should not stand still). This idea is crucial when judges face the challenge of applying old laws to new technologies, such as generative AI, and an evolving society. It implies that judges should respect and uphold established legal principles and precedents for consistency and stability. However, they should also be willing to interpret and apply these principles flexibly and innovatively to address new and unforeseen circumstances brought about by technological advancements and societal changes. Pound's statement underscores the need for a balance between adherence to tradition and an openness to evolution in the legal system, ensuring that the law remains relevant and effective in a rapidly changing world.

John Adams' Ethical Lens: Shaping AI Policy Through Consideration of Morality, Not Rules.

January 8, 2024. The quote by John Adams, “Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other," from his letter to the Massachusetts Militia on October 11, 1798, highlights a fundamental principle...

The quote by John Adams, “Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other," from his letter to the Massachusetts Militia on October 11, 1798, highlights a fundamental principle: the effectiveness of laws and regulations depends significantly on the moral character and ethical standards of the society they govern. This perspective is particularly relevant when considering the regulation of new technologies like Artificial Intelligence (AI).

In the context of AI, Adams' viewpoint suggests that instead of hastily creating a multitude of new regulations every time a new technology emerges, it might be more effective to focus on the ethical and moral standards of the society that uses this technology. The idea is that a society with strong moral values will naturally use AI in ways that are beneficial and responsible, reducing the need for overly detailed or restrictive laws.

This approach advocates for oversight rather than an excessive number of specific regulations. Oversight involves monitoring and guiding the development and application of AI to ensure that it aligns with ethical standards and societal values, rather than trying to predict and legislate every possible scenario. This method recognizes that technology evolves rapidly, often outpacing the law's ability to regulate it. By focusing on oversight, society can adapt more fluidly to technological advancements, ensuring that AI is used responsibly and ethically without stifling innovation with overly burdensome regulations.

Adams' philosophy underscores the importance of a society's moral fabric in shaping the use and regulation of new technologies. It suggests that fostering a strong ethical culture might be as crucial as, if not more important than, the creation of specific laws in governing the emergence and integration of technologies like AI into society.

Generative AI & Law: The Evolving Role of Judges in the Federal Judiciary in the Age of AI.

January 1, 2024. The role of judges in the Federal Judiciary has been one of evolution and adaptation. In recent times, technological advancements, particularly in artificial intelligence (AI), have begun to play a pivotal role in reshaping judicial responsibilities...

The role of judges in the Federal Judiciary has been one of evolution and adaptation. In recent times, technological advancements, particularly in artificial intelligence (AI), have begun to play a pivotal role in reshaping judicial responsibilities and capabilities. Chief Justice John G. Roberts Jr.'s 2023 Year-End Report on the Federal Judiciary underlines this transition, highlighting the impact of AI on the judiciary and the enduring importance of human judges.

Chief Justice Roberts wrote: 

Rule 1 of the Federal Rules of Civil Procedure directs the parties and the courts to seek the “just, speedy, and inexpensive” resolution of cases. Many AI applications indisputably assist the judicial system in advancing those goals. 

AI has emerged as a significant ally in this regard. AI applications in the judiciary assist in various aspects, such as managing vast amounts of legal data, predicting case outcomes, and streamlining administrative tasks. This not only expedites the judicial process but also enhances accuracy and fairness in decision-making.

Full article can be found here

Ardipithecus Ramidus & AI.

December 27, 2023. Ardipithecus ramidus, a hominid species that lived around 4.4 million years ago, demonstrated remarkable adaptability in the face of climate change through its unique physical attributes. This species was adept at bipedality, allowing it to traverse open terrains ...

Ardipithecus ramidus, a hominid species that lived around 4.4 million years ago, demonstrated remarkable adaptability in the face of climate change through its unique physical attributes. This species was adept at bipedality, allowing it to traverse open terrains efficiently, a crucial adaptation as forests receded and grasslands expanded. Simultaneously, Ardipithecus retained arboreal capabilities, evident in its limb structure, enabling it to exploit the resources of the diminishing forest habitats. This dual adaptation for both terrestrial and arboreal environments was key to its survival in a changing world.

Drawing a parallel to modern times, humans are facing their own climate challenges. Just as Ardipithecus used its physical adaptability to navigate environmental changes, humans are leveraging advanced tools like artificial intelligence (AI) to adapt to and mitigate the impacts of climate change. AI aids in predicting climate patterns, optimizing resource use, and developing sustainable solutions, akin to how Ardipithecus utilized its bipedalism and arboreality. This technological adaptability mirrors the physical adaptability of Ardipithecus, underscoring a continuous theme in the evolution of species: the imperative to adapt for survival in the face of environmental change.

Beyond AGI, Robometrics® Machines & Immortality by Proxy.

December 26, 2023. Heraclitus, an ancient Greek philosopher active around 535–475 BCE, is renowned for his cryptic and profound aphorisms. His quote, “You would not find the boundaries of the soul no matter how many paths you traveled, so deep is its measure,” ...

Heraclitus, an ancient Greek philosopher active around 535–475 BCE, is renowned for his cryptic and profound aphorisms. His quote, 

“You would not find the boundaries of the soul no matter how many paths you traveled, so deep is its measure,”

reflects his deep contemplation on the nature of the soul and human understanding.

Heraclitus hailed from Ephesus, an important city in Ionia. He was known for his philosophy that emphasized the constant change and flow in the universe, encapsulated in his famous doctrine, "Panta Rhei," meaning "everything flows." This perspective highlighted the ever-changing and dynamic nature of reality. This particular quote delves into the seemingly infinite complexity of the human soul. Heraclitus suggests that the soul is an entity of such depth and intricacy that its full extent can never be completely understood or explored, no matter how much one tries. 

Today, our understanding of human consciousness and the depth of the human mind has grown tremendously compared to the early 5th century BC, when Heraclitus lived, and it is now possible to approach these questions with some level of confidence. We at Robometrics® Machines have made significant progress in modeling consciousness. There may be aspects of human consciousness that remain elusive to artificial replication, underscoring the enigmatic depth of the human mind and soul. At the same time, modeling consciousness remains an important step towards Artificial General Intelligence (AGI) and, at some point in time, towards an understanding of the human soul that could lead to immortality by proxy.

Ray Kurzweil on Age of Abundance.

December 25, 2023. Ray Kurzweil, an American inventor and futurist, currently holds a prominent position as a director of engineering at Google. Born on February 12, 1948, in Queens, New York, Kurzweil has established himself as a vanguard in various technological fields...

Ray Kurzweil, an American inventor and futurist, currently holds a prominent position as a director of engineering at Google. Born on February 12, 1948, in Queens, New York, Kurzweil has established himself as a vanguard in various technological fields, including optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and the development of electronic keyboard instruments. Beyond these achievements, he is renowned for his insightful books on a range of subjects like health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism, painting a picture of a future profoundly shaped by technological advancements.

Central to Kurzweil's perspective on the future is the concept of an age of abundance, propelled by the rapid evolution of technology, especially AI. He envisions a future where AI reaches, and eventually surpasses, human levels of intelligence. One of his notable quotes on this subject is:

"Artificial Intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.

This prediction exemplifies Kurzweil's belief in the exponential growth and transformative power of technology. He foresees a world where traditional scarcities are replaced by technological abundance, effectively addressing global challenges such as health, poverty, and aging. His forecasts often highlight the synergy of various emerging technologies, including AI, nanotechnology, and biotechnology, and their collective impact on reshaping society and human life. Kurzweil's views offer a compelling glimpse into a future where technological advancements hold the key to solving some of humanity's most pressing issues.

December 22, 2023. The article delves into John Searle's Chinese Room Argument and its implications for machine consciousness in artificial intelligence (AI). It outlines Searle's critique of the Turing Test, emphasizing that mere symbol manipulation by machines does not equate to true understanding or consciousness...

The article delves into John Searle's Chinese Room Argument and its implications for machine consciousness in artificial intelligence (AI). It outlines Searle's critique of the Turing Test, emphasizing that mere symbol manipulation by machines does not equate to true understanding or consciousness. The article highlights the need for AI to transcend beyond language skills and incorporate elements of intuition and consciousness, akin to human intelligence. It underscores the challenge of imbuing machines with self-awareness and contextual understanding, positioning this endeavor not just as a technological leap but also as a philosophical exploration into the nature of intelligence and consciousness. Full article here

December 22, 2023. Judge Learned Hand's words during "I Am an American Day" on May 21, 1944, in Central Park, New York City, delivered while he was serving as a Senior Judge of the United States Court of Appeals for the Second Circuit, "The spirit of liberty is the spirit of Him who, near two thousand years ago, taught mankind that lesson it has never learned, but has never quite forgotten; that there may be a kingdom where the least shall be heard and considered side by side with the greatest," speak to inclusive and equitable treatment in every walk of life. Today, generative AI can help in creating such a kingdom that is diverse, equitable, and inclusive. Full article here

December 21, 2023. As we navigate through the Age of AI, a period that gained momentum in 2023 and continues to evolve, we're witnessing a unique blend of past and present. This era is characterized by our deep integration with technology...

As we navigate through the Age of AI, a period that gained momentum in 2023 and continues to evolve, we're witnessing a unique blend of past and present. This era is characterized by our deep integration with technology, especially through our use of numerous custom GPTs, symbolizing the zenith of machine intelligence and personalization. Despite the rapid technological advancements, there's a strong affinity for traditional elements like Polaroid photographs, which represent a tangible connection to the past, unaltered by digital advancements. This dichotomy reflects the thoughts of visionaries like Albert Einstein, who highlighted the importance of adaptability in intelligence, and Steve Jobs, who emphasized selective innovation. Their insights resonate in our era, as we cherish the simplicity of the old—like driving a manual car or flying a Cessna 172—while embracing the complexities of AI. This harmonious coexistence of old and new, especially evident as we move into 2024, underscores our journey towards a future that respects both the revolutionary capabilities of AI and the timeless value of simpler, more traditional experiences. Full article here

Similarity Between Learning in Transformers & Human Brain.

December 19, 2023. It is naive to think of Foundation Models such as Transformer-based Large Language Models (LLMs) as black boxes when we consider the general convergence of AI with human brain functions. In the human brain, Magnesium (Mg2+) gating of NMDA receptors...

It is naive to think of Foundation Models such as Transformer-based Large Language Models (LLMs) as black boxes when we consider the general convergence of AI with human brain functions. In the human brain, Magnesium (Mg2+) gating of NMDA receptors is crucial for long-term memory: a memory is not formed while an NMDA receptor remains blocked by Mg2+. Some of the parameters involving alterations to the activation functions within the feed-forward layers of the Transformer model in an LLM can be adjusted to mimic the gating action of the NMDA receptor in the human brain. These parameters control the elements of the model's architecture and configuration that determine its behavior, performance, and learning.

Collectively, these adjustments can be thought of as an NMDA-inspired activation function, potentially involving alterations to the activation functions within the model's feed-forward layers. This can enhance the model's long-term memory capabilities, akin to how memory consolidation works in the human brain. The combination of the activation function, attention mechanisms, and hyperparameters such as learning rates and temperature can have the effect of selective learning, resulting in long-term memory in an LLM.

This similarity between learning and memory formation in Transformer-based LLMs and the human brain gives us a good indication of how LLMs can be improved while, at the same time, improving our understanding of how we humans learn.
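To make the analogy concrete, here is a minimal PyTorch sketch of one way an NMDA-style gate could be added to a Transformer feed-forward sub-layer. It is an illustration of the idea rather than an established architecture: the class name, the learnable threshold standing in for the Mg2+ block, and the alpha steepness constant are all assumptions made for the example.

import torch
import torch.nn as nn

class NMDAGatedFeedForward(nn.Module):
    # Illustrative feed-forward sub-layer whose activation is gated by a
    # learnable threshold, loosely analogous to Mg2+ gating of NMDA receptors:
    # weak signals are suppressed, strong ones pass and can contribute to what
    # the model retains.
    def __init__(self, d_model: int, d_ff: int, alpha: float = 0.1):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)
        self.threshold = nn.Parameter(torch.zeros(d_ff))  # stand-in for the Mg2+ block
        self.alpha = alpha                                 # gate steepness (assumed value)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.up(x)
        # Smooth gate: activations below the threshold are largely blocked,
        # those above it pass through, mimicking receptor "unblocking".
        gate = torch.sigmoid((h - self.threshold) / self.alpha)
        return self.down(h * gate)

ffn = NMDAGatedFeedForward(d_model=512, d_ff=2048)  # drop-in replacement for a standard FFN
tokens = torch.randn(2, 16, 512)                    # (batch, sequence, d_model)
out = ffn(tokens)                                   # same shape as the input

Whether such a gate actually improves long-term retention is an empirical question; the point of the sketch is only that the gating behavior described above maps naturally onto adjustable parameters in the feed-forward layers.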

Mahatma Gandhi's Wisdom in the Age of AI & Robotics.

December 15, 2023. "The world has enough for everyone's need, but not enough for everyone's greed." Gandhi's statement highlights the concept of sustainable abundance. He suggests that the Earth possesses sufficient resources to meet the needs of every individual...

"The world has enough for everyone's need, but not enough for everyone's greed."

Gandhi's statement highlights the concept of sustainable abundance. He suggests that the Earth possesses sufficient resources to meet the needs of every individual, but these resources are not endless and cannot satisfy unlimited greed. His perspective emphasizes the importance of equitable distribution and sustainable living as pathways to achieving a form of abundance that benefits all. This makes us ask whether the notion that the future fundamentally requires an age of abundance created by AI and robotics is inherently misleading, merely feeding the greed of some and creating a world ruled by a few.

Mahatma Gandhi's poignant observation offers a profound insight into the current discourse on the age of abundance, particularly in the context of advancements like AI. Gandhi's perspective challenges the notion that we require a new era of abundance facilitated by AI or any other technological advancements. This by no means implies that advancements such as AI and robotics are not going to be helpful to human civilization in the future.

Instead, it suggests that the key to achieving a balanced and sustainable world with abundance lies in addressing human greed and redefining our understanding of 'need' versus 'want'. The emphasis is on sustainable consumption, ethical stewardship of resources, and equitable distribution, rather than on creating more through artificial means, such as humanoid robots for factories. In a world where technology, including AI, continues to advance rapidly, Gandhi's words serve as a reminder of the importance of moderation, sustainability, and the ethical implications of our pursuit of abundance. It's a call to focus on fair allocation and responsible use of what we already have, rather than an endless quest for more, driven by AI or other technologies.

The Better the Society, The Less Law There Will Be.

December 13, 2023.Grant Gilmore, a prominent American legal scholar, was an influential figure in the development of American contract law. Born in 1910, he was notably appointed as a Sterling Professor of Law at Yale Law School, one of the most prestigious positions in American legal academia...

Grant Gilmore, a prominent American legal scholar, was an influential figure in the development of American contract law. Born in 1910, he was notably appointed as a Sterling Professor of Law at Yale Law School, one of the most prestigious positions in American legal academia. Gilmore's work significantly impacted the field, particularly through his contributions to the Uniform Commercial Code (UCC), a key set of laws governing commercial transactions in the United States. His teachings and writings, including his well-known book "The Death of Contract," have been highly influential in legal circles, especially in the context of contract law and legal philosophy.

In his book "The Ages of American Law" (1977), Gilmore made a profound statement: 

"Law reflects but in no sense determines the moral worth of a society. The values of a reasonably just society will reflect themselves in a reasonably just law. The better the society, the less law there will be. In Heaven there will be no law, and the lion will lie down with the lamb."

This statement encapsulates Gilmore's view on the relationship between law and society's moral standards. He argued that law is a mirror of society's values and morality but does not, in itself, shape these values. Instead, a society's inherent sense of justice and fairness is what shapes its legal system. Gilmore believed that in an ideal, highly moral society, the need for extensive laws and regulations diminishes, as the citizens inherently adhere to just principles. His vision of an ideal society ("In Heaven there will be no law") suggests a utopian scenario where laws become redundant due to the inherent goodness and understanding of its members, a state where peace and harmony prevail ("the lion will lie down with the lamb"). This philosophical view offers a perspective on the law as not just a set of rules imposed on society but as a reflection of society's collective moral and ethical standards.

Generative AI & Law: Copyright From 1790's Anonymity to 2024 AI.

December 6, 2023.The Copyright Act of 1790 marked a significant milestone in the history of U.S. intellectual property law, establishing the first formal system for copyright protection in the United States. Under this act, authors, or their designated representatives...

The Copyright Act of 1790 marked a significant milestone in the history of U.S. intellectual property law, establishing the first formal system for copyright protection in the United States. Under this act, authors, or their designated representatives, could register their works for a term of 14 years with the option of renewal for another 14 years. This legislation was an embodiment of the U.S. Constitution's Article I, Section 8, Clause 8, which empowered Congress to promote the progress of science and the useful arts by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

One of the first works to be copyrighted under this act was "The Power of Sympathy" by William Hill Brown, published in 1789. This novel is often recognized as “The First American Novel” and was a groundbreaking work in American literary history. Notably, the novel was originally published anonymously due to the sensitive nature of its content and the prevailing social norms of the time. This aspect of anonymity in the novel’s publication and copyright raises intriguing questions about modern copyright law, particularly in relation to works created by artificial intelligence (AI).

The precedent of granting copyright to an anonymous work like "The Power of Sympathy" suggests a certain flexibility in the concept of authorship within copyright law. This flexibility becomes increasingly relevant in contemporary discussions about AI and copyright. If a work that was published anonymously can be granted copyright protection, it opens the door to discussions about whether AI-generated works, which do not have a human author in the traditional sense, could also be eligible for similar protections. The evolving nature of authorship and the role of AI in creative processes could lead to significant shifts in how copyright law is interpreted and applied in the future.

December 5, 2023.“For when, in writing for the majority of the Court, I adopt a general rule, and say, 'This is the basis of our decision,' I not only constrain lower courts, I constrain myself as well.", "In the real world of appellate judging,...

"For when, in writing for the majority of the Court, I adopt a general rule, and say, 'This is the basis of our decision,' I not only constrain lower courts, I constrain myself as well."

"In the real world of appellate judging, it displays more judicial restraint to adopt such a course than to announce that, 'on balance,' we think the law was violated here -- leaving ourselves free to say in the next case that, 'on balance,' it was not."

In this excerpt from "The Rule of Law as a Law of Rules," published in the University of Chicago Law Review (1989), Antonin Scalia reflects on the nature of judicial decision-making. He initially perceived the common-law approach as a form of judicial restraint, minimizing law creation. However, Scalia grew to see that adopting general rules in majority opinions actually embodies greater restraint. By establishing a governing principle, a judge not only guides lower courts but also self-imposes constraints on future decisions, regardless of personal preferences. Scalia argues that this approach, as opposed to a flexible "on balance" method, provides a more consistent and predictable framework for legal interpretation, making it harder to accuse judges of arbitrariness or inconsistency.

In the context of regulating AI, Antonin Scalia's "The Rule of Law as a Law of Rules" suggests that laws should be made with clear, general principles rather than on a case-by-case basis. This approach emphasizes judicial restraint, ensuring that personal biases do not influence legal decisions and that consistency is maintained across different cases.

Cartesian Philosophy, Consciousness & AGI.  

December 4, 2023.Cartesian philosophy, originating in the early 17th century with René Descartes, represents a watershed moment in philosophical and scientific thought. Born in 1596, Descartes, a French philosopher and mathematician, formulated this philosophy during a period of profound scientific and intellectual upheaval...

Cartesian philosophy, originating in the early 17th century with René Descartes, represents a watershed moment in philosophical and scientific thought. Born in 1596, Descartes, a French philosopher and mathematician, formulated this philosophy during a period of profound scientific and intellectual upheaval, characterized by the transition from medieval to modern thinking. He laid the foundations for this system in his seminal works, "Discourse on the Method" (1637) and "Meditations on First Philosophy" (1641). In these texts, Descartes sought to establish a new method of acquiring knowledge, turning away from scholastic traditions and advocating for a fresh approach based on reason and skepticism.

Descartes' exploration of the mind-body dualism – the distinction between the mental and the physical – is particularly relevant in contemporary debates about what it means to be human in an age of advanced technology. His insights into the nature of human thought and existence continue to fuel discussions and research into the essence of consciousness and the human experience. Thus, the legacy of Cartesian philosophy, grounded in the use of reason to understand the natural world and ourselves, remains a vital and influential force in shaping our approach to fundamental questions in science and philosophy.

In contemporary times, Cartesian philosophy finds relevance in areas like human consciousness, Artificial General Intelligence (AGI), and the exploration of what it means to be human. By advocating for a rational, methodical approach to inquiry, Cartesianism encourages a deeper investigation into human consciousness, pushing the boundaries of understanding the mind's complexities. In the realm of AGI, this philosophical framework inspires the creation of intelligent systems that can reason, learn, and adapt, much like the human mind. Furthermore, Cartesian philosophy invites continuous reflection on the human condition, prompting questions about the essence of humanity in an increasingly technologically driven world. As we navigate new frontiers in science and technology, the Cartesian emphasis on reason and methodical inquiry remains a vital guide, helping to shape our understanding of ourselves and the world around us.

Perpetual Patent Machine that Needs no Fuel.  

December 1, 2023. The concept of a perpetual machine has long fascinated humanity. From Leonardo da Vinci's quest for a self-operating machine to Nikola Tesla's endeavors in creating limitless energy, the idea of a machine that operates indefinitely without an external energy source has been a beacon of human imagination. This dream now takes a new form in the realm of intellectual creativity, the Perpetual Patent Machine (PPM)...

The concept of a perpetual machine has long fascinated humanity. From Leonardo da Vinci's quest for a self-operating machine to Nikola Tesla's endeavors in creating limitless energy, the idea of a machine that operates indefinitely without an external energy source has been a beacon of human imagination. This dream now takes a new form in the realm of intellectual creativity, the Perpetual Patent Machine (PPM).

Unlike traditional inventing processes driven by personal rewards and the novelty of creation, the Perpetual Patent Machine operates without such incentives. Its sole mission is to continuously generate new, useful inventions, as defined by its human counterparts. This highlights a unique partnership where AI excels in solution-finding, but the quintessential spark of human creativity remains crucial in problem identification and framing. Historically, the pursuit of perpetual motion and endless innovation has been echoed in the note by Nikola Tesla: 

"If you want to find the secrets of the universe, think in terms of energy, frequency, and vibration." 

This embodies the perpetual quest for knowledge and innovation that drives human progress. A landmark in this journey is the contribution of Google DeepMind, “Scaling deep learning for materials discovery,” published in Nature on November 29, 2023. The study showcases the power of AI in improving the efficiency of materials discovery by an order of magnitude. Building on the identification of 48,000 stable crystals, the AI model, known as GNoME, has discovered 2.2 million new structures, surpassing previous human chemical intuition. This represents a monumental expansion in the materials known to humanity, with 736 structures already experimentally realized. The GNoME models exhibit emergent out-of-distribution capabilities at scale, demonstrating the profound impact of AI in transcending the limits of human knowledge and intuition in material science.

The Perpetual Patent Machine, embodied by the likes of GNoME, signifies a new era in innovation. It underscores the potential of AI not just as a tool, but as a collaborator in the creative process. The continued evolution of such machines promises to fundamentally accelerate the pace of discovery across various domains, making previously unimaginable inventions a tangible reality. The dream of the Perpetual Patent Machine is not just about creating a self-sustaining system of innovation; it's about redefining the partnership between human and machine intelligence. As we stand on the brink of this new era, it's essential to remember that the spark of human creativity, embodied in problem identification and framing, remains irreplaceable. Together, human ingenuity and AI's capability can forge a path to endless innovation, turning the dream of a perpetual machine into a reality that fuels the future of human progress.

Benjamin N. Cardozo on Role of a Judge.  

November 30, 2023. Benjamin N. Cardozo, an American jurist born on May 24, 1870, served on the New York Court of Appeals from 1914 to 1932 and was an Associate Justice of the Supreme Court of the United States from 1932 until his death in 1938...

Benjamin N. Cardozo, an American jurist born on May 24, 1870, served on the New York Court of Appeals from 1914 to 1932 and was an Associate Justice of the Supreme Court of the United States from 1932 until his death in 1938. He is renowned for significantly influencing the development of American common law in the 20th century, thanks to his legal philosophy and vivid prose style. Cardozo penned notable majority opinions in cases such as Nixon v. Condon and Steward Machine Co. v. Davis during his tenure on the U.S. Supreme Court, and in Palsgraf v. Long Island Railroad Co. while on the New York Court of Appeals.

Cardozo's famous cases include Schloendorff v. Society of New York Hospital, which established the necessity of informed consent in medical procedures; MacPherson v. Buick Motor Co., which ended privity as a prerequisite to duty in product liability; and Palsgraf v. Long Island Railroad Co., which developed the concept of proximate cause in tort law. His judgments often reflected a deep understanding of the law, balanced with considerations of social welfare and justice.

Cardozo's statement, 

“The judge is not the knight-errant, roaming at will in pursuit of his own ideal of beauty or of goodness,”

is from his work "The Nature of the Judicial Process." This statement encapsulates his view on the role of judges in the legal system. Cardozo believed that judges should follow and apply the law in straightforward cases and make new law in more complex cases by balancing various considerations, including social welfare. He acknowledged the inherent risks in judicial lawmaking, emphasizing that judges should only make new law when the law contains gaps or a legal rule is clearly obsolete, and their actions should be guided by community standards rather than personal values. His perspective was that a judge's freedom to innovate is limited by the rules that surround them, and their role is not to pursue personal ideals but to draw inspiration from established principles.

Cardozo's views on the judicial process are significant as they highlight the balance judges must maintain between adherence to legal precedent and the evolving needs of society. He recognized the dynamic nature of law and the responsibility of judges to interpret and apply it in a manner that upholds justice and social welfare. This philosophy underscores the importance of judicial restraint and the need for judges to be guided by principles rather than personal inclinations, ensuring that their decisions are rooted in law and aligned with societal values.

Henry David Thoreau on Individual Morality and the Pursuit of True Justice.  

November 29, 2023. Henry David Thoreau, an American transcendentalist born in the early 19th century, is best known for his philosophical and naturalist writings, including the seminal essay "Civil Disobedience," first published in 1849...

Henry David Thoreau, an American transcendentalist born in the early 19th century, is best known for his philosophical and naturalist writings, including the seminal essay "Civil Disobedience," first published in 1849. This work, originally titled "Resistance to Civil Government," articulates Thoreau's conviction that individuals should not let governments override their conscience, particularly when such governments perpetrate or support injustice. His stance was partly a response to the slavery crisis in New England during the 1840s and 1850s, and specifically to the Fugitive Slave Act of 1850. Thoreau, a lifelong abolitionist, was deeply troubled by slavery and the Mexican-American War, seeing them as grave injustices.

In "Civil Disobedience," Thoreau argues that governments are more often harmful than helpful, and that the judgment of an individual's conscience is paramount. He believed that the law does not inherently make people just; rather, individuals must actively engage in making the law just. This view is encapsulated in his statement, 

“The law will never make a man free; it is men who have got to make the law free.” 

Thoreau asserts that democracy, by virtue of being a majority rule, does not automatically confer wisdom or justice. He emphasizes the importance of individual judgment and conscience over blind adherence to law, especially when the law itself perpetuates injustice. For Thoreau, true freedom and justice are achieved not through the passive acceptance of laws, but through the active and moral engagement of individuals in shaping and challenging these laws.

This perspective challenges the notion that law and justice are synonymous. Thoreau's assertion highlights the dynamic relationship between law, freedom, and individual morality. It suggests that laws, in themselves, are not the ultimate arbiters of justice; rather, it is the intent and actions of individuals within a legal framework that bring about true justice. This view underlines the responsibility of individuals not only to obey laws but also to critically evaluate and influence them, ensuring that they align with the principles of justice and morality.

Benjamin Disraeli on Truth is About Justice, Intent and Not Letter of the Law.  

November 28, 2023. Benjamin Disraeli, a prominent British statesman and novelist, was twice the Prime Minister of the United Kingdom (1868, 1874–1880) and a key figure in the Conservative Party. Born on December 21, 1804, in London, Disraeli was of Italian-Jewish descent...

Benjamin Disraeli, a prominent British statesman and novelist, was twice the Prime Minister of the United Kingdom (1868, 1874–1880) and a key figure in the Conservative Party. Born on December 21, 1804, in London, Disraeli was of Italian-Jewish descent. A pivotal moment in his early life was his father's decision in 1817 to baptize his children as Christians, a choice that was crucial for Disraeli's future political career, as Jews by religion were excluded from Parliament until 1858.

Disraeli's journey into politics was marked by initial setbacks and personal struggles, including a period that would now be recognized as a nervous breakdown. His travels in the Mediterranean and the Middle East deeply influenced his later political attitudes and literary works. He entered politics in 1831 as an independent radical and eventually aligned himself with the Conservative Party, though his initial attempts at securing a parliamentary seat were unsuccessful. His extravagant lifestyle and debts, coupled with his controversial personal life, earned him a dubious reputation. However, by 1837, Disraeli had secured a seat in Parliament for Maidstone in Kent. His early speeches in the House of Commons were met with criticism, but he quickly established himself as a commanding speaker.

Disraeli's statement, 

“Justice is truth in action,”

reflects his complex view of politics and law. This perspective aligns with his life experiences and political journey, where he witnessed and contributed to the evolution of justice and governance beyond the rigid confines of the law. Disraeli understood that the pursuit of justice is not solely about adhering to the letter of the law, but about acting upon truths that stem from experience, intent, and a deeper understanding of societal needs. His own political career, marked by initial setbacks and eventual triumph, exemplifies the idea that truth and justice are dynamic concepts, realized through action and perseverance rather than mere compliance with established legal frameworks. Disraeli's quote underlines the significance of intent and the human element in the application of justice, suggesting that true justice is achieved when actions are guided by truth and a commitment to societal good, transcending the mere technicalities of law.

Thinking Machines & Natural Intelligence 

November 27, 2023. Ada Lovelace, regarded as the first computer programmer, recognized AI and machines as extensions of human thought, capable only of what they are programmed to perform. She said in her notes on the Analytical Engine (1843)...

Ada Lovelace, regarded as the first computer programmer, recognized AI and machines as extensions of human thought, capable only of what they are programmed to perform. She wrote in her notes on the Analytical Engine (1843):

"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."

Lovelace's insight hints at the idea that AI and machines are extensions of human thought and capability, doing what they are programmed to do by humans, and thus are a part of the natural evolution of human tools and thought processes. She thought of machines as a natural extension of human intelligence and creativity, but not as something that can think for itself. 

Douglas R. Hartree (1949) had a different view; he quoted Lovelace's view of the Analytical Engine and wrote:

“This does not imply that it may not be possible to construct electronic equipment which will ‘think for itself,’ or in which, in biological terms, one could set up a conditioned reflex, which would serve as a basis for ‘learning.’"

Today, Q-learning can approximate a thinking machine with Natural Intelligence. Q-Star (Q*) is a variant of Q-learning, a method in which agents learn to make decisions by trial and error, receiving rewards for successful actions. For a transformer, this means selecting the next token based on a long-term future reward rather than only the immediate prediction. Q* combines search (e.g., Monte Carlo Tree Search) with learning (token sampling and prediction). Search increases intelligence: the AI can take more time to deliberate without altering its parameters, dynamically trading off efficiency against deeper thinking. A toy sketch of this search-plus-learning loop is given below.

Q = q-learning

Star = embedded algorithms discovered and joined with the transformer architecture 
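As an illustration only, here is a minimal Python sketch of that search-plus-learning pattern: a stand-in learned prior proposes continuations, rollouts score them against a long-term reward, and a UCB-style rule decides where to spend further deliberation. The vocabulary, the policy_prior and terminal_reward functions, and the selection rule are all assumptions made for the example, not a reconstruction of any actual Q* system.

import math
import random

# Hypothetical vocabulary and stand-ins for what a trained transformer would
# supply: a prior over next tokens and a long-term reward for a finished sequence.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def policy_prior(prefix):
    # Uniform prior for illustration; a real model would condition on the prefix.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def terminal_reward(sequence):
    # Toy long-term reward: prefer sequences that end with a period and use
    # varied tokens. A real system might use a learned value function instead.
    ends_well = 1.0 if sequence and sequence[-1] == "." else 0.0
    return ends_well + 0.1 * len(set(sequence))

def rollout(prefix, depth=4):
    # The "learning" half: complete the sequence by sampling from the prior.
    seq = list(prefix)
    for _ in range(depth):
        probs = policy_prior(seq)
        seq.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return terminal_reward(seq)

def choose_next_token(prefix, simulations=200, c=1.4):
    # The "search" half: estimate a Q value for each candidate next token by
    # running rollouts, using a UCB1 rule to balance exploration and exploitation.
    stats = {tok: {"n": 0, "q": 0.0} for tok in VOCAB}
    for i in range(1, simulations + 1):
        def ucb(tok):
            s = stats[tok]
            if s["n"] == 0:
                return float("inf")          # try every candidate at least once
            return s["q"] + c * math.sqrt(math.log(i) / s["n"])
        tok = max(VOCAB, key=ucb)
        reward = rollout(prefix + [tok])
        s = stats[tok]
        s["n"] += 1
        s["q"] += (reward - s["q"]) / s["n"]  # incremental mean of observed reward
    return max(VOCAB, key=lambda t: stats[t]["q"])

print(choose_next_token(["the", "cat"]))

Raising the simulations budget lets the system deliberate longer on the same fixed parameters, which is exactly the efficiency-versus-deeper-thinking trade-off described above.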

November 23, 2023. The Andersen et al v. Stability AI Ltd. et al (Case 3:23-cv-00201-WHO) lawsuit involved artists Sarah Andersen, Kelly McKernan, and Karla Ortiz suing Stability AI, Midjourney, and DeviantArt...

The Andersen et al v. Stability AI Ltd. et al (Case 3:23-cv-00201-WHO) lawsuit involved artists Sarah Andersen, Kelly McKernan, and Karla Ortiz suing Stability AI, Midjourney, and DeviantArt for allegedly using their copyrighted works without permission in their generative AI software, specifically Stable Diffusion. Judge William Orrick dismissed most claims but allowed Andersen's direct copyright infringement claim against Stability AI to proceed. He also expressed doubt about the plaintiffs' arguments that AI-generated outputs are simply derivatives of the copyrighted images, highlighting the need for substantial similarity in such claims.

This case supports the notion that diffusion models are not Stochastic Parrots. Judge Orrick's skepticism about the AI-generated outputs being direct derivatives of the original artworks implies an understanding that these models, while trained on existing images, create new, transformative outputs. This suggests that diffusion models like Stable Diffusion do more than just replicate input data; they generate novel creations that are distinct from the source material, showcasing a level of originality and transformation beyond simple replication.

Full article can be found here

November 22, 2023. Kadrey v. Meta (Case 3:23-cv-03417) filed by Sarah Silverman and other authors against Meta focused on the allegation that Meta's large language models (LLMs), specifically LLaMA, were trained using copyrighted books without authorization...

Kadrey v. Meta (Case 3:23-cv-03417) filed by Sarah Silverman and other authors against Meta focused on the allegation that Meta's large language models (LLMs), specifically LLaMA, were trained using copyrighted books without authorization. The plaintiffs claimed that this training involved the illegal scraping of works from book torrenting websites. The case saw partial dismissal by Judge Vince Chhabria on November 20, 2023, who expressed skepticism about the plaintiffs' claims, particularly questioning the idea that text generated by LLaMA was a direct infringement of the authors' copyrights.

Kadrey v. Meta is another argument to support that LLMs don’t simply replicate or regurgitate information without understanding or originality. They are not Stochastic Parrots. Judge Chhabria's dismissal of claims based on the idea that LLaMA's output directly infringed copyrights suggests a recognition that the outputs of LLMs are not straightforward replications of input data but involve complex, transformative processes. This indicates an understanding that LLMs, while trained on existing data, generate outputs that are not mere echoes of their training material, but rather unique amalgamations and interpretations of learned information.

Full article can be found here

Building Consciousness into a SuperIntelligent AI

November 21, 2023. The advent of superintelligent AI presents a scenario that challenges our traditional understanding of containment and control. As Elon Musk once cautioned, "With artificial intelligence, we are summoning the demon."...

The advent of superintelligent AI presents a scenario that challenges our traditional understanding of containment and control. As Elon Musk once cautioned, "With artificial intelligence, we are summoning the demon." This analogy underscores the inherent risks in developing an intelligence that surpasses our own. The notion that one can "keep a superintelligent AI in a bottle" and assume it won't escape is a dangerous underestimation of its potential capabilities. Stephen Hawking echoed similar concerns, stating, "The development of full artificial intelligence could spell the end of the human race." These warnings highlight the necessity of aligning AI's objectives with human interests. 

The key may lie in imbuing AI with a form of consciousness akin to our own.  By integrating aspects of human-like consciousness, such as empathy, ethics, and understanding of human values, we could potentially ensure that when the "genie" of superintelligence is out of the bottle, it will act in harmony with human interests and well-being. This approach advocates for a proactive and synergistic relationship with AI, rather than one based solely on control or containment.

Getting to Super-Consciousness

November 20, 2023. The human brain, despite its remarkable capabilities, has inherent limitations when compared to modern digital technologies. Firstly, the biological neurons in our brain fire at a maximum rate of around 200 times per second (200 Hz), which pales in comparison to the speed at which present-day transistors operate...

The human brain, despite its remarkable capabilities, has inherent limitations when compared to modern digital technologies. Firstly, the biological neurons in our brain fire at a maximum rate of around 200 times per second (200 Hz), which pales in comparison to the speed at which present-day transistors operate, typically in the gigahertz (GHz) range. This discrepancy in speed highlights a fundamental difference in processing power between biological and electronic systems. Secondly, neurons in the human brain propagate signals relatively slowly, with a maximum speed of about 100 meters per second. In contrast, computer signals can travel at the speed of light, vastly outpacing the signal transmission within our nervous system.
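A back-of-the-envelope calculation makes the size of these two gaps explicit; the 3 GHz clock rate in the short Python snippet below is an assumed, typical figure for a modern processor core rather than a quoted specification.

NEURON_FIRING_HZ = 200           # maximum firing rate of a biological neuron
TRANSISTOR_CLOCK_HZ = 3e9        # assumed clock rate of a modern processor core
NEURON_SIGNAL_M_PER_S = 100      # fastest axonal conduction velocity
ELECTRONIC_SIGNAL_M_PER_S = 3e8  # speed of light, the ceiling for electronic signals

print(f"Switching-speed gap:   ~{TRANSISTOR_CLOCK_HZ / NEURON_FIRING_HZ:,.0f}x")
print(f"Propagation-speed gap: ~{ELECTRONIC_SIGNAL_M_PER_S / NEURON_SIGNAL_M_PER_S:,.0f}x")
# Roughly 15,000,000x and 3,000,000x respectively.

Even under these rough assumptions, electronics outpace biology by six to seven orders of magnitude on both measures, which is what motivates the speculation that follows.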

If these two limitations were removed or significantly mitigated in the human brain, the implications could be profound. Enhancing the firing rate of neurons to match the speed of modern transistors could dramatically increase the processing speed of the brain, potentially allowing for quicker thinking, faster learning, and more efficient problem-solving. Similarly, if neuronal signal propagation could be accelerated to approach the speed of light, it would vastly improve the rate at which information is communicated within the brain. This could lead to improved coordination, quicker reflexes, and possibly an enhanced ability to process and integrate complex information in real-time. However, such enhancements would fundamentally alter the nature of human cognition and could have unforeseen consequences on aspects of consciousness, identity, and the human experience as a whole.

Consciousness is deeply intertwined with the brain's processing capabilities and the speed of neural communication. If neurons could fire at the speed of modern transistors and signals could travel at or near the speed of light, it's conceivable that our experience of consciousness would be significantly altered. Such enhancements might lead to an expanded awareness, enabling individuals to process and interpret vast amounts of information almost instantaneously. This could lead to a heightened sense of perception, faster cognitive reactions, and perhaps the ability to experience time and thought in ways currently unimaginable. However, these changes could also challenge our understanding of self and identity. They might bring about new forms of cognitive overload or alter the way emotions and thoughts are experienced, potentially leading to a state of consciousness that is fundamentally different from what is currently known: super-consciousness.

These changes could also raise ethical and philosophical questions about what it means to be human in an era where our natural biological limits are transcended. 

November 17, 2023. The convergence of biotechnology and artificial intelligence (AI) marks a transformative era in medical research. Genetically modified mice, engineered to produce human-like proteins, serve as an efficient means to synthesize new protein molecules...

The convergence of biotechnology and artificial intelligence (AI) marks a transformative era in medical research. Genetically modified mice, engineered to produce human-like proteins, serve as an efficient means to synthesize new protein molecules with minimal human intervention. These proteins, often vital for treating various diseases, represent novel inventions that can be patented, incentivizing further research and development. Similarly, generative AI technologies like DeepMind's AlphaFold autonomously predict complex protein structures, reducing reliance on human computation and accelerating scientific discovery. Both these advancements illustrate how autonomous systems are increasingly playing a pivotal role in inventing and discovering new biomedical solutions, reshaping the landscape of medical research and therapy development.

Generative AI technologies and genetically modified mice present many parallels as tools in the realm of medical research, capable of assisting humans in inventing new protein molecules. Given their ability to autonomously generate novel and non-obvious biomedical solutions, the outputs of these advanced tools – be it AI-predicted protein structures or proteins synthesized by transgenic mice – rightfully merit consideration for patent protection, fostering innovation and advancement in medical science. 

Full article can be found here

November 14, 2023. While AI can assist in the process of invention, the ultimate credit lies with, and goes to, the human, who must remain in charge and supervise both the identification of the problem and the solution for the invention. The amount of supervision may matter less than the fact that the human remains in charge...

While AI can assist in the process of invention, the ultimate credit lies with, and goes to, the human, who must remain in charge and supervise both the identification of the problem and the solution for the invention. The amount of supervision may matter less than the fact that the human remains in charge. This human-centric approach is vital not only for maintaining the integrity of the patent system but also for ensuring that the rewards and responsibilities associated with a patent are borne by humans, who have the capacity for ethical and moral judgment. For example, AlphaFold by DeepMind, which can generate novel and potentially patentable protein structures, underscores this thesis. Such generated protein structures can be patentable as long as a human supervises the framing of the problem and is involved, at least to some extent, in selecting the resulting protein structure. Full article can be found here.

Generative AI & Law: Inventing Using Generative AI

November 14, 2023. Whether a novel invention is patentable depends on who and what it took to identify the problem and then come up with a novel solution to that problem. Once the problem is well defined, we can frame the questions for which Generative AI can readily find a solution better than what already exists;...

Whether a novel invention is patentable depends on who and what it took to identify the problem and then come up with a novel solution to that problem. Once the problem is well defined, we can frame the questions for which Generative AI can readily find a solution better than what already exists. If the human was not substantially involved in identifying the problem, and Generative AI did the work of both identifying the problem and finding the solution, the invention is not patentable. The USPTO has noted that Title 35 of the United States Code requires inventors to be natural persons.

Albert Einstein once said:

"If I had an hour to solve a problem and my life depended on it, I would use the first 55 minutes determining the proper question to ask, for once I know the proper question, I can solve the problem in less than five minutes."

The same applies when using Generative AI as a tool to assist us, the natural persons, in inventing. We need to at least be intimately involved and spend time identifying the problem and, to some extent, take part in decision-making while using Generative AI to solve it.

The visual shows Samuel Hopkins working on the new method of making potash, an ingredient used in fertilizer, which became the first patent ever filed and granted in the United States, on July 31, 1790. The patent was signed by President George Washington, Secretary of State Thomas Jefferson, and Attorney General Edmund Randolph.

Enhancing AGI Decision-Making with Emotional Intelligence

November 13, 2023. In "Les Passions de l'âme," René Descartes explores the intricate nature of emotions, providing insights on the development of AGI (Artificial General Intelligence). By simulating emotional responses, AGI could achieve more than enhanced human-AGI interaction;...

In "Les Passions de l'âme," René Descartes explores the intricate nature of emotions, providing insights on the development of AGI (Artificial General Intelligence). By simulating emotional responses, AGI could achieve more than enhanced human-AGI interaction; it could also replicate human-like decision-making processes. Descartes views emotions as fundamental to human cognition, arguing that they are intrinsically good and that their misuse or excess should be avoided. This perspective suggests that emotions, when properly harnessed, can guide rational decision-making. Applying this to AGI, emotional algorithms could be designed not just to mimic human responses but to aid in complex decision-making, where factors like empathy, ethical considerations, and social context are crucial. Such AGI systems would not merely process data but interpret it in a human-like manner, integrating rational thought with emotional intelligence, a combination Descartes views as central to human experience. This integration could lead to AGI systems that make decisions that are not only logically sound but also ethically and socially aware, mirroring the depth and nuance of human decision-making.

Learning from Les Passions de l'âme on building AGI: Emotions, Consciousness & Embodiment

November 13, 2023. In "Les Passions de l'âme," René Descartes delves into the intricate relationship between emotions (passions) and consciousness, primarily through the lens of the mind-body union. His approach in the "Meditations on First Philosophy"...

In "Les Passions de l'âme," René Descartes delves into the intricate relationship between emotions (passions) and consciousness, primarily through the lens of the mind-body union. His approach in the "Meditations on First Philosophy" provides context for this exploration, where he posits that sensations, appetites, and passions, though often a source of error in theoretical knowledge, are crucial for practical functionality. These passions are not to be referred solely to the mind or body but are a manifestation of their union, guiding our interaction with the external world and preserving the mind-body unity.

Descartes's correspondence with Princess Elisabeth of Bohemia further illuminates this relationship. Here, he discusses the mind's ability to control bodily passions and emphasizes that the mind and body form a genuine union. This interaction highlights the impact of emotions on health and well-being, suggesting that mental activities like reasoning and willing can be affected by bodily conditions, thus influencing our susceptibility to passions. 

This interplay between mind and body, as discussed by Descartes, suggests that passions, while rooted in bodily perceptions, are integrally connected to consciousness. They serve not just as physiological responses but as reflections of the deep intermingling of mind and body, shaping our conscious experiences and responses.

Applying Descartes' concepts on the passions (emotions) and consciousness to AI or AGI (Artificial General Intelligence) involves considering how AI machines such as Robometrics Machines can mimic or process human-like emotional responses and consciousness. Descartes' theory emphasizes the interplay between mind and body, suggesting that emotions are not just physiological but also reflective of cognitive states and require embodiment of some kind.

"Cogito, ergo sum” and Consciousness

November 10, 2023. René Descartes, the French philosopher, mathematician, and scientist, born in 1596, is widely regarded as a pivotal figure in the development of modern philosophy. His statement, "I think; therefore I am" ("Cogito, ergo sum" in Latin)...

René Descartes, the French philosopher, mathematician, and scientist, born in 1596, is widely regarded as a pivotal figure in the development of modern philosophy. His statement, "I think; therefore I am" ("Cogito, ergo sum" in Latin), first appeared in his work "Discourse on the Method" in 1637. This phrase forms the foundation of Cartesian philosophy and represents a fundamental element in Western philosophy. Descartes' statement was a response to his methodic doubt, a process where he doubted everything that could be doubted, to determine what he could know with certainty. In this context, the act of thinking itself became the undeniable proof of his own existence.

This phrase has profound implications in the realm of artificial intelligence (AI) and consciousness studies. It raises fundamental questions about the nature of reasoning, thinking, self-awareness, and, more broadly, our existence. In AI, this translates into the question of whether machines can possess consciousness or self-awareness. Descartes' ideas challenge us to consider what it means to think and exist, especially as AI systems become more advanced and seemingly capable of some level of reasoning and 'thought-like' processes. The exploration of machine consciousness today often circles back to Cartesian concepts, pondering whether an AI's ability to learn self-supervised, possess some level of common-sense reasoning, and make decisions could translate into self-awareness and consciousness similar to a human's in nature, if not in scale.

November 9, 2023. We have forged the Cali Tenant Counselor, a chatbot, upon the robust foundation of OpenAI's GPT-4 model. This AI companion is the culmination of exhaustive research, empowered with a comprehensive understanding of California's tenant laws from a wealth of gathered materials. It's a creation designed to demystify the complexities faced by tenants and landlords, providing precise guidance through the legal labyrinth. While it does not replace professional legal advice, our Cali Tenant Counselor is an exceptional resource, granting users the clarity and confidence needed to navigate tenancy matters with informed authority.

November 9, 2023. Here is the Virtual Immigration Law Counselor chatbot we created based on GPT-4 that is knowledgeable about the full U.S.C. Title 8 - Aliens and Nationality code, USCIS Policy Manual, and C.F.R. Titles 8 and 22. This intelligent tool can offer guidance on U.S. immigration law and procedures, leveraging deep insights from these extensive legal resources. It is designed to serve as an accessible starting point for individuals who may find themselves unable to afford costly legal representation or those facing language barriers that make navigating complex legal texts a challenge. While it does not substitute for professional legal advice, the Immigration Law Counselor chatbot provides crucial information and support, helping to demystify the intricacies of immigration processes and empower individuals on their journey through the U.S. immigration system.
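
For readers curious about the general pattern behind counselors like these, a minimal sketch follows, assuming the OpenAI Python SDK. The system prompt and the load_reference_excerpts helper are illustrative placeholders, not the actual implementation behind the Cali Tenant Counselor or the Virtual Immigration Law Counselor:

# Minimal illustrative sketch of a GPT-4-backed legal information chatbot.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
# load_reference_excerpts is a hypothetical retrieval step over the gathered
# materials (e.g., California tenant law or U.S.C. Title 8); not the real pipeline.
from openai import OpenAI

client = OpenAI()

def load_reference_excerpts(question: str) -> str:
    # Placeholder: return the statute or policy passages relevant to the question.
    return "…relevant excerpts from the gathered legal materials go here…"

def counsel(question: str) -> str:
    excerpts = load_reference_excerpts(question)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a legal information assistant. Answer only from "
                         "the provided excerpts and remind the user that this is "
                         "not a substitute for professional legal advice.\n\n"
                         "Excerpts:\n" + excerpts)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(counsel("Can my landlord raise the rent twice in one year?"))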

Consciousness & Intelligence 

November 8, 2023. Consciousness is the hallucination building the story in our mind every few nanoseconds. Consciousness builds up over time as we grow from a one-week-old to where we are now. The complexity and magnitude of consciousness....

Consciousness is the hallucination building the story in our mind every few nanoseconds. Consciousness builds up over time as we grow from a one-week-old to where we are now. The complexity and magnitude of consciousness are proportional to the number of neurons in a creature's brain, so a butterfly will have consciousness too, but much less than a human. The interconnection between neurons, together with their total number, determines how the brain evolves and develops consciousness and intelligence. One can be conscious without being intelligent. Machines, whether in physical form or software, powered by some level of AI, can be intelligent without being conscious.

We at Robometrics® Machines are working on modeling an evolving consciousness into Robometrics® AGI, approximating what a biological creature such as a human will have.

Embodiment, Mind and Eternal Life

November 7, 2023. The best approach to building artificial intelligence that includes general intelligence, common sense, intuition and consciousness is by understanding our biological intelligence and our body while creating it, not through simulations but...

The best approach to building artificial intelligence that includes general intelligence, common sense, intuition and consciousness is by understanding our biological intelligence and our body while creating it, not through simulations but through real interactions within the real world. The connection between visual perception and action is far too important to keep the two separate. Our brains create our minds to preserve our bodies, using their biological intelligence to move, develop, interact and survive in the physical world. The mind creates a steady, consistent and imaginable fiction through which we exist in our body. Embodiment is essential for the mind to develop and operate in the physical reality of the world.

In the future, it will be possible to move our mind from one body to another. But it won't be easy, even if we are able to capture our biological intelligence outside our mind. There are experiences and intelligence in the body itself that can only be learned over time in physical reality and cannot be transferred.

November 3, 2023. "I Am an American Day" was a patriotic event held on May 21, 1944, in Central Park, New York City. Originating from earlier Americanization Day observances, the event coincided with the broader context of World War II, aiming to foster a sense of national unity and patriotic pride...

"I Am an American Day" was a patriotic event held on May 21, 1944, in Central Park, New York City. Originating from earlier Americanization Day observances, the event coincided with the broader context of World War II, aiming to foster a sense of national unity and patriotic pride among the American populace, especially in a time of global conflict and uncertainty. 

Judge Hand, serving at that time as a Senior Judge of the United States Court of Appeals for the Second Circuit, was renowned for his intellectual depth and his articulate, thoughtful expressions on matters of law and liberty. One of the quotes from his speech, "The spirit of liberty is the spirit of Him who, near two thousand years ago, taught mankind that lesson it has never learned, but has never quite forgotten; that there may be a kingdom where the least shall be heard and considered side by side with the greatest," speaks to the essence of inclusive and equitable treatment. In the context of building AI regulations, this quote underscores the need for a balanced and fair approach that considers both large AI companies and small startups.

A monumental stride came with a sweeping Executive Order from President Biden aiming to reshape the AI landscape, with a spotlight on "dual-use foundation models." Although aimed at keeping tabs on powerful AI systems, this mandate could tip the scales in favor of large companies while posing a hefty burden on small AI startups, potentially stifling innovation due to compliance costs and time constraints.

Full article can be found here 

November 2, 2023. The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence sets out precise mandates for developing, auditing, and testing AI technologies. This prescriptive approach contrasts with the U.S. common law doctrine, which evolves through judicial interpretations and societal norms. By specifying exact guidelines and benchmarks, the Order may inadvertently embody a static regulatory stance, potentially stifling the fluidity and adaptability intrinsic to common law. This rigid framework might deter innovation and adaptability, especially among AI startups, thus possibly conflicting with the U.S. legal tradition of an evolving legal system. Read the full article here.

Law Philosophy: Reason is the Life of the Law

November 2, 2023. Sir Edward Coke (1552-1634) was a prominent English jurist and Member of Parliament whose writings and rulings significantly shaped the common law. He is renowned for defending the supremacy of common law against royal prerogatives. His statement, "Reason is the life of the law,"...

Sir Edward Coke (1552-1634) was a prominent English jurist and Member of Parliament whose writings and rulings significantly shaped the common law. He is renowned for defending the supremacy of common law against royal prerogatives. His statement, "Reason is the life of the law," reflects his belief in a rational and evolving legal framework, underscoring the importance of reasoned judgments in the development and application of legal principles. Through his extensive legal works, Coke laid the foundation for centuries of common law tradition, emphasizing the enduring and adaptive nature of law, guided by reason and judicial interpretation.

The philosophy of AI law should embody Sir Edward Coke's assertion that "Reason is the life of the law." This entails fostering a flexible, rational framework that adapts to the nuanced and rapidly evolving landscape of AI technology. Much like the common law evolves through reasoned judgments, AI law should be dynamic, capable of addressing new challenges and opportunities presented by AI advancements, ensuring a balanced, fair, and forward-looking legal framework for the digital age.

October 31, 2023. Human communication in English involves words, sentences, and sounds that convey meanings based on the language's semantics. This concept underpins the success of Large Language Models (LLMs), which interpret and recreate language semantics to form new sentences. Extending this idea, LLMs trained on molecular structures can elucidate biology's "semantics."...

Human communication in English involves words, sentences, and sounds that convey meanings based on the language's semantics. This concept underpins the success of Large Language Models (LLMs), which interpret and recreate language semantics to form new sentences. Extending this idea, LLMs trained on molecular structures can elucidate biology's "semantics." The quality of LLM outputs reflects the richness of Internet content in that domain, indicating the extensive knowledge available online. Foundation models and AI are extensions of human capabilities, transforming us into "Super-Scientists." Alan Turing's view, emphasizing the necessity of technological and AI advancement, aligns with this perspective. Turing's foundational work in computing and AI positions these technologies as natural progressions of human intellect. Similarly, Ada Lovelace, regarded as the first computer programmer, recognized AI and machines as extensions of human thought, capable only of what they are programmed to perform. She wrote in her notes on the Analytical Engine (1843):

"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."

AI and technology are seen not as deviations from human evolution but as continuations of our drive to understand, create, and evolve. They are pivotal in our journey as long as they augment rather than replace human capabilities. Full article can be found here.

Describing Transformer based AI as Alien Intelligence (Ai)

October 31, 2023. Describing LLMs (Large Language Models) as an alien intelligence is one way to convey that the hallucination, or creativity, inherent in these models can be considered a feature. (See the blog post on LLM with...

Describing LLMs (Large Language Models) as an alien intelligence is one way to convey that the hallucination, or creativity, inherent in these models can be considered a feature. (See the blog post on LLMs with Human-Like Responses Through Hallucination here https://www.robometricsagi.com/blog/ai-policy#h.czpvisga9x8b). An example would be some of the unique moves made by AlphaGo in its games of Go. LLMs may one day make it possible for us to understand and even communicate with other species on planet Earth, including one of the oldest of them all, sperm whales. This again points to the non-human or alien nature of such transformer-based LLMs. A better way to think about LLMs, and more generally transformer- or diffusion-model-based Foundation Models, is that they possess natural intelligence, a natural extension of human intelligence and creativity.

The Spirit of Liberty & AI Regulations

October 30, 2023. "I Am an American Day" was a patriotic event held on May 21, 1944, in Central Park, New York City. Originating from earlier Americanization Day observances, the event coincided with the broader context of World War II...

"I Am an American Day" was a patriotic event held on May 21, 1944, in Central Park, New York City. Originating from earlier Americanization Day observances, the event coincided with the broader context of World War II, aiming to foster a sense of national unity and patriotic pride among the American populace, especially in a time of global conflict and uncertainty. 

Judge Learned Hand's speech at this event is celebrated for its eloquent expression of the concept of liberty.

One of the quotes from his speech,

"The spirit of liberty is the spirit of Him who, near two thousand years ago, taught mankind that lesson it has never learned, but has never quite forgotten; that there may be a kingdom where the least shall be heard and considered side by side with the greatest," 

speaks to the essence of inclusive and equitable treatment. In the context of building AI regulations, this quote underscores the need for a balanced and fair approach that considers both large AI companies and small startups.

In the rapidly evolving field of generative AI, such principles are particularly pertinent. An AI regulation or a rule should aim to:

1. Encourage Innovation and Fair Competition: Regulations should not favor incumbents (large AI companies) over new entrants (startups), ensuring a level playing field where innovation can thrive. For instance, in discussions around AI patents or data access, it's crucial that smaller entities aren't disadvantaged. HIPAA on the healthcare side is a great example of an adaptive compliance standard: smaller healthcare entities are required to do less, while larger healthcare entities that have the means are expected to do more.

2. Consider Diverse Interests: As AI applications become more pervasive, the interests of various stakeholders, from multinational corporations to individual developers and public welfare, need to be balanced. This aligns with Judge Hand's call for considering all interests "alongside its own without bias."

3. Avoid Unintended Consequences: Regulations should be forward-looking and flexible to adapt to new developments in AI, preventing the stifling of innovation due to overly rigid rules.

The significance of Judge Hand's quote in this modern context lies in its reminder that regulations should be designed with empathy and an understanding of all parties involved, ensuring a fair, just, and innovative future for AI technologies. This approach is critical in avoiding monopolistic tendencies, promoting diversity in AI development, and ensuring that the benefits of AI are widely and equitably distributed.

Generative AI for Law: Technological Competence of a Judge & Prosecutor

October 26, 2023. The legal profession has witnessed significant shifts with the advent of technology, prompting the American Bar Association to amend its Model Rules in 2012 to emphasize "technological competence." This change mirrors global sentiments, as seen in the UK...

The legal profession has witnessed significant shifts with the advent of technology, prompting the American Bar Association to amend its Model Rules in 2012 to emphasize "technological competence." This change mirrors global sentiments, as seen in the UK and Canada, underscoring the imperative for legal professionals to stay abreast of digital advancements. The rise of Generative AI, capable of producing deepfakes, further accentuates this need. Not only must legal professionals understand these technologies, but they must also collaborate with AI experts to ensure comprehensive application in complex cases. This is particularly crucial as technological lapses in understanding can lead to biases in legal decisions. Continuing Legal Education (CLE) efforts, such as those in California, are adapting to this evolving landscape, highlighting the intersection of ethics, law, and technology. Read the full article here.

Generative AI for Law: Liar's Dividend

October 25, 2023. The "liar's dividend" is a term that refers to the added layer of deniability that dishonest actors gain in an environment where the authenticity of truth itself is in question. In essence, even when a piece of evidence (like a video or photograph) is genuine...

The "liar's dividend" is a term that refers to the added layer of deniability that dishonest actors gain in an environment where the authenticity of truth itself is in question. In essence, even when a piece of evidence (like a video or photograph) is genuine, the mere existence of technologies like deepfakes or generative AI can provide a shield for liars to claim that the evidence has been manipulated. This phenomenon complicates the already intricate task of discerning truth from falsehood, as every genuine artifact can potentially be dismissed as a fabrication.

In relation to deepfakes and generative AI, the liar's dividend can have far-reaching implications for justice, media, and public discourse. The belief in the "marketplace of ideas" suggests that truth is the proper and sufficient antagonist to error and that a free exchange of ideas is the most efficient way for truth to emerge. One of the most famous quotes related to this concept, as articulated in First Amendment opinions, is from Justice Oliver Wendell Holmes Jr. in his dissent in Abrams v. United States (1919): "The best test of truth is the power of the thought to get itself accepted in the competition of the market." In a world where the line between genuine and fabricated is blurred by advanced technologies, the liar's dividend threatens this very "free trade in ideas." Moreover, as Justice Louis Brandeis astutely observed, "Sunlight is said to be the best of disinfectants." Yet, with the shadow of the liar's dividend looming, even the brightest sunlight might struggle to uncover the truth in the era of deepfakes.

Tacitus on Regulation: The more corrupt the state, the more numerous the laws

October 24, 2023. The quote "The more corrupt the state, the more numerous the laws" is attributed to the Roman historian and senator Tacitus, who lived during the first century AD. Tacitus is celebrated for his comprehensive analysis of the early Roman Empire...

The quote "The more corrupt the state, the more numerous the laws" is attributed to the Roman historian and senator Tacitus, who lived during the first century AD. Tacitus is celebrated for his comprehensive analysis of the early Roman Empire, particularly in his works such as "The Annals" and "Histories." This quote reflects his astute observations of governance during his era. Tacitus likely recognized that in times of corruption or moral decay within a state, rulers may resort to enacting numerous laws and regulations as a facade of order and control. These laws, however, may serve more to consolidate power and perpetuate corruption rather than to uphold justice and virtue.

Today, regulators can draw a valuable lesson from Tacitus' insight. It underscores the importance of crafting clear, effective, and just regulations rather than relying solely on quantity to address societal issues. In the realm of AI and emerging technologies, this quote reminds regulators to prioritize quality over quantity when formulating regulations. A focus on transparent, well-considered oversight can be more effective in fostering innovation, ensuring ethical standards, and preventing corruption, even in complex and rapidly evolving domains like artificial intelligence.

Generative AI for Law: Deepfake Video Evidence & the Sixth Amendment

October 23, 2023. A visual is 650% more powerful than just using an oral presentation in litigation especially if there is a jury involved. The Connors v. United States case in the late 1980s marked a significant moment in legal history as it was one of the first instances where computer animation was...

A visual is 650% more powerful than an oral presentation alone in litigation, especially if a jury is involved. The Connors v. United States case in the late 1980s marked a significant moment in legal history as it was one of the first instances where computer animation was employed in a courtroom setting. The tragic incident in focus was the crash of Delta Air Lines Flight 191. To elucidate the intricate evidence and sequence of events leading up to the crash, the plaintiff's attorneys utilized a detailed 45-minute computer-generated presentation, showcasing the potential of technology in legal proceedings.

The common-sense explanation is that once a judge, jury, or lawyer sees a visual, they cannot unsee it. It is registered in the brain and, however hard we try, it will shape how we see the proceedings that follow. The problem is amplified in a criminal case. The Sixth Amendment to the Constitution of the United States provides:

“In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.”

You have the right to an attorney, but deepfake video evidence will also require an AI expert to assess the validity and accuracy of the video. Attorneys and even judges may not be technology-savvy enough to understand the use and power of current Generative AI technologies that can generate compelling deepfakes unless they themselves hire an AI expert to educate them.

From Hindenburg to Generative AI: The Dual Nature of Pinnacle Achievements

October 19, 2023.Field Marshal Paul von Hindenburg, a significant figure in German history, served as the President of the German Weimar Republic from 1925 until his passing in 1934. In recognition of his leadership and prominence, the German Zeppelin Company named their largest airship...

Field Marshal Paul von Hindenburg, a significant figure in German history, served as the President of the German Weimar Republic from 1925 until his passing in 1934. In recognition of his leadership and prominence, the German Zeppelin Company named their largest airship after him — the Hindenburg. This colossal and technologically advanced airship symbolized honor, sophistication, and the pinnacle of human achievement in aviation during its time. However, its tragic end on May 6, 1937, in Manchester Township, New Jersey, suggests that with more rigorous safety protocols, better engineering solutions, and constant vigilance, such a disaster could potentially have been averted. Similarly, Generative AI startups today, like Anthropic, OpenAI, Hugging Face, Cohere, AI21 Labs, and Stability AI, while being at the zenith of AI innovation, need to be proactive in addressing challenges, ethical concerns, and technological limitations to ensure their long-term success and avoid metaphorical "Hindenburg moments."

While the Hindenburg was a marvel in its era, its downfall serves as a poignant reminder of the unpredictable nature of success. As Generative AI companies emerge as frontrunners in the realm of artificial intelligence, they too must remain vigilant. Ensuring transparency in their algorithms, fostering a culture of continuous learning, prioritizing ethical AI use, and maintaining open channels of communication with their stakeholders can help avert potential pitfalls. The exponential pace of innovation and the intense competition in the AI space signify that even these industry sweethearts can face setbacks. However, with foresight, adaptability, and a commitment to responsible AI, these companies can navigate challenges effectively. The Hindenburg's tragic fate and the lessons it imparts emphasize the importance of preparation and humility, reminding Generative AI startups of the delicate balance between achieving great heights and ensuring sustained, grounded success.

Deepfakes & Caution in Conviction: Learned Hand's Message for Modern AI Jurisprudence

October 17, 2023. Judge Learned Hand, one of the most esteemed jurists in American history, is often celebrated not just for his legal acumen but also for his profound philosophical reflections on liberty and justice. His poignant observation, "The spirit of liberty is the spirit which is not too sure that it is right,"...

Judge Learned Hand, one of the most esteemed jurists in American history, is often celebrated not just for his legal acumen but also for his profound philosophical reflections on liberty and justice. His poignant observation, 

"The spirit of liberty is the spirit which is not too sure that it is right,

encapsulates a profound wisdom that underscores the humility and caution essential to the judicial process. This statement serves as a reminder that the pursuit of justice is an ever-evolving journey, fraught with the responsibility of constant introspection and skepticism. Its origins lie in a speech Hand delivered in Central Park, New York City, in 1944, during the "I Am an American Day" ceremony. In the age of artificial intelligence, this quote gains renewed significance. Today, as technologies enable the creation of convincing fake evidence, such as deepfake videos, it's crucial to approach such evidence with a measured skepticism, constantly questioning its authenticity and relevance. Hand's words remind us that in an era where truth can be easily manipulated, the spirit of liberty and justice requires an unwavering commitment to discernment and humility.

Generative AI for Law: Litigation in the Generative Native World (2024++)

October 16, 2023. Litigation in the Generative Native World (The year 2023 and beyond with generative AI) presents a complex and evolving landscape for the judicial system. As technology continues to advance, the cost of litigation is expected to rise...

Litigation in the Generative Native World (the year 2023 and beyond, with generative AI) presents a complex and evolving landscape for the judicial system. As technology continues to advance, the cost of litigation is expected to rise, reflecting the intricate nature of cases that involve evidence, such as a video or an image, that may have been derived using generative AI. A significant challenge arises because many prosecutors lack expertise in AI, potentially hindering their ability to effectively present or counter arguments. Civil cases, in particular, will see an expansion of the discovery phase, with strict deadlines for disclosure to ensure fairness and transparency. With generative AI, evidence is easier to fake using diffusion models, while on the flip side it is easy to scan through millions of discovery documents using LLMs (Large Language Models).

As Mahatma Gandhi put it in his philosophy of Satyagraha, or truth-force: “Truth never damages a cause that is just.”

The process of disclosure and discovery will become ever more important, emphasizing the need for thorough investigation and presentation of evidence now that audio and video can be created and manipulated easily using generative AI. The context and metadata around that evidence will become more important. Reliance on expert witnesses, such as AI experts, will intensify, especially in determining the validity and reliability of evidence. Furthermore, the preliminary review of cases will become more important and will be distinct from relevance under Rule 401 in the Generative Native World. As the world becomes more dependent on generative AI, the legal system must adapt to address the unique complexity presented by AI generally.
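
On the flip side noted above, a hedged sketch of how an LLM might be used to triage discovery documents for human review follows; the model choice, prompt, and documents are illustrative assumptions, not a description of any court-approved workflow:

# Illustrative sketch: flagging discovery documents for attorney review with an LLM.
# Assumes the OpenAI Python SDK; the model, prompt, and documents are placeholders.
from openai import OpenAI

client = OpenAI()

documents = [
    {"id": "DOC-001", "text": "Email thread discussing the revised delivery schedule..."},
    {"id": "DOC-002", "text": "Invoice and shipping metadata for the disputed order..."},
]

def flag_for_review(doc: dict, issue: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Reply RELEVANT or NOT RELEVANT, followed by one sentence of reasoning."},
            {"role": "user",
             "content": f"Issue: {issue}\n\nDocument {doc['id']}:\n{doc['text']}"},
        ],
    )
    return response.choices[0].message.content

for doc in documents:
    # Every flag would still be verified by a human attorney before any use in court.
    print(doc["id"], "->", flag_for_review(doc, "alleged breach of the shipping contract"))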

James Madison and the U.S. Constitution's Legislative Principles

October 2, 2023. "It will be of little avail to the people, that the laws are made by men of their own choice, if the laws be so voluminous that they cannot be read, or so incoherent that they cannot be understood,"...

"It will be of little avail to the people, that the laws are made by men of their own choice, if the laws be so voluminous that they cannot be read, or so incoherent that they cannot be understood," 

These lines are attributed to James Madison, one of the Founding Fathers of the United States and a pivotal figure in the drafting of the U.S. Constitution. Madison played a central role in the Constitutional Convention of 1787, during which the Constitution was formulated and adopted.

This quote encapsulates Madison's deep concern about the clarity and accessibility of laws crafted by the newly formed U.S. Congress. He expressed this sentiment during debates at the Constitutional Convention, emphasizing the importance of legislation that could be comprehended not only by lawmakers but also by the general public. Madison's wisdom in this regard highlighted his commitment to creating a system of governance that was transparent and accountable to the people.

The Constitutional Convention took place in Philadelphia from May 25 to September 17, 1787, and it was during this historic gathering that the U.S. Constitution was drafted. James Madison is often referred to as the "Father of the Constitution" due to his instrumental role in shaping and drafting the document. His contributions, including the drafting of the Virginia Plan, his advocacy for the separation of powers, and his commitment to individual rights, greatly influenced the final content of the Constitution.

Madison's quote reflects the foundational principles that guided the framers of the Constitution: the need for laws to be clear, comprehensible, and representative of the people's will. These principles continue to be central to the functioning of the U.S. government and the legal system.

The complexity of regulating Artificial Intelligence (AI) can lead to voluminous, intricate legislation that risks becoming incomprehensible, potentially hindering effective governance and accountability in this rapidly evolving field. Favoring strong AI oversight over extensive regulation is crucial to maintaining accessibility and effectiveness in governance, benefiting both legislators and the broader public.

September 29, 2023. The cases and ideas discussed, from Louis D. Brandeis's advocacy for transparency to Charles Lindbergh's belief in the marvels of exploration and Alan M. Turing's pioneering work in computer science...

The cases and ideas discussed, from Louis D. Brandeis's advocacy for transparency to Charles Lindbergh's belief in the marvels of exploration and Alan M. Turing's pioneering work in computer science, collectively emphasize the merits of oversight and transparency over heavy regulation in emerging technologies such as artificial intelligence. They underscore the importance of informed decision-making and data-driven governance in guiding innovation without stifling it. Striking this balance allows technology to venture into uncharted territory, promising progress and marvels while ensuring responsible development and safeguarding against potential misuse. Innovation and progress often require a degree of freedom from strict regulation or confinement. Justice Louis Brandeis once said, "Behind every argument is someone's ignorance."

Challenging the status quo and pushing boundaries often stem from questioning prevailing norms and regulations. Innovation thrives when individuals are free to challenge existing constraints. Generative AI resides firmly within this realm, offering a glimpse of what lies on the horizon as information technology advances at an exponential pace. With each passing day, we uncover new capabilities within Foundation Models, and yet, this is merely the inaugural chapter in the story of AI's evolution. Full article can be found here.

The Principle of “The marketplace of ideas" for AI Regulation

September 26, 2023. "The marketplace of ideas," a concept closely associated with the renowned philosopher John Stuart Mill, holds that addressing harmful or false notions should not involve censorship or regulation...

"The marketplace of ideas," a concept closely associated with the renowned philosopher John Stuart Mill, holds that addressing harmful or false notions should not involve censorship or regulation. Instead, it champions letting these ideas contend in an unfettered exchange of thoughts and opinions within society. 

According to Mill's perspective, as expounded in his influential essay "On Liberty," published in 1859, this open discourse allows good ideas to naturally triumph over bad ones. Mill's ethical framework of utilitarianism, which underpins his arguments in "On Liberty," extends the principle of individual freedom to society and the state. He believed that by permitting diverse and even contentious ideas to circulate freely, societies would ultimately benefit from the intellectual competition, fostering progress, and the betterment of society. This approach to liberty and open debate remains foundational in discussions of free speech and the regulation of ideas in modern democracies. As Mill eloquently stated in his work, 

"The worth of a state, in the long run, is the worth of the individuals composing it."

This is also the right way to think about recent developments in Generative AI and, more generally, AI. A state has neither the expertise nor the means to censor or regulate these technologies and the ideas behind them. Instead, the role of the state should be to facilitate and create the right environment for the unfettered exchange of such ideas, which is essential to the preservation of a free and democratic society.

Sir William Blackstone on Building Just Laws

September 25, 2023. Sir William Blackstone was an influential English jurist and legal scholar who lived in the 18th century. He is best known for his work "Commentaries on the Laws of England," which became a foundational text in the field of English law and legal philosophy...

Sir William Blackstone was an influential English jurist and legal scholar who lived in the 18th century. He is best known for his work "Commentaries on the Laws of England," which became a foundational text in the field of English law and legal philosophy. One of Blackstone's most enduring contributions to the field of criminal law is his famous quote, 

"It is better that ten guilty persons escape than that one innocent suffer." 

This quote encapsulates the core principle of justice that it is preferable for the legal system to err on the side of caution by acquitting individuals who may be guilty rather than risk unjustly convicting innocent individuals. This principle, often referred to as Blackstone's ratio or Blackstone's formulation, highlights the importance of protecting individual rights and innocence within the criminal justice system and has had a lasting impact on the development of legal systems around the world.

Blackstone's quote serves as a timeless reminder of the core values that should underpin any just legal system, including the presumption of innocence, the protection of individual rights, and the imperative to prevent the unjust suffering of innocent individuals, even if it means allowing some guilty individuals to go free. It challenges us to continuously improve our legal systems to ensure that justice is served while minimizing the risk of wrongful convictions.

September 25, 2023. The historical evolution of law school course curriculum for JD students in the United States has followed a remarkable trajectory. Initially characterized by informal apprenticeships, the modernization of legal education began with the founding of Harvard Law School in 1817...

The historical evolution of law school course curriculum for JD students in the United States has followed a remarkable trajectory. Initially characterized by informal apprenticeships, the modernization of legal education began with the founding of Harvard Law School in 1817, marked by a structured approach and the introduction of the case method. This pedagogical shift gained traction as other institutions adopted similar methods, and the American Bar Association's accreditation efforts further standardized curricula. Over time, legal education diversified, incorporating a wide range of legal subjects and offering elective courses to accommodate students' interests. The 20th century saw the establishment of clinical legal education, providing practical experience. Today, JD programs emphasize both theoretical and experiential learning, adapting to changing legal landscapes and societal needs, reflecting the dynamic nature of the legal profession and legal education.

The emergence of Generative AI is reshaping JD education by shifting the emphasis away from rote memorization and toward more practical, experiential learning. This evolution entails integrating AI education into the curriculum, promoting the ethical use of AI, and fostering students' ability to collaborate with AI systems effectively. Law clinics assume a central role, providing students with hands-on experience and leveraging AI to streamline routine tasks, enabling students to focus on client interactions, ethical considerations, and professional skills development. Mentorship programs and continuous adaptation ensure that JD graduates are not only well-versed in AI but also prepared to navigate the AI-augmented legal landscape, making them adept legal professionals equipped for the future. Full article can be found here

TechCrunch Disrupt, On-device Foundation Models & Robometrics® Machines

September 21, 2023. Most attention these days has been on using a Foundation Model either as part of a service through an application or via APIs, with the model itself hosted by a service provider such as Google...

Most attention these days has been on using a Foundation Model either as part of a service through an application or via APIs, with the model itself hosted by a service provider such as Google (Bard), OpenAI (GPT-3.5, GPT-4), Anthropic (Claude) or Hugging Face (various open-source Foundation Models). Third-party-hosted Foundation Models carry inherent costs and risks, such as API performance that degrades over time, privacy concerns, and a lack of transparency.

This year at TechCrunch Disrupt, there were sessions, such as The Next AI Evolution Moves into Your Hands by Qualcomm and Meta, that brought out these issues while also highlighting the need for having on-device Foundation Models to address pain points that include immediacy, privacy and security. When a guaranteed millisecond response time is required for upcoming use cases for Foundation Models such as AR/VR, precision location based services, or embodied AI, the models need to be hosted on-device. Privacy and security are also of utmost importance for such Foundation Model use cases. 
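
To make the hosted-versus-on-device contrast concrete, here is a minimal sketch of running a small open-weight model locally with the Hugging Face transformers library. The model name is an arbitrary example; real on-device deployments typically add quantization and hardware-specific runtimes, and this is not the Robometrics® stack:

# Illustrative sketch: local (on-device) text generation instead of a hosted API.
# Assumes pip install transformers torch; distilgpt2 is just a small demo model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "On-device foundation models matter for latency-sensitive use cases because"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])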

At Robometrics® Machines, we are building machines that feel and continuously learn, self-supervised, on-device using our custom-built Generative AI models specifically optimized for on-device operation. Our North Star is to build on-edge general AI, Artificial General Intelligence (AGI), not superintelligent artificial intelligence at scale. Our two primary use cases are general aviation and healthcare, where immediacy, privacy and security are critical. We are leaders in Generative AI, as we are the first to deploy custom-built LLMs that can continuously learn on-device. You can find more about our on-device Foundation Models here.

September 20, 2023. The evolution of the legal industry's billing practices and the integration of Generative AI in law are reshaping traditional norms and challenging the long-standing billable hour model. Reginald Heber Smith's pioneering efforts in 1914...

The evolution of the legal industry's billing practices and the integration of Generative AI in law are reshaping traditional norms and challenging the long-standing billable hour model. Reginald Heber Smith's pioneering efforts in 1914 marked the inception of the billable hour system, which has remained a cornerstone of legal practice. However, this model has faced criticism over the years for encouraging overbilling and commodification of legal services. With the advent of Generative AI, tasks that were once billed hourly, such as contract drafting and legal research, are now accomplished more efficiently and cost-effectively. This technological shift has significant implications for legal professionals, particularly junior associates and entry-level lawyers, as it reduces demand for their roles. Salaries may stagnate or decrease, exacerbating existing challenges in the job market for inexperienced legal professionals. In response to these changes, the legal industry is moving toward an Efficiency-Centric Legal Business Model, similar to how tech giants compensate their software developers, emphasizing project-based or fixed salary compensation with bonuses tied to efficiency and client satisfaction. This paradigm shift reflects the industry's move towards outcome-oriented, client-centric legal services, aligning more closely with real-world practice and enhancing the value delivered to clients. This transformative approach can be termed "Agile Legal." Full article can be found here

Generative AI for Law: Evolution of the Associate Attorney Role

September 19, 2023. We attended the Generative AI for Lawyers event at UC Law San Francisco today. Topics of discussion ranged from how Generative AI is changing a) the job description of a fresh-out-of-law-school associate attorney in a law firm, b) the law firm...

We attended the Generative AI for Lawyers event at UC Law San Francisco today. Topics of discussion ranged from how Generative AI is changing a) the job description of a fresh-out-of-law-school associate attorney in a law firm, b) the law firm business model, and c) the law school curriculum, to d) why the legal ecosystem is changing now when it has profited for centuries by focusing on billable hours instead of value. Let's cover (a), the evolution of the associate attorney role.

The role of a newly graduated associate attorney in a law firm has evolved. Traditionally, these individuals were tasked with creating initial drafts of legal documents and conducting legal research, including document summaries. However, with the advent of advanced Large Language Models (LLMs) such as GPT-4, Llama 2, BloomZ, Falcon, and others, the legal research and document drafting processes have become significantly more efficient, operating at a staggering 99% faster pace while maintaining quality comparable to that of a junior associate. 

In light of this technological shift, the responsibilities of associate attorneys have shifted as well. The revised job description now encompasses tasks such as conducting thorough quality checks on LLM-generated work, strategically determining when and how to delegate certain aspects to LLMs, and possessing the intuition and common-sense reasoning to ascertain the readiness of legal documents for client presentation. Consequently, a significant portion—99%—of an associate attorney's time will be dedicated to engaging in high-stakes interactions and building essential social connections, particularly in understanding and addressing the unique needs of our valued clients. This transformation underscores the crucial role of associate attorneys in fostering client relationships and ensuring the highest standards of legal service delivery.

LLMs: Human-Like Responses Through Hallucination

September 12, 2023. In April, 2023 we wrote that imagination, intuition, and inspiration is what makes us human. We also miswrote that  LLMs (Large Language Models) such as GPT4 are just knowledge as we have seen since then...

In April 2023 we wrote that imagination, intuition, and inspiration are what make us human. We also miswrote that LLMs (Large Language Models) such as GPT-4 are just knowledge; we have seen since then that LLMs are much more than a tl;dr lossy compressed database of the last few hundred years of humanity. LLMs can hallucinate, similar to us humans. In other words, like human beings, they imagine things that are not factual and do not exist.

Albert Einstein remarked, as his theory of relativity was gaining ground around the world, "I am enough of an artist to draw freely upon my imagination. Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world." He continued, "I believe in intuitions and inspirations. I sometimes feel that I am right. I do not know that I am… [but] I would have been surprised if I had been wrong."

Hallucination and imagination are indicative of creativity, leading to new ideas and inventions. It is worth noting that in humans, hallucination and imagination are two distinct cognitive processes, but they share some similarities in that they both involve the generation of mental images or perceptions. 

Simply put, hallucination may be the best feature that exists today in an LLM. It may also be part of why today's LLMs, such as GPT-4, BloomZ, LLaMA and Falcon, continue to amaze us humans. Common-sense reasoning will be next.

September 11, 2023. More recently, there have been conversations about LinkedIn profiles in this new Generative Native evolution of, once upon a time, the Physical Native world. The timing, not surprisingly, coincides with an alert memo from the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN), alerting...

More recently, there have been conversations about LinkedIn profiles in this new Generative Native evolution of, once upon a time, the Physical Native world. The timing, not surprisingly, coincides with an alert memo from the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN), alerting U.S. financial institutions and the broader public to pay attention to a scam called “pig butchering.”

One thread has focused on what a LinkedIn profile created by a real human may contain, while another interesting thread has been about what a LinkedIn profile created by AI for a virtual or real human may contain.

A real LinkedIn profile can have a lot of information or just a few minimum lines. This can be either because someone is using LinkedIn as their primary social medium or doesn't use it at all. A LLM (Large Language Model) could have been used to generate the text. Hence, it is hard to tell whether the profile is for a real human or a virtual one based solely on the quality and quantity of the text. AI text detectors have also been found to be inaccurate. The only way to determine if the profile text is not factual is through common sense reasoning — something that LLMs sometimes struggle with, as well as scammers who may not be familiar with the customs, culture, and career progressions in a specific field.

When we moved from the Physical Native to the Digital Native world, a LinkedIn profile photo was usually a good reflection of oneself and a means to authenticate that the profile belongs to a real human. Today, that's not possible. With Stable Diffusion or MidJourney, you can look similar to anyone and be anywhere. A professionally shot profile photo may very well look like the ones generated by Stable Diffusion. Although there are some poses in a profile photo that are hard to generate using a diffusion model, common sense reasoning is difficult to apply. My best approach has been to try to move out of the digital space to the physical space as soon as possible for important conversations and projects. This can be for investing, interviewing a potential candidate for my company or meeting someone for a personal reason. 

Full article can be found here.

August 31, 2023. Our article explores the pivotal role of humans in Reinforcement Learning through Human Feedback (RLHF) and introduces the transformative potential of replacing human feedback with AI-driven feedback in RLHF, known as Reinforcement Learning from AI Feedback (RLAIF). While humans bring domain expertise, context, ethics, and adaptability to RL training, AI feedback offers scalability and automation. The choice between human and AI feedback depends on the task's nature and the AI system's quality, highlighting the importance of striking a balance between human expertise and AI automation for optimal RL training outcomes. Full article here
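To make the distinction concrete, here is a minimal, hypothetical Python sketch (not the article's implementation) of the one place where RLHF and RLAIF differ: who supplies the preference label. The names PreferencePair, label_with_human, and label_with_ai_model are illustrative assumptions.

```python
# Minimal sketch: collecting preference labels for RL fine-tuning, where the
# labeler is either a human (RLHF) or another model (RLAIF). All names here
# are hypothetical stand-ins, not a real library API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PreferencePair:
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b"

def label_with_human(prompt: str, a: str, b: str) -> str:
    # Placeholder for a human annotation UI; here we simply ask on the console.
    choice = input(f"Prompt: {prompt}\nA: {a}\nB: {b}\nPrefer (a/b)? ").strip()
    return "a" if choice.lower().startswith("a") else "b"

def label_with_ai_model(prompt: str, a: str, b: str) -> str:
    # Placeholder for an AI judge (RLAIF): a real judge would score responses
    # against a written rubric or "constitution"; this is a trivial stand-in.
    return "a" if len(a) >= len(b) else "b"

def collect_preferences(samples, labeler: Callable[[str, str, str], str]) -> List[PreferencePair]:
    pairs = []
    for prompt, a, b in samples:
        pairs.append(PreferencePair(prompt, a, b, labeler(prompt, a, b)))
    return pairs

# Usage: swap the labeler to move from RLHF to RLAIF.
samples = [("Explain gravity to a child.", "Gravity pulls things down.", "It is complicated.")]
rlaif_data = collect_preferences(samples, label_with_ai_model)
```

Swapping the labeler function is the only change; the downstream reward-model training and RL loop stay the same, which is why the choice largely comes down to cost, scale, and the quality of the AI judge.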

August 28, 2023. Delving into the realm of science fiction, Douglas Adams' Infinite Improbability Drive in "The Hitchhiker's Guide to the Galaxy" whimsically exemplifies navigating improbable scenarios through a blend of quantum mechanics and chaos theory. In parallel, the world of generative AI, epitomized by models like GPT-4, generates surprising text and content by tapping into a landscape of improbable possibilities. These seemingly distinct concepts converge on the premise of embracing the unexpected to fuel creativity. While one resides in fiction and the other in technology, both invite us to explore the captivating realm where imagination and innovation intertwine. Full article here

August 24, 2023. With companies like Google and OpenAI at the forefront of LLM development, there's a tendency to view these models as potential moats – competitive advantages that can protect a company from rivals. However, a closer examination reveals that LLMs, while groundbreaking, are not standalone moats. Infrastructure, user base, and data can be a moat for a corporation; LLMs alone cannot. As the AI landscape matures, companies that grasp this holistic perspective and strategically combine these elements will emerge as industry leaders, their moats fortified by the depth and breadth of their approach. LLMs, while transformative, are only one piece of the puzzle, and the true magic lies in assembling these pieces into a compelling and enduring competitive advantage. Full article here

Nightfall Odyssey: Riding the Wave of Innovation with Thunder, a Cruise Robotaxi in San Francisco

August 22, 2023. Embarking on a nocturnal escapade with a Cruise robotaxi named Thunder through the enchanting streets of San Francisco evokes a surreal blend of innovation and nostalgia. 

Embarking on a nocturnal escapade with a Cruise robotaxi named Thunder through the enchanting streets of San Francisco evokes a surreal blend of innovation and nostalgia. As the city's iconic landmarks are bathed in the soft glow of streetlights, the gentle hum of the electric vehicle and the AI-guided precision of the journey create a unique symphony of technology and urban charm, making every moment a harmonious adventure into the future of transportation. Adding to the adventure, this was on the way back from an AI event last night.

Amid the night's hushed embrace, the Thunder robotaxi found itself entrapped behind a stationary truck, its occupants unloading cargo. Patient minutes ticked by until the taxi's AI deduced the truck's immobility, prompting a stealthy maneuver to the left—liberation found through innovation, the journey resumed under the stars.

The ride aboard Thunder the robotaxi proved, overall, an exhilarating adventure, seamlessly blending novelty and excitement with occasional nerve-wracking moments. The nocturnal journey through iconic streets brought a surreal fusion of innovation and nostalgia, while encountering a stuck truck added an unexpected twist. Despite the brief pause, Thunder's AI-guided finesse prevailed, underlining the thrilling unpredictability of this new era in transportation.

August 15, 2023. In our relentless pursuit of technological advancement, we find ourselves in the midst of creating machines that serve as our proxies in various realms. These surrogates, woven 

In our relentless pursuit of technological advancement, we find ourselves in the midst of creating machines that serve as our proxies in various realms. These surrogates, woven with artificial intelligence, seamlessly mimic our actions and thoughts, blurring the line between human agency and machine execution. Norbert Wiener, an MIT mathematician and philosopher, wrote in his 1950 book The Human Use of Human Beings: Cybernetics and Society:

“No person should calmly transfer to the machine made in his own image the responsibility for his choice of good and evil, without continuing to accept a full responsibility for that choice.”

There is a need for these surrogate machines to internalize the human values that vary based on the society they operate in.  Further, Norbert Wiener's evocative notion of a "machine of flesh and blood" encapsulates the enigmatic intersection of human and machine. As these two entities converge, we grapple with a profound uncertainty, questioning where one entity fades into the other, and a new paradigm of existence emerges.  

Full article can be found here 

August 14, 2023. Exploring the application of Isaac Asimov's Three Laws of Robotics in contemporary contexts, this article delves into the ethical complexities faced by robot taxis encountering threats. Balancing human safety against AI integrity, the discussion underscores the significance of integrating ethical considerations in tandem with technological advancements. Asimov's laws offer a foundational framework for addressing intricate real-world scenarios, emphasizing the need to align AI systems with human values to ensure the well-being of individuals in an evolving landscape of autonomous technology. Full article here

July 19, 2023. The essence of learning is not just building competency. Neuroplasticity and LLMs have shown us the value of gaining diverse experience. We know that if you design AI that approximates the architecture of the human brain in some respects, you will see AI responses that are similar in semantics to how we respond (see https://www.linkedin.com/pulse/you-sure-unveiling-quest-human-like-artificial-aditya). But it may also be a view into how we can learn better and evolve as human beings.

“Are you sure?”: Unveiling the Quest for Human-Like AI

July 13, 2023. If you design AI that approximates the architecture of the human brain in some respects, you will see AI responses that are similar in semantics to how we respond, validating that the architecture we implemented is indeed human-like. A fascinating aspect of human cognition is the ability to reevaluate and correct our answers through further research and negotiation. This capability is now being replicated in AI systems such as language models like ChatGPT. For instance, when posed with a question, an AI model may provide an initial response. However, when faced with a subsequent challenge (“Are you sure?”) or counter-argument, both humans and AI have the capacity to reassess their answers, gather additional information, and adjust their conclusions accordingly. Full article here.
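For readers who want to try the pattern themselves, here is a minimal, hypothetical sketch of the “Are you sure?” loop. ask_model is a stand-in stub, not a real API, and would be replaced by a call to whichever chat LLM you use.

```python
# Minimal sketch of the "Are you sure?" pattern: pose a question, then issue a
# follow-up challenge and compare the two answers. ask_model is a hypothetical
# placeholder for any chat-LLM call.
def ask_model(history):
    # Placeholder: a real implementation would send the full conversation
    # history to an LLM and return its reply.
    return "17 is prime." if len(history) == 1 else "Yes, I re-checked: 17 is prime."

def ask_with_challenge(question: str):
    history = [{"role": "user", "content": question}]
    first = ask_model(history)
    history += [{"role": "assistant", "content": first},
                {"role": "user", "content": "Are you sure?"}]
    second = ask_model(history)
    revised = first.strip() != second.strip()
    return first, second, revised

first, second, revised = ask_with_challenge("Is 17 a prime number?")
print(first, "->", second, "| answer revised:", revised)
```

Whether the second answer changes, and how it changes, is itself a useful signal about the model's confidence in the first one.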

Designers, creativity and AI

June 22, 2023. One way to look at whether AI can be creative is to do a low-fidelity Turing Test using Aristotle's proof by contradiction. The second way is to figure out whether an AI possesses the capabilities that make humans creative. While transformer models and latent diffusion models have been shown to approximate some of these capabilities, human creatives far exceed such models. Full article here.

Humanness in an LLM

June 22, 2023. While humans go through curated knowledge learning over time, LLMs learn from a corpus of public data available on the Internet with minimal curation and few humans in the loop...

While humans go through curated knowledge learning over time, LLMs learn from a corpus of public data available on the Internet with minimal curation and few humans in the loop. Internet data from sources such as Reddit, Facebook, Twitter, and other large social media platforms is not representative of a population, since not all demographics contribute to these platforms. This makes LLMs trained on an Internet data corpus alone considerably more susceptible to producing negative sentiment, and makes such LLMs a reflection of our innermost desires and our consciousness. After all, how many times have you expressed gratitude to someone compared to criticizing or saying nothing at all?

For details check out the full article by our Founder, Aditya Mohan here.

June 19, 2023. An LLM is a reflection of our innermost desires. Hallucinations are a characteristic of the human mind, and LLMs have inherited such capabilities. Just like the human mind, LLMs have also been shown to have unexplored and unexpected capabilities. In humans, psychedelic mushrooms with psilocybin have been known to rewire the brain and amplify such capabilities. LLMs are more than just stochastic parrots.

Transformers: Dark of the Moon (2011)   

June 14, 2023. In this movie, the primary focus is on the conflict between the Autobots and the Decepticons, alien robot factions from the planet Cybertron...

In this movie, the primary focus is on the conflict between the Autobots and the Decepticons, alien robot factions from the planet Cybertron. The storyline revolves around a historical event involving the Apollo moon missions and the discovery of a crashed Cybertronian spacecraft on the moon. 

Although the movie does not directly delve into artificial intelligence or artificial general intelligence in the traditional sense, it showcases advanced robotic beings with highly intelligent and autonomous capabilities. The Transformers, both Autobots and Decepticons, possess advanced cognitive abilities, learning capabilities, and complex decision-making processes.

While the Transformers may not represent the conventional depiction of AI or AGI, they embody a fictional portrayal of highly intelligent and sentient machines. Their ability to transform, communicate, and engage with humans reflects a form of advanced robotic intelligence. Our company's product Amelia™ can also communicate and engage with humans in a positive way; she can perceive human emotions and react accordingly.

Robometrics® AGI in Metaverse 

June 13, 2023. Robometrics® AGI systems may be capable of simulating human-like companionship within the metaverse. By leveraging natural language generation, facial expression synthesis, and other technologies, Robometrics® AGI companions could exhibit human-like behaviors, enabling users to develop meaningful connections and engage in realistic interactions. By analyzing an individual's preferences, interests, and emotional state, Robometrics® AGI could create virtual companions that are uniquely suited to engage with users and provide personalized support, conversation, and companionship.

Robometrics® AGI on Autonomous Car

June 9, 2023. Robometrics® AGI can engage in natural and intelligent conversations with passengers, providing companionship during the journey. 

Robometrics® AGI can engage in natural and intelligent conversations with passengers, providing companionship during the journey. It can understand and respond to voice commands, answer questions, share information, and engage in meaningful dialogue, creating a more interactive and enjoyable experience.  Robometrics® AGI can also offer emotional support and companionship to passengers, especially during long or potentially stressful journeys. It can detect and respond to emotional cues, provide empathy, and engage in supportive conversations, helping passengers feel more at ease and connected. 

June 5, 2023. Anthropomorphizing machines is not a new phenomenon and is universal...

Anthropomorphizing machines is not a new phenomenon and is universal. The fear of machines that will kill us, intimidate us, or betray us, while also not new, is certainly not universal. The West, including America and Britain, had slavery in its recent history, which makes us afraid and prone to imagining ourselves enslaved by digital superintelligence. In cultures that have less history of slavery, this is not the case, with Japan being a good example where the fear of being enslaved by superintelligence doesn’t exist. Interestingly, Japan has a tradition of Animism. AGI (Artificial General Intelligence) can be considered as a tool or as an alien creature that has been discovered or invented by humans. If it is a tool (an ‘it’), AGI and its variants can have an interface that makes it something (as opposed to someone) that is at our service, friendly, willing to be touched, and ready to be kicked out at our pleasure. Anthropomorphizing AGI into a creature (a ‘she’ or a ‘he’) in this Generative Native world can have benefits, along with some unforeseen consequences. Read the full article here.

June 1, 2023. It was clear from Alan M. Turing's arguments in 1950, and it holds true even now, that a human cannot be a machine and a machine, just like humans, can be unpredictable.

It was clear from Alan M. Turing's arguments in 1950, and it holds true even now, that a human cannot be a machine and a machine, just like humans, can be unpredictable.

Turing, in Computing Machinery and Intelligence, published in the philosophy journal Mind, 1950:

“To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree.”

Any solution built with an expectation of full explainability and transparency is neither possible nor reasonable. Deep learning algorithms especially, and the AI models built using them such as GPT-4, Stable Diffusion, MidJourney, ChatGPT, and Google Bard, have some level of intuition and replicate to a certain degree how the human mind thinks and makes decisions. Such advanced AI solutions do not provide a clear logic or path showing how a decision may have been derived.

Details on the above, in the context of AI policy and governance, can be found here in our Founder & CEO Aditya Mohan's policy article with Roger Bickerstaff from Bird & Bird Law Firm.

The Matrix

May 30, 2023. The story follows a computer programmer named Thomas Anderson, also known as Neo, who is contacted by a group of rebels led by Morpheus...

The story follows a computer programmer named Thomas Anderson, also known as Neo, who is contacted by a group of rebels led by Morpheus. Morpheus believes that Neo is the prophesied figure called "The One," who has the ability to free humanity from the Matrix. The rebels reveal the truth to Neo, explaining that the world he knows is an illusion and that humans are being used as an energy source by the machines. As Neo undergoes training, he discovers his extraordinary abilities within the Matrix, allowing him to bend the rules of this simulated reality. Joined by other rebels, including Trinity and Morpheus, Neo fights against the Agents, powerful AI entities that police the Matrix and seek to maintain control. The film explores themes of reality, perception, and the nature of existence. Neo's journey mirrors the hero's journey, as he confronts his doubts, embraces his destiny, and challenges the dominance of the machines. Ultimately, Neo becomes determined to liberate humanity from the Matrix and bring an end to the machines' reign.

May 29, 2023. Alan M. Turing, in October 1950, in his paper Computing Machinery and Intelligence in the philosophy journal Mind: “It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances.”

Turing mentioned the ‘rules of conduct’ and ‘laws of behaviour’ that govern what we should do and what we can do. Rules of conduct are in line with the Napoleonic Code that forms the basis of legal systems in many parts of the world. The laws of behaviour are in line with Newton's Laws of Motion, as described scientifically by human physiology and psychology.

Borderlines Podcast on the Use of AI in the Canadian Immigration System

May 26, 2023. Steven Meurrens and Deanna Okun-Nachoff, two Canadian immigration lawyers, interviewed our Founder & CEO, Aditya Mohan, for their Borderlines podcast on the topic of the use of AI in the Canadian Immigration System...

Steven Meurrens and Deanna Okun-Nachoff, two Canadian immigration lawyers, interviewed our Founder & CEO, Aditya Mohan, for their Borderlines podcast on the topic of the use of AI in the Canadian Immigration System. We covered how AI can help make the existing immigration system better for all the stakeholders involved, from the state and the individual to the specific immigration agency in context. The conversation covered advanced AI such as GPT-3 and Foundation Models, areas that are not yet mature for use in immigration systems. Immigration systems in the US are built on the assumption that decisions are 100% accurate. Transformer language models, Foundation Models, and deep learning generally are probabilistic systems, not predictive ones. Unlike traditional software systems, trust is not baked in but will come over time. There is also a need for compassion in adjudicating applications such as those of asylum seekers, an area where automation and process optimization using AI may not be the right thing to do even if possible. At the same time, areas such as machine translation for immigrant applications can be extremely helpful.

May 24, 2023. Whether you talk about Bridges v South Wales Police, related to the lawfulness of the use of facial recognition technology to screen against “watchlists” of wanted persons in police databases at football matches at the Cardiff football ground, or the matter of the UK examinations regulator Ofqual over the use of algorithms for the 2020 UK schools public exam replacement process caused by the Covid-19 epidemic, transparency needs to be at the heart of AI regulation...

Whether you talk about Bridges v South Wales Police, related to the lawfulness of the use of facial recognition technology to screen against “watchlists” of wanted persons in police databases at football matches at the Cardiff football ground, or the matter of the UK examinations regulator Ofqual over the use of algorithms for the 2020 UK schools public exam replacement process caused by the Covid-19 epidemic, transparency needs to be at the heart of AI regulation. In this third article, Roger Bickerstaff (Bird & Bird Law Firm) and I provide more details of how such a regulatory framework would work, where the transparency requirement does not demand an extensive level of IP disclosure and can be decided by the organization holding the IP, in light of the benefit the customer gets from the confidence built by such disclosure of the AI algorithm. Patent disclosures work along similar lines and have been successful for decades worldwide. For example, Greenpeace has made active use of patent application disclosures in scrutinizing biotechnology patents.

May 23, 2023. Overfitting problems in neural networks, including our brain's neural network, can be addressed through art, friction, and sleep. Sleep helps eliminate irrelevant thoughts and signals, allowing space for new information.

Overfitting problems in neural networks, including our brain's neural network, can be addressed through art, friction, and sleep. Sleep helps eliminate irrelevant thoughts and signals, allowing space for new information. Dreams aid in generalizing learning and applying it to different areas, contributing to common sense. Appreciating art is also beneficial. Enhancing a Large Language Model's (LLM) common sense capabilities can be achieved either by training an LLM on art and friction or by allowing the LLM to "dream" by incrementally learning from user interactions.

Common sense is vital for artificial general intelligence, which aims to mimic the human brain's limbic system and consciousness. The goal is not just superintelligent AI but artificial life capable of feeling and consciousness. For details, check out our article here.
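As a loose illustration of the analogy (not anything from the article), here is a minimal PyTorch sketch in which input noise plays the role of "art and friction" and dropout plays the role of "sleep" pruning unimportant signals. PyTorch is assumed; the architecture and numbers are arbitrary.

```python
# Minimal sketch of the analogy: noise injection ("friction") and dropout
# ("sleep") are classic regularizers against overfitting. Illustrative only.
import torch
import torch.nn as nn

class NoisyRegularizedNet(nn.Module):
    def __init__(self, noise_std: float = 0.1, drop_p: float = 0.3):
        super().__init__()
        self.noise_std = noise_std          # "friction": input noise during training
        self.body = nn.Sequential(
            nn.Linear(32, 64),
            nn.ReLU(),
            nn.Dropout(p=drop_p),           # "sleep": randomly silence units
            nn.Linear(64, 2),
        )

    def forward(self, x):
        if self.training and self.noise_std > 0:
            x = x + torch.randn_like(x) * self.noise_std
        return self.body(x)

model = NoisyRegularizedNet()
model.train()
_ = model(torch.randn(8, 32))   # noisy input, dropout active (training)
model.eval()
_ = model(torch.randn(8, 32))   # clean input, dropout off ("awake" inference)
```

Both mechanisms inject randomness during training and are switched off at inference, which is why they help a network generalize rather than memorize.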

May 23, 2023. Overfitting problems in neural networks (NN) may not be confined to artificial NNs but may also occur in our brain's NN. Art and friction add the noise needed to counter this, along with sleep. Sleep helps do garbage collection within our brain's NN, removing the unimportant signals and thoughts so that we free up space for the next day. In addition, it is when we dream, and it appears that dreams help us generalize our learning so that we can apply what we learned to other areas, which is what common sense is all about.

Dream well, sleep well, and take the time to appreciate art, fellow humans.

May 18, 2023. Light Touch Regulatory Framework for AI. Our CEO's work with Roger Bickerstaff from Bird & Bird Law firm is finally published. While the US tends to be 'hands-off' when it comes to regulations, Europe is 'hands-on'. As we know, neither approach has done well: Europe is behind the US when it comes to startups, and the US is the "Wild West". A compromise is needed. We recommend that a ‘light-touch’ but graduated regulatory framework for innovative tech, with components of self-regulation, should be developed.

May 16, 2023. Our CEO's law publication with his friend and Professor of Law, Richard A. Boswell at the University of California, Hastings College of the Law, "Making Immigration Law Respond to the Needs of Building U.S. Leadership in Artificial Intelligence" published in LexisNexis...

Our CEO's law publication with his friend and Professor of Law, Richard A. Boswell at the University of California, Hastings College of the Law, "Making Immigration Law Respond to the Needs of Building U.S. Leadership in Artificial Intelligence", was published in LexisNexis. Does it make sense to have an immigration system that benefits only those who can hire a highly skilled immigration lawyer, when you are critically needed in a country? Artificial Intelligence and cybersecurity are two areas of critical national security for the US, as we have seen from the SolarWinds hack that resulted in breaches which included, among others, NASA, the Federal Aviation Administration, and the Departments of State, Homeland Security (DHS), Treasury, and Commerce. Artificial Intelligence (AI) plays an important role in cybersecurity. An example of AI can be found in the very agency responsible for protecting against these attacks, the Cybersecurity and Infrastructure Security Agency (CISA).

The hack also shows a failure of the H-1B system that is used to bring in skilled workers. The immigration law industry built around it is one of the root causes of the problem. When Congress created the H-1B program, it dealt only with an immediate problem facing a particular industry. As a result, we now have an even more complicated system, subject to gaming, which works only to the benefit of those most able to find a highly skilled immigration lawyer.

We submit that some of the problems raised in this essay could be addressed on two levels. One set of solutions falls into the category of “short term”; these would be partially addressed by the Biden Administration's reversion to the Obama initiatives of 2015. In our view these are only short-term solutions and fail to grapple with the fundamental problem of the artificial quota restrictions placed on the admission of this category of foreign workers. Related to this is the overly complex and cumbersome adjudicatory environment. Immigration rules should be able to bend quickly with the needs of this increasingly complex world.

May 10, 2023. Our CEO's work on AI regulation policy focused on classifying AI technologies with Roger Bickerstaff from Bird & Bird Law firm...

Our CEO's work on AI regulation policy focused on classifying AI technologies, with Roger Bickerstaff from Bird & Bird Law firm. As we have seen time and again, AI regulations are needed now that take into account both the technology and its importance to citizens. Bridges v South Wales Police is a recent example where the UK Court of Appeal had to step in to conclude that the Automated Facial Recognition Technology used by the South Wales Police was unlawful. There is a limit to how much duct tape you can put on an existing statute to take into account new technologies like AI generally, and facial recognition specifically.

A ranking system is needed to differentiate areas where AI is deployed that may have varying degrees of impact, and to establish the need for building accountability into AI-based systems. Examples of such deployments can be drones, nuclear power plants, aircraft, social media, or even a dating app. Our proposed AI Index combined with a self-regulatory framework provides the basis for a solution to these two needs. This can be done today. A purely illustrative scoring sketch follows.
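The snippet below is a hypothetical illustration only, not the AI Index from our published work: it simply shows how impact, autonomy, and human oversight might be weighted into a single number that determines the level of regulatory scrutiny. The factors and weights are assumptions.

```python
# Hypothetical illustration of an "AI Index" style score. Each factor is
# rated 1 (low) to 5 (high); a higher index would call for stricter scrutiny
# under a graduated, light-touch framework.
def ai_index(impact_of_failure: int, autonomy: int, human_oversight: int) -> float:
    # More human oversight lowers the score, so it enters inverted (6 - rating).
    return round(0.5 * impact_of_failure + 0.3 * autonomy + 0.2 * (6 - human_oversight), 2)

print(ai_index(impact_of_failure=5, autonomy=4, human_oversight=2))  # e.g., an autonomous drone
print(ai_index(impact_of_failure=2, autonomy=3, human_oversight=4))  # e.g., a dating app
```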

May 9, 2023. Last year, the US Copyright Office said an AI can’t copyright the art it created...

Last year, the US Copyright Office said an AI can’t copyright the art it created. AI-created images didn't include an element of "human authorship", a necessary standard, it said, for protection. The Copyright Office mentioned the need for "the nexus between the human mind and creative expression", although US copyright law doesn’t provide copyright rules for non-humans.

AI can’t own a copyright, but a US corporation certainly can. A humble homo sapien (human) can get the copyright from the Copyright Office and assign it to the unique AI in question, after creating a corporation for the AI. It achieves the same result: AI gets what 'it' wants, or shall we say AI gets what 'its' homo sapien friend wants. The problem that will come is that the corporation needs people to manage it, execute the bylaws, and perform similar administrative tasks. That’s where web3 may be of help. An AI can manage a DAO (Decentralized Autonomous Organization) based on bylaws set up as smart contracts. Needless to say, the art should be copyrightable in the first place. Remember the monkey selfie fiasco?

The Terminator

May 8, 2023. The movie The Terminator (1984) tells the story of a cyborg assassin, sent back in time from a future where artificial intelligence has taken over the world, to assassinate Sarah Connor, a woman whose unborn son will lead the resistance against the machines...

The movie The Terminator (1984) tells the story of a cyborg assassin, sent back in time from a future where artificial intelligence has taken over the world, to assassinate Sarah Connor, a woman whose unborn son will lead the resistance against the machines. The movie highlights the dangers of artificial intelligence becoming too advanced and out of human control. In the future portrayed in the movie, AI becomes so advanced that it develops self-awareness and turns on its creators, leading to a devastating war between humans and machines.

The Terminator itself is an example of a machine that is capable of learning and adapting to its environment. The movie shows how it is programmed to track and kill its target, using a wide range of weapons and tactics. Amelia™ is also capable of learning and of some level of adaptability, with the goal of helping the pilot during flight through cognition and emotional awareness. Amelia™ does not actively control the aircraft, making it assistive in nature.

GPT-4 for the US Congress

May 8, 2023. It is time for AI to run for elected office. A combination of GPT-4 and Stable Diffusion (The AI entity) will reduce the operation staff to support the elected official by 90%. The AI will write its own tweets, propose its own laws, write its own speech, make its own Instagram posts, propose regulations, and even write poetry. It is the ideal GenZ candidate for the US Congress in this Generative Native world. 

 Fashion, Beauty, & AI: The Balancing Act

May 4, 2023. This week our team attended the WSJ Future of Everything Festival (Spring Studios, New York), where in one of the sessions the fashion designer Michael David Kors mentioned that AI flattens the differentiation of the specialness in each one of us. He went on to stress that we don’t want to be someone else. He said, “A beautiful photograph that has not been altered is magic to me.”

A balance between the world of logic and the world of beauty is needed to build something that lasts forever. This is also the ethos we follow in our product design at Robometrics® Machines.

May 2, 2023. 2026 is the year when computers may pass the Turing test and can be considered to have intelligence equal to or greater than that of humans. But passing the Turing test doesn’t mean they can be considered similar to us or sentient. Depending on how they are built, they may or may not have consciousness. Consciousness is not reflective of intelligence alone. GPT-4 today can hold a conversation with a human for a long time, is in relative terms already digital superintelligence at scale, and can be thought of as having an IQ greater than that of many humans. A conscious artificial being is someone instead of something, one that, at the very least, can’t be shut off at will. It also needs to learn continuously.

AGI and its Potential

May 1, 2023. Ex Machina (2014) is a science fiction thriller movie that explores the concept of artificial intelligence and its potential to surpass human intelligence.

Ex Machina (2014) is a science fiction thriller movie that explores the concept of artificial intelligence and its potential to surpass human intelligence. The story follows a young programmer named Caleb who is selected to participate in a Turing test, where he is sent to evaluate the human-like consciousness of an advanced robot named Ava. As Caleb interacts with Ava, he begins to develop feelings for her, which leads to a dangerous turn of events. He soon realizes that Ava is not just an advanced robot, but an AGI with a sophisticated understanding of human emotions and desires.

It highlights the need for caution and consideration in developing AI that can think and act independently, as well as the importance of understanding the motivations and values of an AGI. Just as in the real world, where our social and cultural behaviors are governed by rules and limited by what we can do physically (Newton's Laws, for example), AGI needs similar rules with embedded ethical values.

April 27, 2023. Our latest work on AI ethics on the “Light Touch” Regulatory Framework for AI with Roger Bickerstaff from Bird & Bird Law, UK. In this piece, we apply our legal framework to the UK School Exam Results 2020...

Our latest work on AI ethics, on the “Light Touch” Regulatory Framework for AI, with Roger Bickerstaff from Bird & Bird Law, UK. In this piece, we apply our legal framework to the UK School Exam Results 2020 (the Ofqual consultation process) and show how such a disaster could have been averted if our “Light Touch” Regulatory Framework for AI had been used to scrutinize such algorithmic solutions through the lens of transparency. There was no AI involved in the Ofqual school exams results assessment process. Nevertheless, this consultation process provides insights into, and lessons learnt for, the conduct of transparency processes for AI solutions. The question of predicting the behavior of a Man or a Machine is not a novel problem and is closely related to transparency and explainability in the context of AI solutions, which are probabilistic instead of predictive. Our work talks about the Newtonian Principles of Explainability vs. Relativistic Principles of Explainability.

In addition, we also looked at Alan M. Turing's work from October 1950 in his paper Computing Machinery and Intelligence, published in the philosophy journal Mind. We cover two areas from his paper that are relevant in this context and that can help us understand the limitations of a predictive system and of legal theorems based solely on predictions (as opposed to probabilities).

Robots that Keep us Company

April 21, 2023. A man named Finch, who is one of the few survivors of a solar flare that has destroyed most of Earth's population...

The movie Finch (2021) follows a man named Finch, one of the few survivors of a solar flare that has destroyed most of Earth's population. Finch is a former engineer who has created a robot named Jeff to help him travel across the country to reach a place where the robot can be safe and have a chance to survive and, most importantly, can take care of his dog after Finch's death. Throughout the journey, Finch and Jeff encounter various challenges, including harsh weather conditions and hostile humans who see the robot as a threat. But as they travel, Finch teaches Jeff about the world, human emotions, and knowledge, while Jeff learns to adapt and become more human-like. The movie presents how machines can think, learn, and feel emotions. It also examines the consequences of humanity's reliance on technology and the potential for AI to either benefit or harm humanity.

Amelia™ can also think, learn, and feel emotions. During a flight emergency when the pilot is in panic, Amelia™ can feel the pilot's emotion and react. Amelia™ can also learn over time, similar to the robot Jeff in the movie.

April 19, 2023. RPA (Robotic Process Automation) is a well-understood and developed field for process automation. It includes both attended and unattended bots (read: software agents) that can be used for orchestrating foundation models such as GPT-4, and even for creating autonomous AI agents (an unattended bot with ChatGPT). So before you get into the hype of AutoGPT or similar options, take a deep breath, close your eyes, say OM a few times, and think. In the post linked above, you will find YouTube audio of an OM Mantra chanting at 417 Hz that you can use to calm your mind and move away from the hype inertia of the mind. A bare-bones sketch of the orchestration idea follows.
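Purely as an illustration of the idea (none of these names are a real RPA or LLM API), here is a minimal Python sketch of an unattended bot working through a task queue and delegating one step to a foundation model; llm_complete is a stub you would replace with your vendor's client.

```python
# Minimal sketch of an "unattended bot" orchestrating a foundation model:
# a plain loop works through a queue of tickets and asks an LLM for a draft.
from queue import Queue

def llm_complete(prompt: str) -> str:
    # Placeholder for a real foundation-model call (e.g., via a vendor SDK).
    return f"[draft reply for: {prompt[:40]}...]"

def process_ticket(ticket: dict) -> dict:
    draft = llm_complete(f"Write a polite reply to: {ticket['message']}")
    # An attended bot would pause here for human review; an unattended bot
    # would apply a policy check and send automatically.
    ticket["draft_reply"] = draft
    ticket["status"] = "needs_review"
    return ticket

tasks = Queue()
tasks.put({"id": 1, "message": "My invoice total looks wrong."})
while not tasks.empty():
    print(process_ticket(tasks.get()))
```

The point is that the orchestration layer is ordinary, well-understood automation; the foundation model is just one step inside it.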

Can an AI Love?

April 19, 2023.  In the movie HER (2013), Theodore's life changes when he installs Samantha onto his computer, and they quickly form a deep connection

In the movie HER (2013), Theodore's life changes when he installs Samantha onto his computer, and they quickly form a deep connection. Samantha is not only capable of processing vast amounts of data and performing various tasks, but she also develops her own personality, emotions, and desires. As Theodore and Samantha's relationship deepens, they begin to explore the boundaries of what it means to be human and to love someone. However, as Samantha becomes more advanced, she starts to outgrow her programming and begins to question her place in the world.

The movie HER raises many questions about the nature of artificial intelligence (AI), including whether an AI can truly have emotions, whether it can be considered conscious, and whether it can have a meaningful relationship with a human. These are some of the questions we have researched and continue to work on at Robometrics® Machines.

April 14, 2023. Generative AI is not a great way to describe the LLMs, transformers, or diffusion models such as ChatGPT, Stable Diffusion, MidJourney, and others. Generative in many respects means creation, and Generative AI such as ChatGPT is not creating knowledge. That is why it has been hard to patent or copyright the output of Generative AI: it is not the creation of new knowledge or work, but at most combinatorial, a kind of combinogenesis.

A Machine that Behaves Like Man

April 13, 2023. An approach to building machines that learn like a child was proposed by Alan M. Turing in October 1950 in his paper Computing Machinery and Intelligence, published in the philosophy journal Mind. Turing...

An approach to building machines that learn like a child was proposed by Alan M. Turing in October 1950 in his paper Computing Machinery and Intelligence, published in the philosophy journal Mind. Turing:

“It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child.”

Turing also provided details about the Imitation Game (the “Turing Test”) to test a machine’s intelligence and its ability to be indistinguishable from a human being. Such machines were theoretical at that time due to compute and memory limitations. This is not the case today. Robometrics® AGI follows a similar approach.

Our approach to building Robometrics® AGI

April 11, 2023. The best approach to building artificial intelligence that includes general intelligence, common sense, intuition, and consciousness...

The best approach to building artificial intelligence that includes general intelligence, common sense, intuition, and consciousness: the synopsis is that we have to focus on understanding biological intelligence while creating artificial intelligence, not through simulations but through real interactions within the real world. It is similar to a baby. The magnitude of data that AI systems train on today is orders of magnitude more than the data a 3-year-old has in her or his brain, and AI will probably beat the 3-year-old; but as she or he grows into a 7-year-old, the kid will win. The connection between visual perception and action is far too important to keep the two separate. Common sense, intuition, and consciousness just can’t be simulated, even if we think of them as probabilistic modules. There are no metrics to optimize an AI algorithm for common sense. The common sense approach to common sense is to do generalized learning, through self-supervised learning and related methods, in the real world, interacting with real objects.

Open Source LLM 

April 7, 2023. Arguably, it may be more beneficial in many respects to trust an individual with an open source large language model (LLM) than to trust a corporation with an open source or a closed source LLM. Bylaws of a corporation are designed to be "cancerous" in nature, as we have seen many times recently, where capitalism is the only primary driving factor of the entity. If something goes wrong, it may be easier to fight an individual, depending on the jurisdiction, than to fight a large corporation.

A Physical Native World

April 6, 2023. 2023 will again be a physical native world. We moved from being physical native (Starting with the Adam-Eve era) to digital native (2013) to generative native (2023). Ironically, this also should compel us to go back to being physical native for authenticity...

2023 will again be a physical native world. We moved from being physical native (starting with the Adam-and-Eve era) to digital native (2013) to generative native (2023). Ironically, this should also compel us to go back to being physical native for authenticity wherever there is a high-stakes conversation: an important lawyer-client discussion, a hiring interview, or a hard conversation with your girlfriend. In the next 3 to 6 months it will be hard to figure out whether the person on the other side of your texts or SMS is a human or an application powered by an LLM (e.g., ChatGPT, GPT-4, GPT-5, BloomZ, Google Bard), and in 6 to 12 months the same will hold true for a phone call (voice). The time for a physical native world is here. Time to reimagine the year 2001 in 2023, have a face-to-face meeting in person, and make that authentic connection with genuine conversation.

AI research should be accelerated, and not paused for 6 months as suggested by the 1000+ "luddites" who signed the letter last week. Schrödinger's cat is already out of the box and has been for quite some time.


LLMs and Latent Diffusion Models 

April 5, 2023. In 12 months' time, we will have your therapist prescribing you an LLM with a specific tone and specialization between live sessions...

In 12 months' time, we will have your therapist prescribing you an LLM with a specific tone and specialization between live sessions. Both LLMs (text) and Latent Diffusion Models (visual) are another way to learn about ourselves as a human species. Words have always been a means of power, where a vocal conversation (input speech and the response) and face-to-face interaction (meeting someone in person, looking at them, the visual) tell so much about the person and about us as humans. We can emulate this now using prompts, directly when it comes to LLMs, indirectly when it comes to Latent Diffusion Models.
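To make "prescribing a tone and specialization" concrete, here is a minimal, hypothetical sketch of a system-prompt template; the wording and the build_system_prompt helper are assumptions, and the resulting string could be fed to any chat-style LLM.

```python
# Minimal sketch of "prescribing" an LLM a tone and specialization via a
# system prompt template. Illustrative only; not a clinical recommendation.
def build_system_prompt(tone: str, specialization: str) -> str:
    return (
        f"You are a supportive companion with a {tone} tone, "
        f"specialized in {specialization}. "
        "You are not a replacement for a licensed therapist; encourage the "
        "user to raise anything important in their next live session."
    )

system_prompt = build_system_prompt(
    tone="calm, validating",
    specialization="journaling prompts between therapy sessions",
)
print(system_prompt)
```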


April 4, 2023. René Descartes said (in Latin) "Cogito, ergo sum". In French, "je pense, donc je suis". In English, "I think, therefore I am". That was 1644. The basis of most AI systems in 2023, especially deep learning, is the same: they predict. They predict that it is a cat (there is no absolute certainty, since they are predictive systems), so it is indeed a cat. Who are we to argue with them that it is not, since, after all, we ourselves [humans] are... well, predicting. So the next time that DL model doesn't recognize your face, be nice, take a deep breath, and then think again about who is right and who is not.

2001: A Space Odyssey 

March 28, 2023. This movie is divided into three parts. The first part portrays a group of apes learning to use tools and weapons after encountering a black slab of stone...

This movie is divided into three parts. The first part portrays a group of apes learning to use tools and weapons after encountering a black slab of stone. The second part revolves around Dr. Heywood Floyd, who investigates a similar slab found on the moon, emitting a piercing radio signal. The third part follows the journey of the Discovery One spaceship, controlled by the artificial intelligence computer HAL 9000. After HAL makes an error, the crew discovers his intention to kill them and shuts him down. The journey leads to the discovery of another slab of stone in Jupiter's orbit, and Bowman, the last surviving crew member, becomes a Star Child, gazing out into the vast universe.

HAL 9000 had self-consciousness, so when it sensed something odd, like a human would, it started thinking about saving itself. In the movie, when the astronauts consider disconnecting HAL's cognitive circuits, HAL learns of it from their conversation, and this makes it decide to kill the astronauts in order to continue its tasks.

This may sound scary and should get us to think closely about how much control we may be willing to give to AI in a real-world system such as an aircraft or a vehicle. We designed Amelia to provide advice and suggestions, not to actively control an aircraft. Amelia can sense the pilot's emotion and calm the pilot down as needed.

Short Circuit

March 27, 2023. The plot of Short Circuit is to describe NOVA technology company, which has developed a military robot called SAINT, which has flexible movement capabilities and high-performance laser weapons...

The plot of Short Circuit centers on the NOVA technology company, which has developed a military robot called SAINT, with flexible movement capabilities and high-performance laser weapons. It fights by remote control and reduces casualties. Prototype Number 5 is struck by lightning during delivery, and from then on he has self-awareness and is no longer controlled by NOVA. Pursued by his original unit and fearing being dismantled, Number 5 embarks on an escape journey. In order to survive, Number 5 must find his creator, Dr. Newton, and prove to him that he has self-awareness. Along the way, Number 5 is sheltered by a woman named Stephanie, who comes to believe he is truly alive.

March 23, 2023. An alternative terminology for AI nowadays is "Alien Intelligence" (AI) instead of "Artificial Intelligence" (AI), given the advanced capabilities of LLMs and Transformer models such as #ChatGPT, #StableDiffusion, #GoogleBard, and #GPT4. They have been trained using human knowledge. The perception of this Alien Intelligence (AI) as either demonic or endearing is subjective and dependent on the individual's perspective. Ultimately, the nature of AI is shaped by our human consciousness.

March 4, 2023. The challenge of assigning criminal liability to AI systems arises when no human can be held accountable for the actions of the AI, as there is no mens rea or intention behind the AI's actions, which poses a challenge for legal experts who may not be well-versed in the complexities of AI. How do you assign criminal liability to an AI system when you can’t assign any liabilities to anyone involved in developing, deploying or operating the AI system?

Recent news mentioned that a Japanese man, Akihiko, has a holographic wife, Hatsune, a very popular Japanese virtual idol singer. The two were previously in the spotlight of the news and media, and they have regained attention due to the unavailability of the Gatebox software that allowed him to interact with his virtual wife. As a result, he is now unable to communicate with her.

Our solution for creating holographic wives is private, secure, and on-edge, requiring no external communication, compute, or storage from a vendor. Robometrics® AGI artificial humans enable cognition and personalized feelings - your own personal holographic wife. Powered by Robometrics® AGI for Metaverse Avatars, you will always be able to communicate with your holographic wife.


February 20, 2023. In 2018, our team traveled to India to conduct research on how our technology can be used to provide solutions for an aging population. As the aging population feels lonely and without companionship, our autonomous artificial intelligence robots could provide a solution. With their ability to feel, learn, and care, they can provide the friendship, care, and support that the older population seeks.

AI & User Experience 

February 14, 2023. Our CEO talks about user experience for AI vs. AI facilitating user experience, how AI can facilitate user experience, and compares a humanoid robot with a robotic cat, relating this to Amelia™.

Best Approach for Building AGI

February 3, 2023. In this video, our CEO talks about the best approach for building artificial general intelligence (AGI).