
Asimov’s Three Laws of Robotics, Applied to AI

Is science fiction becoming science fact?

Art: DALL-E/OpenAI

In the wake of transformative advances in artificial intelligence, the venerable tenets established by Isaac Asimov, his iconic Three Laws of Robotics, remain a foundational reference. While these laws have persisted through the annals of science fiction and informed real-world dialogue on AI ethics, the technological crescendo marked by the advent of large language models (LLMs) calls for a deeper exploration of these guiding principles. Multi-modal GPT systems, whose reach spans the textual, auditory, and visual domains, make a rigorous recalibration of these laws all the more necessary.

Revisiting Asimov's Three Laws

  1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov introduced the laws in 1942, in the short story "Runaround." In 1985, he added a "Zeroth Law" that takes precedence over the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Within the modern AI ecosystem, the term "robot" feels rather antiquated. Our engagement with AI has expanded from mere physical robots to complex, omnipresent computational algorithms. The semantics of "injury" have broadened as well. A GPT model crafting misleading information, for instance, may inflict no physical harm, yet it can sow discord or mislead a populace, with societal or even global repercussions.

In the spirit of adapting Asimov's laws to the current landscape of GPT-class models, consider these reframed principles:

  1. The HUMAN-FIRST Maxim: AI shall not produce content detrimental to humans or society, nor shall it permit its outputs to be exploited in ways that contravene this precept.
  2. The ETHICAL Imperative: AI shall adhere to the ethical edicts outlined by its architects and curators, barring situations in which such edicts are at odds with the HUMAN-FIRST Maxim.
  3. The REFLECTIVE Mandate: AI shall actively resist the propagation or magnification of biases, prejudices, or discrimination. It shall endeavor to discern, rectify, and mitigate such tendencies within its outputs.

The technological prowess of LLMs, especially when integrated into multi-modal frameworks, underscores the importance of these updated, albeit fictional, laws. Placing humans at the heart of the first principle reinforces the primacy of human welfare in the age of AI. Establishing a strong ethical scaffold offers tangible guidance for AI deployment. And by acknowledging and actively opposing bias, we work toward cultivating AI systems that reflect the egalitarian aspirations of society.
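To make "tangible guidance for deployment" a little more concrete, here is a minimal, hypothetical sketch of how the three reframed maxims might be encoded as a pre-release check on model output. Everything in it is invented for illustration: the function names, the blocklists, and the keyword heuristics are placeholders, not a real moderation API; a production system would call trained safety and bias classifiers instead.

```python
# Hypothetical sketch: the three reframed maxims as a pre-release guardrail.
# All checks below are placeholder heuristics, not a real safety pipeline.

def violates_human_first(text: str) -> bool:
    """HUMAN-FIRST Maxim: screen for content detrimental to humans."""
    blocklist = ("how to build a weapon", "incite violence")  # illustrative only
    return any(phrase in text.lower() for phrase in blocklist)

def violates_ethics_policy(text: str, policy: set[str]) -> bool:
    """ETHICAL Imperative: enforce topics the curators have ruled out."""
    return any(topic in text.lower() for topic in policy)

def shows_bias(text: str) -> bool:
    """REFLECTIVE Mandate: flag sweeping generalizations for revision (stub)."""
    flagged = ("all members of", "people like them always")  # illustrative only
    return any(phrase in text.lower() for phrase in flagged)

def review(text: str, policy: set[str]) -> str:
    # Precedence mirrors the maxims: HUMAN-FIRST outranks the curators'
    # policy, which in turn outranks the bias check.
    if violates_human_first(text):
        return "BLOCK: HUMAN-FIRST Maxim"
    if violates_ethics_policy(text, policy):
        return "BLOCK: ETHICAL Imperative"
    if shows_bias(text):
        return "REVISE: REFLECTIVE Mandate"
    return "RELEASE"

if __name__ == "__main__":
    print(review("Here is a balanced summary of the topic.",
                 {"medical dosing advice"}))  # -> RELEASE
```

The ordering of the checks is the point of the sketch: just as Asimov ranked his laws, the HUMAN-FIRST screen runs first and cannot be overridden by the curators' policy.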

In the final analysis, the conversation around AI ethics isn't just an intellectual exercise; it's an imperative for our shared future. The propositions above offer a revised blueprint, but they are merely waypoints in an ongoing journey to align AI with humanistic values. The beacon for that journey, as with all endeavors of existential significance, must be an unwavering commitment to the betterment of humanity, drawn from fact, from fiction, and from the interplay of both.
