“Can you hear the music, Robert?”

Danish physicist Niels Bohr to young Robert Oppenheimer, in the movie Oppenheimer

This question asks whether the young physicist Robert Oppenheimer understood the full implications, good and bad, of his contributions to science at the dawn of the atomic age. This transfer of the “music” from the ethereal mind of a composer (or scientist) into the symbolic, tangible human world is now key for artificial intelligence as well.

Several important lessons from Oppenheimer’s Manhattan Project are worth remembering if our current artificial intelligence is to be used responsibly:

  1. Duality. The good-bad duality of these two transformative technologies, nuclear power and AI, is similar in degree of impact. Both are fully capable of destroying life as we know it, although whether AI can truly reach this catastrophic magnitude of harm is not entirely clear at present. With both nuclear weapons and AI, the technology itself has far outpaced the human discussions on ethics and regulation, as well as the management and planning of the technology.
  2. Accountability. There is a remarkable parallel between the current AI regulation discussions and the regulation of the atomic bomb. Oppenheimer, as the progenitor of the atomic bomb, ironically called for international control of nuclear weapons. In a similar manner, AI industry leaders and researchers are now calling for regulatory oversight of AI. Whether these efforts are at all effective remains to be seen in the years to come.
  3. Uncertainty. Another layer of complexity, on top of the aforementioned duality and accountability, is that elements of both technologies are unpredictable (AI even more so than nuclear weapons). Even AI experts often disagree on when AI will achieve artificial general intelligence (AGI), and on just how AI could potentially end the human race, as this domain is highly complex, especially since its applications involve humans.

There are also key differences between nuclear weapons and artificial intelligence in this discussion of duality, accountability, and uncertainty. One important difference is the relative ease with which AI can be developed even under regulatory oversight, owing to its pervasiveness and low barrier to entry. Despite the recent open letter from over a thousand notable signatories (including Elon Musk and Steve Wozniak) calling for all AI labs to pause the training of systems, its impact is at best uncertain and at worst utterly ineffective. In addition, the technology curve of AI is far steeper than that of nuclear weapons, as demonstrated by the recent large language model “arms race”. Interestingly, AI technology has also been exploited for autonomous weapons and therefore shares military objectives with the atomic bomb.

Lastly, while interventions and measures can be implemented to regulate nuclear weapons to some degree (such as the Non-Proliferation Treaty), it is not entirely clear just how AI can and will be regulated, even though most parties and countries agree that it should be (as reflected in the Biden Executive Order on AI).

Oppenheimer quoted the Hindu scripture the Bhagavad Gita, “Now I am become Death, the destroyer of worlds,” after he fully realized the potential of the atomic bomb in the post-Manhattan Project years. He also presciently observed: “We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent.”

For AI, especially in healthcare, this is not a time for us to remain silent.

These insights and discussions will be in full force at the in-person Ai-Med Global Summit 2024, currently scheduled for May 29-31, 2024 in Orlando, Florida. Book your place now!