More than 1,000 technology and AI luminaries, including Elon Musk, Andrew Yang, and Apple co-founder Steve Wozniak, penned an open letter urging a moratorium on the development of artificial intelligence, citing “profound risks to society and humanity.”
What Are OpenAI and ChatGPT?
OpenAI is a research laboratory founded by Elon Musk, Sam Altman, and others working to advance artificial intelligence (AI) in the way most likely to benefit humanity. AI has made incredible progress over the past few years, as evidenced by its use in many industries, such as self-driving cars and automated medical diagnosis tools. OpenAI has been working to create applications such as the ChatGPT system for natural language processing.
ChatGPT, an AI language model released to the public just a few months ago, has become one of the most popular consumer applications in history. Within a few months of launch, it reached 100 million monthly active users. A UBS study found that TikTok took nine months and Instagram three years to amass the same number of users.
Criticisms of AI
Despite the potential benefits of artificial intelligence, there are also serious risks associated with its development. AI systems can cause harm if implemented poorly, a risk compounded by the fact that many AI applications are developed and deployed without adequate oversight or regulation.
Furthermore, there is a risk of unintended consequences from AI systems. For example, algorithmic bias introduced by the political leanings of those who control an AI system can lead to decisions based on flawed criteria. This can have profound implications for social justice and equality.
Commenting on Ashlee Vance’s tweet on the horrendous state of downtown San Francisco, Elon Musk blamed San Francisco politics and lamented how “Twitter was exporting this self-destructive mind virus to the world.”
In response to a related question on concerns about OpenAI baking these politics into the foundation of machine learning, Musk said, “Extremely concerning, given that it leads to a dystopian future – just walk around downtown SF to see what will happen.”
For some time now, the advancement of artificial intelligence has been met with apprehension about its impact on employment opportunities. Andrew Yang, another prominent signatory of the moratorium, warned about the loss of jobs to AI in his 2018 book The War on Normal People. Recognizing these risks, Yang touted universal basic income (UBI) during his 2020 presidential campaign.
Speaking to The Australian Financial Review, Bill Gates warned that AI technology like ChatGPT could replace white-collar workers. Ironically, in January 2023, Microsoft announced an additional multibillion-dollar investment in OpenAI, following its previous investments in 2019 and 2021.
OpenAI CEO Sam Altman, in an interview with ABC News, acknowledged the risks. He said, “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.”
Pause Giant AI Experiments
In light of these risks, the tech and AI luminaries have called for a moratorium on developing specific artificial intelligence applications. The goal is to give time for governments and organizations to create regulations and safeguards that ensure AI systems are implemented responsibly and ethically.
The proposed moratorium would ensure that robust AI systems are developed only when we have the assurance that their effects will be beneficial and risks can be appropriately managed.
Concerns raised in the letter include the following:
- Flooding information channels with propaganda.
- Eliminating jobs.
- Developing nonhuman minds that might eventually outnumber and replace us.
- Losing control of our civilization.
Possible Conflicts of Interest by Moratorium Signatories
Critics of the moratorium argue that several countries are locked in an arms race to develop AI. Even if the suspension were implemented, it would only slow down US companies while allowing the rest of the world to surpass them.
Additionally, some of the signers could have a vested interest in slowing down the progress of AI.
OpenAI revealed an investment in 1X’s AI robot ‘NEO,’ a direct competitor to Musk’s Tesla Bot. Tesla has been working on AI for several years. Musk initially co-founded OpenAI to create a safer artificial intelligence, but stepped away in 2018, citing a potential conflict of interest between OpenAI and Tesla’s AI research on autonomous driving.
Musk has also invested in other AI companies like DeepMind and Vicarious. Mark Zuckerberg is the other notable investor in Vicarious. DeepMind was later acquired by Google’s parent company, Alphabet. OpenAI and DeepMind are actively researching AI technologies; any moratorium on AI could significantly hurt their business.
OpenAI’s Attempt To Address Concerns
Although cryptocurrency investments have fallen out of favor with the drop in Bitcoin’s price, Sam Altman co-founded a crypto project in 2021 called Worldcoin. Worldcoin published a blog post addressing some of the criticisms of AI and positioning itself as a protocol that empowers anyone to authenticate their humanness online without relying on a third party.
In addition to acting as a potential reputation system, Worldcoin has ambitions of distributing value by creating a universal basic income (UBI). Since AI is expected to eliminate many jobs, the project positions UBI as a safety net for individuals who lack unique skills that AI cannot replace.
It is difficult to deny the potential benefits of artificial intelligence in many areas of life, but it is equally important to consider the associated risks.
OpenAI, the nonprofit created by Altman and Musk, was intended as a safeguard against the dangers of unchecked AI innovation. It was born out of Musk’s fear that AI might inadvertently lead to the end of humanity as we know it. He saw such a potential outcome as too great a risk not to take precautionary measures.
In his New Yorker profile, Sam Altman mentioned prepping for survival as one of his hobbies. He said, “My problem is that when my friends get drunk, they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be AI that attacks us and nations fighting with nukes over scarce resources.”
In 2014, Stephen Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”
Elon Musk and others may be right to call for caution in developing certain applications of AI. A moratorium could be essential to ensuring that AI is developed responsibly. Hopefully, it would give us the time to create regulations and safeguards that protect us from potential harm.