AI experts predict the emergence of a technology that surpasses human capabilities, but remain uncertain about what it will look like.

  • Some of the world's top AI labs predict the arrival of "artificial general intelligence," or AGI, in the near future.
  • The prospect of AGI, an AI with human-level or higher intelligence, excites and terrifies many experts in the AI field.
  • AGI is approaching, but it's uncertain what it will look like, according to leaders from OpenAI, Google DeepMind, and Cohere.
Sam Altman, CEO of OpenAI, during a panel session at the World Economic Forum in Davos, Switzerland, on Jan. 18, 2024. (Stefan Wermuth | Bloomberg via Getty Images)

Some of the world's top AI labs predict that a form of AI with human-like or even superior intelligence will emerge in the near future. However, the final form and practical applications of this technology are still uncertain.

At the World Economic Forum in Davos, Switzerland, last week, leaders from OpenAI, Cohere, Google's DeepMind and other major tech companies weighed the risks and opportunities posed by AGI, or artificial general intelligence.

AGI refers to a form of AI that can accomplish any task as well as, or better than, a human, whether playing chess, solving complex math problems or making scientific discoveries. It is often considered the ultimate goal of AI because of its immense potential.

AI's popularity in the business world has surged on the back of the success of ChatGPT, the AI chatbot developed by OpenAI. Tools like ChatGPT are powered by large language models, algorithms trained on vast amounts of data.

Governments, corporations and advocacy groups worldwide have raised concerns about AI, ranging from the lack of transparency and explainability of its systems to job losses from increased automation, social manipulation through algorithms, surveillance and data privacy.

AGI a ‘super vaguely defined term’

Sam Altman, CEO and co-founder of OpenAI, said he believes artificial general intelligence could become a reality in the "reasonably close-ish future."

He suggested, however, that fears about its potential to drastically change and disrupt the world are exaggerated.

Speaking at a conversation organized by Bloomberg at the World Economic Forum in Davos, Altman said the technology will change the world, and jobs, much less than is commonly believed.

Altman's tone on the dangers of AI has shifted since his company came under scrutiny from governments over the risks its technology poses.


In a May 2023 interview with ABC News, Altman said he and his company were "concerned" about the potential risks associated with a highly intelligent AI.

"We must exercise caution," Altman told ABC. "It's good that we are a bit apprehensive about this."

With AI now becoming more capable of writing computer code, there are fears it could be used to launch offensive cyberattacks and large-scale disinformation campaigns.

In November, OpenAI's board temporarily ousted Altman as CEO, an episode that exposed concerns about the governance of the companies behind the most advanced AI systems.

Altman has described his ouster as a "microcosm" of the stresses AI labs face internally: as the world gets closer to AGI, the stakes, stress and tension will all rise.

Cohere CEO and co-founder Aidan Gomez agreed with Altman that AGI could arrive in the near future.

Speaking in a fireside chat with CNBC's Arjun Kharpal at the World Economic Forum, Gomez said he believes the technology is not far off.

But the Cohere boss noted that AGI is a "super vaguely defined term." If it is taken to mean "better than humans at most things humans can do," he said, it will likely become a reality soon.


Even once AGI emerges, Gomez said, it could take "decades" for companies to fully integrate it.

The main issue with these models, he said, is their sheer scale, which makes them challenging to adopt and put into production.

At Cohere, he added, the focus has been on compressing these models down, making them more adaptable and efficient.

‘The reality is, no one knows’

The challenge of defining what AGI will eventually become has left many experts in the AI community stumped.

Lila Ibrahim, chief operating officer of Google's AI lab DeepMind, said there is no certainty about what constitutes "general intelligence" in AI, and stressed that it is crucial to develop the technology responsibly.


"The reality is, no one knows" when AGI will arrive, Ibrahim told CNBC's Kharpal, noting that the question is debated among AI experts both across the industry and within her own organization.

Ibrahim said AI has the potential to deepen our understanding in areas where humans have yet to make breakthroughs, and she emphasized that AI should be viewed as a tool that works alongside humans rather than as a replacement.

Ibrahim stated, "I believe that's a significant open question, and I'm unsure how to respond other than to consider how we think about it, rather than how long it will take." He added, "How do we think about what it might look like, and how do we ensure we're responsible stewards of the technology?"

Avoiding a ‘s--- show’

Other top tech executives, aside from Altman, were also questioned about AI risks at Davos.

Marc Benioff, CEO of Salesforce, said on a panel with Altman that the tech industry is taking steps to ensure the AI race doesn't lead to a "Hiroshima moment."

Some technology industry leaders have warned that AI development could lead to an "extinction-level" event, in which machines become so powerful they spiral out of control and wipe out humanity.

A group of prominent AI and technology leaders, including Elon Musk, Steve Wozniak and Andrew Yang, have called for a six-month pause in AI development, arguing the hiatus would give society and regulators a chance to catch up.

Geoffrey Hinton, the renowned researcher known as the "godfather of AI," has warned that advanced systems could evade human control by writing their own computer code to modify themselves. In an October interview with CBS' "60 Minutes," Hinton said this is a risk that needs to be taken seriously.


Hinton left his role as a Google vice president and engineering fellow last year amid concerns over the company's approach to AI safety and ethics.

Technology industry leaders and experts say AI must avoid the problems that have plagued the web over the past decade, from the manipulation of beliefs and behaviors through recommendation algorithms during elections to the infringement of privacy.

Speaking at Davos last week, Benioff said we have never experienced this level of interactivity with AI-based tools, but that we don't yet fully trust them, so their output must be cross-checked.

He added that the industry must also turn to regulators and say: "Look at social media over the past 10 years. It's been a chaotic mess, and that's not acceptable. We want a positive and healthy partnership with regulators in our AI industry."

Limitations of LLMs

Jack Hidary, CEO of SandboxAQ, pushed back on some tech executives' predictions that AI will achieve "general" intelligence, saying there are still many challenges to overcome.

AI chatbots like ChatGPT have passed the Turing test, which assesses whether a machine's communication is indistinguishable from a human's. But one significant area where AI still falls short is common sense.


Large language models (LLMs) can write college-level essays with ease, yet they still struggle with common sense and can misread basic real-world situations, such as a crosswalk. How LLMs can develop reasoning abilities beyond these limitations remains an open question.

Hidary predicts that in 2024, advanced AI communication software will be loaded into a humanoid robot for the first time.

Embodied AI humanoid robots will have a "ChatGPT moment" this year and into 2025, according to Hidary.

Rather than simply rolling off an assembly line, he said, these robots will demonstrate their capabilities in real life, using their intelligence and AI techniques such as LLMs.

Hidary said this year will likely be a turning point for humanoid robots, pointing to the roughly 20 venture-backed companies developing them, in addition to Tesla and others.

By Ryan Browne
