LLMs Have Stalled the Progress of AGI


Everyone has questions about generative AI, the validity of LLMs, and the future they hold. While LLMs are often seen as overhyped in research circles, they are also considered by some to be a major roadblock to achieving artificial general intelligence (AGI).

Reiterating this point, Mike Knoop, the co-founder of Zapier, recently expressed scepticism about the progress of AI language models toward achieving AGI. “LLMs have stalled in the progress to AGI and increasing scale will not help what is an inherently limited technology,” said Knoop in a recent interview.

Knoop’s assessment rests on the observation that current LLMs suffer from low user trust and poor accuracy and reliability. “And these problems are not going away with scale,” he added. This is not a new conversation: the reasoning and logical shortcomings of current LLMs have been raised several times.

No Roads Lead to AGI

Regardless, the scepticism has not deterred big tech from the rush to build the best LLMs. Google, OpenAI, Microsoft, and Meta have all been racing to build models, bigger and smaller alike.

All this while Yann LeCun, chief AI scientist at Meta, has repeatedly said that LLMs won’t lead to AGI and that researchers entering the field should not work on them.

Francois Chollet, the creator of Keras, recently shared similar thoughts. “OpenAI has set back the progress towards AGI by 5-10 years because frontier research is no longer being published and LLMs are an offramp on the path to AGI,” he said in an interview.

Knoop’s concerns are also reflected in his role in launching the ARC Prize, a competition designed to encourage novel approaches to AGI, built around the Abstraction and Reasoning Corpus (ARC).

Since its introduction, this benchmark—which evaluates the capacity for efficient skill acquisition—has seen little improvement, supporting Knoop’s contention that present AI models do not seem to be headed towards AGI.

Forget AGI, Aim for Animal-level Intelligence

LeCun says AI should reach animal-level intelligence before heading towards AGI, and Knoop has his concerns about LLMs. Likewise, Andrej Karpathy, the founder of Eureka Labs, has been quite vocal about the issues with LLMs.

In a recent experiment, Karpathy showed that LLMs struggle with seemingly simple tasks, coining the term “Jagged Intelligence” to describe the phenomenon.
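One widely circulated failure of this kind — offered here as an illustration, not necessarily Karpathy’s own test case — is asking a chatbot which of 9.11 and 9.9 is larger; several models have answered 9.11, apparently reading the decimals like version numbers. Plain arithmetic, of course, has no such trouble:

```python
# Illustrative sketch of a "jagged" failure mode: some chatbots have
# claimed 9.11 > 9.9, while ordinary numeric comparison is unambiguous.
a, b = 9.9, 9.11
print(a > b)  # True: 9.9 is the larger number
```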

Even Yoshua Bengio, one of the godfathers of AI, said in an exclusive interview with AIM that when it comes to achieving the kind of intelligence that humans have, some important ingredients are still missing.

These problems are not isolated. Noam Brown, a researcher at OpenAI, experimented with LLMs by having them play basic games like tic-tac-toe. The outcome was dismal, with the models performing poorly even at this simple game.

Additionally, another developer tasked GPT-4 with solving tiny Sudoku puzzles. Here too, the model struggled, often failing to solve these seemingly simple puzzles. Even a study by Google DeepMind found that LLMs lack genuine understanding and, as a result, cannot self-correct or adjust their responses on command.

Subbarao Kambhampati, professor of AI at Arizona State University, agrees with the notion. “They are just not made and meant to do that,” he said. He gave an example of how LLM-based chatbots are still very weak at maths problems.

Too Early to Write off LLMs

But there is still time. While it’s true that we haven’t reached human-level intelligence yet, that doesn’t mean we never will.

OpenAI has claimed that GPT-4, released in March 2023, beat human psychologists at understanding complex emotions. In fact, OpenAI CTO Mira Murati claimed in a recent interview that GPT-5 will have ‘PhD-level’ intelligence.

Meanwhile, Ilya Sutskever, the former chief scientist at OpenAI and founder of Safe Superintelligence, believes that text is a projection of the world. How far that idea carries LLMs is still questionable. In a way, LLMs are building a cognitive architecture from scratch, echoing evolutionary and real-time learning processes, albeit with considerably more electricity.

Last month, AIM discussed the various approaches taken by big tech companies, namely OpenAI, Meta, Google DeepMind, and Tesla, in the pursuit of AGI. Since then, considerable progress has been made.

Undeterred, research on LLMs is still going strong, with companies finding various ways to constantly improve the models. Recently, OpenAI released a research paper on Prover-Verifier Games (PVG) for LLMs, an approach that could address the trust and reliability problems Knoop raised.

Similarly, combining causality with LLMs could enable AI to understand cause-and-effect relationships, closer to human reasoning. Then there is neurosymbolic AI, which could enhance LLMs’ efficiency.

We can safely say that LLMs deserve more than one shot at smoothing the road to AGI.






Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
