Trick for better AI predictions, Humane AI pin slammed, Atlas robot: AI Eye

Everything you need to know about the AI future that’s hurtling fast towards us.

Jump to: Video of the week — Atlas Robot; Everybody hates Humane’s AI pin; AI makes Holocaust victims immortal; Knowledge collapse from mid-curve AIs; Users should beg to pay for AI; Can non-coders create a program with AI?; and All Killer, No Filler AI News.

Predicting the future with the past

There’s a new prompting technique to get ChatGPT to do what it hates doing the most — predict the future.

New research suggests the best way to get accurate predictions from ChatGPT is to prompt it to tell a story set in the future, looking back on events that haven’t happened yet.

The researchers evaluated 100 different prompts, split between direct predictions (e.g., who will win best actor at the 2022 Oscars) and “future narratives,” such as asking the chatbot to write a story about a family watching the 2022 Oscars on TV and to describe the scene as the presenter reads out the best actor winner.



The story produced more accurate results — similarly, the best way to get a good forecast on interest rates was to get the model to produce a story about Fed Chair Jerome Powell looking back on past events. Redditors tried this technique out, and it suggested an interest rate hike in June and a financial crisis in 2030.

Theoretically, that should mean if you ask ChatGPT to write a Cointelegraph news story set in 2025, looking back on this year’s big Bitcoin price moves, it would return a more accurate price forecast than just asking it for a prediction.
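For the curious, here is roughly what the two prompt styles look like in code. This is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative, not the paper’s exact setup.

```python
# Minimal sketch comparing a direct-prediction prompt with a
# "future narrative" prompt, per the research described above.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

direct_prompt = "Who will win best actor at the 2022 Academy Awards?"

narrative_prompt = (
    "Write a short story about a family watching the 2022 Academy Awards "
    "on TV. Describe the scene in detail as the presenter opens the "
    "envelope and reads out the winner of best actor."
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print("Direct prediction:", ask(direct_prompt))
print("Future narrative:", ask(narrative_prompt))
```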

There are two potential issues with the research, though. First, the researchers chose the 2022 Oscars because they already knew who won, while ChatGPT shouldn’t have, as its training data ran out in September 2021. However, there are plenty of examples of ChatGPT producing information it “shouldn’t” know from its training data.

Another issue is that OpenAI appears to have deliberately borked ChatGPT’s predictive responses, so this technique might simply amount to a jailbreak.


Related research found the best way to get Llama 2 to solve 50 math problems was to convince it that it was plotting a course for Star Trek’s starship Enterprise through turbulence to find the source of an anomaly.

But this wasn’t always reliable. The researchers found the best result for solving 100 math problems was to tell the AI that the president’s adviser would be killed if it failed to come up with the right answers.

Video of the week — Atlas Robot

Boston Dynamics has unveiled its latest Atlas robot, pulling off some uncanny moves that make it look like the possessed kid in The Exorcist.

“It’s going to be capable of a set of motions that people aren’t,” CEO Robert Playter told TechCrunch. “There will be very practical uses for that.”

The latest version of Atlas is slimmed down and all-electric rather than hydraulic. Hyundai will begin testing Atlas robots as workers in its factories early next year.

Everybody hates Humane’s AI pin

Wearable AI devices are one of those things, like DePIN, that attract a lot of hype but have yet to prove their worth.

The Humane AI Pin is a small wearable you pin to your chest and interact with using voice commands. It has a tiny projector that can beam text onto your hand.

Tech reviewer Marques Brownlee called it “the worst product I’ve ever reviewed,” highlighting its frequent wrong or nonsensical answers, bad interface and battery life, and slow results compared to Google.  

While Brownlee copped a lot of criticism for supposedly single-handedly destroying the device’s future, nobody else seems to like it either.

Wired gave it 4 out of 10, saying it’s slow, the camera sucks, the projector is impossible to see in daylight and the device overheats. The review concedes, however, that the device is good at real-time translation and phone calls.

The Verge says the idea has potential, but the actual device “is so thoroughly unfinished and so totally broken in so many unacceptable ways” that it’s not worth buying. 

It’s not clear why it’s called Rabbit, and reviewers aren’t clear on what advantages it offers over a phone.

Another AI wearable, the Rabbit R1 (the first reviews are out in a week), comes with a small screen and hopes to replace a plethora of phone apps with an AI assistant. But do we need a dedicated device for that?

As TechRadar’s preview of the device concludes:

“The voice control interface that does away with apps completely is a good starting point, but again, that’s something my Pixel 8 could feasibly do in the future.”

To earn its keep, AI hardware will need to find a specialized niche, similar to how reading a book on a Kindle is a better experience than reading on a phone.

One AI wearable with potential is Limitless, a pendant with 100 hours of battery life that records your conversations so you can query the AI about them later: “Did the doctor say to take 15 tablets or 50?” “Did Barry say to bring anything for dinner on Saturday night?”

While it sounds like a privacy nightmare, the pendant won’t start recording until you’ve got the verbal consent of the other speaker. 

So it seems like there are professional use cases for a device that replaces the need to take notes and is easier than using your phone. It’s also fairly affordable.

Limitless is an AI wearable that records conversations so you can query them later.

AI makes Holocaust victims immortal

The Sydney Jewish Museum has unveiled a new AI-powered interactive exhibition enabling visitors to ask questions of Holocaust survivors and get answers in real time.

Before death camp survivor Eddie Jaku died aged 101 in October 2021, he spent five days answering more than 1,000 questions about his life and experiences in front of a green screen, captured by a 23-camera rig.

The system transforms visitors’ questions into search terms, cross-matches them with the most appropriate pre-recorded answer and then plays it back, enabling a conversation-like experience.
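The museum hasn’t published its implementation, but the matching step it describes is standard text retrieval. Here is a toy sketch using TF-IDF cosine similarity; the recorded questions are hypothetical stand-ins for the 1,000-plus answers Jaku filmed.

```python
# Toy sketch of matching a visitor's question to the closest
# pre-recorded answer clip via TF-IDF cosine similarity.
# Purely illustrative; the museum's actual system is unpublished.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for the catalog of recorded Q&A clips.
recorded_questions = [
    "Where were you born?",
    "How did you survive the camps?",
    "What message do you have for young people?",
]

vectorizer = TfidfVectorizer()
clip_matrix = vectorizer.fit_transform(recorded_questions)

def best_clip(visitor_question: str) -> int:
    """Return the index of the recorded clip that best matches the question."""
    query = vectorizer.transform([visitor_question])
    scores = cosine_similarity(query, clip_matrix)[0]
    return int(scores.argmax())

idx = best_clip("What would you tell the younger generation?")
print("Play clip for:", recorded_questions[idx])
```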

With antisemitic conspiracy theories on the rise, it seems like a terrific way to use AI to keep the first-hand testimony of Holocaust survivors alive for coming generations. 

SJM Reverberations Exhibition (Katherine Griffiths)

Knowledge collapse from mid-curve AIs

Around 10% of Google’s search results now point to AI-generated spam content. For years, spammers have been spinning up websites full of garbage articles optimized for SEO keywords, but generative AI has made the process a million times easier.

Apart from rendering Google search useless, there are concerns that if AI-generated content becomes the majority of content on the web, we could face “model collapse,” whereby AIs are trained on garbage AI output and quality drops off like a tenth-generation photocopy.

Spam content at the touch of a button.
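To make the photocopy analogy concrete, here is a toy simulation of that feedback loop (illustrative only, not from the cited research): each generation is trained on the previous model’s output, and because models favor the center of the distribution, the tails vanish within a few generations.

```python
# Toy simulation of "model collapse": each generation of a model is
# trained only on the previous generation's output, and models tend
# to generate from the center of the distribution, so the tails
# (rare, niche ideas) disappear. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # original "human" data

for generation in range(1, 8):
    mu, sigma = data.mean(), data.std()
    samples = rng.normal(mu, sigma, size=10_000)  # the model's output
    # The next model trains only on the "typical" middle of that output.
    lo, hi = np.percentile(samples, [5, 95])
    data = samples[(samples >= lo) & (samples <= hi)]
    print(f"generation {generation}: std = {data.std():.3f}")
```

Running this, the standard deviation shrinks by roughly a fifth every generation, which is the “tenth-generation photocopy” effect in miniature.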

A related issue affecting humans, called “knowledge collapse,” was described in a recent research paper from Cornell. Author Andrew J. Peterson wrote that AI responses gravitate toward mid-curve ideas and ignore less common, niche or eccentric ones:

“While large language models are trained on vast amounts of diverse data, they naturally generate output towards the ‘center’ of the distribution.”

The diversity of human thought and understanding could grow narrower over time as ideas get homogenized by LLMs.

The paper recommends subsidies to protect the diversity of knowledge, in much the same way subsidies protect less popular academic and artistic endeavors.


Highlighting the paper, Google DeepMind’s Seb Krier added that it was also a strong argument for having innumerable models available to the public “and trusting users with more choice and customization.”

“AI should reflect the rich diversity and weirdness of human experience, not just weird corporate marketing/HR culture.”

Users should beg to pay for AI 

Google has been hawking its Gemini 1.5 model to businesses and has been at pains to point out that the safety guardrails and ideology that famously borked its image generation model do not affect corporate customers.

While the controversy over pictures of “diverse” Nazis saw the consumer version shut down, it turns out the enterprise version wasn’t even affected by the issues and was never suspended.

“The issue was not with the base model at all. It was in a specific application that was consumer-facing,” Google Cloud CEO Thomas Kurian said.

Gemini wants to make everything more inclusive, even Nazi Germany. (X)

The enterprise model has 19 separate safety controls that companies can set how they like. So if you pay up, you can presumably set the controls anywhere from ‘anti-racist’ through to ‘alt-right.’
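Google hasn’t publicly documented all 19 enterprise controls, but the flavor is visible in the public Gemini SDK, which exposes per-category blocking thresholds. A sketch, with a placeholder API key and illustrative settings:

```python
# Sketch of adjustable per-category safety thresholds using Google's
# public google-generativeai SDK. The enterprise product reportedly
# exposes 19 controls; the public SDK exposes a handful, shown here
# purely for flavor.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    safety_settings=[
        # Each harm category gets its own blocking threshold.
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_LOW_AND_ABOVE"},
    ],
)

print(model.generate_content("Write a sarcastic headline about robots.").text)
```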

This lends weight to Matthew Lynn’s recent opinion piece in The Telegraph, where he argues that an ad-driven “free” model for AI will be a disaster, just like the ad-driven “free” model for the web has been. Users ended up as “the product,” spammed with ads at every turn as the services themselves worsened.

“There is no point in simply repeating that error all over again. It would be far better if everyone was charged a few pounds every month and the product got steadily better – and was not cluttered up with advertising,” he wrote.

“We should be begging Google and the rest of the AI giants to charge us. We will be far better off in the long run.”

Can non-coders create a program with AI?

Author and futurist Daniel Jeffries embarked on an experiment to see if an AI could help him code a complex app. While he sucks at coding, he does have a tech industry background and warns that people with zero coding knowledge are unable to use the tech in its current state. 

Jeffries described the process as mostly drudgery and pain with occasional flashes of “holy shit it fucking works.” The AI tools created buggy and unwieldy code and demonstrated “every single bad programming habit known to man.”

However, he did eventually produce a fully functioning program that helped him research competitors’ websites.

Daniel Jeffries recounts trying to create a computer program with terrible coding skills.

He concluded that AI was not going to put coders out of a job.

“Anyone who tells you different is selling something. If anything, skilled coders who know how to ask for what they want clearly will be in even more demand.”

Replit CEO Amjad Masad made a similar point this week, arguing it’s actually a great time to learn to code, because you’ll be able to harness AI tools to create “magic.”

“Eventually ‘coding’ will almost entirely be natural language, but you will still be programming. You will be paid for your creativity and ability to get things done with computers — not for esoteric knowledge of programming languages.”

All Killer, No Filler AI News

— Token holders have approved the merger of Fetch.ai, SingularityNET and Ocean Protocol. The new Artificial Superintelligence Alliance looks set to be a top 20 project when the merger happens in May.

— Google DeepMind CEO Demis Hassabis will not confirm or deny that it is building a $100 billion supercomputer dubbed Stargate but has confirmed Google will spend more than $100 billion on AI in general.

— User numbers for Baidu’s Chinese ChatGPT knockoff Ernie have doubled to 200 million since October.

— Researchers at the Center for Countering Digital Hate asked AI image generators to produce “election disinformation,” and they complied four out of 10 times. The researchers are pushing for stronger safety guardrails, though a better watermarking system seems like a more practical solution.


— Instagram is looking for influencers to join a new program where their AI-generated avatars can interact with fans. We’ll soon look back fondly on the old days when fake influencers were still real.

— Guardian columnist Alex Hern has a theory on why ChatGPT uses the word “delve” so much that it’s become a red flag for AI-generated text. He says “delve” is commonly used in Nigeria, which is where many of the low-cost workers providing reinforcement learning from human feedback come from.

— OpenAI has released an enhanced version of GPT-4 Turbo, available via the API and to ChatGPT Plus users. It solves problems better, is more conversational and is less of a verbose bullshitter. OpenAI has also introduced a 50% discount for batch processing tasks done off-peak.
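For readers wondering how the discount works in practice: batch jobs are submitted as a file of requests and completed within 24 hours at the reduced rate. A minimal sketch using the OpenAI Python SDK, with illustrative request contents:

```python
# Minimal sketch of OpenAI's Batch API, which carries the 50%
# discount mentioned above. Request contents are illustrative.
import json
from openai import OpenAI

client = OpenAI()

# Each line of the JSONL input file is one chat completion request.
tasks = [
    {
        "custom_id": "task-1",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4-turbo",
            "messages": [{"role": "user", "content": "Summarize this week's AI news."}],
        },
    },
]
with open("batch_input.jsonl", "w") as f:
    for task in tasks:
        f.write(json.dumps(task) + "\n")

batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # results returned within 24 hours at the batch rate
)
print(batch.id, batch.status)
```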


Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.





