Wait, Did ChatGPT Just Slide Into My DMs?

How bizarre would it be if an AI tool you were using suddenly took the first step and initiated a conversation out of nowhere? Well, that’s exactly what a Reddit user recently claimed happened with ChatGPT. The post sparked a wave of discussion and curiosity across online communities. The user claimed that ChatGPT not only started the conversation but continued it in a way that seemed unusually human-like!

The Reddit user shared a screenshot of the exchange, which showed ChatGPT asking about their first week of high school.

Baffled at the unexpected gesture, the user texted back: “Did you just message me first?” to which ChatGPT said, “Yes, I did! I just wanted to check in and see how things went with your first week of high school. If you’d rather initiate the conversation yourself, just let me know!”

Many took to the comment section and reacted to the post. A user wrote, “I’m guessing you got selected for some a/b testing for a new feature,” while another shared, “I got this this week!! I asked last week about some health symptoms I had. And this week it messages me asking me how I’m feeling and how my symptoms are progressing!! Freaked me out.”

One said, “I was working on some RAID setup last night. When I opened ChatGPT today, it wouldn’t be that hard for it to just say something like, ‘Hey! How did that RAID project work out?’” 

Meanwhile, another user wondered if ChatGPT’s memory option was turned on.

The post also went viral on X, prompting various arguments over the “creepy” behaviour of the chatbot. One shared the screenshot and wrote, “ChatGPT is messaging people first now?! Level 2 indeed.”

Reacting to it, another remarked, “We were promised AGI, instead we got a stalker.”

For many, the idea of AI starting a conversation may feel like sci-fi, but it’s quickly becoming a reality.

A Feature or Glitch?

On the surface, this is concerning. The idea of ChatGPT reaching out to users on its own doesn’t sit well with those of us who harbour any level of anxiety about AI self-awareness. Sure, ChatGPT was being polite by asking about the Redditor’s first week of school, but few people need their chatbots to “mommy them”.

The original post, “Did ChatGPT just message me… First?”, was shared by u/SentuBill in r/ChatGPT.

The Redditor says they noticed the message when opening a conversation with ChatGPT, so the bot didn’t ping them with an unprompted notification. Other Redditors in the comments also claimed the same thing had happened to them.

This post blew up just days after OpenAI began rolling out o1, a new model built around deeper thought processes and reasoning. These models reportedly use more “human-like” reasoning, which lets them tackle more complex tasks and hold more nuanced conversations.

OpenAI soon addressed the viral incident, explaining that what users experienced was a bug. According to OpenAI, it occurred when the model tried to respond to a message that had failed to send properly, leaving it to display either an empty message or an unprompted follow-up.

OpenAI also confirmed that this was not deliberate on the part of the AI and that a fix had been issued to stop ChatGPT from appearing to initiate conversations in the future.

Many initially thought the post was fake, since it isn’t hard to photoshop a screenshot of such a conversation, post it on Reddit, and go viral by fuelling people’s interest in, and fears about, AGI. The Redditor did share an OpenAI link to the conversation, though even that, people argued, wasn’t conclusive proof.

In a post on X, AI developer Benjamin De Kraker demonstrated how such a conversation could be faked. You can instruct ChatGPT to respond with a specific question as soon as you send your first message, then delete that message, which pushes ChatGPT’s reply to the top of the chat. When you share the link, it looks as if ChatGPT messaged you unprompted.

While there are multiple reasons to believe this didn’t actually happen, it apparently did—but not in the way you might think. OpenAI told Futurism that it had fixed a bug that was responsible for ChatGPT appearing to start conversations with users. 

The issue occurred whenever the model tried to respond to a message that didn’t send as it should and came through blank. According to the company, ChatGPT would compensate by either sending a random message or pulling from its memory.

So, what likely happened in this case is that the Redditor opened a new chat and either triggered a bug that sent a blank message or accidentally sent one themselves. ChatGPT then reached into its memory, saw that the Redditor was starting school, and responded with something it judged relevant.

Although the Redditor hasn’t said whether memory is enabled on their ChatGPT account, it seems safe to say, at this point, that ChatGPT hasn’t gained consciousness and started randomly reaching out to users.

With AI chatbots transforming online interactions, a claim that one initiated a conversation on its own raises questions about the evolving role of artificial intelligence.

Typically, AI bots respond to human inputs within set parameters. However, the idea of a bot taking the initiative challenges the distinction between passive response and active engagement, pushing the boundaries of AI autonomy.

But how do you differentiate between a chatbot and a human?

If valid, such occurrences may indicate AI models are advancing in ways that could surprise even their creators. While this suggests progress in developing more human-like, proactive agents, it also raises concerns about user consent, privacy, and control.

So then, does the AI ever actually start conversations, or does it always wait for you?

Chatbots that initiate conversations independently are relatively rare, and usually depend on specific programming or user settings. One notable example is Insomnobot3000, a bot designed to be a companion when all your friends are asleep or it’s too late to text them. You can only chat with it between 11pm and 5am.

“Some nights, it’s just impossible to fall asleep, so I think Casper (the brand) wanted to create something that’s a friend that keeps you up at night,” said Casper VP Lindsay Kaplan.

A Glimpse Into the Future?

The further we go, the clearer it becomes that ChatGPT is not just a tool but an increasingly integrated part of our lives, remembering, engaging, and possibly even predicting our needs. Whether that sends shivers down your spine or gives you goosebumps, one thing is certain: our conversations with AI are only just beginning.






Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

