OpenAI’s ChatGPT chatbot recently experienced a significant glitch that allowed some users to view the titles of other users’ conversations, according to the CEO of OpenAI, Sam Altman.
The error was initially reported on social media sites Reddit and Twitter, where users shared images of chat histories that did not belong to them.
Altman said the company felt “awful” about the issue but assured users that it had been resolved. The bug raised privacy concerns on the platform: after seeing conversations in their history that they had never had with the chatbot, some users worried that their own sensitive information could be exposed through the tool.
One Reddit user posted a photo of their chat history that included titles such as “Chinese Socialism Development,” as well as conversations in Mandarin that they had not initiated. On Tuesday, OpenAI told Bloomberg that it had briefly taken the chatbot offline late on Monday to fix the error, and the company assured users that the actual contents of the conversations had not been accessible.
Altman tweeted that a “technical postmortem” would follow to investigate the issue. Although the company has fixed the error, the incident highlights the need for more robust data privacy protections for users of artificial intelligence tools.
The sources for this piece include an article from the BBC.