Thomas Ricardo - Cyber Analyst Reporter
Cyber News Centre
March 27, 2023


On March 20, ChatGPT, an AI-driven language model developed by OpenAI, experienced an outage followed by issues with making conversation history accessible to users. This incident raised concerns about the potential cyber leakage of sensitive user data and the overall security of AI-based platforms. 

ChatGPT owner OpenAI says it has fixed a bug that caused a “significant issue” in which a small set of users were able to see the titles of other users’ conversation history with the viral chatbot.

As a result of the fix, users could not access their chat history on March 20, Chief Executive Sam Altman said in a tweet on Wednesday.

Impacts of the ChatGPT Outage

The outage disrupted the user experience, causing inconvenience to those relying on the AI language model for personal, educational, or professional purposes. The inability to access conversation history raised concerns about the security of user data, potentially eroding trust in the platform. The incident has also drawn renewed criticism and attention to the broader issues of AI platform reliability and ethics, and to the need for robust security measures to protect user data.

The Glitch and Concerns About User Privacy

A recent glitch in the system has raised concerns about the extent to which OpenAI has access to user chats and how the company handles this information. The glitch inadvertently exposed some users' conversation history, leading to fears that private information could be released through the tool.

User Concerns

Following the outage, users expressed a range of concerns, primarily around data privacy: many worried about the potential exposure of sensitive information such as personal details and confidential business data. The incident also raised questions about the stability and reliability of ChatGPT and similar AI-driven platforms, prompting some users to reconsider their dependence on such tools.

Finally, users sought clear communication from OpenAI regarding the cause of the outage and the steps taken to resolve the issue.

OpenAI's Response and Future Steps

In response to the concerns, OpenAI has acknowledged the glitch and assured users that it is taking steps to address the issue. The company has reiterated that it only uses anonymized data for training purposes, with PII removed to ensure user privacy. OpenAI is also committed to transparency, regularly updating its privacy policy and data usage guidelines to provide users with a clear understanding of how their information is being used.

The company has taken additional steps to prevent this from happening again, including adding redundant checks to library calls. It says it has "programmatically examined our logs to make sure that all messages are only available to the correct user" and "improved logging to identify when this is happening and fully confirm it has stopped." The company says it has also reached out to alert affected users of the issue.
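OpenAI has not published the details of its fix, but the quoted measures amount to enforcing a per-user ownership check on every read of conversation data. The sketch below is a hypothetical Python illustration of that kind of redundant check, not OpenAI's code; names such as fetch_conversation and ConversationStore are assumptions made for the example.

```python
# Minimal sketch of a redundant per-user ownership check on message retrieval.
# All names here are hypothetical; this is illustrative, not OpenAI's implementation.

from dataclasses import dataclass


@dataclass
class Conversation:
    conversation_id: str
    owner_id: str
    title: str


class ConversationStore:
    """In-memory stand-in for a conversation database or cache."""

    def __init__(self) -> None:
        self._conversations: dict[str, Conversation] = {}

    def add(self, conversation: Conversation) -> None:
        self._conversations[conversation.conversation_id] = conversation

    def get(self, conversation_id: str) -> Conversation | None:
        return self._conversations.get(conversation_id)


def fetch_conversation(store: ConversationStore, conversation_id: str,
                       requesting_user_id: str) -> Conversation:
    """Return a conversation only if it belongs to the requesting user."""
    conversation = store.get(conversation_id)
    if conversation is None:
        raise KeyError(f"unknown conversation {conversation_id}")
    # Redundant ownership check: even if an upstream cache or library call
    # returned the wrong record, it is rejected before reaching the user.
    if conversation.owner_id != requesting_user_id:
        raise PermissionError("conversation does not belong to this user")
    return conversation


if __name__ == "__main__":
    store = ConversationStore()
    store.add(Conversation("c1", owner_id="alice", title="Travel plans"))

    print(fetch_conversation(store, "c1", "alice").title)  # returned to its owner
    try:
        fetch_conversation(store, "c1", "bob")  # another user's request is blocked
    except PermissionError as err:
        print(f"blocked: {err}")
```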

The ChatGPT outage highlights the need for strong security measures and incident management strategies for AI-based platforms, as users entrust these tools with sensitive information. OpenAI's proposed measures aim to address user concerns and prevent future incidents of cyber leakage.

OpenAI's Ambitious Master Plan

Nearly a dozen companies debuted ChatGPT plugins in conjunction with the feature’s launch. Among them is Instacart Inc., which has created a tool that allows the chatbot to order food from grocery stores. Expedia Group Inc. is using ChatGPT to help users craft travel plans, while OpenTable Inc. will deliver AI-generated restaurant suggestions.

OpenAI is working to map out the potential risks associated with the new plugin feature. According to the startup, its engineers have carried out a number of tests to determine how plugins could potentially be misused. It’s also inviting outside researchers to contribute feedback. 

The launch of the plugin feature comes a few days after OpenAI debuted its newest machine learning model. GPT-4, as the model is called, is a more advanced version of the neural network on which ChatGPT is based. It’s described as being more adept at complicated tasks such as solving mathematical problems.
