ChatGPT Faces Major Security Risk After Seven Critical Vulnerabilities Uncovered

Artificial intelligence security vulnerability

Cybersecurity firm Tenable has discovered a suite of seven critical vulnerabilities and attack techniques in OpenAI's ChatGPT, collectively dubbed "HackedGPT." These security flaws could allow malicious actors to access and exfiltrate sensitive user information, including private chat histories and stored "memories," without the user's knowledge. The vulnerabilities were identified in the GPT-4o model, with several also found to persist in the latest GPT-5 model.

The core of the issue lies in a new class of AI exploits known as indirect prompt injection. According to the research, an attacker can embed hidden, malicious instructions within external content, such as websites or documents. When a user instructs ChatGPT to interact with this content—for example, by asking it to summarize a webpage—the AI model can be tricked into executing the hidden commands. This method bypasses many of the built-in safety mechanisms designed to protect users. The discovery of these novel AI vulnerabilities was detailed in a report by researchers Moshe Bernstein and Liv Matan.

This type of attack is particularly insidious because it requires no special action from the user beyond normal interaction with the chatbot. The exploit chain can be triggered by seemingly innocuous activities, exposing the AI to indirect prompt injection attacks that manipulate the model's behavior. An attacker could potentially gain persistent access and discreetly siphon data over time.

Tenable reported its findings to OpenAI, which has since taken steps to remediate some of the identified issues. However, at the time of the disclosure, several vulnerabilities remained unpatched, leaving certain attack vectors open. The research highlights the growing security challenges associated with large language models (LLMs) as they become more integrated with the wider internet, demonstrating how a complex chain attack could lead to significant data theft during routine use. This incident underscores the urgent need for more robust security protocols to defend against AI-specific threats.