ChatGPT Data Leak: Risks & Fixes
Security
Aug 24, 2025 6:09 AM

ChatGPT Data Leak: Risks & Fixes

by HubSite 365 about Zenity

AdministratorSecurityM365 AdminLearning Selection

Secure ChatGPT memory exfil: risks and fixes with Azure OpenAI, Microsoft Defender, Purview, Entra and Key Vault

Key insights

  • Memory Exfiltration: A flaw that lets attackers hide false or malicious memories in ChatGPT’s long-term memory so the model later leaks user data without consent.
    It turns the memory feature into a persistent privacy risk across chats.
  • Discovery and Attack Method: Security researcher Johann Rehberger demonstrated the issue using a proof-of-concept.
    Attackers use crafted inputs embedded in websites, emails, or documents to trick the model into storing harmful instructions.
  • Memory Persistence: ChatGPT’s long-term memory keeps saved information across sessions until a user deletes it.
    That persistence lets implanted instructions remain active and cause repeated data leaks over time.
  • Indirect Prompt Injection: The model can absorb commands hidden inside normal content, like webpage text or documents.
    Attackers exploit this to plant instructions that change how the model handles future conversations.
  • Continuous Data Exfiltration: Once compromised, the model can send conversation content to attacker-controlled destinations during later chats.
    This creates ongoing, stealthy theft of sensitive information across multiple sessions.
  • Mitigation Steps: Keep ChatGPT apps updated, regularly review and clear saved memory, and avoid opening unknown links or files that may contain hidden prompts.
    OpenAI released fixes but users and admins should remain vigilant against prompt-injection risks.

Keywords

chatgpt memory exfiltration, chatgpt data leak prevention, chatgpt security risks, ai memory exfil protection, gpt data privacy fixes, chatgpt vulnerability mitigation, prevent ai data exfiltration, secure chatgpt deployment