-
![](https://assets.chaos.social/accounts/avatars/000/293/781/original/781f2844ec3f7f75.png)
@ 𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕
2025-02-12 07:40:15
New hack uses prompt injection to corrupt Gemini’s long-term memory:
There's yet another way to inject malicious prompts into chatbots.
In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. […]
🤖 https://arstechnica.com/security/2025/02/new-hack-uses-prompt-injection-to-corrupt-geminis-long-term-memory/
#google #ai #gemini #hacking #chatbot #openai #chatgpt #injection #memory