

A newly discovered prompt injection attack threatens to turn ChatGPT into a cybercriminal’s best ally in the data theft business. Dubbed AgentFlayer, the exploit hides “secret” prompt instructions targeting OpenAI’s chatbot inside a single document. A malicious actor could simply share the seemingly harmless document with their victim via…