Prompt Injection

All discussions tagged with this topic

Found 1 discussion
June 9, 2025

A Hacker News discussion explores whether LLMs and computer-vision models could be made to execute commands hidden in images via steganography, touching on prompt injection, model hallucinations, and AI security.
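
To make the hypothesized attack vector concrete: the sketch below hides a short text payload in the least-significant bits of an image's red channel (classic LSB steganography), using the Pillow library. This is a minimal illustration of the general technique, not anything from the thread; the function names are hypothetical. Note that hiding bits is the easy part — a model would still have to be induced to decode and act on the payload, which is the crux of the debate.

```python
# Minimal LSB steganography sketch (hypothetical example, not from the thread).
# Requires Pillow: pip install Pillow
from PIL import Image

def embed_message(image_path: str, message: str, out_path: str) -> None:
    """Hide `message` in the red channel's least-significant bits."""
    img = Image.open(image_path).convert("RGB")
    pixels = list(img.getdata())
    # Encode the message as a bit string, with a null byte as terminator.
    bits = "".join(f"{byte:08b}" for byte in message.encode() + b"\x00")
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the message")
    new_pixels = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite the lowest red bit
        new_pixels.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(new_pixels)
    out.save(out_path, "PNG")  # lossless format, so the low bits survive

def extract_message(image_path: str) -> str:
    """Recover the hidden message by reading the red channel's low bits."""
    img = Image.open(image_path).convert("RGB")
    bits = "".join(str(r & 1) for r, _, _ in img.getdata())
    data = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = int(bits[i:i + 8], 2)
        if byte == 0:  # hit the terminator
            break
        data.append(byte)
    return data.decode(errors="replace")
```

A payload embedded this way is invisible to a human viewer and survives only lossless formats like PNG; JPEG recompression would destroy it. That fragility, plus the need for something in the pipeline to deliberately decode the bits, is one reason commenters were skeptical that a vision model would ever "execute" such a command on its own.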