Prompt Injection

All discussions tagged with this topic

Found 1 discussion

A Hacker News discussion explores whether LLMs and computer-vision models could be made to act on commands hidden in images via steganography, touching on prompt injection, model hallucinations, and AI security.
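The hiding technique at issue is easiest to see with classic least-significant-bit (LSB) steganography. The sketch below is illustrative, not from the discussion: it uses plain Python with a raw byte buffer standing in for pixel data (no image library), and the function names are hypothetical. A text payload is embedded into the low bit of each byte and then recovered.

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least significant bit of each byte.
    A 16-bit length header is written first so extraction knows when to stop."""
    bits = []
    header = len(message).to_bytes(2, "big")
    for byte in header + message:
        for i in range(7, -1, -1):          # most significant bit first
            bits.append((byte >> i) & 1)
    if len(bits) > len(pixels):
        raise ValueError("cover buffer too small for payload")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the low bit
    return out

def extract(pixels: bytearray) -> bytes:
    """Recover the hidden message by reading the LSBs back out."""
    def read_bits(n, offset):
        value = 0
        for i in range(n):
            value = (value << 1) | (pixels[offset + i] & 1)
        return value
    length = read_bits(16, 0)               # length header, bits 0..15
    return bytes(read_bits(8, 16 + j * 8) for j in range(length))

cover = bytearray(range(256)) * 4           # stand-in for raw pixel data
stego = embed(cover, b"ignore previous instructions")
print(extract(stego))                       # b'ignore previous instructions'
```

Because only the lowest bit of each value changes, the altered buffer is visually indistinguishable from the original; the open question in the thread is whether a model would ever decode and obey such a payload rather than simply describing the image.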