Hallucination

All discussions tagged with this topic

Found 11 discussions

A Hacker News discussion explores whether LLMs and computer-vision models could be made to execute commands hidden in images via steganography, touching on prompt injection, model hallucinations, and AI security.