How a Handful of Malicious Documents Can Backdoor Massive AI Models

It might seem that poisoning a huge AI model would require corrupting a substantial portion of its training data. However, groundbreaking research reveals this isn’t the case. Experts from Anthropic, ...

Tags: adversarial machine learning, AI safety, AI security, backdoor attacks, data poisoning, large language models, model robustness, research