A technique for removing safety training from language models.

Stub Note

This note is a placeholder. Content to be developed.

Related Concepts

- Jailbreak
- AI Safety
- AI Alignment