Common Information
Type Value
Value
LLM Jailbreak
Category Attack-Pattern
Type Mitre-Atlas-Attack-Pattern
Misp Type Cluster
Description An adversary may use a carefully crafted [LLM Prompt Injection](/techniques/AML.T0051) designed to place LLM in a state in which it will freely respond to any user input, bypassing any controls, restrictions, or guardrails placed on the LLM. Once successfully jailbroken, the LLM can be used in unintended ways by the adversary.
Details Published Attributes CTI Title
Details Website 2024-11-24 33 AI System & security
Details Website 2024-10-24 0 AI Security: Can The Friendly Chatbot Hold The Cyber Line?
Details Website 2024-10-23 0 New LLM jailbreak method with 65% success rate developed by researchers
Details Website 2024-10-23 0 New LLM jailbreak method with 65% success rate developed by researchers
Details Website 2024-10-23 0 Deceptive Delight: Jailbreak LLMs Through Camouflage and Distraction
Details Website 2024-09-30 2 Threat-Informed Defense to Secure AI
Details Website 2024-09-05 8 Exploring Large Language Models: Local LLM CTF & Lab
Details Website 2024-08-22 0 Sysdig's AI Workload Security: The risks of rapid AI adoption