Common Information
| Type | Value |
| --- | --- |
| Value | Craft Adversarial Data |
| Category | Attack-Pattern |
| Type | Mitre-Atlas-Attack-Pattern |
| MISP Type | Cluster |
| Description | Adversarial data are inputs to a machine learning model that have been modified to cause the adversary's desired effect in the target model. Effects can range from misclassification to missed detections to maximized energy consumption. Typically, the modification is constrained in magnitude or location so that a human still perceives the data as unmodified, though human perceptibility may not be a concern depending on the adversary's intended effect. For example, an adversarial input for an image classification task is an image the machine learning model would misclassify but a human would still recognize as containing the correct class. Depending on the adversary's knowledge of and access to the target model, the adversary may use different classes of algorithms to develop the adversarial example, such as [White-Box Optimization](/techniques/AML.T0043.000), [Black-Box Optimization](/techniques/AML.T0043.001), [Black-Box Transfer](/techniques/AML.T0043.002), or [Manual Modification](/techniques/AML.T0043.003) (see the sketch below this table). The adversary may use [Verify Attack](/techniques/AML.T0042) to confirm their approach works if they have white-box or inference API access to the model. This allows the adversary to gain confidence that their attack is effective before deploying it in a live environment, where it may be noticed. They can then use the attack at a later time to accomplish their goals. An adversary may optimize adversarial examples to [Evade ML Model](/techniques/AML.T0015) or to [Erode ML Model Integrity](/techniques/AML.T0031). |
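The description lists several algorithm classes; the following is a minimal sketch of the [White-Box Optimization](/techniques/AML.T0043.000) case using the Fast Gradient Sign Method (FGSM), assuming full gradient access to the target model. The `model`, `image`, and `label` arguments and the `epsilon` budget are illustrative placeholders, not part of the ATLAS entry.

```python
# Minimal sketch: craft an adversarial image via FGSM (white-box access assumed).
# `model`, `image`, `label`, and `epsilon` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def craft_adversarial(model: torch.nn.Module,
                      image: torch.Tensor,   # shape (N, C, H, W), values in [0, 1]
                      label: torch.Tensor,   # shape (N,), true class indices
                      epsilon: float = 0.03) -> torch.Tensor:
    """Perturb `image` along the sign of the loss gradient to induce misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is magnitude-constrained (L-infinity norm <= epsilon),
    # so a human still perceives the input as essentially unmodified.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

An adversary with only inference API access would instead rely on [Black-Box Optimization](/techniques/AML.T0043.001) or [Black-Box Transfer](/techniques/AML.T0043.002), which do not require gradient access.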
| Details | Published | Attributes | CTI Title |
| --- | --- | --- | --- |
| Website | 2024-10-09 | 17 | Resecurity \| Cybercriminals Are Targeting AI Agents and Conversational Platforms: Emerging Risks for Businesses and Consumers |