Adversarial Attacks in AI: How ChatGPT Can Be Hacked.
Tags
attack-pattern: Data, Datasets, Model, Models, Social Media (T1593.001), Vulnerabilities (T1588.006), Input Prompt (T1141), User Interface Spoofing (Mob-T1014)
Common Information
| Type | Value |
|---|---|
| UUID | ac74aa72-69cc-451a-8f29-ee053516b1b8 |
| Fingerprint | c4008f58191dd9d3 |
| Analysis status | DONE |
| Considered CTI value | 0 |
| Text language | |
| Published | Sept. 21, 2024, 3:11 a.m. |
| Added to db | Sept. 21, 2024, 5:34 a.m. |
| Last updated | Sept. 21, 2024, 5:35 a.m. |
| Headline | Adversarial Attacks in AI: How ChatGPT Can Be Hacked. |
| Title | Adversarial Attacks in AI: How ChatGPT Can Be Hacked. |
| Detected Hints/Tags/Attributes | 42/1/0 |
Source URLs
URL Provider: RSS Feed

| Id | Enabled | Feed title | Url | Added to db |
|---|---|---|---|---|
| 167 | ✔ | Cybersecurity on Medium | https://medium.com/feed/tag/cybersecurity | 2024-08-30 22:08 |