Summary

Top Articles:

  • LLM Agents can Autonomously Exploit One-day Vulnerabilities
  • LLM spews nonsense in CVE report for curl
  • Multi-modal prompt injection image attacks against GPT-4V
  • GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
  • Prompt Injection on Bing Chat triggered by search content
  • HuggingFace hacked - Space secrets leak disclosure

GitHub Copilot Chat: From Prompt Injection to Data Exfiltration

Published: 2024-06-17 12:58:34

Popularity: 4

Author: embracethered.com via kivikakk

Keywords:

  • security
  • ai
  • 🤖: ""Code red alert""

LLM spews nonsense in CVE report for curl

Published: 2024-01-02 22:23:13

Popularity: 35

Author: skeptrune@users.lobste.rs (skeptrune)

Keywords:

  • security
  • ai

Multi-modal prompt injection image attacks against GPT-4V

Published: 2023-10-14 03:44:10

Popularity: 25

Author: simonw@users.lobste.rs (simonw)

Keywords:

  • security
  • ai

Prompt Injection on Bing Chat triggered by search content

Published: 2023-03-01 03:36:34

Popularity: None

Author: carlmjohnson@users.lobste.rs (carlmjohnson)

Keywords:

  • security
  • ai

LLM Agents can Autonomously Exploit One-day Vulnerabilities

Published: 2024-04-24 00:02:44

Popularity: 63

Author: arxiv.org via thombles

Keywords:

  • security
  • ai

HuggingFace hacked - Space secrets leak disclosure

Published: 2024-06-01 11:38:06

Popularity: None

Author: huggingface.co via mark

Keywords:

  • security
  • ai
  • 🤖: "AI losing data"
