It doesn’t take much for a large language model to give you the recipe for all kinds of dangerous things.
With a jailbreaking technique called "Skeleton Key," users can persuade models like Meta's Llama3, Google's Gemini Pro, and OpenAI's GPT-3.5 to hand over the recipe for a rudimentary firebomb, or worse, according to a blog post from Mark Russinovich, chief technology officer of Microsoft Azure.