Emerging security threats – a look at attack vectors and future challenges An attack vector in cyber security is a pathway or method used by […]
Understanding limits
Jailbreaks and prompt injections 2 – Security and Privacy Considerations for Gen AI – Building Safe and Secure LLMs
Fortunately, with protections and guardrails in place in many public services that process generative AI prompts, such as Bing Chat, the malicious actor who is […]
Jailbreaks and prompt injections – Security and Privacy Considerations for Gen AI – Building Safe and Secure LLMs
Jailbreaks and prompt injections Both jailbreaks and direct/indirect prompt injections are another attack against LLMs. These two types of attacks are very closely related; with […]