Understanding and mitigating security risks in generative AI

If you are a user of generative AI and LLMs such as ChatGPT, whether as an individual or as an organization planning to adopt LLMs in your applications, there are security risks you should be aware of.

According to CNBC in 2023, “Safety has emerged as a primary concern in the AI world since OpenAI’s release late last year of ChatGPT.”

The topic of security within AI is so relevant and critical that when ChatGPT went mainstream, US White House officials in July 2023 asked seven of the top artificial intelligence companies for voluntary commitments in developing AI technology: Microsoft, OpenAI, Google (Alphabet), Meta, Amazon, Anthropic, and Inflection. The commitments were part of an effort to ensure AI is developed with appropriate safeguards while not impeding innovation. They included the following:

  • Developing a way for consumers to identify AI-generated content, such as through watermarks
  • Engaging independent experts to assess the security of their tools before releasing them to the public
  • Sharing information on best practices and attempts to get around safeguards with other industry players, governments, and outside experts
  • Allowing third parties to look for and report vulnerabilities in their systems
  • Reporting the limitations of their technology and providing guidance on the appropriate uses of AI tools
  • Prioritizing research on societal risks of AI, including discrimination and privacy
  • Developing AI with the goal of helping mitigate societal challenges such as climate change and disease

“It will take some time before Congress can pass a law to regulate AI,” the US Commerce Secretary, Gina Raimondo, stated; however, she called the pledge a “first step,” and an important one.

“We can’t afford to wait on this one,” Raimondo said. “AI is different. Like the power of AI, the potential of AI, the upside and the downside is like nothing we’ve ever seen before.”

Fortunately, the benefits of using a large hyperscale cloud service such as Microsoft Azure are plentiful, as some of the security “guardrails” are already in place. We will cover these guardrails later in this chapter in the Applying Security Controls For Your Organization section.

This is not to say that ChatGPT or other LLMs are unsafe or insecure. As with any product or service, bad actors will try to find and exploit vulnerabilities for their own gain, and you, as the reader, need to understand that security is a required component of your journey to understanding or using generative AI. Security is not optional.

Note also that while the major companies listed previously (as well as others) have committed to ensuring AI is developed with safeguards in place, this is a shared responsibility. While the cloud does provide some security benefits, it bears repeating: security is always a shared responsibility. That is, while a cloud service may have some security in place, it is ultimately your responsibility to follow the security best practices identified by the cloud vendor, and to understand and follow best practices for any specific LLMs you integrate into your applications and services.

A useful analogy for shared responsibility: if you park your car in a secure parking lot, with attendants and security gates to limit access, you would still lock your car when you leave it unattended. The manufacturer has built certain security precautions into the vehicle, such as door locks, but you must take action and actually lock the doors to secure any personal belongings inside. Both you and the automobile manufacturer share the responsibility of securing your vehicle.

You own your car and any contents inside it, so you lock it up. Likewise, you own your own data (prompts and completions), so you should ensure it is protected and secured, while the cloud vendor (the parking attendant in our analogy) also helps protect your data, and others’ data, through appropriate safeguards.

Much like parking attendants protecting parked cars, cloud-based services such as OpenAI/Azure OpenAI include safety and privacy mechanisms to protect you and/or your organization.
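
As a concrete illustration, here is a minimal Python sketch (using the openai v1.x SDK) of calling an Azure OpenAI deployment and dumping the raw response, which is where the service’s built-in content-filter annotations surface. The endpoint, key, and deployment names are placeholder assumptions, and the exact annotation fields vary by API version, so check the service documentation for the current response schema.

```python
# Minimal sketch: calling an Azure OpenAI deployment and inspecting the raw
# response, where the service attaches its built-in safety annotations.
# Endpoint, key, and deployment names below are illustrative assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # never hardcode keys
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Summarize our security policy."}],
)

# Dump the full payload; Azure OpenAI includes content-filter results
# (categories such as hate, self-harm, sexual, and violence) alongside the
# completion, so you can log or act on them in your application.
print(response.model_dump())
```

Inspecting these annotations, rather than ignoring them, is one practical way to take your half of the shared responsibility described above.
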

As with any technology, generative AI can accelerate amazing solutions and innovation for some of the most complex problems, yet it can also be exploited to create problems. Users can overshare personal or sensitive information with OpenAI through ChatGPT, or follow poor security practices, such as not using a strong, unique password for their ChatGPT account. Malicious actors look for exactly these kinds of opportunities.
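
As one illustrative mitigation for oversharing, the following minimal Python sketch redacts a few obvious categories of personal data from a prompt before it ever leaves your application. The regular expressions are simplistic, hypothetical examples, not a production-grade PII scrubber; real deployments typically rely on a dedicated data loss prevention (DLP) service.

```python
# Minimal sketch: redact obvious personal data from a prompt before sending
# it to an LLM. The patterns are illustrative assumptions only, not a
# complete or production-ready PII detector.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched personal data with a category placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com, SSN 123-45-6789, call 555-010-4477"))
# -> Email [EMAIL REDACTED], SSN [SSN REDACTED], call [PHONE REDACTED]
```
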

In the next section, we will take a deeper look at potential cybersecurity threats against a generative AI cloud-based service, and then at the steps we can take to reduce our attack surface against those threats.
