
  • Plugins, also known as connectors, can integrate with third-party products or services, sometimes even executing tasks on the external service without leaving the chat session. In a large enterprise system, this all occurs in the background, often without the knowledge of the individual executing the prompt. For example, in a customer support chatbot/LLM use case, a plugin can create an incident ticket, such as a ServiceNow ticket, as part of the support interaction. What would happen if the plugin were given free rein and began opening thousands upon thousands of support tickets? This could lead to a service disruption or the DoS attack described earlier. If another user or team then had a legitimate reason to open a critical support ticket, they might be unable to because the service is unavailable. A minimal sketch of one guard against this follows below.
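To make the risk concrete, here is a minimal sketch of a guard around ticket creation, assuming a simple fixed-window limit; FixedWindowLimiter, open_servicenow_incident, and create_support_ticket are hypothetical names, and a real plugin would call the actual ServiceNow API through your framework's client:

import threading
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter guarding calls to a downstream service."""

    def __init__(self, max_calls: int, window_seconds: float) -> None:
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self._lock = threading.Lock()
        self._window_start = time.monotonic()
        self._calls = 0

    def allow(self) -> bool:
        """Return True if another call fits in the current window."""
        with self._lock:
            now = time.monotonic()
            if now - self._window_start >= self.window_seconds:
                # The window has elapsed; start a fresh one
                self._window_start = now
                self._calls = 0
            if self._calls < self.max_calls:
                self._calls += 1
                return True
            return False


def open_servicenow_incident(summary: str) -> str:
    """Hypothetical stand-in for a real ServiceNow API client call."""
    return f"INC{abs(hash(summary)) % 1_000_000:06d}"


# Allow at most 5 tickets per minute from this plugin
limiter = FixedWindowLimiter(max_calls=5, window_seconds=60)


def create_support_ticket(summary: str) -> str:
    """Plugin entry point: fail closed when the rate limit is exceeded."""
    if not limiter.allow():
        raise RuntimeError("Ticket rate limit exceeded; refusing the downstream call")
    return open_servicenow_incident(summary)

Note that the guard fails closed: once the window's budget is spent, the plugin refuses the downstream call rather than queueing it, which keeps a runaway prompt from flooding the ticketing service.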

So, how do you ensure your plugin design is secure and prevent plugins from causing service disruptions?

Important note

Just as there are secure programming guidelines for writing and protecting code, those same guidelines should be followed for plugins. The guidelines vary by programming language and framework, and they are widely published online, so do your due diligence to protect the execution code of your plugins and any downstream services. A good practice, for example, is to rate-limit how much interaction a plugin can have with other systems, that is, to control how often the plugin can call the downstream application. After all, you do not want to inadvertently cause a DoS attack by continually exceeding the processing rates of the downstream application or service, making it unavailable to users. Creating an audit trail for your plugin is also a best practice: the execution code should log all the activity it completes as it runs. This audit log serves a dual purpose. It helps ensure the plugin is executing and completing tasks as it should, in keeping with a secure plugin design, and it can also be used to troubleshoot issues such as slow response times when using the plugin. Sometimes, the output of the plugin, or even of the LLM, can take a long time to process or, worse, result in insecure output, so audit logging can help pinpoint the root cause. A sketch of this kind of logging follows this note.
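As a companion sketch for the note above, the following shows one way to build that audit trail with Python's standard logging module; the audited decorator, the create_ticket action, and the JSON record shape are illustrative assumptions, not a prescribed format:

import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
audit_log = logging.getLogger("plugin.audit")


def audited(action: str):
    """Decorator that writes one JSON audit record per plugin call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "success"
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                status = f"error: {exc}"
                raise
            finally:
                # Record what ran, how it ended, and how long it took
                audit_log.info(json.dumps({
                    "action": action,
                    "status": status,
                    "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                }))
        return wrapper
    return decorator


@audited("create_ticket")
def create_ticket(summary: str) -> str:
    """Hypothetical plugin action; a real one would call the ticketing API."""
    return f"INC{abs(hash(summary)) % 1_000_000:06d}"

Because each record captures both the outcome and the duration of a call, the same log supports verifying that the plugin behaves as designed and troubleshooting slow responses or insecure output.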

We will cover audit logging in the last section of this chapter, but first, let's look at one more threat you should understand to round out your knowledge of security threats against generative AI and LLMs: the threat of insecure output handling.
