Understanding limits

Any large-scale cloud deployment needs to be “enterprise-ready,” ensuring both that the end user experience is acceptable and that the business objectives and requirements are met. “Acceptable” is a loose term that can vary by user and workload. To understand how to scale as demand for a service grows, we must first understand the basic limits, such as token limits. We covered these limits for most of the common generative AI GPT models in Chapter 5; however, we will quickly revisit them here.

As organizations scale up their use of an enterprise-ready service, such as Azure OpenAI, they encounter rate limits on how quickly tokens are processed across prompt+completion requests. Each model also caps the number of tokens that can be consumed in a single prompt+completion. It is important to note that the token count used for rate limiting includes both the prompt (the text sent to the AOAI model) and the completion (the response returned by the model), and that the maximum number of tokens allowed in a single prompt+completion varies depending on the GenAI model used.
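
Because the prompt and the completion draw from the same token budget, it helps to count prompt tokens before sending a request. Here is a minimal sketch using the open-source tiktoken library; the prompt text and the 4,096-token limit are illustrative placeholders:

import tiktoken

def count_prompt_tokens(prompt: str, model: str = "gpt-3.5-turbo") -> int:
    """Count how many tokens a prompt consumes under a given model's tokenizer."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(prompt))

prompt = "Summarize the key scaling strategies for an enterprise GenAI service."
used = count_prompt_tokens(prompt)

# Whatever the prompt uses is no longer available for the completion,
# since the limit counts prompt + completion together.
limit = 4096  # illustrative; see Figure 7.2 for per-model limits
print(f"Prompt uses {used} tokens; {limit - used} remain for the completion.")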

You can see your rate limits on the Azure OpenAI overview page or the OpenAI account page. You can also view important rate-limit information, such as the remaining requests, remaining tokens, and other metadata, in the headers of the HTTP response. Please see the reference link at the end of this chapter for details on what these header fields contain.
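
As an illustration, the following sketch calls the OpenAI REST endpoint directly with the requests library and prints the rate-limit headers. The header names shown are the ones OpenAI documents at the time of writing; Azure OpenAI may return a different set, so confirm against the reference link:

import os
import requests

# Assumes an API key in the OPENAI_API_KEY environment variable; the
# endpoint and header names follow OpenAI's public REST API.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)

# Rate-limit metadata is returned in the HTTP response headers.
for header in (
    "x-ratelimit-remaining-requests",
    "x-ratelimit-remaining-tokens",
    "x-ratelimit-reset-requests",
    "x-ratelimit-reset-tokens",
):
    print(header, "=", response.headers.get(header))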

Here are a few token limits for various GPT models:

Model                     Token Limit
GPT-3.5-turbo-0301        4,096
GPT-3.5-turbo-16k         16,385
GPT-3.5-turbo-0613        4,096
GPT-3.5-turbo-16k-0613    16,384
GPT-4                     8,192
GPT-4-0613                8,192
GPT-4-32K                 32,768
GPT-4-32K-0613            32,768
GPT-4-Turbo               128,000 (context) and 4,096 (output)

Figure 7.2 – Token limits for some GenAI models
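
To make these limits actionable in application code, you can validate a request before sending it. The following sketch is our own illustration: the lookup table mirrors a subset of Figure 7.2, and the helper name is invented for this example:

# Token limits from Figure 7.2, keyed by model name (illustrative subset).
MODEL_TOKEN_LIMITS = {
    "gpt-3.5-turbo-0301": 4_096,
    "gpt-3.5-turbo-16k": 16_385,
    "gpt-4": 8_192,
    "gpt-4-32k": 32_768,
    "gpt-4-turbo": 128_000,  # context window; output is capped at 4,096
}

def fits_in_context(model: str, prompt_tokens: int, completion_tokens: int) -> bool:
    """Return True if the prompt plus the requested completion fit the model's limit."""
    return prompt_tokens + completion_tokens <= MODEL_TOKEN_LIMITS[model]

# 6,000 prompt tokens + 3,000 completion tokens exceed GPT-4's 8,192 limit.
print(fits_in_context("gpt-4", 6_000, 3_000))  # False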

While we already discussed prompt optimization techniques earlier in this book, in this chapter we will look at other ways to scale an enterprise-ready cloud GenAI service, so that applications and services that can easily exceed a specific model’s token limits can still scale effectively.

Cloud scaling and design patterns

Now that you have learned about some of the limits imposed by Azure OpenAI and OpenAI in the previous section, we will look at how to overcome them.

Overcoming these limits through a well-designed architecture or design pattern is critical for businesses: it ensures they meet any internal service-level agreements (SLAs) and provide a robust service without excessive latency, or delay, in the user or application experience.

What is scaling?

As we described earlier, limits are imposed on any cloud architecture, just as there are hardware limits on your laptop (the amount of RAM or disk space), in on-premises data centers, and so on. Resources are finite, so we have come to expect these limits, even in cloud services. However, there are a few techniques we can use to overcome them so that we can meet our business requirements and accommodate user behavior and appetite, as sketched below.
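
One widely used technique for absorbing rate limits is to retry throttled requests with exponential backoff. The sketch below is a generic illustration, not any particular library’s API: the function wraps any callable that returns an HTTP response, and a production version would add jitter, logging, and a cap on total wait time:

import time

def call_with_backoff(send_request, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a request that is throttled with HTTP 429, doubling the wait
    between attempts (exponential backoff)."""
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Honor the service's Retry-After header if present; otherwise
        # back off exponentially: 1s, 2s, 4s, 8s, ...
        delay = float(response.headers.get("retry-after",
                                           base_delay * (2 ** attempt)))
        time.sleep(delay)
    raise RuntimeError("Still rate limited after all retries")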
