Google Cloud has recently added GPU support to Cloud Run, integrating Nvidia L4 GPUs with 24 GB of VRAM. This enhancement gives developers and AI practitioners a more efficient and scalable way to run inference for large language models (LLMs).

A Perfect Match for Large Language Models

The integration of GPUs into Cloud Run offers significant benefits for those working with large language models. These models, which demand substantial computational power, can now be served with low latency and fast deployment times. Lightweight open models such as Llama 2 7B, Mixtral 8x7B, Gemma 2B, and Gemma 7B are particularly well suited to this platform, and the Nvidia L4 GPUs let them deliver predictions quickly and efficiently.

Hassle-Free GPU Management

One of the key advantages of GPU support in Cloud Run is the simplicity it offers. With pre-installed drivers and a fully managed environment, there’s no need for additional libraries or complex setups. The minimum instance size required is 4 vCPUs and 16 GB of RAM, ensuring the system is robust enough to handle demanding workloads.

Cloud Run also retains its auto-scaling feature, now applicable to GPU instances: services scale out to as many as five GPU-equipped instances by default (more are available through a quota increase) and scale down to zero when there are no incoming requests. This dynamic scaling optimizes resource usage and reduces costs, as users only pay for what they use. A deployment sketch follows below.
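As a rough illustration, here is a minimal sketch of attaching an L4 GPU to a Cloud Run service using the preview-era gcloud beta flags. The service name, project, and image URL are placeholders, and the exact flags may change as the feature matures:

    # Deploy a container to Cloud Run with one Nvidia L4 GPU attached.
    # GPU services currently require at least 4 vCPUs, 16 GiB of memory,
    # and CPU always allocated (--no-cpu-throttling).
    gcloud beta run deploy my-llm-service \
      --image=us-central1-docker.pkg.dev/my-project/my-repo/my-llm-image \
      --region=us-central1 \
      --cpu=4 \
      --memory=16Gi \
      --gpu=1 \
      --gpu-type=nvidia-l4 \
      --max-instances=5 \
      --no-cpu-throttling

Note that scale-to-zero needs no extra configuration; when the service receives no traffic, Cloud Run simply releases the GPU instances.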

Speed and Efficiency in Every Aspect

Performance is a core aspect of this new offering. Cloud Run instances with an attached L4 GPU start quickly, so applications are serving traffic with minimal delay. This rapid startup is crucial for time-sensitive applications.

Additionally, the low serving latency and fast deployment times make Cloud Run with GPU an attractive option for deploying inference engines and service frontends together. Whether using prebuilt inference engines or custom models trained elsewhere, this setup allows for streamlined deployment and operation, enhancing developer productivity.
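As a concrete sketch, suppose the deployed container runs Ollama serving Gemma 7B (one possible inference engine for this setup, not the only option). Invoking the model is then an ordinary HTTPS request; the service URL below is a placeholder, while /api/generate is Ollama's standard generation endpoint:

    # Send a prompt to a Cloud Run service running Ollama with Gemma 7B.
    # The *.run.app host is a placeholder for the real service URL.
    curl https://my-llm-service-abc123-uc.a.run.app/api/generate \
      -H "Content-Type: application/json" \
      -d '{"model": "gemma:7b",
           "prompt": "Summarize Cloud Run GPU support in one sentence.",
           "stream": false}'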

Cost Efficiency and Sustainability

Cost efficiency is a key consideration alongside performance. Google Cloud Run’s pay-per-use model extends to GPU usage, offering an economical choice for developers. The ability to scale down to zero when not in use helps minimize costs by avoiding charges for idle resources.

The integration of GPUs also supports sustainable practices. By enabling real-time AI inference with lightweight, open-source models like Gemma 2B, Gemma 7B, Llama 2 7B, and Mixtral 8x7B, developers can build energy-efficient AI solutions. Serving custom fine-tuned LLMs on a platform that scales dynamically also reduces environmental impact, making it a responsible choice for modern AI development.

Check out the Cloud Run documentation for more details: https://cloud.google.com/run/docs

Conclusion

Google Cloud Run’s addition of GPU support represents a significant development in cloud-based AI services. By combining the power of Nvidia L4 GPUs with the flexibility and scalability of Cloud Run, developers can build and deploy high-performance AI applications with ease. The preview is available in us-central1, offering a new set of possibilities for those looking to optimize their AI workloads.

In my view, this is probably the start of making LLMs available serverlessly, which could revolutionize the deployment and accessibility of even larger models in the future. This evolution could lead to a new era in AI, where powerful models are readily available and scalable without the need for extensive infrastructure management.

Tags: cloud-run, llm, gpu, serverless
Navveen
