Program agentgateway for LLM consumption

In this free, on-demand technical lab, learn how to provision an agentgateway-backed AI gateway to proxy requests to OpenAI and leverage advanced LLM features. The lab walks through the following steps; a rough configuration sketch for each step follows the list.

  • Install kgateway: Enable agentgateway in a local Kubernetes cluster by installing the Gateway API CRDs and the kgateway CRDs, then deploying the kgateway controller.
  • Provision an AI gateway: Deploy an agentgateway instance to serve as a centralized AI gateway for managing LLM traffic.
  • Define a backend for OpenAI: Configure a kgateway backend resource with authentication and default model settings for OpenAI.
  • Configure a route: Create an HTTPRoute that directs requests from the gateway to the OpenAI backend.
  • Test the setup: Use a sample client workload to send requests through the gateway and confirm that calls reach OpenAI without manually specifying credentials.
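
A minimal install sketch for the first step, assuming a local cluster with kubectl and Helm available. The Gateway API release version, the OCI chart locations, the chart version, and the Helm value that enables the agentgateway data plane are assumptions based on a typical kgateway install; verify the exact values against the kgateway documentation the lab uses.

```sh
# Install the Kubernetes Gateway API CRDs (release version is an assumption).
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

# Install the kgateway CRDs, then the kgateway controller, from the project's
# Helm charts. Chart locations, chart version, and the agentgateway toggle are
# assumptions; check the kgateway docs for the values the lab pins to.
helm upgrade -i kgateway-crds oci://cr.kgateway.dev/kgateway-dev/charts/kgateway-crds \
  --namespace kgateway-system --create-namespace --version v2.0.0

helm upgrade -i kgateway oci://cr.kgateway.dev/kgateway-dev/charts/kgateway \
  --namespace kgateway-system --version v2.0.0 \
  --set agentGateway.enabled=true

# Confirm the controller is running.
kubectl get pods -n kgateway-system
```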
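
Provisioning the AI gateway is then a standard Gateway API resource. The agentgateway GatewayClass name, the namespace, and the listener port are assumptions; the Gateway fields themselves are standard Gateway API v1.

```sh
# A Gateway that uses the agentgateway GatewayClass registered by kgateway
# when agentgateway support is enabled (the class name is an assumption).
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agentgateway
  namespace: kgateway-system
spec:
  gatewayClassName: agentgateway
  listeners:
  - name: http
    protocol: HTTP
    port: 8080
    allowedRoutes:
      namespaces:
        from: All
EOF
```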
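
For the OpenAI backend, a sketch of the kgateway Backend resource plus the Secret it references. The Secret key name, the gateway.kgateway.dev/v1alpha1 field layout, and the default model are assumptions and may differ between kgateway releases.

```sh
# Store the OpenAI API key in a Secret the Backend can reference; the gateway
# attaches it to outgoing requests so clients never handle the credential.
kubectl create secret generic openai-secret -n kgateway-system \
  --from-literal=Authorization="Bearer $OPENAI_API_KEY"

# An AI Backend pointing at OpenAI with a default model (schema is an
# assumption; field names vary across kgateway versions).
kubectl apply -f - <<EOF
apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  name: openai
  namespace: kgateway-system
spec:
  type: AI
  ai:
    llm:
      provider:
        openai:
          model: gpt-4o-mini
          authToken:
            kind: SecretRef
            secretRef:
              name: openai-secret
EOF
```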
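
The route is a standard HTTPRoute whose backendRef points at the Backend custom resource instead of a Service. The /openai path prefix and the group/kind values are assumptions carried over from the sketches above.

```sh
# Route requests on the /openai prefix from the gateway to the OpenAI Backend.
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openai
  namespace: kgateway-system
spec:
  parentRefs:
  - name: agentgateway
    namespace: kgateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /openai
    backendRefs:
    - name: openai
      group: gateway.kgateway.dev
      kind: Backend
EOF
```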
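
Finally, a test sketch that sends a chat-completion-style request from a throwaway curl pod. The in-cluster Service address of the gateway is an assumption (kgateway typically creates a Service named after the Gateway). The request body carries neither an API key nor a model name: the Backend injects the credential and the default model.

```sh
# Send a request through the gateway from a temporary client pod. No
# Authorization header and no "model" field are needed; the Backend adds both.
kubectl run curl-client -it --rm --restart=Never --image=curlimages/curl --command -- \
  curl -sS http://agentgateway.kgateway-system.svc.cluster.local:8080/openai \
    -H "Content-Type: application/json" \
    -d '{"messages":[{"role":"user","content":"Say hello in one short sentence."}]}'
```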

Gain hands-on experience managing LLM consumption in Kubernetes, including secure routing, credential management, and advanced features like prompt guards and enrichment.
