As cloud-native technologies continue to evolve, Solo.io’s introduction of the Gloo Gateway API marks a significant advancement in API gateway solutions. Transitioning from Gloo Edge to Gloo Gateway represents more than just a product name change. It offers a Kubernetes-native experience with improved performance and simplified management.
This guide will walk you through the Edge-to-Gateway migration process, focusing on key concepts such as API translation, migration steps, and post-migration considerations.
Gloo Edge has served as a robust API gateway, offering a variety of advanced features. Its performance and scalability are superb, in large part due to architectural decisions that allowed separately scaling the control plane (Gloo engine) from its data plane (Envoy proxy and related data path components, like external auth and rate limiting).
However, Gloo Gateway takes API management to the next level with a Kubernetes-native approach, leveraging Custom Resource Definitions (CRDs) based on the emerging Kubernetes Gateway API standard for streamlined configuration.
API Translation Considerations
It is important to note at the outset that upgrading from older versions of Gloo Edge to the newer Gloo Gateway does NOT require any API translation at all. The older Edge-style VirtualService objects are still supported. However, if you value compliance with the Gateway API standard, then you’ll want to consider these translations. Converting to the new Gateway API standard is considered the best practice moving forward.
This section provides guidance and an example of API translation based on where you are applying policies: at the individual route level, the virtual host level, or the gateway / listener level.
Route Options
In Gloo Edge, routing is defined using Virtual Services. In Gloo Gateway, this functionality is managed by HTTPRoute and TCPRoute CRDs. Key route options include:
- Path Matching: Define precise routing based on URL paths, query parameters, or headers.
- Rewrite Rules: Modify request paths or headers before routing.
- Traffic Splitting: Distribute traffic among multiple backend services or versions.
The example below illustrates a common configuration change when moving from the Gloo Edge API to the Gateway API in v1.17+. We’ll replace an Edge API VirtualService with a Gateway API standard HTTPRoute plus a RouteOption.
Our goal is to declare a policy that will split traffic across two versions of my-workload on virtual host api.example.com. We will accept requests from the gateway with paths matching /api/my-org/my-workload/* and will rewrite those requests to have the path /my-workload/* before routing them to one of the upstream service versions. If the request is received with a header version: v2, then we’ll route to version 2 of the service. Otherwise, we’ll route to version 1. In either case, we’ll apply an authNZ policy to the request, with different policies applicable for each version.
Let’s begin with the Edge API VirtualService.
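Here is a minimal sketch of that Edge-style VirtualService. The Upstream names (my-workload-v1, my-workload-v2) and AuthConfig names are illustrative placeholders, and the example assumes Gloo Edge has already discovered Upstreams for both workload versions:

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: my-workload
  namespace: gloo-system
spec:
  virtualHost:
    domains:
      - api.example.com
    routes:
      # Requests carrying the header version: v2 go to version 2 of the service.
      - matchers:
          - prefix: /api/my-org/my-workload
            headers:
              - name: version
                value: v2
        options:
          prefixRewrite: /my-workload
          extauth:
            configRef:
              name: my-workload-v2-authconfig   # illustrative AuthConfig name
              namespace: gloo-system
        routeAction:
          single:
            upstream:
              name: my-workload-v2
              namespace: gloo-system
      # All other matching requests fall through to version 1.
      - matchers:
          - prefix: /api/my-org/my-workload
        options:
          prefixRewrite: /my-workload
          extauth:
            configRef:
              name: my-workload-v1-authconfig
              namespace: gloo-system
        routeAction:
          single:
            upstream:
              name: my-workload-v1
              namespace: gloo-system
```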
The configuration above shows an example of an Edge-style VirtualService that includes URL path-matching plus header matching, rewriting a request path, and then traffic splitting across two versions of the same service. In addition, we’ll assume that we need to change authorization policies between service versions, and we accounted for that with extAuth stanzas pointing to different authNZ configurations.
The configuration below illustrates how this Edge VirtualService would be converted to the Gateway API, with a standard HTTPRoute and route-specific policy injected with a RouteOption.
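This is a hedged sketch of the equivalent Gateway API resources. The Gateway name (http), namespaces, Service names, and port are illustrative assumptions for this example:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-workload
  namespace: gloo-system
spec:
  parentRefs:
    - name: http                  # illustrative Gateway managed by Gloo Gateway
  hostnames:
    - api.example.com
  rules:
    # Requests carrying the header version: v2 go to version 2 of the service.
    - matches:
        - path:
            type: PathPrefix
            value: /api/my-org/my-workload
          headers:
            - name: version
              value: v2
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /my-workload
        - type: ExtensionRef      # attaches the Gloo RouteOption below
          extensionRef:
            group: gateway.solo.io
            kind: RouteOption
            name: my-workload-v2-opts
      backendRefs:
        - name: my-workload-v2
          port: 8080
    # All other matching requests fall through to version 1.
    - matches:
        - path:
            type: PathPrefix
            value: /api/my-org/my-workload
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /my-workload
        - type: ExtensionRef
          extensionRef:
            group: gateway.solo.io
            kind: RouteOption
            name: my-workload-v1-opts
      backendRefs:
        - name: my-workload-v1
          port: 8080
---
apiVersion: gateway.solo.io/v1
kind: RouteOption
metadata:
  name: my-workload-v2-opts
  namespace: gloo-system
spec:
  options:
    extauth:
      configRef:
        name: my-workload-v2-authconfig
        namespace: gloo-system
---
apiVersion: gateway.solo.io/v1
kind: RouteOption
metadata:
  name: my-workload-v1-opts
  namespace: gloo-system
spec:
  options:
    extauth:
      configRef:
        name: my-workload-v1-authconfig
        namespace: gloo-system
```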
Gloo Gateway has been in production use for many years now. More sophisticated policies like external authorization and rate limiting require capabilities beyond the current scope of the Gateway API standard. The good news is that while the standard does not yet offer these capabilities, it delivers extensibility by allowing vendors to define their own Kubernetes CRDs that specify these policies. A standards-compliant implementation can then include these optional policies using mechanisms like the ExtensionRef filter to attach a RouteOption, as shown in the listing above. This allows Gloo Gateway users to leverage Solo’s long history of innovation in areas like transformations and GraphQL support, while still coloring inside the lines of the new standard.
Another bit of good news is that the core policy CRDs from Gloo Edge are usable unchanged between older Edge versions and newer Gateway versions. These include the popular resources for external authorization, like AuthConfig, and for rate limiting, like RateLimitConfig.
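For example, an AuthConfig written for Gloo Edge carries over verbatim. The sketch below uses a simple API-key check with illustrative names and labels; the same resource can be referenced from an Edge VirtualService or from a Gateway RouteOption:

```yaml
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: my-workload-v2-authconfig
  namespace: gloo-system
spec:
  configs:
    # Accept requests that present a valid API key in the api-key header.
    - apiKeyAuth:
        headerName: api-key
        labelSelector:
          team: my-org        # selects the Kubernetes Secrets that hold valid keys
```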
Virtual Host Options
It’s important to allow fine-grained policy specification down to the individual route level, as we demonstrated in the previous section. But what if we want to incorporate these configurations at higher levels, say at the virtual host or even the entire gateway level? We’ll explore those topics next.
At the virtual host level, it is common for individual organizations within a larger enterprise to have separate requirements for authNZ. Snyk is a great example of this: they have used Gloo Edge to normalize declarative authNZ policies across business lines. But many organizations aren’t ready for that level of sophistication yet. As an intermediate step beyond the route-level policies we showed earlier, they often want abstractions at the virtual host level, which support more compact expressions of policy that span multiple routes.
Virtual host-level options in Gloo Gateway are typically managed through VirtualHostOption objects. In the Gloo Edge API, these would have commonly been managed at the virtualHost level of a VirtualService resource.
Key virtual host options include:
- Host Matching: Specify domain names or host headers for routing decisions.
- TLS Termination: Configure SSL/TLS certificates for secure traffic handling.
- Custom Headers: Add or modify HTTP headers for requests and responses.
You can learn more about specifying VirtualHostOptions in the Gloo Gateway product documentation.
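As a sketch of the idea, a VirtualHostOption attaches host-wide policy to a listener on a Gateway. The Gateway name, listener sectionName, and AuthConfig reference below are illustrative, and attachment field names can vary slightly across Gloo Gateway releases:

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualHostOption
metadata:
  name: api-example-com-opts
  namespace: gloo-system
spec:
  # Attach to every host served by the https listener on this Gateway.
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: http
      sectionName: https
  options:
    extauth:
      configRef:
        name: my-org-authconfig
        namespace: gloo-system
    headerManipulation:
      requestHeadersToAdd:
        - header:
            key: x-gateway
            value: gloo-gateway
          append: false
```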
Gateway / Listener Options
Sometimes organizations value the ability to deploy rapid policy changes at the outermost gateway level, so they are completely independent of any application changes happening deeper in the network. Such responsiveness is critical when responding to a zero-day exploit like the infamous Log4Shell attack of 2021. In cases like this, you want to quickly deploy policies like a Web Application Firewall (WAF) to detect malicious request patterns and reject them before they ever penetrate to the application level.
These sorts of global strategies are facilitated by applying policy at the gateway or listener level. These would have been applied at the Gateway level in Gloo Edge. Now they are realized in Gloo Gateway’s Gateway CRD and its companion ListenerOption, HTTPListenerOption, and TCPListenerOption CRDs.
HTTP listeners that were configured in Gloo Edge are managed through Gateway CRDs in Gloo Gateway. Key options include:
- Port Configuration: Define the ports on which the gateway listens for HTTP and HTTPS traffic.
- Protocol Handling: Specify protocols such as HTTP, HTTPS, or gRPC.
- Rate Limiting: Apply rate limits to manage traffic and prevent abuse.
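To make the WAF scenario described above concrete, here is a rough sketch using an HttpListenerOption (a Gloo Gateway Enterprise capability). It assumes WAF settings are available as an HTTP listener option in your version, attachment field names may vary slightly by release, and the rule shown is the widely published Log4Shell detection pattern for illustration rather than a vetted production rule set:

```yaml
apiVersion: gateway.solo.io/v1
kind: HttpListenerOption
metadata:
  name: waf-log4shell
  namespace: gloo-system
spec:
  # Apply to every HTTP listener on this Gateway.
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: http
  options:
    waf:
      customInterventionMessage: 'Log4Shell exploit attempt detected'
      ruleSets:
        - ruleStr: |
            SecRuleEngine On
            SecRequestBodyAccess On
            SecRule REQUEST_LINE|ARGS|ARGS_NAMES|REQUEST_COOKIES|REQUEST_COOKIES_NAMES|REQUEST_BODY|REQUEST_HEADERS|XML:/*|XML://@* "@rx \${jndi:(?:ldaps?|iiop|dns|rmi)://" "id:1000,phase:2,deny,status:403,log,msg:'Potential Log4Shell (CVE-2021-44228) exploit'"
```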
Migration Steps
API changes are often the first step users consider in an Edge-to-Gateway migration initiative. Those are critical and often top-of-mind for engineers executing these changes. That’s why we considered them first.
But there are other factors to account for in these migrations as well. We’ll explore those factors in this section. The good news is that other than the optional API translation, these other steps follow along with best operational practices that have been around since Gloo Edge days.
1. Install Gloo Gateway
Start by installing Gloo Gateway in your Kubernetes cluster. Follow the official installation guide to deploy the necessary components and CRDs. Fans of GitOps platforms like ArgoCD may want to consider incorporating management of Gloo components into their GitOps strategy.
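For example, if you already manage the cluster with ArgoCD, an Application along these lines can own the Gloo Gateway Helm release. The chart location and the kubeGateway.enabled flag reflect the open-source Gloo Gateway 1.17 install docs, but treat the version pin and values as placeholders for your environment, and note that the Kubernetes Gateway API CRDs themselves are installed separately:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://storage.googleapis.com/solo-public-helm
    chart: gloo
    targetRevision: 1.17.0        # pin to the release you are rolling out
    helm:
      values: |
        kubeGateway:
          enabled: true           # turns on the Kubernetes Gateway API integration
  destination:
    server: https://kubernetes.default.svc
    namespace: gloo-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```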
2. Translate Gloo Edge Configurations
Map your Gloo Edge configurations to Gloo Gateway resources:
- Routes: Convert Virtual Services to HTTPRoute or TCPRoute resources, incorporating route options like path matching and traffic splitting where necessary.
- Virtual Hosts: Translate these to Gateway and VirtualHostOption resources, setting up host matching, TLS termination, and custom headers.
- Listeners: Define HTTP listeners using Gateway configurations and listener options (HTTPListenerOption, TCPListenerOption), specifying ports, protocols, and rate limits.
Note that many of the configurations for advanced capabilities like AuthConfig for external authorization and RateLimitConfig for rate limiting are 100% unchanged in moving from Edge to Gateway. It’s only the mechanism by which they’re incorporated into routing policies, say using RouteOptions or VirtualHostOptions within an HTTPRoute, where you’ll see differences from the Edge approach.
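As an illustration, a RateLimitConfig like the sketch below (the descriptor key, limit, and names are placeholders) is declared the same way in Edge and Gateway; in Gloo Gateway you would simply reference it from a RouteOption or VirtualHostOption via options.rateLimitConfigs:

```yaml
apiVersion: ratelimit.solo.io/v1alpha1
kind: RateLimitConfig
metadata:
  name: my-workload-per-minute
  namespace: gloo-system
spec:
  raw:
    descriptors:
      # Allow 100 requests per minute for this counter.
      - key: generic_key
        value: my-workload
        rateLimit:
          requestsPerUnit: 100
          unit: MINUTE
    rateLimits:
      - actions:
          - genericKey:
              descriptorValue: my-workload
```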
3. Deploy and Test
Deploy the new configurations in a staging environment. Validate the following:
- Routing: Ensure that routes are correctly mapped and traffic is directed as expected.
- Virtual Hosts: Verify that virtual host settings, including TLS and headers, are correctly applied.
- Listeners: Check that listeners are properly configured for the desired ports and protocols.
4. Production Rollout
Once validated, update DNS records to point to the Gloo Gateway proxy and monitor traffic to ensure a smooth transition. See the Gloo Gateway Operations and Observability guides for more helpful ideas on managing production deployments.
Post-Migration Considerations
Observability and Monitoring
Leverage Gloo Gateway’s observability features to monitor traffic and performance. Set up OpenTelemetry, Prometheus and Grafana for real-time metrics and dashboards.
Security Enhancements
Utilize Gloo Gateway’s advanced security features, including TLS termination, external authorization, JWT validation, and comprehensive access controls, to bolster your security posture.
Continuous Management
Consider adopting GitOps practices, using tools like ArgoCD and Argo Rollouts, for continuous deployment and management of your Gloo Gateway configurations. Use Helm charts or Kubernetes Operators to streamline updates and maintenance.
Conclusion and Next Steps
Migrating from the Gloo Edge API to the Gloo Gateway API brings numerous benefits, including a more Kubernetes-native architecture and enhanced capabilities. With careful planning and an understanding of the API differences in areas like route options, virtual host options, and listener-level options, you can ensure a smooth and efficient transition.
Do you require additional support in managing an upgrade to Gloo Gateway v1.17, with Gateway API support? Solo offers a wide range of options, all the way from free community support to enterprise support from Solo customer success engineers to dedicated professional services.
To better understand the fundamentals of the tech we’ve discussed here, check out this blog for a step-by-step tutorial through open-source Gloo Gateway featuring the Kubernetes Gateway API.
Learn more about the enterprise versions and support options for Gloo Gateway by requesting a live demo here or a trial of the enterprise product here.