Organizations are modernizing their application and infrastructure stacks, often taking advantage of containers, Kubernetes, and related cloud-native technologies to accelerate the delivery of their APIs and services. If you are doing that work, it does not make sense to adopt an API management technology that was built with (and for) technology that is 15 or more years old. Yet a common occurrence in our industry is watching bloated or outdated vendors refashion their legacy products to fit new paradigms.
At Solo.io, we’ve adopted proxy technology built with this generation of cloud-native problems in mind. Envoy proxy, the de-facto standard for building L7-based solutions today, underpins our Gloo Gateway product and provides a number of benefits over legacy API management vendors such as Apigee, including:
- Better efficiency, cost, and performance
- Scalability, multi-cloud capabilities, and a platform-agnostic approach
- Support for highly dynamic, ephemeral workloads
- Designed for GitOps and platform engineering
- Built with service mesh in mind
We frequently replace Apigee installations at major Fortune 100 organizations with our API gateway, especially at those with strict compliance and regulatory oversight (financial services, telecoms, and the like). If you’re looking to modernize your API infrastructure, you should wholeheartedly steer away from Apigee (even if you don’t choose Solo.io!).
Let’s take a closer look at five reasons why.
1. Better Efficiency and Better Performance
A common observation when we replace Apigee is that organizations running it at any real scale have provisioned an immense amount of resources to do so. Anyone currently running Apigee (and reading this) will know exactly what I’m talking about. For the most basic installation (i.e., a “Hello World” installation), according to their documentation, you will need at least 10 virtual machines with 60 cores and 120 GB of RAM. If you want any kind of high availability, you have to set up the Cassandra backend with multiple peers/replicas across multiple virtual machines, with appropriate storage, monitoring, and more. This is not trivial.
To use a real-world example: a large financial institution that replaced Apigee with Gloo found that it could run Gloo on one quarter of the hardware it had dedicated to Apigee and get five times more throughput (note that the savings and ROI across the board are much higher when you factor in the insane Apigee pricing).
When it comes to performance, the comparison wasn’t even close. The aforementioned financial institution is very sensitive to latency spikes in its API traffic; it must remain within certain high-water marks or risk compliance and regulatory penalties. P99 latency as measured with Apigee was in the 200ms range, likely because the Apigee message processor is a Java-based system (complete with stop-the-world GC pauses!). No matter what tuning they tried, they couldn’t get much better. With Gloo, we were able to get P99 to around 10ms even with very complicated auth and rate-limit flows and configurations.
The bottom line: Apigee wasn’t built for the type of scale, performance, and resource utilization you expect out of cloud-native infrastructure.
2. Run Where You Need To, Not Where Apigee Tells You To
Another common roadblock for organizations looking to adopt Apigee for their cloud-native infrastructure is the perception that you are forced to use Google Cloud as the backing management plane, even if that is not a cloud your organization wishes to adopt. Apigee can, in theory, run its data plane (the message processing gateway) on any infrastructure, but it must communicate back to the management plane, which runs in Google Cloud (for the SaaS version). For many organizations this is not ideal.
Gloo Gateway was designed from the outset not to be tied to any particular cloud or workload infrastructure (or to all of them!). In many cases, our customers run across on-premises environments as well as their favorite public cloud (often AWS or Azure). Gloo Gateway was also built for a decentralized deployment of its data plane, unlike Apigee, which tends to favor centralized deployments that force unnatural traffic patterns, such as API gateway hairpinning, into the architecture.
3. High Dynamism with Ephemeral Workloads
Apigee was built originally for a world of static virtual machines and physical servers set behind complex load balancers and L3 firewalling technology. With containers deployed into Kubernetes, this networking model has been turned upside down in favor of more cloud-native friendly networking.
For example, with containers and Kubernetes, Pods can frequently come up, go down, become unhealthy, and restart, causing endpoint churn and other issues that Apigee cannot deal with. In Kubernetes, we tend to favor policies based on labels and workload identities (as opposed to IP addresses). Tasks like fine-grained traffic routing (such as blue-green deployments or A/B testing) become very difficult to do with Apigee, but are table stakes for any cloud-native API gateway; see the sketch below. Gloo Gateway has endpoint service discovery built in, and since it’s built on Envoy proxy, it supports these highly dynamic and ephemeral environments from the ground up.
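As a rough illustration, here is what a label-driven traffic split can look like with the Kubernetes Gateway API, which Gloo Gateway implements. This is a minimal sketch only; the gateway name, hostname, Service names, and weights are hypothetical and would be adapted to your environment:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews-split
  namespace: apps
spec:
  parentRefs:
    - name: http-gateway        # hypothetical Gateway this route attaches to
  hostnames:
    - "api.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /reviews
      backendRefs:
        # Blue-green / A-B: shift traffic between label-selected Services, not IPs
        - name: reviews-v1
          port: 8080
          weight: 90
        - name: reviews-v2
          port: 8080
          weight: 10
```

Because the routing targets are Services (i.e., label selectors) rather than IP addresses, the split keeps working as Pods churn underneath it.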
Apigee has tried to cloud-wash its technology by adding an “Envoy proxy” adapter, but what we hear from prospects and customers who have tried to adopt it is that it’s practically useless: it does not do any of the heavy-lifting API policy enforcement they expect from an API gateway. It was presumably added to “check the box,” but it doesn’t provide much.
4. The GitOps and Platform Engineering Fit
I recently wrote a blog about how Full-Lifecycle API Management is Dead and how the lifecycle of any API (which is, after all, implemented in software) should be tied into the software development lifecycle, rather than have its own diverged lifecycle built around outdated or proprietary tooling and specs. This continues to hold true and is evident in the way our customers choose to adopt an API gateway to fit their modernization efforts. For example, Gloo Gateway’s configuration model is built on a declarative, reconciliation-loop model that ties in nicely with tools like Git, ArgoCD, Flux, ArgoRollouts, and more.
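To make that concrete, a GitOps setup might keep the gateway’s declarative resources in a Git repository and let a controller reconcile the cluster against it. A minimal sketch using an Argo CD Application follows; the repository URL, path, and destination namespace are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gateway-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gateway-config.git   # hypothetical repo of routes and policies
    targetRevision: main
    path: environments/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: gloo-system
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert out-of-band changes back to the Git-declared state
```

Because the gateway’s routes and policies are plain Kubernetes resources, they ride the same review, promotion, and rollback workflow as the application code they front.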
Tools like Apigee, with proprietary configuration formats and clunky UIs, force more silos into an organization, are difficult to automate, and slow down software and API delivery pipelines. Tools like these should be avoided in favor of automation driven by GitOps and other platform tools.
5. Service Mesh in Mind
Last but not least, service mesh and API gateways have been converging, and for good reason. API gateways have historically been deployed and managed as large, monolithic, centralized systems. Cloud-native architectures tend to favor more decentralized, highly dynamic deployment patterns, and this is where service mesh fits in. Can we push security, routing, and networking policies closer to the applications with a service mesh, in a way that we cannot with a centralized API gateway? Can we get secure (mTLS-based) communication from the API gateway into the graph of services?
Yes, we can, and it’s more efficient (fewer load balancers, less network hopping, no hairpinning, etc.) with an API gateway plus a service mesh. API gateways still play an important role as the front door to a service estate, but a service mesh can take on a lot of the heavy lifting deep within a graph of services and API calls.
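For example, in an Istio-based mesh (the kind Solo.io builds on), requiring mTLS for everything behind the gateway can be a single declarative policy. A minimal sketch, assuming a hypothetical apps namespace holding the services behind the gateway:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: apps        # hypothetical namespace for the services behind the gateway
spec:
  mtls:
    mode: STRICT         # only mutual-TLS traffic is accepted between workloads in this namespace
```

With the gateway itself running as a mesh workload, traffic from the front door all the way into the service graph is encrypted and identity-aware without per-service plumbing.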
Future-proof Your API Management
Apigee and other legacy API gateways (Kong, 3scale, Layer7, etc.) were built on outdated technologies and are not fit for modern cloud-native architectures. These vendors have invested millions of dollars in clever marketing in an attempt to cloud-wash their products and convince you that technology built 15 or more years ago is somehow the right fit for your next-generation (5+ year future) architectures. Organizations evaluating Apigee today as a possible fit for their modernization should also factor in the risk of the inevitable rip-and-replace once they hit the inefficiencies and cloud-native mismatches outlined here. Those are extremely expensive mistakes.
Solo.io has a proven track record of replacing (or outright beating) Apigee for modern, cloud-native API gateway use cases, as it’s a much better fit in just about every dimension. To learn more about modern API management, check out this complimentary guide.