A customer recently approached us with a problem. They use another vendor’s API gateway that satisfies most of their requirements, with one notable exception: it fails on messages with elephantine payloads. They need to issue requests that post gargantuan files of up to 100MB. Could Gloo’s gateway technology help with such a problem? Another dimension to this porcine pickle is that they also wanted the gateway layer to inject some arbitrary custom headers into the upstream request.
The purpose of this blog post is to try and wrap our arms around this oversized issue. We’ll work through an example to determine if Envoy proxy with a Gloo Gateway control plane can help us work through this problem. As a bonus, we’ll learn a bit more about how Gloo supports the new Kubernetes Gateway API standard, and even extends it in places where the core standard isn’t expressive enough to meet all our requirements. Feel free to follow along with this exercise in your own Kubernetes cluster.
Prerequisites
To complete this guide, you’ll need a Kubernetes cluster and associated tools, plus an instance of Gloo Gateway Enterprise. Note that there is also a free, open source edition of Gloo Gateway, and it will work with this example as well. We ran the tests in this blog on Gloo Gateway Enterprise v1.17. Use this guide if you need to install Gloo Gateway Enterprise. And if you don’t already have access to the Enterprise bits, you can request a free trial here.
We used GKE with Kubernetes v1.21.11 to test this guide, although any recent version with any Kubernetes provider should suffice.
For this exercise, we’ll also use some common CLI utilities like kubectl, curl, and git. Make sure these prerequisites are all available to you before jumping into the next section. I’m building this on macOS, but other platforms should be perfectly fine as well.
Clone the GitHub Repo
The resources required for this exercise are available in the gloo-gateway-use-cases repo on GitHub. Clone that to your workstation and switch to the large-payload-gateway-api example directory:
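Something like this will do it; the repo path assumes the solo-io GitHub org, so adjust if your copy lives elsewhere:

```sh
# Clone the examples repo and move into the large-payload example directory
git clone https://github.com/solo-io/gloo-gateway-use-cases.git
cd gloo-gateway-use-cases/large-payload-gateway-api
```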
Install the httpbin Application
httpbin is a great little REST service that can be used to test a variety of HTTP operations and echo the response elements back to the consumer. We’ll use it throughout this exercise. First, we’ll install the httpbin service in our cluster. Run:
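(A sketch; the manifest file name below is an assumption, so use the httpbin manifest included in the example directory you just cloned.)

```sh
# Create the httpbin namespace, service account, service, and deployment
kubectl apply -f httpbin.yaml
```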
You should see:
You can confirm that the httpbin pod is running by searching for pods with an app label of httpbin:
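(This assumes httpbin was installed into a namespace named httpbin.)

```sh
kubectl get pods -n httpbin -l app=httpbin
```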
And you will see something like this:
Configure a Gateway Listener
A Gateway object represents a host:port listener that the proxy will expose to accept ingress traffic. We’ll establish a Gateway resource that sets up an HTTP listener to expose routes from all our namespaces. Gateway custom resources like this are a core part of the Gateway API standard.
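A representative Gateway manifest looks like the following sketch; the name, namespace, and listener port are assumptions, so defer to the manifest in the example repo:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: http
  namespace: gloo-system
spec:
  gatewayClassName: gloo-gateway
  listeners:
    - name: http
      protocol: HTTP
      port: 8080
      allowedRoutes:
        namespaces:
          from: All
```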
Now we’ll apply this to our kube cluster:
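(Assuming the manifest above is saved as gateway.yaml; use the file name from the example directory.)

```sh
kubectl apply -f gateway.yaml
```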
Expect to see this response:
Now we can confirm that the Gateway has been activated:
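(Names as in the sketch above.)

```sh
kubectl get gateway http -n gloo-system
```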
You’ll see this sort of response from a kind cluster:
You can also confirm that Gloo Gateway has spun up an Envoy proxy instance in response to the creation of this Gateway object by looking for the gloo-proxy-http deployment it creates:
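(Using the gloo-proxy-http deployment name referenced above.)

```sh
kubectl get deployment gloo-proxy-http -n gloo-system
```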
Expect a response like this:
Generate Payload Files
If you’d like to follow along with this exercise, we’ll test our service using some preposterously large payloads that we generate for ourselves. (You wouldn’t want us to flood your network with these behemoths when you cloned our GitHub repo, would you?)
These commands all work on MacOS. Your mileage may vary on other platforms.
- 10MB:
echo "{\"payload\": \"$(base64 -i /dev/urandom | head -c 10000000)\"}" > 10m-payload.txt
- 100MB:
echo "{\"payload\": \"$(base64 -i /dev/urandom | head -c 100000000)\"}" > 100m-payload.txt
Install a Basic HTTPRoute
Let’s begin our routing configuration with the simplest possible route to expose httpbin‘s portfolio of operations through our gateway proxy. You can sample the public version of this service here.

HTTPRoute is one of the new Kubernetes CRDs introduced by the Gateway API, as documented here. We’ll start by introducing a simple HTTPRoute for our service. This route manages routing and policy enforcement on behalf of an upstream service, like httpbin in this case. We will begin with a simple configuration that forwards requests for any path on the api.example.com virtual host to the httpbin service.
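A sketch of such a route follows; the backend service port (8000) and resource names are assumptions, so check them against the manifest in the repo:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin
  namespace: httpbin
spec:
  parentRefs:
    - name: http
      namespace: gloo-system
  hostnames:
    - "api.example.com"
  rules:
    - backendRefs:
        - name: httpbin
          port: 8000
```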
Let’s apply this `HTTPRoute` now.
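(Again assuming a local file name from the example directory.)

```sh
kubectl apply -f httpbin-route.yaml
```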
This is the expected response:
Test the Simple Route with Curl
Now that the HTTPRoute is in place and is attached to our Gateway object, let’s use curl to display the response, with the -i option to additionally show the HTTP response code and headers. Since we plan to test the response of the gateway and service with large payload submissions, we’ll use the httpbin /post endpoint.
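If you’re running locally, one way to reach the proxy is to port-forward it and post a tiny test body; the port mapping matches the 8080 listener assumed earlier:

```sh
# Forward the proxy's HTTP listener to localhost in the background
kubectl port-forward deployment/gloo-proxy-http -n gloo-system 8080:8080 &

# Exercise the httpbin /post endpoint through the proxy
curl -i -X POST http://localhost:8080/post \
  -H "Host: api.example.com" \
  -H "Content-Type: application/json" \
  -d '{"hello": "world"}'
```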
Note that if you’re running on a cloud-provisioned cluster, you won’t access your service via port-forwarding to your localhost. Instead, you can obtain your proxy’s address using the glooctl CLI like this: glooctl proxy url. Then your curl command would be expressed like this:
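(A sketch that substitutes the address reported by glooctl into the same request.)

```sh
curl -i -X POST $(glooctl proxy url)/post \
  -H "Host: api.example.com" \
  -H "Content-Type: application/json" \
  -d '{"hello": "world"}'
```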
When you use the appropriate technique for your Kubernetes environment, this command should complete successfully:
Inject a Custom Header using Gateway API Extensions
We’ll satisfy our customer’s custom header request by making one more change to our gateway configuration before we start ramping up to larger payloads.
We’ll use a transformation to modify our HTTP request to inject the custom header X-My-Custom-Header with the value my-custom-value. This modified request will then be passed along to the backend httpbin service. This type of requirement is common in scenarios where an integration is required, and you’d like to hide some of the required details from the consuming service.
Gloo Gateway and the Gateway API give us multiple avenues for satisfying this requirement. For this simple scenario, we could use a requestHeaderModifier filter directly in the HTTPRoute we built earlier. The standard doesn’t require gateways to support this filter, but it’s fairly common, and Gloo Gateway fully supports it.
But in this case we’re going to use Gloo Gateway’s more fully featured transformation libraries. Why? Solo’s gateway products have a long history of providing sophisticated transformation policies, with capabilities like in-line Inja templates that can dynamically compute values from multiple sources in request and response transformations.
The core Gateway API standard does not offer this level of sophistication in its transformations, but there is good news. The community has learned from its experience with earlier, similar APIs like the Kubernetes Ingress API. The Ingress API did not offer extension points, which locked users strictly into the set of features envisioned by the creators of the standard and severely limited what they could express with it. So while many cloud-native API gateway vendors like Solo support the Ingress API, its active development has largely stopped.
The good news is that the new Gateway API offers core functionality as described in this blog post. But just as importantly, it delivers extensibility by allowing vendors to supply their own Kubernetes CRDs to express policy. In the case of transformations, Gloo Gateway users can now leverage Solo’s long history of innovation to add important capabilities to the gateway while staying within the boundaries of the new standard. For example, Solo’s extensive transformation library is now available in Gloo Gateway via Gateway API extensions like RouteOption and VirtualHostOption.
We’ll add this to our gateway configuration by adding a RouteOption describing the transformation, and by adding a reference to the new RouteOption in our existing HTTPRoute.
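A sketch of such a RouteOption is below; the field names follow Gloo Gateway’s staged-transformation API, but treat the exact shape as an assumption and defer to the manifest in the example repo:

```yaml
apiVersion: gateway.solo.io/v1
kind: RouteOption
metadata:
  name: header-transformation
  namespace: httpbin
spec:
  options:
    stagedTransformations:
      regular:
        requestTransforms:
          - requestTransformation:
              transformationTemplate:
                headers:
                  X-My-Custom-Header:
                    text: my-custom-value
```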
Here is the extension filter we add in our HTTPRoute to activate our transformation. While these ExtensionRef filters are part of the Gateway API standard, the Solo RouteOption extension it points to is not part of the standard.
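In the HTTPRoute, the rule grows an ExtensionRef filter pointing at that RouteOption, roughly like this (rule shown in isolation, names as assumed above):

```yaml
  rules:
    - filters:
        - type: ExtensionRef
          extensionRef:
            group: gateway.solo.io
            kind: RouteOption
            name: header-transformation
      backendRefs:
        - name: httpbin
          port: 8000
```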
Now let’s apply these changes:
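(File names are assumptions; apply the updated manifests from the example directory.)

```sh
kubectl apply -f routeoption-header-transformation.yaml
kubectl apply -f httpbin-route.yaml
```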
Here are the expected results:
Test, Test, Test
Managing with Marlin
Let’s not start with our full-grown, whale-sized payload. Instead, we’ll create a small, clownfish-sized payload (we’ll call it Marlin) to get going, as sketched below. Note that Marlin swims upstream with his microscopic 100-byte payload with no problem. In addition, you can see the X-My-Custom-Header with my-custom-value appearing in the request headers that httpbin echoes back to the caller. So far, so good.
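A sketch of generating Marlin and posting him, assuming the localhost port-forward from earlier (swap in the glooctl proxy url address on a cloud cluster):

```sh
# Generate a roughly 100-byte JSON payload
echo "{\"payload\": \"$(base64 -i /dev/urandom | head -c 100)\"}" > 100b-payload.txt

# Post it through the proxy; -i shows the response code and echoed headers
curl -i -X POST http://localhost:8080/post \
  -H "Host: api.example.com" \
  -H "Content-Type: application/json" \
  --data-binary @100b-payload.txt
```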
Cruising with Crush?
Marlin was no problem, so let’s move up the food chain by trying a sea turtle-sized payload that we’ll call Crush. Crush carries a 10MB payload, so he may create some cacophony.
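Same drill, this time with the 10MB file we generated earlier (localhost port-forward still assumed):

```sh
curl -i -X POST http://localhost:8080/post \
  -H "Host: api.example.com" \
  -H "Content-Type: application/json" \
  --data-binary @10m-payload.txt
```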
This is not the response we wanted to see from Crush:
An HTTP 413 response indicates that we have overflowed Envoy’s default 1MB buffer size for a given request. Learn more about Envoy buffering and flow control here and here. It is possible to increase the Envoy buffer size, but this must be considered very carefully since multiple large requests with excessive buffer sizes could result in memory consumption issues for the proxy.
The good news is that for this use case we don’t require buffering of the request payload at all, since we are not contemplating transformations on the payload itself, which is what we see most commonly with cases like this. Instead, we’re simply delivering a large file to a service endpoint. The only transformation we require of the Envoy proxy is to add X-My-Custom-Header to the request, which we have carried along from the original example.
Note that if you’d still prefer the approach of increasing Envoy’s buffer size to handle large payloads, there is an API in Gloo Gateway for that, too. Check out the perConnectionBufferLimitBytes setting in the ListenerOptions API. This can be managed on a per-gateway level, as documented here. But generally speaking, eliminating buffering altogether offers better performance and less risk.
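For illustration only, a rough sketch of what that could look like; the field names here are approximations, so consult the ListenerOption reference before using it:

```yaml
apiVersion: gateway.solo.io/v1
kind: ListenerOption
metadata:
  name: buffer-limit
  namespace: gloo-system
spec:
  targetRefs:                      # attach to the Gateway created earlier (assumed name)
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: http
  options:
    perConnectionBufferLimitBytes: 10485760   # ~10MB per-connection buffer
```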
Re-calibrating for Crush
So now we’ll apply a one-line change to our RouteOption that sets the optional Gloo Gateway passthrough flag. It is commonly used in cases like this to instruct the proxy NOT to buffer the payload at all, but simply to pass it through unchanged to the upstream service.
Note that you will only want to use this technique in routes where you are NOT performing transformation based on the payload content, like using extractors to pull selected elements from the message body into request headers. Buffering is absolutely required for those transformation types, and enabling passthrough mode would likely cause mysterious and unexpected behavior.
Here is the one-line change to the RouteOption‘s transformation spec to enable massive message payloads:
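Sketched in context, the addition is the passthrough setting inside the transformation template (the rest of the RouteOption stays as before):

```yaml
        requestTransforms:
          - requestTransformation:
              transformationTemplate:
                passthrough: {}            # do not buffer the request body at all
                headers:
                  X-My-Custom-Header:
                    text: my-custom-value
```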
Now apply the “passthrough” version of the RouteOption:
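(Assuming the updated manifest is saved under a new name in the example directory.)

```sh
kubectl apply -f routeoption-header-transformation-passthrough.yaml
```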
Expect this response:
Note that for this and all subsequent examples, we’ll suppress the native httpbin output because it wants to echo back the entire original request payload, and life is too short to watch all of that scroll by. Instead, we’ll rely on curl facilities to show just the response bits we care about: the total processing time, the HTTP response code, and the size of the request payload.
Now let’s retry Crush and watch him cruise all the way to Sydney with no constrictions:
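A sketch of that retry using curl’s write-out facility; the flags are standard curl, while the URL assumes the localhost port-forward:

```sh
# -o /dev/null suppresses the echoed payload; -w prints only the bits we care about
curl -s -o /dev/null \
  -w "HTTP code: %{response_code}  uploaded bytes: %{size_upload}  total time: %{time_total}s\n" \
  -X POST http://localhost:8080/post \
  -H "Host: api.example.com" \
  -H "Content-Type: application/json" \
  --data-binary @10m-payload.txt
```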
Bashing with Bruce
Of course, the most fearsome payloads of all swim with Bruce, the great white shark. We’ll smash our bulkiest, Bruce-sized payloads against the proxy with our ultimate goal of 100MB.
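Same command shape as before, just pointed at the 100MB file:

```sh
curl -s -o /dev/null \
  -w "HTTP code: %{response_code}  uploaded bytes: %{size_upload}  total time: %{time_total}s\n" \
  -X POST http://localhost:8080/post \
  -H "Host: api.example.com" \
  -H "Content-Type: application/json" \
  --data-binary @100m-payload.txt
```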
Even Bruce ran the gauntlet with no problems, thanks to our passthrough directive causing the proxy to bypass buffering of the payload. Even when we brought Bruce to the party and increased the payload size by an order of magnitude, there were no issues.
Cleanup
If you’d like to clean up the work you’ve done, you can either delete the entire Kubernetes cluster you created earlier, or simply delete the Kubernetes resources we’ve created over the course of this exercise and uninstall Gloo Gateway.
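A sketch of the second path; resource names match the ones assumed earlier, and the Helm release name is an assumption, so adjust to however you installed Gloo Gateway:

```sh
# Delete the resources created during this exercise
kubectl delete httproute httpbin -n httpbin
kubectl delete routeoption header-transformation -n httpbin
kubectl delete gateway http -n gloo-system
kubectl delete namespace httpbin

# Then uninstall Gloo Gateway itself, for example if it was installed with Helm
helm uninstall gloo-gateway -n gloo-system
kubectl delete namespace gloo-system
```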
You should see a response like this that confirms the resources have been deleted.
Learn More
If you’ve followed along with us this far, then congratulations! You’ve not only navigated my gnarly Nemo puns and asinine alliterations, but you’ve also learned how to configure Gloo Gateway to handle lavishly large message payloads.
For more information, check out the following resources.
- Explore the documentation for Gloo Gateway.
- Request a live demo or trial for Gloo Gateway Enterprise.
- See video content on the solo.io YouTube channel.