
Gloo and Confluent Cloud, Part 2: Secure, Control, and Manage Confluent Cloud REST API with Gloo Gateway

July 26, 2023
Duncan Doyle

In part one of this blog series, I covered Gloo Gateway and Confluent Cloud, Apache Kafka and external consumers, Event Gateway patterns, Gloo Gateway itself, and using the Confluent Cloud REST API.

Here in the second and last part of the series, we will go over securing, controlling, and managing the Confluent Cloud REST API with Gloo Gateway. We will use Gloo Gateway’s advanced API Gateway functionalities to:

  1. Expose the Confluent Cloud REST API through Gloo Gateway.
  2. Apply access policies to secure the REST API, including access to Kafka topics.
  3. Apply rate-limiting policies to control traffic from external consumers to our Kafka environment.

Installing Gloo Gateway

First we need to install Gloo Gateway in our Kubernetes cluster (detailed installation instructions can be found here). Follow the instructions to install Gloo Gateway version 2.3.2 onto your Kubernetes cluster. Make sure that you also install the extAuthService and the rateLimiter.

Exposing the Confluent Cloud REST API

With Gloo Gateway installed, we can now expose the Confluent Cloud REST API via the Gateway. To do that, we need to deploy an ExternalService that creates an addressable destination in our Kubernetes cluster that points to the Confluent Cloud REST API. This will later allow us to create a route to this destination in our Gloo Gateway RouteTable.

Let’s first create a new confluent-cloud namespace in which we can deploy our resources:

kubectl create ns confluent-cloud

Most of the following commands require the name of your Kubernetes cluster. The easiest way to provide it is to export it to an environment variable:

export CLUSTER_NAME=$(kubectl config view --minify -o jsonpath='{.clusters[].name}')
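
Later manifests also reference your Confluent Cloud hostname and Kafka cluster ID. Because the manifests are applied through shell heredocs, shell variables are expanded, so instead of editing the placeholders by hand you can optionally export these values up front as well (the values shown are the example values used in this series; replace them with your own):

export CONFLUENT_CLOUD_HOST=pkc-z9doz.eu-west-1.aws.confluent.cloud
export KAFKA_CLUSTER_ID=lkc-c834f3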

Apply the following ExternalService custom resource. This will create a destination inside your Kubernetes cluster that points to your Confluent Cloud REST API. Replace the {CONFLUENT_CLOUD_HOST} placeholder with the hostname of your Confluent Cloud environment (in my case this is pkc-z9doz.eu-west-1.aws.confluent.cloud):

cat <<EOF | kubectl apply -f -
apiVersion: networking.gloo.solo.io/v2
kind: ExternalService
metadata:
  name: confluent-cloud-rest-external-service
  namespace: confluent-cloud
spec:
  hosts:
  - ${CONFLUENT_CLOUD_HOST}
  ports:
  - name: https
    number: 443
    protocol: HTTPS
    clientsideTls: {}
  selector: {}
EOF

Note that we use the clientsideTls property to configure TLS origination. This allows us, for the sake of simplicity in this example, to accept plain HTTP traffic on the Gateway while using HTTPS for the communication with Confluent Cloud.

With our ExternalService deployed, we can now deploy the VirtualGateway and RouteTable.

Apply the following VirtualGateway custom resource. This will configure the Gloo Ingress Gateway to listen for HTTP traffic on port 80 for hostname kafka.example.com:

cat <<EOF | kubectl apply -f -
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: istio-ingressgateway
  namespace: gloo-mesh-gateways
spec:
  listeners:
    - port:
        number: 80
      http: {}
      allowedRouteTables:
        - host: kafka.example.com
  workloads:
    - selector:
        labels:
          istio: ingressgateway
        cluster: ${CLUSTER_NAME}
EOF

To be able to use the kafka.example.com hostname, add it to your /etc/hosts file and point it to the IP address of your Kubernetes cluster’s ingress gateway. When running this example on a local Kubernetes cluster, the address is simply 127.0.0.1:

127.0.0.1 kafka.example.com
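
If your cluster runs in a cloud environment instead, you can look up the external address of the ingress gateway service and use that in /etc/hosts. A minimal sketch, assuming the gateway service is named istio-ingressgateway in the gloo-mesh-gateways namespace (matching the VirtualGateway above) and is exposed through a LoadBalancer:

# Print the external IP of the ingress gateway LoadBalancer service
kubectl -n gloo-mesh-gateways get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'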

We can now apply the RouteTable, which configures the routing from our Ingress Gateway to our ExternalService. Note that we apply the label route: confluent-cloud-rest to our route. This label will later be used in the authentication and rate-limit policies to select our route. Make sure to replace the {CONFLUENT_CLOUD_HOST} and {KAFKA_CLUSTER_ID} placeholders with the hostname and cluster ID of your Confluent Cloud Kafka instance (in my case these values are pkc-z9doz.eu-west-1.aws.confluent.cloud and lkc-c834f3):

cat <<EOF | kubectl apply -f -
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: confluent-cloud-rest-route
  namespace: confluent-cloud
spec:
  hosts:
    - kafka.example.com
  virtualGateways:
    - name: istio-ingressgateway
      namespace: gloo-mesh-gateways
      cluster: ${CLUSTER_NAME}
  http:
    # Route for the Confluent Cloud REST external service
    - name: confluent-cloud-rest
      labels:
        route: confluent-cloud-rest
      # Prefix matching
      matchers:
      - uri:
          prefix: /
      # Forwarding directive
      forwardTo:
        destinations:
        # Reference to the external service resource
        - ref:
            name: confluent-cloud-rest-external-service
            cluster: ${CLUSTER_NAME}
          kind: EXTERNAL_SERVICE
        # Route to specific Kafka cluster in Confluent Cloud
        hostRewrite: ${CONFLUENT_CLOUD_HOST}
        pathRewrite: /kafka/v3/clusters/${KAFKA_CLUSTER_ID}/
EOF
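
If you want to double-check that the resources were created, you can list them by their full CRD names (a quick sanity check; the exact output columns may vary per Gloo Gateway version):

kubectl -n confluent-cloud get externalservices.networking.gloo.solo.io,routetables.networking.gloo.solo.io
kubectl -n gloo-mesh-gateways get virtualgateways.networking.gloo.solo.io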

With the VirtualGateway, RouteTable, and ExternalService deployed, and the kafka.example.com hostname mapped to the right IP address in our /etc/hosts file, the Confluent Cloud Kafka cluster can now be accessed over HTTP via Gloo Gateway.

We will again use cURL to retrieve the topic information from our Confluent Cloud Kafka cluster. Make sure to replace the {API-KEY} placeholder with the Base64-encoded API key and secret credentials of your Confluent Cloud cluster:

curl -v -H 'Authorization: Basic {API-KEY}' http://kafka.example.com/topics
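
If you don’t have that value at hand, you can construct it from your Confluent Cloud API key and secret. A minimal sketch, assuming the (illustrative) environment variables CONFLUENT_API_KEY and CONFLUENT_API_SECRET hold your credentials:

# Basic auth value is base64("<api-key>:<api-secret>"); strip any line wrapping from base64
export CONFLUENT_BASIC_AUTH=$(echo -n "${CONFLUENT_API_KEY}:${CONFLUENT_API_SECRET}" | base64 | tr -d '\n')
curl -v -H "Authorization: Basic ${CONFLUENT_BASIC_AUTH}" http://kafka.example.com/topics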

Confluent Cloud will send the same response we saw earlier, which contains information about the topics on your Kafka cluster. Note that we no longer have to specify the Confluent Cloud hostname and our Kafka instance ID, as the traffic routing is handled by our Gloo RouteTable and ExternalService.

Securing the Confluent Cloud RESTful API

With the base configuration in place and Gloo Gateway handling the HTTP traffic to the Confluent Cloud REST API, we can enable more Gloo Gateway features. Let’s start by securing our Kafka endpoints with an additional Gloo Gateway API-Key.

You can use different mechanisms and protocols to secure your endpoints with Gloo Gateway, including OAuth2 and OpenID Connect. In this article we secure our APIs with API-Keys, as this approach does not require any additional components, like an OAuth provider, to be deployed on the Kubernetes cluster and integrated with Gloo Gateway. (For more information about Gloo Gateway’s authentication and authorization capabilities, please consult the Gloo Gateway documentation.)

To secure the Kafka endpoints with an API-Key, we first need to create the API-Key Secret in Kubernetes. In this example, we will simply use a predefined API-Key. (In a production scenario you would use an API management tool, such as Gloo Portal or Google Developer Portal, to generate an API key for your application’s domain.)

Apply the following Kubernetes Secret, which contains api-key, user-id, and user-email entries. Note that all values are Base64-encoded:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: user-id-12345
  namespace: confluent-cloud
  labels:
    extauth: apikey
type: extauth.solo.io/apikey
data:
  # N2YwMDIxZTEtNGUzNS1jNzgzLTRkYjAtYjE2YzRkZGVmNjcy
  api-key: TjJZd01ESXhaVEV0TkdVek5TMWpOemd6TFRSa1lqQXRZakUyWXpSa1pHVm1OamN5
  # user-id-12345
  user-id: dXNlci1pZC0xMjM0NQ==
  # user12345@email.com
  user-email: dXNlcjEyMzQ1QGVtYWlsLmNvbQ==
EOF
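
If you want to generate or inspect these values yourself, a couple of plain base64 invocations are enough (shown here with the example API-Key used later in this article):

# Encode the plain-text API-Key for the secret
echo -n 'N2YwMDIxZTEtNGUzNS1jNzgzLTRkYjAtYjE2YzRkZGVmNjcy' | base64
# TjJZd01ESXhaVEV0TkdVek5TMWpOemd6TFRSa1lqQXRZakUyWXpSa1pHVm1OamN5

# Decode an existing entry to inspect it
echo 'dXNlci1pZC0xMjM0NQ==' | base64 --decode
# user-id-12345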

Next, create the ExtAuthServer resource, which points to the external auth service responsible for verifying credentials and determining permissions:

cat <<EOF | kubectl apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: ExtAuthServer
metadata:
  name: ext-auth-server
  namespace: gloo-mesh-addons
spec:
  destinationServer:
    ref:
      cluster: ${CLUSTER_NAME}
      name: ext-auth-service
      namespace: gloo-mesh-addons
    port:
      name: grpc
EOF

Finally, create an ExtAuthPolicy that enforces API-Key authorization on the applicable routes. In our case this is the route to the ExternalService that forwards traffic to Confluent Cloud. Note that the policy is applied to all routes carrying the label route: confluent-cloud-rest, like the route we created earlier. Also note that the API-Key secret is likewise selected by label (i.e. extauth: apikey):

cat <<EOF | kubectl apply -f -
apiVersion: security.policy.gloo.solo.io/v2
kind: ExtAuthPolicy
metadata:
  name: api-key-auth
  namespace: confluent-cloud
spec:
  applyToRoutes:
  - route:
      labels:
        route: confluent-cloud-rest
  config:
    server:
      name: ext-auth-server
      namespace: gloo-mesh-addons
      cluster: ${CLUSTER_NAME}
    glooAuth:
      configs:
        - apiKeyAuth:
            headerName: api-key
            k8sSecretApikeyStorage:
              labelSelector:
                extauth: apikey
EOF

When we now try to list our Kafka topics, Gloo Gateway returns a 401 – Unauthorized:

curl -v -H 'Authorization: Basic {API-KEY}' http://kafka.example.com/topics

Adding the new Gloo API-Key in a request header gives us access to our topics again:

curl -v -H 'Authorization: Basic {API-KEY}' -H "api-key:N2YwMDIxZTEtNGUzNS1jNzgzLTRkYjAtYjE2YzRkZGVmNjcy" http://kafka.example.com/topics

Note that we’re passing two API-Keys in our headers: one in the api-key header for Gloo Gateway authentication, and one Confluent Cloud API-Key via the Authorization header for Confluent Cloud authentication. This is just an example of how to add another layer of security using Gloo Gateway. As mentioned earlier, you can add more sophisticated authentication, for example OAuth-based authentication, or even multi-step authentication with OPA (Open Policy Agent), using similar Gloo ExtAuth constructs.

Rate Limiting the Confluent Cloud RESTful API

Another interesting feature that Gloo Gateway can add to our Kafka system is rate limiting. Rate limiting allows us to limit the number of requests per time unit (seconds, minutes, hours, etc.) based on policies. It enables us to protect the service from misuse by clients, enforce service and/or business limits based on service offering categories and business plans, and so on.

As an example use case, let’s implement a rate limiting policy that only allows 3 requests per minute with the API-Key we defined earlier.

First we apply the RateLimitServerSettings, which configures how clients connect to the rate-limiting server:

cat <<EOF | kubectl apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerSettings
metadata:
  name: rl-server
  namespace: gloo-mesh-addons
spec:
  destinationServer:
    port:
      name: grpc
    ref:
      cluster: ${CLUSTER_NAME}
      name: rate-limiter
      namespace: gloo-mesh-addons
EOF

Now we need to configure the rate-limit server and client configurations using the RateLimitServerConfig and RateLimitClientConfig CRs. In this example, the rate-limiting descriptor applies the rate limit per unique userId. Remember that the user ID is one of the data fields in our API-Key secret, so this allows us to rate-limit per API-Key, as long as the user ID is unique per API-Key:

cat <<EOF | kubectl apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server-config
  namespace: gloo-mesh-addons
spec:
  destinationServers:
  - port:
      number: 8083
    ref:
      cluster: ${CLUSTER_NAME}
      name: rate-limiter
      namespace: gloo-mesh-addons
  raw:
    descriptors:
    - key: userId
      rateLimit:
        requestsPerUnit: 3
        unit: MINUTE
---
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitClientConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-client-config
  namespace: gloo-mesh-addons
spec:
  raw:
    rateLimits:
    - actions:
      - metadata:
          descriptorKey: userId
          metadataKey:
            key: envoy.filters.http.ext_authz
            path:
              - key: userId
EOF

Finally, we can apply the RateLimitPolicy, which applies the rate-limit server config, client config and server settings to one or more routes. The routes are, as with the ExtAuthPolicy, selected using labels:

cat <<EOF | kubectl apply -f -
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  name: kafka-rate-limit
  namespace: default
spec:
  applyToRoutes:
  - route:
      labels:
        route: confluent-cloud-rest
  config:
    ratelimitServerConfig:
      name: rl-server-config
      namespace: gloo-mesh-addons
      cluster: ${CLUSTER_NAME}
    ratelimitClientConfig:
      name: rl-client-config
      namespace: gloo-mesh-addons
    serverSettings:
      name: rl-server
      namespace: gloo-mesh-addons
    phase:
      postAuthz:
        priority: 1
EOF

Now when we execute our Kafka REST requests multiple times in a row (for example, retrieving our Kafka topics information or sending records to the orders topic), we will see that after 3 requests in a minute we are rate limited and get a 429 – Too Many Requests HTTP response. Execute the following request 4 times in a row, and observe how the last request is rate-limited:

curl -v -H 'Authorization: Basic {API-KEY}' -H "api-key:N2YwMDIxZTEtNGUzNS1jNzgzLTRkYjAtYjE2YzRkZGVmNjcy" http://kafka.example.com/topics
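
To make the rate limit easy to observe, you can also fire the four requests in a small loop and print only the HTTP status codes. With the 3-requests-per-minute policy above, the first three requests should return 200 and the fourth 429 (assuming no other requests were counted in the same minute):

# Send 4 requests and print only the HTTP status code of each
for i in 1 2 3 4; do
  curl -s -o /dev/null -w "request ${i}: %{http_code}\n" \
    -H 'Authorization: Basic {API-KEY}' \
    -H "api-key:N2YwMDIxZTEtNGUzNS1jNzgzLTRkYjAtYjE2YzRkZGVmNjcy" \
    http://kafka.example.com/topics
done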

Try to send a number of records to our orders topic. Observe how, after the third request in a minute, the requests are rate limited and a 429 – Too Many Requests HTTP response is returned:

curl -v -X POST -H 'Content-Type: application/json' -H 'Authorization: Basic {API-KEY}' -H "api-key:N2YwMDIxZTEtNGUzNS1jNzgzLTRkYjAtYjE2YzRkZGVmNjcy" http://kafka.example.com/topics/orders/records -d '{"key": {"type": "STRING", "data": "orderId2"}, "value": {"type": "JSON", "data": { "productId":"456ijk" }}}'

Conclusion

Event-driven and event streaming architectures are popular architectural paradigms to implement real-time data systems at scale. Often, though, the benefits of event streaming platforms are only reaped internally, as exposing systems like Apache Kafka to external consumers can be difficult due to non-standard protocols, security requirements, and network architectures. By using an API Gateway like Gloo Gateway in combination with the Confluent Cloud REST API, we can create architectures in which the power of the event streaming platform is safely and securely exposed to external consumers. Advanced functionalities like authentication and authorization based on API-Keys, OAuth, and OPA, combined with features like rate limiting, give us control over how the event streaming platform is exposed to consumers, provide fine-grained control over which consumers have access to which parts of the system, and protect the event streaming platform from external misuse and potential abuse.

In this article we’ve shown a basic integration of Gloo Gateway with Confluent Cloud, laying the foundation for more advanced architectures powered by Gloo. Please consult the Gloo Gateway documentation to learn more about this powerful API Gateway and the features it provides.
