Deploy a multicloud API gateway
You've developed a world-class API, and now you want to make it available online on not just one cloud provider, but two or more.
Aside from the challenges involved in any multicloud deployment, you're looking for a multicloud API gateway that allows you to:
- Consistently apply security and traffic management policy in one place
- Provide a single pane of glass for observability
- Work identically in every cloud or environment
What you'll learn
In this tutorial, you'll learn how to implement ngrok as a multicloud API gateway with these broad steps:
- Set up the common pattern of using a single cloud endpoint to route all API traffic to internal agent endpoints.
- Use endpoint pooling to enable dead-simple load balancing between replicas of your API service.
- Connect non-replicated API services with additional internal agent endpoints.
What you'll need
- An ngrok account: Sign up for free if you don't already have one.
- Your authtoken: Create an authtoken using the ngrok dashboard.
- A reserved domain: Reserve a domain in the ngrok dashboard or using the ngrok API.
  - You can choose from an ngrok subdomain or bring your own custom branded domain, like https://api.example.com.
  - We'll refer to this domain as {YOUR_NGROK_DOMAIN} throughout the guide.
- The ngrok agent: Download the appropriate version and install it on the same machine or network as the API service you want to make available via ngrok's API gateway.
- (optional) An API key: Create an ngrok API key if you'd like to use the ngrok API to manage your cloud endpoints.
This guide uses endpoint pools, which are not yet generally available. To use them, you must request access to the developer preview.
Deploy a demo API service (optional)
If you don't yet have API services you'd like to bring online with a multicloud API gateway, or just want to quickly wire up a POC using ngrok, we recommend running multiple "replicas" of httpbin, a simple HTTP request and response service.
Assuming you have Docker installed on the systems where your API services run, you can deploy a container listening on port 8080.
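One way to run it, assuming you're happy using the public mccutchen/go-httpbin image (which listens on port 8080 by default; swap in another httpbin image if you prefer):

```shell
# Run httpbin in the background, published on port 8080
docker run -d --name httpbin -p 8080:8080 mccutchen/go-httpbin
```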
Spin up httpbin on your other cloud providers as replicas.
Add your API services to an endpoint pool
When you create multiple endpoints with the same URL and binding, ngrok pools them by default, load-balancing traffic between them. This can improve your API's performance and resiliency.
In one cloud, create an internal agent endpoint on an internal URL, like https://foo.internal. Replace 8080 with your service's port if you've brought your own API service.
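A sketch of the agent command, assuming current ngrok CLI flags (--url for the internal address and --pooling-enabled to opt the endpoint into a pool; check `ngrok http --help` if these differ in your agent version):

```shell
# Create an internal agent endpoint that joins the https://foo.internal pool
ngrok http 8080 --url https://foo.internal --pooling-enabled
```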
Run the same command for each replica of your API service on other clouds.
Your API service replicas are now pooled and load-balanced, but not yet accessible on the public internet. To fix that, you need two things:
- A cloud endpoint for traffic routing and centralized policy management.
- A Traffic Policy rule that forwards traffic from your cloud endpoint to https://foo.internal.
Create a cloud endpoint for your API
Cloud endpoints are persistent, always-on endpoints that you can manage with the ngrok dashboard or API.
You centrally control your traffic management and security policy on your cloud endpoint, then forward traffic to your endpoint pool. That's much easier than trying to synchronize policies across multiple replicas and more than one cloud.
Dashboard
First, log into the ngrok dashboard. Click Endpoints → + New.
Leave the Binding value Public, then enter the domain name you reserved earlier. Click Create Cloud Endpoint.
With your cloud endpoint created, you'll see a default Traffic Policy in the dashboard. Paste in the YAML below to apply the rule.
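A minimal rule for this might look like the following, using the forward-internal Traffic Policy action (the exact schema shown is a sketch based on current Traffic Policy syntax; verify it in the dashboard's editor):

```yaml
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://foo.internal
```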
Click Save to apply your changes.
API

The ngrok CLI provides a helpful wrapper around the ngrok API, which you can use to create a cloud endpoint and apply a file containing Traffic Policy rules.
Create a new file named policy.yaml on your local workstation with the following YAML.
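As a sketch, policy.yaml could contain a single forward-internal rule (schema per current Traffic Policy syntax; adjust if your version differs):

```yaml
on_http_request:
  - actions:
      - type: forward-internal
        config:
          url: https://foo.internal
```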
Create a cloud endpoint on {YOUR_NGROK_DOMAIN}, passing your policy.yaml file as an option.
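A hedged sketch of the call, assuming the ngrok CLI's api subcommand and these flag names (run `ngrok api endpoints create --help` to confirm the exact options in your version):

```shell
# Create a public cloud endpoint on your reserved domain with the policy attached
ngrok api endpoints create \
  --url "https://{YOUR_NGROK_DOMAIN}" \
  --type cloud \
  --bindings public \
  --traffic-policy "$(cat policy.yaml)"
```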
You'll get a 201 response. Save the value of id, as you'll need it later to continue configuring the Traffic Policy applied to your cloud endpoint.
Access your API services
At this point, your multicloud API gateway is ready for traffic!
All requests to {YOUR_NGROK_DOMAIN} are forwarded to your load-balanced pool of replicas.
If you're running httpbin as a demo API service, send a query to the /get route. If you've brought your own service, change the route accordingly.
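For example, with curl:

```shell
curl https://{YOUR_NGROK_DOMAIN}/get
```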
With httpbin, you'll see a response like:
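An illustrative, abridged httpbin response; your headers, origin IP, and exact fields will differ:

```json
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "{YOUR_NGROK_DOMAIN}",
    "User-Agent": "curl/8.5.0"
  },
  "origin": "203.0.113.10",
  "url": "https://{YOUR_NGROK_DOMAIN}/get"
}
```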
Now is the time to start getting even more from Traffic Policy, our configuration language for managing traffic. You can attach Traffic Policy rules to both your cloud and agent endpoints, applying some rules across all your APIs and others to individual upstream services or replicas.
See our tutorial on API gateway traffic management for details.
Extend your multicloud API gateway with non-replicated services
So far, you've deployed multiple replicas of a single API service and added load balancing with an endpoint pool.
What if you have more than one API service you'd like to make available behind a multicloud API gateway? Or you've acquired a new business and want to integrate their API services into your existing API gateway?
To make this work, you need to add additional forward-internal actions and implement a strategy for routing to different upstream services based on the attributes of incoming requests.
Let's assume you want to start a new bar service, which runs on port 9090. Create a new internal agent endpoint on a unique URL, like https://bar.internal, to separate it from your endpoint pool.
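For example, assuming the same agent flags as before:

```shell
# Create a separate internal agent endpoint for the bar service
ngrok http 9090 --url https://bar.internal
```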
If you send another request to {YOUR_NGROK_DOMAIN}, ngrok forwards it to https://foo.internal because your Traffic Policy rules don't yet contain a strategy for routing traffic to multiple API services. ngrok has a few common patterns to fix that.
Route by path
Path-based routing lets you direct traffic to different backend services based on the value of the path.
The Traffic Policy file below:
- Routes requests to https://{YOUR_NGROK_DOMAIN}/foo to the https://foo.internal endpoint pool.
- Routes requests to https://{YOUR_NGROK_DOMAIN}/bar to the https://bar.internal agent endpoint.
- Uses the custom-response action to capture requests to any other path and respond with a generic 403 error.
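A sketch of such a policy, matching rules with CEL expressions on req.url.path and falling through to custom-response (field names follow current Traffic Policy syntax; verify them in the dashboard editor):

```yaml
on_http_request:
  - expressions:
      - "req.url.path.startsWith('/foo')"
    actions:
      - type: forward-internal
        config:
          url: https://foo.internal
  - expressions:
      - "req.url.path.startsWith('/bar')"
    actions:
      - type: forward-internal
        config:
          url: https://bar.internal
  - actions:
      - type: custom-response
        config:
          status_code: 403
          body: Not authorized
```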
Apply this Traffic Policy rule to your cloud endpoint through the dashboard or API to start forwarding traffic.
Route by headers
You can also route to endpoints based on the dynamic value of a header with CEL interpolation.
The Traffic Policy file below:
- Checks whether requests contain an x-api header, and if so, forwards to the internal URL with the same name using CEL interpolation.
- Uses the custom-response action to capture requests without an appropriate header and respond with a generic 403 error.
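One possible shape for that policy, interpolating the header value into the forward-internal URL with ${...} (the header-map access and interpolation syntax shown are assumptions based on current Traffic Policy conventions; confirm against the Traffic Policy docs):

```yaml
on_http_request:
  - expressions:
      - "'x-api' in req.headers"
    actions:
      - type: forward-internal
        config:
          url: https://${req.headers['x-api'][0]}.internal
  - actions:
      - type: custom-response
        config:
          status_code: 403
          body: Not authorized
```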
Apply this Traffic Policy rule to your cloud endpoint through the dashboard or API to forward traffic based on header values, like x-api: foo or x-api: bar.
What's next?
You've now brought your multicloud APIs online with ngrok's API gateway, which includes features like DDoS protection and global load balancing.
Next up, we recommend you:
- Continue your API gateway tutorial by adding and composing traffic management policies to finish offloading non-functional requirements, like rate limiting and authentication, to your ngrok API gateway.
- Check out your Traffic Inspector (documentation) to observe, modify, and replay requests across your API gateway.
- Explore other opportunities to manage and take action on API traffic in our Traffic Policy documentation.