API Gateway
An API gateway is a common pattern that API developers use to encapsulate their API endpoints. It is also common, however, to see the pattern misused. This article focuses on the correct usage of the API Gateway pattern.
An API (Application Programming Interface) Gateway is an interface that sits in front of the back-end (micro)services. It is the single entry point into the back-end architecture, where the communication chain normally ends in a database. The API Gateway is responsible for architecture characteristics such as security, scalability, and high availability.
Can we implement our services (e.g., RESTful or SOAP) without using an API Gateway? Of course we can. But each individual service would then need many non-business-specific capabilities such as routing, authentication and authorization, TLS termination, logging, and tracing. Most importantly, if you do not use an API Gateway to expose your API endpoints, clients become aware of the back-end service distribution and deployment details. This is not a big issue if your API endpoints only integrate with an internal front-end application, but it is bad practice for an Enterprise API Gateway, which can be exposed to unknown, on-demand clients.
Diagram 1: API Gateway
If you are using your gateway only to abstract a single back-end or just to terminate TLS, a reverse proxy or load balancer would be sufficient. A proper API Gateway, however, should fulfill the following functionalities (a minimal sketch follows the list).
Multiple Back-ends ( Microservices )
Service Discovery
Circuit breaking
Authentication and Authorization
Rate limiting
Logging and tracing
Routing
Retry logic, etc.
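To make these responsibilities concrete, here is a minimal, illustrative sketch of a gateway that does routing, a naive bearer-token check, and per-client rate limiting using only Python's standard library. The upstream addresses, port numbers, and thresholds are invented for the example; a real deployment would rely on a dedicated gateway product rather than hand-rolled code.

```python
# Minimal API Gateway sketch (illustrative only): routing, a naive auth check,
# and per-client rate limiting in front of hypothetical upstream services.
import time
import urllib.request
from collections import defaultdict
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical upstream services, keyed by path prefix.
ROUTES = {
    "/orders": "http://localhost:9001",
    "/users": "http://localhost:9002",
}
RATE_LIMIT = 5        # max requests per client
WINDOW_SECONDS = 1.0  # per rolling window
_hits = defaultdict(list)

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client = self.client_address[0]

        # Rate limiting: reject requests above the per-client threshold.
        now = time.monotonic()
        _hits[client] = [t for t in _hits[client] if now - t < WINDOW_SECONDS]
        if len(_hits[client]) >= RATE_LIMIT:
            self.send_error(429, "Too Many Requests")
            return
        _hits[client].append(now)

        # Authentication: a placeholder check for a bearer token.
        if not self.headers.get("Authorization", "").startswith("Bearer "):
            self.send_error(401, "Unauthorized")
            return

        # Routing: pick an upstream by path prefix and proxy the request.
        # (Upstream failures are not handled here, for brevity.)
        upstream = next((u for p, u in ROUTES.items() if self.path.startswith(p)), None)
        if upstream is None:
            self.send_error(404, "No route")
            return
        with urllib.request.urlopen(upstream + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/octet-stream"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```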
If you deploy a single API Gateway, even though it is simple and easy to manage, you are inviting the following problems.
Single point of failure: If your API Gateway stops functioning properly, your entire back-end communication goes down. This is a disastrous situation and should be avoided at all costs. As a solution, you can run a scalable API Gateway cluster behind a hardware load balancer, where the load balancer can be notified of back-pressure.
Various traffic requirements traverse a single API Gateway stack: Depending on the business functionality, APIs have different requirements in terms of performance, security, compliance, etc. With a single API Gateway cluster, all traffic traverses the same stack and you cannot cater to these different needs. This is not a good idea, especially if you are building an Enterprise API Gateway.
It makes your DevOps work difficult: Any routing or protocol change in a back-end service impacts the API Gateway as well. With a single API Gateway cluster, every such back-end change also forces an update to the gateway.
Enterprise API Gateway
Microservices API Gateway
Enterprise API Gateways are mainly used for an API marketplace, where you expect third parties to consume your APIs after paying for the usage. Most API integrations are of this nature, and the following are the main concerns to focus on when developing an Enterprise API Gateway.
Deployment is done by a separate enterprise integration team, typically following a manual QA → Staging → UAT → Production process.
DevOps pipelines would hardly be used in this kind of scenario.
Error encapsulation (custom errors) for clients.
Invocation rates and HTTP status codes need to be monitored; your billing may depend on them (a small metering sketch follows this list).
More focus is needed on security, data compliance, standards, etc.
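As an illustration of the monitoring and billing concern above, the following sketch counts invocations and HTTP status codes per API key. The key names and the place where record() would be called are assumptions; real gateways export this data to a metering or analytics backend.

```python
# Illustrative usage-metering sketch for an Enterprise API Gateway: counts
# invocations and HTTP status codes per API key so billing and monitoring can
# be derived from the totals.
from collections import Counter, defaultdict

class UsageMeter:
    def __init__(self):
        # {api_key: Counter({status_code: count})}
        self._by_key = defaultdict(Counter)

    def record(self, api_key: str, status_code: int) -> None:
        """Hypothetically called once per proxied request, after the upstream responds."""
        self._by_key[api_key][status_code] += 1

    def invocations(self, api_key: str) -> int:
        return sum(self._by_key[api_key].values())

    def server_errors(self, api_key: str) -> int:
        return sum(c for status, c in self._by_key[api_key].items() if status >= 500)

meter = UsageMeter()
meter.record("partner-42", 200)
meter.record("partner-42", 503)
print(meter.invocations("partner-42"), meter.server_errors("partner-42"))  # 2 1
```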
For a Microservices API Gateway, the concerns are as follows.
It is used inside your own ecosystem, where the API Gateway is exposed only to internal clients.
Heavy use of CI/CD pipelines with deployment types such as Canary, Shadow, Blue-Green, etc. (a canary-routing sketch follows this list).
The API may expose full details of errors along with the stack trace.
Non-functional requirements such as latency and performance are prioritized over enterprise-level constraints such as standards, compliance, and security.
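To illustrate the canary deployment style mentioned above, here is a tiny weighted-routing sketch. The upstream URLs and the 10% canary weight are placeholder values.

```python
# Illustrative canary-routing sketch: send a small fraction of traffic to a
# new service version, the rest to the stable one.
import random

STABLE = "http://orders-v1.internal:9001"
CANARY = "http://orders-v2.internal:9001"
CANARY_WEIGHT = 0.10  # 10% of requests go to the canary

def pick_upstream() -> str:
    """Choose an upstream for one request according to the canary weight."""
    return CANARY if random.random() < CANARY_WEIGHT else STABLE

# Example: roughly 10 of 100 simulated requests hit the canary.
sample = [pick_upstream() for _ in range(100)]
print(sample.count(CANARY), "requests routed to the canary")
```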
The following can be identified as API Gateway anti-patterns.
Overburdening the gateway by routing most of the traffic through a single service.
Not considering the scalability and resiliency of the services the API Gateway depends on (e.g., authentication, service discovery, routing). Failure of these services directly affects the API Gateway's availability.
Using the API Gateway as an ESB by rerouting internal traffic through it. The API Gateway should route only external incoming traffic to your service layer.
Not mitigating and managing the single point of failure.
Leaving your API Gateways unmonitored and isolated.
What commercial and open-source products can we use?
Kong Gateway
Apache APISIX
Tyk
Goku
WSO2
KrakenD
Zuul
According to the definition by Gartner: “Microservice is a tightly scoped, strongly encapsulated, loosely coupled, independently deployable, and independently scalable application component.”
The goal of microservices is to decompose the application into sufficiently loosely coupled microservices/modules, in contrast to monolithic applications, where modules are highly coupled and deployed as one big chunk. This is helpful for the following reasons:
Each microservice can be deployed, upgraded, scaled, maintained, and restarted independently of its sibling services in the application.
Agile development & agile deployment with an autonomous cross-functional team.
Flexibility in choosing technologies, and scalability.
Different loosely coupled services are deployed based on their own specific needs, where each service has its own fine-grained API model to serve different clients (web, mobile, and 3rd-party APIs).
If clients communicate directly with each of the deployed microservices, the following challenges should be taken into consideration:
When microservices expose fine-grained APIs, the client has to make a request to each microservice. Rendering a typical single page may require multiple server round trips to fulfill the request. This is even worse for devices on low-bandwidth networks, such as mobile.
The diverse communication protocols (such as gRPC, Thrift, REST, AMQP, etc.) used across microservices make it challenging and cumbersome for the client to support them all.
Common gateway functionalities (such as authentication, authorization, logging) have to be implemented in each microservice.
It becomes difficult to change microservices without disrupting clients. For example, when merging or splitting microservices, the client code may need to be rewritten.
To address these challenges, an additional layer is introduced between the client and the services, acting as a reverse proxy that routes requests from the client to the back-end. Similar to the facade pattern in object-oriented design, it provides a single entry point to the APIs and encapsulates the underlying system architecture. This layer is called the API Gateway.
By encapsulating the underlying system and decoupling it from the clients, the gateway provides a single entry point for clients to communicate with the microservice system.
The API gateway consolidates edge functionalities rather than having every microservice implement them. Some of these functionalities are listed below (a short sketch of two of them follows the list):
Authentication and authorization
Service discovery integration
Response caching
Retry policies, circuit breaker, and QoS
Rate limiting and throttling
Load balancing
Logging, tracing, correlation
Headers, query strings, and claims transformation
IP whitelisting
IAM
Centralized Logging (transaction ID across the servers, error logging)
Identity Provider, Authentication and Authorization
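As a small illustration of two of these edge functionalities, header transformation and request correlation, the sketch below strips internal headers and attaches a correlation ID. The header names used here (X-Correlation-ID, X-Internal-*) are common conventions, not anything mandated by a particular gateway.

```python
# Illustrative edge-functionality sketch: header transformation plus request
# correlation, applied to each request before it is forwarded upstream.
import uuid

def transform_request_headers(headers: dict) -> dict:
    # Strip headers that should never leak past the edge (assumed convention).
    out = {k: v for k, v in headers.items()
           if not k.lower().startswith("x-internal-")}
    # Attach a correlation ID so logs across services can be stitched together.
    out.setdefault("X-Correlation-ID", str(uuid.uuid4()))
    return out

incoming = {"Authorization": "Bearer abc", "X-Internal-Debug": "1"}
print(transform_request_headers(incoming))
```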
The basic concept of BFF is developing a niche backend for each user experience. The guideline by Phil Calçado is "one experience, one BFF". If the requirements across clients (an iOS client, an Android client, a web browser, etc.) vary significantly, and the time to market of a single proxy or API becomes problematic, BFFs are a good solution. It should also be noted that the more complex design requires a more complex setup.
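A minimal sketch of the idea, assuming a web BFF and a mobile BFF that shape the same (invented) upstream product record differently:

```python
# Illustrative BFF sketch: "one experience, one BFF". The field names and the
# upstream record are invented for the example.
PRODUCT = {
    "id": 1,
    "name": "Trekking Pole",
    "description": "A very long marketing description ...",
    "price_cents": 4999,
    "image_urls": ["full.jpg", "thumb.jpg"],
}

def web_bff_view(product: dict) -> dict:
    # The web client renders the full page, so it gets everything.
    return product

def mobile_bff_view(product: dict) -> dict:
    # The mobile client gets a trimmed payload: small image, no long text.
    return {
        "id": product["id"],
        "name": product["name"],
        "price_cents": product["price_cents"],
        "thumbnail": product["image_urls"][-1],
    }

print(mobile_bff_view(PRODUCT))
```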
GraphQL is a query language for your API. Phil Calçado argues in this article that BFF and GraphQL are related but not mutually exclusive concepts. He adds that BFFs are not about the shape of your endpoints but about giving your client applications autonomy: you can build your GraphQL APIs as many BFFs or as an OSFA (one-size-fits-all) API.
The Netflix streaming service, available on more than 1,000 different device types (televisions, set-top boxes, smartphones, gaming systems, tablets, etc.) and handling over 50,000 requests per second during peak hours, found substantial limitations in the OSFA (one-size-fits-all) REST API approach and instead used an API Gateway tailored for each device.
Common baseline evaluation criteria include simplicity, open source vs. proprietary, scalability and flexibility, security, features, community, administration (support, monitoring, and deployment), environment provisioning (installation, configuration, hosting options), pricing, and documentation.
Some API requests map directly to a single service API and can be served by routing the request to the corresponding microservice. Complex API operations that require results from several microservices, however, are served by API composition/aggregation (a scatter-gather mechanism). Where one service depends on another and synchronous communication is required, the chained composition pattern has to be followed. The composition layer has to support a significant portion of ESB/integration capabilities such as transformations, orchestration, resiliency, and stability patterns.
A root container is deployed with special distributor and aggregator functionalities (or microservices). The distributor is responsible for breaking a request down into granular tasks and distributing those tasks to microservice instances. The aggregator is responsible for aggregating the results produced by the business workflow across the composed microservices.
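A rough sketch of that scatter-gather composition, assuming three hypothetical downstream services: the distributor fans the calls out in parallel and the aggregator merges whatever comes back.

```python
# Illustrative scatter-gather sketch: fan a request out to several
# microservices concurrently, then merge the partial results.
from concurrent.futures import ThreadPoolExecutor
import json
import urllib.request

# Hypothetical downstream services contributing to one composite response.
SERVICES = {
    "profile": "http://localhost:9001/users/42",
    "orders":  "http://localhost:9002/orders?user=42",
    "reviews": "http://localhost:9003/reviews?user=42",
}

def fetch(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=2) as resp:
        return json.load(resp)

def compose_user_dashboard() -> dict:
    # Distributor: issue all upstream calls concurrently.
    with ThreadPoolExecutor(max_workers=len(SERVICES)) as pool:
        futures = {name: pool.submit(fetch, url) for name, url in SERVICES.items()}
    # Aggregator: merge partial results; a failed call degrades to None.
    result = {}
    for name, future in futures.items():
        try:
            result[name] = future.result()
        except Exception:
            result[name] = None
    return result
```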
A service mesh in microservices is a configurable network infrastructure layer that handles interprocess communication. This is akin to what is often termed a sidecar proxy or sidecar gateway. It provides functionalities such as:
Load Balancing
Service Discovery
Health Checks
Security
On the surface, it appears as though API gateways and service meshes solve the same problem and are therefore redundant. They do solve the same problem, but in different contexts. An API gateway is deployed as part of a business solution discoverable by external clients and handles north-south traffic (facing external clients), whereas a service mesh handles east-west traffic (between different microservices).
Implementing a service mesh keeps resilient communication patterns such as circuit breakers, discovery, health checks, and service observability out of your own code. For a small number of microservices, alternative strategies for failure management should be considered, as a service mesh may be overkill; for a larger number of microservices, it is beneficial.
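For a sense of what the mesh saves you from hand-rolling in every service, here is a bare-bones circuit-breaker sketch; the failure threshold and reset timeout are arbitrary example values.

```python
# Illustrative circuit-breaker sketch: fail fast after repeated upstream
# failures, then allow a trial call once the reset timeout has elapsed.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=10.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```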
Introducing an API Gateway also has drawbacks:
Possible single point of failure or bottleneck.
Increased response time due to the additional network hop through the API Gateway, and added complexity.
Hence, the best option is to have multiple API Gateways, each designed to cater to specific needs.
Diagram 2: More accurate usage of API Gateway
Fig. Communication in Microservices
In short, an API Gateway behaves much like API management, but it is important not to confuse API management with an API Gateway.
Fig. Microservice API Gateway
The Backend for Frontend (BFF) pattern is a variation of the API Gateway pattern. Rather than a single point of entry for clients, it provides multiple gateways, one per client type. The purpose is to provide tailored APIs according to the needs of each client, removing a lot of the bloat caused by a generic API that serves all clients.
Fig. Backend for Frontend (BFF) Pattern
Zuul 2 at Netflix is the front door for all requests coming into Netflix's cloud infrastructure. Zuul 2 significantly improves the architecture and features that allow the gateway to handle, route, and protect Netflix's cloud systems, and helps provide Netflix's 125 million members the best experience possible.
Fig. Zuul in Netflix Cloud Architecture (Image Source: https://netflixtechblog.com)
AWS provides a fully managed service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs, where developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.
Fig. AWS API Gateway
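A hedged example of the "quick create" flow with boto3, where an HTTP API proxies every request to a single backend URL; the API name and target URL are placeholders, and credentials and region are assumed to come from the standard AWS configuration.

```python
# Hedged sketch: creating an HTTP API with Amazon API Gateway via boto3's
# quick-create flow, which proxies every request to one backend URL.
import boto3

client = boto3.client("apigatewayv2")
api = client.create_api(
    Name="orders-http-api",                         # placeholder name
    ProtocolType="HTTP",
    Target="https://orders.internal.example.com",   # placeholder backend URL
)
print("Invoke URL:", api["ApiEndpoint"])
```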
Kong Gateway is an open-source, lightweight API gateway optimized for microservices, delivering low latency and high scalability. If you just want the basics, this option will work for you. It scales horizontally simply by adding more nodes and supports large and variable workloads with very low latency.
Fig. Kong API Gateway
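A hedged example of registering a service and a route against a local Kong node through its Admin API (which listens on port 8001 by default); the service name, upstream URL, and path are placeholders.

```python
# Hedged sketch: configure Kong through its Admin API so that requests to
# /orders are proxied to a hypothetical internal orders service.
import json
import urllib.request

ADMIN = "http://localhost:8001"  # Kong Admin API default address

def post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        ADMIN + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 1) Register the upstream service, 2) expose it on a public path.
post("/services", {"name": "orders", "url": "http://orders.internal:9001"})
post("/services/orders/routes", {"paths": ["/orders"]})
```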
An API gateway with too many added features becomes an overambitious gateway and encourages designs that are difficult to test and deploy. It is highly recommended to avoid aggregation and data transformation in the API Gateway; domain smarts are better placed in application code that follows the defined software development practices. Netflix's API Gateway, Zuul 2, moved a lot of the business logic that Zuul had in the gateway back to the origin systems. For more details, refer here.
Fig. Composite / Integration service in layered Microservices
Combining these two technologies can be a powerful way to ensure application uptime and resiliency while keeping your applications easily consumable. Viewing the two as competing is a bad idea; it is better to view them as complementary in deployments that involve both microservices and APIs.
Fig. Layered Microservices with Service Mesh