Serverless Diary: How to Triumph Over Challenges - A2A/B2B Integration

Anuj Kothiyal
5 min read · Feb 22, 2021
Original photo on Pexels

I. Introduction

“Software entities should be open for extension but closed for modification.” - Bertrand Meyer

The open/closed principle of SOLID is well known and understood in the developer community. It is key to designing and writing good-quality, maintainable code. The principle also applies, to a large extent, when designing software architecture for a fast-paced agile delivery programme. My idea of a “Transitional Architecture” follows the same spirit: your transitional designs should extend themselves as you move towards the Target Architecture in incremental steps/releases.

The focus of this blog is to extend the previous design and understand how to apply integration patterns to our EDA (Event-Driven Architecture).

II. What is A2A/B2B Integration?

A2A — Application to Application Integration refers to communication between applications deployed within an organization. With multi-cloud architecture adopted by many organizations, these applications can be on either the same or different platforms and networks.

B2B — Business to Business Integration refers to communication between applications deployed across different organizations and firewalls.

Why mention and discuss A2A/B2B at all? As the trend shifts towards building cloud-native solutions, the approach to designing and building secure integrations on the cloud is much the same for both, especially on the public cloud. Approaching A2A integrations the same way as B2B ensures highly secure and loosely coupled applications, both of which are highly desirable traits for any software architecture.

III. Integration Design

Let’s iterate on the previous design to introduce the integration element. We will then understand the various points discussed above.

figure 1: serverless-way-to-integration

Understanding Requirements:

  • Given an existing Microservices event-driven architecture (EDA) deployed within a private network (AWS VPC in our case), I wish to reliably share messages with applications deployed in another AWS account and VPC.
  • Both sender and receiver application accounts have VPC peering enabled with non-overlapping CIDR.
  • The receiver application has a dedicated private API gateway per consumer application.

Understanding the design flow as per figure 1

  • The Microservice consumer (6) gets a message from the SQS queue, performs the necessary validation and transformation on the received message, and then issues an HTTPS REST web service call to the endpoint of the internal ALB (Application Load Balancer).
  • The ALB acts as a proxy and forwards the traffic to a private API Gateway, which sends the message to a backend service (Lambda, SQS, or S3) and returns an acknowledgement.
  • On receiving the acknowledgement, the Microservice (consumer) application removes the message from the queue (this is automatic when using SQS with Lambda).
  • In case of any 4xx HTTP error code (except 429), the Lambda moves the message to the DLQ. For 429 and 5xx error codes, the Lambda throws an error back onto the queue, and the failed message is retried after the SQS visibility timeout. The use of SQS makes the integration reliable and loosely coupled.
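The error-handling rule in the last step can be sketched as a small decision function. This is an illustrative sketch, not the author's actual code; the function name and the string dispositions are assumptions made for the example.

```python
# Sketch: how a Lambda SQS consumer might classify the HTTP status
# returned by the downstream private API into a message disposition.
def classify_response(status_code: int) -> str:
    """Map an HTTP status code to a message disposition.

    - 2xx: success; the message is deleted from the queue
      (automatic with the Lambda + SQS integration).
    - 429 or 5xx: transient; raise an error so SQS redelivers the
      message after the visibility timeout.
    - any other 4xx: permanent client error; send straight to the DLQ.
    """
    if 200 <= status_code < 300:
        return "ack"
    if status_code == 429 or 500 <= status_code < 600:
        return "retry"
    if 400 <= status_code < 500:
        return "dlq"
    return "retry"  # conservative default for anything unexpected
```

Treating 429 (throttling) as retryable rather than as a client error is the key detail: it lets the SQS visibility timeout act as a natural back-off.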

IV. Understanding the design choices and alternative options

  • Why do we have an internal ALB?
    The simple answer: it's a shortfall of AWS private API Gateways. In order to reach a private API Gateway from a VPC in another account, a sample call would look similar to:
    curl https://vpce-0aaa7w99a9c9c999s-asas1eeo.execute-api.eu-west-2.vpce.amazonaws.com/demo -H 'Host: n1d40bvvvv.execute-api.eu-west-2.amazonaws.com'
    So we connect to the DNS name of the VPC endpoint, passing the hostname of the private API Gateway as a custom Host header. This allows the VPC endpoint to route the traffic to the correct API Gateway.

This solution works for peered VPCs in different accounts, but it requires every client to pass a modified Host header, which couples the implementation to the infrastructure deployment of another account. We certainly don't want to be in a state where, if another account destroys and recreates a private API Gateway, all calling applications need to update the hostname.

Using an ALB, we are able to point to a VPC endpoint, which in turn points to a private API Gateway. The ALB is accessible over VPN and VPC peering, ensuring all traffic to the ALB remains private. So by using custom domains we are able to 'trick' the VPC endpoint into knowing where to send traffic, without requiring custom Host headers.

If this were all within the same AWS account, or you are happy to accept that infrastructure coupling, then just drop the ALB from the design (integrate directly with the private API Gateway); the rest still remains a good pattern to follow.
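To make the Host-header trick concrete, here is a hedged sketch of how the cross-account call would be constructed without the ALB. The VPC endpoint DNS name and API ID below are the placeholders from the curl example above, not real resources, and the request is only built, never sent.

```python
# Sketch: build (but do not send) the request from the curl example.
# The URL targets the VPC endpoint's DNS name, while the private API
# Gateway's hostname travels in the Host header so the VPC endpoint
# can route the traffic to the correct API.
import urllib.request

VPCE_DNS = "vpce-0aaa7w99a9c9c999s-asas1eeo.execute-api.eu-west-2.vpce.amazonaws.com"
API_HOST = "n1d40bvvvv.execute-api.eu-west-2.amazonaws.com"

def build_private_api_request(stage_path: str) -> urllib.request.Request:
    """Construct the cross-account request to a private API Gateway."""
    req = urllib.request.Request(f"https://{VPCE_DNS}{stage_path}")
    # This is the coupling the article warns about: every client must
    # know the other account's API Gateway hostname.
    req.add_header("Host", API_HOST)
    return req
```

The ALB-plus-custom-domain design removes exactly this line of coupling: clients call a stable domain, and the Host header gymnastics happen behind it.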

  • What if we used a shared SQS queue / SNS topic? If both the sender and receiver applications are within the same organization, a shared event stream (SQS or SNS) is also an option. Yes, it will work and is secure, but I recommend keeping application boundaries separate and sticking to industry-standard integration via HTTPS/REST. This ensures loosely coupled applications with well-defined boundaries that are also easy to version.
  • What if we had RBAC requirements? I recommend simplicity by design, like the approach discussed in figure 1: a dedicated private API Gateway per client (consumer), where the client is trusted to perform all operations on that specific private API. If that's not acceptable, the alternative and more evolved approach is cross-account IAM roles/permissions, which introduces additional overhead and complexity managed at the code and infrastructure level.
  • What if we were integrating with an application hosted on Azure/GCP etc.? Use the industry-standard approach: OAuth2/JWTs with REST to authorize, including roles as additional claims. The first blog of my serverless series presents a good example of integrating with Azure AD and SharePoint.
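To show where those role claims live, here is an illustrative, standard-library-only sketch that decodes a JWT payload. The "roles" claim name and the toy token are assumptions for the example; in production you would verify the signature against the issuer's keys with a proper library (e.g. PyJWT) rather than decoding blindly.

```python
# Sketch: inspect the (unverified) claims segment of a JWT.
# A JWT is three base64url segments: header.payload.signature.
import base64
import json

def decode_claims(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying it."""
    payload_b64 = token.split(".")[1]
    # base64url decoding requires padding to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def encode_segment(obj: dict) -> str:
    """Helper to build a toy token segment for the demo below."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Toy unsigned token carrying roles as an additional claim (assumption:
# the claim is named "roles"; Azure AD, for instance, uses this name).
toy_token = ".".join([
    encode_segment({"alg": "none", "typ": "JWT"}),
    encode_segment({"sub": "app-123", "roles": ["reader", "writer"]}),
    "",  # empty signature for the unsigned demo token
])
```

The receiving API would authorize each call by checking the verified token's roles claim against the operation being requested.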

Final Reflections and takeaways:

There is no one correct answer when it comes to integrating applications within or across organizations. Integrations can be complicated, and the best implementation and pattern to adopt is heavily driven by the exact requirements, constraints, platforms, and infrastructure involved. The slightest change in any of these factors may warrant a different pattern. So why did I share this specific pattern? Because I wanted to share the quirk involved with AWS private API Gateways, which one wouldn't necessarily discover until things are past design and into implementation, possibly even later. Does all this sound risky and complex? Perform a time-boxed spike. It may not mitigate the risk entirely, but working with residual risk is a far better state for any business to be in. Not to ignore the fact that this also promotes confidence in the architecture design.

I find building integrations an art that shares many characteristics with spider silk: it is strong, has a great ability to stretch, can handle gusts of wind or the impact of a bug, and when it breaks, it breaks only at the point of applied force, leaving the rest of the web intact and functional.


Anuj Kothiyal

Lead Digital Architect, Agile practitioner, Mentor and Fitness Enthusiast.