Tech Team Stories: Supporting Multi-Architecture Container Images

Hassen Harzallah
Last updated on April 13, 2023

In recent years, containers have become one of the most essential tools for developing and running applications, and Aircall is no exception: containers are involved in every aspect of our work.

Aircall, the cloud-based phone system of choice for modern businesses, keeps growing rapidly, and we rely on more and more architectures to develop our product. Managing all the different containers involved in the development process has therefore become a real challenge.

This challenge became a real concern for our SRE team when we started migrating our AWS Lambda functions to the Arm64 runtime, because most of the containers involved in the Lambdas' CI/CD had to be migrated to Arm64 as well.

After looking for ways to make this transition easier, we decided to start using multi-architecture container images. They provide an easy way to manage containers for different architectures and don't require big changes to our existing process.

What are multi-architecture container images?

Multi-architecture container images are container images that support several architectures at the same time. These images bring more simplicity when:

  • Creating images: a single common pipeline (build, test, security scanning, etc.) covers your different architectures.

  • Managing images: images are pushed using a single naming convention, which helps to set up your repository policies.

  • Using images: you can pull images from a common path, which means you don’t need to worry about selecting the right architecture while developing your application or while setting up your CI/CD.

How they work

First, let’s dig deeper into the container images in general and get a better view of what they are. Container images consist of two main parts: layers and a manifest. Each container image has one or more layers of file system content. The manifest specifies the layers that make up the image as well as its runtime characteristics and configuration.

When you first pull a container image for use in Docker or another container runtime, two things happen:

1. The manifest is pulled locally based on the specified image repository and tag.

2. The manifest is used to assemble the container file system from the layers specified.

For a concrete example, you can use the docker inspect <image> command to see the manifest of any local image in your Docker development environment: platform characteristics such as architecture and operating system are clearly specified by the image manifest.
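For a hypothetical local image (the name and layer digests below are placeholders), the output, trimmed to the relevant fields, could look like this:

```
$ docker inspect my-app:latest
[
    {
        "Architecture": "amd64",
        "Os": "linux",
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:aa6e…",
                "sha256:3f4b…"
            ]
        }
    }
]
```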

With multi-architecture image support in our container repository, it becomes easier for us to build docker images supporting multiple architectures and operating systems and refer to them by the same abstract manifest name. This is achieved through the support of an image specification component known as a manifest list, or image index.

A manifest list (or image index) allows for the nested inclusion of other image manifests, where each included image is specified by architecture, operating system, and other platform attributes.

The container engine responsible for creating the container uses the values in the manifest list to pull from the registry the correct layers for the compute environment where it's running.

For a concrete example, you can use the docker manifest inspect <image> command to see the manifest list of your docker image: the manifest list contains the different manifests available for your image.
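For a hypothetical multi-architecture image, the output could look roughly like this (digests shortened, placeholder image name):

```
$ docker manifest inspect my-app:latest
{
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "digest": "sha256:1a2b…",
            "platform": { "architecture": "amd64", "os": "linux" }
        },
        {
            "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
            "digest": "sha256:3c4d…",
            "platform": { "architecture": "arm64", "os": "linux" }
        }
    ]
}
```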

How we implemented them

At Aircall, we use Amazon ECR to store our container images. Amazon ECR, like most registry services offered by major cloud providers, is compatible with multi-architecture images.

We are also using Kaniko to build our container images. Kaniko is a tool that allows us to build our images from a Dockerfile, inside a container or a Kubernetes cluster.
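For illustration, a per-architecture build job using Kaniko in a Gitlab CI pipeline might look roughly like the sketch below; the stage name, the ECR_REPOSITORY variable, and the runner tag are assumptions, not our exact configuration:

```yaml
build-amd64:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]   # reset the entrypoint so Gitlab CI can run the script
  script:
    # Build from the project's Dockerfile and push an architecture-specific tag
    - /kaniko/executor
        --context "${CI_PROJECT_DIR}"
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
        --destination "${ECR_REPOSITORY}:latest-amd64"
  tags:
    - amd64   # run on an amd64 Gitlab runner
```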

Despite being a great tool to build container images, Kaniko doesn't support the creation of manifest lists (issue link: https://github.com/GoogleContainerTools/kaniko/issues/786). As a consequence, we had to use another tool for that purpose.

Manifest-tool is the tool that we decided to use. It’s a command line utility used to view or push multi-platform container image references.

Manifest-tool is pretty straightforward to use, with two basic commands:

  • Inspect: to inspect the manifest of any image

  • Push: to create a manifest list in a registry, using either a YAML file describing the images to assemble or a series of command-line parameters (see the sketch after this list).
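As a sketch, a spec file describing the images to assemble could look like the following; the registry path, repository name, and tags are placeholders:

```yaml
# multi-arch-spec.yaml (hypothetical registry path and tags)
image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
manifests:
  - image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest-amd64
    platform:
      architecture: amd64
      os: linux
  - image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest-arm64
    platform:
      architecture: arm64
      os: linux
```

Pushing it with manifest-tool push from-spec multi-arch-spec.yaml then creates the manifest list in the registry; the from-args variant achieves the same result with the --platforms, --template, and --target parameters.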

Supporting multi-architecture container images was easy: we only needed to add a step to the Gitlab CI pipeline we use to create docker images.

The new step creates a manifest list from the manifests of each architecture, which are already built and pushed. It was added without breaking our existing process.

This is our process with the new step added to support multiple architectures:

1. Build docker images for each architecture using a Dockerfile and Kaniko, on our fleet of Gitlab Runners with different operating systems and architectures.

2. Push the docker image of each architecture using a distinct tag (e.g. latest-amd64).

3. Create a manifest list assembling the different images already pushed under one common tag (e.g. latest).

4. Scan all the docker images for security vulnerabilities.

Now that the multi-architecture container image is in place, we can reference it in our code and our gitlab-ci pipelines using the common tag "latest", which simplifies our workflow.

Concrete example

We added a dedicated Gitlab CI job to create the multi-architecture container images.
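A minimal sketch of such a job, assuming an image that ships the manifest-tool binary and a hypothetical ECR_REPOSITORY variable, could look like this:

```yaml
create-manifest-list:
  stage: manifest
  image:
    name: mplatform/manifest-tool:alpine   # any image providing manifest-tool works
    entrypoint: [""]
  script:
    # Assemble the per-architecture images (pushed by the previous jobs)
    # under the single common tag "latest"
    - manifest-tool push from-args
        --platforms linux/amd64,linux/arm64
        --template "${ECR_REPOSITORY}:latest-ARCH"
        --target "${ECR_REPOSITORY}:latest"
```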


As you can see, this step has been added without breaking the existing workflow.

As an example, let's look at one of our CI/CD jobs, built with Gitlab CI, that deploys lambda functions to the different runtimes (x86 and arm64) using the AWS SAM CLI.

Before implementing multi-architecture images, we had to create a new environment variable in each of our Gitlab runners to define a tag suffix that was used to select docker images from our ECR repository.

Then, when we needed to spin up a new container to run our CI/CD jobs, we had to pull the correct docker image using a tag defined by that environment variable. For example, a gitlab-ci file used to run AWS SAM deployment jobs followed this pattern:
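A simplified sketch, with illustrative job and variable names (ECR_REGISTRY and IMAGE_TAG_SUFFIX are assumptions):

```yaml
deploy-lambda:
  stage: deploy
  # IMAGE_TAG_SUFFIX (e.g. "-amd64" or "-arm64") is the variable
  # defined on each Gitlab runner to select the matching image
  image: "${ECR_REGISTRY}/sam-cli:latest${IMAGE_TAG_SUFFIX}"
  script:
    - sam build
    - sam deploy --no-confirm-changeset
```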

Now, using the multi-architecture images, we no longer need to create that environment variable in our Gitlab runners, nor to add a suffix to the image tag. The pipeline becomes easier to maintain, as it is shared across all the different architectures.
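With the common tag, the same sketch reduces to:

```yaml
deploy-lambda:
  stage: deploy
  # The common multi-architecture tag works on runners of any architecture
  image: "${ECR_REGISTRY}/sam-cli:latest"
  script:
    - sam build
    - sam deploy --no-confirm-changeset
```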


Conclusion

To summarize, supporting multi-architecture container images was both a simple and a rewarding task: it simplifies the use and management of container images across multiple architectures. The transition was also very smooth, as we didn't break the tag convention our teams already used; we enriched it.

More information

1. OCI image index specification: https://github.com/opencontainers/image-spec/blob/main/image-index.md

2. Introducing multi-architecture container images for Amazon ECR: https://aws.amazon.com/blogs/containers/introducing-multi-architecture-container-images-for-amazon-ecr/


Published on December 21, 2022.
