AWS Lambda: Insights from the engine room

Jaime Llamas · Last updated on July 4, 2024 · 5 min read

At Aircall, we work with many serverless microservices. Lambda is the quintessential serverless service in AWS, allowing developers to run code without provisioning or managing servers. However, like any powerful tool, it has a learning curve before you master its intricacies. Here, I'll share some insights from my experience working with AWS Lambda, mainly covering topics related to Lambda infrastructure setup.

General setup

Architecture

When choosing between ARM and x86 architectures for AWS Lambda, ARM generally offers better cost-efficiency and performance-per-dollar, with potential cost savings and improved energy efficiency. It's often a good choice for new projects and compute-intensive tasks. However, x86 maintains an advantage in compatibility with legacy code and niche libraries. As always, the best choice depends on your specific use case. For more information, take a look at the dedicated AWS article on this.

Memory

One of the first things to consider when working with Lambda is memory allocation. Lambda functions can be configured with different memory sizes, and the CPU power allocated to the function scales with the memory you choose. Choosing the right memory size is crucial for optimal performance and cost-effectiveness. Allocating too little memory can lead to execution timeouts or out-of-memory errors, while over-allocating memory can result in higher costs without any performance gains.

There are some interesting tools to help you find the right configuration, such as AWS Lambda Power Tuning, which benchmarks your function across several memory sizes and charts the cost/performance trade-off.
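
To make these settings concrete, here is a minimal sketch of how the two knobs discussed above, architecture and memory, can be declared with the AWS CDK in TypeScript. The stack name, function name, asset path, and values are placeholders rather than recommendations.

```typescript
import * as path from 'node:path';
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class OrdersStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'OrdersHandler', {
      runtime: lambda.Runtime.NODEJS_20_X,
      // Graviton-based functions are billed at a lower price per GB-second.
      architecture: lambda.Architecture.ARM_64,
      // Memory also drives the CPU share allocated to the function; treat
      // this value as a starting point and adjust it based on benchmarks.
      memorySize: 1024,
      timeout: Duration.seconds(30),
      handler: 'index.handler',
      code: lambda.Code.fromAsset(path.join(__dirname, '../dist/orders')),
    });
  }
}
```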

Reducing cold start time

When executing Lambda functions, it's important to minimize cold start times and deployment overhead. Here are a few points that may be relevant to you:

One-time initialization

Lambda functions can benefit from one-time initialization, especially when working with clients or third-party libraries, since the global scope is reused by every subsequent invocation in the same execution environment. By initializing these resources outside the main function handler, you pay the setup cost only once, during the cold start, which improves the latency of warm invocations and overall performance. However, it's essential to ensure that these resources are properly disposed of or reused across invocations to avoid resource leaks.
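
As an illustration, here is a minimal TypeScript sketch of the pattern, assuming a DynamoDB lookup behind API Gateway; the table, environment variable, and route are hypothetical.

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';
import type { APIGatewayProxyHandler } from 'aws-lambda';

// Created once per execution environment, during the init phase of a cold
// start; warm invocations reuse these objects instead of paying the setup
// cost again.
const documentClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));
// CONTACTS_TABLE is a hypothetical environment variable for this sketch.
const tableName = process.env.CONTACTS_TABLE;

export const handler: APIGatewayProxyHandler = async (event) => {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ message: 'Missing id' }) };
  }

  const { Item } = await documentClient.send(
    new GetCommand({ TableName: tableName, Key: { id } }),
  );

  return {
    statusCode: Item ? 200 : 404,
    body: JSON.stringify(Item ?? { message: 'Not found' }),
  };
};
```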

Lambda layers

Lambda layers allow you to create and manage code and data that can be shared across multiple Lambda functions. By using layers, you can keep your deployment packages smaller and more manageable, as you don't need to include the same dependencies or libraries in every function's deployment package. A shared layer could include, for example, authentication or logging libraries, since these tend to be common across every team in a company.
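
As a sketch, this is roughly how a shared layer could be declared with the AWS CDK; the asset path, layer name, and the function it is attached to are assumptions made for the example.

```typescript
import * as path from 'node:path';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

declare const scope: Construct; // your stack or construct, defined elsewhere
declare const ordersHandler: lambda.Function; // an existing function

// Shared layer for cross-cutting code such as logging or auth helpers. The
// asset path is a placeholder; for Node.js, the layer content has to live
// under a nodejs/ folder inside that asset so the runtime can resolve it.
const sharedLayer = new lambda.LayerVersion(scope, 'SharedLayer', {
  code: lambda.Code.fromAsset(path.join(__dirname, '../layers/common')),
  compatibleRuntimes: [lambda.Runtime.NODEJS_20_X],
  description: 'Shared logging and auth helpers',
});

// Functions reference the layer instead of re-bundling the same dependencies.
ordersHandler.addLayers(sharedLayer);
```

Keep in mind that layers still count toward the 250 MB unzipped deployment size limit, so they reduce duplication rather than the overall size budget.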

Exclude from the bundle

Depending on your build method, you might be able to exclude certain files or libraries from the bundle while preserving the imports in the code. This is useful for third-party dependencies that are already available in the Lambda execution environment, such as the AWS SDK, which ships with the Node.js runtimes.
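
For instance, with esbuild (the bundler assumed for this sketch), the external option keeps the import statements in the output while leaving the listed packages out of the bundle; the entry point and output path are placeholders.

```typescript
import { build } from 'esbuild';

build({
  entryPoints: ['src/handler.ts'],
  outfile: 'dist/handler.js',
  bundle: true,
  platform: 'node',
  target: 'node20',
  // The AWS SDK v3 already ships with the Node.js 18+ Lambda runtimes, so
  // re-bundling it only inflates the package and the cold start.
  external: ['@aws-sdk/*'],
}).catch(() => process.exit(1));
```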

Challenge your dependencies

Carefully review the dependencies you're including in your bundle and remove any unused or unnecessary dependencies. Challenge your dependencies with questions like:

  • Is it really a production dependency, or can it be moved to development? At Aircall we work mainly with TypeScript, so these are good examples of dev dependencies:
    - @aws-sdk/*, @commitlint/*, @types/*, eslint/*, typescript, jest.

  • Do I really need this library? axios, zod, and lodash are relatively large libraries. If you barely take advantage of them in your code, consider replacing them with something lighter or building the small piece you need yourself (see the sketch below).
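
As a small example of that last point, the Node.js 18 and later Lambda runtimes expose a global fetch, so a simple HTTP call may not need axios at all; the endpoint and payload shape below are hypothetical.

```typescript
// The URL and payload are placeholders for this sketch.
interface ContactPayload {
  email: string;
  name: string;
}

export async function createContact(payload: ContactPayload): Promise<void> {
  const response = await fetch('https://api.example.com/contacts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(`Contact creation failed with status ${response.status}`);
  }
}
```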

Minification

It's easy to think of minification when facing cold start issues, since it has been a common industry practice over the last few years. But is it really effective? Carefully review this neat article by Maciej Radzikowski and read the conclusions; you will likely change your mind.

Optimizing executions

A common architecture pattern is to have Lambdas invoked by another AWS service such as SQS. To make Lambda work more efficiently and save costs, we can consider batch processing. Let's see how it works:

Batch processing

Batch processing with AWS Lambda allows you to invoke a Lambda function with multiple events or records at once, instead of processing them individually. This can be useful when you have a high volume of events or records that need to be processed, as it can improve efficiency and reduce the overall number of invocations and associated costs.

Advantages:

  • Improved Performance: Batch processing can significantly improve the overall throughput and performance of your Lambda function by reducing the overhead of individual invocations.

  • Cost Savings: Since you're processing multiple events or records in a single invocation, you can potentially reduce the overall number of invocations, which can lead to cost savings.

Disadvantages:

  • Complexity: Batch processing introduces additional complexity in terms of handling partial failures, retries, and error management, as you need to ensure that failed events or records are properly handled and retried. Individual failures can be reported back with batchItemFailures (see the sketch after this list).

  • Potential Latency: In some cases, batching events or records may introduce additional latency, as Lambda functions may need to wait for a certain number of events or records to accumulate before processing the batch.
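
Here is the sketch mentioned above: a minimal example of how partial batch failures can be reported from a TypeScript SQS handler. processRecord stands in for your own business logic, and the event source mapping must be configured with the ReportBatchItemFailures response type for the returned identifiers to be honoured.

```typescript
import type { SQSBatchResponse, SQSEvent } from 'aws-lambda';

// Placeholder for your own business logic.
async function processRecord(body: string): Promise<void> {
  JSON.parse(body); // parse and handle a single message here
}

export const handler = async (event: SQSEvent): Promise<SQSBatchResponse> => {
  const batchItemFailures: SQSBatchResponse['batchItemFailures'] = [];

  for (const record of event.Records) {
    try {
      await processRecord(record.body);
    } catch (error) {
      console.error('Failed to process message', record.messageId, error);
      // Only the failed messages are reported back for retry; the
      // successfully processed ones are deleted from the queue.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }

  return { batchItemFailures };
};
```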

Filter criteria

Lambda event source mappings, including the one for Amazon SQS, also offer a powerful feature called event filtering, which allows you to selectively invoke Lambda functions based on specific event attributes or patterns. This can be particularly useful in scenarios where you need to process only a subset of events based on certain conditions, reducing unnecessary invocations and improving overall efficiency.

By leveraging filter criteria, you can create more targeted and efficient event processing pipelines, simplifying your Lambda code, reducing costs, and improving the overall performance of your serverless applications.
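
As an illustration, a filter on an SQS event source mapping could be declared like this with the AWS CDK; the queue, the function, and the type attribute in the message body are assumptions made for this sketch.

```typescript
import { Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import * as sqs from 'aws-cdk-lib/aws-sqs';

declare const ordersQueue: sqs.Queue; // defined elsewhere in the stack
declare const ordersHandler: lambda.Function; // an existing function

// Only messages whose JSON body contains "type": "order_created" will
// invoke the function; batching settings are shown for completeness.
ordersHandler.addEventSource(
  new SqsEventSource(ordersQueue, {
    batchSize: 10,
    maxBatchingWindow: Duration.seconds(5),
    reportBatchItemFailures: true,
    filters: [
      lambda.FilterCriteria.filter({
        body: { type: lambda.FilterRule.isEqual('order_created') },
      }),
    ],
  }),
);
```

Note that with SQS, messages that don't match any filter are dropped and deleted from the queue, so make sure no other consumer still needs them.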

Overall...

As you can see, mastering AWS Lambda involves understanding its nuances and best practices. Be iterative in your approach when building a system made of serverless resources such as Lambda, SQS, SNS, Step Functions, etc. You don't need to make it perfect from the very beginning; just try to understand the use case and adjust the infrastructure accordingly.

