Significant enhancements to the AWS Lambda Service

After the flurry of announcements at AWS re:Invent 2020 surrounding one of the company's core serverless components, we asked Nasstar's own cloud expert, AWS Ambassador and AWS Technical Practice Lead, Jason Oliver, to unpack and simplify the recent updates and explain the value they bring.

Introduction

Launched in November 2014, AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes.

With serverless architectures relying heavily on components such as AWS Lambda, Nasstar has observed the benefits they bring to any organisation. Those advantages have been underscored recently by the unexpected demand spikes caused by COVID-19, where systems have either crashed under the sudden load or companies have been forced into tactical measures, such as the arbitrary queuing mechanisms many of us have experienced with online retailers, to flatten out the peaks.

Cost reduction

Lambda has always billed function execution duration in 100ms increments. From 1 December 2020, AWS bills duration in 1ms increments rather than rounding up to the nearest 100ms per invoke, providing much finer billing granularity and further scope for cost optimisation.

For example, a function that runs in 30ms on average used to be billed for 100ms; now it is billed for 30ms, a 70% reduction in its duration spend. In our experience, this has a profound impact at real-world scale in production workloads.

This update applies in all regions where Lambda is available, takes effect automatically, and no changes are required to benefit from the cost reduction.

NOTE: There is no change to the Lambda pricing model: you still pay for requests and duration, with duration and Provisioned Concurrency charged at a rate proportional to the amount of memory configured.
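To put the new granularity into context, here is a minimal back-of-the-envelope sketch in Python. The per-GB-second rate, invocation volume and memory size are illustrative assumptions rather than quoted prices; check the AWS Lambda pricing page for current figures.

```python
import math

# Illustrative rate only - confirm against current AWS Lambda pricing.
PRICE_PER_GB_SECOND = 0.0000166667

def duration_cost(duration_ms: float, memory_mb: int, granularity_ms: int) -> float:
    """Cost of one invocation with duration rounded up to the billing granularity."""
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

invocations = 10_000_000   # assumed monthly invocation count
memory_mb = 128            # assumed memory configuration
avg_duration_ms = 30       # the 30ms example from the text

old = invocations * duration_cost(avg_duration_ms, memory_mb, granularity_ms=100)
new = invocations * duration_cost(avg_duration_ms, memory_mb, granularity_ms=1)
print(f"100ms rounding: ${old:.2f}  1ms rounding: ${new:.2f}  "
      f"saving: {1 - new / old:.0%}")
```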

Larger functions

AWS also announced an expansion of the compute and memory options available to Lambda functions: up to six virtual CPUs (vCPUs) and 10 GB of memory respectively, more than a threefold increase over the previous limits.

Larger functions are available now in 17 regions including EU (Ireland) and EU (London).
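As a quick illustration, the new ceiling can be applied to an existing function with a single configuration update. The sketch below uses boto3; the function name is a placeholder.

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise an existing function to the new 10 GB ceiling (10,240 MB).
# Lambda allocates vCPU in proportion to configured memory, so this also
# provides the maximum of six vCPUs. "my-function" is a placeholder name.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=10240,
)
```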

Container Images

AWS announced a significant change: Lambda now supports Container Images. Before this, a Lambda function could only be deployed as a discrete code package. This development enables customers to package and deploy functions as Container Images of up to 10 GB, leveraging the same improved Lambda service detailed above.

NOTE: The 15-minute function timeout remains unchanged, so if a lengthy execution is required, then AWS Batch, Step Functions or Fargate may be a better fit.

AWS provides Lambda base image options for Node.js, Python, Java, .NET, Go, and Ruby, all patched and maintained by AWS. Additionally, customer-provided base images are supported, provided they are Linux-based.

Customer-provided images must run the AWS Lambda Runtime Interface Client (an agent that implements the Lambda Runtime API), must be able to operate with 512 MB of writable storage at /tmp as the only writable path, and must run only as the default user to preserve least-privilege security during execution.
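To make those requirements concrete, below is a minimal, hypothetical Python handler of the kind a container image would package. The Runtime Interface Client invokes the handler, and /tmp is the only writable path available at run time.

```python
import os
import tempfile

def handler(event, context):
    # Invoked by the AWS Lambda Runtime Interface Client inside the container.
    # Only /tmp is writable (512 MB); the rest of the file system is read-only.
    scratch = os.path.join(tempfile.gettempdir(), "scratch.dat")
    with open(scratch, "wb") as fh:
        fh.write(b"intermediate working data")
    return {"statusCode": 200, "body": f"wrote {os.path.getsize(scratch)} bytes to /tmp"}
```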

Container Images launch with support for all major Lambda features, including provisioned concurrency, EFS, X-Ray, VPC, reserved concurrency, triggers and destinations, with the notable exceptions (at launch) of code signing and layers. Container Images are also supported by all major frameworks, including the Serverless Application Model (SAM), CloudFormation, the Cloud Development Kit (CDK), the Serverless Framework, and Terraform.

Container Images are deployed from an Amazon Elastic Container Registry (ECR) repository.
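For illustration, once an image has been pushed to ECR, a function can be created from it with a call like the following boto3 sketch; the image URI, role ARN and function name are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Create a function from a container image already pushed to ECR.
# All identifiers below are placeholders for illustration.
lambda_client.create_function(
    FunctionName="my-container-function",
    PackageType="Image",
    Code={"ImageUri": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-repo:latest"},
    Role="arn:aws:iam::123456789012:role/my-lambda-execution-role",
    MemorySize=1024,
    Timeout=60,
)
```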

Container Images are available now in eight regions including EU (Ireland) but are not currently available in EU (London).

Insights

Announced in October 2020, Amazon CloudWatch Lambda Insights is now in public preview. This is an opt-in service at the function level, much like X-Ray, and provides clearer, simpler operational visibility by automatically collating and summarising Lambda performance metrics, errors and logs in prebuilt dashboards, saving time-consuming manual work.

CloudWatch Lambda Insights is an operational oversight and visibility tool that Nasstar has found invaluable for monitoring function behaviour such as runaway concurrency, memory usage and cold starts.
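As a sketch of how the opt-in works, Lambda Insights is enabled per function by attaching the Insights extension layer and granting the execution role the relevant managed policy. The layer version, region and resource names below are illustrative and should be checked against the AWS documentation.

```python
import boto3

lambda_client = boto3.client("lambda")
iam = boto3.client("iam")

# Attach the Lambda Insights extension layer to a function. The ARN pattern and
# version shown are illustrative; note this call replaces any existing layers,
# so merge with the function's current layer list in real use.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=["arn:aws:lambda:eu-west-1:580247275435:layer:LambdaInsightsExtension:14"],
)

# Allow the execution role to publish the collected metrics and logs.
iam.attach_role_policy(
    RoleName="my-lambda-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy",
)
```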

Summary

These updates constitute a massive step-change and will be welcome news for those with memory or computationally intensive workloads such as machine learning (ML) applications, batch and extract, transform and load (ETL) jobs, modelling, genomics, gaming, high-performance computing (HPC), and media processing – all of these become easier to implement and scale with Lambda.

By harnessing the power of serverless technologies, including AWS Lambda, customers can benefit from:

  • Improved developer efficiency
  • Simplified workload architecture
  • Improved TCO for high-memory, high-compute workloads
  • Improved serverless application performance
  • Advanced Vector Extensions (AVX) support, improving latency at no additional cost

With these recent enhancements to the AWS Lambda service, we foresee, and hope, that many more organisations will embrace this AWS super-service to unlock its power and versatility, better handle unprecedented environmental change, and reap the cost benefits of serverless technology.