Run AWS Lambda Serverless Functions Locally

Estimated reading time: 2 minutes

Ever since I discovered AWS Lambda, and serverless functions in general, I have been confused about how to run and debug these functions locally. There are several approaches, but some are only relevant to specific runtimes, while others don’t exactly imitate the environment that AWS supplies.

So, given that I wanted to test my functions, including during CI, with something as close as possible to the real deal, I wasn’t left with many options. Usually, after testing most cases locally, I would just spin up an EC2 instance or upload my function to AWS Lambda to verify that everything worked as expected.

This was the case until I discovered the docker-lambda GitHub repository.

Eureka!

This awesome project gives developers a perfect toolbox for testing their serverless functions before uploading them to the AWS Lambda platform.

According to the repository, docker-lambda gives you the following abilities:

A sandboxed local environment that replicates the live AWS Lambda environment almost identically – including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors – even the user and running process are the same.

The way this works is by harnessing the power of containers: for every runtime, the project’s maintainers created a Docker image which contains the full Lambda filesystem.

You can use this great tool for all sorts of purposes:

  • Developing functions intended for AWS Lambda
  • Reproducing issues with your function quickly and locally
  • Testing functions in CI
  • Compiling native dependencies

And probably more; it’s up to you.

It supports all current runtime options, and the beauty of it is that it’s really easy to use.

You can run these containers in two modes: as a single execution, or as an API server that listens for invoke events. The default is single execution mode, which outputs all logging to stderr and the result of the handler to stdout.

The structure of running these containers (in single execution mode) is as follows:

$ docker run --rm \
  -v <code_dir>:/var/task:ro,delegated \
  [-v <layer_dir>:/opt:ro,delegated] \
  lambci/lambda:<runtime> \
  [<handler>] [<event>]
  • docker run --rm – runs the container and removes it after it terminates
  • -v mounts your code_dir at /var/task inside the running container
  • the ro,delegated options mount the directory read-only, with relaxed consistency for better performance (mainly relevant on Docker Desktop for Mac)
  • the volume for the layer_dir is optional
  • lambci/lambda:<runtime> is the actual runtime image
  • the last two (optional) arguments specify the handler and the event, and apply to most runtimes
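To make the template concrete, here is a small, hypothetical Python helper (not part of docker-lambda) that assembles the single-execution command from the pieces described above:

```python
# Hypothetical helper that assembles the docker run command from the template
# above. The flag values mirror the template; nothing here is part of
# docker-lambda itself.
def build_command(code_dir, runtime, handler=None, event=None, layer_dir=None):
    cmd = ["docker", "run", "--rm",
           "-v", f"{code_dir}:/var/task:ro,delegated"]
    if layer_dir:  # the layer volume is optional
        cmd += ["-v", f"{layer_dir}:/opt:ro,delegated"]
    cmd.append(f"lambci/lambda:{runtime}")
    if handler:
        cmd.append(handler)
    if event:
        cmd.append(event)
    return cmd

print(" ".join(build_command("/home/me/project", "python3.8",
                             "main.lambda_handler")))
```

Printing the joined command shows exactly the shape of the template: `docker run --rm -v /home/me/project:/var/task:ro,delegated lambci/lambda:python3.8 main.lambda_handler`.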

Let’s give it a go

Let’s look at a hello-world example in Python:

import json

def lambda_handler(event, context):
    # Add your logic here
    return {
        'statusCode': 200,
        'body': json.dumps('This works great!')
    }
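Note that the handler is just a plain Python function, so before involving any container you can sanity-check it with a direct call:

```python
import json

def lambda_handler(event, context):
    # Same hello-world handler as above
    return {
        'statusCode': 200,
        'body': json.dumps('This works great!')
    }

# Invoke directly with an empty event; the context object is unused here.
response = lambda_handler({}, None)
print(response['statusCode'])  # 200
print(response['body'])        # "This works great!"
```

This is handy for quick iteration, but it tells you nothing about the Lambda environment itself, which is exactly where docker-lambda comes in.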

Now save this function to a file called main.py and run the following command from the same directory (assuming you have Docker installed):

$ docker run --rm -v "$PWD":/var/task:ro,delegated lambci/lambda:python3.8 main.lambda_handler

And for those of you who have worked with AWS Lambda before, the following output should look rather familiar:

START RequestId: ea2eda32-0b4f-1a91-58f1-b8c36b9ef406 Version: $LATEST
END RequestId: ea2eda32-0b4f-1a91-58f1-b8c36b9ef406
REPORT RequestId: ea2eda32-0b4f-1a91-58f1-b8c36b9ef406	Init Duration: 495.40 ms	Duration: 2.46 ms	Billed Duration: 100 ms	Memory Size: 1536 MB	Max Memory Used: 25 MB	

{"statusCode":200,"body":"\"This works great!\""}

That’s all there is to it. This was a simple example of single execution mode; don’t forget that you can also run in “stay-open” API mode.
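In “stay-open” mode the container keeps running and serves the standard Lambda Invoke API on port 9001. As a sketch, assuming you started the container with the DOCKER_LAMBDA_STAY_OPEN=1 environment variable and published the port (the exact command is in the comment below), you can invoke it from Python using nothing but the standard library; the function name in the URL (myfunction here) is arbitrary:

```python
import json
import urllib.request

# Assumes a container started in stay-open mode, e.g.:
#   docker run --rm -e DOCKER_LAMBDA_STAY_OPEN=1 -p 9001:9001 \
#       -v "$PWD":/var/task:ro,delegated \
#       lambci/lambda:python3.8 main.lambda_handler

def invoke_url(function_name, host="localhost", port=9001):
    # docker-lambda serves the standard Lambda Invoke API path
    return (f"http://{host}:{port}/2015-03-31/functions/"
            f"{function_name}/invocations")

def invoke_local(event, function_name="myfunction"):
    # POST the event to the local container and decode the handler's result
    req = urllib.request.Request(
        invoke_url(function_name),
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the container running, invoke_local({}) returns the handler's result.
```

This makes it easy to fire many test events at the same warm container instead of paying the startup cost on every invocation.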

Now you can test your AWS serverless functions locally without any hassle!

See you in the next one. Comment, like and share at will.
