- Deployments can exceed Lambda's .zip package size limits.
- Versions are easy to manage by building and tagging Docker images.
AWS Lambda limits .zip deployment packages to 250 MB (unzipped), and this limit includes the storage used by any attached Lambda layers.
Docker overcomes this limit by letting you deploy container images of up to 10 GB.
Note: this document assumes you are using an AWS base image for Lambda with the Node.js runtime.
To deploy Lambda using Docker, there are three prerequisites:
1. The AWS Command Line Interface (AWS CLI), needed to access Amazon ECR.
2. An Amazon ECR repository, to which the built Docker image will be pushed.
3. Docker itself, needed to build your code into an image.
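The prerequisites can be checked, and the target repository created, from the command line. A minimal sketch, assuming a configured AWS CLI and a placeholder repository name `my-lambda-repo`:

```shell
# Confirm the AWS CLI and Docker are installed
aws --version
docker --version

# Create a private ECR repository to receive the image
# ("my-lambda-repo" is a placeholder; use your own name)
aws ecr create-repository --repository-name my-lambda-repo
```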
In this document, we will deploy a Lambda function using Node.js.
A "Dockerfile" is a set of instructions for building your code into an image.
Create a file named "Dockerfile" at the top level (root) of your project.
Docker recognizes this file by its name automatically (see the Docker prerequisite above).
Here are some example Dockerfiles.
Example 1.
FROM public.ecr.aws/lambda/nodejs:20
# Copy function code
COPY index.js ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
"FROM" specifies the base image; here it also sets the Node.js runtime version.
"COPY" copies files or directories from the host (the build context) into the image; here it places the function code into the Lambda task root.
"CMD" sets the Lambda handler, i.e. the entry point of the function.
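AWS base images for Lambda ship with the Runtime Interface Emulator, so an image built from Example 1 can be invoked locally before deploying. A sketch, assuming the image is tagged `my-lambda`:

```shell
# Build the image from the Dockerfile above ("my-lambda" is a placeholder tag)
docker build -t my-lambda .

# Run it in the background; the emulator in the base image listens on port 8080
docker run -d -p 9000:8080 my-lambda

# Invoke index.handler with an empty event
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```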
Example 2.
# First stage
FROM public.ecr.aws/lambda/nodejs:18 as stage
# Install unzip
RUN yum install -y -q sudo unzip
ENV CHROMIUM_VERSION=1002910
# Download and extract necessary files to /opt/chrome directory
RUN curl "https://www.googleapis.com/download/storage/v1/b/chromium-browser-snapshots/o/Linux_x64%2F$CHROMIUM_VERSION%2Fchrome-linux.zip?generation=1652397748160413&alt=media" > /tmp/chromium.zip && \
unzip /tmp/chromium.zip -d /tmp/ && \
mv /tmp/chrome-linux/ /opt/chrome && \
rm /tmp/chromium.zip
# Download and extract necessary files to /opt/chromedriver directory
RUN curl "https://www.googleapis.com/download/storage/v1/b/chromium-browser-snapshots/o/Linux_x64%2F$CHROMIUM_VERSION%2Fchromedriver_linux64.zip?generation=1652397753719852&alt=media" > /tmp/chromedriver_linux64.zip && \
unzip /tmp/chromedriver_linux64.zip -d /tmp/ && \
mv /tmp/chromedriver_linux64/chromedriver /opt/chromedriver && \
rm /tmp/chromedriver_linux64.zip
# Second stage
FROM public.ecr.aws/lambda/nodejs:18 as base
COPY chrome-deps.txt /tmp/
RUN yum install -y $(cat /tmp/chrome-deps.txt)
# Copy /opt/chrome and /opt/chromedriver built in the previous stage to the current image
COPY --from=stage /opt/chrome /opt/chrome
COPY --from=stage /opt/chromedriver /opt/chromedriver
# Continue with the remaining tasks
COPY package.json package-lock.json ./
COPY src ./src
COPY index.js .
RUN npm install
CMD [ "index.handler" ]
This example downloads Chromium and ChromeDriver while building the image.
The "RUN" instruction executes a command in a new layer of the image; its file-system changes carry over to the following build steps.
The "ENV" instruction sets an environment variable whose value can be referenced by later instructions (here, CHROMIUM_VERSION).
You can also name build stages (FROM ... as stage/base) and copy artifacts between them with COPY --from; because each stage is cached independently, this can speed up rebuilds and keeps the final image smaller.
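With named stages you can also build a single stage on its own, which is handy for debugging the Chromium download step in isolation. A sketch (image tags are placeholders):

```shell
# Build only up to the "stage" stage to inspect /opt/chrome in isolation
docker build --target stage -t chromium-stage .

# A full build reuses cached layers from earlier builds; only the final
# "base" stage ends up in the deployed image
docker build -t my-lambda .
```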
Next, automate the build and deployment with a CI workflow; the example below uses GitHub Actions.
Example 1.
name: Deploy Lambda Function

on:
  push:
    branches:
      - main

jobs:
  deploy:
    name: build and deploy lambda
    strategy:
      matrix:
        node-version: [18.x]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install and build
        run: |
          npm ci
          npm run build --if-present
        working-directory: ./
        env:
          CI: true
      - name: tsc
        run: tsc
      - name: Install AWS CLI
        run: |
          sudo apt-get update
          sudo apt-get install -y awscli
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      # Log in to AWS ECR
      - name: AWS ECR Login
        run: aws ecr get-login-password --region ${{ secrets.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.AWS_REGISTRY_URL }}
      - name: Build Docker Image
        run: docker build --platform linux/x86_64 -t your-build-name .
      - name: Tag Docker Image
        run: docker tag your-build-name:latest ${{ secrets.AWS_REGISTRY_URL }}:latest
      - name: Push Docker Image to ECR
        run: docker push ${{ secrets.AWS_REGISTRY_URL }}:latest
      - name: Update Lambda Function
        run: aws lambda update-function-code --function-name your-lambda-function-name --image-uri ${{ secrets.AWS_REGISTRY_URL }}:latest
      - name: Delete Untagged Images In ECR
        run: |
          # Get untagged image digests
          UNTAGGED_IMAGES=$(aws ecr describe-images --repository-name your-ecr-repository-name --query 'imageDetails[?imageTags==null].imageDigest' --output json)
          # Delete untagged images
          if [ -n "$UNTAGGED_IMAGES" ]; then
            for IMAGE_DIGEST in $(echo "$UNTAGGED_IMAGES" | jq -r '.[]'); do
              aws ecr batch-delete-image --repository-name your-ecr-repository-name --image-ids imageDigest=$IMAGE_DIGEST
            done
          fi
In this example, the project uses TypeScript, and .gitignore excludes node_modules from the repository. Here are the deployment steps in detail.
- name: Install AWS CLI
  run: |
    sudo apt-get update
    sudo apt-get install -y awscli
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
# Log in to AWS ECR
- name: AWS ECR Login
  run: aws ecr get-login-password --region ${{ secrets.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.AWS_REGISTRY_URL }}
- name: Build Docker Image
  run: docker build --platform linux/x86_64 -t your-build-name .
- name: Tag Docker Image
  run: docker tag your-build-name:latest ${{ secrets.AWS_REGISTRY_URL }}:latest
- name: Push Docker Image to ECR
  run: docker push ${{ secrets.AWS_REGISTRY_URL }}:latest
- name: Update Lambda Function
  run: aws lambda update-function-code --function-name your-lambda-function-name --image-uri ${{ secrets.AWS_REGISTRY_URL }}:latest
- name: Delete Untagged Images In ECR
  run: |
    # Get untagged image digests
    UNTAGGED_IMAGES=$(aws ecr describe-images --repository-name your-ecr-repository-name --query 'imageDetails[?imageTags==null].imageDigest' --output json)
    # Delete untagged images
    if [ -n "$UNTAGGED_IMAGES" ]; then
      for IMAGE_DIGEST in $(echo "$UNTAGGED_IMAGES" | jq -r '.[]'); do
        aws ecr batch-delete-image --repository-name your-ecr-repository-name --image-ids imageDigest=$IMAGE_DIGEST
      done
    fi
The steps must run in this order: Login > Build > Tag > Push > Update.
Tagging lets you refer to the latest version of the built Docker image.
"secrets.AWS_REGISTRY_URL" should be set to the URI of your private ECR repository, of the form <account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>.
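Outside CI, the same Login > Build > Tag > Push > Update sequence can be run by hand. A sketch with placeholder values (the account ID, region, repository, and function name below are assumptions to replace with your own):

```shell
# Placeholders -- replace with your own values
REGION=ca-central-1
REGISTRY=123456789012.dkr.ecr.$REGION.amazonaws.com
REPO=$REGISTRY/my-lambda-repo

# Login: authenticate Docker against the ECR registry
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $REGISTRY

# Build and Tag
docker build --platform linux/x86_64 -t my-lambda .
docker tag my-lambda:latest $REPO:latest

# Push, then Update the function to point at the new image
docker push $REPO:latest
aws lambda update-function-code --function-name my-function --image-uri $REPO:latest
```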