Package node.js code for AWS Lambda with its minimal dependencies.

This module allows you to keep node.js files for an AWS Lambda function alongside other code, and makes it easy to package a Lambda function with only the dependencies it needs. You can then update a Lambda directly, or prepare the packaged code as a local or S3 zip archive, including for use with CloudFormation.
```
npm install --save-dev aws-lambda-upload
$(npm bin)/aws-lambda-upload [options] <start-file>
```
Here, `<start-file>` is the path of the JS file to serve as the entry point into the Lambda. Note that in all cases, you'll use the basename of `<start-file>` as the filename in the Lambda handler.
Use the `--lambda <name>` flag to update a Lambda with the given name that you have previously created on AWS (e.g. using the AWS Lambda console).

Available programmatically as `updateLambda(startPath, lambdaName, options)`.
Use the `--zip <path>` flag to save the packaged Lambda code to a zip file. It may then be used, e.g., with the `aws lambda update-function-code` command, or in a CloudFormation template with the `aws cloudformation package` command.

Available programmatically as `packageZipLocal(startPath, outputZipPath, options)`.
Use the `--s3` flag to save the packaged Lambda code to S3, and print the S3 URI to stdout. The zip file will be saved to the bucket named by the `--s3-bucket` flag (defaulting to `"aws-lambda-upload"`), and within that to the folder (prefix) named by the `--s3-prefix` flag (defaulting to empty). The basename of the file will be its MD5 checksum (which is exactly what `aws cloudformation package` does), which avoids duplication when uploading identical files.

Available programmatically as `packageZipS3(startPath, options)`.
Use the `--cfn <path>` flag to interpret `<start-path>` as the path to a CloudFormation template (.json or .yml file), package any mentioned code to S3, replace it with S3 locations, and output the adjusted template as JSON to `<path>` (`-` for stdout). This is similar to the `aws cloudformation package` command. It will process the following keys in the template:
- For a `Resource` with `Type: AWS::Lambda::Function`, processes the `Code` property.
- For a `Resource` with `Type: AWS::Serverless::Function`, processes the `CodeUri` property.
In both cases, if the relevant property is a file path, it is interpreted as a start JS file, packaged with `packageZipS3()`, and the property is replaced with S3 information in the format required by CloudFormation. If the file path is relative, it's interpreted relative to the directory of the template.

Available programmatically as `cloudformationPackage(templatePath, outputPath, options)`.
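The property replacement can be sketched as follows. This is not the tool's actual implementation, just an illustration assuming the zip has already been uploaded and its S3 location is known; the output shapes (`S3Bucket`/`S3Key` for `Code`, an `s3://` URI for `CodeUri`) are the forms CloudFormation and SAM accept:

```javascript
// Hypothetical sketch of the template rewriting performed by --cfn.
// Walks the template's Resources and swaps file-path code properties
// for S3 locations.
function replaceCodeLocations(template, s3Bucket, s3Key) {
  for (const res of Object.values(template.Resources || {})) {
    if (res.Type === 'AWS::Lambda::Function' &&
        typeof res.Properties.Code === 'string') {
      // AWS::Lambda::Function expects an S3Bucket/S3Key pair.
      res.Properties.Code = { S3Bucket: s3Bucket, S3Key: s3Key };
    } else if (res.Type === 'AWS::Serverless::Function' &&
               typeof res.Properties.CodeUri === 'string') {
      // AWS::Serverless::Function accepts an s3:// URI for CodeUri.
      res.Properties.CodeUri = 's3://' + s3Bucket + '/' + s3Key;
    }
  }
  return template;
}
```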
If your entry file requires other files in your project, or in `node_modules/`, that's great. All dependencies will be collected and packaged into a temporary zip file. Note that it does NOT package your entire directory or all of `node_modules/`. It uses `collect-js-deps` (which uses browserify) to examine the `require()` calls in your files, and recursively collects all dependencies. For files in `node_modules/`, it also includes any `package.json` files, as they affect the import logic.
In fact, all browserify options are supported: include them after `--` on the command line (`<start-path>` should come before that).
Since the main file of a Lambda must be at top level, if `<start-path>` is in a subdirectory (e.g. `lib/my_lambda.js`), a same-named top-level helper file (e.g. `my_lambda.js`) will be added to the zip archive for you. It's a one-liner that re-exports the entry module to let you use it as the Lambda's main file.
With `--tsconfig <path>`, you may specify a path to `tsconfig.json` or to the directory containing it, and TypeScript dependencies will be compiled to JS and included. You'll need to have `tsify` installed. This is a convenience shortcut for including the `tsify` browserify plugin, and is equivalent to passing the browserify option `-- -p [ tsify -p <path> ]` to `collect-js-deps`.
To be able to update Lambda code or upload anything to S3, you need sufficient permissions. Read about configuring AWS credentials to learn how to set credentials that the AWS SDK can use.

To use the `--lambda` flag, the credentials you use need to give you at least the `lambda:UpdateFunctionCode` permission for the resource `arn:aws:lambda:<region>:<account-id>:function:<function-name>`.
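For example, a minimal policy statement granting that permission might look like the following (a sketch; fill in the placeholders for your region, account, and function):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["lambda:UpdateFunctionCode"],
      "Resource": "arn:aws:lambda:<region>:<account-id>:function:<function-name>"
    }
  ]
}
```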
To use the `--s3` or `--cfn` flags, the credentials need to give you permission to list and create objects in the relevant S3 bucket. E.g. the following policy works for the default bucket used by `aws-lambda-upload`:

Suggested IAM policy for the default S3 bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::aws-lambda-upload"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::aws-lambda-upload/*"
      ]
    }
  ]
}
```
Before you run tests for the first time, you need to set up localstack. You can do it with

```
npm run setup-localstack
```

Note that localstack has a number of requirements. Once set up, you can run tests with `npm test` as usual.