
aws_lambda

Invokes an AWS Lambda function for each message. The contents of the message are used as the payload of the request, and the result of the invocation becomes the new contents of the message.

Introduced in version 3.36.0.

```yaml
# Common config fields, showing default values
label: ""
aws_lambda:
  parallel: false
  function: ""
  region: eu-west-1
```

It is possible to perform requests for each message of a batch in parallel by setting the parallel flag to true. The rate_limit field can be used to specify a rate limit resource that caps the rate of requests across all parallel components service wide.
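As a sketch, parallel dispatch capped by a local rate limit resource might look like the following (the function name and limit values are illustrative, not part of this document):

```yaml
# Illustrative: a local rate limit shared service wide by label.
rate_limit_resources:
  - label: lambda_limit
    local:
      count: 50      # at most 50 invocations
      interval: 1s   # per second

pipeline:
  processors:
    - aws_lambda:
        parallel: true
        function: process_batch   # hypothetical function name
        rate_limit: lambda_limit
```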

In order to map or encode the payload to a specific request body, and map the response back into the original payload instead of replacing it entirely, you can use the branch processor.
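For instance, a branch could send only part of the document to the function and merge the response back under a new field rather than replacing the whole payload (the field and function names here are hypothetical):

```yaml
pipeline:
  processors:
    - branch:
        # Send only a fragment of the original document as the payload.
        request_map: 'root.query = this.document.text'
        processors:
          - aws_lambda:
              function: enrich_text   # hypothetical function name
        # Merge the invocation result back into the original message.
        result_map: 'root.enrichment = this'
```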

Error Handling

When Benthos is unable to connect to the AWS endpoint or is otherwise unable to invoke the target Lambda function, it retries the request according to the configured number of retries. Once these attempts have been exhausted the failed message continues through the pipeline with its contents unchanged, but flagged as having failed, allowing you to use standard processor error handling patterns.
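One common pattern for such flagged messages is to follow the processor with a catch block, for example to log the failure (a sketch; the function name is illustrative):

```yaml
pipeline:
  processors:
    - aws_lambda:
        function: foo   # hypothetical function name
    # Processors within catch only run for messages flagged as failed.
    - catch:
        - log:
            level: ERROR
            message: 'Lambda invocation failed: ${! error() }'
```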

However, if the invocation of the function is successful but the function itself throws an error, then the message will have its contents updated with a JSON payload describing the reason for the failure, and a metadata field lambda_function_error will be added to the message, allowing you to detect and handle function errors with a branch:

```yaml
pipeline:
  processors:
    - branch:
        processors:
          - aws_lambda:
              function: foo
        result_map: |
          root = if meta().exists("lambda_function_error") {
            throw("Invocation failed due to %v: %v".format(this.errorType, this.errorMessage))
          } else {
            this
          }

output:
  switch:
    retry_until_success: false
    cases:
      - check: errored()
        output:
          reject: ${! error() }
      - output:
          resource: somewhere_else
```

Credentials

By default Benthos uses a shared credentials file when connecting to AWS services. It's also possible to set credentials explicitly at the component level, allowing you to transfer data across accounts. You can find out more in this document.
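Explicit component-level credentials might look like the following sketch, using the fields documented below (the profile and function names are illustrative):

```yaml
pipeline:
  processors:
    - aws_lambda:
        function: foo   # hypothetical function name
        region: eu-west-1
        credentials:
          # A named profile from ~/.aws/credentials; illustrative value.
          profile: production
```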

Examples

This example uses a branch processor to map a new payload for triggering a lambda function with an ID and username from the original message; the result of the lambda is discarded, so the original message remains unchanged.

```yaml
pipeline:
  processors:
    - branch:
        request_map: '{"id":this.doc.id,"username":this.user.name}'
        processors:
          - aws_lambda:
              function: trigger_user_update
```

Fields

parallel

Whether messages of a batch should be dispatched in parallel.

Type: bool
Default: false

function

The function to invoke.

Type: string
Default: ""

rate_limit

An optional rate_limit to throttle invocations by.

Type: string
Default: ""

region

The AWS region to target.

Type: string
Default: "eu-west-1"

endpoint

Allows you to specify a custom endpoint for the AWS API.

Type: string
Default: ""

credentials

Optional manual configuration of AWS credentials to use. More information can be found in this document.

Type: object

credentials.profile

A profile from ~/.aws/credentials to use.

Type: string
Default: ""

credentials.id

The ID of credentials to use.

Type: string
Default: ""

credentials.secret

The secret for the credentials being used.

Type: string
Default: ""

credentials.token

The token for the credentials being used. This is required when using short-term credentials.

Type: string
Default: ""

credentials.role

A role ARN to assume.

Type: string
Default: ""

credentials.role_external_id

An external ID to provide when assuming a role.

Type: string
Default: ""

timeout

The maximum period of time to wait before abandoning an invocation.

Type: string
Default: "5s"

retries

The maximum number of retry attempts for each message.

Type: int
Default: 3
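Putting the timing fields together, a configuration that tolerates slower invocations and retries more aggressively might look like this sketch (the function name and values are illustrative):

```yaml
pipeline:
  processors:
    - aws_lambda:
        function: foo   # hypothetical function name
        timeout: 10s    # abandon invocations that take longer than this
        retries: 5      # retry each failed message up to five times
```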