aws_dynamodb

Inserts items into a DynamoDB table.

Introduced in version 3.36.0.

# Common config fields, showing default values
output:
  label: ""
  aws_dynamodb:
    table: ""
    string_columns: {}
    json_map_columns: {}
    max_in_flight: 1
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
    region: eu-west-1

The field string_columns is a map of column names to string values, where the values are function interpolated per message of a batch. This allows you to populate string columns of an item by extracting fields from the document payload or metadata, as follows:

string_columns:
  id: ${!json("id")}
  title: ${!json("body.title")}
  topic: ${!meta("kafka_topic")}
  full_content: ${!content()}

The field json_map_columns is a map of column names to JSON paths, where the dot path is extracted from each document and converted into a map value. Both an empty path and the path . are interpreted as the root of the document. This allows you to populate map columns of an item as follows:

json_map_columns:
  user: path.to.user
  whole_document: .

A column name can be empty:

json_map_columns:
  "": .

In this case the top-level document fields will be written at the root of the item, potentially overwriting previously defined column values. If a path is not found within a document the column will not be populated.
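For illustration, a minimal output config combining both column types might look like the sketch below (the table name and JSON field names are assumptions, not taken from the examples above):

output:
  aws_dynamodb:
    table: my_table                  # hypothetical table name
    string_columns:
      id: ${!json("id")}             # taken from the payload
      topic: ${!meta("kafka_topic")}
    json_map_columns:
      "": .                          # write the remaining document fields at the root of the item
    region: eu-west-1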

Credentials

By default Benthos will use a shared credentials file when connecting to AWS services. It's also possible to set them explicitly at the component level, allowing you to transfer data across accounts. You can find out more in this document.
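For example, component-level credentials for cross-account delivery could be set directly on the output, roughly as in this sketch (the profile name, role ARN, external ID and table name are placeholders):

output:
  aws_dynamodb:
    table: my_table      # hypothetical table name
    region: eu-west-1
    credentials:
      profile: writer    # hypothetical profile from ~/.aws/credentials
      role: arn:aws:iam::123456789012:role/example-dynamo-writer   # placeholder ARN
      role_external_id: example-external-id                        # placeholder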

Performance

This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in-flight messages with the field max_in_flight.

This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more in this doc.
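As a rough sketch, the two mechanisms can be combined like so (the values are illustrative rather than recommendations; 25 matches the item limit of DynamoDB's BatchWriteItem API):

output:
  aws_dynamodb:
    table: my_table     # hypothetical table name
    max_in_flight: 64   # up to 64 batches sent in parallel
    batching:
      count: 25         # flush once 25 messages are buffered
      period: 1s        # or after one second, whichever comes first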

Fields

table

The table to store messages in.

Type: string
Default: ""

string_columns

A map of column keys to string values to store. This field supports interpolation functions.

Type: object
Default: {}

# Examples
string_columns:
  full_content: ${!content()}
  id: ${!json("id")}
  title: ${!json("body.title")}
  topic: ${!meta("kafka_topic")}

json_map_columns

A map of column keys to field paths pointing to value data within messages.

Type: object
Default: {}

# Examples
json_map_columns:
  user: path.to.user
  whole_document: .

json_map_columns:
  "": .

ttl

An optional TTL to set for items, calculated from the moment the message is sent.

Type: string
Default: ""

ttl_key

The column key to place the TTL value within.

Type: string
Default: ""
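As a sketch, a 72-hour expiry could be written to a numeric TTL attribute like so (the attribute name expires_at is an assumption, and the table's TTL attribute must be enabled separately in DynamoDB):

output:
  aws_dynamodb:
    table: my_table       # hypothetical table name
    ttl: 72h              # expire items 72 hours after they are sent
    ttl_key: expires_at   # hypothetical TTL attribute name
    string_columns:
      id: ${!json("id")}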

max_in_flight

The maximum number of messages to have in flight at a given time. Increase this to improve throughput.

Type: int
Default: 1

batching

Allows you to configure a batching policy.

Type: object

# Examples
batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

batching.count

A number of messages at which the batch should be flushed. Setting this to 0 disables count-based batching.

Type: int
Default: 0

batching.byte_size

An amount of bytes at which the batch should be flushed. Setting this to 0 disables size-based batching.

Type: int
Default: 0

batching.period

A period in which an incomplete batch should be flushed regardless of its size.

Type: string
Default: ""

# Examples
period: 1s
period: 1m
period: 500ms

batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string
Default: ""

# Examples
check: this.type == "end_of_transaction"

batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

Type: array
Default: []

# Examples
processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}

region

The AWS region to target.

Type: string
Default: "eu-west-1"

endpoint

Allows you to specify a custom endpoint for the AWS API.

Type: string
Default: ""
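For example, targeting a local DynamoDB instance might look like the following sketch (the localhost address assumes a typical DynamoDB Local setup):

output:
  aws_dynamodb:
    table: my_table                  # hypothetical table name
    region: eu-west-1
    endpoint: http://localhost:8000  # assumed DynamoDB Local address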

credentials

Optional manual configuration of AWS credentials to use. More information can be found in this document.

Type: object

credentials.profile

A profile from ~/.aws/credentials to use.

Type: string
Default: ""

credentials.id

The ID of credentials to use.

Type: string
Default: ""

credentials.secret

The secret for the credentials being used.

Type: string
Default: ""

credentials.token

The token for the credentials being used, required when using short term credentials.

Type: string
Default: ""

credentials.role

A role ARN to assume.

Type: string
Default: ""

credentials.role_external_id

An external ID to provide when assuming a role.

Type: string
Default: ""

max_retries

The maximum number of retries before giving up on the request. If set to zero there is no discrete limit.

Type: int
Default: 3

backoff

Control time intervals between retry attempts.

Type: object

backoff.initial_interval

The initial period to wait between retry attempts.

Type: string
Default: "1s"

backoff.max_interval

The maximum period to wait between retry attempts.

Type: string
Default: "5s"

backoff.max_elapsed_time

The maximum period to wait before retry attempts are abandoned. If zero then no limit is used.

Type: string
Default: "30s"
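As a sketch, the retry behaviour could be tuned like so (the values shown are illustrative only):

output:
  aws_dynamodb:
    table: my_table            # hypothetical table name
    max_retries: 5             # give up after five retries
    backoff:
      initial_interval: 500ms  # first wait between attempts
      max_interval: 10s        # cap on the wait between attempts
      max_elapsed_time: 1m     # abandon retries after one minute in total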