table_storage
DEPRECATED

This component is deprecated and will be removed in the next major version release. Please consider moving to alternative components.

This component has been renamed to `azure_table_storage`.
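Migrating is generally a matter of swapping the output type name. As a sketch, assuming `azure_table_storage` accepts the same fields as this component:

```yaml
# Migration sketch: field set is assumed to match table_storage
output:
  azure_table_storage:
    storage_connection_string: ""
    table_name: ""
    partition_key: ""
    row_key: ""
```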
Common:

```yaml
# Common config fields, showing default values
output:
  label: ""
  table_storage:
    storage_account: ""
    storage_access_key: ""
    storage_connection_string: ""
    table_name: ""
    partition_key: ""
    row_key: ""
    properties: {}
    max_in_flight: 1
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

Advanced:

```yaml
# All config fields, showing default values
output:
  label: ""
  table_storage:
    storage_account: ""
    storage_access_key: ""
    storage_connection_string: ""
    table_name: ""
    partition_key: ""
    row_key: ""
    properties: {}
    insert_type: INSERT
    max_in_flight: 1
    timeout: 5s
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: []
```
Performance
This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in-flight messages with the field `max_in_flight`.

This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more in this doc.
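For instance, a tuned configuration might raise `max_in_flight` and set a batching policy together. A sketch with illustrative values only:

```yaml
# Illustrative tuning: values should be adjusted to your workload
output:
  table_storage:
    storage_connection_string: ""
    table_name: Messages
    partition_key: ${!json("date")}
    row_key: ${!uuid_v4()}
    max_in_flight: 64
    batching:
      count: 100
      period: 1s
```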
Fields
storage_account

The storage account to upload messages to. This field is ignored if `storage_connection_string` is set.
Type: string
Default: ""
storage_access_key

The storage account access key. This field is ignored if `storage_connection_string` is set.
Type: string
Default: ""
storage_connection_string

A storage account connection string. This field is required if `storage_account` and `storage_access_key` are not set.
Type: string
Default: ""
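As a sketch, either authentication style can be used. The environment variable names below are placeholders (assuming environment variable interpolation is enabled in your deployment):

```yaml
# Option 1: account name plus access key (placeholder values)
table_storage:
  storage_account: myaccount
  storage_access_key: "${AZURE_STORAGE_KEY}"
  table_name: Messages
```

```yaml
# Option 2: a connection string, which takes precedence over the fields above
table_storage:
  storage_connection_string: "${AZURE_STORAGE_CONNECTION_STRING}"
  table_name: Messages
```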
table_name

The table to store messages into. This field supports interpolation functions.
Type: string
Default: ""
```yaml
# Examples

table_name: ${!meta("kafka_topic")}
```
partition_key

The partition key. This field supports interpolation functions.
Type: string
Default: ""
```yaml
# Examples

partition_key: ${!json("date")}
```
row_key

The row key. This field supports interpolation functions.
Type: string
Default: ""
```yaml
# Examples

row_key: ${!json("device")}-${!uuid_v4()}
```
properties

A map of properties to store into the table. This field supports interpolation functions.
Type: object
Default: {}
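This field has no example above; as a sketch, a properties map might pull values out of each message with interpolation (field and metadata names here are illustrative):

```yaml
# Each property value may use interpolation functions
properties:
  device: ${!json("device")}
  topic: ${!meta("kafka_topic")}
```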
insert_type

Type of insert operation. This field supports interpolation functions.
Type: string
Default: "INSERT"
Options: INSERT, INSERT_MERGE, INSERT_REPLACE.
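For example, to overwrite an existing entity with the same keys rather than fail on a duplicate insert, a sketch:

```yaml
# INSERT_REPLACE replaces an existing entity with the same partition/row keys
table_storage:
  insert_type: INSERT_REPLACE
```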
max_in_flight
The maximum number of messages to have in flight at a given time. Increase this to improve throughput.
Type: int
Default: 1
timeout
The maximum period to wait on an upload before abandoning it and reattempting.
Type: string
Default: "5s"
batching
Allows you to configure a batching policy.
Type: object
```yaml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```
batching.count

A number of messages at which the batch should be flushed. Set to 0 to disable count-based batching.
Type: int
Default: 0
batching.byte_size

An amount of bytes at which the batch should be flushed. Set to 0 to disable size-based batching.
Type: int
Default: 0
batching.period
A period in which an incomplete batch should be flushed regardless of its size.
Type: string
Default: ""
```yaml
# Examples

period: 1s

period: 1m

period: 500ms
```
batching.check
A Bloblang query that should return a boolean value indicating whether a message should end a batch.
Type: string
Default: ""
```yaml
# Examples

check: this.type == "end_of_transaction"
```
batching.processors
A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.
Type: array
Default: []
```yaml
# Examples

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}
```
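Putting the fields together, a complete illustrative config might look like the following. All values are placeholders, and the environment variable name is an assumption:

```yaml
# Illustrative end-to-end config; placeholder values throughout
output:
  table_storage:
    storage_connection_string: "${AZURE_STORAGE_CONNECTION_STRING}"
    table_name: ${!meta("kafka_topic")}
    partition_key: ${!json("date")}
    row_key: ${!json("device")}-${!uuid_v4()}
    properties:
      device: ${!json("device")}
    insert_type: INSERT_MERGE
    max_in_flight: 16
    timeout: 10s
    batching:
      count: 50
      period: 1s
```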