sql_insert
Inserts a row into an SQL database for each message.
Introduced in version 3.59.0.
```yaml
# Common config fields, showing default values
output:
  label: ""
  sql_insert:
    driver: ""
    dsn: ""
    table: ""
    columns: []
    args_mapping: ""
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```
```yaml
# All config fields, showing default values
output:
  label: ""
  sql_insert:
    driver: ""
    dsn: ""
    table: ""
    columns: []
    args_mapping: ""
    prefix: ""
    suffix: ""
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: []
```
Examples
- Table Insert (MySQL)
Here we insert rows into a database by populating the columns id, name and topic with values extracted from messages and their metadata:
```yaml
output:
  sql_insert:
    driver: mysql
    dsn: foouser:foopassword@tcp(localhost:3306)/foodb
    table: footable
    columns: [ id, name, topic ]
    args_mapping: |
      root = [
        this.user.id,
        this.user.name,
        meta("kafka_topic"),
      ]
```
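For comparison, a hypothetical equivalent targeting PostgreSQL could reuse the same table, columns, and mapping while swapping the driver and DSN (credentials and database name here are illustrative, not real):

```yaml
# Illustrative sketch: the same insert against PostgreSQL.
output:
  sql_insert:
    driver: postgres
    dsn: postgres://foouser:foopass@localhost:5432/foodb?sslmode=disable
    table: footable
    columns: [ id, name, topic ]
    args_mapping: |
      root = [
        this.user.id,
        this.user.name,
        meta("kafka_topic"),
      ]
```

Note the sslmode=disable parameter, which is needed only when the target PostgreSQL server does not use SSL (see the dsn field below).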
Fields
driver
A database driver to use.
Type: string
Options: mysql, postgres, clickhouse, mssql.
dsn
A Data Source Name to identify the target database.
Drivers
The following is a list of supported drivers and their respective DSN formats:
| Driver | Data Source Name Format |
|---|---|
| clickhouse | tcp://[netloc][:port][?param1=value1&...&paramN=valueN] |
| mysql | [username[:password]@][protocol[(address)]]/dbname[?param1=value1&...&paramN=valueN] |
| postgres | postgres://[user[:password]@][netloc][:port][/dbname][?param1=value1&...] |
| mssql | sqlserver://[user[:password]@][netloc][:port][?database=dbname&param1=value1&...] |
Please note that the postgres driver enforces SSL by default; you can override this with the parameter sslmode=disable if required.
Type: string
```yaml
# Examples
dsn: tcp://host1:9000?username=user&password=qwerty&database=clicks&read_timeout=10&write_timeout=20&alt_hosts=host2:9000,host3:9000
dsn: foouser:foopassword@tcp(localhost:3306)/foodb
dsn: postgres://foouser:foopass@localhost:5432/foodb?sslmode=disable
```
table
The table to insert rows into.
Type: string
```yaml
# Examples
table: foo
```
columns
A list of columns to insert.
Type: array
```yaml
# Examples
columns:
  - foo
  - bar
  - baz
```
args_mapping
A Bloblang mapping which should evaluate to an array of values whose size matches the number of columns specified.
Type: string
```yaml
# Examples
args_mapping: root = [ this.cat.meow, this.doc.woofs[0] ]
args_mapping: root = [ meta("user.id") ]
```
prefix
An optional prefix to prepend to the insert query (before INSERT).
Type: string
suffix
An optional suffix to append to the insert query.
Type: string
```yaml
# Examples
suffix: ON CONFLICT (name) DO NOTHING
```
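As a sketch of how suffix combines with the other fields, a PostgreSQL config that silently skips duplicate rows might look like the following (this assumes a hypothetical users table with a unique constraint on name; all names and credentials are illustrative):

```yaml
# Illustrative sketch: skip rows whose name already exists (PostgreSQL).
output:
  sql_insert:
    driver: postgres
    dsn: postgres://foouser:foopass@localhost:5432/foodb
    table: users
    columns: [ name, email ]
    args_mapping: root = [ this.name, this.email ]
    suffix: ON CONFLICT (name) DO NOTHING
```

The resulting query would take the form INSERT INTO users (name, email) VALUES ($1, $2) ON CONFLICT (name) DO NOTHING, with the two placeholders filled from args_mapping.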
max_in_flight
The maximum number of inserts to run in parallel.
Type: int
Default: 64
batching
Allows you to configure a batching policy.
Type: object
```yaml
# Examples
batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```
batching.count
A number of messages at which the batch should be flushed. If 0, count-based batching is disabled.
Type: int
Default: 0
batching.byte_size
An amount of bytes at which the batch should be flushed. If 0, size-based batching is disabled.
Type: int
Default: 0
batching.period
A period in which an incomplete batch should be flushed regardless of its size.
Type: string
Default: ""
```yaml
# Examples
period: 1s
period: 1m
period: 500ms
```
batching.check
A Bloblang query that should return a boolean value indicating whether a message should end a batch.
Type: string
Default: ""
```yaml
# Examples
check: this.type == "end_of_transaction"
```
batching.processors
A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, so splitting the batch into smaller batches using these processors is a no-op.
Type: array
```yaml
# Examples
processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

processors:
  - merge_json: {}
```