metric
Emit custom metrics by extracting values from messages.
Common config fields, showing default values:

```yaml
label: ""
metric:
  type: counter
  name: ""
  labels: {}
  value: ""
```

All config fields, showing default values:

```yaml
label: ""
metric:
  type: counter
  name: ""
  labels: {}
  value: ""
  parts: []
```
This processor works by evaluating an interpolated field `value` for each message and updating an emitted metric according to the `type`.

Custom metrics such as these are emitted alongside Benthos' internal metrics, and within the `metrics` section of your config you can customise where metrics are sent, which metric names are emitted, and rename them as and when appropriate. For more information check out the metrics docs.
Examples

Counter

In this example we emit a counter metric called `Foos`, which increments for every message processed, and we label the metric with some metadata about where the message came from and a field from the document that states what type it is. We also configure our metrics to emit to CloudWatch, and explicitly only allow our custom metric and some internal Benthos metrics to emit.

```yaml
pipeline:
  processors:
    - metric:
        name: Foos
        type: counter
        labels:
          topic: ${! meta("kafka_topic") }
          partition: ${! meta("kafka_partition") }
          type: ${! json("document.type").or("unknown") }

metrics:
  aws_cloudwatch:
    namespace: ProdConsumer
    region: eu-west-1
    path_mapping: |
      root = if !["Foos","input.received","output.sent"].contains(this) { deleted() }
```
Gauge

In this example we emit a gauge metric called `FooSize`, which is given a value extracted from JSON messages at the path `foo.size`. We also label the metric with some metadata, and configure our Prometheus metrics exporter to emit only this custom metric and nothing else.

```yaml
pipeline:
  processors:
    - metric:
        name: FooSize
        type: gauge
        labels:
          topic: ${! meta("kafka_topic") }
        value: ${! json("foo.size") }

metrics:
  prometheus:
    path_mapping: 'if this != "FooSize" { deleted() }'
```
Fields

type

The metric type to create.

Type: string
Default: "counter"
Options: `counter`, `counter_by`, `gauge`, `timing`.
name

The name of the metric to create. This must be unique across all Benthos components, otherwise it will overwrite other metrics of the same name.

Type: string
Default: ""
labels

A map of label names and values that can be used to enrich metrics. Labels are not supported by some metric destinations, in which case the metrics series are combined. This field supports interpolation functions.

Type: object
Default: {}

```yaml
# Examples

labels:
  topic: ${! meta("kafka_topic") }
  type: ${! json("doc.type") }
```
value

For some metric types, specifies a value to set or increment by. This field supports interpolation functions.

Type: string
Default: ""
parts

An optional array of message indexes within a batch that the processor should apply to. If left empty all messages are processed. This field is only applicable when batching messages at the input level.

Indexes can be negative, in which case the part is selected from the end, counting backwards starting from -1.

Type: array
Default: []
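As a sketch of how `parts` combines with this processor (the metric name here is hypothetical), the following counts only the first and last message of each input batch:

```yaml
pipeline:
  processors:
    - metric:
        type: counter
        name: BatchEdges # hypothetical metric name
        # Index 0 is the first message of the batch, -1 the last.
        parts: [0, -1]
```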
Types

counter

Increments a counter by exactly 1; the contents of `value` are ignored by this type.
counter_by

If the contents of `value` can be parsed as a positive integer then the counter is incremented by that value.

For example, the following configuration will increment the CountCustomField metric by the contents of the path `field.some.value`:

```yaml
pipeline:
  processors:
    - metric:
        type: counter_by
        name: CountCustomField
        value: ${!json("field.some.value")}
```
gauge

If the contents of `value` can be parsed as a positive integer then the gauge is set to that value.

For example, the following configuration will set the GaugeCustomField metric to the contents of the path `field.some.value`:

```yaml
pipeline:
  processors:
    - metric:
        type: gauge
        name: GaugeCustomField
        value: ${!json("field.some.value")}
```
timing

Equivalent to `gauge`, except the metric is a timing.
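A minimal sketch of a timing metric, assuming a hypothetical message field `request.duration_ns` that holds a duration in the unit your metrics exporter expects:

```yaml
pipeline:
  processors:
    - metric:
        type: timing
        name: RequestDuration # hypothetical metric name
        # Hypothetical field; must parse as a numeric duration.
        value: ${! json("request.duration_ns") }
```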