journal plugin for Fluentd
Modified from fluent-plugin-systemd. The plugin opens the journal through its files rather than the journal path, which avoids holding deleted journal files open after rotation. This approach is more convenient and customizable.
- journal input plugin to read logs from the systemd journal
Simply use RubyGems:
gem install fluent-plugin-journal
or
td-agent-gem install fluent-plugin-journal
<source>
  @type journal
  tag kubelet
  matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
  read_from_head true
  path /var/log/journal/*/*
  pattern /(system.journal|user-\d{4}.journal)/
  <storage>
    @type local
    path /var/log/fluentd-journald-kubelet-cursor.json
  </storage>
  <entry>
    fields_strip_underscores true
    fields_lowercase true
  </entry>
</source>

<match kubelet>
  @type stdout
</match>

<system>
  root_dir /var/log/fluentd
</system>
path
Absolute path to the systemd journal files. Defaults to /var/log/journal/*/*. You can use * as a wildcard: for example, if your journal logs are at /root/log/system.journal, you can set path to /root/log/*. Any path you want can be configured.
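As a minimal sketch, a source block pointing at such a non-default location could look like this (the /root/log directory and the journal.custom tag are only illustrative):

<source>
  @type journal
  tag journal.custom
  path /root/log/*
</source>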
pattern
A regular expression that selects which of the files under path are opened. In the example above it matches files such as system.journal or user-1000.journal. Defaults to /./, which means all files are opened.
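For instance, to read only the system journal files and skip the per-user journals, a pattern along these lines could be used (illustrative only):

pattern /system\.journal/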
filters
This parameter name is deprecated; use matches instead.
matches
Expects an array of hashes defining desired matches to filter the log messages with. When this property is not specified, this plugin will default to reading all logs from the journal.
See matching details for a more exhaustive description of this property and how to use it.
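As a sketch, a matches value that selects entries from two units could look like the line below (docker.service is just an example unit name; in the upstream fluent-plugin-systemd, keys within one hash must all match while separate hashes act as alternatives, and this plugin is assumed to behave the same way):

matches [{ "_SYSTEMD_UNIT": "docker.service" }, { "_SYSTEMD_UNIT": "kubelet.service" }]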
storage
Configuration for a storage plugin used to store the journald cursor.
read_from_head
If true, reads all available journal entries from the head; otherwise starts reading from the tail. Ignored if a valid cursor already exists in storage. Defaults to false.
entry
Optional configuration for an embedded systemd entry filter. See the Filter Plugin Configuration for config reference.
tag
Required
A tag that will be added to events generated by this input.
<filter kube-proxy>
  @type systemd_entry
  field_map {"MESSAGE": "log", "_PID": ["process", "pid"], "_CMDLINE": "process", "_COMM": "cmd"}
  field_map_strict false
  fields_lowercase true
  fields_strip_underscores true
</filter>
Note that the following options can be embedded in a journal source block inside an entry block; you only need to use the filter directly for more complicated workflows.
field_map
Object / hash defining a mapping of source fields to destination fields. Destination fields may be existing or new user-defined fields. If multiple source fields are mapped to the same destination field, the contents of the fields will be appended to the destination field in the order defined in the mapping. A field map declaration takes the form of:
{
  "<src_field1>": "<dst_field1>",
  "<src_field2>": ["<dst_field1>", "<dst_field2>"],
  ...
}
Defaults to an empty map.
field_map_strict
If true, only destination fields from field_map
are included in the result. Defaults to false.
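For example, with the sample filter above and field_map_strict set to true, only the mapped destination fields (log, process, pid, and cmd) would remain in the result; unmapped fields such as machine_id and hostname would be dropped.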
fields_lowercase
If true, lowercase all non-mapped fields. Defaults to false.
fields_strip_underscores
If true, strip leading underscores from all non-mapped fields. Defaults to false.
Given a systemd journal source entry:
{
  "_MACHINE_ID": "bb9d0a52a41243829ecd729b40ac0bce",
  "_HOSTNAME": "arch",
  "MESSAGE": "this is a log message",
  "_PID": "123",
  "_CMDLINE": "login -- root",
  "_COMM": "login"
}
The resulting entry using the above sample configuration:
{
  "machine_id": "bb9d0a52a41243829ecd729b40ac0bce",
  "hostname": "arch",
  "log": "this is a log message",
  "pid": "123",
  "cmd": "login",
  "process": "123 login -- root"
}