The Azure Storage Append Blob output plugin buffers logs in a local file and periodically uploads them to an Azure Storage Append Blob. This plugin is a fork of microsoft/fluent-plugin-azure-storage-append-blob, which is no longer actively maintained.
Install the gem:
gem install fluent-plugin-azure-storage-append-blob-lts
Or add the following line to your Gemfile:
gem "fluent-plugin-azure-storage-append-blob-lts"
And then execute:
bundle
<match pattern>
  @type azure-storage-append-blob
  azure_cloud <azure cloud environment>
  azure_storage_dns_suffix <your azure storage dns suffix>   # only used for Azure Stack Cloud
  azure_storage_account <your azure storage account>
  azure_storage_access_key <your azure storage access key>   # leave empty to use MSI
  azure_storage_connection_string <your azure storage connection string> # leave empty to use MSI
  azure_storage_sas_token <your azure storage sas token>     # leave empty to use MSI
  azure_imds_api_version <Azure Instance Metadata Service API version>   # only used for MSI
  azure_token_refresh_interval <refresh interval in minutes> # only used for MSI
  azure_container <your azure storage container>
  azure_msi_client_id <Azure Managed Identity client ID>     # only used for MSI
  auto_create_container true
  path logs/
  azure_object_key_format %{path}%{time_slice}-%{index}.log
  time_slice_format %Y%m%d-%H
  compress false
  compute_checksums true
  # To use %{tag} or %Y/%m/%d/-style placeholders in path / azure_object_key_format,
  # specify tag (for %{tag}) and time (for %Y/%m/%d) in the <buffer> argument.
  <buffer tag,time>
    @type file
    path /var/log/fluent/azurestorageappendblob
    timekey 120 # 2 minutes
    timekey_wait 60
    timekey_use_utc true # use utc
  </buffer>
</match>
azure_cloud
Default: AZUREPUBLICCLOUD
The cloud environment used to determine the storage endpoint suffix (see the Azure documentation). Use AZURESTACKCLOUD for Azure Stack Cloud.
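The relationship between the cloud environment and the resulting storage endpoint can be sketched as follows. The constant and method names here are illustrative, not the plugin's internals; the suffix values are the well-known Azure storage DNS suffixes.

```ruby
# Illustrative mapping from Azure cloud environment name to the storage
# DNS suffix used to build endpoints like https://<account>.blob.<suffix>.
STORAGE_DNS_SUFFIXES = {
  'AZUREPUBLICCLOUD'       => 'core.windows.net',
  'AZURECHINACLOUD'        => 'core.chinacloudapi.cn',
  'AZUREUSGOVERNMENTCLOUD' => 'core.usgovcloudapi.net'
}.freeze

def blob_endpoint(account, cloud: 'AZUREPUBLICCLOUD', dns_suffix: nil)
  # AZURESTACKCLOUD has no fixed suffix, so azure_storage_dns_suffix
  # must be supplied explicitly in that case.
  suffix = dns_suffix || STORAGE_DNS_SUFFIXES.fetch(cloud)
  "https://#{account}.blob.#{suffix}"
end
```

For example, `blob_endpoint('myaccount')` yields the public-cloud blob endpoint for that account.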
azure_storage_dns_suffix
Your Azure Storage endpoint suffix. This can be retrieved from the EndpointSuffix section of the Azure Storage connection string.
azure_storage_account
Your Azure Storage Account name. This can be retrieved from the Azure Management portal.
azure_storage_access_key or azure_storage_sas_token or azure_storage_connection_string (set any one, or leave all empty to use MSI)
Your Azure Storage access key (primary or secondary), shared access signature (SAS) token, or full connection string. These can also be retrieved from the Azure Management portal.
If all are empty, the plugin uses the local Managed Identity endpoint to obtain a token for the target storage account.
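The token request against the local Instance Metadata Service endpoint can be sketched as below. The helper name is hypothetical; the link-local address, header, and `resource` value follow the standard Azure IMDS convention, and the actual request must be made from inside an Azure VM with a `Metadata: true` header.

```ruby
require 'uri'

# Well-known link-local IMDS endpoint, reachable only from an Azure VM.
IMDS_TOKEN_ENDPOINT = 'http://169.254.169.254/metadata/identity/oauth2/token'

# Build the IMDS token request URI used in the MSI OAuth flow (sketch).
def imds_token_uri(api_version: '2020-12-01', client_id: nil)
  params = {
    'api-version' => api_version,
    'resource'    => 'https://storage.azure.com/'
  }
  # azure_msi_client_id selects a specific user-assigned identity.
  params['client_id'] = client_id if client_id
  URI("#{IMDS_TOKEN_ENDPOINT}?#{URI.encode_www_form(params)}")
end
```

A GET against this URI (e.g. with Net::HTTP and the `Metadata: true` header) returns JSON whose `access_token` field is then used as the Bearer token for storage requests.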
azure_imds_api_version
Default: 2020-12-01
The Instance Metadata Service (IMDS) is used during the OAuth flow to obtain an access token. This API is versioned, and specifying the version is mandatory. See the Azure Instance Metadata Service documentation for details.
azure_token_refresh_interval
Default: 60 (1 hour)
When using MSI, the initial access token needs to be refreshed periodically, at this interval in minutes.
azure_container
Azure Storage container name.
azure_msi_client_id
Azure Managed Identity client ID to use for accessing the Azure Blob service.
auto_create_container
Default: true
When set to true, this plugin creates the Azure container if it does not already exist.
azure_object_key_format
The format of Azure Storage object keys. You can use several built-in variables:
- %{path}
- %{time_slice}
- %{index}
to construct keys dynamically.
%{path} is exactly the value of path configured in the configuration file, e.g. "logs/" in the example configuration above. %{time_slice} is the time slice formatted with time_slice_format. %{index} is used only if your blob exceeds Azure's limit of 50,000 blocks per blob, to prevent data loss; you are not required to use it.
The default format is "%{path}%{time_slice}-%{index}.log".
For instance, using the example configuration above, actual object keys on Azure Storage will be something like:
"logs/20130111-22-0.log"
"logs/20130111-23-0.log"
"logs/20130112-00-0.log"
With the configuration:
azure_object_key_format %{path}/events/ts=%{time_slice}/events.log
path log
time_slice_format %Y%m%d-%H
You get:
"log/events/ts=20130111-22/events.log"
"log/events/ts=20130111-23/events.log"
"log/events/ts=20130112-00/events.log"
The fluent-mixin-config-placeholders mixin is also incorporated, so additional variables such as %{hostname} can be used in azure_object_key_format. This is useful for preventing filename conflicts when writing from multiple servers.
azure_object_key_format %{path}/events/ts=%{time_slice}/events-%{hostname}.log
time_slice_format
Default: %Y%m%d
Format of the time used in the file name. Use %Y%m%d%H to split files hourly.
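These are standard strftime patterns, so the daily and hourly slices for the same timestamp look like this:

```ruby
t = Time.utc(2013, 1, 11, 22, 30)
t.strftime('%Y%m%d')   # daily slice:  "20130111"
t.strftime('%Y%m%d%H') # hourly slice: "2013011122"
```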
compress
Default: false
If true, the file is compressed with gzip before uploading. Note: if desired, set a .gz suffix via azure_object_key_format. Example:
azure_object_key_format %{path}%{time_slice}-%{index}.log.gz
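The compression step is plain gzip. A round-trip sketch using Ruby's standard Zlib (the plugin itself streams from its buffer file; these in-memory helpers are illustrative):

```ruby
require 'zlib'
require 'stringio'

# Gzip a chunk of log data in memory, as happens before upload when
# compress is true (illustrative helpers, not the plugin's code).
def gzip(data)
  out = StringIO.new
  gz = Zlib::GzipWriter.new(out)
  gz.write(data)
  gz.close
  out.string
end

def gunzip(data)
  Zlib::GzipReader.new(StringIO.new(data)).read
end
```

Anything that can read gzip (e.g. `zcat` on a downloaded blob) recovers the original log lines.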
compute_checksums
Default: true
Whether to compute an MD5 checksum of the blob contents during the append operation and provide it in a header for the Blob service. Set it to false in FIPS-enabled environments.
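The checksum sent with each append is, by Azure Blob REST convention, the Base64-encoded MD5 digest of the chunk (the Content-MD5 header). A sketch with Ruby's standard library:

```ruby
require 'digest'

# Compute the Content-MD5 header value for an append chunk (sketch).
# The Blob service verifies the chunk against this checksum on arrival.
# MD5 is disallowed under FIPS, which is why compute_checksums should
# be false in FIPS-enabled environments.
def content_md5(chunk)
  Digest::MD5.base64digest(chunk)
end
```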
To run the tests:
gem install bundler
bundle install
bundle exec rake test
To run the end-to-end test on an Azure VM:
- Create a Storage Account and a VM with MSI enabled.
- Set up Docker and Git.
- SSH into the VM.
- Download this repo:
  git clone https://github.com/microsoft/fluent-plugin-azure-storage-append-blob-lts.git
  cd fluent-plugin-azure-storage-append-blob-lts
- Build the Docker image:
  docker build -t fluent .
- Run the Docker image with different sets of parameters:
  STORAGE_ACCOUNT: required, the name of your storage account
  STORAGE_ACCESS_KEY: storage account access key
  STORAGE_SAS_TOKEN: storage SAS token with enough permissions for the plugin
You need to specify STORAGE_ACCOUNT and one of the authentication options. If you run it from a VM with MSI, only STORAGE_ACCOUNT is required. Keep in mind that there is no way to refresh the MSI token, so ensure you set up the proper permissions first.
docker run -it -e STORAGE_ACCOUNT=<storage> -e STORAGE_ACCESS_KEY=<key> fluent
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.