Replies: 1 comment
-
My bad. I found that giving each instance a unique tag name solves this problem, because the blob uses the tag name and timestamp as its name.

# docker-compose.yaml
version: "3"
services:
  fluentbit-1:
    image: fluent/fluent-bit:2.1.8
    volumes:
      - ./fluent-bit/conf:/fluent-bit/etc
    ports:
      - "12201:12204"
    # give a unique task id in environment variables
    environment:
      TASK_ID: "1"
    env_file:
      - ./fluent-bit/.env
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
  fluentbit-2:
    image: fluent/fluent-bit:2.1.8
    volumes:
      - ./fluent-bit/conf:/fluent-bit/etc
    ports:
      - "12202:12204"
    environment:
      TASK_ID: "2"
    env_file:
      - ./fluent-bit/.env
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G

Then update the Fluent Bit settings so that the tag is built from the task id.

# employee.conf
[INPUT]
    name           tcp
    listen         0.0.0.0
    port           12204
    format         json
    # use the environment variable TASK_ID as the tag name
    tag            employee-${TASK_ID}
    Mem_Buf_Limit  500MB

[OUTPUT]
    match                        employee-${TASK_ID}
    name                         azure_kusto
    tenant_id                    ${APP_TENANT_ID}
    client_id                    ${APP_CLIENT_ID}
    client_secret                ${APP_CLIENT_SECRET}
    ingestion_endpoint           ${ADX_INGESTION_ENDPOINT}
    database_name                employee
    table_name                   employee
    ingestion_mapping_reference  employee_fluent_bit_mapping

[OUTPUT]
    match                  employee-${TASK_ID}
    name                   azure_blob
    account_name           ${STORAGE_ACCOUNT_NAME}
    shared_key             ${STORAGE_SHARED_KEY}
    blob_type              blockblob
    container_name         employee
    path                   logs/
    auto_create_container  on
    tls                    on
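One way to confirm that nothing is dropped after this change is to count the rows that actually land in the target table and compare against the number of records sent. A minimal sketch in Python, assuming the azure-kusto-data package, the same service-principal variables as in the .env file, and a hypothetical ADX_QUERY_ENDPOINT variable that holds the cluster's query URI (not the ingest- URI used by the azure_kusto output):

# count_rows.py (sketch)
import os

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# build a client against the query endpoint of the cluster
kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    os.environ["ADX_QUERY_ENDPOINT"],   # assumed variable name, query URI
    os.environ["APP_CLIENT_ID"],
    os.environ["APP_CLIENT_SECRET"],
    os.environ["APP_TENANT_ID"],
)
client = KustoClient(kcsb)

# count the rows in the employee table; should equal the number of records sent
response = client.execute("employee", "employee | count")
for row in response.primary_results[0]:
    print(row["Count"])

If the count matches the number of test records, nothing was dropped during ingestion.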
-
Hello, maintainers.
I ran a load test that sends test logs (JSON format) to Azure Data Explorer through Fluent Bit, but I found that some data was lost during the transfer.
I used Docker Compose to start three Fluent Bit containers locally; here is my docker-compose.yaml file.
All of the Fluent Bit containers use the same configuration.
I sent 1000 single-line JSON records in a short time,
but only 671 entries showed up in the ADX database.
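Such a test can be reproduced with a small script that writes newline-delimited JSON to the published TCP ports; a minimal sketch (the record fields are made up, and the ports are the host ports published in the compose file):

# send_logs.py (sketch)
import json
import socket

PORTS = [12201, 12202]  # host ports published by the compose file; add one per container
TOTAL = 1000

for i in range(TOTAL):
    record = {"id": i, "name": f"employee {i}"}          # hypothetical payload
    line = (json.dumps(record) + "\n").encode("utf-8")   # one JSON object per line
    port = PORTS[i % len(PORTS)]                         # round-robin across containers
    with socket.create_connection(("127.0.0.1", port), timeout=5) as sock:
        sock.sendall(line)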
In ADX Insights I can see that ingestion fails with BlobAlreadyReceived errors.
In the debug messages I found that different containers produce exactly the same URL for the ADX ingestion blob.
I suspect this is the cause of the error, but I'm not sure whether I've got something wrong in the config.
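Besides ADX Insights, the rejected blobs can also be listed directly with the .show ingestion failures management command; a minimal sketch using the azure-kusto-data package (ADX_QUERY_ENDPOINT is an assumed variable holding the cluster's query URI, and the credentials are the same service-principal values used in the Fluent Bit config):

# show_ingestion_failures.py (sketch)
import os

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    os.environ["ADX_QUERY_ENDPOINT"],
    os.environ["APP_CLIENT_ID"],
    os.environ["APP_CLIENT_SECRET"],
    os.environ["APP_TENANT_ID"],
)
client = KustoClient(kcsb)

# .show ingestion failures is a management command, so run it with execute_mgmt
response = client.execute_mgmt("employee", ".show ingestion failures")
for row in response.primary_results[0]:
    print(row)  # each row describes one failed blob, including the failure kind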
In addition, the azure_blob output also runs into a client error, resulting in data loss.
I haven't looked at that error message in detail in Log Analytics yet, so I'll test that part later.