Open Source Integration Testing as part of upstream releases #4390
-
CC @zhonghui12. We will bring this open source (non-vendor) focused testing idea up in the next Fluent Community meeting.
-
Adding @patrick-stephens, who's looking at this from the Calyptia side.
-
Agreed, I'm looking at general improvements under this change: #3753
It includes some level of testing for releases, although the tests above are more specifically resilience and performance tests. I agree these should feed in: essentially there is some minimum level of validation for staging builds, and we then trigger these longer-running tests on those staging builds before approving the release. The actual test cases can also be used as a form of validation for user infrastructure, i.e. run them in-situ to help identify any issues there.

I agree with keeping verification vendor-agnostic, although it would also be useful to include some level of verification for common output plugins. We'll probably need a monotonic count or similar in the output messages to verify that all messages were received, coping with out-of-order retries too; see the sketch below. If we have the basic framework in place, it will be easy to evolve via user-submitted PRs for new test cases, targets, etc., benefiting everyone.
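To make the monotonic-count idea concrete, here is a minimal sketch in Python. The record layout and function name are hypothetical (they assume each test record is a JSON object carrying an incrementing integer `seq` field, a convention of the harness rather than anything Fluent Bit provides): the verifier tolerates out-of-order arrival and the duplicates that at-least-once retries produce, and reports only genuinely missing records.

```python
import json
from typing import Iterable, Set, Tuple

def verify_sequence(lines: Iterable[str], expected_total: int) -> Tuple[Set[int], int]:
    """Return (missing sequence numbers, duplicate count) for records 0..expected_total-1."""
    seen: Set[int] = set()
    duplicates = 0
    for line in lines:
        seq = json.loads(line)["seq"]
        if seq in seen:
            duplicates += 1  # harmless: a retry re-delivered this record
        else:
            seen.add(seq)
    missing = set(range(expected_total)) - seen
    return missing, duplicates

if __name__ == "__main__":
    # Records 0..4 expected; 3 arrives out of order, 1 is retried, 2 is lost.
    received = ['{"seq": 0}', '{"seq": 3}', '{"seq": 1}', '{"seq": 1}', '{"seq": 4}']
    missing, dups = verify_sequence(received, expected_total=5)
    assert missing == {2} and dups == 1
    print(f"missing={sorted(missing)}, duplicates={dups}")
```

Because the check only cares about set membership, it stays correct however the transport reorders or retries records; duplicates are counted but not treated as failures.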
-
I'm going to convert this issue to a discussion.
-
I think the Fluent Bit community should work towards a higher bar for releases, to ensure stability and improve user confidence.
The most common use case for Fluent Bit users is collecting k8s log files. It would be really cool if we had automated testing prior to releases that did the following:
This way, we test each release candidate against real-world use cases before releasing it.
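As a rough illustration of what such a pre-release check could look like, here is a hedged sketch of a harness for the log-tailing use case. It assumes a `fluent-bit` binary on PATH; the file name, record count, and timings are purely illustrative.

```python
import json
import subprocess
import tempfile
import time
from pathlib import Path

RECORDS = 100  # illustrative; a real test would use far more data

with tempfile.TemporaryDirectory() as tmp:
    # A container-runtime-style log file, as found under /var/log/containers on k8s nodes.
    log = Path(tmp) / "app-0_default_app-abc123.log"
    log.write_text("".join(f"line {i}\n" for i in range(RECORDS)))

    # Tail the file from the start and emit records as JSON lines on stdout.
    proc = subprocess.Popen(
        ["fluent-bit",
         "-i", "tail", "-p", f"path={log}", "-p", "read_from_head=true",
         "-o", "stdout", "-p", "format=json_lines",
         "-f", "1"],
        stdout=subprocess.PIPE, text=True,
    )
    time.sleep(5)  # crude: a real harness would poll until the count stabilizes
    proc.terminate()
    out, _ = proc.communicate()

    # One JSON object per delivered record; Fluent Bit's own log output goes to stderr.
    got = [json.loads(l) for l in out.splitlines() if l.startswith("{")]
    print(f"received {len(got)}/{RECORDS} records")
    assert len(got) == RECORDS
```

Run against a release candidate, a failure here would block the release before it ever reaches users.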
We could have two types of tests: