This takes a fairly different approach to how we emit our logs to GCS. Previously we received them in one container and wrote them out to the local filesystem, and a sidecar would periodically enumerate the files emitted by that process, concatenate them, and send them up to GCS in a single upload.
When we shifted to Cloud Run, this approach became problematic: the filesystem there is backed by memory, so under heavy load the event handler could build up significant memory pressure between rotations, and during the concatenation for the upload each log ends up in memory twice, once in the filesystem and once in the uploading process.
By collapsing the two processes and uploading directly, we can initiate a new file write, trickle events to that writer, and then flush the active writers. In the worst case the client naively buffers everything once; but ideally it would initiate an upload of unknown size and stream events as they arrive, which would reduce our memory pressure to roughly O(active events).
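As a rough illustration of the collapsed shape, here is a minimal sketch of a writer that lazily opens one stream per destination, trickles events into it, and flushes the active writers. The `LogUploader` name and `open_stream` hook are hypothetical, not from this PR; in production `open_stream` might return a streaming GCS writer, but here it defaults to an in-memory buffer so the sketch is self-contained.

```python
import io
from typing import Callable, Dict


class LogUploader:
    """Sketch: one open stream per destination; events trickle in and
    flush() finalizes the active writers.

    `open_stream` is a stand-in for whatever produces a streaming
    upload handle; io.BytesIO keeps the sketch runnable without GCS.
    """

    def __init__(self, open_stream: Callable[[], io.BufferedIOBase] = io.BytesIO):
        self._open_stream = open_stream
        self._writers: Dict[str, io.BufferedIOBase] = {}

    def handle_event(self, destination: str, event: bytes) -> None:
        # Lazily open a writer the first time we see a destination,
        # then write each event straight through -- the event is held
        # in memory only once, while it is in flight.
        writer = self._writers.get(destination)
        if writer is None:
            writer = self._writers[destination] = self._open_stream()
        writer.write(event + b"\n")

    def flush(self) -> None:
        # Finalize every active writer. With a real streaming client
        # this would complete the uploads instead of re-reading and
        # concatenating files the way the old sidecar did.
        for writer in self._writers.values():
            writer.flush()
        self._writers.clear()
```

The key property is that memory scales with events currently in flight, not with everything written since the last rotation.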