This PR supersedes #2227 as a fix for "random" daemon crashes with:

`NodeError: Cannot call write after a stream was destroyed`

Details in libp2p/js-libp2p#374; steps to reproduce: libp2p/js-libp2p#374 (comment).

@alanshaw confirmed the underlying error was not specific to `buffer-peek-stream`, but was caused by the way Hapi handles pipelines with multiple PassThrough/Transform/pipe calls. We had two Transforms: one for content-type sniffing and one for streaming compression.

It turns out go-ipfs does not compress anything by default, so we can remove the PassThrough responsible for streaming the compressed payload. This fixes the issue while keeping the optimization introduced by `buffer-peek-stream`, and is a bit better than #2227 because the gateway no longer hits the datastore twice.
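For illustration, a minimal sketch of the peek-based approach (not the exact gateway code): `buffer-peek-stream` reads the first bytes of a stream for sniffing and hands back a stream that still yields the full payload, so no extra PassThrough/Transform needs to be piped into the Hapi response. The `detectContentType` helper below is hypothetical, standing in for the gateway's sniffing logic.

```js
// Sketch: peek at the first bytes of a stream to sniff a content type,
// then pipe the untouched stream onward. Uses the buffer-peek-stream
// callback API: peek(stream, bytes, (err, data, outputStream) => ...).
const peek = require('buffer-peek-stream')
const fs = require('fs')

// Hypothetical helper: inspect magic bytes and return a MIME type.
function detectContentType (chunk) {
  return chunk.slice(0, 4).toString('hex') === '89504e47' // PNG signature
    ? 'image/png'
    : 'application/octet-stream'
}

peek(fs.createReadStream('example.bin'), 512, (err, chunk, outputStream) => {
  if (err) throw err
  const contentType = detectContentType(chunk)
  // outputStream still yields the full payload (peeked bytes included),
  // so it can be used as the response body without adding another
  // PassThrough/Transform to the pipeline.
  console.log(contentType)
  outputStream.resume() // drain the stream in this standalone example
})
```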
Closes libp2p/js-libp2p#374
Closes libp2p/pull-mplex#13
Closes #2227