Support variable-length sequences for mamba block with position indices #434
Enable the Mamba block to support variable-length sequence inputs using position indices. Passing position indices incurs negligible overhead for the Mamba block, and for common variable-length sequence distributions throughput improves by 4-6x.
As discussed in [Feature] Support variable-length sequences for mamba block #244, this PR replaces cumulative sequence lengths with a position-index matrix. Position indices are better suited to parallel acceleration and are more commonly produced by dataloaders.
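The relationship between the two representations can be sketched as follows. This is an illustrative helper (the function name and exact API are assumptions, not code from this PR): it expands cumulative sequence lengths (`cu_seqlens`) into per-token position indices that restart at zero for each packed sub-sequence.

```python
import torch

def cu_seqlens_to_position_indices(cu_seqlens: torch.Tensor) -> torch.Tensor:
    """Expand cumulative sequence lengths into per-token position indices.

    For cu_seqlens = [0, 3, 5], the packed batch holds two sub-sequences of
    lengths 3 and 2, and the result is [0, 1, 2, 0, 1]: each sub-sequence
    restarts its position count from zero.
    """
    lengths = cu_seqlens[1:] - cu_seqlens[:-1]      # per-sub-sequence lengths
    total = int(cu_seqlens[-1])
    positions = torch.arange(total, device=cu_seqlens.device)
    # Subtract each token's sub-sequence start offset so counting restarts at 0.
    starts = torch.repeat_interleave(cu_seqlens[:-1], lengths)
    return positions - starts

print(cu_seqlens_to_position_indices(torch.tensor([0, 3, 5])).tolist())
# → [0, 1, 2, 0, 1]
```

Unlike `cu_seqlens`, the position-index tensor has one entry per token, so it can be consumed element-wise by parallel kernels without a search over sequence boundaries.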
In the Mamba module, two steps mix information along the sequence dimension: conv1d and selective_scan. When multiple sub-sequences are packed into one sequence, they can influence each other through these steps, so we modified the causal-conv1d and selective_scan CUDA operators to use the position indices to eliminate this cross-sub-sequence influence.
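The boundary problem and its fix can be illustrated with a reference (non-CUDA) causal depthwise conv1d. This is a hedged sketch, not the PR's kernel: a token near the start of a sub-sequence simply skips taps that would reach into the previous sub-sequence, which position indices make cheap to detect.

```python
import torch

def causal_conv1d_varlen_ref(x: torch.Tensor,
                             weight: torch.Tensor,
                             position_indices: torch.Tensor) -> torch.Tensor:
    """Reference causal depthwise conv1d over a packed batch.

    x: (dim, total_len), weight: (dim, k), position_indices: (total_len,).
    A token at position p within its sub-sequence may look back at most p
    tokens, so taps that would cross a sub-sequence boundary are dropped.
    """
    dim, k = weight.shape
    total = x.shape[1]
    out = torch.zeros_like(x)
    for t in range(total):
        # Only look back within the current sub-sequence.
        lookback = min(k - 1, int(position_indices[t]))
        for j in range(lookback + 1):
            out[:, t] += weight[:, k - 1 - j] * x[:, t - j]
    return out

# Two packed sub-sequences: [1, 2, 3] and [10, 20], kernel sums current + previous token.
x = torch.tensor([[1.0, 2.0, 3.0, 10.0, 20.0]])
w = torch.tensor([[1.0, 1.0]])
pos = torch.tensor([0, 1, 2, 0, 1])
print(causal_conv1d_varlen_ref(x, w, pos).tolist())
# → [[1.0, 3.0, 5.0, 10.0, 30.0]]  (token 10 is unaffected by the 3 before it)
```

The same idea applies to selective_scan: wherever the position index is zero, the recurrent state is reset rather than carried over from the preceding sub-sequence.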