
Add tensor parallelism to the paged llama model #185

Merged
merged 4 commits into from
Sep 25, 2024

Commits on Sep 25, 2024

  1. Add tensor parallelism to the paged llama model

This adds one test that checks the sharded variant against the
unsharded one (see the parity-check sketch after the commit list).
    
    Make `sharktank.examples.paged_llm_v1` support a tensor parallelism
    CLI option.
    
This change adds many sharded variants of PyTorch API-equivalent
ops, but some of them lack automated tests.
index_copy_, index_put_, slicing, flatten, unflatten, and reshape have tests
(the index_copy_ sketch after the commit list illustrates the pattern).
    
Check that replication and splitting of an unsharded tensor is not an
actual copy. It is probably unintuitive that, when run through eager
PyTorch, the sharded result shares the same memory (see the
memory-sharing sketch after the commit list).
It may be better to change the semantics and require an actual copy.
During export this would insert copies that the compiler
would need to optimize out.
    
Add a test for the sharded paged KV cache (a minimal paged-cache
sketch follows the commit list).
    sogartar committed Sep 25, 2024
    e0372c8
  2. 05ff38b
  3. 1063c60
  4. 836ef7e