Actions: nod-ai/shark-ai

CI - shortfin - Python 3.13 Free-threaded

763 workflow runs

| Run | Title | Trigger | Started | Duration | Branch |
| --- | --- | --- | --- | --- | --- |
| #271 | Add fp8 quantization for conv and linear layers | Pull request #277 synchronize by nithinsubbiah | October 17, 2024 23:20 | 4m 0s | nithinsubbiah:punet_f8 |
| #270 | Initial commit of SD inference server. | Pull request #265 synchronize by monorimet | October 17, 2024 22:48 | 3m 55s | shortfin-sd |
| #269 | [sharktank] Evaluation - Add Perplexity test | Pull request #286 synchronize by archana-ramalingam | October 17, 2024 22:46 | 4m 8s | perplexity-test |
| #268 | [sharktank] Evaluation - Add Perplexity test | Pull request #286 synchronize by archana-ramalingam | October 17, 2024 19:49 | 4m 1s | perplexity-test |
| #267 | Skip failing host_cpu_system_test tests on Windows. | Pull request #293 opened by ScottTodd | October 17, 2024 18:18 | 3m 59s | ScottTodd:shortfin-windows-hostcpu-system |
| #266 | Fix reshaping of split sharded tensor | Pull request #291 opened by sogartar | October 17, 2024 15:51 | 4m 26s | sogartar:fix-sharded-split-tensor-resharding |
| #265 | Benchmark Llama 3.1 f16 and fp8 with CI | Pull request #284 synchronize by aviator19941 | October 17, 2024 15:38 | 4m 45s | benchmark-llama-fp8-ci |
| #264 | Benchmark Llama 3.1 f16 and fp8 with CI | Pull request #284 synchronize by aviator19941 | October 17, 2024 15:30 | 5m 1s | benchmark-llama-fp8-ci |
| #263 | Benchmark Llama 3.1 f16 and fp8 with CI | Pull request #284 synchronize by aviator19941 | October 17, 2024 15:28 | 2m 24s | benchmark-llama-fp8-ci |
| #262 | Fix iree_helpers_test on Windows with callstacks (debug build). (#288) | Commit 0c2e965 pushed by ScottTodd | October 17, 2024 15:05 | 5m 11s | main |
| #261 | Fix iree_helpers_test on Windows with callstacks (debug build). | Pull request #288 synchronize by ScottTodd | October 17, 2024 14:51 | 5m 11s | ScottTodd:shortfin-windows-tests |
| #260 | Skip crashing shortfin python tests on Windows. (#289) | Commit 95792af pushed by ScottTodd | October 17, 2024 14:46 | 5m 52s | main |
| #259 | Skip crashing shortfin python tests on Windows. | Pull request #289 synchronize by ScottTodd | October 17, 2024 14:44 | 5m 2s | ScottTodd:shortfin-windows-pytest |
| #258 | Put in some docs about how parameters are loaded | Pull request #281 synchronize by renxida | October 17, 2024 13:37 | 4m 36s | renxida:document-shortfin-parameter-loading |
| #257 | Put in some docs about how parameters are loaded | Pull request #281 synchronize by renxida | October 17, 2024 13:37 | 42s | renxida:document-shortfin-parameter-loading |
| #256 | Add device selection to shortfin llm demo (#275) | Commit 0c48865 pushed by renxida | October 17, 2024 13:32 | 5m 2s | main |
| #255 | Add device selection to shortfin llm demo | Pull request #275 synchronize by renxida | October 17, 2024 13:14 | 6m 3s | renxida:shortfin-system-selection |
| #254 | Add device selection to shortfin llm demo | Pull request #275 synchronize by renxida | October 17, 2024 13:13 | 1m 40s | renxida:shortfin-system-selection |
| #253 | Add fp8 quantization for conv and linear layers | Pull request #277 synchronize by nithinsubbiah | October 17, 2024 01:01 | 4m 56s | nithinsubbiah:punet_f8 |
| #252 | Add fp8 quantization for conv and linear layers | Pull request #277 synchronize by nithinsubbiah | October 17, 2024 00:58 | 3m 31s | nithinsubbiah:punet_f8 |
| #251 | Add fp8 quantization for conv and linear layers | Pull request #277 synchronize by nithinsubbiah | October 17, 2024 00:42 | 4m 24s | nithinsubbiah:punet_f8 |
| #250 | Add fp8 quantization for conv and linear layers | Pull request #277 synchronize by nithinsubbiah | October 17, 2024 00:34 | 5m 40s | nithinsubbiah:punet_f8 |
| #249 | Benchmark Llama 3.1 f16 and fp8 with CI | Pull request #284 synchronize by aviator19941 | October 16, 2024 23:45 | 5m 2s | benchmark-llama-fp8-ci |
| #248 | WIP Fix in-place tensor mutation when sharding | Pull request #290 synchronize by sogartar | October 16, 2024 23:05 | 3m 57s | sogartar:llama-missing-cache-export |
| #247 | WIP Fix in-place tensor mutation when sharding | Pull request #290 opened by sogartar | October 16, 2024 21:48 | 3m 55s | sogartar:llama-missing-cache-export |

Results can be narrowed down, and taken further back in time, using the created:<2024-10-16 filter or the other available filters.
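
The same run list can also be fetched programmatically through the GitHub REST API ("List workflow runs for a workflow"), which accepts the same created date filter mentioned above. The sketch below is a minimal example, assuming a token in the GITHUB_TOKEN environment variable and a hypothetical workflow file name (ci-shortfin-freethreaded.yml); the actual file under .github/workflows/ in nod-ai/shark-ai may be named differently.

```python
# Minimal sketch: list runs of one workflow in nod-ai/shark-ai via the GitHub REST API.
import os

import requests

OWNER, REPO = "nod-ai", "shark-ai"
WORKFLOW = "ci-shortfin-freethreaded.yml"  # hypothetical file name; check .github/workflows/

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/runs",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    # Same date qualifier as the filter hint above; per_page caps the page size.
    params={"created": "<2024-10-16", "per_page": 50},
    timeout=30,
)
resp.raise_for_status()

# Print one line per run: run number, trigger event, head branch, and title.
for run in resp.json()["workflow_runs"]:
    print(run["run_number"], run["event"], run["head_branch"], run["display_title"])
```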