Commit
commit a1d8742 Author: Simon Mo <[email protected]> Date: Mon Sep 9 23:21:00 2024 -0700 Add NVIDIA Meetup slides, announce AMD meetup, and add contact info (vllm-project#8319)
commit 6cd5e5b Author: Dipika Sikka <[email protected]> Date: Mon Sep 9 23:02:52 2024 -0400 [Misc] Fused MoE Marlin support for GPTQ (vllm-project#8217)
commit c7cb5c3 Author: Kyle Sayers <[email protected]> Date: Mon Sep 9 16:27:26 2024 -0400 [Misc] GPTQ Activation Ordering (vllm-project#8135)
commit f9b4a2d Author: Vladislav Kruglikov <[email protected]> Date: Mon Sep 9 21:20:46 2024 +0300 [Bugfix] Correct adapter usage for cohere and jamba (vllm-project#8292)
commit 58fcc85 Author: Adam Lugowski <[email protected]> Date: Mon Sep 9 11:16:37 2024 -0700 [Frontend] Add progress reporting to run_batch.py (vllm-project#8060) Co-authored-by: Adam Lugowski <[email protected]>
commit 08287ef Author: Kyle Mistele <[email protected]> Date: Mon Sep 9 09:45:11 2024 -0500 [Bugfix] Streamed tool calls now more strictly follow OpenAI's format; ensures Vercel AI SDK compatibility (vllm-project#8272)
commit 4ef41b8 Author: Alexander Matveev <[email protected]> Date: Sun Sep 8 00:01:51 2024 -0400 [Bugfix] Fix async postprocessor in case of preemption (vllm-project#8267)
commit cfe712b Author: Joe Runde <[email protected]> Date: Sat Sep 7 14:03:16 2024 -0600 [CI/Build] Use python 3.12 in cuda image (vllm-project#8133) Signed-off-by: Joe Runde <[email protected]>
commit b962ee1 Author: sumitd2 <[email protected]> Date: Sat Sep 7 23:48:40 2024 +0530 ppc64le: Dockerfile fixed, and a script for buildkite (vllm-project#8026)
commit 36bf815 Author: Isotr0py <[email protected]> Date: Sun Sep 8 01:45:44 2024 +0800 [Model][VLM] Decouple weight loading logic for `Paligemma` (vllm-project#8269)
commit e807125 Author: Isotr0py <[email protected]> Date: Sat Sep 7 16:38:23 2024 +0800 [Model][VLM] Support multi-images inputs for InternVL2 models (vllm-project#8201)
commit 9f68e00 Author: Cyrus Leung <[email protected]> Date: Sat Sep 7 16:02:39 2024 +0800 [Bugfix] Fix broken OpenAI tensorizer test (vllm-project#8258)
commit ce2702a Author: youkaichao <[email protected]> Date: Fri Sep 6 22:40:46 2024 -0700 [tpu][misc] fix typo (vllm-project#8260)
commit 795b662 Author: Wei-Sheng Chin <[email protected]> Date: Fri Sep 6 20:18:16 2024 -0700 Enable Random Prefix Caching in Serving Profiling Tool (benchmark_serving.py) (vllm-project#8241)
commit 2f707fc Author: Cyrus Leung <[email protected]> Date: Sat Sep 7 10:57:24 2024 +0800 [Model] Multi-input support for LLaVA (vllm-project#8238)
commit 41e95c5 Author: Kyle Mistele <[email protected]> Date: Fri Sep 6 21:49:01 2024 -0500 [Bugfix] Fix Hermes tool call chat template bug (vllm-project#8256) Co-authored-by: Kyle Mistele <[email protected]>
commit 12dd715 Author: William Lin <[email protected]> Date: Fri Sep 6 17:48:48 2024 -0700 [misc] [doc] [frontend] LLM torch profiler support (vllm-project#7943)
commit 29f49cd Author: Patrick von Platen <[email protected]> Date: Sat Sep 7 01:02:05 2024 +0200 [Model] Allow loading from original Mistral format (vllm-project#8168) Co-authored-by: Michael Goin <[email protected]>
commit 23f3222 Author: Dipika Sikka <[email protected]> Date: Fri Sep 6 18:29:03 2024 -0400 [Misc] Remove `SqueezeLLM` (vllm-project#8220)
commit 9db52ea Author: rasmith <[email protected]> Date: Fri Sep 6 17:26:09 2024 -0500 [Kernel] [Triton] Memory optimization for awq_gemm and awq_dequantize, 2x throughput (vllm-project#8248)
commit 1447c97 Author: Alexey Kondratiev(AMD) <[email protected]> Date: Fri Sep 6 14:51:03 2024 -0400 [CI/Build] Increasing timeout for multiproc worker tests (vllm-project#8203)
commit de80783 Author: Rui Qiao <[email protected]> Date: Fri Sep 6 09:18:35 2024 -0700 [Misc] Use ray[adag] dependency instead of cuda (vllm-project#7938)
commit e5cab71 Author: afeldman-nm <[email protected]> Date: Fri Sep 6 12:01:14 2024 -0400 [Frontend] Add --logprobs argument to `benchmark_serving.py` (vllm-project#8191)
commit baa5467 Author: Nick Hill <[email protected]> Date: Thu Sep 5 20:39:29 2024 -0700 [BugFix] Fix Granite model configuration (vllm-project#8216)
commit db3bf7c Author: Jiaxin Shan <[email protected]> Date: Thu Sep 5 18:10:33 2024 -0700 [Core] Support load and unload LoRA in api server (vllm-project#6566) Co-authored-by: Jee Jee Li <[email protected]>
commit 2febcf2 Author: sroy745 <[email protected]> Date: Thu Sep 5 13:25:29 2024 -0700 [Documentation][Spec Decode] Add documentation about lossless guarantees in Speculative Decoding in vLLM (vllm-project#7962)
commit 2ee4528 Author: Michael Goin <[email protected]> Date: Thu Sep 5 11:09:46 2024 -0400 Move verify_marlin_supported to GPTQMarlinLinearMethod (vllm-project#8165)
commit 9da25a8 Author: Alex Brooks <[email protected]> Date: Thu Sep 5 06:48:10 2024 -0600 [MODEL] Qwen Multimodal Support (Qwen-VL / Qwen-VL-Chat) (vllm-project#8029) Signed-off-by: Alex-Brooks <[email protected]> Co-authored-by: DarkLight1337 <[email protected]>
commit 8685ba1 Author: [email protected] <[email protected]> Date: Thu Sep 5 17:03:37 2024 +0530 Inclusion of InternVLChatModel In PP_SUPPORTED_MODELS(Pipeline Parallelism) (vllm-project#7860)
commit 288a938 Author: Cyrus Leung <[email protected]> Date: Thu Sep 5 18:51:53 2024 +0800 [Doc] Indicate more information about supported modalities (vllm-project#8181)
commit e39ebf5 Author: Elfie Guo <[email protected]> Date: Wed Sep 4 22:12:26 2024 -0700 [Core/Bugfix] Add query dtype as per FlashInfer API requirements. (vllm-project#8173)
commit ba262c4 Author: Kevin H. Luu <[email protected]> Date: Wed Sep 4 20:33:12 2024 -0700 [ci] Mark LoRA test as soft-fail (vllm-project#8160) Signed-off-by: kevin <[email protected]>
commit 4624d98 Author: Woosuk Kwon <[email protected]> Date: Wed Sep 4 20:31:48 2024 -0700 [Misc] Clean up RoPE forward_native (vllm-project#8076)
commit 1afc931 Author: William Lin <[email protected]> Date: Wed Sep 4 17:35:36 2024 -0700 [bugfix] >1.43 constraint for openai (vllm-project#8169) Co-authored-by: Michael Goin <[email protected]>
commit e01c2be Author: Maureen McElaney <[email protected]> Date: Wed Sep 4 19:50:13 2024 -0400 [Doc] [Misc] Create CODE_OF_CONDUCT.md (vllm-project#8161)
commit 32e7db2 Author: Simon Mo <[email protected]> Date: Wed Sep 4 16:34:27 2024 -0700 Bump version to v0.6.0 (vllm-project#8166)
commit 008cf88 Author: Harsha vardhan manoj Bikki <[email protected]> Date: Wed Sep 4 16:33:43 2024 -0700 [Neuron] Adding support for adding/ overriding neuron configuration a… (vllm-project#8062) Co-authored-by: Harsha Bikki <[email protected]>
commit 77d9e51 Author: Cody Yu <[email protected]> Date: Wed Sep 4 13:23:22 2024 -0700 [MISC] Replace input token throughput with total token throughput (vllm-project#8164) Co-authored-by: Michael Goin <[email protected]>
commit e02ce49 Author: Kyle Mistele <[email protected]> Date: Wed Sep 4 15:18:13 2024 -0500 [Feature] OpenAI-Compatible Tools API + Streaming for Hermes & Mistral models (vllm-project#5649) Co-authored-by: constellate <[email protected]> Co-authored-by: Kyle Mistele <[email protected]>
commit 561d6f8 Author: Woosuk Kwon <[email protected]> Date: Wed Sep 4 13:05:50 2024 -0700 [CI] Change test input in Gemma LoRA test (vllm-project#8163)
commit d1dec64 Author: alexeykondrat <[email protected]> Date: Wed Sep 4 14:57:54 2024 -0400 [CI/Build][ROCm] Enabling LoRA tests on ROCm (vllm-project#7369) Co-authored-by: Simon Mo <[email protected]>
commit 2ad2e56 Author: Cody Yu <[email protected]> Date: Wed Sep 4 11:53:25 2024 -0700 [MISC] Consolidate FP8 kv-cache tests (vllm-project#8131)
commit d331156 Author: wnma <[email protected]> Date: Wed Sep 4 18:55:37 2024 +0800 [Bugfix] remove post_layernorm in siglip (vllm-project#8106)
commit ccd7207 Author: TimWang <[email protected]> Date: Wed Sep 4 14:17:05 2024 +0800 chore: Update check-wheel-size.py to read MAX_SIZE_MB from env (vllm-project#8103)
commit 855c262 Author: Cyrus Leung <[email protected]> Date: Wed Sep 4 13:22:17 2024 +0800 [Frontend] Multimodal support in offline chat (vllm-project#8098)
commit 2be8ec6 Author: Peter Salas <[email protected]> Date: Tue Sep 3 21:38:21 2024 -0700 [Model] Add Ultravox support for multiple audio chunks (vllm-project#7963)
commit e16fa99 Author: Dipika Sikka <[email protected]> Date: Tue Sep 3 22:12:41 2024 -0400 [Misc] Update fbgemmfp8 to use `vLLMParameters` (vllm-project#7972) Co-authored-by: Michael Goin <[email protected]>
commit 61f4a93 Author: Woosuk Kwon <[email protected]> Date: Tue Sep 3 18:35:33 2024 -0700 [TPU][Bugfix] Use XLA rank for persistent cache path (vllm-project#8137)
commit d4db9f5 Author: Nick Hill <[email protected]> Date: Tue Sep 3 17:57:41 2024 -0700 [Benchmark] Add `--async-engine` option to benchmark_throughput.py (vllm-project#7964)
commit 2188a60 Author: Dipika Sikka <[email protected]> Date: Tue Sep 3 17:21:44 2024 -0400 [Misc] Update `GPTQ` to use `vLLMParameters` (vllm-project#7976)
commit dc0b606 Author: Simon Mo <[email protected]> Date: Tue Sep 3 14:11:42 2024 -0700 [CI] Change PR remainder to avoid at-mentions (vllm-project#8134)
commit 0af3abe Author: Woosuk Kwon <[email protected]> Date: Tue Sep 3 13:29:24 2024 -0700 [TPU][Bugfix] Fix next_token_ids shape (vllm-project#8128)
commit f1575dc Author: Kevin H. Luu <[email protected]> Date: Tue Sep 3 13:25:09 2024 -0700 [ci] Fix GHA workflow (vllm-project#8129) Signed-off-by: kevin <[email protected]>
commit c02638e Author: tomeras91 <[email protected]> Date: Tue Sep 3 22:37:08 2024 +0300 [CI/Build] make pip install vllm work in macos (for import only) (vllm-project#8118)
commit 652c83b Author: Antoni Baum <[email protected]> Date: Tue Sep 3 12:28:25 2024 -0700 [Misc] Raise a more informative exception in add/remove_logger (vllm-project#7750)
commit 6d646d0 Author: Alexander Matveev <[email protected]> Date: Tue Sep 3 14:50:29 2024 -0400 [Core] Optimize Async + Multi-step (vllm-project#8050)
commit 95a178f Author: Kevin H. Luu <[email protected]> Date: Tue Sep 3 11:32:27 2024 -0700 [CI] Only PR reviewers/committers can trigger CI on PR (vllm-project#8124) Signed-off-by: kevin <[email protected]>
commit bd852f2 Author: Cody Yu <[email protected]> Date: Tue Sep 3 10:49:18 2024 -0700 [Performance] Enable chunked prefill and prefix caching together (vllm-project#8120) Co-authored-by: Tao He <[email protected]> Co-authored-by: Juelianqvq <[email protected]>
commit ec26653 Author: Isotr0py <[email protected]> Date: Tue Sep 3 21:37:52 2024 +0800 [Bugfix][VLM] Add fallback to SDPA for ViT model running on CPU backend (vllm-project#8061)
commit 0fbc669 Author: Woosuk Kwon <[email protected]> Date: Mon Sep 2 20:35:42 2024 -0700 [Bugfix] Fix single output condition in output processor (vllm-project#7881)
commit 6e36f4f Author: wang.yuqi <[email protected]> Date: Tue Sep 3 05:20:12 2024 +0800 improve chunked prefill performance [Bugfix] Fix vllm-project#7592 vllm 0.5.4 enable_chunked_prefill throughput is slightly lower than 0.5.3~0.5.0. (vllm-project#7874)
commit dd2a6a8 Author: Isotr0py <[email protected]> Date: Mon Sep 2 23:48:56 2024 +0800 [Bugfix] Fix internlm2 tensor parallel inference (vllm-project#8055)
commit 4ca65a9 Author: Isotr0py <[email protected]> Date: Mon Sep 2 20:43:26 2024 +0800 [Core][Bugfix] Accept GGUF model without .gguf extension (vllm-project#8056)
commit e2b2aa5 Author: Woosuk Kwon <[email protected]> Date: Sun Sep 1 23:09:46 2024 -0700 [TPU] Align worker index with node boundary (vllm-project#7932)
commit e6a26ed Author: Lily Liu <[email protected]> Date: Sun Sep 1 21:23:29 2024 -0700 [SpecDecode][Kernel] Flashinfer Rejection Sampling (vllm-project#7244)
commit f8d6014 Author: Shawn Tan <[email protected]> Date: Sun Sep 1 21:37:18 2024 -0400 [Model] Add Granite model (vllm-project#7436) Co-authored-by: Nick Hill <[email protected]>
commit 5b86b19 Author: Roger Wang <[email protected]> Date: Sun Sep 1 14:46:57 2024 -0700 [Misc] Optional installation of audio related packages (vllm-project#8063)
commit 5231f08 Author: Roger Wang <[email protected]> Date: Sat Aug 31 16:35:53 2024 -0700 [Frontend][VLM] Add support for multiple multi-modal items (vllm-project#8049)
commit 8423aef Author: Robert Shaw <[email protected]> Date: Sat Aug 31 15:44:03 2024 -0400 [BugFix][Core] Multistep Fix Crash on Request Cancellation (vllm-project#8059)
commit 4f5d844 Author: Nicolò Lucchesi <[email protected]> Date: Sat Aug 31 09:27:58 2024 +0200 [Bugfix] Fix ModelScope models in v0.5.5 (vllm-project#8037)
commit d05f0a9 Author: Cyrus Leung <[email protected]> Date: Sat Aug 31 13:26:55 2024 +0800 [Bugfix] Fix import error in Phi-3.5-MoE (vllm-project#8052)
commit 622f8ab Author: Pavani Majety <[email protected]> Date: Fri Aug 30 22:18:50 2024 -0700 [Bugfix] bugfix and add model test for flashinfer fp8 kv cache. (vllm-project#8013)
commit 1248e85 Author: Wenxiang <[email protected]> Date: Sat Aug 31 03:42:57 2024 +0800 [Model] Adding support for MSFT Phi-3.5-MoE (vllm-project#7729) Co-authored-by: Your Name <[email protected]> Co-authored-by: Zeqi Lin <[email protected]> Co-authored-by: Zeqi Lin <[email protected]>
commit 2684efc Author: Woosuk Kwon <[email protected]> Date: Fri Aug 30 09:01:26 2024 -0700 [TPU][Bugfix] Fix tpu type api (vllm-project#8035)
commit 058344f Author: Kaunil Dhruv <[email protected]> Date: Fri Aug 30 08:21:02 2024 -0700 [Frontend]-config-cli-args (vllm-project#7737) Co-authored-by: Cyrus Leung <[email protected]> Co-authored-by: Kaunil Dhruv <[email protected]>
commit 98cef6a Author: Cyrus Leung <[email protected]> Date: Fri Aug 30 23:20:34 2024 +0800 [Core] Increase default `max_num_batched_tokens` for multimodal models (vllm-project#8028)
commit f97be32 Author: Jungho Christopher Cho <[email protected]> Date: Sat Aug 31 00:19:27 2024 +0900 [VLM][Model] TP support for ViTs (vllm-project#7186) Co-authored-by: Roger Wang <[email protected]> Co-authored-by: Roger Wang <[email protected]>
commit afd39a4 Author: Cyrus Leung <[email protected]> Date: Fri Aug 30 23:03:28 2024 +0800 [Bugfix] Fix import error in Exaone model (vllm-project#8034)
commit 2148441 Author: Richard Liu <[email protected]> Date: Fri Aug 30 00:27:40 2024 -0700 [TPU] Support single and multi-host TPUs on GKE (vllm-project#7613)
commit dc13e99 Author: Yohan Na <[email protected]> Date: Fri Aug 30 15:34:20 2024 +0900 [MODEL] add Exaone model support (vllm-project#7819)
commit 34a0e96 Author: Avshalom Manevich <[email protected]> Date: Fri Aug 30 11:11:39 2024 +0700 [Kernel] changing fused moe kernel chunk size default to 32k (vllm-project#7995)
commit 80c7b08 Author: Woosuk Kwon <[email protected]> Date: Thu Aug 29 19:35:29 2024 -0700 [TPU] Async output processing for TPU (vllm-project#8011)
commit 428dd14 Author: afeldman-nm <[email protected]> Date: Thu Aug 29 22:19:08 2024 -0400 [Core] Logprobs support in Multi-step (vllm-project#7652)
commit 4abed65 Author: Cyrus Leung <[email protected]> Date: Fri Aug 30 08:49:04 2024 +0800 [VLM] Disallow overflowing `max_model_len` for multimodal models (vllm-project#7998)
commit 0c785d3 Author: Wei-Sheng Chin <[email protected]> Date: Thu Aug 29 16:48:11 2024 -0700 Add more percentiles and latencies (vllm-project#7759)
commit 4664cea Author: chenqianfzh <[email protected]> Date: Thu Aug 29 16:09:08 2024 -0700 support bitsandbytes 8-bit and FP4 quantized models (vllm-project#7445)
commit 257afc3 Author: Harsha vardhan manoj Bikki <[email protected]> Date: Thu Aug 29 13:58:14 2024 -0700 [Neuron] Adding support for context-lenght, token-gen buckets. (vllm-project#7885) Co-authored-by: Harsha Bikki <[email protected]>
commit 86a677d Author: Dipika Sikka <[email protected]> Date: Thu Aug 29 16:46:55 2024 -0400 [misc] update tpu int8 to use new vLLM Parameters (vllm-project#7973)
commit d78789a Author: Isotr0py <[email protected]> Date: Fri Aug 30 03:54:49 2024 +0800 [Bugfix] Fix incorrect vocal embedding shards for GGUF model in tensor parallelism (vllm-project#7954)
commit c334b18 Author: kushanam <[email protected]> Date: Thu Aug 29 12:15:04 2024 -0700 extend cuda graph size for H200 (vllm-project#7894) Co-authored-by: youkaichao <[email protected]>
commit 6b34215 Author: Pavani Majety <[email protected]> Date: Thu Aug 29 11:53:11 2024 -0700 [Core][Kernels] Enable FP8 KV Cache with Flashinfer backend. + BugFix for kv_cache_dtype=auto (vllm-project#7985) Co-authored-by: Simon Mo <[email protected]> Co-authored-by: Cody Yu <[email protected]>
commit 3f60f22 Author: Alexander Matveev <[email protected]> Date: Thu Aug 29 14:18:26 2024 -0400 [Core] Combine async postprocessor and multi-step (vllm-project#7921)
commit f205c09 Author: Jonas M. Kübler <[email protected]> Date: Thu Aug 29 07:18:13 2024 +0200 [Bugfix] Unify rank computation across regular decoding and speculative decoding (vllm-project#7899)
commit ef99a78 Author: youkaichao <[email protected]> Date: Wed Aug 28 21:27:06 2024 -0700 Revert "[Core][Kernels] Use FlashInfer backend for FP8 KV Cache when available." (vllm-project#7982)
commit 74d5543 Author: Peter Salas <[email protected]> Date: Wed Aug 28 20:24:31 2024 -0700 [VLM][Core] Fix exceptions on ragged NestedTensors (vllm-project#7974)
commit a7f65c2 Author: youkaichao <[email protected]> Date: Wed Aug 28 17:32:26 2024 -0700 [torch.compile] remove reset (vllm-project#7975)
commit 4289cad Author: Nick Hill <[email protected]> Date: Wed Aug 28 17:22:43 2024 -0700 [Frontend] Minor optimizations to zmq decoupled front-end (vllm-project#7957) Co-authored-by: Robert Shaw <rshaw@neuralmagic>
commit af59df0 Author: Michael Goin <[email protected]> Date: Wed Aug 28 19:19:17 2024 -0400 Remove faulty Meta-Llama-3-8B-Instruct-FP8.yaml lm-eval test (vllm-project#7961)
commit ce6bf3a Author: youkaichao <[email protected]> Date: Wed Aug 28 16:10:12 2024 -0700 [torch.compile] avoid Dynamo guard evaluation overhead (vllm-project#7898) Co-authored-by: Woosuk Kwon <[email protected]>
commit 3cdfe1f Author: bnellnm <[email protected]> Date: Wed Aug 28 18:11:49 2024 -0400 [Bugfix] Make torch registration of punica ops optional (vllm-project#7970)
commit fdd9daa Author: Mor Zusman <[email protected]> Date: Thu Aug 29 01:06:52 2024 +0300 [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM (vllm-project#7651)
commit 8c56e57 Author: Stas Bekman <[email protected]> Date: Wed Aug 28 13:54:23 2024 -0700 [Doc] fix 404 link (vllm-project#7966)
commit eeffde1 Author: Woosuk Kwon <[email protected]> Date: Wed Aug 28 13:10:21 2024 -0700 [TPU] Upgrade PyTorch XLA nightly (vllm-project#7967)
commit e5697d1 Author: rasmith <[email protected]> Date: Wed Aug 28 14:37:47 2024 -0500 [Kernel] [Triton] [AMD] Adding Triton implementations awq_dequantize and awq_gemm to support AWQ (vllm-project#7386)
commit b98cc28 Author: Pavani Majety <[email protected]> Date: Wed Aug 28 10:01:22 2024 -0700 [Core][Kernels] Use FlashInfer backend for FP8 KV Cache when available. (vllm-project#7798) Co-authored-by: Simon Mo <[email protected]>
commit ef9baee Author: Cyrus Leung <[email protected]> Date: Wed Aug 28 23:11:18 2024 +0800 [Bugfix][VLM] Fix incompatibility between vllm-project#7902 and vllm-project#7230 (vllm-project#7948)
commit 98c12cf Author: Stas Bekman <[email protected]> Date: Wed Aug 28 05:12:32 2024 -0700 [Doc] fix the autoAWQ example (vllm-project#7937)
commit f52a43a Author: youkaichao <[email protected]> Date: Wed Aug 28 01:27:07 2024 -0700 [ci][test] fix pp test failure (vllm-project#7945)
commit e358053 Author: Cody Yu <[email protected]> Date: Wed Aug 28 00:36:31 2024 -0700 [Performance] Enable chunked prefill and prefix caching together (vllm-project#7753)