
Introduce 6-bit quantization for Llama in torchchat #116

Annotations: 2 warnings

pytorch/ao / wheel-py3_10-cpu: succeeded Oct 3, 2024 in 1m 18s
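
For context on what the PR titled above introduces, here is a minimal sketch of 6-bit group-wise weight quantization in plain PyTorch. The group size, symmetric scheme, and function names are illustrative assumptions, not the PR's actual implementation; real low-bit kernels typically pack the 6-bit values into a compact bit layout, while this sketch keeps them in int8 for clarity.

```python
import torch


def quantize_6bit(weight: torch.Tensor, group_size: int = 32):
    """Symmetric 6-bit group-wise quantization of a 2-D weight tensor.

    Illustrative sketch only: values stay in int8 containers here instead
    of being bit-packed as a production kernel would do.
    """
    out_features, in_features = weight.shape
    assert in_features % group_size == 0
    # Reshape so each slice along the last dim is one quantization group.
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # Signed 6-bit range is [-32, 31]; scale each group by its max magnitude.
    max_val = w.abs().amax(dim=-1, keepdim=True)
    scale = max_val.clamp(min=1e-8) / 31.0
    q = torch.clamp(torch.round(w / scale), -32, 31).to(torch.int8)
    return q, scale


def dequantize_6bit(q: torch.Tensor, scale: torch.Tensor, shape):
    """Reconstruct an approximate float weight from 6-bit values and scales."""
    return (q.to(torch.float32) * scale).reshape(shape)


if __name__ == "__main__":
    w = torch.randn(64, 128)
    q, scale = quantize_6bit(w, group_size=32)
    w_hat = dequantize_6bit(q, scale, w.shape)
    print("max abs error:", (w - w_hat).abs().max().item())
```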