Add quantized model
#3
by
sigridjineth
- opened
Hello, we have just quantized the model to AWQ-style 4-bit precision. Please note that it is available as an option:
https://huggingface.co/sionic-ai/bge-reasoner-embed-qwen3-8b-0923-AWQ-4bit
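
Below is a minimal usage sketch, not an official recipe: it assumes the AWQ checkpoint loads through `transformers` with `autoawq` installed, and that last-token pooling over the final hidden state applies (common for Qwen3-based embedding models). The pooling choice and any query/instruction formatting on the model card should take precedence.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sionic-ai/bge-reasoner-embed-qwen3-8b-0923-AWQ-4bit"

# Loading an AWQ checkpoint via transformers requires the autoawq package;
# the quantization config is read from the repository automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "right"  # assumed; needed for the last-token indexing below
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
model.eval()

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    # Assumption: last-token pooling -- take the hidden state of the last
    # non-padding token in each sequence, then L2-normalize.
    last = batch["attention_mask"].sum(dim=1) - 1
    emb = hidden[torch.arange(hidden.size(0)), last]
    return torch.nn.functional.normalize(emb, p=2, dim=1)

print(embed(["What is AWQ 4-bit quantization?"]).shape)
```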