Qwen2.5-Coder-1.5B LoRA (DEEP)

LoRA adapter fine-tuned on CodeGen-Deep-5K dataset.
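LoRA keeps the base model's weights frozen and trains only a low-rank update on top of them. A toy NumPy sketch of the idea (dimensions and initialization scheme are illustrative, not the adapter's actual shapes):

```python
import numpy as np

d, k, r = 8, 8, 2          # illustrative dims; r is the LoRA rank (r << d, k)
W = np.random.randn(d, k)  # frozen base weight
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))       # B starts at zero, so the adapter is a no-op initially

W_eff = W + B @ A          # effective weight with the adapter applied
print(np.allclose(W_eff, W))  # True at initialization
```

Only A and B (d*r + r*k values) are trained and stored in the adapter, which is why the checkpoint is tiny compared with the full model.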

Performance

  • Pass@1: 36.6%
  • Best checkpoint: step-200
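Pass@1 here is the standard code-generation metric: the fraction of evaluation problems whose first generated solution passes all unit tests. A minimal sketch of the computation (the helper name and sample data are hypothetical):

```python
# Hypothetical illustration of Pass@1: the share of problems solved
# by the model's first sampled completion.
def pass_at_1(first_sample_passed):
    """first_sample_passed: list of booleans, one per problem."""
    return sum(first_sample_passed) / len(first_sample_passed)

# e.g. if the first sample passes on 3 of 8 problems:
print(pass_at_1([True, False, True, False, False, True, False, False]))  # 0.375
```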

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct"

# Load the tokenizer and base model, then attach the LoRA adapter on top
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, "erdem_kandilci/qwen2.5-coder-1.5b-lora-deep")
Model tree for erdem12345/qwen2.5-coder-1.5b-lora-deep

  • Base model: Qwen/Qwen2.5-1.5B
  • This model is one of 63 adapters for the base model.