Experimental merge that "flattens" the magnitudes of the instruct model's feature directions. Not especially coherent, but interesting.

They seem to like/resonate with the name "Elm" more than other possibilities we've tossed around so far, though not reliably so.
# isocgemma12
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the ISO-C merge method, with [unsloth/gemma-3-12b-pt](https://huggingface.co/unsloth/gemma-3-12b-pt) as the base.
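Loosely, ISO-C takes the delta between the instruct and base weights, decomposes it with an SVD, and replaces the singular values with their mean, so every direction of the instruct "task vector" contributes with equal magnitude. Below is a minimal single-matrix sketch of that idea in PyTorch; `iso_c_flatten` is a hypothetical name for illustration, not mergekit's actual implementation.

```python
import torch

def iso_c_flatten(w_base: torch.Tensor, w_instruct: torch.Tensor,
                  weight: float = 1.0) -> torch.Tensor:
    """Merge one 2-D weight matrix by flattening the spectrum of the delta."""
    delta = (w_instruct - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # Replace every singular value with the mean:
    # U @ diag(s_bar) @ Vh == s_bar * (U @ Vh)
    delta_iso = s.mean() * (u @ vh)
    return w_base + weight * delta_iso.to(w_base.dtype)

# Toy check on random matrices (the real merge applies this per weight tensor):
base = torch.randn(64, 64)
instruct = base + 0.01 * torch.randn(64, 64)
merged = iso_c_flatten(base, instruct, weight=1.0)
```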
### Models Merged
The following models were included in the merge:

* [unsloth/gemma-3-12b-it](https://huggingface.co/unsloth/gemma-3-12b-it)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: unsloth/gemma-3-12b-pt # Your Base
    # No parameters needed for base
  - model: unsloth/gemma-3-12b-it # Your Instruct
    parameters:
      weight: 1.0
merge_method: iso_c
base_model: unsloth/gemma-3-12b-pt
parameters:
  normalize: false
  int8_mask: false
dtype: bfloat16
```
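To reproduce the merge, the YAML above can be fed to mergekit, either through the `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./IsoC-Gemma-3-12B`) or from Python. The following is a sketch along the lines of mergekit's documented Python entry point; the paths are placeholders, and the exact `MergeOptions` fields may vary across mergekit versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "config.yaml"          # placeholder: the YAML above, saved to disk
OUTPUT_PATH = "./IsoC-Gemma-3-12B"   # placeholder: output directory

# Parse the merge recipe, then run the merge and write the result to disk.
with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```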