Less is Enough: Synthesizing Diverse Data in Feature Space of LLMs
Abstract
Feature Activation Coverage measures data diversity in an interpretable feature space and enables diversity-driven data synthesis that improves downstream performance across multiple language model architectures.
The diversity of post-training data is critical for effective downstream performance in large language models (LLMs). Many existing approaches to constructing post-training data quantify diversity using text-based metrics that capture linguistic variation, but such metrics provide only weak signals for the task-relevant features that determine downstream performance. In this work, we introduce Feature Activation Coverage (FAC), which measures data diversity in an interpretable feature space. Building on this metric, we further propose a diversity-driven data synthesis framework, named FAC Synthesis, that first uses a sparse autoencoder to identify missing features in a seed dataset and then generates synthetic samples that explicitly reflect these features. Experiments show that our approach consistently improves both data diversity and downstream performance on various tasks, including instruction following, toxicity detection, reward modeling, and behavior steering. Interestingly, we identify a shared, interpretable feature space across model families (i.e., LLaMA, Mistral, and Qwen), enabling cross-model knowledge transfer. Our work provides a solid and practical methodology for data-centric optimization of LLMs.
Community
Less is Enough shows that better data matters more than more data.
Instead of generating massive amounts of synthetic data, we look inside the model’s hidden features to find what is truly missing. We introduce Feature Activation Coverage (FAC) to measure which important internal features are underrepresented, then generate new samples that specifically activate those features.
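For intuition, here is one way a coverage metric like FAC can be computed from sparse-autoencoder (SAE) activations: a feature counts as covered if any sample in the dataset activates it above a threshold. This is a minimal sketch; the threshold and the `feature_activation_coverage` helper are assumptions for illustration, not the paper's exact definition or released code.

```python
import numpy as np

def feature_activation_coverage(activations: np.ndarray, threshold: float = 0.0) -> float:
    """Fraction of SAE features activated by at least one sample.

    activations: (num_samples, num_features) array of sparse-autoencoder
        feature activations for the dataset.
    threshold: minimum activation for a feature to count as "covered"
        (an assumed hyperparameter for this sketch).
    """
    # A feature is covered if any sample activates it above the threshold.
    covered = (activations > threshold).any(axis=0)
    return float(covered.mean())

# Toy example: 3 samples, 5 SAE features; features 3 and 4 never fire.
acts = np.array([
    [0.9, 0.0, 0.2, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.0, 0.0],
    [0.3, 0.0, 0.7, 0.0, 0.0],
])
print(feature_activation_coverage(acts))  # 0.6 -> 3 of 5 features covered
```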
Result: FAC exhibits a strong correlation with downstream performance. Increasing FAC brings significantly larger gains than simply adding more samples. With only 2K synthetic samples, we match MAGPIE’s performance on AlpacaEval 2.0 (which uses 300K samples) and outperform strong baselines across instruction following, toxicity detection, reward modeling, and behavior steering.
Interestingly, we further discover a shared, interpretable feature space across LLaMA, Mistral, and Qwen, which enables effective cross-model knowledge transfer between different model families.
- Paper: arXiv:2602.10388
- Code: GitHub
- Website: https://website-sigma-three-35.vercel.app/
- Demo: https://huggingface.co/spaces/Zhongzhi1228/synthesis-demo (Work in Progress)
- We introduce Feature Activation Coverage (FAC), an interpretable metric that measures data diversity in the feature space of LLMs rather than surface text variation.
- Building on FAC, we propose a FAC-guided data synthesis framework that identifies missing functional features and generates targeted synthetic data to fill coverage gaps (a minimal sketch of this loop follows the list).
- Experiments across reward modeling, toxicity detection, and controllable generation show that FAC-guided synthesis significantly improves downstream performance with much less data.
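To make the pipeline concrete, below is a minimal sketch of a FAC-guided synthesis loop under stated assumptions: `feature_descriptions` stands in for an SAE interpretation step that labels each feature in natural language, and `generate` is a hypothetical callable wrapping an LLM call; neither name comes from the paper.

```python
import numpy as np

def missing_features(activations: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Indices of SAE features never activated above the threshold by the seed data."""
    covered = (activations > threshold).any(axis=0)
    return np.flatnonzero(~covered)

def fac_guided_synthesis(seed_activations, feature_descriptions, generate, budget=2000):
    """Round out feature coverage by generating samples for uncovered features.

    feature_descriptions: human-readable interpretation per SAE feature.
    generate: callable turning a feature description into a synthetic sample,
        e.g. by prompting an LLM (an assumed interface, not the paper's prompt).
    budget: cap on the number of synthetic samples to produce.
    """
    gaps = missing_features(seed_activations)
    synthetic = []
    for idx in gaps[:budget]:
        prompt = f"Write an instruction-response pair exhibiting: {feature_descriptions[idx]}"
        synthetic.append(generate(prompt))
    return synthetic
```

The design intent is that each new sample targets one specific uncovered feature, so a small budget (e.g., 2K samples) can close coverage gaps instead of resampling already-covered behavior.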
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Finding the Translation Switch: Discovering and Exploiting the Task-Initiation Features in LLMs (2026)
- UCoder: Unsupervised Code Generation by Internal Probing of Large Language Models (2025)
- Flatter Tokens are More Valuable for Speculative Draft Model Training (2026)
- Can abstract concepts from LLM improve SLM performance? (2025)
- Code Mixologist : A Practitioner's Guide to Building Code-Mixed LLMs (2026)
- Chunky Post-Training: Data Driven Failures of Generalization (2026)
- Steering Language Models Before They Speak: Logit-Level Interventions (2026)