
Achieving 10,000x training data reduction with high-fidelity labels


Experiments

We wanted to understand which models and tasks would benefit most from our curation process. As baselines for our experiments, we fine-tuned two LLMs of different sizes (Gemini Nano-1 with 1.8B parameters and Nano-2 with 3.25B parameters) on two tasks of different complexity (lower and higher, based on expert alignment) using crowdsourced labels. Each crowdsourced data set has ~100K annotations and a strong class imbalance, with around 95% benign labels on average.

We compared each of these four baseline conditions against the corresponding curated condition, in which each model (Nano-1 and Nano-2) is fine-tuned over multiple rounds using the curation process described above. At each iteration, we selected our curated set of examples and used them for model evaluation and fine-tuning, as described above. All models plateaued before reaching parity with the experts' internal alignment, so we stopped at 6 iterations (~400 fine-tuning and ~250 evaluation samples) for the lower complexity task and 5 iterations (~250 fine-tuning and ~150 evaluation samples) for the higher complexity task. (Note that the lower complexity task had a larger variety of examples, which may account for the longer time needed to converge.) Both data sets had a final class balance of ~40% positive examples.
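The iterative loop described above can be sketched in simplified form. This is a minimal toy simulation, not the authors' implementation: the "model" is a 1-D threshold classifier, the "expert" is a perfect labeling function, and the selection heuristic (a mix of most-uncertain and random examples per round) is an assumption made here for illustration.

```python
import random

def finetune(threshold, examples):
    # Stand-in for fine-tuning: move the decision threshold midway between
    # the largest expert-labeled negative and the smallest positive.
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    if pos and neg:
        return (min(pos) + max(neg)) / 2
    return threshold

def curation_loop(pool, expert_label, iterations=6, batch=50):
    pool = list(pool)
    threshold = 0.5
    finetune_set, eval_set = [], []
    for _ in range(iterations):
        # Curate a batch: examples the current model is least certain about
        # (closest to the decision boundary), plus a random exploration slice.
        pool.sort(key=lambda x: abs(x - threshold))
        uncertain, rest = pool[: batch // 2], pool[batch // 2 :]
        random.shuffle(rest)
        explore, pool = rest[: batch // 2], rest[batch // 2 :]
        # Experts label the curated batch; split it between fine-tuning
        # and evaluation, then fine-tune on the accumulated curated data.
        labeled = [(x, expert_label(x)) for x in uncertain + explore]
        finetune_set += labeled[: batch * 2 // 3]
        eval_set += labeled[batch * 2 // 3 :]
        threshold = finetune(threshold, finetune_set)
    accuracy = sum((x > threshold) == y for x, y in eval_set) / len(eval_set)
    return threshold, accuracy

random.seed(0)
pool = [random.random() for _ in range(1000)]
threshold, acc = curation_loop(pool, lambda x: int(x > 0.7))
```

With a few hundred curated labels the toy model recovers the expert's decision boundary (here 0.7), mirroring the pattern above where a few hundred curated samples replace ~100K crowdsourced annotations.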

The table below provides an overview of the size and quality of the data used in each condition. Experts reached an average pairwise Cohen's Kappa of .81 (on the lower complexity task) and .78 (on the higher complexity task) through the curation process. We consider these the ceiling for model performance. To assess the quality of our crowdsourced data, we calculated Kappa alignment between crowdsourced annotations and experts based on our full curated set, which was .59 (lower complexity) and .41 (higher complexity).
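Pairwise Cohen's Kappa, used here to measure inter-annotator alignment, corrects observed agreement for the agreement two annotators would reach by chance given their label marginals. A minimal stdlib-only implementation (scikit-learn's `cohen_kappa_score` is the usual choice in practice):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' labels for the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement under independence, from each annotator's label marginals.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    if p_e == 1:  # both annotators use a single identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1.0 for perfect agreement and 0.0 for chance-level agreement, which is why it is a stricter quality measure than raw percent agreement on a data set that is ~95% benign.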
