The Paradigm Shift to Data-Centric AI
For decades, AI research focused on building better models. The new frontier of progress, however, lies in systematically engineering the data itself. This shift is driven by critical bottlenecks in the model-centric approach.
A large share of AI projects fail due to issues with data quality, not model flaws.
Historically, the overwhelming majority of research effort was model-centric, leading to a focus on code over data.
The supply of high-quality public text data is projected to be exhausted, forcing a move to data engineering.
Toolkit: Programmatic Labeling with Weak Supervision
When you have abundant unlabeled data but no labels, weak supervision allows you to programmatically create a large training set by encoding domain knowledge into heuristic "Labeling Functions" (LFs).
The Weak Supervision Pipeline
Write Labeling Functions (LFs)
Encode heuristics (e.g., keyword searches, patterns, LLM prompts) to programmatically vote on labels or abstain.
Train Generative Label Model
This model learns the accuracies and correlations of your LFs by observing their agreements and disagreements—no ground truth needed.
Generate Probabilistic Labels
The output is a full training set with "soft" labels (e.g., 90% Class A), capturing the model's aggregate confidence.
Train Discriminative End Model
A powerful end model (e.g., a Transformer) learns from the probabilistic labels to generalize beyond the simple heuristics of the LFs.
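A minimal sketch of this pipeline in Python, using pure NumPy: the spam-detection labeling functions are hypothetical, and a simple vote stands in for the learned generative label model that a framework like Snorkel would fit.

```python
import numpy as np

ABSTAIN, HAM, SPAM = -1, 0, 1

# --- Step 1: Labeling functions encode domain heuristics. ---
# Each LF votes SPAM/HAM or abstains; these keyword rules are hypothetical.
def lf_contains_offer(text):
    return SPAM if "free offer" in text.lower() else ABSTAIN

def lf_many_exclamations(text):
    return SPAM if text.count("!") >= 3 else ABSTAIN

def lf_greets_by_name(text):
    return HAM if text.lower().startswith("hi ") else ABSTAIN

lfs = [lf_contains_offer, lf_many_exclamations, lf_greets_by_name]

docs = [
    "Hi Ana, see you at the standup tomorrow.",
    "FREE OFFER!!! Click now!!!",
    "Limited free offer for subscribers.",
]

# Label matrix L: one row per document, one column per LF vote.
L = np.array([[lf(d) for lf in lfs] for d in docs])

# --- Steps 2-3: Aggregate LF votes into probabilistic labels. ---
# A real label model learns LF accuracies from their agreements and
# disagreements; a plain vote is used here as a stand-in.
def probabilistic_labels(L, n_classes=2):
    probs = np.full((len(L), n_classes), 1.0 / n_classes)  # uniform prior
    for i, row in enumerate(L):
        votes = row[row != ABSTAIN]
        if len(votes):
            counts = np.bincount(votes, minlength=n_classes)
            probs[i] = counts / counts.sum()
    return probs

y_prob = probabilistic_labels(L)
print(y_prob)  # soft labels, e.g., [0.0, 1.0] for the obvious spam
# Step 4 would train a discriminative end model on (docs, y_prob).
```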
Toolkit: Efficient Data Selection with Active Learning
When your labeling budget is limited, Active Learning (AL) helps you maximize model performance by intelligently selecting the most valuable data points for manual annotation.
Query Strategies
The "brain" of an active learner is its query strategy: the rule that decides which unlabeled points are worth annotating next. Common choices include uncertainty sampling (query the points the model is least confident about), query-by-committee (query the points where an ensemble of models disagrees), and diversity sampling (query points that cover underrepresented regions of the feature space).
Toolkit: Creating New Data with Generative Methods
When you need to fill gaps in your dataset, cover rare edge cases, or simply create more data, generative methods offer a way forward. Optimal Transport (OT) provides a principled, geometric framework for this task.
Principled Augmentation with Optimal Transport
Naive Interpolation (e.g., Mixup)
Simply averaging data points can create unrealistic samples that fall "off" the true data manifold, harming model training.
Wasserstein Barycenters (OT)
OT finds a geometric "average" that respects the data's structure, producing realistic, high-fidelity samples.
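A runnable comparison sketch using NumPy and the POT library (ot), whose ot.bregman.barycenter follows its documented 1-D barycenter example; the Gaussian histograms and the regularization strength are illustrative assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

x = np.linspace(0, 10, 100)

def gauss_hist(mu, sigma=0.6):
    """Normalized histogram of a Gaussian bump on the grid x."""
    h = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return h / h.sum()

a, b = gauss_hist(2.0), gauss_hist(8.0)

# Naive interpolation (mixup-style): averaging yields a bimodal,
# off-manifold sample with mass at both x=2 and x=8.
mixup = 0.5 * a + 0.5 * b

# Wasserstein barycenter: a single realistic mode midway between the
# inputs, because OT respects the geometry of the support.
M = ot.dist(x.reshape(-1, 1), x.reshape(-1, 1))  # squared Euclidean cost
M /= M.max()
A = np.vstack([a, b]).T                          # one histogram per column
bary = ot.bregman.barycenter(A, M, reg=1e-3, weights=np.array([0.5, 0.5]))

print("mixup modes near x =", np.sort(x[np.argsort(mixup)[-2:]]))
print("barycenter mode near x =", x[bary.argmax()])
```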
Build Your Strategy: A Unified Framework
These techniques are most powerful when combined: weak supervision to bootstrap a label set from an unlabeled pool, active learning to direct a limited annotation budget, and generative augmentation to fill the remaining gaps.
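To make the combined workflow concrete, here is a small decision helper in Python; the problem attributes, thresholds, and recommendations are hypothetical illustrations of the decision logic, not a validated rubric.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    labeled_examples: int       # ground-truth labels already in hand
    unlabeled_examples: int     # size of the raw, unlabeled pool
    annotation_budget: int      # additional labels you can afford
    has_rare_edge_cases: bool   # gaps the raw data does not cover

def recommend(p: Problem) -> list:
    """Map a problem profile to a suggested workflow (illustrative)."""
    plan = []
    if p.unlabeled_examples > 10 * max(p.labeled_examples, 1):
        plan.append("weak supervision: write LFs to bootstrap a label set")
    if p.annotation_budget > 0:
        plan.append("active learning: spend the budget on the most "
                    "informative points")
    if p.has_rare_edge_cases:
        plan.append("generative augmentation (e.g., OT barycenters) "
                    "to cover the gaps")
    return plan or ["collect more raw data first"]

print(recommend(Problem(labeled_examples=200,
                        unlabeled_examples=100_000,
                        annotation_budget=1_000,
                        has_rare_edge_cases=True)))
```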