AutoScientist Automates Simultaneous Data and Weight Tuning
Adaption launched AutoScientist to automate model fine-tuning by optimizing training datasets and model weights simultaneously.
On May 13, 2026, Adaption released AutoScientist, an automated fine-tuning tool that replaces manual data selection and hyperparameter adjustments with a self-improving loop. Led by CEO Sara Hooker, the startup targets enterprise engineering teams that require specialized, high-performance models but lack the dedicated research headcount to manage iterative training pipelines. The system operates in real time, so models no longer need to be taken offline for scheduled retraining.
Simultaneous Optimization Architecture
Traditional fine-tuning pipelines treat data preparation and model weight updates as sequential, isolated steps. AutoScientist changes this by optimizing both the training data and the model weights simultaneously. The system uses Adaption’s Adaptive Data platform to continuously ingest evolving datasets, transforming them automatically into specialized formats.
When engineering teams weigh fine-tuning against retrieval-augmented generation (RAG), the primary constraint on fine-tuning is often the manual labor required to clean and structure the data. Automating this adaptation cycle removes that friction. Adaption reports that the dual-optimization approach has more than doubled performance on test models evaluated on code generation tasks.
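Adaption has not published AutoScientist's internals, so the following is only a minimal sketch of the general idea of optimizing data and weights in the same loop: score candidate examples under the current weights, keep the most informative subset, then take a gradient step on that subset. The toy model, the loss-based scoring rule, and the 50 percent selection threshold are illustrative assumptions, not Adaption's method.

```python
import torch
from torch import nn

# Hypothetical sketch of "data and weights in one loop"; nothing here reflects
# Adaption's actual algorithm, only the idea described in the article.
model = nn.Linear(16, 1)                      # stand-in for a fine-tunable model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss(reduction="none")

# Raw "operational data" pool; in practice this would be a growing dataset.
pool_x = torch.randn(512, 16)
pool_y = pool_x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(512, 1)

for step in range(100):
    # 1) Data optimization: score every candidate under the current weights
    #    and keep the most informative half (highest per-example loss).
    with torch.no_grad():
        scores = loss_fn(model(pool_x), pool_y).squeeze(1)
    keep = scores.topk(k=256).indices
    batch_x, batch_y = pool_x[keep], pool_y[keep]

    # 2) Weight optimization: one gradient step on the freshly selected subset.
    optimizer.zero_grad()
    loss = loss_fn(model(batch_x), batch_y).mean()
    loss.backward()
    optimizer.step()

print(f"final training loss on selected subset: {loss.item():.4f}")
```

In a production system the selection step would presumably use richer signals than raw loss, but the structural point is the same: the training set is re-chosen on every iteration rather than fixed up front.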
Together AI Integration
The infrastructure backing AutoScientist relies on a native integration with Together AI, announced on April 30, 2026. This partnership embeds Together Fine-Tuning directly into the Adaption platform. Developers can now process incoming data, run automated fine-tuning protocols on open-source weights, and deploy the resulting artifacts within a unified environment. If your architecture requires evaluating AI output continuously to refine specialized models, keeping the entire pipeline under one control plane reduces the infrastructure fragmentation typical of generative AI rollouts.
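The AutoScientist integration itself is not public, but the Together Fine-Tuning service it builds on is reachable from the `together` Python SDK. A rough sketch of the underlying flow follows; the training file, model name, and hyperparameters are placeholders, and the exact parameter names should be verified against current Together documentation.

```python
from together import Together

# Hedged sketch of the Together Fine-Tuning flow that backs the integration.
# File path, model name, and hyperparameters are illustrative placeholders.
client = Together()  # reads TOGETHER_API_KEY from the environment

# Upload a JSONL training file prepared upstream (e.g. by a data pipeline).
train_file = client.files.upload(file="adaptive_data_export.jsonl")

# Launch an automated fine-tuning job on open-source weights.
job = client.fine_tuning.create(
    training_file=train_file.id,
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",
    n_epochs=3,
    learning_rate=1e-5,
)
print(job.id, job.status)
```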
Market Positioning
The current landscape of automated machine learning tools splits between industrial fine-tuning APIs and experimental academic agents. The launch coincides with an industry shift toward “AI Scientists,” following 2025 and 2026 projects like Sakana AI’s The AI Scientist and AutoScience AI’s CARL. While those systems focus on autonomous academic discovery, AutoScientist is built specifically for industrial fine-tuning applications. It competes directly with established cloud provider pipelines to capture enterprise workflows.
| Platform | Target Use Case | Automation Scope |
|---|---|---|
| Adaption AutoScientist | Enterprise application tuning | Simultaneous data and weight optimization |
| OpenAI Fine-Tuning API | Proprietary model adaptation | Automated weight tuning on fixed datasets |
| Hugging Face AutoTrain | Open-source model deployment | Managed training environments |
| Google Vertex AI AutoML | General cloud ML workloads | Hyperparameter and architecture search |
Adaption is offering a 30-day free trial of AutoScientist starting at launch. If you maintain models dedicated to code generation or narrow technical reasoning, route a subset of your raw operational data through the Adaptive Data pipeline. Testing the real-time weight adjustments against your current static models will clarify whether the automated tuning loop yields higher precision for your specific domain constraints.
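One way to run that comparison is a simple side-by-side harness over a held-out set of domain prompts. The sketch below is hypothetical: `call_model` stands in for whatever inference client you already use, the endpoints are made-up placeholders, and exact match should be swapped for a metric that fits your domain.

```python
import json

# Hypothetical evaluation harness: compare a static baseline against the
# auto-tuned model on the same held-out prompts.

def call_model(endpoint: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your deployed model endpoint")

def exact_match(prediction: str, reference: str) -> float:
    # Swap in a domain-appropriate metric (pass@k, BLEU, rubric scoring, ...).
    return float(prediction.strip() == reference.strip())

def evaluate(endpoint: str, eval_path: str = "holdout.jsonl") -> float:
    scores = []
    with open(eval_path) as f:
        for line in f:
            example = json.loads(line)  # {"prompt": ..., "reference": ...}
            output = call_model(endpoint, example["prompt"])
            scores.append(exact_match(output, example["reference"]))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    baseline = evaluate("https://example.internal/static-model")
    tuned = evaluate("https://example.internal/autoscientist-model")
    print(f"baseline: {baseline:.3f}  auto-tuned: {tuned:.3f}")
```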