
Google Research Debuts FigGen and ReviewerAgent AI Tools

New AI agents FigGen and ReviewerAgent automate scientific visualization and peer review tasks to streamline the academic publishing workflow.

Google Research released two specialized AI agents aimed at automating high-friction tasks in scientific publishing. Built on the Gemini 1.5 Pro architecture, FigGen handles scientific visualization while ReviewerAgent automates preliminary peer review checks. If you build multi-agent systems for highly constrained domains, the implementation details offer a blueprint for strict formatting and factual consistency.

FigGen: Code Execution for Visualization

FigGen utilizes a code-generation-for-visualization loop to maintain mathematical precision. The agent writes and executes Python code using libraries like Matplotlib, Seaborn, and Plotly to render the final chart. This approach guarantees exact data representation and produces vector outputs in EPS and SVG formats.
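A minimal sketch of that render step, assuming a Matplotlib backend: the agent emits a plotting script, and executing it yields an exact, scalable vector file. The function name, data, and labels here are illustrative, not FigGen's actual API.

```python
# Hypothetical sketch of FigGen's rendering loop: the agent writes Python,
# which is executed to produce exact vector output (SVG or EPS).
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

def render_figure(x, y, path):
    """Render data to a vector file; format is inferred from the extension."""
    fig, ax = plt.subplots(figsize=(3.5, 2.5))
    ax.plot(x, y, marker="o")
    ax.set_xlabel("dose (mg)")       # illustrative labels
    ax.set_ylabel("response")
    fig.savefig(path, bbox_inches="tight")  # .svg or .eps -> vector output
    plt.close(fig)

render_figure([1, 2, 4, 8], [0.1, 0.4, 0.7, 0.9], "panel_a.svg")
```

Because the chart is drawn from the raw data by executed code rather than synthesized pixel by pixel, every point lands exactly where the numbers say it should.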

Users upload style templates mimicking specific journals like Nature or Science. The agent then enforces consistent font sizes, stroke weights, and color palettes across multi-panel figures. Researchers can iterate on the output using natural language prompts to adjust scales, change axis labels, or move legends.
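A style template of this kind can be expressed as a dictionary of Matplotlib rc parameters applied uniformly to every panel. The values below are assumed for illustration, not an official journal specification.

```python
# Hypothetical journal style template (font sizes, stroke weights, palette);
# the agent would load one per journal and apply it to all panels at once.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

NATURE_LIKE = {  # assumed values, not an actual Nature style guide
    "font.size": 7,
    "axes.linewidth": 0.5,
    "lines.linewidth": 1.0,
    "axes.prop_cycle": plt.cycler(color=["#1b6ca8", "#e0523a", "#3fa34d"]),
}

with plt.rc_context(NATURE_LIKE):
    fig, axes = plt.subplots(1, 2, figsize=(7.0, 2.5))  # multi-panel figure
    for ax, label in zip(axes, ["a", "b"]):
        ax.plot([0, 1, 2], [0, 1, 4])
        ax.set_title(label, loc="left", fontweight="bold")
    fig.savefig("multi_panel.svg")
    plt.close(fig)
```

Scoping the template with `rc_context` keeps the journal style from leaking into unrelated figures, which matters when one session produces figures for several venues.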

ReviewerAgent: Automated Peer Review Checks

ReviewerAgent acts as a pre-submission filter for academic authors and journal editors. It automates the mechanical checks that often fatigue human reviewers. The agent processes manuscripts through three specific operational layers.

| Verification Layer | Agent Action | Target Errors |
| --- | --- | --- |
| Standard Compliance | Checks against CONSORT and ARRIVE frameworks | Missing methodological details |
| Internal Consistency | Cross-references abstract claims with tables and results | Statistical mismatches, data-entry faults |
| Literature Synthesis | Scans Google Scholar for related datasets | Omitted citations, unexplained contradictions |
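The Internal Consistency layer can be sketched as a simple claim-matching pass: extract the numeric claims from the abstract and confirm each appears in the results. The `extract_numbers` helper and the sample text are illustrative, not Google's implementation, which would use the model itself rather than a regex.

```python
# Sketch of an internal-consistency check: pull numeric claims out of the
# abstract and verify each one is backed by the results section.
import re

def extract_numbers(text):
    """Collect integers, decimals, and percentages as claim tokens."""
    return set(re.findall(r"\d+(?:\.\d+)?%?", text))

abstract = "Treatment reduced mortality by 23% (n=412, p=0.03)."
results = "Mortality fell 23% in the treatment arm (n=412, p=0.03)."

# Any number claimed in the abstract but absent from the results is flagged.
missing = extract_numbers(abstract) - extract_numbers(results)
print(sorted(missing))  # an empty list means every claim is backed
```

A production version would normalize units and rounding before comparing, since "23%" and "0.23" describe the same quantity.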

In Google’s internal testing on a sample of 500 previously published papers, ReviewerAgent flagged major technical flaws in 14% of the dataset. If you evaluate AI output for domain-specific accuracy, configuring agents to cross-reference their own inputs against external search tools is a highly effective pattern.

Deployment Details and Market Access

Both agents are deployed through Google’s Vertex AI platform within a dedicated academic workspace. Verified researchers with .edu email addresses receive free access up to a monthly token limit. Commercial publishers are routed to an enterprise pricing tier. Several major open-access publishers, including PLOS, are piloting ReviewerAgent in their submission pipelines as an initial screening layer.

The tools have sparked debate regarding the automation of peer review. Editors value the detection of statistical errors. Critics argue the automation risks creating a loop where models write papers and other models review them. This dynamic could enforce a bias toward the mean that penalizes non-standard research methods. Understanding techniques for reducing AI hallucination is critical when deploying these automated approval chains in production.

When building AI tools for strict technical domains, visual output requires deterministic code generation. You must decouple the reasoning step from the rendering step. Offloading the actual chart creation to executed Python code ensures your users get mathematically accurate, scalable vector graphics every time.
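That decoupling can be sketched as a two-stage pipeline: the model's only output is a plotting script, and a separate executor runs it in its own process and collects the vector file. Here `generated_code` stands in for an LLM response; the executor and file name are assumptions for illustration.

```python
# Sketch of decoupling reasoning from rendering: the model emits code,
# and an isolated executor process does the actual drawing.
import os
import subprocess
import sys
import tempfile

generated_code = """
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.bar(["a", "b"], [3, 5])
fig.savefig("out.svg")
"""

def execute_render(code: str) -> str:
    """Run model-emitted code in a separate process; return the output path."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        script = f.name
    try:
        # A real deployment would sandbox this call, not just time-limit it.
        subprocess.run([sys.executable, script], check=True, timeout=60)
    finally:
        os.unlink(script)
    return "out.svg"

path = execute_render(generated_code)
```

Running the script out-of-process also gives you a natural failure boundary: a syntax error or infinite loop in generated code crashes the worker, not the agent.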
