AI Code Review with Screenshot Capture and Figma Comparison

CodeLoop Team · April 30, 2026 · 6 min read

Most AI code-review tools stop at "the test passed". For UI projects, that's not enough. The page can render correctly, the test can pass, and the design can still be wrong. This post walks through the pipeline that catches design drift the moment the agent introduces it.

The pipeline

  • The agent edits a UI file.
  • The MCP layer (CodeLoop) detects a UI change and calls:
    - codeloop_capture_screenshot on each affected route.
    - codeloop_visual_review for a vision-model sanity check.
    - codeloop_design_compare for a pixel diff against the Figma export.
  • If the design diff exceeds a threshold (default 2.5% per region), the gate-check returns continue_fixing with the diff regions as repair tasks (sketched below).
  • The agent fixes the spacing / colour / typography drift and re-runs.
  • Only when the diff is below threshold does codeloop_gate_check return ready_for_review.
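As a rough sketch (the field names here are illustrative, not lifted from the CodeLoop docs), a continue_fixing response from codeloop_gate_check might hand the failing regions back to the agent along these lines:

```json
{
  "status": "continue_fixing",
  "repair_tasks": [
    {
      "region": "header.cta",
      "issue": "region drifts from the Figma frame beyond the 2.5% threshold",
      "evidence": "diff-header-cta.png"
    }
  ]
}
```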
Setting it up

You need:

  • CodeLoop installed: npx codeloop init.
  • Figma exports under designs/ (PNG), one per major route. Or, if you use the [Figma REST API integration](https://codeloop.tech/docs/design-compare), a small .codeloop/figma.json mapping routes to frame URLs (a sketch follows this list).
  • The user rule (auto-installed) that tells the agent to call codeloop_design_compare on UI changes.
That's it. The pipeline runs locally; no design files leave your machine unless you opt into the Figma REST integration (which uses your own FIGMA_API_TOKEN).
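For the REST route, the mapping file simply pairs app routes with Figma frame URLs. A minimal sketch of what .codeloop/figma.json could contain (the exact schema and the URLs are placeholders, not taken from the docs):

```json
{
  "/": "https://www.figma.com/design/FILE_KEY/Landing?node-id=1-2",
  "/pricing": "https://www.figma.com/design/FILE_KEY/Pricing?node-id=3-4"
}
```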

What the pixel diff actually checks

codeloop_design_compare uses pixelmatch under the hood. The output is structured per region:


```json
{
  "regions": [
    { "name": "header.logo", "diff_pct": 0.2, "ok": true },
    { "name": "header.cta", "diff_pct": 4.1, "ok": false, "evidence": "diff-header-cta.png" },
    { "name": "form.email", "diff_pct": 0.0, "ok": true }
  ],
  "overall_pct": 1.6,
  "ok": false,
  "blocker_regions": ["header.cta"]
}
```

The agent uses blocker_regions to know exactly which regions to fix.

Multi-viewport coverage

If your designs are responsive (mobile / tablet / desktop frames), CodeLoop fans the comparison across all three viewports automatically. The gate-check fails if any viewport's overall_pct is above threshold.
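As an illustration only (the per-viewport report shape is assumed, not documented here), a responsive run might surface results like the following, where the mobile frame alone pushes the gate-check into failure:

```json
{
  "viewports": {
    "mobile":  { "overall_pct": 3.4, "ok": false },
    "tablet":  { "overall_pct": 1.1, "ok": true },
    "desktop": { "overall_pct": 0.8, "ok": true }
  },
  "ok": false
}
```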

Why this is hard to do without CodeLoop

Each piece (screenshot, pixel diff, Figma export, vision review) exists separately as a library. The hard part is the orchestration: knowing which routes the agent's edit affected, when to capture, what to compare against, and how to feed the diff back as repair tasks. CodeLoop's MCP tools wire that loop together so the agent does it on its own.

Read more

- Design comparison docs
- Visual review docs
- 29 MCP tools

Frequently asked questions

How do I set up AI code review with screenshot capture?

Install CodeLoop (`npx codeloop init`). The MCP layer auto-detects UI changes and calls codeloop_capture_screenshot + codeloop_visual_review on each affected route. No additional setup needed.

How do I compare against Figma designs?

Drop PNG exports under `designs/`, or configure `.codeloop/figma.json` for live Figma REST API fetching. CodeLoop's codeloop_design_compare runs pixelmatch per region and returns blocker regions for the agent to fix.

What's the diff threshold I should use?

Default is 2.5% per region. Tighten to 1% for pixel-perfect brand work; loosen to 5% for rapid prototyping. Configure via `design_compare.threshold_pct` in `.codeloop/config.json`.
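Assuming the dotted key maps onto nested JSON (the snippet below is a sketch, not copied from the docs), the relevant part of `.codeloop/config.json` would look something like:

```json
{
  "design_compare": {
    "threshold_pct": 2.5
  }
}
```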