
Comparison

Compare results across multiple simulations side by side.

The comparison page lets you select two or more completed simulations and view their results side by side -- with automated metric extraction, regression detection, and ranking.

The /compare page

Navigate to /compare from the sidebar to access the comparison tool. Select any combination of completed simulations from across your conversations and projects.

Select simulations

Pick two or more completed simulations to compare. You can select from any project or conversation you have access to.

Metric extraction

SimPilot automatically extracts key metrics from each simulation: convergence status, iteration count, residuals, and solver-specific quantities (drag coefficient, heat flux, mass flow rate, etc.).
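The extracted metrics for a single simulation might look like the following. This is an illustrative sketch only; the field names and identifier format are assumptions, not SimPilot's actual schema:

```python
# Hypothetical extracted-metric record for one simulation (field names
# are illustrative, not SimPilot's real schema).
metrics = {
    "simulation_id": "sim-042",       # assumed identifier format
    "convergence_status": "converged",
    "iteration_count": 312,
    "final_residual": 8.4e-7,
    "drag_coefficient": 0.0291,       # solver-specific quantity
}

print(metrics["convergence_status"])
```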

Comparison table

Results are assembled into a side-by-side table with one row per simulation and one column per metric. Sort by any column to rank simulations.
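As a rough sketch of how such a table could be assembled and sorted (simulation names and metric values below are made up for illustration, not SimPilot internals):

```python
# One row per simulation, one column per metric (illustrative data).
rows = [
    {"name": "coarse-mesh", "iterations": 410, "drag_coefficient": 0.0315},
    {"name": "medium-mesh", "iterations": 355, "drag_coefficient": 0.0298},
    {"name": "fine-mesh",   "iterations": 312, "drag_coefficient": 0.0291},
]

# Sorting by any column ranks the simulations by that metric.
by_drag = sorted(rows, key=lambda r: r["drag_coefficient"])
print(by_drag[0]["name"])  # lowest drag first
```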

Analysis

The AI highlights key differences, identifies the best and worst performers for each metric, and flags any regressions.

Regression detection

When comparing against a baseline simulation, the system automatically flags significant differences:
Severity   Condition
Warning    Metric deviates beyond the tolerance threshold
Critical   Metric deviates beyond 2x the tolerance threshold
Default tolerances are 10% for most metrics, 20% for iteration counts, and 50% for residual values. You can set custom tolerances per metric for tighter or looser control.
Automatic baseline
If no baseline is specified, the first simulation in your selection is used as the reference. You can change the baseline at any time.
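The severity rules above can be sketched as a small function. This is a sketch under the documented defaults (10% general, 20% for iteration counts, 50% for residuals), not SimPilot's actual implementation:

```python
def classify(baseline, value, tolerance=0.10):
    """Classify a metric's relative deviation from the baseline.

    tolerance defaults to 10%; pass 0.20 for iteration counts or
    0.50 for residuals, per the documented defaults.
    """
    if baseline == 0:
        return "ok" if value == 0 else "critical"  # avoid division by zero
    deviation = abs(value - baseline) / abs(baseline)
    if deviation > 2 * tolerance:
        return "critical"  # beyond 2x the tolerance threshold
    if deviation > tolerance:
        return "warning"   # beyond the tolerance threshold
    return "ok"

print(classify(100, 108))                   # 8% deviation: ok
print(classify(100, 115))                   # 15% deviation: warning
print(classify(300, 500, tolerance=0.20))   # ~67% deviation: critical
```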

Rankings

For each extracted metric, SimPilot ranks all simulations from best to worst. Rankings make it easy to answer questions like:
  • Which configuration produced the lowest drag?
  • Which mesh density gave the fastest convergence?
  • Which turbulence model best matched the experimental data?
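Per-metric ranking can be sketched as below, assuming lower is better for both metrics shown (the model names and values are illustrative):

```python
# Illustrative results keyed by turbulence model.
results = {
    "k-epsilon":        {"drag_coefficient": 0.0310, "iterations": 420},
    "k-omega-sst":      {"drag_coefficient": 0.0291, "iterations": 355},
    "spalart-allmaras": {"drag_coefficient": 0.0304, "iterations": 380},
}

# Rank simulations best-to-worst for each metric (lower is better here).
rankings = {
    metric: sorted(results, key=lambda name: results[name][metric])
    for metric in ("drag_coefficient", "iterations")
}
print(rankings["drag_coefficient"][0])  # lowest drag: k-omega-sst
```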

What you can compare

Comparisons work across different:
  • Parameter values -- Same setup at different inlet velocities, temperatures, or pressures
  • Mesh densities -- Coarse vs. medium vs. fine mesh results
  • Turbulence models -- k-epsilon vs. k-omega SST vs. Spalart-Allmaras
  • Solvers -- OpenFOAM vs. SU2 for the same problem
  • Geometry variants -- Different design iterations of the same component

Export

Download comparison results for external use:
  • CSV: Full metric table for spreadsheet analysis
  • Report: Include the comparison in a generated simulation report (see Exports & Reports)
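Once downloaded, the CSV can be loaded with standard tools for further analysis. A minimal sketch using Python's csv module, with hypothetical column names (the actual exported columns depend on the metrics extracted):

```python
import csv
import io

# Stand-in for a downloaded comparison CSV (column names are illustrative).
exported = io.StringIO(
    "name,iterations,drag_coefficient\n"
    "coarse-mesh,410,0.0315\n"
    "fine-mesh,312,0.0291\n"
)

rows = list(csv.DictReader(exported))
best = min(rows, key=lambda r: float(r["drag_coefficient"]))
print(best["name"])  # simulation with the lowest drag
```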
Related

  • Interactive Dashboards -- Visualize comparison data with charts, heatmaps, and parallel coordinates.
  • Batch & Parameter Sweeps -- Batch runs include automatic comparison when all jobs complete.