Guide: Backtests Run Lifecycle

Launch, review, and carry forward backtest runs from the signed-in validation workspace.

Signed-in workflow

What this guide covers

Use this guide when you need to move from a chosen market idea into a queued run, then inspect the verdict and decide whether to rerun, compare, or carry it forward.

Use /backtests as the operational home for recent evidence, active-run monitoring, and the next recommended validation action.

Treat /backtests/new as a commitment surface where symbol, interval, range, and validation posture are locked before deep tuning starts.

Use run detail and compare views to decide whether a result should be revised, compared, or carried forward into downstream operations.

Routes and surfaces

These are the current backtest surfaces that shape the run lifecycle in the product.

/backtests

Backtests home

Review run KPIs, queue state, completed results, and the current operating focus from the primary list page.

/backtests/new

New run

Carry the symbol, range, and strategy into one starter run first, then expand into deeper execution assumptions only after the core commitment is clear.

/backtests/[runId]

Run detail

Inspect the metrics, progress, and deeper evidence for a specific backtest after it has been created.

/backtests/compare

Compare

Compare completed runs when you need to rank variants, ranges, or assumptions side by side.

/strategies/sweeps | /strategies/optimizer

Adjacent strategy tools

Use sweeps and optimizer surfaces when you need broader search or parameter iteration around the backtest workflow.
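If you script links or navigation around these surfaces, the route shapes above can be collected in one place. The sketch below is illustrative only; the constant names and the run_detail helper are assumptions, not part of any published SerenQuant client.

# Illustrative route map for the backtest surfaces described above.
# BACKTEST_ROUTES and run_detail() are assumptions for scripting
# convenience, not a documented SerenQuant interface.

BACKTEST_ROUTES = {
    "home": "/backtests",                  # KPIs, queue state, completed results
    "new": "/backtests/new",               # commitment surface for a new run
    "compare": "/backtests/compare",       # rank completed runs side by side
    "sweeps": "/strategies/sweeps",        # broader parameter search
    "optimizer": "/strategies/optimizer",  # parameter iteration around the workflow
}

def run_detail(run_id: str) -> str:
    """Build the run-detail path for a specific backtest."""
    return f"/backtests/{run_id}"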

Recommended run lifecycle

Keep the backtest flow explicit so you can distinguish between exploration, validation, and promotion-ready evidence.

01

Define the run clearly

Start from a specific market context, symbol, range, and strategy hypothesis before you queue anything, ideally from the same context you already selected in Markets.

02

Monitor queue and progress

Use the Backtests home route to track queued or running jobs while keeping the latest strong result and the next recommendation in the same view.

03

Inspect completed runs

Read the verdict first, then review return, drawdown, and diagnostics to decide whether the run is promising, mixed, weak, or blocked.

04

Compare before promotion

Compare candidates or carry the result into portfolio, live, or operations only after the verdict and evidence chain are coherent.
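Read the four steps as a small loop: commit a definition, watch the queue, read the verdict, then pick a follow-up. The sketch below illustrates that loop with a hypothetical client; queue_run, get_run, the status strings, and the verdict labels are all assumptions rather than a documented SerenQuant API.

import time

# Hypothetical client wrapper; method names, statuses, and verdict labels
# are assumptions used to illustrate the lifecycle, not a real API.
def run_lifecycle(client, definition: dict) -> str:
    run_id = client.queue_run(definition)           # 01: define and queue the run

    while True:                                     # 02: monitor queue and progress
        run = client.get_run(run_id)
        if run["status"] in ("completed", "failed"):
            break
        time.sleep(30)

    verdict = run.get("verdict", "blocked")         # 03: read the verdict first
    if verdict in ("promising", "mixed"):
        return "compare"                            # 04: compare before promotion
    if verdict == "weak":
        return "revise"
    return "investigate"                            # failed or blocked runs need diagnosis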

Run-list and monitoring controls

The main backtests page is the operating surface for queue visibility and candidate selection.

workflow hero

Workflow hero and KPI strip

The workflow hero and KPI strip should show the latest meaningful result, active queue posture, and the next validation move before you drop into the raw run table.

status summary

Running, completed, and failure posture

Use the KPI strip to distinguish between an idle list, a queue backlog, and a healthy completed-run backlog before deciding to launch another job.

layout controls

Density and layout preferences

Page layout controls and density toggles determine how much run context is visible at once; they matter most once the page becomes an operational queue.

run table

Run list and detail navigation

The run table is the handoff from monitoring into inspection. It combines symbol, strategy, range, status, stage, progress, and direct links into run detail.
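If you triage the run list programmatically, the same idle, backlog, and failure distinctions can be derived from run statuses. The sketch below is a rough illustration; the status strings and posture labels are assumptions, not values the product guarantees.

# Illustrative triage of the run list into the postures the KPI strip surfaces.
# The status strings are assumptions; substitute whatever the run table reports.

def queue_posture(runs: list[dict]) -> str:
    queued = sum(r["status"] in ("queued", "running") for r in runs)
    completed = sum(r["status"] == "completed" for r in runs)
    failed = sum(r["status"] == "failed" for r in runs)

    if not runs:
        return "idle"               # nothing to monitor; safe to launch
    if failed and failed >= completed:
        return "failure posture"    # diagnose before queueing more work
    if queued > completed:
        return "queue backlog"      # let jobs drain before adding another
    return "healthy backlog"        # completed evidence ready for review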

New-run wizard controls

The new-run page is a structured wizard, not a flat form.

wizard posture

Step rail, step availability, and draft prompts

The step rail and step-unavailable warnings keep run setup ordered. Strategy prompt drafts can also pre-seed the wizard before you touch the main form.

run context

Context rail and manifest summary

The context rail and manifest summary restate what the run commits to once the core choices are set, so you can validate or launch without reconstructing the setup.

universe definition

Universe, dataset, benchmark, and portfolio inputs

The universe panel owns watchlist, instrument, run mode, candidate, dataset, optimizer profile, benchmark, rebalance, and date-range decisions.

strategy and realism

Strategy, validation, execution, and realism assumptions

The non-universe panels capture strategy parameters, validation mode, execution models, queue behavior, costs, robustness, news features, and submission or validation actions.
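One way to keep that commitment explicit is to hold the run setup as a single manifest that mirrors the wizard panels. The example below is purely illustrative; every field name and value is an assumption, and the wizard's own manifest summary remains the authoritative record of what a run commits to.

# Illustrative run manifest mirroring the wizard panels described above.
# Field names and values are assumptions, not a documented schema.

run_manifest = {
    "universe": {
        "instrument": "BTC-USD",
        "interval": "1h",
        "date_range": ("2024-01-01", "2024-12-31"),
        "dataset": "spot-consolidated",
        "benchmark": "buy-and-hold",
        "rebalance": "none",
    },
    "strategy": {
        "name": "mean-reversion-v2",
        "params": {"lookback": 48, "entry_z": 2.0, "exit_z": 0.5},
    },
    "validation": {"mode": "walk-forward", "folds": 4},
    "execution": {"fees_bps": 10, "slippage_bps": 5, "fill_model": "next-open"},
}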

Evidence before promotion

Record the strategy context and execution assumptions that produced the run so comparison stays honest.

Treat a strong return number as incomplete until you review run status, progress, and related diagnostics.

Prefer compare views when choosing among variants instead of promoting a single run in isolation.

Link the run back to research, risk guardrails, and execution realism before any production-facing decision.
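If you record that evidence alongside each run, the promotion decision reduces to a simple gate. The sketch below is illustrative; the field names are assumptions, and the point is only that promotion waits on recorded context, reviewed diagnostics, an explicit comparison, and linked guardrails rather than a single return number.

# Illustrative pre-promotion gate for a completed run.
# All field names are assumptions; adapt them to how you record evidence.

def ready_for_promotion(run: dict) -> bool:
    has_context = bool(run.get("strategy_context")) and bool(run.get("execution_assumptions"))
    diagnostics_reviewed = run.get("diagnostics_reviewed", False)
    compared = bool(run.get("compared_against"))       # sibling runs or variants
    guardrails_linked = bool(run.get("risk_guardrails"))
    return has_context and diagnostics_reviewed and compared and guardrails_linked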

Next steps

Open Backtests

Go to the signed-in run list, start from the guided validation workspace, and open the latest result review or compare surface.

Strategy generation guide

Review the upstream research-to-backtest sequence when you need a fuller hypothesis-building path.

Promotion and risk guide

Check the guardrails that apply before a backtest result influences production promotion or live execution.

Last updated

Mar 24, 2026

Feedback

Report unclear guidance, stale contracts, missing coverage, or broken docs UI on this page.

Open feedback issue