Launch, review, and carry forward backtest runs from the signed-in validation workspace.
Use this guide when you need to move from a chosen market idea into a queued run, then inspect the verdict and decide whether to rerun, compare, or carry it forward.
Use /backtests as the operational home for recent evidence, active-run monitoring, and the next recommended validation action.
Treat /backtests/new as a commitment surface where symbol, interval, range, and validation posture are locked before deep tuning starts.
Use run detail and compare views to decide whether a result should be revised, compared, or carried forward into downstream operations.
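As orientation, here is a minimal TypeScript sketch of how these three surfaces and the decision each one owns could be modeled; only /backtests and /backtests/new are routes named in this guide, and every other path, type, and field name is an illustrative assumption rather than a documented contract.

```ts
// Hypothetical sketch of the three backtest surfaces and the decision each one owns.
// Only /backtests and /backtests/new are documented routes; the rest is illustrative.
type BacktestSurface = {
  route: string;
  purpose: "monitor_and_select" | "commit_run_setup" | "judge_and_carry_forward";
};

const surfaces: BacktestSurface[] = [
  { route: "/backtests", purpose: "monitor_and_select" },          // recent evidence, active runs, next action
  { route: "/backtests/new", purpose: "commit_run_setup" },        // lock symbol, interval, range, validation posture
  { route: "/backtests/:id", purpose: "judge_and_carry_forward" }, // assumed detail route: revise, compare, or carry forward
];

console.log(surfaces.map((s) => `${s.route} -> ${s.purpose}`).join("\n"));
```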
These are the current backtest surfaces that shape the run lifecycle in the product.
Review run KPIs, queue state, completed results, and the current operating focus from the primary list page.
Carry the symbol, range, and strategy into one starter run first, then expand into deeper execution assumptions only after the core commitment is clear.
Inspect the metrics, progress, and deeper evidence for a specific backtest after it has been created.
Compare completed runs when you need to rank variants, ranges, or assumptions side by side.
Use sweeps and optimizer surfaces when you need broader search or parameter iteration around the backtest workflow.
Keep the backtest flow explicit so you can distinguish between exploration, validation, and promotion-ready evidence.
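A small sketch of how that distinction could be kept explicit in tooling; the stage names follow this guide, while the interface, fields, and promotion rule below are assumptions for illustration.

```ts
// Hypothetical sketch: keeping the exploration -> validation -> promotion-ready
// distinction explicit so a sweep result is never mistaken for promotable evidence.
type EvidenceStage = "exploration" | "validation" | "promotion_ready";

interface RunEvidence {
  id: string;
  stage: EvidenceStage;
  verdictReviewed: boolean;          // verdict, return, drawdown, diagnostics inspected
  comparedAgainstVariants: boolean;  // ranked in a compare view, not judged in isolation
}

// Assumed rule of thumb, not a product guarantee: a run only graduates from
// validation to promotion-ready after its verdict is reviewed and it has been compared.
function canMarkPromotionReady(run: RunEvidence): boolean {
  return run.stage === "validation" && run.verdictReviewed && run.comparedAgainstVariants;
}
```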
Start from a specific market, symbol, range, and strategy hypothesis before you queue anything, ideally carrying over the context you already selected in Markets.
Use the Backtests home route to track queued or running jobs while keeping the latest strong result and the next recommendation in the same view.
Read the verdict first, then review return, drawdown, and diagnostics to decide whether the run is promising, mixed, weak, or blocked.
Compare candidates or carry the result into portfolio, live, or operations only after the verdict and evidence chain are coherent.
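The verdict bands named above map naturally onto a next move; the sketch below assumes hypothetical action labels and a nextMove helper, and is not a product API.

```ts
// Hypothetical mapping from the verdict bands described in this guide to the next move.
type Verdict = "promising" | "mixed" | "weak" | "blocked";
type NextMove = "compare_or_carry_forward" | "revise_and_rerun" | "park_or_discard" | "resolve_blocker";

function nextMove(verdict: Verdict): NextMove {
  switch (verdict) {
    case "promising":
      return "compare_or_carry_forward"; // rank against variants before portfolio, live, or operations
    case "mixed":
      return "revise_and_rerun";         // tighten range, assumptions, or strategy parameters
    case "weak":
      return "park_or_discard";          // keep the evidence, drop the candidate
    case "blocked":
      return "resolve_blocker";          // fix data, queue, or setup issues before judging the idea
  }
}
```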
The main backtests page is the operating surface for queue visibility and candidate selection.
The workflow hero and KPI strip should show the latest meaningful result, active queue posture, and the next validation move before you drop into the raw run table.
Use the KPI strip to distinguish between an idle list, a queue backlog, and a healthy completed-run backlog before deciding to launch another job.
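One way to make that distinction concrete, as a sketch; the counts, threshold, and posture labels are illustrative assumptions rather than the KPI strip's actual logic.

```ts
// Hypothetical classifier for the three list postures the KPI strip is meant to separate.
interface QueueCounts {
  queued: number;
  running: number;
  completedRecently: number;
}

type ListPosture = "idle" | "queue_backlog" | "healthy_completed_backlog";

function classifyPosture(c: QueueCounts): ListPosture {
  if (c.queued === 0 && c.running === 0 && c.completedRecently === 0) return "idle";
  if (c.queued + c.running > c.completedRecently) return "queue_backlog"; // illustrative threshold
  return "healthy_completed_backlog";
}
```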
Page layout controls and density toggles govern how much run context is visible at once; they matter most once the page is functioning as an operational queue.
The run table is the handoff from monitoring into inspection. It combines symbol, strategy, range, status, stage, progress, and direct links into run detail.
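A hypothetical shape for one run-table row, covering the columns listed above; the field names and the detail-link pattern are assumptions, not a documented schema.

```ts
// Hypothetical shape of one run-table row. Field names mirror the columns above
// but are illustrative, not the product's real schema.
interface RunRow {
  symbol: string;
  strategy: string;
  range: { start: string; end: string }; // ISO dates
  status: "queued" | "running" | "completed" | "failed";
  stage: string;                         // pipeline stage label (illustrative)
  progressPct: number;                   // 0-100
  runId: string;
}

// Assumed detail route: the handoff from monitoring into inspection.
const detailLink = (row: RunRow) => `/backtests/${row.runId}`;
```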
The new-run page is a structured wizard, not a flat form.
The step rail and step-unavailable warnings keep run setup ordered. Strategy prompt drafts can also pre-seed the wizard before you touch the main form.
Once the core commitment is clear, the context rail and manifest summary restate what the run will actually do, so you can validate or launch without reconstructing the setup.
The universe panel owns watchlist, instrument, run mode, candidate, dataset, optimizer profile, benchmark, rebalance, and date-range decisions.
The non-universe panels capture strategy parameters, validation mode, execution models, queue behavior, costs, robustness, news features, and submission or validation actions.
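To show how the universe panel and the remaining panels could come together into a single run manifest, a minimal sketch; every field name mirrors the panel descriptions above but is an illustrative assumption, not the product's actual payload.

```ts
// Hypothetical new-run manifest assembled by the wizard. Field names follow the
// panel descriptions in this guide but are illustrative, not a real schema.
interface UniverseSelection {
  watchlist?: string;
  instrument: string;        // symbol committed at the start
  runMode: string;           // illustrative, e.g. single run vs. candidate sweep
  candidate?: string;
  dataset: string;
  optimizerProfile?: string;
  benchmark?: string;
  rebalance?: string;
  dateRange: { start: string; end: string };
}

interface RunManifest {
  universe: UniverseSelection;
  strategyParams: Record<string, number | string | boolean>;
  validationMode: string;    // the validation posture locked before deep tuning
  executionModel?: string;
  queueBehavior?: string;
  costs?: { commission?: number; slippage?: number };
  robustnessChecks?: string[];
  newsFeatures?: string[];
}
```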
Record the strategy context and execution assumptions that produced the run so comparison stays honest.
Treat a strong return number as incomplete until you review run status, progress, and related diagnostics.
Prefer compare views when choosing among variants instead of promoting a single run in isolation.
Link the run back to research, risk guardrails, and execution realism before any production-facing decision.
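The four checks above can be read as a pre-promotion gate; the sketch below is one hypothetical way to encode it, with all field and function names assumed for illustration.

```ts
// Hypothetical pre-promotion checklist derived from the four points above.
interface RunRecord {
  strategyContextRecorded: boolean;     // strategy context and execution assumptions captured
  statusReviewed: boolean;              // status, progress, and diagnostics checked, not just return
  comparedInCompareView: boolean;       // ranked against variants rather than promoted alone
  linkedToResearchAndGuardrails: boolean;
}

function blockersBeforePromotion(run: RunRecord): string[] {
  const blockers: string[] = [];
  if (!run.strategyContextRecorded) blockers.push("record strategy context and execution assumptions");
  if (!run.statusReviewed) blockers.push("review status, progress, and diagnostics");
  if (!run.comparedInCompareView) blockers.push("compare against variants before promoting");
  if (!run.linkedToResearchAndGuardrails) blockers.push("link to research, risk guardrails, and execution realism");
  return blockers; // empty list means the run can inform production-facing decisions
}
```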
Go to the signed-in run list, start from the guided validation workspace, and open the latest result review or compare surface.
Review the upstream research-to-backtest sequence when you need a fuller hypothesis-building path.
Check the guardrails that apply before a backtest result influences production promotion or live execution.