Guide: Research Workbench, Pipeline, and Experiments

Connect saved context, hypotheses, candidate tracking, and experiment promotion inside the deeper signed-in research stack.

Deep surface guide

What this guide covers

Use this guide when the primary research page has already produced a credible scan or label set and you need to move that work into saved views, hypotheses, candidate strategy tracking, and experiment promotion.

Workbench is the persistence layer for research context. Use it when scan results are worth revisiting rather than rerunning from scratch.

Pipeline formalizes the hypothesis-to-candidate path. Use it when the idea needs ownership, stage control, and explicit handoff into backtests.

Experiments are the promotion layer. Use them to collect completed runs, inspect candidate health, and decide what should advance.

Routes and surfaces

These are the deeper research coordination surfaces behind the top-level research workflow.

/research/workbench

Research workbench

Review saved watchlists, recent views, pattern runs, and recent backtests from one research memory surface.

/research/pipeline · hypotheses & candidates

Pipeline

Create hypotheses, track candidate strategies, and move them across staged statuses with explicit handoff actions.

/research/experiments

Experiments

Create experiment groups, attach completed runs, and promote the strongest run into a more durable decision.

/research/experiments · candidate health & demotions

Candidate health and demotions

Use the experiment registry to inspect candidate counts, approved runs, drift failures, and recent demotion signals.

/backtests/new · candidate handoff

Backtest handoff

Move from candidate definition into run creation once the hypothesis and stage model are coherent enough to validate.
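The pipeline surface moves candidates across staged statuses with explicit handoff actions. As a rough sketch of that stage model, the following uses hypothetical stage names and transition rules (the product's actual statuses may differ):

```python
from enum import Enum

# Hypothetical stage names -- the actual pipeline statuses may differ.
class Stage(Enum):
    IDEA = "idea"
    HYPOTHESIS = "hypothesis"
    CANDIDATE = "candidate"
    VALIDATION = "validation"
    PROMOTED = "promoted"
    DEMOTED = "demoted"

# Allowed forward transitions; demotion stays reachable from any active stage.
ALLOWED = {
    Stage.IDEA: {Stage.HYPOTHESIS},
    Stage.HYPOTHESIS: {Stage.CANDIDATE, Stage.DEMOTED},
    Stage.CANDIDATE: {Stage.VALIDATION, Stage.DEMOTED},
    Stage.VALIDATION: {Stage.PROMOTED, Stage.DEMOTED},
    Stage.PROMOTED: {Stage.DEMOTED},
    Stage.DEMOTED: set(),
}

def can_advance(current: Stage, target: Stage) -> bool:
    """Return True only for an explicit, allowed stage transition."""
    return target in ALLOWED[current]
```

The point of the guard is that candidates never skip stages: every advance is an explicit, reviewable transition rather than an ad hoc status edit.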

Recommended research coordination loop

Use this sequence to keep the saved research stack explicit and reviewable.

01

Stabilize the context in workbench

Start by checking saved views, pattern runs, and recent backtests so you know whether this is a fresh idea or an iteration.

02

Formalize the idea in pipeline

Create the hypothesis, link it to a candidate strategy, and keep stage and rationale visible instead of relying on loose notes.

03

Advance candidates deliberately

Move candidate stages only when the supporting evidence and launch path into backtests are explicit.

04

Use experiments for selection and promotion

Attach completed runs to experiments, review candidate health, and promote only the runs that remain coherent under comparison.

Example research coordination scenarios

Use these patterns when workbench, pipeline, and experiments need to work as one governed chain.

pipeline promotion

Promote a workbench insight into a managed candidate

Start with a saved workbench view or pattern run, confirm that the idea is still current, then formalize it as a pipeline hypothesis with explicit stage, owner, and rationale instead of leaving it as a loose observation.

Expected outcome

The idea moves from reusable context into a candidate that can be tracked, challenged, and handed into backtests without losing its origin story.

health review

Use experiment health to demote a weakening path

Open the experiments registry, review candidate counts, approved runs, and recent demotion signals, then compare that evidence back to the pipeline stage before advancing anything further.

Expected outcome

Weak paths get slowed or demoted with evidence instead of drifting forward because nobody explicitly reviewed their health.
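One way to make that review explicit is a demotion rule over the registry signals. The thresholds and signal names below are assumptions chosen for illustration; tune them to your own governance standards:

```python
# Thresholds and signal names are assumptions for illustration only.
def should_demote(candidate_count: int, approved_runs: int,
                  drift_failures: int, recent_demotions: int) -> bool:
    """Flag a weakening path using the signals the experiment registry exposes."""
    if drift_failures > 0 and approved_runs == 0:
        return True   # failing drift checks with no approved evidence behind the path
    if recent_demotions >= 2 and approved_runs < candidate_count:
        return True   # repeated demotions outpacing approvals across the group
    return False
```

Encoding the rule, even loosely, turns "nobody reviewed its health" into a check that runs every time the registry is consulted.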

validation launch

Hand a mature candidate into backtests

Once the hypothesis, candidate stage, and recent evidence line up, launch into /backtests/new from the handoff path so the validation run inherits the same scope and rationale.

Expected outcome

The backtest launch is traceable to a named candidate and stage, which keeps later experiment promotion reproducible.
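The handoff can be pictured as a payload that carries candidate identity and stage into the new run. This is a hypothetical shape, not the actual `/backtests/new` form contract:

```python
# A hypothetical payload for the /backtests/new handoff; real form fields may differ.
def build_handoff(candidate_id: str, stage: str, hypothesis_id: str,
                  symbols: list[str]) -> dict:
    """Carry candidate identity and stage into the run so promotion stays traceable."""
    return {
        "route": "/backtests/new",
        "candidate_id": candidate_id,
        "candidate_stage": stage,
        "hypothesis_id": hypothesis_id,
        "symbols": symbols,
    }
```

Whatever the real fields are, the design point is the same: the run record should name the candidate and stage it validates, so the later experiment promotion can be reproduced from the run alone.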

Review and guardrails

Do not treat workbench as passive storage. It should help you decide whether a hypothesis is new, stale, or ready for promotion.

Use pipeline stages as an operating control, not just metadata, so candidate transitions remain explainable.

Experiments should collect completed evidence, not replace candidate governance or backtest comparison discipline.

Keep the link between candidate stage, backtest launch, and experiment promotion explicit so the research chain remains reproducible.

Next steps

Open Research Workbench

Jump into the signed-in workbench and inspect saved context, pattern runs, and recent backtests.

Research specialist methods guide

Open the specialist methods when the hypothesis needs options, sizing, calibration, or microstructure support.

Backtests run detail guide

Move into the deeper validation surfaces once a candidate has turned into an actual backtest run.

Last updated

Mar 24, 2026
