Connect saved context, hypotheses, candidates, and experiment promotion inside the deeper signed-in research stack.
Use this guide when the primary research page has already produced a credible scan or label set and you need to move that work into saved views, hypotheses, candidate strategy tracking, and experiment promotion.
Workbench is the persistence layer for research context. Use it when scan results are worth revisiting rather than rerunning from scratch.
Pipeline formalizes the hypothesis-to-candidate path. Use it when the idea needs ownership, stage control, and explicit handoff into backtests.
Experiments are the promotion layer. Use them to collect completed runs, inspect candidate health, and decide what should advance.
These are the deeper research coordination surfaces behind the top-level research workflow.
Review saved watchlists, recent views, pattern runs, and recent backtests from one research memory surface.
Create hypotheses, track candidate strategies, and move them across staged statuses with explicit handoff actions.
Create experiment groups, attach completed runs, and promote the strongest run into a more durable decision.
Use the experiment registry to inspect candidate counts, approved runs, drift failures, and recent demotion signals.
Move from candidate definition into run creation once the hypothesis and stage model are coherent enough to validate.
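The hypothesis-to-candidate path above can be sketched as a small data model with staged statuses and an explicit transition action. This is an illustrative assumption, not the platform's actual schema: the stage names, field names, and the `advance` method are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    # Hypothetical stage labels; the real pipeline may use different ones.
    IDEA = "idea"
    CANDIDATE = "candidate"
    VALIDATING = "validating"
    PROMOTED = "promoted"
    DEMOTED = "demoted"


@dataclass
class CandidateStrategy:
    name: str
    hypothesis: str          # the rationale that should stay visible
    owner: str               # explicit ownership, per the pipeline model
    stage: Stage = Stage.IDEA
    history: list = field(default_factory=list)  # audit trail of transitions

    def advance(self, to: Stage, evidence: str) -> None:
        """Move stages only with recorded supporting evidence."""
        if not evidence:
            raise ValueError("stage transitions require recorded evidence")
        self.history.append((self.stage, to, evidence))
        self.stage = to
```

The point of the sketch is the shape, not the names: every transition carries evidence and lands in an audit trail, so a candidate's stage is always explainable.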
Use this sequence to keep the saved research stack explicit and reviewable.
Start by checking saved views, pattern runs, and recent backtests so you know whether this is a fresh idea or an iteration.
Create the hypothesis, link it to a candidate strategy, and keep stage and rationale visible instead of relying on loose notes.
Move candidate stages only when the supporting evidence and launch path into backtests are explicit.
Attach completed runs to experiments, review candidate health, and promote only the runs that remain coherent under comparison.
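The final step, promoting only runs that remain coherent under comparison, can be sketched as a simple gate over candidate health signals like approved runs, drift failures, and recent demotions. All field names and thresholds here are assumptions for illustration; the registry's actual criteria will differ.

```python
from typing import Optional


def should_promote(run: dict) -> bool:
    """Hypothetical promotion gate over a run's health fields."""
    return (
        run.get("completed", False)
        and run.get("drift_failures", 0) == 0
        and run.get("approved_checks", 0) >= 2
        and not run.get("recent_demotion", False)
    )


def pick_promotion(runs: list) -> Optional[dict]:
    """From an experiment's completed runs, promote the strongest healthy one."""
    healthy = [r for r in runs if should_promote(r)]
    # "Strongest" is sketched here as highest score; real criteria will differ.
    return max(healthy, key=lambda r: r.get("score", 0.0), default=None)
```

The useful property is that a run with any disqualifying signal never reaches the comparison at all, which mirrors reviewing candidate health before promoting.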
Use these patterns when workbench, pipeline, and experiments need to work as one governed chain.
Start with a saved workbench view or pattern run, confirm that the idea is still current, then formalize it as a pipeline hypothesis with explicit stage, owner, and rationale instead of leaving it as a loose observation.
The idea moves from reusable context into a candidate that can be tracked, challenged, and handed into backtests without losing its origin story.
Open the experiments registry, review candidate counts, approved runs, and recent demotion signals, then compare that evidence back to the pipeline stage before advancing anything further.
Weak paths are slowed or demoted on explicit evidence, instead of drifting forward because nobody reviewed their health.

Once the hypothesis, candidate stage, and recent evidence line up, launch into /backtests/new from the handoff path so the validation run inherits the same scope and rationale.
The backtest launch is traceable to a named candidate and stage, which keeps later experiment promotion reproducible.
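One way to picture that traceability is a handoff link into /backtests/new that carries the candidate's identity, stage, and rationale. The query parameter names below are assumptions for illustration, not the platform's documented interface.

```python
from urllib.parse import urlencode


def backtest_launch_url(candidate_id: str, stage: str, rationale: str) -> str:
    """Build a handoff link into /backtests/new that keeps the run
    traceable to a named candidate and stage.
    Parameter names are hypothetical."""
    params = urlencode({
        "candidate": candidate_id,
        "stage": stage,
        "rationale": rationale,
    })
    return f"/backtests/new?{params}"
```

Because the launch URL names the candidate and stage, a later experiment promotion can be traced back through the backtest to the hypothesis that motivated it.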
Do not treat workbench as passive storage. It should help you decide whether a hypothesis is new, stale, or ready for promotion.
Use pipeline stages as an operating control, not just metadata, so candidate transitions remain explainable.
Experiments should collect completed evidence, not replace candidate governance or backtest comparison discipline.
Keep the link between candidate stage, backtest launch, and experiment promotion explicit so the research chain remains reproducible.
Jump into the signed-in workbench and inspect saved context, pattern runs, and recent backtests.
Open the specialist methods when the hypothesis needs options, sizing, calibration, or microstructure support.
Move into the deeper validation surfaces once a candidate has turned into an actual backtest run.
Mar 24, 2026
Report unclear guidance, stale contracts, missing coverage, or broken docs UI on this page.
Open feedback issue