Trade journal as the canonical record.
A bathroom scale that reads 5 pounds lighter than reality doesn't make you 5 pounds lighter. The number on the dial isn't the truth; the number on the calibrated medical scale is. The same problem exists on every trading dashboard: the position sizer shows a projected return, the equity curve shows a path, the Inst Metrics surface shows Sharpe and Profit Factor — and unless they all derive from the same canonical source, they'll disagree, and the trader will pick whichever number flatters them. The framework's design rule is unambiguous: the Trade Journal is the source of truth. Every other surface that displays performance derives from it. When the journal disagrees with anything else, the journal wins.
Why one source of truth matters
Without it, you get the bathroom-scale problem at portfolio scale. The sizer projects 24% required CAGR to hit your goal; the equity curve dashboard says you've done 18% YTD; the Inst Metrics card shows a 1.4 Sharpe; the Trade Journal shows actual P&L of +6.2% YTD with 14 trades. Which is "your performance"? They're measuring different things, often at different sample sizes, often with different definitions of "win" or "open" or "closed." The trader cherry-picks, almost always upward, and the felt experience drifts toward overconfidence.
The framework's display contract (shipped as v6.6.0) makes this unambiguous: the Trade Journal computes Profit Factor, win rate, average winner, average loser, Sharpe, and Actual CAGR from closed trades only. Every other surface that displays those metrics reads from the same source. When the Risk Status Bar shows your Profit Factor, it's identical to what the Inst Metrics card shows, which is identical to what the Trade Journal page shows.
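The contract can be pictured as a single compute step whose frozen output every surface reads. A minimal sketch, assuming hypothetical names (`MetricsSnapshot`, `compute_snapshot`) that are illustrative, not the framework's actual code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricsSnapshot:
    """Immutable metrics computed once from the journal's closed trades."""
    profit_factor: float
    win_rate: float

def compute_snapshot(closed_pnls):
    # Closed trades only, per the display contract.
    wins = [p for p in closed_pnls if p > 0]
    losses = [p for p in closed_pnls if p < 0]
    pf = sum(wins) / abs(sum(losses)) if losses else float("inf")
    return MetricsSnapshot(profit_factor=pf,
                           win_rate=len(wins) / len(closed_pnls))

# Every surface renders the same snapshot, so they cannot disagree.
snap = compute_snapshot([200.0, -100.0, 100.0, -50.0])
risk_status_bar_pf = snap.profit_factor
inst_metrics_card_pf = snap.profit_factor
```

The `frozen=True` dataclass is the point of the design: surfaces read a value, they never recompute one.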
The metrics that actually matter
- Profit Factor (PF) — Σ(winners) ÷ |Σ(losers)|. Above 1.5 = real edge. Above 2.0 = strong edge. Below 1.2 = either no edge or insufficient sample.
- Win rate — fraction of closed trades that were winners. Less informative than PF for swing trading because high-R:R strategies can be profitable at 40% win rate (Lesson 3's Trader B).
- Average winner / Average loser ratio (R-multiple) — the realized R:R. Should match or exceed the per-trade R:R floor (Lesson 4) over a sufficient sample of closed trades.
- Sharpe ratio — risk-adjusted return. Requires ≥ 26 weekly observations to be meaningful. Below that sample size, it's noise.
- Actual CAGR — annualized compound return from the journal's first-entry baseline. Requires ≥ 6 months of journal data. Below that, displays as "insufficient sample."
- Excess Return / Alpha — these are different. "Excess Return" = portfolio return − benchmark return (raw). "Alpha" = portfolio return − beta-adjusted benchmark return (controls for portfolio's market exposure). Most retail dashboards conflate them; the framework labels them separately.
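The Excess Return / Alpha distinction in the last bullet is easiest to see in code. A hedged sketch (risk-free rate assumed zero for brevity, beta estimated by ordinary least squares against benchmark returns; all names are illustrative):

```python
def excess_return(rp: float, rb: float) -> float:
    # Raw difference: no adjustment for market exposure.
    return rp - rb

def beta(port_rets, bench_rets) -> float:
    # OLS slope of portfolio returns on benchmark returns.
    mb = sum(bench_rets) / len(bench_rets)
    mp = sum(port_rets) / len(port_rets)
    cov = sum((p - mp) * (b - mb) for p, b in zip(port_rets, bench_rets))
    var = sum((b - mb) ** 2 for b in bench_rets)
    return cov / var

def alpha(rp: float, rb: float, b: float) -> float:
    # Beta-adjusted: subtracts the return the portfolio's market
    # exposure alone would have delivered.
    return rp - b * rb
```

A portfolio with beta 1.5 that returns 18% against a 12% benchmark shows +6% excess return but zero alpha: all of the outperformance came from carrying extra market exposure, which is exactly the conflation the separate labels prevent.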
The sample-size gates
A 1.8 Sharpe over 8 weeks is statistically meaningless. The framework gates display of metrics behind sample-size minimums:
- Sharpe: ≥ 26 weekly observations (~6 months of weekly data)
- Actual CAGR: ≥ 6 months from first journal entry
- Profit Factor: displayed at any sample, but flagged as "insufficient sample" below 30 closed trades
- Win rate: same — displayed but flagged below 30 trades
Below the gate, the metric reads "—" or "insufficient sample." Above the gate, it displays the computed value. This prevents the early-career trader from seeing "your Sharpe is 2.4!" computed from 12 trades and concluding they have a strategy. They have noise.
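The gating rules reduce to a few guards. A minimal sketch, with the thresholds taken from the list above and the display strings illustrative:

```python
SHARPE_MIN_WEEKLY_OBS = 26   # ~6 months of weekly data
FLAG_MIN_CLOSED_TRADES = 30  # below this, PF and win rate carry a flag

def display_sharpe(sharpe: float, weekly_obs: int) -> str:
    # Hard gate: below the minimum sample the value is withheld entirely.
    return f"{sharpe:.2f}" if weekly_obs >= SHARPE_MIN_WEEKLY_OBS else "—"

def display_profit_factor(pf: float, closed_trades: int) -> str:
    # Soft gate: PF displays at any sample but is flagged when small.
    text = f"{pf:.2f}"
    if closed_trades < FLAG_MIN_CLOSED_TRADES:
        text += " (insufficient sample)"
    return text
```

Note the asymmetry: Sharpe is withheld entirely below its gate, while Profit Factor is shown but flagged, matching the list above.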
The baseline question
"What's your CAGR" is a math question, but only if both sides agree on what "starting capital" means. The framework's baseline rule: Actual CAGR is computed from the journal's first entry, not from a hardcoded START_VALUE. If you funded the account with $50,000, took your first trade three months later, and the equity at that moment was $51,200 (interest earned in cash), then $51,200 is the baseline. Not $50,000.
This matters because hardcoded baselines drift from reality. A user who switches strategies, takes a break, or restarts shouldn't have an Actual CAGR distorted by the residual of an old period. The journal's first entry is the trader's actual operational starting point.
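The baseline rule is a one-liner once the journal's first entry is in hand. A sketch, assuming the journal exposes the first entry's date and the equity at that moment (function and variable names are illustrative):

```python
from datetime import date

def actual_cagr(first_entry: date, baseline_equity: float,
                today: date, current_equity: float):
    """Annualized compound return from the journal's first-entry baseline.

    Returns None below the 6-month gate ("insufficient sample")."""
    years = (today - first_entry).days / 365.25
    if years < 0.5:
        return None
    return (current_equity / baseline_equity) ** (1 / years) - 1

# Baseline is the equity at the first journal entry ($51,200),
# not the $50,000 the account was funded with.
cagr = actual_cagr(date(2021, 1, 1), 51_200, date(2023, 1, 1), 62_000)
```

Anchoring on `first_entry` rather than a constant means a restart or strategy switch simply starts a new baseline instead of dragging old residue into the number.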
Adherence — the metric most trading dashboards skip
The Friday close ritual (Lesson 10) ASSESS phase asks: "did you take any trade NOT on last Friday's plan?" The Trade Journal records this as an adherence flag on every trade. Over time, the journal accumulates an adherence percentage — the share of trades that were on-plan vs. off-plan. The framework's empirical observation: the on-plan trades materially outperform the off-plan trades for nearly every user, by a margin much larger than most users expect. The journal makes the pattern undeniable; the dashboard surfaces the differential explicitly.
This is the metric most retail platforms don't track because they don't know what "on-plan" means — they have no concept of a Friday plan. The framework does: every entry the user actually plans during the Friday ritual gets logged as on-plan; every off-plan entry is marked at execution time. The differential read at year-end is often the most actionable single piece of feedback the journal produces.
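The adherence differential is a straightforward split on the journal's on-plan flag. A minimal sketch (the `(pnl, on_plan)` pair representation is assumed, not the framework's actual schema):

```python
def adherence_report(trades):
    """trades: list of (pnl, on_plan) pairs from the journal's closed trades."""
    on_plan = [pnl for pnl, flag in trades if flag]
    off_plan = [pnl for pnl, flag in trades if not flag]

    def _avg(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return {
        "adherence_pct": len(on_plan) / len(trades),  # share of on-plan trades
        "avg_on_plan_pnl": _avg(on_plan),
        "avg_off_plan_pnl": _avg(off_plan),           # the other half of the differential
    }
```

Surfacing `avg_on_plan_pnl` next to `avg_off_plan_pnl` is what makes the year-end read actionable: the gap between the two is the cost of going off-plan, in the trader's own currency.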
What the framework does
- Trade Journal as canonical store — every closed trade with entry/exit/size/PnL/on-plan flag
- All metrics surfaces derive from it — Risk Status Bar, Inst Metrics card, Performance Tracker, Velocity Panel
- Sample-size gates on Sharpe (≥26 wks), CAGR (≥6 mo), with "insufficient sample" copy below
- Baseline anchor = first journal entry, not hardcoded START_VALUE
- Excess Return vs Alpha labeled separately to prevent the conflation
The real lesson
Performance only means something when it's measured against a calibrated baseline with a real sample. The Trade Journal is the calibrated baseline; sample-size gates enforce the real-sample requirement. Every other dashboard surface that displays performance reads from it, so the numbers can't disagree. The trader's job is to log every closed trade honestly (on-plan flag, exit reason, structure read at exit) — the journal becomes the forensic record that, over time, says more about whether the strategy works than any single trade or any single month ever can.
Related: L10 — Friday close ritual · L3 — Trader A vs B