OPERATIONAL DEPTH · INTERMEDIATE · LESSON 22 / 24 · ~7 min read

Trade journal as the canonical record.

A bathroom scale that reads 5 pounds lighter than reality doesn't make you 5 pounds lighter. The number on the dial isn't the truth; the number on the calibrated medical scale is. The same problem exists on every trading dashboard: the position sizer shows a projected return, the equity curve shows a path, the Inst Metrics surface shows Sharpe and Profit Factor — and unless they all derive from the same canonical source, they'll disagree, and the trader will pick whichever number flatters them. The framework's design rule is unambiguous: the Trade Journal is the source of truth. Every other surface that displays performance derives from it. When the journal disagrees with anything else, the journal wins.

Why one source of truth matters

Without it, you get the bathroom-scale problem at portfolio scale. The sizer projects 24% required CAGR to hit your goal; the equity curve dashboard says you've done 18% YTD; the Inst Metrics card shows a 1.4 Sharpe; the Trade Journal shows actual P&L of +6.2% YTD with 14 trades. Which is "your performance"? They're measuring different things, often at different sample sizes, often with different definitions of "win" or "open" or "closed." The trader cherry-picks, almost always upward, and the felt experience drifts toward overconfidence.

The framework's display contract (shipped as v6.6.0) makes this unambiguous: the Trade Journal computes Profit Factor, win rate, average winner, average loser, Sharpe, and Actual CAGR from closed trades only. Every other surface that displays those metrics reads from the same source. When the Risk Status Bar shows your Profit Factor, it's identical to what the Inst Metrics card shows, which is identical to what the Trade Journal page shows.
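The display contract can be sketched in a few lines. This is an illustrative implementation, not the framework's actual code; the `Trade` fields and the `journal_metrics` name are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    pnl: float    # realized P&L for the trade
    closed: bool  # metrics derive from closed trades only

def journal_metrics(trades: list[Trade]) -> dict:
    """Compute display-contract metrics from closed trades only."""
    closed = [t.pnl for t in trades if t.closed]
    wins = [p for p in closed if p > 0]
    losses = [p for p in closed if p <= 0]
    gross_win, gross_loss = sum(wins), -sum(losses)
    return {
        "n_closed": len(closed),
        "win_rate": len(wins) / len(closed) if closed else None,
        "profit_factor": gross_win / gross_loss if gross_loss else None,
        "avg_winner": gross_win / len(wins) if wins else None,
        "avg_loser": -gross_loss / len(losses) if losses else None,
    }
```

Under this contract, every surface (Risk Status Bar, Inst Metrics card, journal page) calls the same function over the same journal, so the numbers cannot diverge.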

The metrics that actually matter

The sample-size gates

A 1.8 Sharpe over 8 weeks is statistically meaningless. The framework gates display of metrics behind sample-size minimums: Sharpe requires at least 26 weeks of history, and Actual CAGR requires at least 6 months.

Below the gate, the metric reads "—" or "insufficient sample." Above the gate, it displays the computed value. This prevents the early-career trader from looking at "your Sharpe is 2.4!" computed off 12 trades and concluding they have a strategy. They have noise.
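The gate itself is a one-line guard between the computed value and the display. A minimal sketch using the thresholds the lesson names (Sharpe ≥26 weeks); the function name is illustrative:

```python
def gated(value: float, sample: int, minimum: int) -> str:
    """Show the metric only when the sample clears its gate."""
    return f"{value:.2f}" if sample >= minimum else "—"

sharpe_early = gated(2.4, 12, 26)  # 12 weeks of history: reads "—"
sharpe_ok    = gated(1.4, 32, 26)  # 32 weeks: reads "1.40"
```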

The baseline question

"What's your CAGR?" is a math question, but only if both sides agree on what "starting capital" means. The framework's baseline rule: Actual CAGR is computed from the journal's first entry, not from a hardcoded START_VALUE. If you funded the account with $50,000, took your first trade three months later, and the equity at that moment was $51,200 (interest earned while the cash sat idle), then $51,200 is the baseline. Not $50,000.

This matters because hardcoded baselines drift from reality. A user who switches strategies, takes a break, or restarts shouldn't have an Actual CAGR distorted by the residual of an old period. The journal's first entry is the trader's actual operational starting point.
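The baseline rule reduces to one formula: annualize growth from the journal's first entry. A sketch with illustrative names and dates; only the $51,200 baseline comes from the lesson's example:

```python
from datetime import date

def actual_cagr(first_equity: float, first_date: date,
                equity_now: float, today: date) -> float:
    """CAGR from the journal's first entry, not a hardcoded START_VALUE."""
    years = (today - first_date).days / 365.25
    return (equity_now / first_equity) ** (1 / years) - 1

# Baseline is the $51,200 at first journal entry, not the $50,000 funding.
cagr = actual_cagr(51_200, date(2023, 1, 1), 65_000, date(2024, 1, 1))
```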

⌬ Trade journal metrics

  Closed trades               60
  Win rate                    45%
  Avg winner                  2.2R
  Weeks of history            32

  Profit Factor               1.80
  Sharpe (gated ≥26 wks)      ~1.4
  Actual CAGR (gated ≥6 mo)   ~28%
  Sample-size verdict         passes all gates

60 closed trades and 32 weeks clear every gate. A Profit Factor of 1.80 is a solid edge (above the 1.5 floor), and Sharpe and CAGR are computable. The journal's reading is canonical; every other surface displays these same numbers.

Drop the trade count to 12 and the history to 6 weeks, and Sharpe and CAGR gate to "insufficient sample"; PF is still computable but flagged. Raise the win rate to 60% with an R-multiple of 1.0 and PF is still 1.5, but the strategy is now win-rate-dependent rather than R:R-dependent: a different Trader-A vs Trader-B profile (Lesson 3).
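The card's headline numbers are internally consistent. With the average loser normalized to 1R, profit factor follows from win rate and average winner alone; this arithmetic check reproduces both scenarios (the helper name is illustrative):

```python
def profit_factor(win_rate: float, avg_winner_r: float,
                  avg_loser_r: float = 1.0) -> float:
    """PF = gross wins / gross losses, expressed in R-multiples."""
    return (win_rate * avg_winner_r) / ((1 - win_rate) * avg_loser_r)

pf_base = profit_factor(0.45, 2.2)  # the card's scenario: 1.80
pf_alt  = profit_factor(0.60, 1.0)  # the 60%-win-rate scenario: 1.50
```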

Adherence — the metric most trading dashboards skip

The ASSESS phase of the Friday close ritual (Lesson 10) asks: "did you take any trade NOT on last Friday's plan?" The Trade Journal records this as an adherence flag on every trade. Over time, the journal accumulates an adherence percentage: the share of trades that were on-plan vs. off-plan. The framework's empirical observation: the on-plan trades materially outperform the off-plan trades for nearly every user, by a margin much larger than most users expect. The journal makes the pattern undeniable; the dashboard surfaces the differential explicitly.

This is the metric most retail platforms don't track because they don't know what "on-plan" means — they have no concept of a Friday plan. The framework does: every entry the user actually plans during the Friday ritual gets logged as on-plan; every off-plan entry is marked at execution time. The differential read at year-end is often the most actionable single piece of feedback the journal produces.
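Computing the differential is a single group-by over the flag logged at execution time. A sketch over assumed `(pnl, on_plan)` tuples; the report field names are illustrative:

```python
def adherence_report(trades: list[tuple[float, bool]]) -> dict:
    """Split realized P&L by the on-plan flag and report the differential."""
    on  = [pnl for pnl, on_plan in trades if on_plan]
    off = [pnl for pnl, on_plan in trades if not on_plan]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {
        "adherence_pct": len(on) / len(trades),
        "avg_on_plan": avg(on),
        "avg_off_plan": avg(off),
        "differential": avg(on) - avg(off),  # avg on-plan minus avg off-plan P&L
    }
```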

The real lesson

Performance only means something when it's measured against a calibrated baseline with a real sample. The Trade Journal is the calibrated baseline; sample-size gates enforce the real-sample requirement. Every other dashboard surface that displays performance reads from it, so the numbers can't disagree. The trader's job is to log every closed trade honestly (on-plan flag, exit reason, structure read at exit) — the journal becomes the forensic record that, over time, says more about whether the strategy works than any single trade or any single month ever can.


Related: L10 — Friday close ritual · L3 — Trader A vs B

← Lesson 21: Sovereignty Cap · Lesson 23: Catalyst calendar + OPEX →