ADVANCED MARKET READS · ADVANCED · LESSON 30 / 36 · ~7 min read

Trap & Structure Coach internals.

Lesson 16 introduced the 10-source confluence merger as a black box: ten candidate levels in, fewer merged levels out, weight-3+ clusters drive the framework's structural reads. This lesson opens the box. The actual algorithm in structure_geometry.py is straightforward — pairwise merging until convergence — but the calibration is the interesting part. Why 0.15× ATR specifically? Why those ten sources and not eight or fifteen? And what does the trap detector actually look for inside the merged level set? The answers shape every trade the framework approves or refuses.

The merger algorithm in code

The merger runs in three passes:

# Pass 1: gather candidate levels from the 10 sources
candidates = [pivot_R1, pivot_R2, pivot_S1, pivot_S2,
              prev_day_high, prev_day_low,
              prev_week_high, prev_week_low,
              sma_20, sma_50, sma_200,
              vwap_session, vwap_5d,
              *hvn_top3, *lvn_top3,
              *gap_edges_recent, *swept_levels,
              *round_numbers_in_range]
# (several sources emit multiple values; ~10-15 raw levels total)

# Pass 2: sorted sweep, merging under a 0.15× ATR band
band = 0.15 * atr_14
clusters = []
for level in sorted(candidates):
    if not clusters or level - clusters[-1].avg_price > band:
        clusters.append(Cluster(level))   # too far: open a new cluster
    else:
        clusters[-1].add(level)           # within band: fold in

# Pass 3: re-merge — clusters whose centers drifted within the band
# during Pass 2 can overlap after the averages are recalculated
for c1, c2 in adjacent_pairs(clusters):
    if abs(c1.avg_price - c2.avg_price) <= band:
        merge(c1, c2)

Output: each surviving cluster has an avg_price (weighted average of contributing source prices) and a weight (count of sources). The framework's structural reads operate on weight ≥ 3 clusters; weight-2 are stored but not surfaced; weight-1 are computed but discarded after merging.
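To make the three passes concrete, here is a self-contained sketch under the lesson's stated rules (0.15× ATR band, average-price cluster centers). The Cluster class is a stand-in for the framework's internal class, and every price and the ATR value are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    prices: list = field(default_factory=list)

    def add(self, price):
        self.prices.append(price)

    @property
    def avg_price(self):
        return sum(self.prices) / len(self.prices)

    @property
    def weight(self):
        return len(self.prices)

def merge_levels(candidates, atr_14, band_mult=0.15):
    band = band_mult * atr_14
    # Pass 2: sorted sweep, folding each level into the last cluster
    # when it sits within the band of that cluster's running average
    clusters = []
    for level in sorted(candidates):
        if not clusters or level - clusters[-1].avg_price > band:
            clusters.append(Cluster([level]))
        else:
            clusters[-1].add(level)
    # Pass 3: re-merge adjacent clusters whose recalculated averages
    # drifted to within the band of each other
    merged = [clusters[0]]
    for c in clusters[1:]:
        if abs(c.avg_price - merged[-1].avg_price) <= band:
            merged[-1].prices.extend(c.prices)
        else:
            merged.append(c)
    return merged

# Ten hypothetical candidate levels around $100, with ATR(14) = $2.00
levels = [98.10, 98.35, 99.80, 100.00, 100.15, 100.25,
          102.40, 103.00, 103.20, 105.50]
clusters = merge_levels(levels, atr_14=2.00)
print([c.weight for c in clusters])  # → [2, 4, 1, 2, 1]
```

The weight-4 cluster (average ≈ 100.05) is the only structural read here; the weight-1 singletons at 102.40 and 105.50 would be discarded after merging.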

Why 0.15× ATR specifically

This was empirical — backtested across 200 large-cap names over 5 years, varying the merge band from 0.05× to 0.40× ATR. Three measurements per band setting: missed-confluence rate (genuine clusters left fragmented), false-merge rate (distinct levels collapsed into one), and the predictive value of the resulting clusters.

0.15× was the maximum-predictive-value point. Below 0.10×, missed-confluence rate climbed sharply (band too tight, clusters fragmented). Above 0.20×, false-merge rate rose without offsetting predictive gain. 0.15× is the local optimum — and it's been stable across recalibrations because the underlying microstructure (retail order-book clustering at "approximately the same price") is itself stable.
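A toy illustration of the band sweep on synthetic levels (not the backtest data): it shows only the fragmentation/over-merge shape, not predictive value. cluster_count also simplifies Pass 2 by comparing each level to the previous one rather than to a running cluster average:

```python
def cluster_count(levels, band):
    count, last = 0, None
    for p in sorted(levels):
        if last is None or p - last > band:
            count += 1  # gap exceeds the band: open a new cluster
        last = p
    return count

levels = [98.10, 98.35, 99.80, 100.02, 100.17, 100.29, 102.41]
atr_14 = 2.00
counts = {m: cluster_count(levels, m * atr_14)
          for m in (0.05, 0.10, 0.15, 0.20, 0.40)}
print(counts)  # → {0.05: 7, 0.1: 5, 0.15: 3, 0.2: 3, 0.4: 3}
```

Below 0.10×, the levels near $100 fragment into singletons (the missed-confluence failure); past 0.20× the count is flat here, but on real data widening the band starts collapsing genuinely distinct levels (the false-merge failure).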

Why those 10 sources, not 8 or 15

Each source contributes information of a different kind. Adding more correlated sources doesn't add information; adding fewer types loses information.

Source category  | Information type                                     | Examples in the set
Mechanical       | Computed from prior session — same for every trader  | Pivot points, prev-day H/L, prev-week H/L
Smoothed price   | Time-averaged consensus levels                       | SMA 20/50/200
Volume-weighted  | Where volume actually transacted                     | VWAP, HVN/LVN from volume profile
Event-driven     | Discrete points of recent flow                       | Gap edges, swept levels
Psychological    | Round-number / cognitive anchoring                   | $100, $250, $500, etc.

Five categories; ten total levels with two from each broad category. Adding a sixth category (e.g., Fibonacci levels — purely mathematical, no real-flow grounding) adds correlation without adding new information types. Removing a category (e.g., dropping volume-weighted) loses real flow data. The 10-source set is the local optimum on information diversity per computational cost.
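One way to make the diversity argument operational is a category lookup keyed by source name. The grouping follows the table above; the lookup itself is illustrative, not the framework's real data structure:

```python
# Map each source to its information category (per the table above)
CATEGORY = {
    "pivot_points": "mechanical", "prev_day_hl": "mechanical",
    "prev_week_hl": "mechanical",
    "sma_20": "smoothed_price", "sma_50": "smoothed_price",
    "sma_200": "smoothed_price",
    "vwap": "volume_weighted", "hvn_lvn": "volume_weighted",
    "gap_edges": "event_driven", "swept_levels": "event_driven",
    "round_numbers": "psychological",
}

def category_diversity(sources):
    """Distinct information categories spanned by a cluster's sources."""
    return len({CATEGORY[s] for s in sources})

# Three sources from three categories carry more independent
# information than three correlated sources from one category:
print(category_diversity(["sma_50", "vwap", "round_numbers"]))  # → 3
print(category_diversity(["sma_20", "sma_50", "sma_200"]))      # → 1
```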

[Interactive: Confluence builder + trap-detector. Example inputs: cluster weight 3, trend up, volume 1.6×, HT divergence no. Structural read: tradeable confluence · Trap detector: no trap · Trade verdict: GO — structurally clean.]
Weight-3 cluster + trend up + volume 1.6× + no HT divergence = textbook structural setup. The trap detector finds nothing flag-worthy. Entry trigger arms; the chandelier-exit math anchors to the cluster as primary support.
Drop weight to 1 — single-source level, framework refuses (decoration, not structure). Add HT divergence — trap detector fires "bull-stack distribution" or similar. Weight-3+ cluster with HT divergence is the most common trap pattern.
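The decision logic the panel walks through can be sketched as a small function. The field names, the 1.5× volume threshold, and the WAIT branch are assumptions for illustration, not the framework's real API:

```python
def trade_verdict(weight, trend_up, volume_mult, ht_divergence):
    # Single- or double-source levels are decoration, not structure
    if weight < 3:
        return "REFUSE: decoration, not structure"
    # A weight-3+ cluster with HT divergence is the common trap pattern
    if ht_divergence:
        return "TRAP: weight-3+ cluster with HT divergence"
    # Trend and volume confirmation arm the entry trigger
    if trend_up and volume_mult >= 1.5:
        return "GO: structurally clean"
    return "WAIT: confluence present, confirmation lacking"

print(trade_verdict(3, True, 1.6, False))  # → GO: structurally clean
print(trade_verdict(1, True, 1.6, False))  # → REFUSE: decoration, not structure
print(trade_verdict(3, True, 1.6, True))   # → TRAP: weight-3+ cluster with HT divergence
```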

The trap detector's specific patterns

The Trap & Structure Coach doesn't just identify confluence — it specifically looks for traps: setups that look structurally clean but carry a hidden tell that flips them from tradeable to dangerous. Each pattern it checks has a specific name; the lesson's running examples are "bull-stack distribution" (an HT/OBV divergence signature on a weight-3+ cluster) and "gravestone at resistance" (a specific candle anatomy printed into a merged resistance level).

When any pattern fires, the audit card surfaces a trap chip naming the specific pattern. Override exists; the journal records each.

What changes when you read the internals

Two practical shifts for the trader:

  1. You stop over-trusting weight-2 clusters. They're not the framework's structural reads. The dashboard sometimes still shows them; that doesn't make them tradeable. The audit counts only weight-3+ clusters.
  2. You read trap chips as specific patterns, not generic warnings. "Trap: bull-stack distribution" means a specific HT/OBV signature; "trap: gravestone at resistance" means a specific candle anatomy. Knowing the patterns lets you read other charts manually for the same signature.
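Knowing the anatomy means you can script the same check yourself. A minimal sketch of the "gravestone at resistance" signature, where the 10%/60% thresholds and the function names are assumptions for illustration:

```python
def is_gravestone(o, h, l, c, body_max=0.10, upper_wick_min=0.60):
    rng = h - l
    if rng <= 0:
        return False
    body = abs(c - o) / rng        # real body as a share of the range
    upper = (h - max(o, c)) / rng  # upper wick as a share of the range
    return body <= body_max and upper >= upper_wick_min

def gravestone_at_resistance(candle, cluster_avg, band):
    o, h, l, c = candle
    # the wick must probe the merged resistance level itself
    return is_gravestone(o, h, l, c) and abs(h - cluster_avg) <= band

# Long upper wick into a cluster at 102.00, band = 0.15 × ATR = 0.30:
print(gravestone_at_resistance((100.10, 101.90, 100.00, 100.20), 102.00, 0.30))  # → True
```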

The real lesson

The confluence merger is mechanically simple, but its calibration carries the framework's empirical edge. 0.15× ATR isn't arbitrary — it's the local optimum on predictive value across 5 years and 200 names. The 10-source set covers five distinct information categories; adding correlated sources doesn't help. The trap detector's specific pattern names tell you what the framework refuses and why, and let you read the same signatures manually. The whole machinery exists to keep weight-1 noise from polluting decisions and to surface the trap signatures that look clean in retail chart-reading but consistently fail in disciplined backtests.


Related: L16 — confluence merger · L17 — hidden tape · L18 — sweep detection

← LESSON 29: Whale Confirmation · LESSON 31: The 10 AI surfaces →