Account Value Tracker
The AccountValueAggregationTracker Durable Object provides real-time peak/bottom detection for hourly and daily account value metrics. It is geo-located (1 instance per location) and tracks all subscribed users in that region.
High-Level Design
Coefficient-Based Valuation
Instead of tracking full position state, the tracker reduces each user's account value to a linear formula:
accountValue = constantTerm + Σ(coefficient[token] × price[token])

Where:

- constantTerm = balance - Σ(entryPrice × sizeAsset × direction) — the price-independent portion
- coefficient[token] = sizeAsset × direction — the price-dependent portion per token
UserPortfolio sends a new coefficient snapshot (with monotonically increasing seq) on every position change. The tracker stores these snapshots with a valid_from timestamp, allowing it to reconstruct account value at any point by finding the applicable snapshot and multiplying by prices.
This design means the tracker never needs to understand positions, leverage, or margin — it only needs prices and coefficients.
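As an illustration, here is a minimal TypeScript sketch of that evaluation, assuming a snapshot shape with a constant term and a per-token coefficient map (field names are hypothetical, not the actual schema):

```ts
// Hypothetical shapes; field names are illustrative, not the actual schema.
interface CoefficientSnapshot {
  seq: number;                          // monotonically increasing sequence number
  validFrom: number;                    // ms timestamp this snapshot applies from
  constantTerm: number;                 // balance - Σ(entryPrice × sizeAsset × direction)
  coefficients: Record<string, number>; // token -> sizeAsset × direction
}

// accountValue = constantTerm + Σ(coefficient[token] × price[token])
function accountValueAt(
  snapshot: CoefficientSnapshot,
  prices: Record<string, number>,
): number | null {
  let value = snapshot.constantTerm;
  for (const [token, coeff] of Object.entries(snapshot.coefficients)) {
    const price = prices[token];
    if (price === undefined) return null; // missing price: caller must not skip it silently
    value += coeff * price;
  }
  return value;
}
```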
Data Flow
UserPaperTrade AccountValue PriceAlert /
Portfolio AggregationTracker PriceCollector
│ │ │
│ 1. subscribe │ │
│ (capital, │ │
│ coefficients) │ │
│──────────────────────────>│ │
│ │ │
│ 2. updateCoefficients │ │
│ (snapshot, seq) │ │
│ (on each position change) │ │
│──────────────────────────>│ │
│ │ │
│ │ 3. alarm() │
│ │ (every 5 min) │
│ │ fetchPriceRange │
│ │ (start, end) │
│ │──────────────────────>│
│ │ │
│ │ 4. prices[] │
│ │<──────────────────────│
│ │ │
│ │ 5. For each user: │
│ │ accountValue │
│ │ = constantTerm + │
│ │ Σ(coeff × price) │
│ │ Track peak/bottom │
│ │ │
│ 6. onHourlyExtremes │ ← at hour boundary │
│ (peak, bottom) │ │
│ │ │
│<──────────────────────────│ │
│ │ │
│ 7. onDailyExtremes │ ← at UTC midnight │
│ Finalized │ │
│ (peak, bottom) │ │
│ │ │
  │<──────────────────────────│                       │

Alarm Processing Pipeline
Each 5-minute alarm runs a sequential pipeline. If any step hits the subrequest limit, it stores pending state and defers remaining work to the next alarm (100ms retry).
alarm()
├─ Step 0: Resume pending callbacks (daily cursor / hourly index)
├─ Step 1: Day boundary — emit daily callbacks, reset trackings
├─ Step 2: Hour boundary — calculate and emit hourly extremes
├─ Step 3: Batch process prices — advance cursors, update extremes
├─ Step 4: Retry failed callbacks (exponential backoff)
└─ Step 5: Cleanup old hourly snapshots (every ~24h)

Next alarm interval adapts to state (sketched below):
- 100ms — catching up or pending callbacks remain
- 60s — waiting for price data that hasn't arrived yet
- 5 min — normal steady-state
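A small sketch of how the next alarm delay might be chosen from this state; the flag names are hypothetical, the real tracker derives them from its pipeline:

```ts
// Hypothetical state flags derived from the pipeline.
interface AlarmState {
  hasPendingCallbacks: boolean; // deferred daily/hourly work remains
  isCatchingUp: boolean;        // cursors are behind real time
  waitingForPrices: boolean;    // PriceCollector has not finalized the needed range yet
}

const DELAY_MS = { CATCH_UP: 100, WAIT_FOR_PRICES: 60_000, STEADY_STATE: 300_000 };

function nextAlarmDelayMs(state: AlarmState): number {
  if (state.hasPendingCallbacks || state.isCatchingUp) return DELAY_MS.CATCH_UP; // 100ms
  if (state.waitingForPrices) return DELAY_MS.WAIT_FOR_PRICES;                   // 60s
  return DELAY_MS.STEADY_STATE;                                                  // 5 min
}

// Inside the Durable Object alarm handler the next run would then be scheduled with:
//   await this.ctx.storage.setAlarm(Date.now() + nextAlarmDelayMs(state));
```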
Pipeline progress (lastHourlyEmitTimestamp, lastDailyEmitDate) is persisted to SQLite and reloaded on initialize(), so the tracker resumes from where it left off after DO eviction. In-memory pending callback state (cursor position, buffered hourly callbacks) is intentionally not persisted — daily callbacks track progress via dailyDate on each tracking row, and hourly callbacks are idempotent, so re-emission after eviction is safe.
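A sketch of that persistence pattern, assuming the SqlStorage type from @cloudflare/workers-types; table and column names are illustrative:

```ts
// Persist and reload pipeline progress in the DO's SQLite storage.
interface PipelineProgress {
  lastHourlyEmitTimestamp: number; // ms
  lastDailyEmitDate: string;       // "YYYY-MM-DD"
}

function loadProgress(sql: SqlStorage): PipelineProgress | null {
  sql.exec(
    `CREATE TABLE IF NOT EXISTS pipeline_progress (
       id INTEGER PRIMARY KEY CHECK (id = 1),
       last_hourly_emit_ts INTEGER NOT NULL,
       last_daily_emit_date TEXT NOT NULL
     )`,
  );
  const rows = sql
    .exec(`SELECT last_hourly_emit_ts, last_daily_emit_date FROM pipeline_progress`)
    .toArray();
  if (rows.length === 0) return null; // first run: nothing to resume
  return {
    lastHourlyEmitTimestamp: rows[0].last_hourly_emit_ts as number,
    lastDailyEmitDate: rows[0].last_daily_emit_date as string,
  };
}

function saveProgress(sql: SqlStorage, p: PipelineProgress): void {
  sql.exec(
    `INSERT INTO pipeline_progress (id, last_hourly_emit_ts, last_daily_emit_date)
     VALUES (1, ?, ?)
     ON CONFLICT (id) DO UPDATE SET
       last_hourly_emit_ts = excluded.last_hourly_emit_ts,
       last_daily_emit_date = excluded.last_daily_emit_date`,
    p.lastHourlyEmitTimestamp,
    p.lastDailyEmitDate,
  );
}
```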
On-Demand Hourly Calculation
Hourly extremes are not tracked incrementally. At each hour boundary, the tracker replays the full price history for the past hour from PriceCollector and recalculates peak/bottom from scratch.
This is a deliberate tradeoff: 24x fewer DB write operations (no per-tick updates) in exchange for a burst of reads once per hour. It also eliminates race conditions — there's no incremental state that can become inconsistent.
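A minimal sketch of that replay, reusing the CoefficientSnapshot / accountValueAt shapes from the earlier sketch; the tick shape and the assumption that one snapshot covers the whole hour are simplifications:

```ts
// Hypothetical tick shape returned by PriceCollector for the past hour.
interface PriceTick {
  timestamp: number;               // ms
  prices: Record<string, number>;  // token -> price at this tick
}

interface HourlyExtremes { peak: number; bottom: number }

// Recompute peak/bottom from scratch over the hour's ticks; no incremental state.
function hourlyExtremes(
  snapshot: CoefficientSnapshot,
  ticks: PriceTick[],
): HourlyExtremes | null {
  let peak = -Infinity;
  let bottom = Infinity;
  for (const tick of ticks) {
    const value = accountValueAt(snapshot, tick.prices);
    if (value === null) return null; // missing price: stop rather than emit partial extremes
    peak = Math.max(peak, value);
    bottom = Math.min(bottom, value);
  }
  return ticks.length > 0 ? { peak, bottom } : null;
}
```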
Incremental Daily Tracking
Daily extremes are tracked incrementally via batch processing. Every 5-minute alarm loads trackings in cursor-based batches of 100 (keyset on user_id), fetches prices from lastProcessedTimestamp + 1 to now (capped at 300 seconds per batch), walks the ticks to update daily peak/bottom, and advances each user's cursor. Daily extremes are always up-to-date within a ~5-minute window.
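A sketch of one such batch under the constraints described above, assuming the SqlStorage type from @cloudflare/workers-types; table and column names are illustrative, not the actual schema:

```ts
interface TrackingRow {
  userId: string;
  lastProcessedTimestamp: number; // ms, last price tick already applied
  dailyPeak: number;
  dailyBottom: number;
}

const BATCH_SIZE = 100;
const MAX_RANGE_MS = 300_000; // cap each price fetch at 300 seconds per batch

// Keyset pagination on user_id; OFFSET would break once rows are mutated mid-scan.
function loadBatch(sql: SqlStorage, cursor: string | null): TrackingRow[] {
  return sql
    .exec(
      `SELECT user_id, last_processed_ts, daily_peak, daily_bottom
         FROM trackings
        WHERE user_id > ?
        ORDER BY user_id
        LIMIT ?`,
      cursor ?? "",
      BATCH_SIZE,
    )
    .toArray()
    .map((r) => ({
      userId: r.user_id as string,
      lastProcessedTimestamp: r.last_processed_ts as number,
      dailyPeak: r.daily_peak as number,
      dailyBottom: r.daily_bottom as number,
    }));
}

// Price range to fetch for one tracking: resume just past the cursor, capped at 300s.
function priceRangeFor(row: TrackingRow, now: number): { start: number; end: number } {
  const start = row.lastProcessedTimestamp + 1;
  return { start, end: Math.min(now, start + MAX_RANGE_MS) };
}
```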
Day Boundary Handling
At UTC midnight, daily emissions are triggered — but only after a precondition is met:
All trackings must have caught up to the end of the previous day. If any user's lastProcessedTimestamp is still behind yesterday's end, daily emission is deferred. This guarantees no peaks or bottoms are missed.
Once caught up, the tracker fetches the price at midnight (cached across alarm iterations) and emits onDailyExtremesFinalized() in batches of 500. Successful trackings are reset for the new day (extremes reinitialized to account value at midnight). If the subrequest limit is reached mid-batch, a keyset cursor (user_id) is stored and resumed next alarm.
Keyset pagination (WHERE user_id > cursor) is used here instead of OFFSET because resetTrackingForNewDay mutates the daily_date column — OFFSET-based pagination would skip or double-process rows.
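A sketch of the boundary gate and the keyset-paginated emission loop; the SQL shapes and the midnightValue helper are hypothetical, and the real callback batching may differ:

```ts
const EMIT_BATCH_SIZE = 500;

// Precondition: every tracking must have processed up to the end of the previous UTC day.
function allCaughtUp(sql: SqlStorage, prevDayEndMs: number): boolean {
  const row = sql
    .exec(`SELECT COUNT(*) AS behind FROM trackings WHERE last_processed_ts < ?`, prevDayEndMs)
    .one();
  return (row.behind as number) === 0;
}

// Emits daily extremes and resets rows for the new day. Returns the keyset cursor to
// store if the subrequest limit is hit, or null once the day is fully emitted.
async function emitDailyBatch(
  sql: SqlStorage,
  today: string,                              // "YYYY-MM-DD" of the new day
  midnightValue: (userId: string) => number,  // account value at midnight (cached price)
  emit: (userId: string) => Promise<void>,    // onDailyExtremesFinalized callback
  cursor: string | null,
): Promise<string | null> {
  const rows = sql
    .exec(
      `SELECT user_id FROM trackings
        WHERE daily_date < ? AND user_id > ?   -- daily_date filter skips already-reset rows
        ORDER BY user_id LIMIT ?`,
      today, cursor ?? "", EMIT_BATCH_SIZE,
    )
    .toArray();
  for (const row of rows) {
    const userId = row.user_id as string;
    await emit(userId); // may throw when the subrequest limit is hit; caller stores the cursor
    const v = midnightValue(userId);
    // Reset the tracking for the new day: extremes restart from the midnight value.
    sql.exec(
      `UPDATE trackings SET daily_date = ?, daily_peak = ?, daily_bottom = ? WHERE user_id = ?`,
      today, v, v, userId,
    );
    cursor = userId;
  }
  return rows.length < EMIT_BATCH_SIZE ? null : cursor;
}
```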
Coefficient Consistency
Coefficient updates flow from UserPortfolio to the tracker asynchronously. The system uses sequence numbers with gap detection to ensure eventual consistency:
- UserPortfolio increments a monotonic coefficient_seq and sends the snapshot to the tracker
- If the tracker detects a gap (received seq 5 but expected seq 4), it returns gapDetected: true
- UserPortfolio responds by reconciling — replaying all pending snapshots from its local pending_coefficient_updates table
Failed sends are persisted locally and retried via alarm with exponential backoff. Duplicates are detected by seq and silently dropped.
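A sketch of the seq check on the tracker side; whether an out-of-order snapshot is also stored while reconciliation is pending is not shown here:

```ts
interface UpdateResult { accepted: boolean; gapDetected: boolean }

function checkCoefficientSeq(lastSeq: number, incomingSeq: number): UpdateResult {
  if (incomingSeq <= lastSeq) {
    // Duplicate or replayed snapshot: silently drop.
    return { accepted: false, gapDetected: false };
  }
  if (incomingSeq > lastSeq + 1) {
    // e.g. stored seq 3, received seq 5: ask UserPortfolio to reconcile by replaying
    // everything in its pending_coefficient_updates table.
    return { accepted: false, gapDetected: true };
  }
  return { accepted: true, gapDetected: false }; // exactly lastSeq + 1
}
```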
Missing Price Data Handling
When calculating extremes and a price is missing for a token with a non-zero coefficient, the tracker stops processing immediately at that timestamp. It does not advance the cursor past missing data, ensuring no peak/bottom is silently skipped.
To distinguish "data hasn't arrived yet" from "data will never arrive," the tracker uses the finalizedUpTo cursor from PriceCollector:
- stoppedAtTimestamp <= finalizedUpTo → the gap is permanent; advance the cursor past it
- stoppedAtTimestamp > finalizedUpTo → data may still arrive; wait (60s retry)
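The same decision as a small sketch (names hypothetical):

```ts
type GapAction = "advance_past_gap" | "wait_for_data";

// finalizedUpTo is PriceCollector's cursor: data at or before it will never change.
function classifyGap(stoppedAtTimestamp: number, finalizedUpTo: number): GapAction {
  return stoppedAtTimestamp <= finalizedUpTo
    ? "advance_past_gap" // permanent gap: move the cursor past it
    : "wait_for_data";   // data may still arrive: retry in ~60s
}
```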
Account Version Staleness
When a user resets their account (new challenge), the accountVersion increments. During batch processing, the tracker batch-loads current versions and skips any tracking whose version no longer matches. This prevents ghost updates for stale accounts.
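A sketch of that filter during batch processing; the tracking shape and version source are illustrative:

```ts
interface Tracking { userId: string; accountVersion: number }

function liveTrackings(
  batch: Tracking[],
  currentVersions: Map<string, number>, // batch-loaded current accountVersion per user
): Tracking[] {
  // A tracking whose stored version no longer matches belongs to a reset account
  // (new challenge) and must not produce ghost updates.
  return batch.filter((t) => currentVersions.get(t.userId) === t.accountVersion);
}
```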
Failed Callback Retry
If a callback to UserPortfolio fails (network error, DO unavailable), it's persisted to the failed_callbacks table and retried with exponential backoff (1m, 5m, 15m, 1h, 2h). After 5 attempts, the callback is dropped. For daily callbacks specifically, a successful retry also advances daily_date to today to prevent re-emission.
Failed callback retries only run when there's no other work to do (no pending batches or boundary processing).
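The retry schedule as a sketch (constant names hypothetical):

```ts
const BACKOFF_MS = [60_000, 300_000, 900_000, 3_600_000, 7_200_000]; // 1m, 5m, 15m, 1h, 2h
const MAX_ATTEMPTS = BACKOFF_MS.length; // drop the callback after 5 attempts

// Returns the next retry time in ms, or null if the callback should be dropped.
function nextRetryAt(attempt: number, now: number): number | null {
  if (attempt >= MAX_ATTEMPTS) return null;
  return now + BACKOFF_MS[attempt];
}
```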