# Observability & statistics
Omega is built so behavior is traceable: one channel, typed intents, flows that emit expressions, and agents that react on the bus. That shape is exactly what good statistics and observability need — not a black box of scattered setState calls.
This page explains what to measure, what you already get in the package, and how to layer your own metrics without polluting production builds.
## Example statistical dashboard (illustrative)
The diagram below is not live data from your app: it shows the kinds of charts that fit Omega naturally — category mix on the channel, intent → expression latency bands, an events/minute curve, and a flow snapshot row. You can reproduce these from traces, inspector exports, or custom timers around `handleIntent` / `emitExpression`.
## What each panel means
The figure is a layout example with placeholder numbers. In a real setup you would compute the same views from traces, inspector exports, or your own timers.
- Channel events (1 min) — A count mix of everything that crossed the Omega channel in the last minute, grouped by rough kind: Intents (mostly UI → flow), Agent (agent / bus traffic you attribute to agents), Nav (`navigate.*` and related), and Other (everything else). Spikes in one bucket often point to a chatty widget, a loop, or a navigation storm.
- Intent to expression (ms) — Latency bands for “how long from handling an intent until the owning flow emits the next `OmegaFlowExpression`”. p50 is the typical case; p95 catches tail latency users still feel. This is the closest single chart to “Omega felt speed” for the screen.
- Events per minute (session excerpt) — A time series of channel throughput (events per minute) over a short window (here labeled start → +10 min). Use it to spot bursts, plateaus, or regressions after a release.
- Active flows (snapshot) — A point-in-time row of flows that exist in the session: each pill is a flow name plus a coarse state (for example running vs idle vs sleeping). It answers “what is alive right now?” — useful next to the inspector’s flow list.
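As a concrete starting point, the latency bands can be computed with a nearest-rank percentile over intent → expression latencies pulled from a trace export. A minimal pure-Dart sketch; the sample numbers are placeholders, and how you extract the latencies depends on your trace format:

```dart
/// Nearest-rank percentile over a list of latencies in milliseconds.
/// p is a fraction: 0.50 for p50, 0.95 for p95.
double percentile(List<int> samplesMs, double p) {
  if (samplesMs.isEmpty) return 0;
  final sorted = [...samplesMs]..sort();
  // Index of the p-th fraction of the sorted samples.
  final index = ((sorted.length - 1) * p).round();
  return sorted[index].toDouble();
}

void main() {
  // Hypothetical latencies (ms) pulled from a saved session.
  final latencies = [12, 15, 14, 18, 250, 16, 13, 17, 19, 14];
  print('p50 = ${percentile(latencies, 0.50)} ms'); // typical case
  print('p95 = ${percentile(latencies, 0.95)} ms'); // tail latency
}
```

Computing both bands per flow, rather than globally, is what lets you tell a flow-logic regression apart from slow agent IO.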
## Why this matters for your product
Users do not feel “Omega’s speed” as an abstract score. They feel time to feedback after a tap, consistency under load, and whether bugs are explainable after the fact. Omega’s architecture makes those questions answerable:
- Intent → expression — How long from a UI intent until the owning flow emits the next `OmegaFlowExpression`? That is the closest thing to “Omega latency” for the screen.
- Event throughput — How many channel events fire per second during a stress path? Spikes often reveal accidental loops or chatty agents.
- Flow state — Which flows are running, sleeping, or idle? Misconfigured activation shows up immediately in a snapshot.
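The intent → expression stopwatch can be sketched as a small helper. Here the expression `Stream` and the `sendIntent` callback are stand-ins for however your flow exposes these, not omega_architecture APIs:

```dart
import 'dart:async';

/// Time from sending an intent until the next expression arrives.
/// Subscribe to the stream *before* emitting, or a fast flow could
/// answer before the listener exists.
Future<Duration> timeToExpression(
  Stream<Object?> expressions,
  void Function() sendIntent,
) async {
  final stopwatch = Stopwatch()..start();
  final next = expressions.first; // subscribe first
  sendIntent();
  await next;
  return stopwatch.elapsed;
}

void main() async {
  // Stand-in wiring: a controller plays the role of the flow, answering
  // roughly 10 ms after the "intent" is sent.
  final controller = StreamController<Object?>();
  final elapsed = timeToExpression(controller.stream, () {
    Timer(const Duration(milliseconds: 10), () => controller.add('expression'));
  });
  print('intent → expression took ${(await elapsed).inMilliseconds} ms');
  await controller.close();
}
```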
You can frame these ideas for stakeholders as observability: the same discipline that sells dedicated observability platforms, but mapped to flows and agents instead of generic logs alone.
## What you get today (no extra code)
These capabilities ship with omega_architecture and the docs site. They are aimed at debug and internal builds.
| Capability | What it gives you |
|---|---|
| Inspector (`OmegaInspector`, launcher, VM page) | Recent channel events (default 30 visible in the overlay), payloads as JSON, and a snapshot of all flows — id, state, last expression. |
| Time travel | Record & replay sessions: ordered events you can step through — ideal for “what happened in the five seconds before the bug?” |
| `omega trace` / CLI | Export and inspect traces from the terminal — good for statistics over saved sessions (counts, ordering, which intent fired). |
| Contracts (debug) | Contracts validate that intents and expressions match what the flow declared — fewer surprises when you aggregate behavior. |
None of this requires you to trust magic dashboards: you can see the same events your flows and agents see.
## Statistics that fit Omega (recommended set)
When you design a metrics story (even if the first version is a spreadsheet from a trace file), prioritize these:
- UI → flow response — Stopwatch from `handleIntent` (or from your widget before emit) until the next expression your screen cares about. Per-flow percentiles (p50 / p95) tell you if a regression is in flow logic or in network / agent IO.
- Channel volume — Events per second (global or per namespace). Sudden doubling after a release is a classic smell.
- Agent behavior — Count of `onAction` calls, errors, or retries per session — especially if agents wrap APIs or local storage.
- Navigation — Count of `navigate.*` intents and failures; pairs well with the navigation guide.
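For channel volume, a rolling-window counter is enough to start. A plain-Dart sketch; where you call `record` depends on how you observe the channel (an agent, a wrapper around your emit call, or a trace post-process):

```dart
/// Counts events inside a sliding time window. Call [record] for every
/// channel event you see; read [eventsInWindow] for the current count.
class RollingEventCounter {
  final Duration window;
  final List<DateTime> _timestamps = [];

  RollingEventCounter({this.window = const Duration(minutes: 1)});

  void record([DateTime? now]) {
    now ??= DateTime.now();
    _timestamps.add(now);
    _prune(now);
  }

  int eventsInWindow([DateTime? now]) {
    _prune(now ?? DateTime.now());
    return _timestamps.length;
  }

  // Drop timestamps that fell out of the window.
  void _prune(DateTime now) {
    final cutoff = now.subtract(window);
    _timestamps.removeWhere((t) => t.isBefore(cutoff));
  }
}
```

Keeping one counter per namespace (e.g. one for `navigate.*`) turns the same class into the per-bucket view from the dashboard example above.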
Flutter’s own DevTools remains the place for frame timing and jank; Omega-focused stats complement that instead of replacing it.
## Rolling your own (thin and safe)
For production, keep overhead minimal:
- Wrap hot paths in `kDebugMode` or a compile-time flag (e.g. `assert`-only blocks, or a `Telemetry` interface with a no-op implementation in release).
- Prefer sampling (e.g. one in N sessions) for detailed timelines.
- Attach correlation ids on `OmegaEvent` metadata if you export to analytics — the channel is a natural choke point to stamp an id once.
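The `Telemetry` seam mentioned above can be sketched like this. The interface shape is an illustration, not an omega_architecture API; `bool.fromEnvironment('dart.vm.product')` is a real Dart compile-time check, and in a Flutter app you would typically gate on `kDebugMode` from `package:flutter/foundation.dart` instead:

```dart
/// Telemetry seam: debug builds log, release builds get a no-op that the
/// compiler can strip away along with the call sites' cost.
abstract class Telemetry {
  void timing(String name, Duration elapsed);
  void count(String name, [int by = 1]);
}

class NoopTelemetry implements Telemetry {
  const NoopTelemetry();
  @override
  void timing(String name, Duration elapsed) {}
  @override
  void count(String name, [int by = 1]) {}
}

class DebugTelemetry implements Telemetry {
  const DebugTelemetry();
  @override
  void timing(String name, Duration elapsed) =>
      print('[telemetry] $name: ${elapsed.inMilliseconds} ms');
  @override
  void count(String name, [int by = 1]) => print('[telemetry] $name += $by');
}

// Chosen once at startup; in Flutter, prefer kDebugMode here.
const bool _release = bool.fromEnvironment('dart.vm.product');
final Telemetry telemetry =
    _release ? const NoopTelemetry() : const DebugTelemetry();

void main() {
  telemetry.count('channel.event');
  telemetry.timing('intent_to_expression', const Duration(milliseconds: 14));
}
```

Because call sites only ever see the `Telemetry` interface, swapping in a sampling or exporting implementation later does not touch flow or agent code.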
The channel & events and data flow guides show where to hook without breaking the model.
## See also
- Inspector & VM Service — overlay, dialog, browser, VM Service
- Time travel & traces — recording and replay
- Data flow — end-to-end path from UI to agent
- Flutter widgets — `OmegaInspector`, `OmegaBuilder`, debug shell
If you later want first-class histograms inside the inspector (e.g. rolling latency of intent → expression), that is a natural extension of the same pipeline — the architecture is already event-centric and flow-centric, which is the hard part.