🔬 Under the Hood

Music that listens before it plays

Omix runs a real-time adaptive audio engine built in Rust. It watches how you work, scores your focus level, and reshapes every layer of the music to match, without ever sending data off your machine.

Six systems, one seamless loop

Every few hundred milliseconds, Omix runs through this pipeline. You never notice it. The music just fits.

Sensing → Context → Scoring → Playback → Atmosphere → Control
1

Activity Detection

Omix listens to system-level keyboard and mouse events to understand how you're working. But raw event counting isn't enough. Holding down the spacebar looks very different from actually typing a sentence.

The engine tracks keystroke variety: it needs to see multiple distinct keys in a short window before it counts as real typing. This means holding a single key or tapping a modifier doesn't register as productive input. Mouse movement requires actual displacement, and scroll events are throttled to avoid inflating activity during casual browsing.
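
In sketch form, that variety check might look like this (the two-second window, four-key threshold, and names are illustrative, not Omix's actual tuning):

```rust
use std::collections::HashSet;
use std::time::{Duration, Instant};

/// Sliding-window keystroke-variety check (illustrative sketch).
struct TypingDetector {
    window: Duration,
    min_distinct_keys: usize,
    events: Vec<(Instant, u32)>, // (timestamp, key code)
}

impl TypingDetector {
    fn new() -> Self {
        Self {
            window: Duration::from_secs(2),
            min_distinct_keys: 4,
            events: Vec::new(),
        }
    }

    /// Record a key event; returns true only once enough *distinct*
    /// keys appear in the window, so a held spacebar or a repeated
    /// modifier never counts as real typing.
    fn on_key(&mut self, key_code: u32) -> bool {
        let now = Instant::now();
        self.events.push((now, key_code));
        // Drop events that have aged out of the window.
        self.events.retain(|(t, _)| now.duration_since(*t) <= self.window);
        let distinct: HashSet<u32> = self.events.iter().map(|&(_, k)| k).collect();
        distinct.len() >= self.min_distinct_keys
    }
}
```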

The result is a clean activity signal that distinguishes between focused work and idle fidgeting, without recording what you actually type.

2

Context Awareness

Not all screen time is equal. Writing code in your IDE is different from scrolling Twitter. Omix knows which application has focus and factors that into its scoring.

Over a hundred apps are mapped to productivity weights. Development tools and writing apps score high, entertainment scores low, everything else falls somewhere in between. The engine also watches how often you switch apps. Rapid context-switching is a signal that focus is breaking down, and the score adjusts accordingly.
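
A minimal sketch of how such a weight table and switch penalty could combine; the app names, weights, and penalty curve below are invented for illustration:

```rust
use std::collections::HashMap;

/// Illustrative productivity weights per app (Omix's real table
/// maps over a hundred apps; these values are assumptions).
fn app_weights() -> HashMap<&'static str, f32> {
    HashMap::from([
        ("Visual Studio Code", 0.9),
        ("Xcode", 0.9),
        ("Obsidian", 0.8),
        ("Slack", 0.5),
        ("Twitter", 0.1),
    ])
}

/// Blend the focused app's weight with a context-switching penalty:
/// the more app switches in the last minute, the lower the signal.
fn context_score(app: &str, switches_per_minute: f32) -> f32 {
    let base = *app_weights().get(app).unwrap_or(&0.5); // unknown apps land mid-scale
    let switch_penalty = (switches_per_minute / 10.0).min(1.0) * 0.4; // assumed curve
    (base - switch_penalty).clamp(0.0, 1.0)
}
```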

This layer gives Omix something no other focus music app has: an understanding of what you're doing, not just whether your hands are moving.

3

Productivity Scoring

Input signals and app context feed into a scoring engine that produces a single number: your current productivity level, from zero to one. This score drives everything downstream.

The engine uses scene-based hysteresis to keep transitions smooth. Rather than jumping between states, it moves through four natural phases (Idle, Warming Up, Flow, and Deep Focus) with minimum dwell times at each level. Rising into a higher state is fast (you earned it). Falling back is deliberately slow (a quick pause shouldn't kill your momentum).
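
Here's a compact sketch of that hysteresis, with guessed thresholds and dwell times standing in for Omix's real tuning:

```rust
use std::time::{Duration, Instant};

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Phase { Idle, WarmingUp, Flow, DeepFocus }

/// Scene-based hysteresis sketch: minimum dwell times, fast
/// promotion, slow demotion. All constants are illustrative.
struct PhaseTracker {
    phase: Phase,
    entered_at: Instant,
}

impl PhaseTracker {
    fn target_for(score: f32) -> Phase {
        match score {
            s if s >= 0.75 => Phase::DeepFocus,
            s if s >= 0.50 => Phase::Flow,
            s if s >= 0.25 => Phase::WarmingUp,
            _ => Phase::Idle,
        }
    }

    fn update(&mut self, score: f32) {
        let target = Self::target_for(score);
        // Rising is fast; falling requires a much longer dwell so a
        // quick pause can't collapse the phase.
        let required = if target > self.phase {
            Duration::from_secs(5)
        } else {
            Duration::from_secs(45)
        };
        if target != self.phase && self.entered_at.elapsed() >= required {
            self.phase = target;
            self.entered_at = Instant::now();
        }
    }
}
```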

There's also a focused reading detector: a state machine that recognizes when you're reading (scrolling with minimal keyboard activity) and holds the score steady instead of letting it decay. Reading is focus too.
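
Collapsed into a single function rather than a full state machine, the hold behavior might look like this (the thresholds are assumptions):

```rust
/// Focused-reading sketch: if the user is scrolling but barely
/// typing, hold the score instead of letting it decay.
fn next_score(current: f32, scrolls_per_min: f32, keys_per_min: f32, decay: f32) -> f32 {
    let reading = scrolls_per_min >= 3.0 && keys_per_min <= 5.0;
    if reading {
        current // hold steady: reading is focus too
    } else {
        (current - decay).max(0.0)
    }
}
```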

Focus Phases: Idle → Warming Up → Flow → Deep Focus (Ambient → Full orchestration)
4

Adaptive Audio

This is where the score becomes sound. An orchestrator maps your productivity level to audio parameters in real time, controlling four dimensions of the music simultaneously.

Stem layering

Each track is split into four stems: bass, instruments, melody, and drums. They fade in at different productivity thresholds. At idle, you hear a minimal bed. At peak focus, everything is playing. Drums only appear when you're deep in flow.
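
A sketch of threshold-based stem gating, with invented fade ranges; each stem ramps up over a slice of the score rather than snapping on:

```rust
/// Illustrative fade-in ranges per stem (start, end of ramp).
const STEM_RANGES: [(&str, f32, f32); 4] = [
    ("bass",        0.00, 0.20), // present almost immediately
    ("instruments", 0.20, 0.45),
    ("melody",      0.45, 0.70),
    ("drums",       0.70, 0.90), // only in deep flow
];

/// Gain for one stem: 0 below the ramp, 1 above it, linear between.
fn stem_gain(score: f32, start: f32, end: f32) -> f32 {
    ((score - start) / (end - start)).clamp(0.0, 1.0)
}

/// Gains for all four stems at the current productivity score.
fn all_stem_gains(score: f32) -> [(&'static str, f32); 4] {
    STEM_RANGES.map(|(name, start, end)| (name, stem_gain(score, start, end)))
}
```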

EQ shaping

A low-pass filter opens up as productivity rises, letting more high-frequency detail through. The result: music sounds warm and muted when you're idle, crisp and present when you're locked in.

Spatial depth

Reverb is at its highest when you're idle, so the music feels distant and ambient. As you enter flow, the reverb pulls back to zero. The music becomes direct and immediate, matching your mental sharpness.

Dynamic range

A compressor adapts as more stems become active, keeping loudness consistent whether one layer is playing or all four. You never need to touch the volume knob.
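
One way the three mappings above (filter cutoff, reverb mix, compression) could all be derived from a single score; every constant here is an assumption:

```rust
/// Illustrative per-tick audio parameters derived from the score.
struct AudioParams {
    lowpass_cutoff_hz: f32, // filter opens as focus rises
    reverb_mix: f32,        // spatial depth fades out in flow
    compressor_ratio: f32,  // firmer as more stems stack up
}

fn params_for(score: f32, active_stems: usize) -> AudioParams {
    AudioParams {
        // Sweep the cutoff in log-frequency space so it sounds even:
        // ~800 Hz (muted) at idle, up to ~18 kHz (open) at full focus.
        lowpass_cutoff_hz: 800.0 * (18_000.0f32 / 800.0).powf(score),
        // Full reverb at idle, none in flow.
        reverb_mix: 1.0 - score,
        // More active layers, firmer compression to hold loudness steady.
        compressor_ratio: 1.5 + active_stems as f32 * 0.5,
    }
}
```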

All transitions use perceptually linear interpolation. Changes are calculated in amplitude space rather than in decibels, so fades sound natural to your ear instead of mathematically even. The entire audio engine is built on Kira, a Rust audio library designed for real-time, low-latency playback.
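
The amplitude-space math is simple enough to show directly; this is generic gain math, not Kira's API:

```rust
/// Convert a level in dB to linear amplitude.
fn db_to_amp(db: f32) -> f32 {
    10f32.powf(db / 20.0)
}

/// Convert linear amplitude back to dB.
fn amp_to_db(amp: f32) -> f32 {
    20.0 * amp.log10()
}

/// A fade computed in amplitude space: convert the endpoint levels
/// to amplitudes, interpolate there, then convert back.
/// `t` runs from 0.0 to 1.0 over the fade.
fn fade_level_db(from_db: f32, to_db: f32, t: f32) -> f32 {
    let (a, b) = (db_to_amp(from_db), db_to_amp(to_db));
    amp_to_db(a + (b - a) * t)
}
```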

5

Ambient Scenes

Running alongside the music engine is a separate ambient audio engine that layers environmental soundscapes underneath the music. Choose from six scenes: Mountain Rain, Forest Camp, Buzzy Cafe, Flowing Creek, Chalet Fireplace, and Spaceship Hum.

The ambient engine uses gapless looping and smooth crossfades when you switch scenes, so there's never a jarring cut. Like the music, the ambient layer's volume is driven by the orchestrator. When you're idle, ambient sounds are at their fullest, creating a calm, enveloping backdrop. As your productivity rises, they fade back to let the music take over.
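
A crossfade sketch, assuming an equal-power curve (the engine only promises a smooth, gapless switch; the curve choice is an assumption):

```rust
/// Equal-power crossfade between the outgoing and incoming scene.
/// `t` runs from 0.0 (old scene only) to 1.0 (new scene only).
fn crossfade_gains(t: f32) -> (f32, f32) {
    let theta = t.clamp(0.0, 1.0) * std::f32::consts::FRAC_PI_2;
    (theta.cos(), theta.sin()) // (outgoing scene, incoming scene)
}
```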

6

Your Controls

The adaptive engine handles the hard work, but some things are personal preference. Omix gives you dials to fine-tune the experience:

Mix Dial

Blend between ambient and music. Slide toward ambient for a more environmental feel, or toward music for a stronger beat. The adaptive engine works with your preference, not against it.
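
As a sketch, the dial could be an equal-power blend so overall loudness stays steady across its range (the curve is an assumption):

```rust
/// Mix dial sketch: one 0..1 value trades ambient against music.
fn mix_gains(dial: f32) -> (f32, f32) {
    let d = dial.clamp(0.0, 1.0);
    ((1.0 - d).sqrt(), d.sqrt()) // (ambient gain, music gain)
}
```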

Neuro Attunement

Control how much binaural beat presence you want, or turn it off entirely. Some people love the subtle pulsing; others prefer pure music. Both are valid.

Genre & Scene

Pick your music genre (Deep House, Lofi Beats, Post Rock, Jazz Fusion) and ambient scene independently. The engine adapts the same way regardless of what you choose.

7

The Neuro Layer

Underneath the music, Omix generates subtle binaural beats: each ear receives a slightly different frequency, and your brain perceives the difference as a gentle pulsing tone. Research suggests this can help guide brainwave entrainment toward states associated with focus and calm.

The neuro engine is adaptive too. The carrier frequency, beat rate, and volume all shift based on your productivity level. During idle states, slower beat frequencies encourage a relaxed baseline. As you enter flow, the frequency rises to support sustained attention.
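
In sketch form, assuming an illustrative score-to-beat mapping:

```rust
use std::f32::consts::TAU;

/// Assumed mapping: slower beats at idle, faster in deep flow.
fn beat_hz_for(score: f32) -> f32 {
    6.0 + score * 8.0 // ~6 Hz at idle up to ~14 Hz at full focus
}

/// One stereo sample at time `t` seconds: each ear gets a sine at a
/// slightly different frequency; the difference is the perceived beat.
fn binaural_sample(t: f32, carrier_hz: f32, beat_hz: f32) -> (f32, f32) {
    let left = (TAU * carrier_hz * t).sin();
    let right = (TAU * (carrier_hz + beat_hz) * t).sin();
    (left, right)
}
```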

A follow system monitors the volume of the main music bus and adjusts the binaural layer to stay perceptible but never dominant. It sits below conscious awareness, like a subtle guide rather than a distraction.
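
A minimal sketch of such a follower, with assumed ratio and clamp values:

```rust
/// Ease the binaural gain toward a fixed fraction of the music bus
/// level, clamped so the layer stays perceptible but never dominant.
fn follow_gain(current: f32, music_bus_level: f32) -> f32 {
    let target = (music_bus_level * 0.15).clamp(0.02, 0.10);
    current + (target - current) * 0.05 // slow one-pole smoothing per tick
}
```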

In December 2025, Stanford's SHAPE Lab published research showing that real-time audio feedback anchors attention more effectively than static sound, with particularly strong results for people with ADHD. Omix's neuro layer works on this same principle: continuous, responsive feedback rather than a fixed signal.

8

Privacy by Architecture

Omix monitors keyboard events, mouse activity, and which app has focus. That sounds invasive, so here's exactly what happens with that data.

All processing is local. Activity detection runs entirely on your machine. No cloud. No telemetry. No analytics on your work habits.
No keystrokes are recorded. The engine counts key events and measures variety. It never captures which keys you press.
App names stay on your machine. The focused app is used locally for scoring and never transmitted anywhere.
No screen recording or screenshots. Omix never accesses your screen content. It only reads system-level input events and window metadata.

This isn't a policy decision. It's an architectural one. The data never exists in a form that could be exfiltrated. There's no server to breach because the data is never on one.

Hear the difference

Try Omix free for 7 days. No credit card required.
Mac & Windows.