The false equivalence
Every few months, a new privacy scandal lands. A data broker sells location data from a prayer app. An ad network fingerprints children across devices. A social platform lets advertisers target people in crisis. The response is always the same: a brief public outcry, a regulatory fine that amounts to a rounding error, and a return to business as usual.
The industry shrugs because it treats all uses of behavioural data as morally equivalent. If you are using data about what people do, the logic goes, you are in the surveillance business. This is wrong. It conflates the mechanism with the intent. It ignores architecture. And it lets the worst actors hide behind the same language as the best.
What surveillance advertising actually does
Surveillance advertising works by identifying individuals across contexts. It follows a person from a health forum to a news site to a shopping app, building a persistent profile that can be sold, shared, or leaked. The value is in the identity graph. The more a system knows about who you are, the more it can charge for access to you.
This model requires data to leave the device. It requires third-party cookies, device fingerprints, email hashes, or mobile advertising IDs. It requires a supply chain of intermediaries, each of which takes a cut and each of which introduces a new vector for misuse. The person being profiled has no visibility into who holds their data, where it goes, or how long it persists.
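The cross-context join described above is the core of the identity graph. As a toy illustration (the identifiers and field names here are hypothetical, not any real broker's schema), a single shared key is enough to stitch unrelated contexts into one dossier:

```python
# A toy identity graph: records from unrelated contexts, joined on a
# shared identifier (here an invented mobile advertising ID, "maid").
records = [
    {"maid": "ab12", "context": "health_forum", "event": "read:anxiety"},
    {"maid": "ab12", "context": "news_site",    "event": "read:politics"},
    {"maid": "ab12", "context": "shopping_app", "event": "view:sleep_aids"},
]

# Group every observation by identifier. One key now unlocks a
# cross-context profile: that linkage is both the value and the risk.
profile = {}
for r in records:
    profile.setdefault(r["maid"], []).append((r["context"], r["event"]))

print(profile["ab12"])
```

The join itself is trivial; the harm comes from the fact that anyone holding the key, at any point in the supply chain, can perform it.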
The incentive structure is adversarial. The system works best when people do not know how much is known about them. Transparency is a threat to the business model.
What behavioural intelligence does differently
Behavioural intelligence starts from a different premise. The goal is not to identify a person. The goal is to understand a pattern. What does this behaviour mean? What does this sequence of actions suggest about intent? What is the right response to this signal, right now, for this context?
At Intent, the processing happens on the device. Raw behavioural data never leaves the phone. Instead, the on-device model interprets the signals locally and produces a privacy twin: a mathematical representation of behavioural patterns that carries no personally identifiable information. The twin can be used to match intent to offers, content, or experiences. But it cannot be reverse-engineered to identify the person.
This is not a privacy workaround. It is a fundamentally different architecture. The data never moves. The intelligence does.
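Intent's actual twin format is not public, so treat the following as a minimal sketch of the idea rather than the real implementation: raw behavioural events are reduced on-device, via one-way hashing, to a fixed-size pattern vector that can be compared against intent profiles but cannot be unwound into the original events.

```python
import hashlib
import math

DIM = 64  # hypothetical twin dimensionality, chosen for illustration

def privacy_twin(events, dim=DIM):
    """Reduce raw behavioural events to a fixed-size pattern vector.

    Only this vector would ever leave the device. The raw events
    (URLs, timestamps, app names) stay local and are discarded.
    """
    vec = [0.0] * dim
    for event in events:
        # Hash each event into a bucket. The hash is one-way, so
        # individual events cannot be recovered from the vector.
        h = int(hashlib.sha256(event.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    # L2-normalise so the twin encodes the shape of the pattern,
    # not the volume of activity.
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def similarity(a, b):
    """Cosine similarity between twins: match intent without identity."""
    return sum(x * y for x, y in zip(a, b))
```

Matching then happens between vectors, not people: an advertiser's intent profile is itself a twin, and the comparison reveals pattern overlap while the events behind either vector stay unrecoverable.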
Ethics is not the trade-off. It is the advantage.
The surveillance model assumes a trade-off: more privacy means less effectiveness. This assumption is empirically false. Intent signals processed on-device are fresher, more contextual, and more accurate than stale third-party segments assembled from week-old cookie data.
A person browsing travel content on a Tuesday evening after checking their calendar is exhibiting a real-time intent signal. A third-party cookie that says they visited a travel site three weeks ago is noise. The on-device approach sees the live signal. The surveillance approach sees the echo.
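The gap between the live signal and the echo can be made concrete with a simple exponential decay on signal age. The half-life below is an arbitrary illustration, not a figure from Intent:

```python
def signal_weight(age_hours, half_life_hours=72.0):
    """Exponentially decay a signal's weight with age.

    half_life_hours is an illustrative choice: after 72 hours a
    signal counts for half as much. A real system would tune the
    curve per use case.
    """
    return 0.5 ** (age_hours / half_life_hours)

# A live on-device signal, fifteen minutes old:
live = signal_weight(0.25)
# A third-party cookie observation from three weeks ago:
stale = signal_weight(21 * 24)

print(f"live:  {live:.3f}")   # ~1.0
print(f"stale: {stale:.3f}")  # ~0.008
```

Under any reasonable decay curve, the three-week-old observation contributes almost nothing, which is the quantitative version of "the surveillance approach sees the echo".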
Intent clients consistently see higher engagement, higher conversion, and lower waste when they move from surveillance-derived segments to behavioural intelligence. The ethical approach is also the effective one. This is not a coincidence. When you understand what someone actually wants, you do not need to manipulate them into wanting it.
The architecture is the argument
Privacy policies are promises. Architectures are proofs. Any company can write a privacy policy that says it respects user data. Very few companies build systems where misuse is structurally impossible.
When data never leaves the device, there is no database to breach. There is no third party to subpoena. There is no data broker to sell to. The architecture itself enforces the ethics. This is what separates behavioural intelligence from surveillance advertising. Not the stated intent. The structural reality.
The industry needs better distinctions
Regulators, journalists, and consumers tend to paint all data use with the same brush. This is understandable given the industry's track record. But it is also counterproductive. If the company that processes everything on-device is treated the same as the company that sells data to brokers, there is no incentive to build the better system.
The distinction between surveillance and intelligence is not semantic. It is architectural. It is measurable. And it matters. Companies that understand behaviour without identifying individuals are building something genuinely different. The language we use should reflect that.
Behavioural intelligence is not surveillance done politely. It is a different system with different incentives, different architectures, and different outcomes. The sooner the industry recognises the distinction, the sooner it can move past the false choice between personalisation and privacy.