The convergence problem
Open any major streaming service in any country. The recommendations look remarkably similar. The same genres surface. The same patterns of content are promoted. The same engagement mechanics drive discovery. Different people in different cultures with different histories are being guided toward the same content by the same algorithms optimising for the same metrics.
This is not a conspiracy. It is a structural outcome. When AI systems are trained on the same data, optimised for the same objectives, and deployed at the same scale, they produce convergent outputs. The models learn what works on average. And what works on average is, by definition, average.
How AI narrows choice
Recommendation algorithms work by finding patterns in aggregate behaviour. If most people who watch film A also watch film B, the system recommends film B to anyone who watches film A. This is useful. It is also self-reinforcing. Film B gets more views because it is recommended more. It gets recommended more because it gets more views. The loop tightens.
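The loop described above can be seen in a toy simulation. This is not any real recommender, just a minimal rich-get-richer sketch: items start with equal view counts, and each step recommends an item with probability proportional to its current views. Equal starting points end up with very unequal shares.

```python
import random

def simulate_feedback_loop(n_items=20, n_steps=5000, seed=42):
    """Toy recommender: suggest items in proportion to their current
    view counts, so every view makes future recommendation more likely."""
    rng = random.Random(seed)
    views = [1] * n_items  # every item starts with exactly one view
    for _ in range(n_steps):
        # The more an item has been viewed, the more it is recommended,
        # and the more it is recommended, the more it is viewed.
        chosen = rng.choices(range(n_items), weights=views)[0]
        views[chosen] += 1
    return views

views = simulate_feedback_loop()
top_share = max(views) / sum(views)
uniform_share = 1 / len(views)  # what an even split would look like
```

After a few thousand steps, `top_share` is well above `uniform_share` even though no item was better than any other. The inequality comes entirely from the loop, which is the structural point the section makes.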
Over time, the system converges on a narrow set of options that satisfy the average taste. Niche, local, and unusual content is surfaced less because it does not match the dominant pattern. The tail shrinks. The head grows. Diversity of consumption declines even as the catalogue expands.
This dynamic is not limited to entertainment. It operates in news, in shopping, in music, in search results. Every system that optimises for engagement or click-through rate is subject to the same convergence pressure. The popular becomes more popular. The different becomes invisible.
The cultural cost
Human cultures are maintained by differences. Language, music, food, storytelling, values. These develop over centuries in specific places with specific histories. They are inherently local and particular. When AI systems push billions of people toward the same content and the same patterns of consumption, they erode the substrate on which cultural diversity depends.
A teenager in Lagos and a teenager in Lisbon are exposed to increasingly similar cultural inputs. Their musical tastes converge. Their fashion references converge. Their aspirations converge. The AI did not intend this. It simply optimised for engagement across the largest possible audience, and engagement, at scale, favours the universal over the particular.
The individual cost
There is a more intimate version of the same problem. AI personalisation often narrows individual taste rather than expanding it. If you listen to jazz, the algorithm gives you more jazz. It rarely suggests that you might enjoy West African highlife or Brazilian bossa nova, even though the musical connections are real. It optimises for what you already like, not for what you might discover.
Over time, the filter bubble effect applies not just to information but to identity. People become narrower versions of themselves, reinforced by algorithms that reward consistency and penalise exploration. The serendipity that produces growth is algorithmically disfavoured because it is unpredictable.
The alternative: understanding individuals, not averaging them
The homogenisation risk is not inherent to AI. It is inherent to a specific kind of AI: systems that model populations and optimise for averages. There is a different approach: systems that model individuals and optimise for understanding.
Intent processes behavioural signals on-device. The model does not compare a person to a population. It reads the individual’s behaviour directly. The patterns it detects are specific to that person, in that context, at that moment. There is no averaging. There is no aggregation. The intelligence is individual.
This matters because individual behaviour is far more varied than aggregate models suggest. A person who reads poetry and follows Formula 1 and researches gardening equipment does not fit neatly into a segment. Traditional systems would pick the dominant signal and suppress the rest. On-device intelligence sees the full pattern and can respond to each dimension independently.
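The difference between the two approaches can be sketched in a few lines. The interest scores and dimension names below are hypothetical, invented for illustration; the point is structural: segment-style modelling collapses a person to their single strongest signal, while per-dimension modelling keeps every genuinely active interest.

```python
# Hypothetical interest scores for one person across unrelated dimensions.
interests = {"poetry": 0.8, "formula_1": 0.9, "gardening": 0.7, "crypto": 0.1}

def segment_label(interests):
    """Segment-style modelling: reduce the person to the single
    dominant signal and suppress everything else."""
    return max(interests, key=interests.get)

def active_dimensions(interests, threshold=0.5):
    """Individual-style modelling: keep every dimension that is
    genuinely active, so each can be responded to independently."""
    return sorted(k for k, v in interests.items() if v >= threshold)

segment_label(interests)      # one label; the poetry and gardening signals are lost
active_dimensions(interests)  # the full pattern, one dimension at a time
```

With these numbers, the segment view returns only `"formula_1"`, while the per-dimension view retains all three strong interests. The threshold of 0.5 is an arbitrary illustration, not a claim about any particular system.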
Preserving human complexity
No one is average. This is not a slogan. It is a statistical fact: measured across enough dimensions, the average of a population describes no actual member of it. A system designed around the average will serve no one well. It will serve most people poorly and some people not at all.
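The claim is easy to verify on a small example. The population below is made up for illustration: four people, each with a profile over five preference dimensions. The component-wise average is a perfectly well-defined profile, yet it matches no one.

```python
# Toy population: each person's profile over five preference dimensions.
population = [
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 1],
]

n = len(population)
# Average each dimension across the population.
average_profile = [sum(col) / n for col in zip(*population)]

# Does any actual member of the population have the average profile?
matches_average = any(person == average_profile for person in population)
```

Here `average_profile` is `[0.5, 0.5, 0.75, 0.5, 0.75]` and `matches_average` is `False`: the average is a real summary of the population and a description of nobody in it.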
Behavioural intelligence preserves human complexity by refusing to reduce people to segments. The on-device model sees the person, not the cohort. It responds to what they are doing, not to what people like them have done. This is a fundamentally different relationship between AI and the individual.
The responsibility of AI builders
The homogenisation risk is real and accelerating. Every recommendation system, every content algorithm, every personalisation engine that optimises for aggregate patterns is contributing to it. This is not a problem that regulators can solve with rules. It is a design problem.
AI builders have a choice. Build systems that flatten human diversity into manageable segments. Or build systems that understand and preserve it. The architecture determines the outcome. On-device intelligence that reads individual behaviour without aggregation is one answer. There may be others. But the question must be asked before it is too late to change the answer.