
Media Control

Who controls the narrative — and how the machinery works

By Esoteric.Love

Updated 1st April 2026


The stories you believe about your world didn't arrive by accident. They were selected, framed, amplified, or buried — and the machinery behind those choices shapes everything from how you vote to what you fear.

Why This Matters

For most of human history, controlling who could speak to large numbers of people meant controlling what large numbers of people knew. The printing press broke one kind of monopoly. The telegraph broke another. Radio and television created new ones almost immediately. Each revolution in communication technology has been followed, with remarkable consistency, by a period of consolidation — a moment when the tools of mass attention get absorbed into fewer and fewer hands. We are living through another such moment right now, and the stakes may be higher than at any previous point in history.

The reason this matters urgently is not simply ideological. It is structural. When the organizations responsible for informing democratic citizens are owned by a shrinking number of entities, when local newspapers vanish from counties and towns faster than they can be replaced, when the algorithms deciding what billions of people see each morning are written by engineers accountable to shareholders rather than the public — the architecture of shared reality itself begins to degrade. Democracy does not require that everyone agree. It does require that people have access to a baseline of credible, locally relevant, independently verified information. That baseline is currently under extraordinary pressure.

The connection between past and present here is not abstract. The propaganda techniques systematically analyzed throughout the twentieth century — the selection of which facts to emphasize, the emotional framing of dry policy questions, the repetition of preferred narratives until they feel like common sense — did not disappear with the regimes that pioneered them. They were refined, digitized, and made available at scale to governments, corporations, political operatives, and, in a genuinely new development, to ordinary individuals with a smartphone and a grudge. Understanding who controls the narrative today requires understanding both the traditional machinery of media ownership and the newer, stranger machinery of algorithmic amplification.

And the future dimension is, if anything, the most disorienting. Artificial intelligence now enables the generation of convincing text, audio, and video at near-zero marginal cost. The question of who controls the narrative is about to become dramatically more complicated, because the tools of narrative creation are no longer restricted to organizations with studios, printing presses, or broadcast licenses. What happens to the concept of a "controlled narrative" when anyone can fabricate a convincing one? The answer to that question is not yet written, and how societies respond in the next decade will determine much of what political and social reality looks like for generations.

The Architecture of Ownership

To understand media control, you have to start with ownership — not because owners always dictate content directly, but because ownership structures create the incentive environments in which editorial decisions are made. This is established, well-documented territory, not speculation.

Media consolidation — the process by which ownership of news and entertainment outlets concentrates into fewer corporate entities — accelerated dramatically in the United States and globally following the deregulatory waves of the 1980s and 1990s. In 1983, approximately fifty companies controlled the majority of American media. By the early 2000s, that number had shrunk to around six major conglomerates. The names change as mergers and acquisitions reshape the landscape, but the dynamic has remained consistent: scale is rewarded, independence is economically precarious, and the pressure toward consolidation is structural rather than conspiratorial.

What this means in practice is that the same parent company may own the local television station you watch for weather, the newspaper whose website you visit for community news, several of the radio stations you hear during your commute, and a digital portal that aggregates content from all of them. This is not inherently sinister — there are genuine economic efficiencies in shared infrastructure — but it does mean that editorial diversity can be illusory. When a parent company issues cost-cutting directives, or when an owner has documented political preferences, the effects ripple across ostensibly independent outlets simultaneously.

The news desert phenomenon represents perhaps the starkest consequence of consolidation and the broader economic disruption of local journalism. Research from the Hussman School of Journalism and Media at the University of North Carolina has tracked the collapse of local newspapers across the United States over two decades with careful, county-level granularity. The findings are sobering: between 2004 and 2020 alone, the United States lost more than a quarter of its newspapers, and the pace of closure has accelerated since. Hundreds of counties — home to millions of citizens — now have no local newspaper at all. Ghost newspapers — outlets that still technically exist but have been gutted of reporting staff to the point of publishing little original local journalism — add to the picture. The erosion is not uniform; it hits hardest in rural areas, in lower-income communities, and in regions where there is no economic incentive for a media company to maintain a genuine reporting presence.

The implications are not theoretical. Studies have found correlations between news desert conditions and lower voter turnout, higher municipal borrowing costs (because bond markets price in information asymmetry), reduced civic engagement, and increased political polarization — partly because national partisan media fills the vacuum left by absent local journalism. When there is no one covering the school board meeting, no one investigating the local water authority, no one writing about the business that just laid off three hundred people, the information architecture of democratic accountability collapses in a way that is very difficult to rebuild.

The Economics of Attention

Ownership structures alone don't explain how the modern narrative machinery works. To understand that, you have to understand the economy that funds most of the media you consume for free: the attention economy.

The foundational principle is simple and somewhat grim: your attention is the product being sold. Whether you are watching broadcast television, scrolling a social media feed, or reading an ad-supported digital news site, the business model depends on maximizing the time you spend engaged with the platform so that engagement can be sold to advertisers. This creates a set of incentives that are largely independent of any individual owner's ideology. Content that generates strong emotional responses — outrage, fear, disgust, tribally affirming contempt — tends to perform better in engagement metrics than content that is accurate, nuanced, or locally specific but not emotionally charged. This is not a conspiracy; it is a documented consequence of optimizing for the metric that funds the system.

Engagement optimization — the practice of algorithmically surfacing content predicted to maximize user interaction — is the mechanism through which these incentives operate at scale on social media platforms. It is worth being precise here about what is established versus debated. It is established that major platforms use algorithmic ranking systems that prioritize content associated with high engagement signals. It is established that emotionally provocative content tends to generate more engagement signals than calm, informational content. It is debated — and the debate is genuine, with serious researchers on multiple sides — how large the real-world political effects of this are, and whether the platforms' own moderation and ranking adjustments have meaningfully mitigated the dynamic.
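The ranking dynamic described above can be made concrete with a toy sketch. Everything here is an illustrative assumption — the weights, the "emotional_arousal" feature, and the field names are invented for demonstration and do not represent any real platform's scoring model — but the structure shows why optimizing for interaction systematically favors provocative content over informative content:

```python
def engagement_score(post):
    """Predicted interaction value: shares and comments are weighted
    more heavily than likes, with a bonus for emotionally arousing
    content. All weights are illustrative assumptions."""
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]
            + 5.0 * post["shares"]
            + 50.0 * post["emotional_arousal"])  # arousal on a 0.0-1.0 scale

def rank_feed(posts):
    """Order a feed purely by predicted engagement."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    {"title": "Calm local zoning report", "likes": 120, "comments": 4,
     "shares": 2, "emotional_arousal": 0.1},
    {"title": "Outrage-bait national story", "likes": 90, "comments": 40,
     "shares": 30, "emotional_arousal": 0.9},
]

ranked = rank_feed(feed)
# The emotionally charged post outranks the calmer, better-liked one,
# because the metric being optimized is interaction, not information.
```

No individual weight here is malicious; the distortion emerges entirely from the choice of objective function, which is the structural point the section makes.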

What is not seriously in dispute is that the advertising-funded attention economy has created a structural tension between the incentives of media businesses and the informational needs of democratic citizens. A local story about a zoning dispute that genuinely matters to five thousand people generates far less advertising revenue than a national story about partisan conflict that generates rage-clicks from five million people. The economics systematically reward the latter. This is a design problem, not a moral failing of any individual journalist or editor.

Propaganda: Old Techniques, New Amplifiers

The word propaganda carries twentieth-century weight — images of wartime posters, Soviet newsreels, totalitarian ministries of information. It is tempting to treat it as a historical artifact rather than a living practice. This would be a mistake.

Propaganda, defined carefully, means the systematic use of communication to promote a particular viewpoint or agenda, typically by selecting, framing, and emphasizing information in ways designed to produce desired beliefs or behaviors, rather than to inform or to reason with an audience. Defined this way, it is not the exclusive province of authoritarian states. It is practiced by democratic governments during wartime, by corporations managing their public image during crises, by political campaigns during election cycles, and by interest groups across the ideological spectrum every single day.

The classical techniques are well-documented and have been studied by communication scholars since at least the 1920s. Framing — presenting factually accurate information within a context that predisposes the audience toward a particular interpretation — is perhaps the most pervasive. The same economic statistic can be framed as evidence of a vibrant recovery or a deepening inequality crisis depending on which aspects are emphasized. Neither framing need involve any factual falsehood. Repetition exploits the cognitive tendency to mistake familiarity for truth — a phenomenon psychologists call the illusory truth effect, which is robustly established in experimental literature. Repeat a claim often enough, in enough contexts, and people's confidence in its truth increases regardless of whether it has ever been verified.

Manufacturing consent — a phrase associated with the political analysis of Edward Herman and Noam Chomsky, though the underlying concept has been analyzed by many scholars — refers to the process by which the range of acceptable opinion in a society gets quietly narrowed through the cumulative effect of media structures, sourcing practices, and editorial norms, without any central coordinator issuing explicit instructions. The claim is not that every journalist is complicit in a conspiracy; it is that the structures within which journalists operate systematically favor certain voices, certain framings, and certain assumptions about what is obvious and what requires explanation. This analysis is debated — critics argue it underestimates the genuine diversity of opinion in pluralistic media environments — but the underlying mechanisms it identifies are real and worth examining carefully.

What has changed in the contemporary period is not the basic toolkit of propaganda but the scale and speed at which it operates, and the degree to which it has been democratized. Disinformation — the deliberate creation and circulation of false or misleading content — now moves through social networks at speeds that make the traditional concept of fact-checking almost categorically inadequate. A fabricated story can reach tens of millions of people in hours. A careful refutation, issued days later by a credentialed institution, reaches a fraction of that audience and is processed against the backdrop of an already-formed impression.

The Platform Question

The emergence of a small number of technology platforms as the dominant intermediaries of public information represents a genuinely novel development in the history of media control — one that doesn't map cleanly onto earlier frameworks.

Previous media control debates were largely about publishers and broadcasters: entities that produced content and distributed it to audiences. The major platforms — search engines, social media networks, and the app stores that control mobile information access — are better understood as infrastructure, though infrastructure of a peculiar kind that makes editorial decisions constantly while officially disavowing editorial responsibility. When a platform decides that a particular type of content violates its community standards, it is making an editorial judgment. When its algorithm amplifies one kind of content and suppresses another, it is shaping the information environment. The legal and conceptual frameworks that governed broadcast and print media were built for a world where those who distributed information were clearly identifiable as editors or common carriers. The platforms occupy an uncomfortable middle position that existing frameworks handle poorly.

The governance question here is genuinely unresolved. On one side: there are serious arguments that the power platforms exercise over public discourse warrants regulatory oversight, transparency requirements, and potentially structural remedies. On the other: there are serious arguments that regulating platform content decisions risks creating government leverage over speech that is more dangerous than the problems it purports to solve. Different democracies are reaching different conclusions, and the divergence itself is informative. The European Union has moved toward assertive platform regulation with its Digital Services Act, imposing transparency and accountability requirements. The United States has remained largely deferential to platform autonomy. India, Brazil, and others have pursued more aggressive government control of platform operations — in ways that raise their own significant free-press concerns.

The filter bubble concept — the idea that algorithmic personalization creates information environments in which individuals are systematically exposed only to content that confirms existing beliefs — became culturally prominent in the 2010s and is now a fixture of popular discourse about media. The empirical picture is more complicated than the popular version suggests, and this is worth flagging honestly. Research on filter bubbles has produced mixed results; some studies find that algorithmic feeds actually expose people to more cross-cutting political content than they would seek out voluntarily, while others find significant segregation effects depending on platform and population studied. The honest answer is that the filter bubble story is probably too simple as commonly stated, but that doesn't mean the underlying concern about algorithmic curation and epistemic segregation is unfounded.
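The feedback loop the filter-bubble hypothesis describes can be sketched as a minimal simulation. The update rule, topic labels, and all numbers are illustrative assumptions, not a model of any actual recommender; the point is only to show how a mild initial preference can compound into heavily skewed exposure when the feed reinforces whatever gets clicked:

```python
import random

random.seed(0)

# Toy personalization loop: the recommender shows topics in proportion
# to learned weights, the user clicks according to a fixed preference,
# and each click reinforces that topic's weight. All values are
# illustrative assumptions.
topics = ["politics_A", "politics_B"]
preference = {"politics_A": 0.8, "politics_B": 0.2}   # user's prior taste
feed_weight = {"politics_A": 0.5, "politics_B": 0.5}  # feed starts balanced

for step in range(1000):
    # Sample what to show, proportional to current learned weights.
    shown = random.choices(topics, weights=[feed_weight[t] for t in topics])[0]
    # The user clicks with probability equal to their preference.
    if random.random() < preference[shown]:
        feed_weight[shown] += 1.0  # reinforce what got clicked

share_A = feed_weight["politics_A"] / sum(feed_weight.values())
# After many rounds, exposure is dominated by the already-preferred
# topic, even though the feed began perfectly balanced.
```

Note that this simple model predicts strong segregation; the empirical research discussed above suggests real platforms and real users behave in more mixed ways, which is exactly why the popular version of the story is probably too simple.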

State Actors and Information Warfare

Any honest examination of media control in the contemporary period has to address the role of state-sponsored information operations — governments deliberately using digital infrastructure to shape narratives both domestically and internationally.

This is documented territory, not speculation. Multiple investigations, including by academic research groups, journalistic outlets, and intelligence agencies in several countries, have established that governments run systematic campaigns to manipulate online information environments. The techniques include networks of inauthentic accounts amplifying preferred narratives, targeted advertising to specific population segments, the strategic seeding of divisive content designed not to persuade but to deepen existing social fractures, and the use of state-controlled media outlets as amplification channels for content originally placed in nominally independent venues.

Computational propaganda — the use of automated accounts, data analytics, and targeted communication strategies to manipulate political discourse at scale — has been studied extensively by researchers at institutions including the Oxford Internet Institute, whose work represents some of the most careful documentation of how these operations function across different political systems. Their findings suggest that computational propaganda is not confined to any single government or political tendency; it has been deployed by authoritarian regimes, by democracies during election campaigns, and by non-state actors pursuing a wide range of agendas.

What makes state information operations particularly complex to analyze is the way they interact with genuine organic political sentiment. A state actor seeking to amplify social division does not need to invent grievances from nothing; it can work with real grievances, real communities, real content — amplifying, distorting, and reframing in ways that are extremely difficult to distinguish from organic activity. The goal is often not to create a specific belief but to degrade the overall information environment: to make it harder to know what is real, who is credible, and what can be trusted. This epistemic warfare model is perhaps more insidious than the classic propaganda model precisely because it does not require persuading anyone of anything false. It operates by manufacturing uncertainty.

Journalism, Independence, and the Structural Pressures on Truth-Telling

In the middle of all these structural forces — ownership concentration, attention economy incentives, platform dynamics, state information operations — are journalists: individuals who, by and large, entered a profession because they believed in its civic function and are now trying to do that work within institutional environments that are under severe economic and political stress.

It is important to hold two things simultaneously here without collapsing one into the other. First: the vast majority of working journalists in pluralistic democracies are genuinely trying to report accurately and are not agents of any controlling interest. Journalistic culture maintains norms of verification, source independence, and skepticism toward power that are real and consequential, even when imperfectly practiced. Second: those norms operate within structural constraints that shape what gets covered, how it gets framed, and whose voices get treated as authoritative — often in ways that individual journalists are not fully aware of and that don't require any individual bad actor.

Source dependence is one of the most consequential of these structural constraints. Journalists need information, and information tends to come from people in positions of institutional authority — government officials, corporate communications departments, credentialed experts in established fields. This creates a systematic tendency to take seriously and amplify the perspectives of people already embedded in power structures, and to treat outsider perspectives as requiring much higher evidentiary burdens before they merit coverage. This isn't bias in the crude ideological sense; it is an institutional tendency embedded in the practical logistics of newsgathering that has been analyzed carefully by media scholars for decades.

Both-sides journalism — the practice of presenting "both sides" of a contested issue as a default formula for appearing objective — creates its own distortions. When the genuine expert consensus on a factual question (such as climate science, vaccine safety, or electoral integrity) is presented as one "side" in a debate with a contrary position held by a small minority of credentialed experts or by politically motivated non-experts, the audience receives a systematically inaccurate picture of the actual state of knowledge. This is an area where the journalism profession itself has been doing significant self-examination, with growing adoption of the concept of false equivalence as a recognized failure mode distinct from traditional bias.

Citizen Navigation in a Contested Information Environment

The question of how individuals might navigate a media environment characterized by all of the above — consolidation, attention-economy distortions, propaganda, platform power, state information operations — is one that attracts a great deal of well-intentioned but sometimes superficial advice.

Media literacy education has been promoted as a primary response, and there is genuine value in teaching people to examine sources, check claims across multiple outlets, distinguish between news and opinion, and recognize common manipulation techniques. There is also evidence that some forms of media literacy education can backfire, producing a generalized suspicion of all institutional sources — a kind of epistemic nihilism in which nothing is trusted and all claims are treated as equally contestable. The research on what kinds of media literacy interventions actually improve information quality in real-world decision-making (as opposed to scoring well on awareness tests) is still developing.

Lateral reading — the practice of quickly opening multiple tabs to check what others say about a source rather than diving deep into the source itself — has been identified by researchers studying how professional fact-checkers operate as a more effective strategy than careful close reading of a single source. It's a simple but counterintuitive technique: rather than reading an article deeply to evaluate it, immediately look sideways at what credible third parties say about where it comes from. It works because it exploits the distributed nature of the web: deceptive or unreliable sources are usually identifiable through their reputation history, which is documented somewhere accessible.
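The lateral-reading procedure can be expressed as a small sketch. The reputation tables below are hypothetical stand-ins for the separate browser tabs a fact-checker would open (a fact-checking database, an encyclopedia entry); the outlet names and verdict labels are invented for illustration:

```python
def lateral_read(outlet, reputation_sources):
    """Judge an outlet by what independent third parties say about it,
    rather than by analyzing the article's own presentation."""
    verdicts = [src.get(outlet) for src in reputation_sources]
    known = [v for v in verdicts if v is not None]
    if not known:
        return "unknown; treat with caution"
    # Trust only when the independent sources agree it is reliable.
    return "credible" if all(v == "reliable" for v in known) else "questionable"

# Hypothetical reputation tables standing in for separate tabs.
fact_checker = {"Example Gazette": "reliable", "TruthBlastNews": "unreliable"}
encyclopedia = {"Example Gazette": "reliable"}

verdict = lateral_read("TruthBlastNews", [fact_checker, encyclopedia])
# The outlet's reputation history, not its self-presentation, drives
# the judgment: here the fact-checker's "unreliable" entry decides it.
```

The design choice mirrors the research finding: the evaluation never inspects the article itself, only what the surrounding web records about its source.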

There is also the deeper structural question of whether individual-level media literacy is a sufficient response to what are fundamentally structural problems. The information environment is not neutral terrain; it is shaped by powerful economic and political forces that individual awareness, while necessary, cannot fully counter. The analogy sometimes used is that teaching individuals to avoid pollution is valuable, but it does not substitute for regulating the industries that create it. This framing is contested, but it captures something real about the limits of purely individual-level solutions to systemic problems.

The Questions That Remain

What does it mean for democratic self-governance when the economic model that historically funded local journalism — print advertising — has collapsed without a sustainable replacement emerging at anywhere near the same scale? Is public funding of local journalism a viable solution, and if so, how do you structure it to prevent it from becoming government control through financial dependence?

If algorithmic amplification genuinely shapes what billions of people believe is important and true, and if those algorithms are proprietary and minimally transparent, who should have oversight authority over them — and what would legitimate, non-censorious oversight even look like in practice?

At what point does the proliferation of generative AI-enabled content — synthetic text, audio, video — make the concept of a "verifiable factual record" functionally unenforceable for most citizens navigating most information? Is there a technical or institutional response to this that can scale before the damage becomes irreversible?

Are the mechanisms of propaganda and narrative control categorically different when practiced by democratic governments compared to authoritarian ones — or is that a distinction that flatters democracies more than the evidence warrants?

When a society loses its shared information commons — the set of facts, events, and interpretations that most citizens accept as roughly real — can it recover, and what has that process historically required?