Big Tech: Social Control and Political Subservience

Big Tech under scrutiny. A letter stamped by Congress and an already-scheduled date — October 8, 2025 — lined up four “community-centric” platforms: Reddit, Discord, Steam, and Twitch. House Oversight Chair James Comer says he wants to understand how to prevent online “radicalization” and “political incitement” after the killing of Charlie Kirk. That’s the official framing. In the background, however, you can glimpse another scene: a tragedy used as leverage to narrow the boundaries of what’s sayable on open social platforms, right on the eve of election season. This is not just a fact-finding exercise; it is a political signal to those moderating the digital squares where discursive disorder is the rule, not the exception.

Big Tech at Donald Trump’s Court

That same month, across town in Washington, the complementary act played out. A White House dinner with Big Tech, a narrative of “American dominance in AI,” cameras trained on investment promises. Mark Zuckerberg dropped a headline number — $600 billion “by 2028” — later contextualized by Meta CFO Susan Li as a multi-year cumulative projection, more an order-of-magnitude estimate than a hard contractual pledge. The message still lands: those speaking the language of AI infrastructure — capex, data centers, permitting, energy — get the stage and the legitimacy. Those hosting noisy conversations without bringing hardware, chips, or megawatts are summoned before the Committee to explain themselves.

Big Tech and the Supreme Court

This selective pressure works in two beats. First, reward the actors deemed “strategic” for national AI; then discipline those carrying the daily weight of public discourse. No new laws required: a threatened hearing, a headline, a measure of moral suasion often suffice. In the U.S., this slide has a technical name — jawboning — and a legal boundary. In Murthy v. Missouri (2024), the Supreme Court didn’t close the door on government-platform dialogue, but clarified the threshold: collaboration cannot become coercion or “significant encouragement” that turns private choices into state action. The thin line, in other words, separates informing from pressing so hard that debate chills.

Free-Speech Law

Below that line, the First Amendment compass remains clear. In Brandenburg v. Ohio (1969), the Court held that the state cannot punish even harsh speech unless it is directed to inciting, and likely to incite, imminent lawless action. In Matal v. Tam (2017), it reiterated that offensiveness alone does not strip speech of constitutional protection: in the U.S., “hate speech” as such isn’t an unprotected category. Translated to the present, broad political pressure for takedowns of offensive content risks brushing that red line, especially when it is not anchored to transparent, autonomous platform standards.

The broadcast parallel makes the effect visible without doctrine. The Kimmel/ABC case showed how raised-eyebrow regulation works: no written order, but the regulator’s lifted eyebrow — the same regulator that grants licenses or reviews mergers — can push a network toward the safer path. On social platforms, the tools differ (hearings, legislative threats, posts from the bully pulpit), not the outcome: self-censorship to avoid trouble, noise-reduction to avoid friction, conformity as insurance against reputational and regulatory risk.

Net Neutrality and Big Tech

Meanwhile, the infrastructure ground is shifting. In January 2025, the Sixth Circuit blocked the FCC’s attempt to restore federal net-neutrality rules, finding the agency lacked a basis to reclassify ISPs as common carriers. In the following months, a deregulatory-minded FCC began a “delete, delete, delete” of remaining rules. The practical paradox: loosen control over the network pipes while ramping up informal pressure on what flows through them. The public posture aligns: freedom for carriers; attention — and, when needed, pressure — on content and its intermediaries.

The TikTok dossier is the clearest example of governing the infosphere without an outright ban. The deal shaped in September avoids a total prohibition but builds a “soft nationalization”: Oracle and U.S. investors oversee data, security, and the algorithm for U.S. users; ByteDance remains connected via licenses and minority stakes, while the recommendation system is replicated/retrained within a U.S. sovereignty perimeter. A feed isn’t shut off; the architecture that decides what we see is re-engineered. It’s the same grammar tying industrial policy, national security, and control over information flows — and it squarely involves Big Tech.

AI, Government, and Big Tech

On the AI-government front, the coupling is even clearer. In 2025 the U.S. Army consolidated dozens of contracts into a decade-long agreement with Palantir (ceiling up to $10B) to procure data, analytics, and AI software more quickly and bring algorithmic capability into operations. In parallel, partnerships with systems integrators like Accenture Federal Services institutionalized an AI pipeline for federal agencies. If AI becomes public infrastructure, those who enable it rise in political rank; those who stress-test it — by hosting conflict and community disorder — land under the magnifying glass. The differential treatment isn’t a bug; it’s the new rule.

Running alongside is the administration’s crypto doctrine. With Executive Order 14178, the White House set the direction for digital financial technologies; subsequent steps introduced a Strategic Bitcoin Reserve approach for managing seized digital assets and a federal stablecoin framework (the GENIUS Act) to harmonize state regimes and reserve requirements. The same logic animates AI: twin infrastructures — computation and digital finance — at the core of state capacity. In this frame, providers of technical power nodes are system allies; hosts of noisy dissent are public-order problems to be trained toward new forms of compliance. Big Tech sits on both sides of that divide.

Which brings us back to the opening question: what will Congress actually ask of the platforms summoned for October 8? If safety is the goal, lawmakers can demand public moderation criteria, traceable escalation paths, openings for independent research on recommender systems, periodic external audits, and granular reporting. If, instead, the hearing only calls for “less noise” and “more cleanliness,” tragedy will have been converted into procedure and procedure into discipline. Not more transparent platforms — just more obedient ones.

The dual-lane outcome would be hard to dispute: Meta, Apple, Google, Microsoft, OpenAI — Big Tech at the dinner and in the “national mission” narrative; Reddit, Discord, Steam, Twitch in the hearing room to justify how they shoulder digital disorder. Two audiences, two lexicons, two prices. That’s where “social control and political subservience” stops being provocation and becomes a description of the settlement: reward those who build computational power; steer those who host the public word.

No law is needed when a Committee invitation will do. It’s up to those in power to choose whether that becomes a tool for transparency or a lever for conformity.
