The Money Machine in Washington
In the U.S., the AI industry has turned to a familiar playbook: lobbying. Big players are pouring millions into shaping the political and legal landscape as wrongful death lawsuits and regulatory questions mount (The Guardian).
Lobbying surge: Tech giants have rapidly become some of the largest political spenders in Washington, spreading money across both parties.
Shaping liability: With lawsuits emerging over AI-related harms, lobbying aims to limit corporate responsibility.
The risk: Policy could be crafted not to protect citizens but to shield industry. As history with Big Tobacco and Big Oil suggests, lobbying can slow or distort meaningful regulation.
Beijing’s Heavy Hand
China is taking a different route: state-driven acceleration. Beijing is pushing an aggressive AI expansion strategy – backed by subsidies, directives, and central planning. But as Reuters Breakingviews notes, the approach risks “backseat driving” an industry that thrives on creativity.
Scale over efficiency: Subsidies risk creating overcapacity, with too many firms chasing the same goals.
Central control: Heavy guidance can limit experimentation, crowding out unexpected innovation.
Political priority: Unlike in Washington, the goal isn’t just profit but geopolitical leverage in the race for AI supremacy.
A Tale of Two Systems
U.S. model: Driven by corporate money, AI policy risks becoming a defensive shield for tech giants. Innovation is vibrant, but regulation may be captured by the very firms it should oversee.
Chinese model: Driven by state priorities, AI development risks inefficiency and conformity. Innovation is vast in scale, but creativity may be stifled by central directives.
Both systems are wrestling with the same question: who controls the trajectory of a technology that will define the century?
Digging Deeper: Two Paths, One Struggle
The contest between Washington’s money-driven lobbying and Beijing’s state-directed planning is more than a clash of governance styles. It is a test of whether humanity can govern transformative technologies in ways that balance innovation, safety, and equity. The details of these two models reveal not only their strengths and weaknesses but also their striking similarities: both prioritize institutional power over public interest.
Washington: Lobbying as Governance
The United States has long relied on a combination of market dynamism and political lobbying to shape the trajectory of new technologies. Artificial intelligence is no exception. In 2024 alone, Google, Microsoft, Amazon, Meta, and Apple collectively spent over $70 million on lobbying activities, making them some of the largest political spenders in Washington (OpenSecrets). By comparison, the entire U.S. oil and gas industry spent roughly $124 million; five technology firms alone now spend more than half of what an entire legacy industry, one that historically defined regulatory capture, devotes to lobbying.
The motivations for such spending are clear. Wrongful death lawsuits have already been filed against AI developers, over claims ranging from faulty medical guidance to failures in autonomous systems. The industry fears a regulatory backlash that could impose strict liability standards similar to those that eventually constrained tobacco or pharmaceutical companies. Lobbying is thus directed at shaping the narrative: AI is to be understood as a broadly beneficial technology with occasional edge-case harms, not as a system in need of structural accountability.
This framing follows a familiar arc. In the mid-twentieth century, tobacco companies spent decades funding research and lobbying campaigns to obscure the health risks of smoking (Proctor, 2012). Fossil fuel companies repeated the tactic in the climate arena, casting doubt on overwhelming evidence of carbon-driven warming (Oreskes & Conway, 2010). AI firms now appear poised to use similar strategies: frame dissent as alarmism, emphasize the economic costs of over-regulation, and cultivate political alliances that prioritize growth over accountability.
Perhaps the most striking dimension of U.S. AI lobbying is the revolving door effect. Former regulators, congressional staff, and even ex-White House advisors are routinely hired by tech companies to influence legislation from within (CNBC). This blurring of the line between regulator and regulated undermines the credibility of any oversight process.
For citizens, the implications are sobering. If AI regulation in Washington becomes as captured as financial regulation was in the years leading up to the 2008 crisis, the public may face decades of exposure to systemic risks before meaningful safeguards are enacted.
Beijing: Central Planning at Scale
China’s approach to AI governance is equally ambitious but follows the logic of central planning rather than corporate lobbying. The technology is woven directly into Beijing’s geopolitical strategy. The 14th Five-Year Plan, unveiled in 2021, explicitly named AI as a “strategic industry” alongside semiconductors and green energy (SCMP). Billions in subsidies have been directed to AI startups, universities, and provincial governments tasked with building local AI ecosystems.
At first glance, this model promises speed. Just as China rapidly scaled its solar panel industry to dominate global production – achieving over 70% market share by 2021 (IEA) – Beijing hopes to replicate that trajectory in AI. But the risks of overcapacity loom large. Subsidized startups may flood the market, chasing similar projects without competitive differentiation. This dynamic has already been observed in China’s electric vehicle sector, where dozens of government-backed firms have struggled to survive despite enormous state support (Financial Times).
More concerning is the alignment of AI development with state surveillance goals. Technologies such as facial recognition, predictive policing, and social credit scoring have been prioritized, funneling innovation into politically safe but socially restrictive domains (Human Rights Watch). While U.S. lobbying seeks to minimize liability, Chinese central planning seeks to maximize political control. Both come at the cost of broader social benefit.
Two Faces of the Same Problem
Despite their differences, the U.S. and Chinese models share a troubling commonality: both prioritize institutional interests over democratic accountability. In Washington, that institution is the corporation, insulated by money and influence. In Beijing, it is the state, insulated by centralized authority. The results are mirrored distortions. In the U.S., regulatory frameworks risk being hollowed out by corporate lobbying. In China, innovation risks being narrowed by political imperatives. Citizens in both countries face the prospect of living with powerful technologies governed not by transparent deliberation but by institutional self-interest.
Both systems display the classic features of institutional self-dealing long described in political economy. In the U.S., well-organized firms with concentrated resources can reliably tilt policy in their favor, a process captured by George Stigler’s theory of “regulatory capture” (Stigler 1971). Even critics of the regulatory capture thesis concede that big corporations often succeed in bending policy to their interests (Carpenter & Moss 2013). China’s equivalent is not capture but centralization: rules are written to advance state priorities – security, social stability, geopolitical advantage – even when that narrows research horizons or sidelines civil liberties (HRW 2019). The surface forms differ, but in both countries the governing logic is the same: preserve institutional power first.
This logic deepens the “Collingridge dilemma.” Early in a technology’s life cycle, interventions are easiest but information is scarce; later, when harms are visible, reform is politically and economically costly (Collingridge 1980). In the U.S., lobbying efforts often delay or water down early regulation, insisting on “wait and see” approaches that leave dangerous systems entrenched before oversight matures. In China, central mandates accelerate rapid deployment and lock in architectures before independent scrutiny is possible. Both dynamics amplify the same outcome: governance arrives late, after risks have already scaled.
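To make the dilemma concrete, here is a purely illustrative toy model (ours, not Collingridge’s; every curve, constant, and function name below is a hypothetical assumption, not data from the article). It sketches why the window for effective intervention is narrow: understanding of harms grows only as deployment spreads, while the cost of reform grows with entrenchment, so the period in which regulators are both informed enough to act wisely and free enough to act cheaply is brief.

```python
# Toy sketch of the Collingridge dilemma. All numbers and functional forms
# are hypothetical, chosen only to make the timing trade-off visible.

def information(t: float) -> float:
    """Understanding of harms (0..1), rising slowly as deployment spreads."""
    return t / (t + 3.0)  # slow early learning that saturates later

def reform_cost(t: float) -> float:
    """Political and economic cost of intervention (0..1), rising with entrenchment."""
    return min(1.0, (t / 10.0) ** 2)  # cheap early, steeply costly once locked in

def governability(t: float) -> float:
    """Crude score for being informed AND still able to correct course."""
    return information(t) * (1.0 - reform_cost(t))

if __name__ == "__main__":
    for year in range(0, 13, 2):
        print(f"year {year:2d}: info={information(year):.2f}  "
              f"cost={reform_cost(year):.2f}  window={governability(year):.2f}")
```

In this sketch the governable window peaks a few years in and then collapses to zero. The article’s two failure modes map onto it directly: Washington’s “wait and see” lobbying pushes intervention past the peak, while Beijing’s mandated lock-in steepens the cost curve and shrinks the window from the other side.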
The concentration of technical capacity exacerbates the problem. Training frontier AI systems increasingly requires access to scarce chips, massive proprietary datasets, and hyperscale cloud resources, all controlled by a handful of firms or state-favored labs (Stanford AI Index 2024). As costs rise, independent researchers struggle to replicate claims or audit systems. In the U.S., the choke points are corporate – Amazon Web Services, Google Cloud, Microsoft Azure. In China, they are state-backed consortia. In both cases, meaningful oversight becomes nearly impossible because the material resources required to check claims are monopolized. The public is asked to trust outputs it cannot independently verify.
Key governance functions are also migrating into opaque domains. In the U.S., corporate terms of service and internal policy teams effectively function as private law, dictating how billions of users interact with AI systems. Kate Klonick has called platform rule-makers “the new governors” of online speech (Klonick 2018), while Lawrence Lessig’s dictum “code is law” underscores how engineering choices embed governance into technical defaults (Lessig 1999). In China, policy objectives are embedded into technical standards and directives, leaving little room for contestation or local variation (SCMP 2021). In both countries, crucial decisions are made in ways that are neither transparent nor easily challengeable.
A credible alternative requires polycentric oversight, a governance model with multiple centers of accountability that can monitor and correct one another. Elinor Ostrom’s work on complex systems demonstrates that overlapping, layered institutions often prevent failures better than a single monopolistic authority (Ostrom 2010). In AI, we already see early gestures toward this in the NIST AI Risk Management Framework (NIST 2023), the EU AI Act with its risk-tiered obligations (European Parliament 2024), and the Digital Services Act mechanisms for transparency and independent dispute resolution (European Commission). These models are incomplete and contested, but they illustrate a path beyond capture and centralization: governance that remains corrigible and legible to the publics it affects.