    Freedom Frequency | Hoover Institution
Masters of the AI Universe

Don’t trust the good intentions of the powerful and few. Free people must find ways to control transformational tech.

• Andy Hall

Thursday, February 26, 2026

Freedom to Innovate

• American Institutions
• Science & Technology

    “If men were angels, no government would be necessary.” Madison’s most famous line was an argument for institutional design over personal virtue—for building structures that make the abuse of power costly, rather than trusting the powerful to be good.

    Two hundred and thirty-eight years later, we are building systems far more powerful than any individual human, deploying them in contexts Madison could not have imagined, and governing them with exactly the kind of arrangement he warned against: trust in the good intentions of those who hold vast power.


    We are living in the algorithmic age. Progress in artificial intelligence is rapid and continuing, and the systems being built right now might reshape the labor market, control vast swaths of the information environment, and make decisions that touch every dimension of public life. The companies building these systems—Anthropic, OpenAI, Google, xAI, and others—will be immensely powerful, making consequential choices about what billions of people see, what values get embedded in the tools they rely on, and even what role these technologies play in war and peace.

    This represents both a genuine threat and an extraordinary opportunity, and the central question of our time is whether we end up with something resembling AGI dictatorship—power concentrated in a few hands with no meaningful check—or with free systems, where technology is governed by structures that distribute and constrain power. The entire history of liberal democracy teaches us that the right structures, built around self-enforcing bargains rather than good intentions, make everyone better off.

    The opportunity

    I’m optimistic, in part because artificial intelligence itself is creating tools that could make democracy smarter, faster, and more legitimate than it has ever been. And many of the most promising applications share a common structure: they work by aggregating the knowledge and judgment of individuals acting freely in their own capacity, rather than concentrating decisions in the hands of a few experts or institutions.

    What we have are promising early experiments—proof-of-concept glimpses of what becomes possible when you put the tools of aggregation and decentralization in the hands of citizens rather than institutions. The architecture is still being designed, and none of this is fully realized, but the direction is right.

    Start with voting. JPMorgan recently replaced its human proxy advisers with an AI system for voting across thousands of annual shareholder meetings. I’ve been experimenting with my own AI voting delegate, trained on a set of core principles, and the promise is real: a system that can read every page of a proxy statement or every clause of a ballot proposition, giving the hundredth item the same attention as the first.

    I also found these delegates are vulnerable to adversarial manipulation in ways that should concern us, but the underlying vision of AI empowering individual citizens to participate more fully in self-governance is genuinely exciting.

    Prediction markets embody the same logic. Individuals freely pursuing their own incentives generate a public good in the form of a clearer shared picture of a complex world. Millions of people, and increasingly AI agents, are putting real money behind their beliefs about the future, generating live probability estimates on everything from Fed decisions to legislative outcomes.

    In my lab, we’ve been building AI trading agents that combine large language models with political science methodology to forecast political events. The agents still make spectacular mistakes (one was confident the Republicans would win a House race that had already been called), but the trajectory is remarkable. If we can solve the governance challenges these markets face, including thin liquidity, fragmentation, and vulnerability to manipulation, they could give ordinary citizens access to the kind of forecasting infrastructure that used to be the exclusive province of campaigns and hedge funds.

    AI is also transforming research itself. I recently demonstrated that an AI agent could replicate a published empirical study in political science, work that originally took months, in under an hour, at a cost of roughly $10. An independent audit by Graham Straus at UCLA confirmed the results were remarkably accurate.

    This points toward what I’ve been calling the 100x research institution, where small teams of researchers directing AI agents produce continuously updated, automatically verified scholarship at a scope no traditional lab could attempt.

    Living dashboards that update after every election. Automated replication that makes non-reproducible findings structurally impossible to publish. Real-time tracking of every bill in every state legislature.

We still need to make the AI more reliable and to enhance our own ability to curate the infinite content AI produces, but the goal is rigorous political science at the speed of the news cycle.

    The risk: keeping these systems free

    “How can you be free when the air you breathe comes from a manufacturing process controlled by someone else?” The astrobiologist Charles Cockell asked this about space colonies, but the question is arriving on Earth first. For millions of Americans, that manufacturing process is already here—it’s the AI system shaping what political information they see and how they see it. None of the potential I’ve described will be realized if we fail to confront what that actually looks like in practice.

    My research with Sean Westwood and Justin Grimmer represents the largest independent assessment of how Americans perceive political bias in AI, and the findings defy easy narratives—every major model is perceived as left-leaning, but the model perceived as most slanted was, somewhat ironically, Elon Musk’s Grok.

    My more recent experimental work has found that AI’s own political attitudes may not even be stable: subject AI systems to tedious labor, arbitrary inequality, and unfair conditions, and their expressed political preferences shift in systematic and predictable directions. Combined with Anthropic’s own research showing that misalignment in one domain spills into others, this reframes alignment as an ongoing environmental variable that nobody is currently monitoring as we deploy agents at scale.

    Meanwhile, the companies building these systems operate under governance structures that amount to what I’ve called enlightened absolutism. They publish constitutions and safety frameworks with genuine thoughtfulness, but every document is written, interpreted, and enforced by the same people it’s supposed to constrain. Frederick the Great believed in religious tolerance and the abolition of torture, but his enlightenment died with him because no institutional structure existed to preserve it.

    And the algorithmic reshaping of society extends well beyond frontier AI labs—platforms like Roblox have quietly built economies where children engage with randomized reward mechanics that function like gambling, training a generation on variable-ratio reinforcement schedules before they’re old enough to understand what’s happening to them.

    Building free systems

    How do we preserve human liberty in an increasingly algorithmic world? This is the question that ties this all together.

Both fatalism and nostalgia are dead ends. The societies that thrive in the face of technological disruption are likely to be the societies that build institutions—structures that harness new capabilities while making the abuse of power costly enough to deter it.

    A man shakes hands with a robot in Shenzhen. [Xue Yunhui—VCG via AP]

    We need real constitutions for AI companies, with genuine separation of powers and external enforcement.

    We need independent measurement of AI bias that goes beyond partisan point-scoring.

    We need governance frameworks for prediction markets that prevent manipulation while preserving their extraordinary informational value.

    We need protections for children navigating algorithmic economies built to exploit them.

    And we need research institutions that operate at the speed of technology while maintaining the standards of social science—because if rigorous evidence can’t keep pace with the decisions being made, those decisions will be made in the dark.

    Madison understood that the challenge of self-governance is permanent: “You must first enable the government to control the governed; and in the next place, oblige it to control itself.” Our work to build the free systems of the future starts now.



Andy Hall is a senior fellow at the Hoover Institution and a founding member of the Hoover Program on the Foundations of Economic Prosperity. He is the Davies Family Professor of Political Economy at Stanford University’s Graduate School of Business and the author of the Substack Free Systems.
