Artificial Intelligence for Policy Makers
Chapter 9 of Mostly Harmless AI
This is an early draft of Chapter 9 of my upcoming book Mostly Harmless AI. I’m deeply grateful for all suggestions and criticism you might have.
Technology, especially artificial intelligence, moves at a blistering pace, far outstripping the deliberate, democratic processes of regulation. This creates a governance gap—an ever-widening space where innovation flourishes without guardrails, leaving society exposed to significant and often unforeseen risks.
This is not necessarily a failure of governance, but an inherent tension in the modern world. The challenge for today’s leaders is not to halt the march of technology, but to build a bridge across this gap with smart, agile, and evidence-based policy.
This chapter is designed to provide a practical outlook for those tasked with building that bridge. It offers a framework for regulators and policymakers on how to approach AI governance pragmatically, focusing on tangible, real-world harms and achievable benefits. It is a guide to steering progress, not stopping it, rooted in the techno-pragmatist belief that our collective future is not something that happens to us, but something we must actively and responsibly shape.
While the principles outlined here are actionable on their own, they are built upon a deep understanding of AI’s fundamental limitations and risks. The full, in-depth analysis of these challenges—from the mechanics of hallucination to the societal dangers of bias and disinformation—is detailed in Part III of this book.
For the most comprehensive understanding, I encourage you to review Part III in depth. Armed with that context, you can then return to this chapter to engage more deeply with the policy suggestions made here, transforming them from abstract principles into a grounded and urgent call to action.
A final disclaimer. Regulating technology is extremely difficult, and even more so when the technology changes as fast as AI does. Everything written here must be taken with a grain, or better yet, a teaspoonful of salt. Furthermore, no specific advice will fit all contexts. Each country, state, and community is responsible for finding its own way forward based on its own shared principles.
Why Regulation is Necessary
Before we can chart a path forward, we must first understand the terrain of risks that requires thoughtful governance. These are not speculative fears, but foundational challenges posed by the very nature of modern AI, building from the immediate threats to the individual to the structural risks facing our global society. Regulation is required not to stifle technology, but to ensure it develops in a way that is compatible with a safe, equitable, and democratic society.
Let's start with privacy. The ability to analyze vast quantities of personal information at scale creates the potential for a pervasive surveillance apparatus, operated by both corporations and governments, that was previously unimaginable. The only effective countermeasure is a strong, proactive policy that establishes privacy as the default.
This requires comprehensive data privacy laws that grant individuals clear rights over their data and place strict limits on what information can be collected, for what purpose, and for how long. Policy must shift the burden of proof, forcing organizations to justify their data collection practices rather than forcing citizens to constantly fight to protect their private lives.
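To make the idea concrete, here is a minimal sketch in Python of what purpose limitation and retention limits might look like when enforced in software. Every name here (the purposes, the retention windows, the PersonalRecord type) is a hypothetical illustration, not a reference to any specific law.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention limits per declared purpose; real limits
# would come from the applicable data protection law.
RETENTION_LIMITS = {
    "billing": timedelta(days=365),
    "support": timedelta(days=90),
}

@dataclass
class PersonalRecord:
    user_id: str
    data: dict
    purpose: str             # must be declared at collection time
    collected_at: datetime

    def is_expired(self) -> bool:
        """A record outlives its justification once the retention window closes."""
        limit = RETENTION_LIMITS.get(self.purpose)
        if limit is None:
            # Unknown purpose: privacy as the default means reject, not allow.
            return True
        return datetime.now(timezone.utc) - self.collected_at > limit

def purge_expired(records: list[PersonalRecord]) -> list[PersonalRecord]:
    """Keep only records still justified by their declared purpose."""
    return [r for r in records if not r.is_expired()]
```

Note the default: data collected without a declared purpose is treated as expired. That inversion is exactly the shift in the burden of proof argued for above.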
Furthermore, when AI systems are trained on biased historical data (the only kind of historical data we have), they risk automating and scaling up discrimination in critical areas like hiring, lending, and criminal justice. Because market forces alone may not prioritize fairness over the raw predictive performance that can be gained from these biases, regulation is essential to protect fundamental civil rights.
Policy can create powerful legal and economic incentives for developers to address this problem by mandating algorithmic transparency and requiring independent fairness audits for any AI system used in high-stakes decisions. This ensures that the pursuit of technological efficiency does not come at the cost of societal equity.
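As a sketch of what an independent fairness audit might compute, the snippet below measures per-group selection rates and their ratio for a binary decision system. The four-fifths threshold is one common benchmark borrowed from US employment guidelines; a real audit would combine several metrics, and the data here is purely illustrative.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs from one AI system."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative data only: (demographic group, was the applicant approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact(sample))  # 0.5 -> would fail a four-fifths check
```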
Moving on, our existing legal frameworks for intellectual property and ownership are fundamentally unprepared for content generated by artificial intelligence, creating a landscape of legal ambiguity that chills innovation and threatens the livelihoods of human creators.
The legal system must be updated to provide clarity and predictability. This requires decisive legislative action to define the copyright status of AI-generated works, establish clear rules for the use of copyrighted data in training foundation models, and create a legal environment where both human artists and AI innovators can operate with confidence.
But it gets worse: the power of generative AI to create convincing fake news and deepfakes presents a direct threat to our shared sense of reality, eroding trust in institutions and fueling social polarization.
A regulatory approach here requires a delicate balance. Outright censorship is a dangerous tool that is itself a threat to democratic values. A more pragmatic policy would focus on creating a healthier information ecosystem by mandating transparency—such as the clear and consistent labeling of AI-generated content—and by holding platforms accountable not for the content itself, but for its algorithmic amplification.
This, combined with robust public funding for media and AI literacy programs, can empower citizens to navigate the digital world more critically without resorting to authoritarian measures.
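What “clear and consistent labeling” could mean in practice is a machine-readable provenance record attached to generated content. The sketch below uses a hash-based JSON label as a stand-in; real provenance standards add cryptographic signatures and are far more elaborate, and every field name here is an assumption for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_label(content: bytes, generator: str) -> dict:
    """Build a machine-readable 'AI-generated' label for a piece of content.
    A real scheme would sign this label so it cannot be forged or
    stripped without detection."""
    return {
        "ai_generated": True,
        "generator": generator,  # e.g. the model or tool that produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def label_matches(content: bytes, label: dict) -> bool:
    """Check that a label actually refers to this exact content."""
    return label.get("content_sha256") == hashlib.sha256(content).hexdigest()

image = b"...generated image bytes..."
label = provenance_label(image, generator="hypothetical-model-v1")
print(json.dumps(label, indent=2))
print(label_matches(image, label))  # True
```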
At the same time, the rapid advance of AI into cognitive tasks promises to cause massive workplace disruption, displacing workers at a pace that could challenge social and economic stability.
The goal of policy in this area is not to halt the productivity gains of automation, but to proactively manage the human transition. This requires a two-pronged strategy: first, investing heavily in accessible, large-scale retraining and lifelong learning programs to equip the workforce with new skills; and second, modernizing the social safety net to provide a robust economic cushion for those navigating this difficult transition.
A more abstract but even more dangerous development is the rise of Lethal Autonomous Weapons (LAWs), which threaten to fundamentally alter the nature of conflict by removing human empathy and judgment from the decision to use lethal force.
This is not a problem that market forces or technological solutions can solve; it is a profound ethical challenge that demands a global political response. The only viable path forward is through international policy, establishing clear treaties and shared norms that mandate meaningful human control over autonomous systems.
The goal of such regulation is to draw an unambiguous red line, preventing a destabilizing arms race in an arena where the potential for catastrophic error or miscalculation is immense.
A Pragmatic Stance on Existential Threats
Finally, any serious policy discussion must address the so-called existential risks, which involve the potential for AI to destroy human civilization altogether.
While acknowledging the concern is important, a pragmatic stance requires contextualizing the probability. As argued in Part III, catastrophic outcomes have a nonzero chance but remain highly improbable: the core doomsday assumption of rapid, exponential self-improvement is tempered by very real physical and computational limitations, and there is no evidence that current technology can surpass them.
The danger for policymakers lies in overemphasizing these speculative, long-term risks, which can divert critical resources from the tangible, present-day harms AI is already creating.
The pragmatic approach here lies in understanding that AI x-risk is but one of several major threats, on a scale comparable to climate change and pandemics but probably far less likely. Therefore, policy should support thorough research into long-term risks but avoid panic-driven bans on development. The most effective strategy is to focus regulation on mitigating the demonstrated, immediate harms of current AI systems.
The Challenge of Smart Regulation
Identifying the risks is only the first step. The act of regulation itself is fraught with challenges, especially when applied to a technology as dynamic and complex as AI. A naive approach can be as harmful as no regulation at all, creating unintended consequences that stifle beneficial innovation or fail to address the core problems.
Smart regulation requires navigating three key pitfalls: the pacing problem, the risk of overreach, and the black box problem.
The Pacing Problem
Traditional legislative cycles, which can take years to produce new laws, are fundamentally mismatched with the exponential pace of AI development. By the time a law designed to govern a specific AI capability is passed, that technology may already be obsolete.
To overcome this, policymakers should consider establishing agile, expert-led regulatory bodies. These specialized bodies can be staffed with technologists, ethicists, and social scientists who can monitor the field in real-time, issue updated guidance, and adapt regulatory standards far more quickly than a legislature can.
Avoiding Overreach
In the face of uncertainty and fear, the temptation can be to enact broad, sweeping prohibitions on AI development. This would be a profound mistake. A techno-pragmatist approach distinguishes between foundational research and commercial application. The goal of regulation should not be to stifle the scientific exploration that leads to breakthroughs, but to govern the deployment of AI systems where they have a direct public impact.
Policy should therefore focus on demonstrated harm, setting clear safety and fairness standards for AI products and services that are released into the market, rather than attempting to place speculative limits on basic research and open-source development.
The Black Box Problem
Many of the most powerful AI systems operate as black boxes, where even their own creators cannot fully explain the specific logic behind a given decision. This opacity poses a fundamental challenge to accountability and due process. How can an individual appeal a decision they cannot understand?
Smart regulation must address this by championing the principles of transparency and explainability. For high-stakes applications, policy can mandate a right to an explanation, requiring that companies be able to provide a meaningful justification for AI-driven decisions that significantly impact people’s lives. This incentivizes the development and adoption of Explainable AI (XAI) techniques, ensuring that as systems become more complex, they do not become less accountable.
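One widely used, model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops. The sketch below demonstrates the idea on synthetic data with scikit-learn; it is one illustrative technique among many, not a complete “right to an explanation” implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the outcome, feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature and measure the drop in accuracy: a large drop
# means the model genuinely relies on that feature for its decisions.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
# feature 0 dominates; feature 1 is near zero.
```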
Principles for Proactive AI Governance
Having navigated the pitfalls, we can chart a course for proactive governance. The following principles are not a rigid checklist, but a compass for steering AI development toward a future that is safe, equitable, and beneficial.
The core of this approach is a commitment to evidence over ideology. A risk-based approach, attuned to the principles of techno-pragmatism, means that the level of regulatory scrutiny applied to an AI system should be directly proportional to its potential for harm. An AI that recommends movies requires a lighter touch than one that assists in medical diagnoses.
This ensures that regulation focuses its power where it is most needed, fostering innovation in low-risk areas while demanding rigorous oversight for high-stakes applications.
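A risk-based regime can be made concrete as a simple tiering scheme. The mapping below is hypothetical, loosely inspired by existing risk-based proposals rather than taken from any statute; the point is the structure, with scrutiny proportional to potential harm.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"  # e.g. movie recommendations
    LIMITED = "limited"  # e.g. chatbots: transparency duties
    HIGH = "high"        # e.g. medical diagnosis: audits plus human oversight

# Hypothetical domain-to-tier mapping and the obligations each tier triggers.
DOMAIN_TIERS = {
    "entertainment_recommendation": RiskTier.MINIMAL,
    "customer_service_chatbot": RiskTier.LIMITED,
    "medical_diagnosis_support": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
}

OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI use to users"],
    RiskTier.HIGH: ["independent fairness audit",
                    "explanation on request",
                    "human-in-the-loop for final decisions"],
}

def obligations_for(domain: str) -> list[str]:
    # Unknown domains default to HIGH: scrutiny proportional to uncertainty.
    tier = DOMAIN_TIERS.get(domain, RiskTier.HIGH)
    return OBLIGATIONS[tier]

print(obligations_for("credit_scoring"))
```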
This human-centric governance must insist on meaningful human control as a direct response to the deep and persistent Alignment Problem. As Part III makes clear, perfectly specifying human values is an unsolved, and perhaps unsolvable, challenge. Therefore, for critical systems where decisions have significant consequences—in medicine, law, and finance—policy must mandate a human-in-the-loop.
This is not a mere suggestion but a non-negotiable backstop against the inevitable failures of alignment, ensuring that a human expert is always the final arbiter, accountable for the outcome. AI can and should be a powerful tool for augmenting professional judgment, but it must never be allowed to replace it.
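In software terms, a human-in-the-loop mandate often reduces to a routing rule: the model may draft, but certain decisions always stop at a person. The sketch below is a minimal, hypothetical illustration of such a gate, with the categories and threshold invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    recommendation: str  # e.g. "deny_loan"
    confidence: float    # the model's own confidence, in [0, 1]

HIGH_STAKES = {"deny_loan", "flag_for_surgery", "reject_parole"}
CONFIDENCE_FLOOR = 0.9  # hypothetical threshold set by policy, not the vendor

def route(output: ModelOutput) -> str:
    """High-stakes or low-confidence outputs must reach a human arbiter.
    The AI augments professional judgment; it never issues the final
    decision alone."""
    if output.recommendation in HIGH_STAKES:
        return "human_review"  # the non-negotiable backstop
    if output.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto_apply_with_audit_log"  # still logged and appealable

print(route(ModelOutput("deny_loan", confidence=0.99)))  # human_review
```

Note that high confidence does not bypass the gate: for high-stakes categories, human review is unconditional.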
Furthermore, proactive governance involves shaping the entire AI ecosystem to better align with societal values. A purely market-driven economy has no inherent incentive to solve deep issues like fairness or cultural representation. Therefore, policy must create these incentives. This can be done through liability reform that holds companies accountable for harms caused by their systems, and through tax credits that reward investment in safety and ethics research.
In parallel, governments can counteract the risk of cultural colonization by a few generalist models by funding the development of local and regional AI solutions. This support for models trained on specific cultural and linguistic data, combined with national programs to foster widespread AI literacy, can help create a more diverse, resilient, and critically engaged society.
Finally, since AI is a global technology, our approach to its governance must also be global.
A patchwork of national regulations risks creating a race to the bottom, where innovation flees to the least-regulated environments. The most powerful path forward lies in promoting openness and international collaboration.
Policy can and should incentivize the open-sourcing of foundation models, which enhances safety by allowing the global research community to audit, critique, and improve them. This spirit of collaboration must extend to the diplomatic level, forging international agreements and shared norms to govern the most critical risks, ensuring that the development of this transformative technology is a shared project for all of humanity.
Conclusions
The path of technology is not deterministic. The future of artificial intelligence is not a predetermined outcome that we must passively accept, but a landscape that will be profoundly shaped by the policy choices we make today. As we have seen, the risks are significant, but so is the potential. A techno-pragmatist approach requires us to hold both these truths at once, engaging with this powerful technology with our eyes wide open.
Thanks for reading! Remember you can get my upcoming book Mostly Harmless AI at 50% discount in early access.