
The Balancing Act - AI Regulation

Questions Asked in this Interview

Below is a list of the direct questions posed by the interviewer (Professor Hannah Fry) to Nicklas Lundblad (Google DeepMind).

  1. How would you describe the current public mood, as it were, around the subject?

  2. Do you think there’s been a shift in public mood? I mean, with the explosion of generative AI onto the scene, have you noticed a mood shift as a result?

  3. Could you give us a brief overview of where we are at the moment in terms of the landscape of regulation on artificial intelligence?

  4. Do you think that AI, as a distinct technology, needs regulating sort of independently of other technologies?

  5. I’m thinking here about CRISPR or plutonium, for instance, where there are very strict rules. Doesn’t that mean sometimes we do regulate the technology itself?

  6. If we apply that back to AI, how do you regulate the harms without regulating the technology? What does that actually look like in practice?

  7. Does that not require you knowing in advance what the potential harms will be?

  8. Give me an example. In what situation would that make it difficult? (Referring to setting bars for safety, false positives, etc.)

  9. I wonder whether there’s a question here about whether we must do it at all? If doing so means there’s unavoidable uncertainty, then what about just not doing it?

  10. What do you think the role of private companies should be in shaping this regulation?

  11. Is that a universal belief among the tech industry? (Referring to building expertise and understanding of technology in the public sector and the idea that mutual dialogue between private companies and policymakers is essential.)

  12. But there are some who think that self-regulation should be the main lever that gets used as we go forwards. What’s your take on that?

  13. I guess one of the other big counterarguments is that this whole picture is really further colored by the pursuit of profit. So can you trust that companies are doing this for the public good, or for competitive advantage?

  14. Do you think then, with that as the background, there is a risk that we’ll end up in a future where AI is in the hands of a very small number of companies?

  15. They’ve really gone for this risk-based approach. Do you think this is broadly a good approach? (On the EU AI Act)

  16. Do you think then that we will see similar types of innovation? For instance, will the EU regulations force wider adoption of ideas like SynthID, and maybe even lead to a standardized system?

  17. How about within ‘Google DeepMind,’ then? Do you have certain applications that you consider off limits here?

  18. How do you decide at ‘Google DeepMind’ which projects you will and won’t get involved in? How does that decision process happen?

  19. Why in a couple of years? Why not now? (Referencing Demis Hassabis’s idea of regulating frontier models in a couple of years)

  20. Are there any emerging capabilities that you are particularly concerned about?

  21. Do you think that we’re going to get to a point where international bodies will collaborate on AI regulation, or do you think that we’ll stay in this situation where different regions of the world approach it differently?

  22. One place that has not yet planted its flag on regulation at all is the UK. Do you think that when it does, it will end up being closer to the EU or to the US?

Themes Discussed in this Interview

1. The Need for AI Regulation

  • There is broad consensus that regulation is essential as AI becomes integrated into society.
  • The challenge lies in balancing innovation with safeguarding against harm.
  • Regulation should aim to ensure transparency, accountability, and public trust while fostering innovation.

2. Approaches to AI Regulation

  • US Approach: Focuses on sectoral, industry-led regulation (e.g., healthcare, education).
  • EU Approach: Adopts a horizontal, risk-based framework with stricter rules for high-risk systems (e.g., the AI Act).
  • UK Approach: Still developing; likely to favor a sectoral approach and lean closer to the US model.
  • China Approach: Focuses on power dynamics, regulating information algorithms and societal impact.

3. Regulation of Technology vs. Regulation of Use

  • Debate on whether regulation should target the technology itself (e.g., CRISPR or plutonium) or its applications and uses.
  • Many regulatory decisions are based on addressing potential harm rather than blanket bans on technology.

4. Challenges in AI Regulation

  • Uncertainty of Harms: Difficulty in predicting all potential harms of emerging technology.
  • Risk vs. Reward: Balancing the risks of AI with its potential for high rewards, like solving complex societal problems.
  • Global Disparities: Differing regional regulatory approaches may lead to uneven innovation and competitive advantages.

5. The Role of Private Companies

  • Companies are key players in shaping regulation, but there are concerns about the influence of profit motives.
  • Effective collaboration requires knowledge exchange: companies provide technical insights, while policymakers contribute democratic values and societal priorities.

6. Regulatory Models

  • Precautionary Principle: Emphasized in Europe; focuses on minimizing potential harms before allowing deployment.
  • Cost-Benefit Principle: Favored in the US; balances risks against potential benefits to determine acceptability.

7. Emerging AI Capabilities and Risks

  • Concerns include bias, misinformation, deception, and persuasion enabled by AI.
  • The importance of understanding and mitigating these risks while fostering innovation is emphasized.

8. Frontier Models and Safety

  • Discussion of frontier models (cutting-edge AI systems) and their potential risks.
  • Frontier models are large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models.
  • Suggestion to gradually develop testing and regulatory frameworks as capabilities evolve, rather than prematurely imposing restrictions.

9. Self-Regulation and Public Oversight

  • Self-regulation may be effective in some cases, but transparency and public accountability are crucial.
  • A hybrid model combining self-regulation with legal auditing and external review could be more effective.

10. International Collaboration

  • Efforts through OECD, UN, and G7 to harmonize AI regulations globally are underway, but geopolitical tensions pose challenges.
  • The necessity of aligning general principles (e.g., “AI should benefit humanity”) and specific practices is highlighted.

11. Ethical Principles and Prohibited Uses

  • Proposals for blanket bans on certain applications (e.g., real-time biometric surveillance, social credit systems).
  • Companies like Google DeepMind have their own AI principles to decide which projects they will and won’t engage in.

12. Innovation Driven by Regulation

  • Regulation can drive innovation when clear frameworks set goals or constraints (e.g., SynthID for watermarking AI content, EV innovation spurred by emissions standards).

13. The Importance of Public Investment in Science

  • There is a call for more public funding in AI research to balance the dominance of private-sector investment and ensure societal benefits.

14. Democratic and Transparent Governance

  • The role of democratic institutions in ensuring that regulation reflects societal values and not just corporate interests.
  • The importance of engaging diverse stakeholders in shaping AI governance.

15. Future Directions

  • Building testing institutions and benchmarking systems to evaluate AI models systematically.
  • Regulatory “curiosity” is vital for exploring where interventions can best mitigate harm while enabling progress.

16. Regulation vs Legislation

  • Regulation: Refers to a broader framework for controlling or guiding the use of technology. It can involve a combination of tools such as norms, market forces, architectural design (how the technology is built), and laws.
  • Legislation: Refers specifically to laws enacted by governments or legislative bodies. Legislation is a subset of regulation and focuses on formal, legal rules and requirements.

Books and Documents Discussed in this Interview

The transcript mentions the following books and documents:

1. Code and Other Laws of Cyberspace by Lawrence Lessig

This book is referenced by Nicklas Lundblad to explain the broader concept of regulation, emphasizing that regulation is not just about laws (legislation) but also includes architecture (technical design), markets (economic forces), and norms (societal expectations). It presents a framework for understanding how technology can be regulated through multiple forces and highlights the interplay between technology and societal governance.

2. The Library of Babel by Jorge Luis Borges

During the discussion of the hype cycle of technologies, Nicklas Lundblad refers to Borges’ fictional library containing all possible books as a metaphor for the initial euphoria and subsequent disillusionment that often accompany technological breakthroughs. The story explores the infinite nature of information and the despair that comes with its overwhelming abundance, which parallels the excitement and challenges of technological advances like AI.

3. The Imperative of Responsibility by Hans Jonas

Lundblad discusses the sociotechnical aspect of regulation. He mentions Jonas’ philosophical idea that all use of technology is an exercise of power. This book argues for an ethical framework of responsibility in using technology, particularly in light of humanity’s increasing capacity to affect the future through technological innovation.

4. A Declaration of the Independence of Cyberspace by John Perry Barlow

(not a book but an essay/document)
Lundblad cites this declaration to illustrate the libertarian ethos of early internet regulation debates, contrasting it with the current, more cautious and regulation-friendly approach to AI. The declaration argues for minimal government interference in the digital realm, reflecting a libertarian vision of freedom on the internet.


Key Hypotheses by Nicklas Lundblad

While responding to these questions, Nicklas Lundblad advanced the following hypotheses:

  • Different global approaches to AI regulation (e.g., EU’s risk-based model, US’s sectoral model, China’s focus on societal control) are effectively “experiments,” and we will learn from their successes and failures over time.
  • Regulation should be oriented around mitigating harm rather than directly targeting the technology itself.
  • Effective regulation requires considering both the risks and the potential rewards of AI applications, particularly in high-impact areas such as healthcare or climate change.
  • Well-defined regulatory frameworks can spur innovation by providing certainty and goals for companies to meet.
  • Democracies are more capable of balancing public interests, technical knowledge, and ethical considerations when regulating AI compared to authoritarian systems.
  • Immediate, restrictive regulation of cutting-edge AI (frontier models) might stifle innovation. A better approach is to build evaluation frameworks, test models, and incrementally develop regulatory standards.
  • Self-regulation by AI companies can be effective if paired with transparency, external audits, and government oversight.
  • The effectiveness of AI regulation can be partially measured by how quickly the technology diffuses through society and the economy, unlocking benefits and welfare.
  • While there is value in global collaboration on AI regulation (e.g., through the UN, OECD), geopolitical tensions and national competitiveness will likely limit the scope of such efforts.

Full Interview: The Balancing Act: AI & Regulation with Nicklas Lundblad
