
AI Regulation & Policy

Latest AI regulation and policy news — EU AI Act, US executive orders, UK AI Safety Institute, international AI governance, and the global regulatory landscape.

AI regulation is the fastest-moving area of technology policy. Since 2023, governments worldwide have moved from observation to action: the EU passed the world's first comprehensive AI law, the US issued executive orders establishing AI safety requirements, the UK hosted the first global AI Safety Summit, China enacted AI-specific regulations, and the UN established an AI advisory body. The regulatory landscape is being constructed in real time, with enormous consequences for how AI is developed and deployed.

The EU AI Act — adopted in 2024 — establishes a risk-tiered framework: it prohibits certain AI uses (social scoring, real-time biometric surveillance in public spaces), mandates conformity assessments for high-risk applications (medical devices, employment, critical infrastructure), and imposes transparency and safety obligations on 'general-purpose AI' models trained above a compute threshold of 10^25 FLOPs. The Act is the most significant AI regulation globally and is already shaping how companies worldwide design and document their systems.

In the US, the regulatory approach has been more fragmented. The Biden administration's October 2023 executive order required safety testing disclosures for frontier AI models and directed agencies to develop sector-specific rules. The Trump administration subsequently revoked that order, signaling a more permissive approach. At the state level, California's SB 1047 (vetoed by Governor Newsom) and similar bills in dozens of states reflect growing legislative activity. DeepTrendLab tracks all major regulatory developments across jurisdictions.

Latest AI Regulation & Policy News

18 recent articles
AI Now Is Hiring a Program Associate
⚖️ Safety AI Now Institute

We’re looking for a Program Associate to help execute our programs so they can be maximally impactful. With a bias to action and high degree of attention to…

Data centers are coming for rural America
📰 News The Verge — AI

At its peak, the Androscoggin paper mill in Jay, Maine, a rural town about 67 miles northwest of Portland, employed about 1,500 people, until a pulp digester…

AI Now Is Hiring a Comms Associate
⚖️ Safety AI Now Institute

We are looking for a high-touch, digitally savvy communications professional to support the organization’s external presence across a range of channels. The Communications Associate will be a primary…

AI Now Is Hiring a Senior Operations Director
⚖️ Safety AI Now Institute

We’re looking for a senior leader to support the organization through this next phase of growth. Experienced and results-driven, this individual will have a finger to the pulse…

Cybersecurity in the Intelligence Age
🤖 AI Labs OpenAI Blog

OpenAI outlines a five-part action plan for strengthening cybersecurity in the Intelligence Age, focused on democratizing AI-powered cyber defense and protecting critical systems.

The missing step between hype and profit
🎓 News MIT Technology Review — AI

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. In February, I picked…

Frequently Asked Questions about AI Regulation & Policy

What is the EU AI Act?

The EU AI Act is the world's first comprehensive AI regulation, adopted in 2024. It uses a risk-based framework: prohibited AI practices (social scoring, real-time biometric surveillance), high-risk applications requiring conformity assessments (medical devices, hiring, critical infrastructure), limited-risk systems requiring transparency (chatbots must disclose they are AI), and minimal-risk systems. Frontier general-purpose AI models above 10^25 FLOPs of training compute face additional requirements.

What is the UK AI Safety Institute?

The UK AI Safety Institute (AISI), established after the Bletchley Park AI Safety Summit in November 2023, is the world's first government body dedicated to evaluating frontier AI models for safety. AISI conducts pre-deployment testing of frontier models from leading labs and publishes research on AI evaluation methodology. It serves as a model for similar institutions in the US (USAISI), EU, and other countries.

What is the difference between AI safety and AI ethics?

AI safety typically refers to preventing catastrophic or existential risks from advanced AI systems — misalignment, loss of human control, or misuse for weapons of mass destruction. AI ethics addresses current harms from deployed AI systems — bias, discrimination, privacy violations, labor displacement, and misinformation. The two communities overlap but have different threat models, time horizons, and policy priorities, leading to occasional tension over regulatory focus.