AI Now Is Hiring a Program Associate
We’re looking for a Program Associate to help execute our programs so they can be maximally impactful. With a bias to action and high degree of attention to…
Latest AI regulation and policy news — EU AI Act, US executive orders, UK AI Safety Institute, international AI governance, and the global regulatory landscape.
AI regulation is the fastest-moving area of technology policy. Since 2023, governments worldwide have moved from observation to action: the EU passed the world's first comprehensive AI law, the US issued executive orders establishing AI safety requirements, the UK hosted the first global AI Safety Summit, China enacted AI-specific regulations, and the UN established an AI advisory body. The regulatory landscape is being constructed in real time, with enormous consequences for how AI is developed and deployed.
The EU AI Act — adopted in 2024 — establishes a risk-tiered framework that prohibits certain AI uses (social scoring, real-time biometric surveillance in public), mandates conformity assessments for high-risk applications (medical devices, employment, critical infrastructure), and imposes new transparency and safety obligations on frontier 'general-purpose AI' models above a compute threshold. The Act is the most significant AI regulation globally and is shaping how companies worldwide design and document their systems.
In the US, the regulatory approach has been more fragmented. The Biden administration's October 2023 executive order required safety testing disclosures for frontier AI models and directed agencies to develop sector-specific rules. The Trump administration subsequently revoked that order, signaling a more permissive approach. At the state level, California's SB 1047 (vetoed by Governor Newsom) and similar bills in dozens of states reflect growing legislative activity. DeepTrendLab tracks all major regulatory developments across jurisdictions.
At its peak, the Androscoggin paper mill in Jay, Maine, a rural town about 67 miles northwest of Portland, employed about 1,500 people, until a pulp digester…
OpenAI CEO Sam Altman says Elon Musk did "huge damage" to the culture of the AI startup. During testimony as part of Musk's lawsuit against OpenAI, Altman said…
The family of a 19-year-old college student is suing OpenAI over claims that his conversations with ChatGPT led to an accidental overdose. In the lawsuit filed on Tuesday,…
OpenAI CEO Sam Altman has begun his testimony against Elon Musk in a high-profile jury trial in a California federal courtroom. Altman, alongside OpenAI president Greg Brockman, is…
Welcome to Import AI, a newsletter about AI research. Import AI runs on arXiv, cappuccinos, and feedback from readers.
Sam Altman and Elon Musk are facing off in a high-stakes trial that could alter the future of OpenAI and its most well-known product, ChatGPT. In 2024, Musk…
Hello and welcome to Regulator, a newsletter exclusively for Verge subscribers about tech, politics, and Washington intrigue. (It's basically House of Cards, but for nerds.)
We are looking for a high-touch, digitally savvy communications professional to support the organization’s external presence across a range of channels. The Communications Associate will be a primary…
We’re looking for a senior leader to support the organization through this next phase of growth. Experienced and results-driven, this individual will have a finger to the pulse…
First week of Musk v. Altman, OpenAI ends Microsoft legal peril over its $50B Amazon deal, DeepSeek previews new AI model that ‘closes the gap’ with frontier models,…
Explore OpenAI’s European Youth Safety Blueprint and EMEA Youth & Wellbeing Grants, advancing safe, responsible AI for teens, families, and educators.
Amid falling revenue and store closures, GameStop wants to buy the much larger eBay.
The move follows the Trump administration’s feud with Anthropic.
A new bill introduced by Senators Adam Schiff and Mike Rounds would award grants to the National Science Foundation—which has endured massive funding cuts under the Trump Administration…
OpenAI outlines a five-part action plan for strengthening cybersecurity in the Intelligence Age, focused on democratizing AI-powered cyber defense and protecting critical systems.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. In February, I picked…
The EU AI Act is the world's first comprehensive AI regulation, adopted in 2024. It uses a risk-based framework: prohibited AI practices (social scoring, real-time biometric surveillance), high-risk applications requiring conformity assessments (medical devices, hiring, critical infrastructure), limited-risk systems requiring transparency (chatbots must disclose they are AI), and minimal-risk systems. Frontier general-purpose AI models above 10^25 FLOPs of training compute face additional requirements.
The UK AI Safety Institute (AISI), established after the Bletchley Park AI Safety Summit in November 2023, is the world's first government body dedicated to evaluating frontier AI models for safety. AISI conducts pre-deployment testing of frontier models from leading labs and publishes research on AI evaluation methodology. It serves as a model for similar institutions in the US (USAISI), EU, and other countries.
AI safety typically refers to preventing catastrophic or existential risks from advanced AI systems — misalignment, loss of human control, or misuse for weapons of mass destruction. AI ethics addresses current harms from deployed AI systems — bias, discrimination, privacy violations, labor displacement, and misinformation. The two communities overlap but have different threat models, time horizons, and policy priorities, leading to occasional tension over regulatory focus.