LWiAI Podcast #243 - GPT 5.5, DeepSeek V4, AI safety sabotage
Our 243rd episode with a summary and discussion of last week’s big AI news!
On Friday, Chinese AI firm DeepSeek released a preview of V4, its long-awaited new flagship model. Notably, the model can process much longer prompts than its last generation, thanks to…
DeepSeek says both models are more efficient and performant than DeepSeek V3.2 due to architectural improvements, and have almost "closed the gap" with current leading models, both open and closed,…
The latest set of open-source models from DeepSeek is here. While the industry anticipated the continued dominance of “closed” iterations like GPT-5.5, the arrival of DeepSeek-V4 has tipped that dominance in…
Chinese AI company DeepSeek released a preview of its hotly anticipated next-generation AI model V4 on Friday, saying that the open-source model can compete with leading closed-source systems from US…
Last Updated on April 23, 2026 by Editorial Team. Author(s): DrSwarnenduAI. Originally published on Towards AI. GPT-4 has 1.8 trillion parameters, yet it uses only about 2% of them per token. DeepSeek-R1: 671…
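The snippet's sparse-activation claim comes down to simple arithmetic: in a mixture-of-experts (MoE) model, only a fraction of the total parameters are activated for each token. A minimal sketch of that calculation, taking the article's reported figures (1.8T total parameters, ~2% active per token) as unverified assumptions:

```python
# Sketch of the MoE active-parameter arithmetic implied by the snippet above.
# The 1.8T total / ~2% active figures are the article's claims, not confirmed specs.

def active_params(total_params: float, active_fraction: float) -> float:
    """Estimate the parameters actually used per token in a sparse MoE model."""
    return total_params * active_fraction

total = 1.8e12       # 1.8 trillion total parameters (claimed for GPT-4)
fraction = 0.02      # ~2% of parameters activated per token (claimed)

print(f"~{active_params(total, fraction) / 1e9:.0f}B active parameters per token")
```

Under those assumed figures, the model would use roughly 36 billion parameters per token, which is why sparse MoE models can be far cheaper to run than their total parameter count suggests.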
A newly released 14-page technical paper from the team behind DeepSeek-V3, with DeepSeek CEO Liang Wenfeng as a co-author, sheds light on the “Scaling Challenges and Reflections on Hardware for…
DeepSeek AI, a prominent player in the large language model arena, has recently published a research paper detailing a new technique aimed at enhancing the scalability of general reward models…
The recent disruption caused by DeepSeek’s R1 model sent shockwaves through the AI community, demonstrating that Chinese AI advancements may have been underestimated. The model’s performance, rivaling some of the…