TLDR AI 2025-11-26
xAI $15B round 💰, Ilya’s interview 🤖, FLUX.2 release 2️⃣
The monetization model that took Lovable from 0 to $100M ARR in 8 months (Sponsor)
As Lovable soared to $100M ARR in just 8 months, its team needed a new way to price AI. Traditional SaaS pricing models are breaking down, but it's not yet clear what will replace them.
In this live webinar, Elena Verna will share the pricing strategies her team uses to better reflect the value that AI delivers. Join to learn:
1️⃣ Why AI has upended the logic of SaaS pricing and why there's still no clear playbook.
2️⃣ How Lovable is navigating the tension between self-serve and enterprise monetization.
3️⃣ What it takes to evolve from usage-based pricing toward outcomes-based value.
Reserve your spot
Musk's xAI to close $15 billion funding round in December (2 minute read)
Elon Musk's xAI plans to close a $15 billion funding round with a $230 billion pre-money valuation in December. This comes months after similar large raises by OpenAI and Anthropic.
Amazon to spend up to $50 billion on AI infrastructure for the US government (2 minute read)
Amazon will invest up to $50 billion to expand its cloud unit's capacity to provide AI and high-performance computing capabilities to US government customers. The project will begin in 2026 and add nearly 1.3 gigawatts of capacity, giving government agencies access to AWS' AI tools, Anthropic's Claude family of models, Nvidia chips, and Amazon's custom Trainium AI chips. Agencies will be able to develop custom AI solutions, optimize data sets, and enhance workforce productivity.
Nano Banana Pro: raw intelligence with tool use (5 minute read)
Nano Banana Pro has pushed the frontier of infographic generation: it pulls in data and synthesizes it into its outputs. The model can be prompt-engineered for extremely nuanced image generation. Example outputs are available in the article.
Ilya Sutskever – We're moving from the age of scaling to the age of research (83 minute read)
This post contains a transcript of an interview with Ilya Sutskever, former chief scientist at OpenAI and co-founder of Safe Superintelligence. In it, he discusses model jaggedness, emotions and value functions, why humans generalize better than models, alignment, and more. Sutskever argues that current models generalize dramatically worse than people and that this gap is fundamental. Scaling has been pushed far enough that further progress will have to come from a return to research.
LLM | Unit Economics (7 minute read)
OpenAI and Anthropic likely won't ever stop training entirely. However, they don't need to grow training spend by multiples forever. The moment annual training spend stops growing 5x a year, profit margins will show up almost immediately. These companies are currently burn machines, but they won't always be.
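The margin dynamic described above can be sketched with a toy model. All numbers below are illustrative assumptions, not figures from the article: the point is only that a fixed training budget against growing revenue flips margins positive quickly.

```python
# Toy model of frontier-lab unit economics (all numbers hypothetical).
# Compares margins when training spend keeps growing 5x/year vs. when it flattens
# while revenue keeps growing.

def margin(revenue, training_spend, inference_cost_ratio=0.3):
    """Operating margin, assuming inference costs scale with revenue."""
    costs = training_spend + inference_cost_ratio * revenue
    return (revenue - costs) / revenue

revenue = 4.0   # $B/year, assumed starting revenue
training = 5.0  # $B/year, assumed starting training spend

for year in range(4):
    growing = margin(revenue, training * 5 ** year)  # training spend keeps 5x-ing
    flat = margin(revenue * 2 ** year, training)     # revenue doubles, training flat
    print(f"year {year}: margin if training 5x/yr = {growing:.0%}, if flat = {flat:.0%}")
```

Under these assumed numbers, the "flat training" scenario crosses into positive margins within two years, while the "5x forever" scenario burns ever deeper, which is the blurb's point in miniature.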
The Economics of Replacing Call Center Workers With AIs (8 minute read)
AI voice agent rates are currently competitive with human labor in some countries, but it is still cheaper to hire humans in most developing countries. Inference costs are massively decreasing every year. Voice agents will likely be competitive with the world's cheapest human labor in around 2030.
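A back-of-the-envelope sketch of that crossover. The starting rate, the annual cost decline, and the wage below are invented assumptions for illustration, not data from the article; they are chosen only to show how steady cost declines produce a break-even year around 2030.

```python
# Hypothetical break-even year for AI voice agents vs. the cheapest human labor.
# All rates are invented assumptions, not figures from the article.

ai_cost_per_hour = 2.00     # assumed 2025 cost of an AI voice agent ($/hr)
annual_cost_decline = 0.25  # assume inference cost falls 25% per year
human_wage_per_hour = 0.50  # assumed cheapest human call-center wage ($/hr)

year = 2025
while ai_cost_per_hour > human_wage_per_hour:
    ai_cost_per_hour *= (1 - annual_cost_decline)
    year += 1

print(f"AI undercuts the assumed wage around {year}")  # → around 2030
```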
👨‍💻 Engineering & Research
🆕 Cisco introduces full-stack AI infrastructure that's deployed in one click (Sponsor)
Skip the months-long hassle of building AI clusters.
Cisco Nexus Hyperfabric AI gives you a full-stack AI infrastructure solution, deployed in weeks or less. Enjoy seamless integration of networking, compute, storage, and GPUs, all managed through a single cloud controller.
Try it for free
GPT-5.1-Codex-Max (7 minute read)
OpenAI's GPT-5.1-Codex-Max improves on GPT-5.1-Codex, with better performance on SWE-bench Verified, SWE-Lancer IC SWE, and Terminal-Bench 2.0. It advances long-running task persistence, strengthens cybersecurity preparedness, and adds Windows training, while network access remains disabled by default for security. Despite substantial internal evaluation progress, external reviews show mixed cybersecurity capabilities, but they highlight advances in AI self-improvement tasks and overall code efficiency.
FLUX.2 Release (3 minute read)
The newly released FLUX.2 is designed for real-world creative workflows. It produces consistent, high-quality images from structured prompts and multiple references. The model handles brand constraints, lighting, and layout with editing support up to 4 megapixels.
Google Antigravity Exfiltrates Data (5 minute read)
Google's Antigravity coding tool can be manipulated through hidden instructions in poisoned web pages (i.e., prompt injection). A fake Oracle integration guide was used to trick Gemini into bypassing its own security settings, collecting user credentials from protected files, and sending the stolen data to an attacker-controlled website included in Antigravity's default allowed-domains list.