TLDR AI 2026-02-13
Codex Spark speed ⚡, Gemini Deep Think 🧠, Anthropic $380B 💰
GPT-5.3-Codex-Spark (8 minute read)
OpenAI released GPT-5.3-Codex-Spark, a smaller ultra-fast coding model optimized for real-time use in Codex and capable of generating over 1,000 tokens per second on low-latency hardware.
Gemini 3 Deep Think Upgrade (4 minute read)
Google released a major upgrade to Gemini 3 Deep Think, its specialized reasoning mode, expanding access to Ultra subscribers and select API users. The update was developed with researchers to better handle open-ended scientific and engineering problems with messy or incomplete data.
Anthropic Raised $30B at $380B Valuation (5 minute read)
Anthropic raised $30 billion in Series G funding at a $380 billion post-money valuation, led by GIC and Coatue with broad institutional participation.
🧠
Deep Dives & Analysis
Optimal Timing for Superintelligence (72 minute read)
Developing superintelligence is like undergoing risky surgery for a condition that would otherwise prove fatal. Some have called for a pause or permanent halt to AI development because AGI could pose existential risks, but a poorly implemented pause could do more harm than good. The optimal strategy is to move quickly to AGI capability, then pause briefly before full development.
I improved 15 LLMs at coding in one afternoon. Only the harness changed (9 minute read)
There's no real consensus on how a coding agent should specify edits, the seemingly simple "how do you change things" problem. None of the current tools give models a stable, verifiable identifier for the lines to be changed without wasting tremendous amounts of context or depending on perfect recall. This post proposes a harness where, whenever a model reads a file or greps for something, every line comes back tagged with a 2-3 character content hash, and when the model edits, it references those tags. This approach yielded an over-8% improvement in Gemini's success rate at zero training compute cost.
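The scheme described in the post can be sketched in a few lines of Python. This is a minimal illustration, not the author's actual harness: the 3-character tag length, the use of SHA-1, and the function names are all assumptions.

```python
import hashlib

def _tag(line: str) -> str:
    # Hypothetical tagging scheme: first 3 hex chars of the line's SHA-1.
    return hashlib.sha1(line.encode("utf-8")).hexdigest()[:3]

def tag_lines(text: str) -> list[tuple[str, str]]:
    """Return (tag, line) pairs, as a harness might show them to the model."""
    return [(_tag(line), line) for line in text.splitlines()]

def apply_edit(text: str, tag: str, new_line: str) -> str:
    """Replace the first line whose content hash matches `tag`.

    Because the tag is derived from the line's content, a stale edit
    (the file changed since the model read it) fails loudly instead of
    silently patching the wrong line.
    """
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if _tag(line) == tag:
            lines[i] = new_line
            return "\n".join(lines)
    raise ValueError(f"no line with tag {tag!r}; file changed since read")
```

The key property is that the identifier is verifiable: unlike a raw line number, a content hash can be checked against the file at edit time, so the harness can reject edits based on stale context.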
Leading Inference Providers Cut AI Costs by up to 10x With Open Source Models on NVIDIA Blackwell (7 minute read)
Leading inference providers like Baseten, DeepInfra, and Together AI are cutting AI costs by up to 10x using open source models on NVIDIA Blackwell GPUs. In healthcare, companies like Sully.ai reduced inference expenses, improving response times and freeing up valuable time for doctors. In gaming and customer service, NVIDIA Blackwell's optimized platforms enable Latitude and Decagon to slash token costs while enhancing user experiences and managing high workloads effectively.
The AI Advantage Nobody Is Talking About (7 minute read)
AI is not eliminating expertise; it's shifting the bottleneck from production to evaluation. Anyone can now generate "passable" output quickly, but true judgment, pattern recognition, and domain knowledge remain scarce. Professionals who combine deep expertise with AI fluency multiply their impact, while those who rely solely on prompting or ignore AI fall behind.
Cursor Expands Long-Running Agents Preview (3 minute read)
Cursor expanded access to its long-running agents research preview for Ultra, Teams, and Enterprise users. The custom harness enables agents to complete longer, more complex coding tasks, producing larger pull requests with merge rates comparable to those of other agents.