TLDR 2025-11-12
SoftBank dumps Nvidia 💰, SpaceX GigaBay 🚀, devtool integration 👨‍💻
SoftBank Sells Its Nvidia Stake for $5.8 Billion to Fund OpenAI Bet (6 minute read)
SoftBank has sold its entire $5.8 billion stake in Nvidia to invest more in OpenAI. The investment group's commitments to OpenAI have been profitable - SoftBank reported on Tuesday that its quarterly profit more than doubled compared to a year earlier. SoftBank also recently sold part of its stakes in T-Mobile US and Deutsche Telekom, and it has loaded up on debt.
Google says new cloud-based "Private AI Compute" is just as secure as local processing (3 minute read)
Google's Private AI Compute runs on a stack powered by the company's custom Tensor Processing Units, which have integrated secure elements. Devices connect directly to the protected space over an encrypted link, which means that not even Google can access the data. Google says the service is as secure as local processing while providing enough computing power to run its largest and most capable models from any device.
🚀
Science & Futuristic Technology
MIT physicists observe key evidence of unconventional superconductivity in magic-angle graphene (9 minute read)
'Unconventional' superconductors are materials that exhibit superconductivity through a mechanism different from that of conventional superconductors, which require ultra-low temperatures to operate. MIT physicists have created a material whose superconducting gap looks very different from that of a typical superconductor, meaning that the mechanism by which the material becomes superconductive must also be different. Understanding how unconventional superconductors work could unlock the design of superconductors that operate at room temperature, the Holy Grail of the entire field.
SpaceX's next project will produce Starships at a level that sounds impossible (4 minute read)
SpaceX's newest Starbase facility in Texas, the Gigabay, is designed to manufacture up to a thousand Starship rockets a year. It will be one of the largest industrial structures ever built. Construction of the facility is expected to be completed in December 2026.
💻
Programming, Design & Data Science
Cut QA Cycles From Hours to Minutes With Automated Testing (Sponsor)
If slow QA cycles are holding your team back from releasing faster, try QA Wolf. Their fully managed, AI-native service delivers 80% automated E2E test coverage in weeks and helps teams ship 5× faster by cutting QA cycles from hours to minutes. With unlimited parallel runs, 24-hour test maintenance, and zero flakes, teams like Drata have achieved 86% faster QA cycles.
Schedule a demo to learn more →
Vertical Integration is the Only Thing That Matters (13 minute read)
The inability of developer productivity startups to vertically integrate their offerings has hindered their adoption and utility. Vertical integration here means tight integration between the different tools in a stack: being unable to ship the whole stack limits the features a vendor can provide. 'Glue code' is what ties the pieces together.
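As a loose illustration of such 'glue code', here is a hypothetical sketch that stitches a test runner to an issue tracker; the pytest command, TRACKER_TOKEN variable, and API endpoint are invented for illustration and are not from the article or any real product.

```python
# Hypothetical "glue code" sketch: run a test suite and file an issue when it fails.
# The pytest command, TRACKER_TOKEN variable, and API endpoint are illustrative only.
import json
import os
import subprocess
import urllib.request

# Run the tests and capture their output.
result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)

if result.returncode != 0:
    # On failure, POST a new issue with the tail of the test output attached.
    payload = json.dumps({
        "title": "CI: test suite failed",
        "body": result.stdout[-2000:],
    }).encode()
    request = urllib.request.Request(
        "https://tracker.example.com/api/issues",  # placeholder issue-tracker endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['TRACKER_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(request)
```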
Scaling HNSWs (25 minute read)
The Hierarchical Navigable Small World (HNSW) algorithm is a graph-based approximate nearest neighbor search technique used in many vector databases. This post shares advanced findings on HNSWs. HNSWs do not naturally lend themselves to low latency and high performance, so the post focuses on making them fast enough to deliver a 'Redis-like' experience. It also covers the challenges involved in exposing HNSWs as abstract data structures.
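For readers unfamiliar with the data structure, here is a minimal sketch of building and querying an HNSW index with the hnswlib Python library; the library choice, dimensions, and parameter values are illustrative assumptions, not details from the post.

```python
# Minimal HNSW example using the hnswlib library (an illustrative choice, not the post's code).
import hnswlib
import numpy as np

dim = 128
num_elements = 10_000

# Random vectors standing in for real embeddings.
data = np.random.rand(num_elements, dim).astype(np.float32)
ids = np.arange(num_elements)

# M and ef_construction trade index size and build time against recall.
index = hnswlib.Index(space="l2", dim=dim)
index.init_index(max_elements=num_elements, ef_construction=200, M=16)
index.add_items(data, ids)

# ef is the search-time beam width: higher values improve recall at the cost of latency.
index.set_ef(50)
labels, distances = index.knn_query(data[:5], k=3)
print(labels)
```

Parameters like M and ef trade memory and latency against recall, which is the kind of tuning that matters when pushing HNSWs toward low-latency serving.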
Meta AI Pioneer Has Discussed Leaving to Launch a Startup (3 minute read)
Yann LeCun, the AI pioneer who has led Meta's AI efforts for more than a decade, has reportedly recruited colleagues and spoken to investors about possibly creating a startup focused on developing world models. LeCun has become one of the most prominent skeptics of the idea that large language models can lead to superintelligence, advocating instead for a new type of AI model trained on real-world data. He believes that within three to five years, world models will be the dominant AI architecture and that nobody will use the kind of models we have today.
Galaxy brain resistance (25 minute read)
If your arguments can justify anything, then they imply nothing - arguments like these are almost always rationalizations. 'Galaxy brain resistance' refers to how difficult a style of argument is to abuse. Patterns of reasoning with very low galaxy brain resistance are common, and they can have extreme consequences. You can avoid galaxy-braining yourself by holding principles with a very high bar for considering any exceptions and by setting the right incentives - the easiest way to do that is to not give yourself bad incentives.
Get the most interesting stories in startups, tech, and programming delivered in a free daily email.
Join 1,600,000 readers for one daily email