TLDR Dev 2026-05-12
Writing code by hand, Interaction Models, why use Python
Articles & Tutorials
Testing Vue components in the browser (9 minute read)
Running integration tests for Vue components directly in a browser tab bypasses Node and heavy automation tooling: QUnit mounts the components and simulates interactions. The approach requires polling for asynchronous updates and server-side endpoints for fixture data, but it yields a more confident workflow and lets tests use the browser's native code-coverage tools.
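The polling step described above can be sketched as a small helper that repeatedly checks a condition until the DOM settles. This is a minimal illustration, not code from the article; all names are assumptions:

```typescript
// Minimal polling helper for browser-run tests: repeatedly evaluates a
// predicate until it returns true or the timeout elapses. Useful when a
// mounted component updates the DOM asynchronously and there is no
// framework hook to await.
async function waitFor(
  predicate: () => boolean,
  { timeoutMs = 2000, intervalMs = 25 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!predicate()) {
    if (Date.now() > deadline) {
      throw new Error("waitFor: condition not met within timeout");
    }
    // Yield to the event loop so the component can actually re-render.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

A QUnit test could then simulate a click and `await waitFor(() => root.querySelector(".loaded") !== null)` before asserting on the rendered output (selector names here are hypothetical).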
I got a $134 Cloudflare D1 bill. Here's how I cut it 95% (5 minute read)
A SvelteKit site incurred a $134 Cloudflare D1 bill due to the database charging per row scanned, which, combined with a lack of indexes, resulted in 127.6 billion row reads from two full scans on every page load. The cost was cut by applying composite indexes, running ANALYZE for the query planner, and implementing KV cache-aside for layout-level data reads.
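The cache-aside fix mentioned above can be sketched generically. `KVLike` below is a stand-in for Cloudflare KV's interface, and all names (keys, schema) are illustrative assumptions, not from the article:

```typescript
// Generic cache-aside: check the cache first, fall through to the
// (per-row-billed) database query only on a miss, then populate the
// cache with a TTL so repeat page loads scan zero rows.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

async function cachedQuery<T>(
  kv: KVLike,
  key: string,
  query: () => Promise<T>,
  ttlSeconds = 300
): Promise<T> {
  const hit = await kv.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // cache hit: no DB rows scanned
  const fresh = await query();                   // cache miss: one indexed query
  await kv.put(key, JSON.stringify(fresh), { expirationTtl: ttlSeconds });
  return fresh;
}
```

The other half of the fix is making the underlying query cheap when the cache misses: a composite index covering the query's filter columns (for example, `CREATE INDEX idx_posts_site_slug ON posts(site_id, slug);` against a hypothetical schema), followed by `ANALYZE` so the query planner actually uses it.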
The self-driving codebase: Building Horizon at WorkOS (20 minute read)
Horizon is an autonomous code factory that uses webhooks to trigger agents in Cloudflare sandboxes to plan, code, and verify work end-to-end. The system's architecture is made of disposable sandboxes, a shared context server, and an orchestrator, which all work together in a compounding loop where every run ships code and improves the platform.
I'm going back to writing code by hand (18 minute read)
AI-assisted vibe-coding for the k10s Kubernetes dashboard initially sped up development, but the lack of human architectural oversight led to a fragile "god object" structure and critical technical flaws like data races. Therefore, the tool is being completely rewritten in Rust with a human-designed architecture and technical guardrails to properly guide future AI contributions.
If AI Writes Your Code, Why Use Python? (7 minute read)
Modern AI agents are proficient with systems languages like Rust and Go, leveraging their strict compiler feedback to self-correct architectural flaws and concurrency bugs more efficiently than human developers. That proficiency is driving a shift away from human-friendly languages like Python, since AI can now rapidly port or rewrite large codebases in high-performance languages.
Running local models on an M4 with 24GB memory (13 minute read)
Running local LLMs on an M4 Mac with 24GB of RAM allows for cost-effective and private research and coding tasks without relying on cloud services. The Qwen 3.5-9B model configured via LM Studio currently offers the best reasoning results.
Useful Memories Become Faulty When Continuously Updated by LLMs (22 minute read)
LLM agents frequently degrade in performance when they continuously rewrite their experiences into textual lessons, causing specific facts to drift into vacuous abstractions and over-generalized rules. Therefore, research suggests that future memory systems should favor a curated collection of raw, unabstracted episodes over constant summarization.
Join 450,000 readers for the most important software engineering news in one daily email.