Venture capital technology news
Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date
18 January 2026 @ 7:00 pm
It’s the question on everyone’s minds and lips: Are we in an AI bubble? It's the wrong question. The real question is: Which AI bubble are we in, and when will each one burst? The debate over whether AI represents a transformative technology or an economic time bomb has reached a fever pitch. Even tech leaders like Meta CEO Mark Zuckerberg have acknowledged evidence of an unstable financial bubble forming around AI. OpenAI CEO Sam Altman and Microsoft co-founder Bill Gates see clear bubble dynamics: overexcited investors, frothy valuations and plenty of doomed projects — but they still believe AI will ultimately transform the economy. But treating "AI" as a single monolithic entity destined for a uniform collapse is fundamentally misguided. The AI ecosystem is actually three distinct layers, each with different…
Why reinforcement learning plateaus without representation depth (and other key takeaways from NeurIPS 2025)
17 January 2026 @ 7:00 pm
Every year, NeurIPS produces hundreds of impressive papers, and a handful that subtly reset how practitioners think about scaling, evaluation and system design. In 2025, the most consequential works weren't about a single breakthrough model. Instead, they challenged fundamental assumptions that academia and industry have quietly relied on: Bigger models mean better reasoning, RL creates new capabilities, attention is “solved” and generative models inevitably memorize. This year’s top papers collectively point to a deeper shift: AI progress is now constrained less by raw model capacity and more by architecture, training dynamics and evaluation strategy. Below is a technical deep dive into five of the most influential NeurIPS 2025 papers — and what they mean for anyone building real-world AI systems.
1. LLMs are converging…
Black Forest Labs launches open source Flux.2 [klein] to generate AI images in less than a second
16 January 2026 @ 11:28 pm
The German AI startup Black Forest Labs (BFL), founded by former Stability AI engineers, is continuing to build out its suite of open source AI image generators with the release of FLUX.2 [klein], a new pair of small models — one open and one non-commercial — that emphasize speed and lower compute requirements, generating images in less than a second on an Nvidia GB200. The [klein] series, released yesterday, includes two primary parameter counts: 4 billion (4B) and 9 billion (9B). The model weights are available on Hugging Face and the code on GitHub. While the larger models in the FLUX…
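For readers who want to try the weights, here is a minimal sketch of loading a FLUX-family checkpoint from Hugging Face with the diffusers library; the repository name, step count and other settings are illustrative assumptions, not confirmed details of the [klein] release.

```python
# Minimal sketch: loading a FLUX-family checkpoint with Hugging Face diffusers.
# The repo id and generation settings are illustrative assumptions, not confirmed
# details of the FLUX.2 [klein] release.
import torch
from diffusers import DiffusionPipeline

repo_id = "black-forest-labs/FLUX.2-klein"  # hypothetical repo id for illustration

pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="a lighthouse on a cliff at sunset, photorealistic",
    num_inference_steps=4,  # low step counts are typical for speed-focused small models
).images[0]
image.save("klein_sample.png")
```

Sub-second generation claims generally come from pairing small parameter counts and low step counts with data-center GPUs, which is consistent with BFL's GB200 benchmark figure.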
How Google’s 'internal RL' could unlock long-horizon AI agents
16 January 2026 @ 10:41 pm
Researchers at Google have developed a technique that makes it easier for AI models to learn complex reasoning tasks that usually cause LLMs to hallucinate or fall apart. Instead of training LLMs through next-token prediction, their technique, called internal reinforcement learning (internal RL), steers the model’s internal activations toward developing a high-level, step-by-step solution for the input problem. Ultimately, this could provide a scalable path for creating autonomous agents that can handle complex reasoning and real-world robotics without needing constant, manual guidance.
The limits of next-token prediction
Reinforcement learning plays a key role in post-training LLMs, particularly for complex reasoning tasks that require long-horizon planning…
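The excerpt doesn't spell out Google's actual objective, but the contrast it draws, supervising internal activations rather than only next-token outputs, can be illustrated with a toy PyTorch sketch. The pooled "plan" vector, layer choice and weighting below are illustrative assumptions, not the paper's method.

```python
# Toy sketch of the general idea of steering internal activations (not Google's
# actual internal-RL objective, which is not detailed in this excerpt): alongside
# the usual next-token loss, one hidden layer is nudged toward a target "plan" vector.
import torch
import torch.nn.functional as F

def combined_loss(model, input_ids, labels, plan_target, layer_idx=-2, alpha=0.1):
    """Standard LM loss plus an illustrative activation-steering term.

    plan_target: assumed precomputed vector encoding a high-level solution plan.
    Works with a Hugging Face causal LM that returns hidden states and a loss.
    """
    out = model(input_ids=input_ids, labels=labels, output_hidden_states=True)
    hidden = out.hidden_states[layer_idx]           # (batch, seq, dim)
    plan_repr = hidden.mean(dim=1)                  # crude pooled "internal plan"
    steering = 1 - F.cosine_similarity(plan_repr, plan_target, dim=-1).mean()
    return out.loss + alpha * steering              # next-token loss + steering penalty
```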
Listen Labs raises $69M after viral billboard hiring stunt to scale AI customer interviews
16 January 2026 @ 2:01 pm
Alfred Wahlforss was running out of options. His startup, Listen Labs, needed to hire over 100 engineers, but competing against Mark Zuckerberg's $100 million offers seemed impossible. So he spent $5,000 — a fifth of his marketing budget — on a billboard in San Francisco displaying what looked like gibberish: five strings of random numbers. The numbers were actually AI tokens. Decoded, they led to a coding challenge: build an algorithm to act as a digital bouncer at Berghain, the Berlin nightclub famous for rejecting nearly everyone at the door. Within days, thousands attempted the puzzle; 430 cracked it. Some got hired. The winner flew to Berlin, all expenses paid. That unconventional approach has now attracted $69 million in Series B funding, led by…
Kilo launches AI-powered Slack bot that ships code from a chat message
16 January 2026 @ 2:00 pm
Kilo Code, the open-source AI coding startup backed by GitLab co-founder Sid Sijbrandij, is launching a Slack integration that allows software engineering teams to execute code changes, debug issues, and push pull requests directly from their team chat — without opening an IDE or switching applications. The product, called Kilo for Slack, arrives as the AI-assisted coding market heats up with multibillion-dollar acquisitions and funding rounds. But rather than building another siloed coding assistant, Kilo is making a calculated bet: that the future of AI development tools lies not in locking engineers into a single interface, but in embedding AI capabilities into the fragmented workflows where decisions actually happen. "Engineering teams don't make decisions in IDE sidebars. They make them in Slack," Scott Breitenother, Kilo Code's co-founder and CEO, said in an interview with VentureBeat.
Claude Code just got updated with one of the most-requested user features
15 January 2026 @ 7:37 pm
Anthropic's open source standard, the Model Context Protocol (MCP), released in late 2024, allows users to connect AI models and the agents atop them to external tools in a structured, reliable format. It is the engine behind Anthropic's hit AI agentic programming harness, Claude Code, allowing it to access numerous functions like web browsing and file creation immediately when asked.But there was one problem: Claude Code typically had to "read" the instruction manual for every single tool available, regardless of whether it was needed for the immediate task, using up the available context that could otherwise be filled with more information from the user's prompts or the agent's responses.
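To see why that is costly, it helps to look at what a tool's "instruction manual" actually is: under MCP, each tool is advertised with a name, a description and a JSON input schema, and all of that text lands in the model's context window whether or not the current task needs it. Below is a short Python sketch with a made-up tool definition; the field layout follows the shape the MCP spec uses for tool listings, and the token estimate is only a rough illustration.

```python
# Illustrative MCP-style tool definition (made-up example). Every connected tool
# contributes roughly this much text to the agent's context before any work starts.
import json

web_fetch_tool = {
    "name": "web_fetch",
    "description": "Fetch a URL and return its contents as text.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Absolute URL to fetch"},
            "timeout_ms": {"type": "integer", "description": "Request timeout"},
        },
        "required": ["url"],
    },
}

# Crude cost estimate: ~4 characters per token.
approx_tokens = len(json.dumps(web_fetch_tool)) // 4
print(f"~{approx_tokens} tokens for one tool definition")
```

Multiplied across dozens of connected tools, those definitions can consume thousands of tokens before the user's actual request is even considered.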