Researching how developers actually use AI to build & manage complex production software
Everyone demos a POC in 10 minutes. We study what happens after: when you need code that works reliably in production.
We are an open research lab. Everything we learn and build is public.
The Problem We See
Coding agents look simple to use. They're not. There's a real learning curve, and most developers are navigating it alone. The demos are impressive, but production software demands reliability, maintainability, and quality that don't come automatically. Most people optimize for lines of code and speed. We think that's missing the point.
What We Believe
Core software engineering principles aren't going away; they matter more now. Used right, AI doesn't mean trading quality for speed. It means developers finally have time to focus on architecture, edge cases, testing, and the things that make software reliable. The goal isn't just faster code. It's better code.
Publications
Code Retrieval in Coding Agents
Why do coding agents work well on small projects but struggle with large codebases? We studied how different coding agents find and use code context.
Projects
InfraGPT
An AI SRE copilot for cloud infrastructure, available via Slack or CLI. A multi-agent system that handles deployments, debugging, Terraform generation, and monitoring, with built-in audit trails.
A Guide to Agentic Coding
A comprehensive book on how to actually work with coding agents, from fundamentals to practical guides on debugging and building reliable software. We cover model selection, tools, prompting, managing context windows, subagents, and more.
Talk to Us
Interested in joining the research? Fill out this form
Questions? Reach out at team@73ai.org
Want to chat? Talk to the founders