
Agentic development destroys the mental model that makes software engineers effective

Updated January 26, 2026

When programming with agents, I lose the mental model of the code that made me an effective developer.

software-engineering generative-models

I find the programming robots we have today amazing, but they still cause me problems, and in those problems are ideas for how future tools should be designed. Software engineering requires holding a mental model of how the system functions in your head, as well as a projection of the larger system you have yet to build. When programming with agents, I lose both of those things.

My thoughts here relate to working on my own projects as a solo developer with full control of the code.

Current tools are good enough to generate large amounts of functioning code. But they are not yet good enough to never get stuck. That creates a dangerous mix where I can lazily skim code, test that it works, and move on for quite some time before I need to fix something. At that point, I don’t have a mental model of the code, and understanding it well enough to fix it feels like a large task. It’s large enough that I’ll panic-prompt for too long rather than do the hard work of fixing it myself.

I’ve checked whether using models in the terminal is the issue by prompting Codex and Claude through an IDE, but the problem remains: the model simply writes too much correct code at once. Unless I slow down, I cannot understand it.

The second problem is that I lose track of what I am doing by trying to go faster. Codex will work for 5 to 15 minutes without me. That gives me time to talk to another agent and make progress on another part of the project, or on a different project entirely. With just three agents working, my meager working memory struggles. I come back to a finished tab wondering what needs to be done next. When I wrote code in my previously single-threaded existence, I rarely had to think about what I was doing because the project and the code were in my head.

Solving these problems might be pointless if the models stop making mistakes. Anthropic, OpenAI and the AI fevertweeters are certainly implying that agents will be able to do all the work fairly soon. But Anthropic and OpenAI also run extremely difficult interviews. Why would they do that? Because deeply understanding what you work on matters. So I would rather do as they do than as they say, and focus on finding a way to program at the speed of agents without losing my understanding of the system.