Coding Is Solved. Context Engineering Is Your New Job


 

In this blog, I want to talk about something I have been experiencing first-hand over the last several months and hearing echoed in almost every conversation I have with developers, both inside and outside the company. AI can write code. That part is done. The question now is, what do we do about everything else?

Coding is solved

I use AI coding assistants every day: GitHub Copilot at work, and Claude Code and other tools for personal projects. They all produce working code that I would have spent real time writing myself. And when I talk to other developers, the story is the same. The code that comes out is not perfect, but it is good enough that the bottleneck has clearly shifted. We are no longer waiting on the model to get better at writing functions. It is already there.

But here is what I have also noticed. Writing code was never the hard part of delivery. It is maybe 25% of the work. The rest is understanding what to build, making architecture decisions, writing tests that actually cover the right things, deploying safely, and keeping documentation alive. And in all of those areas, AI is mostly disconnected. Your architecture docs sit in Confluence getting stale while the model hallucinates a database schema that contradicts your actual system. This should not surprise anyone who has been building software for a long time, but somehow it does :).

So the question that I keep coming back to is not "can AI code?" but rather "why does the same model give me brilliant output on one project and garbage on another?" And the answer, every single time, comes down to one thing.

Context Engineering is the real challenge

I started noticing a pattern. When I give the model my architecture doc, my domain rules, my coding conventions — the output is almost ready to commit. When I give it nothing, it writes generic tutorial code that does not fit the project at all. Same model. Same settings. Completely different result. The only variable is the context I provide.

This is what people are now calling Context Engineering. It is the practice of structuring and curating the information you feed to an AI so that it produces output that actually fits your team's codebase, your domain, and your way of working.

And here is the hard truth that I have learned from trying to shortcut this. No vendor does it for you. Your architecture is unique. Your domain knowledge is proprietary. Your coding standards are local decisions that your team made for specific reasons. Your workflow is not the same as any other team's workflow. I have tried using other people's instruction files and prompt setups. They do not help. Their context is not my context.

This is also why the developer community is so split on AI right now. On one end you have the vibe coders who ship impressive demos that fall apart in production. On the other end you have skeptics who see AI-generated bugs and decide the whole thing is not worth the risk. I think both are wrong. The vibe coders lack discipline. The skeptics lack structure. The developers in the middle, the ones who apply AI with curated context and human review at every stage, those are the ones actually shipping faster without creating new problems.

The other thing I have realised is that Context Engineering is not a solo activity. It is a team-level investment. You need your architecture documented where the model can read it. You need your requirements kept current after every release. You need your domain knowledge written down, not locked in someone's head. You need your workflow defined so that everyone on the team uses AI the same way. When all four of those things are maintained, AI output aligns with your codebase. When any one of them goes stale, the output drifts back to generic territory. The model did not get worse. Your context did.

There is also a practical sweet spot worth knowing about. By my rough arithmetic, a codebase in the range of 20k to 40k lines of code, with context docs totalling around 1k to 2k lines, fits comfortably within the context windows available today, which reach from hundreds of thousands of tokens into the millions depending on the model. That means for a well-scoped project with good documentation, you can feed the model nearly everything it needs to understand your system in a single session. That is a significant advantage and one more reason to invest in keeping your context docs concise and current. You could argue that the translation from lines of code to tokens is not straightforward, and you would be right; the context window UI in recent versions of VS Code can give you a sense of how it works out for your project.
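The back-of-the-envelope version of that sizing check can be sketched in a few lines. The tokens-per-line ratio and the reserved fraction for the prompt and reply are my assumptions, not measured values; real ratios vary by language and coding style.

```python
TOKENS_PER_LINE = 10  # assumed average; varies by language and style


def fits_in_window(code_lines: int, doc_lines: int, window_tokens: int,
                   reserve_ratio: float = 0.25) -> bool:
    """Rough check: does codebase + context docs fit in a model's window?

    Reserves a fraction of the window for the prompt and the reply.
    """
    needed = (code_lines + doc_lines) * TOKENS_PER_LINE
    return needed <= window_tokens * (1 - reserve_ratio)


# A 40k-line codebase with 2k lines of context docs against a 1M-token window:
print(fits_in_window(40_000, 2_000, 1_000_000))  # True under these assumptions
```

Under these assumptions a 40k-line project needs roughly 420k tokens, which is why the sweet spot lands where it does.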

And the best part is that it compounds. Every feature you ship with structured context makes the next feature faster. The architecture doc gets richer. The domain rules get sharper. The prompts get tighter. It is compound interest for engineering velocity. But the inverse is also true. If you wait for a turnkey solution, you will still be waiting while other teams are two cycles ahead.

Context Engineering across different system architectures

One thing I have noticed is that Context Engineering does not look the same for every type of system. The architecture you are working with fundamentally changes what context you need to provide and how easy it is to manage.

If you are working on a modular monolith with full stack code in a single repository, you are in the best position. The model can see your routes, your business logic, your templates, your database layer, all in one place. Context Engineering here is relatively straightforward. The main thing to be deliberate about is documenting the interfaces between your modules clearly. If you have stored procedures, for instance, write down how they connect to the application layer, what calls them, what data they expect, and what they return. The model will not infer those connections from code alone, especially when the logic crosses the boundary between your application and your database. But once those interfaces are documented, the model can reason about your entire system coherently because everything lives together.
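One lightweight way to make those cross-boundary interfaces machine-readable is to capture each stored procedure's contract as structured data checked into the repo. The procedure name, call sites, and fields below are illustrative assumptions for a hypothetical system, not a standard.

```python
from dataclasses import dataclass


@dataclass
class StoredProcedureDoc:
    """Contract for a stored procedure crossing the app/database boundary."""
    name: str
    called_by: list[str]      # application-layer call sites
    expects: dict[str, str]   # parameter name -> type
    returns: str              # shape of the result set


# Hypothetical example entry:
GET_OPEN_ORDERS = StoredProcedureDoc(
    name="usp_get_open_orders",
    called_by=["OrderRepository.fetch_open"],
    expects={"customer_id": "int"},
    returns="result set of (order_id int, total decimal)",
)
```

The exact format matters less than the habit: once the contract is written down next to the code, the model no longer has to guess what calls the procedure or what it returns.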

Micro-services are a different story. When your system is spread across multiple repositories, each service only sees its own code. The model has no visibility into the services your code depends on or the services that depend on yours. This is where Context Engineering becomes critical, and also harder. You need to provide clear documentation about inter-service connections: what APIs your service calls, what contracts it expects, what events it publishes or consumes, and how authentication flows between services. Without that, the model treats each service as if it exists in isolation and produces code that breaks at integration boundaries. I have also noticed that teams with very thin micro-services, services that do almost nothing on their own, struggle more with AI-assisted development. The model has so little code to work with in each repo that it lacks the context to make meaningful contributions. There is a practical minimum size below which AI assistance loses its leverage.

Three ways to start learning and applying Context Engineering

So here are three things I would recommend, based on what has worked for me and the teams I have spoken with.

First, write down what the model cannot find on its own. Every team has knowledge that lives in Slack threads, in senior engineers' heads, in tribal lore that "you just have to know." That is exactly the knowledge that makes AI output go from generic to useful. Start with an architecture doc, a domain doc, and a standards doc. Keep them in a docs folder in the repo. Keep them short. A page or two each is fine. And update them when they drift because stale context is worse than no context. It teaches the model the wrong patterns.
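A minimal sketch of that docs folder, as a scaffolding script. The file names and section headings are my suggestions, not a convention the post prescribes; the point is three short, separate files living in the repo.

```python
from pathlib import Path

# Illustrative names and stubs for the three context docs:
DOCS = {
    "architecture.md": "# Architecture\n\nServices, data stores, and how requests flow between them.\n",
    "domain.md": "# Domain\n\nBusiness rules and terminology the code must respect.\n",
    "standards.md": "# Standards\n\nCoding conventions and the reasons behind them.\n",
}


def scaffold(root: str = "docs") -> list[str]:
    """Create any missing context docs under root; never overwrite existing ones."""
    folder = Path(root)
    folder.mkdir(parents=True, exist_ok=True)
    created = []
    for name, stub in DOCS.items():
        path = folder / name
        if not path.exists():
            path.write_text(stub)
            created.append(name)
    return created
```

Running it a second time creates nothing, which matches the rule above: the docs get updated by hand when they drift, not regenerated.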

Second, build a repeatable workflow with human review gates. The failure mode of AI adoption is not that the model is bad. It is that nobody on the team agreed on how to use it. One developer prompts from scratch every time. Another copy-pastes from ChatGPT. A third refuses to use it at all. No consistency, no compounding. What has worked for me is a staged approach: generate a structured user story from context docs, review it, then run code and test generation in parallel from that approved story, review both, then generate automation tests, review those, and finally ship and update the context docs so the next feature benefits from everything you just learned. That last step, updating the docs after shipping, is the one everyone skips. It is also the one that makes the whole flywheel turn.

Third, treat your prompt templates like shared code. Stop writing prompts from scratch every time. Create reusable templates for the tasks you do repeatedly — story writing, code implementation, test generation, code review. Store them in the repo, version them, improve them when the output quality dips. When a new team member joins, they should not have to reinvent your prompts. They use the same templates, get the same quality, from day one. That is how you scale AI-assisted delivery beyond a few power users.
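As a sketch of what "templates as shared code" can look like: a versioned template string in the repo with named placeholders. The template text, placeholder names, and the `docs/standards.md` path are illustrative assumptions.

```python
from string import Template

# Versioned in the repo alongside the code it serves.
CODE_REVIEW_TEMPLATE = Template(
    "You are reviewing code for $project.\n"
    "Follow the conventions in docs/standards.md.\n"
    "Diff to review:\n$diff\n"
)


def render_review_prompt(project: str, diff: str) -> str:
    """Fill the shared code-review template for one review."""
    return CODE_REVIEW_TEMPLATE.substitute(project=project, diff=diff)
```

Because the template is a file under version control, improving it when output quality dips is an ordinary code change, reviewed like any other.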



In conclusion

Coding is solved. The teams still debating whether AI can write code are solving last year's problem. The real challenge, and the real opportunity, is Context Engineering: curating the architecture, domain, standards, and workflow knowledge that turns a generic model into something that actually fits your team.

No one sells this off the shelf. No plugin auto-generates it. It is built by the team, for the team, one feature cycle at a time. And it compounds.

I would strongly recommend starting this week. Write the docs, define the workflow, template the prompts. The teams that structure their AI use deliberately now will be unreachable in a year. The ones that wait will still be vibe coding demos.
