A feature like this isn't useful, because knowing what connects to what, dependencies, etc. means nothing without business context. AI will never know the why behind the architecture; it will only take it at face value. I think technical design docs, which carry some of that context, plus reading the code are more than enough. This sits in an awkward middle ground: it lacks the context of a doc and is less detailed than the code.
To add to that, a lot of business context is stuck in people's heads. To reach the level of a human engineer, the coding agent would have to autonomously reach out and ask them directed questions.
> AI will never know the why behind the architecture…
That's true only if you don't provide that context. The answer is: provide that context. In my experience, LLM output is noticeably influenced and improved by the whys you supply.
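As a trivial, made-up illustration (the outbox pattern, names, and schema here are all hypothetical), even a short "why" comment next to the code gives the model something better than face value:

    import json
    import sqlite3

    # WHY: orders go into an outbox table instead of a direct call to the
    # billing service. Billing is owned by another team and can be down;
    # a separate worker drains the outbox with retries, so checkout never
    # blocks on billing availability.
    def place_order(conn: sqlite3.Connection, order: dict) -> None:
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("order.placed", json.dumps(order)),
        )
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE outbox (topic TEXT, payload TEXT)")
    place_order(conn, {"id": 1, "total": 42.0})

Without that comment, an agent reading this code would plausibly "simplify" it into a direct billing call and reintroduce exactly the coupling the design avoids.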
It often takes longer to explain the context to the model than to just write the code from the context I already understand, especially since code is more terse than natural language.
Definitely, iff you have to provide the context with every task. If agent memory worked better, and worked across your whole team, providing that context might get much cheaper.
Agree that AI can kinda infer business context sometimes; in my experience, though, it doesn't work that well.
A lot of the time, debugging isn't a logic problem but a state-exploration problem. Hence you need to add logging to see the actual inputs flowing through; just seeing the control flow isn't that useful. Maybe codemaps could simulate some inputs through a flow, which would be super cool, but that's probably quite hard to do.
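To make "see the inputs" concrete (the function and values below are invented for the example), a tiny decorator that records what a function actually received at runtime, which no static map of the call graph can tell you:

    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def log_inputs(fn):
        """Log the arguments a function actually receives at runtime."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            logging.info("%s called with args=%r kwargs=%r",
                         fn.__name__, args, kwargs)
            return fn(*args, **kwargs)
        return wrapper

    @log_inputs
    def apply_discount(price: float, rate: float) -> float:
        return price * (1 - rate)

    # Logs the real inputs, not just the fact that this path was taken.
    apply_discount(100.0, 0.15)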
> AI will never know the why behind the architecture, it will only take it at face value.
There is no reason to believe that AI won't, at some point in the future, know the business context for vibecoded apps, because the prompts themselves should contain that business context.