
Why CTOs still need to understand git - even when AI writes the commits

Claude Code can split commits, write messages, and handle rebases automatically. But enterprise teams that skip the fundamentals are shipping problems to production. The skill gap isn't disappearing - it's just shifting from execution to oversight.


The shift from writing code to directing it

A developer asked Claude Code to split one commit into three atomic commits. It did it perfectly. Then the question hit: why bother learning git workflows if AI handles them?

The answer matters for enterprise teams adopting AI coding tools.

Anthropic's Claude Code automates git operations - commit splitting, rebases, history searches. It's the same tool 90% of Anthropic's engineers use for daily git tasks. Recent DeepLearning.AI courses teach multi-session workflows with GitHub integration. Real-world usage shows 80% faster coding, but production bugs still consume 20% of development time.

The catch: AI only splits commits when asked. It doesn't inherently know what makes a good commit. Someone needs to recognize the problem, know what atomic commits are, and validate the result.
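For reference, the split itself is a handful of ordinary git commands. Here is a minimal sketch, with hypothetical file names and commit messages, of what the AI is doing and what a reviewer needs to be able to verify:

```shell
# Undo the last commit but keep its changes, then re-commit
# them as separate atomic commits. Assumes a clean working
# tree; app.py and README.md are hypothetical file names.

git reset --soft HEAD~1      # rewind the commit, keep changes staged
git restore --staged .       # unstage everything

# Stage and commit each logical change on its own
git add app.py
git commit -m "fix: handle empty input in parser"

git add README.md
git commit -m "docs: document empty-input behavior"
```

When one file mixes two logical changes, `git add -p` stages individual hunks instead of whole files. The point stands either way: validating the result means knowing what each commit should contain.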

That's the pattern emerging across AI coding tools. The knowledge hasn't become obsolete - it's shifted from execution to oversight.

What this means in practice

Teams using AI agents well are encoding principles as automated skills - rules for atomic commits, message formats, validation checks. But creating those skills requires understanding the underlying concepts.

The alternative: developers who accept AI defaults without question. Large commits because no one asked for better. Code that works but can't be reviewed. Changes shipped to production without understanding what changed.

Enterprise architecture teams should note the parallel to calculators. We still teach multiplication tables - not to be faster than calculators, but to know when results look wrong, to estimate mentally, to understand what we're calculating.

Same applies to git workflows, code structure, and architecture decisions. AI handles the execution. Humans need to know what good looks like.

The production reality

Anthropic's observational data shows the risk: AI speeds tasks but reduces engagement. Developers stop thinking through problems. Debugging skills atrophy, especially for junior engineers learning from agentic tools rather than fundamentals.

Some companies are already shipping AI-generated code without review. The production bugs follow predictably.

The teams getting value from AI coding tools are those who can course-correct early, validate outputs, and know when to intervene. That requires the same knowledge that used to go into writing code manually - just applied differently.

For CTOs evaluating Claude Code, Copilot, or similar tools: the productivity gains are real. So is the need for teams who understand what they're asking for and can evaluate what they get back.

The skills haven't disappeared. They've just become oversight skills instead of execution skills.