Overview
IndyDevDan introduces a mental framework called “thread-based engineering” for measuring progress with AI coding agents. A thread consists of three parts: your initial prompt or plan, the agent’s tool calls (the middle work), and your final review and validation. The key insight is that tool calls roughly equal impact, letting you quantify the value your agents create over time.
Key Takeaways
- Track your progress systematically - Without measuring your agent interactions through threads, you cannot improve your agentic engineering skills over time
- Focus on the human bookends - Your role has shifted from doing all the work to being strategic at the beginning (prompting) and critical at the end (reviewing)
- Use tool calls as your impact metric - The number and quality of tool calls your agent makes roughly correlates to the value being created in each thread
- Treat agentic engineering as a distinct skill - This requires new frameworks and measurement approaches, separate from traditional programming abilities
- Embrace the workflow shift - Pre-2023 you were the tool calls; now you orchestrate agents that execute the tool calls while you focus on planning and validation
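The thread model above can be sketched as a small data structure. This is an illustrative assumption of how you might track threads yourself; the `Thread` class, its field names, and the impact heuristic are hypothetical, not IndyDevDan's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    prompt: str                # part 1: your initial prompt/plan
    tool_calls: list[str] = field(default_factory=list)  # part 2: agent's middle work
    reviewed: bool = False     # part 3: your final review/validation

    @property
    def impact(self) -> int:
        # Heuristic from the framework: tool calls roughly equal impact.
        # Only count threads you have actually reviewed and accepted.
        return len(self.tool_calls) if self.reviewed else 0

# Hypothetical log of two threads in a workday
threads = [
    Thread("add pagination to the API",
           ["read_file", "edit_file", "edit_file", "run_tests"], reviewed=True),
    Thread("rename config keys", ["edit_file"], reviewed=False),
]
total_impact = sum(t.impact for t in threads)
print(total_impact)  # 4 (the unreviewed thread counts for nothing yet)
```

Keeping even a rough log like this is what makes the "measure to improve" takeaway actionable: you can compare total impact per day or per prompt style over time.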
Topics Covered
- 0:00 - The Agent Skills Gap: Discussion of how even top engineers like Andrej Karpathy feel behind in the age of AI agents, highlighting the widening gap between engineers using agents effectively and those struggling to keep up
- 2:30 - Thread-Based Engineering Framework: Introduction of the core framework with three mandatory components: prompt/plan (you), tool calls (agent work), and review/validation (you)
- 5:00 - Practical Thread Example: Live demonstration of a thread in action, showing how a coding prompt triggers agent tool calls and ends with human review of the completed work
- 7:30 - Measuring Agent Impact: Explanation of how tool calls serve as a metric for measuring the value and impact your agents are creating in your development workflow