What This Video Covers
In this live build stream, James demonstrates how to iteratively improve an underperforming AI sales agent instead of scrapping it and starting over. You’ll see the complete workflow: identifying performance gaps, planning improvements using Claude Co-work, implementing changes with Claude Code, and testing in a production environment.
This is a master class in treating your AI agents like you’d treat a real sales team — investing in their skill development rather than replacement.
Key Takeaways
1. Upskill Instead of Replace
When your AI agent isn’t performing, your first instinct might be to delete it and start fresh. Don’t. Instead, treat it like a human employee: invest in training, skill development, and process improvements. Your agent learns from iterations and improves each time.
Why it matters: Agents that learn your business processes and patterns get smarter over time. Starting fresh means losing all that accumulated knowledge.
2. Lead Quality is the Foundation
A perfect email to a bad lead is worthless. A mediocre email to a great lead converts. Your sales agent’s most critical function is finding and qualifying good prospects. If the scouting function fails, the entire sales system breaks down.
Why it matters: Most people focus on outreach copy and sequences. The real competitive advantage is discovering the right prospects in the right places.
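The "great lead beats great copy" idea can be sketched as a simple scoring pass. Everything below is an illustrative assumption, not the video's actual logic: the criteria, weights, field names, and threshold are placeholders you would replace with your own ideal-customer profile.

```python
# Minimal lead-scoring sketch. Criteria, weights, and the 60-point
# threshold are illustrative assumptions -- tune them to your own ICP.

def score_lead(lead: dict) -> int:
    """Return a rough 0-100 fit score for a prospect."""
    score = 0
    if lead.get("industry") in {"saas", "ecommerce"}:  # assumed target industries
        score += 40
    if lead.get("employees", 0) >= 10:                 # large enough to buy
        score += 30
    if lead.get("active_channel"):                     # found where they hang out
        score += 30
    return score

def qualify(leads: list[dict], threshold: int = 60) -> list[dict]:
    """Keep only leads worth emailing; outreach to the rest is wasted effort."""
    return [lead for lead in leads if score_lead(lead) >= threshold]

leads = [
    {"name": "Acme", "industry": "saas", "employees": 25, "active_channel": "slack"},
    {"name": "Smol", "industry": "crafts", "employees": 2, "active_channel": None},
]
print([lead["name"] for lead in qualify(leads)])  # → ['Acme']
```

The point of the sketch is the ordering of concerns: qualification runs before any outreach copy is written, so a sourcing failure surfaces immediately instead of hiding behind open rates.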
3. Plan Before You Build
Use Claude Code’s Planning Mode to vet ideas and discuss scope before you start coding. This is like having a department head review proposals before execution. It saves credits, prevents over-engineering, and keeps you focused on MVP improvements.
Why it matters: Every execution costs credits. Every revised plan costs credits. Planning mode lets you test ideas and push back before committing to builds.
4. Make Your Agent Flexible, Not Hardcoded
Don’t hardcode search patterns for every business. Instead, ask your users where their ideal customers actually hang out. Build flexibility into your agent so it can adapt to different industries and customer profiles.
Why it matters: Scaling agents across businesses requires removing business-specific assumptions and replacing them with dynamic discovery.
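One way to picture "dynamic discovery" is a config object populated from onboarding questions rather than baked-in defaults. This is a hedged sketch; the field names and the onboarding-answers shape are hypothetical, not part of any real agent framework.

```python
# Sketch: replacing hardcoded prospect sources with user-supplied config.
# The ScoutingConfig fields and the answers dict are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ScoutingConfig:
    """Where this business's ideal customers actually hang out."""
    channels: list[str] = field(default_factory=list)  # e.g. subreddits, Slack groups
    keywords: list[str] = field(default_factory=list)  # phrases prospects actually use

def build_config(answers: dict) -> ScoutingConfig:
    """Turn onboarding answers into a scouting config -- no baked-in assumptions."""
    return ScoutingConfig(
        channels=answers.get("channels", []),
        keywords=answers.get("keywords", []),
    )

# A hardcoded agent would have shipped the same channels to every business.
cfg = build_config({
    "channels": ["r/indiehackers", "local chamber newsletter"],
    "keywords": ["struggling with outreach"],
})
print(cfg.channels)
```

The design choice worth noting: the agent's code never names an industry, so moving it to a new business means changing answers, not changing code.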
5. Context Layers Trump Course Materials
In 2026, the value isn’t in courses — it’s in context. Build wikis, documentation, configurations, and organizational memory that your agents can access. This is what makes AI departments actually work.
Why it matters: Agents are only as good as the information they can access. A rich context layer (wiki + documentation + processes) multiplies agent effectiveness.
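A context layer can be as simple as a folder of markdown files the agent reads at run time. The sketch below assumes a local wiki directory and a character budget; both are placeholders, and nothing here is a Claude or Obsidian API.

```python
# Sketch: assembling a context block from a local wiki folder so an agent
# can read business processes at run time. Paths and budget are assumptions.

from pathlib import Path

def load_context(wiki_dir: str, max_chars: int = 20_000) -> str:
    """Concatenate wiki markdown files into one context block, newest first."""
    docs = sorted(
        Path(wiki_dir).glob("**/*.md"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # recently edited processes win when the budget is tight
    )
    parts = [f"## {doc.stem}\n{doc.read_text(encoding='utf-8')}" for doc in docs]
    return "\n\n".join(parts)[:max_chars]  # stay within the agent's context budget
```

Because the layer is just files, every process you document immediately becomes something the agent can use, which is the "multiplier" effect the section describes.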
Step-by-Step: Upgrading Your AI Agent
Step 1: Identify Performance Gaps
Run your current AI agent in a test environment. Document what’s working and what’s not. Focus on the core bottleneck — in this case, lead quality, not outreach.
Use Claude Co-work (not Code yet) to analyze your agent without making changes. Get a handoff document outlining opportunities. Share this with your team or stakeholders for feedback.
Step 2: Plan the Upgrade in Co-work
Go to Claude Co-work with your agent folder. Ask it to review the performance gaps and suggest improvements. Push back on over-engineering — request MVP-only changes. Comment on the plan and negotiate scope.
Start with something like: “Here’s my current agent. What’s the minimum viable product improvement?” Use inline comments on proposals to refine scope. Focus on 1-2 critical fixes, not a complete rewrite.
Step 3: Create an Execution Plan in Claude Code
Switch to Claude Code and enable Planning Mode. Share your agent files and the handoff document. Ask for a detailed implementation plan with specific edits.
Let Claude generate a plan with file edits listed. Review the plan — check dependencies and potential issues. Accept “Allow Edits” only when you’re confident in scope.
Step 4: Verify Context Layer Access
Make sure your agent can access all necessary documentation. Build a wiki or knowledge base (for example, in Obsidian) with business processes. Ensure your agent configuration points to this context.
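A quick preflight check makes "make sure your agent can access the context" concrete. The config keys and paths below are hypothetical; the only idea being shown is verifying that every configured context path resolves before a run.

```python
# Sketch: verify the agent's configured context paths exist before running.
# The AGENT_CONFIG keys and paths are illustrative assumptions.

from pathlib import Path

AGENT_CONFIG = {
    "wiki": "context/wiki",
    "processes": "context/processes.md",
    "icp": "context/ideal-customer-profile.md",
}

def missing_context(config: dict) -> list[str]:
    """Return config entries whose paths don't exist on disk."""
    return [name for name, path in config.items() if not Path(path).exists()]

gaps = missing_context(AGENT_CONFIG)
if gaps:
    print(f"Agent is missing context: {gaps}")  # fix these before any live run
```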
Step 5: Test in Production (Small Scale First)
Run the upgraded agent in a real Claude Co-work session. Start with a subset of prospects or a limited scope. Monitor outputs and gather feedback. Iterate based on what you learn.
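The "subset of prospects" step can be sketched as a small, reproducible pilot sample plus an outcome log. The sample size, seed, and record fields are assumptions for illustration only.

```python
# Sketch: scope the first live run to a small, reproducible prospect sample
# and log outcomes for the next iteration. n, seed, and fields are assumptions.

import random

def pilot_batch(prospects: list[dict], n: int = 10, seed: int = 42) -> list[dict]:
    """Pick a small, reproducible sample for the first production run."""
    rng = random.Random(seed)  # fixed seed so reruns test the same leads
    return rng.sample(prospects, min(n, len(prospects)))

def record_outcome(log: list[dict], prospect: dict, outcome: str) -> None:
    """Capture what happened so the next iteration has data to learn from."""
    log.append({"prospect": prospect["name"], "outcome": outcome})

prospects = [{"name": f"lead-{i}"} for i in range(100)]
batch = pilot_batch(prospects)
print(len(batch))  # 10, not all 100
```

Keeping the seed fixed means a failed run and its fix are tested against the same prospects, which makes before/after comparisons honest.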
Building Agent-Powered Departments
James demonstrates a larger pattern here: building plug-and-play departments that can run autonomously. His AI Operating System includes four departments: Sales (prospecting, scoring, outreach), Marketing (content creation, audience building), Education (course creation, lesson planning), and Admin (operations, scheduling, coordination).
Each department is documented with full processes, tested with proven workflows, and sellable as a packaged product. Once you’ve solved a problem for yourself, package it. Your audience faces the same challenges.
Common Mistakes to Avoid
Perfectionism over iteration: Don’t wait for the perfect agent. Deploy MVP versions and improve iteratively.
Hardcoding for your business: Build flexibility so the agent works across industries and customer profiles.
Ignoring lead quality: Perfect outreach to bad leads is wasted effort. Fix sourcing first.
Over-planning: Planning mode is powerful, but don’t plan endlessly. Set MVP scope and execute.
Forgetting to document: Capture what you’re building. Documentation is your competitive advantage.
Next Steps
Audit your current agent: What's working? What's failing?
Use Co-work to plan: Get a handoff document without committing to builds.
Set your MVP scope: Build the minimum change that solves the core problem.
Create a context layer: Your agent gets smarter every time you use it.