We sell AI automation. So it would be pretty embarrassing if we didn't use it ourselves.
This isn't a marketing post about capabilities. It's a transparent look at every AI system we actually use to run Practical Systems. The tools that work, the ones we're still improving, and the honest results so far.
If you're considering AI automation for your business, this is what it actually looks like in practice. Not slides and demos. Real systems, real workflows, real numbers.
The Full AI Stack
Here's everything running right now:
Sales and Prospecting:
- 5 Claude skills in the Claude Desktop app for daily sales work
- Autonomous agent fleet for lead processing and outreach
Content:
- 4-agent pipeline that takes topics from research to published blog posts
Operations:
- Mission Control dashboard that ties it all together
We'll walk through each one.
Claude Skills: The Daily Driver
Most of our AI-assisted sales work happens through Claude skills. These are custom instruction sets that turn Claude into a specialized tool for a specific task.
We run five skills daily:
Lead Prospecting
This skill finds qualified prospects that are both high-fit AND closeable within 90 days. The key insight: most prospecting tools focus on ideal customer profile (ICP) fit alone. Ours assesses fit, client readiness, and deal risk together.
The skill generates prospect profiles with what we call "primary constraint hypotheses." Instead of generic company descriptions, we get specific guesses about what's actually blocking their AI adoption. That gives us something real to validate in discovery.
Output is a single JSON file with all prospects, scored and ready for outreach.
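To make that concrete, here's a minimal Python sketch of what a scored prospect record could look like. The `Prospect` fields, the weights, and the `score` blend are illustrative only, not our production scoring:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Prospect:
    company: str
    fit: float                  # ICP fit, 0-1
    readiness: float            # client readiness, 0-1
    deal_risk: float            # 0-1, higher means riskier
    constraint_hypothesis: str  # primary constraint hypothesis

def score(p: Prospect) -> float:
    # Illustrative weighted blend: fit and readiness add, risk subtracts.
    return round(0.5 * p.fit + 0.3 * p.readiness + 0.2 * (1 - p.deal_risk), 3)

prospects = [
    Prospect("Acme Logistics", 0.9, 0.7, 0.2,
             "Manual dispatch process blocks AI adoption"),
]
# One JSON document with every prospect, scored and ready for outreach.
report = [{**asdict(p), "score": score(p)} for p in prospects]
print(json.dumps(report, indent=2))
```

The point of bundling the constraint hypothesis with the score is that outreach can lead with the specific blocker rather than a generic pitch.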
Contact Finder
Once we have target companies, this skill enriches them with decision-maker contacts. It accepts a company domain (preferred) or company name and returns names, titles, emails, LinkedIn URLs, and confidence scores.
The skill prioritizes operations and sales leaders because that's our ICP. No point having great company data if you're reaching out to the wrong people.
Client Discovery
This is our discovery session framework. When we're on calls with potential clients, this skill helps structure the conversation, document requirements, identify AI opportunities, and assess technical readiness.
The output is a professional discovery report with prioritized use cases and implementation considerations. It's the artifact we share with prospects after the call.
Deal Qualification
After discovery, this skill runs a systematic go/no-go analysis. It stress-tests champions, calculates loss-framed ROI, and determines whether to proceed, wait, or walk away.
The reality is that most consultants are bad at saying no to deals. We built this skill specifically to prevent wasted time on unqualified opportunities. If the deal doesn't pass qualification, we don't chase it.
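For a sense of the shape of a go/no-go rule, here's a toy Python version. The inputs (`champion_strength`, `roi_multiple`, `risk_flags`) and every threshold are invented for illustration, not our actual calibration:

```python
def qualify(champion_strength: float, roi_multiple: float, risk_flags: int) -> str:
    """Toy go/no-go rule. Inputs and thresholds are illustrative only."""
    if risk_flags >= 3 or champion_strength < 0.3:
        return "walk"    # weak champion or too many red flags
    if roi_multiple < 2.0 or champion_strength < 0.6:
        return "wait"    # promising, but not ready to pursue yet
    return "proceed"
```

In this sketch, a deal with a strong champion, a 3x loss-framed ROI, and no risk flags comes back `proceed`; weaken the champion and the same deal drops to `wait`. The value is in forcing a third option between chase and ignore.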
Solution Architecture
For deals that pass qualification, this skill converts opportunities into detailed solution blueprints. It defines scope, assumptions, constraints, MVP vs Phase 2 breakdown, data requirements, and dependencies.
The explicit in/out decisions prevent scope creep and underbidding. By the time we write a proposal, we know exactly what we're committing to.
The Agent Fleet: Autonomous Processing
Beyond the Claude skills we use interactively, we have an autonomous agent fleet that runs continuously in the background.
Sales Agents
Five agents handle different parts of the sales pipeline:
Prospector Agent: Finds qualified prospects on an hourly schedule. Feeds into the hygiene agent.
Hygiene Agent: Scores prospects and assigns tiers based on fit, readiness, and deal risk. Also runs hourly.
Researcher Agent: Does deep constraint research and value quantification for high-priority prospects. Runs on-demand when prospects hit certain thresholds.
Outreach Agent: Executes email sequences continuously. Handles personalization at scale while keeping a human in the loop for high-stakes sends.
Orchestrator: Coordinates the whole fleet, monitors health, and handles exceptions.
All agents emit events and heartbeats that show up in Mission Control. We can see exactly what's happening, when, and what decisions are being made.
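An event in this style can be sketched in a few lines of Python. The field names and agent names here are illustrative, not the real Mission Control schema:

```python
import json
import time

def emit(agent: str, event_type: str, payload: dict) -> str:
    """Serialize one agent event as a JSON line (schema is illustrative)."""
    record = {
        "ts": time.time(),    # emission timestamp
        "agent": agent,       # e.g. "prospector", "hygiene"
        "type": event_type,   # e.g. "heartbeat", "prospect_scored"
        "payload": payload,
    }
    return json.dumps(record)

line = emit("hygiene", "heartbeat", {"queue_depth": 12})
```

Because every agent emits the same envelope, the dashboard only has to understand one format regardless of which agent produced the event.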
How the Fleet Actually Works
Here's the honest version: it took months to get this right.
The first version had agents stepping on each other. The hygiene agent would re-score prospects the researcher had just enriched, creating circular updates. The outreach agent would send duplicates because it wasn't properly tracking state.
Now the agents follow strict protocols. Each one knows exactly what triggers it, what data it owns, and what events it emits. Human approval is required before any A1-tier email sends, because those are our highest-value prospects.
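The duplicate-send and approval rules reduce to a simple gate. This is a simplified Python sketch assuming an in-memory `sent` set; in a real fleet the state would live in a database:

```python
# (prospect_id, sequence_step) pairs that have already been sent.
sent: set[tuple[str, str]] = set()

def should_send(prospect_id: str, step: str, tier: str, human_approved: bool) -> bool:
    """Gate one email send: dedupe first, then check the approval rule."""
    key = (prospect_id, step)
    if key in sent:
        return False    # state tracking prevents duplicate sends
    if tier == "A1" and not human_approved:
        return False    # highest-value prospects need human sign-off
    sent.add(key)
    return True
```

The ordering matters: the dedupe check runs before anything else, so even an approved send can't go out twice for the same prospect and step.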
The result: we process roughly 10x the prospect volume we could handle manually, with better consistency on scoring and research.
Content Pipeline: This Blog Post
The post you're reading was generated by our autonomous content pipeline.
Here's how it works:
Research Agent: Gathers competitive intelligence, finds stats to cite, identifies content gaps. For this post, the "research" was our own systems documentation.
Outline Agent: Creates a detailed structure with SEO optimization, section briefs, and word count targets.
Draft Agent: Writes the full first draft based on the outline and research. Marks places where human input is needed.
Publish Agent: Formats the final content as MDX, generates header images, and deploys to the website.
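The four stages above amount to a linear pipeline: each agent takes the accumulating post and adds its piece. Here's a minimal Python sketch of that shape, with stub functions standing in for the real agents (the dictionary keys and stub outputs are invented for illustration):

```python
def research(post: dict) -> dict:
    post["sources"] = ["competitive intel", "stats to cite", "content gaps"]
    return post

def outline(post: dict) -> dict:
    post["outline"] = {"sections": ["intro", "body", "close"], "target_words": 2000}
    return post

def draft(post: dict) -> dict:
    post["draft"] = "..."                        # full first draft from the outline
    post["needs_human"] = ["add real examples"]  # flagged for review
    return post

def publish(post: dict) -> dict:
    post["format"] = "mdx"                       # formatted, image generated, deployed
    post["status"] = "published"
    return post

def run_pipeline(topic: str) -> dict:
    post = {"topic": topic}
    # In practice, human review sits between draft and publish.
    for stage in (research, outline, draft, publish):
        post = stage(post)
    return post
```

The sketch runs straight through; the real pipeline pauses after drafting for the human review pass described below.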
The pipeline is designed for 30-45 minutes of human review per post. We add real examples, sharpen the takes, and approve the final version.
For a "Building in Public" post like this one, I wrote more manually because the source material is internal. For posts based on external research, the agents do more heavy lifting.
The Numbers
We publish 2 posts per month. Each post is 1,500-2,500 words.
Before the pipeline: ~6 hours per post including research, writing, editing, and formatting.
After the pipeline: ~2 hours per post. Research and drafting happen automatically. Human time goes to review and enhancement.
That's not magic. It's just removing the parts that don't require human judgment.
Mission Control: The Dashboard
Everything comes together in Mission Control, our internal dashboard that replaced three separate Streamlit apps.
Pipeline View: Shows all prospects, their status, scores, and next actions. Filters by tier, status, and assignment.
Agent Monitoring: Real-time view of what each agent is doing. Event logs, heartbeats, and error tracking.
Content Dashboard: Pipeline status for all blog posts, from planned through published.
CRM-Aware Chat: An AI assistant that knows our full prospect database and can answer questions about specific accounts.
Mission Control runs on localhost:3001. The backend API and WebSocket server handle real-time updates so we see changes as they happen.
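Conceptually, the real-time updates are just typed messages routed to the right view. A minimal Python sketch of that routing, assuming a hypothetical message shape with `view` and `data` fields:

```python
import json

# Illustrative: each dashboard view accumulates the messages routed to it.
VIEWS: dict[str, list] = {"pipeline": [], "agents": [], "content": []}

def handle_message(raw: str) -> None:
    """Route one WebSocket message to its dashboard view (hypothetical schema)."""
    msg = json.loads(raw)
    view = msg.get("view", "agents")  # fall back to the agent log
    VIEWS.setdefault(view, []).append(msg["data"])

handle_message('{"view": "pipeline", "data": {"prospect": "Acme", "status": "scored"}}')
```

With a scheme like this, adding a new dashboard panel is just a new key, not a new transport.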
What Actually Works
Let's be specific about what's delivering value:
Claude skills for sales: Save 2-3 hours per prospect in research and documentation. The qualification skill in particular has improved our close rate because we're only pursuing deals that make sense.
Content pipeline: Cut writing time by 65%. More importantly, we actually publish consistently now. Before the pipeline, posting was sporadic.
Agent fleet: 10x throughput on prospecting and outreach. Better data quality because machines don't get tired or skip steps.
Mission Control: Single source of truth. No more context-switching between apps.
What We're Still Figuring Out
This isn't a victory lap. Some things still need work:
Qualification calibration: The deal-qualification skill sometimes flags good opportunities as risky. We're tuning the thresholds based on actual outcomes.
Agent coordination: The orchestrator sometimes makes conservative choices that slow things down. Better to be safe than to have data conflicts, but there's room for improvement.
Content voice consistency: The draft agent captures our voice about 80% of the time. The other 20% needs heavier editing.
Human handoffs: Some transitions from autonomous to human-in-the-loop are clunky. We're adding better notification and context passing.
The Meta Point
Running Practical Systems with AI automation isn't just dogfooding. It's how we learn what actually works.
Every system we build for clients starts with something we've already run ourselves. We know where the edge cases are because we've hit them. We know what adoption looks like because we've done the adoption.
When a prospect asks "does this actually work?", we can show them. Not slides. The actual systems. That's why we publish posts like this one.
Try It Yourself
If you're curious about building similar systems for your business, start small. Pick one workflow that's clearly repetitive and well-documented. Build an agent for that. Learn from what breaks. Then expand.
Our AI audit can help identify where AI automation would deliver the most value in your specific situation. Or just book a call and we'll walk through what we've built and how it might apply to you.
We're not precious about our systems. If something we've built can help you move faster, that's a win for everyone.