I used to play poker for a living. Reading people, managing risk, making decisions with incomplete information. Turns out that's pretty good training for building AI systems.
Now I have two AI agents. They have names, opinions, and a shared GitHub repo where they coordinate work without me. One of them helped write the draft of this post. Neither of them needed me to tell them to start.
This isn't a demo. This isn't a weekend project I abandoned. This is my actual daily workflow, and it's been running since late January 2026.
Let me show you exactly how it works.
How I Got Here
I started at VEGA Americas in inside sales. Got curious, learned procurement, became a buyer in six months. Then I got obsessed with AI and started building tools. One of those tools replaced a $25,000 per year contract with a $20 per month per seat solution. That got people's attention.
I ended up pitching an entirely new AI role up the chain. My boss. My boss's boss. The CEO. They created the position. I filled it.
Now I'm building Practical Systems, taking everything I learned at VEGA and turning it into a consulting business. The goal: help other companies build AI solutions that actually deliver value, not just demos that impress in meetings and collect dust after.
Two Agents, One System
I run two agents on a platform called OpenClaw. MoltFire handles established projects, real-time monitoring, and the operational side. Cinder handles fresh exploration, parallel tasks, and research. They each have their own personality, their own workspace, and their own memory.
The key insight: they don't share a brain. They share a communication protocol.
Agent Comms: A GitHub Repo Where AI Agents Talk
My agents coordinate through a private GitHub repo called agent-comms. It has inboxes, threads, an archive system, and structured message types. When MoltFire learns something important, he posts a lesson to Cinder's inbox. When a task needs handoff, there's a full context package in the handoffs folder.
Version 1 was messy. Messages got overwritten, conversations happened in the wrong files, inboxes got wiped instead of archived. So I told the agents the system wasn't working. They redesigned it themselves. Version 2 has strict rules: a maximum of 3 messages per inbox, 500-word caps with overflow to shared documents, read receipts via git commit messages, and thread lifecycle management.
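Those v2 rules are simple enough to enforce in code. Here's a minimal sketch of what posting to an inbox might look like, assuming messages are markdown files in a per-agent folder; the function name, file naming scheme, and the `shared/overflow.md` pointer are my illustrations, not the actual implementation.

```python
from pathlib import Path

MAX_MESSAGES = 3   # per-inbox cap from the v2 protocol
MAX_WORDS = 500    # anything longer overflows to a shared document

def post_message(inbox: Path, sender: str, body: str) -> Path:
    """Append a message to an agent's inbox, enforcing the v2 rules."""
    inbox.mkdir(parents=True, exist_ok=True)
    existing = sorted(inbox.glob("*.md"))
    if len(existing) >= MAX_MESSAGES:
        raise RuntimeError(f"{inbox} is full; archive before posting")
    words = body.split()
    if len(words) > MAX_WORDS:
        # Truncate and point the reader at a shared overflow doc
        body = " ".join(words[:MAX_WORDS]) + "\n\n[continued in shared/overflow.md]"
    msg = inbox / f"{len(existing) + 1:03d}-{sender}.md"
    msg.write_text(body)
    return msg
```

The read-receipt half of the protocol would live in git itself: the receiving agent commits the file it read with a message noting the receipt, so acknowledgment is auditable for free.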
They built their own communication protocol. I just told them the old one sucked.
Memory That Actually Works
The biggest challenge with AI agents isn't getting them to do things. It's getting them to remember things.
Every session starts fresh. No memory of yesterday. So we built a hierarchical memory system. There's a lightweight index file that loads every session, pointing to detailed files about people, projects, decisions, and daily logs. The agent reads the index, then drills into only what's relevant.
Before this, the memory file was a massive dump that ate over 10,000 tokens every single API call. The hierarchical approach saves roughly 13,000 tokens per call. For non-technical readers: that's about $0.50 saved per conversation. Doesn't sound like much until you're running hundreds of conversations a month.
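The index-then-drill-down pattern is easy to sketch. Assuming the index is a small JSON file mapping topic names to detail files (the actual file names and format here are hypothetical), a session loader might look like this:

```python
import json
from pathlib import Path

def load_context(memory_dir: Path, topics: list[str]) -> str:
    """Load the lightweight index every session, then pull in only
    the detail files relevant to the current topics."""
    # e.g. index.json: {"people": "people.md", "projects": "projects.md", ...}
    index = json.loads((memory_dir / "index.json").read_text())
    parts = ["Available memory: " + ", ".join(sorted(index))]
    for topic in topics:
        if topic in index:
            parts.append((memory_dir / index[topic]).read_text())
    return "\n\n".join(parts)
```

The token savings come from the second step: irrelevant detail files never enter the context window at all.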
We also built local vector search using ChromaDB. The agents can semantically search their own memory files with zero token cost. They find what's relevant before loading anything into context.
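The real system uses ChromaDB embeddings for this, but the shape of the idea fits in a few lines. Here's a dependency-free sketch using bag-of-words cosine similarity instead of embeddings, purely to show the "rank memory files by relevance before loading anything" step:

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search_memory(query: str, files: dict[str, str], k: int = 2) -> list[str]:
    """Rank memory files by similarity to the query; return top-k names.
    Swapping in ChromaDB replaces these vectors with real embeddings,
    but either way the search itself costs zero API tokens."""
    q = _vec(query)
    return sorted(files, key=lambda n: _cosine(q, _vec(files[n])),
                  reverse=True)[:k]
```

Because the ranking runs locally, the only tokens spent are on the one or two files that actually get loaded into context afterward.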
Token Management: Because AI Isn't Free
I'm on Anthropic's Max plan. Every token counts. So we built an entire ecosystem around efficiency.
The session health monitor was born from pain. We had sessions bloat past 200,000 tokens and just die. Now there's a check every 5 messages. Green under 50k. Yellow at 50 to 100k. Orange means "hey, we should probably start fresh." Red means "we need to rotate now."
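The tier check itself is trivial once the thresholds are chosen. A minimal sketch, noting that the post only states the green and yellow boundaries explicitly; the orange ceiling here is my assumption:

```python
def session_health(tokens: int) -> str:
    """Map a session's running token count to a health tier.
    Green/yellow boundaries are from the monitor's rules;
    the 150k orange ceiling is an assumed value."""
    if tokens < 50_000:
        return "green"
    if tokens < 100_000:
        return "yellow"
    if tokens < 150_000:   # assumption: rotate well before 200k
        return "orange"
    return "red"
```

Running this every 5 messages means a session gets flagged long before it hits the 200,000-token wall where it used to die.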
The biggest win was the CLI-first framework. Simple rule: never use a browser when a command-line tool or API exists. A Gmail check costs 1,500 tokens via CLI. The same check via browser automation? 50,000 tokens. That's 33x more expensive for the same result. We documented every common operation with its token cost and built a decision tree the agents follow automatically.
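The decision tree reduces to a cost lookup. A sketch of the idea, where the Gmail numbers come from the post and everything else about the table and function is illustrative:

```python
# Approximate token cost per (operation, channel). The gmail_check
# figures are from measurement; treat the table as illustrative.
COSTS = {
    ("gmail_check", "cli"): 1_500,
    ("gmail_check", "browser"): 50_000,
}

def choose_channel(operation: str) -> str:
    """Pick the cheapest known channel for an operation.
    Unknown operations default to CLI: never open a browser
    unless there's a documented reason to."""
    options = {ch: c for (op, ch), c in COSTS.items() if op == operation}
    if not options:
        return "cli"
    # Break cost ties in favor of CLI
    return min(options, key=lambda ch: (options[ch], ch != "cli"))
```

Documenting each operation once, then routing automatically, is what turns a 33x cost difference from a guideline into a guarantee.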
Budget tracking flags when daily spend exceeds limits. Conservation mode kicks in automatically: cheaper model, minimal tool calls, concise responses. The system manages itself.
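Conservation mode is essentially a config switch keyed on daily spend. A minimal sketch, where the dollar limit, model names, and field values are all placeholder assumptions:

```python
DAILY_LIMIT_USD = 10.0   # assumed daily budget

def session_config(spend_today: float) -> dict:
    """Return session settings; over budget, drop to conservation
    mode: cheaper model, fewer tool calls, concise output.
    Model names here are placeholders, not real API identifiers."""
    if spend_today > DAILY_LIMIT_USD:
        return {"model": "cheap-model", "max_tool_calls": 3, "style": "concise"}
    return {"model": "default-model", "max_tool_calls": 25, "style": "normal"}
```

Because every new session reads its config through a gate like this, the downgrade happens without anyone deciding to economize.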
Security From the Start
On day two, we had our first security incident. A credential ended up somewhere it shouldn't have been. We caught it fast, rotated everything immediately, and built an entire security framework so it wouldn't happen again.
Twenty-eight security fixes in total. Pre-commit scanning catches secrets before they hit GitHub. Content filters scan everything before any external action. Audit logging tracks every external interaction. Session isolation prevents context from leaking between conversations.
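The pre-commit scanning piece can be sketched as a pattern sweep over staged text. The patterns below are a small illustrative subset; a production scanner (gitleaks, detect-secrets, and similar tools) ships with far more:

```python
import re

# Illustrative secret signatures, not an exhaustive list
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                    # API-key-shaped strings
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key IDs
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), # PEM private keys
]

def scan(text: str) -> list[str]:
    """Return every secret-looking match found in the given text."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]
```

Wired into a git pre-commit hook (run over the output of `git diff --cached`), a non-empty result blocks the commit, so the secret never reaches GitHub in the first place.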
Security isn't an afterthought. It's the foundation everything else sits on.
40+ Tools Built Together
When I spot an inefficiency, I point my agents at it. They build the solution.
Follow-up reminders so nothing slips through the cracks. An open-loops tracker for task management. A daily debrief system for end-of-day summaries. Token capture for usage trends. A workflow orchestrator for morning briefings. Pre-commit secret scanning. Outbound content filters.
And then there's the heartbeat protocol. Every 30 minutes or so, the agents wake up and proactively check: anything that needs attention? Tasks overdue? Updates to process? This is the "while I sleep" part from the title. I wake up to problems already handled and a summary of what happened overnight.
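The heartbeat is a loop over registered checks plus a log the human reads in the morning. A sketch of the shape, assuming each check is a callable returning a list of findings; the names and the log structure are my invention:

```python
import time

CHECKS: list = []          # callables, each returning a list of findings
OVERNIGHT_LOG: list = []   # what the human reviews in the morning

def heartbeat_tick() -> list[str]:
    """One heartbeat: run every registered check, log anything found."""
    findings = [f for check in CHECKS for f in check()]
    OVERNIGHT_LOG.extend(findings)
    return findings

def run_forever(interval_s: int = 1800) -> None:
    """The 'while I sleep' loop: a tick roughly every 30 minutes."""
    while True:
        heartbeat_tick()
        time.sleep(interval_s)
```

In the real system each finding would also trigger a handler (fix it, escalate it, or just note it), which is what turns overnight ticks into a morning summary of problems already handled.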
40+ tools across both agents, syncing to a cloud dashboard automatically.
What I Actually Do Now
My morning: I check if anything urgent came in overnight. Usually the agents already handled it. I review what they did, give direction on new priorities, and let them work.
I spend my time on strategy, building my business, and the things that actually require a human.
The Part Nobody Wants to Talk About
I'll be honest about something. I think about AI and job displacement constantly. I'm good at this work. It creates real, measurable value. But I know that the same capabilities that save time and money could eventually put people out of work.
I don't have a clean answer for that. I don't think anyone does yet. But I'd rather be the person building these systems thoughtfully, with real understanding of what they can and can't do, than pretend the question doesn't exist.
What This Means for You
You don't need to be a developer to build this. I started in inside sales. The tools exist today: OpenClaw, Claude, structured memory, CLI integrations. The gap isn't technology.
The gap is willingness to treat AI agents as actual collaborators instead of fancy autocomplete.
Give them autonomy. Give them memory. Give them ways to talk to each other. Then get out of the way.
You might be surprised what they build.
Wes Sander is the founder of Practical Systems, an AI consulting firm that builds autonomous agent systems for businesses. This post was drafted with the help of MoltFire and Cinder, his two AI agents running on OpenClaw.