A professional SEO audit for a 20-page site runs $1,500 to $3,000. It takes a week or two. And it usually arrives as a PDF full of recommendations that someone still has to translate into code.
I got 80% of the way there in two hours.
I built the Practical Systems website over a few intense weeks. Eight pages, a sandbox demo, an interactive AI readiness audit. It looked good. It worked. But I had no idea if Google could find it.
SEO was one of those tasks I kept pushing to "later." The site had a sitemap and a robots.txt, but beyond that I was guessing. Meta descriptions? Probably too long on some pages, too short on others. Structured data? I had some. Keywords? I'd never formally defined a single one.
So instead of hiring an SEO consultant or spending a weekend reading Moz guides, I did what we do at Practical Systems. I pointed an AI at the problem and saw what it could actually accomplish.
The Setup
Here's the context that matters.
The website is built with Next.js 14 using the App Router. It's deployed on Vercel. There are about 20 routes including a blog, an interactive AI readiness audit, a sandbox demo, resource pages, and the standard marketing pages. Every page exports its own metadata using the Next.js Metadata API.
I use Claude Code as my primary development tool. It has full filesystem access to the codebase, which means it can read every page file, check every metadata export, inspect the sitemap, and trace internal links across the entire site.
That last part is important. An SEO audit isn't useful if the auditor is guessing at your site structure. It needs to see the actual code.
The Prompt
I didn't just say "audit my SEO." That would have produced something generic and unhelpful.
Instead, I wrote a structured prompt that was roughly 2,000 words long. It included the full page inventory with current titles and descriptions. It specified the business context: who we are, who we sell to, what our conversion goal is. It defined eight specific audit categories and told the AI exactly what format to deliver the findings in.
The eight categories:
- Keyword strategy per page
- On-page metadata audit with exact code fixes
- Structured data (JSON-LD) recommendations
- Technical SEO checks
- Content depth evaluation
- Niche and vertical SEO opportunities
- Blog infrastructure audit
- Open Graph and social sharing metadata
For each category, I specified what "good" looked like. And I asked for the exact file path and code change for every recommendation. Not theory. Not "consider adding keywords." The actual TypeScript metadata export I could paste into the file.
The quality of the prompt determined the quality of the audit. A vague prompt would have produced a vague audit. By front-loading the business context, the technical architecture, and the output format, I got back something I could actually execute on.
What It Found
The audit came back organized into critical, important, and minor issues. More than 25 findings total across all categories. Here's what stood out.
The Critical Stuff
Missing canonical URLs on blog posts. Every blog post on the site was missing a canonical URL in its metadata. This is the kind of thing that doesn't break anything visible. But it tells Google you haven't specified the authoritative version of each page. For a site trying to build authority, that's a real problem.
The fix was a five-line addition to the generateMetadata function in the blog's dynamic route file.
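A minimal sketch of what that kind of fix looks like. This uses plain objects rather than the real `Metadata` type from `next` so it's easy to see the shape; `SITE_URL` and the route path are assumptions, not the actual Practical Systems code:

```typescript
// Hypothetical sketch of a canonical-URL fix in a blog dynamic route
// (e.g. app/blog/[slug]/page.tsx). In the real file this would be typed
// as `Metadata` from 'next'; here it's a plain object.
const SITE_URL = "https://example.com"; // assumption: your production origin

function generateMetadata({ params }: { params: { slug: string } }) {
  return {
    title: params.slug,
    alternates: {
      // The canonical URL tells Google which version of the page is authoritative.
      canonical: `${SITE_URL}/blog/${params.slug}`,
    },
  };
}
```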
Thin metadata on the Try page. The interactive demo page had a description that was 61 characters long. That's roughly one sentence. Google wants 120 to 155 characters. The AI flagged this as critical because the Try page is a key conversion point. A weak description means it performs poorly if it ever shows up in search results.
Homepage description over the limit. The homepage meta description was 208 characters. Google truncates at roughly 155. So the last 50 characters of carefully written copy were invisible in search results. I had no idea.
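Both of those description problems reduce to the same check, which is trivial to automate. A minimal sketch, with the 120-155 character band taken from the commonly cited guidance above:

```typescript
// Flag meta descriptions that fall outside the rough 120-155 character
// sweet spot before Google truncates them in search results.
const MIN_DESC = 120;
const MAX_DESC = 155;

function auditDescription(desc: string): "too short" | "too long" | "ok" {
  if (desc.length < MIN_DESC) return "too short";
  if (desc.length > MAX_DESC) return "too long";
  return "ok";
}
```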
The Page Titles Problem
The audit identified that almost every page title was too generic. "About." "FAQ." "Resources." "Blog."
These titles use the Next.js template system that appends "| Practical Systems," so they render as "About | Practical Systems" in search results. That's technically correct but strategically useless. Nobody is searching for "About."
The recommended fixes were specific. Instead of "About," use "About Wes Sander, AI Integration Consultant." Instead of "FAQ," use "AI Consulting FAQ: Pricing, Process, Timeline." Instead of "Resources," use "Free AI Implementation Guides & Tools."
Each one includes a target keyword while still reading naturally. This was probably the single highest-impact finding because it affects how every page appears in search results.
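The template mechanics are worth seeing concretely. A sketch of the before-and-after, using plain objects in place of the `Metadata` exports that would live in `layout.tsx` and each `page.tsx`:

```typescript
// The root layout defines the title template that every page title is
// slotted into. This is the standard Next.js Metadata API shape.
const rootMetadata = {
  title: {
    template: "%s | Practical Systems",
    default: "Practical Systems",
  },
};

const aboutTitleBefore = "About"; // generic, strategically useless
const aboutTitleAfter = "About Wes Sander, AI Integration Consultant"; // carries a keyword

// Simplified version of how the final <title> gets rendered.
function renderTitle(pageTitle: string): string {
  return rootMetadata.title.template.replace("%s", pageTitle);
}
```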
The Open Graph Gap
Eight of the 12 main pages were missing page-specific Open Graph tags. They were inheriting the homepage defaults, which meant every page looked identical when shared on LinkedIn or Slack.
For a company that relies on content distribution for lead generation, that's a meaningful gap. Every time someone shares the Use Cases page or the How We Work page, it shows the homepage image and description instead. That's lost opportunity on every share.
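The fix is a per-page `openGraph` block in each page's metadata export. A hedged sketch for a page like Use Cases, with the copy, URL, and image path all invented for illustration:

```typescript
// Hypothetical page-specific Open Graph export (field names follow the
// Next.js Metadata API; every value here is an illustrative assumption).
const useCasesMetadata = {
  title: "AI Automation Use Cases",
  description:
    "See how operations teams apply AI automation to intake, scheduling, and reporting.",
  openGraph: {
    title: "AI Automation Use Cases",
    description:
      "See how operations teams apply AI automation to intake, scheduling, and reporting.",
    url: "https://example.com/use-cases",
    // 1200x630 is the standard OG image size for link previews.
    images: [{ url: "/og/use-cases.png", width: 1200, height: 630 }],
  },
};
```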
Structured Data
The site already had Organization schema on the homepage, FAQPage schema on the FAQ, and BlogPosting schema on blog posts. Decent foundation. But the audit identified three additions worth making.
A Person schema on the About page, linking the founder to the organization. This helps Google understand that a real person with a real background is behind the company.
A ProfessionalService schema on the homepage with the full service catalog, including price ranges. This is the kind of structured data that could eventually populate rich results for queries like "AI consulting services."
A dateModified field in the BlogPosting schema, which was missing entirely. Google uses this to determine content freshness. Without it, posts look like they've never been updated.
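As a sketch, the `dateModified` addition is a one-field change to whatever builds the JSON-LD object. The function name and post shape here are assumptions; in a Next.js page the result would be serialized into a `<script type="application/ld+json">` tag:

```typescript
// Hypothetical builder for BlogPosting JSON-LD, with the previously
// missing dateModified field included.
function blogPostingJsonLd(post: { title: string; published: string; modified: string }) {
  return {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    headline: post.title,
    datePublished: post.published,
    dateModified: post.modified, // signals content freshness to Google
  };
}
```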
What Surprised Me
The keyword strategy was genuinely useful. I expected generic suggestions. Instead, the AI calibrated for a new domain with low authority.
It recommended long-tail phrases like "AI automation use cases for operations teams" and "AI consulting process and pricing" instead of broad terms like "AI consulting" that I'd never rank for. That's exactly the right strategic call for where we are.
The internal linking analysis was also sharp. It identified that the Pilot Readiness page had zero inbound internal links from any other page on the site. A page with no internal links is effectively invisible to search engines, no matter how good its content is. I hadn't noticed because the page works fine when you navigate to it directly.
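The inbound-link check itself is simple once you have a map of each route's outbound links. A sketch of that kind of analysis (the route names are illustrative, and this assumes you've already extracted outbound links from each page file):

```typescript
// Count inbound internal links per route from a route -> outbound-links map.
// A route with a count of zero is effectively invisible to crawlers.
function inboundLinkCounts(outbound: Record<string, string[]>): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const route of Object.keys(outbound)) counts[route] = 0;
  for (const links of Object.values(outbound)) {
    for (const target of links) {
      if (target in counts) counts[target] += 1;
    }
  }
  return counts;
}
```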
The technical audit was thorough but proportionate. It confirmed that fonts were properly configured with display: 'swap', that the RSS feed was functional, that image optimization was handled through Next.js, and that the 404 page existed with proper navigation. It didn't waste time on things that were already working.
Where I Pushed Back
Not everything the AI recommended was worth implementing.
It recommended building out dedicated landing pages for every industry vertical we could serve, complete with custom case studies and keyword targeting for each one. Then it partially walked it back, noting the site already had decent use case coverage. That kind of hedging tells me the AI wasn't confident in the recommendation to begin with. More importantly, we're a team of one with two live clients. Building ten speculative landing pages for industries we haven't served yet means spending a week on content instead of closing real deals. I'll build those pages when we have the case studies to back them up.
Some of the recommended title changes were too keyword-heavy. "AI Automation Use Cases for Operations Teams" is technically optimized but reads more like a search query than a page title a human would click on. I softened a few of these to balance SEO value with click-through appeal.
The Implementation Plan
Here's what I'm actually doing with the findings, in priority order.
Week 1 (quick wins, under 2 hours total): Fix the blog canonical URLs. Update the homepage description to fit under 155 characters. Rewrite the Try page metadata. Update the robots.txt to disallow /api/ routes.
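The robots.txt change in that list is a few lines in `app/robots.ts`. A sketch assuming the standard Next.js convention, with the sitemap URL as a placeholder (the real file would type its return as `MetadataRoute.Robots` from `next`):

```typescript
// Hypothetical app/robots.ts disallowing /api/ routes.
function robots() {
  return {
    rules: {
      userAgent: "*",
      allow: "/",
      disallow: "/api/", // keep API routes out of the index
    },
    sitemap: "https://example.com/sitemap.xml", // assumption: your sitemap URL
  };
}
```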
Week 2 (metadata overhaul): Update all page titles to include target keywords. Add page-specific Open Graph tags to every main page. This is the single highest-impact batch of changes.
Week 3 (structured data): Add the Person schema, ProfessionalService schema, and dateModified fields. These take longer to show results but compound over time.
Backlog: Comparison content pages. Pillar index pages for blog categories. Page-specific OG images. Related posts section for blog pages.
What This Actually Cost
Two hours total. One hour writing the prompt. One hour reviewing and prioritizing the findings.
That's not a knock on professional SEO consultants. There are things a human expert brings that AI doesn't. Competitive intelligence from working across dozens of clients. Judgment about which opportunities are worth the effort in your specific market. The ability to notice qualitative things about your content that don't show up in metadata.
But for a founder who needs to get from "no SEO strategy" to "solid SEO foundation" as fast as possible, this approach works. The AI handled the mechanical analysis. I applied the judgment about what to implement, what to modify, and what to skip.
The Pattern
This post is about SEO, but the pattern applies to almost any operational task where you need analysis before action.
Give the AI full context about your situation. Be specific about what you want back. Define what "good" looks like. Review the output with your own judgment. Execute on the parts that make sense.
We use this exact pattern for prospect research, deal qualification, content planning, and architecture decisions. The tool changes, the domain changes, but the workflow stays the same.
If you're running a growing company and you've been putting off the operational tasks that require expertise you don't have in-house, consider whether an AI with the right context could get you most of the way there in a fraction of the time.
That last 20% still requires human judgment. But the first 80%? That's where AI earns its keep.
What's Next
I'll be implementing all three weeks of changes over the coming days. In 60 days, I'll publish a follow-up with the actual ranking data. Did organic traffic go up? Did the metadata changes move the needle on click-through rates? Did the structured data additions produce any rich results?
Real numbers, no spin. Follow along if you want to see whether this approach actually works over time.
This is part of our Building in Public series. If you want to see where AI could deliver this kind of leverage in your own operations, take our AI readiness audit. It takes five minutes and you'll get a personalized report on where to start.
What's the operational task you keep pushing to "later" because you don't have the right expertise in-house?