The Problem: Content at Scale
A land surveying company approached us with a straightforward request: they needed 100+ location-specific landing pages. Each page required unique content targeting a different service area (think "Land Surveying Services in [City], [State]"), with real value, not thin AI slop.
The constraints were real: one week, zero budget for copywriters, and the client needed SEO-quality content that actually ranked. This isn't a case study about throwing GPT-4 at a problem. It's about building an intelligent system that understood their business, followed editorial standards, and scaled.
Our Architecture: Agents Over Prompts
We didn't use a prompt chain. We built an agent-driven system using Forge Agent that could reason about content requirements, validate its own output, and iterate.
Why Agents, Not Prompts?
Simple prompts fail at scale because they're stateless. You write a perfect prompt for one page, apply it to 100 pages, and suddenly page 47 has hallucinated a non-existent surveying technique. Agents maintain context, can verify facts against a knowledge base, and adjust their approach based on what they learn.
Our agent had three core capabilities:
- Content Research: Query their existing service documentation, past project data, and industry standards
- Generation with Constraints: Write 800- to 1,200-word pages following their brand voice and SEO guidelines
- Validation: Check for factual accuracy, readability scores, and keyword density before passing to humans
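The three capabilities above compose into a single per-location pipeline. This is a minimal sketch of that flow with stub implementations; the function names (`research_location`, `generate_page`, `validate_page`) are illustrative stand-ins, not Forge Agent's actual API.

```python
def research_location(location: str) -> dict:
    """Stub: pull service docs, past project data, and standards for one area."""
    return {"location": location, "notes": ["zoning data", "past projects"]}

def generate_page(location: str, context: dict) -> str:
    """Stub: draft a page from the retrieved context only."""
    return f"Land Surveying Services in {location}: " + ", ".join(context["notes"])

def validate_page(draft: str) -> dict:
    """Stub: check the draft before it reaches a human reviewer."""
    return {"passed": len(draft) > 0}

def produce_page(location: str) -> dict:
    context = research_location(location)     # 1. content research
    draft = generate_page(location, context)  # 2. generation with constraints
    report = validate_page(draft)             # 3. self-validation
    return {"draft": draft, "validation": report}
```

The key design point is that generation only ever sees what research retrieved, which is what keeps output grounded in the client's actual material.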
The Knowledge Pipeline
We fed the agent:
- Their existing website content (to capture voice and terminology)
- Service descriptions and pricing tiers
- Local area data (demographics, property types, regulatory requirements)
- A curated list of 50 high-value keywords per location
This wasn't raw data dumping. We structured it as a searchable knowledge graph so the agent could retrieve relevant context for each location without token bloat.
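The real system used a searchable knowledge graph; this toy flat-index version (with made-up example data) shows the core idea: only the facts tagged to one location enter the prompt, so token usage stays flat as locations are added.

```python
# Toy per-location knowledge index; entries are illustrative examples.
KNOWLEDGE = {
    "Denver, CO": {
        "property_types": ["residential infill", "mountain parcels"],
        "regulations": ["Colorado PLS licensing rules"],
        "keywords": ["boundary survey denver", "alta survey colorado"],
    },
    "Austin, TX": {
        "property_types": ["suburban subdivisions"],
        "regulations": ["TBPELS survey standards"],
        "keywords": ["land survey austin"],
    },
}

def retrieve_context(location: str, fields: list[str]) -> dict:
    """Return only the requested fact types for one location."""
    node = KNOWLEDGE.get(location, {})
    return {f: node[f] for f in fields if f in node}
```

Scoping retrieval by field as well as by location matters: the regulatory-info section doesn't need keyword lists in its context window, and vice versa.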
The Generation Process: Structured Output, Real Iteration
Here's where most teams fail: they generate content once and ship it. We built a three-pass system.
Pass 1: Content Generation
The agent generated a complete page outline with sections for:
- Local market context (why surveying matters in that specific area)
- Service descriptions tailored to local property types
- Case studies or examples from nearby projects
- Local regulatory information
- CTAs relevant to their service area
Output format: structured JSON with section headers, body copy, and metadata (estimated reading time, keyword coverage, etc.).
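One possible shape for that structured output, with illustrative field names (not the exact schema we shipped):

```python
import json

# Example of a generated page as structured JSON rather than free text.
page = {
    "location": "Boulder, CO",
    "sections": [
        {"header": "Why Surveying Matters in Boulder", "body": "..."},
        {"header": "Services for Local Property Types", "body": "..."},
    ],
    "metadata": {
        "estimated_reading_time_min": 4,
        "keyword_coverage": 0.82,  # fraction of target keywords used naturally
        "word_count": 950,
    },
}

print(json.dumps(page, indent=2))
```

Structured output is what makes the later passes cheap: validators can assert on `metadata` fields directly instead of re-parsing prose.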
Pass 2: Validation and Feedback
The agent ran self-checks:
- Factual verification: Does this claim about local surveying requirements actually match reality? (We had regulatory data from state surveying boards.)
- Readability: Flesch Reading Ease score between 50 and 60 (accessible to non-technical readers)
- Keyword alignment: Do target keywords appear naturally in the content?
- Uniqueness: How much content is genuinely unique to this location vs. boilerplate?
If validation failed, the agent revised automatically. If it failed twice, it escalated to a human reviewer with detailed notes.
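The revise-then-escalate policy sketches out as a small loop. The checks here are simplified stand-ins for the real validators (the real readability and uniqueness checks were more involved), but the control flow matches: two failures route the draft to a human with the failure notes attached.

```python
def run_checks(draft: str) -> list[str]:
    """Return failure notes; an empty list means the draft passed."""
    notes = []
    words = len(draft.split())
    if not 800 <= words <= 1200:
        notes.append(f"length {words} outside 800-1200 words")
    if "survey" not in draft.lower():
        notes.append("missing core keyword 'survey'")
    return notes

def validate_with_escalation(draft, revise, max_failures=2):
    """Let the agent self-revise once; escalate to a human on a second failure."""
    failures = 0
    while True:
        notes = run_checks(draft)
        if not notes:
            return {"status": "passed", "draft": draft}
        failures += 1
        if failures >= max_failures:
            return {"status": "escalated", "notes": notes, "draft": draft}
        draft = revise(draft, notes)  # agent self-revision with specific feedback
```

Passing the failure notes into `revise` is the important bit: the agent revises against specific complaints, not a generic "try again."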
Pass 3: Human Review (Lightweight)
Our review wasn't "read every word." We used a templated review process:
- Spot-check 3 facts per page against source documents
- Verify the local context is accurate (no made-up details about the region)
- Ensure the voice matches the brand
- Approve or request specific revisions
Most pages passed with zero revisions. The agent had learned the brand well enough that human review became a safety gate, not a rewrite process.
What Actually Moved the Needle
Structured data for local context. The biggest win wasn't the AI; it was giving the AI excellent source material. We spent two days building a database of local factors (average property values, common survey types by area, local zoning complexity). The agent's output quality jumped 40% once it could reference this data.
Hybrid human-AI workflow. We didn't aim for "fully automated." We aimed for "humans focus on judgment, AI handles execution." A human was never sitting around waiting for the AI; they were reviewing batches of 10-15 pages at a time, which took 20-30 minutes per batch.
Iterative refinement on a small set. We didn't generate all 100 pages on day one. We built 10 pages, reviewed them with the client, adjusted the system based on feedback, then scaled to the full 100. This meant our agent learned their preferences in real time.
SEO that doesn't read like SEO. We didn't optimize for keyword density. We optimized for keyword relevance: the agent understood what someone hiring a surveyor in Denver actually needs to know (altitude considerations, soil types, etc.) and naturally incorporated relevant terms into that context.
The Numbers
- 100 pages generated and published in 7 days
- 92% passed human review on first pass (8% required minor revisions)
- Average production time per page: 12 minutes (including validation)
- Average page length: 950 words
- Estimated traditional copywriting cost: $15,000-20,000 USD
What We'd Do Differently
If we were building this again:
Start with user intent mapping. We made assumptions about what searchers wanted in each location. Before generation, we should have run searches for 20-30 target keywords to see what's actually ranking and what gaps exist. This would have made content more competitive.
Build iterative feedback loops earlier. We could have pushed drafts to the client for feedback on day 3 instead of day 5. The agent improves fast once it gets feedback, and early feedback compounds.
Automate metadata generation. We manually created meta descriptions, H1 tags, and image alt text. The agent should have done this automatically. We lost maybe 4 hours to busywork.
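Automating that step is mostly templating plus a few hard constraints. A hypothetical helper (this is what we should have built, not what we shipped; the service list in the description is an example):

```python
def build_metadata(city: str, state: str, service: str = "land surveying") -> dict:
    """Generate the page metadata we created by hand on the original project."""
    title = f"{service.title()} Services in {city}, {state}"
    description = (
        f"Licensed {service} in {city}, {state}. "
        "Boundary, ALTA, and topographic surveys with fast turnaround."
    )
    return {
        "h1": title,
        # Search engines typically truncate descriptions near 160 characters.
        "meta_description": description[:160],
        "image_alt": f"Surveyor at work in {city}, {state}",
    }
```

Four hours of busywork reduces to one function the validation pass can also check (e.g., asserting the description stays under the truncation limit).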
The Real Lesson
Scaling content isn't about better prompts or larger models. It's about building systems that understand your domain and maintain quality gates as they scale. The agent was the tool; the architecture (knowledge pipeline, validation rules, human review workflow) was the product.
If you're facing similar scaling challenges, this pattern works across industries: real estate, legal services, local B2B, any vertical where you need variations on a theme at scale.
We used Forge Agent for the orchestration and validation logic because it gave us fine-grained control over the workflow and let us plug in custom validation. If you're building something similar, you'll want that level of control; generic content generation platforms handle this poorly.
The surveying company is now seeing organic traffic growth on these pages. More importantly, they have a system they can extend. New location? The agent can generate a page in 10 minutes and pass it through validation automatically. That's leverage.