
How we built 100+ pages for a land surveying company in one week

We generated 100+ SEO-optimized pages for a surveying company using AI agents and automation. Here's what worked.

The Problem: Content at Scale

A land surveying company approached us with a straightforward request: they needed 100+ location-specific landing pages. Each page required unique content targeting a different service area (think "Land Surveying Services in [City], [State]"), but with real value, not thin AI slop.

The constraints were real: one week, zero budget for copywriters, and the client needed SEO-quality content that would actually rank. This isn't a case study about throwing GPT-4 at a problem. It's about building an intelligent system that understood their business, followed editorial standards, and scaled.


Our Architecture: Agents Over Prompts

We didn't use a prompt chain. We built an agent-driven system using Forge Agent that could reason about content requirements, validate its own output, and iterate.

Why Agents, Not Prompts?

Simple prompts fail at scale because they're stateless. You write a perfect prompt for one page, apply it to 100 pages, and suddenly page 47 has hallucinated a non-existent surveying technique. Agents maintain context, can verify facts against a knowledge base, and adjust their approach based on what they learn.
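A minimal sketch of what statefulness buys you. The class and method names are illustrative, not the actual Forge Agent API; the point is that corrections learned on page N carry into pages N+1 through 100:

```python
# Illustrative only: a stateless prompt gets identical instructions every
# time, while a stateful agent accumulates lessons across the batch.

class PageAgent:
    def __init__(self):
        self.lessons = []  # corrections accumulated from earlier pages

    def generate(self, city: str) -> dict:
        # The real system would call a model here; we only show that each
        # page is generated with all previously learned context applied.
        return {"city": city, "context_applied": list(self.lessons)}

    def record_feedback(self, note: str):
        # A validation failure on one page informs every later page.
        self.lessons.append(note)

agent = PageAgent()
page_1 = agent.generate("Denver")
agent.record_feedback("never invent surveying techniques")
page_2 = agent.generate("Boulder")
```

A bare prompt chain would have generated both pages with the same blind spots; here the second page already knows what went wrong on the first.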

Our agent had three core capabilities:

- Maintaining context across the full batch of pages
- Verifying claims against a structured knowledge base
- Adjusting its approach based on validation feedback

The Knowledge Pipeline

We fed the agent:

- The client's service descriptions and editorial standards
- A structured database of local factors (property values, common survey types, zoning complexity) for each service area

This wasn't raw data dumping. We structured it as a searchable knowledge graph so the agent could retrieve relevant context for each location without token bloat.


The Generation Process: Structured Output, Real Iteration

Here's where most teams fail: they generate content once and ship it. We built a three-pass system.

Pass 1: Content Generation

The agent generated a complete, section-by-section page outline tailored to each location.

Output format: structured JSON with section headers, body copy, and metadata (estimated reading time, keyword coverage, etc.).
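The exact field names below are assumptions based on the description above, but the shape of the record looked roughly like this:

```python
# Sketch of the structured output each generation pass produced.
from dataclasses import dataclass, asdict
import json

@dataclass
class PageSection:
    header: str
    body: str

@dataclass
class PageDraft:
    city: str
    state: str
    sections: list            # list of PageSection
    reading_time_min: int     # estimated reading time
    keyword_coverage: float   # share of target keywords actually used

    def to_json(self) -> str:
        # asdict recurses into nested dataclasses, so sections serialize too
        return json.dumps(asdict(self), indent=2)

draft = PageDraft(
    city="Denver",
    state="CO",
    sections=[PageSection("Why survey in Denver?", "Draft body copy.")],
    reading_time_min=4,
    keyword_coverage=0.85,
)
```

Emitting a fixed schema instead of free-form prose is what made the later validation passes possible: you can't programmatically check keyword coverage on a blob of markdown.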

Pass 2: Validation and Feedback

The agent ran self-checks:

- Were all factual claims supported by the knowledge base?
- Did the copy cover the target keywords for that location?
- Did the tone and structure match the client's editorial standards?

If validation failed, the agent revised automatically. If it failed twice, it escalated to a human reviewer with detailed notes.
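The revise-twice-then-escalate policy is simple to express as a loop. The `validate` and `revise` callables below are stand-ins for the real model calls, passed in so the policy itself stays testable:

```python
# Sketch of the escalation policy: revise on the first failure, hand off
# to a human with notes after the second.

def run_validation(draft, validate, revise, max_failures=2):
    """Return (draft, status, problems); status is 'passed' or 'escalated'."""
    failures = 0
    while True:
        problems = validate(draft)
        if not problems:
            return draft, "passed", []
        failures += 1
        if failures >= max_failures:
            # The human reviewer receives the draft plus detailed notes.
            return draft, "escalated", problems
        draft = revise(draft, problems)
```

Returning the failure notes alongside the draft is the detail that made human review fast: the reviewer starts from the agent's own diagnosis, not from scratch.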

Pass 3: Human Review (Lightweight)

Our review wasn't "read every word." We used a templated review process: reviewers worked through a short checklist per page rather than reading every draft in full.

Most pages passed with zero revisions. The agent had learned the brand well enough that human review became a safety gate, not a rewrite process.


What Actually Moved the Needle

Structured data for local context. The biggest win wasn't the AI; it was giving the AI excellent source material. We spent two days building a database of local factors (average property values, common survey types by area, local zoning complexity). The agent's output quality jumped 40% once it could reference this data.

Hybrid human-AI workflow. We didn't aim for "fully automated." We aimed for "humans focus on judgment, AI handles execution." A human was never sitting around waiting for the AI; they were reviewing batches of 10-15 pages at a time, which took 20-30 minutes per batch.

Iterative refinement on a small set. We didn't generate all 100 pages on day one. We built 10 pages, reviewed them with the client, adjusted the system based on feedback, then scaled to the full 100. This meant our agent learned their preferences in real time.

SEO that doesn't read like SEO. We didn't optimize for keyword density. We optimized for keyword relevance: the agent understood what a surveyor in Denver actually needs to know (altitude considerations, soil types, etc.) and naturally incorporated relevant terms into that context.
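The difference between density and relevance can be sketched as a coverage check. The topic list here is a made-up placeholder, not our real keyword data; the idea is that you score whether each locally relevant topic is discussed at all, not how many times a phrase repeats:

```python
# Illustrative relevance check: score topic coverage, not repetition.

DENVER_TOPICS = ["altitude", "soil", "zoning"]  # placeholder topic list

def topic_coverage(body: str, topics: list) -> float:
    """Fraction of locally relevant topics that the copy actually covers."""
    text = body.lower()
    covered = sum(1 for topic in topics if topic in text)
    return covered / len(topics)
```

A density metric would reward stuffing "land surveying Denver" into every paragraph; a coverage metric rewards a page that actually discusses altitude and soil conditions once, in context.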


The Numbers


What We'd Do Differently

If we were building this again:

Start with user intent mapping. We made assumptions about what searchers wanted in each location. Before generation, we should have run searches for 20-30 target keywords to see what's actually ranking and what gaps exist. This would have made content more competitive.

Build iterative feedback loops earlier. We could have pushed drafts to the client for feedback on day 3 instead of day 5. The agent's learning curve is steep; early feedback compounds.

Automate metadata generation. We manually created meta descriptions, H1 tags, and image alt text. The agent should have done this automatically. We lost maybe 4 hours to busywork.
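The metadata step is mechanical enough to automate from the page record. The templates below are assumptions for illustration, not the client's actual formats:

```python
# Sketch of the metadata generation we did by hand and should have
# automated. Templates are hypothetical.

def build_metadata(city: str, state: str) -> dict:
    h1 = f"Land Surveying Services in {city}, {state}"
    description = (
        f"Licensed land surveying in {city}, {state}: boundary, "
        f"topographic, and ALTA surveys from local professionals."
    )
    return {
        "h1": h1,
        # Truncate to stay within the typical search-snippet length.
        "meta_description": description[:160],
        "image_alt": f"Surveyor at work in {city}, {state}",
    }
```

Running this over the same structured records the agent already produced would have recovered those 4 hours.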


The Real Lesson

Scaling content isn't about better prompts or larger models. It's about building systems that understand your domain and maintain quality gates as they scale. The agent was the tool, but the architecture (knowledge pipeline, validation rules, human review workflow) was the product.

If you're facing similar scaling challenges, this pattern works across industries: real estate, legal services, local B2B, any vertical where you need variations on a theme at scale.

We used Forge Agent for the orchestration and validation logic because it gave us fine-grained control over the workflow and let us plug in custom validation. If you're building something similar, you'll want that level of control; generic content generation platforms handle this poorly.

The surveying company is now seeing organic traffic growth on these pages. More importantly, they have a system they can extend. New location? The agent can generate a page in 10 minutes and pass it through validation automatically. That's leverage.

