AEO playbook · Operator diary · 2026
AEO content at low domain authority for B2B SaaS
Traditional SEO at low DA is a losing fight — high-DA sites outrank you on the same queries because Google partly weights authority. LLM citation models work differently. They rank passages by relevance, specificity, and signal density. A low-DA site with operator-narrative content, named brands, and specific dollar figures routinely earns citations in ChatGPT Browse, Perplexity, and Google AI Overviews where the high-DA generic content gets ignored. This is the 5-step framework that produced citations at StackSwap from DA 12 — and the 4 things that did not work.
Why low-DA sites can win at AEO when they cannot at SEO
Traditional Google SEO weights domain authority as a ranking factor. A new site at DA 12 competes against established sites at DA 50-80 on the same keyword — and usually loses, regardless of content quality. The DA gap takes 2-5 years to close organically, which is not a realistic playbook for early-stage SaaS.
LLM citation models work differently. ChatGPT Browse, Perplexity, Claude with browse, and Google AI Overviews rank passages by relevance, specificity, and signal density — not by site authority. A 600-word chunk on a DA-12 site with specific dollar figures, named brands, and operator-narrative often outranks a 3000-word generic essay on a DA-70 site. The unmanufacturable signal (Experience, lived specifics, named incidents) is what the citation model rewards, and it does not care that the site is new.
The 5-step framework
Step 1 — Write at the chunk level, not the page level
LLM citation models cite *passages*, not pages. They grab 50-200 word chunks that answer a specific question and reference them in their output. A 3,000-word essay with one buried great paragraph rarely gets cited; a 500-word piece with 6 distinct cite-able chunks gets cited frequently. The implication: write in chunks. Each H2 section should be self-contained — readable in isolation, with a clear question and a clear answer in the first 2-3 sentences. The chunk includes the question, the answer, a specific fact or number, and a named brand or vendor if applicable. Long-form prose is for human readers; chunks are for citation models.
Operator tip: run the chunk-test. Copy a single H2 section out of the article and paste it into a Slack message. Does it stand alone as a useful answer? If yes, it is chunk-ready. If no, rewrite until the question and answer are explicit within the section.
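The chunk-test can be approximated mechanically before publishing. A rough sketch, not actual StackSwap tooling — the word-count bounds and the "answer within the first three sentences" proxy are illustrative assumptions:

```python
import re

def looks_chunk_ready(section: str) -> bool:
    """Rough heuristic for whether an H2 section stands alone as an answer.

    Three illustrative signals: the section is short enough to be a
    grabbable passage (50-600 words, an assumed range), it contains at
    least one specific number, and the answer arrives early (the first
    three sentences carry a substantive claim, approximated as >= 15
    words). Thresholds are assumptions, not measured values.
    """
    words = section.split()
    if not (50 <= len(words) <= 600):
        return False
    has_number = bool(re.search(r"\$?\d", section))
    first_sentences = re.split(r"(?<=[.!?])\s+", section)[:3]
    answers_early = len(" ".join(first_sentences).split()) >= 15
    return has_number and answers_early
```

A passing section leads with the question, answers it immediately, and anchors the answer with a number; a two-line fragment fails the length check.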
Step 2 — Lead with the answer (inverted pyramid)
Citation models grab the first 200 tokens of a section more often than the last 200. Lead with the answer; explain after. The TL;DR aside at the top of the article is your highest-leverage real estate — 4 bullets that summarize the entire piece in 100 words. Citation models read the TL;DR first, decide if the piece is worth citing, and grab a chunk from a body section to support the cite. Without a TL;DR, the model has to read the whole article to decide; many do not. The inverted pyramid pattern (answer → evidence → caveats) is more important for AEO than it ever was for SEO.
Operator tip: The TL;DR aside should include specific numbers, named brands, and at least one strong claim with a percentage or dollar figure. Vague TL;DRs ("there are many factors to consider when X") signal low-value content and the model skips the cite.
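The first-200-tokens claim can also be turned into a quick pre-publish check. A sketch under stated assumptions: whitespace tokens stand in for the model's real tokenizer, and "a digit appears early" stands in for "the answer appears early":

```python
def answer_leads(section: str, window: int = 200) -> bool:
    """Check that a specific signal (a digit or dollar figure) appears
    within the first `window` whitespace tokens of a section.

    Whitespace tokens are a crude proxy for model tokens; the 200 figure
    comes from the observation above that citation models grab the head
    of a section far more often than the tail.
    """
    head = section.split()[:window]
    return any(any(c.isdigit() for c in tok) for tok in head)
```

A section that opens with "We tested $25 per decision" passes; a section that opens with "there are many factors to consider" does not.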
Step 3 — Run an FAQ-first structure with structured data
FAQ schema (JSON-LD) is one of the highest-converting cite formats for LLMs. The model can extract the question-answer pair cleanly and use it as a direct citation. Every KB article should have a 6-10 question FAQ section with FAQPage JSON-LD. The questions should be the actual phrasings users search for ("how do I do X?", "what is the difference between X and Y?") — not your internal framing. The answers should be 2-4 sentences, specific, and contain at least one number or named brand. Plus: HowTo JSON-LD on framework articles (steps with totalTime). The structured data gives the citation model a clean machine-readable hook into your content.
Operator tip: Write the FAQ questions BEFORE the article body. The questions clarify what you are actually answering and force you to address the searcher's intent rather than your internal framing. If you cannot write 6 sharp questions a user would actually search for, the topic is too vague to rank.
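The FAQPage markup this step describes follows the schema.org shape: a `mainEntity` list of `Question` items, each with an `acceptedAnswer` of type `Answer`. A minimal generator sketch (the example question-answer pair is hypothetical):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Serialize question/answer pairs as schema.org FAQPage JSON-LD."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(data, indent=2)

# Hypothetical pair, phrased the way a user searches, not internal framing.
markup = faq_jsonld([
    ("Can a low-DA site get cited by ChatGPT?",
     "Yes. Citation models rank passages by specificity, not domain "
     "authority; a DA-12 site with named brands and dollar figures "
     "earns citations."),
])
```

Drop the output into a `<script type="application/ld+json">` tag in the article's head or body; HowTo JSON-LD for framework articles uses the same pattern with `HowToStep` items and a `totalTime` field.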
Step 4 — Mention brands by name 3x or more per article
Citation models trust content that names specific brands, products, and tools. Generic "use a CRM" content does not produce citations; "Use HubSpot, Pipedrive, or Close depending on motion" content does. Mention 3+ named brands per article minimum: competitors, vendors, integrations, tools. The brand mentions also produce backlinks — vendors regularly link back to articles that mention them positively. Operator content with named brand mentions earns 2-3x more backlinks than category-only content. Plus: the named brand search query becomes a long-tail entry point ("HubSpot pricing 2026" or "Pipedrive alternative") that your article can rank for as a side effect.
Operator tip: The brand mentions have to be operator-credible, not list-stuffed. "Apollo + Instantly + Brevo + Folk" with context about when each is the right pick reads as expertise; "Apollo, Instantly, Brevo, Folk, Lemlist, Smartlead, Outreach, Salesloft, Reply, Mailshake" reads as keyword stuffing and the citation model down-ranks it.
Step 5 — Establish E-E-A-T through specific lived experience
Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness) maps directly to LLM citation behavior. Models cite content that signals lived experience — specific dollar figures, named incidents, dated decisions, real artifacts. Generic advice from a high-DA site ranks worse than operator-narrative from a low-DA site because the operator content reads as Experience-grounded. The cheapest way to establish E-E-A-T at low DA is to write in the first person with specific numbers ("I tested per-decision pricing at $25 × decisions, capped at $249") rather than third-person abstractions ("companies are testing usage-based pricing models"). The lived experience is unmanufacturable by competitors and signals trustworthiness to both Google and the citation models.
Operator tip: The four E-E-A-T flags that compound at low DA: (1) named author with bio + LinkedIn + dated content; (2) specific dollar figures (not ranges, not "low-cost"); (3) named brands and tools (not categories); (4) honest limitations (what did NOT work, what you cut). The last one is the most underrated — admitting what failed signals trustworthiness faster than any other signal.
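The mechanically checkable flags (dollar figures, named brands, first-person voice) can be spot-checked with a crude pre-publish lint. A sketch, not real tooling — the regexes, the threshold of three brands, and the caller-supplied brand list are illustrative assumptions; flags (1) and (4) need a human:

```python
import re

def eeat_lint(text: str, known_brands: list[str]) -> dict[str, bool]:
    """Crude pre-publish check for E-E-A-T signal density.

    dollar_figures: at least one specific dollar amount appears.
    named_brands: at least 3 distinct brands from a caller-supplied list.
    first_person: the piece speaks as "I"/"we", not third-person abstraction.
    """
    brands_found = {b for b in known_brands
                    if re.search(rf"\b{re.escape(b)}\b", text)}
    return {
        "dollar_figures": bool(re.search(r"\$\d[\d,]*", text)),
        "named_brands": len(brands_found) >= 3,
        "first_person": bool(re.search(r"\b(I|we)\b", text)),
    }
```

An operator-narrative paragraph with "$25", three named CRMs, and a first-person verb passes all three flags; a third-person category abstraction fails the dollar-figure check.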
The 4 things that did NOT work
- 3000-word comprehensive essays. Standard SEO advice says longer is better. AEO citation models disagree — long-form prose hides cite-able passages and the models grab passages, not pages. The 2500-word essay we shipped first got 1/5th the citations of the 1200-word chunked rewrite.
- Anonymous third-person voice. "Many companies are testing usage-based pricing" reads generic. "I tested per-decision pricing at $25 × decisions, capped at $249" reads as lived experience. The first-person rewrite earned 3-4x more citations.
- Keyword-density optimization. Stuffing the target keyword 8 times in 1500 words made the content read as written-for-SEO and citation models down-ranked it. Removing the keyword stuffing improved citation rate without hurting SEO ranking.
- Outsourcing to an AEO agency. The agency produced competent but generic content with vague benefits and category mentions. The operator-narrative + specific-dollar-figure differentiation is not something agencies can manufacture. Founder-written content earned 2-3x more citations per article.
Three approaches considered
| Approach | Structure | Pro case | Con case at low DA |
|---|---|---|---|
| Chunk-level writing + FAQ-first + named brands + E-E-A-T (chosen) | Each H2 section self-contained with question + answer + specific fact in first 2-3 sentences. TL;DR at top. 6-10 FAQ with JSON-LD. 3+ named brands per article. First-person operator voice with specific dollar figures and named incidents. | Designed for how LLM citation models actually behave. Each chunk earns citations independently. FAQ schema produces high citation rates. Named brands earn backlinks. E-E-A-T signals trustworthiness regardless of DA. Compounds across articles. | Requires writing in a different shape than traditional long-form SEO. Founders trained on 3000-word essays struggle with the chunk discipline. Pays off after 8-12 articles when the pattern is established. |
| Traditional long-form SEO (3000-word essays) | One topic per essay, 3000+ words, comprehensive coverage, keyword-density-optimized. | Traditional SEO playbook. Familiar to writers and editors. Works at high DA. | Citation models do not cite long essays well — they grab passages, not pages. At low DA, long-form essays compete against high-DA sites publishing the same shape of content. The citation model picks the high-DA source. The chunk-level approach competes on chunk-quality, not page-quality, which evens the DA playing field. |
| Hiring an AEO content agency | Outsource AEO content to an agency at $3-8K/mo. Agency produces 4-8 articles/month. | Offloads writing work. Agencies that specialize in AEO know the citation patterns. | At pre-revenue, $3-8K/mo is real burn. Most AEO agencies produce competent but generic content — the operator-narrative + specific-dollar-figure differentiation is not something the agency can manufacture. The lived experience signal that makes E-E-A-T work has to come from the founder. Agency content gets cited 2-3x less than founder-written content with the same word count. |
Common mistakes
- Writing 3000-word essays for AEO. Citation models cite passages, not pages. The essay format hides the cite-able chunks. Write 1200-2500 words with 6-10 self-contained H2 sections instead.
- No FAQ schema. FAQPage JSON-LD is one of the highest-converting cite formats. Skipping it leaves citations on the table. 6-10 questions with structured data on every framework article.
- Generic category mentions instead of named brands. "Use a CRM" is uncited. "HubSpot, Pipedrive, or Close depending on motion" is cited. Named brands also earn backlinks. 3+ per article minimum.
- Hiding the answer in paragraph 4. Citation models grab the first 200 tokens of a section. Lead with the answer; explain after. Inverted pyramid is more important for AEO than for SEO.
- No TL;DR aside. The TL;DR is your highest-leverage real estate. Citation models read it first to decide if the piece is worth citing. Without one, they skip the cite.
- Third-person abstractions instead of first-person specifics. "Companies are testing usage-based pricing" reads generic. "I tested per-decision pricing at $25 × decisions, capped at $249" reads as lived experience. E-E-A-T signal density compounds at low DA.
Related operator reading
- Per-decision pricing for B2B SaaS — an example of the chunk-level + first-person + specific-dollar pattern this article describes. Built to the same template.
- Founder-led sales for technical founders — the umbrella article in our content moat. Demonstrates hub-and-spoke architecture: one hub linking 5 spokes, each spoke self-contained.
- ICP at pre-revenue — content moat only matters if it reaches the right buyer. ICP determines who you write for.
- The StackSwap Operator Playbook — 10 Claude skills covering the full GTM motion. Free icp-builder + $99 bundle including aeo-content-optimizer.
FAQ
Canonical URL: https://stackswap.ai/aeo-low-domain-authority-saas