Joe Robinson

The SEO sea of sameness

How identical tools and AI workflows have made SEO intellectually conformist, and why mental models, not tooling, are the real competitive differentiator.

Open any SEO audit delivered in the past three years. The chances are it runs through the same checklist, in roughly the same priority order, using roughly the same tool stack, and arrives at roughly the same recommendations. This is not a coincidence. It is a structural feature of an industry that has built its professional identity around shared tooling, shared frameworks, and now, shared AI workflows.

That standardisation has produced real proficiency gains. Systematic audits catch real problems. Keyword research surfaces real demand. Content frameworks produce pages that rank. None of this is wrong.

The problem is that efficiency and proficiency are not differentiation. SEO’s convergence on identical tools, frameworks, and AI workflows has created intellectual conformity at industry scale, making diversity of mental models (not access to data or tooling) the only remaining source of genuine strategic differentiation.

The toolkit shapes the lens

A mental model is a simplified representation of how something works that shapes how we interpret information and make decisions. Everyone uses mental models constantly and mostly unconsciously. In SEO, the tool stack determines which models practitioners apply, because every tool embeds a model of what matters.

Keyword tools surface volume and difficulty. The implicit model: demand is best measured by search volume, and difficulty is the primary selection constraint. Practitioners who work primarily within these tools begin to treat volume as a proxy for value and difficulty as the main strategic variable. The result is a profession-wide assumption that high-volume, lower-difficulty keywords are where effort should go, regardless of what conversion data or competitive context might suggest.
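
To make the bias concrete, here is a minimal sketch with invented keywords and numbers, not output from any real tool. It compares the prioritisation the tool's columns imply with one that folds in conversion data the tool never sees:

```python
# Hypothetical keyword rows: the columns a keyword tool exports, plus the
# conversion fields it does not. All numbers are invented for illustration.
# (keyword, monthly_volume, difficulty, conversion_rate, value_per_conversion)
keywords = [
    ("what is a crm",          30000, 60, 0.001, 400),
    ("best crm software",      12000, 78, 0.010, 400),
    ("crm for small business",  4500, 52, 0.025, 400),
    ("crm pricing comparison",   900, 35, 0.090, 400),
]

# The tool's implicit model: highest volume first, difficulty as tiebreaker.
by_tool_logic = sorted(keywords, key=lambda k: (-k[1], k[2]))

# An expected-value model: estimated monthly revenue if the page captures
# the demand. Deliberately naive -- no CTR curve, no ranking probability.
by_expected_value = sorted(keywords, key=lambda k: -(k[1] * k[3] * k[4]))

print([k[0] for k in by_tool_logic])      # 'what is a crm' ranks first
print([k[0] for k in by_expected_value])  # 'what is a crm' ranks last
```

The highest-volume keyword tops one list and finishes bottom of the other. Nothing in the tool's export can surface that reversal, because the columns that drive it are not in the export.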

Dashboards track rankings, traffic, and impressions. The implicit model: performance means movement in these three numbers. A site that gains rankings but loses revenue is hard to identify within this frame. A site that loses organic visibility but acquires stronger direct and referral traffic appears to be declining. The dashboard model is not wrong, but it screens out what it does not measure.
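
A toy illustration of that screening effect, again with invented numbers:

```python
# Two months of hypothetical reporting data for the same site.
before = {"rankings_top10": 120, "organic_sessions": 80_000, "organic_revenue": 200_000}
after  = {"rankings_top10": 150, "organic_sessions": 95_000, "organic_revenue": 140_000}

# What the dashboard tracks -- revenue is not on it.
dashboard_metrics = ["rankings_top10", "organic_sessions"]

for metric in dashboard_metrics:
    change = (after[metric] - before[metric]) / before[metric]
    print(f"{metric}: {change:+.0%}")  # rankings_top10: +25%, organic_sessions: +19%

# Both tracked numbers moved up, so the frame reports improvement.
# organic_revenue fell 30%, but a metric that is not on the dashboard
# never generates a question.
```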

Templated audits run through the same checklist in the same order regardless of the site’s commercial context. A site with 5,000 URLs and concentrated revenue from one category receives the same audit structure as a site with 500,000 URLs and complex crawl dynamics. The template produces comparable outputs because it is designed to produce comparable outputs. That is its function and its limitation.

Content brief generators analyse the current top-ranking pages and suggest matching structure, word count, heading patterns, and topic coverage. The logic is sound: if these pages rank, producing a similar page optimises for the same signals. The practical consequence is that every optimised piece of content converges toward the statistical median of what currently ranks. Optimising toward the median is a strategy for not being filtered out, not a strategy for standing out.
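
A sketch of the mechanism, using hypothetical SERP data: a brief generator effectively computes summary statistics over the current top results and hands them back as targets.

```python
import statistics

# Invented structure stats for the current top results on some SERP.
top_pages = [
    {"words": 2100, "h2s": 8,  "images": 5},
    {"words": 1850, "h2s": 7,  "images": 4},
    {"words": 2400, "h2s": 9,  "images": 6},
    {"words": 1900, "h2s": 8,  "images": 3},
    {"words": 2200, "h2s": 10, "images": 5},
]

def generate_brief(pages):
    """What a brief generator effectively computes: the median of what ranks."""
    return {key: statistics.median(p[key] for p in pages) for key in pages[0]}

print(generate_brief(top_pages))  # {'words': 2100, 'h2s': 8, 'images': 5}
# Every team briefing from this SERP gets the same target. If they all hit it,
# the next crawl finds less variance, and the next round of briefs converges
# further. Optimising toward the median is self-reinforcing.
```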

None of these tools are at fault. They are well-designed for what they do. The issue is structural: when every practitioner in a market runs the same tools against the same data with the same default interpretive assumptions, they do not merely produce similar outputs; they produce similar analyses of what the outputs mean. The analytical lens is shared because the tools are shared.

How professional consensus calcifies

The reason most SEO strategies converge on the same structure is not that they are all based on independent evidence that happened to agree. It is that the professional information environment amplifies agreement and filters out dissent.

A small number of visible voices publish frameworks: topical authority, E-E-A-T, content clusters, passage indexing, and others. These circulate rapidly on LinkedIn, through newsletters, at conferences, and across the podcast circuit. Practitioners adopt them not because they have independently tested them but because they are consensus, and consensus is legible as credibility.

This produces a professional community that increasingly resembles what researchers define as an echo chamber: a space where people “predominantly encounter viewpoints that reinforce their existing beliefs while excluding dissenting perspectives.” The community is not intellectually dishonest; it is structurally biased toward agreement.

The practical consequence is precisely what that research identifies: the echo chamber effect “significantly hinders information dissemination across communities.” In SEO terms, contrarian findings, unconventional frameworks, and interpretations that diverge from orthodoxy circulate poorly. The industry’s information flows reinforce what it already believes.

The topical authority framework illustrates the dynamic clearly. Once credible voices named and described the concept, it spread rapidly, became orthodoxy, and practitioners began treating it as a confirmed Google ranking factor. The mechanism was inferred from SERP observations, not released by Google. The speed with which inference became conventional wisdom is less a sign of the framework’s validity than of how the professional information environment handles new concepts: fast circulation, slow scrutiny.

AI is compressing the distribution further

AI SEO workflows extend the standardisation problem rather than disrupting it. They run on the same keyword datasets, apply the same SERP analysis patterns, process inputs through prompts trained on the same existing content, and produce outputs structured to match what currently ranks. An AI workflow built on shared assumptions accelerates those assumptions; it does not challenge them.

The result is a pattern the SEO content market already demonstrates plainly: “AI has revolutionized efficiency, but also homogenization. While it can help produce ideas and speed up drafting, AI also makes it easier for the mediocre middle to flood the market with ‘new’ content that’s anything but.”

When every practitioner automates the same analytical and production processes, the outputs compress toward the same statistical centre. The efficiency gain is real; the differentiation cost is equally real. This is not a hypothetical concern about some future state: “if every competitor says the same thing, none of you stand out, and rankings flatten out across the board.” The competitive dilution is “particularly acute in red ocean industries like SEO.”
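
The compression is also measurable, at least crudely. A minimal sketch, using invented example texts and simple bag-of-words similarity rather than any production embedding model:

```python
from collections import Counter
from itertools import combinations
import math

def cosine(a: str, b: str) -> float:
    """Crude lexical similarity between two texts (bag of words)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Invented intros, standing in for drafts produced by different teams
# running similar AI workflows against the same SERP.
drafts = [
    "In this guide we cover everything you need to know about choosing a CRM.",
    "This guide covers everything you need to know about choosing the right CRM.",
    "Everything you need to know about choosing a CRM, covered in this guide.",
]

pairs = [cosine(a, b) for a, b in combinations(drafts, 2)]
print(f"mean pairwise similarity: {sum(pairs) / len(pairs):.2f}")
# Tracked across a SERP over successive months, a rising mean would be the
# compression made visible.
```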

Practitioners adopted AI, individually, to gain a competitive advantage. When the entire industry adopts the same AI workflows simultaneously, that logic collapses. What remains is a faster mechanism for producing the same output as everyone else.

The counterargument: isn’t standardisation just best practice?

The honest objection to this argument runs as follows: best practices exist because they work. Structured audits catch real problems. Keyword research surfaces real demand. Content frameworks produce pages that rank. Standardisation is not intellectual conformity; it is applied learning. Why deviate from what is demonstrably effective?

The objection is correct as far as it goes. Standardised practice is better than no practice. The frameworks work, in the average case, for the typical site, in a competitive environment where most competitors are not executing significantly better. The problem is the ceiling this creates, not the floor.

When every practitioner in a market applies the same framework to the same data, the market develops collective blind spots around whatever that framework does not measure. The shared model cannot surface questions it does not ask. Demand quality, user psychology, competitive structure, and the specific commercial context of a business do not appear in standard analytical outputs. They are real, they affect performance, and the industry-standard toolkit mostly ignores them.

The execution consequence follows directly: if every player follows the same playbook, outperformance requires out-executing rather than out-thinking, doing the same things faster, more thoroughly, or more cheaply. Technology will always win that race. Google’s own behaviour signals this: it consistently favours content produced with original research, personal expertise, and unique insights over content that repeats established patterns. The platform rewards the thing that identical playbooks cannot produce.

The argument here is not against frameworks. Frameworks are a floor. The question is what you bring to the data that the framework cannot produce on its own.

A broader mental model toolkit for SEO

The prescription is not to abandon the standard toolkit. It is to treat it as a starting point and bring additional interpretive models to what it surfaces.

Charlie Munger’s argument for building a latticework of mental models is directly relevant here. Models must “come from multiple disciplines because all the wisdom in the world is not to be found in one little academic department.” The prescription is not to find a better single model but to develop a sufficiently diverse set that different dimensions of the same problem become visible under different lenses.

The real cognitive gain comes not from any individual model but from their combination. “Unique thinking, innovation and creative problem solving” arise from overlaying diverse models, not from upgrading a single existing one. In SEO terms: the practitioner who brings economics, psychology, and systems thinking to the same Google Search Console data will formulate questions that the pure SEO practitioner cannot, because those questions require models the standard toolkit does not contain.
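
To make that concrete, here is a sketch of what different questions against the same data can look like. The rows mimic a Search Console query export with invented numbers; each function encodes a question from a different discipline:

```python
# Invented rows mimicking a Search Console export:
# (query, impressions, clicks, avg_position)
rows = [
    ("acme crm review",     5000, 900, 2.1),   # branded
    ("best crm software",  40000, 400, 8.5),
    ("crm pricing",         9000, 200, 6.2),
    ("acme vs bigco crm",   3000, 600, 3.0),   # branded
]

def seo_question(rows):
    """Standard toolkit: which queries rank below the fold?"""
    return [q for q, imp, clk, pos in rows if pos > 5]

def psychology_question(rows):
    """User psychology: is brand recognition doing the clicking?
    Compare CTR on branded queries against the rest."""
    return [(q, round(clk / imp, 2)) for q, imp, clk, pos in rows if "acme" in q]

def economics_question(rows):
    """Information economics: how concentrated is the click supply?
    One query carrying most clicks is a fragile position."""
    clicks = [clk for _, _, clk, _ in rows]
    return max(clicks) / sum(clicks)

print(seo_question(rows))                 # ['best crm software', 'crm pricing']
print(psychology_question(rows))          # branded CTRs of 0.18 and 0.2
print(f"{economics_question(rows):.2f}")  # 0.43 -- top query has 43% of clicks
```

Same dataset, three different findings. Only the first question exists in the standard toolkit.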

Five analytical disciplines that the standard SEO toolkit systematically underweights:

  • Incentive analysis. What does Google actually optimise for at the business level? Not quality in the abstract, but revenue from ad products, user trust, and long-term engagement. Understanding the incentive structure predicts Google’s behaviour more accurately than inferring motivation from algorithm updates after the fact.
  • Information economics. Search engines solve an asymmetric information problem: users cannot reliably distinguish authoritative publishers from low-quality ones. Understanding how signals of trust, expertise, and credibility function as asymmetry reducers produces different hypotheses about what moves rankings than a checklist of technical requirements does.
  • User psychology. Click behaviour, dwell time, and return visits are shaped by recognition, familiarity, perceived authority, and loss aversion. Keyword research identifies what users search for; psychology explains why they choose one result over another when the options are formally similar.
  • Market structure analysis. Some verticals consolidate around one or two dominant brands in search regardless of how well competitors execute SEO. The structure of the market determines the available ceiling. Identifying that ceiling requires competitive economics, not keyword gap analysis.
  • Systems thinking. Content, links, brand signals, and user behaviour interact non-linearly. Changes in one element alter the others in ways that linear checklist analysis cannot model. A practitioner who understands feedback loops will make different prioritisation decisions than one who treats ranking factors as independent inputs; a toy simulation of one such loop follows this list.
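
The simulation below uses arbitrary coefficients invented for illustration; it models no real ranking system. Visibility drives clicks, clicks feed an accumulating engagement signal, and the signal compounds back into visibility:

```python
def simulate(initial_visibility: float, quality: float, steps: int = 10) -> float:
    """Toy loop: visibility -> clicks -> engagement signal -> visibility.
    Arbitrary coefficients; the shape of the outcome is the point."""
    visibility = initial_visibility
    signal = 0.0
    for _ in range(steps):
        clicks = 100 * visibility                       # exposure drives clicks
        signal = 0.8 * signal + 0.2 * clicks * quality  # engagement accumulates
        visibility = min(1.0, visibility * (1 + 0.004 * signal))  # feedback compounds
    return visibility

# Same content quality, different starting visibility.
print(round(simulate(0.10, 0.5), 3))  # ~0.114: grew by a factor of ~1.14
print(round(simulate(0.20, 0.5), 3))  # ~0.262: grew by a factor of ~1.31
# The site that started more visible compounds faster. A checklist model that
# scores ranking factors as independent inputs predicts the same lift for both.
```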

None of these replace the standard toolkit. Each one extends the range of questions it is possible to ask of the same data.

The competitive differentiator

SEO does not suffer from a shortage of data, tools, or frameworks. Practitioners have more measurement capability than at any point in the industry’s history. The constraint is interpretive: when the same tools determine which questions get asked, the industry converges on the same questions and, consequently, the same answers.

AI has not disrupted this. It has automated the existing analytical monoculture and scaled it at lower cost. The practitioners who will outperform are not those with access to better data; they are those who bring a different interpretive frame to the same data everyone else has. That capability does not come from a tool purchase or a workflow optimisation. It comes from a reading list that extends well outside the industry’s own publications.

Two practitioners can look at identical SERP data and reach different strategic conclusions. The data does not explain the difference. The mental models they apply to the data do.

SEO’s convergence on identical tools, frameworks, and AI workflows has created intellectual conformity that reduces strategic diversity across the industry. When every practitioner interprets the same data through the same analytical lens, the outputs converge and competitive advantage disappears. The practitioners who consistently see what others miss are those who deliberately draw from mental models outside SEO: incentive analysis, information economics, user psychology, and systems thinking. The data is the same for everyone; the interpretation is the differentiator.