AEO Playbook: How to Optimize for AI

AEO Playbook: How to Optimize for AI w/ Profound’s Josh Blyskal.

All in all, this is a great and honest take on the current AEO, GEO, and SEO landscape, away from all the hype.

Everything old is new again?

It’s true that many things are changing, but many fundamental aspects of search, indexing, and crawlability are staying the same.

In AEO land, we are still working out the same muscles: we’re using meta descriptions, title tags, schema, FAQs. We’re still writing for featured snippets. We’re still thinking about E-A-T and content.
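
To make that concrete, here’s a minimal sketch of what that same old toolbox looks like in a page head: a title tag, a meta description, and FAQ schema. All values are hypothetical:

```html
<!-- Sketch: the same SEO basics, now also feeding answer engines. All values hypothetical. -->
<head>
  <title>How to Optimize for AI Answer Engines (2025)</title>
  <meta name="description" content="A practical guide to AEO: schema, semantic slugs, and llms.txt." />
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
      "@type": "Question",
      "name": "What is AEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer Engine Optimization: making content easy for AI assistants to retrieve and cite."
      }
    }]
  }
  </script>
</head>
```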

Not to mention that most of the best SEO strategies continue to work for ranking and getting cited by LLMs. And, if you ask me, it’s a good thing that blog posts are losing ground; there’s just too much editorial slop out there:

Listicle and comparative content makes up 32.9% of all citations. That’s the number one most cited single type of content. Number two is all blogs and opinion, at 9.9%.

I remember when it was common practice to serve different versions of your site: a statically generated one without any JS for the bots, and one with rich JS interactions. Then Googlebot started rendering JavaScript and we stopped this nonsense. But we’re at it again with llms.txt and other practices that make it easier for the new wave of bots to process your content:

Maybe the user-facing website is actually radically designed and not super intuitive for answer engines; it’s almost like a piece of art rather than an actual site. Because websites right now, in my opinion, look very similar. Websites are very optimized; there’s a very similar basic structure across websites. Allowing answer engines to confidently go in and navigate some sort of backend architecture, be it llms.txt or whatever it is, allows us to do crazy things again with our websites.

When you’re thinking about being cited by answer engines, it’s kind of a two-step game. Step one is marketing the cover of the book. It’s like, all right, we want our URL slug to look really good. We want our meta description to be really attractive, so that the answer engine thinks there’s a lot of value behind the page.

This is not dissimilar from optimizing for how humans operate. We scan the SERP and click on links we find interesting, and interesting links usually have good titles and descriptions. Nobody wants to waste a click.
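
For illustration, here’s what a weak versus a strong “cover” might look like (made-up slugs and copy):

```text
Weak cover:
  slug:  /blog/post-127
  meta:  "Read our latest blog post."

Strong cover:
  slug:  /blog/crm-pricing-comparison-2025
  meta:  "Side-by-side pricing for 8 popular CRMs, updated for 2025."
```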

Don’t discount Perplexity

Perplexity is my daily search driver. Many people discount it, but they are quietly doing a lot of great stuff and testing things no one else dares to try. They also recently launched Comet, the first agentic browser.

Perplexity is doing a great job, I think. If you’re going to look at any model to see what’s coming in the future, Perplexity is kind of the canary in the coal mine for this stuff, because they were first to shoppability. They’re doing sponsored queries now, so you can go in as a brand and sponsor those. You can sponsor follow-up queries.

If you’re discounting Perplexity, don’t. You might not get a lot of traffic from it, but whatever you get is high intent. I think part of the reason is that they make their sources very clear and easy to scroll through, as opposed to other providers.

Perplexity has like a six to ten times higher click-through rate than ChatGPT does.

The game is changing, mostly for the better

The traffic coming from these answer engines is extremely high quality; they are ready to buy. They are qualified. So when you get someone who clicks, a click is a conversion for the most part. I mean, I’ve heard of traffic CVR as high as 20 to 30% from ChatGPT.

This is good news for everybody. The funnel is getting compressed; parts of it (especially the top) might get squeezed to zero, but even if volume goes down you should get better leads.

We’re in a very transitory stage. We’ve got this RAG model with OpenAI; it’s heavily reliant on Bing. For anyone listening who doesn’t know already: Bing is the foundation of ChatGPT’s retrievals. So if your site is going to appear in ChatGPT, it has to get indexed in Bing first, and then ChatGPT will index it in its own index. If you’re not in Bing, you’re not going to make it through that first filter. But this RAG methodology that’s using Bing was really kind of made by academics.
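
If Bing is the gatekeeper, it’s worth checking your coverage there. Here’s a rough sketch using the Bing Web Search API with a `site:` query; the endpoint, the key, and the API’s continued availability are all assumptions on my part, so adapt it to whatever index-checking tooling you have:

```python
# Sketch: list which of your URLs Bing reports for a site: query
# (Bing being the first filter for ChatGPT retrieval).
# Assumes a Bing Web Search API key; endpoint/availability may have changed.
import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"
API_KEY = "YOUR_BING_API_KEY"  # hypothetical placeholder

def bing_indexed_urls(domain: str, count: int = 50) -> list[str]:
    """Return URLs Bing reports for a site: query on the domain."""
    resp = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        params={"q": f"site:{domain}", "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json().get("webPages", {}).get("value", [])
    return [p["url"] for p in pages]

if __name__ == "__main__":
    for url in bing_indexed_urls("example.com"):
        print(url)
```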

Nobody really knows what’s happening or what’s next, so we need to keep experimenting and adapting. SEO is not dead; it’s just evolving.

New tricks

You can put 2025 in your URL slugs, in your title tags, and in your meta descriptions and see upwards of a 20% improvement in ChatGPT citations just by including 2025.

Anecdotally, this seems to be because ChatGPT, at least, appends the current year to the queries it sends to search engines (mostly Bing).

This is a good strategy, but it complicates things: content needs refreshing every year, year-stamped URLs need redirects, and so on.
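
A sketch of the redirect half in nginx, with made-up paths (any router or CDN rule works the same way):

```nginx
# Sketch: refresh a year-stamped slug without breaking old links (hypothetical paths).
location = /blog/best-crm-tools-2024 {
    return 301 /blog/best-crm-tools-2025;
}
```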

If we’re going to create a blog post, it really does help to cross-post it to LinkedIn Pulse, so that when ChatGPT, or Perplexity, or whatever AI model does its search, it sees a few different sources saying the same thing.

Mentions and visibility across different web sources are the new backlinks, or so some people say. In the end it’s all marketing 101: increasing the surface area of your brand mentions should help no matter what.

…the best uplift I’ve seen so far, surprisingly, is author schema, which is really weird. Really building out a nice author schema has been really good for answer engine pickup.

While I don’t have data to back this up, it is not at all surprising. Bots and LLMs want to trust the content they are scraping, much as users want to trust the content they are reading. A good author schema reinforces that trust.
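
Here’s a minimal sketch of what building that out could look like: Article markup with a fleshed-out author Person. Names and URLs are made up:

```html
<!-- Sketch: Article with a built-out author entity. Names and URLs are hypothetical. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Optimize for AI Answer Engines",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/authors/jane-doe",
    "jobTitle": "Head of SEO",
    "sameAs": [
      "https://www.linkedin.com/in/janedoe",
      "https://x.com/janedoe"
    ]
  }
}
</script>
```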

The more clear, structured, easy-to-parse data you feed LLMs, the better. This seems obvious by now, and Josh has data to back it up:

Semantic, long, descriptive URL slugs win in AI search. That is clear to us, at least; the data is overwhelming in that regard.

…adding llms.txt to your robots.txt, even though it’s totally redundant (it’s absolutely redundant to do that), we’ve seen it drastically increase actual pickup from answer engines. It gets picked up quite significantly because of that.

Everybody is debating llms.txt. While it is not a standard like robots.txt, if you add the file in all the right places, scrapers will get to it and use it. This seems especially useful for documentation sites that are constantly scraped by coding tools like Cursor.
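
Here’s a sketch of the redundant-but-useful setup. The robots.txt pointer is a non-standard hint, not a real directive, and every URL is hypothetical:

```text
# robots.txt (sketch)
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml

# Non-standard pointer some AI crawlers reportedly pick up:
# llms.txt: https://example.com/llms.txt
```

And a matching llms.txt, following the format proposed at llmstxt.org:

```markdown
# ExampleProduct

> One-paragraph summary of the site, written for LLM crawlers.

## Documentation

- [Quickstart](https://example.com/docs/quickstart): install and first run
- [API reference](https://example.com/docs/api): endpoints and authentication
```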

In the ideal world right now, if you’re thinking about how to win for the next year or two, it’s about creating a landing page for any kind of problem-oriented solution. It’s a solution-oriented landing page: mapping every use case to a landing page for the same product, almost. It sounds insane, but it’s exactly right.
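
To close, here’s a toy sketch of that use-case-to-landing-page mapping, assuming a static site; the product name, use cases, and paths are all made up:

```python
# Toy sketch: generate one solution-oriented landing page per use case.
# Product name, use cases, and paths are hypothetical.
from pathlib import Path

USE_CASES = [
    ("invoice-automation", "Invoice automation"),
    ("expense-tracking-for-small-teams", "Expense tracking for small teams"),
]

TEMPLATE = """# {title} with AcmeApp

How AcmeApp solves {lower}, step by step.
"""

out_dir = Path("site/use-cases")
out_dir.mkdir(parents=True, exist_ok=True)

for slug, title in USE_CASES:
    # The slug doubles as the semantic URL: /use-cases/<slug>
    page = TEMPLATE.format(title=title, lower=title.lower())
    (out_dir / f"{slug}.md").write_text(page)
```

The point is the mapping, not the tooling: every use case gets its own semantic URL and a page an answer engine can cite directly.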