Tom Meredith

The AI Burger

I keep hearing the same framing about AI and jobs. Either “AI is going to kill all jobs!” or “AI can’t do what I do!”

That’s not what I see happening. What’s happening is a sandwich… actually, a burger, because I love burgers.

Stick with me on this analogy.

The AI Burger has three layers. Everyone’s in one of them… and the layer you’re in determines everything about your next decade.

(Caveat: I’m not saying any of these is bad… just what I believe will happen.)

Bottom bun.

These are the people who are happy for AI to tell them what to do. It won’t be as obvious as “AI Overlords.” These people aren’t “replaced by AI.” They’re still working. Still needed. But control and direction have shifted. AI generates the plan, the analysis, the recommendation. The human executes it.

It could be something as obvious as my OpenClaw ordering Uber Eats for my lunch. (apparently, I’m hungry).

OpenClaw (a semi-autonomous AI agent) knows I like burgers on Tuesdays. So, it opens up the Uber Eats app, navigates to Burger Lounge, and places my order. Seems innocent. But, it has now started a chain of events. The humans at Burger Lounge cook and bag my food. The human Uber driver picks it up and delivers it to me.

Even though they didn’t know it, they were controlled by AI.

This also looks like AI business tools. For marketing teams, AI is already crunching data across distribution channels and customer records, scanning for optimizations. Then it feeds up a recommendation to the human. “You should cut spend on these campaigns and write a blog on X topic since we’re seeing trends in LLM search traffic.” There’s a high likelihood the human marketer will take those actions. They might get ChatGPT to write the MEO (Meaning Engine Optimized) blog, but in the end they were guided by AI.

There’s nothing wrong with this… a lot of people are fine with AI making their jobs easier. Most people want clear direction and meaningful execution. There’s freedom in that. But, there’s a ceiling to that value and a higher risk of replacement.

The meat.

This is the AI layer itself. LLMs (ChatGPT, Gemini, Claude), agents, autonomous systems. The part that does the cognitive heavy lifting… analysis, generation, optimization, pattern recognition, synthesis… direction.

I recently wrote about how $250 in AI compute can reproduce the raw token output of a knowledge worker’s entire year. That’s the meat layer. It’s where volume lives. The meat is getting thicker every quarter as models improve, context windows expand, memory compounds, and agent frameworks mature.

The meat doesn’t need breaks. It doesn’t have bad days. It doesn’t lose institutional knowledge when it changes jobs. It just processes.

Top bun.

Then there are the humans who build AI systems and point them at problems. Think subject matter experts. Senior workers with years of compressed knowledge. The leaders and strategists who set the intents and objectives of the business. The ones who decide what to optimize, how to evaluate the output, and when the AI is wrong.

This is the layer where I think the most fun (and opportunity) lives… at least, until AI solves this too and the whole analogy turns into an open-face sandwich.

If you see a problem that AI can solve and you make it happen, you’re top bun. If you’re trying to build a product with AI… top bun. If you’re finding ways to help others use AI to make their lives better, you’re the best kind of top bun… you’ve got sesame seeds on top.

Being the top bun can be hard too. It might mean shrinking headcount while growing value. Fewer people will be needed. But the ones who can do it well? They’re the most valuable workers in any organization.

What happens to the buns in the future?

This is the part that bothers me.

When people say “AI will create more jobs than it destroys,” they’re technically right. But the new jobs will mostly be bottom buns. They’ll exist because something needs a human signature. Because someone needs a burger delivery. Because AI won’t want to do the low value tasks… the ones that are cheaper for humans to do.

Factory automation didn’t create “better factory jobs.” It created call centers, service jobs… and digital marketers.

AI automation will follow the same pattern.

For now, the real opportunity is in the top bun. Building systems. Continuously discovering new product opportunities. Evaluating outputs against your expertise. The problem is that the top bun requires a specific set of skills that can’t be easily taught… judgment, taste, strategic thinking, the ability to see what’s missing. They’re compressed over a career.

And pretty soon, AI will have those same skills.

But in the meantime, enjoy creating new things. Solving problems. Making life easier… with AI. For me, I’m planning to maximize the current opportunity so I can retire and eat burgers when all of the top bun roles are gone.

Which layer of the burger are you in?


Everyone's optimizing for the wrong end of AI search

I spent a few weeks reading everything I could find on AEO and GEO.

That’s Answer Engine Optimization and Generative Engine Optimization, in case you’ve been blissfully offline.

Every tweet, blog, Reddit post, and YouTube video said basically the same thing. Write clear answers. Structure your content well. Think about how AI will present you in a summary.

Good advice. None of it is “wrong.”

But, they’re all describing the output side.

How AI presents the answer. What the results look like. Which format gets featured. You hear variations on…

“use more tables”

“Make sure to answer questions a user might ask ChatGPT” (as if this wasn’t the right way to add value in the first place)

“you must have llms.txt”, “no, you need a schema.js file”

What those all sound like to me is work that an agency can show you they did.

Nobody’s really asking what happens before that. How AI actually finds and selects content in the first place.

That’s the part that changes everything.

LLMs don’t search with keywords.

They search for “meaning.”

Ok, this is about to get a bit technical… LLMs like ChatGPT, Claude, Gemini, Grok, and Perplexity all work through embeddings… chunks of text are given meaning. Basically, the systems encode the text as vectors in high-dimensional space.

Quick math recap…

A vector is a set of coordinates. You’re probably familiar with x, y coordinates. Maybe z as well. That’s two and three dimensions, respectively. Well, embedding models use up to 3,072 dimensions (that’s OpenAI’s latest embedding model, text-embedding-3-large… most use somewhere between 768 and 3,072) and those coordinates actually encode the meaning.
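If you want to see those coordinates for yourself, here’s a minimal sketch against OpenAI’s embeddings API (assuming the openai Python package and an API key… the example sentence is just mine):

```python
# A quick look at embedding dimensions via OpenAI's API.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-large",  # OpenAI's 3,072-dimension embedding model
    input="LLMs don't search with keywords. They search for meaning.",
)

vector = response.data[0].embedding
print(len(vector))  # 3072 -- one coordinate per dimension
print(vector[:5])   # the first few coordinates of the "meaning"
```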

It’s weird… I’m not totally sure how it works either. But the foundational research is Google’s Word2Vec paper from 2013 (Mikolov et al.). They showed that vector math on words actually works. King minus man plus woman equals queen. Seriously. The vectors captured meaning well enough to do algebra on concepts.
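You don’t have to take their word for it. Here’s a sketch that reproduces the analogy with the gensim library and Google’s pretrained News vectors (a real model, but fair warning… it’s a large download on first run):

```python
# The classic Word2Vec analogy, via gensim's downloader.
import gensim.downloader as api

# 300-dimensional vectors trained on Google News (big download, cached after).
model = api.load("word2vec-google-news-300")

# king - man + woman ≈ ?
result = model.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # [('queen', ~0.71)] -- algebra on concepts, as promised
```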

Now, when a model retrieves content, it’s finding proximity… what’s semantically closest to the query. Not what literally matches the words. What matches the meaning.
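To make “proximity” concrete, here’s a minimal sketch using the open-source sentence-transformers library (the query and documents are invented for illustration):

```python
# Retrieval by meaning: score documents by cosine proximity to a query.
# Assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

query = "How do I get my content picked up by AI search?"

docs = [
    # Shares almost no words with the query, but means the same thing.
    "Writing so that language models surface your pages when people ask questions.",
    # Shares the word "content" but means something unrelated.
    "Our content team reviews vendor contracts for the legal department.",
]

query_vec = model.encode(query)
doc_vecs = model.encode(docs)

# Cosine similarity = closeness in the embedding space.
print(util.cos_sim(query_vec, doc_vecs))
# The paraphrase scores far higher than the word-overlap doc.
```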

This is a completely different mechanism than keyword search.

And it means most SEO thinking is the wrong mental model for AI retrieval.

I started calling this MEO… Meaning Engine Optimization. Not because I love coining things (even though I do… ™ is literally my initials), but because the concept needed a name. Nobody had claimed it yet. So here we are.

The distinction is simple.

AEO and GEO are output-focused. They ask: how do I show up well once AI has already found me?

MEO is input-focused. It asks: how does AI find and select me in the first place?

One layer deeper. Many layers more meaningful.

The clearest proof I’ve seen is Exa.ai. Exa is a search engine built on this concept and trained on link prediction. Not keyword matching. It retrieves pages based on meaning and context. You search for a concept, it finds pages that mean that thing… not pages that just say that thing.
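If you’d rather poke at it from code, Exa ships a Python SDK. A sketch from my reading of their docs (treat the exact parameter names as an assumption… check exa.ai’s documentation for the current API):

```python
# Meaning-based search via Exa's Python SDK (pip install exa_py).
from exa_py import Exa

exa = Exa(api_key="YOUR_EXA_API_KEY")  # placeholder key

# "neural" search retrieves by embedding proximity, not keyword match.
results = exa.search(
    "writing that explains how AI systems decide which pages to retrieve",
    type="neural",
    num_results=5,
)

for result in results.results:
    print(result.title, result.url)
```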

Use it for a week and you’ll notice Google feels manipulated afterward.

Keyword-optimized content often ranks lower in Exa. Meaning-dense content, where a clear point of view runs through the whole piece, performs better.

LLMs learn the same way. They’re trained on massive amounts of text and build internal maps of how concepts relate to each other. The content that lands closest to what someone means when they ask a question… that’s what gets retrieved.

GEO and AEO tactics are fine. They’ll help at the margins. But, they’re modifications of the old model. You’re polishing the presentation of a result you’re not even being retrieved for… or won’t be for long if you’re thinking in terms of keywords.

The mechanism of the future is meaning. The unit of optimization is meaning.

And that’s what I’m calling MEO. I’ll go deeper on how this actually works and what you can do about it in the next few posts.
