AI amplifies what’s already there. And for most teams, that’s the problem.
Last week I co-hosted Session 1 of GALA Academy’s three-part course on AI for the language industry. L’Meese Greaney is up next (Sales, May 12). Erik Vogt closes it out (Growth, May 19). They’ll both be excellent.
I keep coming back to this line:
Most marketing teams are still optimizing for stages 3–4 of the buyer journey. The decision gets made in stages 1–2.
That gap is where AI either rescues you or exposes you.
(Side note: yes, “AI Geworfenheit” is now a phrase I use unironically. It’s useful. Here’s why.)
AI Geworfenheit is real and naming it helps
Before any playbooks, I tried to name the feeling a lot of us have right now. I certainly do.
Geworfenheit is Heidegger’s word for “thrown-ness”, i.e. being dropped into a world without a map. AI Geworfenheit is that, but applied to your inbox and your roadmap.
If it sounds dramatic: good. It’s accurate.
The three flavors are familiar:
- Feeling behind everyone else, even when you’re actively learning.
- Feeling like everyone else has it figured out. (They don’t.)
- Feeling like skills you spent years building got imitated last Tuesday.
Naming it doesn’t fix it. But it does something useful: it gives people permission to ask basic questions in front of their peers without performing certainty.
That alone is worth the first ten minutes of any AI conversation right now.
AI amplifies what’s already there
If I could write one line on every marketing whiteboard, it would be this:
AI applied to a weak strategy produces a faster, scalable version of the wrong thing.
When teams get the worst results from AI marketing, it's usually not because they're "doing AI wrong." It's because they're automating before the fundamentals are real: faster output, more distribution, zero human checkpoint, sent to more people, more often.
The output tends to be bad because the input was bad and now there’s just more of it.
The flip side is the hopeful version: AI can be the extra force that makes better work possible, if your fundamentals are solid.
So before any AEO talk, before any prompt engineering, before any agent stack: do the boring audit. Is your audience profile a real document or a demographics deck? Is your positioning sharp, or is it three adjectives your whole industry uses?
If the answers are soft, fix those first. AI will not save a positioning problem. It will scale one.
The shortlist is set before you see the RFP
If you sell into enterprise, here’s the buyer journey nobody really likes to talk about:
- Stages 1–2 (roughly 0–70%): Buyer reads online, asks peers, checks G2, asks Perplexity. You don’t see this.
- Stage 3 (~70–85%): Three to five vendors get shortlisted. Built on peer recommendations, AI answers, content already consumed.
- Stage 4 (~85–100%): RFP arrives. Pricing dominates. The strategic-fit conversation is already over.
Sales typically enters at stage 4.
So marketing’s job is not to optimize the demo deck. Marketing’s job is to be in the room at stages 1–2, when the shortlist is being assembled inside someone’s head and inside AI responses.
Which brings us to the part of the session that may have made a few people visibly uncomfortable (in a good way, of course!).
If you’re not in the answer, you don’t exist
We showed a quick demo. Asked Claude an (overly) simple generic question: “best LSPs for medical device translation in Europe.” Then narrowed: enterprise scale. Then narrowed again: pricing models. Then again: what to negotiate.
In ~10 minutes, even with a toy prompt like this, the buyer had:
- A longlist of providers.
- A shortlist matched to their fit and preferences.
- An evaluation matrix with criteria.
- A draft RFP package, including outreach emails and tips on what to send to whom.
Zero LSP websites visited. Zero phone calls. Zero forms filled.
You were on that list, or you weren’t. Two outcomes. No third option.
That's AEO (Answer Engine Optimization). Call it GEO or AI SEO if you prefer; the acronym is still settling. The mechanics differ from SEO, and most websites are still written for humans skimming in 2015. That's not necessarily wrong, since SEO still drives most of the traffic for most companies, but change is afoot.
The opportunity is genuinely large: tight, specific specialists can now outperform bloated generalists inside AI answers.
The catch is equally real: earning citations requires unsexy work. Answer-first openings. 40–60 word citable chunks. Named clients with real numbers. Certifications visible on the homepage with numbers and dates. Not vague “100+ languages” fluff.
The examples here come from the language industry, but the pattern is broader. Most homepages still open with something like:
“Welcome to our company. We are a premier provider of language solutions serving clients globally across many industries with dedicated teams, a passion for quality and a commitment to excellence spanning two decades of continuous innovation.”
That paragraph cites no one. It will be cited by no one. It exists for a human skimmer in a fluorescent office, eleven years ago.
The answer-first version reads like this:
“We translate clinical trial content for Life Sciences companies in regulated markets. ISO 17100 + 13485 certified. Validated TMs across 48 locales. Cut FDA submission turnaround by 38% for a top 10 pharma.”
Same company. Same facts. One is citable. One is… not.
Run the loop, don’t run the launch
AEO is not a project you ship and forget. Models update. Competitors update. Your share of voice drifts.
So you (we all) need a loop. Four steps:
- Build. A set of 20–30 prompts spanning the buyer journey. Run monthly across ChatGPT, Claude, Perplexity, Gemini.
- Measure. Five things: recognition, share of voice, citations, framing, sentiment. Forget traffic. It can drop while influence rises. That is a feature, not a bug.
- Diagnose. When you are not cited, it is usually one of three things: weak entity authority (the web doesn’t understand what you are), weak source strategy (no third-party sources vouch for you), or weak specificity (your messaging is generic and the model averages you out).
Since LLMs are probabilistic rather than deterministic, sometimes your chosen LLM is just having a bad day. More often than not, though, it's one of the three above.
- Fix. Rewrite for citability, pitch original research to the trade press, refresh G2 / Clutch / Capterra listings with named clients.
Then run it again. The models will not stop changing. Your loop should match the cadence.
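To make the "measure" step concrete, here's a minimal sketch of one of the five metrics, share of voice, computed over answer texts you've saved from a monthly prompt run. The brand names and sample answers are invented for illustration; in practice you'd feed in your own brand, your real competitors, and the actual responses you captured.

```python
from collections import Counter

def share_of_voice(answers, brands):
    """Fraction of total brand mentions each brand gets across saved AI answers.

    `answers`: response texts captured from your monthly prompt runs.
    `brands`: your brand plus the competitors you track.
    """
    counts = Counter()
    for text in answers:
        low = text.lower()
        for brand in brands:
            if brand.lower() in low:
                counts[brand] += 1  # count one mention per answer, per brand
    total = sum(counts.values())
    return {b: (counts[b] / total if total else 0.0) for b in brands}

# Hypothetical saved answers and brand names, purely for illustration.
answers = [
    "For medical device translation, consider Acme Lingo and GlobalWords.",
    "Acme Lingo is often shortlisted for regulated-market work in Europe.",
    "GlobalWords and TransPro both serve enterprise clients at scale.",
]
sov = share_of_voice(answers, ["Acme Lingo", "GlobalWords", "TransPro"])
# → {"Acme Lingo": 0.4, "GlobalWords": 0.4, "TransPro": 0.2}
```

Run this against the same 20–30 prompts each month and the drift in these numbers is your early-warning signal, long before website traffic moves either way.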
Two traits beat everything else
I’ll close with the same thing I closed the session with: experimentation and execution. Everyone in the room has access to the same models. Same APIs. Same docs.
The advantage is how fast you ship something, measure it, and decide whether to keep it. There is no playbook yet. There are no certified experts. (Be very suspicious of anyone claiming otherwise.) The teams that pull ahead in the next twelve months are the ones who treat AI marketing like a learning log:
hypothesis → prompt → result → learning → iterate.
Pick one workflow this week. Add AI. Measure the output. Improve or discard. That’s the whole thing.
If you were in the room last week: THANK YOU. If you weren’t, you can still attend Session 2 (Sales, May 12) and 3 (Growth, May 19), or watch the recordings afterwards. Just drop me a line.