by: Tanaaz Khan
When I was in academia, you couldn’t publish anything without considering cross-arguments or acknowledging your study’s limitations. It would end your career if you didn’t.
But in B2B marketing, this practice would make you the most rigorous person on the team.
I didn’t land a role in marketing the traditional way. I worked in pharmaceutical R&D and was on my way to being an infection biologist.
“Research” meant something in that world. It meant methodology — not the “we surveyed 50 sales leaders” kind, but the kind where you actually sat with one problem for weeks, if not months on end, to understand what the data didn’t tell you.
You had to pressure test your thinking from angles that made you uncomfortable. But in the B2B world, a simple Google search or a Perplexity-fueled “deep research” session counts as the end-all, be-all of research.

In academia, rehashing things would be considered a crisis. In the B2B world, it’s an average Tuesday. Nobody even bats an eye because the bar was never set high enough to notice in the first place.
We don’t have a content quality problem. We have a critical thinking problem.
We’re not spending enough time building our gray matter. You build expertise by taking the time to learn your audience and live in their world. But if we won’t even do that, how are we going to build the skill of connecting disparate ideas?
The efficiency machine has eaten our curiosity
It’s easy to blame SEO content and name it as the obvious culprit. The content marketing space is built on an operational model that prioritizes speed and volume. But that model has also forced us to redefine research by reducing it to its most efficient form: find two reports, pick the most agreeable stat, build your argument around it, and loosely tie the thesis to your product.
- Good research ≠ Finding random statistics
- Good research ≠ Asking the most obvious questions
- Good research ≠ Publishing the most biased insights

Take a standard content brief: “Write a guide on email marketing.” The outline is predictable — definition, benefits, challenges, best practices. IYKYK.
But nobody’s asking what email marketing looks like operationally in 2026. What compliance changes recently came into effect? How are those shifts reshaping the KPIs email marketers are actually measured against? How are the companies that haven’t adapted faring? What are practitioners saying in their own words?
Those questions require you to move beyond the brief and into your audience’s world. And nothing in the system rewards that move, at least for 90% of the companies out there.
But the 10% that prioritize this step publish content that survives algorithmic changes and shapes perceptions for years to come.
No, “deep research” won’t save you because your muscles are atrophying
We have access to some of the best AI models today. ChatGPT’s and Perplexity’s Deep Research modes feel like a superpower because they can cut your research time by 90%.
That said, the frame of reference for that comparison is broken. When you compare yourself to a junior analyst who’s still learning the ropes and whose job is to compile reports into pretty decks, of course it feels like a superpower.
But these models have made it dangerously easy to mistake compilation for comprehension.
The average analyst (or B2B marketer) can enter a simple prompt and receive an in-depth report in 15 minutes. However, only an experienced analyst can tell if the report is worth reading in the first place.
Even with these tools, the quality of the output depends entirely on the quality of the context and the prompt you bring to it. Feed it a shallow question, get a shallow answer with better footnotes. The synthesis — the ability to contextualize a problem within your audience’s reality, to see what the data implies but doesn’t say outright — that’s still the job. That’s where our gray matter earns its keep.
But most teams aren’t being incentivized to flex that muscle. When performance is measured by output volume and keyword rankings, there’s no structural reason to spend eight hours in customer calls or community threads before writing a single word.
The system says: produce. So people produce. And now the efficiency machine uses AI as an excuse to keep that engine running.
The real cost isn’t just worse content. It’s that teams that operate at the surface for years eventually forget that real depth exists (or matters). They stop asking “How does this actually play out in practice?” because they’ve been trained to stop at “What are the benefits?” They lose their eye for the difference between insight and platitude.
Now, I’m not arguing that every content asset needs a 90-day research phase. But there’s a vast middle ground between “I Googled it” and “I ran a longitudinal study” — and very few teams occupy it. For example, you could start:
- Spending an hour in a practitioner community before outlining.
- Mapping where three podcast guests agree and disagree on the same topic.
- Asking your sales team not “What do prospects want?” but “What do they get wrong, and why?”
It’s the bare minimum today.
The bar for research in B2B is not exceptionally high; we’ve just convinced ourselves that even the most basic research counts as exceptional. If your “research” process wouldn’t hold up to a single follow-up question from a peer — or worse, your actual audience — it’s not actually research yet. ■
Tanaaz Khan is a B2B SaaS content strategist specializing in original research and bottom-of-funnel content. She helps growth-stage companies build content assets that win deals by owning their narrative and enabling buyers to choose with confidence. You can read more of her musings in her newsletter, The Content Loop.