We were in the middle of replacing an enterprise-wide application.

As expected, requirements gathering was uneven. Some departments came prepared—with workflows, documentation, and a clear understanding of how their function operated within the broader system. Others arrived with little more than fragments. That imbalance slowed everything: scoping stalled, dependencies remained unclear, and constructing a credible program roadmap for Board-level updates became difficult.

So we did something practical. We used AI to generate a high-level, generic roadmap. Not because we thought it would reflect our actual system, but because we needed a structured way to visualize what should exist—the dependencies, sequencing, and phases typically present in programs of this scale.

The reaction was immediate. “This is ChatGPT thinking.” Dismissed. Generic. Faux expertise.

Fair enough—at least partially. But that reaction missed what actually happened next.

The Problem With “ChatGPT Think”

We’ve all seen the LinkedIn posts warning about it—lamenting the rise of “faux experts.” The posts themselves are often easy to spot: too precisely phrased, too neatly packaged, polished to the point where something feels manufactured.

They’re structurally sound. Even insightful at a glance. But they’re missing something. The smell of authenticity. And, dare we say, experience.

When people say something “reads like ChatGPT,” they’re reacting to a pattern: structurally correct; broadly applicable; light on context; heavy on confidence. It sounds right, but doesn’t help you do anything.

That critique is valid—especially in execution-heavy environments. But it assumes that if something isn’t immediately actionable, it has no value. That’s the mistake.

What the “Generic” Roadmap Actually Did

The roadmap wasn’t usable as a plan. It didn’t know our systems, constraints, or internal realities. But that’s not what we used it for. We used it to force alignment.

Suddenly, teams that had struggled to explain their role could see where they fit. Dependencies that had been implicit became visible. People started recognizing steps they had been skipping—steps they hadn’t realized they were skipping, even during walkthroughs with business analysts.

The “generic” roadmap did something our detailed conversations hadn’t: It gave everyone the same picture. What was dismissed as faux expertise turned out to be an effective coordination tool.

The Accounting Version of This Problem

You see the same dynamic in accounting. There was a time when every organization needed technical accountants who could read US GAAP and IFRS guidance chapter and verse. They were the interpreters of the rules. The human index of the codification.

Today, AI can do that instantly—and at negligible cost. Ask it about lease accounting, revenue recognition, or consolidation structures—it will give you a clean, structured answer in seconds. In many cases, more clearly than a human would.

So does that make everyone an expert? Not even close.

There’s an old joke about how to identify the best accountant.

You ask candidates: what is 1 + 1?

  • If they say three—pass on them.
  • If they say two—also pass.
  • You’re looking for the one who answers: “What do you want it to be?”

It’s a joke, but it captures something real. The value was never in knowing that 1 + 1 equals 2. The value is in understanding: context; intent; materiality; risk; and the consequences of getting it wrong.

AI can tell you what the rules say. It cannot tell you how those rules play out in your specific situation, under your specific constraints, with your specific stakeholders. That’s where the work actually is.

AI and the Illusion of Expertise

The concern that AI is creating “faux experts” is understandable. It can produce language that mimics expertise convincingly. But the framing is off. AI didn’t create faux expertise. It scaled access to baseline competence.

It produces what we might call competence at the mean—coverage of the standard case, the typical scenario, the 95 percent of situations where the rules apply cleanly. But expertise doesn’t live there.

Expertise shows up in: the exceptions; the gray areas; the edge cases; the moments when the clean answer doesn’t quite fit. That’s where judgment replaces knowledge.

Knowing vs. Doing

This gap isn’t new. Anyone who has assembled something complex from instructions has felt it. The instructions are clear. The steps make sense. And yet, when you actually start, things don’t quite line up. A part doesn’t fit the way you expect. A step assumes you know something it never explained. What should take an hour turns into an afternoon.

Try building a gas grill with 1,000 parts and a manual. The instructions are technically complete. But that doesn’t mean the process is easy—or even smooth. It requires patience, judgment, and the ability to work through small problems the instructions never anticipated.

That’s the difference between knowing and doing. And in some domains, that gap is widening, not shrinking.

Years ago, I could work on cars by following instructions. Basic repairs, replacements—it was manageable. Today, I wouldn’t even attempt it. The systems are too complex, too integrated. Even with step-by-step guidance—or AI walking you through it—you’re still missing the practical experience that lets you interpret what you’re seeing in real time.

At that point, the instructions haven’t failed. You’ve just reached the limit of what instructions can do. AI operates firmly on the “knowing” side of that divide. It can explain the system, outline the steps, and describe the mechanics.

Experts operate on the “doing” side. They recognize when the situation doesn’t match the instructions. They adjust. They know which steps matter, which ones don’t, and when the entire approach needs to change. That’s not something you get from reading. That’s something you get from having done it before.

What Actually Changes

The real shift is this: AI compresses the gap between being uninformed and being competently average. Basic knowledge is now widely accessible. Standard frameworks are easy to generate. Routine analysis can be replicated quickly.

So the value of knowing the rules declines. But the value of applying them correctly does not. If anything, it becomes more obvious who can and who can’t.

When You Actually Need an Expert

Most of the time, you don’t. For routine decisions, standard processes, and well-understood problems, AI is more than sufficient. It’s fast, consistent, and increasingly reliable. But the equation changes as soon as the situation becomes ambiguous, the systems interdependent, and the stakes higher.

A simple way to think about it: use AI until the cost of being wrong exceeds the cost of experience. At that point, you’re not looking for someone who can quote the rule. You’re looking for someone who has lived through the exceptions.

So What Should We Make of It?

Yes, there will be more noise. More polished explanations. More people who can sound like they know what they’re talking about. But that was already true. We’ve always filtered information. We’ve always separated signal from noise. AI just increases the volume.

The more important shift is this: AI raises the baseline. And in doing so, it raises the threshold at which expertise actually matters. Most of the time, you don’t need the expert. But when you do—when you’re operating at the edges, when the answer isn’t obvious, when the cost of being wrong is real—it becomes clear very quickly who has actually done the work. And who has just read about it.


If you have a perspective to add or a different way of seeing this, I’d welcome the discussion below. If you’d rather reach out directly, you can also connect through the Contact page.
