The Supervision Trap

The 4:15 PM Ghost

It’s 4:15 PM on a Tuesday. You are three patients behind, and your inbox is humming. Your AI assistant has been busy today—scanning labs, drafting portal messages, and suggesting titrations for your chronic care patients.

You see a pre-written message to a 72-year-old with heart failure: "Your labs look great! We are increasing your lisinopril to 20 mg as planned." With a waiting room full of people, you click "Batch Send" to clear the queue.

Two days later, that patient is in the ER with acute kidney injury. The Agentic AI model saw a "normal" potassium but missed the subtle 0.3 mg/dL creatinine "creep" over the last month—a nuance you would have caught in five seconds if you weren't "supervising" fifty automated decisions in the frantic margins of a ten-minute physical exam.

The Historical Parallel: A Lesson Unlearned

We have been here before. Twenty years ago, the "Mid-Level" revolution promised to extend the physician’s reach. But for many Primary Care Physicians, it became a game of Legal Shielding.

We were never taught how to supervise. There was no "Residency in Oversight." We were simply told that our signature at the bottom of a chart—even one we barely glanced at—meant the care was safe. We were expected to be in two places at once: maintaining our own clinical excellence while acting as a "magical" legal backstop for decisions we didn't personally make.

The New Challenge: The Agentic AI Model

The industry expects the same magical supervision for the Agentic AI model. Vendors promise that AI will "reduce the burden," but in reality it merely shifts the burden.

We are moving from doing the work to auditing the work. And as any clinician knows, auditing a "black box" is often more cognitively taxing than just seeing the patient yourself. It requires you to reverse-engineer a machine’s logic while the clock is ticking.

The Proposal: From Ghost to Pilot

If we repeat the mistakes of the past—where supervision was an afterthought—we will simply build a more efficient way to fail our patients. We need a new model of Algorithmic Stewardship:

Closing Thoughts

We are at a crossroads. We can continue to be the "ghosts" in a machine we don't control, or we can demand the training and the time to be true Chief Pilots.

I leave you with these three questions: