No Savior Coming: Why We Urgently Need to Build a True Safety Culture


The familiar squeak of treadmills and the rhythmic clang of weights at the gym usually offer a welcome escape, a chance to clear my head. But a quick "how are you doing?" to a physician friend I bumped into recently shattered that tranquility, quickly devolving into a passionate and, frankly, disheartening rant. She recounted a recent patient interaction that perfectly encapsulated her frustration with the eroding safety culture in medicine. A patient awaiting a cardiology consultation was upset and planning a formal complaint: although the cardiologist's notes documented rounds and a "plan per PA," she insisted she had never been seen. My friend felt helpless, caught between a disgruntled patient and a system seemingly unwilling to address such a glaring discrepancy. Her conclusion was stark: no one, it seems, is truly prioritizing safety and efficacy anymore.

Her exasperation resonated deeply with me, echoing a sentiment I've often felt throughout my own career. It's easy to suggest that individual physicians should be the vanguards of change, tirelessly advocating for better processes and improved patient safety. However, a moment of honest reflection reveals the harsh reality: consistently fighting these battles within a deeply entrenched system is a surefire path to burnout. It's a lonely and often thankless endeavor, inevitably creating friction and enemies among those resistant to change, even when that change promises a tangible improvement in patient care. This isn't just about medicine, though; as I considered her story, the parallels to the burgeoning field of Artificial Intelligence became strikingly clear.

Just recently, I immersed myself in a series of discussions and videos on AI safety, and a recurring theme felt potently familiar: the urgent call for more research, stricter regulations, and an almost savior-like figure to champion the cause. What struck me most was the inherent tension between the relentless drive to produce ever more advanced AI and the simultaneous, yet often secondary, plea to study, investigate, and propose solutions to the very safety concerns that rapid development creates. This felt uncannily like the practice of medicine, where the relentless push for "more" often overshadows the critical need to pause and examine the risks inherent in what we do.

I recalled the inspiring work of Dr. Don Berwick, a former pediatrician who spearheaded a patient safety movement. His philosophy emphasized that "Everyone has a role to play in advancing safe health care." Yet the realities of long shifts, emotionally draining cases, and overflowing electronic health record inboxes mean that for many physicians, even thinking about solutions to workflow or EHR problems, let alone proposing them, is daunting. Usually, we just learn to "deal with it." After all, who pays a physician to solve operational problems? There's no ICD-10 code for that. And while occasional department meetings might touch on these issues, they're frequently interrupted by pages, preventing any truly meaningful, collaborative work.

This leads me to wonder: what is the true culture surrounding AI safety? Is this prioritization of production over meticulous safety investigation a uniquely human phenomenon, where our inherent drive for innovation overshadows the need for caution? Does examining safety issues force us to confront our own shortcomings and egos? Is it easier to simply label risks as "rare" and move on because we don't yet grasp the full scope of the problem, or because our pride prevents us from admitting we lack all the answers? Are there even financial incentives to genuinely solve these safety problems, or does the reward lie primarily in the next breakthrough, the next big launch? And what about humanity itself? Doesn't our shared future demand that we collectively investigate and create robust solutions for technologies that will dramatically affect all of us? It often feels like we're caught in a repeating loop, as if history is playing out its patterns once again. How do we break that cycle and produce a profoundly different, safer outcome?