What AI Safety Debates Can Teach Us About Public Health

I've been struck by the stark contrast between two very different conversations I saw recently. One was a podcast episode of "The Diary of a CEO" about AI safety, where guests calmly discussed existential risks to humanity. The conversation was respectful and nuanced, and I could easily follow their complex, solution-oriented arguments about how to mitigate those risks.

Then, there was a Senate hearing featuring Robert F. Kennedy Jr. While the subjects were just as vital—chronic diseases, vaccine safety, and conflicts of interest—the tone was anything but calm. It was a shouting match of personal attacks and accusations.

This contrast is a powerful illustration of a deeper problem: why can't we have the same kind of productive conversation about public health that we have about AI? Why do we spend so much time yelling at each other instead of collaborating on solutions?


The Disconnect Between Raw Data and Practice

Much of the problem in public health discourse stems from a disconnect between those who work with raw data and those who apply the established guidelines derived from it.

In modern medicine, many practitioners—doctors, nurses, and public health officials—are rigorously trained to follow established guidelines and best practices. These guidelines are presented as the "gold standard," an unquestionable set of rules for patient care. While this approach is essential for efficiency and consistency, it often places the practitioner several steps removed from the source of the data that informed those very guidelines.

The reality is that much of the data collection and analysis for clinical trials and public health studies is a highly specialized process, often handled by dedicated data science teams or outsourced to contract research organizations. These teams, with their expertise in statistics and data management, are the ones who dig into the raw data, sift through anomalies, and look for potential biases.
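
To make this concrete, here is a minimal sketch, in Python, of the kind of check such a team might run before a guideline is ever drafted. Everything in it is hypothetical: the file name, the column names, and the thresholds are invented for illustration.

```python
# Hypothetical sketch: the file, column names, and thresholds are
# illustrative stand-ins, not from any real trial.
import pandas as pd
from scipy import stats

df = pd.read_csv("trial_raw.csv")  # one row per participant

# Compare a baseline characteristic (e.g., age) across arms. A large
# imbalance at baseline can confound the apparent treatment effect.
treated = df.loc[df["arm"] == "treatment", "age"]
control = df.loc[df["arm"] == "control", "age"]

t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"Baseline age: treatment mean={treated.mean():.1f}, "
      f"control mean={control.mean():.1f}, p={p_value:.3f}")

# Flag obviously impossible values before they silently skew the analysis.
anomalies = df[(df["age"] < 0) | (df["age"] > 120)]
print(f"{len(anomalies)} rows with implausible ages")
```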

This separation creates a vulnerability. A practitioner who is removed from the raw data has little ability to question or critically evaluate the guidelines they follow. The guidelines can become an unquestionable "Bible" because they are presented as a finished, polished product, stripped of the messy, complex, and sometimes contradictory nature of the original data. The further we are from the source, the less likely we are to question it.

A potential path forward lies in the creation of a new medical specialty: the Medical Data Implementation Physician. This is a brainstorming idea, but imagine each hospital or academic center having a few of these specialists. They would be physician-scientists with expertise not only in clinical care but also in data analytics, biostatistics, and implementation science—the study of methods to promote the uptake of evidence-based practices. Their job would be to explain the raw data behind guidelines, study those guidelines' real-world effects, re-evaluate outcomes, and surface previously unknown side effects. They would serve as a crucial bridge, translating complex information into nuanced, actionable guidance for their clinical colleagues. This matters because some patients don't fit neatly within any guideline; in those cases, clinicians need to think critically and discuss options with colleagues using all the available data.
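
As a sketch of what this bridging work could look like day to day, imagine re-checking a rate cited in a guideline against local, real-world outcomes. Again, everything here is a hypothetical stand-in: the file, the columns, and the 2% benchmark are invented for illustration.

```python
# Illustrative only: file name, columns, and the guideline benchmark are
# hypothetical, not real clinical data.
import pandas as pd
from statsmodels.stats.proportion import proportion_confint

df = pd.read_csv("local_outcomes.csv")  # one row per treated patient

events = int(df["adverse_event"].sum())  # 1 if the event occurred, else 0
n = len(df)

# Adverse-event rate the guideline cites (hypothetical benchmark).
GUIDELINE_RATE = 0.02

# Wilson confidence interval for the locally observed rate.
low, high = proportion_confint(events, n, alpha=0.05, method="wilson")
print(f"Local adverse-event rate: {events}/{n} = {events / n:.3f} "
      f"(95% CI {low:.3f}-{high:.3f})")

if low > GUIDELINE_RATE:
    print("Local rate exceeds the guideline's cited rate even at the lower "
          "confidence bound; worth investigating before applying it as-is.")
```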


The Participants and Their Incentives

This brings me to another critical difference between the AI and public health debates: the incentives of the participants.

In the AI safety world, some of the most vocal advocates for caution and robust safety measures are the very scientists, engineers, and entrepreneurs who are building the technology itself. They have a shared, profound interest in getting it right—not just for the public good, but because the long-term success, safety, and societal acceptance of their creations are directly tied to their own professional and commercial futures. They are debating how to build something safely, not whether to build it at all. This often leads to discussions focused on systemic risks and technical solutions, which tend to be more intellectual and less emotional.

In the public health debate, especially in the broader media landscape, the loudest voices are often politicians, media personalities, and partisan commentators. Their primary incentive is frequently not to find a scientific solution but to win a political argument, energize their base, or gain media attention. The discussion quickly becomes a proxy for larger culture wars about personal freedom versus collective responsibility, and the focus shifts from solving a problem to assigning blame. From there, it devolves into a shouting match.

Furthermore, consider the incentives of a scientist working for a pharmaceutical company. Their job, career trajectory, and the financial health of their organization depend on the success of the products they are developing. While there are strict regulations, ethical codes, and dedicated review processes, the human element of self-preservation and career advancement can introduce subtle biases in how data is collected, interpreted, and presented. It's not necessarily malicious, but it's a reality that can make impartial data interpretation challenging for anyone in that position.
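
One way to see how this kind of non-malicious bias creeps in is to simulate it. The toy example below uses no real data; it simply shows that if an analyst measures several endpoints and reports only the most favorable one, "significant" findings appear far more often than the nominal 5%, even for a drug with zero true effect by construction.

```python
# Toy simulation of selective reporting ("pick the best endpoint").
# All numbers are synthetic; there is no real effect anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_TRIALS, N_PER_ARM, N_ENDPOINTS = 2000, 100, 5

false_positives = 0
for _ in range(N_TRIALS):
    best_p = 1.0
    for _ in range(N_ENDPOINTS):
        # Treatment and control drawn from the SAME distribution: no effect.
        treated = rng.normal(0, 1, N_PER_ARM)
        control = rng.normal(0, 1, N_PER_ARM)
        _, p = stats.ttest_ind(treated, control)
        best_p = min(best_p, p)
    if best_p < 0.05:  # report only the most favorable endpoint
        false_positives += 1

print(f"'Significant' result in {false_positives / N_TRIALS:.0%} "
      f"of null trials (nominal rate: 5%)")
```

With five independent endpoints, roughly one in four of these null trials produces a "significant" result. That is the kind of distortion that requires no one to falsify anything.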

This is precisely why we need more people working on the conflict-of-interest problem, and on deepening first-line providers' understanding of the data so they do more than blindly follow guidelines. It's also why calls for independent, publicly funded research are so vitally important: they create a space where data can be analyzed without the direct pressure of corporate or political interests, supporting a more objective pursuit of truth.


A Path Forward: Emulating the Best Debates

The path to a more productive conversation is to emulate the best of the AI safety debate.

We should be as ashamed of our failure to have a civil, data-driven discussion about public health as we would be if top computer scientists were simply yelling "AI is evil!" or "AI is perfect!" at each other. The goal for both fields should be the same: to be honest, collect data, pursue unbiased interpretations, and continuously improve outcomes for everyone.