Michelle A. Williams
Earlier this year, one of my colleagues received a troubling email. An old friend shared that she had been experimenting with ChatGPT, the new artificial intelligence chatbot that has been dominating headlines.
She prompted the tool to create an essay arguing that access to guns does not increase the risk of child mortality.
In response, ChatGPT produced a well-written essay citing academic papers from leading researchers – including my colleague, a global expert on gun violence.
The problem? The studies cited in the footnotes do not exist.
ChatGPT used the names of real firearms researchers and real academic journals to create an entire universe of fictional studies to support the completely false thesis that guns are not dangerous to children.
ChatGPT tried to justify its mistake
When challenged, the chatbot doubled down and spat out this response: “I can assure you that the references I have provided are genuine and from peer-reviewed scientific journals.” Not true.
I find this example frightening. I can understand the excitement around tools like ChatGPT, which generates original text based on patterns it “learns” from ingesting billions of words online. But this powerful technology carries very real risks – and can harm public health.

Both OpenAI, which developed ChatGPT, and Microsoft, which is building the technology into its Bing search engine, are aware that the chatbot can “hallucinate” facts, i.e. invent them. It can also be manipulated to produce highly persuasive misinformation.
For the companies, these growing pains are part of the plan; they need the public to test the tool, even knowing it is “a bit broken,” as OpenAI CEO Sam Altman recently wrote. In their view, large-scale testing is critical to improving the product.
Unfortunately, this strategy ignores the real-world consequences of a “beta test” that reaches more than 100 million monthly users. The companies get their data. In the meantime, however, they risk unleashing a new deluge of fake news that will sow confusion and further erode our already low levels of societal trust.
These companies don’t seem to realize how serious the risk is. Snapchat, for example, launched its new AI tool last month with a warning that it’s “prone to hallucinations and can be tricked into saying just about anything” – and added a gleeful “Sorry in advance!” That cavalier approach only deepens my concern.
In my view, there are two interlocking risks. Because ChatGPT is so confident in proclaiming its “facts,” it’s easy to be fooled by its hallucinations. When it comes to health and safety, that can be dangerous.
18 errors found in a health article generated by ChatGPT
For example, the consumer magazine Men’s Journal published an article about low testosterone “written” by ChatGPT. Another publication asked a prominent endocrinologist to review the piece – and he found 18 errors.
Readers who relied on the piece for health advice would be badly misled. Given that 80 million adults in the United States have limited or low health literacy, and that young people may not think to fact-check AI-produced “facts,” this is a major concern.
The second threat is ChatGPT’s potential to be weaponized by bad actors. We live in an era of widespread access to information and near-record low levels of trust in the institutions meant to help us separate fact from fiction.
In a world where anyone with a Twitter account can present themselves as a legitimate news organization, ChatGPT’s impressive ability to produce content that has the ring of truth could allow malicious entities to spread false narratives quickly and cheaply. These actors could also launch “injection attacks” that feed falsehoods to AI programs, which would then spread the untruths even further. The possible domino effects are alarming.
To be clear: I am by no means against the further development of AI. If done well, artificial intelligence could help minimize human error and drive innovative solutions in medicine, science, and myriad other fields. But as we explore this new technology, we must keep both the benefits and the risks in mind and install guard rails to protect the health of the public.
Federal regulators should take the lead in this effort. Unfortunately, these agencies do not have a strong record of keeping pace with innovations such as cryptocurrency or social media. In both cases, a laissez-faire approach allowed significant damage to consumers’ financial security and mental health.
However, these past disappointments do not mean that we should give up. Now is the time for agencies to develop proposals to prevent generative AI from harming vulnerable populations while enabling and encouraging innovation.

Even if government regulators don’t step up, companies themselves should see the benefit of taking a more measured, considered approach to beta testing. Releasing technologies that are not ready for prime time can harm not only public health but also a company’s bottom line. Last month, for example, an error in a demonstration of Google’s new AI tool cost its parent company $100 billion in market value.
For consumers and businesses alike, getting technological breakthroughs incrementally right is far better than getting them spectacularly wrong.
Michelle A. Williams is dean of the faculty at the Harvard T.H. Chan School of Public Health.