
The 2-Employee “AI Unicorn” Was Partly a Fake Doctor Factory

AI News Stories of the Week

Annie Neal

Growth Advisor


When The New York Times profiled Medvi as proof that artificial intelligence was transforming healthcare, the numbers seemed almost too good to be true. A telehealth company with $401 million in first-year revenue and only two employees, Medvi appeared to validate every optimistic prediction about AI-powered businesses. As it turns out, much of what made the story so impressive was fabricated.

According to multiple reports, Medvi’s founder Matthew Gallagher created over 800 Facebook pages posing as individual doctors with names like “Dr. Daniel Foster, MD” and “Dr. Sara Martin.” None of these people exist. The profiles featured AI-generated photos and fabricated medical credentials, each one promoting compounded GLP-1 medications (primarily semaglutide, the active ingredient in weight-loss drugs like Ozempic) through the Medvi platform. The economics of the operation were straightforward: invest $500 to $1,000 per page to run follower campaigns until each account reached 5,000 to 10,000 followers, then funnel that audience toward Medvi’s telehealth services.
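A rough back-of-envelope sketch, using only the figures reported above (per-page spend and follower targets are ranges, so the totals are ranges too):

```python
# Scale of the reported fake-profile operation, from the article's figures.
pages = 800                          # fake doctor pages reportedly created
cost_low, cost_high = 500, 1_000     # ad spend per page, in dollars
fol_low, fol_high = 5_000, 10_000    # follower target per page

spend = (pages * cost_low, pages * cost_high)       # total ad spend range
followers = (pages * fol_low, pages * fol_high)     # total audience range

print(f"Estimated ad spend: ${spend[0]:,} to ${spend[1]:,}")
print(f"Estimated audience: {followers[0]:,} to {followers[1]:,} followers")
```

In other words, an outlay on the order of $400,000 to $800,000 could plausibly have bought an audience of four to eight million followers, which helps explain how a two-person company acquired patients at that scale.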

The marketing deception went beyond fake profiles. Medvi’s platform used AI-generated deepfake before-and-after photos in its advertising, showing synthetic results rather than actual patient outcomes. In an industry where trust between patient and provider is foundational, Medvi built its entire customer acquisition strategy on fabricated medical identities and manufactured evidence.

Regulatory agencies had flagged concerns well before the flattering media coverage. The FDA issued Warning Letter #721455 in February 2026, citing misbranding violations related to compounded semaglutide products. This warning came a full six weeks before the New York Times published its laudatory profile, raising questions about the journalistic due diligence behind the story.

The situation worsened considerably in January 2026 when OpenLoop, the clinician network Medvi relied on for its medical services, suffered a data breach that exposed 1.6 million patient records. The compromised data included medical records and personal information, meaning that patients who had entrusted their health data to what they believed were real doctors now faced the additional risk of having that information exposed. A class action lawsuit filed in Delaware in November 2025 had already targeted Medvi’s deceptive marketing practices.

The Medvi case illustrates a deeper problem in the AI-powered business landscape. The same tools that can legitimately automate and scale healthcare access can also be used to fabricate trust at an industrial scale. Facebook’s advertising platform had no mechanism to distinguish between a real physician building a patient community and an AI-generated persona designed to funnel clicks to a telehealth service selling weight-loss medication.

Presented by: Dapta

For sales teams tired of cold leads, slow customer responses, and manual processes, Dapta is the ultimate tool.

Dapta is the leading platform for creating AI sales agents specifically designed to increase inbound lead conversion. Respond to your leads in less than a minute with voice AI and WhatsApp agents that convert.

If you want your team to sell more while AI handles the complex stuff, you have to try it.

For the broader telehealth industry, Medvi represents a cautionary tale. The pandemic accelerated the adoption of remote healthcare services, and AI promised to make those services more accessible and affordable. But the regulatory framework hasn’t kept pace. There is currently no requirement for platforms to verify that the medical professionals appearing in social media advertisements are real people with valid credentials. Medvi exploited this gap with remarkable efficiency.

The story also raises uncomfortable questions about the media’s role in amplifying AI hype. The New York Times profile that celebrated Medvi’s two-employee structure as visionary did not, apparently, investigate how a company with $401 million in revenue and virtually no staff was acquiring hundreds of thousands of patients. The answer, now clear, is that it was doing so through systematic deception.

As AI tools become more sophisticated and more accessible, the barrier to creating convincing fake professional identities will continue to drop. Medvi may be among the first major cases of AI-enabled medical fraud at scale, but it is unlikely to be the last.

Link here.

