
Activist Robby Starbuck Sues Meta Over AI Answers About Him

Defamation suit alleges Meta AI keeps erroneously linking activist to the Jan. 6 riot in Washington, D.C.


Robby Starbuck, the conservative activist, filed a defamation lawsuit against Meta alleging its artificial intelligence tool smeared him by falsely asserting he participated in the Jan. 6, 2021, riot at the U.S. Capitol.


Starbuck says he discovered the problem last summer, when he was waging an online campaign to get Harley-Davidson to change its diversity, equity and inclusion, or DEI, policies. A Harley dealer in Vermont fired back on Aug. 5 by posting on X a screenshot purportedly of a Meta AI response saying Starbuck was at the Capitol riot and linking him to QAnon.


Starbuck denied the allegations the same day, responding in a social media post that “Meta will hear from my lawyers.” He says Meta AI was still making the same unproven claims about him months later, prompting him to file suit.


His lawsuit, filed in Delaware Superior Court on Tuesday, seeks more than $5 million in damages.


“You start to wonder how much further this is going to seed out into society,” Starbuck said in an interview. He said he is worried about how AI could be used to determine creditworthiness, insurance risk or overall reputation.


A Meta spokesman said, “As part of our continuous effort to improve our models, we have already released updates and will continue to do so.”

Starbuck joins a small list of plaintiffs who are trying to hold AI companies accountable for false and reputation-damaging information generated by large language models. No U.S. court has awarded damages to someone defamed by an AI chatbot.


A Georgia judge in Gwinnett County last year allowed a defamation lawsuit against OpenAI to proceed to discovery after denying the ChatGPT maker’s motion to dismiss.


In that case, conservative talk radio host Mark Walters claims ChatGPT said he was the subject of a lawsuit accusing him of embezzling funds from a gun-rights organization.


The judge heard arguments on OpenAI’s motion for summary judgment in April. In its defense, OpenAI emphasized how often it warns users about the potential inaccuracy of ChatGPT outputs and said Walters never alerted OpenAI to the alleged error or sought a retraction before bringing his complaint.


Microsoft in 2023 was sued by a man alleging that its Bing search engine and AI chatbot confused him with a convicted terrorist of a similar name. A federal judge in Maryland halted the litigation in October 2024 in a ruling requiring the plaintiff to pursue his claims against Microsoft in arbitration.

Microsoft and OpenAI declined to comment beyond their court filings.


Social media and other internet sites generally can’t be held liable for what their users post on their platforms. But legal experts say that legal shield, under a federal law known as Section 230, doesn’t cover humanlike responses produced by automated AI programs in response to user prompts.


Clare Norins, director of University of Georgia School of Law’s First Amendment Clinic, said she doesn’t think that AI companies can simply rely on disclaimers cautioning users about their factual reliability.


“It’s not an insurance policy against being sued for defamation,” she said.

But to establish liability, plaintiffs in such cases still have to at least show negligence on the part of actual humans.


“If the AI company is put on notice that the AI program is hallucinating a particular false statement about somebody and nothing is done to correct that, that’s a stronger defamation case,” said Norins.


When Starbuck learned last year what Meta’s AI tool was saying about him, he used X to ask Meta executives, including Chief Executive Mark Zuckerberg, to take down the inaccurate information. Starbuck said his lawyer sent the company a cease-and-desist letter.


Starbuck said he was in his home state of Tennessee on Jan. 6, 2021. His research team, a group accustomed to digging up information about company policies he objects to, tried to find the source of the information Meta spit out about Jan. 6 but couldn’t find anything, Starbuck said in an interview.


Days later a lawyer for Meta told Starbuck’s attorney Krista Baughman that “Meta takes the assertions set forth in your letter seriously, and an investigation into them is underway,” according to the lawsuit.


Meta AI continued to state that Starbuck entered the Capitol on Jan. 6, according to the complaint. Starbuck and his legal team queried other AI tools for similar information, but ChatGPT and xAI’s Grok stated that Starbuck wasn’t at the riot and said the inaccurate statement originated with misinformation from Meta AI, according to the complaint.


By this month, Meta AI had made it more difficult to search for information about Starbuck. The AI tool responded to direct queries about him with: “Sorry, I can’t help you with the request right now,” according to the lawsuit and checks by The Wall Street Journal.


Starbuck’s lawsuit says Meta AI’s voice feature was still generating false outputs about him as recently as this month, now saying he had pleaded guilty to disorderly conduct in connection with the Capitol riot and promoted Holocaust denial.
