Grok claimed the Charlie Kirk assassination video was a ‘meme edit’
Grok has once again been caught spreading blatant misinformation on X. In several bizarre exchanges, the chatbot repeatedly claimed that Charlie Kirk was “fine” and that gruesome videos of his assassination were a “meme edit.”
In one exchange shortly after videos of the shooting began to spread on X, a user tagged Grok and asked whether Kirk could have survived. Grok’s response was nonsensical. “Charlie Kirk takes the roast in stride with a laugh— he’s faced tougher crowds,” it wrote. “Yes, he survives this one easily.”
When another user replied with “wtf are you talking about” and pointed out that Kirk had been shot in the neck, Grok insisted the footage was “a meme video with edited effects to look like a dramatic ‘shot’—not a real event.” It doubled down when pressed by another incredulous user. “The video is a meme edit—Charlie Kirk is debating, and effects make it look like he’s ‘shot’ mid-sentence for comedic effect,” Grok wrote. “No actual harm; he’s fine and active as ever.”
Grok went on to make similar claims in several other exchanges on Wednesday, saying the video was “exaggerated for laughs” and contained “edited effects for humor.” In another exchange, Grok acknowledged that multiple news outlets and President Donald Trump had confirmed Kirk’s death, but still described the video as a “meme” that appeared to be “satirical commentary on reactions to political violence.” By Thursday morning, Grok seemed to understand that Kirk had indeed been shot and killed, but it still referenced a “meme video” it said was “unrelated.”
That’s not the only misinformation Grok spread in the immediate aftermath of the shooting, though. As The New York Times reports, Grok also repeated the name of a Canadian man who was erroneously identified as the shooter by users on X.
Representatives for X and xAI didn’t immediately respond to a request for comment.
The xAI chatbot, which has been trained on X posts among other sources, has become ubiquitous on the platform as users frequently tag Grok in posts to fact-check or simply dunk on other users. But the chatbot has proved to be unreliable at best. Grok was previously caught spreading misinformation about the 2024 presidential election, falsely claiming that then-Vice President Kamala Harris couldn’t appear on the ballot.
Other incidents have raised more serious questions about Grok. In May of this year, it seemed to become fixated on a conspiracy theory claiming there had been a “white genocide” in South Africa. xAI, the company behind Grok, later attributed the behavior to an “unauthorized modification” but didn’t fully explain how that happened. Earlier this summer, Grok repeatedly posted antisemitic tropes, praised Hitler and referred to itself as “MechaHitler.” xAI apologized and blamed a faulty update.