Musk has previously said this information would be used to train X’s artificial intelligence chatbot, Grok, to interpret medical images more efficiently.
“You can upload your X-rays or MRI images to Grok and it will give you a medical diagnosis,” Musk said in the video, which was uploaded in June. “I have seen cases where it’s actually better than what doctors tell you.”
In 2024, Musk said medical images uploaded to Grok would be used to train the bot.
Musk also claimed in his response that Grok saved a man in Norway by diagnosing a problem his doctors had failed to notice. The X owner said he was willing to upload his own medical information to the bot.
Musk did not disclose in the podcast why he received an MRI. xAI, which owns X, told Fortune in a statement: “Legacy Media Lies.”
Medical information shared on social media isn’t bound by the Health Insurance Portability and Accountability Act (HIPAA), the federal law that protects patients’ private information from being shared without their consent. That means there’s less control over where the information goes after a user chooses to share it.
“This approach has myriad risks, including the accidental sharing of patient identities,” Tarzy said. “Personal health information is ‘burned into’ many images, such as CT scans, and would inevitably be released in this plan.”
The full extent of the privacy dangers Grok may present isn’t known, because X may have privacy protections that haven’t been disclosed to the public, according to Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania. He said users share medical information at their own risk.