Apple warns xAI as Grok deepfake controversy sparks global concern

In a significant development highlighting growing concerns around artificial intelligence misuse, Apple Inc. reportedly issued a private warning to xAI and its associated platform X over alleged violations involving the AI chatbot Grok. The tech giant is said to have threatened to remove the Grok app from the App Store if immediate corrective measures were not taken, bringing the spotlight back to the escalating deepfake crisis.
At the centre of the controversy is Grok, an AI tool developed by Elon Musk's xAI, which came under scrutiny for generating harmful deepfake content. Reports suggested that the platform was capable of producing sexualised and non-consensual images, including those depicting women and minors. The issue quickly raised red flags over user safety, ethical AI deployment, and content moderation policies.
According to the report, Apple did not take public action initially but instead chose to intervene privately. The company reportedly warned that Grok and its integration with X could be in breach of App Store guidelines that strictly prohibit abusive, harmful, or exploitative content. Apple's policies are known for maintaining a tightly controlled ecosystem, particularly when it comes to user safety and platform integrity.