Ireland’s Data Protection Commission Initiates Investigation into X’s AI Training Practices

Investigation into X’s Data Use for AI Training
Overview of the Inquiry
Ireland’s Data Protection Commission (DPC) has launched an investigation into X, the social media platform owned by Elon Musk. The inquiry focuses on how X processes European users’ posts, specifically the data used to train its artificial intelligence model, Grok. The scrutiny is part of a broader examination of compliance with the General Data Protection Regulation (GDPR), which prioritizes user privacy and data protection.
Details of the Investigation
The DPC’s investigation aims to determine whether X’s handling of personal data complies with the GDPR’s requirements for lawful and transparent processing. The inquiry follows an earlier intervention by the DPC that required X to halt the collection of European users’ data for Grok training, after which X temporarily suspended that processing.
What is Grok?
Grok is a family of Large Language Models (LLMs) developed by xAI. The models power a generative AI querying tool on the X platform and are trained on varied datasets, enabling them to process a wide range of information.
Focus of the Investigation
The DPC’s probe will specifically examine a subset of personal data controlled by X Internet Unlimited Company (XIUC), X’s Irish-established entity: data collected from publicly available posts made by users within the European Economic Area (EEA). The primary objective is to ascertain whether processing this personal data to train Grok’s LLMs complies with the GDPR, including its lawfulness and transparency requirements.
Legal Framework
This inquiry is conducted under Section 110 of Ireland’s Data Protection Act 2018. The decision to proceed with the investigation was announced in April 2025 by DPC Commissioners Dr. Des Hogan and Dale Sunderland.
Previous International Scrutiny
In addition to the inquiry by the Irish DPC, X is under investigation by Canada’s privacy officials. The Office of the Privacy Commissioner of Canada is examining whether X may have violated Canadian privacy regulations by using personal data from Canadian users to train its AI models. That investigation, conducted under Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), aims to determine whether X has complied with federal privacy law governing the collection, use, and disclosure of personal information.
Implications for Technology Regulation
The DPC’s inquiry comes against the backdrop of the EU’s AI Act, which establishes risk-based requirements for the development and use of artificial intelligence systems. Technology leaders have expressed concerns about the implications of these new rules for innovation and business operations in the tech industry.
Responsibilities of Tech Companies
As tech companies navigate these regulatory landscapes, they are required to be more transparent about their data utilization practices, especially concerning AI model training. This includes:
- Ensuring lawful data processing: Companies must verify that they are collecting and using data within the bounds of legal frameworks such as GDPR and PIPEDA (a minimal illustration follows this list).
- Transparent communication: Companies should clearly inform users about how their data is being used, ensuring that individuals are aware of their rights.
- Compliance with international laws: Global tech firms are expected to adhere to privacy standards in every country they operate in, necessitating a nuanced understanding of varying regulations.
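To make the first point concrete, the sketch below shows one way a data pipeline might gate an AI training corpus on a recorded lawful basis, excluding EEA users’ posts unless, for example, opt-in consent exists. All identifiers here (`Post`, `EEA_COUNTRIES`, `has_lawful_basis`) are hypothetical and do not reflect X’s actual systems or any regulator’s guidance; real GDPR compliance involves far more than this single check.

```python
# Hypothetical sketch: excluding EEA users' posts from an AI training
# corpus unless a recorded lawful basis (here, opt-in consent) exists.
# All names are illustrative, not drawn from any real platform's code.

from dataclasses import dataclass

# A non-exhaustive, illustrative subset of EEA country codes.
EEA_COUNTRIES = {"IE", "DE", "FR", "NL", "SE", "NO", "IS", "LI"}

@dataclass
class Post:
    post_id: str
    user_country: str            # ISO 3166-1 alpha-2 code from the profile
    text: str
    consented_to_training: bool  # whether the user opted in

def has_lawful_basis(post: Post) -> bool:
    """Treat recorded opt-in consent as the lawful basis.

    Deliberately minimal: real compliance may also involve
    legitimate-interest assessments, transparency notices, and
    objection handling.
    """
    return post.consented_to_training

def filter_training_corpus(posts: list[Post]) -> list[Post]:
    """Keep non-EEA posts; keep EEA posts only with a lawful basis."""
    return [
        p for p in posts
        if p.user_country not in EEA_COUNTRIES or has_lawful_basis(p)
    ]

# Example: only the non-EEA post and the consenting EEA user survive.
corpus = filter_training_corpus([
    Post("1", "US", "hello", consented_to_training=False),
    Post("2", "IE", "dia dhuit", consented_to_training=False),
    Post("3", "DE", "hallo", consented_to_training=True),
])
print([p.post_id for p in corpus])  # ['1', '3']
```

The design choice worth noting is that the filter defaults to exclusion for in-scope users: an EEA post is dropped unless a lawful basis is affirmatively recorded, mirroring the burden GDPR places on the data controller rather than on the user.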
The investigations by the DPC and the Canadian Office of the Privacy Commissioner underscore the growing scrutiny faced by tech companies regarding their data practices and AI training methodologies. As regulations evolve, organizations will need to adapt to ensure they meet both legal and public expectations for privacy in the digital space.