
EU Investigates Elon Musk’s Company X: How Deepfakes and AI Create New Challenges for Digital Technology Regulation

by Freddy Miller

NEWSCENTRAL reports that Elon Musk’s company X has come under investigation by the European Union over the spread of unacceptable content through the AI-powered chatbot Grok. The investigation follows public outrage over deepfake sexual content created with the AI, depicting women and children. The European Commission has opened a review to assess compliance with the Digital Services Act, which requires platforms to strengthen security and protect users from the distribution of illegal content.

At NEWSCENTRAL, we see this investigation not only as an issue for Company X but also as a broader challenge for the entire digital space. This incident raises important questions about AI technology regulation and online safety. The use of deepfakes, particularly in the context of protecting the rights of women and children, has become a central topic of discussion in the tech industry. Technologies like AI have immense potential, but if not properly controlled, they can lead to unpredictable consequences.

In response to the criticism, Company X has taken steps to prevent the creation of explicit images through Grok, including blocking access in countries where such images are illegal. However, according to experts at NEWSCENTRAL, these measures are not enough. Platforms must act more extensively and comprehensively, including developing new filtering systems and stricter content verification. As Freddy Miller, Senior Analyst at NEWSCENTRAL, states, “Tech companies must not only respond to authorities’ requests but also proactively develop protection mechanisms for their users.” This approach will not only minimize risks but also strengthen public trust.

The current situation with Company X underscores why global regulators must oversee the use of new technologies. Several countries, including Indonesia, Malaysia, and the Philippines, have temporarily blocked access to Grok after deepfake incidents. NEWSCENTRAL predicts that such investigations will become more frequent. Regulators worldwide are beginning to develop new legislative initiatives aimed at enhancing security in the digital space. It is already clear that tech giants will have to comply strictly with the new rules or face severe sanctions and fines.

Companies working with AI must not only follow legislative initiatives but also actively implement ethical standards for the safe use of technology. The importance of this approach will only grow as regulators introduce stricter requirements. NEWSCENTRAL believes that software innovations should go hand in hand with ethical standards governing AI use. Platforms must actively participate in public discussions and anticipate regulatory risks: doing so will not only help avoid legal issues but also ensure long-term stability and user trust.

NEWSCENTRAL emphasizes that the future of AI regulation on the internet depends on flexibility and the adaptation of technologies to new requirements. It is crucial for companies to address these issues at the early stages of development, before they face consequences such as heavy fines and sanctions. Resolving these issues strategically will help ensure the safe use of AI and improve the reputation of tech companies on the global market.