Elon Musk’s X is “overstepping boundaries of digital ownership” by defaulting users into allowing their posts, interactions and even conversations to be shared with its AI chatbot Grok for AI development, a security expert has said.
Matthew Hodgson, CEO and co-founder of UK-based encrypted messaging platform Element, believes that a recent decision made by X, formerly Twitter, to utilise user data for training its AI chatbot Grok without explicit consent raises significant privacy concerns.
“It is also important to remember that X is rampant with fake news and extremist views, which is not what you want an LLM trained on,” he said.
Grok was launched late last year to compete with OpenAI’s ChatGPT. It is the flagship product of xAI, an AI start-up founded by Musk to compete in the increasingly crowded sector and, as the company put it last year, to “understand the true nature of the universe”.
Since the launch of Grok, xAI has worked on various AI models and has released some open-source offerings. It raised $6bn in May from various high-profile investors, including Andreessen Horowitz and Sequoia Capital, to bring its products to market.
But just last week, X changed its settings to use users’ posts and interactions to train Grok by default. This means those who do not consent to the move must manually opt out.
“This practice not only undermines user trust but also has potential implications for data misuse. There’s a risk that sensitive information could be inadvertently incorporated into the AI model, leading to privacy breaches or the creation of biased algorithms,” said Hodgson.
Hodgson argues that X should have clearly communicated this data usage policy to users and made it easy to opt out. “The lack of such measures indicates a disregard for user privacy and raises questions about the platform’s commitment to protecting its users.”
How to turn it off
While some users may be fine with sharing their data for the purposes of AI training, others may want to turn off the default setting.
This can be done by opening the settings page on X on a desktop computer and selecting the “Privacy and safety” section. Users must then select “Grok” and uncheck the box that reads: “Allow your posts as well as your interactions, inputs and results with Grok to be used for training and fine-tuning”.
Jadee Hanson, chief information security officer at Vanta, thinks that training AI models on customer data without notifying users or providing an opportunity to opt in is a “major concern” for security professionals.
“Security professionals want to ensure they are taking the right steps to protect sensitive data and they maintain control of how data is used,” Hanson said.
“When companies move forward and start to leverage customer data in a way that is not in line with user agreement or company agreements, this erodes trust and can put a company’s information at risk.”
Just last month, it was revealed that xAI is trying to build the world’s largest supercomputer in the US city of Memphis to fuel its AI ambitions, which is reportedly scheduled to be running by 2025. Musk is also planning to invest $5bn from Tesla into xAI, a move he is expected to discuss with the company’s board soon.