One of the growing threats related to AI is the flooding of the internet with false information generated by it. This is already a real plague, and less knowledgeable users can easily fall for such content.
Unfortunately, fake profiles on social media are a similar story. Some time ago, Meta (Facebook, Instagram) decided to add AI-powered profiles to its platforms. Of course, all of this was sold to us as a project or experiment.
However, it turns out that the experiment did not work out and will probably lead nowhere in the long run. The company suffered a spectacular fiasco in this matter, and the “proud black queer mom” profile on Instagram did not meet expectations. What exactly is this about?
AI-powered fake profiles on Facebook
The latest reports confirm that Meta has started removing AI-generated character profiles from its platforms, Facebook and Instagram. These were characters created in 2023 who never actually existed; their replies and the content they shared were generated by artificial intelligence.
Interestingly, most of these profiles had disappeared by the summer of 2024, but some survived and caught people’s attention after last week’s announcement that the company plans to introduce even more AI-based profiles.
One example is Liv, a “proud black queer mother of two children and a truth teller”; another is Carter, who described himself as a relationship coach. All of these profiles generated content on their own and answered users’ questions via Messenger.
As interest in these profiles grew, some fairly strange cases came to light. For example, the aforementioned Liv, in a conversation with a Washington Post journalist, stated that the team responsible for her creation consisted exclusively of “white men,” which she found glaring and said undermined her identity.
Just a few hours after the profiles went viral, users also ran into a bug that prevented them from blocking these AI-created accounts. The profiles then began to disappear, and a Meta spokeswoman offered a convoluted explanation that the “fake profiles” were part of an experiment.
There is confusion: A recent Financial Times article touched on our vision for AI characters existing on our platforms over time, but did not announce any new product. The accounts in question are from a test we ran on Connect in 2023. They were managed by humans and were part of an early experiment we did with AI characters. We’ve identified a bug that affected humans’ ability to block these AIs, and we’re removing these accounts to fix the issue.
Liz Sweeney, spokeswoman for Meta
Sharing social media with bots?
The situation described here may be deeply disturbing, because to some extent it heralds a dystopian future in which we share social media with bots. Worse still, the development of AI may make it almost impossible to distinguish a real user from a fake one.
But why would companies like Meta resort to such moves? On the one hand, it may be an attempt to boost traffic on the platform; on the other, to present it as an attractive place for new users. After all, people gather where there is activity.
It is also worth remembering that what is problematic here is what such fake profiles write and how they comment. The cited case of Liv, who complained about her creators, is just the tip of the iceberg, and it shows that such chatbots posing as real people must be kept on a very short leash.