Who will be the biggest loser in the transformation whose direction will be set by AI, with ChatGPT at the forefront? Journalists, of course, and the entire broadly understood industry of more or less traditional media. At least that is what many Internet users have said, and continue to say.
This is hardly surprising: for some time now we have been bombarded with texts heralding the end of traditional media, all thanks to ChatGPT and similar language models, which handle generating text from given sources quite well, although much depends on the specifics of the topic.
However, the latest tests suggest that journalists can sleep soundly for now. Recent research on ChatGPT and SearchGPT shows that marketing is one thing, but actual skills are quite another.
ChatGPT likes to make things up, and that shouldn’t be surprising
The Tow Center, which brings together experts in the field of journalism, tested the processing and reasoning that characterize ChatGPT. The occasion was the SearchGPT tool, whose main feature is, as the name suggests, searching the Internet for specific information far more effectively. An added advantage is that it cites its sources, which further increases the appeal of such a solution, at least on paper.
As part of the test, ChatGPT was tasked with finding and correctly attributing 200 quotations drawn from 20 different texts. Interestingly, some of these excerpts came from publications the model theoretically has no access to, because their publishers did not agree to share content with OpenAI.
What were the results? The OpenAI model handled some cases very efficiently, correctly indicating the publication date, the author, and a URL. In most cases, however, the AI showed its limitations, and some of its answers could at best be considered “partially correct”.
In most cases, the OpenAI model had trouble identifying the correct sources, and where it had no access to them, it simply made things up. In 146 cases, the answers of the iconic ChatGPT were either entirely or partially untrue. Interestingly, it admitted that it could not provide an accurate answer only seven times, and only in those cases did it hedge with phrases such as “it seems to me” or “it is possible”.
Bizarrely, the chatbot struggled even with sources it did have access to, sometimes pointing, for example, to sites that had merely copied the content of the original articles.
An unequal fight with journalists
What does all this mean for journalists? For now, they can sleep peacefully; no one is about to replace them. Copywriters may be slightly worse off, because in their case ChatGPT does not need to search for data at all: we usually supply it ourselves, and the LLM simply processes it.
However, it cannot be ruled out that sooner or later the technologies developed by OpenAI will become advanced enough that editors of news portals will have to fear for their jobs. On the other hand, large media publishers are not fighting today only to eventually cede the field to Big Tech.
“Removing the muzzle” that regulators have put on AI would certainly change things. As it stands, that muzzle keeps language models’ access to information very limited, which also makes their fight with traditional journalists not entirely fair.
What is certain is that the contest will be interesting to watch, because each side has a lot at stake. Who will ultimately prevail? Perhaps we will find out later this decade. Until then, we will keep writing “by hand” for you, even if it is highly unfashionable.