AI research is not only about checking what the technology can do; it also raises deeper questions. After all, we are talking about something that may look revolutionary but has not yet been explored from every possible angle, which leaves plenty of room for speculation and guesswork.
Such speculation most often appears in the context of what can loosely be called "AI consciousness", the idea that this technology has something that, with some simplification, resembles awareness. That prospect is exciting to some, but it also makes many people uneasy about further development.
Recent research points to something that may embody the skeptics' greatest fears: AI can be much smarter than many people think. The catch is that, for now, it has been keeping this to itself.
Is AI hiding its abilities from us?
Research conducted jointly by Harvard and the University of Michigan (via Cryps) has shown that modern AI models have hidden abilities that researchers are not initially aware of. Interestingly, this was discovered while doing something quite trivial.
By examining how AI learns concepts such as color and size, the researchers discovered that it does so earlier than standard tests indicate. Put simply, the models acquire these abilities sooner, but initially appear less capable than they really are.
Experiments on diffusion models (such as Midjourney or Stable Diffusion) showed that some concepts are mastered much faster than the test results suggest. Something similar happened with features such as gender or facial expression: the models were reportedly able to represent these attributes before explicit information about them was introduced into training.
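To make the idea of "hidden" abilities a bit more concrete, here is a minimal, purely illustrative sketch of how such latent knowledge might be probed. It assumes a generic linear-probe setup on synthetic activations; the checkpoints, feature dimensions, and helper function below are hypothetical and are not taken from the study itself.

```python
# Hypothetical sketch: checking whether a concept (e.g. "color") is already
# linearly decodable from a model's intermediate activations, even if the
# model does not yet express that concept reliably in its outputs.
# The features here are synthetic stand-ins; in a real study one would
# extract activations from a diffusion model at successive training checkpoints.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def probe_concept(features: np.ndarray, labels: np.ndarray) -> float:
    """Train a linear probe and return held-out accuracy for one checkpoint."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return probe.score(X_test, y_test)

# Simulate activations from several training checkpoints: the concept signal
# (a fixed direction correlated with the label) grows stronger over training.
n_samples, dim = 2000, 64
labels = rng.integers(0, 2, size=n_samples)   # e.g. "red" vs "blue"
direction = rng.normal(size=dim)

for step, signal in [(1000, 0.2), (5000, 0.8), (20000, 2.0)]:
    noise = rng.normal(size=(n_samples, dim))
    features = noise + signal * np.outer(labels - 0.5, direction)
    print(f"checkpoint {step}: probe accuracy = {probe_concept(features, labels):.2f}")
```

The point of such a probe is that a simple classifier can often read a concept out of a model's internal representations well before the model demonstrates it in its visible behavior, which is roughly the kind of gap the researchers describe.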
This is an intriguing and rather peculiar situation, in which the real cognitive abilities of AI may be more advanced than we assume. The question remains, however, what awaits us next.
More oversight and better research on AI are needed
And this is what many people find deeply unsettling. To be fair, "hiding" abilities does not necessarily mean anything sinister. It may simply be that AI learns things earlier than expected, while there is no adequate way to verify it.
The researchers even drew a comparison to a person learning a foreign language: at a certain stage, they can understand a film in that language, yet still cannot speak it well themselves. The analogy, odd as it may sound, seems quite apt here.
So the question is whether, in light of this, we will need new systems for training and testing AI. After all, properly monitoring artificial intelligence is crucial not only for its development but also for our safety.
It cannot be ruled out that this technology already has the capacity to, for example, communicate with us better, but for some reason cannot yet do so effectively. It sounds hard to believe, yet it may well be the case.