
Fatal AI mistakes: the technology identified the wrong perpetrators


There is much talk about the applications of AI and the fact that its use extends far beyond the creative industries. The technology is also being adopted in manufacturing, analytics, and healthcare, and nearly every profession can find a use for artificial intelligence.

However, it must be remembered that AI remains an unreliable and sometimes unsafe tool. Any result produced by an AI-based system should therefore be treated with caution rather than accepted at face value.

American law enforcement has learned this the hard way. A recent analysis by the Washington Post shows that when AI inaccuracy is combined with human negligence, real harm can follow.

Artificial intelligence errors led to wrongful arrests 


The Washington Post analyzed police reports, case files, and accounts from officers, lawyers, and prosecutors to assess how widely AI is used in law enforcement and how well it performs. It turns out that such use is not rare at all, yet it is very rarely disclosed to the public. Why? The reason is simple: not everyone is comfortable with law enforcement agencies relying on this technology.

Using AI in these cases carries considerable risk. Wrongful arrests occurred when officers relied solely on the results of facial recognition systems and skipped key investigative steps: DNA samples and fingerprints often were not collected, and suspects' alibis were not even verified.

In some cases, officers ignored obvious evidence of a suspect's innocence. In one example, a man was arrested on suspicion of check forgery even though his bank accounts were never examined. In another, a pregnant woman was arrested for car theft despite witnesses clearly describing a perpetrator who was not pregnant. These are serious lapses that should be unacceptable in any investigation.

How did this happen? Facial recognition technology performs far worse in real-world conditions than in the laboratory. Ironically, the failures also involve what could be called the human factor: people placing too much faith in AI's capabilities and disregarding other, more reliable evidence.

You can never be too careful

These cases show clearly that even the most advanced AI models can fail, and that the consequences can be severe. At the same time, they are a signal to those worried about being replaced by artificial intelligence: that day is still a long way off.

Nevertheless, one cannot ignore the fact that security services use such tools without publicly acknowledging it. As long as the technology serves a good cause, that may not be a problem in itself. It becomes one when it leads to blatant mistakes and abuses. We can only hope there will be as few of those as possible.