This is not the first algorithm designed for this purpose: predicting where crimes will occur. Other publications of the same kind, even more controversial, have even been taken down online. The new publication by a team of scientists from the University of Chicago, reminiscent of the film Minority Report, does not escape the debates over bias that affected its predecessors. Indeed, crime is a term whose association with an accuracy percentage is hard for many to swallow. The research team itself is cautious, recommending against using the tool for proactive neighborhood policing. Should we then give credit to the fears of human rights defenders?
The new algorithm examines the temporal and spatial coordinates of discrete events and detects patterns in order to predict future events. It divides the city into spatial tiles about 1,000 feet across and predicts crime within these areas, whereas previous models relied more on traditional political or neighborhood boundaries. The model's results are based on data from several US cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
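The paper itself does not publish its implementation here, but the tiling step described above can be sketched in a few lines. The following is a minimal illustration, assuming planar event coordinates in feet and a uniform square grid; the tile size of 1,000 feet comes from the article, everything else (function names, sample data) is invented for the example.

```python
from collections import Counter

TILE_SIZE_FT = 1000  # approximate tile width reported for the Chicago model

def tile_of(x_ft, y_ft, size=TILE_SIZE_FT):
    """Map planar coordinates (in feet) to a discrete grid-tile index."""
    return (int(x_ft // size), int(y_ft // size))

def events_per_tile(events, size=TILE_SIZE_FT):
    """Aggregate (x, y) event coordinates into per-tile counts,
    the kind of spatial histogram a tile-based model trains on."""
    return Counter(tile_of(x, y, size) for x, y in events)

# Hypothetical sample events (coordinates in feet)
events = [(120, 340), (980, 40), (1500, 2100), (1010, 90)]
counts = events_per_tile(events)
# tile (0, 0) holds two events; (1, 0) and (1, 2) hold one each
```

In practice each tile would also carry a time series of past events, and the model would look for recurring spatio-temporal patterns across tiles rather than raw counts.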
A study conducted by researchers from Harrisburg University, a private, non-profit American institution, will go down in history for being taken down after publication. Its concise conclusion was that the software could predict, from a photo alone, whether a person would become a criminal, with 80% accuracy and no racial bias. The claim, which had begun to circulate on Twitter, drew a wave of critics. The study was removed from the website at the request of the faculty involved in the research, who, according to the statement, are working on an update of the document to address the concerns raised. The update is still awaited.
In 2016, a research group from Shanghai Jiao Tong University published a study linking crime to the facial features of individuals. The study reports that the system can detect, with 90% accuracy and purportedly without bias, who is most likely to be a criminal.
The training phase of the Chinese university's artificial intelligence used more than a thousand photos of individuals taken from national identity cards, of which seven hundred belonged to convicted criminals. This may be the core problem with these systems. Indeed, a study by MIT researchers showed that creating a psychopathic artificial intelligence depends on one essential thing: feeding it ultraviolent images during the training process. This illustrates the problem of bias in the data provided to an AI, which appears to be the root cause of the controversies we observe. Looking at the accuracy rates, there remain 10 to 20% false positives, depending on which university's results we refer to. That is a problem when it represents a group of people who could be unfairly accused, as members of the United States Congress once were by an AI. This is why human rights organizations are raising concerns.
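To see why a 10-20% error rate matters at scale, a toy calculation helps. This sketch assumes, as a deliberate simplification, that the reported error rate applies uniformly to everyone screened; the population figure of 100,000 is an invented illustration, not a number from the studies.

```python
def expected_false_flags(population, error_rate):
    """Expected number of people wrongly flagged, assuming the
    reported error rate applies uniformly (a strong simplification)."""
    return int(population * error_rate)

# The 80% and 90% accuracy figures cited above leave 10-20% of
# predictions wrong. Over a hypothetical 100,000 screened people:
low = expected_false_flags(100_000, 0.10)   # 10,000 people
high = expected_false_flags(100_000, 0.20)  # 20,000 people
```

Even the optimistic end of that range means thousands of people misclassified, which is the scale of harm that human rights organizations point to.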
And you?
Given current technological advances, is it possible to rely on algorithms to predict where crimes may occur, or who might become a criminal?
Facial recognition: the Detroit Police Chief admits a 96% error rate, as Detroit police face a flood of criticism over a wrongful arrest made using the technology
Police use of facial recognition violates human rights, a UK court has found; however, the court did not ban the technology outright
81% of suspects flagged by Greater London police's facial recognition technology are innocent, according to a report
USA: Portland passes the strictest facial recognition ban in the country, prohibiting both public and private use of the technology