Lado Okhotnikov on Predictive AI: balancing safety and privacy in tomorrow's world
Lado Okhotnikov
20.09.2024

Imagine a world where science fiction becomes reality. Remember the movie “Minority Report” starring Tom Cruise (2002)? The film depicts a future set in the not-so-distant year 2054. It seemed far-fetched in 2002, but here we are in 2024, where South Korea is actively testing artificial intelligence to predict and prevent crimes before they happen. The technological leap is striking: the Dejaview artificial intelligence system boasts an impressive 82.8% accuracy rate in identifying potential criminal activity.

At its core, Dejaview analyzes real-time video footage to detect signs of criminal behavior and enable early intervention. Using machine learning, it identifies patterns and indicators of potential crimes and flags high-risk areas that require closer attention. Factors such as time of day, location, historical crime data, and other variables are weighed to estimate the likelihood of suspicious behavior.
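To make the idea concrete, here is a minimal, purely hypothetical sketch of how factors like these could be combined into a single risk score. The features, weights, and threshold below are invented for illustration and say nothing about Dejaview's actual model or data.

```python
# Hypothetical illustration only -- not Dejaview's real model or data.
# A toy logistic-style score combining the kinds of factors the article
# mentions: time of day, baseline location risk, and recent crime history.
import math

def risk_score(hour: int, location_risk: float, recent_incidents: int) -> float:
    """Return a probability-like score in [0, 1] for a monitored area.

    hour             -- hour of day (0-23)
    location_risk    -- baseline risk for the area, 0.0 (low) to 1.0 (high)
    recent_incidents -- incidents recorded nearby in recent weeks
    """
    # Made-up weights: late-night hours, riskier locations, and more
    # recent incidents all push the score upward.
    night = 1.0 if hour >= 22 or hour <= 4 else 0.0
    z = -3.0 + 1.2 * night + 2.0 * location_risk + 0.4 * recent_incidents
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing into [0, 1]

# Example: a historically risky area at 1 a.m. with three recent incidents
# crosses the (arbitrary) threshold and would be flagged for attention.
if risk_score(hour=1, location_risk=0.7, recent_incidents=3) > 0.5:
    print("flag area for closer monitoring")
```

In practice, a real system would learn such weights from data and add many more signals, but the basic shape of the decision, a score compared against a threshold, is the same, which is exactly why the choice of threshold and the quality of the training data carry so much ethical weight.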

Promising as these successes are, they bring to mind the cautionary tale of Minority Report. Over-reliance on AI to predict human behavior raises ethical dilemmas and the risk of error: dependence on technology can cloud human judgment, dilute accountability, and lead to unjust consequences for innocent people.

There are also questions of transparency: how does such a system fit with existing laws and human rights?

As we venture into this frontier of crime prevention through artificial intelligence, the balance between innovation and ethical considerations will be paramount to shaping a future that is both safe and just.
