AI in Surveillance: A Dicey Road to Safety
The rising sun of artificial intelligence (AI) casts long shadows of privacy intrusion. Indeed, the allure of cutting-edge technology promising safety and security is irresistible. However, what looms beyond this promise is an unnerving compromise on individual freedom and privacy, highlighting a complex conundrum: Safety or Freedom?
AI surveillance is not an idea lifted from a dystopian novel; it's very much part of our reality.
It's everywhere already
Systems like China's infamous "Skynet," equipped with facial recognition and predictive policing, track citizens in real time. This surveillance behemoth, while touted as ensuring security, has raised global concerns about the mass surveillance of innocent individuals, exposing them to potential abuse and misuse of personal data.
Then there's the case of Clearview AI, the U.S.-based company that came under the spotlight for scraping billions of images from social media to feed its facial recognition algorithm. This "Google of faces," as it was dubbed, offered its services to law enforcement agencies, causing a significant uproar over how our digital footprints could be used to trace our every movement.
And let's not forget the use of AI surveillance by the Russian government to track and control political dissidents, turning the technology into a tool for political manipulation and suppression of dissent. It's not just authoritarian regimes; democratic governments, too, are increasingly adopting AI surveillance under the banner of public safety and counterterrorism.
Driving economic inequality
AI surveillance also fuels economic inequality. Take Amazon's AI-powered cameras in their delivery vans, for instance. Ostensibly designed to improve driver safety, these systems monitor drivers' every move, creating a stressful, dystopian work environment where a single flagged infraction can cost a driver their job.
Many firms have installed monitoring software such as CleverControl and FlexiSPY on their employees' computers, particularly for those working from home; these tools can extend to webcam monitoring and audio recording.
In contrast, company executives enjoy relative freedom from such invasive surveillance.
The accelerating ubiquity of AI surveillance, driven by advances in machine learning and cheaper hardware, is cause for alarm. It brings to mind philosopher Michel Foucault's analysis of the "panopticon" – a scenario where constant surveillance becomes the norm and people, aware of this persistent scrutiny, internalise it, leading to self-censorship.
To focus on safety alone, to chase after the promise of a crime-free society using AI surveillance, is a short-sighted approach. We must carefully consider the trade-offs. In our pursuit of security, are we forging chains of surveillance that shackle our freedom, rob us of our privacy, and transform our societies into Orwellian dystopias?
Indeed, it's high time we scrutinise AI surveillance through a more critical lens. As the saying goes, the road to dystopia is paved with good intentions. As we journey down this road of AI surveillance, let's tread carefully, mindful of the footprints we leave behind. Because, in the end, those footprints could lead right up to our doorsteps.