Picture this: It is the year 2035 and the sun is shining brightly on New Delhi. Cameras on lampposts, trees and traffic signals are catching jaywalkers mid-step and reckless drivers even before their engines cool. The cameras are so high-tech that the cloud of pollution in the city doesn’t deter them from zeroing in on lawbreakers. Accidents are down, discipline is up. In the last decade, India has expanded its nationwide surveillance systems under initiatives like Smart Cities and Digital India. There is no longer a challan system or potbellied traffic cops standing on the curbside eyeing every vehicle passing by like a delicious meal. The AI-operated systems automatically impose penalties, ranging from suspensions of digital services like Aadhaar-linked benefits to subtle forms of social ostracisation. It is a new society. It is also a fearful society.

While this scenario might look like something out of a George Orwell novel, it could very well become our reality in the not-so-distant future.

Derived from the French word surveiller, meaning ‘to watch over,’ surveillance has had its meaning turned on its head by technological advancement. That is not to say governments didn’t spy on their citizens in the 20th century; it’s just that the methods of supervision have become far more sophisticated.

In Asia, surveillance has deep historical roots. For instance, during the Cultural Revolution in China, state monitoring was pervasive, but basic, relying on human informants and community policing. Similarly, during the Emergency period in India (1975–1977), the government employed basic surveillance methods to silence opposition and enforce strict laws.

It was in the late 1990s that the advent of digital technology revolutionised surveillance systems. However, it wasn’t until the 2010s, with the rise of artificial intelligence (AI) and machine learning, that surveillance underwent its most dramatic transformation. Traditional CCTV cameras merely recorded events, but AI enhances these systems with capabilities like real-time facial recognition technology (FRT), behavioural analysis, and predictive policing.

That means these AI-equipped cameras can identify individuals in crowded spaces, detect suspicious body language, and predict crimes based on historical data. We’re talking about tracking sound, communications, data, and even the details of our personal experiences. The proactiveness with which these systems operate has made cities safer, with authorities able to prevent crimes before they occur rather than merely responding after the fact.

But let’s not kid ourselves. The more ‘helpful’ these systems get, the more they start creeping into our lives, keeping tabs on everything we do. It’s become so ordinary to monitor people in physical spaces—and their digital lives in the virtual world—that it’s almost as if we’re all living under constant observation, whether we realise it or not. While we’re busy patting ourselves on the back for safer cities, there’s a much darker reality lurking behind those shiny screens: who’s really watching us, and what are they doing with all this power?

China: A Surveillance State

Nowhere is the impact of AI-driven and traditional surveillance more visible than in China. Over the past decade, the country has built an unparalleled surveillance network. It’s a blend of cutting-edge technology with authoritarian governance. The Great Wall Nation has about 700 million CCTV cameras to keep a watchful eye on its citizens. Anyone remember ‘Telescreens’ from Orwell’s book ‘1984’, where the citizens of a fictional country named Oceania had to keep their television screens switched on at all times so that the government could see what they were doing?

China, while not as extreme as Orwell’s Oceania, has something similar by weaving surveillance into everyday life. Its ambitious Sharp Eyes project is emblematic of this. “Sharp Eyes” comes from a quote by the founder of the People’s Republic of China—Mao Zedong, “The people have sharp eyes,” referring to how citizens vigilantly monitored each other to ensure adherence to communist values. The Sharp Eyes initiative, launched in 2015, aimed to create an omnipresent monitoring system where every corner is covered, every moment recorded, and every person visible.

This system in China can be seen as a modern version of Jeremy Bentham’s Panopticon. Bentham’s design was a circular prison with a central watchtower, where guards could observe prisoners without the inmates knowing when or if they were being watched. This uncertainty forced prisoners to self-regulate their behaviour, as they assumed they were always under observation.

Today, in cities like Beijing and Shenzhen, facial recognition systems identify individuals within seconds, helping authorities catch fugitives, track public movement, and even enforce lockdowns during the COVID-19 pandemic. In cities like Chongqing, with around 2.6 million cameras, residents live under the gaze of AI-powered cameras capable of identifying individuals and scrutinising their moves.

Surveillance also feeds into the controversial Social Credit System, where citizens’ actions—whether paying bills on time or criticising the government—contribute to scores that affect everything from job prospects to travel permissions. While these measures ensure public safety and discipline, critics warn of a dystopian reality where personal freedom is a casualty of state control.

One of the most significant moments in China’s surveillance history occurred in 2017, when its government launched an extensive surveillance network in Xinjiang, home to a large Uyghur Muslim population. This system included cameras, mandatory biometric scans, and apps that tracked citizens’ movements and communications, marking a stark escalation in state surveillance under the guise of counter-terrorism.

This system allowed authorities to monitor not only physical spaces but also residents’ behaviour on an unprecedented scale. The surveillance ran so deep that it extended to checkpoints, where residents were required to provide biometric data such as DNA samples and iris scans. To the world, it became a chilling example of how AI and technology can be weaponised to suppress dissent and exert authoritarian control.

In countries with authoritarian regimes, surveillance has long been a tool for maintaining control, suppressing dissent, and enforcing state ideology. Sure, the West may have started the trend with early AI security tools, with the US refusing to remain soft after 9/11, but China took it further: it is not only the leader in AI surveillance but also exports the technology the world over, selling extensive digital surveillance packages to governments under its Belt and Road Initiative (BRI) infrastructure project.

This export of technology has significant geopolitical implications. On the one hand, it helps spread technological advancements. On the other, it risks exporting authoritarian practices, as governments adopt these tools to tighten control over their own populations.

For example, Chinese company Huawei’s Safe City system installed in Lahore uses 8,000 cameras, facial recognition, licence plate tracking, and apps for security staff, all powered by AI to analyse data and monitor the city. While it’s sold as a crime-fighting tool, there are concerns it could be used for political control. Countries like Zimbabwe and Uganda have used similar systems to spy on political opponents and control certain groups, raising worries about how such technology could be misused.

Shifting focus to the south—India, with its democratic framework and rapid technological growth, presents a unique case.

India’s Surveillance Sprint

India, while not as advanced as China, is rapidly expanding its surveillance capabilities. With its ambitious Smart Cities Mission, AI-driven technologies are becoming integral to how crime is managed and monitored. Under the mission, over 83,000 CCTV surveillance cameras have been installed across India’s 100 Smart Cities, according to a government press release. These cities also boast systems for “automatic number plate recognition.”

Cities like Pune are now equipped with AI-enabled cameras that assist the police in overseeing traffic, detecting crimes, and even aiding investigations. Airports like the one in Hyderabad have also introduced AI-powered boarding systems, promising efficiency. In turn, this has raised questions about data privacy and retention.

The case of activist SQ Masood against Hyderabad police in 2022 offers an example of unchecked FRT use in India. Telangana, identified in 2020 as the most surveilled place in India, witnessed numerous FRT projects with little public awareness or consent. Masood filed a petition after police stopped him, removed his mask, and took his photo without explanation or consent. His legal challenge highlighted concerns about privacy and potential misuse, urging the need for safeguards as FRT becomes increasingly pervasive in public spaces.

The Ministry of Railways, too, is venturing into the realm of high-tech surveillance, with plans to roll out AI and facial recognition-enabled CCTV cameras inside trains across the nation. Their purpose? To curb crime. To achieve this, the system will employ face-cropping tools and matching servers, capturing facial data from all passengers, including children.

The cameras, stationed at every entry and exit point, will slice out faces from the live feed and whisk them away to a central server, where the data will be stored in real-time. As we race toward an age of digital efficiency, one might wonder: what price do we pay for convenience?
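The crop-and-forward pipeline described above can be sketched in miniature. Everything in the snippet below is hypothetical: the `Frame` and `Box` types stand in for a real video feed and face detector, and the plain list stands in for the network upload to a central server.

```python
from dataclasses import dataclass


@dataclass
class Box:
    """Bounding box of a detected face within a video frame."""
    x: int
    y: int
    w: int
    h: int


@dataclass
class Frame:
    """A single frame: a 2-D grid of pixel values plus detected face boxes."""
    pixels: list[list[int]]
    faces: list[Box]


def crop(frame: Frame, box: Box) -> list[list[int]]:
    """Slice a face region out of the frame, as the article describes."""
    return [row[box.x:box.x + box.w] for row in frame.pixels[box.y:box.y + box.h]]


def process_frame(frame: Frame, server: list) -> int:
    """Crop every detected face and forward it to the (stubbed) central server.

    Returns the number of crops sent."""
    for box in frame.faces:
        server.append(crop(frame, box))  # stands in for a real network upload
    return len(frame.faces)


# A tiny 4x4 "frame" with one detected face in the top-left 2x2 corner.
frame = Frame(
    pixels=[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]],
    faces=[Box(x=0, y=0, w=2, h=2)],
)
central_server: list = []
sent = process_frame(frame, central_server)
```

Even this toy version makes the privacy question concrete: once a crop lands on the central server, the passenger has no visibility into how long it is retained or what it is matched against.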

One of the gravest risks lies in data security. With AI systems collecting vast amounts of sensitive information, breaches can have devastating consequences. Unlike Europe’s General Data Protection Regulation (GDPR) or even China’s tightly controlled data systems, India’s surveillance expansion operates in a legal grey area. The country lacks comprehensive data protection laws, leaving its surveillance ecosystem vulnerable to misuse. The Digital Personal Data Protection (DPDP) Act was passed into law over a year ago, but its implementation remains in limbo. For many, this legal vacuum undermines trust in the system, turning a potentially beneficial tool into a source of concern.

While India has historically been wary of Chinese technology, particularly given security concerns and ongoing territorial disputes, Pakistan has been far more open to it.

Under the Lens: Rest of Asia

Pakistan has procured technology for its telecommunications and surveillance needs, particularly from the Chinese company Huawei, as per the AI Global Surveillance (AIGS) Index. Under this index, many Asian countries, including Thailand, Singapore, the Philippines, Malaysia, Laos, Indonesia, Hong Kong, Myanmar, Bangladesh, Kazakhstan, Kyrgyzstan, Tajikistan and Uzbekistan, have one thing in common: they use Huawei technology for AI surveillance.

Singapore is often described as a unique blend of democracy and authoritarianism, governed by a single party since independence. In terms of surveillance, the country offers a more measured approach, blending technological innovation with governance under its Smart Nation initiative. The city-state uses smart lampposts equipped with AI-driven cameras and sensors to monitor crowd density, manage traffic, and even detect air quality. During the COVID-19 pandemic, Singapore’s TraceTogether app was lauded for its effectiveness in contact tracing, though concerns arose over the government’s access to user data.

South Korea and Japan, known for their strong privacy regulations, have cautiously integrated AI surveillance. In Seoul, AI-enabled cameras monitor public spaces, capable of detecting sudden falls or aggressive movements to alert authorities. Japan, too, has used facial recognition for security at events like the Tokyo Olympics, though strict data protection measures ensured collected information was not misused. These countries exemplify how surveillance can be balanced with privacy safeguards, but even their systems are not immune to criticism.

A Look to the Future

AI surveillance continues to expand across Asia, with promises of safer cities and smarter governance. Its future will likely be defined by three key trends: greater integration of IoT devices, citizen awareness, and regulatory frameworks. Smart city projects, like in India, will continue to expand, creating interconnected systems that monitor everything from traffic patterns to environmental conditions.

At the same time, advocacy for privacy rights is growing, with citizens and watchdog organisations demanding stricter regulations and greater transparency. And it is true that the darker side of this technology cannot simply be ignored. Mass surveillance often operates in opaque ways, leaving citizens unaware of what data is collected or how it is used. The constant presence of cameras creates a chilling effect, where individuals self-censor for fear of being watched.

Another pitfall of technology advancing at such velocity is that it becomes readily available for use at a personal level. Take the example of two Harvard students who hacked Meta’s Ray-Ban smart glasses and installed facial recognition software. By merely looking at someone’s face, the glasses could pull up their name, address, age, biography and any other information available in online databases. However, the students told 404 Media that they carried out the experiment “to raise awareness of what is possible with this technology” and that they would not open-source it.

Unchecked surveillance is not just about who watches but how much power the watcher wields. From spy gadgets like pen cameras to open-source AI models that let users build apps analysing behaviour in real time, we have come a long way in surveillance technology. And as we stand in awe of this leap, it is crucial to confront its implications. The line between safety and spying grows thinner with every advancement, all the more so with the many capabilities of AI in the picture. Surveillance may offer safety, but without transparency and ethical governance, it risks creating a world where privacy is a privilege rather than a fundamental right. Ultimately, the success of this technology rests on trust: ensuring that the tools we create serve us, not control us.
