
Larry Ellison's AI-Powered Surveillance Dystopia Is Already Here

"Citizens will be on their best behavior, because we’re constantly recording and reporting everything that’s going on."

There’s a comment that’s become very popular on social media whenever a new, horrifying surveillance practice is revealed: “1984 was supposed to be a warning, not an instruction manual!” 

This sentiment has become a bit tiresome, in part because saying it doesn’t really mean anything, and in part because our real world has long since surpassed George Orwell's dystopian nightmare in a few ways. But invoking 1984 as a warning, not an instruction manual, feels appropriate here, with Oracle cofounder Larry Ellison, the fifth-richest person in the world, pitching his exciting vision for an always-on, 1984-style, AI-powered surveillance fever dream to an audience of investors.

In the remarks, which were first reported by Business Insider, Ellison said police body cameras, car cameras, drones, and other cameras will be always on and streaming to Oracle data centers, where AI will constantly be monitoring the feeds.

“The police would be on their best behavior, because we’re constantly watching and recording everything that’s going on,” Ellison said. “Citizens will be on their best behavior, because we’re constantly recording and reporting everything that’s going on. It’s unimpeachable. The cars have cameras on them.”

Ellison’s entire remarks are worth reading, because what he is pitching is a comprehensive surveillance apparatus that touches most parts of being in public. More importantly, every idea he is pitching currently exists in some form, and each has massive privacy, bias, legal, or societal issues that have prevented it from becoming the game-changing technology that somehow makes us all safer.

Ellison: “Securing schools: We think we can absolutely lock down schools so that dramatically reduce the case of anyone being on campus that doesn’t belong on campus, and immediately alert someone, use AI cameras to immediately recognize that.”

The idea that schools can be made safe with technology (or armed teachers, or more police) rather than, say, making guns harder to access, has become a cash cow for surveillance tech companies in the age of near-constant school shootings. Many schools have begun to implement AI-powered weapon detectors, which are notoriously inaccurate and which have, for example, detected notebooks as “weapons,” missed actual weapons, and led to what one administrator called “the least safe day” because of mass confusion associated with the scanners. Students are also being monitored in the hallways, in the bathroom, on social media, and on their school-issued devices. It’s unclear that any of this has made schools any safer. Axon, meanwhile, pitched taser-equipped drones that would patrol schools, an idea that was quickly shelved after widespread public outrage and the resignation of much of its own ethics board.

Ellison: “We completely redesigned body cameras. Our body cameras cost $70, normal body camera costs, I don’t know, $7,000. Our body cameras are simply lenses, two lenses attached to a vest attached to the smartphone you’re wearing. We take the video of the police officer. And the camera is always on. You don’t turn it on and off.

[A police officer can say], ‘Oracle, I need two minutes to take a bathroom break,’ and we’ll turn it off. The truth is, we don’t really turn it off. What we do is, we record it, so no one can see it, so no one can get into that recording without a court order. So you get the privacy you requested, but court order, we will—a judge will order, that so-called bathroom break. Something comes up, I’m going to lunch with my friends. ‘Oracle, I need an hour of privacy for lunch with my friends.’ God bless. We won’t listen in, unless there’s a court order. But we transmit the video back to headquarters, and AI is constantly monitoring video.”

Ellison is correct that police turning off their body cameras is a well-documented problem. Intuitively, you would also think that body cameras would improve police behavior. But the evidence for this is not strong: researchers have found that wearing a body camera does not make a statistically significant or consistent difference in police behavior, in part because officers are regularly filmed committing acts of brutality and still do not face severe consequences.

Public access to body camera footage is also incredibly uneven; public records laws differ in each state about whether that footage can be obtained by journalists and police accountability organizations. Ellison is proposing a situation here where the footage would be held and analyzed not by a public police department but by Oracle and Oracle’s AI systems. “We won’t listen in, unless there’s a court order” means, of course, that it is listening in, and has all sorts of implications for who can access this sort of footage, when, and under what circumstances.

Ellison: “Remember this terrible case in Memphis where the five police officers basically beat to death another citizen in Memphis? Well, that can’t happen because it’d be on TV at headquarters. Everyone would see it. Your body cameras would be transmitting that.

The police would be on their best behavior, because we’re constantly watching and recording everything that’s going on. Citizens will be on their best behavior, because we’re constantly recording and reporting everything that’s going on. It’s unimpeachable. The cars have cameras on them. I think we have a squad car someplace. Those applications—we’re using AI to monitor the video. So that altercation that occurred in Memphis, the chief of police would be immediately notified. It’s not people that are looking at those cameras, it’s AI that’s looking at the camera, [saying] ‘No, no no, you can’t do this. That’s an event.’ An alarm is going to go off and we’re going to have supervision. Every police officer is going to be supervised at all times. And if there’s a problem, AI will report it to the appropriate person, whether it’s the sheriff or the chief or whoever we need to take control of the situation.”

AI-powered and connected smart cameras are already in use in the United States and across the world. So are automated license plate readers, AI-powered gunshot-detecting microphones, connected home security cameras, facial recognition tech, and more. Tesla footage is already being subpoenaed and used in police investigations.

Crime still exists, and false positives and cases of mistaken identity are common across most of these technologies. Companies like PredPol, ShotSpotter, Flock, Fusus, Axon, and many others have been pitching the idea of predictive and instantly reactive policing as a deterrent for many years.

These technologies are anything but “unimpeachable.” There have been numerous incidents of Black men being misidentified by facial recognition technology, of police automatically responding to a supposed gunshot that was not a gunshot, and of over-policing for minor crimes or minor disturbances. In South Africa, for example, AI-connected smart surveillance cameras have been accused of creating a “new Apartheid” because of bias within these systems.

Ellison: “We have drones. A drone gets out there way faster than a police car. You shouldn’t have high-speed chases with cars. You just have the drone follow a car, it’s very simple. A new generation of autonomous drones. A forest fire—the drone spots a forest fire and then the drone drops down and looks around to see if there’s a human being near that heat bloom, and someone else either had an unattended campfire that caught fire, or it’s arson. We can do all of that. It’s all done autonomously with AI.”

This system, called “drones as first responders,” also already exists, and it has problems, too. Drones are being used to automatically surveil homeless encampments, house parties, teens being loud, and a person “bouncing a ball against a garage.” Millions of dollars have also been poured into AI and drone tech for preventing forest fires, with underwhelming results so far.

This is all to say that Larry Ellison’s surveillance fever dream isn’t actually a fever dream: all of these technologies already exist in some form, and many of them have not measurably driven down crime, solved the complicated societal, political, and economic problems that lead to crime in the first place, or changed the power structures that protect police at every turn. These technologies have collectively cost billions of dollars, made surveillance companies very rich, and created new privacy problems. Big Brother is here, and, surprise, having AI “supervision” has not created a crimeless utopia.
