A few months after Narendra Modi was re-elected in 2019, India’s Parliament passed a discriminatory bill extending citizenship to refugees from six religious minority communities in the neighboring countries of Pakistan, Afghanistan and Bangladesh, while excluding Muslims. The bill sparked a series of protests across the country. The capital, Delhi, witnessed ghastly communal riots as members of the minority Muslim community were targeted by far-right groups rallying in support of the bill. To identify the alleged “rabble rousers and miscreants,” including protesters, law enforcement officials acknowledged using what they called the Automated Facial Recognition System.
A new study from the University of Michigan on the use of facial recognition in schools recommends that lawmakers and school administrators ban the technology in educational settings. The researchers behind the study write that facial recognition in schools “will likely have five types of implications: exacerbating racism, normalizing surveillance and eroding privacy, narrowing the definition of the ‘acceptable’ student, commodifying data, and institutionalizing inaccuracy. Because FR is automated, it will extend these effects to more students than any manual system could.” “Using facial recognition in schools amounts to unethical experimentation on children,” said Evan Greer (she/her), deputy director of the digital rights group Fight for the Future, which has been organizing to ban facial recognition for more than a year.
In an email message, Betsy Reed, editor-in-chief of The Intercept, reports: “Hundreds of thousands of files from the FBI and local police departments have been leaked, exposing serious abuses of power by law enforcement.” She writes that the Blue Leaks files “are like the Pentagon Papers for U.S. law enforcement.” The leaks show that political activists are targeted and monitored on social media, that there is widespread racial bias among police, and that police exaggerate the threat posed by antifa to justify violence against protesters.
Uprisings for racial justice are sweeping the country. Following the police murders of George Floyd, Breonna Taylor, and so many others, named and unnamed, America has finally reached its moment of reckoning. And politicians are starting to respond. But you can’t end police violence without ending police surveillance. That starts with banning facial recognition, a technology perfectly designed for the automation of racism. I live in Detroit, a city with more than 500,000 Black people. In my city, we live under constant surveillance. We are in a perpetual lineup. Our faces are caught on camera everywhere we go—harvested and analyzed by algorithms. Numerous studies have shown that facial recognition algorithms exhibit systemic racial and gender bias. Detroit’s police chief openly admitted that the department’s software is wrong up to 96 percent of the time.
Olympia, WA - Last week, a Washington state law went into effect that requires a warrant for ongoing and real-time facial recognition surveillance. The new law will not only help protect privacy in Washington state; it will also take a step toward hindering one aspect of the federal surveillance state. A coalition of 10 Democrats introduced Senate Bill 6280 (SB6280) on Jan. 14. The law requires law enforcement agencies to get a warrant “to engage in ongoing surveillance, to conduct real-time or near real-time identification, or to start persistent tracking,” with only a few exceptions. This includes using facial recognition technology to scan crowds, streets, or neighborhoods. Police can use facial recognition without a warrant when exigent circumstances exist, or with a court order authorizing use of the service for the sole purpose of locating or identifying a missing person or identifying a deceased person.
In 2017, a leak from the FBI revealed that the bureau was targeting Black activists organizing to end racist policies and practices, labeling them “Black Identity Extremists.” This is consistent with the FBI’s long history of investigating and harassing Black and brown activists. Organizations like Media Justice and the ACLU have been working to obtain information from the FBI about what it is doing and whom it is targeting, but the FBI has been putting barriers in their way. We speak with Myaisha Hayes of Media Justice about what they have learned so far and its impact on activists. Hayes also discusses their efforts to urge Congress to stop federal funding for surveillance of people exercising their constitutional rights and to educate activists about ways to protect themselves.
Many of the new surveillance powers now sought by the government to address the COVID-19 crisis would harm our First Amendment rights for years to come. People will be chilled and deterred from speaking out, protesting in public places, and associating with like-minded advocates if they fear scrutiny from cameras, drones, face recognition, thermal imaging, and location trackers. It is all too easy for governments to redeploy the infrastructure of surveillance from pandemic containment to political spying. It won't be easy to get the government to suspend its newly acquired tech and surveillance powers. When this wave of the public health emergency is over and it becomes safe for most people to leave their homes, they may find a world with even less political debate than when they left it.
UCLA is known for its strong academics and winning sports teams. But the school almost became known for something far more sinister: as the first university in the United States actively planning to use facial recognition surveillance on campus. Today, in a major victory for the movement against facial recognition, our Deputy Director Evan Greer received a statement directly from UCLA’s Administrative Vice Chancellor saying that the school is abandoning its plan in the face of community backlash, in the lead-up to a national day of action on March 2 to ban facial recognition from campuses. “We are beyond excited by the potential agenda-setting a top school like UCLA might bring about nationwide through the prohibition of facial recognition software and through listening to the students, workers, faculty and local community,” said Matthew William Richard, a third-year Political Science major at UCLA and Vice-Chair of the Campus Safety Alliance.
On behalf of leading consumer, privacy, and civil liberties organizations, we are calling on administrations to commit to not using facial recognition technology in schools (excluding personal uses, such as unlocking one’s own phone). This invasive and biased technology inherently violates the rights and liberties of students and faculty and has no place in our educational institutions. Facial recognition technology isn’t safe. It is biased and more likely to misidentify students of color, which can result in traumatic interactions with law enforcement, loss of class time, disciplinary action, and potentially a criminal record. The data collected is vulnerable to hackers, and we’ve seen that schools are ill-equipped to safeguard this data.
Anyone who happened to be loitering in the London borough of Greenwich on the evening of 16 January may have spotted a strange sight. Ten or so individuals, faces daubed in brightly painted patterns, winding their way in complete silence through rain-slicked streets, passing the borough’s sleek residential new-builds and empty redevelopment sites.
For years, the Denver public school system worked with Video Insight, a Houston-based video management software company that centralized the storage of video footage used across its campuses. So when Panasonic acquired Video Insight, school officials simply transferred the job of updating and expanding their security system to the Japanese electronics giant. That meant new digital HD cameras and access to more powerful analytics software, including Panasonic’s facial recognition, a tool the public school system’s safety department is now exploring.
We Scanned Thousands Of Faces In DC Today To Show Why Facial Recognition Surveillance Should Be Banned
Today, activists working with digital rights group Fight for the Future conducted live facial recognition surveillance in the halls of Congress and the area surrounding Capitol Hill, to show why this technology is so dangerous that it should be banned. Using Amazon’s commercially available Rekognition software — running on smartphones strapped to our heads — our team ran 13,732 biometric face scans in Washington, DC. By comparing live footage against a database we had assembled, the system successfully identified a member of Congress in real time: Representative Mark DeSaulnier of California.
A growing number of districts are deploying cameras and software to prevent attacks. But the systems are also used to monitor students—and adult critics. On a steamy evening in May, 9,000 people filled Stingaree Stadium at Texas City High School for graduation night. A rainstorm delayed ceremonies by a half hour, but the school district’s facial recognition system didn’t miss a beat. Cameras positioned along the fence line allowed algorithms to check every face that walked in the gate.
The House Committee on Oversight and Reform is officially investigating military and private-sector use of facial recognition, according to a letter sent to top military officials in June. The letter, obtained by OneZero under a public records request, outlines a broad inquiry into all weapons and security technologies used by the U.S. Department of Defense, Army, Navy, and Air Force, as well as accuracy analyses and partnerships with public or private institutions. “The operational benefits of facial recognition technology for the warfighter are promising,” the letter says.
Groups representing 15 million+ people plan to flood local, state, and federal lawmakers with letters and calls as a bipartisan backlash to biometric surveillance reaches a boiling point

Opposition to facial recognition is reaching a boiling point. Today, nearly 30 organizations from across the political spectrum announced they had endorsed the BanFacialRecognition.com campaign calling for a federal ban on law enforcement use of facial recognition technology. The groups, which represent more than 15 million combined members, plan to flood lawmakers with emails and calls from constituents.