

EU Advances AI Rules Restricting Facial Recognition

Digital rights groups on Wednesday applauded lawmakers across the European Union after they passed a draft law that would strictly regulate the use of artificial intelligence, including facial recognition technology and chatbots, potentially setting a new standard for protecting the public from the misuse of AI, though they noted that some provisions could exclude vulnerable people. The European Parliament cleared a major legislative hurdle as it voted in favor of the draft rules in the Artificial Intelligence Act, with 499 lawmakers supporting the provisions, 28 opposing, and 93 abstaining.

Should We Be Concerned About What ChatGPT ‘Thinks’ About Latin America?

ChatGPT is a powerful AI chatbot that is as easy to use as Google and provides more direct answers to users’ questions. Ask it anything you like, and you will receive an answer that sounds like it was written by a human, based on knowledge and writing skills gained from massive amounts of data from across the internet. Because of its growing popularity, political questions have already arisen around it — for example, assertions that it has a left-wing bias, or concerns about privacy that led to the bot being banned in Italy just this month. It is already banned in China and Russia. A search on Google reveals little or no discussion about the relevance of ChatGPT to writing or research about Latin America.

Open Mic Protests Plan For AI-Powered Taser Drones In Schools

In May 2022, Axon’s own AI Ethics Board voted against a pilot program with law enforcement due to concerns over surveillance and abuse, particularly against people of color. However, weeks later, in the wake of the Uvalde tragedy, Axon announced its intention to embed Taser-equipped drones in schools to stop mass shootings, using AI surveillance and virtual reality simulations. Nine of the thirteen members of the AI Ethics Board resigned, stating they had "lost faith in Axon's ability to be a responsible partner." Axon shareholders are now requesting that the company discontinue the development and planned sale of a remotely operated Taser drone system, which poses serious risks to privacy, racial equity, and physical safety.

Ten Predictions For Labor In 2023

It’s December, which means that it is, by law, the time when we look ahead at the coming year and make shockingly insightful predictions about what lies ahead. A year ago, we made Ten Predictions for the Year Ahead in Labor that were, it turns out, very good. More on that below. With that track record of quality, you must feel compelled to read our predictions for 2023. Joys, disappointments, and killer robots, ahoy! AI is a labor problem. Have you played with DALL-E 2, the artificial intelligence system that can spit out professional-quality illustrations based on any prompts you give it? How about ChatGPT, which can write essays, computer code, or anything else as you converse with it? They are amazing pieces of technology, and they are also a big, flashing sign of gargantuan labor problems ahead.

UN Fails To Agree On ‘Killer Robot’ Ban

Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one. The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.

Eric Schmidt Cashes In On Artificial Intelligence Arms Race

Mountain View, California - The United States is leading a new artificial intelligence arms race that could spell the end of humanity. Back in 2014, a few years before he died, Stephen Hawking warned us about artificial intelligence: “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Today, artificial intelligence, or AI, is the centerpiece of the U.S. empire’s plan to maintain global dominance. AI is essentially computer super-intelligence that does what human brains cannot. Exponential technological advances have rendered our human brains, constrained by the slow process of biological evolution, inferior to modern supercomputers.

How Artificial Intelligence Depends on Low-Paid Workers

When thinking of AI futures, the classic sci-fi tropes tell us that machines will one day take over and replace humans, with robots rendering work as we know it obsolete: the outcome will either be a post-work utopia or robot-human war. But that future is here, and the reality is far more mundane. Instead of eliminating human work, the AI industry is creating new ways of exploiting and obscuring workers. Lurking behind the amorphous and often abstract notion of ‘AI’ are material realities. Eighty percent of machine learning development consists of repetitive data preparation tasks and ‘janitorial’ work such as collecting data, labelling data to feed algorithms, and data cleaning – tasks that are a far cry from the high glamour of the tech CEOs who parade their products on stage.

Artificial (Un)intelligence And The US Military

With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones, and autonomous submarines. So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields.

I Quit My Job To Protest My Company’s Work On Building Killer Robots

When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place. We founded Clarifai 4 Good where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took our social responsibility seriously.

Artificial Intelligence May Destroy Humanity By Accident

The U.S. military has quietly said it wants 70 unmanned self-driving supply trucks by 2020. And seeing as $21 trillion has gone unaccounted for at the Pentagon over the past 20 years, when the Pentagon wants something, it tends to get that something. Of course supply trucks in and of themselves don’t sound so bad. Even if the self-driving trucks run over some poor unsuspecting saps, that will still be the least destruction our military has ever manifested. But because I’ve read a thing or two about our military, I’ll assume that by “supply trucks,” they mean “ruthless killing machines.”

Controversial AI ‘Lie Detectors’ Coming To EU Airports, Border Crossings

Several European airports will deploy an AI-powered lie detector at border checkpoints in a trial run of the new technology, reports CNN. When a passenger approaches customs, they will be asked a series of questions by a "virtual border guard avatar," which will use artificial intelligence to monitor their faces and quickly determine whether they are lying, in an effort to reduce congestion. The avatar will become "more skeptical" and change its tone of voice if it believes a person has lied, before referring suspect passengers to a human guard and allowing those believed to be honest to pass through, said Keeley Crockett of Manchester Metropolitan University in England, who was involved in the project.

Human Rights And Artificial Intelligence: The Challenge Of An Era

A new set of principles—the Toronto Declaration—aims to put human rights front and centre in the development and application of machine learning technologies. In May 2018, Amnesty International, Access Now, and a handful of partner organizations launched the Toronto Declaration on protecting the right to equality and non-discrimination in machine learning systems. The Declaration is a landmark document that seeks to apply existing international human rights standards to the development and use of machine learning systems (or “artificial intelligence”). Machine learning (ML) is a subset of artificial intelligence. It can be defined as “provid[ing] systems the ability to automatically learn and improve from experience without being explicitly programmed.”

Google Sets Limits But Allows Work For Military

Earlier this year, Google CEO Sundar Pichai described artificial intelligence as more profound to humanity than fire. Thursday, after protests from thousands of Google employees over a Pentagon project, Pichai offered guidelines for how Google will—and won’t—use the technology. One thing Pichai says Google won’t do: work on AI for weapons. But the guidelines leave much to the discretion of company executives and allow Google to continue to work for the military. The ground rules are a response to more than 4,500 Googlers signing a letter protesting the company’s involvement in a Pentagon project called Maven that uses machine learning to interpret drone surveillance video. The dissenting employees asked Google to swear off all military work. Pichai’s response? We hear you, but you can trust us to do this responsibly.

Google Employee Opposition Derails Military AI Project

Google will not seek another contract for its controversial work providing artificial intelligence to the U.S. Department of Defense for analyzing drone footage after its current contract expires. Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud’s business. Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week. A Google spokesperson did not immediately respond to questions about Greene’s comments.

Urgent End Of Year Fundraising Campaign

Online donations are back! Keep independent media alive. 

Due to the attacks on our fiscal sponsor, we were unable to raise funds online for nearly two years. As the bills pile up, your help is needed now to cover the monthly costs of operating Popular Resistance.

