The Israeli army’s expanded authorization for bombing non-military targets, the loosening of constraints regarding expected civilian casualties, and the use of an artificial intelligence system to generate more potential targets than ever before, appear to have contributed to the destructive nature of the initial stages of Israel’s current war on the Gaza Strip, an investigation by +972 Magazine and Local Call reveals. These factors, as described by current and former Israeli intelligence members, have likely played a role in producing what has been one of the deadliest military campaigns against Palestinians since the Nakba of 1948.
On August 28th, Deputy Secretary of Defense Kathleen Hicks chose the occasion of a three-day conference organized by the National Defense Industrial Association (NDIA), the arms industry’s biggest trade group, to announce the “Replicator Initiative.” Among other things, it would involve producing “swarms of drones” that could hit thousands of targets in China on short notice. Call it the full-scale launching of techno-war. Her speech to the assembled arms makers was yet another sign that the military-industrial complex (MIC) President Dwight D. Eisenhower warned us about more than 60 years ago is still alive, all too well, and taking a new turn. Call it the MIC for the digital age.
Over the last decade, the entertainment industry has shifted away from legacy distribution models like film and television and embraced a streaming-first model. The move has been a lucrative one, bringing billions of dollars in revenue to the industry. But those profits haven’t reached working actors and writers. Some 87% of actors earn less than $26,000 per year; many writers have to work second jobs to make ends meet. And so, for the first time since 1960, members of the Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA) are on strike simultaneously — and Hollywood has effectively shut down.
The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the concerning – AI will eliminate many jobs – to the overwrought – AI poses an extinction-level threat to humanity. All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity. But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor.
The recent explosion in the stunning power of artificial intelligence is likely to transform virtually every domain of human life in the near future, with effects that no one can yet predict. The breakneck rate at which AI is developing is such that its potential impact is almost impossible to grasp. As Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, demonstrate in their landmark presentation, The AI Dilemma, AI accomplishments are beginning to read like science fiction. After hearing just three seconds of a human voice, for example, an AI system can autocomplete the sentence being spoken with a voice so perfectly matched that no one can distinguish it from the real thing.
Digital rights groups on Wednesday applauded lawmakers across the European Union after they passed a draft law that would strictly regulate the use of artificial intelligence, including facial recognition technology and chatbots, potentially setting a new standard for protecting the public from the misuse of AI—but noted that some provisions could exclude vulnerable people. The European Parliament cleared a major legislative hurdle as it voted in favor of the draft rules in the Artificial Intelligence Act, with 499 lawmakers supporting the provisions, 28 opposing, and 93 abstaining.
ChatGPT is a powerful AI chatbot that is as easy to use as Google and provides more direct answers to users’ questions. Ask it anything you like, and you will receive an answer that sounds like it was written by a human, based on knowledge and writing skills gained from massive amounts of data from across the internet. Because of its growing popularity, there are already political questions about it — for example, assertions that it has a left-wing bias, or concerns about privacy issues, which led to the bot being banned in Italy just this month. It is already banned in China and Russia. A search on Google reveals little or no discussion about the relevance of ChatGPT to writing or research about Latin America.
In May 2022, Axon’s own AI Ethics Board voted against a pilot program with law enforcement due to concerns over surveillance and abuse, particularly against people of color. However, weeks later, in the wake of the Uvalde tragedy, Axon announced its intention to embed Taser-equipped drones in schools to stop mass shootings, using AI surveillance and virtual reality simulations. Nine of the thirteen members of the AI Ethics Board resigned, stating they had "lost faith in Axon's ability to be a responsible partner." Axon shareholders are now requesting that the company discontinue the development and planned sale of a remotely operated Taser drone system, which poses serious risks to privacy, racial equity, and physical safety.
It’s December, which means that it is, by law, the time when we look ahead at the coming year and make shockingly insightful predictions about what lies ahead. A year ago, we made Ten Predictions for the Year Ahead in Labor that were, it turns out, very good. More on that below. With that track record of quality, you must feel compelled to read our predictions for 2023. Joys, disappointments, and killer robots, ahoy! AI is a labor problem. Have you played with DALL-E 2, the artificial intelligence system that can spit out professional-quality illustrations based on any prompts you give it? How about ChatGPT, which can write essays, computer code, or anything else as you converse with it? They are amazing pieces of technology, and they are also a big, flashing sign of gargantuan labor problems ahead.
Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one. The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn’t reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.
Mountain View, California - The United States is leading a new artificial intelligence arms race that could spell the end of humanity. Back in 2014, a few years before he died, Stephen Hawking warned us about artificial intelligence: “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Today, artificial intelligence, or AI, is the centerpiece of the U.S. empire’s plan to maintain global dominance. AI is essentially computer super-intelligence that does what human brains cannot. Exponential technological advances have rendered our human brains, constrained by the slow process of biological evolution, inferior to modern supercomputers.
When thinking of AI futures, the classic sci-fi tropes tell us that machines will one day take over and replace humans, with robots rendering work as we know it obsolete: the outcome will be either a post-work utopia or a robot-human war. But that future is here, and the reality is far more mundane. Instead of eliminating human work, the AI industry is creating new ways of exploiting and obscuring workers. Lurking behind the amorphous and often abstract notion of ‘AI’ are material realities. Some 80 percent of machine learning development consists of repetitive data preparation and ‘janitorial’ work — collecting data, labelling data to feed algorithms, and cleaning data — tasks that are a far cry from the high glamour of the tech CEOs who parade their products on stage.
With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones, and autonomous submarines. So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields.
When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place. We founded Clarifai 4 Good, where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took its social responsibility seriously.
The U.S. military has quietly said it wants 70 unmanned self-driving supply trucks by 2020. And seeing as $21 trillion has gone unaccounted for at the Pentagon over the past 20 years, when the Pentagon wants something, it tends to get that something. Of course, supply trucks in and of themselves don’t sound so bad. Even if the self-driving trucks run over some poor unsuspecting saps, that will still be the least destruction our military has ever manifested. But because I’ve read a thing or two about our military, I’ll assume that by “supply trucks,” they mean “ruthless killing machines.”