Above Photo: From Activistpost.com
We expect this new Facebook project, claiming to promote ‘trusted news,’ is a veiled attack against websites like ours. While we are often more accurate in reporting the news than so-called trusted news outlets, we are critical of the narrative of the power structure, which includes both parties, their donors, their media outlets and other mouthpieces. We will be monitoring this activity and letting readers know how it develops. KZ
Facebook CEO Mark Zuckerberg stated Tuesday that his company has begun to implement a system that will rank news organizations by trustworthiness while suppressing content that scores poorly on that metric, CNET reported.
Zuckerberg said that Facebook has gathered data on how its users perceive news brands by asking them to identify whether they have heard of certain publications and if they trust them.
“We put that data into the system, and it is acting as a boost or a suppression, and we’re going to dial up the intensity of that over time,” Zuckerberg said. “We feel like we have a responsibility to further break down polarization and find common ground.”
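To illustrate what a survey-based “boost or suppression” could look like in practice, here is a minimal sketch in Python. It is purely hypothetical: the function name, score range, and intensity parameter are assumptions for illustration, not Facebook’s actual ranking code.

```python
# Hypothetical sketch of a "boost or suppression" based on a publisher trust score.
# NOT Facebook's actual algorithm; names, ranges, and weights are illustrative only.

def rank_score(base_relevance: float, publisher_trust: float, intensity: float = 1.0) -> float:
    """Scale a post's base relevance by a trust multiplier.

    publisher_trust: assumed survey-derived score in [0, 1], where 0.5 is neutral.
    intensity: how strongly trust boosts or suppresses (the "dial" Zuckerberg mentions).
    """
    # Trust above 0.5 boosts the post; trust below 0.5 suppresses it.
    multiplier = 1.0 + intensity * (publisher_trust - 0.5)
    return base_relevance * max(multiplier, 0.0)

# Example: identical posts from a widely trusted vs. widely distrusted outlet.
print(rank_score(10.0, publisher_trust=0.9, intensity=2.0))  # boosted
print(rank_score(10.0, publisher_trust=0.2, intensity=2.0))  # suppressed
```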
Zuckerberg met with a group of mainstream media executives at Rosewood Sand Hill hotel in Menlo Park after presenting a keynote speech at Facebook’s annual F8 developer conference Tuesday, where he opened up about the new policies governing news content on the platform.
Zuckerberg further explained that the company will invest “billions” of dollars into artificial intelligence and tens of thousands of human moderators to keep “fake news and deliberate propaganda off the site, especially during elections.”
“We’re essentially going to be losing money on doing political ads,” stated Zuckerberg.
“We deployed AI tools that have taken down tens of thousands of accounts,” he added.
In December of last year, Facebook began testing some of these “features,” according to Mashable. Facebook users can now mark a story in their news feed as fake. A flagged story will then carry a “Disputed by fact-checkers” notice under its title, along with a link to a corresponding article explaining why it might be false, as seen by a Gizmodo reporter on Twitter. These posts will then appear lower in the news feed, throttled by Facebook’s algorithm, and users will receive a warning before sharing the story on their Facebook feeds, RT reported.
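The disputed-story flow described above (a fact-check label, demotion in the feed, and a warning before sharing) can be sketched roughly as follows. Every name and the demotion factor here are assumptions for illustration; this is not Facebook’s implementation.

```python
# Hypothetical sketch of the "disputed" flow: a flagged story gets a fact-check
# label, is pushed lower in the feed, and triggers a warning before sharing.
# Purely illustrative; not Facebook's code.

from dataclasses import dataclass, field

DEMOTION_FACTOR = 0.3  # assumed throttling strength

@dataclass
class Story:
    title: str
    score: float                      # feed-ranking score
    disputed: bool = False
    fact_check_links: list = field(default_factory=list)

def apply_dispute(story: Story, fact_check_url: str) -> None:
    """Mark a story as disputed, attach the fact-check link, and demote it."""
    story.disputed = True
    story.fact_check_links.append(fact_check_url)
    story.score *= DEMOTION_FACTOR

def share(story: Story) -> str:
    """Return a share-warning prompt for disputed stories."""
    if story.disputed:
        return f"Warning: '{story.title}' has been disputed by fact-checkers. Share anyway?"
    return f"Shared '{story.title}'."
```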
Stories marked as disputed may be subject to “fact-checking” by organizations such as Politifact or Snopes, two notoriously biased “fact-checking” sites that have glaring credibility issues. Both organizations are slanted to the left and have dismissed credible government documents on various topics. Other “fact-checkers” include the very fake mainstream media itself – ABC News, FactCheck.org, and the Associated Press.
The claims that fake news influenced the election have been disputed by a Stanford University/NYU study authored by NYU economics professor Hunt Allcott and Stanford economics professor Matthew Gentzkow, who found that fake coverage did not change the outcome of the election.
The problem with going after news media while some genuinely fake news websites do exist is that it threatens free thought in the process. Who can determine what is and isn’t propaganda? One person’s “political propaganda” may be another person’s opinion about a politician they have determined, through rigorous research, to be corrupt.
This comes as Facebook faces blowback for allowing its users’ data to be shared with political consulting firm Cambridge Analytica. Shockingly, Facebook recently admitted that it scans the photos and links its users post on the social media giant and will review the text of messages you send if something is flagged.
Not only was Facebook scanning images and links, but the social media giant also admitted, in a blog post, to storing call and text metadata. The data included who each call or text was to or from, along with the date, time and duration of each phone call.
“Call and text history logging is part of an opt-in feature for people using Messenger or Facebook Lite on Android,” the company wrote. “This helps you find and stay connected with the people you care about and provides you with a better experience across Facebook.” Once this feature is enabled, the Messenger app begins “to continuously upload your contacts as well as your call and text history.”
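To make concrete what the logged call and text metadata described above amounts to (who the contact was, plus date, time and duration), here is a hypothetical record layout. The field names are illustrative assumptions, not Facebook’s actual schema.

```python
# Hypothetical representation of the call/text metadata described above.
# Field names are illustrative assumptions, not Facebook's schema.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContactEvent:
    kind: str                                # "call" or "sms"
    counterpart: str                         # phone number or contact name
    direction: str                           # "incoming" or "outgoing"
    timestamp: datetime                      # date and time of the event
    duration_seconds: Optional[int] = None   # calls only; texts have no duration

log = [
    ContactEvent("call", "+1-555-0100", "outgoing", datetime(2018, 3, 21, 14, 5), 320),
    ContactEvent("sms",  "+1-555-0100", "incoming", datetime(2018, 3, 21, 14, 15)),
]
```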
Last month, reports alleged that U.K.-based political research firm Cambridge Analytica (a shell company of SCL Group) harvested data on millions of Facebook users even though only about 270,000 users downloaded a psychology quiz app. (More on SCL Group here in my Steemit article: “The TRUTH About The Cambridge Analytica Scandal Is Bigger Than Just Facebook.”)
Another more recent Facebook blog update noted that “most people on Facebook could have had their public profile scraped” by the shady political firm.
Just last year, a U.S. court dismissed nationwide litigation accusing Facebook of tracking users’ Internet activity even after they logged out of the social media website.
Now, thanks to the Cambridge Analytica scandal, a whistleblower has emerged: Christopher Wylie. Wylie appeared before a committee of British MPs, delivering bombshell testimony noting that Facebook has the ability to spy on all of its users in their homes and offices.
“There’s been various speculation about the fact that Facebook can, through the Facebook app on your smartphone, listen in to what people are talking about and discussing and using that to prioritize the advertising as well,” committee chairman Damian Collins said. “Other people would say, no, they don’t think it’s possible. It’s just that the Facebook system is just so good at predicting what you’re interested in that it can guess.”
“On a comment about using audio and processing audio, you can use it for, my understanding generally of how companies use it… not just Facebook, but generally other apps that pull audio, is for environmental context,” Wylie said. “So if, for example, you have a television playing versus if you’re in a busy place with a lot of people talking versus a work environment.” He clarified, “It’s not to say they’re listening to what you’re saying. It’s not natural language processing. That would be hard to scale. But to understand the environmental context of where you are to improve the contextual value of the ad itself” is possible.
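Wylie’s distinction between transcribing speech and classifying “environmental context” can be sketched as below. This is a hypothetical illustration of the general idea he describes, not based on any actual Facebook code; the features and thresholds are assumptions.

```python
# Hypothetical sketch of "environmental context" from audio, as Wylie describes:
# classify the ambient setting (TV playing, crowd, quiet office) rather than
# transcribing what people say. Features and thresholds are illustrative assumptions.

def classify_environment(ambient_loudness: float, speaker_count_estimate: int,
                         music_or_tv_detected: bool) -> str:
    """Return a coarse environment label, used only for ad context, not content."""
    if music_or_tv_detected:
        return "tv_or_media_playing"
    if speaker_count_estimate > 3 and ambient_loudness > 0.6:
        return "busy_public_place"
    if ambient_loudness < 0.3:
        return "quiet_work_environment"
    return "unknown"

# Example: an ad system could pick different creatives per environment label.
print(classify_environment(0.7, speaker_count_estimate=5, music_or_tv_detected=False))
```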