Janine Jackson interviewed Fight for the Future’s Evan Greer about the Kids Online Safety Act for the June 9, 2023, episode of CounterSpin. This is a lightly edited transcript.
Janine Jackson: Louisiana just banned abortion at six weeks, before many people even know they’re pregnant, while also saying 16-year-old girls are mature enough to marry.
Arkansas says there’s no need for employers to check the age of workers they hire. As one state legislator put it, “There’s no reason why anyone should get the government’s permission to get a job.”
And Wisconsin says 14-year-olds, sure, can serve alcohol. Iowa says they can shift loads in freezers and meat coolers.
Simultaneously and in the same country, we have a raft of legislation saying that young people should not be in charge of what they look at online. Bone saws: cool. TikTok: bad.
The way this country thinks about young people is odd, you could say. “Incoherent” would be another word.
When it comes to the online stuff, there seem to be some good intentions at work. Anyone who’s been on the internet can see how it can be manipulative and creepy. But are laws like the Kids Online Safety Act the appropriate way to address those concerns?
We’ll talk about that now with Evan Greer, director of the group Fight for the Future. She joins us now by phone. Welcome back to CounterSpin, Evan Greer.
Evan Greer: Thanks so much for having me. Always happy to chat.
JJ: Let’s start specifically with KOSA, with the Kids Online Safety Act, because it’s a real piece of legislation, and there are things that you and other folks are not disputing: that big tech companies do have practices that are bad for kids, and especially bad for some vulnerable kids.
But the method of addressing those concerns is the question. What would KOSA do that people may not understand, in terms of the impact on, ostensibly, those young people lawmakers tell us they care about?
EG: Yeah, and I think it’s so important that we do start from the acknowledgement that big tech companies are doing harm to our kids, because it’s just not acceptable to pretend otherwise.
There is significant evidence to suggest that these very large corporations are engaging in business practices that are fundamentally incompatible with human rights, with democracy, but also with what we know young people, and really everyone, need: access to online information and community, rather than having their data harvested and information shoved down their throats in a way that enriches companies rather than empowering young people and adults.
And so when we look at this problem, I think it is important that we start there, because there is a real problem, and the folks pushing this legislation often like to characterize those of us who oppose it as big tech shills or whatever.
It’s hard for me not to laugh at that, given that I’ve dedicated the better part of my adult life to confronting these big tech companies and their surveillance-capitalist business model, and working to dismantle it.
But I think it’s important that we say very clearly that we oppose these bills not because we think they are an inappropriate trade-off between human rights and children’s safety. We oppose these bills because they will make children less safe, not more safe.
And it’s so important that we make that clear, because we know from history that politicians love to wrap any legislation or regulation they would like to advance, while avoiding political opposition, in the wrapping paper of protecting children.
It is, of course, very difficult for any elected official to speak out against or vote against a bill called the Kids Online Safety Act, regardless of whether that bill actually makes kids safer online or not. And so what I’m here to explain a bit is why this legislation will actually make kids less safe.
It’s important to understand a few things. So one is that KOSA is not just a bill that focuses on privacy or ending the collection of children’s data. It’s a bill that gives the government control over what content platforms can recommend to which users.
And this is, again, kind of well-intentioned, trying to address a real problem, which is that because platforms like Instagram and YouTube employ this surveillance-advertising and surveillance-capitalist business model, they have a huge incentive to algorithmically recommend content in a way that’s maximized for engagement, rather than in a way that is curated or attempting to promote helpful content.
Their algorithms are designed to make them money. And so because of that, we know that platforms often algorithmically recommend all kinds of content, including content that can be incredibly harmful.
That’s the legitimate problem that this bill is trying to solve, but, unfortunately, it would actually make that problem worse.
And the way it would do that is it creates what’s called a broad duty of care that requires platforms to design their algorithmic recommendation systems in a way that has the best interest of children in mind.
And it specifies what they mean by that, in terms of tying it to specific mental health outcomes, like eating disorders or substance abuse or anxiety or depression, and basically says that platforms should not be recommending content that causes those types of disorders.
Now, if you’re sticking with me, all of that sounds perfectly reasonable. Why wouldn’t we want to do that? The problem is that the bill gives the authority to determine and enforce that to state attorneys general.
And if you’ve been paying attention at all to what’s happening in the states right now, you would know that state attorneys general across the country, in red states particularly, are actively arguing, right now today, that simply encountering LGBTQ people makes kids depressed, causes them to be suicidal, gives them mental health disorders.
They are arguing that providing young people with gender-affirming care that’s medically recommended, and where there is medical consensus, is a form of child abuse.
And so while this bill sounds perfectly reasonable on its face, it utterly fails to recognize the political moment that we’re in, and rather than making kids safer, what it would do is empower the most bigoted attorneys general and law enforcement officers in the country to dictate what content young people can see in their feeds.
And that would lead to widespread suppression, not just of LGBTQ content, or content related to perhaps abortion and reproductive health, but really suppression of important but controversial topics across the board.
So, for example, the bill’s backers envision a world where this bill leads to less promotion of content that promotes eating disorders.
In reality, the way this bill would work, it would just suppress all discussion of eating disorders among young people, because at scale, a platform like YouTube or Instagram is not going to be able to make a meaningful determination between, for example, a video that’s harmful in promoting eating disorders and a video where a young person is simply speaking about their experience with an eating disorder, how they sought out help and support, and how other young people can do the same.
In practice, these platforms are simply going to use AI, as they’ve already been doing, more aggressively to filter content. That’s the only way that they could meaningfully comply with a bill like KOSA.
And what we’ll see is exactly what we saw with SESTA/FOSTA, the last major change to Section 230. That was a very similar bill, intended to address a real problem, online sex trafficking. Instead, it actually made it harder for law enforcement to prosecute actual cases of sex trafficking, while having a detrimental effect on consensual sex workers: the online spaces they used to keep themselves safe, to screen clients, and to find work in ways that were safer for them were shut down almost overnight, because of this misguided legislation that was supposed to make them safer.
And so we’re now in a moment where we could actually see the same happen, not just for content related to sex and sexuality, but for an enormous range of incredibly important content that our young people actually need access to.
This is cutting young people off from life-saving information and online community, rather than giving them what they need, which is resources, support, housing, healthcare. Those are the types of things that we know actually prevent harms like child exploitation.
But unfortunately, lawmakers seem more interested in trampling the First Amendment, and putting the government in charge of what content can be recommended, than in addressing the material conditions that the evidence suggests would, if addressed, actually reduce the types of harms lawmakers say they’re trying to reduce.
JJ: Thank you. And I just wanted to say, I’m getting Reefer Madness vibes, and a conflation of correlation and causality; in a lot of the talk around this, I see people pointing to research: social media use drives mental illness.
So I just wanted to ask you, briefly: There is research, but what does the research actually say, or not say, on these questions?
EG: It’s a great question, and there’s been some news on this fairly recently. There was a report out from the surgeon general of the United States a couple weeks ago, and it is interesting because, as you said, there is research, and what the research says is basically: It’s complicated. But unfortunately, our mainstream news outlets and politicians giving speeches don’t do very well with complicated.
And so what you saw is a lot of headlines that basically said social media is bad for kids, and the research certainly backs that up to a certain extent. There is significant and growing evidence about these types of predatory design practices that companies put into place: things like autoplay, where you play a video and then the next one plays automatically, or infinite scroll, where you can just keep scrolling through TikToks forever and ever, until suddenly an hour has passed and you’re like, “What am I doing with my life?”
There is significant evidence that those types of design choices do have negative mental health effects, for young people and adults, in that they can lead to addictive behaviors, to anxiety, etc.
There’s also evidence in that report, largely ignored by a lot of the coverage of it, showing that for some groups of young people, including LGBTQ young people, access to social media actually improves their mental health.
And it’s not that hard to understand why. Anyone who knows a queer or trans young person knows online spaces can provide a safe haven, can provide a place to access community or resources or information, especially for young people who perhaps have unsupportive family members, or live in an area where they don’t have access to in-person community in a safe way. This can be a lifeline.
And so, again, there is research out there, and it is important that we build our regulatory and legislative responses on top of actual evidence, rather than conjecture and hyperbole.
But, again, I think what’s important here is that we embrace the both/and, and recognize that this is not about saying social media is totally fine as it is, and leave these companies alone, and we can all live in a cyber-libertarian paradise.
That’s not the world we’re living in. These companies are big, they are greedy, they are engaging in business practices that are doing harm, and they should be regulated.
But what we need to focus on is regulating the surveillance-capitalist business model that’s at the root of their harm, rather than attempting to regulate the speech of young people, suppress their ability to express themselves, and take away life-saving resources that they need in order to thrive and succeed in this deeply unjust and messed-up world that we are handing to them.
JJ: All right then. We’ve been speaking with Evan Greer. She’s director of Fight for the Future. They’re online at FightForTheFuture.org. Evan Greer, thank you so much for joining us this week on CounterSpin.
EG: Anytime. Thanks for having me.