Is It Time To Regulate Social Media?

Duke prof’s upcoming book examines platforms’ free rein
Kathleen Smythe
Photo: Muslim activist Jibril Hough speaks during a vigil in First Ward Park on Sunday for the victims of the mass shooting in Christchurch, New Zealand, last week.

A few hours after a man shot and killed 50 people in a pair of mosques in Christchurch, New Zealand, last week, The Washington Post’s Drew Harwell tweeted: “The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commentated about on Reddit, and mirrored around the world before the tech companies could even react.” The ghastliness of the killings; the role of social media as conduits for live video of the murders and the killer’s “manifesto”; and online platforms’ helplessness to stop their spread kicked back into play the prospect of governmental or other regulation of social media, an idea occasionally floated but never adopted in the United States.

A pair of academics based in North Carolina have done extensive research into the destructive capacity of social media misuse: Zeynep Tufekci at UNC Chapel Hill and Philip Napoli at Duke. Tufekci is a frequent contributor to The New York Times’ op-ed section, where she’s written about the radicalizing properties of YouTube. Napoli has written a book, which Columbia University Press is expected to publish in August, titled Social Media and the Public Interest: Media Regulation in the Disinformation Age. The book examines how platforms like Facebook and YouTube have largely replaced traditional gatekeepers of news, and it makes the case for an updated regulatory structure that applies to social media much as broadcast regulations apply to radio and television.

We spoke with Napoli on Monday (Tufekci said via email that she was unavailable). His responses have been edited for clarity and space.

Charlotte magazine: Given your research on this topic, what were your initial thoughts on the New Zealand massacre?

Philip Napoli: For some folks, this is the first, ‘Oh, gosh,’ moment with social media. But we’ve had the live-streaming of things like suicides already, and beheadings. So the thing I always wonder is, what will it be about this event that’s different, that actually gets this beyond a topic of conversation for a week, and we actually see anything happen in the regulation and policy space? Obviously, in the U.S., we have nothing. The model that platforms operate under is one in which there’s no legal obligation for them to do anything. As much as they’ve done quite a bit to try to make sure that this video doesn’t circulate, that’s not even something they have to do. That’s them acting, at this point, on their sense of social responsibility.

So then the question becomes whether there are any regulatory mechanisms that can compel them to do better. The whole model that these platforms operate under is, of course, ‘Publish whatever you want, and then we will subsequently, through our various mechanisms, determine whether it is harmful in some way or another.’ And that’s not the model that media traditionally operated under, and we kind of take that for granted. But if we look at other media, that’s certainly never how they operated. All content was evaluated, in some way, shape, or form, before it was published. Now it’s, ‘Publish, and then let’s see if any of our signals—algorithms or users reporting or content moderators—spot it.’


CM: Do you sense that this incident may be a tipping point toward stronger regulation of online content, especially when it’s shared over social media?

PN: I think I’ve had this reaction three or four times over the past three years: ‘OK, this is the one.’ So I’m not feeling like this incident is going to be the tipping point. I’d love to be wrong. To me, we have almost this triangulation of issues now. There is the fake news and disinformation issue; there’s the data breaches and misuse and sharing of personal data issue; and now this. They’re different types of issues, but what I wonder is if, OK, now does this represent a kind of perfect storm of issues that gets Congress thinking about a comprehensive regulatory oversight model? The platforms have been doing some work in this area as far as trying to develop a content moderation advisory board, things like that. Does this become something not piecemeal, but something comprehensive that encompasses all the different ways in which these platforms can be abused?

CM: What were those other incidents?

PN: Certainly I thought Cambridge Analytica on the data front, and I thought the irrefutable evidence of Russians spreading disinformation on the platforms, was going to do more than provoke a few hearings, and there’s really been very little regulatory action on that front. Those are the two that come immediately to mind, where I thought, ‘Wow, we’re talking about things that seem to strike right at the core of how our democracy functions.’ That was the issue with Cambridge Analytica, that it really provided opportunities to deliver persuasion and disinformation in a targeted way to people who had really not given their OK to be targeted in this way.

CM: What’s the disincentive for tech companies to crack down more forcefully on abusers of their platforms?

PN: For these problems to be solved, the entire model of how social media platforms operate would need to change—completely. Could you imagine a model where you post something to social media, let’s say, and it’s available to people the next day, and that was the norm, and it was understood that everything you post would go through a multi-stage evaluation process before it’s accessible to anybody? Now, what I just described—is that feasible? Maybe. I don’t know if it is. Does that completely undermine anyone’s interest in using these platforms? Maybe. But is that really the only way you could guarantee that this kind of content doesn’t circulate, however briefly? It probably is.

In other words, change the model, so that it’s not published first, evaluated later, but evaluated first, then published. Imagine social media platforms operating in that way—that we all just didn’t have the right to post a video to YouTube and see it there, bingo, instantaneously. This is what we’ve all come to expect, this is what the platforms provide, and certainly if someone were to propose that kind of massive change, the presumption would be that you’d be doing incredible damage to the digital economy, and with the First Amendment tradition that we have in this country—this all gets very tricky.

CM: That was my next question—how do you do that without an immediate legal challenge on First Amendment grounds? Would that kind of change survive the inevitable lawsuits?

PN: To me, that’s the crux of the question. That’s actually a topic I try to get into in the book a bit. We have other media that have been regulated. Television and radio broadcasting have a much more stringent regulatory framework applied to them than, say, print media do. What’s interesting—and this event brings it to the forefront again, yet nobody talks about it—is that one of the reasons broadcasting was regulated is what the U.S. Supreme Court said was its ‘uniquely pervasive presence.’ That is the idea that you could be using radio or television and suddenly be exposed to something harmful that you did not expect or want any exposure to. This emerged years back from the famous ‘Seven Dirty Words’ case, where a radio station aired a George Carlin routine that used profanity, a kid was exposed, and the FCC’s action against the broadcaster was justified in part on this idea that broadcasting is ‘uniquely pervasive.’

To me, it raises such an interesting question, which, again, I don’t see anybody asking: Are social media similarly uniquely pervasive? I’m scrolling through my news feed, and holy crap, suddenly I’m watching a mass murder. I wasn’t looking for a mass murder, but there it is. How is that different from, I’m flipping through the channels, and oh, my God, I’m suddenly confronted with nudity or profanity? One could argue or speculate that it is a mechanism by which a First Amendment challenge could be overcome.

CM: You could argue, I guess, that social media are even more pervasive because radio and TV are one-way streets—part of the power of social media is their interactivity, and the fact that you can immediately amplify content via social media in a way you can’t with traditional broadcast media.

PN: Absolutely true—the speed of the dissemination. You have an army of distributors, not just one.

CM: But social media platforms have been comparatively strict in policing things like child porn and ISIS propaganda, especially after the execution of James Foley. Might those examples serve as a template for controlling, say, extremely violent content or the explicitly white supremacist material that the New Zealand killer advanced?

PN: That’s what’s interesting in this country—it ties into the larger political issue we’ve had for a while now: Who is and who is not willing to equate these kinds of white nationalist-motivated attacks with how we traditionally define terrorism? Especially given the President’s statements over the last couple of days, if a similarly explicit policy were articulated by the platforms saying, ‘We’re specifically targeting and getting more aggressive on white nationalist content,’ wow. It would be very interesting—and at this point in time, it would not surprise me to see plenty of politicians, political leaders, be vocal in opposition to that. No one’s as afraid to go there as they once were. But yes, is the same kind of approach feasible and justified? Absolutely, I would say.

CM: What compelled you to write this book, and what’s its basic premise?

PN: Really, it was the lead-up to and the aftermath of the 2016 election that prompted me to work on it, and especially watching conversations happen that were completely divorced from how we have traditionally approached media regulation—we were not thinking about this space the way we have thought about our news media, or bringing the same principles and goals to bear. What I try to argue in the book is that it might make sense to think about these platforms as more like television. We regulated television, and part of how we justified it was that, alongside the regulated space of television and radio, you still had a largely unregulated space in print. The question I ask in the book is whether we might need to think about the internet in the same way.

Before social media, we had the web, and we had this incredibly unregulated space, but you had to go find this information. Social media turned the internet into a ‘push’ medium akin to television: This content can get pushed to you with you doing very little on your own to be exposed to it. That’s a massive difference. We are only now beginning to appreciate how different the internet of the late ’90s is from the internet of now. We might need to think the same way and say, ‘Hey, you know what, maybe the regulatory model we apply to social media needs to be different and maybe a bit more restrictive than the one we apply to the web as a whole,’ under the recognition that it operates like a push medium, that there is more opportunity for ill.

CM: Is it just those two steps—push and amplify—that separate social media from radio and TV?

PN: It’s three things: Push, amplify, and target. We’ve never had a broadcast medium that offered the targeting capabilities of what we would call ‘narrowcasting.’ It’s a mass medium and a hyper-targeted medium all rolled up into one, and we’ve never had that before.

About a year or so back, Facebook, in response to a lot of this, changed its news feed algorithm to downplay news and give more priority to content shared by friends and family—but it also prioritized content that generated a reaction. Sure enough, the data seem to suggest that the hyper-partisan news outlets, the ones that produce the kinds of stories that get people riled up and seed conflict, are seeing their performance improve. These decisions that are made to try to help can often backfire, and they get made unilaterally, with no process for evaluating them or for getting feedback from some broader group of stakeholders. And the impact is instantly global, if they choose to roll it out to that degree.

CM: Any other thoughts?

PN: No. It’s just getting grim, isn’t it? Can you put the horse back in the barn? But you know what—it’s one of those things, just like with piracy and things like that, you’re never going to completely solve the problem. But once you make the process difficult or inconvenient for the majority of the people, that’s really all you can hope for. If government regulation of these platforms ended up being so onerous that they just decided not to be in the business of allowing news to be shared, great. I think people need to go back to being active seekers of news and information and not just passive receivers of it, because that to me is a big part of what’s problematic now.
