The Federal Trade Commission (FTC) uses its authority under Section 5 of the FTC Act to enforce against unfair or deceptive acts and practices in consumer privacy. Section 5(n) sets the standard for finding that a practice is unfair: (1) the practice causes or is likely to cause substantial injury to consumers, (2) the injury is not reasonably avoidable by consumers themselves, and (3) the injury is not outweighed by countervailing benefits to consumers or competition. As Professor Neil Richards explains, this unfairness balancing test may be too onerous to capture the manipulative practices that companies regularly employ to collect more consumer data and increase engagement, practices he considers abusive. He discusses how Section 5(n) falls short and makes the case for broadening the FTC’s authority to regulate unfair and deceptive practices.
Neil Richards is a professor at Washington University in St. Louis School of Law. He is one of the world's leading experts in privacy law, information law, and freedom of expression.
Human Information Privacy - A longform interview with Professor Neil Richards about human information policy and consumer protections.
Section 5(n) - Standard of proof; public policy considerations
The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination. (Federal Trade Commission Act)
The Board of Governors of the Federal Reserve System, Consumer Compliance Handbook: Federal Trade Commission Act Section 5: Unfair or Deceptive Acts or Practices
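Purely as an illustration (and not as legal analysis), the three-prong structure of Section 5(n) quoted above can be sketched as decision logic. The function name, parameter names, and the example values are hypothetical, chosen to mirror Professor Richards' made-up Instagram numbers later in the interview:

```python
def is_unfair(substantial_injury: bool,
              reasonably_avoidable: bool,
              benefits_outweigh_injury: bool) -> bool:
    """Sketch of the Section 5(n) unfairness standard: a practice may be
    declared unfair only if it (1) causes or is likely to cause substantial
    injury, (2) that injury is not reasonably avoidable by consumers, and
    (3) the injury is not outweighed by countervailing benefits to
    consumers or to competition."""
    return (substantial_injury
            and not reasonably_avoidable
            and not benefits_outweigh_injury)

# Richards' hypothetical: substantial injury to 1/3 of users, but the
# benefit to the other 2/3 is taken to outweigh it under the balancing test.
print(is_unfair(substantial_injury=True,
                reasonably_avoidable=False,
                benefits_outweigh_injury=True))  # → False
```

Note how the third prong does the work Richards objects to: flipping `benefits_outweigh_injury` to `True` defeats the claim even though the injury prongs are satisfied.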
Interview with Privacy Law Scholar – Professor Neil Richards
FTC Act Section 5
Neil Richards: We can also strengthen unfairness authority. Right now, the FTC has to go through a torturous balancing test in subsection (n) of Section 5, which requires it to show not just substantial harm to consumers before something is unfair, but also that the harm is not outweighed by the benefit to consumers or to commerce. In other words, current law allows substantial harm to individual consumers if it helps consumers in the aggregate, and it allows substantial harm to all consumers if it enhances competition. And I think that has the calculus entirely the wrong way around; I think unfairness should be expanded to include significant injuries that don't meet the very high statutory threshold.
TalksOnLaw (Host): What do you mean by valuing the group of individuals over a subgroup?
Neil Richards: So, the FTC Act was amended in order to bring this balancing test in. So the text of subsection n basically creates this balancing test that in order for something to be an unfair trade practice, the substantial injury cannot be outweighed by a benefit to consumers or to commerce.
Applying the Unfairness Test
Host: So what's an example?
Neil Richards: So you could imagine the Facebook mental health allegations. I'm deliberately making up the numbers here as an example, rather than accusing Facebook of this. So, let's say you have Instagram, and Instagram the company knows that one third of its users are going to be made sad and suffer mental health problems as a result of the service, but it learns that two thirds of its consumers are going to really, really enjoy it. Under the text of subsection (n), that would not constitute an unfair trade practice because of the balancing test. So I would call for scrapping the balancing test. An unfair trade practice is one that causes significant or substantial injury to consumers with no corresponding benefit to them.
Host: Let's take a real-life example: let's say that 10% of tennis players found the sport made them very sad and 90% enjoyed it. Would you ask the state to close the public courts?
Neil Richards: Of course not. But imagine tennis were a monopoly the way a lot of these platforms are, and Tennis Inc. knew that the way the service was built was injuring or harming 10% of its customers. Or maybe, instead of government control of tennis, you have a health club, and the health club is using a surface on its indoor tennis courts that causes 10% of people to slip and sprain their ankles needlessly. I think it's perfectly reasonable to require them to use a different paint on the surface so it's a safe environment for all of the customers who are paying them to provide these tennis or search or social networking services. That's all I'm saying.
Host: So you're saying, you know, it's more akin to the wet aisle in a grocery store where someone might slip and seriously injure themselves while others may maintain balance. You know, that person who fell has a cause of action against the grocery store.
Neil Richards: Yes. And I would say, on the social media example, if it is true that a company's business model promotes engagement, right (total time spent on the service; more eyeballs means more ads seen, which means more revenue), and if that engagement model causes a significant mental health problem in a significant fraction of their customers, then maybe the model should be regulated. Maybe allowing companies to take engagement as the only metric by which they build their services is inappropriate.
Host: This sounds like it could actually be interpreted radically. I'm thinking of China's recent decisions to limit video game play because they've determined that extended play of video games is unhealthy for the society. Could you imagine a structure under which, you know, a U.S. law could do something similar?
Neil Richards: I could. I think American politics would be resistant to anything that was (a) modeled on a Chinese government policy and (b) a result of top-down government command and control of how people choose to use their computers, right? I think politically that would be infeasible. But I think it's certainly okay to build in safeguards, such as limits on notification practices. So maybe you can play the game as long as you want, but the company is prohibited from reminding you, “Hey, you haven't played Clash of Clans or Tetris or Halo in the last six hours. Your friends are waiting for you.”
Host: “Check back in. Your dragon is about to hatch.”
Neil Richards: Exactly. Well, that's the next thing: I would love the FTC to seriously look at real-time timers in the context of games. I think those are deeply problematic because they're purely designed to make the technology more addictive, not through better gameplay but through manipulation and design. And if you take those off the table, then game designers don't get to compete on how to better manipulate people into coming back to their service, or on how to structure ads, making people watch ads to progress in an addictive game. They have to compete on gameplay, on game design and graphics and character and story, and that's really cool.