Digital platforms now enable the near-instantaneous distribution of information, including misinformation and disinformation, to vast audiences. Disinformation is false or manipulated information deliberately created to deceive, whereas misinformation is inaccurate or misleading information shared without the intent to deceive. Professor Barbara McQuade, a former U.S. attorney and current professor of National Security Law at the University of Michigan Law School, explores these challenges, arguing that disinformation now poses one of the greatest threats to national security.
A key battleground in combating misinformation lies in the design and impact of social media algorithms. These systems are typically designed to maximize user engagement and time on site. In many cases, this design choice has led to the inadvertent promotion of content that incites anger, outrage, or the spread of conspiracy theories, because such content drives traffic and prolongs user interaction. By establishing oversight and technical standards, policymakers could require platforms to adjust these algorithms to prioritize accuracy and reduce harmful amplification without abandoning user engagement altogether. Professor McQuade's insights underscore the need for regulatory strategies that evolve in step with rapid technological innovation.
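To make the trade-off concrete, here is a minimal, purely illustrative sketch (not any platform's actual system) of a feed-ranking score that blends a predicted-engagement signal with a credibility weight. The `Post` fields, the 0-to-1 scales, and the `accuracy_weight` parameter are all assumptions invented for this example; they show only the kind of adjustment regulators might require, where setting the weight to zero reproduces a pure engagement-maximizing ranker.

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # hypothetical signal, e.g. expected clicks/dwell time, 0..1
    credibility: float           # hypothetical signal, e.g. source/fact-check score, 0..1

def rank_score(post: Post, accuracy_weight: float = 0.5) -> float:
    """Blend engagement with credibility.

    accuracy_weight = 0.0 reproduces a pure engagement-maximizing ranker;
    larger weights shift the ranking toward credible content.
    """
    return (1 - accuracy_weight) * post.predicted_engagement \
        + accuracy_weight * post.credibility

posts = [
    Post(predicted_engagement=0.9, credibility=0.2),  # outrage bait
    Post(predicted_engagement=0.6, credibility=0.9),  # reliable report
]

# Pure engagement ranking puts the outrage post first...
by_engagement = sorted(posts, key=lambda p: rank_score(p, 0.0), reverse=True)
# ...while weighting accuracy at 0.5 flips the order.
by_blended = sorted(posts, key=lambda p: rank_score(p, 0.5), reverse=True)
```

Real ranking systems are vastly more complex, but the sketch captures the regulatory lever the paragraph above describes: the objective function itself, not the content, is what a technical standard would constrain.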
It is important to note that private platforms are generally not bound by the First Amendment, which grants these companies broad discretion in moderating content. Any government regulation of disinformation, however, must adhere to stringent First Amendment standards. Measures that restrict speech, whether by curbing disinformation directly or by limiting how algorithms amplify content, must satisfy the "strict scrutiny" test. This requires the government to demonstrate a compelling interest (such as protecting national security or public order), to ensure that the regulation is narrowly tailored to achieve that interest, and to confirm that the measure is the least restrictive means available. Professor McQuade argues that regulations governing anonymous bot accounts or antisocial algorithms could meet these high standards. After all, as Professor McQuade states (quoting Supreme Court Justice Robert Jackson), "The Constitution is not a suicide pact."