Governments have always regulated what people can see, buy, and do. Sometimes these rules protect public safety. Other times, they restrict choices that adults are capable of making on their own. The question worth asking in 2026 is not whether regulation is necessary. It is whether the rules we have match the world we live in.
That question sits at the center of two ongoing debates: how governments control information online, and how they control consumer products that carry some degree of risk. These may seem like separate issues. They are not. Both involve the same tension between state authority and individual autonomy, and both reveal how regulation designed to protect people can, when poorly calibrated, end up limiting their freedom instead.
The Regulatory Instinct
When a government decides that certain content is too dangerous for its citizens to access, the rationale usually follows a predictable pattern: the public needs protection, and only state intervention can provide it. This is the logic behind broadcasting regulations that control what can be transmitted over public airwaves. It is the same logic behind the WIPO Broadcasting Treaty proposals that would extend new rights to broadcasters over content they did not create, potentially restricting how people share, remix, and build on existing works.
The European Union's Digital Services Act, which reached its first full year of enforcement in 2025, illustrates the tension clearly. The DSA aims to make the internet safer by requiring platforms to remove illegal content and be more transparent about their algorithms. Those are reasonable goals. But critics have pointed out that the law's broad definition of “illegal content” and the pressure it places on platforms to over-moderate could create what one analysis called “an internet governed by fear” — fear of fines, fear of bans, and fear of expressing unpopular views.
The same dynamic plays out with consumer products. When governments decide that a product poses risks, the default response is often to restrict access. Ban it. Tax it heavily. Make it harder to buy. This approach assumes that people cannot evaluate risk for themselves, and that the state's judgment should replace individual choice.
When Protection Becomes Paternalism
There is a meaningful difference between protecting people from genuine harm and preventing them from making informed decisions about their own lives. The philosopher John Stuart Mill drew this line more than 150 years ago: the only purpose for which power can be rightfully exercised over any member of a civilized community, against their will, is to prevent harm to others. Not harm to themselves.
In the digital realm, we see this line crossed routinely. Broadcasting treaties and content regulations often go beyond preventing harm to others and instead aim to control what information adults can access and share. The digital rights movement exists precisely because this pattern keeps repeating: well-meaning regulations that start with public safety end up constraining legitimate expression, fair use, and public domain access.
Consumer product regulation follows the same arc. The question is not whether products should be safe. Of course they should. The question is whether adults should have access to less harmful alternatives when they exist, or whether blanket restrictions should force them into a binary of total abstinence or continued use of more dangerous options.
Harm Reduction as a Framework
Harm reduction is a pragmatic approach that accepts a simple reality: not all risk can be eliminated, and policies that demand perfection often produce worse outcomes than those that aim for realistic improvement. In public health, this means giving people access to safer alternatives rather than insisting on abstinence alone. In digital policy, it means finding ways to address genuine harms without shutting down the open exchange of information.
Consider how this applies across domains. In broadcasting policy, a harm reduction approach would focus on protecting the integrity of signals without granting broadcasters sweeping new rights over content in the public domain. It would target actual piracy rather than restrict fair use, commentary, and education. The signal-based approach discussed at WIPO since 2012 moves in this direction, though negotiations continue to stall over how broadly protections should extend.
In consumer products, harm reduction has already transformed several sectors. One notable example is the shift in tobacco policy. For decades, regulation focused almost entirely on discouraging use through bans, taxes, and advertising restrictions. Those measures saved lives. But millions of adults continued to smoke. The emergence of reduced-risk alternatives like nicotine pouches — tobacco-free oral products that deliver nicotine without combustion — has given regulators a new option: let people switch to something less harmful rather than insisting they quit or continue smoking. In January 2025, the U.S. FDA authorized the marketing of specific nicotine pouch products after determining they pose a lower risk of cancer and other serious conditions than combustible tobacco. The principle is the same one that should guide digital policy: reduce actual harm without eliminating consumer choice.
What these examples share is a recognition that the world is not binary. People will continue to consume content, use products, and make choices that carry some risk. Policy that acknowledges this reality and works within it tends to produce better outcomes than policy that pretends it can eliminate risk entirely through prohibition.
The Digital Parallel
The internet amplifies both the benefits and the risks of information access. People can educate themselves, connect across borders, and hold power accountable. They can also encounter misinformation, harmful content, and exploitative design patterns. How do we address the harms without destroying the benefits?
The answer should not be to hand more control to gatekeepers. The WIPO Broadcasting Treaty, in its more expansive proposals, would do exactly that — giving broadcasters rights over content they transmit regardless of whether they created it. This is the digital equivalent of banning a safer alternative because the category itself is seen as dangerous. It targets the medium rather than the actual harm.
Similarly, the EU's push toward a Digital Fairness Act, while motivated by real concerns about manipulative interfaces and predatory subscription practices, risks over-correcting into territory where regulators decide what choices consumers are allowed to make. Early research on the Digital Markets Act suggests that over-enforcement can make user experiences worse, not better, with consumers seeing no tangible savings while platforms scramble to comply with rules that do not always match how people actually use technology.
A Better Standard
What would regulation look like if it took harm reduction seriously as a guiding principle? It would start by asking a different set of questions. Instead of “how do we prevent people from accessing this?” it would ask “how do we give people better options?” Instead of “how do we control this medium?” it would ask “what specific harms are we trying to address, and what is the most targeted way to address them?”
In broadcasting policy, this means protecting signal integrity without creating new layers of control over public domain content. It means preserving fair use and access to knowledge while addressing actual piracy. In consumer policy, it means allowing adults to choose less harmful alternatives when the science supports them, rather than restricting entire categories based on worst-case assumptions.
The principle that unites these positions is straightforward: informed adults should have the right to make their own choices, and policy should focus on reducing genuine harm rather than restricting autonomy. That principle applies whether we are talking about what you can watch, what you can share, or what products you can buy.
The Stakes Are Connected
It is easy to treat digital rights and consumer rights as separate conversations. They happen in different committees, involve different lobbyists, and attract different advocacy groups. But the underlying question is always the same: how much authority should governments have over the choices of informed adults?
The WIPO Broadcasting Treaty negotiations, now in their third decade, are a case study in how regulatory processes can drift from their original purpose. What began as an effort to protect broadcasting signals has repeatedly expanded into proposals that would control content, restrict fair use, and add new layers of rights on top of existing copyright law. Each expansion is justified as protection. Each one reduces consumer choice.
Whether the issue is what content you can access online or what products you can buy as an adult, the standard should be the same. Regulation should target specific, demonstrable harms. It should favor transparency and informed choice over blanket restrictions. And it should never assume that limiting options is the same thing as protecting people.