
Online platforms vs. publishers

Suppose that someone sends an illegal package through the postal service, or through a courier, to another person: a package containing illegal material such as drugs, or something dangerous such as a bomb. When such a crime is dealt with, nobody even thinks about suing the post office for facilitating it.

Or suppose that someone makes a phone call in order to organize some criminal activity, such as planning a terrorist attack or purchasing illegal substances. Once again, nobody even thinks that the phone company should be sued for facilitating such illegal activity.

This principle is sometimes called the "safe harbor principle", or "safe harbor status". It means that if a company offers a service that allows people to communicate with each other in one way or another, the company will not be held responsible for any criminal activity that some of its users might engage in, as long as the company remains neutral and cooperates with the police as required.

This same principle applies to online digital services. For example, Google will not be held responsible if some people use Google's email services for illegal activities. Online forums will not be held responsible for the crimes of their users (at least if the infringing material is removed at the request of authorities).

This is different for publishers, such as newspapers and media corporations. If, for example, a journalist publishes something illegal in a newspaper, the news corporation itself can be held responsible for it, and potentially sued.

This is because a news corporation actively monitors and chooses the content it publishes. It is not a neutral publishing service, but plays an active role in deciding what material appears under its name. Thus, if it decides to publish something illegal, the entire company can be held responsible, because the material was published with its explicit approval.

One of the key differences between these two types of services is neutrality: as long as a company offers a communication service to everybody, while remaining completely neutral about what its users communicate, it enjoys safe harbor status and cannot be held responsible for what its users do.

However, once such a company starts selectively choosing what kinds of communication it allows, and which people it allows, it loses its safe harbor status. By explicitly disallowing some types of communication, it tacitly approves every other type. The company is effectively saying, "this type of message is ok in our books; we approve of it."

If that's the case, and that message was something illegal, the company could be held responsible for it. Once a company starts choosing its content, it stops being a platform and becomes a publisher, with all the responsibilities that entails.

This would be a solution to the current problem of Silicon Valley tech corporations censoring political ideas and opinions: if they are indeed choosing which opinions are allowed on their platforms, then they are tacitly approving of the opinions they do allow, and could thus be sued for anything illegal that appears there.

The government should remove the "safe harbor" status of these companies until they stop censoring political opinions. That would be an easy solution to the problem.
