Section 230 Is a Government License to Build Rage Machines

from Wired

Facebook has been called the “largest piece of the QAnon infrastructure.” The app has not only hosted plenty of the conspiracy group’s dark and dangerous content, it has also promoted and expanded its audience. QAnon is hardly the only beneficiary: Facebook promotes and expands the audience of militia organizers, racists, those who seek to spread disinformation to voters, and a host of other serious troublemakers. The platform’s basic business, after all, is deciding which content keeps people most engaged, even if it undermines civil society. But unlike most other businesses, Facebook’s most profitable operations benefit from a very special get-out-of-jail-free card provided by the US government.

Section 230 of the Communications Decency Act protects “interactive computer services” like Facebook and Google from legal liability for the posts of their users. This is often portrayed as an incentive for good moderation. What is underappreciated is that it also provides special protection for actively bad moderation and the unsavory business practices that make the big tech platforms most of their money.

Google might be viewed as a search engine, and Facebook as a virtual community, but these services are not where the profits lie. They make money by deciding what content will keep readers’ eyeballs locked near ads. The platforms are paid for their ability to actively select and amplify whatever material keeps you hooked and online. And all of that content is specifically protected by Section 230, even when they are recommending QAnon or Kenosha Guard.

This unusual state of affairs exists because, while Section 230 was intended to limit the platforms’ responsibility for bad content, the courts have also perversely interpreted it as providing protection for commercial decisions to elevate and push stories to users. This allows Google and Facebook to focus on user engagement to the exclusion of everything else, including content quality and user well-being. If I threaten or defame someone in an online post (assuming I’m not acting anonymously), I can be sued. If a platform decides to promote that threatening post to millions of other people to drive user interest and thus increase time on the site, they can do so without any fear of consequences. Section 230 is a government license to build rage machines.

More here.

Posted in Law.


  1. As the second paragraph of the article notes, most people consider Facebook to be an “online community,” and as such I believe people should be able to say what they please. As citizens we have the right to free speech, and that should translate to online platforms as well. This is not to say that someone who posts something slanderous or extremely offensive online should escape being sued or charged, but I do not think the platform on which it was said should be held responsible. Section 230 ensures this safety by declaring that no platform provider will be held liable for the content of its users’ posts. I agree with this completely and, unlike David Chavern, do not think it should be changed. The point of platforms such as this is to allow people the freedom to post what they like, while also trying to maximize views and interaction. Getting angry at Facebook for giving people what they want is basically like getting angry at someone for running their business well. And to call it a “rage machine” is a little far-fetched. Facebook has 30,000 employees working on safety and security, about 15,000 of whom are content moderators. The content moderators review videos and posts that have been flagged as inappropriate and decide whether to remove them. With thousands of employees monitoring posts for content and misinformation, I do not believe the platform should be referred to as a rage machine. It does its best to allow freedom of speech while still ensuring that content is not offensive or inappropriate for its users. As for spreading misinformation, a previous blog article about Zuckerberg and the power his social media platforms have given him noted that he said he was going to do his best to limit this. The platform is not purposely going to spread misinformation and spark rage as Chavern insists; rather, Zuckerberg plans to block misinformation and flag content that could be considered false news. I believe that platforms do a fine job of allowing freedom of speech at scale while trying their best to monitor content. Therefore, I do not believe social media platforms should be held accountable for the posts of their users, and I think Section 230 should remain unchanged.

  2. Google and Facebook are two of the biggest companies around right now. Google is classified as a search engine, and Facebook as a virtual community. Unfortunately, these companies do not have full control over what gets posted or uploaded to their platforms; even when content is offensive, it is difficult for them to filter through everything and make sure it is all appropriate. Section 230 provides protection for companies like Facebook and Google so they cannot be held liable for what their users post. There is another side to this argument: that everyone is entitled to their own opinion under the First Amendment. Something that stuck out to me in the blog was that if an individual is threatening others or becoming a hazard to other users, Facebook can promote that post without any fear of consequences. Facebook threatened to cut off access in Australia, but quickly realized it was not risking anything and decided that would not be the best business decision. I completely agree with the blog post above that when you wrap massive companies in special protections, the markets and society suffer. That could not be more true. Unfortunately, these big-time companies have practically nothing to worry about when it comes to what gets published, as they are fully protected and cannot be flagged for inappropriate content.

  3. The protection that companies like Google and Facebook receive from Section 230 takes responsibility away from the company and puts it on the producers of the content, while the companies take all the money. That lifted responsibility allows them to promote whatever content they feel will make them the most money, and honestly, why shouldn’t they? As long as they cannot be held accountable for promoting whatever people really want to see, why should they care? At the end of the day it is a business, and as a business their number one goal is to make money. However, a lot of what is being presented to the public is misinformation, which is never good, because some people put a lot of stock in what our media puts out there. Since Google and Facebook are businesses trying to maximize profit, I do not think they deserve all the blame; some of it has to be placed on Section 230 itself. Section 230 was put into place in 1996, 24 years ago, and it is outdated. The internet has evolved an incredible amount in that time, but the laws governing it have not. Section 230 should be reformed to fit the modern internet, where companies care more about the money they generate than the quality of the content they promote. Making companies responsible for what they support would certainly make them change what they promote; I do not think a company would want to be known for supporting a hate group or consistently releasing false information. It would also help competition, giving a fair chance to journalists who put real information out there, because their work would be shown to the public instead of false news. The Justice Department recently sent a draft to Congress to reform Section 230 of the Communications Decency Act. Attorney General William P. Barr said, “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America. We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.” I agree with this idea of reform; change to Section 230 is long overdue, considering companies have been abusing their ability to promote what they want without having to take responsibility for it. Hopefully Section 230 gets reformed by Congress, so companies do not amplify harmful content, and the internet becomes a better community as a whole.

  4. Contrary to the article, I think the internet is most certainly still in its infancy. In the grand scheme of things, the internet and its capabilities have barely scratched the surface. It is basically a whole other world, requiring its own policing, and there is simply too much content on the internet to make policing it viable and efficient with the technology we currently have. Algorithms are the main problem. Nobody really knows how they work, but they always manage to push a certain kind of content: whatever will get the most clicks and views. Facebook and Google are businesses, and as such their goal is to make as much money as they can, which led them to build algorithms that do just that. The Facebook algorithm learns and adapts to push to the top the content it thinks will get the most attention. What gains the most attention often happens to be clickbait or fake news, because a Trump supporter is going to click on a post that says Joe Biden did something bad, and a Biden supporter is going to click on a post that says Trump did something bad. People then take the article as fact and share it everywhere until so many people have seen it that it’s too late to correct. Facebook and Google should be held responsible for creating a better algorithm that promotes real content from real sources (most fake-news articles come from bot accounts with zero friends or followers) instead of one that recommends content based solely on clicks, but they should not be held responsible simply for having fake news, QAnon posts, or other bad content on their platforms. It has proven impossible with current technology to police every single post; for example, Facebook already has a fact-check feature that rarely appears and often has no basis for proving something inaccurate. Keeping their platforms trustworthy will never be Facebook’s and Google’s top priority if the status quo is working just fine.
With Section 230 of the Communications Decency Act, the government gives Facebook and Google even more incentive to keep pushing out clickbait, with the law protecting them from any liability. An addition to Section 230 should lay out guidelines for what type of content platforms are allowed to promote, keeping companies’ liability restricted to their own actions, not the actions of the people using their platforms.
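    The engagement-only ranking described in the comment above, and the source-quality correction it proposes, can be sketched in a few lines of Python. This is a purely hypothetical toy: the field names and the weighting scheme are my own assumptions, not Facebook’s actual system.

```python
# Toy sketch (hypothetical, not any real platform's ranker): ordering posts
# purely by predicted engagement vs. blending in a source-credibility score.

def rank(posts, credibility_weight=0.0):
    """Sort posts by predicted engagement, optionally discounted by
    source credibility (1.0 = fully credible). A weight of 0 reproduces
    pure click-chasing; higher weights penalize low-credibility sources."""
    def score(post):
        engagement = post["predicted_clicks"]
        credibility = post["source_credibility"]
        return engagement * ((1 - credibility_weight)
                             + credibility_weight * credibility)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Outrageous claim!", "predicted_clicks": 900, "source_credibility": 0.1},
    {"title": "Careful reporting", "predicted_clicks": 400, "source_credibility": 0.9},
]

# Pure engagement ranking puts the clickbait first...
assert rank(posts)[0]["title"] == "Outrageous claim!"
# ...while weighting credibility heavily flips the order.
assert rank(posts, credibility_weight=0.9)[0]["title"] == "Careful reporting"
```

    The point of the sketch is only that the objective function is a choice: nothing about ranking for engagement is inevitable, which is why the comment argues the platforms could be expected to build a better one.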

  5. The emphasis on Section 230 is an interesting explanation of a seemingly unfamiliar law. According to the article, Section 230 of the Communications Decency Act of 1996 serves the general purpose of protecting “interactive computer services,” such as social media companies, from liability for the posts of their users. Now, the author is very critical of social media companies for amplifying and headlining abusive, anger-filled stories that are only intended to keep viewers’ attention. These social media companies, as per the article, make their money by actively capturing viewers with eye-catching content, regardless of how much civil unrest results. Under Section 230, social media companies are legally protected from being sued over hateful, negative, or otherwise disturbing posts from their users, arguably an excuse for the company to get away with promoting controversial content, including QAnon conspiracies, which inflame social tension by pushing unsubstantiated claims. And while individuals who post hateful content can be held accountable, Section 230 appears to let large social media companies off the hook.

    Now, the Communications Decency Act was passed in 1996, and as the author argues, the internet is no longer in its “infancy.” Thus, an updated version of the law seems appropriate, especially to level the playing field between news publishers and social media giants. Under Section 230, news publishers retain legal accountability and can be sued for promoting the same content that Facebook or Twitter can get away with. The author makes the compelling argument that “Propelling misinformation and suppressing competitors shouldn’t be government-protected activities.” Here, the author addresses his two major grievances against Section 230 (misinformation and negative content, as well as the suppression of news publishers), while also placing blame on the government for protecting such actions. While I had not heard of this Section before reading this article, if Chavern’s points are indeed objective, then it would definitely be appropriate to demand government reform of this one-sided, accountability-lacking Section. Expecting social media companies to limit the amount of negative content from users is indeed a daunting task, but prohibiting them from promoting hurtful, false, or hateful content should be a bare-minimum requirement. In no world should large corporations escape legal accountability for posts or comments that individual citizens can be sued for. How does this support the notion that businesses are viewed as citizens under the law? Should we really protect the free speech of tech giants by giving it fewer restrictions than the average American’s? This seems like a complete contradiction of the free-speech ruling established in the infamous Citizens United. Not to mention the ethical concerns of promoting content for the sole purpose of attracting viewers, something social media companies are arguably guilty of.
Social media companies should not be given a legal pass for content that harms the general well-being of the public, including conspiracy theories, hate groups, and other incendiary posts.

    Similarly, the actions of social media companies themselves can be regulated, while on the other hand, it is of course much more controversial to regulate the free speech of the populace. The platform that companies like Facebook and Twitter have is expansive, so widespread that they should take special consideration as to how the content they promote can affect the attitudes of society. I think that reforms will only be enacted if issues like these are brought to the forefront of political conversation and capture the attention of legislators. We cannot expect our elected officials to know every section and subsection of United States law, making it ever so important that citizens demand change. If reforms to Section 230 will place more responsibility in the hands of tech giants, then I would argue that technological change is indeed necessary.

  6. This new idea that social media corporations can decide which conversations are elevated is extremely interesting as well as scary. The writer of this article seems to be of the opinion that Facebook should crack down harder on disinformation and socially unacceptable speech, but should Facebook have the power to manipulate which conversations are posted on its site at all? Legally, the First Amendment protects us only from government action, but in the modern world, Twitter and Facebook have replaced the megaphone and the street corner. In such a world, wouldn’t the natural conclusion be to limit Facebook’s ability to interfere with speech entirely? The article states: “While stopping the public from posting bad content is a truly difficult problem, all decisions about amplifying that content are the platforms’ own. They should be expected to police themselves.” This quote assumes that it is Facebook’s responsibility to determine what “bad content” is. The hazard here is that Facebook and similar platforms would become judge and jury over what speech is allowed around the world. The consequences of that are immense: which political parties are permitted to grow, and which figures are allowed a public voice, would all be decided by unelected CEOs and chairmen of companies with no affiliation to any government. In my opinion, that seems extremely dystopian. That said, it is not hard to understand why the writer of this article comes away with the perspective they have. Facebook is elevating sensational, often false headlines with no regard for what is true. That is extremely dangerous to any democratic nation, especially considering Facebook’s massive influence on public opinion and the fact that both political parties have become less and less moderate in recent years.
This could have massive negative implications for people who work in information and media, as well as for social cohesion at large. All of the recent riots and protests raise the question of how much could have been prevented if people weren’t being exposed to hyper-partisan information in every moment of their lives. These aspects of social media should certainly be resolved via government intervention; however, in making these corporations responsible for the information they host, you will also force them to host exclusively uncontroversial ideas, as the risk of hosting controversial opinions outweighs the potential benefits for those in charge. I think the obvious answer would be an all-or-nothing policy: a social media platform should be legally prohibited from manipulating or elevating any speech hosted on its platform, unless it consents to being held to the same standard as journalists and other media entities.

  7. Corporations like Facebook and Google preaching freedom of speech when it comes to policing the content on their platforms is a double-edged sword. On one hand, it prevents anyone’s opinions, however detrimental to society, from being suppressed by a single corporation, leaving the power to engage with and correct those opinions, if need be, in the hands of the people. On the other hand, it allows conspiracy-theory groups like QAnon and violent white-supremacist groups like the Proud Boys to spread their disinformation to thousands of people without Facebook or Google facing any substantial backlash for their lack of discipline toward those groups.

    I have always been fascinated by how fast QAnon became part of mainstream culture. A conspiracy theory holding that most of the elites are part of a pedophilic satanic cult, and that President Donald Trump is actively trying to take them down from the White House while feigning Russian collusion to get former FBI Director Robert Mueller in on his supposed plan, sounds far too ludicrous to be anywhere near mainstream news. Even with people like Jeffrey Epstein being exposed for their sexual crimes, there has been nothing to indicate global participation in this alleged pedophilic satanic cult. David Chavern expertly lays out in his article how, in the age of the internet, Facebook and Google help elevate fringe far-right conspiracies into mainstream politics.

    The fact that a section of an Internet law passed in 1996, when the Internet was nowhere near the cultural superpower it is today, gives social media companies like Facebook and Google a get-out-of-jail-free card whenever a dangerous conspiracy spreads like wildfire terrifies me. I understand that Google and Facebook cannot eliminate every awful internet opinion, but the fact that they have done next to nothing to slow the spread of this content, because it happens to increase engagement and their algorithms keep pushing it, cannot go without consequence for these companies. Toxic information like QAnon’s is diluting real, quality journalism, as it generates more clicks and keeps people on Facebook. As Chavern says, “The internet is no longer in its infancy, as it was when the Communications Decency Act was passed in 1996. We need new rules for the digital market that limit government distortions and promote genuine competition.” Section 230 must be reformed so these companies are not heavily incentivized to push dangerous conspiracy content simply because it generates revenue.

  8. David Chavern addresses two very important issues facing modern internet users: the prevalence of misinformation and harmful speech, and the lack of liability that “interactive computer service” providers carry. Both issues have come to light in the Facebook hearings over the last two years and have even become scapegoats in various political debates and discussions. I think Mr. Chavern describes his position on both issues well and calls for an increase in service-provider liability to a reasonable degree, with a large body of supporting evidence including comparisons to similar statutes and logical parallels. I agree with him and his reasoning; however, I think he glosses over some very important contextual information in the first paragraph. In fact, I think he misses two effective sources: one relating to the results of the QAnon group’s activities (promoted by service providers), and the language in Section 230 that specifically identifies the intent of the law and gives critics room to denounce the way it is being taken advantage of without resorting to purely rhetorical arguments. Additionally, the mention of QAnon took me by surprise initially because I did not know what the organization was, so it required a quick search to contextualize. I came upon a New York Times article that explained the core beliefs of the organization/movement as well as the group’s activities over previous years. Asserting in the first paragraph that the group is dangerous, without specific evidence or examples, is a choice that limits the power of the author’s assertions. Simply put, QAnon is characterized by wide-reaching conspiracies, violence, and activist agendas, and is recognized by the FBI as a potential domestic terror threat. There are numerous examples of crimes allegedly committed by vocal QAnon supporters, which would further characterize the impact of allowing providers to push news from this group and similar ones without repercussions.

    As Patrick states in the comment above, the section of code referred to in the article, and the language cited, is older than many of the internet-based companies that now exploit it for profit. Unfortunately, it cannot simply be said that a lack of updating has led to this exploitation, because there were amendments to adjacent and related codes in 2018 (as referenced in the code notes). Those changes, however, did not address the clear exploitation Mr. Chavern identifies in his article. While the development pace of internet-related technology may be rapid, as the code itself states, it is imperative for lawmakers to focus more directly on the intent of the many laws currently being exploited, in order to identify the weaknesses that must be reworked. This is especially true given that the language describing rapid growth dates to 1996, and the growth of related technology has exponentially surpassed the expectations of that period. The speed at which fringe groups like QAnon have been pushed into the limelight by companies like Google and Facebook makes it apparent that exploiting these groups for attention increases interest in them, and therefore contributes to their growth, whenever ‘respected’ companies see fit to promote them (whether negatively or positively). Regardless of the connotation providers attach to the articles, giving any serious promotional attention to groups and ideas that are often harmful and unsupported by factual evidence is a net loss when people susceptible to misinformation and the power of brand recognition are exposed to them. This brand association works as a thought process that ties a hosted idea to the host’s image and equates the quality and value of the former with the latter.

    I think the easiest way to update the current code without a full rework is the one Mr. Chavern supports toward the end of his piece: weaken the liability clause that protects the service providers. Going further, I believe the protection should be maintained, but the application and definition of the “providing/hosting” concept should be altered. The alteration I am suggesting would limit the protections to content that has not been positionally manipulated to appear in places the author did not initially intend it to be displayed. For example, hate speech or conspiracies posted by an individual are manipulated by an algorithm after “friends” or other users directly engaged with the original content creator give attention and time to the post. Once the algorithm determines the post to be engaging, it is moved to a general position accessible to a larger share of users. Mr. Chavern examines this pushing concept, and it is clearly a core tenet of the providers’ financial models. I believe this movement of content into more accessible data streams should be defined as a “post” by the provider itself and therefore no longer be protected by the code. Currently, the code grants blanket immunity to providers because it does not consider this promotion of content to be equivalent to posting it. I would move that a separate definition be created, allowing liability to be attributed to the provider in cases where it can be shown (through software evidence) to be responsible for a post’s positional change and promotion. This would obviously be a difficult proposition, because providers’ algorithms do this by their nature, but an added measure of liability may be the only push factor needed for better information management by social media giants. There are currently almost no incentives for providers to regulate promoted content aside from FCC obligations. Some liability would force providers to incorporate legitimacy and quality factors into the algorithms that look for content to promote. Observed at a lower resolution, it becomes clear that these providers enjoy a unique immunity: a corporation can be sued as an individual and held liable for the same crimes, yet Section 230 fails to apply this concept, protecting providers who exploit this unusual immunity while individuals remain open to prosecution.
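    The “promotion as a provider post” idea above can be made concrete with a toy sketch. All class and method names here are hypothetical illustrations, not any real platform’s API: the point is that separating what users upload from what the platform’s algorithm amplifies yields exactly the audit trail that liability attribution would need.

```python
# Hypothetical sketch: content a user posts stays "hosted", but when the
# platform's algorithm moves it to a wider feed, that promotion decision is
# recorded as an act of the provider itself.

class Platform:
    def __init__(self):
        self.hosted = []      # user posts, shown only to direct connections
        self.promotions = []  # provider decisions to amplify a post

    def user_post(self, author, text):
        """A user's own upload: under Section 230, the user's speech."""
        post = {"author": author, "text": text}
        self.hosted.append(post)
        return post

    def promote(self, post, reason):
        """Under the proposed rule, this step, not the original upload,
        is what the provider would be liable for."""
        self.promotions.append({"post": post, "reason": reason})

p = Platform()
post = p.user_post("alice", "engaging but dubious claim")
p.promote(post, reason="high predicted engagement")

# The audit trail separates what users said from what the platform amplified.
assert len(p.hosted) == 1 and len(p.promotions) == 1
```

    The “software evidence” the comment mentions would be records like the `promotions` log: a traceable link between an algorithmic decision and the wider audience it created.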

  9. “Section 230 Is a Government License to Build Rage Machines” is an article whose title I don’t entirely agree with. The law in question, Section 230 of the Communications Decency Act, protects internet services like Facebook and Google from legal liability for the posts of their users. As the author mentions, companies like Facebook make most of their money off advertisements. The way he describes it, “They make money by deciding what content will keep readers’ eyeballs locked near ads.” At the end of the day, Facebook is just a business, and it needs to make money to survive. By focusing advertisements only on “quality” posts, it would lose plenty of advertising opportunities. The question that ultimately comes into play is: should Facebook decide which content is morally acceptable to place advertisements on? In my personal opinion, this should not be Facebook’s job. As Mark Zuckerberg has said, “We [Facebook] do not want to become the arbiters of truth.” Facebook’s main concern as a business should be to get as many clicks and as much time spent on advertisements as possible, not to promote quality news stories and posts. The problem with deciding which news article is false or “clickbait” and which promotes truth and quality is that every story has two sides, and determining who is right and who is wrong in some of these political news articles shouldn’t be Facebook’s job. Additionally, if Facebook takes action against one conspiracy group like QAnon, there is then a precedent to do the same with every other conspiracy group, which is exactly the mess Facebook shouldn’t want to get into. As for the law itself, I don’t really see a problem with it. I don’t believe Facebook should be held responsible for the posts of its users; in my opinion, this seems like a no-brainer. However, I do see the concern the author of this article is expressing. Though it may be unethical to place advertisements on the pages of conspiracy groups, I don’t think Facebook should face any legal consequences for doing so. In this instance, Facebook is acting in its own self-interest. I don’t believe the act itself is a “government license to build rage machines,” but rather a protective measure put in place to help these big businesses. I also understand that if you are posting quality news on Facebook, you are dealt a bad hand, but I wouldn’t blame Section 230 or even Facebook for this; I would blame the people who refuse to look at both sides of the story.

  10. The media is one of the most talked-about topics in the world. “The media” itself is a broad term, but most people take it to mean what you hear: the sources you get your news from are media, anything you read online is media, anything on TV is media; honestly, the list is never-ending. The media is so loosely defined that people in positions of power can use it to convey a certain message and persuade a mass audience. I could probably talk about the media forever, since it covers every facet of human life in current times, but I am going to focus on social media and politics’ effect on it. Social media is undisputedly one of the most accessible and common ways to interact with the media. The media is usually characterized as a negative thing, and in politics one side says the media is against them while the other side usually says the same. Facebook is not new to being involved with politics; Russian operatives targeted its users around the time of the last election to influence the outcome and gather information on people. Facebook takes the stance of something that provides a platform to speak but is not responsible for what is said. Politicians have partnered with big corporations for years to push their agendas in mutually beneficial relationships. Facebook is one of those superpowers now, and controlling Facebook can control what millions, maybe even billions, of people see on a daily basis. Remember that Facebook also owns Instagram, so while older people may be on Facebook, the young people are on Instagram. President Trump is someone who uses the media to create division and inflame intense emotions, both in his supporters and his opposers. He wants to keep people fighting and emotional so he can appear the bigger man who comes in and fixes the problems. He tells his supporters that liberal people are communists trying to destroy the country. On the other side, Democratic leaders portray Trump and his supporters as people who want to destroy the country as well. There really are no truths, because statistics can be skewed to support the point of whoever is speaking. Facebook has the power to skew the thinking of almost every American, which can be dangerous for either side. Giving these big corporations immunity when they push hateful content could push the country into a state of anarchy.

  11. This news about Section 230 is something I never knew existed. It is very interesting that social media companies like Facebook can promote anything they would like without facing the same regulations as their users. I understand why these companies have this type of protection from Section 230, but I think it can be unreasonable at times. Companies should have some protection because they are in charge of large social media websites. With that said, it is important that these companies behave as respectfully as they expect their users to. I also think it is unethical to use Section 230 as an advertising advantage, as the article says. The article also notes that users cannot post inappropriate content, but the company itself can promote whatever it wants in order to attract a certain type of audience and make more money from advertisements. Since Facebook cannot get in trouble because of Section 230, it can promote whatever it likes. In this case, the article says that Facebook has been promoting inappropriate groups and users in order to maintain and grow its online audience. If what the article says is true, promoting these groups just to grow an audience is unethical. Companies should not be able to post whatever they want without being punished, yet they are protected by Section 230.

    With all of this responsibility, I think Facebook should not be allowed to freely say anything over social media. It has a large audience that can be influenced through its posts. I agree that there should be exceptions in Section 230 to make it difficult for large social media companies to promote inappropriate things. Being able to spread misinformation, as the article said Facebook did, should definitely not be government protected. If these large companies were held responsible for what they pushed to their users, their platforms could be used for what they were intended for, like connecting with people. Instead, the article explains, they are being used for making money and wrongly steering people toward advertisements.

    If large companies were to be held accountable for what they post instead of being protected by section 230, it would be interesting to see what kind of platform it would turn into.

  12. Many of us, including myself, tend to see websites like Google and Facebook as middlemen, giving speakers a medium to spread their message to an audience. We like to believe that these engines are impartial entities simply doing their job, and most of the time they are. However, we tend to forget that these are companies first and foremost, and despite their values and beliefs, money is always going to be a factor in any decision. They are going to favor whatever gets the most clicks, regardless of the content of the post itself. Unless the information is truly false or egregious, it will not get blocked. On the other hand, if an unfavorable opinion is published, it is considered safe under the First Amendment. The article advocates either a revision to Section 230 or for these companies to take it upon themselves to restrain the promotion of whatever earns them the most money. I understand that having unbiased platforms allows for the spread of more ideas, but this is not a problem that can be thrown entirely on these companies. They have every right to promote whatever will make them money, because it's what the people want. If you look at the source of their profit based solely on application use, it is obvious it is driven by users. If people want to see posts about QAnon and there is an abundance of engagement, thereby making the company money, why shouldn't it promote them? People want that sort of information on their page, and Facebook is not going to deny them the platform simply because it does not subscribe to everyone's views. Of course, the lack of accountability leaves room for negligence or even corruption, but I really think this problem reflects more on the users. Balancing freedom of speech with the spread of disinformation is, and will always be, a difficult task, as most of the time it is simply a problem of conflicting views.
Unless people's mindsets and beliefs change, and they won't, putting more restrictions on these platforms can only do so much, and the real issue, whatever it may be, will still persist.

  13. This article starts off by making quite a large accusation about QAnon and Facebook, claiming that Facebook is the "largest piece of the QAnon infrastructure" (Wired). Before I delve into this, let me give a summary of what QAnon actually is. QAnon has been defined as "a far-right conspiracy theory alleging that a cabal of Satan-worshiping pedophiles running a global child sex-trafficking ring is plotting against President Donald Trump, who is battling against the cabal, leading to a 'day of reckoning' involving the mass arrest of politicians. No part of the theory is based on fact." In other words, QAnon is just a big name for a conspiracy theory based on no facts whatsoever. After reading this article, I can agree that Facebook may be fueling the fire by spreading these conspiracy theories. Facebook is a place where people can openly give their opinions and face little to no repercussions besides maybe being banned from the site. So where does Section 230 come into this discussion? Section 230 reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. This comes into play because it inadvertently allows people on social media apps such as Facebook and Instagram to exercise complete freedom of speech without having to worry about anything bad happening to them. With this going on, rumors are spreading faster than ever, including the QAnon conspiracy. It has been said that "CDA 230 is perhaps the most influential law to protect the kind of innovation that has allowed the Internet to thrive since 1996," which I believe is true.
Although it enables all these opinions and false conspiracies, it does the best job of protecting everyone's First Amendment right to freedom of speech. Overall, this section of the Communications Decency Act is single-handedly the most important act regarding online freedom of speech, but it does enable the spread of opinions, false facts, and conspiracies that can cause harm.

  14. Facebook has a habit of appearing in my space as of late, whether in the news or in these Shannon Web posts. Most often, I just wish Facebook would go the way of Myspace before it: fade from its user base and be replaced with a better platform altogether. But like those same brainless Facebook users who believe every QAnon post, or those who hurry to buy firearms because of a post claiming the "revolution" is coming, the company heads put about as much thought into curtailing or even moderating this content as goes into those baseless conspiracy theories. Since reading the article on Section 230 of the Communications Decency Act, I have come to realize why so many different websites, such as YouTube, Facebook, Reddit, 4chan, and TikTok, are all notorious for hosting offensive or illegal content. It suddenly makes sense why it is advantageous for these websites and the companies running them to ignore (and therefore let fester) these kinds of posts. If a platform cannot be held liable for the posts its users make, then it has no real motivation besides user outrage to take them down. A good example of this is YouTube when it was still a fledgling company. In its early days, YouTube was sued by Viacom for one billion dollars. The reason? Many Viacom-backed shows such as Spongebob Squarepants had clips on YouTube that were posted by amateur users but raked in millions of views. This was before the days of companies having their own YouTube channels, so Viacom became rather angry. Viacom argued that these views represented lost revenue and that YouTube should be liable. Ultimately YouTube won, and to placate Viacom and other companies with intellectual property, it created its own copyright system to identify content that was offensive, illegal, or in breach of fair-use terms.
The problem nowadays is that under Section 230, platforms don't have to put all their effort into identifying these types of content, so many examples of bad content slip through the gaps almost daily. But as for removing Section 230, I would be against that. If it were ever removed, these companies would overreact and start censoring preemptively to avoid any bad press or lawsuits. So to preserve freedom of speech but also protect people, there must be a balance struck somehow to keep the Internet for everyone.

  15. I was surprised to find out that Section 230 exists when it comes to QAnon and Facebook policies. For this reason alone, I feel that this policy is not great, given that social media is a place to express one's feelings. This article makes me more aware of how to use social media safely and ethically. I feel that social media sites are tools that any person, group, or audience can use to express their thoughts and feelings. Overall, I feel that this policy violates the freedoms of press, media, and speech. At the same time, without this protection, Facebook and platforms like it might not be able to flourish as businesses and could lose money or even file for bankruptcy.

  16. Social media companies ought to be held to a higher standard than they currently are to preserve democracy. Though freedom of speech is important to preserve, private entities are able to restrict speech as they see fit on their own platforms. For example, if I were to shout obscenities in someone’s shop, they would have a right to boot me off the premises with haste. They have a right to prevent their reputation from being maligned as a store that allows obscenities to be shouted in front of strangers and children. If the shopkeeper allows me to continue shouting obscenities, it could be construed as supporting my speech. For this reason, the business-owner has a free speech protection that allows them to remove me from the premises.

    In the case of Facebook and Twitter, these companies could be construed as supporting hate speech and conspiracy theories if they allow these messages to flow unregulated on their platforms. To suggest they ought to “preserve the free speech” of people who hold abhorrent values in contrast to their own is absurd. Facebook and Twitter have every right to remove users on the basis of harmful activity like spreading false news and hate. They outwardly choose not to do so in many cases for the reasons outlined by this article. There is an enormous profit incentive to allow conspiracy theories like QAnon to flourish because it boosts engagement among users. The more engagement, the more eyeballs are on screen for the advertisers that are paying Facebook and Twitter for ad space. These social media companies need to hold themselves accountable and sacrifice that bit of profitability to preserve our democracy and sense of truth. Without these fundamental pillars of American society, the entire system is at risk of collapse.
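The engagement-to-advertising incentive described above can be illustrated with a toy sketch. This is not any platform's actual ranking code; the function, post titles, and numbers are all invented for illustration. It simply shows that when a feed is ordered purely by engagement signals, the most provocative post rises to the top regardless of its quality or accuracy:

```python
# Toy illustration (not a real platform's system): ranking a feed purely
# by raw engagement signals surfaces the most provocative content first.

def rank_feed(posts):
    """Order posts by total engagement: clicks + comments + shares."""
    return sorted(
        posts,
        key=lambda p: p["clicks"] + p["comments"] + p["shares"],
        reverse=True,
    )

# Hypothetical posts: the conspiracy item draws far more engagement.
feed = [
    {"title": "Local library extends hours", "clicks": 120, "comments": 4, "shares": 2},
    {"title": "Outrageous conspiracy claim", "clicks": 900, "comments": 350, "shares": 410},
    {"title": "City council meeting recap", "clicks": 75, "comments": 1, "shares": 0},
]

for post in rank_feed(feed):
    print(post["title"])  # the conspiracy claim prints first
```

Nothing in a metric like this measures truth or civic harm, which is the core of the article's complaint: the ranking decision itself is what Section 230 has been read to protect.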

  17. Wired brings up a lot of good points and exposes an issue that affects us all. I agree that social media platforms promoting polarized and low-quality content is problematic. I am unsure whether I agree that we should dispose of Section 230. In the same way that I would defend Amazon's liability protection against faulty products sold on its platform by separate entities, I lean toward defending Facebook from being liable for what its users say. I think Section 230 becomes questionable with two things that social media giants are doing. The first is the algorithms that, as the article mentions, take no responsibility for the posts they promote. How active a role does social media have to take with posts before it becomes an active contributor instead of a passive supporter? If I retweet something, am I liable for what I retweeted? No. If I am Twitter and put a post in front of millions of people who did not ask for it in their feeds, am I liable for the post? Maybe. The other problem I see with Section 230 is that social media giants are now choosing to suppress certain forms of speech that they arbitrarily deem distasteful, either by their own standards or by the standards of whatever mob of people is pointing fingers.
    Since they have decided to curate the content allowed on their platforms, I think we should hold them to a standard of evenhandedness. This also raises the question of who should regulate social media giants. Can society, with its susceptibility to mob mentality, be trusted? Can the government, with the slippery slope of regulating the First Amendment, be trusted? Or can the companies be trusted to regulate themselves? YouTube has faced criticism for demonetizing people who don't meet its community guidelines. These guidelines are algorithmically enforced, and the algorithm cannot distinguish between genuine editorial content reporting on extremism and extremists themselves, or between video-game violence and real-life violence. The situation lends credence to Zuckerberg's position on the difficulty of enforcing guidelines. YouTube also demonetizes videos that have "extreme," "offensive," or "misinformed" speech, which I could certainly get behind if it didn't disproportionately affect conservative viewpoints, comedians, and niche content. Is YouTube refusing to monetize creators functionally banning them from the platform? YouTube is built on people making money from monetization and getting promoted by the algorithm. YouTube can say that creators are still free to post videos, but if they don't get promoted or monetized, there is little point in posting.
    Amazon acts as a marketplace that sellers can come to, and it also does some marketing and promotion for its sellers. However, at the end of the day, a transaction happens between sellers and customers with Amazon in the middle. Social media giants used to remain in the middle like Amazon, but they might be drifting toward being responsible for the posts they promote.
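The moderation-algorithm problem described above can be made concrete with a toy sketch. This is an invented, deliberately naive keyword filter, not YouTube's or anyone's real system; it just demonstrates why word-matching alone cannot tell reporting about extremism apart from extremist content itself:

```python
# Toy sketch of a naive keyword-based moderation filter. It flags any
# text containing a blocklisted word, so journalism ABOUT extremism is
# flagged just like extremist content itself: a false positive by design.

BLOCKLIST = {"extremist", "violence"}

def is_flagged(text):
    """Flag text if any word (punctuation stripped) is on the blocklist."""
    words = {w.strip(".,:;!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

reporting = "Documentary: how extremist groups recruit online"
extremism = "Join our extremist movement today"

print(is_flagged(reporting))  # True - genuine reporting gets flagged anyway
print(is_flagged(extremism))  # True
```

Real systems use far more context than this, but the underlying difficulty is the same: intent and framing are exactly what simple pattern matching cannot see.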

  18. I do not use my Facebook account often but when I do, I always see sponsored advertisements for news articles or user posts that are alarming. These articles or posts typically have catchy titles or headings created to draw in as many people as possible. Once people click on them, however, they often consist of blatant misinformation and sometimes even false propaganda in support of someone’s political agenda. While the freedom of speech is a right guaranteed to citizens of the United States, false and harmful information should not be allowed to run amok on sites created for families, friends, and community to connect online.

    Facebook is an app that a lot of people use to connect with one another virtually when they cannot physically. From old classmates or teammates to distant family members that you rarely get to see, Facebook is a way for users to get a little hint about what is going on in the lives of people they care about. This sense of connection is completely undermined when the creators of the software do not act to protect their users from harm. Whether this harm comes from false news sources trying to stir a panic or websites spewing hateful or racist remarks, Section 230 allows websites like Facebook to go unpunished for not monitoring their sites.

    The hypocrisy of companies like Facebook and Google cashing in at the expense of their users is something many people would not expect. What even more people would not expect is that there is a government policy protecting their right to do so. Section 230, as David Chavern suggests, should be limited so platforms cannot promote content that can be harmful to their patrons. We would think that the government and these platforms would want users to have the best experience possible, including the most accurate information available from trusted sources. Instead, they are working hand in hand to make the most money and protect one another from their foul practices.

    When inaccurate information and content is spread online, it is viewed by many people who cannot determine its validity and accuracy. While everyone should be entitled to freedom of speech, we already place limits on how far it goes before it becomes harmful to other members of society. If we can limit speech such as threats or lies produced locally, we should also not allow massive companies to spread harmful information across their platforms for even more people to become victims. Since when did the government serve corporations instead of its individual citizens? The government should function as an asset to the people, not a contributor to the people's detriment.

  19. Before reading this article, I had never heard of QAnon. It is quite perplexing that a small group of middle-aged men in Germany can believe something along the lines of the Holocaust being fake, COVID being a hoax, and the President of the United States can save the world. I think it is terrible that people can spread beliefs like this on the internet because there is seemingly no evidence to support their claims. I understand why in Section 230 of the Communications Decency Act that large internet-based companies are protected against legal liability because in some cases, without this legislation, they could be sued over something small that they really had no control over. However, when it comes to major world events – like the past and upcoming U.S. election – there needs to be a line drawn. There are many people who would read something on Facebook and would immediately believe what they read. Meanwhile, their source is someone with no credentials and an unwanted opinion. Facebook needs to take responsibility for the QAnon pages because they could have a seriously damaging effect on society.

    One thing I found particularly interesting is that Facebook threatened to terminate access to news in Australia. If something similar happened here, it would hinder freedom of the media: people in the U.S. would not be able to take advantage of that First Amendment right because the news would be kept from them.

    I believe the media and politicians have been making this a political issue for a long time now. The founders of sites like Facebook and Google should have ethical principles that they apply. I do not think there should be this much controversy over whether it is wrong to promote beliefs as horrible as the ones mentioned above. Companies like these should not be given special treatment when it comes to "freedom of speech."

  20. Personally, I do not use Facebook and I am not an avid fan of Google, so to speak, but I understand the content that both provide to users. I had never heard of the Communications Decency Act or the sections within it until reading this article covering one section in particular. I understand now how Google and Facebook are able to get away with promoting content that really should not belong on the internet, as that content is frankly disgusting sometimes.
    Plus, we have to look at this from the perspective of their competition too. Competitors are put at such an unfair disadvantage just because of this law protecting Google and Facebook. As the article said, "This allows Google and Facebook to focus on user engagement to the exclusion of everything else, including content quality and user well-being. If I threaten or defame someone in an online post (assuming I'm not acting anonymously), I can be sued. If a platform decides to promote that threatening post to millions of other people to drive user interest and thus increase time on the site, they can do so without any fear of consequences." This law has become a way of keeping competitors down and maintaining the dominance of these two powerhouses. People are quick to jump on "juicy" news, and these search engines and apps get paid for the advertisements on the page, hence why ads are spammed all over that "juicy" news. The competition is not protected when it posts this controversial yet clickbait news, which puts it at a disadvantage when everyone wants the tea on something not rated E for everyone. I personally believe this law is ridiculous and creates an unfair market for the reason above. Google and Facebook should not be given a pass to promote content like this, but they will not be stopped, because at the end of the day money speaks volumes.
    This unfair advantage is why Google and Facebook are in agreement with this law. It basically fends off the competition from ever overtaking them while they stay on top. You do have to question a company's ethics, though. Consumers are not stupid; they know the shady stuff these two powerhouses get away with is not right, but it is so hard to turn your head away from these topics. Hate groups, blatant racists, and other unethically minded people roam all over these two platforms, and instead of doing something about it, Google and Facebook sometimes leave these posts up because they have drenched them with advertisements. I get the whole "freedom of speech," but are we really going to allow this type of freedom? I think there should be a fine line between expressing your opinions and saying something so blatantly wrong. The problem is that Google and Facebook do not know better when it comes to chasing a check.

  21. As Brody states in the comment above, David Chavern raises two important points in this article: the spread of disinformation and the lack of liability on computer service providers. The government is always on the giant companies' side and less on the users'. I am glad I was able to conceptualize what Section 230 is all about, and I dislike that it protects companies like Facebook and Google from being held liable for what their users post. It is a very uncomfortable concept. These types of companies should be held accountable when they decide to promote threatening posts to millions of other people to drive user interest and thus increase time on the site, even if Section 230 was intended to limit the platforms' responsibility for bad content. There should be fear of consequences. The government should enact more stringent regulations and police their accuracy. There is so much commercial exploitation of people's data, political use of data, and selling of confidential information, as happened when Facebook allowed third-party clients to mine data not just of Facebook users but of their "friends" in the Cambridge Analytica case.
    In 2011, the Federal Trade Commission went after Facebook for failing to keep its privacy commitments. The FTC opened an investigation and in theory could have fined Facebook $40,000 for each violation of the privacy rights of the 50 million people whose privacy was breached in the Analytica debacle. But the FTC failed to exercise its authority.
    Social media, with a few exceptions, has morphed into a realm of near-total toxicity. How do we stop disinformation on social media? We trust information we get from the people close to us, but there are bad actors out there who increasingly exploit the ways we share information with one another, for example during elections. In 2016 and again in 2018, Russian agents posed as people on both sides of hot-button issues to foment distrust and discord.
    According to studies, disinformation is dangerous precisely because it can deceive anyone, with broad consequences for our society and democracy.

  22. In the second paragraph of the article, the author describes Google as a search engine, while Facebook is more like an online community where people can post whatever is on their mind. However, Facebook is mostly the same as Google, since anybody can use either resource to promote bad things like terrorist messages. Facebook enables people to create communities and to reach people in many ways, so it is more dangerous than Google. Section 230 prevents these companies from being sued for hosting those communities and harmful messages. Facebook has plenty of communities that spread harmful messages promoting terrorism, sexual abuse, and racism, and some other communities create spaces for conducting illegal business, like selling drugs, weapons, and anything else that is illegal. Facebook does not promote these messages intentionally; on the contrary, the company fights against issues that can threaten society, and it has created internal teams that check the content posted on the platform every day. The creation of this section by Congress is a good thing, since companies should not be held liable for something they have not promoted. However, there should be more regulations to avoid sponsoring harmful messages. Facebook and Google should create more internal regulations and new teams that constantly check whether anything illegal is being promoted. Facebook has the power to create algorithms that promote any kind of advertisement to people: whenever somebody searches for something for a couple of seconds, the whole Facebook page is invaded by ads. If Facebook is capable of creating any kind of algorithm, it should create one that bans whatever is illegal.
Section 230 is a reasonable action by the government to benefit these companies, but the companies should try to create something that prevents illegal activities on their platforms. Facebook should care more about these illegal activities than about creating new ways to advertise products that bring it more revenue. These illegal activities have been a big threat to the whole world.

  23. I agree that Facebook and Google do not have control over what gets posted or searched on their apps, but as some of the most used and popular apps ever, they should have control. Even with freedom of speech, our First Amendment right, we shouldn't be able to post nonsense and very cruel things on those apps. Many people can see what gets posted, especially when it's public, and if you're famous, many people around the world can see it and be affected by it. I feel like these companies should have control over what gets posted, because so many people use these apps and it is very childish to post negative thoughts or pictures. If you are going to post negative things, or you know something is wrong, just keep it to yourself. Also, with technology upgrading every second, there are many kids aged 13-15 on Facebook, and sometimes they shouldn't be seeing what is posted. However, Facebook now does block you for a minimum of 30 days if you comment with curse words or post something inappropriate, so it is getting the hang of it and trying to put a stop to negative posts and comments; hopefully it puts in place stricter rules and starts banning accounts or blocking profiles. Reforming Section 230 is part of trying to put a stop to this, and I totally agree with it and am for it. There are many racist and ignorant people out there, so putting a stop to them will do a good job, and it will be a lesson for everyone else, so we can finally stop the hatred and the ignorance.
