Centre for Internet & Society

The EFF and CIS Intermediary Liability Project aims to create a set of principles for intermediary liability in consultation with groups of Internet-focused NGOs and the academic community.

The draft paper has been created to frame the discussion and will be made available for public comments and feedback. The draft document and the views represented here do not represent the positions of the organisations involved in the drafting.

http://tinyurl.com/k2u83ya

3 September 2014

Introduction

The purpose of this white paper is to frame the discussion at several meetings between groups of Internet-focused NGOs that will lead to the creation of a set of principles for intermediary liability.

The principles that develop from this white paper are intended as a civil society contribution to help guide companies, regulators and courts, as they continue to build out the legal landscape in which online intermediaries operate. One aim of these principles is to move towards greater consistency with regards to the laws that apply to intermediaries and their application in practice.

There are three general approaches to intermediary liability discussed in much of the recent work in this area, including CDT’s 2012 report, “Shielding the Messengers: Protecting Platforms for Expression and Innovation,” which divides approaches to intermediary liability into three models: 1. Expansive Protections Against Liability for Intermediaries, 2. Conditional Safe Harbor from Liability, 3. Blanket or Strict Liability for Intermediaries.[1]

This white paper argues in the alternative that (a) the “expansive protections against liability” model is preferable, but likely not achievable given the current state of play in the legal and policy space; (b) the white paper therefore supports a “conditional safe harbor from liability” model operating via a ‘notice-to-notice’ regime where possible, and a ‘notice and action’ regime where ‘notice-to-notice’ is deemed impossible; and (c) all of the other principles discussed in this white paper should apply to whatever model for intermediary liability is ultimately adopted, unless those principles are facially incompatible with that model.

As further general background, this white paper works from the position that there are three general types of online intermediaries: Internet Service Providers (ISPs), search engines, and social networks. As outlined in the recent draft UNESCO Report (from which this white paper draws extensively):

“With many kinds of companies operating many kinds of products and services, it is important to clarify what constitutes an intermediary. In a 2010 report, the Organization for Economic Co-operation and Development (OECD) explains that Internet intermediaries “bring together or facilitate transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet or provide Internet-based services to third parties.”

Most definitions of intermediaries explicitly exclude content producers. The freedom of expression advocacy group Article 19 distinguishes intermediaries from “those individuals or organizations who are responsible for producing information in the first place and posting it online.”  Similarly, the Center for Democracy and Technology explains that “these entities facilitate access to content created by others.”  The OECD emphasizes “their role as ‘pure’ intermediaries between third parties,” excluding “activities where service providers give access to, host, transmit or index content or services that they themselves originate.”  These views are endorsed in some laws and court rulings.  In other words, publishers and other media that create and disseminate original content are not intermediaries. Examples of such media entities include a news website that publishes articles written and edited by its staff, or a digital video subscription service that hires people to produce videos and disseminates them to subscribers.

For the purpose of this case study we will maintain that intermediaries offer services that host, index, or facilitate the transmission and sharing of content created by others. For example, Internet Service Providers (ISPs) connect a user’s device, whether it is a laptop, a mobile phone or something else, to the network of networks known as the Internet. Once a user is connected to the Internet, search engines make a portion of the World Wide Web accessible by allowing individuals to search their database. Search engines are often an essential go-between between websites and Internet users. Social networks connect individual Internet users by allowing them to exchange messages, photos, videos, as well as by allowing them to post content to their network of contacts, or the public at large. Web hosting providers, in turn, make it possible for websites to be published and to be accessed online.”[2]

General Principles for ISP Governance - Content Removals

The discussion that follows below outlines nine principles to guide companies, government, and civil society in the development of best practices related to the regulation of online content through intermediaries, as norms, policies, and laws develop in the coming years. The nine principles are: Transparency, Consistency, Clarity, Mindful Community Policy Making, Necessity and Proportionality in Content Restrictions, Privacy, Access to Remedy, Accountability, and Due Process in both Legal and Private Enforcement. Each principle contains subsections that expand upon the theme of the principle to cover more specific issues related to the rights and responsibilities of online intermediaries, government, civil society, and users.

Principle I: Transparency

“Transparency enables users’ right to privacy and right to freedom of expression. Transparency of laws, policies, practices, decisions, rationale, and outcomes related to privacy and restrictions allow users to make informed choices with respect to their actions and speech online. As such - both governments and companies have a responsibility in ensuring that the public is informed through transparency initiatives.” [3]

Government Transparency

  • In general, governments should publish transparency reports:

As part of the democratic process, the citizens of each country have a right to know how their government is applying its laws, and a right to provide feedback about the government’s legal interpretations of its laws. Thus, all governments should be required to publish online transparency reports that provide information about all requests issued by any branch or agency of government for the removal or restriction of online content. Further, governments should allow for the submission of comments and suggestions via a webform hosted on the same webpage where that government’s transparency report is hosted. There should also be a legal mechanism that requires the government to review the feedback provided by its citizens, ensure that relevant feedback is passed along to legislative bodies, and provide for action to be taken on citizen-provided feedback where appropriate. Finally, and where possible, the raw data that constitutes each government’s transparency report should be made available online, for free, in a common file format such as .csv, so that civil society may have easy access to it for research purposes.
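
As an illustration of the kind of machine-readable publication this principle contemplates, the sketch below (in Python) writes a small log of removal requests to a .csv file that researchers could download and analyse. The field names and records are hypothetical, not a prescribed schema.

    import csv

    # Hypothetical records of government removal requests; the field names are
    # illustrative only and contain no personal information.
    requests = [
        {
            "date": "2014-06-12",
            "requesting_body": "Ministry of Communications",
            "legal_basis": "Sec. 12, Online Content Act",
            "content_type": "social media post",
            "action_taken": "removed",
            "items_affected": 3,
        },
        {
            "date": "2014-07-02",
            "requesting_body": "District Court of Example City",
            "legal_basis": "Defamation judgment (court order)",
            "content_type": "blog article",
            "action_taken": "declined",
            "items_affected": 1,
        },
    ]

    with open("transparency_report_2014.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(requests[0].keys()))
        writer.writeheader()        # first row names the columns
        writer.writerows(requests)  # one row per request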

  • Governments should be more transparent about content orders that they impose on ISPs
    The legislative process proceeds most effectively when the government knows how the laws that it creates are applied in practice and is able to receive feedback from the public about how those laws should change or remain the same. Relatedly, regulation of the Internet is most effective when the legislative and judicial branches are each aware of what the other is doing. For all of these reasons, governments should publish information about all of the court orders and executive requests for content removals that they send to online intermediaries. Publishing all of this information in one place necessarily requires that a single entity within the government collect the information, which has the benefits of giving the government a holistic view of how it is regulating the Internet, encouraging dialogue between different branches of government about how best to create and enforce Internet content regulation, and encouraging dialogue between the government and its citizens about the laws that govern Internet content and their application.
  • Governments should make the compliance requirements they impose on ISPs public
    Each government should maintain a public website that publishes as complete a picture as possible of the content removal requests made by any branch of that government, including the judicial branch. The availability of a public website of this type will further many of the goals and objectives discussed elsewhere in this section. The website should be biased towards high levels of detail about each request and towards disclosure that requests were made, subject only to limited exceptions for compelling public policy reasons, where the disclosure bias conflicts directly with another law, or where disclosure would reveal a user’s PII. The information should be published periodically, ideally more than once a year. The general principle should be: the more information made available, the better. On the same website where a government publishes its ‘Transparency Report,’ that government should attempt to provide a plain-language description of its various laws related to online content, to provide users notice about what content is lawful vs. unlawful, as well as to show how the laws that it enacts in the Internet space fit together. Further, and as discussed in section “b,” infra, government should provide citizens with an online feedback mechanism so that they may participate in the legislative process as it applies to online content.
  • Governments should give their citizens a way to provide input on these policies
    Private citizens should have the right to provide feedback on the balancing that their government performs on their behalf between their civil liberties and other public policies such as security. If and when these policies and the compliance requirements they impose on online intermediaries are made publicly available online, there should also be a feedback mechanism built into the site where this information is published. This public feedback mechanism could take a number of different forms: for example, a webform that allows users to indicate their level of satisfaction with prevailing policy choices by choosing among several radio buttons, while also providing open text fields for clarifying comments and specific suggestions. To be effective, this online feedback mechanism would have to be accompanied by a legal and budgetary apparatus that ensures the feedback is monitored and given some minimum level of deference in the discussions and meetings that lead to new policies.
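
    A minimal sketch of the kind of record such a webform might produce, assuming a simple satisfaction scale (the radio buttons) and free-text fields; all names here are hypothetical and purely illustrative.

        from dataclasses import dataclass
        from enum import Enum


        class Satisfaction(Enum):
            """Options a user could select via radio buttons on the feedback form."""
            VERY_DISSATISFIED = 1
            DISSATISFIED = 2
            NEUTRAL = 3
            SATISFIED = 4
            VERY_SATISFIED = 5


        @dataclass
        class PolicyFeedback:
            policy_id: str              # which published policy the comment refers to
            satisfaction: Satisfaction  # the radio-button choice
            comment: str                # open text field for clarifying comments
            suggestion: str = ""        # optional concrete suggestion for change


        def is_reviewable(feedback: PolicyFeedback) -> bool:
            """Reject empty submissions so that reviewers' time is not wasted."""
            return bool(feedback.policy_id) and bool(feedback.comment.strip())
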
  • Government should meet users concerned about its content policies in the online domain
    Internet users, as citizens of both the Internet and their country of origin, have a natural interest in defining and defending their civil liberties online; government should meet them there to extend the democratic process to the Internet. Denying Internet users a voice in the policymaking processes that determine their rights undermines government credibility and limits users’ ability to freely share information online. As such, content policies should be posted in general terms online, and users should have the ability to provide input on those policies online.

    ISP Transparency
    “The transparency practices of a company impact users’ freedom [of] expression by providing insight into the scope of restriction that is taking place in [a] specific jurisdiction. Key areas of transparency for companies include: specific restrictions, aggregate numbers related to restrictions, company imposed regulations on content, and transparency of applicable law and regulation that the service provider must abide by.”[4]

    “Disclosure by service providers of notices received and actions taken can provide an important check against abuse. In addition to providing valuable data for assessing the value and effectiveness of a N&A system, creating the expectation that notices will be disclosed may help deter fraudulent or otherwise unjustified notices. In contrast, without transparency, Internet users may remain unaware that content they have posted or searched for has been removed pursuant to a notice of alleged illegality. Requiring notices to be submitted to a central publication site would provide the most benefit, enabling patterns of poor quality or abusive notices to be readily exposed.”[5] Therefore, ISPs at all levels should publish transparency reports that include:

    • Government Requests

    All requests from government agencies and courts should be published in a periodic transparency report, accessible on the intermediary’s website, that publishes information about the requests the intermediary received and what the intermediary did with them in the highest level of detail that is legally possible. The more information that is provided about each request, the better the understanding that the public will have about how laws that affect their rights online are being applied. That said, steps should be taken to prevent the disclosure of personal information in relation to the publication of transparency reports. Beyond redaction of personal information, however, the maximum amount of information about each request should be published, subject as well to the (ideally minimal) restrictions imposed by applicable law. A thorough Transparency Report published by an ISP or online intermediary should include information about the following categories of requests:

  • Police and/or Executive Requests
    This category includes all requests to the intermediary from an agency that is wholly a part of the national government; from police departments, to intelligence agencies, to school boards from small towns. Surfacing information about all requests from any part of the government helps to avoid corruption and/or inappropriate exercises of governmental power by reminding all government officials, regardless of their rank or seniority, that information about the requests they submit to online intermediaries is subject to public scrutiny.
  • Court Orders
    This category includes all orders issued by courts and signed by a judicial officer. It can include ex parte orders, default judgments, court orders directed at an online intermediary, or court orders directed at a third party and presented to the intermediary as evidence in support of a removal request. To the extent legally possible, detailed information should be published about these court orders, including the type of court order each request was, its constituent elements, and the action(s) that the intermediary took in response to it. All personally identifying information should be redacted from any court orders before they are published by the intermediary as part of a transparency report.
  • First Party
    Information about court orders should be further broken down into two groups; first party and third party. First party court orders are orders directed at the online intermediary in an adversarial proceeding to which the online intermediary was a party.
  • Third Party
    As mentioned above, ‘third party’ refers to court orders that are not directed at the online intermediary, but rather at a third party such as an individual user who posted an allegedly defamatory remark on the intermediary’s platform. If a user who has obtained a court order directed at, say, the poster of the defamatory content approaches an online intermediary seeking removal of that content, and the intermediary decides to remove it, the intermediary should publish a record of that removal. To be accepted by an intermediary, third party court orders should be issued by a court of appropriate jurisdiction after an adversarial legal proceeding, contain a certified and specific statement that certain content is unlawful, and specifically identify the content that the court has found to be unlawful, by specific, permalinked URL where possible.
  • This type of court order should be broken out separately from court orders directed at the applicable online intermediary in companies’ transparency reports because merely providing aggregate numbers that do not distinguish between the two types gives an inaccurate impression to users that a government is attempting to censor more content than it actually is. The idea of including first party court orders to remove content as a subcategory of ‘government requests’ is that a government’s judiciary speaks on behalf of the government, making determinations about what is permitted under the laws of that country. This analogy does not hold for court orders directed at third parties- when the court made its determination of legality on the content in question, it did not contemplate that the intermediary would remove the content. As such, the court likely did not weigh the relevant public interest and policy factors that would include the importance of freedom of expression or the precedential value of its decision. Therefore, the determination does not fairly reflect an attempt by the government to censor content and should not be considered as such.

    Instead, and especially considering that these third party court orders may be the basis for a number of content removals, third party court orders should be counted separately and presented with a published explanation in the company’s transparency report as to what they are and why the company has decided to remove content pursuant to its receipt of one.
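
    To make the distinction between these categories concrete, the sketch below shows one possible (purely illustrative) way an intermediary could tag incoming requests so that its transparency report can count first party and third party court orders separately; the category names are assumptions, not a standard taxonomy.

        from dataclasses import dataclass
        from enum import Enum, auto


        class RequestCategory(Enum):
            EXECUTIVE_OR_POLICE = auto()      # requests from any government agency
            COURT_ORDER_FIRST_PARTY = auto()  # order directed at the intermediary itself
            COURT_ORDER_THIRD_PARTY = auto()  # order directed at someone else, e.g. the poster
            PRIVATE_PARTY = auto()            # e.g. copyright notices, local-law complaints
            TOS_ENFORCEMENT = auto()          # removals under the intermediary's own policies


        @dataclass
        class RemovalRequest:
            category: RequestCategory
            description: str
            removed: bool


        def report_counts(requests):
            """Aggregate removals per category, keeping third party court orders
            separate so they are not mistaken for direct government censorship."""
            counts = {c: 0 for c in RequestCategory}
            for r in requests:
                if r.removed:
                    counts[r.category] += 1
            return counts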

    • Private-Party Requests
    Private-party requests are requests to remove content that are not issued by a government agency or accompanied by a court order. Some examples of private party requests include copyright complaints submitted pursuant to the Digital Millennium Copyright Act or complaints based on the laws of specific countries, such as laws banning Holocaust denial in Germany.

    • Policy/TOS Enforcement
    To give users a complete picture of the content that is being removed from the platforms that they use, corporate transparency reports should also provide information about the content that the intermediary removes pursuant to its own policies or terms of service, though there may not be a legal requirement to do so.

    • User Data Requests
    While this white paper is squarely focused on liability for content posted online and best practices for deciding when and how content should be removed from online services, corporate transparency reports should also provide information about requests for user data from executive agencies, courts, and others.

    Principle II: Consistency

  • Legal requirements for ISPs should be consistent, based on a global legal framework that establishes baseline limitations on legal immunity
    Broad variation amongst the legal regimes of the countries in which online intermediaries operate increases compliance costs for companies and may discourage them from offering their services in some countries due to the high costs of localized compliance. Reducing the number of speech platforms that citizens have access to limits their ability to express themselves. Therefore, to ensure that citizens of a particular country have access to a robust range of speech platforms, each country should work to harmonize the requirements that it imposes upon online intermediaries with the requirements of other countries. While a certain degree of variation between what is permitted in one country as compared to another is inevitable, all countries should agree on certain limitations to intermediary liability, such as the following:
  • Conduits should be immune from claims about content that they neither created nor modified
    As noted in the 2011 Joint Declaration on Freedom of Expression and the Internet, “[n]o one who simply provides technical Internet services such as providing access, or searching for, or transmission or caching of information, should be liable for content generated by others, which is disseminated using those services, as long as they do not specifically intervene in that content or refuse to obey a court order to remove that content, where they have the capacity to do so (‘mere conduit principle’).”[6]
  • Court orders should be required for the removal of content that is related to speech, such as defamation removal requests
    In the Center for Democracy and Technology’s Additional Responses Regarding Notice and Action, CDT outlines the case against allowing notice and action procedures to apply to defamation removal requests. They write:
  • “Uniform notice-and-action procedures should not apply horizontally to all types of illegal content. In particular, CDT believes notice-and-takedown is inappropriate for defamation and other areas of law requiring complex legal and factual questions that make private notices especially subject to abuse. Blocking or removing content on the basis of mere allegations of illegality raises serious concerns for free expression and access to information. Hosts are likely to err on the side of caution and comply with most if not all notices they receive, because evaluating notices is burdensome and declining to comply may jeopardize their protection from liability. The risk of legal content being taken down is especially high in cases where assessing the illegality of the content would require detailed factual analysis and careful legal judgments that balance competing fundamental rights and interests. Intermediaries will be extremely reluctant to exercise their own judgment when the legal issues are unclear, and it will be easy for any party submitting a notice to claim a good faith belief that the content in question is unlawful. In short, the murkier the legal analysis, the greater the potential for abuse.

    To reduce this risk, removal of or disablement of access to content based on unadjudicated allegations of illegality (i.e., notices from private parties) should be limited to cases where the content at issue is manifestly illegal – and then only with necessary safeguards against abuse as described above.

    CDT believes that online free expression is best served by narrowing what is considered manifestly illegal and subject to takedown upon private notice. With proper safeguards against abuse, for example, notice-and-action can be an appropriate policy for addressing online copyright infringement. Copyright is an area of law where there is reasonable international consensus regarding what is illegal and where much infringement is straightforward. There can be difficult questions at the margins – for example concerning the applicability of limitations and exceptions such as “fair use” – but much online infringement is not disputable.

    Quite different considerations apply to the extension of notice-and-action procedures to allegations of defamation or other illegal content. Other areas of law, including defamation, routinely require far more difficult factual and legal determinations. There is greater potential for abuse of notice-and-action where illegality is less manifest and more disputable. If private notices are sufficient to have allegedly defamatory content removed, for example, any person unhappy about something that has been written about him or her would have the ability and incentive to make an allegation of defamation, creating a significant potential for unjustified notices that harm free expression. This and other areas where illegality is more disputable require different approaches to notice and action. In the case of defamation, CDT believes “notice” for purposes of removing or disabling access to content should come only from a competent court after full adjudication.

    In cases where it would be inappropriate to remove or disable access to content based on untested allegations of illegality, service providers receiving allegations of illegal content may be able to take alternative actions in response to notices. Forwarding notices to the content provider or preserving data necessary to facilitate the initiation of legal proceedings, for example, can pose less risk to content providers’ free expression rights, provided there is sufficient process to allow the content provider to challenge the allegations and assert his or her rights, including the right to speak anonymously.”[7]

    Principle III: Clarity

  • All notices that request the removal of content should be clear and meet certain minimum requirements
    The Center for Democracy and Technology outlined requirements for clear notices in a notice and action system in response to a European Commission public comment period on a revised notice and action regime.[8] They write:
  • “Notices should include the following features:

    1. Specificity. Notices should be required to specify the exact location of the material – such as a specific URL – in order to be valid. This is perhaps the most important requirement, in that it allows hosts to take targeted action against identified illegal material without having to engage in burdensome search or monitoring. Notices that demand the removal of particular content wherever it appears on a site without specifying any location(s) are not sufficiently precise to enable targeted action.
    2. Description of alleged illegal content. Notices should be required to include a detailed description of the specific content alleged to be illegal and to make specific reference to the law allegedly being violated. In the case of copyright, the notice should identify the specific work or works claimed to be infringed.
    3. Contact details. Notices should be required to contain contact information for the sender. This facilitates assessment of notices’ validity, feedback to senders regarding invalid notices, sanctions for abusive notices, and communication or legal action between the sending party and the poster of the material in question.
    4. Standing: Notices should be issued only by or on behalf of the party harmed by the content. For copyright, this would be the rightsholder or an agent acting on the rightsholder’s behalf. For child sexual abuse images, a suitable issuer of notice would be a law enforcement agency or a child abuse hotline with expertise in assessing such content. For terrorism content, only government agencies would have standing to submit notice.
    5. Certification: A sender of a notice should be required to attest under legal penalty to a good-faith belief that the content being complained of is in fact illegal; that the information contained in the notice is accurate; and, if applicable, that the sender either is the harmed party or is authorized to act on behalf of the harmed party. This kind of formal certification requirement signals to notice-senders that they should view misrepresentation or inaccuracies on notices as akin to making false or inaccurate statements to a court or administrative body.
    6. Consideration of limitations, exceptions, and defenses: Senders should be required to certify that they have considered in good faith whether any limitations, exceptions, or defenses apply to the material in question. This is particularly relevant for copyright and other areas of law in which exceptions are specifically described in law.
    7. An effective appeal and counter-notice mechanism. A notice-and-action regime should include counter-notice procedures so that content providers can contest mistaken and abusive notices and have their content reinstated if its removal was wrongful.
    8. Penalties for unjustified notices. Senders of erroneous or abusive notices should face possible sanctions. In the US, senders may face penalties for knowingly misrepresenting that content is infringing, but the standard for “knowingly misrepresenting” is quite high and the provision has rarely been invoked.  A better approach might be to use a negligence standard, whereby a sender could be held liable for damages or attorneys’ fees for making negligent misrepresentations (or for repeatedly making negligent misrepresentations). In addition, the notice-and-action system should allow content hosts to ignore notices from senders with an established record of sending erroneous or abusive notices or allow them to demand more information or assurances in notices from those who have in the past submitted erroneous notices. (For example, hosts might be deemed within the safe harbor if they require repeat abusers to specifically certify that they have actually examined the alleged infringing content before sending a notice).”[9]
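
    As a rough sketch of how a host might operationalise these minimum requirements before acting on a notice, consider the following; the field names and the checks (for example, the very crude URL and contact tests) are editorial assumptions, not a prescribed schema.

        from dataclasses import dataclass
        from urllib.parse import urlparse


        @dataclass
        class Notice:
            urls: list                   # exact locations of the allegedly illegal material
            description: str             # what the content is and which law it allegedly violates
            sender_contact: str          # contact details of the sender
            sender_standing: str         # e.g. "rightsholder" or "authorized agent"
            certified: bool              # good-faith attestation under legal penalty
            considered_exceptions: bool  # sender weighed limitations, exceptions, defenses


        def is_actionable(notice: Notice) -> bool:
            """Return True only if the notice meets the minimum requirements;
            otherwise the host should reply to the sender rather than remove content."""
            specific = bool(notice.urls) and all(
                urlparse(u).path not in ("", "/") for u in notice.urls)
            return (specific
                    and len(notice.description.strip()) > 0
                    and "@" in notice.sender_contact   # crude contact-details check
                    and notice.sender_standing != ""
                    and notice.certified
                    and notice.considered_exceptions)
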
  • All ISPs should publish their content removal policies online and keep them current as they evolve
    The UNESCO report states, by way of background, that “[c]ontent restriction practices based on Terms of Service are opaque. How companies remove content based on Terms of Service violations is more opaque than their handling of content removals based on requests from authorized authorities. When content is removed from a platform based on company policy, [our] research found that all companies provide a generic notice of this restriction to the user, but do not provide the reason for the restriction. Furthermore, most companies do not provide notice to the public that the content has been removed. In addition, companies are inconsistently open about removal of accounts and their reasons for doing so.”[10]
  • There are legitimate reasons why an ISP may want to have policies that permit less content, and a narrower range of content, than is technically permitted under the law, such as maintaining a product that appeals to families. However, if a company is going to go beyond the minimal legal requirements in terms of content that it must restrict, the company should have clear policies that are published online and kept up-to-date to provide its users notice of what content is and is not permitted on the company’s platform. Notice to the user about the types of content that are permitted encourages her to speak freely and helps her to understand why content that she posted was taken down if it must be taken down for violating a company policy.

  • When content is removed, a clear notice should be provided in the product that explains in simple terms that content has been removed and why
    This subsection works in conjunction with “ii,” above. If content is removed for any reason, either pursuant to a legal request or because of a violation of company policy, a user should be able to learn that content was removed if they try to access it. Requiring an on-screen message that explains that content has been removed and why is the post-takedown accompaniment to the pre-takedown published online policy of the online intermediary: both work together to show the user what types of content are and are not permitted on each online platform. Explaining to users why content has been removed in sufficient detail may also spark their curiosity as to the laws or policies that caused the content to be removed, resulting in increased civic engagement in the internet law and policy space, and a community of citizens that demands that the companies and governments it interacts with are more responsive to how it thinks content regulation should work in the online context.
  • The UNESCO report provides the following example of how Google provides notice to its users when a search result is removed, which includes a link to a page hosted by Chilling Effects:[11]

    “When search results are removed in response to government or copyright holder demands, a notice describing the number of results removed and the reasons for their removal is displayed to users (see screenshot below) and a copy of the request [is sent] to the independent non-profit organization ChillingEffects.org, which archives and publishes the request. When possible the company also contacts the website’s owners.”[12]

    This is an example of the message that is displayed when Google removes a search result pursuant to a copyright complaint.[13]
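
    A simple sketch (not Google’s actual implementation) of how an intermediary might compose such an in-product notice, including a pointer to the archived complaint; the wording and function name are hypothetical.

        def removal_notice(num_removed: int, legal_basis: str, archive_url: str) -> str:
            """Build the plain-language message shown in place of removed results."""
            return (
                f"In response to a complaint we received under {legal_basis}, "
                f"we have removed {num_removed} result(s) from this page. "
                f"You may read the complaint at {archive_url}."
            )


        print(removal_notice(2, "the US Digital Millennium Copyright Act",
                             "https://www.chillingeffects.org/"))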

  • Requirements that governments impose on intermediaries should be as clear and unambiguous as possible
    Imposing liability on internet intermediaries without providing clear guidance as to the precise type of content that is not lawful and the precise requirements of a legally sufficient notice encourages intermediaries to over-remove content. As Article 19 noted in its 2013 report on intermediary liability:
  • “International bodies have also criticized ‘notice and takedown’ procedures as they lack a clear legal basis. For example, the 2011 OSCE report on Freedom of Expression on the internet highlighted that: Liability provisions for service providers are not always clear and complex notice and takedown provisions exist for content removal from the Internet within a number of participating States. Approximately 30 participating States have laws based on the EU E-Commerce Directive. However, the EU Directive provisions rather than aligning state level policies, created differences in interpretation during the national implementation process. These differences emerged once the national courts applied the provisions.

    These procedures have also been criticized for being unfair. Rather than obtaining a court order requiring the host to remove unlawful material (which, in principle at least, would involve an independent judicial determination that the material is indeed unlawful), hosts are required to act merely on the say-so of a private party or public body. This is problematic because hosts tend to err on the side of caution and therefore take down material that may be perfectly legitimate and lawful. For example, in his report, the UN Special Rapporteur on freedom of expression noted:

    [W]hile a notice-and-takedown system is one way to prevent intermediaries from actively engaging in or encouraging unlawful behavior on their services, it is subject to abuse by both State and private actors. Users who are notified by the service provider that their content has been flagged as unlawful often have little recourse or few resources to challenge the takedown. Moreover, given that intermediaries may still be held financially or in some cases criminally liable if they do not remove content upon receipt of notification by users regarding unlawful content, they are inclined to err on the side of safety by overcensoring potentially illegal content. Lack of transparency in the intermediaries’ decision-making process also often obscures discriminatory practices or political pressure affecting the companies’ decisions. Furthermore, intermediaries, as private entities, are not best placed to make the determination of whether a particular content is illegal, which requires careful balancing of competing interests and consideration of defenses.”[14]

    Considering the above, if liability is to be imposed on intermediaries for certain types of unlawful content, the legal requirements that outline what is unlawful content and how to report it must be clear. Lack of clarity in this area will result in over-removal of content by rational intermediaries that want to minimize their legal exposure and compliance costs. Over-removal of content is at odds with the goals of freedom of expression.

    The UNESCO Report made a similar recommendation, stating that; “Governments need to ensure that legal frameworks and company policies are in place to address issues arising out of intermediary liability. These legal frameworks and policies should be contextually adapted and be consistent with a human rights framework and a commitment to due process and fair dealing. Legal and regulatory frameworks should also be precise and grounded in a clear understanding of the technology they are meant to address, removing legal uncertainty that would provide opportunity for abuse.”[15]

    Similarly, the 2011 Joint Declaration on Freedom of Expression and the Internet states:

    “Consideration should be given to insulating fully other intermediaries, including those mentioned in the preamble, from liability for content generated by others under the same conditions as in paragraph 2(a). At a minimum, intermediaries should not be required to monitor user-generated content and should not be subject to extrajudicial content takedown rules which fail to provide sufficient protection for freedom of expression (which is the case with many of the ‘notice and takedown’ rules currently being applied).”[16]

    Principle IV: Mindful Community Policy Making

    “Laws and regulations as well as corporate policies are more likely to be compatible with freedom of expression if they are developed in consultation with all affected stakeholders – particularly those whose free expression rights are known to be at risk.”[17] To be effective, policies should be created through a multi-stakeholder consultation process that gives voice to the communities most at risk of being targeted for the information they share online. Further, both companies and governments should embed an ‘outreach to at-risk communities’ step into their legislative and policymaking processes to be especially sure that those voices are heard. Finally, civil society should work to ensure that all relevant stakeholders have a voice in both the creation and revision of policies that affect online intermediaries. In the context of corporate policymaking, civil society can use strategies from activist investing to encourage investors to make the human rights and freedom of expression policies of Internet companies part of the calculus that investors use to decide where to place their money. Considering the above:

  • Human rights impact assessments, considering the impact of the proposed law or policy on various communities from the perspectives of gender, sexuality, sexual preference, ethnicity, religion, and freedom of expression, should be required before:
  • New laws are written that govern content issues affecting ISPs or conduct that occurs primarily online
    “Protection of online freedom of expression will be strengthened if governments carry out human rights impact assessments to determine how proposed laws or regulations will affect Internet users’ freedom of expression domestically and globally.”[18]
  • Intermediaries enact new policies
    “Protection of online freedom of expression will be strengthened if companies carry out human rights impact assessments to determine how their policies, practices, and business operations affect Internet users’ freedom of expression. This assessment process should be anchored in robust engagement with stakeholders whose freedom of expression rights are at greatest risk online, as well as stakeholders who harbor concerns about other human rights affected by online speech.”[19]
  • Multi-stakeholder consultation processes should precede any new legislation that will apply to content issues affecting online intermediaries or online conduct
    “Laws and regulations as well as corporate policies are more likely to be compatible with freedom of expression if they are developed in consultation with all affected stakeholders – particularly those whose free expression rights are known to be at risk.”[20]
  • Civil society and public interest groups should encourage responsible investment in companies that implement policies reflecting best practices for internet intermediaries
    “Over the past thirty years, responsible investors have played a powerful role in incentivizing companies to improve environmental sustainability, supply chain labor practices, and respect for human rights of communities where companies physically operate. Responsible investors can also play a powerful role in incentivizing companies to improve their policies and practices affecting freedom of expression and privacy by developing metrics and criteria for evaluating companies on these issues in the same way that they evaluate companies on other “environmental, social, and governance” criteria.”[21]
    Principle V: Necessity and Proportionality in Content Restriction

  • Content should only be restricted when there is a legal basis for doing so, or the removal is performed in accordance with a clear, published policy of the ISP
    As CDT outlined in its 2012 intermediary liability report, “[a]ctions required of intermediaries must be narrowly tailored and proportionate, to protect the fundamental rights of Internet users. Any actions that a safe-harbor regime requires intermediaries to take must be evaluated in terms of the principle of proportionality and their impact on Internet users’ fundamental rights, including rights to freedom of expression, access to information, and protection of personal data. Laws that encourage intermediaries to take down or block certain content have the potential to impair online expression or access to information. Such laws must therefore ensure that the actions they call for are proportional to a legitimate aim, no more restrictive than is required for achievement of the aim, and effective for achieving the aim. In particular, intermediary action requirements should be narrowly drawn, targeting specific unlawful content rather than entire websites or other Internet resources that may support both lawful and unlawful uses.”[22]
  • When content must be restricted, it should be restricted in the most minimal way possible (e.g., prefer domain removals to IP-blocking)
    There are a number of different ways that access to content can be restricted. Examples include hard deletion of the content from all of a company’s servers, blocking the download of an app or other software program in a particular country, blocking the content on all IP addresses affiliated with a particular country (“IP-Blocking”), removing the content from a particular domain of a product (e.g., removing a link from the .fr version of a search engine while it remains accessible on the .com version), blocking content from a ‘version’ of an online product that is accessible through a ‘country’ or ‘language’ setting on that product, or some combination of the last three options (e.g., an online product that directs the user to a version of the product based on the country that her IP address is coming from, but where the user can alter a URL or manipulate a drop-down menu to show her a different ‘country version’ of the product, providing access to content that may otherwise be inaccessible).
  • While almost all of the types of content restriction described above can be circumvented by technical means such as the use of proxies, IP-cloaking, or Tor, the average Internet user does not know that these techniques exist, much less how to use them. Of the restriction types described above, a domain removal, for example, is easier for an individual user to circumvent than IP-blocked content, because she only has to change the domain of the product she is using (e.g., to the “.com” version) to see content that has been locally restricted. To get around an IP-block, she would have to be sufficiently savvy to employ a proxy or cloak her true IP address.

    Therefore, the technical means used to restrict access to controversial content have a direct impact on the magnitude of the actual restriction on speech. The more restrictive the technical removal method, the fewer people will have access to that content. To preserve access to lawful content, online intermediaries should choose the least restrictive means of complying with removal requests, especially when the removal request is based on the law of a particular country that makes content unlawful that is lawful elsewhere. Further, when building new products and services, intermediaries should build in removal capabilities that minimally restrict access to controversial content; one way of expressing this preference is sketched below.
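
    The ‘least restrictive means’ preference can be made explicit as an ordering of removal methods, as in the hypothetical sketch below. The ordering itself is an editorial assumption drawn from the discussion above, and the compliance test is left to the caller, since only legal review can say whether a given method satisfies a particular order.

        from enum import IntEnum


        class RestrictionMethod(IntEnum):
            """Lower values are less restrictive of access to the content."""
            COUNTRY_DOMAIN_REMOVAL = 1    # e.g. remove from the .fr domain only
            COUNTRY_VERSION_BLOCK = 2     # hide in the 'country' or 'language' version
            IP_BLOCKING = 3               # block for all IPs geolocated to a country
            APP_COUNTRY_BLOCK = 4         # block download of an app in one country
            GLOBAL_DELETION = 5           # hard-delete from all servers


        def least_restrictive(available_methods, satisfies_request):
            """Pick the least restrictive available method that still complies.

            `satisfies_request` is a caller-supplied predicate deciding whether a
            given method is sufficient to comply with the order at hand."""
            for method in sorted(available_methods):
                if satisfies_request(method):
                    return method
            return None  # no compliant method available; escalate to legal review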

  • If content is restricted due to its illegality in a particular country, the geographical scope of the content restriction should be as minimal as possible
    Building on the discussion in “ii,” supra, a user should be able to access content that is lawful in her country even if it is not lawful in another country. Different countries have different laws and it is often difficult for intermediaries to determine how to effectively respond to requests and reconcile the inherent conflicts that result. For example, content that denies the holocaust is illegal in certain countries, but not in others. If an intermediary receives a request to remove content based on the laws of a particular country and determines that it will comply because the content is not lawful in that country, it should not restrict access to the content such that it cannot be accessed by users in other countries where the content is lawful. To respond to a request based on the law of a particular country by blocking access to that content for users around the world, or even users of more than one country, essentially allows for extraterritorial application of the laws of the country that the request came from. While it is preferable to standardize and limit the legal requirements imposed on online intermediaries throughout the world, to the extent that this is not possible, the next-best option is to limit the application of laws that are interpreted to declare certain content unlawful to the users that live in that country. Therefore, intermediaries should choose the technical means of content restriction that is most narrowly tailored to limit the geographical scope and impact of the removal.
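
    A minimal sketch of geographically scoped restriction at serving time, assuming the intermediary already has some (necessarily imperfect) way of geolocating requests; the content identifiers and country codes are illustrative only.

        # Map of content IDs to the set of country codes where a legal order
        # requires the content to be withheld.
        restrictions = {
            "post-123": {"DE"},   # e.g. unlawful under German law only
        }


        def is_visible(content_id: str, requester_country: str) -> bool:
            """Withhold content only from users in countries where it is unlawful,
            preserving access everywhere else."""
            blocked_in = restrictions.get(content_id, set())
            return requester_country not in blocked_in


        assert is_visible("post-123", "FR")      # still accessible from France
        assert not is_visible("post-123", "DE")  # withheld in Germany
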
  • The ability of conduits (telecommunications/internet service providers) to filter content should be minimized to the extent technically and legally possible
  • The 2011 Joint Declaration on Freedom of Expression and the Internet made the following points about the dangers of allowing filtering technology:

    “Mandatory blocking of entire websites, IP addresses, ports, network protocols or types of uses (such as social networking) is an extreme measure – analogous to banning a newspaper or broadcaster – which can only be justified in accordance with international standards, for example where necessary to protect children against sexual abuse.

    Content filtering systems which are imposed by a government or commercial service provider and which are not end-user controlled are a form of prior censorship and are not justifiable as a restriction on freedom of expression.

    Products designed to facilitate end-user filtering should be required to be accompanied by clear information to end-users about how they work and their potential pitfalls in terms of over-inclusive filtering.”[23]

    In short, filtering at the conduit level is a blunt instrument that should be avoided whenever possible. Similar to how conduits should not be legally responsible for content that they neither host nor modify (the ‘mere conduit’ rule discussed supra), conduits should technically restrict their ability to filter content such that it would be inefficient for government agencies to contact them to have content filtered. Mere conduits are not able to assess the context surrounding the controversial content that they are asked to remove and are therefore not the appropriate party to receive takedown requests. Further, when mere conduits have the technical ability to filter content, they open themselves to pressure from government to exercise that capability. Therefore, mere conduits should limit or not build in the capability to filter content.

  • Notice and notice, or notice and judicial takedown, should be preferred to notice and takedown, which should be preferred to unilateral removal
    Mechanisms for content removal that involve intermediaries acting without any oversight or accountability, or those which only respond to the interests of the party requesting removal, are unlikely to do a very good job at balancing public and private interests. A much better balance is likely to be struck through a mechanism where power is distributed between the parties, and/or where an independent and accountable oversight mechanism exists.
  • Considered in this way, there is a continuum of content removal mechanisms, ranging from those that are the least balanced and accountable to those that are more so. The least accountable is the unilateral removal of content by the intermediary, without legal compulsion, in response to a request received, without affording the uploader of the content the right to be heard or access to remedy.

    Notice and takedown mechanisms fit next along the continuum, provided that they incorporate, as the DMCA attempts to do, an effective appeal and counter-notice mechanism. Where notice and takedown falls short, however, is that its cost and incentive structure is weighted towards removal of content in cases of doubt or dispute, resulting in more content being taken down, and staying down, than would be socially optimal.

    A better balance is likely to be struck by a “notice and notice” regime, which provides strong social incentives for those whose content is reported to be unlawful to remove the content, but does not legally compel them to do so. If legal compulsion is required, a court order must be separately obtained.

    Canada is an example of a jurisdiction with a notice and notice regime, though limited to copyright content disputes. Although this regime is now established in legislation, it formalizes a previous voluntary regime, whereby major ISPs would forward copyright infringement notifications received from rightsholders to subscribers, but without removing any content and without releasing subscriber data to the rightsholders absent a court order. Under the new legislation additional record-keeping requirements are imposed on ISPs, but otherwise the essential features of the regime remain unchanged.

    Analysis of data collected during this voluntary regime indicates that it has been effective in changing the behavior of allegedly infringing subscribers. A 2010 study by the Entertainment Software Association of Canada (ESAC) found that 71% of notice recipients did not infringe again, while a similar 2011 study by Canadian ISP Rogers found that 68% of recipients received only one notice and 89% received no more than two, with only one subscriber in 800,000 receiving numerous notices.[24] However, in cases where a subscriber has a strong good-faith belief that the notice they received was wrong, there is no risk to them in disregarding the erroneous notice – a feature that does not apply to notice and takedown.
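
    The notice-and-notice workflow itself is simple to express: the ISP forwards the rightsholder’s notice to the subscriber and keeps a record, but removes nothing and discloses no subscriber data absent a court order. The sketch below is a schematic illustration under those assumptions, not a description of any particular ISP’s system.

        from dataclasses import dataclass, field
        from datetime import date


        @dataclass
        class Subscriber:
            account_id: str
            email: str
            notices_received: list = field(default_factory=list)


        def send_email(address: str, subject: str, body: str) -> None:
            # Placeholder for the ISP's mail system.
            print(f"To: {address}\nSubject: {subject}\n\n{body}")


        def handle_infringement_notice(subscriber: Subscriber, notice_text: str) -> None:
            """Forward the notice and keep a record; do not remove content and do not
            disclose the subscriber's identity to the sender without a court order."""
            subscriber.notices_received.append((date.today(), notice_text))
            send_email(subscriber.email,
                       subject="Notice of alleged copyright infringement",
                       body=notice_text)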

    Another similar way in which public and private interests can be balanced is through a notice and judicial takedown regime, whereby the rightsholder who issues a notice about offending content must have it assessed by an independent judicial (or perhaps administrative) authority before the intermediary will respond by taking the content down.

    An example of this is found in Chile, again limited to the case of copyright.[25] The system, introduced in 2010 in response to Chile’s Free Trade Agreement with the United States, is broadly similar to the DMCA, with the critical difference that intermediaries are not required to take material down in order to benefit from a liability safe harbor until a court orders the removal of the material. Responsibility for evaluating the copyright claims made is therefore shifted from intermediaries onto the courts.

    Although this requirement does impose a burden on the rightsholder, it serves a purpose by disincentivizing the issuance of automated or otherwise unjustified notices that are more likely to restrict or chill freedom of expression. In cases where there is no serious dispute about the legality of the content, it is unlikely that the lawsuit would be defended. In any case, the legislation authorizes the court to issue a preliminary injunction on an ex parte basis, on condition of payment of a bond.

  • Intermediaries should be allowed to charge for the time and expense associated with processing legal requests
    For an intermediary, it is time consuming and relatively expensive to understand the obligations that each country’s legal regime imposes and to accurately determine how each legal request should be handled. Especially for intermediaries with few resources, such as forum operators or owners of home Wi-Fi networks, the costs associated with being an intermediary can be prohibitive. Therefore, it should be within their rights to charge for their compliance costs if they are either below a certain user threshold or can demonstrate financial necessity.
  • Legal requirements imposed on intermediaries should be a floor, not a ceiling: ISPs can adopt more restrictive policies to more effectively serve their users as long as they have published policies that explain what they are doing
    The Internet has space for a wide range of platforms and applications directed to different communities, with different needs and desires. A social networking site directed at children, for example, may reasonably want to have policies that are much more restrictive than a political discussion board. Therefore, legal requirements that compel intermediaries to take down content should be seen as a ‘floor,’ not a ‘ceiling,’ on the range and quantity of content those intermediaries may remove. Intermediaries should retain control over their own policies as long as they are transparent about what those policies are, what types of content they remove, and why they removed particular pieces of content.
    Principle VI: Privacy

  • It is important to protect the ability of Internet users to speak by narrowing and clarifying the range of content for which intermediaries can be held liable, but it is also very important to make users feel comfortable sharing their views by ensuring that their privacy is protected. Protecting a user’s ability to share her views, especially when those views are controversial or bear directly on important political issues, requires that she be able to trust the intermediaries that she uses. This concept can be broken down into three sub-principles:
  • The user’s personal information should be protected to the greatest extent possible given the state of the art in encryption, security, and policy
    Users will be less willing to speak on important topics if they have legitimate concerns that their data may be taken from them. As stated in the UNESCO Report, “[b]ecause of the amount of personal information held by companies and ability to access the same, a company’s practices around collection, access, disclosure, and retention are key. To a large extent a service provider’s privacy practices are influenced by applicable law and operating licenses required by the host government. These can include requirements for service providers to verify subscribers, collect and retain subscriber location data, and cooperate with law enforcement when requested. Outcome: The implications of companies trying to balance a user’s expectation for privacy with a government’s expectation for cooperation can be serious and are inadequately managed in all jurisdictions studied.”[26]
  • Where possible, ISPs should help to preserve the user’s right to speak anonymously
    An important aspect of an Internet user’s ability to exercise her right to free expression online is the ability to speak anonymously. Anonymous speech is one of the great advances of the Internet as a communications medium and should be preserved to the extent possible. As noted by special rapporteur Frank LaRue, “[i]n order for individuals to exercise their right to privacy in communications, they must be able to ensure that these remain private, secure and, if they choose, anonymous. Privacy of communications infers that individuals are able to exchange information and ideas in a space that is beyond the reach of other members of society, the private sector, and ultimately the State itself. Security of communications means that individuals should be able to verify that only their intended recipients, without interference or alteration, receive their communications and that the communications they receive are equally free from intrusion. Anonymity of communications is one of the most important advances enabled by the Internet, and allows individuals to express themselves freely without fear of retribution or condemnation.”[27]
  • The user’s personally identifiable information (PII) should never be sold or used without her consent, and she should always know what is being done with it via an easily comprehensible dashboard
    The user’s trust in the online platform that she uses and relies upon is influenced not only by the relationships the intermediary maintains with the government, but also by those it maintains with other commercial entities. A user who fears that her data is routinely shared with third parties, perhaps without her consent or for marketing purposes, will never feel able to express her opinion freely. Therefore, it is the intermediary’s responsibility to ensure that its users know exactly what information it retains about them, with whom it shares that information and under what circumstances, and how to change the way that data is shared. All of this information should be available on a dashboard that is comprehensible to the average user and that gives her the ability to easily modify or withdraw her consent to the way her data is shared, and to change the amount of data, or the specific data, that the intermediary retains about her.
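    As a purely illustrative aid, the short Python sketch below shows the kind of information such a dashboard would need to hold for each user: which categories of data are retained, with whom each disclosure was made and for what purpose, and a way for the user to withdraw consent. All class and field names are hypothetical assumptions made for this sketch and do not describe any particular intermediary's product or API.

    # Hypothetical, minimal data model for a user-facing privacy dashboard.
    # Illustrative sketch only; names and fields are assumptions, not a real API.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List


    @dataclass
    class SharingRecord:
        """One disclosure of user data to a third party, with its purpose."""
        recipient: str          # e.g. "ad-network-x" (hypothetical)
        purpose: str            # e.g. "interest-based advertising"
        categories: List[str]   # e.g. ["location", "browsing history"]
        consent_given: bool
        shared_on: datetime


    @dataclass
    class ConsentDashboard:
        """What the dashboard must be able to show and let the user change."""
        user_id: str
        retained_categories: List[str] = field(default_factory=list)
        sharing_log: List[SharingRecord] = field(default_factory=list)

        def summary(self) -> str:
            """Plain-language summary of what is retained and with whom it is shared."""
            lines = [f"Data retained about you: {', '.join(self.retained_categories) or 'none'}"]
            for rec in self.sharing_log:
                status = "with your consent" if rec.consent_given else "without recorded consent"
                lines.append(
                    f"Shared {', '.join(rec.categories)} with {rec.recipient} "
                    f"for {rec.purpose} on {rec.shared_on:%Y-%m-%d} ({status})"
                )
            return "\n".join(lines)

        def withdraw_consent(self, recipient: str) -> None:
            """Record that the user no longer consents to sharing with this recipient."""
            for rec in self.sharing_log:
                if rec.recipient == recipient:
                    rec.consent_given = False

    The point of the sketch is the design choice, not the code: the record of what is retained, the log of what has been shared and why, and the user’s consent state all live in one user-visible structure, so a plain-language summary and a consent change are each one step away for the user.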
  • Principle VII: Access to Remedy

  • As noted in the UNESCO Report, “Remedy is the third central pillar of the UN Guiding Principles on Business and Human Rights, placing an obligation both on governments and on companies to provide individuals access to effective remedy. This area is where both governments and companies are almost consistently lacking. Across intermediary types, across jurisdictions and across the types of restriction, individuals whose content is restricted and individuals who wish to access such content are offered little or no effective recourse to appeal restriction decisions, whether in response to government orders, third party requests or in accordance with company policy. There are no private grievance or due process mechanisms that are clearly communicated and readily available to all users, or consistently applied.”[28]

  • Any notice and takedown system is subject to abuse, and any company policy that results in the removal of content is subject to mistaken or inaccurate takedowns. Both are substantial problems that can be remedied only if users are able to tell the intermediary when it has improperly removed a specific piece of content, and the intermediary has the technical and procedural ability to put that content back. However, the technical ability to reinstate content that was improperly removed may conflict with data retention laws; this conflict should be explored in more detail. In general, however, every time content is removed, there should be:

  • A clear mechanism through which users can request reinstatement of content
    When an intermediary decides to remove content, it should be immediately clear to the user that content has been removed and why it was removed (see discussion of in-product notice, supra). If the user disagrees with the content removal decision, there should be an obvious, online method for her to request reinstatement of the content.
  • Reinstatement of content should be technically possible
    When intermediaries (who are subject to intermediary liability) are building new products, they should build the capability to remove content into the product with a high degree of specificity, so as to allow for narrowly tailored content removals when a removal is legally required. Relatedly, all online intermediaries should build into their products the capability to reinstate content while maintaining compliance with data retention laws (one possible approach is sketched after this list).
  • Intermediaries should have policies and procedures in place to handle reinstatement requests
    Between the front end (the online mechanism to request reinstatement of content) and the back end (the technical ability to reinstate content) is the necessary middle layer, which consists of the intermediary’s internal policies and processes that allow valid reinstatement requests to be assessed and acted upon. In line with the corporate ‘responsibility to respect’ human rights, and considered along with the human rights principle of ‘access to remedy,’ intermediaries should have a system in place from the time an online product launches to ensure that reinstatement requests can be made and will be processed quickly and appropriately.
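    To make the three elements above concrete, the following Python sketch shows one way they could fit together: a removal record that preserves content for a bounded retention window (the back end), an online reinstatement request (the front end), and a simple review step that restores content when a request is approved within the window (the middle layer). This is an illustrative sketch under stated assumptions, not a description of any intermediary's actual system; the class names, the workflow states, and the 90-day retention window are all hypothetical.

    # Illustrative sketch only; all names and the retention window are assumptions.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Dict, Optional

    RETENTION_WINDOW = timedelta(days=90)  # assumed figure; in practice set by applicable law


    @dataclass
    class RemovedItem:
        content_id: str
        content: Optional[str]   # kept only during the retention window
        reason: str              # why it was removed (shown to the user)
        removed_at: datetime


    @dataclass
    class ReinstatementRequest:
        content_id: str
        user_id: str
        explanation: str
        status: str = "pending"  # pending -> approved / denied / expired


    class RemovalRegistry:
        def __init__(self) -> None:
            self.removed: Dict[str, RemovedItem] = {}
            self.requests: Dict[str, ReinstatementRequest] = {}

        # Back end: a narrowly tailored removal that keeps the content restorable.
        def remove(self, content_id: str, content: str, reason: str) -> None:
            self.removed[content_id] = RemovedItem(content_id, content, reason, datetime.utcnow())

        # Front end: the obvious, online mechanism for the user to object.
        def request_reinstatement(self, content_id: str, user_id: str, explanation: str) -> ReinstatementRequest:
            request = ReinstatementRequest(content_id, user_id, explanation)
            self.requests[content_id] = request
            return request

        # Middle layer: a reviewer decides the request; approval restores the
        # content only if it is still within the retention window.
        def decide(self, content_id: str, approve: bool) -> Optional[str]:
            request = self.requests[content_id]
            item = self.removed.get(content_id)
            if approve and item and item.content is not None:
                if datetime.utcnow() - item.removed_at <= RETENTION_WINDOW:
                    request.status = "approved"
                    restored = item.content
                    del self.removed[content_id]
                    return restored
            request.status = "denied" if not approve else "expired"
            return None

        # Retention compliance: hard-delete stored content bodies past the window,
        # after which reinstatement is no longer technically possible.
        def purge_expired(self) -> None:
            cutoff = datetime.utcnow() - RETENTION_WINDOW
            for item in self.removed.values():
                if item.removed_at < cutoff:
                    item.content = None

    In this sketch, compliance with retention limits is handled by purging stored content bodies once the window has passed, which is also the point at which reinstatement stops being technically possible; it is one way of making concrete the trade-off between access to remedy and data retention flagged above.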
  • Principle VIII: Accountability

  • Governments must ensure that independent, transparent, and impartial accountability mechanisms exist to verify the practices of governments and companies with regards to managing content created online
    “While it is important that companies make commitments to core principles on freedom of expression and privacy, make efforts to implement those principles through transparency, policy advocacy, and human rights impact assessments, it is also important that companies take these steps in a manner that is accountable to stakeholders. One way of doing this is by committing to external third party assurance to verify that their policies and practices are being implemented to a meaningful standard, with acceptable consistency wherever their service is offered. Such assurance gains further public credibility when carried out with the supervision and affirmation of multiple stakeholders including civil society groups, academics, and responsible investors. The Global Network Initiative provides one such mechanism for public accountability.  Companies not currently participating in GNI, or a process of similar rigor and multi-stakeholder involvement, should be urged by users, investors, and regulators to do so.”[29]
  • Civil society should encourage comparative studies between countries and between ISPs with regards to their content removal practices to identify best practices
    Civil society has the unique ability to look longitudinally across this issue to determine and compare how different intermediaries and governments are responding to content removal requests. Without information about how other governments and intermediaries are handling these issues, it will be difficult for each government or intermediary to learn how to improve its laws or policies. Therefore, civil society has an important role to play in steadily improving human rights outcomes on online platforms by performing and sharing ongoing, comparative research.
  • Civil society should establish best practices and benchmarks against which ISPs and government can be measured, and should track governments and ISPs over time in public reports
    “A number of projects that seek, define and implement indicators and benchmarks for governments or companies are either in development (examples include: UNESCO’s Indicators of Internet Development project examining country performance, Ranking Digital Rights focusing on companies) or already in operation (examples include the Web Foundation’s Web Index, Freedom House’s Internet Freedom Index, etc.). The emergence of credible, widely-used benchmarks and indicators that enable measurement of country and company performance on freedom of expression will help to inform policy, practice, stakeholder engagement processes, and advocacy.”[30]
  • Principle IX: Due Process - In Both Legal and Private Enforcement

  • ISPs should always consider context before removing content, and governments and courts should always consider context before ordering that certain content be removed
    “Governments need to ensure that legal frameworks and company policies are in place to address issues arising out of intermediary liability. These legal frameworks and policies should be contextually adapted and be consistent with a human rights framework and a commitment to due process and fair dealing. Legal and regulatory frameworks should also be precise and grounded in a clear understanding of the technology they are meant to address, removing legal uncertainty that would provide opportunity for abuse.”[31]
  • Principles for Courts
  • An independent and impartial judiciary exists, at least in part, to preserve the citizen’s due process rights. Many have called for an increased reliance on courts to make determinations about the legality of content posted online, both to shift the censorship function away from unaccountable private actors and to ensure that courts only order the removal of content that is actually unlawful. However, when courts do not have an adequate technical understanding of how content is created and shared on the Internet, of the rights of the intermediaries that facilitate the posting of that content, or of who should properly be ordered to remove unlawful content, they do not add value to the online ecosystem. Therefore, courts should keep certain principles in mind to preserve the due process rights of the users that post content and the intermediaries that host it.

  • Preserve due process for intermediaries- do not order them to do something before giving them notice and the opportunity to appear before the court
  • In a dispute between two private parties over a specific piece of content posted online, it may appear to the court that the easy solution is to order the intermediary who hosts the content to remove it. However, this approach does not extend any due process protections to the intermediary and does not adequately reflect the intermediary's status as something other than the creator of the content. If a court believes it is necessary for an intermediary to be involved in a legal proceeding between two private parties, the court should provide the intermediary with proper notice and the opportunity to appear before issuing any orders.

  • Necessity and proportionality of judicial determinations- judicial orders determining the illegality of specific content should be narrowly tailored to avoid over-removal of content
  • With regards to government removal requests, the UNESCO Report notes that “[o]ver-broad law and heavy liability regimes cause intermediaries to over-comply with government requests in ways that compromise users’ right to freedom of expression, or broadly restrict content in anticipation of government demands even if demands are never received and if the content could potentially be found legitimate even in a domestic court of law.”[32] Courts should follow the same principle: order the removal of only the minimum of content necessary to remedy the identified harm, and nothing more.

  • Courts should clarify whether ISPs have to remove content in response to court orders directed to third parties, or only have to remove content when directly ordered to do so (first party court orders) after an adversarial proceeding to which the ISP was a party
  • See discussion of the difference between first party and third party court orders (supra, section a., “Transparency”). Ideally, any decision that courts reach on this issue would be consistent across different countries.

  • Questions- related unresolved issues that should be referred to the larger group
  • How should the conflict between access to remedy and data retention laws that require content to be hard deleted after a certain period of time be resolved?  I think access to remedy has to be subordinated to the data retention laws. Let's make that our draft position, but continue to flag it for discussion.
  • Should ISPs have to remove content in response to court orders directed to third parties, or only have to remove content when directly ordered to do so (first party court orders) after an adversarial proceeding to which the ISP was a party?  I think first party orders.  Let's make that our draft position, but continue to flag it for discussion.

  • [1] Center for Democracy and Technology, Shielding the Messengers: Protecting Platforms for Expression and Innovation at 4-15 (Version 2, 2012), available at https://www.cdt.org/files/pdfs/CDT-Intermediary-Liability-2012.pdf (see pp.4-15 for an explanation of these different models and the pros and cons of each).

    [2] UNESCO, “Fostering Freedom Online: The Roles, Challenges, and Obstacles of Internet Intermediaries” at 6-7 (Draft Version, June 16th, 2014) (Hereinafter “UNESCO Report”).

    [3] UNESCO Report at 56.

    [4] UNESCO Report at 37.

    [5] Center for Democracy and Technology, Additional Responses Regarding Notice and Action, available at https://www.cdt.org/files/file/CDT%20N&A%20supplement.pdf.

    [6] The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information, Article 19, Global Campaign for Free Expression, and the Centre for Law and Democracy, JOINT DECLARATION ON FREEDOM OF EXPRESSION AND THE INTERNET at 2 (2011), available at http://www.osce.org/fom/78309 (Hereinafter “Joint Declaration on Freedom of Expression”).

    [7] Center for Democracy and Technology, Additional Responses Regarding Notice and Action, available at https://www.cdt.org/files/file/CDT%20N&A%20supplement.pdf.

    [8] Id.

    [9] Id.

    [10] UNESCO Report at 113-14.

    [11] ‘Chilling Effects’ is a website that allows recipients of ‘cease and desist’ notices to submit the notice to the site and receive information about their legal rights. For more information about ‘Chilling Effects’ see: http://www.chillingeffects.org.

    [12] UNESCO Report at 73. You can see an example of a complaint published on Chilling Effects at the following location. “DtecNet DMCA (Copyright) Complaint to Google,” Chilling Effects Clearinghouse, March 12, 2013, www.chillingeffects.org/notice.cgi?sID=841442.

    [13] UNESCO Report at 73.

    [14] Article 19, Internet Intermediaries: Dilemma of Liability (2013), available at http://www.article19.org/data/files/Intermediaries_ENGLISH.pdf.

    [15] UNESCO Report at 120.

    [16] Joint Declaration on Freedom of Expression and the Internet at 2.

    [17] Id.

    [18] Id.

    [19] UNESCO Report at 121.

    [20] UNESCO Report at 104.

    [21] UNESCO Report at 122.

    [22] Center for Democracy and Technology, Shielding the Messengers: Protecting Platforms for Expression and Innovation at 12 (Version 2, 2012), available at https://www.cdt.org/files/pdfs/CDT-Intermediary-Liability-2012.pdf.

    [23] Joint Declaration on Freedom of Expression at 2-3.

    [24] Geist, Michael, Rogers Provides New Evidence on Effectiveness of Notice-and-Notice System (2011), available at http://www.michaelgeist.ca/2011/03/effectiveness-of-notice-and-notice/.

    [25] Center for Democracy and Technology, Chile’s Notice-and-Takedown System for Copyright Protection: An Alternative Approach (2012), available at https://www.cdt.org/files/pdfs/Chile-notice-takedown.pdf.

    [26] UNESCO Report at 54.

    [27] “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue (A/HRC/23/40),” United Nations Human Rights, 17 April 2013, http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A.HRC.23.40_EN.pdf, § 24, p. 7.

    [28] UNESCO Report at 118.

    [29] UNESCO Report at 122.

    [30] Id.

    [31] UNESCO Report at 120.

    [32] Id. at 119.
