3.1 Proactive content moderation
Adobe believes that maintaining engaging, trustworthy communities that foster creativity requires clear guidelines for acceptable behavior and robust processes for enforcing those guidelines consistently. Our Policies, which can be found at our Transparency Center, establish standards for user conduct across all Adobe products and services. We discover policy-violative and allegedly illegal content through in-product reporting, our publicly available reporting channels, and automated technologies.
When content violates our Policies, we take action against it globally. Although our Policies typically cover material that is also locally illegal, Adobe is committed to respecting the applicable laws of the EU and its Member States. If we determine that content violates local law but does not otherwise violate our Policies, we may disable it locally by blocking access to it in the relevant jurisdiction.
Content Reporting Mechanisms
In-Product Reporting
In many Adobe products and services, users can report content they believe violates our Policies via in-product reporting options, which are detailed on a per-product basis here. For all other products and services, users and non-users may always contact abuse@adobe.com to file a report with Adobe’s Trust & Safety team. Whenever someone reports an alleged violation of our Policies, our team may review the content in question to determine whether a violation took place and action that content accordingly.
Reporting Forms
Anyone in the EU can report content on Adobe products or services that they believe violates applicable laws of the EU or its Member States through our Illegal Content Reporting Form. Reporters are asked to provide additional context about the allegedly illegal content, including the basis for the report and the country where they allege the law has been violated. Whenever someone reports allegedly illegal content, our team may review the content in question to determine whether it violates applicable law or our Policies and action it accordingly.
Anyone around the world can report an alleged intellectual property violation by visiting our infringement reporting form. We also accept notices by mail or fax, as detailed in our Intellectual Property Removal Policy.
Intellectual Property Removal Policy
At Adobe, we respect the intellectual property rights of others, and we expect our users to do the same. As outlined above, intellectual property infringement violates our Policies across all Adobe products and services, and we disable content in response to complete and valid notices of infringement. Each effective notice filed with Adobe against a user counts as one 'strike' against that user's account, regardless of how many pieces of allegedly infringing content the notice identifies. A user who receives three strikes within a one-year period will have their account terminated. Our Intellectual Property Removal Policy is set out here.
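For illustration, the strike-counting logic described above could be sketched as follows. This is a minimal sketch only: the class and method names are hypothetical, and the real system would also handle counter-notices and other policy nuances not shown here.

```python
from datetime import datetime, timedelta

STRIKE_LIMIT = 3                      # three strikes ...
STRIKE_WINDOW = timedelta(days=365)   # ... within a one-year period

class StrikeTracker:
    """Hypothetical sketch of the repeat-infringer policy described above."""

    def __init__(self):
        self.strikes: dict[str, list[datetime]] = {}  # user_id -> strike times

    def record_notice(self, user_id: str, now: datetime) -> bool:
        """Record one strike for one effective notice (however many pieces
        of content the notice identifies) and return True if the account
        should be terminated."""
        history = self.strikes.setdefault(user_id, [])
        history.append(now)
        # Only strikes within the trailing one-year window count.
        recent = [t for t in history if now - t <= STRIKE_WINDOW]
        self.strikes[user_id] = recent
        return len(recent) >= STRIKE_LIMIT
```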
Abusive Content Detection
To enforce our Policies on a global scale, Adobe relies on a variety of tools and mechanisms to detect and remove potentially violative content hosted on our servers. The measures we use depend on whether the content at issue is posted on an online platform (such as Behance) or shared using a publicly accessible link (such as a public Adobe Express page) (collectively, “publicly accessible content”), or whether it is kept in private cloud storage. We do not apply any of these measures to locally stored content.
Fully Automated Tools
Our automated tools use multiple signals to detect and remove publicly accessible content that may violate our Policies. For example, these tools enable us to detect and automatically remove fraud, phishing, and spam content on products such as Adobe Express, as well as content on Behance that might violate our policies on nudity or on violence and gore. Classifiers assign scores to text, images, and videos detected across our products and services, and content is removed automatically based on those scores. We never automatically remove content located in private storage. Using these automated models helps us detect more problematic content and make quicker enforcement decisions, which in turn helps keep our communities safe.
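The score-based removal step can be illustrated with a short sketch. The policy categories, threshold values, and function names below are illustrative assumptions, not Adobe's published configuration; what the sketch demonstrates is per-category score thresholds and the rule that private-storage content is never removed automatically.

```python
from typing import Dict

# Hypothetical per-category removal thresholds (Adobe does not publish these).
REMOVAL_THRESHOLDS: Dict[str, float] = {
    "fraud_phishing": 0.98,
    "spam": 0.95,
    "nudity": 0.97,
    "violence_gore": 0.97,
}

def classify(content: bytes) -> Dict[str, float]:
    """Stand-in for model inference: returns one score per policy category."""
    return {category: 0.0 for category in REMOVAL_THRESHOLDS}

def moderate(content: bytes, publicly_accessible: bool) -> str:
    """Automatically remove publicly accessible content that scores above
    a category threshold; leave everything else untouched."""
    if not publicly_accessible:
        return "no_action"  # private-storage content is never auto-removed
    scores = classify(content)
    for category, score in scores.items():
        if score >= REMOVAL_THRESHOLDS[category]:
            return f"removed:{category}"
    return "no_action"
```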
Hybrid Tools
In some cases, we supplement the fully automated detection of violative publicly accessible content with human review to ensure the accuracy of our actions. As with our fully automated tools, classifiers assign scores to text, images, and videos detected across our products and services; our Trust & Safety team then reviews the detected content and takes appropriate enforcement action. In all cases, human review occurs only after content has been flagged by our abuse detection models or reported by another user. We also use this hybrid system of review to combat child sexual abuse material, which may also be stored in private cloud storage, as detailed below.
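The difference from the fully automated flow above is the destination of flagged content: instead of being removed outright, it is queued for a human decision. A minimal sketch, assuming a single hypothetical flagging threshold:

```python
from dataclasses import dataclass
from typing import List

FLAG_THRESHOLD = 0.80  # hypothetical score above which content is flagged

@dataclass
class ReviewItem:
    content_id: str
    score: float         # classifier score for the detected content
    user_reported: bool  # True if another user reported the content

review_queue: List[ReviewItem] = []

def route_for_review(item: ReviewItem) -> None:
    """Queue content for the Trust & Safety team. Human review happens only
    after model flagging or a user report; nothing else is ever shown to
    a reviewer."""
    if item.user_reported or item.score >= FLAG_THRESHOLD:
        review_queue.append(item)
```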