Published: February 27, 2026
Reporting Period: January 1, 2025 – December 31, 2025
About this Transparency Report
Adobe Software Systems Ireland (“Adobe” or “we”) shares the European Commission's goal of making the internet safer through transparency and accountability. In accordance with Articles 15 and 24 of the European Union (“EU”) Digital Services Act (“DSA” or “Regulation”), Adobe is publishing our first transparency report (“Report”) for the products and services we determined to be within scope of the Regulation. It outlines our efforts to moderate content across our diverse offerings in accordance with our General Terms of Use, Content Policies, and product-specific Community Guidelines (collectively, “Policies”), and provides detailed metrics regarding those efforts for the reporting period.
Adobe’s products and services range from subscription enterprise collaboration tools to free photo editing mobile apps, and we are continually assessing whether our offerings fall within scope of the DSA. We are committed to regularly updating our Policies and processes and to publishing this report annually in accordance with our obligations under the Regulation. More information about how we approach content moderation and the DSA may be found at our Transparency Center.
This report includes the following information:
1. Orders Received from Member States’ Authorities
2. Notices Received in Accordance with Article 16
3. Content Moderation Engaged in at Adobe’s Own Initiative
4. Complaints Received Through Adobe’s Internal Complaint Handling Systems
5. Out-of-Court Dispute Settlements
6. Article 23 Suspensions Imposed to Mitigate Misuse
7. Average Monthly Active Users
Adobe is providing a summary of this year’s report for readability and accessibility.
Adobe’s official report in accordance with Commission Implementing Regulation 2024/2835 (“Report”) and last year’s report may be accessed below (following Section 7: Average Monthly Active Users).
Section 1: Orders received from Member States’ authorities
Article 15(1)(a): information about orders received from Member States’ authorities
This section details the volume of requests from law enforcement and government agencies to remove content or provide user information pursuant to Articles 9 and 10.
1.1 Government removal orders from EU member states
Adobe did not receive any removal orders from courts or government agencies in the EU during the reporting period.
Median time to acknowledge receipt: 24 hours
Median time to give effect to the order: 96 hours
User data requests include orders issued pursuant to local law and law enforcement requests as well as emergency disclosure requests. Our Trust & Safety Team carefully reviews these requests to determine the validity of the legal process, assess the proportionality of the request, and ensure compliance with international data protection commitments made by Adobe. More information about how Adobe handles requests for user data may be found in our guidelines on Government and Third-Party Requests for User Data.
The table below provides information on the number of requests for user data Adobe received from EU Member State authorities and whether any information was produced for the reporting period.
Median time to give effect to the order: 120 hours
Section 2: Notices Received in Accordance with Article 16
Adobe has processes in place to allow anyone in the EU, including users, non-users, Trusted Flaggers as defined by Article 22, and other entities, to report content that they believe violates Member State law in accordance with Article 16. When we receive these specific notices, we review the reported content in accordance with our Policies and take the appropriate action. If the reported content does not violate Adobe’s Policies, we review it for legality based on the provided information and may restrict access to the content in the relevant jurisdiction. Article 16 notices are reviewed manually by the Adobe Trust & Safety and IP teams and are not processed by automated means for any of Adobe’s in-scope products or services.
Trusted Flaggers who have been designated by the Digital Services Coordinator of the Member State in which they are established can, as described above, report allegedly illegal content in line with Article 22.
Adobe has not received any reports from designated Trusted Flaggers as of 31 December 2025.
2.1 Total number of Article 16 notices [ii]
The table below provides information on the number of notices submitted by EU users, including Adobe policy violations and content suspected of being illegal under local law during the reporting period, broken down by category and product/service.
2.2 Median time to take action on content identified in Article 16 notices [iii]
The table below provides information on the median time required for Adobe to take action on content identified in Article 16 notices, broken down by product/service.
Section 3: Content moderation engaged in at Adobe’s own initiative
3.1 Proactive content moderation
Adobe believes that maintaining engaging and trustworthy communities that foster creativity requires clear guidelines for acceptable behavior and robust processes for consistent policy enforcement. Our Policies, which may be found at our Transparency Center, establish standards for user conduct across all Adobe products and services. We discover policy-violative and alleged illegal content through in-product reporting, our publicly available reporting channels, and through automated technologies.
When content violates our Policies, we take action against it globally. Although our Policies typically cover material that is locally illegal, Adobe is also committed to respecting applicable laws of the EU and its member states. If we determine that content violates local law but does not otherwise violate our Policies, we may disable it locally by blocking it in the relevant jurisdiction.
Content Reporting Mechanisms
In-Product Reporting
With many Adobe products and services, users can report content they believe violates our Policies via in-product reporting options. Those reporting options are detailed on a per-product basis here. For any other products and services, users and non-users may always reach out to abuse@adobe.com to file a report with Adobe’s Trust & Safety team. Whenever someone reports an alleged violation of our Policies, our team may review the content in question to determine whether a violation took place and action that content accordingly.
Reporting Forms
Anyone in the EU can report content on Adobe products or services that they believe violates applicable laws of the EU or its Member States through our Illegal Content Reporting Form. Reporters are asked to provide additional context about the allegedly illegal content, including the basis for the report and the country where they allege the law has been violated. Our team reviews each report to determine whether the content violates our Policies or the cited local law and actions the content accordingly.
Anyone around the world can report an intellectual property violation by visiting our infringement reporting form. We also accept notices via mail or fax as detailed in our Intellectual Property Removal Policy.
Intellectual Property Removal Policy
At Adobe, we respect the intellectual property rights of others, and we expect our users to do the same. As outlined above, intellectual property infringement is a violation of our Policies across all Adobe products and services. We disable content in response to complete and valid notices of infringement. Under this policy, when an effective notice is filed with Adobe against a user regarding one or more pieces of allegedly infringing content, the user receives one “strike” against their account. If the user receives three “strikes” within a one-year period, their account will be terminated. Our Intellectual Property Removal Policy is set out here.
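The strike rule above amounts to a rolling one-year count. As a minimal sketch (the function and constant names are hypothetical, not Adobe’s implementation), the logic might look like:

```python
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=365)  # the one-year period described in the policy
STRIKE_LIMIT = 3                     # strikes within the window before termination

def record_strike(strike_dates: list, now: datetime):
    """Record a new strike and report whether the account should be terminated.

    Only strikes falling within the one-year rolling window count toward
    the limit; older strikes age out.
    """
    recent = [d for d in strike_dates + [now] if now - d <= STRIKE_WINDOW]
    return recent, len(recent) >= STRIKE_LIMIT

# Example: strikes on days 0 and 100 plus two more after day 400 —
# the first strike ages out, so termination triggers on the fourth notice.
history = []
base = datetime(2025, 1, 1)
for offset in (0, 100, 400, 410):
    history, terminate = record_strike(history, base + timedelta(days=offset))
```

The design point is that a strike is not permanent: an account with two strikes a year apart is not terminated by a third, because the earliest has expired.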
Abusive Content Detection
To enforce our Policies on a global scale, Adobe relies on a variety of tools and mechanisms to detect and remove potentially violative content that is hosted on our servers. We utilize different measures depending on whether the content at issue is posted on an online platform (such as posted to Behance), shared using a publicly accessible link (such as shared via a public Adobe Express page) (collectively, “publicly accessible content”) or if the content is kept in private cloud storage. We do not utilize any of these measures on locally stored content.
Fully Automated Tools
Our automated tools use multiple signals to detect and remove publicly accessible content that may violate our Policies. For example, these tools enable us to detect and automatically remove fraud, phishing, and spam content on products such as Adobe Express. They also enable us to detect and remove content on Behance that might violate our nudity or violence and gore policies. Classifiers assign scores to text, images, and videos detected across our products and services and remove the content based on these scores. We never automatically remove content located in private storage. Using these automated models helps us detect more problematic content and make quicker enforcement decisions, which in turn helps keep our communities safe.
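Score-based triage of this kind typically routes content by confidence thresholds. The sketch below is purely illustrative — the thresholds, category names, and structure are assumptions, not Adobe’s actual values — but it captures the two constraints stated above: only publicly accessible content is eligible for automatic removal, and private-storage content is never auto-removed.

```python
from dataclasses import dataclass

# Illustrative thresholds; real values and routing rules are not public.
REMOVE_THRESHOLD = 0.95   # classifier confidence above which content is auto-removed
REVIEW_THRESHOLD = 0.60   # confidence above which content is queued for human review

@dataclass
class Item:
    item_id: str
    is_public: bool   # publicly accessible vs. private cloud storage
    score: float      # classifier confidence that the item violates policy

def triage(item: Item) -> str:
    """Route a scored item to an enforcement outcome."""
    if not item.is_public:
        return "no_action"          # private storage is never auto-removed
    if item.score >= REMOVE_THRESHOLD:
        return "auto_remove"        # fully automated removal
    if item.score >= REVIEW_THRESHOLD:
        return "human_review"       # hybrid path (see next subsection)
    return "no_action"
```

The middle band feeding a human-review queue is what distinguishes the fully automated path from the hybrid one described next.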
Hybrid Tools
In addition to fully automated content removal, in some cases, we supplement automatic detection of violative publicly accessible content with human review to ensure the accuracy of our actions. Classifiers assign scores to text, images, and videos detected across our products and services, and our moderation team reviews the detected content and takes appropriate enforcement action. In most situations, human review of content within an account only occurs after it has been flagged by our abuse detection models or reported to us by another user or law enforcement. We also use this hybrid system of review to combat child sexual abuse material, which may also be stored in private cloud storage, as detailed below.
In limited circumstances, we may conduct manual human review of content associated with accounts that are suspected to be repeat offenders or otherwise linked to known abusive activity based on account-level, behavioral, or technical signals (such as prior enforcement history, device or network indicators, or other integrity signals).
Tools Used to Combat Child Sexual Abuse Material (CSAM)
Adobe is deeply committed to keeping children safe online and we are doing our part to fight the spread of child sexual abuse material (CSAM). We have a zero-tolerance policy against any material uploaded to our servers, whether publicly accessible or kept in private storage, that sexualizes, sexually exploits, or endangers children.
- As part of these efforts, we employ several mechanisms combining automatic detection with human review, including multiple methods to detect CSAM, such as sophisticated machine learning models and hash matching technology. Hash matching enables us to compare digital signatures (or “hashes”) of images and videos uploaded by any Adobe user to relevant storage servers against databases of known CSAM hashes. All confirmed matches are human reviewed to ensure a high level of accuracy and quality of all reports before they are sent to the National Center for Missing and Exploited Children (NCMEC).
- Our Trust & Safety team will then review, report, and remove CSAM discovered by our machine learning models and hash matching technology, as well as by user reports or account investigations. Where possible, we permanently terminate any Adobe user account found to have uploaded or created content sexualizing minors. We share hashes of previously undetected CSAM with the NCMEC-managed industry hash list to help prevent the redistribution of CSAM on other platforms. We continue to advance Adobe’s CSAM detection capabilities and build wellness features into our moderation tools.
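Conceptually, hash matching is a set-membership test: compute a digital signature for each upload and check it against a database of known hashes, queuing matches for human review rather than actioning them automatically. The sketch below is a simplified illustration only — production systems use robust perceptual hashes (e.g., PhotoDNA-style) rather than plain SHA-256, and the known-hash database is supplied by child-safety organizations; all names here are hypothetical.

```python
import hashlib

# Placeholder database of known-CSAM hashes (hypothetical; in practice
# supplied by child-safety organizations, not maintained locally).
KNOWN_HASHES = set()

def file_hash(data: bytes) -> str:
    """Digital signature ("hash") of an uploaded file. Illustrative only:
    cryptographic hashes match exact copies, not visually similar images."""
    return hashlib.sha256(data).hexdigest()

def scan_upload(data: bytes, review_queue: list) -> bool:
    """Compare an upload's hash against the known database.

    Matches are appended to a human-review queue, mirroring the
    confirmed-match review step described above; nothing is actioned
    automatically at this stage.
    """
    digest = file_hash(data)
    if digest in KNOWN_HASHES:
        review_queue.append(digest)
        return True
    return False
```

The key property is that the platform never needs to store the abusive material itself to detect re-uploads — only the hashes — which is also why sharing newly generated hashes with an industry list helps other platforms block redistribution.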
Content Enforcement
Content Enforcement Teams
Adobe has several specially trained teams in place to ensure that reported content across our broad set of products and services is promptly reviewed and appropriately assessed:
- The Trust & Safety team is responsible for moderation of content on our user-generated content products and services (such as Behance) that violates the Adobe Content Policies.
- In addition, Adobe Stock has a team of individuals responsible for reviewing contributor submissions prior to allowing them to be offered for licensing as part of the Adobe Stock collection.
- Lastly, we have a team of IP agents who are specifically trained to handle IP infringement claims across all our products and services.
Our content enforcement teams receive detailed training on a variety of topics during onboarding and are updated on new laws or relevant political or historical contexts. In the event of particularly complex reports, content enforcement teams can discuss with leadership or escalate to members of Adobe’s IP and Trust & Safety Legal and Policy teams, who may in turn consult with both internal and external specialists with expertise in the laws of the EU and the Member States.
Content and Account Actions
Adobe acts quickly to take appropriate action against violations of our Policies or applicable law. Our Trust & Safety, Stock and IP teams review content that has been reported or detected for potential violations. When we determine that content is violative, we take action that may include:
- Global Deactivation: We first review content for violations of our Policies, and violative content is deactivated globally. When we deactivate content, it is no longer available to users or non-users anywhere in the world.
- Local Block: Adobe may restrict access to content in each relevant jurisdiction if we determine that the content violates local law but does not violate our policies. When we locally block content, that content is not visible to users or non-users in the relevant jurisdiction but remains visible elsewhere.
- Limiting Distribution: When we limit distribution of a piece of content, that content will remain on the product or service but may not be visible to certain users.
- Monetary restrictions: Adobe Stock has defined licensing and contributor requirements. Contributors are limited in what they may submit and must avoid prohibited activity, such as operating multiple accounts to game visibility or artificially inflate sales, and submissions are reviewed before they become publicly visible in search and licensing results. Buyers license assets under tiered licenses (standard, enhanced, extended) that permit specific uses (e.g., commercial use, copy/view limits, restrictions on resale or merchandise) and are purchased via subscription plans or credit packs; failure to hold the correct license can restrict use even in commercial projects. Where applicable, Adobe may remove or restrict funds obtained through deceptive behaviors designed to artificially inflate licensing results.
We consider several factors when determining the appropriate content enforcement action. As described above, we may make the decision to globally deactivate or locally block the content based on the nature of the violation or other context specific to each case. We also may decide on a case-by-case basis to allow certain content to remain on our products and services but take steps to ensure that it cannot be discovered accidentally or viewed by certain users.
Adobe may also restrict or limit the distribution or capabilities of any account for violations of our policies.
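The distinction between global deactivation and a local block comes down to whose jurisdiction the viewer is in. As a minimal sketch of that visibility model (the action names and function are hypothetical, and limiting distribution is not modeled):

```python
from typing import Optional, Set

def is_visible(action: Optional[str], blocked_regions: Set[str], viewer_region: str) -> bool:
    """Whether a piece of actioned content is visible to a given viewer.

    Hypothetical model of the enforcement actions described above:
    global deactivation hides content everywhere; a local block hides it
    only in the jurisdictions where it violates local law.
    """
    if action == "global_deactivation":
        return False                                  # hidden worldwide
    if action == "local_block":
        return viewer_region not in blocked_regions   # hidden only locally
    return True                                       # no action taken
```

So content locally blocked in one Member State remains visible to viewers elsewhere, while globally deactivated content is visible to no one.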
Notice and Appeals
Adobe takes several steps to ensure transparency in our policies and actions, and to provide users with a method of appealing our decisions.
- For our in-scope products and services, Adobe provides email notice of content enforcement actions to impacted users and individuals or entities who report content.
- Both impacted users and reporters can appeal our content enforcement decisions. If a user or reporter believes that our decision was made in error, they can file an appeal via our appeals form or via email. Some users may have additional appeal options or redress mechanisms available under their local law.
- When we send an email to a user or reporter to detail an enforcement action, we typically provide a link to our appeals form within that email. If an appeal is submitted via the form, the user and reporter will receive additional updates via email. If a user or reporter chooses to contact us via email instead of through the appeals form, we will also send additional updates via email.
3.2 Number of Content Moderation Actions Taken at Adobe’s Initiative for Violation of Terms and Conditions [iv]
Adobe considers content moderation actions taken at our own initiative to be actions taken on content available in the EU on the basis that the content violates the Policies but was not formally reported to Adobe.
These content moderation actions include both proactive and reactive enforcement. Proactive enforcement occurs when an Adobe employee or contractor or Adobe technology identifies potentially policy-violating content, and actions that content based on our Policies. Reactive enforcement occurs when a user or other external entity reports content to Adobe, and that content is actioned if in violation of our Policies.
3.3 Number of Content Moderation Actions Taken by Automated Means [vii]
The table below provides information on the number of pieces of content removed as a direct result of automated enforcement, broken down by product/service. Not all Adobe products and services have automated enforcement mechanisms.
3.4 Number of Content Moderation Actions Taken at Adobe’s Initiative for Content Deemed Illegal [viii]
Adobe considers content moderation actions taken at our own initiative to be actions taken on content available in the EU on the basis that the content is deemed to be illegal but was not formally reported to Adobe. These actions can be applied to the specific product or applied at the Adobe-ID level across several Adobe products.
For Adobe-ID level actions, Adobe took 1,260 measures against content deemed illegal under local regulations due to child safety. More information about the reason for these removals can be found in the Tools Used to Combat Child Sexual Abuse Material (CSAM) section of this report.
For Behance, Adobe removed 18 pieces of content deemed illegal under local regulations due to intellectual property infringement. More information about the reason for these removals can be found in our Intellectual Property Removal Policy, set out here.
Section 4: Complaints Received through Adobe’s Internal Complaint Handling Systems (i.e., Appeals)
4.1 Number of Appeals Received and Results [ix]
The table below provides information on the number of appeals received through all appeal channels, i.e., via the appeal link included in our notification emails and via email, organized by product/service.
4.2 Number of Appeals Received from Content Reporters [x]
Adobe did not receive any appeals from content reporters during the reporting period.
Section 5: Out-of-Court Dispute Settlements (Article 24(1)(a))
Adobe will inform both the user and the reporter of their opportunity to seek additional review by certified, EU-based out-of-court dispute settlement bodies. Adobe did not receive any requests from users or reporters to settle content moderation disputes out of court during the reporting period.
Section 6: Article 23 Suspensions Imposed to Mitigate Misuse (Article 24(1)(b))
6.1 Number of Suspensions for Manifestly Illegal Content Imposed Pursuant to Article 23
If an Adobe user frequently uploads manifestly illegal content, or a reporter frequently submits manifestly unfounded reports or appeals through our channels, we may suspend their ability to use the relevant product, service, or system for a reasonable period of time. Adobe did not suspend any accounts for submitting manifestly unfounded reports or appeals through our channels during the reporting period.
Section 7: Average Monthly Active Users
Article 24(2) of the DSA requires “online platforms” to publish information on their average monthly active users (“MAUs”) in the EU every six months. Adobe provides information about the EU MAUs for our products and services that may fall within the DSA’s scope of “online platforms” here.
Adobe’s official report in accordance with Commission Implementing Regulation 2024/2835 (“Report”):
You can access Adobe’s older Transparency Reports here:
[i] We have included in the above table only those EU Member States whose authorities submitted requests for user data during the reporting period.
[ii] We have included in the above tables only those Adobe products and services, and categories of alleged violations, for which we received Article 16 notices during the reporting period. For a full breakdown of products, including those with no moderation actions for this period, please see the required Excel spreadsheet.
[iii] We have included in the above tables only those Adobe products and services, and categories of alleged violations, for which we received Article 16 notices during the reporting period. For a full breakdown of products, including those with no moderation actions for this period, please see the required Excel spreadsheet.
[iv] We have included in the above table only those products and services, and policy categories, that have data to report for the reporting period.
[v] Adobe Stock may remove contributor assets in accordance with its Content Submission Guidelines, which contain product-specific requirements that do not conform to the policy categories enumerated in this table. Moderation volumes vary by product and use case. In some instances, higher numbers reflect proactive clean-up efforts or audits during the reporting period, which are part of our ongoing approach to platform safety.
[vi] Not all enumerated policy violation categories in this table will be applicable to our diverse offerings and uses of our products and services. For this reason, some products will have content moderation actions listed under “Other” to account for enforcement of other product-specific terms and policies.
[vii] We have included in the above table only those products and services, and policy categories, that have data to report for the reporting period.
[viii] We have included in the below table only those Adobe products and services for which we restricted accounts in the European Union during the reporting period.
[ix] We have included in the below table only those Adobe products and services for which we received appeals during the reporting period.
[x] We have included in the below table only those Adobe products and services for which we received appeals, and their associated policy categories, during the reporting period.