At xHamster (the "Platform", "we"), we prioritize safety, privacy, and trust, and we are committed to protecting our users from illegal content. Upholding these values is essential to our corporate culture. We firmly believe in inclusivity and freedom of expression, which can only be achieved if our platform remains secure and trustworthy for our adult users, business partners, and content creators. As responsible members of the online community, we recognize the importance of devoting sufficient time and resources to combat inappropriate and illegal content, including non-consensual sexual and intimate content and child sexual abuse material (CSAM).
Throughout our tenure as a leading platform in the adult industry, we have taken proactive measures to protect our platform from abusive and illegal content, including any trace of CSAM. We have implemented various policies and procedures, both internally and publicly available on the platform, to maintain a safe and trustworthy environment. We continually develop our processes to efficiently identify, remove, investigate, and report inappropriate materials, employing both proactive and reactive approaches. We acknowledge that improvement is an ongoing process, and we constantly seek innovative solutions to stay ahead in our mission.
Below, you will find information about the documents, policies, techniques, safeguards, procedures, and mechanisms we have implemented on our platform to maintain the high standards mentioned above.
I. Platform’s documentation:
The Platform has published several documents that clearly outline the Platform’s zero-tolerance policy towards any kind of illegal content, namely:
- Terms & Conditions / User Agreement (the “User Agreement”) available at https://xhamster.com/info/terms. The User Agreement describes in detail how the Platform operates, what kind of content can be published by verified uploaders (individuals and legal entities), what kind of measures have been implemented by the Platform to combat illegal content and material, the general prohibition of illegal content on the Platform, etc.
- DMCA Notice & Takedown Policy and Procedures (the “DMCA policy”) available at https://xhamster.com/info/dmca. The DMCA policy describes a transparent process that copyright owners can follow to protect their content if they believe that their content has been published without the necessary permissions/authorizations or valid grounds.
- Parental Controls (the “Parental Controls Policy”) available at https://xhamster.com/info/parentalcontrol. To protect minors, users should implement parental control protections, such as computer hardware, software, or filtering services, that can help limit minors’ access to harmful material. The Parental Controls Policy describes tools parents can use to protect children from adult content and provides links to those tools.
II. Only verified uploaders can upload content to the Platform.
We have established a rigorous and efficient verification process for uploaders who wish to publish content on our platform. This process combines automated AI tools and manual review by our dedicated moderation team. To ensure the highest standards of safety and security, we collaborate with leading third-party providers in digital identification.
The Platform establishes that all verified individual uploaders are either over eighteen (18) years of age or have reached the age of majority in their respective jurisdiction. Verified uploaders are required to maintain appropriate documentation for all individuals (models, co-performers, participants) depicted in the content, in compliance with record-keeping requirements such as 18 U.S.C. §2257 and 28 C.F.R. 75 and/or the adult-industry guidelines of a globally recognized financial service provider, as well as any other applicable laws and regulations.
The onboarding process for uploaders involves multiple Platform teams, including the Moderation Team, Legal Team, and Support Team. Currently, potential uploaders must complete the following steps and requirements to publish content on the platform:
1. To begin using the Platform, prospective uploaders must create an account by completing the registration form and becoming a member (registered user of the Platform). This requires providing a valid email address.
2. Members must then undergo a verification process to become verified uploaders. The process involves filling out a form on the dedicated web page of the Platform. The required information differs for natural persons and legal entities. For natural persons, the process includes submitting a photo of their identification document and taking a selfie with the document, using our third-party digital identification service providers.
3. Live age verification is mandatory for all individual uploaders. The Platform has contracts with reputable third-party service providers in the sphere of digital identity and age verification to conduct these live checks. Negotiations are also underway with other reliable service providers in this sphere to enhance the uploaders' experience. Upon successful completion of the age verification process and manual approval by the Platform's moderation team, the member attains verified status.
4. Following the automated checks by third-party service providers, the Platform's moderation team performs additional manual moderation of the information and documents submitted. This team ensures the accuracy of the data and reviews the previous stages for any errors. If the application is error-free, it may be accepted; otherwise, it will be rejected for failing to meet the onboarding requirements.
5. If the uploader represents a legal entity, the Platform conducts two separate checks: a business check and a legal check. The business check assesses the commercial potential and compatibility of cooperation with the uploader from a business and content perspective. The legal check, carried out by the Platform's legal team, involves a Know Your Customer (KYC) verification of the uploader using internal procedures and assistance from relevant companies' registries in each jurisdiction.
6. After the onboarding process, the moderation team is responsible for evaluating each video and content item based on specific criteria outlined in Section III, "Content Moderation," of this page. The content must also undergo various internal software checks as specified below.
7. If the uploader successfully completes all the steps and requirements, their content will be approved by the moderation team and made publicly available.
III. Content Moderation.
Content Moderation on the Platform consists of two phases: Pre-Publishing and Post-Publishing.
The Pre-Publishing phase consists of the following separate steps:
1. Verified uploaders upload content to our Platform, and our moderation team reviews it against our internal reviewer’s guide to determine that the content does not feature any illegal elements. In other words, the first criterion for content moderation in the pre-publishing phase is general acceptability for publication, evaluated on the basis of the internal reviewer’s guide.
2. Assessment of the content from a technical perspective. See section V. “Technical and organizational safeguards” below for more details about this content moderation criterion.
3. Additionally, verified creators are obliged to upload identification documents and co-performer (model) release forms for each individual participating in the content. Our moderation team reviews all of the said documentation before allowing the content to be published and made publicly available to other users of the Platform. In other words, the third criterion is to check that the uploader has all the necessary documents for each participant in the content. The following documents must be collected and made available to the Platform: (i) a model release, which enables the Platform to verify that the co-performer consents to participating in the video and to its publication on the Platform; (ii) an identification document of the co-performer, which enables the Platform to verify the co-performer’s age; (iii) a 2257 form (if applicable).
4. In addition, we maintain a broad list of banned keywords that is updated continuously. These keywords cannot be used in content titles or descriptions, nor in users’ searches.
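The keyword check described above can be sketched as follows. This is a hypothetical illustration only: the Platform's actual list and matching rules are internal, and the terms, function name, and matching logic below are placeholders, not the Platform's implementation.

```python
import re

# Placeholder terms; the real banned-keyword list is internal and far larger.
BANNED_KEYWORDS = {"badterm", "forbiddenword"}

def contains_banned_keyword(text: str) -> bool:
    """Return True if any banned keyword appears as a whole word in the text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return any(word in BANNED_KEYWORDS for word in words)

# A title or search query that fails the check would be blocked or flagged.
print(contains_banned_keyword("Title with badterm inside"))   # True
print(contains_banned_keyword("A perfectly ordinary title"))  # False
```

Whole-word matching after normalization, as sketched here, avoids false positives from benign words that merely contain a banned substring.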
Post-Publishing Phase:
Content Moderation is an ongoing procedure. Therefore, we have introduced various mechanisms to ensure the continuous moderation of content:
1. We have provided our users with several ways to inform the Platform of any issues or concerns they may have regarding any type of content available on the Platform. Our users’ perspective and opinion are what we value the most on our Platform. Users can submit a request/report about the content in the following ways:
a) Each piece of content is accompanied by a flag button, in order for our users to have an efficient and easy way to express their concerns about the content.
b) Users can also express their concerns about the content through our Contact us/Single point of contact by selecting the 'Report Violation of Our Rules' option.
c) Users can notify us of any alleged copyright infringement on the Platform through our Contact us/Single point of contact by selecting the 'Report Violation of Our Rules' option and then selecting the 'Copyright Infringement / DMCA Notice and Takedown' option.
2. Upon receiving reports concerning content available on the Platform, our moderators proceed with its re-evaluation, and if the said content is deemed inappropriate or in violation of our Terms & Conditions, it will be immediately removed from the Platform.
By following the measures described in sections II and III of this page, the Platform is able to provide its users with a safe environment free of illegal or inappropriate content, as well as to remove any potentially malicious uploaders.
IV. Content Downloading is Prohibited
Users do not have the option to download content from the Platform. In this way, we reduce the risk of redistribution of content that may ultimately be determined to be illegal or inappropriate, and we protect the content from potential copyright infringement.
However, Virtual Reality content is an exception due to technical limitations and the inability to provide a seamless user experience within the Platform interface.
V. Technical and organizational safeguards
To support our zero-tolerance policy towards any kind of illegal content, the Platform has implemented various technical and organizational measures as described and explained below:
1. The Platform is proud to partner with charities and organizations that support noble causes such as combating child exploitation, human trafficking, slavery, and providing general support to adult industry performers. Some of our partnerships include RTA (Restricted to Adults - https://www.rtalabel.org/), ASACP (the Association of Sites Advocating Child Protection - https://www.asacp.org/?content=validate&ql=0b38b549f187edb883db00b7b057cbaa), Revenge Porn Helpline (UK service supporting adults (aged 18+) who are experiencing intimate image abuse - https://revengepornhelpline.org.uk/), and others. The cost of these services is borne solely by the Platform. They are offered free of charge to our users and uploaders.
2. xHamster is committed to facilitating parental control over children's access to adult content. All xHamster pages contain "restricted to adults" (RTA) tags, enabling parental tools, controls, and similar solutions to block adult content. In simple words, the RTA tag allows parents to protect minors from adult content across various browsers, devices (mobile phones, laptops), and operating systems by easily setting up parental control measures. More information about such measures can be found in our Parental Controls Policy (https://xhamster.com/info/parentalcontrol).
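For illustration, the RTA label is a fixed meta tag published by rtalabel.org that filtering and parental-control software looks for in a page's HTML. The sketch below shows how such software might detect it; the function name and sample page are hypothetical.

```python
# The standard RTA label string published at rtalabel.org.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def page_is_rta_labelled(html: str) -> bool:
    """Return True if the page carries the RTA content label."""
    return RTA_LABEL in html

# Hypothetical sample page carrying the RTA meta tag.
sample_page = (
    '<html><head>'
    '<meta name="RATING" content="RTA-5042-1996-1400-1577-RTA" />'
    '</head><body>...</body></html>'
)
print(page_is_rta_labelled(sample_page))  # True
```

Because the label is a single well-known string, even simple filtering tools can recognize and block labelled pages without parsing the full document.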
3. The Platform has developed an advanced system which analyzes text related to the content (title, description) for prohibited and suspicious words. Such words are flagged by the system and displayed to the moderation team during the moderation process.
4. Our Platform collaborates with various leading software providers to detect potentially harmful content. Prior to moderation, all content is processed by software that allows the Platform to identify content that may violate the Platform's User Agreement. The software detects inappropriate content using a shared database of digital hashes (fingerprints) and can also detect inappropriate content using artificial intelligence technologies.
5. The Platform uses digital fingerprinting technology which has been specifically designed for the Platform. This software protects the Platform against inappropriate content that has already been removed from the Platform in the past. Digital fingerprinting technology compares the hashes (fingerprints) of newly uploaded content with a database of hashes (fingerprints) of previously removed content. If there is a match, this correlation is highlighted for the moderation team and a human-based decision is made about the content. In simple words, this software prevents inappropriate content from being re-uploaded.
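The matching flow described in point 5 can be sketched as follows. This is a simplified illustration under stated assumptions: production fingerprinting systems use perceptual (similarity-tolerant) hashes of video frames, whereas the exact SHA-256 digest below only demonstrates the lookup against a removed-content database; all names here are hypothetical.

```python
import hashlib

# Database of fingerprints of content previously removed by moderators.
removed_content_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Exact-match stand-in for a perceptual video fingerprint."""
    return hashlib.sha256(data).hexdigest()

def record_removal(data: bytes) -> None:
    """Store the fingerprint of content removed from the Platform."""
    removed_content_hashes.add(fingerprint(data))

def flag_for_review(data: bytes) -> bool:
    """Return True if a new upload matches previously removed content,
    so a moderator can make the final human-based decision."""
    return fingerprint(data) in removed_content_hashes

record_removal(b"removed-video-bytes")
print(flag_for_review(b"removed-video-bytes"))  # True
print(flag_for_review(b"new-video-bytes"))      # False
```

Note that a match only surfaces the upload to the moderation team; as the section states, the final decision remains human-made.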
VI. Personnel allocation
We have a dedicated and diligent team in our moderation and support departments that helps our Platform in achieving the goals defined herein. Our teams are structured to ensure the highest levels of moderation, security, and compliance for our Platform. Our moderation team is responsible for monitoring content and ensuring that the Platform remains a safe and welcoming environment. In addition, our legal team ensures that we remain in full compliance with all the relevant legal requirements and regulations.
VII. Commitment to improvement
We are committed to keeping our Platform as safe as possible, and we are always looking for ways to improve. As part of this effort, we invest regularly in our safety measures and work to identify any weaknesses in our processes.
We have implemented several actions to achieve this goal, including:
1. Constantly increasing the number of our dedicated employees and training our teams to identify potential violations and respond promptly to users’ requests.
2. Continuously improving our technologies to make our moderation tools more accurate and efficient in combating potential violations.
We believe these efforts contribute to maintaining a safe and enjoyable Platform for all users.
If you have questions about this page, our Support Team will be happy to answer them.