Content Moderation Policies and Procedures

By using this website, you agree to this policy—please read it carefully.

Introduction: This document outlines YumyHub’s Content Moderation Policy (“Policy”), which explains when, why, and how YumyHub moderates content, as well as how content that violates the Terms of Service is handled. If you are a YumyHub user, this Policy is part of your legal agreement with the platform. Please read it in its entirety. Key points to note include:

  • Our Terms of Service and Community Guidelines clarify what is and isn’t allowed on YumyHub.
  • If you wish to appeal a decision to deactivate content, disable an account, or issue a final warning, you must complete the Deactivation Appeal Form, subject to our Appeals Policy.

Interpretation: Unless otherwise specified, terms defined in the Terms of Service have the same meanings in this Policy.

When YumyHub moderates content: All content is screened before it is published on the site. While YumyHub allows creators and fans to express themselves freely, we moderate content to ensure it aligns with our Terms of Service, which include our Acceptable Use Policy and Community Guidelines.

Why we moderate content: YumyHub allows creators to monetize their content and connect with their audience. To keep our user community safe, YumyHub may, although it is not required to, moderate content to ensure compliance with its Terms of Service.

What content we moderate: YumyHub reserves the right to review and remove any content on the platform, including text, images, audio, direct messages, and videos, at any time.

How we moderate content: YumyHub uses advanced digital technology in combination with human moderators to assess whether content adheres to our Terms of Service. Moderators are trained to identify and remove content that violates these terms.

Technology used for content moderation: We use several technological tools to detect content that might breach our Terms of Service. These include:

  • “Hashlists” and “stop words” (prohibited terms on YumyHub) to scan media, text, and audio to block restricted content. Hashlists are unique identifiers linked to harmful media, such as non-consensual intimate images (NCII) and child sexual abuse material (CSAM). Any content matching these lists or prohibited words is automatically blocked.
  • Image and text scanning tools that help moderators prioritize content for review. These tools flag potential violations, which are then manually checked by moderators before any action is taken.
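To make the hashlist and stop-word mechanism concrete, here is a minimal illustrative sketch. It is not YumyHub's actual implementation: the hash values, prohibited terms, and function names are hypothetical, and production systems match media with perceptual hashes (which survive resizing and re-encoding) rather than the simple cryptographic hash shown here.

```python
import hashlib

# Hypothetical examples only. Real hashlists (e.g., for NCII or CSAM) are
# curated by trusted organizations and use perceptual hashing, not SHA-256.
BLOCKED_HASHES = {
    # SHA-256 of an empty file, used here purely as a stand-in entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
STOP_WORDS = {"forbiddenterm1", "forbiddenterm2"}  # placeholder terms


def media_is_blocked(media_bytes: bytes) -> bool:
    """Block media whose digest matches a known-harmful hashlist entry."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in BLOCKED_HASHES


def text_is_blocked(text: str) -> bool:
    """Block text containing any prohibited term ("stop word")."""
    tokens = set(text.lower().split())
    return bool(tokens & STOP_WORDS)
```

In this sketch, a hashlist match or stop-word hit blocks the upload automatically, while anything merely flagged by scanning tools would instead be queued for human review, as described above.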

Consequences of sharing prohibited content: If you upload content that violates YumyHub’s Terms of Service, we may:

  • Deactivate the content
  • Issue a warning or final warning
  • Deactivate the account
  • Ban the account holder from the platform

Repeated or severe violations may result in account suspension or termination, and the user may be prohibited from creating new accounts on YumyHub.

Common reasons for content removal: We often remove content that violates our Acceptable Use Policy, such as content that:

  • Shows creators whose age we cannot verify as 18+.
  • Features unverified creators or entirely AI-generated content.
  • Depicts a creator pretending to be under 18, even in fictional or role-play scenarios.
  • Shows explicit material with individuals who haven’t verified their age, identity, or consent.
  • Includes nudity in public places or involving animals.
  • Is illegal where you live or where the content was filmed.
  • Features extreme violence, blood, or harmful content.
  • Displays weapons, illegal drugs, or the unsafe use of everyday objects as sex toys.
  • Promotes in-person meetups between creators and fans, such as through raffles or contests.

Illegal content: Any illegal content will be removed and, when applicable, reported to law enforcement. We report suspected CSAM to the National Center for Missing and Exploited Children via its CyberTipline. For details on how we comply with data-sharing obligations and assist law enforcement, please refer to our Privacy Policy.

Appealing a content moderation decision: If you believe your content or account was wrongly deactivated, you can submit an appeal by completing the Deactivation Appeal Form, subject to our Appeals Policy.

Last updated: January 1, 2025.