Content Moderation Policy

Last Updated: 18 November 2025
Website: oum.wtf
Platform Name: Open University Media


1. Introduction

This Content Moderation Policy describes how Open University Media (“the Platform”) reviews, evaluates, approves, or removes user-generated content. The Platform is committed to providing a secure, respectful, and legally compliant environment where students can express themselves anonymously without fear of harassment or unlawful exposure of their identity.

Content moderation is conducted through a combination of automated AI-based tools and human oversight. Moderation prioritizes safety, legality, respect, and the integrity of community interactions.

This Moderation Policy must be read together with the Terms & Conditions, Privacy Policy, and Community Guidelines, which collectively govern user participation.


2. Moderation Objectives

The primary goals of moderation on Open University Media are:

  1. To ensure user safety and mental well-being

  2. To prevent illegal or harmful content

  3. To protect anonymity and privacy

  4. To maintain a respectful and constructive community

  5. To comply with applicable Indian laws

  6. To facilitate open emotional expression within reasonable limits

  7. To remove content that violates the rules or may cause harm to others

Moderation is designed to serve students while upholding strict safety standards.


3. Moderation Methods

Moderation is carried out using two primary methods:

3.1 Automated AI Moderation

All submitted content is scanned by AI systems trained to detect:

  • Hate speech

  • Excessive or targeted abusive language

  • Sexual content

  • Content involving minors

  • Violence or threats

  • Encouragement of illegal activities

  • Non-English language

  • Identity violations

  • Doxxing

  • Self-harm encouragement

  • Sensitive or graphic content

  • Possible defamation

  • Swearing beyond acceptable limits

  • Contextual cues that may indicate harassment

AI moderation provides fast, consistent screening of all submissions.

However, AI tools are not flawless. They may:

  • Misinterpret sarcasm or emotional venting

  • Block content that is safe or acceptable

  • Approve content that later needs removal

  • Misjudge context or tone

Users acknowledge that AI moderation may result in errors and is supplemented by human review where necessary.
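
As an illustration only, this screening flow can be thought of as a threshold rule: high-confidence violations are blocked automatically, while uncertain cases are deferred to a person. The sketch below is a minimal Python rendering of that idea; the classify function and both thresholds are assumptions for the example, not the Platform's actual implementation.

    # Minimal sketch of threshold-based screening (illustrative only).
    # `classify` stands in for a hypothetical AI model call returning
    # per-category confidence scores; the real pipeline is not published.

    BLOCK_THRESHOLD = 0.90   # high confidence: automatic removal (Section 8.1)
    REVIEW_THRESHOLD = 0.50  # uncertain: defer to a human moderator (Section 8.2)

    def screen(post_text, classify):
        """Return 'block', 'review', or 'allow' for one submission."""
        scores = classify(post_text)  # e.g. {"hate_speech": 0.97, "doxxing": 0.02}
        top_score = max(scores.values(), default=0.0)
        if top_score >= BLOCK_THRESHOLD:
            return "block"   # clear violation: removed instantly
        if top_score >= REVIEW_THRESHOLD:
            return "review"  # borderline: hidden pending human review
        return "allow"

Deferring mid-confidence cases to human moderators is one common way of containing the AI errors listed above.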

3.2 Human Moderation

Human moderators intervene when:

  • Content is flagged by AI

  • Users report content

  • The system detects patterns of harmful behavior

  • Content involves complicated or borderline issues

  • Legal concerns arise

  • Potentially severe violations occur

  • Appeals or disputes are submitted by users

Humans make final judgments in sensitive, nuanced, or complex cases.


4. Moderation Priorities

The Platform follows a strict priority order when reviewing content (a schematic sketch in code follows the priority list below):

Priority 1: Illegal Content

Immediate action is taken for content involving:

  • Drugs

  • Weapons

  • Violence

  • Child safety issues

  • Sexual content involving minors

  • Cybercrime or hacking

  • Exam leaks or confidential documents

  • Financial fraud

  • Threats of harm

  • Terrorism or extremism

  • Revenge porn

Such content may be:

  • Removed instantly

  • Logged for investigation

  • Reported to authorities where legally required

Priority 2: Safety and Well-Being

Content involving:

  • Suicide encouragement

  • Self-harm instructions

  • Severe harassment

  • Threats of violence

  • Misleading medical or safety information

is prioritized for immediate moderation.

Priority 3: Abuse and Harassment

Moderation removes:

  • Personal attacks

  • Name-calling directed at real individuals

  • Sexual insults

  • Discriminatory language

  • Repeated harassment

  • Doxxing or exposure of identity

  • Attempts to damage reputation

Priority 4: Platform Rules

Content violating rules regarding:

  • Excessive profanity

  • Sexual explicitness

  • Non-English content

  • Spam

  • Repetitive or disruptive behavior

  • Inappropriate venting that targets real people

is moderated as needed.

Priority 5: Quality and Coherence

Content may be removed or edited if it:

  • Includes unreadable or incoherent text

  • Violates English-only rules

  • Contains irrelevant or nonsensical input

  • Is written for trolling or disruption
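
As a schematic sketch of this ordering (referenced above), assuming hypothetical category labels rather than the Platform's internal taxonomy:

    # Illustrative priority map; labels are hypothetical, not internal names.
    PRIORITY = {
        "illegal": 1,         # drugs, weapons, child safety, fraud, terrorism
        "safety": 2,          # self-harm, suicide encouragement, threats
        "harassment": 3,      # personal attacks, doxxing, discrimination
        "platform_rules": 4,  # profanity limits, non-English content, spam
        "quality": 5,         # incoherent, trolling, or nonsensical input
    }

    def review_order(flags):
        """Sort a post's violation flags so the most urgent is handled first."""
        return sorted(flags, key=lambda flag: PRIORITY.get(flag, 99))

    # review_order(["quality", "safety"]) -> ["safety", "quality"]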


5. Swearing and Language Moderation

Open University Media allows limited swearing meant for emotional expression. Moderation distinguishes between:

Allowed:

  • Frustration toward situations

  • Strong language used casually (e.g., “fucking useless class”)

  • Emotional venting not targeting individuals

Not Allowed:

  • Sexual swearwords

  • Targeted insults toward identifiable people

  • Caste-based, religion-based, racist, or discriminatory insults

  • Excessive profanity intended purely as insult

  • Harassment masked as emotional venting

Moderators assess language according to intent, target, frequency, and tone.
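
Purely as an illustration, that assessment might be approximated by a heuristic like the following sketch; the word list, the mention check, and the frequency cutoff are all assumptions for the example, not the Platform's actual rules engine.

    # Illustrative heuristic separating situational venting from targeted
    # abuse. All word lists and thresholds here are placeholder assumptions.
    import re

    DISCRIMINATORY_TERMS = set()  # populated from a maintained word list

    def assess_language(text, targets_real_person):
        """Return 'remove', 'review', or 'allow' per the rules above."""
        lowered = text.lower()
        if any(term in lowered for term in DISCRIMINATORY_TERMS):
            return "remove"   # caste-, religion-, or race-based insults: never allowed
        swears = len(re.findall(r"\b(?:fuck\w*|shit\w*|damn)\b", lowered))
        if targets_real_person and swears > 0:
            return "review"   # profanity aimed at an identifiable person
        if swears > 5:
            return "review"   # excessive even without a target
        return "allow"        # situational venting, e.g. "fucking useless class"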


6. English-Only Enforcement

The Platform accepts posts written only in English. Moderation tools detect and remove:

  • Regional languages

  • Hinglish or other transliterated languages

  • Non-English scripts

  • Mixed-language posts where English is minimal

This rule maintains consistency and enables effective moderation.
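
A minimal sketch of how such screening could work, assuming the open-source langdetect package (the Platform's actual tooling is not disclosed):

    # Illustrative language gate using the open-source `langdetect` package;
    # the Platform's actual detection tooling is not disclosed.
    from langdetect import detect, DetectorFactory, LangDetectException

    DetectorFactory.seed = 0  # make detection deterministic across runs

    def is_english(text):
        """Accept a post only if its dominant language is English."""
        try:
            return detect(text) == "en"
        except LangDetectException:
            return False  # empty or undecidable input is rejected

Transliterated text such as Hinglish can fool statistical detectors, which is one reason automated language checks are supplemented by human review.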


7. Handling of User Identity and Email Addresses

Moderators cannot see user email addresses unless access is legally required or necessary for direct moderation communication.

Moderators will only contact a user by email if:

  • A serious rule violation requires explanation

  • An account faces suspension or termination

  • Identity or legal clarifications are needed

  • A user files an appeal

Under no circumstances is a user's email address displayed publicly.


8. Content Removal Categories

Content may be removed in the following ways:

8.1 Automatic Removal

AI instantly blocks or removes content that clearly violates guidelines.

Examples:

  • Sexual content

  • Hate speech

  • Threats

  • Illegal discussions

  • Posts not in English

  • Explicit doxxing

8.2 Soft Removal (Hidden Pending Review)

Content may be temporarily hidden until a moderator can manually verify it.

8.3 Hard Removal

Content is permanently removed and cannot be restored.

8.4 Shadow Limiting

Users who repeatedly violate rules may:

  • Have posts flagged more often

  • Have content slowed or filtered

  • Temporarily lose posting privileges

8.5 Account Suspension

Accounts may be temporarily or permanently suspended based on severity.
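
Read together, these categories form an escalation ladder. A sketch only, with hypothetical cutoffs:

    # Illustrative escalation ladder over the removal categories above.
    # The violation thresholds are hypothetical, not Platform policy.
    from enum import Enum

    class Action(Enum):
        AUTO_REMOVE = "blocked instantly by AI"      # 8.1
        SOFT_REMOVE = "hidden pending review"        # 8.2
        HARD_REMOVE = "permanently removed"          # 8.3
        SHADOW_LIMIT = "posting slowed or filtered"  # 8.4
        SUSPEND = "account suspended"                # 8.5

    def escalate(prior_violations):
        """Repeat offenders move up the ladder (hypothetical cutoffs)."""
        if prior_violations >= 5:
            return Action.SUSPEND
        if prior_violations >= 2:
            return Action.SHADOW_LIMIT
        return Action.SOFT_REMOVE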


9. User Reporting System

Users may manually report content that:

  • Violates rules

  • Contains harm or threats

  • Includes illegal elements

  • Attacks individuals

  • Exposes privacy

  • Is sexually explicit

  • Shows bullying or harassment

  • Appears to involve minors

  • Contains misinformation or dangerous content

Reported posts receive prioritized review by moderators.

False or malicious reporting may result in disciplinary action.
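
As an illustration of prioritized review, reported posts could be triaged with a simple priority queue, ordered first by the Section 4 priority of the alleged violation and then by the number of reporters. Everything in this sketch, including the field layout, is an assumption:

    # Illustrative triage queue for user reports. Lower priority number
    # (Section 4 scale) and higher report count are reviewed first.
    import heapq

    _queue = []  # heap of (priority, -report_count, post_id) tuples

    def file_report(priority, report_count, post_id):
        """Queue a reported post; priority uses the Section 4 scale (1 = illegal)."""
        heapq.heappush(_queue, (priority, -report_count, post_id))

    def next_for_review():
        """Pop the most urgent reported post, or None if the queue is empty."""
        if not _queue:
            return None
        _priority, _neg_count, post_id = heapq.heappop(_queue)
        return post_id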


10. Appeals and Reconsideration

Users may appeal moderation decisions by contacting support with:

  • A clear explanation

  • The content in question

  • The reason they believe moderation was incorrect

Moderators review appeals manually. Upon review:

  • The original decision may be upheld

  • The decision may be reversed

  • Content may be restored

  • A warning may be issued instead

  • Additional restrictions may be applied if abuse is detected

Appeals are not guaranteed to succeed.


11. Escalation to Authorities

In cases involving:

  • Child safety

  • Criminal activity

  • Threats of serious violence

  • Cybercrime

  • Sexual exploitation

  • Defamation or targeted harassment under Indian law

the Platform may be legally obliged to assist authorities by sharing:

  • Email addresses

  • Relevant logs

  • Offending content

  • Technical metadata (when legally warranted)

This is done strictly in compliance with Indian legal requirements.


12. Moderation Transparency

Open University Media aims to maintain transparency by:

  • Explaining key rules publicly

  • Providing clear reasons for removal in serious cases

  • Updating users about rule changes

  • Allowing appeals where possible

  • Maintaining consistent enforcement standards

However, internal moderation processes may remain confidential for security reasons.


13. Changes to Moderation Policy

This Policy may be updated at any time to:

  • Reflect new moderation tools

  • Improve user safety

  • Respond to emerging risks

  • Adjust to legal requirements

Revisions take effect upon publication.


14. Contact Information

For moderation-related concerns or appeals:

Email: support@oum.wtf
Subject Line: “Moderation Appeal” or “Moderation Inquiry”

Moderation decisions are made carefully and with the intent to protect users and maintain a fair platform environment.