Profanity Detection

Project Overview

About Profanity Detection

This project is a robust multimodal system for detecting and rephrasing profanity in both speech and text. It leverages advanced NLP models to ensure accurate filtering while preserving conversational context. The system detects inappropriate language, provides a toxicity score, and automatically rephrases content to maintain the original meaning while removing offensive elements.

Key Features

  • Multimodal Analysis: Process both written text and spoken audio
  • Context-Aware Detection: Goes beyond simple keyword matching
  • Automatic Content Refinement: Intelligently rephrases content while preserving meaning
  • Audio Synthesis: Converts rephrased content into high-quality spoken audio
  • Real-time Streaming: Process audio in real-time as you speak
  • Toxicity Classification: Automatically categorize content from "No Toxicity" to "Severe Toxicity"
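
To make the classification step concrete, here is a minimal sketch that maps a toxicity score to a category label. The cutoff values and the two intermediate label names are assumptions for illustration; only the "No Toxicity" and "Severe Toxicity" endpoints are named above.

  # Map a toxicity score in [0, 1] to one of the category labels above.
  # The cutoffs and the two intermediate label names are illustrative
  # assumptions; only "No Toxicity" and "Severe Toxicity" come from the list.
  def toxicity_category(score: float) -> str:
      if score < 0.25:
          return "No Toxicity"
      if score < 0.50:
          return "Mild Toxicity"
      if score < 0.75:
          return "Moderate Toxicity"
      return "Severe Toxicity"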

Technical Details

Models & Implementation

The system combines four AI models:

  • Profanity Detection: RoBERTa-based model (parsawar/profanity_model_3.1) trained for offensive language detection
  • Content Refinement: T5-based model (s-nlp/t5-paranmt-detox) for rephrasing offensive language
  • Speech-to-Text: OpenAI's Whisper (large-v2) for accurate speech transcription
  • Text-to-Speech: Microsoft's SpeechT5 for high-quality voice synthesis
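
As a rough sketch of the text path, the detection and refinement models listed above could be loaded and used with Hugging Face Transformers roughly as follows. The model IDs come from the list; treating label index 1 as the offensive class is an assumption, and the speech models (Whisper, SpeechT5) are omitted for brevity.

  # Minimal sketch of the text pipeline; assumptions are noted in comments.
  import torch
  from transformers import (
      AutoModelForSeq2SeqLM,
      AutoModelForSequenceClassification,
      AutoTokenizer,
  )

  DETECTOR_ID = "parsawar/profanity_model_3.1"   # RoBERTa-based classifier
  DETOX_ID = "s-nlp/t5-paranmt-detox"            # T5-based detoxifier

  detector_tok = AutoTokenizer.from_pretrained(DETECTOR_ID)
  detector = AutoModelForSequenceClassification.from_pretrained(DETECTOR_ID)
  detox_tok = AutoTokenizer.from_pretrained(DETOX_ID)
  detox = AutoModelForSeq2SeqLM.from_pretrained(DETOX_ID)

  def toxicity_score(text: str) -> float:
      # Probability of the offensive class; label index 1 is an assumption.
      inputs = detector_tok(text, return_tensors="pt", truncation=True)
      with torch.no_grad():
          logits = detector(**inputs).logits
      return torch.softmax(logits, dim=-1)[0, 1].item()

  def rephrase(text: str) -> str:
      # Generate a detoxified paraphrase that keeps the original meaning.
      inputs = detox_tok(text, return_tensors="pt", truncation=True)
      output_ids = detox.generate(**inputs, max_new_tokens=128)
      return detox_tok.decode(output_ids[0], skip_special_tokens=True)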

Technical Architecture

  • Built with PyTorch and Hugging Face Transformers
  • Intuitive Gradio-based user interface
  • Docker containerization with GPU optimization
  • Hugging Face ZeroGPU technology for efficient hosted deployment
  • Adjustable sensitivity threshold for fine-tuning detection strictness
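
A minimal sketch of what a Gradio front end with an adjustable threshold might look like is shown below. It reuses the hypothetical toxicity_score() and rephrase() helpers from the previous sketch, covers only text input, and is not the Space's actual app code.

  # Minimal Gradio sketch: text-only demo with a sensitivity slider.
  import gradio as gr

  def analyze(text: str, threshold: float) -> tuple[str, str]:
      score = toxicity_score(text)              # helper from the sketch above
      cleaned = rephrase(text) if score >= threshold else text
      return f"{score:.2f}", cleaned

  demo = gr.Interface(
      fn=analyze,
      inputs=[
          gr.Textbox(label="Input text"),
          gr.Slider(0.0, 1.0, value=0.5, label="Sensitivity threshold"),
      ],
      outputs=[
          gr.Textbox(label="Toxicity score"),
          gr.Textbox(label="Rephrased text"),
      ],
  )

  if __name__ == "__main__":
      demo.launch()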

Team Information

This project was developed by:

  • Brian Tham
  • Hong Ziyang
  • Nabil Zafran
  • Adrian Ian Wong
  • Lin Xiang Hong

Try It Out

Experience the Profanity Detection system in action! You can use the live demo to test both text and audio inputs, adjust the sensitivity threshold, and see how the system detects and rephrases inappropriate content.

Scan the QR code below to access the project on your mobile device:

[QR code]
