---
name: glin-profanity
description: Profanity detection and content moderation library with leetspeak, Unicode homoglyph, and ML-powered detection. Use when filtering user-generated content, moderating comments, checking text for profanity, censoring messages, or building content moderation into applications. Supports 24 languages.
---

# Glin Profanity - Content Moderation Library

A profanity detection library that catches evasion attempts such as leetspeak (`f4ck`, `sh1t`), Unicode homoglyphs (Cyrillic lookalikes), and other obfuscated text.

## Installation

```bash
# JavaScript/TypeScript
npm install glin-profanity

# Python
pip install glin-profanity
```

## Quick Usage

### JavaScript/TypeScript

```javascript
import { checkProfanity, Filter } from 'glin-profanity';

// Simple check
const result = checkProfanity("Your text here", {
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

result.containsProfanity  // boolean
result.profaneWords       // array of detected words
result.processedText      // censored version

// With Filter instance
const filter = new Filter({
  replaceWith: '***',
  detectLeetspeak: true,
  normalizeUnicode: true
});

filter.isProfane("text")           // boolean
filter.checkProfanity("text")      // full result object
```

### Python

```python
from glin_profanity import Filter

# "filter" would shadow the Python built-in, so use a different name
profanity_filter = Filter({
    "languages": ["english"],
    "replace_with": "***",
    "detect_leetspeak": True
})

profanity_filter.is_profane("text")           # True/False
profanity_filter.check_profanity("text")      # full result dict
```

### React Hook

```tsx
import { useProfanityChecker } from 'glin-profanity';

function ChatInput() {
  const { result, checkText } = useProfanityChecker({
    detectLeetspeak: true
  });

  return (
    <input onChange={(e) => checkText(e.target.value)} />
  );
}
```

## Key Features

| Feature | Description |
|---------|-------------|
| Leetspeak detection | `f4ck`, `sh1t`, `@$$` patterns |
| Unicode normalization | Cyrillic `fսck` → `fuck` |
| 24 languages | Including Arabic, Chinese, Russian, Hindi |
| Context whitelists | Medical, gaming, technical domains |
| ML integration | Optional TensorFlow.js toxicity detection |
| Result caching | LRU cache for performance |
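
For example, the evasion-handling options can be combined in one call. A minimal sketch using the `checkProfanity` API from Quick Usage; exactly which words get flagged depends on the bundled word lists:

```javascript
import { checkProfanity } from 'glin-profanity';

// Leetspeak digits and homoglyph letters are normalized before matching,
// so obfuscated spellings hit the same word lists as plain text.
const result = checkProfanity("What the f4ck is this sh1t", {
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

// Expect result.containsProfanity to be true and result.profaneWords to
// list the normalized matches (exact output depends on the word lists).
```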

## Configuration Options

```javascript
const filter = new Filter({
  languages: ['english', 'spanish'],     // Languages to check
  detectLeetspeak: true,                 // Catch f4ck, sh1t
  leetspeakLevel: 'moderate',            // basic | moderate | aggressive
  normalizeUnicode: true,                // Catch Unicode tricks
  replaceWith: '*',                      // Replacement character
  preserveFirstLetter: false,            // f*** vs ****
  customWords: ['badword'],              // Add custom words
  ignoreWords: ['hell'],                 // Whitelist words
  cacheSize: 1000                        // LRU cache entries
});
```

## Context-Aware Analysis

```javascript
import { analyzeContext } from 'glin-profanity';

const result = analyzeContext("The patient has a breast tumor", {
  domain: 'medical',        // medical | gaming | technical | educational
  contextWindow: 3,         // Words around match to consider
  confidenceThreshold: 0.7  // Minimum confidence to flag
});
```
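
The other documented domains work the same way. A sketch reusing the same `analyzeContext` signature, this time with gaming-chat text that stricter filters sometimes flag:

```javascript
// Same options as above, but with the gaming whitelist.
const gamingResult = analyzeContext("That headshot killed the whole squad", {
  domain: 'gaming',
  contextWindow: 3,
  confidenceThreshold: 0.7
});
```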

## Batch Processing

```javascript
import { batchCheck } from 'glin-profanity';

const results = batchCheck([
  "Comment 1",
  "Comment 2",
  "Comment 3"
], { returnOnlyFlagged: true });
```

## ML-Powered Detection (Optional)

```javascript
import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

await loadToxicityModel({ threshold: 0.9 });

const result = await checkToxicity("You're the worst");
// { toxic: true, categories: { toxicity: 0.92, insult: 0.87 } }
```
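
One way to combine the two layers, sketched under the assumption that the APIs behave as shown above, is a two-stage check: run the cheap word-list filter first and only call the ML model for text that passes it.

```javascript
import { Filter } from 'glin-profanity';
import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

const filter = new Filter({ detectLeetspeak: true, normalizeUnicode: true });
await loadToxicityModel({ threshold: 0.9 });

async function moderate(text) {
  // Stage 1: fast word-list / leetspeak / homoglyph check.
  if (filter.isProfane(text)) {
    return { allowed: false, reason: 'profanity' };
  }

  // Stage 2: ML toxicity check for insults that use no listed words.
  const toxicity = await checkToxicity(text);
  if (toxicity.toxic) {
    return { allowed: false, reason: 'toxicity' };
  }

  return { allowed: true };
}
```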

## Common Patterns

### Chat/Comment Moderation
```javascript
const filter = new Filter({
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

// bot, deleteMessage, and warnUser are placeholders for your chat platform's API
bot.on('message', (msg) => {
  if (filter.isProfane(msg.text)) {
    deleteMessage(msg);
    warnUser(msg.author);
  }
});
```

### Content Validation Before Publish
```javascript
// Inside your publish/submit handler, with userContent as the submitted text
const result = filter.checkProfanity(userContent);

if (result.containsProfanity) {
  return {
    valid: false,
    issues: result.profaneWords,
    suggestion: result.processedText  // Censored version
  };
}
```
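
### API Request Validation
Another common placement is server-side, before content is stored. A sketch assuming an Express-style app; `app` and `createComment` are placeholders, and only the `Filter` API comes from the documentation above.

```javascript
const filter = new Filter({ detectLeetspeak: true, normalizeUnicode: true });

// Reject the request before it reaches the handler.
function rejectProfanity(req, res, next) {
  const result = filter.checkProfanity(req.body.comment ?? '');
  if (result.containsProfanity) {
    return res.status(422).json({
      error: 'Comment rejected',
      words: result.profaneWords
    });
  }
  next();
}

app.post('/comments', rejectProfanity, createComment);
```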

## Resources

- Docs: https://www.typeweaver.com/docs/glin-profanity
- Demo: https://www.glincker.com/tools/glin-profanity
- GitHub: https://github.com/GLINCKER/glin-profanity
- npm: https://www.npmjs.com/package/glin-profanity
- PyPI: https://pypi.org/project/glin-profanity/