Jaiswar, Siddharth and Patil, Harshali P (2025) Design and development of AI driven content moderation system. International Journal of Science and Research Archive, 15 (1). pp. 112-119. ISSN 2582-8185
IJSRA-2025-0937.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.
Abstract
AI-driven content moderation systems are becoming indispensable for digital platforms that handle large volumes of user-generated content. These systems use machine learning, computer vision, and natural language processing (NLP) to analyze, classify, and filter text, images, and video in real time. By identifying inappropriate or harmful content such as hate speech, misinformation, spam, and threats, AI helps online communities remain safe, inclusive, and compliant with platform policies. Text moderation focuses on detecting abuse, profanity, and threats, while computer vision models detect violence, pornography, and other visual violations. AI systems can instantly flag or remove clearly violating content, apply filters, or refer ambiguous cases to human reviewers. Through continuous learning, these systems become more accurate over time, reducing the workload on human review teams and improving efficiency. Significant challenges remain, however. Bias in AI algorithms can lead to skewed decisions, especially when the training data is not diverse or misrepresents certain communities; content from marginalized groups may be disproportionately flagged, or healthy conversation censored because of cultural differences and misread context. When a system cannot recognize satire, humor, or political speech, over-filtering can occur, causing legitimate content to be flagged or removed. Balancing effective moderation with user privacy is an ongoing challenge for platform designers. A hybrid approach addresses these problems: AI performs the initial filtering and handles clear violations instantly, while complex or ambiguous cases are escalated to human review. This keeps the system flexible and fair while it improves continuously from human feedback. That feedback loop between AI and human reviewers is vital for adapting to evolving language, regional dialects, and new kinds of digital content.
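As a minimal sketch of the hybrid flow the abstract describes (the paper itself publishes no code; the labels, thresholds, and function names below are hypothetical placeholders), the routing step that sends clear violations to automatic removal and ambiguous cases to human review might look like:

```python
# Illustrative sketch only, not the authors' implementation. A generic
# classifier is assumed to emit a (label, confidence) pair per item;
# the thresholds and label names here are invented for demonstration.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    action: str        # "remove", "escalate", or "allow"
    label: str         # e.g. "hate_speech", "spam", or "ok"
    confidence: float  # classifier confidence in [0, 1]


def route(label: str, confidence: float,
          remove_threshold: float = 0.95,
          escalate_threshold: float = 0.60) -> ModerationResult:
    """Route one content item based on hypothetical classifier output."""
    if label != "ok" and confidence >= remove_threshold:
        # High-confidence violation: handled instantly by the AI layer.
        return ModerationResult("remove", label, confidence)
    if label != "ok" and confidence >= escalate_threshold:
        # Ambiguous case: escalated to a human reviewer, whose decision
        # can later feed back into retraining the classifier.
        return ModerationResult("escalate", label, confidence)
    # Low-risk content passes through.
    return ModerationResult("allow", label, confidence)


if __name__ == "__main__":
    # Hypothetical classifier outputs:
    for label, conf in [("hate_speech", 0.98), ("spam", 0.70), ("ok", 0.99)]:
        print(route(label, conf))
```

The two-threshold design mirrors the trade-off discussed in the abstract: lowering the escalation threshold reduces missed violations but increases human workload and the risk of over-filtering legitimate speech.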
| Item Type: | Article |
| --- | --- |
| Official URL: | https://doi.org/10.30574/ijsra.2025.15.1.0937 |
| Uncontrolled Keywords: | Content Moderation; Bias; Artificial Intelligence; Ethics; Hate Speech; NLP; Consistency |
| Depositing User: | Editor IJSRA |
| Date Deposited: | 22 Jul 2025 15:04 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/1361 |