We build a multi-headed model capable of detecting different types of toxicity, such as threats, obscenity, insults, and identity-based hate. Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to facilitate conversations efficiently, leading many communities to limit or shut down user comments entirely. A range of publicly available models, including a toxicity model, is already served through the Perspective API, but the current models still make errors, and they do not allow users to select which type of toxicity they are interested in finding.
By Pallam Ravi, Hari Narayana Batta, Greeshma S, Shaik Yaseen, "Toxic Comment Classification"
Published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 3, Issue 4, June 2019
Paper URL: https://www.ijtsrd.com/computer-science/other/23464/toxic-comment-classification/pallam-ravi
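The multi-headed setup described above amounts to multi-label classification: each comment gets an independent binary prediction per toxicity type, so users can pick which types they care about. A minimal sketch of this idea, assuming a TF-IDF plus one-vs-rest logistic regression pipeline on a tiny hypothetical corpus (this is illustrative only, not the paper's model or data):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# The six label types discussed in the abstract.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Tiny illustrative corpus; each comment may carry several labels at once.
comments = [
    "you are a wonderful person",
    "I will hurt you, watch out",
    "what an idiot, total fool",
    "shut up you disgusting pig",
    "people like you should leave this country",
    "I will destroy you completely, scum",
]
# One binary column per toxicity type (multi-label, not multi-class).
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 0, 1, 0, 0],
])

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(comments)

# One independent "head" (binary classifier) per label column.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, y)

# Per-label probabilities for a new comment; a user could threshold
# only the label types they are interested in (e.g. just "threat").
probs = clf.predict_proba(vectorizer.transform(["I will hurt you"]))[0]
for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2f}")
```

The one-vs-rest wrapper is just one way to get independent per-label heads; a neural model with a shared encoder and six sigmoid outputs is the more common deep-learning analogue.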