Abstract
Despite the common use of rule-based tools for online content moderation, human moderators still spend considerable time monitoring them to ensure they work as intended. Based on surveys and interviews with Reddit moderators who use AutoModerator, we identified two main challenges in reducing the false positives and false negatives of automated rules: moderators cannot estimate the actual effect of a rule in advance, and they have difficulty figuring out how rules should be updated. To address these issues, we built ModSandbox, a novel virtual sandbox system that detects possible false positives and false negatives of a rule and visualizes which part of the rule is causing them. We conducted a comparative, between-subjects study with online content moderators to evaluate the effect of ModSandbox on improving automated rules. Results show that ModSandbox helps moderators quickly find possible false positives and false negatives of automated rules and guides them in revising the rules to reduce future errors.
| Original language | English |
|---|---|
| Title of host publication | CHI 2023 - Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems |
| Publisher | Association for Computing Machinery |
| ISBN (Electronic) | 9781450394215 |
| DOIs | |
| State | Published - 19 Apr 2023 |
| Event | 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023 - Hamburg, Germany |
| Duration | 23 Apr 2023 → 28 Apr 2023 |
Publication series
| Name | Conference on Human Factors in Computing Systems - Proceedings |
|---|---|
Conference
| Conference | 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023 |
|---|---|
| Country/Territory | Germany |
| City | Hamburg |
| Period | 23/04/23 → 28/04/23 |
Bibliographical note
Publisher Copyright: © 2023 ACM.
Keywords
- automated moderation bots
- human-AI collaboration
- moderation
- online communities
- sociotechnical systems
- virtual sandbox