Abstract
To guide the design of interactive image annotation systems that generalize to new domains and applications, we need ways to evaluate the capabilities of new annotation tools across a range of image, content, and task domains. In this work, we introduce Corsica, a test harness for image annotation tools that uses calibration images to evaluate a tool's capabilities on general image properties and task requirements. Corsica comprises sets of three key components: 1) synthesized images with visual elements that are not domain-specific, 2) target microtasks that connect the visual elements and tools for evaluation, and 3) ground truth data for each microtask and visual element pair. By introducing a specification for calibration images and microtasks, we aim to create an evolving repository that allows the community to propose new evaluation challenges. Our work aims to facilitate the robust verification of image annotation tools and techniques.
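The abstract describes each calibration set as a triple of a synthesized image, a target microtask, and ground truth. As a rough illustration only, the sketch below shows one way such a specification could be structured in Python; the paper does not publish a schema, so every class, field, and function name here (`Microtask`, `CalibrationCase`, `score`) is a hypothetical assumption rather than Corsica's actual format.

```python
from dataclasses import dataclass


@dataclass
class Microtask:
    """A domain-agnostic annotation task to run against a calibration image.

    Hypothetical structure; not taken from the paper.
    """
    task_id: str
    instruction: str   # e.g. "Trace the boundary of the striped region"
    answer_type: str   # e.g. "polygon", "bounding_box", "point"


@dataclass
class CalibrationCase:
    """Pairs a synthesized image's visual element with a microtask and truth."""
    image_path: str    # synthesized, non-domain-specific image
    element_id: str    # the visual element the microtask targets
    microtask: Microtask
    ground_truth: dict # expected annotation for this (element, task) pair


def score(case: CalibrationCase, produced: dict) -> bool:
    """Toy exact-match check of a tool's output against ground truth."""
    return produced == case.ground_truth
```

A harness in this style would iterate over a repository of `CalibrationCase` entries, present each microtask to the annotation tool under test, and compare the tool's output against the stored ground truth.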
| Original language | English |
|---|---|
| Title of host publication | UIST 2019 Adjunct - Adjunct Publication of the 32nd Annual ACM Symposium on User Interface Software and Technology |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 81-83 |
| Number of pages | 3 |
| ISBN (Electronic) | 9781450368179 |
| DOIs | |
| State | Published - 14 Oct 2019 |
| Event | 32nd Annual ACM Symposium on User Interface Software and Technology, UIST 2019 - New Orleans, United States |
| Duration | 20 Oct 2019 → 23 Oct 2019 |
Publication series

| Name | UIST 2019 Adjunct - Adjunct Publication of the 32nd Annual ACM Symposium on User Interface Software and Technology |
|---|---|
Conference

| Conference | 32nd Annual ACM Symposium on User Interface Software and Technology, UIST 2019 |
|---|---|
| Country/Territory | United States |
| City | New Orleans |
| Period | 20/10/19 → 23/10/19 |
Bibliographical note
Publisher Copyright: © 2019 Copyright is held by the owner/author(s).
Keywords
- Crowdsourcing
- Evaluation
- Image annotation
- Tools