TY - JOUR
T1 - Lane Segmentation Data Augmentation for Heavy Rain Sensor Blockage Using Realistically Translated Raindrop Images and CARLA Simulator
AU - Pahk, Jinu
AU - Park, Seongjeong
AU - Shim, Jungseok
AU - Son, Sungho
AU - Lee, Jungki
AU - An, Jinung
AU - Lim, Yongseob
AU - Choi, Gyeungho
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/6/1
Y1 - 2024/6/1
N2 - Lane segmentation and the Lane Keeping Assist System (LKAS) play a vital role in autonomous driving. While deep learning has significantly improved the accuracy of lane segmentation, real-world driving scenarios present various challenges. In particular, heavy rainfall not only obscures the road with sheets of rain and fog but also creates water droplets on the windshield or camera lens that degrade lane segmentation performance. It can even cause false positives, in which the algorithm incorrectly recognizes a raindrop as a road lane. Collecting heavy rain data is challenging in real-world settings, and manually annotating such data is expensive. In this research, we propose a realistic raindrop conversion process that employs a contrastive learning-based Generative Adversarial Network (GAN) model to translate raindrops randomly generated with Python libraries into realistic ones. In addition, we utilize the attention mask of the lane segmentation model to guide the placement of raindrops in training images from the translation target domain (real Rainy-Images). By training the ENet-SAD model on the realistically Translated-Raindrop images and lane ground truth automatically extracted from the CARLA Simulator, we observe an improvement in lane segmentation accuracy on Rainy-Images. This method enables training and testing of the perception model while adjusting the number, size, shape, and direction of raindrops, thereby contributing to future research on autonomous driving in adverse weather conditions.
AB - Lane segmentation and the Lane Keeping Assist System (LKAS) play a vital role in autonomous driving. While deep learning has significantly improved the accuracy of lane segmentation, real-world driving scenarios present various challenges. In particular, heavy rainfall not only obscures the road with sheets of rain and fog but also creates water droplets on the windshield or camera lens that degrade lane segmentation performance. It can even cause false positives, in which the algorithm incorrectly recognizes a raindrop as a road lane. Collecting heavy rain data is challenging in real-world settings, and manually annotating such data is expensive. In this research, we propose a realistic raindrop conversion process that employs a contrastive learning-based Generative Adversarial Network (GAN) model to translate raindrops randomly generated with Python libraries into realistic ones. In addition, we utilize the attention mask of the lane segmentation model to guide the placement of raindrops in training images from the translation target domain (real Rainy-Images). By training the ENet-SAD model on the realistically Translated-Raindrop images and lane ground truth automatically extracted from the CARLA Simulator, we observe an improvement in lane segmentation accuracy on Rainy-Images. This method enables training and testing of the perception model while adjusting the number, size, shape, and direction of raindrops, thereby contributing to future research on autonomous driving in adverse weather conditions.
KW - Computer vision for automation
KW - data sets for robotic vision
KW - simulation and animation
UR - http://www.scopus.com/inward/record.url?scp=85190731284&partnerID=8YFLogxK
U2 - 10.1109/LRA.2024.3390587
DO - 10.1109/LRA.2024.3390587
M3 - Article
AN - SCOPUS:85190731284
SN - 2377-3766
VL - 9
SP - 5488
EP - 5495
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 6
ER -