TY - JOUR
T1 - Speed harmonisation and merge control using connected automated vehicles on a highway lane closure
T2 - A reinforcement learning approach
AU - Ko, Byungjin
AU - Ryu, Seunghan
AU - Park, Byungkyu Brian
AU - Son, Sang Hyuk
N1 - Publisher Copyright:
© 2020 The Institution of Engineering and Technology.
PY - 2020/8/1
Y1 - 2020/8/1
N2 - A lane closure bottleneck usually leads to traffic congestion and wasted fuel on highways. In mixed traffic consisting of human-driven vehicles and connected automated vehicles (CAVs), the CAVs can be used for traffic control to improve traffic flow. The authors propose speed harmonisation and merge control that take advantage of CAVs to alleviate traffic congestion at a highway bottleneck area. To this end, they apply a reinforcement learning algorithm, the deep Q-network, to train the behaviours of CAVs. By training the merge control Q-network, CAVs learn a merge mechanism that improves the mixed traffic flow at the bottleneck area. Similarly, the speed harmonisation Q-network learns to harmonise speeds, reducing fuel consumption and alleviating traffic congestion by controlling the speeds of following vehicles. After training the two Q-networks for merge control and speed harmonisation, they evaluate the trained Q-networks under various vehicle arrival rates and CAV market penetration rates. The simulation results indicate that the proposed approach improves the mixed traffic flow, increasing throughput by up to 30% and reducing fuel consumption by up to 20% compared with late merge control without speed harmonisation.
UR - http://www.scopus.com/inward/record.url?scp=85091466779&partnerID=8YFLogxK
U2 - 10.1049/iet-its.2019.0709
DO - 10.1049/iet-its.2019.0709
M3 - Article
AN - SCOPUS:85091466779
SN - 1751-956X
VL - 14
SP - 947
EP - 957
JO - IET Intelligent Transport Systems
JF - IET Intelligent Transport Systems
IS - 8
ER -