LLMLight: Large Language Models as Traffic Signal Control Agents



| 1 Introduction | 2 Requirements | 3 Usage | 4 Baselines | 5 LightGPT Training | 6 Code structure | 7 Datasets | 8 Citation | Website |

πŸŽ‰ News

  • πŸš€πŸ”₯ [2024.11] πŸŽ―πŸŽ―πŸ“’πŸ“’ Exciting News! We are thrilled to announce that our 🌟LLMLight🌟 has been accepted by KDD'2025! πŸŽ‰πŸŽ‰πŸŽ‰ Thanks to all the team members πŸ€—
  • πŸš€πŸ”₯ [2024.11] πŸŽ―πŸŽ―πŸ“’πŸ“’ Exciting Update! We’re thrilled to announce that our LightGPT family has expanded with four new members now available on HuggingFace. These models include fine-tuned backbones based on Qwen2 and Llama3. Check them out!

1 Introduction

Official code for article "LLMLight: Large Language Models as Traffic Signal Control Agents".

Traffic Signal Control (TSC) is a crucial component in urban traffic management, aiming to optimize road network efficiency and reduce congestion. Traditional methods in TSC, primarily based on transportation engineering and reinforcement learning (RL), often exhibit limitations in generalization across varied traffic scenarios and lack interpretability. This paper presents LLMLight, a novel framework employing Large Language Models (LLMs) as decision-making agents for TSC. Specifically, the framework begins by instructing the LLM with a knowledgeable prompt detailing real-time traffic conditions. Leveraging the advanced generalization capabilities of LLMs, LLMLight engages a reasoning and decision-making process akin to human intuition for effective traffic control. Moreover, we build LightGPT, a specialized backbone LLM tailored for TSC tasks. By learning nuanced traffic patterns and control strategies, LightGPT enhances the LLMLight framework cost-effectively. Extensive experiments on nine real-world and synthetic datasets showcase the remarkable effectiveness, generalization ability, and interpretability of LLMLight against nine transportation-based and RL-based baselines.

The code structure is based on Efficient_XLight.

(Figure: the LLMLight workflow)
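
To make the workflow concrete, the sketch below shows the observe-prompt-decide loop LLMLight performs at each signal-switching step. This is our illustration only: the function name, state keys, and llm interface are hypothetical, not the repository's actual API.

# Sketch of one LLMLight control step (hypothetical names, illustration only)
def control_step(state, llm, phases):
    # 1. Verbalize real-time traffic conditions into a knowledgeable prompt.
    prompt = (
        "You are a traffic signal control agent at a four-way intersection.\n"
        f"Queue length per lane: {state['queues']}\n"
        f"Approaching vehicles per lane: {state['approaching']}\n"
        f"Candidate signal phases: {phases}\n"
        "Reason about the traffic conditions step by step, "
        "then name the single best phase."
    )
    # 2. The LLM reasons over the prompt and commits to a decision.
    answer = llm.generate(prompt)
    # 3. Parse the chosen phase out of the free-text answer.
    for phase in phases:
        if phase in answer:
            return phase
    return phases[0]  # fallback if no phase is named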

Watch Our Demo Video Here:

Demo.mov

2 Requirements

python>=3.9, tensorflow-cpu=2.8.0, cityflow, pandas=1.5.0, numpy=1.26.2, wandb, transformers=4.36.2, peft=0.7.1, accelerate=0.25.0, datasets=2.16.1, fire

CityFlow requires a Linux environment; we run the code on Ubuntu.
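
For example, a pip-based setup might look like this (our suggestion, not a repo script; the version pins mirror the list above, and CityFlow is built from its GitHub source as described in its documentation):

pip install "tensorflow-cpu==2.8.0" "pandas==1.5.0" "numpy==1.26.2" wandb \
            "transformers==4.36.2" "peft==0.7.1" "accelerate==0.25.0" \
            "datasets==2.16.1" fire
git clone https://github.com/cityflow-project/CityFlow.git
cd CityFlow && pip install .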

3 Usage

The default parameters are already configured, so you can run the code directly.

  • For example, to run Advanced-MPLight:
python run_advanced_mplight.py --dataset hangzhou \
                               --traffic_file anon_4_4_hangzhou_real.json \
                               --proj_name TSCS
  • To run GPT-3.5/GPT-4 with LLMLight, you first need to set your OpenAI API key in ./models/chatgpt.py:
headers = {
    "Content-Type": "application/json",
    "Authorization": "YOUR_KEY_HERE"  # typically "Bearer <your-key>"
}
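
If you prefer not to hard-code the key, a small variant (our suggestion, not part of the repo) reads it from an environment variable, assuming the standard OpenAI Bearer scheme:

import os

# Read the API key from the environment instead of committing it to the code.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
}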

Then, run LLMLight by:

python run_chatgpt.py --prompt Commonsense \
                      --dataset hangzhou \
                      --traffic_file anon_4_4_hangzhou_real.json \
                      --gpt_version gpt-4 \
                      --proj_name TSCS

You can pass either Commonsense or Wait Time Forecast as the prompt argument.
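
If the prompt name is passed verbatim, remember to quote it when it contains spaces, e.g. (same command as above with the other prompt):

python run_chatgpt.py --prompt "Wait Time Forecast" \
                      --dataset hangzhou \
                      --traffic_file anon_4_4_hangzhou_real.json \
                      --gpt_version gpt-4 \
                      --proj_name TSCS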

  • To run with open-sourced LLMs (or LightGPT) and LLMLight:
# with default methods of Transformers
python run_open_LLM.py --llm_model LLM_MODEL_NAME_ONLY_FOR_LOG \
                       --llm_path LLM_PATH \
                       --dataset hangzhou \
                       --traffic_file anon_4_4_hangzhou_real.json \
                       --proj_name TSCS
                       
# or with vLLM (much faster, but uses more GPU memory)
python run_open_LLM_with_vllm.py --llm_model LLM_MODEL_NAME_ONLY_FOR_LOG \
                                 --llm_path LLM_PATH \
                                 --dataset hangzhou \
                                 --traffic_file anon_4_4_hangzhou_real.json \
                                 --proj_name TSCS
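
Before launching a full simulation, you can sanity-check that the checkpoint at LLM_PATH loads. A minimal sketch using the standard transformers API (our illustration, not a repo script):

# Quick load test for an open-sourced LLM / LightGPT checkpoint (illustration only)
from transformers import AutoModelForCausalLM, AutoTokenizer

llm_path = "LLM_PATH"  # same path you pass to --llm_path
tokenizer = AutoTokenizer.from_pretrained(llm_path)
model = AutoModelForCausalLM.from_pretrained(llm_path, device_map="auto")

inputs = tokenizer("The traffic signal should", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))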

4 Baselines

  • Heuristic Methods:
    • FixedTime, MaxPressure, EfficientMaxPressure
  • DNN-RL:
    • PressLight, MPLight, CoLight, AttendLight, EfficientMPLight, EfficientPressLight, Efficient-CoLight
  • Adv-DNN-RL:
    • Advanced-MaxPressure, Advanced-MPLight, Advanced-CoLight
  • LLMLight+LLM:
    • gpt-3.5-turbo-0613, gpt-4-0613, llama-2-13b-chat-hf, llama-2-70b-chat-hf
  • LLMLight+LightGPT:
    • LightGPT backbones fine-tuned for TSC, including the Qwen2- and Llama3-based models released on HuggingFace (see the News section)
5 LightGPT Training

Step 1: Imitation Fine-tuning

python ./finetune/run_imitation_finetune.py --base_model MODEL_PATH \
                                            --data_path DATA_PATH \
                                            --output_dir OUTPUT_DIR
                                            
python ./finetune/merge_lora.py --adapter_model_name="OUTPUT_DIR" \
                                --base_model_name="MODEL_PATH" \
                                --output_name="MERGED_MODEL_PATH"

We merge the adapter with the base model by running merge_lora.py.
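
Conceptually, merging folds the trained LoRA adapter weights into the base model so it can be served as a single checkpoint. A minimal sketch of the idea with the peft API (our illustration; merge_lora.py is the script actually used):

# Sketch of LoRA merging with peft (illustration only; use merge_lora.py in practice)
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("MODEL_PATH")
model = PeftModel.from_pretrained(base, "OUTPUT_DIR")  # attach the fine-tuned adapter
merged = model.merge_and_unload()  # fold the LoRA weights into the base weights
merged.save_pretrained("MERGED_MODEL_PATH")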

Step 2: Policy Refinement Data Collection

  • You first need to train Advanced-CoLight by running:
python run_advanced_colight.py --dataset hangzhou \
                               --traffic_file anon_4_4_hangzhou_real.json \
                               --proj_name TSCS

The RL model weights are automatically saved in a checkpoint folder under ./model. Copy that folder to ./model_weights/AdvancedColight/{traffic_file}/.
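
For example (hypothetical checkpoint folder name; substitute the one actually created under ./model):

mkdir -p ./model_weights/AdvancedColight/{traffic_file}/
cp -r ./model/YOUR_CHECKPOINT_FOLDER/* ./model_weights/AdvancedColight/{traffic_file}/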

  • Then, collect the data by running:
python ./finetune/run_policy_refinement_data_collection.py --llm_model MODEL_NAME_ONLY_FOR_LOG \
                                                           --llm_path MODEL_PATH \
                                                           --dataset hangzhou \
                                                           --traffic_file anon_4_4_hangzhou_real.json

The fine-tuning data will be ready at ./data/cgpr/cgpr_{traffic_file}.json.

Step 3: Critic-guided Policy Refinement

python ./finetune/run_policy_refinement.py --llm_model MODEL_NAME_ONLY_FOR_LOG \
                                           --llm_path MODEL_PATH \
                                           --llm_output_dir OUTPUT_DIR \
                                           --dataset hangzhou \
                                           --traffic_file anon_4_4_hangzhou_real.json \
                                           --proj_name LightGPTFineTuning
                                           
python ./finetune/merge_lora.py --adapter_model_name="OUTPUT_DIR_{traffic_file}" \
                                --base_model_name="MODEL_PATH" \
                                --output_name="MERGED_MODEL_PATH"

Similarly, we merge the adapter with the base model by running merge_lora.py.

6 Code structure

  • models: contains all the models used in our article.
  • utils: contains all the methods to simulate and train the models.
  • frontend: contains visual replay files of different agents.
  • errors: contains error logs of ChatGPT agents.
  • {LLM_MODEL}_logs: contains dialog log files of an LLM.
  • prompts: contains base prompts of ChatGPT agents.
  • finetune: contains code for LightGPT training.

7 Datasets

| Road networks | Intersections | Road network arg | Traffic files |
| --- | --- | --- | --- |
| Jinan | 3 × 4 | jinan | anon_3_4_jinan_real<br>anon_3_4_jinan_real_2000<br>anon_3_4_jinan_real_2500<br>anon_3_4_jinan_synthetic_24000_60min |
| Hangzhou | 4 × 4 | hangzhou | anon_4_4_hangzhou_real<br>anon_4_4_hangzhou_real_5816<br>anon_4_4_hangzhou_synthetic_24000_60min |
| New York | 28 × 7 | newyork_28x7 | anon_28_7_newyork_real_double<br>anon_28_7_newyork_real_triple |

8 Citation

@misc{lai2024llmlight,
      title={LLMLight: Large Language Models as Traffic Signal Control Agents}, 
      author={Siqi Lai and Zhao Xu and Weijia Zhang and Hao Liu and Hui Xiong},
      year={2024},
      eprint={2312.16044},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
