
Highway driving simulator incorporating NGSIM dataset using reinforcement learning


Simulation of self-driving car in highway-environment incorporating NGSIM dataset

A collection of environments for self-driving and tactical decision-making tasks

The environments

Highway

env = gym.make("highway-v0")

In this task, the ego-vehicle is driving on a multilane highway populated with other vehicles. The agent's objective is to reach a high speed while avoiding collisions with neighbouring vehicles. Driving on the right side of the road is also rewarded.


The highway-v0 environment.

A faster variant, highway-fast-v0, is also available, with degraded simulation accuracy to improve speed for large-scale training.

Merge

env = gym.make("merge-v0")

In this task, the ego-vehicle starts on a main highway but soon approaches a road junction with incoming vehicles on the access ramp. The agent's objective is now to maintain a high speed while making room for the vehicles so that they can safely merge into the traffic.


The merge-v0 environment.

Roundabout

env = gym.make("roundabout-v0")

In this task, the ego-vehicle is approaching a roundabout with flowing traffic. It will follow its planned route automatically, but it has to handle lane changes and longitudinal control to pass the roundabout as fast as possible while avoiding collisions.


The roundabout-v0 environment.

Parking

env = gym.make("parking-v0")

A goal-conditioned continuous control task in which the ego-vehicle must park in a given space with the appropriate heading.


The parking-v0 environment.

Intersection

env = gym.make("intersection-v0")

An intersection negotiation task with dense traffic.


The intersection-v0 environment.

Racetrack

env = gym.make("racetrack-v0")

A continuous control task involving lane-keeping and obstacle avoidance.


The racetrack-v0 environment.

Program requirements

Clone the repository to your local machine or cluster and install the following dependencies:

  • gym
  • numpy
  • pygame
  • matplotlib
  • pandas
  • scipy

Note: The code works with Python 3.8. It is recommended to create and activate a dedicated conda environment to run the program:

conda create --name custom_env python=3.8
conda activate custom_env

All the dependencies, pinned to their proper versions, are listed in the requirements.txt file. Install them with the following command in a terminal:

pip3 install -r requirements.txt

Execution

Create a script in the root folder of the downloaded repository.

Sample Script

# Importing the libraries
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

import gym
import highway_env
from stable_baselines3 import DQN
from stable_baselines3.common.vec_env import VecVideoRecorder, DummyVecEnv


if __name__ == '__main__':

    # Selecting/creating the environment (vectorised for training)
    env = DummyVecEnv([lambda: gym.make('highway-v0')])

    # Training the model using Deep Q-Network
    model = DQN('CnnPolicy', env,
                learning_rate=5e-4,
                buffer_size=15000,
                learning_starts=200,
                batch_size=32,
                gamma=0.8,
                train_freq=1,
                gradient_steps=1,
                target_update_interval=50,
                exploration_fraction=0.7,
                verbose=1,
                tensorboard_log="trained_cnn/")
    model.learn(total_timesteps=int(1e5))

    # Saving the trained model
    model.save("trained_cnn/model")

    # Loading the trained model
    model = DQN.load("trained_cnn/model")

    # Rendering an episode with the trained policy
    env = gym.make('highway-v0')
    obs = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs)
        obs, reward, done, info = env.step(action)
        env.render()
    env.close()

The sample script should produce output like the demo video (demo.mov) included in the repository.

Customizing the environment

The following aspects of the environment can be customized as per requirement:

  • Lane count
  • NPC vehicle count (vehicles that follow the NGSIM dataset)
  • Duration of the simulation
  • Collision reward
  • Lane change reward
  • High speed reward
  • Right lane reward
  • Spacing of vehicles
  • Vehicle density
  • Initial lane for the ego vehicle

To customize the parameters, navigate to highway_env/envs from the root folder, select your current working environment, and edit the parameters as required.

Citing

If you use the project in your work, please consider citing it with:

@misc{highway-env-NGSIM,
  author = {Avirup Ghosh},
  title = {Simulation of self-driving car in highway-environment incorporating NGSIM dataset},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ghoshavirup0/HighwayENV}},
}

List of publications & preprints using highway-env (please open a pull request to add missing entries):
