A collection of environments for self-driving and tactical decision-making tasks
env = gym.make("highway-v0")
In this task, the ego-vehicle is driving on a multilane highway populated with other vehicles. The agent's objective is to reach a high speed while avoiding collisions with neighbouring vehicles. Driving on the right side of the road is also rewarded.
A faster variant, highway-fast-v0, is also available, with degraded simulation accuracy to improve speed for large-scale training.
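As a quick sanity check, either variant can be rolled out with random actions using the standard gym loop. A minimal sketch, not part of the original instructions:

```python
import gym
import highway_env  # registers the highway environments with gym

# Roll out one episode of the fast variant with random actions
env = gym.make("highway-fast-v0")
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```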
env = gym.make("merge-v0")
In this task, the ego-vehicle starts on a main highway but soon approaches a road junction with incoming vehicles on the access ramp. The agent's objective is now to maintain a high speed while making room for the vehicles so that they can safely merge into the traffic.
env = gym.make("roundabout-v0")
In this task, the ego-vehicle is approaching a roundabout with flowing traffic. It will follow its planned route automatically, but has to handle lane changes and longitudinal control to pass the roundabout as fast as possible while avoiding collisions.
The roundabout-v0 environment.
env = gym.make("parking-v0")
A goal-conditioned continuous control task in which the ego-vehicle must park in a given space with the appropriate heading.
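Because parking-v0 is goal-conditioned, it pairs naturally with hindsight experience replay. A minimal sketch using stable-baselines3's SAC with HerReplayBuffer; the hyperparameters are illustrative, not taken from this repository:

```python
import gym
import highway_env
from stable_baselines3 import SAC, HerReplayBuffer

env = gym.make("parking-v0")
# MultiInputPolicy handles the goal-conditioned dict observation
# {"observation", "achieved_goal", "desired_goal"}
model = SAC("MultiInputPolicy", env,
            replay_buffer_class=HerReplayBuffer,
            replay_buffer_kwargs=dict(
                n_sampled_goal=4,
                goal_selection_strategy="future",
                online_sampling=True,
                max_episode_length=100,  # assumed upper bound on episode length
            ),
            verbose=1)
model.learn(total_timesteps=int(1e4))
```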
env = gym.make("intersection-v0")
An intersection negotiation task with dense traffic.
The intersection-v0 environment.
env = gym.make("racetrack-v0")
A continuous control task involving lane-keeping and obstacle avoidance.
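Unlike the discrete meta-actions of the tasks above, this environment exposes a continuous action space. A quick way to inspect and step it (illustrative snippet):

```python
import gym
import highway_env

env = gym.make("racetrack-v0")
print(env.action_space)  # a continuous Box action space
obs = env.reset()
# Apply one random continuous control input
obs, reward, done, info = env.step(env.action_space.sample())
```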
Clone the repository on your local machine or cluster and install the following dependencies:
- gym
- numpy
- pygame
- matplotlib
- pandas
- scipy
Note: The code works with Python 3.8. It is recommended to use a dedicated conda environment to run the program:
conda create --name custom_env python=3.8
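After creating the environment, activate it so the dependencies are installed inside it:

conda activate custom_env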
All the dependencies, with their corresponding versions, are listed in the requirements.txt file. Install them by running the following command in a terminal:
pip3 install -r requirements.txt
Create a script in the root folder of the downloaded repository.
Sample Script
# Importing the libraries
import gym
from stable_baselines3 import DQN
from stable_baselines3.common.vec_env import DummyVecEnv
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import highway_env

if __name__ == '__main__':
    # Selecting/Creating the environment
    # (DummyVecEnv expects a list of callables that build the env)
    env = DummyVecEnv([lambda: gym.make('highway-v0')])

    # Training the model using a Deep Q-Network
    # ('CnnPolicy' assumes the environment is configured for image observations)
    model = DQN('CnnPolicy', env,
                learning_rate=5e-4,
                buffer_size=15000,
                learning_starts=200,
                batch_size=32,
                gamma=0.8,
                train_freq=1,
                gradient_steps=1,
                target_update_interval=50,
                exploration_fraction=0.7,
                verbose=1,
                tensorboard_log="trained_cnn/")
    model.learn(total_timesteps=int(1e5))

    # Saving the trained model
    model.save("trained_cnn/model")

    # Loading the trained model
    model = DQN.load("trained_cnn/model")

    # Rendering one episode with the trained model
    env = gym.make('highway-v0')
    obs = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs)
        obs, _, done, _ = env.step(action)
        env.render()
The sample script should produce output like the following:
demo.mov
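To save such a rollout to disk instead of rendering it live, the VecVideoRecorder wrapper from stable-baselines3 can be used. A minimal sketch, assuming the model saved by the sample script above; the videos/ folder name and clip length are arbitrary choices:

```python
import gym
import highway_env
from stable_baselines3 import DQN
from stable_baselines3.common.vec_env import DummyVecEnv, VecVideoRecorder

model = DQN.load("trained_cnn/model")
env = DummyVecEnv([lambda: gym.make('highway-v0')])
# Record a single 300-frame clip, starting from the first step
env = VecVideoRecorder(env, "videos/",
                       record_video_trigger=lambda step: step == 0,
                       video_length=300)
obs = env.reset()
for _ in range(300):
    action, _ = model.predict(obs)
    obs, _, _, _ = env.step(action)
env.close()
```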
The following aspects of the environment can be customized as per requirement:
- Lane count
- NPC vehicle count (vehicles that follow the NGSIM dataset)
- Duration of the simulation
- Collision reward
- Lane change reward
- High speed reward
- Right lane reward
- Spacing of vehicles
- Vehicle density
- Initial lane for the EGO vehicle
To customize these parameters, navigate to highway_env/envs from the root folder, open your current working environment, and edit the parameters as per requirement.
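Alternatively, upstream highway-env exposes these parameters through each environment's config dictionary, so they can also be overridden at runtime. A hedged sketch; the keys shown are the standard highway-env config names, and the values are illustrative rather than this repository's settings:

```python
import gym
import highway_env

env = gym.make("highway-v0")
env.configure({
    "lanes_count": 4,           # lane count
    "vehicles_count": 50,       # NPC vehicle count
    "duration": 40,             # duration of the simulation (steps)
    "collision_reward": -1,     # collision reward
    "lane_change_reward": 0,    # lane change reward
    "high_speed_reward": 0.4,   # high speed reward
    "right_lane_reward": 0.1,   # right lane reward
    "ego_spacing": 2,           # spacing of vehicles
    "vehicles_density": 1,      # vehicle density
    "initial_lane_id": None,    # initial lane for the ego vehicle
})
obs = env.reset()
```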
If you use the project in your work, please consider citing it with:
@misc{highway-env-NGSIM,
  author = {Avirup Ghosh},
  title = {Simulation of self-driving car in highway-environment incorporating NGSIM dataset},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ghoshavirup0/HighwayENV}},
}
List of publications & preprints using highway-env
(please open a pull request to add missing entries):
- Approximate Robust Control of Uncertain Dynamical Systems (Dec 2018)
- Interval Prediction for Continuous-Time Systems with Parametric Uncertainties (Apr 2019)
- Practical Open-Loop Optimistic Planning (Apr 2019)
- α^α-Rank: Practically Scaling α-Rank through Stochastic Optimisation (Sep 2019)
- Social Attention for Autonomous Decision-Making in Dense Traffic (Nov 2019)
- Budgeted Reinforcement Learning in Continuous State Space (Dec 2019)
- Multi-View Reinforcement Learning (Dec 2019)
- Reinforcement learning for Dialogue Systems optimization with user adaptation (Dec 2019)
- Distributional Soft Actor Critic for Risk Sensitive Learning (Apr 2020)
- Bi-Level Actor-Critic for Multi-Agent Coordination (Apr 2020)
- Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes (Jun 2020)
- Beyond Prioritized Replay: Sampling States in Model-Based RL via Simulated Priorities (Jul 2020)
- Robust-Adaptive Interval Predictive Control for Linear Uncertain Systems (Jul 2020)
- SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction (Jul 2020)
- Delay-Aware Multi-Agent Reinforcement Learning for Cooperative and Competitive Environments (Aug 2020)
- B-GAP: Behavior-Guided Action Prediction for Autonomous Navigation (Nov 2020)
- Model-based Reinforcement Learning from Signal Temporal Logic Specifications (Nov 2020)
- Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs (Dec 2020)
- Assessing and Accelerating Coverage in Deep Reinforcement Learning (Dec 2020)
- Distributionally Consistent Simulation of Naturalistic Driving Environment for Autonomous Vehicle Testing (Jan 2021)
- Interpretable Policy Specification and Synthesis through Natural Language and RL (Jan 2021)
- Deep Reinforcement Learning Techniques in Diversified Domains: A Survey (Feb 2021)
- Corner Case Generation and Analysis for Safety Assessment of Autonomous Vehicles (Feb 2021)
- Intelligent driving intelligence test for autonomous vehicles with naturalistic and adversarial environment (Feb 2021)
- Building Safer Autonomous Agents by Leveraging Risky Driving Behavior Knowledge
- Quick Learner Automated Vehicle Adapting its Roadmanship to Varying Traffic Cultures with Meta Reinforcement Learning (Apr 2021)
- Deep Multi-agent Reinforcement Learning for Highway On-Ramp Merging in Mixed Traffic (May 2021)
- Accelerated Policy Evaluation: Learning Adversarial Environments with Adaptive Importance Sampling (Jun 2021)
- Learning Interaction-aware Guidance Policies for Motion Planning in Dense Traffic Scenarios (Jul 2021)
- Robust Predictable Control (Sep 2021)
- [Driving-IRL-NGSIM](https://github.com/MCZhi/Driving-IRL-NGSIM) (Nov 2021)