This repository contains four primary components: generating_images, densedepth, monodepth2, and adversarial_patch. Each component serves a distinct purpose in the workflow for generating images, applying depth estimation, and creating adversarial patches to evaluate and enhance depth prediction models.
- generating_images/: Scripts and tools for generating and preparing images used in training and evaluation.
- densedepth/: A modified version of the DenseDepth model, adapted for our custom dataset and depth estimation tasks.
- monodepth2/: A customized version of the MonoDepth2 model, configured for training with our data and integrated with custom scripts.
- adversarial_patch/: Code for generating and applying adversarial patches to assess and improve the robustness of depth estimation models.
The generating_images folder contains scripts designed to create or preprocess images that will be used in depth estimation tasks. This includes generating synthetic images using the 3D2Fool code.
- data_loader_mde.py: Contains the MyDataset class for loading and preprocessing the training set.
  - data_dir: Path to RGB background images.
  - obj_name: Path to the car model.
  - camou_mask: Path to the mask for the texture area to attack.
  - tex_trans_flag: Texture transformation flag.
  - phy_trans_flag: Physical transformation flag.
  - set_textures: Method to set textures for camouflage.
  - camera_pos: Camera relative position data.
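As a rough illustration of how these loader settings fit together, the sketch below groups them into a config object. The names mirror this README; the actual MyDataset signature in data_loader_mde.py may differ.

```python
# Hypothetical stand-in for the MyDataset constructor arguments
# described above; names follow the README, not the real code.
from dataclasses import dataclass

@dataclass
class MyDatasetArgs:
    data_dir: str         # path to RGB background images
    obj_name: str         # path to the 3D car model
    camou_mask: str       # mask marking the texture area to attack
    tex_trans_flag: bool  # enable texture transformations
    phy_trans_flag: bool  # enable physical transformations

args = MyDatasetArgs(
    data_dir="./rgb",
    obj_name="./car.obj",
    camou_mask="./mask.png",
    tex_trans_flag=True,
    phy_trans_flag=False,
)
print(args.data_dir, args.tex_trans_flag)
```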
- attack_base.py: Main script for setting up and running the adversarial attack.
  - camou_mask: Path to the camouflage texture mask.
  - camou_shape: Shape of the camouflage texture.
  - obj_name: Path to the car model.
  - train_dir: Path to RGB background images.
  - log_dir: Path to save results.
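A minimal sketch of how these options could be exposed on the command line via argparse. The flag names are taken from the list above; the defaults are purely illustrative, and the real attack_base.py may parse its options differently.

```python
# Illustrative argparse setup mirroring the attack_base.py options
# listed above; defaults are placeholders, not the repo's values.
import argparse

parser = argparse.ArgumentParser(description="Adversarial camouflage attack")
parser.add_argument("--camou_mask", default="./mask.png",
                    help="path to the camouflage texture mask")
parser.add_argument("--camou_shape", type=int, default=1024,
                    help="shape of the camouflage texture")
parser.add_argument("--obj_name", default="./car.obj",
                    help="path to the car model")
parser.add_argument("--train_dir", default="./rgb",
                    help="path to RGB background images")
parser.add_argument("--log_dir", default="./logs",
                    help="path to save results")

args = parser.parse_args([])  # empty list -> use the defaults above
print(args.camou_shape, args.log_dir)
```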
- Training dataset:
  - BaiduNetdisk link: Contains background images and the camera position matrix.
  - ./rgb/*.jpg: RGB background images.
  - ./ann.pkl: Camera position matrix.
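For orientation, ann.pkl can be read with Python's standard pickle module. The structure shown below (a mapping from image name to a camera pose) is an assumption for illustration only; check the downloaded file for its actual layout.

```python
# Write and read back a dummy camera-position entry the way the
# training set's ann.pkl might be consumed (structure is assumed).
import os
import pickle
import tempfile

ann = {"0001.jpg": [0.0, 1.5, -4.0]}  # image name -> hypothetical camera pose

path = os.path.join(tempfile.mkdtemp(), "ann.pkl")
with open(path, "wb") as f:
    pickle.dump(ann, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded["0001.jpg"])
```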
- Generate Images:
  - Use data_loader_mde.py to prepare the dataset for training. Specify paths to the required data and settings for texture and camouflage.
- Run Adversarial Attack:
  - Execute attack_base.py to generate adversarial examples based on the camouflage texture and background images.
The adversarial_patch folder is focused on generating and applying adversarial patches to test the robustness of depth estimation models. It includes:
- Object Detection: Using YOLOv8 to detect vehicles in images.
- Patch Application: Augmenting and applying adversarial patches to assess model performance.
- adversarial_patch_augmentation.ipynb: Google Colab notebook containing the full workflow for generating and applying adversarial patches. This notebook includes:
  - YOLOv8 object detection for identifying vehicles.
  - Augmentation of the adversarial patch with images.
  - Application of the patch to images and analysis of depth maps.
- texture_seed.png: The adversarial patch used in the experiments. This image is applied to the detected vehicles to test the impact on depth estimation.
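The patch-application step can be sketched with NumPy alone. Here the YOLOv8 detection is assumed to have already produced a bounding box, and apply_patch is a hypothetical helper for illustration, not code from the notebook:

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, box: tuple) -> np.ndarray:
    """Paste `patch` into the top-left corner of `box` = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    ph = min(patch.shape[0], y2 - y1)  # clip the patch to the box height...
    pw = min(patch.shape[1], x2 - x1)  # ...and width
    out = image.copy()
    out[y1:y1 + ph, x1:x1 + pw] = patch[:ph, :pw]
    return out

img = np.zeros((100, 100, 3), dtype=np.uint8)       # dummy scene
patch = np.full((20, 20, 3), 255, dtype=np.uint8)   # dummy white patch
result = apply_patch(img, patch, (30, 30, 60, 60))  # box from the detector
print(result[30, 30].tolist())
```

The depth model would then be run on `result` and its depth map compared against the unpatched image.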
- Open and Run the Colab Notebook:
  - Execute the cells in adversarial_patch_augmentation.ipynb to follow the process of detecting objects, applying adversarial patches, and analyzing the results.
- Adversarial Patch:
  - The texture_seed.png file is used as the adversarial patch for testing the models.
The monodepth2 folder includes a customized version of MonoDepth2, a self-supervised depth estimation model that can predict depth from monocular images without requiring ground-truth depth data for training.
- Trainer Updates: Modified trainer.py to integrate with the custom dataset and incorporate additional training options.
- Options Configuration: Adjusted options.py for easier command-line argument parsing.
- Custom Dataset Integration: Included custom_dataset.py to handle our specific dataset format.
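As a torch-free illustration of the kind of interface custom_dataset.py is expected to provide (an indexable dataset with a length), consider the stand-in below; this shape is an assumption, and the real class will return preprocessed image tensors rather than paths.

```python
import os

class CustomDataset:
    """Minimal stand-in for a dataset class: indexable, with a length."""

    def __init__(self, root: str, filenames: list):
        self.root = root
        self.filenames = filenames

    def __len__(self) -> int:
        return len(self.filenames)

    def __getitem__(self, idx: int) -> str:
        # A real implementation would load and preprocess the image here.
        return os.path.join(self.root, self.filenames[idx])

ds = CustomDataset("./data", ["0001.jpg", "0002.jpg"])
print(len(ds), ds[0])
```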
- Replace the original trainer.py, train.py, and options.py with the custom versions provided.
- Run the train.py script to initiate training on your dataset.
- Evaluate the trained model using evaluate_depth.py.
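The last two steps amount to running the scripts in sequence. One way to drive them from Python is sketched below; the flags shown (`--data_path`, `--log_dir`, `--load_weights_folder`) match stock MonoDepth2, but since options.py has been customized here, check it for the arguments this version actually accepts.

```python
import sys

# Build the training and evaluation commands; verify the flag names
# against the customized options.py before running.
train_cmd = [sys.executable, "train.py",
             "--data_path", "./data", "--log_dir", "./logs"]
eval_cmd = [sys.executable, "evaluate_depth.py",
            "--load_weights_folder", "./logs/models"]

# In practice each list would be passed to subprocess.run(cmd, check=True).
print(" ".join(train_cmd[1:]))
print(" ".join(eval_cmd[1:]))
```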
The densedepth folder contains a modified version of the DenseDepth model, designed for depth estimation tasks with custom datasets.
- Normalization Adjustments: Adapted to fit the normalization standards of our specific dataset.
- Data Structure Changes: Customized to work with the image outputs from the generating_images folder.
- Training Script: Use Dense_depth_adversarial_training_and_evaluation.py to implement training and evaluation.
- data.py: Contains the data loader and preprocessing steps to prepare images for training and testing.
- loss.py: Defines the loss functions used during the training of the model.
- model.py: The main model file with updated architecture specific to our depth estimation needs.
- updated_outpt.csv: CSV file containing training data path.
- patched_test.csv: CSV file for testing raw data after applying adversarial patches.
- test_data.csv: CSV file for testing raw data before applying adversarial patches.
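These CSV files can be inspected with the standard csv module. The two-column layout used below (RGB path, depth path) is an assumption for illustration; check the actual files for their column order.

```python
import csv
import io

# Dummy stand-in for rows of a path-listing CSV such as the ones
# above; the real column layout may differ.
sample = "rgb/0001.jpg,depth/0001.png\nrgb/0002.jpg,depth/0002.png\n"

rows = list(csv.reader(io.StringIO(sample)))
print(len(rows), rows[0])
```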
- Prepare Your Dataset: Place your custom dataset in the appropriate directory.
- Run Training: Execute Dense_depth_adversarial_training_and_evaluation.py to start the training process.
- Monitor Progress: Use graphs and logs to monitor the training process.
- Evaluate Results: Analyze the results using the metrics provided in updated_outpt.csv and the test data in patched_test/.
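Two metrics commonly reported when evaluating monocular depth estimation are the absolute relative error and the RMSE. The NumPy sketch below is a generic illustration of both, not the repo's own evaluation code, which may report additional metrics.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray):
    """Absolute relative error and root-mean-square error on depth maps."""
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return abs_rel, rmse

gt = np.array([1.0, 2.0, 4.0])    # ground-truth depths (e.g. metres)
pred = np.array([1.1, 1.8, 4.4])  # predicted depths
abs_rel, rmse = depth_metrics(pred, gt)
print(round(abs_rel, 3), round(rmse, 3))
```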
This project is based on multiple open-source projects. Each folder might have its own license file—please refer to them for more details.
For any questions or further assistance, please contact Saahil Khanna.