- 👩‍💻 Introduction
- ⚡️ Installation
- 👾 Examples
  - 1. Gradient Comparison between AD & Central Difference
  - 2. Iso-acoustic Model Tests
  - 3. Iso-elastic & VTI-elastic Model Tests
  - 4. Misfit Tests
  - 5. Optimizer Tests
  - 6. Regularization Methods
  - 7. Multi-Scale Strategy in FWI
  - 8. Deep Image Prior (Earth Model Reparameterization)
  - 9. Uncertainty Estimation Using Deep Neural Networks (DNNs)
- ✨ Features
- ⚖️ LICENSE
- 🗓️ To-Do List
- 📬 Contact
ADFWI is an open-source framework for high-resolution subsurface parameter estimation by minimizing discrepancies between observed and simulated waveforms. By leveraging automatic differentiation (AD), ADFWI simplifies the derivation and implementation of Full Waveform Inversion (FWI) and eases the design and evaluation of new methodologies. It supports wave propagation in various media, including isotropic acoustic, isotropic elastic, and both vertical transverse isotropy (VTI) and tilted transverse isotropy (TTI) models.

In addition, ADFWI provides a comprehensive collection of objective functions, regularization techniques, optimization algorithms, and deep neural networks. This rich set of tools helps researchers conduct experiments and comparisons, enabling them to explore innovative approaches and refine their methodologies effectively.
To install ADFWI, please follow these steps:

1. **Ensure Prerequisites**
   Make sure you have the following software installed on your system:
   - Python 3.8+ (download Python from the official website: Python Downloads)
   - pip (the Python package installer)

2. **Create a Virtual Environment (Optional but Recommended)**
   It is recommended to create a virtual environment to manage your project dependencies. You can use either `venv` or `conda`. For example, using `conda`:

   ```bash
   conda create --name adfwi-env python=3.8
   conda activate adfwi-env
   ```

3. **Install Required Packages**
   - Method 1: Clone the GitHub repository. This provides the latest version, which may be more suitable for your research. Then install the necessary packages:

     ```bash
     git clone https://github.com/liufeng2317/ADFWI.git
     cd ADFWI
     pip install -r requirements.txt
     ```

   - Method 2: Install via pip. Alternatively, you can install ADFWI directly from PyPI:

     ```bash
     pip install ADFWI-Torch
     ```

4. **Verify the Installation**
   To ensure that ADFWI is installed correctly, run any of the examples located in the `examples` folder.

5. **Troubleshooting**
   If you encounter any issues during installation, please check the Issues section of the GitHub repository for potential solutions or to report a new issue.
A comparative analysis of gradient calculations obtained through AD and the central difference method.
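The idea behind this comparison can be sketched in a few lines of plain Python. ADFWI relies on PyTorch's reverse-mode AD; the dual-number forward-mode AD below is only a self-contained stand-in, and the toy "misfit" `f` is an invented example, but the check is the same: the AD derivative should agree with a central finite difference up to truncation error.

```python
# Sketch (not ADFWI's code): forward-mode AD via dual numbers, checked
# against a central finite difference on a toy function.
import math

class Dual:
    """Dual number a + b*eps with eps**2 == 0; `dot` carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):
    # toy "misfit": x*sin(x) + x^2, with a Dual-compatible branch
    return x * x.sin() + x * x if isinstance(x, Dual) else x * math.sin(x) + x * x

x0 = 1.3
grad_ad = f(Dual(x0, 1.0)).dot                 # exact derivative via AD
h = 1e-5
grad_fd = (f(x0 + h) - f(x0 - h)) / (2 * h)    # central difference, O(h^2) error
print(grad_ad, grad_fd)                        # the two agree to ~1e-9
```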
| Test Name | Status | Example's Path | Example's Figure |
|---|---|---|---|
| Marmousi2 (low resolution) | ✅ | Example-Marmousi2 (low) | |
| Marmousi2 (high resolution) | ✅ | Example-Marmousi2 (high) | |
| Marmousi2 (vp and rho) | ✅ | Example-vp & rho | |
| FootHill (low resolution) | ✅ | Example-FootHill (low) | |
| FootHill (high resolution) | ✅ | Example-FootHill (high) | |
| SEAM-I | ✅ | Example-SEAM-I | |
| Overthrust-offshore | ✅ | Example-Overthrust-offshore | |
| Anomaly | ✅ | Example-Anomaly | |
| Test Name | Status | Path | Figure |
|---|---|---|---|
| Iso-elastic Anomaly | ✅ | Example-Anomaly | |
| Iso-elastic Marmousi2-1 | ✅ | Shot & Rec on Surface | |
| Iso-elastic Marmousi2-2 | ✅ | Shot & Rec Underwater | |
| VTI-elastic Anomaly-1 | ✅ | Inv Epsilon | |
| VTI-elastic Anomaly-2 | ✅ | Inv Epsilon & Delta | |
We assess the convexity of different objective functions by simulating seismic records using shifted wavelets. The following table summarizes the results and provides examples for further exploration.
Convexity examples:
- Example-Ricker Shift
- Example-Ricker Shift & vary Amplitude
- Example-Ricker Shift & dominant Frequency
- Example-Ricker Shift & Gaussian noise
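The shifted-wavelet convexity test can be sketched as follows. This is a pure-Python stand-in for the examples above, not the ADFWI example code: it generates a Ricker wavelet, shifts it, and scans the L2 misfit as a function of the shift; side lobes in that curve are the cycle-skipping behavior that motivates comparing objective functions.

```python
# Hedged sketch of the convexity scan: L2 misfit between a Ricker wavelet
# and time-shifted copies of itself.
import math

def ricker(t, f0):
    a = (math.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

dt, n, f0 = 0.002, 500, 10.0          # 1 s trace, 10 Hz dominant frequency
obs = [ricker((i - n // 2) * dt, f0) for i in range(n)]

def l2_misfit(shift_samples):
    syn = [ricker((i - n // 2 - shift_samples) * dt, f0) for i in range(n)]
    return sum((s - o) ** 2 for s, o in zip(syn, obs))

shifts = range(-100, 101, 5)
curve = {s: l2_misfit(s) for s in shifts}
best = min(curve, key=curve.get)
print(best)  # 0: the global minimum sits at zero shift
```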
It is important to note that the results below show the performance of the various objective functions starting from a poor initial model. With a better initial model, each objective function performs noticeably better; relevant results can be found in Better Initial Model.
| Test Name | Status | Path | Figure |
|---|---|---|---|
| L1-norm | ✅ | Example-L1 | |
| L2-norm | ✅ | Example-L2 | |
| T-distribution (StudentT) | ✅ | Example-StudentT | |
| Envelope | ✅ | Example-Envelope | |
| Global Correlation (GC) | ✅ | Example-GC | |
| Soft Dynamic Time Warping (soft-DTW) | ✅ | Example-SoftDTW | |
| Wasserstein Distance with Sinkhorn | ✅ | Example-Wasserstein | |
| Hybrid Misfit: Envelope & Global Correlation (WECI) | ✅ | Example-WECI |
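To make one of the entries above concrete, here is a minimal sketch of a global-correlation (GC) style misfit on two traces: one minus the normalized zero-lag cross-correlation. It is an illustrative stand-in, not ADFWI's GC implementation. Note that it is amplitude-insensitive, which is one reason GC is paired with an envelope term in the hybrid WECI misfit.

```python
# Sketch of a GC-style misfit: 1 - normalized zero-lag cross-correlation.
import math

def gc_misfit(syn, obs):
    ns = math.sqrt(sum(x * x for x in syn))
    no = math.sqrt(sum(x * x for x in obs))
    corr = sum(s * o for s, o in zip(syn, obs)) / (ns * no)
    return 1.0 - corr

obs = [math.sin(0.1 * i) for i in range(100)]
same = gc_misfit(obs, obs)                      # identical traces  -> 0
scaled = gc_misfit([2 * x for x in obs], obs)   # scaled amplitude  -> still ~0
print(same, scaled)
```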
The following misfit functions are still in beta and are undergoing further development and validation. Their performance and reliability will be evaluated in future studies.
| Misfit Name | Status | Path | Figure |
|---|---|---|---|
| Travel Time | 🛠️ | Example-TravelTime | 🖼️ Inversion process figure under development |
| Normalized Integration Method (NIM) | 🛠️ | Example-NIM | 🖼️ Inversion process figure under development |
The results presented below specifically characterize the impact of using the L2-norm objective function
in conjunction with the Marmousi2 model. It is important to note that the effects of different optimization algorithms may vary significantly when applied to other objective functions or models. Consequently, the findings should be interpreted within this specific context, and further investigations are recommended to explore the performance of these algorithms across a broader range of scenarios.
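For reference, the update rule shared by the Adam-family optimizers in the table below can be written out by hand. In ADFWI these optimizers come from `torch.optim`; the hand-rolled loop below, minimizing a toy quadratic with a 1/sqrt(t) learning-rate decay, is an illustration of the rule only.

```python
# Minimal Adam sketch on f(x) = (x - 3)^2; not ADFWI's optimizer code.
import math

def grad(x):                      # gradient of (x - 3)^2
    return 2.0 * (x - 3.0)

x, m, v = 0.0, 0.0, 0.0
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8
for t in range(1, 2001):
    g = grad(x)
    m = b1 * m + (1 - b1) * g             # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * g * g         # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    x -= (lr / math.sqrt(t)) * m_hat / (math.sqrt(v_hat) + eps)
print(x)  # close to the minimizer x = 3
```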
| Test Name | Status | Path | Figure |
|---|---|---|---|
| Stochastic Gradient Descent (SGD) | ✅ | Example-SGD | |
| Average Stochastic Gradient Descent (ASGD) | ✅ | Example-ASGD | |
| Root Mean Square Propagation (RMSProp) | ✅ | Example-RMSProp | |
| Adaptive Gradient Algorithm (Adagrad) | ✅ | Example-Adagrad | |
| Adaptive Moment Estimation (Adam) | ✅ | Example-Adam | |
| Adam with Weight Decay (AdamW) | ✅ | Example-AdamW | |
| Nesterov-accelerated Adam (NAdam) | ✅ | Example-NAdam | |
| Rectified Adam (RAdam) | ✅ | Example-RAdam |
| Test Name | Status | Path | Figure |
|---|---|---|---|
| L-BFGS | ✅ | Example-LBFGS |
| Test Name | Status | Path | Figure |
|---|---|---|---|
| no-regularization | ✅ | no-regular | |
| Tikhonov-1st Order | ✅ | Example-Tikhonov1 | |
| Tikhonov-2nd Order | ✅ | Example-Tikhonov2 | |
| Total Variation-1st Order | ✅ | Example-TV1 | |
| Total Variation-2nd Order | ✅ | Example-TV2 |
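The qualitative difference between the two regularizer families above can be seen on two 1-D "models" with the same endpoints: a sharp step and a smooth ramp. First-order Tikhonov (squared differences) penalizes the step far more than the ramp, while total variation (absolute differences) charges both the same, which is why TV tends to preserve sharp interfaces. This is an illustrative stand-in, not ADFWI's regularizer code.

```python
# Sketch: first-order Tikhonov vs. first-order total variation penalties.
def tikhonov1(m):
    return sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))

def tv1(m):
    return sum(abs(m[i + 1] - m[i]) for i in range(len(m) - 1))

step = [0.0] * 50 + [1.0] * 50          # sharp interface
ramp = [i / 99.0 for i in range(100)]   # smooth transition, same endpoints

print(tikhonov1(step), tikhonov1(ramp))  # 1.0 vs ~0.0101: Tikhonov hates steps
print(tv1(step), tv1(ramp))              # ~1.0 vs ~1.0: TV treats them equally
```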
Multi-scale strategies play a critical role in FWI as they help to mitigate non-linearity issues and enhance convergence, especially for complex models. Multi-scale strategies are currently in development to further improve robustness and efficiency.
| Test Name | Status | Path | Figure |
|---|---|---|---|
| Iso-elastic Marmousi2 | ✅ | Multi-freq (2Hz, 3Hz, 5Hz) |
| Multi-Scale Name | Status | Path | Figure |
|---|---|---|---|
| Multi-Offsets | 🛠️ | on-going | 🖼️ Inversion process figure under development |
| Multi-scale in Time | 🛠️ | on-going | 🖼️ Inversion process figure under development |
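The idea behind the frequency-domain multi-scale strategy can be sketched simply: smoothing (low-pass filtering) the data lowers its dominant frequency, so early inversion stages see a smoother misfit that is less prone to cycle skipping. Below, a repeated moving average stands in for a proper low-pass filter and a naive DFT estimates the dominant frequency; this is an illustration only, not ADFWI's filtering code.

```python
# Sketch: smoothing a Ricker trace lowers its dominant frequency.
import math

def ricker(t, f0):
    a = (math.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

def smooth(x, passes=1):
    """Crude low-pass: repeated 3-point moving average with edge replication."""
    for _ in range(passes):
        x = [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, len(x) - 1)]) / 3.0
             for i in range(len(x))]
    return x

def dominant_freq(x, dt):
    """Frequency of the largest-power DFT bin (naive O(n^2) DFT, DC excluded)."""
    n = len(x)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return best_k / (n * dt)

dt = 0.004
trace = [ricker((i - 64) * dt, 15.0) for i in range(128)]
f_raw = dominant_freq(trace, dt)
f_smooth = dominant_freq(smooth(trace, passes=20), dt)
print(f_raw, f_smooth)  # smoothing lowers the dominant frequency
```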
| Test Name | Status | Path | Figure |
|---|---|---|---|
| no-regularization | ✅ | no-regular | |
| 2-Layer CNN | ✅ | Example-2LayerCNN | |
| 3-Layer CNN | ✅ | Example-3LayerCNN | |
| 3-Layer Unet | ✅ | Example-3LayerUNet | |
| 4-Layer Unet | ✅ | Example-4LayerUNet |
We employ the DNNs from the Deep Image Prior (DIP) tests described above to perform uncertainty estimation. The variable `p` denotes the dropout ratio applied during both training and inference to evaluate uncertainty.
| Test Name | Status | Path | Figure |
|---|---|---|---|
| 2LayerCNN-uncertainty | ✅ | Codes |
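The Monte-Carlo dropout idea used here can be sketched without any deep-learning framework: keep dropout active at inference and treat the spread of repeated stochastic forward passes as an uncertainty proxy. The "network" below is a fixed linear map with invented weights; in ADFWI the same idea is applied to the DIP networks that reparameterize the model.

```python
# Sketch of MC-dropout uncertainty estimation; illustrative only.
import random

random.seed(0)
weights = [0.5, -1.2, 0.8, 2.0]   # hypothetical trained weights
x = [1.0, 2.0, 3.0, 4.0]          # fixed input
p = 0.2                            # dropout ratio

def forward(drop=True):
    out = 0.0
    for w, xi in zip(weights, x):
        if drop and random.random() < p:
            continue               # unit dropped
        out += w * xi / (1 - p)    # inverted-dropout rescaling
    return out

samples = [forward() for _ in range(2000)]
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(mean, std)  # mean near the deterministic output (8.5), std > 0
```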
- **Multi-Wave Equation**
  - Iso-Acoustic
  - Iso-Elastic
  - VTI-Elastic
  - TTI-Elastic
- **Various Objective Functions**
- **Various Optimization Methods**
- **Multi-Scale FWI Inversion**
  - Multi-frequency
- **Deep Neural Network Integration**
  - DNNs Reparameterization: DNNs reparameterize the Earth model, introducing learnable regularization to improve the inversion process.
  - Dropout: applied to assess inversion uncertainty by randomly dropping units during training, providing a measure of model robustness.
  - Multiphysics Joint Inversion (on-going): neural networks fuse data from different physical fields, enabling joint inversion for a more comprehensive and accurate Earth model.
- **Resource Management**
  - Mini-batch: multi-shot data can be large due to different source positions and time steps; splitting it into mini-batches avoids loading the entire dataset into memory at once.
  - Checkpointing: a key memory-saving technique, particularly for backpropagation in FWI. Instead of storing all intermediate results, only a few checkpoints are saved; during backpropagation, the missing steps are recomputed, reducing memory usage at the cost of extra computation.
  - Boundary Saving (on-going): methods under development to reduce memory usage by saving only the wavefield boundaries during forward propagation instead of the entire wavefield, for later use in backpropagation.
- **Acceleration Methods**
  - GPU Acceleration: utilizes the parallel processing power of GPUs to significantly speed up computations, especially for large-scale simulations such as FWI.
  - JIT (Just-in-Time) Compilation: speeds up execution by compiling Python code into optimized machine code at runtime, improving performance without modifying the original codebase.
  - Reconstruction in a Lower-Level Language (C++) (on-going): rewriting performance-critical components in C++ to leverage lower-level optimizations for faster execution and improved overall efficiency.
- **Robustness and Portability**
  - Each method is accompanied by example code for testing.
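The checkpointing feature above trades compute for memory: store only every k-th state during the forward pass and recompute the missing intermediates on demand. ADFWI does this through PyTorch's checkpointing utilities; the toy loop below (with an arbitrary made-up step function) just shows the mechanics.

```python
# Sketch of checkpointing: keep every k-th state, recompute the rest.
def step(state):
    return state * 1.01 + 0.5      # one arbitrary forward time step

n_steps, k = 100, 10
state = 1.0
checkpoints = {0: state}
for t in range(1, n_steps + 1):
    state = step(state)
    if t % k == 0:
        checkpoints[t] = state     # keep only every k-th state

def recompute(t):
    """Recover the state at step t from the nearest earlier checkpoint."""
    t0 = max(c for c in checkpoints if c <= t)
    s = checkpoints[t0]
    for _ in range(t - t0):
        s = step(s)                # redo at most k-1 steps
    return s

# Reference: full storage (exactly what checkpointing avoids)
full = [1.0]
for t in range(n_steps):
    full.append(step(full[-1]))

print(recompute(37), full[37])  # identical values
```

Memory drops from `n_steps` stored states to `n_steps / k` checkpoints, at the cost of up to `k - 1` recomputed steps per query.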
The Automatic Differentiation-Based Full Waveform Inversion (ADFWI) framework is licensed under the MIT License. This license allows you to:
- Use: You can use the software for personal, academic, or commercial purposes.
- Modify: You can modify the software to suit your needs.
- Distribute: You can distribute the original or modified software to others.
- Private Use: You can use the software privately without any restrictions.
- **Memory Optimization through Boundary and Wavefield Reconstruction**
  - Objective: implement a strategy that saves wavefield boundaries and reconstructs the wavefields to reduce memory usage.
  - Explanation: this approach saves only the wavefield boundaries during forward propagation and reconstructs the full wavefields as needed. By doing so, it minimizes memory consumption while preserving the accuracy of the wave propagation simulation, particularly in large-scale models.
  - Related Work:
    - [1] P. Yang, J. Gao, and B. Wang, "RTM using Effective Boundary Saving: A Staggered Grid GPU Implementation," Comput. Geosci., vol. 68, pp. 64–72, Jul. 2014. doi: 10.1016/j.cageo.2014.04.004.
    - [2] S. Wang, Y. Jiang, P. Song, J. Tan, Z. Liu, and B. He, "Memory Optimization in RNN-Based Full Waveform Inversion Using Boundary Saving Wavefield Reconstruction," IEEE Trans. Geosci. Remote Sens., vol. 61, pp. 1–12, 2023. doi: 10.1109/TGRS.2023.3317529.
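The core observation behind this to-do item can be demonstrated in 1-D: with a time-reversible stepper, the forward wavefield can be regenerated backwards during the adjoint pass from its last two snapshots instead of being stored. The leapfrog scheme with fixed (Dirichlet) boundaries below is exactly reversible; absorbing boundaries would additionally require the saved boundary strips, which is the "boundary saving" part. A conceptual sketch, not ADFWI's planned implementation.

```python
# Sketch: reconstruct a 1-D leapfrog wavefield backwards in time from its
# final two snapshots, and compare against the fully stored history.
import math

nx, nt, c2 = 101, 300, 0.25        # grid points, time steps, (c*dt/dx)^2 (CFL-stable)

def laplacian(u):
    return [0.0] + [u[i - 1] - 2 * u[i] + u[i + 1] for i in range(1, nx - 1)] + [0.0]

# initial condition: a Gaussian bump, zero initial velocity
u_prev = [math.exp(-0.05 * (i - nx // 2) ** 2) for i in range(nx)]
u_curr = list(u_prev)

history = [list(u_prev), list(u_curr)]         # full storage, kept only for checking
for _ in range(nt):
    lap = laplacian(u_curr)
    u_next = [2 * u_curr[i] - u_prev[i] + c2 * lap[i] for i in range(nx)]
    u_next[0] = u_next[-1] = 0.0               # Dirichlet boundaries
    u_prev, u_curr = u_curr, u_next
    history.append(list(u_curr))

# Reconstruct backwards from the final two snapshots only.
b_curr, b_next = list(history[-2]), list(history[-1])
err = 0.0
for t in range(nt, 0, -1):
    lap = laplacian(b_curr)
    b_prev = [2 * b_curr[i] - b_next[i] + c2 * lap[i] for i in range(nx)]
    b_prev[0] = b_prev[-1] = 0.0
    b_next, b_curr = b_curr, b_prev
    err = max(err, max(abs(a - b) for a, b in zip(b_curr, history[t - 1])))
print(err)  # tiny: the reconstruction matches the stored history
```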
- **C++ / C-Based Forward Propagator**
  - Objective: develop a forward wave propagation algorithm in C++ or C.
  - Explanation: implementing the forward propagator in a lower-level language such as C++ or C will significantly enhance computational performance, particularly for large-scale simulations, by leveraging the better memory management, faster execution, and more efficient parallel computing these languages offer over Python-based implementations.
- **Resource Optimization for Memory Efficiency**
  - Objective: reduce memory consumption for improved resource utilization.
  - Explanation: the current computational framework may encounter memory bottlenecks, especially when processing large datasets. Optimizing memory usage by eliminating redundant storage, streamlining data structures, and using efficient algorithms will help scale up computations while maintaining or even enhancing performance. This task is critical for handling larger and more complex datasets.
- **Custom Input Data Management System**
  - Objective: develop a tailored system for managing input data effectively.
  - Explanation: a customized data management framework is needed to better organize, preprocess, and handle input data efficiently. This may involve designing workflows for data formatting, conversion, and preprocessing, ensuring the consistency and integrity of the input data. Such a system will provide flexibility for various input types and scales, and it will be crucial for maintaining data quality throughout the project lifecycle.
- **Enhanced Gradient Processing**
  - Objective: implement advanced techniques for gradient handling.
  - Explanation: a more sophisticated gradient-processing strategy will improve inversion results by ensuring that gradients are effectively utilized and interpreted. This may include techniques such as gradient clipping, adaptive learning rates, and noise reduction to enhance the stability and convergence of the optimization process, ultimately leading to more accurate inversion outcomes.
- **Multi-Scale Inversion Strategies (done!)**
  - Objective: introduce multi-scale approaches for improved inversion accuracy.
  - Explanation: multi-scale inversion processes data at various scales to capture both large-scale trends and small-scale features. Implementing this strategy enhances the robustness of the inversion process, allowing for better resolution of subsurface structures; techniques such as hierarchical modeling and wavelet analysis may be considered, improving the overall quality of the inversion results.
- **Real Data Testing**
  - Objective: evaluate the performance and robustness of the developed methodologies on real-world datasets.
  - Explanation: testing with actual data is crucial for validating the effectiveness of the developed algorithms. This will involve the following steps:
    1. Dataset Selection: identify relevant real-world datasets that reflect the complexities of the target applications, including the diverse scenarios and noise characteristics typical of field data.
    2. Preprocessing: apply the preprocessing needed to ensure data quality and consistency, such as normalization, filtering, and handling of missing or corrupted values.
    3. Implementation: run the developed algorithms on the selected datasets, monitoring performance metrics such as accuracy, computational efficiency, and convergence behavior.
    4. Comparison: compare the results against established benchmarks or existing methodologies to assess improvements.
    5. Analysis: analyze the outcomes to identify strengths and weaknesses, and document any discrepancies or unexpected behaviors; this analysis will help refine the algorithms and inform future iterations.
    6. Reporting: summarize the findings in a comprehensive report detailing the testing procedures, results, and implications for future work.

  This real-data testing phase is essential for ensuring that the developed methodologies not only perform well in controlled environments but also translate effectively to real-world applications. It serves as a critical validation step before broader deployment and adoption.
Developed by Feng Liu at the University of Science and Technology of China (USTC) and Shanghai Jiao Tong University (SJTU).
The related paper, *Full Waveform Inversion of (An)Isotropic Wave Physics in a Versatile Deep Learning Framework*, is currently in preparation.
For any inquiries, please contact Liu Feng via email at: [email protected] or [email protected].
```bibtex
@software{ADFWI_LiuFeng_2024,
  author  = {Feng Liu and Haipeng Li and GuangYuan Zou and Junlun Li},
  title   = {ADFWI},
  month   = {July},
  year    = {2024},
  version = {v1.0},
}
```