StorSeismic: An approach to pre-train a neural network to store seismic data features (Caesario, M.R., and Alkhalifah, T., 2022)

With the help of the self-attention mechanism embedded in Bidirectional Encoder Representations from Transformers (BERT), a Transformer-based network architecture originally developed for Natural Language Processing (NLP) tasks, we capture and store the local and global features of seismic data in a pre-training stage, and then utilize them for various seismic processing tasks in a fine-tuning stage.
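To make the pre-training idea concrete, the sketch below shows one way such a masked-reconstruction objective could be set up in PyTorch: each trace in a shot gather is treated as a token, a fraction of traces is hidden, and a Transformer encoder is trained to reconstruct them. This is a minimal illustration of the concept, not the authors' StorSeismic implementation; the class and function names (MaskedTracePretrainer, pretrain_step) and all hyperparameters are assumptions for the example.

```python
import torch
import torch.nn as nn

class MaskedTracePretrainer(nn.Module):
    """Hypothetical BERT-style encoder that treats each trace as a token."""
    def __init__(self, n_time=256, max_traces=64, d_model=128, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(n_time, d_model)        # per-trace embedding
        self.pos = nn.Parameter(torch.zeros(1, max_traces, d_model))
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_time)         # reconstruct trace samples

    def forward(self, gather, mask):
        # gather: (batch, n_traces, n_time); mask: (batch, n_traces), True = hidden
        x = self.embed(gather)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        x = x + self.pos[:, :gather.size(1)]           # positional information per trace
        return self.head(self.encoder(x))

def pretrain_step(model, gather, mask_ratio=0.15):
    # Randomly hide a fraction of traces and reconstruct them (masked-token objective).
    mask = torch.rand(gather.shape[:2], device=gather.device) < mask_ratio
    pred = model(gather, mask)
    return nn.functional.mse_loss(pred[mask], gather[mask])
```

The loss is computed only on the masked traces, mirroring BERT's masked-token objective; the attention maps produced inside the encoder are the quantities visualized in Figure 1.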

[Figure 1]

Figure 1. a) Example reconstruction of masked seismic shot gathers from the testing set. b) Attention maps that correspond to Figure 1a. c) Attention rollout for trace no. 11, calculated from the attention maps in Figure 1b.
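The attention rollout in Figure 1c aggregates the per-layer attention maps into a single trace-to-trace relevance map. A minimal sketch of that post-processing step is given below, assuming the per-layer attention maps have already been collected; it is an illustration of the standard rollout recipe (head averaging, residual correction, layer-by-layer multiplication), not code from the paper.

```python
import torch

def attention_rollout(attentions):
    # attentions: list of per-layer maps, each (n_heads, n_traces, n_traces),
    # for a single shot gather.
    rollout = None
    for attn in attentions:
        a = attn.mean(dim=0)                                      # average over heads
        a = 0.5 * a + 0.5 * torch.eye(a.size(-1), device=a.device)  # account for residuals
        a = a / a.sum(dim=-1, keepdim=True)                       # re-normalize rows
        rollout = a if rollout is None else a @ rollout           # accumulate across layers
    return rollout

# Row 11 of the result, rollout[11], summarizes how much each input trace
# contributes to trace no. 11's final representation.
```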

 

[Figure 2]

Figure 2. a) Example denoising result from the fine-tuned model for data with 1-sigma added noise (corresponding to the test shot gather in Figure 1a). b) Test shot gather after being denoised by the first task (left) and the corresponding true and predicted velocities from the fine-tuned model (right), both corresponding to the data in Figure 1a.
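As a rough illustration of the fine-tuning stage behind Figure 2 (again an assumption, not the paper's code), the pre-trained encoder from the earlier sketch can be reused directly and trained on noisy/clean pairs for denoising, while a task such as velocity estimation would instead swap in a regression head.

```python
import torch
import torch.nn as nn

def finetune_denoise_step(model, noisy_gather, clean_gather):
    # Reuse the (hypothetical) MaskedTracePretrainer from the pre-training sketch.
    # No traces are masked during fine-tuning: the noisy gather is passed through
    # the pre-trained encoder and the head learns to output the clean traces.
    no_mask = torch.zeros(noisy_gather.shape[:2], dtype=torch.bool,
                          device=noisy_gather.device)
    pred = model(noisy_gather, no_mask)
    return nn.functional.mse_loss(pred, clean_gather)

# For a task like velocity estimation, one would instead replace model.head with a
# regression head (e.g. nn.Linear over the pooled trace representations).
```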

 

References

Caesario, M.R., and Alkhalifah, T., 2022, "StorSeismic: An approach to pre-train a neural network to store seismic data features", submitted to the 83rd EAGE Annual Conference and Exhibition.
