Conventional finite-difference simulation of seismic wavefields is time-consuming, and GPU parallel computing is an effective way to improve simulation efficiency. Based on a staggered-grid finite-difference scheme for the first-order stress-velocity acoustic wave equation, this paper adopts a blocking strategy: the geological model is decomposed into many small geological sub-blocks, each handled by one thread block, and constant memory, per-block shared memory, and registers are used to reduce accesses to global memory, yielding a GPU-accelerated wavefield simulation. Simulation results for grids of different sizes on a single CPU and on GPU/CPU show that GPU acceleration improves efficiency by several times; the speedup is especially significant when simulating large grids with many shot points.
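The scheme the abstract names is a staggered-grid finite-difference discretization of the first-order stress-velocity (here pressure-velocity) acoustic equations. A minimal 1-D NumPy sketch of one leapfrog update is given below; the paper presumably works in 2-D or 3-D with higher-order stencils, so the grid sizes, medium parameters, and the impulsive source here are purely illustrative assumptions.

```python
import numpy as np

def step(p, v, rho, c, dx, dt):
    """One staggered-grid leapfrog update of the 1-D first-order
    pressure-velocity acoustic system:
        dv/dt = -(1/rho) dp/dx,   dp/dt = -rho c^2 dv/dx.
    p[i] lives at x = i*dx; v[i] lives at x = (i + 1/2)*dx,
    offset by dt/2 in time."""
    # particle-velocity update from the pressure gradient across each half node
    v -= dt / (rho * dx) * (p[1:] - p[:-1])
    # interior pressure update from the particle-velocity divergence
    p[1:-1] -= rho * c**2 * dt / dx * (v[1:] - v[:-1])

# illustrative homogeneous model (all values assumed, not from the paper)
nx, dx = 201, 5.0            # grid points, spacing in m
c, rho = 2000.0, 1000.0      # velocity in m/s, density in kg/m^3
dt = 0.4 * dx / c            # time step satisfying the CFL condition

p = np.zeros(nx)
v = np.zeros(nx - 1)
p[nx // 2] = 1.0             # impulsive pressure source at the grid center
for _ in range(100):
    step(p, v, rho, c, dx, dt)
```

Because the update is explicit and stencil-local, each grid point touches only its immediate neighbors, which is exactly what makes the sub-block decomposition described in the abstract possible.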
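The blocking strategy itself can be illustrated in plain NumPy: each tile plus a one-cell halo plays the role of a geological sub-block that a CUDA thread block would stage into shared memory before applying the stencil, so global memory is read once per halo region instead of once per stencil application. This toy uses a five-point Laplacian rather than the paper's staggered-grid operator, and `tile=8` is an arbitrary choice; both are assumptions for illustration.

```python
import numpy as np

def laplacian(u, dx):
    """Five-point Laplacian on the interior of a 2-D array."""
    return (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            - 4.0 * u[1:-1, 1:-1]) / dx**2

def blocked_laplacian(u, dx, tile=8):
    """Same interior Laplacian, computed tile by tile. Each tile plus a
    one-point halo mimics a thread block loading its sub-block (with halo)
    into shared memory and computing independently of other blocks."""
    ny, nx = u.shape
    out = np.empty((ny - 2, nx - 2))
    for i0 in range(1, ny - 1, tile):
        for j0 in range(1, nx - 1, tile):
            i1 = min(i0 + tile, ny - 1)
            j1 = min(j0 + tile, nx - 1)
            halo = u[i0 - 1:i1 + 1, j0 - 1:j1 + 1]   # sub-block + halo
            out[i0 - 1:i1 - 1, j0 - 1:j1 - 1] = laplacian(halo, dx)
    return out
```

The per-tile results assemble exactly into the global stencil output, which is why the decomposition changes only memory traffic, not the simulated wavefield.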