
    3D fault identification based on an improved U-Net and knowledge distillation

      Abstract: Deep learning has been widely applied to fault identification in 3D seismic data, but its application is hampered by poor dataset quality, excessive resource consumption, and long training cycles. To address these problems, a 3D fault identification method that integrates an improved U-Net with knowledge distillation is proposed. The improved U-Net serves as the teacher model, while a lightweight student model is built by fusing the atrous spatial pyramid pooling (ASPP) structure into the U-Net architecture. Knowledge distillation is then introduced to optimize the student model: by adjusting the training hyperparameters and the distillation loss parameters, the student acquires richer fault information and its network performance improves. Transferring knowledge from the complex teacher model to the lightweight student model significantly reduces computational complexity while maintaining high identification accuracy. Tests on a synthetic test set and field seismic data show that the distilled student model outperforms both the undistilled student model and the separately trained teacher model in identification accuracy and fault continuity, verifying the feasibility and effectiveness of the method.
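      The abstract does not give the configuration of the ASPP block that is fused into the U-Net to form the lightweight student model. As a minimal sketch, assuming PyTorch and purely illustrative channel counts and dilation rates (the class name ASPP3D and all sizes are hypothetical, not taken from the paper), such a multi-scale module could look like:

      import torch
      import torch.nn as nn

      class ASPP3D(nn.Module):
          """Parallel dilated 3D convolutions, concatenated and projected back.
          Channel counts and dilation rates here are illustrative, not the paper's."""

          def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
              super().__init__()
              # One 3x3x3 branch per dilation rate; padding = rate keeps the volume size unchanged.
              self.branches = nn.ModuleList([
                  nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
                  for r in rates
              ])
              # 1x1x1 convolution that fuses the concatenated multi-scale features.
              self.project = nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1)

          def forward(self, x):
              feats = [branch(x) for branch in self.branches]
              return self.project(torch.cat(feats, dim=1))

      For example, ASPP3D(32, 32)(torch.randn(1, 32, 16, 16, 16)) returns a tensor of the same spatial size, so such a block can replace or follow a bottleneck stage of the U-Net without changing the rest of the architecture.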

       
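      The distillation step itself is described only at the level of "training hyperparameters and distillation loss parameters". A minimal sketch of a typical soft-target distillation loss for voxel-wise fault probabilities, assuming PyTorch and hypothetical parameter names (temperature and alpha are illustrative, not values reported in the paper), is:

      import torch
      import torch.nn.functional as F

      def distillation_loss(student_logits, teacher_logits, labels,
                            temperature=2.0, alpha=0.5):
          """Weighted sum of a hard-label term and a soft-target term.
          All tensors are voxel-wise fault logits/labels of shape (N, 1, D, H, W);
          temperature and alpha are illustrative hyperparameters."""
          # Hard-label term: ordinary binary cross-entropy against the fault labels.
          hard = F.binary_cross_entropy_with_logits(student_logits, labels.float())

          # Soft-target term: match the teacher's temperature-softened fault
          # probabilities; the T^2 factor keeps gradient magnitudes comparable.
          t_prob = torch.sigmoid(teacher_logits / temperature)
          s_prob = torch.sigmoid(student_logits / temperature)
          soft = F.binary_cross_entropy(s_prob, t_prob) * temperature ** 2

          return alpha * soft + (1.0 - alpha) * hard

      In training, the teacher's logits would be computed under torch.no_grad() and only the student's parameters updated, which is what allows the lightweight student to carry most of the teacher's accuracy at a fraction of the inference cost.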

