Zhang SM, Yang RJ, Zhu SH, Wang H, Tian SQ, Zhang XY, Li JQ, Lei RH. Comparative study of two different methods for automatic segmentation of organs at risk in head and neck region[J]. Chinese Journal of Radiological Medicine and Protection, 2020, 40(5): 385-391
Comparative study of two different methods for automatic segmentation of organs at risk in head and neck region
Submitted: 2019-12-23
DOI:10.3760/cma.j.issn.0254-5098.2020.05.010
英文关键词:Automatic segmentation  Organs at risk  Deep learning  Atlas library
Funding: National Natural Science Foundation of China (81071237, 81372420)
Authors, affiliations, and e-mail:
Zhang Shuming  Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191
Yang Ruijie  Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191  ruijyang@yahoo.com
Zhu Senhua  Beijing Linking Medical Technology Co., Ltd., Beijing 100085
Wang Hao  Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191
Tian Suqing  Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191
Zhang Xuyang  Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191 (now at the Cancer Center, Beijing Luhe Hospital, Capital Medical University, Beijing 101100)
Li Jiaqi  Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191
Lei Runhong  Department of Radiation Oncology, Peking University Third Hospital, Beijing 100191
Abstract:
      Objective To develop a deep-learning-based auto-segmentation model for organs at risk (OARs) in the head and neck (H&N) region and to compare it with atlas-based auto-segmentation software (Smart segmentation). Methods The auto-segmentation model consisted of a classification model and a segmentation model, both based on deep learning neural networks. The classification model classified CT slices into six categories along the cranio-caudal direction, and the CT slices corresponding to the categories for each OAR were then passed to the segmentation model. The CT image data of 150 patients were used to train the auto-segmentation model and to build the atlas library in the Smart segmentation software. Another 20 patients were used as the testing dataset for both the auto-segmentation model and the Smart segmentation software. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to evaluate the accuracy of the two methods, and the auto-segmentation time was recorded. A paired Student's t-test or the non-parametric Wilcoxon signed-rank test was performed, depending on the result of the normality test. Results The DSC and HD of the auto-segmentation model were 0.88 and 4.41 mm for the brainstem, 0.89 and 2.00 mm for the left eye, 0.89 and 2.12 mm for the right eye, 0.70 and 3.00 mm for the left optic nerve, 0.80 and 2.24 mm for the right optic nerve, 0.81 and 7.98 mm for the left temporal lobe, 0.84 and 8.82 mm for the right temporal lobe, 0.89 and 5.57 mm for the mandible, 0.70 and 11.92 mm for the left parotid, and 0.77 and 11.27 mm for the right parotid. Except for the left and right parotids, the results of the auto-segmentation model were significantly better than those of Smart segmentation (t = 3.115-7.915, Z = -1.352 to -3.921, P < 0.05). In addition, the auto-segmentation model was 51.28% faster than Smart segmentation.
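The two evaluation metrics reported above have standard definitions: DSC measures volumetric overlap between the automatic and reference contours, and HD measures the largest boundary disagreement. A minimal sketch of both, assuming the contours are available as binary NumPy masks (the function names and the simplification of using all foreground voxels rather than extracted surface points are illustrative, not the paper's actual implementation):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a: np.ndarray, b: np.ndarray,
                       spacing=(1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance in mm, taking the max of the two
    directed distances; voxel indices are scaled by pixel spacing."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0],
               directed_hausdorff(pb, pa)[0])
```

A smaller DSC (toward 0) and a larger HD indicate worse agreement; an undersegmented parotid, for example, can keep a moderate DSC while its HD grows with the largest missed extent, which is consistent with the parotids showing the largest HD values in the Results.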
Conclusions In this study, the deep-learning-based auto-segmentation model achieved accurate segmentation of OARs in H&N CT images and outperformed the Smart segmentation software in both accuracy and speed.
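The two-stage workflow described in the Methods (classify every axial slice into one of six cranio-caudal categories, then run each OAR's segmentation network only on the slices whose category may contain that organ) can be sketched as follows. All names here (OAR_CATEGORIES, segment_patient, the specific category assignments) are illustrative assumptions, not the authors' actual code:

```python
import numpy as np
from typing import Callable, Dict, List, Set

# Hypothetical map from OAR to the cranio-caudal slice categories
# (1-6) that may contain it; the assignments are made up for the demo.
OAR_CATEGORIES: Dict[str, Set[int]] = {
    "brainstem": {1, 2},
    "eye_l": {1},
    "parotid_l": {3, 4},
}

def segment_patient(
    slices: List[np.ndarray],
    classify: Callable[[np.ndarray], int],
    segmenters: Dict[str, Callable[[np.ndarray], np.ndarray]],
) -> Dict[str, np.ndarray]:
    """Classify each slice once, then route only the relevant slices
    to each OAR's segmentation model; other slices stay empty."""
    categories = [classify(s) for s in slices]
    volumes: Dict[str, np.ndarray] = {}
    for oar, segment in segmenters.items():
        mask = np.zeros((len(slices), *slices[0].shape), dtype=bool)
        for i, (s, c) in enumerate(zip(slices, categories)):
            if c in OAR_CATEGORIES[oar]:
                mask[i] = segment(s)
        volumes[oar] = mask
    return volumes
```

Restricting each segmentation network to its plausible slice range is one way such a design can both avoid false positives far from the organ and reduce total inference time, which is consistent with the reported 51.28% speed advantage over the atlas-based method.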