| Downloads | Citations | Reads |
| 164 | 0 | 138 |
Abstract: As a crucial component of questioned document examination, the determination of the sequence of seal impressions and handwriting (seal-ink overlapping) has traditionally relied on examiners' subjective expertise, which can yield insufficiently reliable and poorly reproducible conclusions in complex cases. To address these issues, a deep-learning-based method for determining the sequence of seal-ink overlapping is proposed, enabling rapid identification across multiple scenarios. Three-dimensional microscopic feature images are captured at intersecting and non-intersecting regions of handwriting strokes and seal impressions, and a six-category dataset is constructed from combinations of two handwriting types and three seal types; each category is randomly divided into training and validation sets at an 8:2 ratio. By combining the global-context modeling capability of the Vision Transformer (ViT) with the efficient local feature extraction of EfficientNet (EN), a ViT-EN (Vision Transformer-EfficientNet) composite model is developed to achieve rapid, accurate, and intelligent recognition of the microscopic images. Experimental results show validation accuracies of 99.00%, 98.00%, 99.00%, 100.00%, 99.00%, and 98.00% on the six categories, respectively. The approach provides an objective, efficient, and quantifiable intelligent auxiliary tool for sequence determination of seal-ink overlapping.
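The abstract describes fusing a global-context branch (ViT) with a local-feature branch (EfficientNet) into one classifier over the six categories. The sketch below illustrates that two-branch fusion pattern in PyTorch; the tiny patch-embedding transformer and the small convolutional stack are illustrative stand-ins, not the authors' actual ViT or EfficientNet configurations, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CompositeClassifier(nn.Module):
    """Two-branch classifier: global (transformer) + local (conv) features
    are concatenated before a shared linear head. Sizes are illustrative."""

    def __init__(self, num_classes=6, embed_dim=64):
        super().__init__()
        # Stand-in for the ViT branch: 16x16 patch embedding + encoder.
        self.patch = nn.Conv2d(3, embed_dim, kernel_size=16, stride=16)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Stand-in for the EfficientNet branch: small conv stack + pooling.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, embed_dim, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, x):
        tokens = self.patch(x).flatten(2).transpose(1, 2)  # (B, N, D)
        g = self.encoder(tokens).mean(dim=1)               # global features
        l = self.cnn(x)                                    # local features
        return self.head(torch.cat([g, l], dim=1))         # fused logits

model = CompositeClassifier()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 6])
```

In practice the two backbones would be pretrained (e.g. via timm) and fine-tuned on the microscopic images; concatenation before a shared head is one common fusion choice, with attention-based fusion as an alternative.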
Basic information:
Chinese Library Classification (CLC) codes: TP391.41; D918.9
Citation:
[1] HUANG Rui, WENG Zongzhou, XIE Xiaoxue. Research on recognition of microscopic images for seal-ink sequence determination based on a ViT-EN composite model [J]. Journal of People's Public Security University of China (Science and Technology), 2025, 31(04): 9-20.
Funding:
Chongqing Science and Technology Commission Program (2024NSCQ-LZX0205); Chongqing Municipal Education Commission Research Fund (CXQT21033); Southwest University of Political Science and Law Undergraduate Top Innovative Talent Research Capability Enhancement Project (2025KYTS0092)