A.N. Grekov1,2, Y.E. Shishkin1, S.S. Peliushenko1, A.S. Mavrin1,2
1Institute of Natural and Technical Systems, RF, Sevastopol, Lenin St., 28
2Sevastopol State University, RF, Sevastopol, Universitetskaya St., 33
E–mail: i@angrekov.ru
DOI: 10.33075/2220-5861-2022-4-112-122
UDC 681.3
Abstract:
In modern science, machine vision is one of the most promising methods for automating the analysis of data from visual monitoring of the marine environment. Over the past decade, great progress has been made in real-time object detection in photo and video images owing to the development of single-stage neural network algorithms and high-performance GPUs for their practical application. Micro-object detection is considered here as a computer vision method used to locate and identify microplankton and microplastic objects in situ.
The article presents the results of research on the application of the YOLOV5 machine learning model to the problem of automated detection and recognition of micro-objects in the marine environment. Numerical metrics for assessing the quality of image recognition, precision and recall, were selected; these metrics were used when tuning the recognition model, during training, and for its validation. The convergence of the obtained solutions was assessed as a function of the number of training iterations and the size of the training sample. The training and validation of the model were carried out on a specially prepared database of real images containing microplankton and microplastic samples. The results of experiments using the trained algorithm to find micro-objects in photo and video images in real time are presented. Experimental studies have shown that the reliability of the results obtained by the model is high and comparable to that of manual recognition.
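The article itself does not reproduce the implementation details, so the detection step described above can only be illustrated schematically. The selected metrics are the standard ones for object detectors: precision = TP / (TP + FP) and recall = TP / (TP + FN), where TP, FP and FN are true positive, false positive and false negative detections. Below is a minimal inference sketch assuming the publicly documented ultralytics/yolov5 PyTorch Hub interface; the weights file microobjects_best.pt and the frame path are hypothetical placeholders, not artifacts of this study.

```python
# Minimal sketch: loading hypothetical micro-object weights ('microobjects_best.pt')
# through the public ultralytics/yolov5 PyTorch Hub API and running detection on
# a single frame, as one might do for microplankton/microplastic classes.
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='microobjects_best.pt')
model.conf = 0.25  # confidence threshold for reported detections

results = model('frame_0001.jpg')      # path to a camera/microscope frame (illustrative)
results.print()                        # per-class detection counts and inference time
detections = results.pandas().xyxy[0]  # bounding boxes, confidences, class names
print(detections[['name', 'confidence', 'xmin', 'ymin', 'xmax', 'ymax']])
```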
Keywords: machine learning, marine environment, YOLOV5, microplastics, microplankton, real-time recognition.
For citation:
REFERENCES
- Viola P. and Jones M. Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), 2001, Vol. 1, pp. 511–518.
- Sachin M. Different Types of Object Detection Algorithms in Nutshell. Machine Learning Knowledge, 2020.
- Dalal N. and Triggs B. Histograms of oriented gradients for human detection. 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05), 2005, Vol. 1, pp. 886–893.
- Lowe D.G. Distinctive image features from scale-invariant keypoints. International journal of computer vision, Vol. 60, No. 2, 2004, pp. 91–110.
- LeCun Y., Bengio Y., and Hinton G. Deep learning. Nature, Vol. 521, No. 7553, 2015, pp. 436–444.
- Dai J., Wang R., Zheng H., Ji G., and Qiao X. ZooplanktoNet: Deep convolutional network for zooplankton classification. Proceedings of OCEANS 2016 - Shanghai, Shanghai, China, 10–13 April 2016, IEEE: New York, NY, USA, 2016, pp. 1–6.
- Girshick R., Donahue J., Darrell T. and Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.
- Girshick R. Fast R-CNN. Proceedings of the IEEE international conference on computer vision, 2015, pp. 1440–1448.
- Ren S., He K., Girshick R., and Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, Vol. 28, 2015, pp. 91–99.
- Girshick R. Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440–1448.
- Ren S., He K., Girshick R., and Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 2017, pp. 1137–1149.
- Konishi Y., Hanzawa Y., Kawade M., and Hashimoto M. Fast 6D pose estimation using hierarchical pose trees. Proceedings of the European Conference on Computer Vision (ECCV), 2016, pp. 398–413.
- Redmon J., Divvala S., Girshick R., and Farhadi A. You only look once: unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
- Redmon J. and Farhadi A. YOLO9000: better, faster, stronger. Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263–7271.
- Redmon J. and Farhadi A. YOLOV3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
- Liu W. SSD: Single shot multibox detector. European Conference on Computer Vision (ECCV), 2016, Springer, pp. 21–37.
- Timoshkin M.S., Mironov A.N., and Leontiev A.S. Sravnenie YOLOV5 i Faster R-CNN dlya obnaruzheniya lyudej na izobrazhenii v potokovom rezhime (Comparison of YOLOV5 and Faster R-CNN for streaming people detection). International Research Journal, No. 6 (120), Vol. 1, June, pp. 137–146.
- Filichkin S.A. and Vologdin S.V. Primenenie nejronnoj seti YOLOV5 dlya raspoznavaniya nalichiya sredstv individual’noj zashchity (Application of the YOLOV5 neural network to recognize the presence of personal protective equipment). Intelligent Systems in Production, 2022, Vol. 20, No. 2, pp. 61–67.
- Liang T.J., Pan W.G., Bao H., and Pan F. Vehicle wheel weld detection based on improved YOLOV4 algorithm. Computer Optics, 2022, Vol. 46, No. 2, pp. 271–279.
- Kaplunenko D.D., Zotov S.S., Subote A.E., and Fishchenko V.K. Primenenie nejronnyh setej dlya klassifikacii biologicheskih ob”ektov po podvodnym kameram MES ostrova Popova (Application of neural networks for the classification of biological objects from the underwater cameras of the Popov Island MES). Underwater Research and Robotics, 2022, No. 1 (39), pp. 72–79.