Security and threats overview in machine learning
Authors: Losev N.S., Glinskaya E.V.
Published in issue: #1(96)/2025
DOI:
Category: Informatics, Computer Engineering and Control | Chapter: Methods and Systems of Information Protection, Information Security
Keywords: machine learning, countermeasure strategies, intruder attack, data protection, protection methods, attack models, vulnerabilities, critical infrastructure
Published: 14.02.2025
Machine learning is transforming many aspects of our lives through intelligent digital solutions. Its use in critical infrastructure attracts intruders who seek to obtain the algorithms, methods, and data underlying the models in order to expand their influence and profit from system vulnerabilities. The paper analyzes the main vulnerabilities of machine learning systems as an object of protection and classifies the various attacks against them. It also presents effective methods for preventing future attacks and technologies for mitigating their potentially dangerous consequences.
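One family of attacks the paper classifies is evasion: perturbing an input at inference time so a trained model misclassifies it. The sketch below illustrates the idea in the spirit of the fast gradient sign method on a toy logistic-regression model; the model, weights, and data are illustrative assumptions, not taken from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift each feature by eps along the sign of the loss gradient.

    For logistic regression with log-loss, d(loss)/dx_i = (p - y) * w_i,
    so the sign of that product tells the attacker which way to push x_i.
    """
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for wi, xi in zip(w, x)]

# A toy "model" and an input it classifies correctly as class 1.
w, b = [2.0, -1.0], 0.0
x, y = [0.4, -0.2], 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.6)
print(predict(w, b, x))       # well above 0.5: class 1
print(predict(w, b, x_adv))   # below 0.5: the prediction has flipped
```

With a large enough perturbation budget `eps`, the bounded per-feature shift is sufficient to flip the toy model's decision, which is why the defenses the paper surveys (e.g., adversarial training, input sanitization) constrain or detect such perturbations.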
References
[1] Koroteev M.V. Fundamentals of machine learning in Python. Moscow, KnoRus Publ., 2024, 432 p. (In Russ.).
[2] Ulyanikhin E. Machine learning in the information security field. In: Language in the Field of Professional Communication: Proceedings of the International Scientific and Practical Conference of Teachers, Postgraduates and Students. Yekaterinburg, Azhur Publishing House, 2020, pp. 704–707. (In Russ.).
[3] Gribunin V.G., Grishanenko R.L., Labaznikov A.P., Timonov A.A. Safety of machine learning systems. Protected assets, vulnerabilities, intruder and threat model, attack taxonomy. Proceedings of the Institute of Engineering Physics, 2021, no. 3 (61), pp. 65–71. (In Russ.).
[4] Lahe A.D., Singh G. A Survey on Security Threats to Machine Learning Systems at Different Stages of its Pipeline. International Journal of Information Technology and Computer Science, 2023, vol. 15, no. 2, pp. 23–34. https://doi.org/10.5815/ijitcs.2023.02.03
[5] Gupta P., Yadav K., Gupta B.B. et al. A Novel Data Poisoning Attack in Federated Learning based on Inverted Loss Function. Computers & Security, 2023, vol. 130, art. 103270. https://doi.org/10.1016/j.cose.2023.103270
[6] Paracha A., Arshad Ju., Farah M.B., Ismail Kh. Machine learning security and privacy: a review of threats and countermeasures. EURASIP Journal on Information Security, 2024, vol. 2024, no. 1, art. 10. https://doi.org/10.1186/s13635-024-00158-3
[7] Bai Ya., Wang Y., Zeng Yu. et al. Query efficient black-box adversarial attack on deep neural networks. Pattern Recognition, 2023, vol. 133, art. 109037. https://doi.org/10.1016/j.patcog.2022.109037
[8] Wu Di., Qi S., Qi Y. et al. Understanding and defending against White-box membership inference attack in deep learning. Knowledge-Based Systems, 2023, vol. 259, art. 110014. https://doi.org/10.1016/j.knosys.2022.110014
[9] Popkov Yu.S. Machine learning and randomized machine learning: similarities and differences. In: System Analysis and Information Technologies SAIT-2019: Proceedings of the Eighth International Conference. Irkutsk, FITZ IU RAS Publ., 2019, pp. 10–25. (In Russ.). https://doi.org/10.14357/SAIT2019001
[10] Astapov R.L., Mukhamadeeva R.M. Automation of machine learning parameter selection and machine learning model training. Current Scientific Research in the Modern World, 2021, no. 5–2 (73), pp. 34–37. (In Russ.).