The use of convolutional neural networks for training a manipulation robot to capture objects
Authors: Jamal Mais
Published in issue: #1(42)/2020
DOI: 10.18698/2541-8009-2020-1-571
Category: Mechanical Engineering and Machine Science | Chapter: Robots, Mechatronics, and Robotic Systems

Keywords: robot, manipulator, capture operation, neural network, convolutional neural network, machine learning, neural network training, artificial intelligence

Published: 31.01.2020
The capture of various objects is the main task a robot performs when exploring and manipulating its environment. Since programming the required capture position for every individual object is highly laborious, it is proposed to train the robot to capture various objects, taking their spatial position into account, using a convolutional neural network. The network was trained on a sample of 800 images of 20 objects. An experimental study showed that the network achieves a 53.04% success rate in capturing new objects that were not in the training set. This suggests that, as the training sample grows, the robot will be able to successfully capture objects that meet certain conditions even when they are absent from the training sample.
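The core operation of the convolutional network described above can be illustrated with a minimal sketch. This is not the paper's architecture or data: the image, kernel, and the idea of reading the response map as a "grasp score" are purely illustrative assumptions, showing only how a convolution slides a learned filter over an input image to localize features (here, a vertical edge a parallel-jaw gripper could close on).

```python
# Hedged sketch: a plain 2D cross-correlation (the "convolution" in CNNs),
# written without any framework. The image, kernel, and interpretation of
# the output as a grasp-quality map are illustrative assumptions only.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of `image` with `kernel`
    (both given as lists of lists of numbers)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            s = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    s += image[y + dy][x + dx] * kernel[dy][dx]
            out[y][x] = s
    return out

# Toy 4x4 "depth image" with a vertical edge between columns 1 and 2.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# A hand-crafted vertical-edge kernel; in a trained CNN such filters
# are learned from the training sample rather than written by hand.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

scores = conv2d(image, kernel)  # 2x2 response map
print(scores)  # every window spans the edge, so all responses are 3.0
```

In a real grasp-detection network, many such filters are stacked in layers, their weights are learned by backpropagation, and the final layers regress or classify a grasp pose instead of a single scalar response.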
References
[1] Saxena A., Driemeyer J., Ng A.Y. Robotic grasping of novel objects using vision. The Int. J. Robot. Res., 2008, vol. 27, no. 2, pp. 157–173. DOI: https://doi.org/10.1177/0278364907087172
[2] Jiang Y., Moseson S., Saxena A. Efficient grasping from RGBD images: learning using a new rectangle representation. IEEE ICRA, 2011, pp. 3304–3311. DOI: https://doi.org/10.1109/ICRA.2011.5980145
[3] Ciocarlie M., Hsiao K., Jones E.G., et al. Towards reliable grasping and manipulation in household environments. In: Experimental robotics. Springer, 2014, pp. 241–252.
[4] Lenz I., Lee H., Saxena A. Deep learning for detecting robotic grasps. The Int. J. Robot. Res., 2015, vol. 34, no. 4-5, pp. 705–724. DOI: https://doi.org/10.1177/0278364914549607
[5] Redmon J., Angelova A. Real-time grasp detection using convolutional neural networks. IEEE ICRA, 2015, pp. 1316–1322. DOI: https://doi.org/10.1109/ICRA.2015.7139361
[6] Sahbani A., El-Khoury S., Bidaud P. An overview of 3-D object grasp synthesis algorithms. Rob. Auton. Syst., 2012, vol. 60, no. 3, pp. 326–336. DOI: https://doi.org/10.1016/j.robot.2011.07.016
[7] Caldera S., Rassau A., Chai D. Review of deep learning methods in robotic grasp detection. Multimodal Technologies Interact., 2018, vol. 2, no. 3, pp. 57. DOI: https://doi.org/10.3390/mti2030057
[8] Seliverstova E. Upravlenie mnogopalym zakhvatnym ustroystvom avtonomnogo robota pri zakhvate i manipulirovanii deformiruemymi ob’’ektami. Diss. ... kand. tekh. nauk [Control on polydactyl robotic grasp of autonomous robot at grasping and manipulating of deformable object. Kand. tech. sci. diss.]. Moscow, Bauman MSTU Publ., 2018 (in Russ.).
[9] Kumra S., Kanan C. Robotic grasp detection using deep convolutional neural networks. IEEE/RSJ IROS, 2017, pp. 769–776. DOI: https://doi.org/10.1109/IROS.2017.8202237
[10] Barclay J.R. Stream discharge from Harford, NY. ecommons.cornell.edu: website. URL: http://hdl.handle.net/1813/34425 (accessed November 15, 2019).
[11] Morrison D., Leitner J., Corke P. Closing the loop for robotic grasping: a real-time, generative grasp synthesis approach. RSS, 2018. DOI: https://doi.org/10.15607/RSS.2018.XIV.021