Residual Network of Residual Network: A New Deep Learning Modality to Improve Human Activity Recognition by Using Smart Sensors Exposed to Unwanted Shocks
Health Management & Information Science
Article 5, Volume 7, Issue 4, Dey 2020, Pages 228-239 | Full Text (1.27 MB)
Article Type: Original Article
Authors
Mohammad Javad Beirami*1; Seyed Vahab Shojaedini2
1 Faculty of Electrical, Biomedical and Mechatronics Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
2 Associate Professor of Biomedical Engineering, Iranian Research Organization for Science and Technology, Tehran, Iran
Abstract
Background and Objective: Recently, smartphones have been widely used to monitor people's daily activities and check their health. The main challenge in this procedure is distinguishing similar activities from signals recorded by sensors mounted on smartphones and smartwatches. Although deep learning approaches address this challenge better than alternative methods, their performance may be severely degraded, especially when the mounted sensors record disturbed signals because the smartphone or smartwatch is not held in a fixed position. Methods: In this article, a new deep learning structure is introduced to recognize challenging human activities from smartphone and smartwatch signals, even when the recordings are noisy because the sensors are unstable. In the proposed structure, the residual network of residual network (RoR) is used as a new concept inside the deep learning architecture, providing greater stability against disturbed or noisy signals. Results: The performance of the proposed method was evaluated on signals recorded from smartphones and smartwatches and compared with state-of-the-art techniques, including both deep learning and classic (non-deep) schemes. The results show that the proposed method improves the recognition parameters by at least 1.79 percent over deep alternatives in distinguishing challenging activities (i.e., downstairs and upstairs). The improvement reaches at least 32.86 percent over classic methods applied to the same data. Conclusions: The effectiveness of the architecture in recognizing both challenging and non-challenging activities in the presence of unwanted cell phone shocks demonstrates its potential for use as a mobile application for human activity recognition.
Keywords
Human Activity Recognition; Smartphone; Deep Learning; Gradient Flow; Residual Networks of Residual Networks
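As a rough illustration of the residual-network-of-residual-network idea described in the abstract, the sketch below stacks 1-D convolutional residual blocks and wraps them in group-level and network-level shortcuts. It is a minimal PyTorch sketch, not the authors' published architecture: the channel counts, the 128-sample window length, the six sensor channels, and the six activity classes are illustrative assumptions.

```python
# Minimal sketch of a "residual network of residual network" (RoR) classifier
# for windows of tri-axial smartphone/smartwatch sensor signals.
# All layer sizes here are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class ResidualBlock1d(nn.Module):
    """Basic residual block: two 1-D convolutions plus a block-level identity shortcut."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=padding),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size, padding=padding),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # First-level (block-wise) shortcut.
        return self.act(self.body(x) + x)


class RoRGroup(nn.Module):
    """A group of residual blocks wrapped by a second-level shortcut:
    the 'residual of residual' structure."""

    def __init__(self, channels: int, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(
            *[ResidualBlock1d(channels) for _ in range(num_blocks)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Second-level shortcut spanning the whole group of blocks.
        return self.blocks(x) + x


class RoRHARNet(nn.Module):
    """Toy RoR classifier for fixed-length windows of sensor signals."""

    def __init__(self, in_channels: int = 6, num_classes: int = 6):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, 32, kernel_size=5, padding=2)
        self.groups = nn.Sequential(RoRGroup(32), RoRGroup(32))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.stem(x)
        # Top-level shortcut across all groups completes the multi-level design.
        h = self.groups(h) + h
        return self.head(h)


if __name__ == "__main__":
    # One batch of 8 windows, 6 sensor channels, 128 samples per window.
    model = RoRHARNet()
    logits = model(torch.randn(8, 6, 128))
    print(logits.shape)  # torch.Size([8, 6])
```

The extra group-level and top-level shortcuts are what distinguish RoR from a plain residual network: they give the gradient additional short paths during training, which is the kind of stabilized gradient flow the abstract credits for robustness to disturbed sensor signals.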