RT Journal Article
T1 Activity Recognition Using Temporal Optical Flow Convolutional Features and Multilayer LSTM
A1 Ullah, Amin
A1 Muhammad, Khan
A1 Del Ser, Javier
A1 Baik, Sung Wook
A1 De Albuquerque, Victor Hugo C.
AB Nowadays, digital surveillance systems are universally installed and continuously collect enormous amounts of data, thereby requiring human monitoring to identify different activities and events. Smarter surveillance is the need of this era, through which normal and abnormal activities can be automatically identified using artificial intelligence and computer vision technology. In this paper, we propose a framework for activity recognition in surveillance videos captured over industrial systems. The continuous surveillance video stream is first divided into important shots, where shots are selected using the proposed convolutional neural network (CNN) based human saliency features. Next, temporal features of an activity in the sequence of frames are extracted by utilizing the convolutional layers of a FlowNet2 CNN model. Finally, a multilayer long short-term memory (LSTM) is presented for learning long-term sequences in the temporal optical flow features for activity recognition. Experiments (code available at https://github.com/Aminullah6264/Activity-Rec-ML-LSTM) are conducted using different benchmark action and activity recognition datasets, and the results reveal the effectiveness of the proposed method for activity recognition in industrial settings compared with state-of-the-art methods.
SN 0278-0046
YR 2019
FD 2019-12
LA eng
NO Ullah, A., Muhammad, K., Del Ser, J., Baik, S. W., & De Albuquerque, V. H. C. 2019, 'Activity Recognition Using Temporal Optical Flow Convolutional Features and Multilayer LSTM', IEEE Transactions on Industrial Electronics, vol. 66, no. 12, 8543495, pp. 9692-9702. https://doi.org/10.1109/TIE.2018.2881943
NO Publisher Copyright: © 1982-2012 IEEE.
NO Manuscript received August 2, 2018; revised October 13, 2018; accepted October 28, 2018. Date of publication November 22, 2018; date of current version July 31, 2019. This work was supported by the National Research Foundation of Korea funded by the Korean government (MSIP) under Grant 2016R1A2B4011712. (Corresponding author: Sung Wook Baik.) A. Ullah and S. W. Baik are with the Intelligent Media Laboratory, Digital Contents Research Institute, Sejong University, Seoul 143-747, South Korea (e-mail: aminullah@ieee.org; sbaik@sejong.ac.kr).
DS TECNALIA Publications
RD 28 September 2024
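
NO The abstract describes a three-stage pipeline: CNN saliency-based shot selection, FlowNet2 optical-flow feature extraction, and a multilayer LSTM classifier over the resulting feature sequences. Below is a minimal PyTorch sketch of only the final stage, assuming per-frame FlowNet2 convolutional features pre-extracted and flattened to 1024-D vectors; the feature dimension, hidden size, layer count, and class count are illustrative assumptions, not values taken from the paper. This is not the authors' released code; see the GitHub repository linked above for that.

    # Illustrative sketch only; dimensions below are assumptions, not the paper's.
    import torch
    import torch.nn as nn

    class MultiLayerLSTMClassifier(nn.Module):
        def __init__(self, feat_dim=1024, hidden=256, layers=2, num_classes=101):
            super().__init__()
            # Stacked (multilayer) LSTM learns long-term temporal dependencies
            # across the sequence of optical-flow convolutional features.
            self.lstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                                batch_first=True)
            self.fc = nn.Linear(hidden, num_classes)

        def forward(self, x):  # x: (batch, time, feat_dim)
            out, _ = self.lstm(x)
            # Classify the activity from the final time step's hidden state.
            return self.fc(out[:, -1])

    # Usage: a batch of 4 clips, each 16 frames of 1024-D flow features.
    model = MultiLayerLSTMClassifier()
    logits = model(torch.randn(4, 16, 1024))  # shape: (4, 101)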