Browsing by Keyword "Activity recognition"
Now showing 1 - 3 of 3
Item: Activity Recognition Using Temporal Optical Flow Convolutional Features and Multilayer LSTM (2019-12)
Ullah, Amin; Muhammad, Khan; Del Ser, Javier; Baik, Sung Wook; De Albuquerque, Victor Hugo C.; IA
Nowadays, digital surveillance systems are installed universally, continuously collecting enormous amounts of data and thereby requiring human monitoring to identify different activities and events. This era demands smarter surveillance, in which normal and abnormal activities can be identified automatically using artificial intelligence and computer vision technology. In this paper, we propose a framework for activity recognition in surveillance videos captured over industrial systems. The continuous surveillance video stream is first divided into important shots, selected using the proposed convolutional neural network (CNN)-based human saliency features. Next, temporal features of an activity in the sequence of frames are extracted using the convolutional layers of a FlowNet2 CNN model. Finally, a multilayer long short-term memory (LSTM) network is presented for learning long-term sequences in the temporal optical flow features for activity recognition. Experiments (code available at https://github.com/Aminullah6264/Activity-Rec-ML-LSTM) are conducted on different benchmark action and activity recognition datasets, and the results demonstrate the effectiveness of the proposed method for activity recognition in industrial settings compared with state-of-the-art methods.

Item: Multiview Summarization and Activity Recognition Meet Edge Computing in IoT Environments (2021-06-15)
Hussain, Tanveer; Muhammad, Khan; Ullah, Amin; Del Ser, Javier; Gandomi, Amir H.; Sajjad, Muhammad; Baik, Sung Wook; De Albuquerque, Victor Hugo C.; IA
Multiview video summarization (MVS) has not received much attention from the research community, owing to challenges such as inter-view correlations and overlapping views.
The majority of previous MVS works are offline, rely only on summary generation, require additional communication bandwidth and transmission time, and give no attention to foggy environments. We propose an edge-intelligence-based MVS and activity recognition framework that combines artificial intelligence with Internet of Things (IoT) devices. In our framework, resource-constrained devices with cameras use a lightweight CNN-based object detection model to segment multiview videos into shots, followed by mutual information computation that helps in summary generation. Our system does not rely solely on a summary: it encodes the summary and transmits it to a master device equipped with a neural computing stick for inter-view correlation computation and efficient activity recognition, an approach which saves computational resources, communication bandwidth, and transmission time. Experiments show an increase of 0.4 unit in F-measure on an MVS Office dataset and accuracy improvements of 0.2% and 2% on the UCF-50 and YouTube 11 datasets, respectively, with lower storage and transmission times. The processing time for a single frame is reduced from 1.23 s to 0.45 s, and MVS is up to 0.75 s faster. A new dataset is constructed by synthetically adding fog to an MVS dataset to show the adaptability of our system to both certain and uncertain IoT surveillance environments.

Item: Robotic Ubiquitous Cognitive Ecology for Smart Homes (2015-12-01)
Amato, G.; Bacciu, D.; Broxvall, M.; Chessa, S.; Coleman, S.; Di Rocco, M.; Dragone, M.; Gallicchio, C.; Gennaro, C.; Lozano, H.; McGinnity, T. M.; Micheli, A.; Ray, A. K.; Renteria, A.; Saffiotti, A.; Swords, D.; Vairo, C.; Vance, P.; Medical Technologies
Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks.
While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming, and human supervision that they require in real-world applications. The RUBICON project develops learning solutions which yield cheaper, adaptive, and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a proof-of-concept smart home scenario. The resulting system is able to provide useful services and proactively assist users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators, and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.
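The first abstract above describes feeding per-frame temporal CNN features (from FlowNet2 convolutional layers) into a multilayer LSTM whose final state is classified into an activity label. The sketch below illustrates that stacked-LSTM stage in plain numpy. Every dimension, weight initialization, and the random stand-in features are illustrative assumptions for exposition, not the authors' actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_layer(xs, W, U, b, hidden_dim):
    # Run one LSTM layer over a sequence xs of shape (T, input_dim);
    # return the hidden-state sequence of shape (T, hidden_dim).
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    hs = []
    for x in xs:
        z = W @ x + U @ h + b                 # all four gates at once, (4*hidden_dim,)
        i, f, o, g = np.split(z, 4)           # input, forget, output gates; cell candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)            # update cell state
        h = o * np.tanh(c)                    # emit hidden state
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(0)
T, D, H, L, C = 16, 64, 32, 2, 10  # frames, feature dim, hidden dim, layers, classes (assumed)

feats = rng.standard_normal((T, D))  # stand-in for per-frame FlowNet2 temporal features
x, in_dim = feats, D
for _ in range(L):  # stack L layers: the "multilayer" LSTM
    W = rng.standard_normal((4 * H, in_dim)) * 0.1
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    x = lstm_layer(x, W, U, b, H)
    in_dim = H

W_out = rng.standard_normal((C, H)) * 0.1
logits = W_out @ x[-1]  # classify the activity from the top layer's last hidden state
print(logits.shape)     # (10,)
```

In practice such a model would be trained end to end with a cross-entropy loss over the activity classes; the point here is only the data flow: frame features in, stacked recurrent states through, one score per activity class out.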