Browsing by Author "De Albuquerque, Victor Hugo C."
Now showing 1 - 10 of 10
Item: Activity Recognition Using Temporal Optical Flow Convolutional Features and Multilayer LSTM (2019-12)
Ullah, Amin; Muhammad, Khan; Del Ser, Javier; Baik, Sung Wook; De Albuquerque, Victor Hugo C.
Nowadays, digital surveillance systems are universally installed for continuously collecting enormous amounts of data, thereby requiring human monitoring for the identification of different activities and events. Smarter surveillance is the need of this era, through which normal and abnormal activities can be automatically identified using artificial intelligence and computer vision technology. In this paper, we propose a framework for activity recognition in surveillance videos captured over industrial systems. The continuous surveillance video stream is first divided into important shots, where shots are selected using the proposed convolutional neural network (CNN) based human saliency features. Next, temporal features of an activity in the sequence of frames are extracted by utilizing the convolutional layers of a FlowNet2 CNN model. Finally, a multilayer long short-term memory (LSTM) is presented for learning long-term sequences in the temporal optical flow features for activity recognition. Experiments (code available at https://github.com/Aminullah6264/Activity-Rec-ML-LSTM)
are conducted using different benchmark action and activity recognition datasets, and the results reveal the effectiveness of the proposed method for activity recognition in industrial settings compared with state-of-the-art methods.

Item: Artificial Intelligence of Things-assisted two-stream neural network for anomaly detection in surveillance Big Video Data (2022-04)
Ullah, Waseem; Ullah, Amin; Hussain, Tanveer; Muhammad, Khan; Heidari, Ali Asghar; Del Ser, Javier; Baik, Sung Wook; De Albuquerque, Victor Hugo C.
In the last few years, visual sensors have been deployed almost everywhere, generating a massive amount of surveillance video data in smart cities that can be inspected intelligently to recognize anomalous events. In this work, we present an efficient and robust framework for recognizing anomalies in surveillance Big Video Data (BVD) using the Artificial Intelligence of Things (AIoT). Smart surveillance is an important application of AIoT, and we propose a two-stream neural network in this direction. The first stream performs instant anomaly detection and is functional over resource-constrained IoT devices, whereas the second phase is a two-stream deep neural network allowing for detailed anomaly analysis, suited to be deployed as a cloud computing service. First, a self-pruned, fine-tuned lightweight convolutional neural network (CNN) classifies ongoing events as normal or anomalous in an AIoT environment. Upon anomaly detection, the edge device alerts the concerned departments and transmits the anomalous frames to a cloud analysis center for detailed evaluation in the second phase. The cloud analysis center resorts to the proposed two-stream network, modeled from the integration of spatiotemporal and optical flow features through the sequential frames. Fused features flow through a bi-directional long short-term memory (BD-LSTM) layer, which classifies them into their respective anomaly classes, e.g., assault and abuse.
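The edge/cloud split described above can be illustrated with a minimal sketch: a cheap edge-side score decides normal vs. anomalous, and only anomalous frames are forwarded for detailed cloud analysis. All names here (`edge_filter`, the toy score) are hypothetical placeholders, not the authors' trained models.

```python
# Hypothetical sketch of the two-phase AIoT dispatch: an edge-side score
# flags anomalous frames; only those are forwarded to the cloud stage.

def edge_filter(frames, score_fn, threshold=0.5):
    """Return (alert indices, forwarded frames) for a stream of frames.

    score_fn stands in for the self-pruned lightweight CNN; any callable
    mapping a frame to an anomaly score in [0, 1] works here.
    """
    alerts, forwarded = [], []
    for i, frame in enumerate(frames):
        if score_fn(frame) >= threshold:
            alerts.append(i)          # alert the concerned department
            forwarded.append(frame)   # transmit frame for cloud analysis
    return alerts, forwarded

# Toy usage: mean pixel intensity as a stand-in anomaly score.
frames = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.7, 0.9]]
score = lambda f: sum(f) / len(f)
alerts, forwarded = edge_filter(frames, score, threshold=0.6)
print(alerts)  # → [1, 3]
```

Only the flagged frames reach the heavier two-stream network, which is what keeps the first phase viable on resource-constrained IoT devices.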
We perform extensive experiments over benchmarks built on top of the UCF-Crime and RWF-2000 datasets to test the effectiveness of our framework, and report a 9.88% and 4.01% increase in accuracy compared to state-of-the-art methods evaluated over these datasets.

Item: Communication Technologies for Edge Learning and Inference: A Novel Framework, Open Issues, and Perspectives (2023-03-01)
Muhammad, Khan; Ser, Javier Del; Magaia, Naercio; Fonseca, Ramon; Hussain, Tanveer; Gandomi, Amir H.; Daneshmand, Mahmoud; De Albuquerque, Victor Hugo C.
With the continuous advancement of smart devices and their demand for data, the complex computation that was previously exclusive to the cloud server is now moving toward the edge of the network. For numerous reasons (e.g., applications demanding low latency and data privacy), data-based computation has been brought closer to its originating source, forging the edge computing paradigm. Together with machine learning, edge computing has become a powerful local decision-making tool, fostering the advent of edge learning. However, the latter is delay-sensitive and resource-thirsty in terms of hardware and networking. New methods have been developed to solve or minimize these issues, as proposed in this study. We first investigated representative communication methods for edge learning and inference (ELI), focusing on data compression, latency, and resource management. Next, we proposed an ELI-based video data prioritization framework that only considers data with events and hence significantly reduces transmission and storage resources when implemented in surveillance networks. Furthermore, we critically examined various communication aspects related to edge learning by analyzing their issues and highlighting their advantages and disadvantages.
Finally, we discussed the challenges and the open issues that remain.

Item: Deep Learning for Safe Autonomous Driving: Current Challenges and Future Directions (2021-07)
Muhammad, Khan; Ullah, Amin; Lloret, Jaime; Ser, Javier Del; De Albuquerque, Victor Hugo C.
Advances in information and signal processing technologies have a significant impact on autonomous driving (AD), improving driving safety while minimizing the effort required of human drivers with the help of advanced artificial intelligence (AI) techniques. Recently, deep learning (DL) approaches have solved several real-world problems of complex nature. However, their strengths in terms of control processes for AD have not yet been deeply investigated and highlighted. This survey highlights the power of DL architectures in terms of reliability and efficient real-time performance, and overviews state-of-the-art strategies for safe AD along with their major achievements and limitations. Furthermore, it covers major embodiments of DL along the AD pipeline, including measurement, analysis, and execution, with a focus on road, lane, vehicle, pedestrian, and drowsiness detection, collision avoidance, and traffic sign detection through sensing and vision-based DL methods. In addition, we discuss the performance of several reviewed methods using different evaluation metrics, with critiques of their pros and cons.
Finally, this survey highlights the current issues of safe DL-based AD, with a prospect of recommendations for future research, rounding up a reference material for newcomers and researchers willing to join this vibrant area of Intelligent Transportation Systems.

Item: DeepReS: A Deep Learning-Based Video Summarization Strategy for Resource-Constrained Industrial Surveillance Scenarios (2020-09)
Muhammad, Khan; Hussain, Tanveer; Del Ser, Javier; Palade, Vasile; De Albuquerque, Victor Hugo C.
The exponential growth in the production of video content in different industries creates an urgent need for effective video summarization (VS) techniques that optimize storage and preserve the key information in the video. Compared to other domains, industrial videos are more challenging to process, as they usually contain diverse and complex events, which makes their online processing a difficult task. In this article, we introduce an online system for intelligent video capturing, coarse and fine redundancy removal, and summary generation. First, we capture video data through resource-constrained devices in an industrial Internet of Things network equipped with vision sensors, and apply coarse redundancy removal through the comparison of low-level features. Second, we transmit the resulting frames to the cloud for detailed analysis, where sequential features are extracted for the selection of candidate keyframes. Finally, we refine the candidate keyframes in order to retain those with maximum information as part of the summary. The key contributions of this article include the coarse and fine refining of video data implemented over resource-restricted devices and the presentation of important data in the form of a summary. Experiments (code available at https://github.com/tanveer-hussain/DeepRes-Video-Summarization)
over publicly available datasets evince a 0.3-unit increase in the F1 score compared to the state of the art, with reduced time complexity. Furthermore, we provide convincing results on our newly created dataset from an industrial environment, which is made publicly available for the research community along with its labeled ground truth.

Item: Group'n Route: An Edge Learning-Based Clustering and Efficient Routing Scheme Leveraging Social Strength for the Internet of Vehicles (2022-10-01)
Magaia, Naercio; Ferreira, Pedro; Pereira, Paulo Rogerio; Muhammad, Khan; Ser, Javier Del; De Albuquerque, Victor Hugo C.
The Internet of Vehicles (IoV) is undoubtedly at the core of the future of intelligent transportation. It will prevail over the road ecosystem and have a huge impact on our lives through the provision of seamless connectivity among diverse means of transportation. For the network to operate efficiently, data needs to be spread quickly throughout the network with low computational and bandwidth overheads. However, the dynamics of vehicular environments due to frequent node mobility pose many challenges to realizing efficient data dissemination. This work addresses this problem by proposing a novel clustering algorithm at the edge of the network and an efficient message routing approach, jointly known as Group'n Route (GnR). Both mechanisms resort to machine learning and graph metrics that reflect the social relationships between nodes. Our performance evaluation reveals that the clustering algorithm yields stable results across varying road scenarios, which makes it an advisable approach in the presence of mobile IoV nodes.
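To make the idea of clustering by social strength concrete, here is a toy illustration (not GnR itself, whose machine-learning components are not reproduced here): edges between vehicles are weighted by contact frequency, weak ties are pruned, and clusters are taken as the remaining connected components. All names and the threshold are hypothetical.

```python
# Toy social-strength clustering: vehicles that meet often form a cluster.
from collections import defaultdict

def social_clusters(contacts, min_strength=2):
    """contacts: iterable of (u, v) vehicle encounters."""
    weight = defaultdict(int)
    for u, v in contacts:
        weight[frozenset((u, v))] += 1
    # keep only socially strong edges
    adj = defaultdict(set)
    for edge, w in weight.items():
        if w >= min_strength:
            u, v = tuple(edge)
            adj[u].add(v)
            adj[v].add(u)
    # connected components of the pruned graph are the clusters
    seen, clusters = set(), []
    nodes = set(n for e in weight for n in e)
    for n in sorted(nodes):
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        clusters.append(sorted(comp))
    return clusters

contacts = [("a", "b"), ("a", "b"), ("b", "c"), ("b", "c"), ("c", "d")]
print(social_clusters(contacts))  # → [['a', 'b', 'c'], ['d']]
```

Vehicle d met c only once, so its tie is pruned and it remains a singleton; such stable social ties are what makes clusters robust to momentary topology changes.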
Also, the designed routing protocol achieves two orders of magnitude smaller overhead and almost double the delivery rate compared to traditional routing protocols, which justifies that the combination of our two proposed clustering and routing methods is a plausible alternative to support IoV communications in real-world setups.

Item: Human Short Long-Term Cognitive Memory Mechanism for Visual Monitoring in IoT-Assisted Smart Cities (2022-05-15)
Wang, Shuai; Liu, Xinyu; Liu, Shuai; Muhammad, Khan; Heidari, Ali Asghar; Ser, Javier Del; De Albuquerque, Victor Hugo C.
In the Industry 4.0 era, the visualization and real-time automatic monitoring of smart cities supported by the Internet of Things is becoming increasingly important. The use of filtering algorithms in smart city monitoring is a feasible method for this purpose. However, maintaining fast and accurate monitoring in complex surveillance environments with restricted resources remains a major challenge. Since cognitive theory is difficult to realize in practice for visual monitoring, efficient monitoring of complex environments is accordingly hard to achieve. Moreover, current monitoring methods do not consider the particularities of the human cognitive system, so their ability to re-monitor a process or target after a monitoring failure is weak. To overcome these issues, this article proposes a novel human short-long cognitive memory mechanism for video surveillance in smart cities. In this mechanism, a memory with a high-reliability target is used as a 'long-term memory,' whereas a memory with a low-reliability target is used as a 'short-term memory.'
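The short/long-term split can be sketched in a few lines: high-reliability appearance models are promoted to a protected long-term store, while low-reliability ones stay in a bounded short-term store. This is an illustrative sketch under assumed names (`CognitiveMemory`, `promote_at`), not the paper's implementation.

```python
# Illustrative short/long-term memory: reliable target models are promoted
# and protected; provisional ones are evictable, so environmental changes
# cannot contaminate a stable model.

class CognitiveMemory:
    def __init__(self, promote_at=0.8, capacity=3):
        self.long_term = {}    # stable, protected appearance models
        self.short_term = {}   # provisional models, bounded capacity
        self.promote_at = promote_at
        self.capacity = capacity

    def update(self, target_id, model, reliability):
        if reliability >= self.promote_at:
            self.long_term[target_id] = model
            self.short_term.pop(target_id, None)
        else:
            self.short_term[target_id] = model
            while len(self.short_term) > self.capacity:
                # evict the oldest provisional entry
                self.short_term.pop(next(iter(self.short_term)))

    def recall(self, target_id):
        # prefer the uncontaminated long-term model on re-monitoring
        return self.long_term.get(target_id, self.short_term.get(target_id))

mem = CognitiveMemory()
mem.update("car1", "model_v1", reliability=0.9)  # promoted to long-term
mem.update("car1", "model_v2", reliability=0.3)  # provisional only
print(mem.recall("car1"))  # → model_v1
```

The low-reliability update never overwrites the promoted model, which mirrors the re-monitoring ability the mechanism aims for after occlusion or blur.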
During the monitoring process, the 'short-term memory' and 'long-term memory' alternation strategy is combined with the stored target appearance characteristics, ensuring that the original model in the memory will not be contaminated or misled by changes in the external environment (occlusion, fast motion, motion blur, and background clutter). Extensive simulations showcase that the proposed algorithm not only improves the monitoring speed without hindering real-time operation, but also monitors and traces the monitored target accurately, ultimately improving the robustness of detection in complex scenery and enabling its application to IoT-assisted smart cities.

Item: Intelligent Embedded Vision for Summarization of Multiview Videos in IIoT (2020-04)
Hussain, Tanveer; Muhammad, Khan; Ser, Javier Del; Baik, Sung Wook; De Albuquerque, Victor Hugo C.
Nowadays, video sensors are used on a large scale for various applications, including security monitoring and smart transportation. However, limited communication bandwidth and storage constraints make it challenging to process such heterogeneous Big Data in real time. Multiview video summarization (MVS) enables us to suppress redundant data in distributed video sensor settings. Existing MVS approaches process video data in an offline manner by transmitting them to a local or cloud server for analysis, which requires extra streaming and huge bandwidth to conduct summarization, and is not applicable for integration with the industrial Internet of Things (IIoT). This article presents a lightweight convolutional neural network (CNN) and IIoT-based computationally intelligent MVS framework. Our method uses an IIoT network containing smart devices, namely Raspberry Pi (RPi) clients and a master, with embedded cameras to capture multiview video data.
Each client RPi detects targets in frames via a lightweight CNN model, analyzes these targets for traffic and crowd density, and searches for suspicious objects to generate alerts in the IIoT network. The frames of each client RPi are encoded and transmitted, with an approximately 17.02% smaller size per frame, to the master RPi for final MVS. Empirical analysis shows that our proposed framework can be used in industrial environments for various applications, such as security and smart transportation, and can prove beneficial for saving resources (code available at https://github.com/tanveer-hussain/Embedded-Vision-for-MVS).

Item: Multiview Summarization and Activity Recognition Meet Edge Computing in IoT Environments (2021-06-15)
Hussain, Tanveer; Muhammad, Khan; Ullah, Amin; Ser, Javier Del; Gandomi, Amir H.; Sajjad, Muhammad; Baik, Sung Wook; De Albuquerque, Victor Hugo C.
Multiview video summarization (MVS) has not received much attention from the research community, due in part to inter-view correlations and overlapping views. The majority of previous MVS works are offline, rely on only a summary, require additional communication bandwidth and transmission time, and do not consider foggy environments. We propose an edge intelligence-based MVS and activity recognition framework that combines artificial intelligence with Internet of Things (IoT) devices. In our framework, resource-constrained devices with cameras use a lightweight CNN-based object detection model to segment multiview videos into shots, followed by mutual information computation that helps in summary generation. Our system does not rely solely on a summary; it encodes and transmits the summary to a master device using a neural computing stick for inter-view correlation computation and efficient activity recognition, an approach that saves computational resources, communication bandwidth, and transmission time.
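Mutual information of the kind used for shot comparison can be computed from a joint probability table over two frames' feature bins. The sketch below is a generic textbook computation under assumed inputs, not the paper's pipeline: high MI indicates strongly dependent (redundant) shots, low MI suggests distinct content worth keeping in the summary.

```python
# Minimal mutual information between two frames' feature-bin distributions.
import math

def mutual_information(joint):
    """joint: 2D list of joint probabilities over two frames' feature bins."""
    px = [sum(row) for row in joint]          # marginal of frame 1
    py = [sum(col) for col in zip(*joint)]    # marginal of frame 2
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# Perfectly dependent bins (identical frames) give 1 bit here;
# independent bins (unrelated frames) give 0.
dependent = [[0.5, 0.0], [0.0, 0.5]]
independent = [[0.25, 0.25], [0.25, 0.25]]
print(round(mutual_information(dependent), 3))    # → 1.0
print(round(mutual_information(independent), 3))  # → 0.0
```

Thresholding such a score between candidate shots is one simple way a summarizer can drop near-duplicate views while keeping distinct ones.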
Experiments show an increase of 0.4 unit in F-measure on the MVS Office dataset, and 0.2% and 2% improved accuracy on the UCF-50 and YouTube-11 datasets, respectively, with lower storage and transmission times. The processing time is reduced from 1.23 s to 0.45 s for a single frame, and MVS is up to 0.75 s faster. A new dataset is constructed by synthetically adding fog to an MVS dataset to show the adaptability of our system to both certain and uncertain IoT surveillance environments.

Item: Vision-Based Semantic Segmentation in Scene Understanding for Autonomous Driving: Recent Achievements, Challenges, and Outlooks (2022-12-01)
Muhammad, Khan; Hussain, Tanveer; Ullah, Hayat; Ser, Javier Del; Rezaei, Mahdi; Kumar, Neeraj; Hijji, Mohammad; Bellavista, Paolo; De Albuquerque, Victor Hugo C.
Scene understanding plays a crucial role in autonomous driving by utilizing sensory data for contextual information extraction and decision making. Beyond modeling advances, the enabler for vehicles to become aware of their surroundings is the availability of visual sensory data, which expands vehicular perception and realizes vehicular contextual awareness in real-world environments. Research directions for scene understanding pursued in related studies include person/vehicle detection and segmentation, transition analysis, and lane change and turn detection, among many others. Unfortunately, these tasks seem insufficient to completely develop fully autonomous vehicles, i.e., to achieve Level-5 autonomy, traveling just like human-controlled cars. This latter statement is among the conclusions drawn from this review paper: scene understanding for autonomous driving cars using vision sensors still requires significant improvements. With this motivation, this survey defines, analyzes, and reviews the current achievements of the scene understanding research area, which mostly relies on computationally complex deep learning models.
Furthermore, it covers the generic scene understanding pipeline, investigates the performance reported by the state of the art, reports the time complexity of avant-garde modeling choices, and highlights the major triumphs and noted limitations of current research efforts. The survey also includes a comprehensive discussion of the available datasets and the challenges that, even if lately confronted by researchers, still remain open to date. Finally, our work outlines future research directions to welcome researchers and practitioners to this exciting domain.