Browsing by Author "Hijji, Mohammad"
Now showing 1 - 3 of 3
Item: A fingerprint-based localization algorithm based on LSTM and data expansion method for sparse samples (2022-12)
Authors: Jia, Bing; Qiao, Wenling; Zong, Zhaopeng; Liu, Shuai; Hijji, Mohammad; Del Ser, Javier; Muhammad, Khan

The accuracy of WiFi fingerprint-based localization is related to the number of reference points: to obtain better positioning accuracy, enough samples must generally be collected, which inevitably leads to a heavy sampling workload. It is therefore of great significance to design an algorithm that achieves, from sparse samples, positioning accuracy comparable to that of dense samples. This paper proposes a WiFi fingerprint-based localization algorithm using a Long Short-Term Memory (LSTM) network with explainable features, together with a sparse sample expansion algorithm (PGSE) based on principal component analysis and Gaussian process regression for sparse samples. Specifically, when the number of collected reference points is limited, principal component analysis is used to select access points, and Gaussian process regression is used to model the reference-point coordinates and the corresponding received signal strength values in the training sample set, so as to expand the signal data and construct a new fingerprint database. The effectiveness of the PGSE algorithm is verified on the public dataset 'UJIIndoorLoc'. At the same time, the applicability of the PGSE expansion algorithm to data with temporal information is verified in the fingerprint-based localization method. In addition, this paper also proposes a WiFi-RSSI indoor localization method based on an LSTM network. Extensive experiments are conducted in real scenes, and the results are compared with several existing methods. The results indicate that the proposed method improves the average precision of indoor localization compared to state-of-the-art methods.

Item: Vision-Based Semantic Segmentation in Scene Understanding for Autonomous Driving: Recent Achievements, Challenges, and Outlooks (2022-12-01)
Authors: Muhammad, Khan; Hussain, Tanveer; Ullah, Hayat; Ser, Javier Del; Rezaei, Mahdi; Kumar, Neeraj; Hijji, Mohammad; Bellavista, Paolo; De Albuquerque, Victor Hugo C.

Scene understanding plays a crucial role in autonomous driving by utilizing sensory data for contextual information extraction and decision making. Beyond modeling advances, the enabler for vehicles to become aware of their surroundings is the availability of visual sensory data, which expands vehicular perception and realizes vehicular contextual awareness in real-world environments. Research directions for scene understanding pursued by related studies include person/vehicle detection and segmentation, their transition analysis, and lane-change and turn detection, among many others. Unfortunately, these tasks seem insufficient to completely develop fully autonomous vehicles, i.e., to achieve Level-5 autonomy, traveling just like human-controlled cars. This is among the conclusions drawn from this review paper: scene understanding for autonomous driving cars using vision sensors still requires significant improvements. With this motivation, this survey defines, analyzes, and reviews the current achievements of the scene understanding research area, which mostly relies on computationally complex deep learning models. Furthermore, it covers the generic scene understanding pipeline, investigates the performance reported by the state of the art, reports the time complexity of avant-garde modeling choices, and highlights major triumphs and noted limitations encountered by current research efforts. The survey also includes a comprehensive discussion of the available datasets and the challenges that, even if recently confronted by researchers, still remain open to date. Finally, our work outlines future research directions to welcome researchers and practitioners to this exciting domain.

Item: Visual Appearance and Soft Biometrics Fusion for Person Re-Identification Using Deep Learning (2023-05-01)
Authors: Khan, Samee Ullah; Khan, Noman; Hussain, Tanveer; Muhammad, Khan; Hijji, Mohammad; Del Ser, Javier; Baik, Sung Wook

Learning descriptions of individual pedestrians is a common goal of both person re-identification (P-ReID) and attribute recognition methods, which are typically differentiated only in terms of their granularity. However, existing P-ReID methods only consider identification labels for individual pedestrians. In this article, we present a multi-scale pyramid attention (MSPA) model for P-ReID that jointly exploits the complementarity between semantic attributes and visual appearance to address this limitation. The proposed MSPA method mainly comprises three steps. Initially, a backbone model followed by appearance and attribute networks is trained individually to perform the P-ReID and pedestrian attribute classification tasks. The attribute network primarily focuses on suppressed image areas associated with soft biometric data while retaining the semantic context among attributes using a convolutional long short-term memory architecture. Additionally, the identification network extracts rich contextual features from an image at varying scales using a residual pyramid module. In the second step, the dual network features are fused, and MSPA is re-trained for the P-ReID task to further improve its complementary capabilities. Finally, we experimentally evaluated the proposed model on the two benchmark datasets, Market-1501 and DukeMTMC-reID, and the results show that our approach achieves state-of-the-art performance.
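The data-expansion step described in the first item above (PGSE: principal component analysis for access-point selection, then Gaussian process regression from reference-point coordinates to RSSI) can be sketched as follows. This is a minimal illustration on synthetic data using scikit-learn, not the authors' implementation; the AP-selection criterion, kernel choice, and all parameters here are hypothetical stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Sparse training set: 20 reference points in a 10 m x 10 m area, each with
# RSSI readings from 8 access points (purely synthetic, distance-driven data).
coords = rng.uniform(0, 10, size=(20, 2))                  # (x, y) per reference point
ap_pos = rng.uniform(0, 10, size=(8, 2))                   # hypothetical AP positions
dists = np.linalg.norm(coords[:, None, :] - ap_pos[None, :, :], axis=2)
rssi = -40 - 2.0 * dists + rng.normal(0, 1, size=(20, 8))  # dBm-like values

# Step 1 (AP selection): run PCA over the RSSI matrix and keep the APs that
# load most heavily on the leading components (a stand-in for the paper's
# actual selection criterion).
pca = PCA(n_components=3).fit(rssi)
loading = np.abs(pca.components_).sum(axis=0)
selected_aps = np.argsort(loading)[-5:]                    # keep 5 informative APs

# Step 2 (expansion): model coordinates -> RSSI with Gaussian process
# regression, then predict RSSI at new, unvisited grid points.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(1.0))
gpr.fit(coords, rssi[:, selected_aps])

xx, yy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
new_coords = np.column_stack([xx.ravel(), yy.ravel()])     # 25 synthetic points
new_rssi = gpr.predict(new_coords)                         # synthetic fingerprints

# The expanded fingerprint database combines measured and synthetic samples.
expanded_rssi = np.vstack([rssi[:, selected_aps], new_rssi])
expanded_coords = np.vstack([coords, new_coords])
print(expanded_rssi.shape)  # (45, 5): 20 measured + 25 synthetic fingerprints
```

The expanded database (`expanded_coords`, `expanded_rssi`) would then serve as training data for the downstream localization model, in place of the sparse measured set alone.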