Browsing by Author "Kumar, Neeraj"
Now showing 1 - 3 of 3
Item
Fuzzy Logic in Surveillance Big Video Data Analysis (2021-06)
Muhammad, Khan; Obaidat, Mohammad S.; Hussain, Tanveer; Ser, Javier Del; Kumar, Neeraj; Tanveer, Mohammad; Doctor, Faiyaz
CCTV cameras installed for continuous surveillance generate enormous amounts of data daily, giving rise to the term Big Video Data (BVD). Practical uses of BVD include intelligent surveillance and activity recognition, among other challenging tasks. To address these tasks efficiently, the computer vision research community has produced monitoring systems, activity recognition methods, and many other computationally complex solutions for the purposeful use of BVD. Unfortunately, the limited capabilities of these methods, their high computational complexity, and their stringent installation requirements hinder practical deployment in real-world scenarios, which still demand human operators sitting in front of cameras to monitor activities or make actionable decisions based on BVD. Human-like logic, known as fuzzy logic, has been increasingly employed in various data science applications such as control systems, image processing, decision making, routing, and advanced safety-critical systems. This is due to its ability to handle various sources of real-world domain and data uncertainty, generating easily adaptable and explainable data-based models. Fuzzy logic can be used effectively for surveillance as a complement to huge artificial intelligence models and their tiresome training procedures. In this article, we draw researchers' attention to the use of fuzzy logic for surveillance in the context of BVD. We carry out a comprehensive literature survey of methods for vision sensory data analytics that resort to fuzzy logic concepts. Our overview highlights the advantages, downsides, and challenges of existing fuzzy-logic-based video analysis methods for surveillance applications. We enumerate and discuss the datasets used by these methods, and finally provide an outlook toward future research directions derived from our critical assessment of the efforts invested so far in this exciting field.

Item
Vision-based personalized Wireless Capsule Endoscopy for smart healthcare: Taxonomy, literature review, opportunities and challenges (2020-12)
Muhammad, Khan; Khan, Salman; Kumar, Neeraj; Del Ser, Javier; Mirjalili, Seyedali
Wireless Capsule Endoscopy (WCE) is a patient-friendly approach to digestive tract monitoring that supports medical experts in identifying anomalies inside the human Gastrointestinal (GI) tract. Automatic recognition of such abnormalities is essential for early diagnosis and saves time. To this end, several computer-aided diagnosis (CAD) methods have been proposed in the literature for automatic abnormal-region segmentation, summarization, classification, and personalization in WCE videos. In this work, we provide a detailed review of computer vision-based methods for WCE video analysis. Firstly, all the major domains of WCE video analytics are identified, together with their generic workflow. Secondly, we comprehensively review the WCE video analysis methods and surveys presented to date, along with their pros and cons. In addition, this paper reviews several representative public datasets used for the performance assessment of WCE techniques and methods. Finally, the most important aspect of this survey is the identification of several research trends and open issues in different domains of WCE, with an emphasis placed on future research directions towards smarter healthcare and personalization.

Item
Vision-Based Semantic Segmentation in Scene Understanding for Autonomous Driving: Recent Achievements, Challenges, and Outlooks (2022-12-01)
Muhammad, Khan; Hussain, Tanveer; Ullah, Hayat; Ser, Javier Del; Rezaei, Mahdi; Kumar, Neeraj; Hijji, Mohammad; Bellavista, Paolo; De Albuquerque, Victor Hugo C.
Scene understanding plays a crucial role in autonomous driving by utilizing sensory data for contextual information extraction and decision making. Beyond modeling advances, the enabler for vehicles to become aware of their surroundings is the availability of visual sensory data, which expands vehicular perception and realizes vehicular contextual awareness in real-world environments. Research directions in scene understanding pursued by related studies include person/vehicle detection and segmentation, transition analysis, and lane-change and turn detection, among many others. Unfortunately, these tasks seem insufficient for developing fully autonomous vehicles, i.e., achieving Level-5 autonomy and travelling just like human-controlled cars. This is among the conclusions drawn in this review paper: scene understanding for autonomous driving cars using vision sensors still requires significant improvements. With this motivation, this survey defines, analyzes, and reviews the current achievements of the scene understanding research area, which mostly relies on computationally complex deep learning models. Furthermore, it covers the generic scene understanding pipeline, investigates the performance reported by the state of the art, analyzes the time complexity of avant-garde modeling choices, and highlights major triumphs and noted limitations encountered by current research efforts. The survey also includes a comprehensive discussion of the available datasets and of the challenges that, even if recently confronted by researchers, still remain open to date. Finally, our work outlines future research directions to welcome researchers and practitioners to this exciting domain.