Browsing by Author "Minon, Raul"
Now showing 1 - 3 of 3
Item
Implementation of a Large-Scale Platform for Cyber-Physical System Real-Time Monitoring (2019)
Canizo, Mikel; Conde, Angel; Charramendieta, Santiago; Minon, Raul; Cid-Fuentes, Raul G.; Onieva, Enrique; HPA
The emergence of Industry 4.0 and the Internet of Things (IoT) has meant that the manufacturing industry has evolved from embedded systems to cyber-physical systems (CPSs). This transformation has given manufacturers the ability to measure the performance of industrial equipment by means of data gathered from on-board sensors, allowing the status of industrial systems to be monitored and anomalies to be detected. However, the increased amount of measured data has prompted many companies to investigate innovative ways to manage these volumes of data. In recent years, cloud computing and big data technologies have emerged among the scientific communities as key enabling technologies to address the current needs of CPSs. This paper presents a large-scale platform for CPS real-time monitoring based on big data technologies, which aims to perform real-time analysis targeting the monitoring of industrial machines in a real work environment. The proposed solution is validated by implementing it on a real industrial use case that includes several industrial press machines. Formal experiments in a real scenario are conducted to demonstrate the effectiveness of the solution, as well as its adequacy and scalability for future demand requirements. As a result of the implementation of this solution, the overall equipment effectiveness has been improved.

Item
K2E: Building MLOps Environments for Governing Data and Models Catalogues while Tracking Versions (Institute of Electrical and Electronics Engineers Inc., 2022)
Zarate, Gorka; Minon, Raul; Diaz-De-Arcaya, Josu; Torre-Bastida, Ana I.; HPA
Nowadays, a variety of problems are associated with the process of extracting value and information from data, such as data heterogeneity, data distribution, model versioning, and the vast variety of techniques and approaches. As a consequence, the data management process becomes hard to implement in real-world scenarios. In this context, catalogue tools for data and artificial intelligence models alleviate the burden of dealing with versioning tasks, facilitating the automation of data and model management processes in compliance with DataOps and MLOps good practices. This work in progress enumerates key challenges to address when creating these types of catalogues: on the one hand, managing the diversity of the internal nature of data and models and their different versions; on the other hand, providing adequate meta-information and governance tools such as access control and auditing. In this paper, the Knowledge to Environment (K2E) platform is presented, whose architecture defines the components needed for the creation of environments that allow working with data and model catalogues. By environment creation, we mean providing a workspace populated with the datasets and models of an organization, while tracking their distinct versions by using specialised catalogues. In addition, this workspace incorporates added-value tools for governance and auditing. Finally, an approach for implementing K2E is detailed.

Item
MLPacker: A Unified Software Tool for Packaging and Deploying Atomic and Distributed Analytic Pipelines (Institute of Electrical and Electronics Engineers Inc., 2022)
Minon, Raul; Diaz-De-Arcaya, Josu; Torre-Bastida, Ana I.; Zarate, Gorka; Moreno-Fernandez-De-Leceta, Aitor; Solic, Petar; Nizetic, Sandro; Rodrigues, Joel J. P. C.; Gonzalez-de-Artaza, Diego Lopez-de-Ipina; Perkovic, Toni; Catarinucci, Luca; Patrono, Luigi; HPA
In recent years, the MLOps (Machine Learning Operations) paradigm has been attracting attention from the community, extrapolating the DevOps (Development and Operations) paradigm to the artificial intelligence (AI) development life-cycle. In this area, several challenges must be addressed to successfully deliver solutions, since AI operationalization involves specific nuances such as model packaging and monitoring. Fortunately, interesting and helpful approaches have emerged from both the research community and industry. However, further research is still necessary to fill key gaps. This paper presents a tool, MLPacker, for addressing some of them. Concretely, the tool provides mechanisms to package and deploy analytic pipelines both as REST APIs and in streaming mode. In addition, the analytic pipelines can be deployed atomically (i.e., the whole pipeline on the same machine) or in a distributed fashion (i.e., each stage of the pipeline on a distinct machine). In this way, users can take advantage of the cloud continuum paradigm across the edge-fog-cloud computing layers. Finally, the tool is decoupled from the training stage, sparing data scientists from integrating operationalization code into their experiments. Besides the packaging mode (REST API or streaming), the tool can be configured to perform deployments on local or remote machines, with or without containers. To this end, this paper describes the gaps the tool addresses, the components and flows supported, and a scenario with three different case studies to better explain the research conducted.
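The first item above describes real-time monitoring of sensor data from industrial machines with anomaly detection. As a rough illustrative sketch of that kind of check (not the paper's actual implementation, whose platform is built on big data technologies), a rolling-window detector can flag readings that deviate strongly from recent history:

```python
# Hypothetical sketch of a per-sensor real-time check: flag a reading as
# anomalous when it deviates from the rolling window by more than a
# z-score threshold. Class and parameter names are illustrative assumptions.
from collections import deque
import statistics

class RollingAnomalyDetector:
    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` looks anomalous versus the recent window."""
        anomalous = False
        if len(self.window) >= 10:  # need enough history to estimate spread
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous
```

In a platform like the one described, one such detector per sensor channel would sit behind the stream-processing layer; the sketch only shows the scoring rule itself.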
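The K2E item centres on catalogues that track the distinct versions of an organization's datasets and models. A minimal sketch of such version tracking, assuming content-hash deduplication (the `Catalogue` class and its methods are invented for illustration and are not K2E's API):

```python
# Illustrative sketch (not K2E's actual interface) of a catalogue that
# assigns a new version number only when an artifact's content changes.
import hashlib

class Catalogue:
    def __init__(self):
        # name -> list of {"version", "digest", "meta"} entries, oldest first
        self._entries = {}

    def register(self, name, artifact_bytes, meta=None):
        """Register a dataset or model snapshot; return its version number."""
        digest = hashlib.sha256(artifact_bytes).hexdigest()
        versions = self._entries.setdefault(name, [])
        if versions and versions[-1]["digest"] == digest:
            return versions[-1]["version"]  # unchanged content: no new version
        version = len(versions) + 1
        versions.append({"version": version, "digest": digest, "meta": meta or {}})
        return version

    def latest(self, name):
        """Return the most recent catalogue entry for `name`."""
        return self._entries[name][-1]
```

The governance and auditing tools mentioned in the abstract (access control, audit trails) would layer on top of entries like these; the sketch covers only the versioning core.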
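The MLPacker item describes packaging a trained analytic pipeline behind a REST API without requiring changes to the training code. A self-contained sketch of that idea using only the Python standard library (the `/predict` endpoint, the stand-in pipeline, and all names here are assumptions for illustration, not MLPacker's interface):

```python
# Hedged sketch of serving a pipeline as a REST endpoint, decoupled from
# training: the pipeline function stands in for a serialized model that
# MLPacker-style tooling would load at deploy time.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def pipeline(features):
    # Stand-in for a loaded analytic pipeline; returns a toy score.
    return {"score": sum(features) / len(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(pipeline(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the server quiet

def serve(port=0):
    """Bind the prediction server; port=0 picks a free port."""
    return ThreadingHTTPServer(("127.0.0.1", port), PredictHandler)
```

In the distributed mode the abstract mentions, each pipeline stage would run behind its own such endpoint on a different edge, fog, or cloud machine; the atomic mode corresponds to one server hosting the whole pipeline as above.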