Browsing by Author "Zarate, Gorka"
Now showing 1 - 3 of 3
Item
Akats: A System for Resilient Deployments on Edge Computing Environments Using Federated Machine Learning Techniques (Institute of Electrical and Electronics Engineers Inc., 2023)
Diaz-De-Arcaya, Josu; Torre-Bastida, Ana I.; Bonilla, Lander; López-De-Armentia, Juan; Miñón, Raúl; Zarate, Gorka; Almeida, Aitor; Solic, Petar; Nizetic, Sandro; Rodrigues, Joel J. P. C.; Lopez-de-Ipina Gonzalez-de-Artaza, Diego; Perkovic, Toni; Catarinucci, Luca; Patrono, Luigi; HPA
Edge computing is a game changer for IoT, as it allows IoT devices to process and analyze data independently instead of just sending it to the cloud. However, managing such a large number of devices and deploying workloads on them in a coordinated, intelligent manner remains a challenge. In this paper, we focus on introducing the resilience dimension into these deployments, and we provide two main contributions: the use of federated machine learning techniques to develop a collaborative tool between the different devices aimed at detecting the possibility of a device failure, and subsequently, the utilization of the inferred information to optimize deployment plans, ensuring resilience across the devices. These two advances are implemented in an intelligent system, Akats, whose architecture is described in detail in this article.
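The collaborative failure-detection idea the abstract describes can be illustrated with a minimal FedAvg-style sketch: each device fits a local failure predictor on its own telemetry and only model weights are shared and aggregated. This is an illustrative sketch, not the Akats implementation; all function names and the choice of logistic regression are assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    # Each device fits a logistic-regression failure predictor on its own telemetry.
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted failure probability
        w -= lr * X.T @ (p - y) / len(y)    # gradient step on the log-loss
    return w

def federated_average(local_weights, sample_counts):
    # Devices share only weights; the aggregate is weighted by local data size (FedAvg).
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))
```

The aggregated model can then be redistributed to the devices, so each one benefits from failure patterns observed elsewhere without raw telemetry ever leaving a device.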
Finally, an application scenario is presented, based on Industry 4.0 machine predictive maintenance, to exemplify the benefits of the proposed intelligent system.

Item
K2E: Building MLOps Environments for Governing Data and Models Catalogues while Tracking Versions (Institute of Electrical and Electronics Engineers Inc., 2022)
Zarate, Gorka; Minon, Raul; Diaz-De-Arcaya, Josu; Torre-Bastida, Ana I.; HPA
Nowadays, there is a variety of problems associated with extracting value and information from data, such as data heterogeneity, data distribution, model versioning, and the vast variety of techniques and approaches. Because of all this, the data management process becomes hard to implement in real-world scenarios. In this context, catalogue tools for data and Artificial Intelligence models alleviate the burden of dealing with versioning tasks. Thus, the automation of the data and model management processes is facilitated, complying with DataOps and MLOps best practices. This work in progress enumerates key challenges to address when creating these types of catalogues: on the one hand, managing the diverse internal nature of data and models and their different versions; on the other hand, providing adequate meta-information and governance tools such as access control and auditing. In this paper, the Knowledge to Environment (K2E) platform is presented, whose architecture aims to define the necessary components for creating environments that allow working with data and model catalogues. By environment creation, we mean providing a workspace populated with the datasets and models of an organization, while tracking their distinct versions through specialised catalogues. In addition, this workspace incorporates added-value tools for governance and auditing.
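The version-tracked catalogue the K2E abstract describes can be sketched in miniature: entries are registered under a name, versions are assigned automatically, and governance meta-information travels with each version. This is a hypothetical illustration of the concept, not K2E's actual data model; all class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CatalogueEntry:
    name: str      # dataset or model identifier
    version: int   # 1-based, assigned automatically on registration
    metadata: dict # governance meta-information (owner, schema, metrics, ...)

class Catalogue:
    """Minimal version-tracked catalogue for datasets or models."""

    def __init__(self):
        self._entries = {}  # name -> list of entries, oldest first

    def register(self, name, metadata):
        versions = self._entries.setdefault(name, [])
        versions.append(CatalogueEntry(name, len(versions) + 1, metadata))
        return versions[-1].version

    def get(self, name, version=None):
        # Latest version by default, or a specific one when requested.
        versions = self._entries[name]
        return versions[-1] if version is None else versions[version - 1]
```

A real catalogue would persist entries and enforce access control on `register` and `get`; the sketch only shows the versioning contract.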
Finally, an approach for implementing K2E is detailed.

Item
MLPacker: A Unified Software Tool for Packaging and Deploying Atomic and Distributed Analytic Pipelines (Institute of Electrical and Electronics Engineers Inc., 2022)
Minon, Raul; Diaz-De-Arcaya, Josu; Torre-Bastida, Ana I.; Zarate, Gorka; Moreno-Fernandez-De-Leceta, Aitor; Solic, Petar; Nizetic, Sandro; Rodrigues, Joel J. P. C.; Gonzalez-de-Artaza, Diego Lopez-de-Ipina; Perkovic, Toni; Catarinucci, Luca; Patrono, Luigi; HPA
In recent years, the MLOps (Machine Learning Operations) paradigm has been attracting attention from the community, extrapolating the DevOps (Development and Operations) paradigm to the artificial intelligence (AI) development life-cycle. In this area, some challenges must be addressed to successfully deliver solutions, since AI operationalization has specific nuances such as model packaging or monitoring. Fortunately, interesting and helpful approaches have emerged from both the research community and industry. However, further research is still necessary to fill key gaps. This paper presents a tool, MLPacker, that addresses some of them. Concretely, the tool provides mechanisms to package and deploy analytic pipelines both as REST APIs and in streaming mode. In addition, analytic pipelines can be deployed atomically (i.e., the whole pipeline on the same machine) or in a distributed fashion (i.e., each stage of the pipeline on a distinct machine). In this way, users can take advantage of the cloud continuum paradigm, considering the edge, fog, and cloud computing layers. Finally, the tool is decoupled from the training stage, sparing data scientists from integrating operationalization code into their experiments. Besides the packaging mode (REST API or streaming), the tool can be configured to perform deployments on local or remote machines, with or without containers.
For this aim, this paper describes the gaps the tool addresses and the components and flows supported in detail, as well as a scenario with three case studies to better explain the research conducted.
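The atomic-versus-distributed distinction in the MLPacker abstract can be sketched as follows: an atomic package composes all pipeline stages into one callable for a single machine, while a distributed package pairs each stage with its own target host across the edge-fog-cloud continuum. This is an illustrative sketch of the concept, not MLPacker's API; the function names and the plain-callable stage representation are assumptions.

```python
def package_atomic(stages):
    # Atomic deployment: the whole pipeline runs as one callable on a single machine.
    def pipeline(payload):
        for stage in stages:
            payload = stage(payload)
        return payload
    return pipeline

def package_distributed(stages, targets):
    # Distributed deployment: each stage is paired with its own target machine
    # (e.g. an edge, fog, or cloud host) and can be shipped independently.
    if len(stages) != len(targets):
        raise ValueError("exactly one deployment target per stage is required")
    return list(zip(targets, stages))
```

Either packaging yields the same end-to-end result; the distributed plan simply lets each stage be deployed where its resource needs are best met.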