Browsing by Keyword "MLOps"
Now showing 1 - 2 of 2
Item MLPacker: A Unified Software Tool for Packaging and Deploying Atomic and Distributed Analytic Pipelines (Institute of Electrical and Electronics Engineers Inc., 2022) Miñón, Raúl; Díaz-de-Arcaya, Josu; Torre-Bastida, Ana I.; Zarate, Gorka; Moreno-Fernandez-de-Leceta, Aitor; Solic, Petar; Nizetic, Sandro; Rodrigues, Joel J. P. C.; Gonzalez-de-Artaza, Diego Lopez-de-Ipina; Perkovic, Toni; Catarinucci, Luca; Patrono, Luigi; HPA

In recent years, the MLOps (Machine Learning Operations) paradigm has been attracting attention from the community, extrapolating the DevOps (Development and Operations) paradigm to the artificial intelligence (AI) development life-cycle. In this area, several challenges must be addressed to deliver solutions successfully, since AI operationalization involves specific nuances such as model packaging and monitoring. Fortunately, interesting and helpful approaches have emerged from both the research community and industry. However, further research is still necessary to fill key gaps. This paper presents a tool, MLPacker, that addresses some of them. Concretely, the tool provides mechanisms to package and deploy analytic pipelines both as REST APIs and in streaming mode. In addition, analytic pipelines can be deployed atomically (i.e., the whole pipeline on the same machine) or in a distributed fashion (i.e., each stage of the pipeline on a distinct machine). In this way, users can take advantage of the cloud continuum paradigm across the edge, fog and cloud computing layers. Finally, the tool is decoupled from the training stage, sparing data scientists from integrating operationalization code into their experiments. Besides the packaging mode (REST API or streaming), the tool can be configured to perform deployments on local or remote machines, with or without containers.
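The atomic-versus-distributed deployment choice described in the abstract can be sketched roughly as follows. This is an illustrative assumption of the idea, not MLPacker's actual API; the function and host names are hypothetical:

```python
# Hypothetical sketch of the atomic vs. distributed deployment modes the
# MLPacker abstract describes; plan_deployment and the host names are
# illustrative, not part of MLPacker itself.

def plan_deployment(stages, mode, targets=None):
    """Map pipeline stages to machines.

    atomic      -> every stage runs on the same machine
    distributed -> stage i runs on targets[i] (e.g. edge/fog/cloud hosts)
    """
    if mode == "atomic":
        host = (targets or ["localhost"])[0]
        return {stage: host for stage in stages}
    if mode == "distributed":
        if targets is None or len(targets) != len(stages):
            raise ValueError("distributed mode needs one target per stage")
        return dict(zip(stages, targets))
    raise ValueError(f"unknown mode: {mode}")

pipeline = ["preprocess", "infer", "postprocess"]
print(plan_deployment(pipeline, "atomic"))
print(plan_deployment(pipeline, "distributed", ["edge-1", "fog-1", "cloud-1"]))
```

In the distributed case each stage lands on a different edge, fog or cloud host, which is the cloud-continuum placement the abstract refers to.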
To this end, this paper describes the gaps the tool addresses and details the components and flows supported, as well as a scenario with three different case studies to better explain the research conducted.

Item Pangea: An MLOps Tool for Automatically Generating Infrastructure and Deploying Analytic Pipelines in Edge, Fog and Cloud Layers (2022-06-11) Miñón, Raúl; Díaz-de-Arcaya, Josu; Torre-Bastida, Ana I.; Hartlieb, Philipp; HPA

Development and operations (DevOps), artificial intelligence (AI), big data and edge–fog–cloud are disruptive technologies that may produce a radical transformation of the industry. Nevertheless, there are still major challenges to applying them efficiently in order to optimise productivity. Some of them are addressed in this article, concretely with respect to the adequate management of information technology (IT) infrastructures for automated analysis processes in critical fields such as the mining industry. In this area, this paper presents a tool called Pangea aimed at automatically generating suitable execution environments for deploying analytic pipelines. These pipelines are decomposed into various steps so that each one executes in the most suitable environment (edge, fog, cloud or on-premise), minimising latency and optimising the use of both hardware and software resources. Pangea focuses on three distinct objectives: (1) generating the required infrastructure if it does not already exist; (2) provisioning it with the necessary requirements to run the pipelines (i.e., configuring each host's operating system and software, installing dependencies and downloading the code to execute); and (3) deploying the pipelines. To facilitate the use of the architecture, a representational state transfer application programming interface (REST API) is defined for interacting with it.
On top of this API, a web client is also provided. Finally, it is worth noting that, in addition to the production mode, a local development environment can be generated for testing and benchmarking purposes.
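The step-per-layer deployment request that a Pangea-style REST API might accept can be sketched as below. The endpoint semantics, payload fields and pipeline names are assumptions for illustration, not Pangea's documented API:

```python
# Hypothetical client-side sketch of a deployment request for a
# Pangea-style REST API; field names and values are assumed, not taken
# from Pangea's actual interface.
import json

def build_deploy_request(pipeline_name, steps, environment="production"):
    """Build the JSON body for a hypothetical pipeline-deployment call.

    Each step names the layer (edge, fog, cloud or on-premise) where the
    tool should generate infrastructure, provision it and deploy the step.
    """
    return json.dumps({
        "pipeline": pipeline_name,
        # "local" would request the development/benchmarking mode instead
        "environment": environment,
        "steps": [{"name": name, "layer": layer} for name, layer in steps],
    })

body = build_deploy_request(
    "mine-sensors",
    [("ingest", "edge"), ("aggregate", "fog"), ("train", "cloud")],
)
print(body)
```

Splitting the pipeline into named steps, each tagged with its target layer, mirrors the decomposition the abstract describes for minimising latency across edge, fog and cloud.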