%0 Book Section
%T Adversarial sample crafting for time series classification with elastic similarity measures
%I Springer Verlag
%D 2018
%@ https://hdl.handle.net/11556/1699
%X Adversarial Machine Learning (AML) refers to the study of the robustness of classification models when they process data samples that have been intelligently manipulated to confuse them. Procedures for crafting such confusing samples exploit concrete vulnerabilities of the learning algorithm of the model at hand, by which perturbations can cause a given data instance to be misclassified. In this context, the literature has so far gravitated around different AML strategies to modify data instances for diverse learning algorithms, in most cases for image classification. This work builds upon this background to address AML for distance-based time series classifiers (e.g., nearest neighbors), in which attacks (i.e., modifications of the samples to be classified by the model) must be intelligently devised by taking into account the similarity measure used to compare time series. In particular, we propose different attack strategies relying on guided perturbations of the input time series, based on gradient information provided by a smoothed version of the distance-based model under attack. Furthermore, we formulate the AML sample crafting process as an optimization problem driven by the Pareto trade-off between (1) a measure of distortion of the crafted sample with respect to its original version and (2) the probability that the crafted sample confuses the model. This problem is efficiently tackled with multi-objective heuristic solvers. Several experiments are discussed to assess whether the crafted adversarial time series succeed in confusing the targeted distance-based model.
%~