Browsing by Keyword "Adversarial machine learning"
Now showing 1 - 3 of 3
Item: Adversarial sample crafting for time series classification with elastic similarity measures (Springer Verlag, 2018)
Oregi, Izaskun; Del Ser, Javier; Perez, Aritz; Lozano, Jose A.; Quantum
Adversarial Machine Learning (AML) refers to the study of the robustness of classification models when processing data samples that have been intelligently manipulated to confuse them. Procedures for furnishing such confusing samples exploit concrete vulnerabilities of the learning algorithm of the model at hand, through which perturbations can cause a given data instance to be misclassified. In this context, the literature has so far gravitated toward AML strategies that modify data instances for diverse learning algorithms, in most cases for image classification. This work builds upon that background to address AML for distance-based time series classifiers (e.g., nearest neighbors), in which attacks (i.e., modifications of the samples to be classified by the model) must be devised by taking into account the similarity measure used to compare time series. In particular, we propose several attack strategies relying on guided perturbations of the input time series, based on gradient information provided by a smoothed version of the distance-based model under attack. Furthermore, we formulate the AML sample crafting process as an optimization problem driven by the Pareto trade-off between (1) a measure of distortion of the input sample with respect to its original version and (2) the probability that the crafted sample confuses the model. The formulated problem is efficiently tackled by multi-objective heuristic solvers. Several experiments are discussed to assess whether the crafted adversarial time series succeed in confusing the targeted distance-based model.
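To make the crafting procedure above concrete, here is a minimal Python sketch of gradient-guided perturbation against a softmax-smoothed nearest-neighbor classifier. It is not the authors' code: it uses plain Euclidean distance so that the gradient has a simple closed form, whereas the paper targets elastic measures such as DTW, whose smoothed gradients are more involved; all function names and hyperparameters are illustrative.

```python
# Sketch (ours, not the paper's implementation) of gradient-guided adversarial
# crafting against a smoothed distance-based classifier. Euclidean distance
# stands in for the elastic measures used in the paper.
import numpy as np

def soft_nn_probs(x, refs, labels, n_classes, tau=1.0):
    """Differentiable surrogate of 1-NN: softmax over negative distances."""
    d = np.array([np.linalg.norm(x - r) for r in refs])
    w = np.exp(-(d - d.min()) / tau)        # stabilized softmax weights
    w /= w.sum()
    p = np.zeros(n_classes)
    for wi, yi in zip(w, labels):
        p[yi] += wi                          # class probability = neighbor mass
    return p, w, d

def craft(x, y_true, refs, labels, n_classes, eps=0.5, steps=50, lr=0.05, tau=1.0):
    """Gradient ascent on -log p(y_true) under an L-infinity distortion budget."""
    x_adv = x.copy()
    for _ in range(steps):
        p, w, d = soft_nn_probs(x_adv, refs, labels, n_classes, tau)
        grad = np.zeros_like(x_adv)
        for wi, yi, r, di in zip(w, labels, refs, d):
            dd_dx = (x_adv - r) / (di + 1e-12)   # gradient of each distance
            coef = (wi / tau) * ((1.0 if yi == y_true else 0.0) - p[y_true])
            grad += (coef / max(p[y_true], 1e-12)) * dd_dx
        x_adv = x_adv + lr * grad                # increase the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # enforce the budget
    return x_adv

# Example use (synthetic): refs is a list of 1-D series, labels their classes.
# rng = np.random.default_rng(0)
# refs = [rng.normal(size=50) for _ in range(20)]; labels = [i % 2 for i in range(20)]
# x_adv = craft(refs[0] + 0.01, y_true=0, refs=refs, labels=labels, n_classes=2)
```

The paper's Pareto formulation would replace the fixed budget eps with a multi-objective search over distortion and confusion probability, handled there by heuristic solvers; that search is omitted from this sketch.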
Item: Multi-domain Adversarial Variational Bayesian Inference for Domain Generalization (2022)
Gao, Zhifan; Guo, Saidi; Xu, Chenchu; Zhang, Jinglin; Gong, Mingming; Del Ser, Javier; Li, Shuo; IA
Domain generalization aims to learn common knowledge from multiple observed source domains and transfer it to unseen target domains, e.g., object recognition across a variety of visual environments. Traditional domain generalization methods aim to learn a feature representation of the raw data whose distribution is invariant across domains. This relies on the assumption that the two posterior distributions (the distribution of the label given the feature representation, and given the raw data) are stable across domains. However, this does not always hold in practice. In this paper, we relax the above assumption by permitting the posterior distribution of the label given the raw data to change across domains, and thus focus on a more realistic learning problem: inferring a conditional domain-invariant feature representation. Specifically, a multi-domain adversarial variational Bayesian inference approach is proposed to minimize the inter-domain discrepancy of the conditional distributions of the feature given the label. In addition, constraints derived from adversarial learning and a feedback mechanism are imposed to enhance the condition-invariant feature representation. Extensive experiments on two datasets demonstrate the effectiveness of our approach, as well as its state-of-the-art performance compared with thirteen methods. (A condensed sketch of this adversarial conditioning idea appears after the listing.)

Item: Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences (2020-08)
Oregi, Izaskun; Del Ser, Javier; Pérez, Aritz; Lozano, José A.; Quantum; IA
Due to their unprecedented capacity to learn patterns from raw data, deep neural networks have become the de facto modeling choice for complex machine learning tasks. However, recent works have emphasized the vulnerability of deep neural networks when fed intelligently manipulated adversarial data instances tailored to confuse the model. To overcome this issue, a major effort has been made to find methods capable of making deep learning models robust against adversarial inputs. This work presents a new perspective for improving the robustness of deep neural networks in image classification. In computer vision scenarios, adversarial images are crafted by manipulating legitimate inputs so that the target classifier is eventually fooled, while the manipulation remains visually indistinguishable to an external observer. The attack is imperceptible because the human visual system fails to detect minor variations in color space, yet excels at detecting anomalies in geometric shapes. We capitalize on this fact by extracting color gradient features from input images at multiple sensitivity levels to detect possible manipulations. We resort to a deep neural classifier to predict the category of unseen images, whereas a discrimination model analyzes the extracted color gradient features with time series techniques to determine the legitimacy of input images. The performance of our method is assessed in experiments comprising state-of-the-art techniques for crafting adversarial attacks. Results corroborate the increased robustness of the classifier when using our discrimination module, yielding drastically reduced success rates for adversarial attacks that operate on the whole image rather than on localized regions or around the existing shapes of the image. Future research is outlined towards improving the detection accuracy of the proposed method for more general attack strategies.
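The detection pipeline of the third item can be illustrated with a short sketch. The feature definition and decision rule below are our own reading of the abstract, not the published method: an image is summarized as a sequence of edge counts taken at increasing gradient-magnitude thresholds, and the sequence is compared with DTW, an elastic similarity measure, against a reference profile for the class predicted by the deep classifier.

```python
# Illustrative sketch (assumptions ours): edge-count sequences at multiple
# sensitivity levels, compared with DTW. Grayscale input is assumed here;
# the paper works with color gradient features.
import numpy as np

def edge_count_sequence(img, thresholds):
    """Per-threshold count of pixels whose gradient magnitude exceeds it."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return np.array([(mag > t).sum() for t in thresholds], dtype=float)

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def is_legitimate(img, class_profile, thresholds, max_dist):
    """Flag the input as suspicious when its edge-count sequence strays too
    far (in DTW distance) from the profile of the predicted class."""
    seq = edge_count_sequence(img, thresholds)
    return dtw(seq, class_profile) <= max_dist
```

A class profile could be built, for instance, by averaging the edge-count sequences of known-legitimate training images per class, with the distance threshold calibrated on held-out data.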
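Finally, returning to the second item (domain generalization): the condensed PyTorch sketch below shows one way adversarial variational inference over (feature, label) pairs can be set up. It is a generic sketch under our own assumptions, not the paper's architecture: a diagonal-Gaussian encoder q(z|x), a label classifier, and a domain discriminator that receives z concatenated with the one-hot label, so that fooling it pushes the conditional distribution of the feature given the label to match across source domains.

```python
# Generic sketch (not the paper's model) of conditional adversarial
# variational inference for domain invariance. Module sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):                      # q(z | x): diagonal Gaussian
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar

def train_step(enc, clf, disc, opt_main, opt_disc, x, y, d, n_classes, lam=0.1):
    """One adversarial step: the discriminator guesses the source domain d
    from (z, y); encoder and classifier are trained to fool it while still
    predicting y. Conditioning on y targets p(feature | label) invariance."""
    z, mu, logvar = enc(x)
    zy = torch.cat([z, F.one_hot(y, n_classes).float()], dim=1)
    # 1) discriminator update (encoder frozen via detach)
    loss_d = F.cross_entropy(disc(zy.detach()), d)
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()
    # 2) encoder/classifier update: task loss + KL regularizer - confusion
    #    term (disc grads from this pass are cleared at its next update)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    loss = (F.cross_entropy(clf(z), y) + 1e-3 * kl
            - lam * F.cross_entropy(disc(zy), d))
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss.item(), loss_d.item()

# Illustrative wiring (sizes are assumptions):
# enc = Encoder(x_dim=64, z_dim=16); clf = nn.Linear(16, N_CLASSES)
# disc = nn.Linear(16 + N_CLASSES, N_DOMAINS)
# opt_main = torch.optim.Adam([*enc.parameters(), *clf.parameters()], lr=1e-3)
# opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
```

The paper's feedback mechanism and its particular inference objective are not reproduced here; this sketch only shows the adversarial conditioning on the label that distinguishes the conditional-invariance setting from standard domain-adversarial training.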