Keyword Search Result

[Keyword] example (36 hits)

Showing results 1-20 of 36

  • Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface

    Tatsuya OYAMA  Kota YOSHIDA  Shunsuke OKURA  Takeshi FUJINO  

     
    PAPER

      Publicized:
    2023/09/26
      Vol:
    E107-A No:3
      Page(s):
    344-354

    Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack method on image-classification systems using deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported and are a threat to the traffic-sign-recognition DNNs used in advanced driver assistance systems. We previously proposed an attack method that generates a noise area on images by superimposing an electrical signal on the mobile industry processor interface, and showed that it can generate a single adversarial mark that triggers a backdoor attack on the input image. Building on this, we propose a misclassification attack on DNNs that creates AEs containing small perturbations at multiple places on the image via fault injection. The perturbation positions for the AEs are pre-calculated against the target traffic-sign image, which will be captured during future driving. In simulation, a perturbation covering 5.2% to 5.5% of a specific image was calculated that induces misclassification to the target label. In our experiments, we confirmed that a traffic-sign-recognition DNN running on a Raspberry Pi was successfully misclassified when the target traffic sign was captured. In addition, we created robust AEs that cause misclassification of images with varying positions and sizes by adding a common perturbation, and we propose a method to reduce the amount of perturbation these robust AEs require. Our results demonstrate successful misclassification of the captured image with a high attack success rate even if the position and size of the captured image change slightly.
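
    The attack above boils down to optimizing a perturbation confined to a small, pre-chosen pixel region (roughly 5% of the image) so that the classifier flips to a target label. Below is a minimal sketch under that reading; `model`, `image`, `mask`, and `target` are hypothetical stand-ins, and the MIPI fault-injection hardware itself is not modeled.

```python
# Hypothetical sketch of a mask-confined targeted attack (not the
# authors' code). model: trained classifier; image: 1x3xHxW in [0,1];
# mask: 1x1xHxW with 1s on the few attackable regions (~5% of pixels);
# target: LongTensor of shape (1,) holding the desired class.
import torch
import torch.nn.functional as F

def masked_target_attack(model, image, mask, target, steps=100, lr=0.01):
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0, 1)    # perturb masked pixels only
        loss = F.cross_entropy(model(adv), target)  # pull toward target label
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta.detach() * mask).clamp(0, 1)
```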

  • Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network

    Hyun KWON  

     
    LETTER-Information Network

      Publicized:
    2022/11/07
      Vol:
    E106-D No:2
      Page(s):
    262-266

    Deep neural networks (DNNs) perform well for image recognition, speech recognition, and pattern analysis. However, such neural networks are vulnerable to adversarial examples. An adversarial example is a data sample created by adding a small amount of noise to an original sample in such a way that it is difficult for humans to identify but that will cause the sample to be misclassified by a target model. In a military environment, adversarial examples that are correctly classified by a friendly model while deceiving an enemy model may be useful. In this paper, we propose a method for generating a selective adversarial example that is correctly classified by a friendly gait recognition system and misclassified by an enemy gait recognition system. The proposed scheme generates the selective adversarial example by combining the loss for correct classification by the friendly gait recognition system with the loss for misclassification by the enemy gait recognition system. In our experiments, we used the CASIA Gait Database as the dataset and TensorFlow as the machine learning library. The results show that the proposed method can generate selective adversarial examples that have a 98.5% attack success rate against an enemy gait recognition system and are classified with 87.3% accuracy by a friendly gait recognition system.
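
    A minimal sketch of the combined loss described above, assuming two hypothetical trained PyTorch classifiers `friendly` and `enemy`; this illustrates the idea rather than reproducing the paper's implementation.

```python
# Hypothetical sketch of the selective loss (not the paper's code).
# friendly/enemy: trained classifiers; y_true/y_target: shape-(1,)
# LongTensors for the correct and attacker-chosen labels.
import torch
import torch.nn.functional as F

def selective_example(friendly, enemy, x, y_true, y_target,
                      steps=200, lr=0.01, alpha=1.0):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        loss = (F.cross_entropy(friendly(adv), y_true)           # stay correct
                + alpha * F.cross_entropy(enemy(adv), y_target)  # deceive enemy
                + 0.01 * delta.pow(2).sum())                     # small distortion
        opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta.detach()).clamp(0, 1)
```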

  • Projection-Based Physical Adversarial Attack for Monocular Depth Estimation

    Renya DAIMO  Satoshi ONO  

     
    LETTER

      Publicized:
    2022/10/17
      Vol:
    E106-D No:1
      Page(s):
    31-35

    Monocular depth estimation has improved drastically due to the development of deep neural networks (DNNs). However, recent studies have revealed that DNNs for monocular depth estimation contain vulnerabilities that can lead to misestimation when perturbations are added to the input. This study investigates whether DNNs for monocular depth estimation are vulnerable to misestimation when patterned light is projected onto an object using a video projector. To this end, this study proposes an evolutionary adversarial attack method with a multi-fidelity evaluation scheme that allows creating adversarial examples under black-box conditions while suppressing the computational cost. Experiments in both simulated and real scenes showed that the designed light pattern caused a DNN to misestimate objects as if they had moved farther back.
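
    A minimal sketch of an evolutionary black-box attack with a cheap low-fidelity pre-screen, in the spirit of the multi-fidelity scheme above; `score_lo` and `score_hi` are hypothetical fitness functions (e.g., depth-map error at reduced versus full resolution), not the paper's exact algorithm.

```python
# Hypothetical (1+1) evolution strategy with multi-fidelity evaluation.
# score_lo/score_hi: black-box fitness functions; higher = stronger attack.
import numpy as np

def one_plus_one_es(pattern, score_lo, score_hi, iters=300, sigma=0.1):
    best, best_fit = pattern, score_hi(pattern)
    for _ in range(iters):
        cand = np.clip(best + sigma * np.random.randn(*best.shape), 0, 1)
        if score_lo(cand) >= score_lo(best):  # cheap pre-screen first
            fit = score_hi(cand)              # costly full evaluation
            if fit > best_fit:
                best, best_fit = cand, fit
    return best
```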

  • Adversarial Example Detection Based on Improved GhostBusters

    Hyunghoon KIM  Jiwoo SHIN  Hyo Jin JO  

     
    LETTER

      Publicized:
    2022/04/19
      Vol:
    E105-D No:11
      Page(s):
    1921-1922

    In various studies of attacks on autonomous vehicles (AVs), a phantom attack, in which an advanced driver assistance system (ADAS) misclassifies a fake object created by an adversary as a real object, has been proposed. In this paper, we propose F-GhostBusters, an improved version of GhostBusters that detects phantom attacks. The proposed model uses a new feature, i.e., the frequency characteristics of images. Experimental results show that F-GhostBusters not only improves the detection performance of GhostBusters but also complements its accuracy against adversarial examples.
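
    One plausible form of the frequency feature mentioned above is the log-magnitude spectrum of an image; the exact feature F-GhostBusters uses may differ, so this is only a sketch.

```python
# Hypothetical frequency feature: log-magnitude spectrum of a grayscale image.
import numpy as np

def frequency_feature(gray_image):
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    return np.log1p(np.abs(spectrum))  # log scale tames the dynamic range
```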

  • Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier

    Hyun KWON  Changhyun CHO  Jun LEE  

     
    PAPER

      Publicized:
    2022/08/23
      Vol:
    E105-D No:11
      Page(s):
    1880-1889

    Deep neural networks (DNNs) provide excellent service in machine learning tasks such as image recognition, speech recognition, pattern recognition, and intrusion detection. However, an adversarial example, created by adding a little noise to the original data, can cause misclassification by a DNN even though the human eye cannot tell it apart from the original data. For example, if an attacker creates a modified right-turn traffic sign that is incorrectly categorized by a DNN, an autonomous vehicle with the DNN will incorrectly classify the modified right-turn traffic sign as a U-turn sign, while a human will correctly classify the changed sign as a right-turn sign. Such adversarial examples are a serious threat to DNNs. Recently, an adversarial example with multiple targets was introduced that causes misclassification by multiple models, each into its own target class, using a single modified image. However, it has the weakness that as the number of target models increases, the overall attack success rate decreases. Therefore, if there are multiple models that the attacker wishes to attack, the attacker must control the attack success rate for each model by considering the attack priority for each model. In this paper, we propose a priority adversarial example that considers the attack priority for each model when targeting multiple models. The proposed method controls the attack success rate for each model by adjusting the weight of the attack function in the generation process while maintaining minimal distortion. We used MNIST and CIFAR10 as datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method can control the attack success rate for each model by considering each model's attack priority while maintaining minimal distortion (on average 3.95 and 2.45 with MNIST for targeted and untargeted attacks, respectively, and on average 51.95 and 44.45 with CIFAR10 for targeted and untargeted attacks, respectively).
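
    The priority idea above can be sketched as a weighted sum of per-model attack losses plus a distortion term; `models`, `targets`, and `weights` below are hypothetical, and this is not the paper's code.

```python
# Hypothetical sketch: priority-weighted multi-model attack losses.
# models: list of classifiers; targets: list of shape-(1,) LongTensors;
# weights: per-model attack priorities.
import torch
import torch.nn.functional as F

def priority_example(models, targets, weights, x, steps=200, lr=0.01):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        loss = 0.01 * delta.pow(2).sum()   # keep distortion minimal
        for m, t, w in zip(models, targets, weights):
            loss = loss + w * F.cross_entropy(m(adv), t)  # priority weight
        opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta.detach()).clamp(0, 1)
```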

  • Label-Adversarial Jointly Trained Acoustic Word Embedding

    Zhaoqi LI  Ta LI  Qingwei ZHAO  Pengyuan ZHANG  

     
    LETTER-Speech and Hearing

      Publicized:
    2022/05/20
      Vol:
    E105-D No:8
      Page(s):
    1501-1505

    Query-by-example spoken term detection (QbE-STD) is a task of using speech queries to match utterances, and the acoustic word embedding (AWE) method of generating fixed-length representations for speech segments has shown high performance and efficiency in recent work. We propose an AWE training method using a label-adversarial network to reduce the interference information learned during AWE training. Experiments demonstrate that our method achieves significant improvements on multilingual and zero-resource test sets.
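
    Label-adversarial training is commonly built from a gradient-reversal layer; whether the paper uses exactly this mechanism is an assumption, so the PyTorch snippet below is only a sketch of the standard construction.

```python
# Gradient-reversal layer: identity in the forward pass, flipped
# gradient in the backward pass, so the embedding learns to *confuse*
# the adversarial label head.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def grad_reverse(x):
    # Insert between the AWE encoder and the label (adversary) classifier.
    return GradReverse.apply(x)
```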

  • Adversarial Scan Attack against Scan Matching Algorithm for Pose Estimation in LiDAR-Based SLAM Open Access

    Kota YOSHIDA  Masaya HOJO  Takeshi FUJINO  

     
    PAPER

      Publicized:
    2021/10/26
      Vol:
    E105-A No:3
      Page(s):
    326-335

    Autonomous robots are controlled using physical information acquired by various sensors. These sensors are susceptible to physical attacks that tamper with the observed values and interfere with control of the autonomous robots. Recently, sensor spoofing attacks targeting the downstream algorithms that consume sensor data have become serious threats. In this paper, we introduce a new attack against the LiDAR-based simultaneous localization and mapping (SLAM) algorithm. The attack uses an adversarial LiDAR scan to fool the pose graph and the generated map. The adversary calculates a falsification amount for deceiving pose estimation and physically injects the spoofed distances into the LiDAR. The falsification amount is calculated by a gradient method on a cost function of the scan matching algorithm. The SLAM algorithm then generates a wrong map from the deceived movement path estimated by scan matching. We evaluated our attack on two typical scan matching algorithms, iterative closest point (ICP) and the normal distribution transform (NDT). Our experimental results show that SLAM can be fooled by tampering with the scan, and that simple odometry sensor fusion is not a sufficient countermeasure. We argue that it is important to detect or prevent tampering with LiDAR scans and to notice inconsistencies in sensors caused by physical attacks.
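
    A toy version of the falsification step, under simplifying assumptions: gradient descent on a point-to-point matching cost so the scan fits the map best at an attacker-chosen, wrong pose. Real ICP/NDT pipelines re-estimate the pose every iteration; this sketch keeps it fixed and is not the paper's code.

```python
# Hypothetical sketch: falsify LiDAR ranges by differentiating a
# simplified scan-matching cost. ranges/angles: 1-D tensors (N,);
# map_pts: (M, 2); spoof_shift: the wrong pose offset, shape (2,).
import torch

def falsify_ranges(ranges, angles, map_pts, spoof_shift,
                   steps=100, lr=0.05):
    delta = torch.zeros_like(ranges, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = map_pts + spoof_shift          # map as seen from the wrong pose
    for _ in range(steps):
        r = ranges + delta                  # injected distance falsification
        scan = torch.stack((r * torch.cos(angles), r * torch.sin(angles)), 1)
        cost = torch.cdist(scan, target).min(1).values.pow(2).mean()
        cost = cost + 0.1 * delta.pow(2).mean()  # keep falsification small
        opt.zero_grad(); cost.backward(); opt.step()
    return ranges + delta.detach()
```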

  • MTGAN: Extending Test Case set for Deep Learning Image Classifier

    Erhu LIU  Song HUANG  Cheng ZONG  Changyou ZHENG  Yongming YAO  Jing ZHU  Shiqi TANG  Yanqiu WANG  

     
    PAPER-Software Engineering

      Publicized:
    2021/02/05
      Vol:
    E104-D No:5
      Page(s):
    709-722

    In recent years, deep learning has achieved excellent results in image recognition, voice processing, and other research areas, setting off a new upsurge of research and application. Internal defects and external malicious attacks may threaten the safe and reliable operation of a deep learning system and even cause unbearable consequences. The technology for testing deep learning systems is still in its infancy: traditional software testing techniques are not applicable to deep learning systems, and characteristics of deep learning such as complex application scenarios, high-dimensional input data, and poor interpretability of operation logic bring new challenges to testing. This paper focuses on the problem of test case generation and points out that adversarial examples can be used as test cases. It then proposes MTGAN, a framework based on generative adversarial networks (GANs) that generates test cases for deep learning image classifiers. Finally, the paper evaluates the effectiveness of MTGAN.

  • Adversarial Black-Box Attacks with Timing Side-Channel Leakage

    Tsunato NAKAI  Daisuke SUZUKI  Fumio OMATSU  Takeshi FUJINO  

     
    PAPER

      Vol:
    E104-A No:1
      Page(s):
    143-151

    Artificial intelligence (AI), especially deep learning (DL), has made remarkable progress and been applied to various industries. However, adversarial examples (AEs), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack that crafts AEs using only processing time, side-channel information of DNNs, without using training data, model architecture and parameters, substitute models, or output probabilities. While several existing black-box attacks use output probabilities, our attack exploits the relationship between the number of activated nodes and the processing time of DNNs. In our attack, the perturbations for the AEs are determined from differences in processing time across input data. Our experimental results show that the AEs crafted by our attack increase the number of activated nodes and effectively cause misclassification to an incorrect label. In addition, the results highlight that our attack can evade gradient-masking countermeasures, which mask output probabilities to prevent several black-box attacks from crafting AEs.
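
    The side channel itself is simple to observe: per-input inference time, which the paper relates to the number of activated nodes. A minimal measurement sketch, with `model` as any hypothetical callable classifier:

```python
# Measure per-input inference time; taking the minimum over repeats
# filters out OS scheduling jitter.
import time

def inference_time(model, x, repeats=50):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        model(x)                               # forward pass only
        best = min(best, time.perf_counter() - t0)
    return best
```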

  • Example Phrase Adaptation Method for Customized, Example-Based Dialog System Using User Data and Distributed Word Representations

    Norihide KITAOKA  Eichi SETO  Ryota NISHIMURA  

     
    PAPER-Speech and Hearing

      Publicized:
    2020/07/30
      Vol:
    E103-D No:11
      Page(s):
    2332-2339

    We have developed an adaptation method which allows the customization of example-based dialog systems for individual users by applying “plus” and “minus” operations to the distributed representations obtained using the word2vec method. After retrieving user-related profile information from the Web, named entity extraction is applied to the retrieval results. Words with a high term frequency-inverse document frequency (TF-IDF) score are then adopted as user-related words. Next, we calculate the similarity between the distributed representations of selected user-related words and nouns in the existing example phrases, using word2vec embeddings. We then generate phrases adapted to the user by substituting user-related words for highly similar words in the original example phrases. Word2vec also has a special property which allows the arithmetic operations “plus” and “minus” to be applied to distributed word representations. By applying these operations to words used in the original phrases, we are able to determine which user-related words can be used to replace the original words. The user-related words are then substituted to create customized example phrases. We evaluated the naturalness of the generated phrases and found that the system could generate natural phrases.
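
    A sketch of the substitution step with gensim word vectors; the model file and all example tokens below are placeholders, not the authors' data.

```python
# Hypothetical sketch of word2vec "plus"/"minus" arithmetic and a
# similarity check before substituting a user-related word.
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# Analogy-style arithmetic on distributed representations:
print(kv.most_similar(positive=["restaurant", "osaka"],
                      negative=["tokyo"], topn=3))

# Substitute only when the user-related word is close enough to the
# noun in the existing example phrase (threshold is illustrative):
if kv.similarity("ramen", "noodle") > 0.6:
    print("substitute the user-related word into the example phrase")
```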

  • Robust CAPTCHA Image Generation Enhanced with Adversarial Example Methods

    Hyun KWON  Hyunsoo YOON  Ki-Woong PARK  

     
    LETTER-Information Network

      Publicized:
    2020/01/15
      Vol:
    E103-D No:4
      Page(s):
    879-882

    Malicious attackers on the Internet use automated attack programs to disrupt the use of services via mass spamming, unnecessary bulletin board posting, and account creation. A Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is used as a security solution to prevent such automated attacks. CAPTCHA determines whether the user is a machine or a person by presenting distorted letters, voices, and images that only humans can understand. However, new attack techniques such as optical character recognition (OCR) and deep neural networks (DNNs) have been used to bypass CAPTCHA. In this paper, we propose a method to generate CAPTCHA images by using the fast gradient sign method (FGSM), iterative FGSM (I-FGSM), and the DeepFool method. We used the CAPTCHA images provided by Python as the dataset and TensorFlow as the machine learning library. The experimental results show that the CAPTCHA images generated via the FGSM, I-FGSM, and DeepFool methods exhibit a 0% recognition rate with ε=0.15 for FGSM, a 0% recognition rate with α=0.1 and 50 iterations for I-FGSM, and a 45% recognition rate with 150 iterations for the DeepFool method.
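
    For reference, single-step FGSM with the ε reported above (0.15); `model` is a hypothetical classifier over rendered CAPTCHA images, and this is a generic sketch of the method rather than the paper's code.

```python
# FGSM: one signed-gradient step on the loss, then clip back to the
# valid pixel range.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.15):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```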

  • Multi-Targeted Backdoor: Identifying Backdoor Attack for Multiple Deep Neural Networks

    Hyun KWON  Hyunsoo YOON  Ki-Woong PARK  

     
    LETTER-Information Network

      Publicized:
    2020/01/15
      Vol:
    E103-D No:4
      Page(s):
    883-887

    We propose a multi-targeted backdoor that misleads different models to different classes. The method trains multiple models with data that include specific triggers that will be misclassified by different models into different classes. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method with a trigger can cause misclassification into different classes by different models with a 100% attack success rate on MNIST and Fashion-MNIST, while maintaining 97.18% and 91.1% accuracy, respectively, on data without a trigger.
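
    A sketch of the poisoning step under simplifying assumptions: the same trigger patch is paired with a different target label in each model's training set. Array shapes and the corner trigger are illustrative, not the paper's setup.

```python
# Hypothetical multi-targeted backdoor poisoning.
# images: float array (N, H, W) in [0, 1]; labels: int array (N,).
import numpy as np

def stamp_trigger(images, value=1.0):
    stamped = images.copy()
    stamped[:, -4:, -4:] = value               # 4x4 trigger in a corner
    return stamped

def poisoned_set(images, labels, target_label, ratio=0.1):
    n = int(len(images) * ratio)
    extra_x = stamp_trigger(images[:n])
    extra_y = np.full(n, target_label)         # model-specific target class
    return (np.concatenate([images, extra_x]),
            np.concatenate([labels, extra_y]))

# Train model A with target_label=0 (e.g. stop), model B with 1, etc.
```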

  • Simple Black-Box Adversarial Examples Generation with Very Few Queries

    Yuya SENZAKI  Satsuya OHATA  Kanta MATSUURA  

     
    PAPER-Reliability and Security of Computer Systems

      Publicized:
    2019/10/02
      Vol:
    E103-D No:2
      Page(s):
    212-221

    Research on adversarial examples for machine learning has received much attention in recent years. Most previous approaches are white-box attacks, meaning the attacker needs to obtain the internal parameters of a target classifier beforehand to generate adversarial examples for it. This condition is hard to satisfy in practice. There is also research on black-box attacks, in which the attacker can only obtain partial information about target classifiers; however, it seems these attacks can be prevented, since they need to issue many suspicious queries to the target classifier. In this paper, we show that a naive defense strategy based on surveillance of the number of queries will not suffice. More concretely, we propose to generate not pixel-wise but block-wise adversarial perturbations to reduce the number of queries. Our experiments show that such rough perturbations can confuse the target classifier, and we succeed in reducing the number of queries needed to generate adversarial examples in most cases. Our simple method is an untargeted attack and may have lower success rates than previous black-box attacks, but it needs fewer queries on average. Surprisingly, the minimum number of queries (one and three on the MNIST and CIFAR-10 datasets, respectively) is enough to generate adversarial examples in some cases. Moreover, based on these results, we propose a detailed classification of black-box attackers and discuss countermeasures against the above attacks.
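
    A minimal sketch of the block-wise idea: draw one coarse ±ε pattern per block, expand it to pixels, and spend one query per candidate. `query_label` is a hypothetical black-box oracle; this is not the authors' code.

```python
# Hypothetical block-wise black-box attack.
# x: float image (H, W, C) in [0, 1]; each acceptance test = one query.
import numpy as np

def blockwise_attack(x, true_label, query_label, block=8, eps=0.3,
                     max_queries=20):
    h, w = x.shape[:2]
    for q in range(max_queries):
        coarse = np.random.choice([-eps, eps],
                                  size=(h // block, w // block, 1))
        noise = np.kron(coarse, np.ones((block, block, 1)))  # blocks, not pixels
        adv = np.clip(x + noise, 0, 1)
        if query_label(adv) != true_label:
            return adv, q + 1        # success after q+1 queries
    return None, max_queries
```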

  • Improved Majority Filtering Algorithm for Cleaning Class Label Noise in Supervised Learning

    Muhammad Ammar MALIK  Jae Young CHOI  Moonsoo KANG  Bumshik LEE  

     
    LETTER-Digital Signal Processing

      Vol:
    E102-A No:11
      Page(s):
    1556-1559

    In most supervised learning problems, the labelling quality of datasets plays a paramount role in the learning of high-performance classifiers. The performance of a classifier can be significantly degraded if it is trained with mislabeled data, so identification of such examples in the dataset is of critical importance. In this study, we propose an improved majority filtering algorithm that utilizes the ability of a support vector machine to capture potentially mislabeled examples as support vectors (SVs). The key technical contribution of our work is that the base (or component) classifiers constituting the ensemble are trained on non-SV examples, while at testing time the examples captured as SVs are evaluated. An example is tagged as mislabeled if the majority of the base classifiers misclassify it. Experimental results confirm that our algorithm not only achieved high accuracy, with higher F1 scores, in identifying mislabeled examples, but was also significantly faster than previous methods.
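
    The pipeline can be sketched in a few lines of scikit-learn: an SVM captures suspects as support vectors, and an ensemble trained on the remaining examples votes on them. Hyperparameters and the choice of base classifier below are illustrative assumptions.

```python
# Hypothetical sketch of SV-based majority filtering.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def find_mislabeled(X, y, n_base=5):
    sv = SVC(kernel="linear").fit(X, y).support_   # suspect indices
    rest = np.setdiff1d(np.arange(len(y)), sv)
    votes = np.zeros(len(sv))
    for seed in range(n_base):                     # base classifiers on non-SVs
        idx = np.random.RandomState(seed).choice(rest, len(rest))
        clf = DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx])
        votes += clf.predict(X[sv]) != y[sv]       # disagreement = suspicion
    return sv[votes > n_base / 2]                  # majority vote: mislabeled
```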

  • A Family of Counterexamples to the Central Limit Theorem Based on Binary Linear Codes Open Access

    Keigo TAKEUCHI  

     
    LETTER-Coding Theory

      Vol:
    E102-A No:5
      Page(s):
    738-740

    The central limit theorem (CLT) claims that the standardized sum of a random sequence converges in distribution to a normal random variable as the length tends to infinity. We prove the existence of a family of counterexamples to the CLT for d-tuplewise independent sequences of length n for all d=2,...,n-1. The proof is based on [n, k, d+1] binary linear codes. Our result implies that d-tuplewise independence is too weak to justify the CLT, even if the size d grows linearly in length n.
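
    For reference, the convergence being contradicted, in standard notation; d-tuplewise independence requires only that every d of the variables be mutually independent.

```latex
% Standardized sum whose convergence the CLT asserts:
S_n \;=\; \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \frac{X_i - \mu}{\sigma}
\;\xrightarrow{d}\; \mathcal{N}(0,1).
% The paper constructs, from an [n, k, d+1] binary linear code,
% d-tuplewise independent sequences for which this convergence fails.
```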

  • Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers

    Hyun KWON  Yongchul KIM  Ki-Woong PARK  Hyunsoo YOON  Daeseon CHOI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/07/06
      Vol:
    E101-D No:10
      Page(s):
    2485-2500

    Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to small distortions in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example, which can deceive most DNN classifiers, is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experimental results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data, and 4.1% and 13% higher on CIFAR10 data, compared with the previous ensemble method and the conventional baseline method, respectively.

  • A Single Image Super-Resolution Algorithm Using Non-Local-Mean Self-Similarity and Noise-Robust Saliency Map

    Hui Jung LEE  Dong-Yoon CHOI  Kyoung Won LIM  Byung Cheol SONG  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2017/04/05
      Vol:
    E100-D No:7
      Page(s):
    1463-1474

    This paper presents a single image super-resolution (SR) algorithm based on self-similarity using a non-local-mean (NLM) metric. In order to accurately find the best self-example even in noisy environments, the NLM weight is employed as a self-similarity metric. Also, a pixel-wise soft-switching is presented to overcome an inherent drawback of conventional self-example-based SR: it seldom works for texture areas. For the pixel-wise soft-switching, an edge-oriented saliency map is generated for each input image; we derive this saliency map to be robust against noise through specific training. The proposed algorithm works as follows. First, auxiliary images for an input low-resolution (LR) image are generated. Second, self-examples for each LR patch are found from the auxiliary images on a block basis, and the best match in terms of self-similarity is selected as the best self-example. Third, a preliminary high-resolution (HR) image is synthesized using all the self-examples. Next, an edge map and a saliency map are generated from the input LR image, and pixel-wise weights for soft-switching in the next step are computed from those maps. Finally, a super-resolved HR image is produced by soft-switching between the preliminary HR image for edges and a linearly interpolated image for non-edges. Experimental results show that the proposed algorithm outperforms state-of-the-art SR algorithms qualitatively and quantitatively.
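
    The NLM weight used as the self-similarity metric has a standard form; a minimal sketch follows, where the smoothing parameter h is illustrative.

```python
# NLM weight between two patches: 1.0 for identical patches, decaying
# toward 0 as the squared patch distance grows; h controls the decay.
import numpy as np

def nlm_weight(patch_a, patch_b, h=10.0):
    dist2 = np.sum((patch_a.astype(float) - patch_b.astype(float)) ** 2)
    return np.exp(-dist2 / (h * h))
```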

  • Neural Network Approaches to Dialog Response Retrieval and Generation

    Lasguido NIO  Sakriani SAKTI  Graham NEUBIG  Koichiro YOSHINO  Satoshi NAKAMURA  

     
    PAPER-Spoken dialog system

      Publicized:
    2016/07/19
      Vol:
    E99-D No:10
      Page(s):
    2508-2517

    In this work, we propose a new statistical model for building robust dialog systems that uses neural networks to either retrieve or generate dialog responses based on existing data sources. In the retrieval task, we propose an approach that uses paraphrase identification during the retrieval process. This is done by employing recursive autoencoders and dynamic pooling to determine whether two sentences of arbitrary length have the same meaning. For both the generation and retrieval tasks, we propose a model using long short-term memory (LSTM) neural networks that works by first using an LSTM encoder to read the user's utterance into a continuous vector-space representation and then using an LSTM decoder to generate the most probable word sequence. An evaluation based on objective and subjective metrics shows that the proposed approaches can deal with user inputs that are not well covered in the database, compared to standard example-based dialog baselines.
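
    A skeleton of the encoder-decoder shape described above, in PyTorch; the vocabulary size, dimensions, and greedy decoding loop are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical LSTM encoder-decoder: encode the utterance into a
# state, then greedily decode one word at a time.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab=10000, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim, batch_first=True)
        self.dec = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, max_len=20, bos=1):
        _, state = self.enc(self.emb(src))   # utterance -> vector state
        tok = torch.full((src.size(0), 1), bos, dtype=torch.long)
        words = []
        for _ in range(max_len):             # greedy word-by-word decoding
            h, state = self.dec(self.emb(tok), state)
            tok = self.out(h[:, -1]).argmax(-1, keepdim=True)
            words.append(tok)
        return torch.cat(words, dim=1)
```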

  • Single Image Super Resolution by l2 Approximation with Random Sampled Dictionary

    Takanori FUJISAWA  Taichi YOSHIDA  Kazu MISHIBA  Masaaki IKEHARA  

     
    PAPER-Image

      Vol:
    E99-A No:2
      Page(s):
    612-620

    In this paper, we propose an example-based single image super resolution (SR) method using l2 approximation with self-sampled image patches. Example-based super resolution methods reconstruct high resolution image patches by a linear combination of atoms in an overcomplete dictionary. This reconstruction conventionally requires a pair of dictionaries created from tremendous numbers of low and high resolution image pairs drawn from prepared image databases. In our method, we build the dictionary by randomly sampling patches from just the input image, eliminating the training process. This dictionary exploits the self-similarity of the image and no longer depends on external image sets, which affect storage space and the accuracy of the referenced image sets. In addition, we formulate the approximation of the input image as an l2-norm minimization problem instead of the commonly used sparse approximation, such as l1-norm regularization. The l2 approximation has a computational-cost advantage because only an inverse problem must be solved. Experiments show that the proposed method drastically reduces the computational time for SR while providing performance comparable to conventional example-based SR methods with l1 approximation and dictionary training.
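
    The l2 reconstruction step amounts to a plain least-squares solve against atoms randomly sampled from the input image itself; a minimal sketch follows, with patch extraction simplified away.

```python
# Reconstruct a high-resolution patch as the l2-optimal linear
# combination of self-sampled atoms (no l1 sparse coding needed).
import numpy as np

def l2_reconstruct(lr_patch, lr_atoms, hr_atoms):
    """lr_atoms: (d_lr, K) low-res columns sampled from the input image;
       hr_atoms: (d_hr, K) the corresponding high-res columns."""
    coef, *_ = np.linalg.lstsq(lr_atoms, lr_patch, rcond=None)
    return hr_atoms @ coef   # reuse the same combination at high resolution
```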

  • Utilizing Human-to-Human Conversation Examples for a Multi Domain Chat-Oriented Dialog System

    Lasguido NIO  Sakriani SAKTI  Graham NEUBIG  Tomoki TODA  Satoshi NAKAMURA  

     
    PAPER-Dialog System

      Vol:
    E97-D No:6
      Page(s):
    1497-1505

    This paper describes the design and evaluation of a method for developing a chat-oriented dialog system by utilizing real human-to-human conversation examples from movie scripts and Twitter conversations. The aim of the proposed method is to build a conversational agent that can interact with users in as natural a fashion as possible, while reducing the time required for database design and collection. We describe a number of challenging design issues we faced, including (1) constructing appropriate dialog corpora from raw movie scripts and Twitter data, and (2) developing a multi-domain chat-oriented dialog management system which can retrieve a proper system response based on the current user query. To build a dialog corpus, we propose a unit of conversation called a tri-turn (a trigram of conversation turns), as well as extraction and semantic similarity analysis techniques to help ensure that the content extracted from raw movie/drama script files forms appropriate dialog-pair (query-response) examples. The constructed dialog corpora are then utilized in a data-driven dialog management system. Here, various approaches are investigated, including example-based dialog modeling (EBDM) and response generation using phrase-based statistical machine translation (SMT). In particular, we use two EBDM variants: syntactic-semantic similarity retrieval and TF-IDF based cosine similarity retrieval. Experiments are conducted to compare and contrast the EBDM and SMT approaches, and we investigate a combined method that addresses the advantages and disadvantages of both. System performance was evaluated based on objective metrics (semantic similarity and cosine similarity) and human subjective evaluation from a small user study. Experimental results show that the proposed filtering approach effectively improves performance, and that combining the EBDM and SMT approaches can overcome the shortcomings of each.
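
    The TF-IDF cosine-similarity retrieval variant of EBDM can be sketched in a few lines of scikit-learn; the tiny corpus below is illustrative, not the paper's data.

```python
# Retrieve the response paired with the stored query most similar to
# the user's query under TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

queries = ["how are you", "what is your favorite movie"]
responses = ["i'm fine, thanks", "i liked blade runner"]

vec = TfidfVectorizer()
Q = vec.fit_transform(queries)

def retrieve(user_query):
    sims = cosine_similarity(vec.transform([user_query]), Q)[0]
    return responses[sims.argmax()]           # response of the closest query

print(retrieve("what movie do you like"))
```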
