
Keyword Search Result

[Keyword] verification (227 hits)

Showing 1-20 of 227 hits

  • Edge Device Verification Techniques for Updated Object Detection AI via Target Object Existence Open Access

    Akira KITAYAMA  Goichi ONO  Hiroaki ITO  

     
    PAPER-Intelligent Transport System
    Publicized: 2023/12/20 | Vol: E107-A No:8 | Page(s): 1286-1295

    Edge devices with strict safety and reliability requirements, such as autonomous cars, industrial robots, and drones, require software verification on the device before operation. The human cost and time required for this analysis constitute a barrier in the cycle of software development and updating. In particular, the final verification at the edge device should at least strictly confirm that the updated software is not degraded relative to the currently operating version. Since the edge device does not hold ground-truth data, a human must judge whether each difference between the updated software and the operating version is due to degradation or improvement, which makes this verification very costly. This paper proposes a novel automated method for efficiently verifying an updated object detection AI, a technology that has found practical use in various applications, on edge devices. In the proposed method, a target object existence detector (TOED), a simple binary classifier, judges whether an object of a recognition target class exists in the region of a prediction difference between the AI's operating and updated versions. Using the results of this TOED judgment together with the prediction differences, an automated verification system for the updated AI was constructed. TOED was designed as a simple binary classifier with four convolutional layers, and the accuracy of its object existence judgment was evaluated on the differences between the predictions of the YOLOv5 L and X models on the Cityscapes dataset. The results showed judgment with more than 99.5% accuracy and 8.6% overdetection, indicating that a verification system adopting this method would be more efficient than simple analysis of the prediction differences.
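    The abstract gives TOED's structure only at a high level (four convolutional layers feeding a binary decision). A minimal sketch of such a classifier, assuming PyTorch, 3-channel crops of the prediction-difference region, and layer widths chosen purely for illustration:

    ```python
    import torch
    import torch.nn as nn

    class TOED(nn.Module):
        """Sketch of a 4-conv-layer binary 'target object exists?' classifier."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),    # pool to a single 64-d vector
            )
            self.head = nn.Linear(64, 1)

        def forward(self, x):
            z = self.features(x).flatten(1)
            return torch.sigmoid(self.head(z))

    model = TOED()
    prob = model(torch.randn(1, 3, 64, 64))  # P(target object exists in crop)
    ```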

  • High Precision Fingerprint Verification for Small Area Sensor Based on Deep Learning

    Nabilah SHABRINA  Dongju LI  Tsuyoshi ISSHIKI  

     
    PAPER-Biometrics
    Publicized: 2023/06/26 | Vol: E107-A No:1 | Page(s): 157-168

    Fingerprint verification is widely used in mobile devices because of fingerprints' distinctive features and ease of capture. Mobile devices typically use small sensors with a limited sensing area to capture fingerprints, whereas conventional fingerprint feature extraction methods need detailed fingerprint information that such small sensors cannot provide. This paper proposes a novel fingerprint verification method for small-area sensors based on deep learning. The method combines a deep convolutional neural network (DCNN) in a Siamese network for feature extraction with XGBoost for fingerprint similarity training. In addition, a padding technique is introduced to avoid the wraparound error problem. Experimental results show that the method improves accuracy by 66.6% and 22.6% on the FingerPassDB7 and FVC2006DB1B datasets, respectively, compared to existing methods.
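    As an illustration of how a Siamese DCNN and XGBoost might be chained, here is a minimal sketch; the random arrays stand in for the embeddings the shared DCNN branches would produce, and the absolute-difference pair feature is an assumption, not necessarily the paper's exact fusion:

    ```python
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    # Stand-ins for embeddings from the shared DCNN branches of a Siamese
    # network (one 128-d vector per fingerprint image).
    emb_a = rng.normal(size=(1000, 128))
    emb_b = rng.normal(size=(1000, 128))
    labels = rng.integers(0, 2, size=1000)   # 1 = same finger, 0 = different

    # One common way to fuse Siamese embeddings for a tree model:
    # elementwise absolute difference as the pair feature vector.
    X = np.abs(emb_a - emb_b)
    clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    clf.fit(X, labels)
    score = clf.predict_proba(X[:1])[0, 1]   # similarity score for one pair
    ```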

  • Local-to-Global Structure-Aware Transformer for Question Answering over Structured Knowledge

    Yingyao WANG  Han WANG  Chaoqun DUAN  Tiejun ZHAO  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2023/06/27 | Vol: E106-D No:10 | Page(s): 1705-1714

    Question-answering tasks over structured knowledge (i.e., tables and graphs) require the ability to encode structural information. Traditional pre-trained language models, trained on linear-chain natural language, cannot be directly applied to encode tables and graphs. Existing methods adapt pre-trained models to such tasks by flattening structured knowledge into sequences; however, this serialization loses the structural information of the knowledge. To better employ pre-trained transformers for structured knowledge representation, we propose a novel structure-aware transformer (SATrans) that injects local-to-global structural information of the knowledge into the masks of different self-attention layers. Specifically, in the lower self-attention layers, SATrans focuses on the local structural information of each knowledge token to learn a more robust representation of it. In the upper self-attention layers, SATrans injects the global information of the structured knowledge to integrate information among knowledge tokens. In this way, SATrans can effectively learn the semantic representation and the structural information from the knowledge sequence and the attention mask, respectively. We evaluate SATrans on the table fact verification task and the knowledge base question-answering task. Furthermore, we explore two methods for combining symbolic and linguistic reasoning in these tasks, addressing pre-trained models' lack of symbolic reasoning ability. The experimental results show that the methods consistently outperform strong baselines on the two benchmarks.
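    The core mechanism, attention restricted by a structure mask in lower layers and unrestricted in upper layers, can be sketched as follows (NumPy, single head; the band-diagonal "local" mask is a stand-in for the real knowledge-structure adjacency):

    ```python
    import numpy as np

    def masked_self_attention(X, mask):
        """Single-head self-attention restricted by a boolean mask."""
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)          # (query, key) scores
        scores = np.where(mask, scores, -1e9)  # block disallowed pairs
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ X

    n, d = 6, 16
    X = np.random.randn(n, d)
    # Lower layers: only structurally adjacent tokens attend to each other.
    local = (np.eye(n, dtype=bool) | np.eye(n, k=1, dtype=bool)
             | np.eye(n, k=-1, dtype=bool))
    # Upper layers: every token attends to every token.
    global_ = np.ones((n, n), dtype=bool)
    H = masked_self_attention(X, local)    # local structural information
    H = masked_self_attention(H, global_)  # global integration
    ```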

  • Dynamic Verification Framework of Approximate Computing Circuits using Quality-Aware Coverage-Based Grey-Box Fuzzing

    Yutaka MASUDA  Yusei HONDA  Tohru ISHIHARA  

     
    PAPER
    Publicized: 2022/09/02 | Vol: E106-A No:3 | Page(s): 514-522

    Approximate computing (AC) has recently emerged as a promising approach to the energy-efficient design of digital systems. To realize a practical AC design, we need to verify whether the designed circuit operates correctly under various operating conditions; that is, the verification must efficiently find fatal logic or timing errors that violate the constraint on computational quality. This work focuses on verification settings in which the computational results can be observed, the computational quality can be calculated from those results, and a quality constraint is given on the computational quality of the designed AC circuit under given workloads. This paper then proposes a novel dynamic verification framework for AC circuits. The key idea of the proposed framework is to incorporate a quality-assessment capability into Coverage-based Grey-box Fuzzing (CGF), one of the most promising techniques in software security testing. By repeating (1) mutation of test patterns, (2) execution of the program under test (PUT), and (3) aggregation of coverage information and feedback to the next test-pattern generation, CGF can explore the verification space quickly and automatically. On the other hand, CGF by itself cannot take computational quality into account. To overcome this quality unawareness, the proposed framework additionally embeds the Design Under Verification (DUV) component into the computation of the quality metric. Thanks to this DUV integration, the proposed framework realizes a quality-aware feedback loop in CGF and thus quickly increases the verification coverage for test patterns that violate the quality constraint. In this work, we quantitatively compared the verification coverage of approximate arithmetic circuits between the proposed framework and random testing. In a case study of an approximate multiply-accumulate (MAC) unit, we experimentally confirmed that the proposed framework achieved 3.85 to 10.36 times higher coverage than random testing.
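    A minimal sketch of the quality-aware CGF loop described above (plain Python; the byte-flip mutator is a common default, and the toy PUT and quality function are invented placeholders for the real harness and DUV):

    ```python
    import random

    def mutate(x: bytes) -> bytes:
        """Single byte-flip mutation."""
        i = random.randrange(len(x))
        return x[:i] + bytes([x[i] ^ (1 << random.randrange(8))]) + x[i + 1:]

    def fuzz(seeds, run_put, quality_of, q_min, budget=10000):
        """run_put(x) -> (output, coverage set); quality_of(output) -> float."""
        corpus, seen, violations = list(seeds), set(), []
        for _ in range(budget):
            child = mutate(random.choice(corpus))
            out, cov = run_put(child)
            q = quality_of(out)              # quality computed via the DUV
            if q < q_min:
                violations.append(child)     # quality constraint violated
            if (cov - seen) or q < q_min:    # quality-aware feedback
                seen |= cov
                corpus.append(child)
        return violations

    # Toy PUT: coverage is the set of byte values seen; "quality" dips
    # whenever the input contains byte 0x42.
    def run_put(x):
        return x, set(x)

    found = fuzz([b"seed"], run_put,
                 lambda out: 0.0 if 0x42 in out else 1.0,
                 q_min=0.5, budget=2000)
    print(len(found))
    ```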

  • Verikube: Automatic and Efficient Verification for Container Network Policies

    Haney KANG  Seungwon SHIN  

     
    LETTER-Information Network
    Publicized: 2022/08/26 | Vol: E105-D No:12 | Page(s): 2131-2134

    Recently, Linux containers have become the de facto standard for cloud systems, enabling cloud providers to create virtual environments at much greater scale. However, container network configuration remains immature and requires automatic verification for efficient cloud management. We propose Verikube, which uses a novel graph structure to represent policies, reducing memory consumption and accelerating verification. Moreover, unlike existing work, Verikube is compatible with the complex semantics of Cilium policies, which clouds adopt for their performance advantages. Our evaluation shows that Verikube performs at least seven times better in memory efficiency, at least 1.5 times faster in data-structure management, and 20K times better in verification.
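    Verikube's actual graph structure is not detailed in the abstract; the toy sketch below only illustrates the general idea of answering policy queries against a precomputed structure (labels, ports, and the dict-of-sets representation are all assumptions):

    ```python
    from collections import defaultdict

    class PolicyGraph:
        """Toy sketch: pod labels as nodes, allow-rules as labeled edges."""
        def __init__(self):
            self.allow = defaultdict(set)

        def add_rule(self, src_label, dst_label, port):
            self.allow[(src_label, dst_label)].add(port)

        def permits(self, src, dst, port):
            # A verification query: is this flow allowed by the policies?
            return port in self.allow.get((src, dst), ())

    g = PolicyGraph()
    g.add_rule('frontend', 'backend', 8080)
    assert g.permits('frontend', 'backend', 8080)
    assert not g.permits('frontend', 'db', 5432)
    ```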

  • A Characterization on Necessary Conditions of Realizability for Reactive System Specifications

    Takashi TOMITA  Shigeki HAGIHARA  Masaya SHIMAKAWA  Naoki YONEZAKI  

     
    PAPER
    Publicized: 2022/04/08 | Vol: E105-D No:10 | Page(s): 1665-1677

    This paper focuses on verification for reactive system specifications. A reactive system is an open system that continuously interacts with an uncontrollable external environment, and it must often be highly safe and reliable. However, realizability checking for a given specification is very costly, so we need effective methods to detect and analyze defects in unrealizable specifications to refine them efficiently. We introduce a systematic characterization on necessary conditions of realizability. This characterization is based on quantifications for inputs and outputs in early and late behaviors and reveals four essential aspects of realizability: exhaustivity, strategizability, preservability and stability. Additionally, the characterization derives new necessary conditions, which enable us to classify unrealizable specifications systematically and hierarchically.

  • Performance Evaluation of Classification and Verification with Quadrant IQ Transition Image

    Hiro TAMURA  Kiyoshi YANAGISAWA  Atsushi SHIRANE  Kenichi OKADA  

     
    PAPER-Network Management/Operation
    Publicized: 2021/12/01 | Vol: E105-B No:5 | Page(s): 580-587

    This paper presents a physical-layer wireless device identification method that uses a convolutional neural network (CNN) operating on a quadrant IQ transition image, combining the classification and detection tasks in one process. The proposed method identifies IoT wireless devices by exploiting their RF fingerprints, i.e., unique variations in their analog signals. We propose a quadrant IQ image technique that reduces the size of the CNN while maintaining accuracy: the CNN operates on an IQ transition image that image processing cuts into four quadrants. An over-the-air experiment on six Zigbee wireless devices confirms the validity of the proposed identification method. The measurement results demonstrate that the proposed method achieves 99% accuracy with a light-weight CNN model having 36,500 weight parameters in serial use and 146,000 in parallel use. Furthermore, the proposed threshold algorithm can verify authenticity using one classifier and achieves 80% accuracy for further secured wireless communication. This work also examines the identification of signals expanded with SNRs between 10 and 30dB. At SNR values above 20dB, the proposed method achieves classification and detection accuracies of 87% and 80%, respectively.
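    The exact construction of the quadrant IQ transition image is not given in the abstract; the sketch below is one plausible reading, histogramming consecutive-sample transitions and splitting the image into four quadrant inputs (NumPy; bin counts and ranges are guesses):

    ```python
    import numpy as np

    def iq_transition_image(iq, bins=32):
        """Histogram the midpoints of consecutive IQ samples as a crude
        stand-in for the paper's IQ transition image."""
        mid = (iq[:-1] + iq[1:]) / 2
        img, _, _ = np.histogram2d(mid.real, mid.imag, bins=bins,
                                   range=[[-1.5, 1.5], [-1.5, 1.5]])
        return img

    def quadrants(img):
        """Split the image into four parts, one per IQ quadrant."""
        h, w = img.shape
        return [img[:h//2, :w//2], img[:h//2, w//2:],
                img[h//2:, :w//2], img[h//2:, w//2:]]

    iq = np.exp(2j * np.pi * np.random.rand(4096))  # toy unit-circle symbols
    parts = quadrants(iq_transition_image(iq))       # four small CNN inputs
    ```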

  • Multi-Party Electronic Contract Signing Protocol Based on Blockchain

    Tong ZHANG  Yujue WANG  Yong DING  Qianhong WU  Hai LIANG  Huiyong WANG  

     
    PAPER
    Publicized: 2021/12/07 | Vol: E105-D No:2 | Page(s): 264-271

    With the development of Internet technology, demand for signing electronic contracts has greatly increased. An electronic contract generated online by the participants enjoys the same legal effect as a paper contract. Fairness is the key issue in jointly signing an electronic contract: all participants should either obtain the same copy of the contract or obtain nothing. Most existing solutions focus only on the fairness of electronic contract generation between two participants, where digital signatures effectively guarantee the fairness of the exchange and have become the conventional technology in designing contract signing protocols. In this paper, an efficient blockchain-based multi-party electronic contract signing (MECS) protocol is presented, which not only offers fairness of electronic contract generation for multiple participants, but also allows each participant to validate the signed copies of the others in aggregate. Security analysis shows that the proposed MECS protocol enjoys unforgeability, non-repudiation and fairness of electronic contracts, and performance analysis demonstrates the high efficiency of our construction.
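    The all-or-nothing fairness property can be illustrated with a toy sketch; here HMACs stand in for the parties' public-key signatures and a single function plays the role of the blockchain contract, both loud simplifications of the real MECS protocol:

    ```python
    import hashlib
    import hmac

    CONTRACT = b"... contract terms ..."
    digest = hashlib.sha256(CONTRACT).digest()

    # HMACs stand in for each party's digital signature (assumption);
    # a real protocol would use public-key signatures on chain.
    keys = {"alice": b"k1", "bob": b"k2", "carol": b"k3"}
    sigs = {p: hmac.new(k, digest, "sha256").hexdigest()
            for p, k in keys.items()}

    def release(sigs, keys, digest):
        """All-or-nothing: every party gets the fully signed contract
        only if every signature verifies; otherwise nobody gets anything."""
        ok = all(hmac.compare_digest(
                     sigs[p], hmac.new(k, digest, "sha256").hexdigest())
                 for p, k in keys.items())
        return {p: (CONTRACT, sigs) for p in keys} if ok else None

    assert release(sigs, keys, digest) is not None
    ```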

  • Formalization and Analysis of Ceph Using Process Algebra

    Ran LI  Huibiao ZHU  Jiaqi YIN  

     
    PAPER-Software System
    Publicized: 2021/09/28 | Vol: E104-D No:12 | Page(s): 2154-2163

    Ceph is an object-based parallel distributed file system that provides excellent performance, reliability, and scalability. Additionally, Ceph provides the Cephx authentication system to identify and authenticate users. In this paper, we first model the basic architecture of Ceph using the process algebra CSP (Communicating Sequential Processes). With the help of the model checker PAT (Process Analysis Toolkit), we then verify several properties of the constructed model: Deadlock Freedom, Data Reachability, Data Write Integrity, Data Consistency, and Authentication. The verification results show that the original model does not satisfy the Authentication property. Therefore, we formalize a new model of Ceph in which Cephx is adopted. In light of the new verification results, the model with Cephx satisfies all of these properties.

  • An Algebraic Approach to Verifying Galois-Field Arithmetic Circuits with Multiple-Valued Characteristics

    Akira ITO  Rei UENO  Naofumi HOMMA  

     
    PAPER-Logic Design
    Publicized: 2021/04/28 | Vol: E104-D No:8 | Page(s): 1083-1091

    This study presents a formal verification method for Galois-field (GF) arithmetic circuits over fields of characteristic greater than two. The proposed method formally verifies the correctness of circuit functionality (i.e., the input-output relations given as GF polynomials) by checking the equivalence between a specification and a gate-level netlist. We represent a netlist using simultaneous algebraic equations and solve them with a novel polynomial reduction method that can be applied efficiently to arithmetic over extension fields $\mathbb{F}_{p^m}$, where the characteristic p is larger than two. By using the reverse topological term order to derive the Gröbner basis, our method can complete the verification even when a target circuit includes bugs. In addition, we introduce an extension of Galois-field binary moment diagrams to perform the polynomial reductions faster. Our experimental results show that the proposed method can efficiently verify practical $\mathbb{F}_{p^m}$ arithmetic circuits, including those used in modern cryptography. Moreover, we demonstrate that the extended polynomial reduction technique enables verification up to approximately five times faster than the original one.
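    For a feel of the underlying technique, here is a tiny Gröbner-basis equivalence check over GF(3) using SymPy; the two-gate netlist and the variable order are illustrative, not taken from the paper:

    ```python
    from sympy import symbols, groebner

    x, y, w, r = symbols('x y w r')
    # Toy GF(3) netlist: an adder gate w = x + y feeding a multiplier
    # r = w * y, each written as (gate output) - (gate function) = 0.
    netlist = [w - (x + y), r - w*y]
    # Variables ordered outputs-first, mimicking the reverse topological
    # term order the paper uses to derive the Groebner basis.
    G = groebner(netlist, r, w, x, y, modulus=3, order='lex')
    # Specification: the circuit should compute r = x*y + y^2 over GF(3).
    spec = r - (x*y + y**2)
    _, remainder = G.reduce(spec)
    print(remainder)   # 0 -> netlist and specification are equivalent
    ```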

  • Online Signature Verification Using Single-Template Matching Through Locally and Globally Weighted Dynamic Time Warping

    Manabu OKAWA  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2020/09/01 | Vol: E103-D No:12 | Page(s): 2701-2708

    In this paper, we propose a novel single-template strategy based on a mean template set and locally/globally weighted dynamic time warping (LG-DTW) to improve the performance of online signature verification. Specifically, in the enrollment phase, we apply a time-series averaging method, Euclidean barycenter-based DTW barycenter averaging, to obtain a mean template set that accounts for intra-user variability among reference samples. We then obtain a local weighting estimate from a local stability sequence, which is derived by analyzing multiple matching points of an optimal match between the mean template and reference sets. Thereafter, we derive a global weighting estimate based on the variable importance estimated by gradient boosting. Finally, in the verification phase, we apply both local and global weighting methods to obtain a discriminative LG-DTW distance between the mean template set and a query sample. Experimental results on the public SVC2004 Task2 and MCYT-100 signature datasets confirm the effectiveness of the proposed method for online signature verification.
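    A minimal sketch of locally weighted DTW, the "L" part of LG-DTW (NumPy, 1-D features; the per-sample weight vector w would come from the stability analysis, and global weighting is omitted):

    ```python
    import numpy as np

    def weighted_dtw(a, b, w):
        """DTW between template a and query b; w[i] is a per-template-sample
        local weight applied to the frame cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = w[i - 1] * abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
        return D[n, m]

    a = np.sin(np.linspace(0, 3, 50))          # toy template
    b = np.sin(np.linspace(0, 3, 60) + 0.1)    # toy query
    print(weighted_dtw(a, b, np.ones(len(a))))
    ```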

  • Symbolic Representation of Time Petri Nets for Efficient Bounded Model Checking

    Nao IGAWA  Tomoyuki YOKOGAWA  Sousuke AMASAKI  Masafumi KONDO  Yoichiro SATO  Kazutami ARIMOTO  

     
    LETTER-Software System
    Publicized: 2019/12/20 | Vol: E103-D No:3 | Page(s): 702-705

    Safety-critical systems are often modeled using Time Petri Nets (TPN) so that their reliability can be analyzed with formal verification methods. This paper proposes an efficient verification method for TPN that introduces bounded model checking based on satisfiability solving. The proposed method expresses the time constraints of TPN in Difference Logic (DL) and uses SMT solvers for verification. Its effectiveness is demonstrated by an experiment.
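    The flavor of the encoding can be sketched with Z3's Python API: firing times become real variables, and the TPN's static intervals become difference-logic atoms of the form x - y <= c (the two-transition net and its intervals are invented for illustration):

    ```python
    from z3 import Real, Solver, sat

    s = Solver()
    e  = Real('e')    # instant at which both transitions become enabled
    f1 = Real('f1')   # firing time of t1, static interval [2, 4]
    f2 = Real('f2')   # firing time of t2, static interval [1, 3]
    # Time constraints expressed as difference-logic atoms:
    s.add(f1 - e >= 2, f1 - e <= 4)
    s.add(f2 - e >= 1, f2 - e <= 3)
    # Bounded-model-checking style query: is there a run where t1 fires first?
    s.add(f1 - f2 < 0)
    if s.check() == sat:
        print(s.model())   # a witness schedule for the queried run
    ```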

  • Formal Verification of a Decision-Tree Ensemble Model and Detection of Its Violation Ranges

    Naoto SATO  Hironobu KURUMA  Yuichiroh NAKAGAWA  Hideto OGAWA  

     
    PAPER-Dependable Computing
    Publicized: 2019/11/20 | Vol: E103-D No:2 | Page(s): 363-378

    As one type of machine-learning model, a “decision-tree ensemble model” (DTEM) is represented by a set of decision trees. A DTEM is mainly known to be valid for structured data; however, like other machine-learning models, it is difficult to train so that it returns the correct output value (called the “prediction value”) for any input value (called the “attribute value”). Accordingly, when a DTEM is used in a system that requires reliability, it is important to comprehensively detect attribute values that lead to malfunctions of the system (failures) during development and take appropriate countermeasures. One conceivable solution is to install an input filter that controls the input to the DTEM and to use separate software to process attribute values that may lead to failures. To develop such an input filter, it is necessary to specify the filtering condition on the attribute values that lead to malfunctions. With this necessity in mind, we propose a method for formally verifying a DTEM and, if an attribute value leading to a failure is found, extracting the range in which such an attribute value exists. Because the proposed method can comprehensively extract this range, an input filter created from it can prevent the failure. To demonstrate the feasibility of the proposed method, we performed a case study using a dataset of house prices. Through the case study, we also evaluated the method's scalability and showed that the number and depth of decision trees are important factors that determine its applicability.
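    A toy version of the range-extraction idea: walk every root-to-leaf path of a tree, intersecting the threshold constraints along the way, and keep the input boxes whose leaf prediction violates the requirement (hand-coded tree; the paper's method works on whole ensembles with a solver-backed search):

    ```python
    # Tree as nested tuples: (feature, threshold, left, right) or a leaf value.
    TREE = ('x0', 5.0,
            ('x1', 2.0, 10.0, 80.0),   # leaves are predicted prices
            120.0)

    def violation_ranges(node, box, bad, out):
        """Collect input boxes whose leaf prediction satisfies `bad`."""
        if not isinstance(node, tuple):
            if bad(node):
                out.append(dict(box))
            return
        f, t, left, right = node
        lo, hi = box.get(f, (float('-inf'), float('inf')))
        violation_ranges(left,  {**box, f: (lo, min(hi, t))}, bad, out)
        violation_ranges(right, {**box, f: (max(lo, t), hi)}, bad, out)

    ranges = []
    violation_ranges(TREE, {}, lambda v: v < 50.0, ranges)
    print(ranges)   # [{'x0': (-inf, 5.0), 'x1': (-inf, 2.0)}]
    ```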

  • Blind Bandwidth Extension with a Non-Linear Function and Its Evaluation on Automatic Speaker Verification

    Ryota KAMINISHI  Haruna MIYAMOTO  Sayaka SHIOTA  Hitoshi KIYA  

     
    PAPER
    Publicized: 2019/10/25 | Vol: E103-D No:1 | Page(s): 42-49

    This study evaluates the effects of some non-learning blind bandwidth extension (BWE) methods on state-of-the-art automatic speaker verification (ASV) systems. Recently, a non-linear bandwidth extension (N-BWE) method has been proposed as a blind, non-learning, and light-weight BWE approach, and other non-learning BWEs have also been developed in recent years. For ASV evaluations, most of the data available for training ASV systems is narrowband (NB) telephone speech, while wideband (WB) data have been used to train state-of-the-art ASV systems such as i-vector, d-vector, and x-vector. This can cause sampling-rate mismatches when all of the datasets are used together. In this paper, we investigate the influence of sampling-rate mismatches on x-vector-based ASV systems and how non-learning BWE methods perform against them. The results showed that the N-BWE method improved the equal error rate (EER) of x-vector-based ASV systems when mismatches were present. We also investigated the relationship between objective measurements and EERs. Consequently, the N-BWE method produced the lowest EERs on both ASV systems and obtained the lowest RMS-LSD value and the highest STOI score.
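    A non-learning BWE pipeline of the general kind evaluated here can be sketched in a few lines (NumPy/SciPy; the |x| non-linearity, cutoff frequency, and mixing gain are illustrative choices, not the paper's exact configuration):

    ```python
    import numpy as np
    from scipy.signal import resample_poly, butter, sosfilt

    def nonlinear_bwe(nb, fs_nb=8000, fs_wb=16000, alpha=0.3):
        """Blind BWE sketch: upsample, generate harmonics with a
        non-linearity, keep only the new high band, and mix."""
        up = resample_poly(nb, fs_wb // fs_nb, 1)    # 8 kHz -> 16 kHz
        harmonics = np.abs(up)                       # non-linear function
        sos = butter(8, 3800, btype='highpass', fs=fs_wb, output='sos')
        high = sosfilt(sos, harmonics - harmonics.mean())
        return up + alpha * high

    wb = nonlinear_bwe(np.random.randn(8000))        # toy NB input
    ```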

  • On Locally Minimum and Strongest Assumption Generation Method for Component-Based Software Verification

    Hoang-Viet TRAN  Ngoc Hung PHAM  Viet Ha NGUYEN  

     
    PAPER
    Publicized: 2019/05/16 | Vol: E102-D No:8 | Page(s): 1449-1461

    As software becomes more complex over its life cycle, verification costs grow, especially for methods that use model checking in general and assume-guarantee reasoning in particular. To reduce the cost of assume-guarantee verification, this paper presents a method to generate locally minimum and strongest assumptions for the verification of component-based software. For this purpose, we integrate a variant of a membership-query answering technique into an algorithm that considers smaller and stronger candidate assumptions first, and larger and weaker ones later. Because the algorithm stops as soon as it reaches a conclusive result, the generated assumptions are locally minimum and strongest. A correctness proof of the proposed algorithm is also included in the paper. An implemented tool, test data, and experimental results are presented and discussed.

  • Consistency Checking between Java Equals and hashCode Methods Using Software Analysis Workbench

    Kozo OKANO  Satoshi HARAUCHI  Toshifusa SEKIZAWA  Shinpei OGATA  Shin NAKAJIMA  

     
    PAPER-Software System
    Publicized: 2019/05/14 | Vol: E102-D No:8 | Page(s): 1498-1505

    Java is one of the most important programming languages today. To build sound software in Java, we must carefully implement two fundamental methods, hashCode and equals. This requirement, however, is not easy to follow in real software development. Some existing studies for ensuring the correctness of these two methods rely on static analysis and are limited to loop-free programs. This paper proposes a new solution to this important problem using the Software Analysis Workbench (SAW), an open-source tool. The efficiency is evaluated through experiments. We also describe a useful situation in which the cost of regression testing is reduced when program refactoring is conducted.
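    The contract being checked is that equals-equal objects must share a hash code. Python has the same contract between __eq__ and __hash__, so a small property check conveys the idea (this is an analogue for illustration, not the SAW-based approach of the paper):

    ```python
    import itertools

    def check_eq_hash_consistency(objs):
        """Contract: a == b must imply hash(a) == hash(b)
        (Python analogue of Java's equals/hashCode contract)."""
        for a, b in itertools.combinations(objs, 2):
            if a == b and hash(a) != hash(b):
                return (a, b)   # counterexample found
        return None

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __eq__(self, other):
            return (isinstance(other, Point)
                    and (self.x, self.y) == (other.x, other.y))
        def __hash__(self):
            return hash((self.x, self.y))   # consistent with __eq__

    print(check_eq_hash_consistency([Point(1, 2), Point(1, 2), Point(3, 4)]))
    # None -> no violation among these objects
    ```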

  • Speaker-Phonetic I-Vector Modeling for Text-Dependent Speaker Verification with Random Digit Strings

    Shengyu YAO  Ruohua ZHOU  Pengyuan ZHANG  

     
    PAPER-Speech and Hearing
    Publicized: 2018/11/19 | Vol: E102-D No:2 | Page(s): 346-354

    This paper proposes a speaker-phonetic i-vector modeling method for text-dependent speaker verification with random digit strings, in which the enrollment and test utterances are not of the same phrase. The core of the proposed method is the use of digit alignment information in the i-vector framework. By utilizing forced-alignment information, verification scores of the test trials can be computed as in the fixed-phrase situation, in which the compared speech segments of the enrollment and test utterances have the same phonetic content. Specifically, utterances are segmented into digits, and a phonetically constrained i-vector extractor is applied to obtain a speaker and channel variability representation for every digit segment. Probabilistic linear discriminant analysis (PLDA) and s-norm are subsequently used for channel compensation and score normalization, respectively. The final score is obtained by combining the digit scores, which are computed by scoring individual digit segments of the test utterance against the corresponding ones of the enrollment. Experimental results on Part 3 of the Robust Speaker Recognition (RSR2015) database demonstrate that the proposed approach significantly outperforms GMM-UBM by 52.3% and 53.5% relative in equal error rate (EER) for male and female speakers, respectively.
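    The scoring stage can be sketched as follows: score each digit of the test utterance against the matching digit of the enrollment and fuse the results (NumPy; cosine scoring is a simpler stand-in for the PLDA scoring used in the paper):

    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def digit_string_score(enroll, test):
        """enroll/test: dict mapping digit -> i-vector for that digit's
        segments, obtained from forced alignment."""
        shared = set(enroll) & set(test)
        return np.mean([cosine(enroll[d], test[d]) for d in shared])

    enroll = {'3': np.random.randn(200), '7': np.random.randn(200)}
    test   = {'7': np.random.randn(200), '3': np.random.randn(200)}
    print(digit_string_score(enroll, test))
    ```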

  • Visualization of Inter-Module Dataflow through Global Variables for Source Code Review

    Naoto ISHIDA  Takashi ISHIO  Yuta NAKAMURA  Shinji KAWAGUCHI  Tetsuya KANDA  Katsuro INOUE  

     
    LETTER-Software System
    Publicized: 2018/09/26 | Vol: E101-D No:12 | Page(s): 3238-3241

    Defects in spacecraft software may result in loss of life and serious economic damage. To avoid such consequences, the software development process incorporates code review activity. A code review conducted by a third-party organization, independent of the software development team, can effectively identify defects in software. However, such review activity is difficult for third-party reviewers because they need to understand the entire structure of the code within a limited time and without prior knowledge. In this study, we propose a tool to visualize inter-module dataflow through global variables for the source code of spacecraft software systems. To evaluate the method, an autonomous rover control program was reviewed using this visualization. While the tool does not decrease the time required for a code review, the reviewers considered the visualization effective for reviewing the code.
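    The visualization's underlying data, which module writes a global that another module reads, can be sketched as a graph emitted in Graphviz DOT form (module names and variables are invented; the real tool works on C source):

    ```python
    # Reads/writes of global variables per module (toy input).
    WRITES = {'sensor.c': {'g_speed'}, 'nav.c': {'g_target'}}
    READS  = {'control.c': {'g_speed', 'g_target'}, 'log.c': {'g_speed'}}

    def dataflow_dot(writes, reads):
        """One edge writer -> reader per shared global variable."""
        lines = ['digraph dataflow {']
        for w_mod, ws in writes.items():
            for r_mod, rs in reads.items():
                for v in ws & rs:
                    lines.append(f'  "{w_mod}" -> "{r_mod}" [label="{v}"];')
        lines.append('}')
        return '\n'.join(lines)

    print(dataflow_dot(WRITES, READS))
    ```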

  • A Verification Framework for Assembly Programs Under Relaxed Memory Model Using SMT Solver

    Pattaravut MALEEHUAN  Yuki CHIBA  Toshiaki AOKI  

     
    PAPER-Software System
    Publicized: 2018/09/12 | Vol: E101-D No:12 | Page(s): 3038-3058

    In multiprocessors, memory models describe the executions of programs among processors. Relaxed memory models, which relax the order of executions, are used in most modern processors, such as ARM and POWER. Because a relaxed memory model can change the program semantics, the executions of a program might differ from our expectations and fail to preserve the program's correctness. In addition to the relaxed memory model, the way each instruction executes is described by an instruction semantics, which varies among processor architectures. Dealing with instruction semantics across a variety of assembly programs is a challenge for program verification. This paper therefore proposes a way to verify a variety of assembly programs executed under a relaxed memory model. The variety of assembly programs is abstracted by introducing an operation structure that describes how the programs execute. In addition, existing frameworks for modeling relaxed memory models can realize the program executions to be verified against a program property. Our work adopts an SMT solver to automatically reveal the program executions under a memory model and to verify whether the executions violate the program property. If the solver finds any such execution, program correctness is not preserved under the relaxed memory model. To verify programs, an experimental tool was developed that encodes the given programs, for a given memory model, into a first-order formula whose satisfying valuations violate program correctness. The tool adopts a modeling framework to encode the programs into a formula for the SMT solver, and the solver then automatically finds a valuation that satisfies the formula. In our experiments, two encoding methods were implemented based on two modeling frameworks. The valuations returned by the solver can be regarded as bugs in the original programs.
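    The verification style can be sketched with Z3 on the classic store-buffering litmus test: encode a total memory order over the four accesses and ask whether the outcome r1 = r2 = 0 is reachable once the store-to-load program-order constraint is dropped (a drastic simplification of the paper's operation-structure encoding):

    ```python
    from z3 import Int, Solver, If, Distinct, sat

    # Store-buffering litmus test:  T1: x=1; r1=y    T2: y=1; r2=x
    wx, ry = Int('wx'), Int('ry')   # T1's store to x and load of y
    wy, rx = Int('wy'), Int('rx')   # T2's store to y and load of x
    s = Solver()
    s.add(Distinct(wx, ry, wy, rx))       # positions in a total memory order
    r1 = If(wy < ry, 1, 0)                # load sees the store iff ordered after it
    r2 = If(wx < rx, 1, 0)
    s.add(r1 == 0, r2 == 0)               # outcome forbidden under SC
    # Under SC we would also add program order: wx < ry and wy < rx,
    # which makes this unsat; dropping them models store->load relaxation.
    print(s.check())                      # sat -> outcome allowed when relaxed
    ```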

  • Dose-Volume Histogram Evaluations Using Sparsely Measured Radial Data from Two-Dimensional Dose Detectors

    Yasushi ONO  Katsuya KONDO  Kazu MISHIBA  

     
    LETTER-Image
    Vol: E101-A No:11 | Page(s): 1993-1998

    Intensity-modulated radiation therapy (IMRT), which delivers doses to a target organ, calculates the irradiation dose using a radiation treatment planning system (RTPS). Irradiation quality is ensured by verifying that the dose distribution planned by the RTPS matches the data measured by two-dimensional (2D) detectors. Since the actual three-dimensional (3D) distribution of the irradiated dose spreads in a complicated manner, it differs from that of the RTPS. It is therefore preferable to evaluate not only the RTPS plan but also the actual irradiation dose distribution. In this paper, to perform a dose-volume histogram (DVH) evaluation of the irradiation dose distribution, we propose a method for correcting the dose distribution of the RTPS using sparsely measured radial data from 2D dose detectors. We then perform a DVH evaluation of the irradiation dose distribution and show that the proposed method contributes to a high-precision DVH evaluation. The experimental results show that the estimates agree well with the data measured by the 2D detectors and that the peak signal-to-noise ratio and structural similarity indexes of the estimates are more accurate than those of the RTPS. We thus demonstrate the possibility of evaluating the actual irradiation dose distribution using data measured in a limited number of observation directions.
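    Once a corrected 3-D dose grid is available, the DVH itself is a cumulative histogram; a minimal sketch (NumPy; the gamma-distributed grid is synthetic stand-in data):

    ```python
    import numpy as np

    def cumulative_dvh(dose, n_bins=100):
        """dose: flattened 3-D dose grid for one structure (Gy).
        Returns (dose levels, % of volume receiving at least that dose)."""
        levels = np.linspace(0.0, dose.max(), n_bins)
        volume = np.array([(dose >= d).mean() * 100.0 for d in levels])
        return levels, volume

    dose = np.random.gamma(shape=9.0, scale=5.0, size=10000)  # toy dose grid
    d, v = cumulative_dvh(dose)
    print(d[v <= 50.0][0])   # median dose D50 (first level covering <=50%)
    ```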

Showing 1-20 of 227 hits