
Keyword Search Results

[Keyword] debugging (23 hits)

Showing 1-20 of 23 hits

  • A Data Augmentation Method for Fault Localization with Fault Propagation Context and VAE

    Zhuo ZHANG  Donghui LI  Lei XIA  Ya LI  Xiankai MENG  

     
    LETTER-Software Engineering

      Publicized:
    2023/10/25
      Vol:
    E107-D No:2
      Page(s):
    234-238

    With the growing complexity and scale of software, detecting and repairing errant behaviors at an early stage are critical to reducing the cost of software development. In the practice of fault localization, a typical process usually includes three steps: execution of input-domain test cases, construction of model-domain test vectors, and suspiciousness evaluation. The effectiveness of the model-domain test vectors is significant for locating the faulty code. However, test vectors with failing labels usually account for only a small portion of the data, which inevitably degrades the effectiveness of fault localization. In this paper, we propose PVaug, a data augmentation method that uses fault propagation context and a variational autoencoder (VAE). Our empirical results on 14 programs illustrate that PVaug improves the effectiveness of fault localization.
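
    The paper gives no code here; purely as an illustration of the general idea of VAE-based augmentation (fit a VAE on the scarce failing test vectors, then sample synthetic failing vectors to rebalance the data), a minimal PyTorch sketch follows. All layer sizes and names are hypothetical, and the fault-propagation-context part of PVaug is not modeled.

        # Minimal sketch: fit a VAE on failing coverage vectors, then sample
        # new synthetic failing vectors. Hypothetical shapes; not PVaug itself.
        import torch
        import torch.nn as nn

        class VAE(nn.Module):
            def __init__(self, n_stmts, latent=16):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(n_stmts, 64), nn.ReLU())
                self.mu = nn.Linear(64, latent)
                self.logvar = nn.Linear(64, latent)
                self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                         nn.Linear(64, n_stmts), nn.Sigmoid())

            def forward(self, x):
                h = self.enc(x)
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
                return self.dec(z), mu, logvar

        def augment(failing, n_new, epochs=200):
            """failing: float tensor of 0/1 coverage vectors from failing tests."""
            model = VAE(failing.shape[1])
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            for _ in range(epochs):
                recon, mu, logvar = model(failing)
                loss = (nn.functional.binary_cross_entropy(recon, failing)
                        - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()))
                opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():                   # sample synthetic failing vectors
                z = torch.randn(n_new, model.mu.out_features)
                return (model.dec(z) > 0.5).float()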

  • Improving Fault Localization Using Conditional Variational Autoencoder

    Xianmei FANG  Xiaobo GAO  Yuting WANG  Zhouyu LIAO  Yue MA  

     
    LETTER-Software Engineering

      Publicized:
    2022/05/13
      Vol:
    E105-D No:8
      Page(s):
    1490-1494

    Fault localization analyzes the runtime information of two classes of test cases (i.e., passing test cases and failing test cases) to identify suspicious statements potentially responsible for a failure. However, in practice failing test cases are usually far fewer than passing ones, and this class-imbalance problem degrades fault localization effectiveness. To address this issue, we propose a data augmentation approach that uses a conditional variational autoencoder to synthesize new failing test cases for fault localization. The experimental results show that our approach significantly improves six state-of-the-art fault localization techniques.

  • Logging Inter-Thread Data Dependencies in Linux Kernel

    Takafumi KUBOTA  Naohiro AOTA  Kenji KONO  

     
    PAPER-Software System

      Publicized:
    2020/04/06
      Vol:
    E103-D No:7
      Page(s):
    1633-1646

    Logging is a practical and useful way of diagnosing failures in software systems. The logged events are crucially important for learning what happened during a failure. If key events are not logged, it is almost impossible to track error propagations during diagnosis. Tracking an error propagation becomes utterly complicated if an inter-thread data dependency is involved. An inter-thread data dependency arises when one thread accesses shared data corrupted by another thread. Since the erroneous state propagates from a buggy thread to a failing thread through the corrupted shared data, the root cause cannot be tracked back solely by investigating the failing thread. This paper presents the design and implementation of K9, a tool that inserts logging code automatically to trace inter-thread data dependencies. K9 is designed to be “practical”; it scales to one million lines of code in C, causes negligible runtime overhead, and provides clues for tracking inter-thread dependencies in real-world bugs. To scale to one million lines of code, K9 ditches rigorous static analysis of pointers for detecting code locations where inter-thread data dependencies can occur. Instead, K9 takes a best-effort approach and finds “most” of those code locations by making use of coding conventions. This paper demonstrates that K9 is applicable to Linux and, in spite of the best-effort approach, captures enough relevant code locations to provide useful clues to the root causes of real-world bugs, including a previously unknown bug in Linux. The paper also shows that K9's runtime overhead is negligible: K9 incurs 1.25% throughput degradation and a 0.18% CPU usage increase, on average, in our evaluation.
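
    K9 itself instruments C code in the Linux kernel; the toy Python sketch below only illustrates why logging the identity of the last writer of shared data supplies the missing link in the dependency chain: the failing reader's log entry points back to the thread that corrupted the value. All names are hypothetical.

        # Toy analogy (not K9): record the last writer of each shared object
        # so a failure observed in one thread can be traced to the thread
        # that corrupted the data.
        import threading

        class LoggedShared:
            """Shared cell whose accesses are logged with the accessing thread."""
            def __init__(self, value):
                self.value, self.last_writer = value, None
                self._lock = threading.Lock()

            def write(self, value):
                with self._lock:
                    self.value = value
                    self.last_writer = threading.current_thread().name
                    print(f"[log] write {value!r} by {self.last_writer}")

            def read(self):
                with self._lock:
                    # Logs the inter-thread dependency: reader <- last writer.
                    print(f"[log] read by {threading.current_thread().name}; "
                          f"value was written by {self.last_writer}")
                    return self.value

        shared = LoggedShared(0)
        t = threading.Thread(target=lambda: shared.write(-1), name="buggy-thread")
        t.start(); t.join()
        if shared.read() < 0:              # the "failing" thread sees bad state
            print(f"failure traced back to {shared.last_writer}")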

  • Signal Selection Methods for Debugging Gate-Level Sequential Circuits

    Yusuke KIMURA  Amir Masoud GHAREHBAGHI  Masahiro FUJITA  

     
    PAPER

      Vol:
    E102-A No:12
      Page(s):
    1770-1780

    This paper introduces methods for modifying a buggy sequential gate-level circuit to conform to its specification. To preserve earlier optimization efforts, the modifications should be as small as possible. Assuming that the locations to be modified are given, our proposed method finds an appropriate set of fan-in signals for the patch functions at those locations by iteratively calculating the state correspondence between the specification and the buggy circuit and applying a method for debugging combinational circuits. Experiments are conducted on the ITC99 benchmark circuits, and the results show that our proposed method works when there are at most 30,000 corresponding reachable state pairs between the two circuits. Moreover, a heuristic method using information about data-path FFs is proposed, which can find a correct set of fan-ins for all the benchmark circuits within practical time.

  • TFIDF-FL: Localizing Faults Using Term Frequency-Inverse Document Frequency and Deep Learning

    Zhuo ZHANG  Yan LEI  Jianjun XU  Xiaoguang MAO  Xi CHANG  

     
    LETTER-Software Engineering

      Publicized:
    2019/05/27
      Vol:
    E102-D No:9
      Page(s):
    1860-1864

    Existing fault localization techniques based on neural networks utilize the information of whether a statement is executed or not to identify suspicious statements potentially responsible for a failure. However, this information shows only the binary execution state of a statement and cannot show how important a statement is in executions. Consequently, it may degrade fault localization effectiveness. To address this issue, this paper proposes TFIDF-FL, which uses term frequency-inverse document frequency to identify the high or low degree of influence of a statement in an execution. Our empirical results on 8 real-world programs show that TFIDF-FL significantly improves fault localization effectiveness.
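
    As a rough sketch of the underlying idea (not necessarily the paper's exact weighting), each test execution can be treated as a "document" and each executed statement as a "term", so the standard tf-idf formula grades how influential a statement is in a given execution:

        # Sketch: tf-idf weight of a statement in one execution, treating each
        # execution as a "document" and each statement as a "term" (standard
        # tf-idf with +1 smoothing; the paper's exact formula may differ).
        import math

        def tfidf(stmt, execution, executions):
            tf = execution.count(stmt) / len(execution)      # term frequency
            df = sum(1 for e in executions if stmt in e)     # document frequency
            idf = math.log(len(executions) / (1 + df))       # inverse doc. freq.
            return tf * idf

        executions = [["s1", "s2", "s2", "s3"], ["s1", "s3"],
                      ["s1", "s3", "s4"], ["s1"]]
        print(tfidf("s2", executions[0], executions))  # frequent here, rare overall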

  • Spectrum-Based Fault Localization Framework to Support Fault Understanding Open Access

    Yong WANG  Zhiqiu HUANG  Yong LI  RongCun WANG  Qiao YU  

     
    LETTER-Software Engineering

      Publicized:
    2019/01/15
      Vol:
    E102-D No:4
      Page(s):
    863-866

    A spectrum-based fault localization technique (SBFL), which identifies fault location(s) in a buggy program by comparing the execution statistics of the program spectra of passed and failed executions, is a popular automatic debugging technique. However, the usefulness of SBFL is mainly affected by two factors: its accuracy and how well it supports fault understanding in practice. To address this issue, we propose an SBFL framework to support fault understanding. In the framework, we first localize a suspicious faulty module as the starting point for debugging and then generate a weighted fault propagation graph (WFPG) for the hypothesized faulty module, which weights the suspiciousness of the nodes to further perform block-level fault localization. To evaluate the proposed framework, we conduct a controlled experiment that compares two different module-level SBFL approaches and validates the effectiveness of the WFPG. The results of our preliminary experiments are promising.
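
    For concreteness, the suspiciousness scores such a framework builds on can come from any standard SBFL metric; the sketch below uses the well-known Ochiai coefficient over program spectra (illustrative only, not the paper's specific formula or the WFPG construction).

        # Sketch: Ochiai, one widely used SBFL suspiciousness metric, computed
        # from program spectra.
        import math

        def ochiai(ef, ep, nf):
            """ef/ep: failed/passed tests covering the statement;
            nf: failed tests not covering it."""
            denom = math.sqrt((ef + nf) * (ef + ep))
            return ef / denom if denom else 0.0

        # coverage[stmt] = (ef, ep, nf) gathered from the test suite
        coverage = {"s1": (3, 10, 0), "s2": (1, 12, 2), "s3": (0, 9, 3)}
        ranking = sorted(coverage, key=lambda s: ochiai(*coverage[s]), reverse=True)
        print(ranking)   # statements ordered by suspiciousness: ['s1', 's2', 's3']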

  • Spectrum-Based Fault Localization Using Fault Triggering Model to Refine Fault Ranking List

    Yong WANG  Zhiqiu HUANG  Rongcun WANG  Qiao YU  

     
    PAPER-Software Engineering

      Publicized:
    2018/07/04
      Vol:
    E101-D No:10
      Page(s):
    2436-2446

    Spectrum-based fault localization (SFL) is a lightweight approach that aims at helping debuggers identify the root causes of failures by measuring the suspiciousness of each program component being a fault and generating a hypothetical fault ranking list. Although SFL techniques have been shown to be effective, the faulty component in a buggy program cannot always be ranked at the top because of its complex fault triggering model, and it is extremely difficult to model the complex triggering models of all buggy programs. To solve this issue, we propose two simple fault triggering models (RIPRα and RIPRβ) and a refinement technique that improves the absolute ranking of faults based on the two fault triggering models by ruling out some higher-ranked components according to their fault triggering models. Intuitively, our approach is effective if a faulty component is ranked within the top k of the two fault ranking lists output by the two fault localization strategies. Experimental results show that our approach can significantly improve the absolute ranking of faults in the three cases.

  • The Impact of Information Richness on Fault Localization

    Yan LEI  Min ZHANG  Bixin LI  Jingan REN  Yinhua JIANG  

     
    LETTER-Software Engineering

      Publicized:
    2015/10/14
      Vol:
    E99-D No:1
      Page(s):
    265-269

    Many recent studies have focused on leveraging rich information types to increase the useful information available for improving fault localization effectiveness. However, they rarely investigate the impact of information richness on fault localization, and thus give little guidance on how to enrich information to improve localization effectiveness. This paper presents the first systematic study to fill this void. Our study chooses four representative information types and investigates the relationship between their richness and localization effectiveness. The results show that information richness related to execution frequency counts involves a high risk of degrading localization effectiveness, whereas backward slicing is effective in improving it.

  • Automatic Rectification of Processor Design Bugs Using a Scalable and General Correction Model

    Amir Masoud GHAREHBAGHI  Masahiro FUJITA  

     
    PAPER-Dependable Computing

      Vol:
    E97-D No:4
      Page(s):
    852-863

    This paper presents a method for the automatic rectification of design bugs in processors. Given a golden sequential instruction-set architecture model of a processor and its erroneous detailed cycle-accurate model at the micro-architecture level, we iteratively perform symbolic simulation and property checking combined with concrete simulation to detect the buggy location and its corresponding fix. We use the truth-table model of the function required for correction, which is a very general model. Moreover, we do not represent the truth table explicitly in the design; instead, we use only the required minterms, which are obtained from the output of our backend formal engine. This way, we avoid adding any new variables to represent the truth table, so our correction model scales even though the truth table could grow exponentially with its number of inputs. We have shown the effectiveness of our method on a complex out-of-order superscalar processor supporting atomic execution of instructions. Our method reduces the model size for correction by 6.0x and the total correction time by 12.6x, on average, compared to our previous work.
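
    The "required minterms only" idea can be pictured as follows: instead of materializing a truth table over n inputs (2^n rows), the patch function is represented by just the input patterns that the backend formal engine demands evaluate to 1. A hypothetical sketch:

        # Sketch of the "required minterms only" idea: the patch function is
        # kept as the set of input patterns that must evaluate to 1, instead
        # of a full 2^n truth table (illustrative; names are hypothetical).
        class PatchFunction:
            def __init__(self, n_inputs):
                self.n_inputs = n_inputs
                self.minterms = set()       # only patterns the engine demands

            def require(self, pattern):
                """Record a minterm (sequence of 0/1) from a counterexample."""
                assert len(pattern) == self.n_inputs
                self.minterms.add(tuple(pattern))

            def evaluate(self, inputs):
                return 1 if tuple(inputs) in self.minterms else 0

        patch = PatchFunction(n_inputs=32)   # a full table would need 2**32 rows
        patch.require([0, 1] * 16)
        print(patch.evaluate([0, 1] * 16))   # -> 1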

  • Deterministic Message Passing for Distributed Parallel Computing

    Xu ZHOU  Kai LU  Xiaoping WANG  Wenzhe ZHANG  Kai ZHANG  Xu LI  Gen LI  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E96-D No:5
      Page(s):
    1068-1077

    The nondeterminism of message-passing communication brings challenges to program debugging, testing, and fault tolerance. This paper proposes a novel deterministic message-passing implementation (DMPI) for parallel programs in distributed environments. DMPI is compatible with the standard MPI user interface and guarantees the reproducibility of messages with high performance. The basic idea of DMPI is to use logical time to resolve message races and control asynchronous transmissions, thereby eliminating the nondeterministic behaviors of the existing message-passing mechanism. We apply a buffering strategy to alleviate the performance slowdown caused by the mismatch between logical time and physical time. To avoid deadlocks introduced by the deterministic mechanisms, we also integrate DMPI with a lightweight deadlock checker to dynamically detect and resolve such deadlocks. We have implemented DMPI and evaluated it using the NPB benchmarks. The results show that DMPI guarantees determinism while incurring only modest runtime overhead (14% on average).
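
    The logical-time idea can be illustrated with a textbook Lamport-clock sketch: each message carries a logical timestamp, and the receiver delivers buffered messages in (timestamp, sender id) order, so races resolve the same way in every run. This is a minimal analogy, not DMPI's actual implementation.

        # Messages are buffered and delivered in (logical time, sender id)
        # order, independent of physical arrival order.
        import heapq

        class Process:
            def __init__(self, pid):
                self.pid, self.clock, self.inbox = pid, 0, []

            def send(self, dest, payload):
                self.clock += 1
                heapq.heappush(dest.inbox, (self.clock, self.pid, payload))

            def deliver(self):
                ts, sender, payload = heapq.heappop(self.inbox)
                self.clock = max(self.clock, ts) + 1    # Lamport clock update
                return sender, payload

        p0, p1, p2 = Process(0), Process(1), Process(2)
        p1.send(p0, "a"); p2.send(p0, "b")   # a race: arrival order may vary,
        print(p0.deliver(), p0.deliver())    # but delivery order is fixed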

  • Effective Fault Localization Approach Using Feedback

    Yan LEI  Xiaoguang MAO  Ziying DAI  Dengping WEI  

     
    PAPER-Software Engineering

      Vol:
    E95-D No:9
      Page(s):
    2247-2257

    At the software debugging stage, effective interaction between debugging engineers and fault localization techniques can greatly improve fault localization performance. However, most fault localization approaches ignore this interaction and merely utilize the information from testing. Because testing and fault localization have different goals, the lack of interaction may lead to inadequate information, which can substantially degrade fault localization performance. In addition, human work is costly and error-prone. It is therefore vital to study and simulate how debugging engineers apply their knowledge and experience to this interaction, so as to promote fault localization effectiveness and reduce their workload. Thus, this paper proposes an effective fault localization approach that simulates this interaction via feedback. Based on the results obtained from fault localization techniques, the approach uses test data generation techniques to automatically produce feedback for these fault localization techniques, and then iterates this process until a specific stopping condition is satisfied. Experiments on two standard benchmarks demonstrate a significant improvement of our approach over a promising technique, namely spectrum-based fault localization.

  • Reticella: An Execution Trace Slicing and Visualization Tool Based on a Behavior Model

    Kunihiro NODA  Takashi KOBAYASHI  Shinichiro YAMAMOTO  Motoshi SAEKI  Kiyoshi AGUSA  

     
    PAPER

      Vol:
    E95-D No:4
      Page(s):
    959-969

    Program comprehension using dynamic information is one of the key tasks of software maintenance. Software visualization with sequence diagrams is a promising technique for helping developers comprehend the behavior of object-oriented systems effectively. Many tools support the automatic generation of a sequence diagram from execution traces. However, it is still difficult to understand system behavior, because sequence diagrams automatically generated from massive execution traces tend to be beyond a developer's capacity. In this paper, we propose an execution trace slicing and visualization method. The method computes slices based on a behavior model that captures dependencies derived from static and dynamic analysis, and it supports various program features, including exceptions and multi-threading. We also introduce our tool, which performs the proposed slice calculation on the Eclipse platform. We show the applicability of the method by applying the tool to two Java programs as case studies. The results confirm the effectiveness of our method for understanding the behavior of object-oriented systems.

  • An Automatic Method of Mapping I/O Sequences of Chip Execution onto High-level Design for Post-Silicon Debugging

    Yeonbok LEE  Takeshi MATSUMOTO  Masahiro FUJITA  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E94-A No:7
      Page(s):
    1519-1529

    Post-silicon debugging has become even more critical for shortening time-to-market, as more bugs escape pre-silicon verification with increasing design scale and complexity. Post-silicon debugging is generally harder than pre-silicon debugging because of the limited observability and controllability of internal signal values. Conventionally, simulation of corresponding low-level designs, such as RTL or gate-level designs, has been used to obtain observability and controllability, which is inefficient for contemporary large designs. In this paper, we introduce a post-silicon debugging approach that uses simulation of high-level designs instead of low-level ones. To realize such an approach, we propose an I/O sequence mapping method that converts the I/O sequences of chip executions into those of the corresponding high-level design. First, we give a formal definition of I/O sequence mapping and the relevant notions. Then, based on this definition, we propose an I/O sequence mapping method that executes FSMs representing the interface specifications of the target design. We also propose an implementation of the method for further efficiency. We demonstrate that the proposed method can be effectively applied to several practical design examples with various interfaces.
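
    The FSM-based mapping can be pictured with a toy request/acknowledge interface: the chip-level event sequence is run through an FSM derived from the interface specification, and each completed handshake is emitted as one high-level I/O event. The interface below is hypothetical, not one of the paper's examples.

        # Toy FSM sketch of I/O sequence mapping: signal-level events in,
        # high-level transactions out.
        def map_io(events):
            state, addr, txns = "IDLE", None, []
            for ev, val in events:
                if state == "IDLE" and ev == "req":
                    state, addr = "WAIT_ACK", val
                elif state == "WAIT_ACK" and ev == "ack":
                    txns.append(("READ", addr, val))   # high-level I/O event
                    state = "IDLE"
            return txns

        print(map_io([("req", 0x10), ("ack", 99), ("req", 0x14), ("ack", 7)]))
        # -> [('READ', 16, 99), ('READ', 20, 7)]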

  • HPChecker: An AMBA AHB On-Chip Bus Protocol Checker with Efficient Verification Mechanisms

    Liang-Bi CHEN  Jiun-Cheng JU  Chien-Chou WANG  Ing-Jer HUANG  

     
    PAPER-Multiple-Valued VLSI Technology

      Vol:
    E93-D No:8
      Page(s):
    2100-2108

    Bus-based system-on-a-chip (SoC) design has become the major integration methodology for shortening SoC design time. The main challenge is how to verify on-chip bus protocols efficiently. Although traditional simulation-based bus protocol monitors can check whether bus signals obey the bus protocol, they lack an efficient bus protocol verification environment, such as FPGA-level or chip-level verification. To overcome this shortcoming, we propose a rule-based synthesizable AMBA AHB on-chip bus protocol checker, which contains 73 AHB on-chip bus protocol rules for checking AHB bus signal behaviors, together with two corresponding verification mechanisms, an error reference table (ERT) and a windowed trace buffer, to shorten verification time.
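
    As a flavor of the kind of property such rules encode, AHB requires a master to hold address and control stable while the slave extends a transfer with wait states (HREADY low). A software sketch of one such rule over a sampled trace follows; it is illustrative only, since HPChecker is synthesizable hardware and its exact rule encoding is not shown here.

        # Sketch of a rule-based bus-protocol check (not HPChecker's rules):
        # after a wait state (HREADY low), address and control must not change.
        def check_stable_during_wait(samples):
            """samples: per-cycle dicts of AHB signal values."""
            errors, prev = [], None
            for cycle, s in enumerate(samples):
                if prev is not None and not prev["HREADY"]:
                    for sig in ("HADDR", "HTRANS"):
                        if s[sig] != prev[sig]:
                            errors.append((cycle, f"{sig} changed during wait state"))
                prev = s
            return errors

        trace = [{"HREADY": 1, "HADDR": 0x40, "HTRANS": 2},
                 {"HREADY": 0, "HADDR": 0x44, "HTRANS": 3},
                 {"HREADY": 1, "HADDR": 0x48, "HTRANS": 3}]  # changed after wait
        print(check_stable_during_wait(trace))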

  • Hardware Design Verification Using Signal Transitions and Transactions

    Nobuyuki OHBA  Kohji TAKANO  

     
    PAPER

      Vol:
    E89-A No:4
      Page(s):
    1012-1017

    Hardware prototyping has been widely used for ASIC/SoC verification. This paper proposes a new hardware design verification method, the Transition and Transaction Tracer (TTT), which probes and records the signals of interest for long periods (hours, days, or even weeks) without a break. It compresses the captured data in real time and stores it in memory in a state-transition format. Since it records all transitions, it is effective for finding and fixing errors, even ones that occur rarely or intermittently. It can also be programmed to generate a trigger for a logic analyzer when it detects certain transitions, which is useful in debugging situations where the engineer has trouble finding an appropriate trigger condition to pinpoint the source of errors. We have used the method in hardware prototyping for ASIC/SoC development for two years and found it useful for system-level tests, particularly long-running tests.
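
    The state-transition storage format amounts to recording a sample only when the probed signals change, which is why weeks of mostly idle activity can fit in memory. A minimal sketch of the idea (illustrative, not the TTT implementation):

        # Keep (time, new_state) pairs only when the probed signals change.
        def compress(samples):
            transitions, prev = [], object()   # sentinel != any first sample
            for t, state in enumerate(samples):
                if state != prev:
                    transitions.append((t, state))
                    prev = state
            return transitions

        print(compress(["idle"] * 5 + ["req", "ack"] + ["idle"] * 4))
        # -> [(0, 'idle'), (5, 'req'), (6, 'ack'), (7, 'idle')]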

  • Detection of Summative Global Predicates

    Loon-Been CHEN  I-Chen WU  

     
    LETTER-Theory and Models of Software

      Vol:
    E86-D No:5
      Page(s):
    976-980

    In many distributed systems, tokens are a fundamental tool for managing resources shared by processes. Monitoring tokens has therefore become a significant problem in developing distributed programs. This paper formulates the problems of monitoring tokens in terms of detecting special global predicates, called summative global predicates. Several algorithms for detecting various summative global predicates are developed, and their time complexities are discussed.

  • Software Profit Model under Imperfect Debugging and Optimal Software Release Policy

    Chong-Hyung LEE  Kyung-Hyun NAM  Dong-Ho PARK  

     
    PAPER-Software Engineering

      Vol:
    E85-D No:5
      Page(s):
    833-838

    This paper considers a software reliability model that allows for two types of imperfect debugging at each failure of the software system. In the first type, the fault that causes the failure is imperfectly debugged without altering the fault content of the software system. In the second type, the fault is not only imperfectly debugged, but a new fault is also generated and introduced into the system. The probability of perfect debugging is assumed to be an increasing function of the number of debuggings performed prior to the current failure. Based on this software reliability model, we consider three profit models for determining the optimal software release times that maximize the expected software profit. The models consider: (1) a constant life cycle; (2) a random life cycle; and (3) a random life cycle with a penalty cost imposed when the software is delivered late. The optimal release times are shown to be finite and unique. Numerical examples are provided for illustration.

  • Markovian Software Availability Measurement Based on the Number of Restoration Actions

    Koichi TOKUNO  Shigeru YAMADA  

     
    PAPER

      Vol:
    E83-A No:5
      Page(s):
    835-841

    In this paper, we construct a software availability model that considers the number of restoration actions. We correlate the failure and restoration characteristics of the software system with the cumulative number of corrected faults. Furthermore, we consider an imperfect debugging environment in which the detected faults are not always corrected and removed from the system. The time-dependent behavior of the system, alternating between up and down states, is described by a Markov process. From this model, we derive quantitative measures for software availability assessment that take the number of restoration actions into account. Finally, we show numerical examples of software availability analysis.

  • Redundant Exception Check Elimination by Assertions

    Norio SATO  

     
    PAPER-Communication Software

      Vol:
    E81-B No:10
      Page(s):
    1881-1893

    Exception handling not only increases program readability, but also provides an effective means to check and locate errors, so it increases productivity in large-scale program development. Some typical and frequent program errors, such as out-of-range indexing, null dereferencing, and narrowing violations, cause exceptions that are otherwise unlikely to be caught. Moreover, the absence of a catcher for exceptions thrown by API procedures also causes uncaught exceptions. This paper discusses how the exception handling mechanism should be supported by the compiler together with the operating system and debugging facilities. The mechanism is implemented in the compiler by inserting inline check code and accompanying propagation code. One drawback of this approach is the runtime overhead imposed by the inline check code, which should therefore be optimized. However, there has been little discussion of appropriate optimization techniques and their efficiency in the literature. Therefore, a new solution is proposed that formulates the optimization problem as common assertion elimination (CAE). Assertions consist of check code and useful branch conditions; the latter are effective for removing redundant check code. The redundancy can be checked and removed precisely with a forward iterative data-flow analysis. Even in performance-sensitive applications such as telecommunications software, figures obtained with a CHILL optimizing compiler indicate that CAE optimizes the code well enough to be competitive with check-suppressed code.

  • High-Level VLSI Design Specification Validation Using Algorithmic Debugging

    Jiro NAGANUMA  Takeshi OGURA  Tamio HOSHINO  

     
    PAPER

      Vol:
    E77-A No:12
      Page(s):
    1988-1998

    This paper proposes a new environment for high-level VLSI design specification validation using "Algorithmic Debugging" and evaluates its benefits on three significant examples (a protocol processor, an 8-bit CPU, and a Prolog processor). A design is specified at a high level using the structured analysis (SA) method, which is useful for analyzing and understanding the functionality to be realized. The specification written in SA is translated into a logic programming language and simulated in it. The errors (which terminate with an incorrect output in the simulation) included in the three large examples are efficiently located by answering just a few queries from the algorithmic debugger. The number of interactions between the designer and the debugger is reduced by a factor of ten to a hundred compared with conventional simulation-based validation methodologies. The correct SA specification can be automatically translated into a register transfer level (RTL) specification suitable for logic synthesis. In this environment, a designer is freed from the tedious task of debugging an RTL specification and can concentrate on the design itself. This environment promises to be an important step towards efficient high-level VLSI design specification validation.
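
    Algorithmic debugging in miniature works as sketched below: walk the computation tree, ask an oracle (here a table standing in for the designer's answers) whether each sub-result is correct, and report the node that is wrong while all of its children are right. This is the classic Shapiro-style scheme, not the paper's SA-based tool; all names are hypothetical.

        # Classic algorithmic debugging over a computation tree.
        class Node:
            def __init__(self, label, result, children=()):
                self.label, self.result, self.children = label, result, children

        def locate_bug(node, oracle):
            if oracle(node):                 # result correct: subtree is fine
                return None
            for child in node.children:
                culprit = locate_bug(child, oracle)
                if culprit:
                    return culprit
            return node                      # wrong here, children fine: bug here

        # Toy run: 'alu_add' computes 2+2=5 although its operand fetches are fine.
        tree = Node("cpu_step", 5, [Node("fetch_a", 2), Node("fetch_b", 2),
                                    Node("alu_add", 5)])
        expected = {"cpu_step": 4, "fetch_a": 2, "fetch_b": 2, "alu_add": 4}
        oracle = lambda n: n.result == expected[n.label]
        print(locate_bug(tree, oracle).label)   # -> "alu_add"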

Showing 1-20 of 23 hits