
Keyword Search Result

[Keyword] CASE (82 hits)

Showing results 21-40 of 82

  • A New Non-Uniform Weight-Updating Beamformer for LEO Satellite Communication

    Jie LIU  Zhuochen XIE  Huijie LIU  Zhengmin ZHANG  

    LETTER-Digital Signal Processing
    Vol: E99-A No:9  Page(s): 1708-1711

    In this paper, a new non-uniform weight-updating scheme for adaptive digital beamforming (DBF) is proposed. Its key feature is that the effective working range of the beamformer is extended and the computational complexity is reduced by introducing robust DBF based on worst-case performance optimization. The robust parameter for each weight update is chosen by analyzing the changing rate of the Direction of Arrival (DOA) of the desired signal in LEO satellite communication. Simulation results demonstrate the improved performance of the new Non-Uniform Weight-Updating Beamformer (NUWUB).
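
    As background, a standard way to pose worst-case robust DBF (a common formulation in the literature; the letter's exact scheme may differ) is

        \min_{w} \; w^{H}\hat{R}w \quad \text{s.t.} \quad |w^{H}a| \ge 1 \quad \text{for all } a \in \{\bar{a}+e : \|e\| \le \varepsilon\},

    which reduces to the convex constraint w^{H}\bar{a} \ge \varepsilon\|w\| + 1. In a non-uniform scheme, as we read the abstract, the robust parameter \varepsilon is re-chosen at each weight update from the DOA changing rate, so slow DOA drift permits sparser updates without losing the desired signal.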

  • Transmission Properties of Electromagnetic Wave in Pre-Cantor Bar: Scaling and Double-Exponentiality

    Ryota SATO  Keimei KAINO  Jun SONODA  

    BRIEF PAPER
    Vol: E99-C No:7  Page(s): 801-804

    The pre-Cantor bar, a one-dimensional fractal medium, consists of two kinds of materials. Using transmission-line theory, we explain the double-exponential behavior of the minimum of the transmittance as a function of the stage number n, and obtain formulae for two kinds of scaling behaviors of the transmittance. From numerical calculations for n=1 to 5, we find that the maximum of the field amplitudes at resonance, which increases double-exponentially with n, is well estimated by the theoretical upper bound. We show that after sorting the field amplitudes at the resonance frequencies of the 5th stage, their distribution is a staircase function of the index.
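
    As an illustration of the transmission-line (characteristic-matrix) calculation described above, the following minimal sketch computes the normal-incidence transmittance of a stage-n pre-Cantor bar; the refractive indices, total length and vacuum surroundings are illustrative assumptions, not the paper's values.

        import numpy as np

        def cantor_layers(stage):
            # Segment k is material A iff its ternary expansion has no digit 1,
            # i.e. material A occupies the stage-n pre-Cantor set.
            return ['A' if '1' not in np.base_repr(k, base=3) else 'B'
                    for k in range(3 ** stage)]

        def transmittance(freq_hz, stage, n_a=3.0, n_b=1.0, total_len=1.0):
            c = 299_792_458.0
            d = total_len / 3 ** stage            # one-segment thickness
            m = np.eye(2, dtype=complex)
            for mat in cantor_layers(stage):
                n = n_a if mat == 'A' else n_b
                delta = 2 * np.pi * freq_hz * n * d / c   # phase thickness
                m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
            p0 = ps = 1.0                         # vacuum on both sides
            t = 2 * p0 / (m[0, 0] * p0 + m[0, 1] * p0 * ps
                          + m[1, 0] + m[1, 1] * ps)
            return (ps / p0) * abs(t) ** 2

    Scanning freq_hz over a band for stages 1 to 5 reproduces the kind of transmittance minima whose double-exponential decay the paper analyzes.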

  • Design Optimization for Process-Variation-Tolerant 22-nm FinFET-Based 6-T SRAM Cell with Worst-Case Sampling Method

    Sangheon OH  Changhwan SHIN  

    BRIEF PAPER
    Vol: E99-C No:5  Page(s): 541-543

    To find the optimal design for alleviating the effect of random variations on an SRAM cell, a worst-case sampling method is used. Based on a quantitative analysis using this method, optimal designs for a process-variation-tolerant 22-nm FinFET-based 6-T SRAM cell are proposed and implemented through cell layouts and dual-threshold-voltage designs.
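
    The brief paper does not give implementation details, but the essence of a worst-case sampling method can be sketched as follows; the Gaussian variation model, the six-transistor dimension and the metric are hypothetical stand-ins for the paper's FinFET SRAM models.

        import numpy as np

        def worst_case_sample(metric, n_samples=10_000, sigma=1.0, seed=0):
            # Draw random process-variation vectors (one entry per transistor
            # of the 6-T cell) and keep the sample that minimizes the metric,
            # e.g. an estimate of the static noise margin.
            rng = np.random.default_rng(seed)
            samples = rng.normal(0.0, sigma, size=(n_samples, 6))
            values = np.apply_along_axis(metric, 1, samples)
            worst = np.argmin(values)
            return samples[worst], values[worst]

    Candidate designs can then be compared by the worst metric value each yields under the same variation samples.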

  • Distributing Garbage Collection Costs over Multiple Requests to Improve the Worst-Case Performance of Hybrid Mapping Schemes

    Ilhoon SHIN  

    PAPER-Software System
    Vol: E97-D No:11  Page(s): 2844-2851

    NAND-based block devices such as memory cards and solid-state drives embed a flash translation layer (FTL) to emulate the standard block device interface and its features. The overall performance of these devices is determined mainly by the efficiency of the FTL scheme, so intensive research has been performed to improve the average performance of FTL schemes. However, worst-case performance has rarely been considered. The present study aims to improve the worst-case performance without affecting the average performance. The central concept is to distribute the garbage collection cost, which is the main source of performance fluctuations, over multiple requests. The proposed scheme comprises three modules: i) anticipated partial log block merging, to distribute the garbage collection time; ii) reclaiming clean pages by moving valid pages, to bound the worst-case garbage collection time instead of performing repeated block merges; and iii) victim selection based on the valid page count in a victim log and the required clean page count, to avoid subsequent garbage collections. A trace-driven simulation showed that the worst-case performance was improved by up to 1,300% using the proposed garbage collection scheme, while the average performance remained similar to that of the original scheme. This improvement was achieved without additional memory overhead.
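
    A toy model of module (i), distributing merge work over requests, might look like the following; the page-budget constant and the queue-based accounting are illustrative assumptions, not the paper's exact mechanism.

        from collections import deque

        class IncrementalGC:
            """Garbage-collection work (valid-page copies) is queued and paid
            off a few pages per write request, bounding per-request latency."""
            def __init__(self, pages_per_request=4):
                self.budget = pages_per_request
                self.pending = deque()

            def schedule_merge(self, valid_pages):
                # Instead of merging a whole log block at once, enqueue its
                # valid-page copies to be spread over future requests.
                self.pending.extend(range(valid_pages))

            def handle_write(self):
                copies = 0
                while self.pending and copies < self.budget:
                    self.pending.popleft()       # copy one valid page
                    copies += 1
                return copies                    # extra latency of this request

    The worst-case latency of a request is then bounded by the budget rather than by the size of a full block merge.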

  • New Metrics for Prioritized Interaction Test Suites

    Rubing HUANG  Dave TOWEY  Jinfu CHEN  Yansheng LU  

    PAPER-Software Engineering
    Vol: E97-D No:4  Page(s): 830-841

    Combinatorial interaction testing has been well studied in recent years and widely applied in practice. It generally aims at generating an effective test suite (an interaction test suite) in order to identify faults that are caused by parameter interactions. Due to constraints in practical applications (e.g. limited testing resources), for example in combinatorial interaction regression testing, prioritized interaction test suites (called interaction test sequences) are often employed. Consequently, many strategies have been proposed to guide interaction test suite prioritization. It is therefore important to be able to evaluate the different interaction test sequences created by different strategies. A well-known metric is the Average Percentage of Combinatorial Coverage (APCCλ for short), which assesses the rate at which a given interaction test sequence S covers the interactions of strength λ (the level of interaction among parameters). However, APCCλ has two drawbacks: firstly, it imposes two requirements (that all test cases in S be executed, and that all possible λ-wise parameter value combinations be covered by S); and secondly, it can only use a single strength λ (rather than multiple strengths) to evaluate the interaction test sequence, which means that it is not a comprehensive evaluation. To overcome the first drawback, we propose an enhanced metric, Normalized APCCλ (NAPCC), to replace APCCλ. To overcome the second drawback, we propose three new metrics: the Average Percentage of Strengths Satisfied (APSS); the Average Percentage of Weighted Multiple Interaction Coverage (APWMIC); and the Normalized APWMIC (NAPWMIC). These metrics comprehensively assess a given interaction test sequence by considering interaction coverage at different strengths. Empirical studies show that the proposed metrics can be used to distinguish different interaction test sequences, and hence to compare different test prioritization strategies.
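
    For intuition, a simplified APCCλ-style computation (our reading of the metric, not the authors' exact definition) is sketched below; domains is a list of per-parameter value lists, and the denominator assumes every λ-wise combination is coverable, which is precisely the requirement that NAPCC relaxes.

        from itertools import combinations
        from math import prod

        def apcc(sequence, domains, lam):
            """Average fraction of lam-wise value combinations covered after
            each test case of a prioritized test sequence."""
            positions = list(combinations(range(len(domains)), lam))
            total = sum(prod(len(domains[p]) for p in pos) for pos in positions)
            covered, rates = set(), []
            for test in sequence:
                for pos in positions:
                    covered.add((pos, tuple(test[p] for p in pos)))
                rates.append(len(covered) / total)
            return sum(rates) / len(rates)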

  • An Average-Case Efficient Algorithm on Testing the Identity of Boolean Functions in Trace Representation

    Qian GUO  Haibin KAN  

    LETTER-Fundamentals of Information Systems
    Vol: E97-D No:3  Page(s): 583-588

    In this paper, we present an average-case efficient algorithm to resolve the problem of determining whether two Boolean functions in trace representation are identical. Firstly, we introduce a necessary and sufficient condition for null Boolean functions in trace representation, which can be viewed as a generalization of the well-known additive Hilbert-90 theorem. Based on this condition, we propose an algorithmic method with preprocessing to address the original problem. The worst-case complexity of the algorithm is still exponential; its average-case performance, however, is much better. We prove that the expected complexity of the refined procedure is O(n) if the coefficients of the input functions are chosen i.i.d. according to the uniform distribution over F_{2^n}; it therefore performs well in practice.
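
    For reference, the classical additive Hilbert-90 theorem, of which the paper's null-function condition is a generalization, states that for \beta \in F_{2^n}

        \mathrm{Tr}_{F_{2^n}/F_{2}}(\beta) = 0 \quad \Longleftrightarrow \quad \beta = \gamma^{2} + \gamma \ \text{for some } \gamma \in F_{2^n}.

    Identity testing then reduces to deciding whether the sum (in characteristic 2) of the two trace representations is the null Boolean function.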

  • Worst Case Response Time Analysis for Messages in Controller Area Network with Gateway

    Yong XIE  Gang ZENG  Yang CHEN  Ryo KURACHI  Hiroaki TAKADA  Renfa LI  

    PAPER-Software System
    Vol: E96-D No:7  Page(s): 1467-1477

    In modern automobiles, the Controller Area Network (CAN) is widely used in different subsystems that are interconnected by gateways. While a gateway is necessary to integrate different electronic subsystems, it complicates the analysis of the Worst Case Response Time (WCRT) of CAN messages, which is critical from the safety point of view. In this paper, we first analyze the challenges of WCRT analysis for messages in gateway-interconnected CANs. Then, based on the existing WCRT analysis method for a single CAN, we propose, for non-gateway messages, a new WCRT analysis method that uses two new definitions to analyze the interfering delay of sporadically arriving gateway messages. Furthermore, for gateway messages we adopt a division approach in which the end-to-end WCRT analysis of a gateway message is transformed into a situation similar to that of non-gateway messages. Finally, the proposed method is extended to CANs with different bandwidths. The proposed method is proved to be safe, and experimental results demonstrate its effectiveness through comparison with a full-space-searching-based simulator and application to a real message set.
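
    For context, the classic single-bus CAN WCRT analysis that such methods build on computes, for a message m with transmission time C_m, queuing jitter J_m and blocking B_m from an already-started lower-priority frame,

        R_m = J_m + w_m + C_m, \qquad w_m = B_m + \sum_{k \in hp(m)} \left\lceil \frac{w_m + J_k + \tau_{bit}}{T_k} \right\rceil C_k,

    where w_m is the smallest solution of the recurrence, hp(m) is the set of higher-priority messages, T_k their periods and \tau_{bit} one bit time. The gateway analysis above must additionally bound the interfering delay of sporadically forwarded gateway messages, which this single-bus recurrence alone does not capture.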

  • An Agent-Based Expert System Architecture for Product Return Administration

    Chen-Shu WANG  

    PAPER-Artificial Intelligence, Data Mining
    Vol: E96-D No:1  Page(s): 73-80

    Product return is a critical but controversial issue. To deal with this ill-defined return problem, businesses must improve their information transparency in order to administrate the product return behaviour of their end users. This study proposes an intelligent return administration expert system (iRAES) to provide product return forecasting and decision support for returned product administration. The iRAES consists of two intelligent agents that adopt a hybrid data mining algorithm. The return diagnosis agent generates different alarms for certain types of product return, based on forecasts of the return possibility. The return recommender agent is implemented on the basis of case-based reasoning, and provides the return centre clerk with recommendations for returned product administration. We present a 3C-iShop scenario to demonstrate the feasibility and efficiency of the iRAES architecture. Our experiments identify a particularly interesting return, for which iRAES generates a recommendation for returned product administration. On average, iRAES decreases the effort required to generate a recommendation by 70% compared to previous return administration systems, and improves return decision support performance by 37%. iRAES is designed to accelerate product return administration and improve the performance of product return knowledge management.

  • Scalable Privacy-Preserving t-Repetition Protocol with Distributed Medical Data

    Ji Young CHUN  Dowon HONG  Dong Hoon LEE  Ik Rae JEONG  

    PAPER-Cryptography and Information Security
    Vol: E95-A No:12  Page(s): 2451-2460

    Finding rare cases in medical data is important when hospitals or research institutes want to identify rare diseases. To extract meaningful information from a large amount of sensitive medical data, privacy-preserving data mining techniques can be used. A privacy-preserving t-repetition protocol finds the elements that exactly t parties out of n have in common in their datasets, without revealing the private datasets; it can thus be used on distributed medical data to find common cases with a high t as well as rare cases with a low t. In 2011, Chun et al. suggested a generic set operation protocol that can be used to find t-repeated elements. In this paper, we first show that Chun et al.'s protocol becomes infeasible for calculating t-repeated elements as the number of users grows; that is, its computational and communicational complexities grow exponentially with the number of users. We then suggest a protocol that calculates t-repeated elements in time polynomial in the number of users.
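
    The target functionality, stripped of privacy, is simple to state; the following non-private baseline (ours, for illustration only) computes what the protocol computes without hiding the datasets.

        from collections import Counter

        def t_repeated(datasets, t):
            """Elements held by exactly t of the n parties."""
            counts = Counter()
            for ds in datasets:
                counts.update(set(ds))      # each party counted at most once
            return {x for x, c in counts.items() if c == t}

    The cryptographic challenge addressed in the paper is computing this set with complexity polynomial in the number of parties while each party's dataset stays private.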

  • Statistical Learning Theory of Quasi-Regular Cases

    Koshi YAMADA  Sumio WATANABE  

    PAPER-General Fundamentals and Boundaries
    Vol: E95-A No:12  Page(s): 2479-2487

    Many learning machines, such as normal mixtures and layered neural networks, are not regular but singular statistical models, because the map from a parameter to a probability distribution is not one-to-one. Conventional statistical asymptotic theory cannot be applied to such learning machines because the likelihood function cannot be approximated by any normal distribution. Recently, a new statistical theory has been established based on algebraic geometry, which clarified that the generalization and training errors are determined by two birational invariants: the real log canonical threshold and the singular fluctuation. However, their concrete values remain unknown in general. In the present paper, we propose a new concept in statistical learning theory, the quasi-regular case. A quasi-regular case is not regular but singular; nevertheless, it has the same property as a regular case. In fact, we prove that in a quasi-regular case the two birational invariants are equal to each other, so that the symmetry of the generalization and training errors holds. Moreover, the concrete values of the two birational invariants are explicitly obtained, which makes the quasi-regular case useful for studying statistical learning theory.
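
    As we read the underlying singular learning theory (our paraphrase, with G_n the generalization error, T_n the training error, \lambda the real log canonical threshold and \nu the singular fluctuation), the asymptotics are

        E[G_n] \approx \frac{\lambda}{n}, \qquad E[T_n] \approx \frac{\lambda - 2\nu}{n},

    so the quasi-regular equality \lambda = \nu yields the symmetry E[G_n] \approx -E[T_n], matching the regular case, where \lambda = \nu = d/2 for a d-dimensional parameter.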

  • Application of Markov Chain Monte Carlo Random Testing to Test Case Prioritization in Regression Testing

    Bo ZHOU  Hiroyuki OKAMURA  Tadashi DOHI  

    PAPER
    Vol: E95-D No:9  Page(s): 2219-2226

    This paper proposes a test case prioritization method for regression testing. The large test suites executed in regression testing often incur a large testing cost, so it is important to reduce the number of test cases executed by following a prioritized test sequence. In this paper, we apply the Markov chain Monte Carlo random testing (MCMC-RT) scheme, a promising approach to effectively generating test cases in the framework of random testing. To apply MCMC-RT to test case prioritization, we consider a coverage-based distance and develop an MCMC-RT test case prioritization algorithm using this distance. Experiments show that MCMC-RT prioritization is consistently comparable to coverage-based adaptive random testing (ART) prioritization techniques while requiring much less time.
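
    A minimal sketch of the idea, under our own simplifying assumptions (Jaccard distance on coverage sets, a target density proportional to the distance from already-scheduled tests, and a fixed-length Metropolis walk), is:

        import random

        def jaccard_dist(a, b):
            union = len(a | b)
            return 1.0 if union == 0 else 1 - len(a & b) / union

        def mcmc_prioritize(tests, coverage, steps=50, seed=0):
            """Order tests so each next test is sampled, by a Metropolis walk,
            in favor of candidates far (in coverage distance) from the tests
            already scheduled."""
            rng = random.Random(seed)
            remaining = list(tests)
            order = [remaining.pop(rng.randrange(len(remaining)))]
            while remaining:
                def score(t):
                    return min(jaccard_dist(coverage[t], coverage[s])
                               for s in order) + 1e-9
                cur = rng.choice(remaining)
                for _ in range(steps):
                    cand = rng.choice(remaining)
                    if rng.random() < min(1.0, score(cand) / score(cur)):
                        cur = cand          # Metropolis acceptance
                remaining.remove(cur)
                order.append(cur)
            return order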

  • 3-Way Software Testing with Budget Constraints

    Soumen MAITY  

    LETTER
    Vol: E95-D No:9  Page(s): 2227-2231

    In most software development environments, the time, computing and human resources needed to test a component are strictly limited. To deal with such situations, this paper proposes a method of creating the best possible test suite (covering the maximum number of 3-tuples) within a fixed number of test cases.
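
    One simple way to realize "best test suite within a budget" is a greedy sweep that repeatedly adds the test case covering the most new 3-tuples; this is our illustrative baseline, not necessarily the letter's construction, and the exhaustive candidate pool is feasible only for small models.

        from itertools import combinations, product

        def greedy_3way(domains, budget):
            """Pick up to `budget` tests, each maximizing newly covered
            3-way value combinations."""
            positions = list(combinations(range(len(domains)), 3))
            covered, suite = set(), []
            candidates = list(product(*domains))
            for _ in range(budget):
                best, best_gain = None, 0
                for t in candidates:
                    gain = sum(1 for pos in positions
                               if (pos, tuple(t[p] for p in pos)) not in covered)
                    if gain > best_gain:
                        best, best_gain = t, gain
                if best is None:
                    break                    # everything already covered
                suite.append(best)
                for pos in positions:
                    covered.add((pos, tuple(best[p] for p in pos)))
            return suite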

  • A Function Interaction Testing by Reusing Characterized Test Cases

    Youngsul SHIN  Woo Jin LEE  

    LETTER
    Vol: E95-D No:9  Page(s): 2232-2234

    This letter proposes a method for reusing unit test cases, which characterize the internal behavior of a called function, to enhance the capability of automatic test case generation. Existing test case generation tools have difficulty finding solutions when the source code has a deep call structure. In our approach, the complex call structure is simplified by reusing the unit test cases of called functions: since unit test cases represent the characteristics of a called function, its internal behavior is replaced by those test cases. This approach is applicable to existing test tools, simplifying the generation process and enhancing their capabilities.

  • Predicate Argument Structure Analysis for Use Case Description Modeling

    Hironori TAKEUCHI  Taiga NAKAMURA  Takahira YAMAGUCHI  

    PAPER-Artificial Intelligence, Data Mining
    Vol: E95-D No:7  Page(s): 1959-1968

    In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily, identifying inconsistencies, and maintaining traceability. In this research, we focus on requirements documents such as use cases and consider how to create models from use case descriptions written in unformatted text. In the model construction, we propose several semantic constraints based on the features of use cases and apply them in a predicate argument structure analysis that assigns semantic labels to actors and actions. We show that this approach can assign semantic labels without enhancing existing general lexical resources such as case frame dictionaries, allowing a less language-dependent model construction architecture. Using the constructed model, we consider a system for quality analysis of use cases and for automated test case generation that maintains traceability between document sets. With a prototype system, we evaluated the reuse of existing use cases and automatically generated test case steps from real-world use cases in the development of a system built on a packaged application. Based on this evaluation, we show how to construct models with high precision from English and Japanese use case data. We could also generate good test cases for about 90% of the real use cases after manually improving the descriptions based on feedback from the quality analysis system.

  • Trade-Off Analysis between Concerns Based on Aspect-Oriented Requirements Engineering

    Abelyn Methanie R. LAURITO  Shingo TAKADA  

    PAPER
    Vol: E95-D No:4  Page(s): 1003-1011

    The identification of functional and non-functional concerns is an important activity during requirements analysis. However, there may be conflicts between the identified concerns, and these must be discovered and resolved through trade-off analysis. Aspect-Oriented Requirements Engineering (AORE) has trade-off analysis as one of its goals, but most AORE approaches do not actually support it; they focus on describing concerns and generating their composition. This paper proposes an approach to trade-off analysis based on AORE, using use cases and the Requirements Conflict Matrix (RCM) to represent compositions. The RCM shows the positive or negative effect of non-functional concerns on use cases and on other non-functional concerns. Our approach is implemented in a tool called E-UCEd (Extended Use Case Editor). We also show the results of evaluating our tool.

  • Finding Incorrect and Missing Quality Requirements Definitions Using Requirements Frame

    Haruhiko KAIYA  Atsushi OHNISHI  

    PAPER
    Vol: E95-D No:4  Page(s): 1031-1043

    Defining quality requirements completely and correctly is more difficult than defining functional requirements, because stakeholders do not state most quality requirements explicitly. We thus propose a method to measure a requirements specification and identify the amount of quality requirements it contains. We also propose a method to recommend quality requirements to be defined in such a specification. We expect that stakeholders can identify missing and unnecessary quality requirements when the measured quality requirements differ from the recommended ones. We use a semi-formal language called X-JRDL to represent requirements specifications because it is suitable for analyzing quality requirements. We applied our methods to a requirements specification and found that they contribute to defining quality requirements more completely and correctly.

  • Image Inpainting Based on Adaptive Total Variation Model

    Zhaolin LU  Jiansheng QIAN  Leida LI  

    LETTER-Image
    Vol: E94-A No:7  Page(s): 1608-1612

    In this letter, a novel adaptive total variation (ATV) model is proposed for image inpainting. The classical TV model is a partial differential equation (PDE)-based technique. While the TV model preserves image edges well, it has drawbacks such as the staircase effect in the inpainted image and a slow convergence rate. By analyzing the diffusion mechanism of the TV model and introducing a new edge detection operator named difference curvature, we propose a novel ATV inpainting model. The proposed ATV model diffuses the image information smoothly and quickly; that is, it not only eliminates the staircase effect but also accelerates convergence. Experimental results demonstrate the effectiveness of the proposed scheme.
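
    For orientation, a minimal TV-flow inpainting loop is sketched below; it evolves only the damaged region with the plain curvature term, whereas the paper's ATV model additionally modulates the diffusion with its difference-curvature edge detector, which is omitted here.

        import numpy as np

        def tv_inpaint(img, missing, iters=500, dt=0.2, eps=1e-6):
            """img: 2-D array; missing: boolean mask of pixels to fill."""
            u = img.astype(float).copy()
            for _ in range(iters):
                ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2
                uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2
                mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
                px, py = ux / mag, uy / mag      # normalized gradient
                div = ((np.roll(px, -1, axis=1) - np.roll(px, 1, axis=1)) / 2
                       + (np.roll(py, -1, axis=0) - np.roll(py, 1, axis=0)) / 2)
                u[missing] += dt * div[missing]  # diffuse inside the hole only
            return u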

  • Towards a UML Extension of Reusable Secure Use Cases for Mobile Grid Systems

    David G. ROSADO  Eduardo FERNANDEZ-MEDINA  Javier LOPEZ  

    PAPER-Fundamentals of Information Systems
    Vol: E94-D No:2  Page(s): 243-254

    Systematic processes precisely define the development cycle and help the development team follow the same development strategies and techniques, allowing a continuous improvement in the quality of the developed products. Likewise, it is important that the development process integrates security aspects from the first stages, at the same level as other functional and non-functional requirements. Grid systems allow us to build very complex information systems with distinctive features (interoperability between multiple security domains, cross-domain authentication and authorization, dynamic, heterogeneous and limited mobile devices, etc.). With the development of wireless technology and mobile devices, the Grid becomes the perfect candidate for letting mobile users perform complex tasks that add new computational capacity to the Grid. A methodology for developing secure mobile Grid systems is being defined; one of its activities is requirements analysis, which is based on reusable use cases. In this paper, we present a UML extension for security use cases and Grid use cases that captures the behaviour of this kind of system. We give a detailed description of the new use cases defined in the extension, covering their stereotypes, tagged values, constraints and graphical notation. We show an example of how to apply the extension to build a use case diagram that incorporates the security aspects common to these systems, and we show how the resulting diagrams can be reused in the construction of other diagrams, saving time and effort.

  • Improving Efficiency of Self-Configurable Autonomic Systems Using Clustered CBR Approach

    Malik Jahan KHAN  Mian Muhammad AWAIS  Shafay SHAMAIL  

    PAPER-Computer System
    Vol: E93-D No:11  Page(s): 3005-3016

    Inspired by the natural self-managing behavior of the human body, autonomic systems promise to inject self-managing behavior into software systems. Such behavior enables self-configuration, self-healing, self-optimization and self-protection capabilities. Self-configuration is required in systems where efficiency is the key issue, such as real-time execution environments. To solve self-configuration problems in autonomic systems, the use of various problem-solving techniques has been reported in the literature, including case-based reasoning (CBR). The CBR approach exploits past experience, which can be helpful in achieving autonomic capabilities. The learning process improves as more experience is added to the case-base in the form of cases, but a larger case-base reduces efficiency in terms of computational cost. To overcome this efficiency problem, this paper suggests clustering the case-base before searching for the solution to a reported problem. This approach reduces the search complexity by confining the search for a new case to a relevant cluster of the case-base. Clustering the case-base is a one-time process and does not need to be repeated regularly. The proposed approach is outlined in the form of a new clustered CBR framework, which we evaluate on a simulation of an Autonomic Forest Fire Application (AFFA). This paper presents an outline of the simulated AFFA and results for three different clustering algorithms used to cluster the case-base in the proposed framework. The conventional CBR approach and the clustered CBR approach are compared in terms of Accuracy, Recall and Precision (ARP) and computational efficiency.
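
    The core retrieval idea can be sketched in a few lines; the feature-vector encoding of cases, the use of k-means and the Euclidean similarity are our illustrative choices (the paper compares three clustering algorithms).

        import numpy as np
        from sklearn.cluster import KMeans

        class ClusteredCaseBase:
            """Cluster the case-base once; retrieval then searches only the
            cluster nearest to the new problem."""
            def __init__(self, cases, solutions, k=8):
                self.cases = np.asarray(cases, dtype=float)
                self.solutions = list(solutions)
                self.km = KMeans(n_clusters=k, n_init=10).fit(self.cases)

            def retrieve(self, query):
                q = np.asarray(query, dtype=float)
                label = self.km.predict(q.reshape(1, -1))[0]
                idx = np.where(self.km.labels_ == label)[0]
                best = idx[np.argmin(np.linalg.norm(self.cases[idx] - q, axis=1))]
                return self.solutions[best]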

  • A Case Study of Requirements Elicitation Process with Changes

    Takako NAKATANI  Shouzo HORI  Naoyasu UBAYASHI  Keiichi KATAMINE  Masaaki HASHIMOTO  

    PAPER-Software Engineering
    Vol: E93-D No:8  Page(s): 2182-2189

    Requirements changes sometimes cause a project to fail. Many projects now follow incremental development processes so that new requirements and requirements changes can be incorporated as soon as possible. These are called integrated requirements processes, since they integrate requirements processes with other development processes. We have quantitatively and qualitatively investigated the requirements processes of a specific project from beginning to end. Our focus is to clarify the types of necessary requirements based on the components contained within a certain portion of the software architecture; each type reveals its typical requirements processes through its own rationale. The case study is a system for managing the orders and services of a restaurant. In this paper, we introduce the case and categorize its requirements processes based on the components of the system and the qualitative characteristics of ISO-9126. We identified seven categories of typical requirements processes to be managed and/or controlled, each revealing its typical processes and their characteristics. This case study is our first step toward practical integrated requirements engineering.
