Keyword Search Result

[Keyword] CASE (82 hits)

Results 41-60 of 82 hits

  • The Effect of Corpus Size on Case Frame Acquisition for Predicate-Argument Structure Analysis

    Ryohei SASANO  Daisuke KAWAHARA  Sadao KUROHASHI  

     
    PAPER-Natural Language Processing
    Vol: E93-D No:6  Page(s): 1361-1368

    This paper reports the effect of corpus size on case frame acquisition for predicate-argument structure analysis in Japanese. For this study, we collect a Japanese corpus consisting of up to 100 billion words, and construct case frames from corpora of six different sizes. Then, we apply these case frames to syntactic and case structure analysis, and zero anaphora resolution, in order to investigate the relationship between the corpus size for case frame acquisition and the performance of predicate-argument structure analysis. We obtained better analyses by using case frames constructed from larger corpora; the performance was not saturated even with a corpus size of 100 billion words.
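
    As a toy illustration of the data structure involved (not the authors' acquisition pipeline, which builds case frames from billions of parsed sentences), the following Python sketch aggregates hypothetical (predicate, case-marker, argument) triples into case frames; all names are illustrative.

        from collections import defaultdict

        def build_case_frames(triples):
            # A case frame maps each case slot of a predicate (e.g. the Japanese
            # markers 'ga', 'wo', 'ni') to counts of the arguments observed there.
            frames = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
            for predicate, case, argument in triples:
                frames[predicate][case][argument] += 1
            return frames

        # Hypothetical parsed triples; a real system would extract billions of these.
        triples = [("taberu", "ga", "hito"), ("taberu", "wo", "pan"),
                   ("taberu", "wo", "ringo")]
        frames = build_case_frames(triples)
        print(dict(frames["taberu"]["wo"]))   # {'pan': 1, 'ringo': 1}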

  • Computer Algebra System as Test Generation System

    Satoshi HATTORI  

     
    PAPER-Software Testing
    Vol: E93-D No:5  Page(s): 1006-1017

    We use the computer algebra system Mathematica as a test case generation system. Test case generation generally requires solving equations and inequalities, and we chose Mathematica mainly because it has built-in functions for doing so. In this paper, we deal with both black-box and white-box testing. First, we show two black-box test case generation procedures described in Mathematica. The first is based on equivalence partitioning; Mathematica explicitly reports when no test cases exist, which is an advantage of using it. The second procedure is a modification of the first that adopts boundary value analysis, for which we give a formalization. Next, we show a white-box test case generation procedure. For this purpose, we also give a model for source programs, similar to a control flow graph model, and the proposed procedure analyzes a model description of a program.
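
    A rough Python analogue of the idea (using SymPy in place of Mathematica's solver; the condition 0 <= x <= 100 is a made-up example specification) solves the input condition symbolically for equivalence partitioning and enumerates values around the partition edges for boundary value analysis:

        from sympy import Symbol, reduce_inequalities

        x = Symbol('x', real=True)

        # Equivalence partitioning: let the solver reduce the spec's input condition.
        partition = reduce_inequalities([x >= 0, x <= 100], x)
        print(partition)                  # (0 <= x) & (x <= 100)

        def boundary_values(lo, hi):
            # Boundary value analysis: just outside, on, and just inside each edge.
            return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

        print(boundary_values(0, 100))    # [-1, 0, 1, 99, 100, 101]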

  • A Scheduling Algorithm for Minimizing Exclusive Window Durations in Time-Triggered Controller Area Network

    Minsoo RYU  

     
    LETTER-Network
    Vol: E92-B No:8  Page(s): 2739-2742

    The Time-Triggered Controller Area Network (TTCAN) is widely accepted as a viable solution for real-time communication systems such as in-vehicle communications. However, although TTCAN has been designed to support both periodic and sporadic real-time messages, previous studies mostly focused on providing deterministic real-time guarantees for periodic messages while barely addressing the performance of sporadic messages. In this paper, we present an O(n^2) scheduling algorithm that minimizes the maximum duration of the exclusive windows occupied by periodic messages, thereby minimizing the worst-case scheduling delays experienced by sporadic messages.
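
    The paper's own O(n^2) algorithm is not reproduced here, but the flavor of the problem, keeping the longest exclusive window short so sporadic traffic waits less, can be sketched with a greedy longest-processing-time heuristic in Python (message durations and window count are made up):

        import heapq

        def assign_to_windows(durations, n_windows):
            # Greedy LPT: give each message to the currently least-loaded
            # exclusive window, which tends to minimize the longest window.
            heap = [(0.0, w) for w in range(n_windows)]
            heapq.heapify(heap)
            windows = {w: [] for w in range(n_windows)}
            for d in sorted(durations, reverse=True):
                load, w = heapq.heappop(heap)
                windows[w].append(d)
                heapq.heappush(heap, (load + d, w))
            return windows, max(load for load, _ in heap)

        windows, longest = assign_to_windows([3.0, 2.5, 2.0, 1.5, 1.0, 0.5], 3)
        print(longest)   # 3.5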

  • Enhancements of a Circuit-Level Timing Speculation Technique and Their Evaluations Using a Co-simulation Environment

    Yuji KUNITAKE  Kazuhiro MIMA  Toshinori SATO  Hiroto YASUURA  

     
    PAPER
    Vol: E92-C No:4  Page(s): 483-491

    Deep submicron semiconductor technologies have increased process variations, which makes it difficult to estimate the worst-case design margin. In order to realize robust designs, we are investigating a typical-case design methodology, which we call Constructive Timing Violation (CTV). In a CTV-based design, timing constraints can be relaxed; however, relaxing them might cause timing errors. We have applied the CTV-based design to a processor, but unfortunately the timing error recovery has a serious impact on processor performance. In this paper, we investigate techniques that enhance the CTV-based design. In addition, to evaluate the CTV-based design accurately, we build a co-simulation framework that considers circuit delay at the architectural level. The co-simulation results show that the enhancement techniques significantly reduce the performance penalty.

  • Visualization and Formalization of User Constraints for Tight Estimation of Worst-Case Execution Time

    Jong-In LEE  Ho-Jung BANG  Tai-Hyo KIM  Sung-Deok CHA  

     
    PAPER-Dependable Computing
    Vol: E92-D No:1  Page(s): 24-31

    Automated static timing analysis methods provide a safe but usually overestimated worst-case execution time (WCET) because of infeasible execution paths. In this paper, we propose a visual language, the User Constraint Language (UCL), to obtain a tight WCET estimate. UCL provides intuitive visual notations with which users can easily specify various levels of flow information to characterize the valid execution paths of a program. The user constraints specified in UCL are translated into finite automata. The combined automaton, constructed as the cross-product of the automata for the program and the user constraints, reflects the static structure and possible dynamic behavior of the program and contains only the execution paths satisfying the user constraints. A case study using part of a satellite flight software program demonstrates the effectiveness of UCL and our approach.
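
    The central construction, intersecting the program's paths with the user constraints via a cross-product of automata, can be sketched in Python as follows; the dict-based DFA encoding is an assumption for illustration, not the paper's representation:

        def product_automaton(a, b):
            # Cross-product of two DFAs, each given as
            # {'start': state, 'accept': set, 'delta': {state: {symbol: state}}}.
            # Only paths accepted by BOTH automata (program paths that also
            # satisfy the user constraints) survive in the product.
            start = (a['start'], b['start'])
            delta, accept = {}, set()
            frontier, seen = [start], {start}
            while frontier:
                state = frontier.pop()
                p, q = state
                if p in a['accept'] and q in b['accept']:
                    accept.add(state)
                for sym in set(a['delta'].get(p, {})) & set(b['delta'].get(q, {})):
                    nxt = (a['delta'][p][sym], b['delta'][q][sym])
                    delta.setdefault(state, {})[sym] = nxt
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return {'start': start, 'accept': accept, 'delta': delta}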

  • A Simple Mechanism for Collapsing Instructions under Timing Speculation

    Toshinori SATO  

     
    PAPER
    Vol: E91-C No:9  Page(s): 1394-1401

    Deep submicron semiconductor technologies will make worst-case design impossible, since they cannot provide the design margins it requires. We are investigating a typical-case design methodology, which we call Constructive Timing Violation (CTV). This paper extends the CTV concept to collapse dependent instructions, resulting in improved performance. Detailed simulations show that the proposed mechanism effectively collapses dependent instructions.

  • Autonomous Distributed Congestion Control Scheme in WCDMA Network

    Hafiz Farooq AHMAD  Hiroki SUGURI  Muhammad Qaisar CHOUDHARY  Ammar HASSAN  Ali LIAQAT  Muhammad Umer KHAN  

     
    PAPER
    Vol: E91-D No:9  Page(s): 2267-2275

    Wireless technology has become widely popular and an important means of communication. A key issue in delivering wireless services is congestion, which has an adverse impact on Quality of Service (QoS), especially timeliness. Although much work has been done in the context of Radio Resource Management (RRM), delivering quality service to the end user remains a challenge, so there is a need for a system that provides real-time services to users with high assurance. We propose an intelligent agent-based approach to guarantee a predefined Service Level Agreement (SLA) with heterogeneous user requirements for appropriate bandwidth allocation in QoS-sensitive cellular networks. The proposed system architecture exploits the Case Based Reasoning (CBR) technique to handle the RRM process of congestion management. The system accomplishes the predefined SLA through a retrieval and adaptation algorithm over the CBR case library. The proposed intelligent agent architecture gives autonomy to the Radio Network Controller (RNC) or Base Station (BS) in accepting, rejecting, or buffering a connection request to manage system bandwidth. Instead of simply blocking connection requests as congestion hits the system, different buffering durations are allocated to diverse classes of users based on their SLA. This increases the opportunity of connection establishment and extensively reduces the call blocking rate in a changing environment. Simulations of the proposed system verify efficient congestion handling, and the results also show the built-in dynamism of our system in catering for a variety of SLA requirements.
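
    The CBR retrieval step at the heart of such a system can be sketched in a few lines of Python; the feature names (load, SLA class, queue length) are illustrative assumptions, not the paper's case schema:

        import math

        def retrieve_similar_cases(case_library, query, k=3):
            # CBR retrieval: rank past congestion cases by feature distance and
            # return the k nearest; their stored actions feed the adaptation step.
            features = ('load', 'sla_class', 'queue_len')
            def dist(case):
                return math.dist([case[f] for f in features],
                                 [query[f] for f in features])
            return sorted(case_library, key=dist)[:k]

        library = [{'load': 0.9, 'sla_class': 1, 'queue_len': 40, 'action': 'buffer'},
                   {'load': 0.5, 'sla_class': 2, 'queue_len': 5,  'action': 'accept'}]
        query = {'load': 0.85, 'sla_class': 1, 'queue_len': 35}
        print(retrieve_similar_cases(library, query, k=1)[0]['action'])   # buffer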

  • Effect of Back-Volume of Arc-Quenching Chamber on Arc Behavior

    Ruicheng DAI  Degui CHEN  Xingwen LI  Chunping NIU  Weixiong TONG  Honggang XIANG  

     
    PAPER-Arc Discharge & Related Phenomena
    Vol: E91-C No:8  Page(s): 1261-1267

    The gas-puffer effect strongly influences the interruption capability of a molded case circuit breaker (MCCB). In this paper, on the basis of a simplified model of an arc chamber with a single break, the effect of the back-volume of an arc-quenching chamber on arc behavior in an MCCB is investigated. First, using a 2-D optical-fiber arc-motion measurement system, experiments are performed to study the effect of back-volume on arc motion and gas pressure in the arc-quenching chamber. We demonstrate that the lower the back-volume of the arc-quenching chamber, the higher the pressure and the better the arc motion. Then, corresponding to the above experiments, the gas pressure inside the arc-quenching chamber is calculated using the integral conservation equation. The simulation results are consistent with the experimental results.

  • Worst Case Behavior of List Algorithms for Dynamic Scheduling of Non-unit Execution Time Tasks with Arbitrary Precedence Constraints

    Andrei TCHERNYKH  Klaus ECKER  

     
    LETTER-Concurrent Systems
    Vol: E91-A No:8  Page(s): 2277-2280

    Performance properties of list scheduling algorithms under various dynamic assumptions are analyzed. The focus is on bounds for scheduling directed acyclic graphs with arbitrary precedence constraints and arbitrary task processing times, subject to minimizing the makespan. New performance bounds are derived and compared with known results.
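
    For context (a classical result, not from this letter), Graham's bound for list scheduling of precedence-constrained tasks on m identical processors is the baseline that analyses of this kind refine:

        \frac{C_{\max}^{\text{list}}}{C_{\max}^{\text{opt}}} \;\le\; 2 - \frac{1}{m}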

  • Identifying Stakeholders and Their Preferences about NFR by Comparing Use Case Diagrams of Several Existing Systems

    Haruhiko KAIYA  Akira OSADA  Kenji KAIJIRI  

     
    PAPER-Software Engineering
    Vol: E91-D No:4  Page(s): 897-906

    We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes to NFR because such changes help stakeholders identify their preferences, and comparing different use case diagrams of the same domain helps us find changes that are likely to occur. We utilize the Goal-Question-Metric (GQM) method to identify variables that characterize NFR, so that changes to NFR can be represented systematically using those variables. Use cases that represent system interactions help us bridge the gap between goals and metrics (variables), letting us easily construct measurable NFR. To validate and evaluate the method, we applied it to the Mail User Agent (MUA) application domain.

  • Novel Method of Interconnect Worstcase Establishment with Statistically-Based Approaches

    Won-Young JUNG  Hyungon KIM  Yong-Ju KIM  Jae-Kyung WEE  

     
    PAPER-VLSI Design Technology and CAD
    Vol: E91-A No:4  Page(s): 1177-1184

    For the interconnect effects of process-induced variations to be accounted for in designs at 0.13 µm and below, realistic interconnect worst-case models must be determined and characterized with high accuracy and speed. This paper proposes new statistically-based approaches to the characterization of realistic interconnect worst-case models that take process-induced variations into account. The Effective Common Geometry (ECG) and Accumulated Maximum Probability (AMP) algorithms have been developed and implemented in a new statistical interconnect worst-case design environment. To verify this environment, 31-stage ring oscillators were fabricated and measured in a UMC 0.13 µm logic process; 15-stage ring oscillators were fabricated and measured in a 0.18 µm standard CMOS process to investigate the method's flexibility across technologies. The results show that the relative errors of the new method are less than 1.00%, twice as accurate as the conventional worst-case method. Furthermore, the new environment improves optimization speed by 29.61-32.01% compared to conventional worst-case optimization, and it accurately predicts the worst-case and best-case corners of non-normal distributions, which conventional methods cannot handle well.
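
    A generic statistical flavor of corner extraction (not the paper's ECG/AMP algorithms) can be sketched as a Monte Carlo sweep in Python; the delay model and the 3-sigma figures are toy assumptions:

        import numpy as np

        rng = np.random.default_rng(0)

        def delay_metric(w, t, h):
            # Toy interconnect delay: R ~ 1/(w*t); C ~ w/h (area) + t (coupling).
            return (1.0 / (w * t)) * (w / h + t)

        n = 100_000
        w = 1.0 + rng.normal(0.0, 0.033, n)   # width,     ~10% at 3 sigma
        t = 1.0 + rng.normal(0.0, 0.033, n)   # thickness
        h = 1.0 + rng.normal(0.0, 0.033, n)   # height
        d = delay_metric(w, t, h)

        # Statistical best/worst corners at the +/-3 sigma probability points,
        # meaningful even when the delay distribution is not normal.
        best, worst = np.quantile(d, [0.00135, 0.99865])
        print(best, worst)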

  • Adaptive Beamforming with Robustness against Both Finite-Sample Effects and Steering Vector Mismatches

    Jing-Ran LIN  Qi-Cong PENG  Qi-Shan HUANG  

     
    PAPER-Digital Signal Processing
    Vol: E89-A No:9  Page(s): 2356-2362

    A novel approach to robust adaptive beamforming (RABF) is presented, aiming at robustness against both finite-sample effects and steering vector mismatches. It belongs to the class of diagonal loading approaches, with the loading level determined by worst-case performance optimization. The proposed approach, however, is distinguished by two points. (1) It takes finite-sample effects into account and applies worst-case performance optimization not only to the constraints but also to the objective of the constrained quadratic problem; hence it is referred to as joint worst-case RABF (JW-RABF). (2) After some approximations, it gives a simple closed-form solution for the optimal loading, revealing how different factors affect the loading. Compared with many existing methods, the proposed one achieves better robustness to small sample sizes as well as steering vector mismatches, and it is less computationally demanding thanks to the closed-form solution. Numerical examples confirm the effectiveness of the proposed approach.
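
    The diagonal-loading step itself is compact. A minimal NumPy sketch of loaded MVDR weights follows; here the loading level is fixed by hand, whereas the paper derives an (approximate) optimal one in closed form:

        import numpy as np

        def loaded_mvdr_weights(R, a, loading):
            # Diagonally loaded MVDR/Capon weights:
            # w = (R + loading*I)^{-1} a / (a^H (R + loading*I)^{-1} a)
            x = np.linalg.solve(R + loading * np.eye(R.shape[0]), a)
            return x / (a.conj() @ x)

        # 8-element array; a sample covariance from only 20 snapshots exhibits
        # the finite-sample effects the paper is concerned with.
        rng = np.random.default_rng(1)
        M, N = 8, 20
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(10.0)))
        snaps = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
        R = snaps @ snaps.conj().T / N
        w = loaded_mvdr_weights(R, a, loading=1.0)
        print(abs(w.conj() @ a))   # distortionless response in the look direction: 1.0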

  • High-Speed Calculation of Worst-Case Link Delays in the EDD Connection Admission Control Scheme

    Tokumi YOKOHIRA  Kiyohiko OKAYAMA  

     
    PAPER-Network
    Vol: E89-B No:7  Page(s): 2012-2022

    The EDD connection admission control scheme has been proposed for supporting real-time communication in packet-switched networks. In the scheme, when a connection establishment request occurs, the worst-case link delay on each link along the connection is calculated to determine whether the request can be accepted. To calculate the worst-case link delay, a check called the point schedulability check must be performed for each of a set of discrete time instants (checkpoints); when there are many checkpoints, the calculation is time-consuming. We previously proposed a high-speed calculation method that finds checkpoints for which the point schedulability check need not be performed and removes such unnecessary checkpoints in advance, before a connection establishment request occurs, so that the check is performed only for the remaining checkpoints after the request occurs. However, that method is not very effective when the maximum packet length in the network is large, because it then finds few unnecessary checkpoints. This paper proposes a new high-speed calculation method: we relax the condition that determines whether the point schedulability check can be skipped for each checkpoint, and derive a new condition for finding unnecessary checkpoints. Using the proposed method, we can remove more unnecessary checkpoints than with our previous method. Numerical examples obtained by extensive simulation show that the proposed method attains a speedup of up to about 50 times.
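
    Schematically, the calculation scans checkpoints until the accumulated demand fits within the link capacity; the demand function below is a placeholder, not the EDD formula, and the pruning the paper develops would shrink the checkpoint list before this loop runs:

        def worst_case_link_delay(checkpoints, demand, capacity):
            # Point schedulability check (schematic): find the earliest
            # checkpoint t by which the accumulated traffic demand can be
            # served, i.e. demand(t) <= capacity * t.
            for t in sorted(checkpoints):
                if demand(t) <= capacity * t:
                    return t
            return None

        # Toy demand: 12 units arrive immediately, then 1 unit per time unit.
        print(worst_case_link_delay(range(1, 100), lambda t: 12 + t, capacity=2.0))  # 12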

  • Meta-Modeling Based Version Control System for Software Diagrams

    Takafumi ODA  Motoshi SAEKI  

     
    PAPER
    Vol: E89-D No:4  Page(s): 1390-1402

    In iterative software development methodologies, a version control system is used to record and manage the modification histories of products such as source code and models described in diagrams. However, conventional version control systems cannot manage models as logical units because they mainly handle source code. In this paper, we propose a version control technique for handling diagrammatic models as logical units. We then illustrate the feasibility of our approach by implementing version control functions in a meta-CASE tool that can generate modeling tools for various diagrams.

  • Effectiveness of an Integrated CASE Tool for Productivity and Quality of Software Developments

    Michio TSUDA  Sadahiro ISHIKAWA  Osamu OHNO  Akira HARADA  Mayumi TAKAHASHI  Shinji KUSUMOTO  Katsuro INOUE  

     
    PAPER-Software Engineering
    Vol: E89-D No:4  Page(s): 1470-1479

    It is commonly thought that CASE tools reduce programming effort and increase development productivity. However, no paper has provided quantitative data supporting this claim. This paper discusses productivity improvement through the use of an integrated CASE tool system named EAGLE (Effective Approach to Achieving High Level Software Productivity), as shown by data collected at Hitachi from the 1980s to the 2000s. We evaluate productivity using three metrics: 1) the program generation rate using reusable program skeletons and components, 2) the fault density at two test phases, and 3) the learning curve for the education of inexperienced programmers. We show that productivity has been improved by the various facilities of EAGLE.

  • Determination of Interconnect Structural Parameters for Best- and Worst-Case Delays

    Atsushi KUROKAWA  Hiroo MASUDA  Junko FUJII  Toshinori INOSHITA  Akira KASEBE  Zhangcai HUANG  Yasuaki INOUE  

     
    PAPER
    Vol: E89-A No:4  Page(s): 856-864

    In general, a corner model with best- and worst-case delay conditions is used in static timing analysis (STA). The best- and worst-case delays of a stage are defined as the fastest and slowest delays from a cell input to the next cell input. In this paper, we present a methodology for determining the parameters that yield the best- and worst-case delays when interconnect structural parameters have the minimum and maximum values with process variations. We also present analysis results of our circuit model using the methodology. The min and max conditions for the time constant are found to be (+Δw, +Δt, +Δh) & (-Δw, -Δt, -Δh), respectively. Here, +Δ or -Δ means the max or min corner value of each parameter variation, where w is the width, t is the interconnect thickness, and h is the height. Best and worst conditions for delay time are as follows: 1) given a circuit with an optimum driver, dense interconnects, and small branch capacitance, the best and worst conditions are respectively (-Δw, +Δt, +Δh) & (+Δw, +Δt, -Δh), 2) given driver and/or via resistances that are higher than the interconnect resistance, dense interconnects, and small branch capacitance, they are (-Δw, -Δt, +Δh) & (+Δw, +Δt, -Δh), and 3) for other conditions, they are (+Δw, +Δt, +Δh) & (-Δw, -Δt, -Δh). Moreover, if there must be only one condition each for the best- and worst-case delays, they are (+Δw, +Δt, +Δh) & (-Δw, -Δt, -Δh).
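
    A toy model in Python reproduces the stated time-constant corners; the R and C expressions are illustrative stand-ins, not the paper's extraction models:

        from itertools import product

        def time_constant(w, t, h):
            # Toy model: R ~ 1/(w*t); C ~ w/h (area to plane below) + t (coupling).
            return (1.0 / (w * t)) * (w / h + t)

        corners = {c: time_constant(1 + c[0], 1 + c[1], 1 + c[2])
                   for c in product((-0.1, +0.1), repeat=3)}
        print(min(corners, key=corners.get))   # (+0.1, +0.1, +0.1) -> min time constant
        print(max(corners, key=corners.get))   # (-0.1, -0.1, -0.1) -> max time constant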

  • Computer-Aided Diagnosis of Intracranial Aneurysms in MRA Images with Case-Based Reasoning

    Syoji KOBASHI  Katsuya KONDO  Yutaka HATA  

     
    PAPER-Biological Engineering
    Vol: E89-D No:1  Page(s): 340-350

    Finding intracranial aneurysms plays a key role in preventing serious cerebral diseases such as subarachnoid hemorrhage. For the detection of aneurysms, magnetic resonance angiography (MRA) can provide detailed images of arteries non-invasively. However, because over 100 MRA images per subject are required to cover the entire cerebrum, image diagnosis using MRA is very time-consuming and labor-intensive. This article presents a computer-aided diagnosis (CAD) system for finding aneurysms in MRA images. The principal components are the identification of aneurysm candidates (ROIs, regions of interest) from MRA images and the estimation of a fuzzy degree for each candidate based on case-based reasoning (CBR). The fuzzy degree indicates whether a candidate is a true aneurysm. Our system presents users with a limited number of ROIs sorted in order of fuzzy degree, and can thus decrease the time and labor required for detecting aneurysms. Experimental results using phantoms indicate that the system can detect all aneurysms at branches of arteries and all saccular aneurysms produced by dilation of a straight artery in one direction perpendicular to the principal axis. In a clinical evaluation on 16 subjects with a total of 19 aneurysms, the system detected all aneurysms except one fusiform aneurysm, and gave high fuzzy degrees and high priorities to the detected aneurysms.
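
    The CBR scoring idea can be sketched as a similarity-weighted vote over past ROIs; the two features and the weighting below are illustrative assumptions, not the paper's feature set:

        import math

        def fuzzy_degree(candidate, case_library, k=5):
            # Similarity-weighted vote of the k most similar past ROIs, each
            # labeled 1 (true aneurysm) or 0 (false positive); returns [0, 1].
            def dist(case):
                return math.dist((case['size'], case['sphericity']),
                                 (candidate['size'], candidate['sphericity']))
            nearest = sorted(case_library, key=dist)[:k]
            weights = [1.0 / (1.0 + dist(c)) for c in nearest]
            return sum(wt * c['label'] for wt, c in zip(weights, nearest)) / sum(weights)

        library = [{'size': 3.1, 'sphericity': 0.9, 'label': 1},
                   {'size': 1.2, 'sphericity': 0.4, 'label': 0}]
        print(fuzzy_degree({'size': 3.0, 'sphericity': 0.85}, library, k=2))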

  • Swiss Cheese Test Case Generation for Web Services Testing

    Wei-Tek TSAI  Xiao WEI  Yinong CHEN  Ray PAUL  Bingnan XIAO  

     
    PAPER
    Vol: E88-D No:12  Page(s): 2691-2698

    Current Web services testing techniques are unable to assure the desired level of trustworthiness, which presents a barrier to using Web services in mission- and business-critical environments. This paper presents a framework that assures the trustworthiness of Web services. New assurance techniques are developed within the framework, including specification verification via completeness and consistency checking, test case generation, and automated Web services testing. Traditional test case generation methods only generate positive test cases, which verify the functionality of software. The proposed Swiss Cheese test case generation method is designed to generate both positive and negative test cases, the latter also revealing the vulnerability of Web services. The integrated development process is implemented in a case study, and the experimental evaluation demonstrates the effectiveness of the approach. It also reveals that Swiss Cheese negative testing detects even more faults than positive testing and thus significantly reduces the vulnerability of Web services.
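
    The positive/negative split can be illustrated with a minimal generator in Python; the range spec is a made-up example, and a real Web services tester would derive inputs from the service's specification rather than a hand-written dict:

        def swiss_cheese_style_tests(spec):
            # Positive tests exercise values the specification allows; negative
            # tests deliberately punch holes: out-of-range, missing, mistyped.
            lo, hi = spec['min'], spec['max']
            positive = [lo, (lo + hi) // 2, hi]
            negative = [lo - 1, hi + 1, None, 'not-a-number']
            return positive, negative

        pos, neg = swiss_cheese_style_tests({'min': 0, 'max': 100})
        print(pos)   # [0, 50, 100]
        print(neg)   # [-1, 101, None, 'not-a-number']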

  • A Protocol for Peer-to-Peer Multi-Player Networked Virtual Ball Game

    Tatsuhiro YONEKURA  Yoshihiro KAWANO  

     
    PAPER
    Vol: E88-D No:5  Page(s): 926-937

    This paper reports our study of how to maintain consistency of states in a ball-game-type Distributed Virtual Environment (DVE) with lag, using a peer-to-peer (P2P) architecture. That is, we study how to reduce, in real time, the differences in state between participating terminals in a virtual ball game caused by transmission lag or update intervals, and how to control shared objects in real time in a server-less network architecture. Specifically, a priority field called the Allocated Topographical Zone (AtoZ) is used in P2P for DVE. With this function, each terminal can determine which avatar holds the ownership of a shared object by mutually calculating the state of the local avatar as predicted by the remote terminals. The ownership region determined by AtoZ allows an avatar to access and control an object dominantly, and its geometry changes dynamically depending on the relative arrangement of the object and avatars. Moreover, considering the critical case, defined as inconsistent phenomena between peers caused by network latency, a stricter ownership determination algorithm, called the dead zone, is introduced. Using these protocols in combination, a robust and effective scheme is achieved for a virtual ball game. As an example application, real-time networked doubles air-hockey is implemented to evaluate the influence of these protocols on interactivity and consistency.
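
    Ownership determination of the AtoZ/dead-zone flavor can be sketched as follows in Python; the distance-based zones and the margin are illustrative assumptions, not the paper's geometric definitions:

        import math

        def owner_of(object_pos, avatars, dead_margin=0.5):
            # Rank avatars by distance to the shared object; the closest owns it.
            # If the two closest are nearly tied (within dead_margin), report a
            # dead zone: under latency the peers might disagree, so fall back
            # to a stricter agreement rule instead of deciding locally.
            ranked = sorted(avatars, key=lambda a: math.dist(a['pos'], object_pos))
            if len(ranked) > 1:
                d0 = math.dist(ranked[0]['pos'], object_pos)
                d1 = math.dist(ranked[1]['pos'], object_pos)
                if d1 - d0 < dead_margin:
                    return None
            return ranked[0]['id']

        avatars = [{'id': 'A', 'pos': (0.0, 1.0)}, {'id': 'B', 'pos': (4.0, 3.0)}]
        print(owner_of((1.0, 1.0), avatars))   # 'A'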

  • Forward Bias Enhanced Channel Hot Electron Injection for Low-Level Programming Improvement in Multilevel Flash Memory

    Caleb Yu-Sheng CHO  Ming-Jer CHEN  

     
    PAPER-Integrated Electronics
    Vol: E87-C No:7  Page(s): 1204-1207

    Low-voltage programmed levels are hard to achieve in multilevel Flash memory using staircase CHEI (channel hot electron injection) programming, because low-level programming deviates marginally from the linear relation between the threshold voltage VTH and the control gate voltage VCG. Forward-bias enhancement of CHEI is proposed to overcome this drawback. The new technique is demonstrated to create a linear relation between VTH and VCG down to a critical VCG that is at least 1 V lower than with traditional CHEI. Extensive measurements further indicate that the most suitable magnitude of forward bias is 0.5 V, since (i) it produces the lowest program level of 1.4 V; and (ii) higher biases cause not only large current consumption but also worsened drain disturb performance in a NOR array configuration. The corresponding linear relation with unity slope is maintained after 10^5 program/erase cycles.
