
Keyword Search Result

[Keyword] FA(3430hit)

841-860hit(3430hit)

  • Dynamic Fault Tree Analysis for Systems with Nonexponential Failure Components

    Tetsushi YUGE  Shigeru YANAGI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

    Vol: E96-A No:8  Page(s): 1730-1736

    A method is proposed for calculating the top event probability of a fault tree that includes dynamic gates and repeated events and in which the occurrences of basic events follow nonexponential distributions. The method is based on the Bayesian network formulation for a DFT proposed by Yuge and Yanagi [1]. That formulation had difficulty calculating a sequence probability when components have nonexponential failure distributions. In this paper we propose an alternative method to obtain the sequence probability. First, a method for the case of the Erlang distribution is discussed. Then, Tijms's fitting procedure is applied to handle a general distribution: the procedure approximates a general distribution by a mixture of two Erlang distributions that matches the given mean and standard deviation. A numerical example shows that our method works well for complex systems.
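    As an illustration of the two-moment fit mentioned above, the sketch below matches a mixture of Erlang(k-1) and Erlang(k) distributions with a common rate to a given mean and standard deviation, in the spirit of Tijms's procedure for squared coefficients of variation at most one. The function name and the error handling are illustrative assumptions, not the paper's exact algorithm.

```python
import math

def tijms_erlang_mixture(mean, std):
    """Two-moment fit of a mixture of Erlang(k-1) and Erlang(k) distributions
    with a common rate mu, following the classical Tijms-style recipe.

    Returns (p, k, mu): with probability p the phase count is k-1,
    with probability 1-p it is k.  Assumes 0 < scv <= 1, where
    scv = (std/mean)**2; other cases need a different fit
    (e.g., a hyperexponential mixture) and are not handled here.
    """
    scv = (std / mean) ** 2
    if not (0.0 < scv <= 1.0):
        raise ValueError("this sketch only covers 0 < SCV <= 1")
    # choose k so that 1/k <= scv <= 1/(k-1)
    k = math.ceil(1.0 / scv)
    # mixing probability and common rate that reproduce the first two moments
    p = (k * scv - math.sqrt(k * (1.0 + scv) - k * k * scv)) / (1.0 + scv)
    mu = (k - p) / mean
    return p, k, mu

if __name__ == "__main__":
    p, k, mu = tijms_erlang_mixture(mean=10.0, std=6.0)
    print(f"mixture: {p:.3f} * Erlang({k - 1}, {mu:.3f}) "
          f"+ {1 - p:.3f} * Erlang({k}, {mu:.3f})")
```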

  • Selective Check of Data-Path for Effective Fault Tolerance

    Tanvir AHMED  Jun YAO  Yuko HARA-AZUMI  Shigeru YAMASHITA  Yasuhiko NAKASHIMA  

     
    PAPER-Design Methodology

    Vol: E96-D No:8  Page(s): 1592-1601

    Fault tolerance plays an increasingly important role in covering the rising soft/hard error rates in electronic devices that accompany advances in process technology. Research shows that wear-out faults have a gradual onset, starting with a timing fault and eventually leading to a permanent fault. Error detection is thus required to maintain execution correctness. Currently, however, many highly dependable methods for covering permanent faults are over-designed, relying on very frequent checking because they lack awareness of the fault probability of the circuits used for the pending executions. In this research, to address this over-checking problem, we introduce a metric for permanent defects, the operation defective probability (ODP), to quantitatively guide the placement of check operations at only critical positions. Using this selective checking approach, we achieve near-100% dependability with about 53% fewer check operations than the ideal reliable method, which performs exhaustive checks to guarantee zero error propagation. This allows us to reduce power consumption by 21.7% by avoiding the non-critical checking of the over-designed approach.

  • Propagation Analysis Using Plane Coupler for 2D Wireless Power Transmission Systems

    Hiroshi SHINODA  Takahide TERADA  

     
    PAPER-Microwaves, Millimeter-Waves

    Vol: E96-C No:8  Page(s): 1041-1047

    A plane coupler has been developed for two-dimensional (2D) wireless power transmission. Owing to its small size and light weight, the coupler enables a continuous wireless power transmission system for mobile devices. The coupler has two elements connected to a 2D waveguide sheet, and the coupling capacitances between the elements and the sheet reduce the coupler size by lowering their resonance frequencies. A propagation loss of -10.0 dB is obtained using the small 0.025-λ² coupler. Continuous operation of a mobile device is demonstrated by applying wireless power transmission to the 2D waveguide sheet with the small plane coupler.

  • Face Retrieval in Large-Scale News Video Datasets

    Thanh Duc NGO  Hung Thanh VU  Duy-Dinh LE  Shin'ichi SATOH  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E96-D No:8  Page(s): 1811-1825

    Face retrieval in news video has been identified as a challenging task due to the huge variations in the visual appearance of the human face. Although several approaches have been proposed to deal with this problem, their extremely high computational cost limits their scalability to large-scale video datasets that may contain millions of faces of hundreds of characters. In this paper, we introduce approaches for face retrieval that are scalable to such datasets while maintaining performance competitive with state-of-the-art approaches. To utilize the variability of face appearances in video, we use a set of face images called a face-track to represent the appearance of a character in a video shot. Our first proposal is an approach for extracting face-tracks. We use a point tracker to explore the connections between detected faces belonging to the same character and then group them into one face-track. We present techniques to make the approach robust against common problems caused by flash lighting, partial occlusions, and scattered appearances of characters in news videos. In the second proposal, we introduce an efficient approach to match face-tracks for retrieval. Instead of using all the faces in the face-tracks to compute their similarity, our approach obtains a representative face for each face-track. The representative face is computed from faces that are sampled from the original face-track. As a result, we significantly reduce the computational cost of face-track matching while taking into account the variability of faces in face-tracks to achieve high matching accuracy. Experiments are conducted on two face-track datasets extracted from real-world news videos, at scales that have never been considered in the literature. One dataset contains 1,497 face-tracks of 41 characters extracted from 370 hours of TRECVID videos. The other dataset provides 5,567 face-tracks of 111 characters observed from a television news program (NHK News 7) over 11 years. We make both datasets publicly accessible to the research community. The experimental results show that our proposed approaches achieve a remarkable balance between accuracy and efficiency.
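    To make the representative-face idea concrete, the sketch below builds one representative descriptor per face-track by sampling and averaging L2-normalized face descriptors, then compares tracks by cosine similarity. The descriptor source, sampling rate and function names are illustrative assumptions; the paper's exact sampling and matching rules may differ.

```python
import numpy as np

def representative_face(track_descriptors, n_samples=10, seed=0):
    """Build one representative descriptor for a face-track.

    track_descriptors: array of shape (n_faces, d), one descriptor per
    detected face in the track (e.g., from any face-embedding model).
    A subset of faces is sampled, each descriptor is L2-normalized,
    and the normalized descriptors are averaged and re-normalized.
    """
    rng = np.random.default_rng(seed)
    descs = np.asarray(track_descriptors, dtype=float)
    idx = rng.choice(len(descs), size=min(n_samples, len(descs)), replace=False)
    sampled = descs[idx]
    sampled /= np.linalg.norm(sampled, axis=1, keepdims=True)
    rep = sampled.mean(axis=0)
    return rep / np.linalg.norm(rep)

def track_similarity(track_a, track_b):
    """Cosine similarity between the representative faces of two tracks."""
    return float(np.dot(representative_face(track_a), representative_face(track_b)))

if __name__ == "__main__":
    a = np.random.rand(50, 128)   # 50 faces, 128-D descriptors
    b = np.random.rand(80, 128)
    print(track_similarity(a, b))
```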

  • Field Slack Assessment for Predictive Fault Avoidance on Coarse-Grained Reconfigurable Devices

    Toshihiro KAMEDA  Hiroaki KONOURA  Dawood ALNAJJAR  Yukio MITSUYAMA  Masanori HASHIMOTO  Takao ONOYE  

     
    PAPER-Test and Verification

    Vol: E96-D No:8  Page(s): 1624-1631

    This paper proposes a procedure for avoiding delay faults in the field through slack assessment during standby time. The proposed procedure performs path delay testing and checks whether the slack is larger than a threshold value, using a selectable delay embedded in the basic elements (BEs). If the slack is smaller than the threshold, a pair of BEs to be replaced, chosen to maximize the path slack, is identified. Experimental results with two application circuits mapped on a coarse-grained architecture show that, for aging-induced delay degradation, a small threshold slack, less than 1 ps in a test case, is enough to ensure delay fault prediction.

  • Ontology-Based Reuse of Failure Modes in Existing Databases for FMEA: Methodology and Tool

    Guoqi LI  

     
    LETTER-Reliability, Maintainability and Safety Analysis

    Vol: E96-A No:7  Page(s): 1645-1648

    The wide application of FMEA in reliability engineering is generally appreciated, and identifying failure modes is the key to it. Failure modes, however, depend only on specific components rather than on the system architecture, and can therefore be reused across different FMEAs. A novel ontology-based method to recognize and reuse specific failure modes in existing databases is provided here, and a lightweight tool is developed for the method. The method and the tool can also be used in other fields with similar scenarios.

  • Efficient Utilization of Vector Registers to Improve FFT Performance on SIMD Microprocessors

    Feng YU  Ruifeng GE  Zeke WANG  

     
    LETTER-Digital Signal Processing

    Vol: E96-A No:7  Page(s): 1637-1641

    We investigate the use of vector registers (VRs) to reduce memory references in single instruction multiple data (SIMD) fast Fourier transform computation. We propose grouping the butterfly computations of several consecutive stages to maximize utilization of the available VRs and to take advantage of the symmetries in the twiddle factors. All butterflies sharing identical twiddle factors are clustered and computed together to further improve performance. The relationship between the number of fused stages and the number of available VRs is then examined. Experimental results on different platforms show that the proposed method is effective.
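    The clustering idea can be illustrated on a plain radix-2 FFT: within each stage, every butterfly at the same offset uses the same twiddle factor, so those butterflies can be gathered and updated as one vector operation. The NumPy sketch below is a generic illustration of that grouping, not the paper's SIMD kernel or its stage-fusion scheme.

```python
import numpy as np

def fft_grouped_twiddles(x):
    """Iterative radix-2 DIT FFT (length must be a power of two).

    Within each stage, all butterflies that share a twiddle factor are
    gathered with a strided slice and updated together, mimicking how a
    SIMD implementation would reuse one twiddle across a whole vector.
    """
    x = np.asarray(x, dtype=complex).copy()
    n = len(x)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"

    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]

    size = 2
    while size <= n:
        half = size // 2
        for k in range(half):                  # one twiddle factor per iteration
            w = np.exp(-2j * np.pi * k / size)
            top = x[k::size].copy()            # all butterflies sharing w
            bot = x[k + half::size] * w
            x[k::size] = top + bot
            x[k + half::size] = top - bot
        size *= 2
    return x

if __name__ == "__main__":
    v = np.random.rand(16)
    print(np.allclose(fft_grouped_twiddles(v), np.fft.fft(v)))  # True
```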

  • Coverage of Irrelevant Components in Systems with Imperfect Fault Coverage

    Jianwen XIANG  Fumio MACHIDA  Kumiko TADANO  Yoshiharu MAENO  Kazuo YANOO  

     
    LETTER-Reliability, Maintainability and Safety Analysis

    Vol: E96-A No:7  Page(s): 1649-1652

    Traditional imperfect fault coverage models consider only the coverage (including identification and isolation) of faulty components; they do not consider the coverage of irrelevant (operational) components. One potential reason for this omission is that these models generally assume a coherent system in which every component is initially relevant. In this paper, we first point out that an initially relevant component can become irrelevant later due to the failures of other components, and thus it is important to consider the handling of irrelevancy even when the system is originally coherent. We propose an irrelevancy coverage model (IRCM) in which coverage is extended from faulty components to irrelevant components as well. The IRCM can not only significantly enhance system reliability by preventing future system failures resulting from uncovered failures of irrelevant components, but can also play an important role in efficient energy use in practice by turning off irrelevant components in a timely manner.
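    As a concrete illustration of how a component can become irrelevant, the brute-force sketch below takes a structure function and the set of already-failed components and tests whether a given working component can still affect the system state under any configuration of the remaining working components. The example structure function (A AND (B OR C)) and the helper names are illustrative assumptions; the paper's IRCM is a coverage model, not this check.

```python
from itertools import product

def is_irrelevant(structure, n, component, failed):
    """Return True if `component` can no longer affect the system state.

    structure: function taking a tuple of n 0/1 component states (1 = working)
               and returning the 0/1 system state.
    failed:    set of component indices already failed (state fixed to 0).
    We check every assignment of the remaining working components: if flipping
    `component` never changes the system state, it has become irrelevant.
    """
    free = [i for i in range(n) if i not in failed and i != component]
    for bits in product((0, 1), repeat=len(free)):
        state = [0] * n
        for i, b in zip(free, bits):
            state[i] = b
        up, down = list(state), list(state)
        up[component], down[component] = 1, 0
        if structure(tuple(up)) != structure(tuple(down)):
            return False
    return True

# Example: the system works iff A works and at least one of B, C works.
phi = lambda s: s[0] & (s[1] | s[2])

print(is_irrelevant(phi, 3, component=1, failed=set()))  # False: B still matters
print(is_irrelevant(phi, 3, component=1, failed={0}))    # True: once A fails, B is irrelevant
```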

  • Detection and Localization of Link Quality Degradation in Transparent WDM Networks

    Wissarut YUTTACHAI  Poompat SAENGUDOMLERT  Wuttipong KUMWILAISAK  

     
    PAPER-Fiber-Optic Transmission for Communications

    Vol: E96-B No:6  Page(s): 1412-1424

    We consider the problem of detecting and localizing link quality degradations in transparent wavelength division multiplexing (WDM) networks. In particular, we consider degradation of the optical signal-to-noise ratio (OSNR), which is a key parameter for link quality monitoring in WDM networks. With transparency in WDM networks, transmission lightpaths can bypass electronic processing at intermediate nodes; accordingly, links cannot always be monitored by receivers at their end nodes. This paper proposes the use of optical multicast probes to monitor OSNR degradations on optical links. The proposed monitoring scheme consists of two steps. The first step is an off-line process that sets up monitoring trees using integer linear programming (ILP); the set of monitoring trees is selected to guarantee that significant OSNR degradations can be identified on any link or links in the network. The second step uses optical performance monitors placed at the receivers identified in the first step; the information from these monitors is collected and fed to an estimation algorithm to localize the degraded links. Numerical results indicate that the proposed monitoring algorithm is able to detect link degradations that cause significant OSNR changes. In addition, we demonstrate how the monitoring information can be used to detect a significant end-to-end OSNR degradation even when there is no significant OSNR degradation on any individual link.

  • MacWilliams Type Identity for M-Spotty Rosenbloom-Tsfasman Weight Enumerator of Linear Codes over Finite Ring

    Jianzhang CHEN  Wenguang LONG  Bo FU  

     
    LETTER-Coding Theory

    Vol: E96-A No:6  Page(s): 1496-1500

    Error control codes have become an essential technique for improving the reliability of various digital systems. A new type of error control code, called m-spotty byte error control codes, is applied to computer memory systems; these codes are essential for making such memory systems reliable. Here, we introduce the m-spotty Rosenbloom-Tsfasman weights and the m-spotty Rosenbloom-Tsfasman weight enumerator of linear codes over Fq[u]/(u^k) with u^k = 0. We also derive a MacWilliams type identity for the m-spotty Rosenbloom-Tsfasman weight enumerator.

  • Interference Rejection Characteristics by Adaptive Array at User Equipment Using Measured K-Factor in Heterogeneous Networks

    Kentaro NISHIMORI  Keisuke KUSUMI  Misaki HORIO  Koshiro KITAO  Tetsuro IMAI  

     
    PAPER

    Vol: E96-B No:6  Page(s): 1256-1264

    In LTE-Advanced heterogeneous networks, a typical cell layout for enhancing frequency utilization is to incorporate picocells and femtocells in a macrocell. However, co-channel interference between the macrocell and the picocell/femtocell is an important issue when the same frequency band is shared by these systems. We have already clarified how the interference from the femto (macro) cell affects the macro (femto) cell. In this paper, we evaluate the interference rejection characteristics of an adaptive array at the user equipment (UE). The characteristics are evaluated based on the K-factor of the Nakagami-Rice fading model and the spatial correlation obtained in an actual outdoor environment. It is shown that a two-element adaptive array at the macro UE (M-UE) can sufficiently reduce the interference from the femto base station (F-BS) to the M-UE even if the total number of signals exceeds the degrees of freedom of the array.

  • Power Allocation and Performance Analysis for Incremental-Selective Decode-and-Forward Cooperative Communications over Nakagami-m Fading Channels

    Rouhollah AGHAJANI  Reza SAADAT  Mohammad Reza AREF  

     
    PAPER-Wireless Communication Technologies

    Vol: E96-B No:6  Page(s): 1531-1539

    The focus of this study is the performance of a relaying network using the incremental selective decode-and-forward (ISDF) protocol over non-selective slow Nakagami-m fading channels. To enhance bandwidth efficiency, the relay retransmits a clean copy of the source signal only when the direct transmission is unsuccessful. The proposed protocol achieves a significant reduction in power consumption and an improvement in performance compared to fixed decode-and-forward (DF). The exact symbol error rate (SER) of M-PSK modulation for the ISDF protocol over general fading channels is derived. However, because the exact SER analysis is very complicated, we also provide an approximate SER expression. Based on this approximation, we derive an optimum power allocation coefficient when an aggregate transmit power constraint is imposed on the source and the relay. Our results show that at least 50% of the total power must be used by the direct link, and the remainder may be used by the relay. Furthermore, the power allocation in this protocol is independent of the quality of the source-destination channel and of the modulation constellation size. Numerical results show that the ISDF protocol can reduce the average transmit power compared with the fixed DF protocol.

  • Directing All Learners to Course Goal with Enforcement of Discipline Utilizing Persona Motivation

    Dong Phuong DINH  Fumiko HARADA  Hiromitsu SHIMAKAWA  

     
    PAPER-Educational Technology

    Vol: E96-D No:6  Page(s): 1332-1343

    This paper proposes the PMD method for designing an introductory programming practice course plan that is inclusive of all learners and stable throughout the course. To achieve this, the method utilizes personas, each of which represents learners having similar motivations for studying programming. The learning of the personas is directed toward the course goal with an enforcement resulting from the discipline, which is an integration of effective learning strategies with the affective components of the personas. Under this enforcement, services to facilitate and promote the learning of each persona can be decided based on the motivation components of each persona, the motivational effects of the services, and the cycle of self-efficacy. The application of the method to about 500 freshmen in a C programming practice course has shown that this is a successful approach to designing courses.

  • An Accurate User Position Estimation Method Using a Single Camera for 3D Display without Glasses

    Byeoung-su KIM  Cho-il LEE  Seong-hwan JU  Whoi-Yul KIM  

     
    PAPER-Pattern Recognition

    Vol: E96-D No:6  Page(s): 1344-1350

    3D display systems without glasses are preferred because of the inconvenience of wearing special glasses while viewing 3D content. In general, non-glass type 3D displays work by sending the left and right views of the content to the corresponding eyes depending on the user's position with respect to the display. Accurate user position estimation has therefore become a very important task for non-glass type 3D displays, yet most such systems require additional hardware or suffer from low accuracy. In this paper, an accurate user position estimation method using a single camera for non-glass type 3D displays is proposed. Since the inter-pupillary distance is utilized for the estimation, the face is first detected and then tracked using an Active Appearance Model. The pose of the face is then estimated to compensate for pose variations. To estimate the user position, a simple perspective mapping function is applied that uses the average inter-pupillary distance; for higher accuracy, the personal inter-pupillary distance can also be used. Experimental results show that the proposed method successfully estimates the user position using a single camera, with a position estimation error small enough for viewing 3D content.
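    The perspective mapping can be illustrated with the standard pinhole-camera relation: if a known real-world inter-pupillary distance appears as a pixel distance in the image, the viewing distance follows from the camera's focal length in pixels. The sketch below is a generic illustration under that model; the focal length, the average IPD value, and the function names are assumptions rather than the paper's calibration.

```python
def estimate_user_distance(ipd_pixels, focal_length_px, ipd_mm=63.0):
    """Estimate camera-to-user distance (mm) from the pinhole model.

    ipd_pixels:      measured distance between pupil centers in the image (px)
    focal_length_px: camera focal length expressed in pixels
    ipd_mm:          assumed real inter-pupillary distance (population average
                     of roughly 63 mm; a personal value can be substituted)
    Pinhole relation: ipd_pixels / focal_length_px = ipd_mm / distance_mm
    """
    return focal_length_px * ipd_mm / ipd_pixels

def lateral_offset(eye_center_px, image_center_px, distance_mm, focal_length_px):
    """Horizontal offset (mm) of the user's eye midpoint from the optical axis."""
    return (eye_center_px - image_center_px) * distance_mm / focal_length_px

if __name__ == "__main__":
    z = estimate_user_distance(ipd_pixels=90.0, focal_length_px=800.0)
    x = lateral_offset(eye_center_px=700.0, image_center_px=640.0,
                       distance_mm=z, focal_length_px=800.0)
    print(f"distance = {z:.0f} mm, lateral offset = {x:.0f} mm")
```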

  • A Simple Scheduling Restriction Scheme for Interference Coordinated Networks

    Moo Ryong JEONG  Nobuhiko MIKI  

     
    PAPER

    Vol: E96-B No:6  Page(s): 1306-1317

    Scheduling restriction is attracting much attention in LTE-Advanced as a technique for reducing the power consumption and network overheads in interference coordinated heterogeneous networks (HetNets). Such a network with inter-cell interference coordination (ICIC) provides two radio resources with different channel quality statistics: one resource is protected from inter-cell interference (hence called the protected resource) and has higher average channel quality, while the other (the non-protected resource) is unprotected and has lower average channel quality. Without scheduling restriction, the channel quality feedback would be doubled to reflect the quality difference between the two resources. We present a simple scheduling restriction scheme that addresses this problem without significant performance degradation. Users with a relatively larger (smaller) average channel quality difference between the two resources are scheduled on the protected (non-protected) resource only, and a boundary user, determined by proportional fair resource allocation (PFRA) under simplified static channels, is scheduled on one of the two resources or on both, depending on PFRA. Because most users are scheduled on only one of the resources, the power consumption and network overheads that would otherwise be required for channel quality feedback on the other resource are avoided. System level simulation of the LTE-Advanced downlink shows that the performance degradation due to our scheduling restriction scheme is less than 2%, with an average feedback reduction of 40%.

  • Lower Bounds on the Aperiodic Hamming Correlations of Frequency Hopping Sequences

    Xing LIU  Daiyuan PENG  Xianhua NIU  Fang LIU  

     
    PAPER-Spread Spectrum Technologies and Applications

    Vol: E96-A No:6  Page(s): 1445-1450

    The periodic Hamming correlation function is used as an important measure for evaluating the goodness of frequency hopping (FH) sequence designs. In real applications, however, it is the aperiodic Hamming correlation of FH sequences that matters, yet it has received little attention in the literature compared with the periodic Hamming correlation. In this paper, new aperiodic Hamming correlation lower bounds for FH sequences are established with respect to the size of the frequency slot set, the sequence length, the family size, the maximum aperiodic Hamming autocorrelation and the maximum aperiodic Hamming crosscorrelation. The new aperiodic bounds are tighter than the Peng-Fan bounds. In addition, the new bounds include the second powers of the maximum aperiodic Hamming autocorrelation and crosscorrelation, whereas the Peng-Fan bounds do not. For a given sequence length, family size and frequency slot set size, the values of the maximum aperiodic Hamming autocorrelation and crosscorrelation lie inside an ellipse given by the new aperiodic bounds.
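    For reference, the aperiodic Hamming correlation of two equal-length FH sequences at shift tau counts the coinciding frequency slots over the overlapping part of the sequences, without wrap-around. The sketch below computes it directly, together with the maximum aperiodic auto- and crosscorrelation values that the bounds above constrain; it is a plain definition-level illustration, not the bound derivation.

```python
def aperiodic_hamming_correlation(x, y, tau):
    """Aperiodic Hamming correlation H_{x,y}(tau) of equal-length FH sequences:
    the number of positions t with x[t] == y[t + tau], without wrap-around."""
    L = len(x)
    if tau >= 0:
        return sum(1 for t in range(L - tau) if x[t] == y[t + tau])
    return aperiodic_hamming_correlation(y, x, -tau)

def max_aperiodic_auto(x):
    """Maximum out-of-phase aperiodic Hamming autocorrelation of x."""
    L = len(x)
    return max(aperiodic_hamming_correlation(x, x, tau) for tau in range(1, L))

def max_aperiodic_cross(x, y):
    """Maximum aperiodic Hamming crosscorrelation of x and y over all shifts."""
    L = len(x)
    return max(aperiodic_hamming_correlation(x, y, tau)
               for tau in range(-(L - 1), L))

if __name__ == "__main__":
    x = [0, 2, 1, 3, 0, 1]
    y = [1, 0, 2, 1, 3, 0]
    print(max_aperiodic_auto(x), max_aperiodic_cross(x, y))
```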

  • Test Generation for Delay Faults on Clock Lines under Launch-on-Capture Test Environment

    Yoshinobu HIGAMI  Hiroshi TAKAHASHI  Shin-ya KOBAYASHI  Kewal K. SALUJA  

     
    PAPER-Dependable Computing

    Vol: E96-D No:6  Page(s): 1323-1331

    This paper deals with delay faults on clock lines under the launch-on-capture test. In this realistic fault model, the delay at an FF driven by the faulty clock line is small enough that the scan shift operation still performs correctly in the presence of the fault; during system clock operation, however, the capture of functional values at the faulty FFs, i.e. the FFs driven by the delayed clock, is delayed, so correct values may not be captured. We developed a fault simulator that can handle such faults, and using this simulator we investigate the relation between the duration of the delay and the difficulty of detecting clock delay faults in the launch-on-capture test. Next, we propose test generation methods for detecting clock delay faults that affect one or two FFs. Experimental results for benchmark circuits are given to establish the effectiveness of the proposed methods.

  • Facial Image Super-Resolution Reconstruction Based on Separated Frequency Components

    Hyunduk KIM  Sang-Heon LEE  Myoung-Kyu SOHN  Dong-Ju KIM  Byungmin KIM  

     
    PAPER

    Vol: E96-A No:6  Page(s): 1315-1322

    Super-resolution (SR) reconstruction is the process of fusing a sequence of low-resolution images into one high-resolution image. Many researchers have introduced various SR reconstruction methods; however, these traditional methods are limited in how much high-frequency information they can recover. Moreover, because of the self-similarity of face images, most facial SR algorithms are machine-learning based. In this paper, we introduce a facial SR algorithm that combines learning-based and regularized SR image reconstruction algorithms. Our approach involves two main ideas. First, we employ separated frequency components to reconstruct high-resolution images. Second, we separate the training face images into regions. These approaches help to recover high-frequency information. In our experiments, we demonstrate the effectiveness of these ideas.
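    One common way to realize the frequency separation mentioned above is to take a Gaussian-blurred copy of an image as its low-frequency component and the residual as its high-frequency component. The sketch below does exactly that as a generic illustration; the blur kernel and the split are assumptions, not the paper's specific decomposition or its learning stage.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def separate_frequency_components(image, sigma=2.0):
    """Split an image into low- and high-frequency components.

    The low-frequency part is a Gaussian-blurred copy; the high-frequency
    part is the residual, so that low + high reconstructs the input exactly.
    """
    img = np.asarray(image, dtype=float)
    low = gaussian_filter(img, sigma=sigma)
    high = img - low
    return low, high

if __name__ == "__main__":
    face = np.random.rand(64, 64)          # stand-in for a face image
    low, high = separate_frequency_components(face)
    print(np.allclose(low + high, face))   # True: lossless decomposition
```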

  • Dynamic Fault Tree Analysis Using Bayesian Networks and Sequence Probabilities

    Tetsushi YUGE  Shigeru YANAGI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

    Vol: E96-A No:5  Page(s): 953-962

    A method of calculating the exact top event probability of a fault tree with dynamic gates and repeated basic events is proposed. The top event probability of such a dynamic fault tree can be obtained by converting the tree into an equivalent Markov model. However, the Markov-based method is not practical for a complex system model because the number of states that must be considered in the Markov analysis increases explosively with the number of basic events in the model. To overcome this shortcoming, we propose an alternative method in this paper. It is a hybrid of a Bayesian network (BN) and an algebraic technique. First, modularization is applied to the dynamic fault tree. The detected modules are classified into two types: those that satisfy the parental Markov condition and those that do not. A module that does not satisfy the parental Markov condition is replaced with an equivalent single event whose occurrence probability is obtained as the sum of disjoint sequence probabilities. After the contraction of modules without the parental Markov condition, the BN algorithm is applied to the dynamic fault tree. The conditional probability tables for dynamic gates are presented. The BN is a standard one and has hierarchical and modular features. A numerical example shows that our method works well for complex systems.
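    As a minimal illustration of a sequence probability, consider a priority-AND condition on two independent components A and B with failure densities f_A, f_B and distribution functions F_A, F_B: the probability that both have failed by time t with A failing first is obtained by conditioning on A's failure time. This is the textbook two-component case, not the paper's general summation over disjoint sequences.

    \Pr\{\,T_A < T_B \le t\,\} \;=\; \int_{0}^{t} f_A(u)\,\bigl[F_B(t) - F_B(u)\bigr]\,du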

  • Dictionary Learning with Incoherence and Sparsity Constraints for Sparse Representation of Nonnegative Signals

    Zunyi TANG  Shuxue DING  

     
    PAPER-Biocybernetics, Neurocomputing

    Vol: E96-D No:5  Page(s): 1192-1203

    This paper presents a method for learning an overcomplete, nonnegative dictionary and for obtaining the corresponding coefficients so that a group of nonnegative signals can be sparsely represented by them. This is accomplished by posing the learning as a nonnegative matrix factorization (NMF) problem with maximization of the incoherence of the dictionary and of the sparsity of the coefficients. By incorporating a dictionary-incoherence penalty and a sparsity penalty in the NMF formulation and then adopting a hierarchically alternating optimization strategy, we show that the problem can be cast as two sequential optimization problems with quadratic objectives. Each subproblem can be solved explicitly, so the whole problem can be solved efficiently, which leads to the proposed algorithm, sparse hierarchical alternating least squares (SHALS). The SHALS algorithm iteratively solves the two subproblems, corresponding to learning the dictionary and estimating the coefficients for reconstructing the signals. Numerical experiments demonstrate that the new algorithm performs better than the nonnegative K-SVD (NN-KSVD) algorithm and several other well-known algorithms, and that its computational cost is remarkably lower than that of the compared algorithms.
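    To give a flavor of hierarchically alternating updates, the sketch below implements a generic sparse HALS loop for a nonnegative factorization X ≈ DH: each dictionary column and each coefficient row has a closed-form nonnegative update, with an L1 penalty on the coefficients. The incoherence penalty of the paper's SHALS is omitted, and the update details are the standard sparse-HALS form rather than the authors' exact algorithm.

```python
import numpy as np

def sparse_hals(X, n_atoms, n_iter=200, lam=0.1, eps=1e-9, seed=0):
    """Generic sparse HALS for a nonnegative factorization X ~= D @ H.

    Dictionary columns (D) and coefficient rows (H) are updated one at a
    time in closed form, with nonnegativity enforced by clipping and an
    L1 sparsity penalty `lam` applied to H.  Dictionary columns are
    renormalized to unit norm to keep the scaling unambiguous.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D = rng.random((m, n_atoms))
    H = rng.random((n_atoms, n))

    for _ in range(n_iter):
        # update coefficient rows
        DtX, DtD = D.T @ X, D.T @ D
        for j in range(n_atoms):
            num = DtX[j] - DtD[j] @ H + DtD[j, j] * H[j] - lam
            H[j] = np.maximum(num / (DtD[j, j] + eps), 0.0)
        # update dictionary columns
        XHt, HHt = X @ H.T, H @ H.T
        for j in range(n_atoms):
            num = XHt[:, j] - D @ HHt[:, j] + HHt[j, j] * D[:, j]
            d = np.maximum(num, 0.0)
            D[:, j] = d / (np.linalg.norm(d) + eps)
    return D, H

if __name__ == "__main__":
    X = np.abs(np.random.rand(30, 100))
    D, H = sparse_hals(X, n_atoms=10)
    print("relative error:", np.linalg.norm(X - D @ H) / np.linalg.norm(X))
```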
