
Keyword Search Results

[Keyword] LD (1872 hits)

Results 321-340 (of 1872 hits)

  • Reliability-Enhanced ECC-Based Memory Architecture Using In-Field Self-Repair

    Gian MAYUGA  Yuta YAMATO  Tomokazu YONEDA  Yasuo SATO  Michiko INOUE  

     
    PAPER-Dependable Computing

      Publicized:
    2016/06/27
      Vol:
    E99-D No:10
      Page(s):
    2591-2599

    Embedded memory is used extensively in SoCs and is rapidly growing in size and density. It enables SoCs to offer richer features, but at the expense of occupying the largest share of chip area. Due to the continuous scaling of nanoscale device technology, large memories introduce aging-induced faults and soft errors, which affect reliability. In-field test and repair, as well as ECC, can be used to maintain reliability, and recently these methods have been combined so that uncorrectable words are repaired while correctable words are left to the ECC. In this paper, we propose a novel in-field repair strategy that repairs uncorrectable words, and possibly correctable words, for an ECC-based memory architecture. It executes an adaptive reconfiguration method that ensures 'fresh' memory words are always used until the spare words run out. Experimental results demonstrate that our strategy enhances reliability while incurring only a small area overhead.

  • Effective Magnetic Sheet Loading Method for Near Field Communication Antennas

    Takaho SEKIGUCHI  Yoshinobu OKANO  Satoshi OGINO  

     
    BRIEF PAPER

      Vol:
    E99-C No:10
      Page(s):
    1211-1214

    Near field communication (NFC) antennas are often lined with magnetic sheets to reduce the performance degradation caused by nearby metal objects. Although amorphous sheets have high permeability and are well suited for such lining, their magnetic loss is also high. This paper therefore proposes a technique for suppressing magnetic loss by modifying the shape of the sheet without changing its composition. The utility of the proposed technique was investigated in this study.

  • Topics Arising from the WRC-15 with Respect to Satellite-Related Agenda Items Open Access

    Nobuyuki KAWAI  Satoshi IMATA  

     
    INVITED PAPER

      Vol:
    E99-B No:10
      Page(s):
    2113-2120

    Along with the remarkable advancement of radiocommunication services, including satellite services, the radio-frequency spectrum and the geostationary-satellite orbit are becoming congested. WRC-15 was held in November 2015 to study and implement the efficient use of these natural resources. A number of satellite-related agenda items concerned frequency allocation, new uses of satellite communications, and satellite regulatory issues. This paper overviews the outcomes of these agenda items at WRC-15, as well as the agenda items for the next WRC (i.e., WRC-19).

  • Robust Projective Template Matching

    Chao ZHANG  Takuya AKASHI  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2016/06/08
      Vol:
    E99-D No:9
      Page(s):
    2341-2350

    In this paper, we address the problem of projective template matching, which aims to estimate the parameters of a projective transformation. Although a homography can be estimated by combining key-point-based local features and RANSAC, this approach can hardly cope with feature-less images or images with a high outlier rate. Estimating the projective transformation remains a difficult problem due to its high dimensionality and strong non-convexity. Our approach is to quantize the parameters of the projective transformation over a binary finite field and search for an appropriate solution over the discrete sampling set as the final result. The benefit is that we avoid searching among a huge number of potential candidates. Furthermore, in order to approximate the global optimum more efficiently, we develop a level-wise adaptive sampling (LAS) method within a genetic algorithm framework. With LAS, individuals are selected uniformly from each fitness level, and the elite solution finally converges to the global optimum. In the experiments, we compare our method against a popular projective solution and analyze it systematically. The results show that our method provides convincing performance and has a wider scope of application.
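As a rough illustration of the level-wise selection step described above, here is a minimal sketch; the equal-sized level binning and the per-level sample count are our assumptions, not the paper's exact scheme.

```python
import random

def las_select(population, fitness, n_levels=4, per_level=2, rng=random):
    """Select individuals uniformly from each fitness level (illustrative)."""
    scored = sorted(population, key=fitness)
    level_size = max(1, len(scored) // n_levels)
    selected = []
    for i in range(0, len(scored), level_size):
        level = scored[i:i + level_size]          # one fitness level
        selected.extend(rng.sample(level, min(per_level, len(level))))
    return selected
```

In a full genetic algorithm loop, the selected individuals would then undergo crossover and mutation before the next level-wise selection.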

  • Acquiring 4D Light Fields of Self-Luminous Extended Light Sources Using Programmable Filter

    Motohiro NAKAMURA  Shinnosuke OYA  Takahiro OKABE  Hendrik P. A. LENSCH  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2016/06/17
      Vol:
    E99-D No:9
      Page(s):
    2360-2367

    Self-luminous light sources in the real world often have non-negligible sizes and radiate light inhomogeneously. Acquiring the model of such a light source is highly important for accurate image synthesis and understanding. In this paper, we propose an approach to measuring the 4D light fields of self-luminous extended light sources by using a liquid crystal (LC) panel, i.e., a programmable optical filter, and a diffuse-reflection board. The proposed approach recovers the 4D light field from images of the board illuminated by light that is radiated from the source and passes through the LC panel. We exploit the fact that the transmittance of the LC panel can be controlled both spatially and temporally. The approach enables multiplexed and adaptive sensing, and is therefore able to acquire 4D light fields more efficiently and densely than the straightforward method. We implemented a prototype setup and confirmed through a number of experiments that our approach is effective for modeling self-luminous extended light sources in the real world.

  • Weighted 4D-DCT Basis for Compressively Sampled Light Fields

    Yusuke MIYAGI  Keita TAKAHASHI  Toshiaki FUJII  

     
    PAPER

      Vol:
    E99-A No:9
      Page(s):
    1655-1664

    Light field data, which are composed of multi-view images, have various 3D applications. However, the cost of acquiring many images from slightly different viewpoints sometimes makes the use of light fields impractical. Compressive sensing is a new way to obtain the entire light field from only a few camera shots instead of capturing all the images individually. In particular, the coded aperture/mask technique enables us to capture light field data compressively through a single camera. A pixel value recorded by such a camera is the sum of the light rays that pass through different positions on the coded aperture/mask. The target light field can be reconstructed from the recorded pixel values by using prior information on the light field signal. As prior information, the current state of the art uses a dictionary (light field atoms) learned from training datasets. Meanwhile, it has been reported that general bases such as those of the discrete cosine transform (DCT) are not suitable for efficiently representing this prior information. In this study, however, we demonstrate that a 4D-DCT basis works surprisingly well when it is combined with a weighting scheme that accounts for the amplitude differences between DCT coefficients. Simulations using 18 light field datasets show the superiority of the weighted 4D-DCT basis over the learned dictionary. Furthermore, we analyze a disparity-dependent property of the reconstructed data that is unique to light fields.
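To make the weighting idea concrete, here is a minimal numpy sketch in which DCT coefficients are scaled by amplitude-dependent weights before reconstruction; the 2D (rather than 4D) setting and the power-law weight profile are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def weighted_coeffs(block, decay=0.5):
    """2D DCT coefficients scaled by frequency-dependent weights (assumed)."""
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T           # separable 2D DCT
    idx = np.add.outer(np.arange(block.shape[0]), np.arange(block.shape[1]))
    weights = (1.0 + idx) ** decay     # larger weight on higher frequencies
    return coeffs * weights, weights

block = np.outer(np.hanning(8), np.hanning(8))
wc, w = weighted_coeffs(block)
d = dct_matrix(8)
rec = d.T @ (wc / w) @ d               # undo weights, then inverse DCT
assert np.allclose(rec, block)
```

In an actual compressive reconstruction, the weights would enter the sparsity prior rather than be inverted directly; this sketch only shows that the weighting is a lossless reparameterization of the DCT basis.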

  • Exhaustive and Efficient Identification of Rationales Using GQM+Strategies with Stakeholder Relationship Analysis

    Takanobu KOBORI  Hironori WASHIZAKI  Yoshiaki FUKAZAWA  Daisuke HIRABAYASHI  Katsutoshi SHINTANI  Yasuko OKAZAKI  Yasuhiro KIKUSHIMA  

     
    PAPER

      Publicized:
    2016/07/06
      Vol:
    E99-D No:9
      Page(s):
    2219-2228

    To achieve overall business goals, GQM+Strategies is one approach that aligns the business goals at each level of an organization with strategies and assesses the achievement of those goals. Strategies are based on rationales (contexts and assumptions). Because extracting all rationales is an important step in the GQM+Strategies approach, we propose the Context-Assumption-Matrix (CAM), which refines the GQM+Strategies model by extracting rationales from an analysis of the relationships between stakeholders, together with a process for using GQM+Strategies with the CAM effectively. To demonstrate the effectiveness of the CAM and the defined process, we conducted three experiments involving students majoring in information sciences at two different Japanese universities. Moreover, we applied the GQM+Strategies approach with the CAM at the Recruit Sumai Company in Japan. The results reveal that, compared to GQM+Strategies alone, GQM+Strategies with the CAM can extract rationales of the same quality more efficiently and exhaustively.

  • Reliability and Failure Impact Analysis of Distributed Storage Systems with Dynamic Refuging

    Hiroaki AKUTSU  Kazunori UEDA  Takeru CHIBA  Tomohiro KAWAGUCHI  Norio SHIMOZONO  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2016/06/17
      Vol:
    E99-D No:9
      Page(s):
    2259-2268

    In recent data centers, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems using more than a thousand low-cost large-capacity drives. Some large-scale storage systems protect data by erasure coding to prevent data loss. As the redundancy level of the erasure coding is increased, the probability of data loss decreases, but more normal data write operations and additional storage for coding are incurred. We therefore need to achieve high reliability at the lowest possible redundancy level. There are two concerns regarding reliability in large-scale storage systems: (i) as the number of drives increases, systems are more subject to multiple drive failures, and (ii) distributing stripes among many drives can speed up the rebuild time but increases the risk of data loss due to multiple drive failures. If data loss occurs due to multiple drive failures, it affects many users of the storage system. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method that we call Dynamic Refuging. Dynamic Refuging rebuilds failed blocks starting from those with the lowest redundancy and strategically selects blocks to read for repairing lost data. We model the dynamic change in the amount of storage at each redundancy level caused by multiple drive failures, and perform a reliability analysis with Monte Carlo simulation using realistic drive failure characteristics. We present a failure impact model and a method for localizing failures. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, the proposed technique turned out to scale well, and the probability of data loss decreased by two orders of magnitude for systems with a thousand drives compared to normal RAID. An appropriate setting of the stripe distribution level could localize the failure.
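The rebuild ordering that Dynamic Refuging applies, lowest remaining redundancy first, can be sketched as a priority queue; the data structure choice here is our illustration, not the paper's implementation.

```python
import heapq

def rebuild_order(stripes):
    """stripes: dict of stripe name -> remaining redundancy level.
    Returns stripe names in rebuild order, most at-risk first."""
    heap = [(redundancy, name) for name, redundancy in stripes.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)   # lowest remaining redundancy first
        order.append(name)
    return order

print(rebuild_order({"s1": 2, "s2": 0, "s3": 1}))  # → ['s2', 's3', 's1']
```

A stripe at redundancy 0 is one more failure away from data loss, so repairing it first minimizes the window of vulnerability.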

  • Embedded F-SIR Type Transmission Line with Open-Stub for Negative Group Delay Characteristic

    Yoshiki KAYANO  Hiroshi INOUE  

     
    BRIEF PAPER

      Vol:
    E99-C No:9
      Page(s):
    1023-1026

    Negative group delay characteristics can be used to improve signal integrity, for example in an equalizer that compensates for the group delay of a transmission line (TL). This brief paper proposes the concept of an embedded Folded-Stepped Impedance Resonator (F-SIR) structure with an open-stub resonator, which provides negative group delay and negative slope characteristics at high frequency together with low insertion loss. The proposed TL combines resonance and anti-resonance due to the open-stub resonator in order to establish wideband negative group delay and negative slope characteristics. The proposed TL was fabricated on a PCB, and the concept was validated by measurement and simulation.

  • Entity Identification on Microblogs by CRF Model with Adaptive Dependency

    Jun-Li LU  Makoto P. KATO  Takehiro YAMAMOTO  Katsumi TANAKA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2016/06/20
      Vol:
    E99-D No:9
      Page(s):
    2295-2305

    We address the problem of entity identification on microblogs, with special attention to indirect reference cases in which entities are not referred to by name. Most studies identify entities referred to by their full/partial names or abbreviations, whereas many entities in microblogs are mentioned only indirectly, which makes them difficult to identify in short texts. We therefore tackle indirect reference cases by developing features that are particularly important for certain types of indirect references and by modeling the dependency among referred entities with a Conditional Random Field (CRF) model. In addition, we model non-sequential order dependency while keeping inference tractable by dynamically building the dependency among entities. The experimental results suggest that our features were effective for indirect references, and that our CRF model with adaptive dependency was robust even when there were multiple mentions in a microblog, achieving the same high performance as the fully connected CRF model.

  • Database Calibration for Outdoor Wi-Fi Positioning System

    Yuyang HUANG  Li-Ta HSU  Yanlei GU  Haitao WANG  Shunsuke KAMIJO  

     
    PAPER-Intelligent Transport System

      Vol:
    E99-A No:9
      Page(s):
    1683-1690

    The limitations of GPS in urban canyons have led to the rapid development of Wi-Fi positioning systems (WPS). A fingerprint-based WPS can be divided into calibration and positioning stages. In the calibration stage, several grid points (GPs) are selected, and their position tags and featured access points (APs) are collected to build a fingerprint database. In the positioning stage, real-time measurements of APs are compared with the feature of each GP in the database. The k weighted nearest neighbors (KWNN) algorithm is used as the pattern-matching algorithm to estimate the final position. However, the performance of outdoor fingerprint-based WPS is not good enough for pedestrian navigation. The main challenge is to build a robust fingerprint database, because the number of APs received in outdoor environments varies widely. In addition, the position estimated by a GPS receiver is used as the position tag of each GP to build the fingerprint database automatically. This paper studies the lifecycle of a fingerprint database in an outdoor environment. We also show that building the database from data collected over a long period improves positioning accuracy. Moreover, a new 3D-GNSS (3D building models aided GNSS) positioning method is used to provide accurate position tags. In this paper, the fingerprint-based WPS was developed in an outdoor environment near the center of Tokyo. The proposed WPS achieves around 17 meters positioning accuracy in urban canyons.
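A minimal sketch of the KWNN pattern-matching step described above, using inverse-distance weights; the weighting function and the toy RSSI values in the usage example are assumptions, not the paper's parameters.

```python
import math

def kwnn(measurement, database, k=3, eps=1e-6):
    """database: list of ((x, y) position tag, RSSI fingerprint vector).
    Returns the inverse-distance-weighted centroid of the k nearest GPs."""
    # Rank grid points by Euclidean distance in signal (RSSI) space.
    ranked = sorted(database, key=lambda e: math.dist(measurement, e[1]))
    top = ranked[:k]
    # Closer fingerprints get larger weights.
    weights = [1.0 / (math.dist(measurement, fp) + eps) for _, fp in top]
    wsum = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, top)) / wsum
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, top)) / wsum
    return x, y

db = [((0.0, 0.0), [-40, -70]),
      ((10.0, 0.0), [-70, -40]),
      ((5.0, 5.0), [-55, -55])]
print(kwnn([-41, -69], db, k=2))  # lands near the (0, 0) grid point
```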

  • A 50-Gb/s Optical Transmitter Based on a 25-Gb/s-Class DFB-LD and a 0.18-µm SiGe BiCMOS LD Driver

    Takashi TAKEMOTO  Yasunobu MATSUOKA  Hiroki YAMASHITA  Takahiro NAKAMURA  Yong LEE  Hideo ARIMOTO  Tatemi IDO  

     
    PAPER-Optoelectronics

      Vol:
    E99-C No:9
      Page(s):
    1039-1047

    A 50-Gb/s optical transmitter, consisting of a 25-Gb/s-class lens-integrated DFB-LD (with a -3-dB bandwidth of 20 GHz) and an LD-driver chip based on 0.18-µm SiGe BiCMOS technology for inter- and intra-rack transmission, was developed and tested. The DFB-LD and LD-driver chip are flip-chip mounted on an alumina ceramic package. To suppress inter-symbol interference due to the limited DFB-LD bandwidth and signal reflection between the DFB-LD and the package, the LD driver includes a two-tap pre-emphasis circuit and a high-speed termination circuit. Operating at a data rate of 50 Gb/s, the optical transmitter enhances the LD bandwidth and demonstrated an eye opening with a jitter margin of 0.23 UI. The power efficiency of the optical transmitter at 50 Gb/s is 16.2 mW/Gb/s.

  • Self-Adaptive Scaled Min-Sum Algorithm for LDPC Decoders Based on Delta-Min

    Keol CHO  Ki-Seok CHUNG  

     
    LETTER-Coding Theory

      Vol:
    E99-A No:8
      Page(s):
    1632-1634

    A self-adaptive scaled min-sum algorithm for LDPC decoding based on the difference between the first two minima of the check node messages (Δmin) is proposed. Δmin is utilized for adjusting the scaling factor of the check node messages, and simulation results show that the proposed algorithm improves the error correcting performance compared to existing algorithms.
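The core idea can be sketched as follows; the linear mapping from Δmin to the scaling factor (base 0.75, gain 0.05) is a made-up illustration, not the paper's actual rule.

```python
# Sketch of a Δmin-driven check node update in scaled min-sum LDPC decoding.
def check_node_update(magnitudes, base=0.75, gain=0.05, cap=1.0):
    m = sorted(magnitudes)
    min1, min2 = m[0], m[1]          # two smallest input magnitudes
    delta_min = min2 - min1          # Δmin: gap between the two minima
    # A larger gap means the minimum is more trustworthy, so attenuate less.
    scale = min(cap, base + gain * delta_min)
    return scale * min1, scale, delta_min
```

Here the scaling factor adapts per check node and per iteration, instead of being a single fixed constant as in conventional scaled min-sum.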

  • New Estimation Method for the Operational Low Frequency End of Antipodal Vivaldi Antennas

    Hien Ba CHU  Hiroshi SHIRAI  Chien Dao NGOC  

     
    PAPER-Electromagnetic Theory

      Vol:
    E99-C No:8
      Page(s):
    947-955

    This paper presents a simple approach for designing an antipodal Vivaldi antenna, including a new and better estimate of the low-frequency end of the operational range. The final dimensions of the antenna parameters are determined by using the High Frequency Structure Simulator (HFSS). The proposed antenna has a simple configuration but exhibits low return loss, good radiation characteristics, and high, flat gain across the operating ultra-wideband frequency range (3.1-10.6 GHz). Lastly, the antenna was fabricated to the derived specification, and its properties were confirmed by measurements.

  • Threshold Relaxation and Holding Time Limitation Method for Accepting More General Calls under Emergency Trunk Reservation

    Kazuki TANABE  Sumiko MIYATA  Ken-ichi BABA  Katsunori YAMAOKA  

     
    PAPER

      Vol:
    E99-A No:8
      Page(s):
    1518-1528

    In emergency situations, telecommunication networks become congested due to the large number of call requests. Moreover, some infrastructure breaks down, so the undamaged communication resources must be utilized more efficiently. For this reason, several lines in telephone exchanges are generally reserved for emergency calls, whose users communicate crucial information. The number of lines reserved for emergency calls is determined by a threshold in a trunk reservation control method. To accept both the required emergency calls and more general calls, the traffic intensity of arriving emergency calls should be estimated in advance, and the threshold should be configured so that the number of reserved lines stays below this estimate. We further propose actively limiting the holding time of general calls: by guaranteeing a holding time sufficient for communicating essential information, the limitation reduces long calls so that more general calls can be accepted. In this paper, we propose a new CAC method that utilizes undamaged communication resources more efficiently during emergencies. The proposed method accepts more general calls by collaboratively relaxing the trunk reservation threshold and limiting the holding time of general calls. It is targeted not only at telephone exchanges but also at various network systems, e.g., wireless base stations or SIP servers. With our method, the threshold is configured in consideration of the ratio of the traffic intensities estimated in advance. We modeled the telephone exchange as a queueing loss system and calculated the call-blocking rates of both emergency and general calls by computer simulation. A comparison with the conventional holding time limitation method showed that our proposed method accepts the required number of emergency calls by appropriately relaxing the threshold, while suppressing the increase in call blocking of general calls.
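As a rough illustration of the queueing-loss-system view with a trunk reservation threshold: general calls are admitted only while more than `threshold` lines remain free, reserving the remainder for emergency calls. All parameters below are illustrative; the paper's actual CAC logic (including holding time limitation) is richer.

```python
import heapq
import random

def simulate(c=10, threshold=2, lam_g=8.0, lam_e=1.0, mu=1.0,
             horizon=10_000.0, seed=1):
    """Toy loss-system simulation; returns per-class call-blocking rates."""
    rng = random.Random(seed)
    t, busy = 0.0, 0
    departures = []                      # min-heap of departure times
    blocked = {"general": 0, "emergency": 0}
    offered = {"general": 0, "emergency": 0}
    while t < horizon:
        t += rng.expovariate(lam_g + lam_e)          # next arrival
        while departures and departures[0] <= t:     # release finished calls
            heapq.heappop(departures)
            busy -= 1
        kind = ("general" if rng.random() < lam_g / (lam_g + lam_e)
                else "emergency")
        offered[kind] += 1
        free = c - busy
        # General calls must leave `threshold` lines free for emergencies.
        admit = free > threshold if kind == "general" else free > 0
        if admit:
            busy += 1
            heapq.heappush(departures, t + rng.expovariate(mu))
        else:
            blocked[kind] += 1
    return {k: blocked[k] / max(1, offered[k]) for k in blocked}

print(simulate())  # emergency blocking is far below general blocking
```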

  • Transmission Characteristics and Shielding Effectiveness of Shielded-Flexible Printed Circuits for Differential-Signaling

    Yoshiki KAYANO  Hiroshi INOUE  

     
    PAPER

      Vol:
    E99-C No:7
      Page(s):
    766-773

    To provide basic considerations for a method of suppressing EMI from differential-paired lines on flexible printed circuits (FPC), the signal integrity (SI) performance and shielding effectiveness (SE) of shielded FPCs for differential signaling are investigated in this paper, experimentally and by numerical modeling. Firstly, the transmission characteristics from TDR measurement and the frequency response of |Sdd21| are discussed from the viewpoint of signal integrity. Secondly, as a measure of SE performance against EMI, the frequency responses of the magnetic field are investigated. Although placing a conductive shield near the paired lines decreases the characteristic impedance, |Sdd21| for the 5.5-µm-thick copper shield is not degraded compared with the unshielded case, and sufficient SE against the magnetic field can be established. However, a very thin shield degrades both SI and SE performance: the frequency response of |Sdd21| at higher frequencies for the 0.1-µm Ag case shows a steep loss roll-off. Reflection loss resulting from impedance mismatching is not the dominant loss factor; the dominant factor may be magnetic field leakage through the very thin conductive shield.

  • Efficient Aging-Aware SRAM Failure Probability Calculation via Particle Filter-Based Importance Sampling

    Hiromitsu AWANO  Masayuki HIROMOTO  Takashi SATO  

     
    PAPER

      Vol:
    E99-A No:7
      Page(s):
    1390-1399

    An efficient Monte Carlo (MC) method for calculating the degradation of the failure probability of an SRAM cell due to negative bias temperature instability (NBTI) is proposed. In the proposed method, a particle filter is utilized to incrementally track temporal performance changes in the SRAM cell. The number of simulations required to obtain a stable particle distribution is greatly reduced by reusing the final particle distribution of the previous time step as the initial distribution. Combined with a binary classifier that quickly judges whether an MC sample causes a malfunction of the cell, the total number of simulations needed to capture the temporal change of the failure probability is significantly reduced. The proposed method achieves a 13.4× speed-up over the state-of-the-art method.
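The reuse idea can be sketched as follows; the drift/noise model of aging is a stand-in for the NBTI-degraded transistor parameters, purely for illustration.

```python
import random

def evolve_particles(particles, drift, noise, n_keep=None):
    """Seed time step t+1 with the particle set from time step t.
    Each particle shifts by an aging `drift` plus Gaussian perturbation."""
    seeds = particles if n_keep is None else particles[:n_keep]
    return [p + drift + random.gauss(0.0, noise) for p in seeds]

random.seed(0)
parts = [random.gauss(0.0, 1.0) for _ in range(1000)]   # initial distribution
for step in range(5):                                   # each step: more aging
    parts = evolve_particles(parts, drift=0.1, noise=0.05)
```

Because the previous step's converged distribution is already close to the next step's target, far fewer warm-up simulations are needed per aging step than if the sampler restarted from scratch.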

  • An Error-Propagation Minimization Based Signal Selection Scheme for QRM-MLD

    Ilmiawan SHUBHI  Hidekazu MURATA  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E99-B No:7
      Page(s):
    1566-1576

    Recently, multi-user multiple-input multiple-output (MU-MIMO) systems have been widely studied. For interference cancellation, MU-MIMO commonly uses spatial precoding techniques. These techniques, however, require the transmitters to have perfect knowledge of the downlink channel state information (CSI), which is hard to achieve in high-mobility environments. Instead of spatial precoding, a collaborative interference cancellation (CIC) technique can be employed in such environments. In CIC, mobile stations (MSs) collaborate and share their received signals to increase their demultiplexing capability. To make the signal exchange between collaborating users efficient, signal selection can be applied. In this paper, a signal selection scheme suitable for the QRM-MLD algorithm is proposed. The proposed scheme uses the minimum Euclidean distance criterion to obtain optimum bit error rate (BER) performance. Numerical results obtained through computer simulations show that the proposed scheme provides BER performance close to that of MLD even when the number of candidates in QRM-MLD is relatively small. In addition, the proposed scheme is feasible to implement owing to its low computational complexity.

  • Learning from Multiple Sources via Multiple Domain Relationship

    Zhen LIU  Junan YANG  Hui LIU  Jian LIU  

     
    LETTER-Pattern Recognition

      Publicized:
    2016/04/11
      Vol:
    E99-D No:7
      Page(s):
    1941-1944

    Transfer learning extracts useful information from a related source domain and leverages it to promote the target learning. The effectiveness of the transfer is affected by the relationships among the domains. In this paper, we propose a novel multi-source transfer learning method based on multi-similarity. The method increases the chance of finding sources closely related to the target, reducing “negative transfer”, and also imports more knowledge from multiple sources for the target learning. It explores the relationship between the sources and the target with a multi-similarity metric, and then transfers knowledge from the sources to the target under the smoothness assumption, which enforces that the target classifier shares similar decision values with the relevant source classifiers on the unlabeled target samples. Experimental results demonstrate that the proposed method effectively enhances learning performance.

  • Automated Duplicate Bug Report Detection Using Multi-Factor Analysis

    Jie ZOU  Ling XU  Mengning YANG  Xiaohong ZHANG  Jun ZENG  Sachio HIROKAWA  

     
    PAPER-Software Engineering

      Publicized:
    2016/04/01
      Vol:
    E99-D No:7
      Page(s):
    1762-1775

    Bug reports, expressed in natural-language text, are often verbose, ambiguous, and poorly written, which makes duplicate bug report detection challenging. Current automatic duplicate detection techniques have mainly focused on textual information and ignored other useful factors. To improve detection accuracy, in this paper we propose a new approach called the LNG (LDA and N-gram) model, which takes advantage of the topic model LDA and the word-based N-gram model. LNG considers multiple factors that potentially affect detection accuracy, including textual information, semantic correlation, word order, contextual connections, and categorical information. Besides, the N-gram component of the LNG model is improved by modifying its similarity algorithm. The experiment is conducted on more than 230,000 real bug reports from the Eclipse project. For the evaluation, we propose a new metric, the exact-accuracy (EA) rate, which enhances the understanding of duplicate detection performance. The evaluation results show that the recall rate, precision rate, and EA rate of the proposed method are all higher than those obtained by treating the factors separately. The recall rate is also improved by 2.96%-10.53% compared to the state-of-the-art approach DBTM.
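As an illustration of combining a topic-level score with a word-order-aware n-gram score, here is a minimal sketch; the Jaccard similarity and the 0.5 mixing weight are our assumptions, not the LNG model's actual algorithm.

```python
def ngrams(tokens, n=2):
    """Set of word n-grams, preserving local word order."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_sim(a, b, n=2):
    """Jaccard similarity over word n-grams (illustrative choice)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def duplicate_score(topic_sim, tokens_a, tokens_b, alpha=0.5):
    """Mix a topic-model similarity with the n-gram similarity."""
    return alpha * topic_sim + (1 - alpha) * ngram_sim(tokens_a, tokens_b)
```

Two reports would then be flagged as duplicates when `duplicate_score` exceeds some tuned threshold; the topic term captures semantic correlation while the n-gram term captures word order.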
