
Keyword Search Result

[Keyword] SiON (4624 hits)

Showing results 901-920 of 4624

  • Parameterization of High-Dimensional Perfect Sequences over a Composition Algebra over R

    Takao MAEDA  Yodai WATANABE  Takafumi HAYASHI  

     
    PAPER-Sequence

    Vol: E98-A No:12  Page(s): 2439-2445

    To analyze the structure of a set of high-dimensional perfect sequences over a composition algebra over R, we developed the theory of Fourier transforms of the set of such sequences. We define the discrete cosine transform and the discrete sine transform, and we show that there exists a relationship between these transforms and a convolution of sequences. By applying this property to a set of perfect sequences, we obtain a parameterization theorem. Using this theorem, we show the equivalence between the left perfectness and right perfectness of sequences. For sequences of real numbers, we obtain the parameterization without restrictions on the parameters.
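
    For reference, a one-dimensional sequence a = (a_0, ..., a_{N-1}) is usually called perfect when its periodic autocorrelation vanishes at every non-zero shift; the standard scalar definition is sketched below as background (the paper's high-dimensional, composition-algebra setting generalizes this notion).

        R_a(\tau) = \sum_{n=0}^{N-1} a_n \, \overline{a_{(n+\tau) \bmod N}} = 0 \quad \text{for all } \tau \not\equiv 0 \pmod{N}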

  • Survivability Analysis of VM-Based Intrusion Tolerant Systems

    Junjun ZHENG  Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER-Network

    Publicized: 2015/09/15  Vol: E98-D No:12  Page(s): 2082-2090

    Survivability is the capability of a system to provide its services in a timely manner even after intrusion and compromise occur. In this paper, we focus on the quantitative survivability analysis of a virtual machine (VM)-based intrusion-tolerant system in the presence of Byzantine failures caused by malicious attacks. An intrusion-tolerant system is able to continue providing correct services even after it has been intruded. This paper introduces a virtualization-based intrusion-tolerant scheme and derives the success probability of a single request via a Markov chain, under an environment in which VMs have been intruded through a security hole by malicious attacks. Finally, in numerical experiments, we evaluate the performance of the VM-based intrusion-tolerant system from the viewpoint of survivability.
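
    As an illustration only (the abstract does not give the actual state space or transition probabilities), the sketch below computes the success probability of one request as an absorption probability of a small discrete-time Markov chain with hypothetical healthy/intruded/success/failure states.

        import numpy as np

        # Hypothetical 4-state chain: 0 = healthy VM, 1 = intruded VM,
        # 2 = request served (absorbing), 3 = request failed (absorbing).
        # The transition probabilities below are placeholders, not values from the paper.
        P = np.array([
            [0.00, 0.10, 0.85, 0.05],   # healthy  -> {healthy, intruded, success, failure}
            [0.20, 0.00, 0.40, 0.40],   # intruded -> {healthy, intruded, success, failure}
            [0.00, 0.00, 1.00, 0.00],   # success is absorbing
            [0.00, 0.00, 0.00, 1.00],   # failure is absorbing
        ])

        Q = P[:2, :2]                        # transitions among transient states
        R = P[:2, 2:]                        # transient -> absorbing transitions
        N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix
        B = N @ R                            # absorption probabilities

        print("P(success | start healthy)  =", B[0, 0])
        print("P(success | start intruded) =", B[1, 0])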

  • Computer Power Supply Transient Response Improvement by Power Consumption Prediction Procedure Using Performance Counters

    Shinichi KAWAGUCHI  Toshiaki YACHI  

     
    PAPER-Energy in Electronics Communications

    Vol: E98-B No:12  Page(s): 2382-2388

    As the use of information technology is rapidly expanding, the power consumption of IT equipment is becoming an important social issue. As such, the power supply of IT equipment must provide various power saving measures through advanced features. A digitally controlled power supply is attractive for satisfying this requirement due to its flexibility and advanced management functionality. However, a digitally controlled power supply has issues with its transient response performance because the conversion time of the analog-digital converter and the time required for digital processing in the digital controller adversely affect the dynamic characteristics. The present paper introduces a new approach that can improve the transient response performance of the digital point-of-load (POL) power supplies of computer processors. The resulting power systems use feed-forward transient control, in addition to the general voltage regulation feedback control loop, to improve their dynamic characteristics. On the feed-forward control path, the processor workload information is supplied to the power supply controller from the processor. The power supply controller uses the workload information to predict the power load change and generates an auxiliary control to improve the transient response performance. As the processor workload information, the processor-integrated performance counter values are sent to the power supply controller via a hardware interface. The processor power consumption prediction equation is modeled using the moving average model, which uses performance counter values of several past steps. The prediction equation parameters are defined by multiple regression analysis using the measured CPU power consumption data and experimentally obtained performance counter information. The analysis reveals that the optimum parameters change with time during transient periods. The modeled equation well explains the processor power load change. The measured CPU power consumption profile is confirmed to be accurately replicated by the prediction for a period of 200ns. Using the power load change prediction model, circuit simulations of the feed-forward transient control are conducted. It is validated that the proposed approach improves power supply transient response under some practical server workloads.
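
    A minimal sketch of the prediction idea under assumed data: the power load is modeled as a linear, moving-average-style combination of the performance-counter values from the last few sampling steps, with coefficients obtained by multiple (least-squares) regression. The counter set, lag, and data below are placeholders, not the paper's measurements.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical data: per-step performance-counter readings and measured CPU
        # power, where the power at step t is driven by the counters of step t-1
        # (a stand-in for the paper's measured server traces).
        steps, n_counters, lag = 500, 3, 4
        counters = rng.random((steps, n_counters))
        power = np.empty(steps)
        power[0] = 10.0
        power[1:] = (10.0 + counters[:-1] @ np.array([5.0, 3.0, 1.5])
                     + 0.1 * rng.standard_normal(steps - 1))

        # Regression matrix built from the counter values of the last `lag` steps.
        X = np.hstack([counters[lag - k - 1: steps - k - 1] for k in range(lag)])
        X = np.hstack([np.ones((X.shape[0], 1)), X])        # intercept term
        y = power[lag:]

        theta, *_ = np.linalg.lstsq(X, y, rcond=None)       # multiple regression fit
        y_hat = X @ theta                                    # one-step-ahead prediction
        print("RMS prediction error:", np.sqrt(np.mean((y - y_hat) ** 2)))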

  • An AM-PM Noise Mitigation Technique in Class-C VCO

    Kento KIMURA  Aravind THARAYIL NARAYANAN  Kenichi OKADA  Akira MATSUZAWA  

     
    PAPER-Electronic Circuits

    Vol: E98-C No:12  Page(s): 1161-1170

    This paper presents a 20-GHz Class-C VCO that uses a noise-sensitivity mitigation technique. A radio-frequency Class-C VCO suffers from AM-PM conversion caused by the non-linear capacitance of the cross-coupled pair. In this paper, the phase-noise degradation mechanism is discussed, and a technique for desensitizing the AM-PM noise is proposed. In the proposed technique, the AM-PM sensitivity is canceled by tuning the tail impedance, which consists of 4-bit resistor switches. A 65-nm CMOS prototype of the proposed VCO demonstrates an oscillation frequency range of 19.27 to 22.4 GHz and a phase noise of -105.7 dBc/Hz at 1-MHz offset with a power dissipation of 6.84 mW, which is equivalent to a figure-of-merit of -183.73 dBc/Hz.
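
    For reference, the quoted numbers are consistent with the standard oscillator figure-of-merit definition, stated here as general background rather than as the authors' exact expression:

        \mathrm{FoM} = L(\Delta f) - 20\log_{10}\!\left(\frac{f_0}{\Delta f}\right) + 10\log_{10}\!\left(\frac{P_\mathrm{DC}}{1\,\mathrm{mW}}\right)

    where L(Δf) is the phase noise at offset Δf, f_0 is the oscillation frequency, and P_DC is the power dissipation.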

  • Photonic Millimeter Wave Transmitter for a Real-Time Coherent Wireless Link Based on Injection Locking of Integrated Laser Diodes

    Shintaro HISATAKE  Guillermo CARPINTERO  Yasuyuki YOSHIMIZU  Yusuke MINAMIKATA  Kazuki OOGIMOTO  Yu YASUDA  Frédéric van DIJK  Tolga TEKIN  Tadao NAGATSUMA  

     
    PAPER

    Vol: E98-C No:12  Page(s): 1105-1111

    We propose the concept of an integrated coherent photonic wireless transmitter based on the simultaneous injection locking of two monolithically integrated distributed feedback (DFB) laser diodes (LDs) using an optical frequency comb (OFC). We characterize the basic operation of the transmitter and demonstrate that two injection-locked integrated DFB LDs are sufficiently stable to generate the carrier signal using a uni-traveling-carrier photodiode (UTC-PD) for real-time error-free (bit error rate: BER < 10^-11) coherent transmission with a data rate of 10 Gbit/s at a carrier frequency of 97 GHz. In the coherent wireless transmission, we compare the BER characteristics of the injection-locked transmitter with those of an actively phase-stabilized transmitter and show that the 8-dB power penalty of the injection-locked transmitter is due to RF spurious components, which can be reduced by integrating the OFC generator (OFCG) and LDs on the same chip. Our results suggest that integrating the OFCG, DFB LDs, modulators, semiconductor optical amplifiers, and UTC-PD on the same chip is a promising strategy for developing a practical real-time ultrafast coherent millimeter/terahertz-wave wireless transmitter.

  • Proposal of a New Disk-Repeater System for Contactless Power Transfer Open Access

    Yuichi SAWAHARA  Yuya IKUTA  Yangjun ZHANG  Toshio ISHIZAKI  Ikuo AWAI  

     
    PAPER

    Vol: E98-B No:12  Page(s): 2370-2375

    The authors propose the “Disk-repeater” as a new structure that serves as an alternative to the conventional resonator repeater. The Disk-repeater has a simple, non-resonant structure comprising just copper plates and a wire. First, coupling coefficients are measured as functions of disk diameter and wire length to characterize the basic performance of the Disk-repeater. Several pieces of experimental evidence show that the Disk-repeater and the resonator are coupled electrically rather than magnetically. It is also shown that the transmission distance is dramatically longer than that of the conventional resonator repeater. Further, a two-dimensional arrangement in which multiple disks are connected shows very high efficiency and uniform transmission characteristics regardless of the position of the receiving resonator. The Disk-repeater thus opens up the possibility of unprecedentedly versatile applications with a simple structure.

  • Multi-Feature Guided Brain Tumor Segmentation Based on Magnetic Resonance Images

    Ye AI  Feng MIAO  Qingmao HU  Weifeng LI  

     
    PAPER-Pattern Recognition

    Publicized: 2015/08/25  Vol: E98-D No:12  Page(s): 2250-2256

    In this paper, a novel method for high-grade brain tumor segmentation from multi-sequence magnetic resonance images is presented. First, a Gaussian mixture model (GMM) is introduced to derive an initial posterior probability by fitting the fluid-attenuated inversion recovery (FLAIR) histogram. Second, grayscale and region properties are extracted from the different sequences. Third, grayscale and region characteristics with different weights are proposed to adjust the posterior probability. Finally, a cost function based on the posterior probability and neighborhood information is formulated and optimized via graph cut. Experimental results on a public dataset with 20 high-grade brain tumor patient images show that the proposed method achieves a Dice coefficient of 78%, which is higher than that of the standard graph cut algorithm without the probability-adjusting step and of other cost-function-based methods.
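
    A minimal sketch of the first step under assumed inputs: fit a two-component Gaussian mixture to FLAIR intensities and take the posterior probability of the brighter component as the initial tumor probability. The component count, the "tumor is hyperintense" assumption, and the toy data are illustrative choices, not taken from the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def initial_tumor_posterior(flair_intensities, n_components=2):
            """Fit a GMM to FLAIR intensities and return the posterior probability
            of the highest-mean component for every voxel (hypothetical setup)."""
            x = np.asarray(flair_intensities, dtype=float).reshape(-1, 1)
            gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
            tumor_comp = int(np.argmax(gmm.means_.ravel()))   # assume tumor is hyperintense
            return gmm.predict_proba(x)[:, tumor_comp]

        # Toy example: mixture of "normal tissue" and "hyperintense" intensities.
        rng = np.random.default_rng(0)
        voxels = np.concatenate([rng.normal(100, 10, 5000), rng.normal(180, 15, 500)])
        posterior = initial_tumor_posterior(voxels)
        print("mean posterior of bright voxels:", posterior[-500:].mean())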

  • Code Generation Limiting Maximum and Minimum Hamming Distances for Non-Volatile Memories

    Tatsuro KOJO  Masashi TAWADA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER-High-Level Synthesis and System-Level Design

    Vol: E98-A No:12  Page(s): 2484-2493

    Data stored in non-volatile memories may be corrupted by crosstalk and radiation, but the data can be restored by using error-correcting codes. However, non-volatile memories consume a large amount of energy when writing. Reducing the maximum number of written bits even when error-correcting codes are used is one of the challenges in non-volatile memory design. In this paper, we first propose the Doughnut code, which is based on a state encoding that limits the maximum and minimum Hamming distances. After that, we propose a code expansion method that improves the maximum and minimum Hamming distances. When we apply our code expansion method to the Doughnut code, we obtain a code that reduces the maximum number of flipped bits and has error-correcting ability equal to that of the Hamming code. Experimental results show that the proposed code efficiently reduces the maximum number of written bits.
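
    A hedged sketch of the underlying constraint (the actual Doughnut code construction is not specified in this abstract): given a set of codewords, one can check the maximum and minimum pairwise Hamming distances, and the maximum bounds the worst-case number of flipped bits per rewrite. The toy code below is purely illustrative.

        from itertools import combinations

        def hamming(a: int, b: int) -> int:
            """Number of differing bits between two codewords."""
            return bin(a ^ b).count("1")

        def distance_range(codewords):
            """Return (min, max) pairwise Hamming distance of a code; the max
            bounds the worst-case number of flipped bits when rewriting one
            codeword into another."""
            dists = [hamming(a, b) for a, b in combinations(codewords, 2)]
            return min(dists), max(dists)

        # Toy 3-bit even-weight code used only for illustration (not the Doughnut code).
        code = [0b000, 0b011, 0b101, 0b110]
        print(distance_range(code))   # -> (2, 2): every rewrite flips exactly 2 bits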

  • Lines of Comments as a Noteworthy Metric for Analyzing Fault-Proneness in Methods

    Hirohisa AMAN  Sousuke AMASAKI  Takashi SASAKI  Minoru KAWAHARA  

     
    PAPER-Software Engineering

    Publicized: 2015/09/04  Vol: E98-D No:12  Page(s): 2218-2228

    This paper focuses on the power of comments to predict fault-prone programs. In general, comments along with executable statements enhance the understandability of programs. However, comments may also be used to mask a lack of readability in the program; for this reason, well-written comments are referred to as “deodorant to mask code smells” in the field of code refactoring. This paper conducts an empirical analysis to examine whether Lines of Comments (LCM) written inside a method's body is a noteworthy metric for analyzing fault-proneness in Java methods. The empirical results show the following two findings: (1) more-commented methods (methods having more comments than the amount estimated from the size and complexity of the method) are about 1.6-2.8 times more likely to be faulty than the others, and (2) LCM can be a useful factor in fault-prone method prediction models along with the method size and the method complexity.
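
    One illustrative reading of “more comments than the amount estimated by size and complexity,” with made-up data and a plain linear fit; the paper's actual estimation model is not given in this abstract.

        import numpy as np

        def more_commented_flags(lcm, loc, complexity):
            """Flag methods whose Lines of Comments (LCM) exceed the amount
            estimated from method size and complexity via a linear fit
            (an illustrative reading of the abstract, not the paper's model)."""
            X = np.column_stack([np.ones(len(lcm)), loc, complexity])
            coef, *_ = np.linalg.lstsq(X, lcm, rcond=None)
            expected = X @ coef
            return lcm > expected          # True = "more-commented" method

        # Toy data: LCM, lines of code, and cyclomatic complexity for five methods.
        lcm = np.array([2.0, 10.0, 1.0, 25.0, 4.0])
        loc = np.array([20, 80, 15, 90, 40])
        cc = np.array([2, 6, 1, 7, 3])
        print(more_commented_flags(lcm, loc, cc))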

  • The Error Exponent of Zero-Rate Multiterminal Hypothesis Testing for Sources with Common Information

    Makoto UEDA  Shigeaki KUZUOKA  

     
    PAPER-Shannon Theory

    Vol: E98-A No:12  Page(s): 2384-2392

    The multiterminal hypothesis testing problem with a zero-rate constraint is considered. For this problem, an upper bound on the optimal error exponent was given by Shalaby and Papamarcou, provided that the positivity condition holds. Our contribution is to prove that Shalaby and Papamarcou's upper bound is valid under a weaker condition: (i) the two remote observations have a common random variable in the sense of Gács and Körner, and (ii) when the value of the common random variable is fixed, the conditional distribution of the remaining random variables satisfies the positivity condition. Moreover, a generalization of the main result is also given.

  • An Anti-Collision Algorithm with Short Reply for RFID Tag Identification

    Qing YANG  Jiancheng LI  Hongyi WANG  

     
    PAPER-Network

    Vol: E98-B No:12  Page(s): 2446-2453

    In many radio frequency identification (RFID) applications, the reader identifies the tags in its scope repeatedly. For these applications, many algorithms have been proposed, such as the adaptive binary splitting algorithm (ABS), the single resolution blocking ABS (SRB), the pair resolution blocking ABS (PRB), and the dynamic blocking ABS (DBA). All of these algorithms require the staying tags to reply with their IDs in order to be recognized by the reader. However, the IDs of the staying tags were already stored in the reader during the previous identification round, so the reader can simply verify the existence of these tags when identifying them. We therefore propose an anti-collision algorithm with short reply for RFID tag identification (ACSR). In ACSR, each staying tag emits a short reply to indicate its continued existence, which significantly reduces the amount of data transmitted by staying tags. The identification rate of ACSR is analyzed in this paper. Finally, simulation and analysis results show that ACSR greatly outperforms ABS, SRB, and DBA in terms of identification rate and the average amount of data transmitted by a tag.

  • Soft-Output Decoding Approach of 2D Modulation Codes in Bit-Patterned Media Recording Systems

    Chanon WARISARN  Piya KOVINTAVEWAT  

     
    PAPER-Storage Technology

    Vol: E98-C No:12  Page(s): 1187-1192

    Two-dimensional (2D) interference is one of the major impairments in bit-patterned media recording (BPMR) systems because of the small bit and track pitches, especially at high recording densities. To alleviate this problem, we previously introduced a rate-4/5 constructive inter-track interference (CITI) coding scheme that prevents destructive data patterns from being written onto the magnetic medium in an uncoded BPMR system, i.e., one without error-correction codes. Because the CITI code produces only hard decisions, it cannot be employed in a coded BPMR system that uses a low-density parity-check (LDPC) code. To utilize it in an iterative decoding scheme, we propose a soft CITI coding scheme that implements log-likelihood ratio algebra on the Boolean logic mappings, so that the soft CITI code, a modified 2D soft-output Viterbi algorithm (SOVA) detector, and an LDPC decoder can jointly perform iterative decoding. Simulation results show that the proposed scheme provides a significant performance improvement, in particular when the areal density (AD) is high and/or the position jitter is large. Specifically, at a bit-error rate of 10^-4 with no position jitter, the proposed system provides approximately 1.8 and 3.5 dB gain over the conventional coded system without the CITI code at ADs of 2.5 and 3.0 terabits per square inch (Tb/in²), respectively.
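
    As background only (the abstract does not spell out the exact mappings), soft versions of Boolean operations can be built from log-likelihood ratios; for instance, the LLR of the XOR of two independent bits follows the well-known box-plus rule sketched below, and other gates can be handled through the bit probabilities. The convention and the example values are illustrative.

        import numpy as np

        # LLR convention used here (common in coding): L = log(P(bit=0) / P(bit=1)).

        def prob_one(llr: float) -> float:
            """P(bit = 1) under the above convention."""
            return 1.0 / (1.0 + np.exp(llr))

        def llr_xor(la: float, lb: float) -> float:
            """Box-plus rule: LLR of (a XOR b) for independent bits a, b."""
            return 2.0 * np.arctanh(np.tanh(la / 2.0) * np.tanh(lb / 2.0))

        def llr_and(la: float, lb: float) -> float:
            """LLR of (a AND b) for independent bits, via the bit probabilities."""
            p = prob_one(la) * prob_one(lb)          # P(a AND b = 1)
            return np.log((1.0 - p) / p)

        # Example: a strongly-0 bit combined with a weakly-1 bit.
        print(llr_xor(4.0, -1.0), llr_and(4.0, -1.0))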

  • Error Correction Using Long Context Match for Smartphone Speech Recognition

    Yuan LIANG  Koji IWANO  Koichi SHINODA  

     
    PAPER-Speech and Hearing

    Publicized: 2015/07/31  Vol: E98-D No:11  Page(s): 1932-1942

    Most error correction interfaces for speech recognition applications on smartphones require the user to first mark an error region and then choose the correct word from a candidate list. We propose a simple multimodal interface to make this process more efficient. We develop Long Context Match (LCM) to obtain candidates that complement the conventional word confusion network (WCN). Assuming that not only the preceding words but also the succeeding words of the error region are validated by the user, we use these contexts to search higher-order n-gram corpora for matching word sequences. For this purpose, we also utilize Web text data. Furthermore, we propose a combination of LCM and WCN (“LCM + WCN”) to provide users with candidate lists that are more relevant than those yielded by WCN alone. We compare our interface with the WCN-based interface on the Corpus of Spontaneous Japanese (CSJ). Our proposed “LCM + WCN” method improved the 1-best accuracy by 23% and the mean reciprocal rank (MRR) by 28%, and our interface reduced the user's load by 12%.
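
    An illustrative sketch, with a hypothetical in-memory n-gram table, of the core lookup: given user-validated left and right context around an error region, retrieve candidate word sequences whose surrounding words match both contexts. The corpus, context lengths, and the absence of pruning/scoring are all simplifications, not the paper's actual setup.

        from collections import defaultdict

        def build_ngram_index(corpus_sentences, left_len=2, right_len=2, gap=1):
            """Map (left context, right context) -> candidate fillers of length `gap`.
            A toy stand-in for searching higher-order n-gram corpora."""
            index = defaultdict(set)
            n = left_len + gap + right_len
            for sent in corpus_sentences:
                words = sent.split()
                for i in range(len(words) - n + 1):
                    left = tuple(words[i:i + left_len])
                    mid = tuple(words[i + left_len:i + left_len + gap])
                    right = tuple(words[i + left_len + gap:i + n])
                    index[(left, right)].add(mid)
            return index

        corpus = ["please send the report to me today",
                  "please send the file to me now"]
        index = build_ngram_index(corpus, gap=2)

        # User validated "please send" (left) and "to me" (right); the error region is 2 words.
        print(index[(("please", "send"), ("to", "me"))])
        # -> {('the', 'report'), ('the', 'file')}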

  • Beamwidth Scaling in Wireless Networks with Outage Constraints

    Trung-Anh DO  Won-Yong SHIN  

     
    PAPER-Fundamental Theories for Communications

    Vol: E98-B No:11  Page(s): 2202-2211

    This paper analyzes the impact of directional antennas on improving the transmission capacity, defined as the maximum allowable spatial node density of successful transmissions multiplied by their data rate under a given outage constraint, in wireless networks. We consider the case where the gain Gm of the mainlobe can scale at an arbitrarily large rate. Under this beamwidth scaling model, the transmission capacity is analyzed for all path-loss attenuation regimes in the following two network configurations. In dense networks, in which the spatial node density increases with the antenna gain Gm, the transmission capacity scales as Gm^(4/α), where α denotes the path-loss exponent. On the other hand, in extended networks of fixed node density, the transmission capacity scales logarithmically in Gm. For comparison, we also consider an ideal antenna model with no sidelobe beam. In addition, computer simulations are performed and show trends consistent with our analytical results. Our analysis sheds light on a new understanding of the fundamental limits of outage-constrained ad hoc networks operating in the directional mode.
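
    For reference, the transmission capacity described above is commonly formalized as follows (a standard definition from the ad hoc network literature, stated here as background for the Gm^(4/α) and logarithmic scaling results):

        c(\varepsilon) = b \, \lambda_{\varepsilon} \, (1 - \varepsilon)

    where λ_ε is the maximum spatial density of concurrent transmissions such that the outage probability does not exceed ε, and b is the per-link data rate.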

  • Adaptive Block-Propagative Background Subtraction Method for UHDTV Foreground Detection

    Axel BEAUGENDRE  Satoshi GOTO  

     
    PAPER-Image

    Vol: E98-A No:11  Page(s): 2307-2314

    This paper presents an Adaptive Block-Propagative Background Subtraction (ABPBGS) method designed for Ultra High Definition Television (UHDTV) foreground detection. The main idea is to detect block after block along the objects in order to skip all areas of the image in which there is no moving object. This is particularly interesting for UHDTV, where the objects of interest may represent less than 0.1% of the total area. From a seed block determined in the previous iteration, the detection spreads along an object as long as a part of that object is detected. A block history map guarantees that each block is processed only once. Moreover, only small blocks are loaded and processed, which saves computation time and memory. The processing of each block is independent enough to be easily parallelized. Compared with 9 state-of-the-art works, the ABPBGS achieved the best results, with an average global quality score of 0.57 (1 being the maximum) on a dataset of 4K and 8K UHDTV sequences developed for this work. None of the state-of-the-art methods could process 4K videos in reasonable time, while the ABPBGS achieved an average speed of 5.18 fps. In comparison, 5 of the 9 state-of-the-art methods ran more slowly on a 270p down-scaled version of the same videos. The experiments also showed that, when processing an 8K UHDTV video, the ABPBGS reduces the required memory by a factor of about 24, to a total of 450 MB.
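
    A simplified sketch of the block-propagation idea under assumed inputs (a per-block foreground test and seed blocks from the previous frame); the actual ABPBGS background model, block size, and update rules are not reproduced here.

        from collections import deque

        def propagate_blocks(seeds, is_foreground, grid_w, grid_h):
            """Spread detection from seed blocks to 4-connected neighbours as long as
            the per-block test fires; each block is visited at most once (history map)."""
            history = set()                      # block history map: processed blocks
            detected = set()
            queue = deque(seeds)
            while queue:
                bx, by = queue.popleft()
                if (bx, by) in history or not (0 <= bx < grid_w and 0 <= by < grid_h):
                    continue
                history.add((bx, by))
                if is_foreground(bx, by):        # keep spreading only along the object
                    detected.add((bx, by))
                    queue.extend([(bx + 1, by), (bx - 1, by), (bx, by + 1), (bx, by - 1)])
            return detected

        # Toy example: a 3x2 "object" inside a 10x10 block grid, one seed block on it.
        obj = {(4, 4), (5, 4), (6, 4), (4, 5), (5, 5), (6, 5)}
        print(propagate_blocks([(5, 4)], lambda x, y: (x, y) in obj, 10, 10))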

  • Ensemble and Multiple Kernel Regressors: Which Is Better?

    Akira TANAKA  Hirofumi TAKEBAYASHI  Ichigaku TAKIGAWA  Hideyuki IMAI  Mineichi KUDO  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E98-A No:11  Page(s): 2315-2324

    For the last few decades, learning with multiple kernels, represented by the ensemble kernel regressor and the multiple kernel regressor, has attracted much attention in the field of kernel-based machine learning. Although their efficacy has been investigated numerically in many works, their theoretical grounds have not been investigated sufficiently, because a theoretical framework for evaluating them has been lacking. In this paper, we introduce a unified framework for evaluating kernel regressors with multiple kernels. On the basis of this framework, we analyze the generalization errors of the ensemble kernel regressor and the multiple kernel regressor, and give a sufficient condition for the ensemble kernel regressor to outperform the multiple kernel regressor in terms of the generalization error in the noise-free case. We also show by examples that, when the sufficient condition does not hold, either kernel regressor can be better than the other, which supports the importance of the sufficient condition.
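
    As generic background (these are common textbook forms, not necessarily the exact estimators analyzed in the paper): with candidate kernels k_1, ..., k_M, the ensemble kernel regressor averages the single-kernel estimates, while the multiple kernel regressor learns one estimate with a combined kernel,

        \hat{f}_{\mathrm{ens}}(x) = \frac{1}{M}\sum_{m=1}^{M}\hat{f}_{k_m}(x), \qquad \hat{f}_{\mathrm{mul}}(x) = \hat{f}_{k}(x) \ \text{with}\ k = \sum_{m=1}^{M} k_m,

    where \hat{f}_{k} denotes the kernel regressor obtained with kernel k.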

  • An Encryption-then-Compression System for JPEG/Motion JPEG Standard

    Kenta KURIHARA  Masanori KIKUCHI  Shoko IMAIZUMI  Sayaka SHIOTA  Hitoshi KIYA  

     
    PAPER

    Vol: E98-A No:11  Page(s): 2238-2245

    In many multimedia applications, image encryption has to be conducted prior to image compression. This paper proposes a JPEG-friendly perceptual encryption method that can be applied prior to JPEG and Motion JPEG compression. The proposed encryption scheme provides approximately the same compression performance as JPEG compression without any encryption, for both grayscale and color images. It is also shown that the proposed scheme consists of four block-based encryption steps and provides a reasonably high level of security. Most conventional perceptual encryption schemes have not been designed for international compression standards, whereas this paper focuses on the JPEG and Motion JPEG standards, which are among the most widely used image compression standards. In addition, this paper considers an efficient key management scheme that makes encryption with multiple keys easy to manage.
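
    The four encryption steps are not detailed in this abstract; the sketch below shows generic block-based operations of the kind used in perceptual encryption (keyed block permutation, rotation/flip, and intensity inversion on 8x8 blocks), purely as an illustration and not as the authors' exact scheme.

        import numpy as np

        def block_encrypt(img, block=8, seed=0):
            """Generic block-based perceptual encryption sketch (illustrative only):
            keyed block permutation, per-block rotation/flip, and per-block
            negative-positive (intensity inversion) transform on 8x8 blocks."""
            rng = np.random.default_rng(seed)
            h, w = img.shape[0] // block, img.shape[1] // block
            blocks = [img[y*block:(y+1)*block, x*block:(x+1)*block].copy()
                      for y in range(h) for x in range(w)]
            order = rng.permutation(len(blocks))              # block scrambling
            out = np.empty_like(img)
            for i, src in enumerate(order):
                b = blocks[src]
                b = np.rot90(b, k=rng.integers(4))            # rotation
                if rng.integers(2):
                    b = np.fliplr(b)                          # flip
                if rng.integers(2):
                    b = 255 - b                               # negative-positive
                y, x = divmod(i, w)
                out[y*block:(y+1)*block, x*block:(x+1)*block] = b
            return out

        # Toy 64x64 grayscale image; real use would apply this before JPEG encoding.
        img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
        enc = block_encrypt(img, seed=42)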

  • Highly Compressed Lists of Integers with Dense Padding Modes

    Kun JIANG  Xingshen SONG  Yuexiang YANG  

     
    LETTER-Data Engineering, Web Information Systems

    Publicized: 2015/08/19  Vol: E98-D No:11  Page(s): 1986-1989

    Index compression is partially responsible for the current performance achievements of Internet search engines. Among the latest compression techniques, Simple9 packs as many integers as possible into a single 32-bit machine word using 9 different padding modes. However, the number of wasted bits in Simple9 remains large. In previous works, researchers have focused on reducing the unused trailing bits of the padding modes and have proposed various additional modes that make full use of the possible values of the status bits. Instead, we focus on the wasted bits in the integer list itself, padding extra zeros to complete a dense mode when the number of integers is not enough to fill a mode completely. More precisely, we first propose a novel index compression method called SimpleD, with dense padding modes, to achieve more compact storage than Simple9. We then design a mechanism for extracting the inserted extra zero integers during the decoding phase. Experiments on the TREC WT2G and GOV2 datasets show that our encoder outperforms Simple9 while still retaining a very fast decompression speed.
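
    For context, the sketch below packs a short list with the classic Simple9 layout (a 4-bit selector plus 28 data bits per 32-bit word, with nine modes) and pads with zeros when the remaining integers cannot fill a mode completely, which is the kind of dense padding the letter builds on; the actual SimpleD mode table and zero-extraction method are not reproduced here.

        # Simple9 modes: (selector, integers per word, bits per integer); 4 status bits + 28 data bits.
        MODES = [(0, 28, 1), (1, 14, 2), (2, 9, 3), (3, 7, 4), (4, 5, 5),
                 (5, 4, 7), (6, 3, 9), (7, 2, 14), (8, 1, 28)]

        def pack_word(values, start):
            """Pack integers from values[start:] into one 32-bit word, padding with
            zeros if fewer integers remain than the chosen mode holds (the
            dense-padding idea; a plain Simple9 encoder handles the tail differently)."""
            for sel, count, width in MODES:
                chunk = values[start:start + count]
                if all(v < (1 << width) for v in chunk):
                    chunk = list(chunk) + [0] * (count - len(chunk))   # zero padding
                    word = sel << 28
                    for i, v in enumerate(chunk):
                        word |= v << (i * width)
                    return word, start + count
            raise ValueError("integer does not fit in 28 bits")

        values = [3, 1, 4, 1, 5, 9, 2, 6]          # toy d-gaps
        words, pos = [], 0
        while pos < len(values):
            word, pos = pack_word(values, pos)
            words.append(word)
        print([hex(w) for w in words])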

  • Achievement Accurate CSI for AF Relay MIMO/OFDM Based on Complex HTRCI Pilot Signal with Enhanced MMSE Equalization

    Yuta IDA  Chang-Jun AHN  Takahiro MATSUMOTO  Shinya MATSUFUJI  

     
    PAPER

    Vol: E98-A No:11  Page(s): 2254-2262

    Amplify-and-forward (AF) relay multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems can achieve high-data-rate and high-quality communications. On the other hand, the destination node has to estimate all the channels of the source-relay and relay-destination links. In MIMO/OFDM systems, high time resolution carrier interferometry (HTRCI) has been proposed to achieve accurate channel estimation (CE) with a small number of pilot signals. However, in AF relay MIMO/OFDM systems it suffers from many interference components, so accurate CE is not obtained and the system performance is degraded. Therefore, in this paper, we propose a complex HTRCI (C-HTRCI) pilot signal and an enhanced minimum mean square error (E-MMSE) equalization to achieve accurate CE and to improve the system performance of AF relay MIMO/OFDM systems.
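
    For background, with a per-subcarrier model y = Hs + n and noise variance σ², a linear MMSE equalizer takes the standard form below; the enhanced E-MMSE weighting proposed in the paper is not detailed in this abstract.

        \hat{\mathbf{s}} = \mathbf{H}^{H} \left( \mathbf{H}\mathbf{H}^{H} + \sigma^{2}\mathbf{I} \right)^{-1} \mathbf{y}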

  • A Low-Complexity PTS Scheme with the Hybrid Subblock Partition Method for PAPR Reduction in OFDM Systems

    Sheng-Ju KU  Yuan OUYANG  Chiachi HUANG  

     
    PAPER-Wireless Communication Technologies

    Vol: E98-B No:11  Page(s): 2341-2347

    The partial transmit sequences (PTS) technique is effective in reducing the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals. However, the conventional PTS (CPTS) scheme has high computational complexity because it needs several inverse fast Fourier transform (IFFT) units and an optimization process to find the candidate signal with the lowest PAPR. In this paper, we propose a new low-complexity PTS scheme for OFDM systems, in which a hybrid subblock partition method (SPM) is used to reduce the complexity resulting from the IFFT computations and the optimization process. Also, the PAPR reduction performance of the proposed PTS scheme is further enhanced by multiplying a selected subblock by a predefined phase rotation vector to form a new subblock. The time-domain signal of the new subblock can be obtained simply by applying a circular left-shift operation to the IFFT output of the selected subblock. Computer simulations show that the proposed PTS scheme achieves PAPR reduction performance close to that of the CPTS scheme with the pseudo-random SPM, but with much lower computational complexity.
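
    The IFFT saving relies on the standard IDFT property that a circular left shift by m samples in the time domain corresponds to multiplying the subcarriers by the linear phase ramp exp(j·2π·k·m/N); a small numerical check of this property (not the full PTS scheme, and with toy QPSK data) is sketched below.

        import numpy as np

        N, m = 64, 5                                   # subcarriers, shift amount
        rng = np.random.default_rng(0)
        X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # toy QPSK subblock

        x = np.fft.ifft(X)                             # IFFT output of the selected subblock

        # Predefined phase rotation vector: linear phase ramp exp(j*2*pi*k*m/N).
        phase = np.exp(2j * np.pi * np.arange(N) * m / N)
        x_rotated = np.fft.ifft(X * phase)             # what a fresh IFFT would compute

        # Same signal obtained without an extra IFFT: circular left shift by m samples.
        x_shifted = np.roll(x, -m)

        print(np.allclose(x_rotated, x_shifted))       # True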
