The search functionality is under construction.

Keyword Search Result

[Keyword] LD(1872hit)

301-320hit(1872hit)

  • Dry Etching Technologies of Optical Device and III-V Compound Semiconductors Open Access

    Ryuichiro KAMIMURA  Kanji FURUTA  

     
    INVITED PAPER

      Vol:
    E100-C No:2
      Page(s):
    150-155

    Dry etching is one of the elemental technologies for the fabrication of optical devices. In order to obtain the desired shape using the dry etching process, it is necessary to understand how the materials being used react to plasma. In particular, III-V compound semiconductors have a multi-layered structure comprising a plurality of elements, and thus it is important to first have a full understanding of the basic trends of plasma dry etching, the plasma types, and the characteristics of etching plasma sources. In this paper, III-V compound semiconductor etching for use in light sources such as LDs and LEDs will be described. Glass, LN, and LT used in the formation of waveguides and MLAs will be introduced as well. Finally, the future prospects of dry etching will be described briefly.

  • Utilizing Shape-Based Feature and Discriminative Learning for Building Detection

    Shangqi ZHANG  Haihong SHEN  Chunlei HUO  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2016/11/18
      Vol:
    E100-D No:2
      Page(s):
    392-395

    Building detection from high-resolution remote sensing images is challenging due to the high intraclass variability and the difficulty of describing buildings. To address these difficulties, a novel approach is proposed based on the combination of shape-specific feature extraction and discriminative feature classification. The shape-specific feature can capture the complex shapes and structures of buildings. Discriminative feature classification is effective in reflecting similarities among buildings and differences between buildings and backgrounds. Experiments demonstrate the effectiveness of the proposed approach.

  • Key Recovery Attacks on Multivariate Public Key Cryptosystems Derived from Quadratic Forms over an Extension Field

    Yasufumi HASHIMOTO  

     
    PAPER

      Vol:
    E100-A No:1
      Page(s):
    18-25

    One of the major ideas for designing a multivariate public key cryptosystem (MPKC) is to generate its quadratic forms by a polynomial map over an extension field. In fact, Matsumoto-Imai's scheme (1988), HFE (Patarin, 1996), MFE (Wang et al., 2006) and multi-HFE (Chen et al., 2008) are constructed in this way, and Sflash (Akkar et al., 2003), Quartz (Patarin et al., 2001) and Gui (Petzoldt et al., 2015) are variants of these schemes. An advantage of such extension-field-type MPKCs is that they reduce the numbers of variables and equations to be solved in the decryption process. In the present paper, we study the security of MPKCs whose quadratic forms are derived from a “quadratic” map over an extension field and propose a new attack on such MPKCs. Our attack recovers partial information of the secret affine maps in polynomial time when the field is of odd characteristic. Once such partial information is recovered, the attacker can find the plain-text for a given cipher-text by solving a system of quadratic equations over the extension field whose numbers of variables and equations are the same as those of the system of quadratic equations used in the decryption process.

  • Name Resolution Based on Set of Attribute-Value Pairs of Real-World Information

    Ryoichi KAWAHARA  Hiroshi SAITO  

     
    PAPER-Network

      Publicized:
    2016/08/04
      Vol:
    E100-B No:1
      Page(s):
    110-121

    It is expected that a large number of different objects, such as sensor devices and consumer electronics, will be connected to future networks. For such networks, we propose a name resolution method for directly specifying a condition on a set of attribute-value pairs of real-world information without needing prior knowledge of the uniquely assigned name of a target object, e.g., a URL. For name resolution, we need an algorithm to find the target object(s) satisfying a query condition on multiple attributes. To address the problem that multi-attribute searching algorithms may not work well when the number of attributes (i.e., dimensions) d increases, which is related to the curse of dimensionality, we also propose a probabilistic searching algorithm that reduces searching time at the expense of a small probability of false positives. With this algorithm, we choose permutation pattern(s) of the d attributes and use only the first K (K ≪ d) of them to search for objects, so that they contain the relevant attributes with high probability. We argue that our algorithm can identify the target objects at a false positive rate below 10^-6 and with only a few percent of the tree-searching cost of a naive d-dimensional search under a certain condition.

  • Image Watermarking Method Satisfying IHC by Using PEG LDPC Code

    Nobuhiro HIRATA  Takayuki NOZAKI  Masaki KAWAMURA  

     
    PAPER

      Publicized:
    2016/10/07
      Vol:
    E100-D No:1
      Page(s):
    13-23

    We propose a digital image watermarking method satisfying information hiding criteria (IHC) for robustness against JPEG compression, cropping, scaling, and rotation. When a stego-image is cropped, the marking positions of watermarks are unclear. To detect the position in a cropped stego-image, a marker or synchronization code is embedded with the watermarks in a lattice pattern. Attacks by JPEG compression, scaling, and rotation cause errors in extracted watermarks. Against such errors, the same watermarks are repeatedly embedded in several areas. The number of errors in the extracted watermarks can be reduced by using a weighted majority voting (WMV) algorithm. To correct residual errors in the output of the WMV algorithm, we use a high-performance error-correcting code: a low-density parity-check (LDPC) code constructed by progressive edge-growth (PEG). In computer simulations under the IHC ver. 4 criteria, the proposed method achieved a bit error rate of 0, the average PSNR was 41.136 dB, and the computational time for synchronization recovery was less than 10 seconds. The proposed method can thus provide high image quality and fast synchronization recovery.
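The weighted majority voting step can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual extraction pipeline: the copies, weights, and bit values below are made up, and the weights stand in for whatever per-area reliability measure the extractor provides.

```python
def weighted_majority_vote(copies, weights):
    """Decide each watermark bit from several extracted copies.

    copies  : list of bit sequences (one per embedding area)
    weights : per-copy reliability weights (illustrative)
    Returns the voted bit sequence.
    """
    n_bits = len(copies[0])
    voted = []
    for i in range(n_bits):
        # Map bits {0,1} to {-1,+1} and accumulate weighted votes.
        score = sum(w * (1 if c[i] else -1)
                    for c, w in zip(copies, weights))
        voted.append(1 if score > 0 else 0)
    return voted

# Three noisy copies of the same 4-bit watermark.
copies = [[1, 0, 1, 1],
          [1, 0, 0, 1],
          [1, 1, 1, 1]]
weights = [0.9, 0.5, 0.7]
print(weighted_majority_vote(copies, weights))  # -> [1, 0, 1, 1]
```

The voted output is then handed to the PEG-LDPC decoder, which corrects the residual bit errors the vote cannot resolve.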

  • Initial Value Problem Formulation TDBEM with 4-D Domain Decomposition Method and Application to Wake Fields Analysis

    Hideki KAWAGUCHI  Thomas WEILAND  

     
    PAPER

      Vol:
    E100-C No:1
      Page(s):
    37-44

    The Time Domain Boundary Element Method (TDBEM) has its advantages in the analysis of transient electromagnetic fields (wake fields) induced by a charged particle beam with a curved trajectory in a particle accelerator. On the other hand, the TDBEM has the disadvantages of huge memory requirements and computation time compared with those of the Finite Difference Time Domain (FDTD) method or the Finite Integration Technique (FIT). This paper presents a comparison of the FDTD method and a 4-D domain decomposition method for the TDBEM based on an initial value problem formulation for an electron beam with a curved trajectory, and its application to a full-model simulation of the bunch compressor section of a high-energy particle accelerator.

  • Digital Multiple Notch Filter Design with Nelder-Mead Simplex Method

    Qiusheng WANG  Xiaolan GU  Yingyi LIU  Haiwen YUAN  

     
    PAPER-Digital Signal Processing

      Vol:
    E100-A No:1
      Page(s):
    259-265

    Multiple notch filters are used to suppress narrow-band or sinusoidal interferences in digital signals. In this paper, we propose a novel optimization-based design technique for an infinite impulse response (IIR) multiple notch filter, based on the Nelder-Mead simplex method. Firstly, the system function of the desired notch filter is constructed to form the objective function of the optimization technique. Secondly, the design parameters of the desired notch filter are optimized by the Nelder-Mead simplex method. A weight function is also introduced to improve the amplitude response of the notch filter. Thirdly, the convergence and amplitude response of the proposed technique are compared with those of other Nelder-Mead based design methods and the cascade-based design method. Finally, the practicability of the proposed notch filter design technique is demonstrated by some practical applications.
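The general shape of such a design loop can be sketched with a single second-order notch section: parameterize the filter, define a least-squares error against a desired amplitude response on a frequency grid, and hand the objective to Nelder-Mead. The sampling rate, notch frequency, desired response, and single-parameter setup below are illustrative assumptions, not the paper's multi-notch, weighted formulation.

```python
import numpy as np
from scipy.optimize import minimize

fs = 1000.0                  # sampling rate in Hz (assumed)
f_notch = 50.0               # frequency to suppress (assumed)
w0 = 2 * np.pi * f_notch / fs
grid = np.linspace(0.01, np.pi - 0.01, 512)   # evaluation grid (rad/sample)

def notch_response(r, w):
    """|H(e^jw)| of a second-order IIR notch with pole radius r."""
    z = np.exp(1j * w)
    num = 1 - 2 * np.cos(w0) * z**-1 + z**-2
    den = 1 - 2 * r * np.cos(w0) * z**-1 + (r * r) * z**-2
    return np.abs(num / den)

def objective(params):
    r = params[0]
    if not (0.0 < r < 1.0):          # keep the pole inside the unit circle
        return 1e9
    h = notch_response(r, grid)
    # Desired response: 0 near the notch, 1 elsewhere (illustrative).
    desired = np.where(np.abs(grid - w0) < 0.05, 0.0, 1.0)
    return np.sum((h - desired) ** 2)

res = minimize(objective, x0=[0.5], method='Nelder-Mead')
r_opt = res.x[0]
print(0.0 < r_opt < 1.0)  # -> True
```

The paper's technique optimizes all notch parameters jointly and adds a weight function to the error; this sketch only shows where the simplex method plugs into the design.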

  • User Collaborated Reception of Spatially Multiplexed Signals: An Experimental Study in Group Mobility

    Ilmiawan SHUBHI  Yuji HAYASHI  Hidekazu MURATA  

     
    LETTER

      Vol:
    E100-A No:1
      Page(s):
    227-231

    In multi-user multiple-input multiple-output systems, spatial precoding is typically employed as an interference cancellation technique. This technique, however, requires accurate channel state information at the transmitter and limits the mobility of the mobile stations (MSs). Instead of spatial precoding, this letter implements collaborative interference cancellation (CIC) for interference suppression. In CIC, neighboring MSs share their received signals without decoding, which equivalently increases the number of receive antennas. The performance is evaluated through a field experiment using a vehicle that is equipped with seven MSs and moves around an urban area.

  • Performance Analysis Based on Density Evolution on Fault Erasure Belief Propagation Decoder

    Hiroki MORI  Tadashi WADAYAMA  

     
    PAPER-Coding Theory and Techniques

      Vol:
    E99-A No:12
      Page(s):
    2155-2161

    In this paper, we present an analysis of fault erasure BP decoders based on density evolution. In a fault BP decoder, the messages exchanged in the BP process are stochastically corrupted due to unreliable logic gates and flip-flops; i.e., we assume circuit components with transient faults. We derive a set of density evolution equations for the fault erasure BP processes. Our density evolution analysis reveals the asymptotic behavior of the estimation error probability of the fault erasure BP decoders. In contrast to the fault-free case, it is observed that the error probability of the fault erasure BP decoder converges to a positive value, and that there exists a discontinuity in the error curve corresponding to the fault BP threshold. It is also shown that a message encoding technique provides higher fault BP thresholds than those of the original decoders, at the cost of increased circuit size.
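As a point of reference, the fault-free density evolution recursion for a regular (dv, dc) LDPC ensemble on the binary erasure channel is x_{l+1} = ε(1-(1-x_l)^{dc-1})^{dv-1}; the paper's fault model adds gate-error terms to this recursion, which are omitted in the sketch below.

```python
def de_bec(eps, dv=3, dc=6, iters=200):
    """Fault-free density evolution for a regular (dv, dc) LDPC
    ensemble on the binary erasure channel with erasure rate eps.
    Returns the variable-to-check message erasure probability
    after `iters` iterations."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# Below the (3,6) BP threshold (about 0.4294) the erasure rate
# vanishes; above it, the recursion converges to a positive fixed
# point -- the same kind of threshold behavior the paper studies
# in the faulty setting.
print(de_bec(0.40) < 1e-6)   # -> True
print(de_bec(0.45) > 0.1)    # -> True
```

In the faulty case analyzed in the paper, the fixed point is positive even below threshold, which is exactly the qualitative difference the abstract describes.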

  • Global Hyperbolic Hopfield Neural Networks

    Masaki KOBAYASHI  

     
    PAPER-Nonlinear Problems

      Vol:
    E99-A No:12
      Page(s):
    2511-2516

    In recent years, applications of neural networks with Clifford algebra have become widespread. Hyperbolic numbers are a Clifford algebra useful for dealing with hyperbolic geometry. Extending the Hopfield neural network to hyperbolic versions is difficult, though several models have been proposed. Multistate or continuous hyperbolic Hopfield neural networks are promising models. However, their connection weights and the domain of the activation function are limited to the right quadrant of the hyperbolic plane, and the learning algorithms are restricted. In this work, the connection weights and activation function are extended to the entire hyperbolic plane. In addition, the energy is defined and it is proven that the energy does not increase.
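For readers unfamiliar with hyperbolic (split-complex) numbers, the arithmetic underlying such networks is simple: the unit u squares to +1 rather than -1, so (a+bu)(c+du) = (ac+bd) + (ad+bc)u. A minimal sketch of this arithmetic (not code from the paper):

```python
class Hyperbolic:
    """Hyperbolic (split-complex) number a + b*u with u*u = +1."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, o):
        return Hyperbolic(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # (a + bu)(c + du) = (ac + bd) + (ad + bc)u, since u*u = +1.
        return Hyperbolic(self.a * o.a + self.b * o.b,
                          self.a * o.b + self.b * o.a)

    def __repr__(self):
        return f"{self.a}{self.b:+}u"

x = Hyperbolic(2, 1)
y = Hyperbolic(3, -2)
print(x * y)   # (2*3 + 1*(-2)) + (2*(-2) + 1*3)u  ->  4-1u
```

A hyperbolic Hopfield network replaces real-valued states and weights with such numbers; the paper's contribution is lifting the restriction of weights and activations to the right quadrant of this plane.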

  • Performance Optimization of Light-Field Applications on GPU

    Yuttakon YUTTAKONKIT  Shinya TAKAMAEDA-YAMAZAKI  Yasuhiko NAKASHIMA  

     
    PAPER-Computer System

      Publicized:
    2016/08/24
      Vol:
    E99-D No:12
      Page(s):
    3072-3081

    Light-field image processing has been widely employed in many areas, from mobile devices to manufacturing applications. The fundamental process of extracting the usable information requires significant computation on high-resolution raw image data. A graphics processing unit (GPU) is used to exploit the data parallelism, as in general image processing applications. However, the sparse memory access pattern of these applications reduces the performance of GPU devices, for both systematic and algorithmic reasons. Thus, we propose an optimization technique that redesigns the memory access pattern of the applications to alleviate the memory bottleneck of the rendering application and to increase the data reusability of the depth extraction application. We evaluated our optimized implementations against state-of-the-art algorithm implementations on several GPUs, where all implementations were optimally configured for each specific device. Our proposed optimization increased the performance of the rendering application on a GTX-780 GPU by 30%, and of the depth extraction application on GTX-780 and GTX-980 GPUs by 82% and 18%, respectively, compared with the original implementations.

  • Probabilistic Analysis of the Network Reliability Problem on Random Graph Ensembles

    Akiyuki YANO  Tadashi WADAYAMA  

     
    PAPER-Networks and Network Coding

      Vol:
    E99-A No:12
      Page(s):
    2218-2225

    In the field of computer science, the network reliability problem for evaluating the network failure probability has been extensively investigated. For a given undirected graph G, the network failure probability is the probability that edge failures (i.e., edge erasures) make G unconnected. Edge failures are assumed to occur independently with the same probability. The main contributions of the present paper are the upper and lower bounds on the expected network failure probability. We herein assume a simple random graph ensemble that is closely related to the Erdős-Rényi random graph ensemble. These upper and lower bounds exhibit the typical behavior of the network failure probability. The proof is based on the fact that the cut-set space of G is a linear space over F2 spanned by the incidence matrix of G. The present study shows a close relationship between the ensemble analysis of the expected network failure probability and the ensemble analysis of the error detection probability of LDGM codes with column weight 2.
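The quantity being bounded is easy to estimate numerically. The following Monte Carlo sketch erases each edge independently with probability p and checks connectivity with a union-find; it illustrates the definition of the network failure probability, whereas the paper's contribution is analytic ensemble bounds, not simulation.

```python
import random

def network_failure_prob(nodes, edges, p, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that i.i.d. edge
    failures with probability p disconnect the graph."""
    rng = random.Random(seed)

    def find(parent, x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    failures = 0
    for _ in range(trials):
        parent = list(range(nodes))
        for u, v in edges:
            if rng.random() >= p:            # edge survives
                ru, rv = find(parent, u), find(parent, v)
                parent[ru] = rv
        roots = {find(parent, i) for i in range(nodes)}
        if len(roots) > 1:
            failures += 1
    return failures / trials

# 4-node cycle: it disconnects iff at least 2 of its 4 edges fail.
# Exact value at p = 0.1: 1 - 0.9^4 - 4*0.1*0.9^3 = 0.0523.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
est = network_failure_prob(4, cycle, p=0.1)
print(abs(est - 0.0523) < 0.01)  # -> True
```

For large graphs this direct estimation becomes expensive at small failure probabilities, which is one motivation for analytic bounds of the kind derived in the paper.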

  • Efficient Multiplication Based on Dickson Bases over Any Finite Fields

    Sun-Mi PARK  Ku-Young CHANG  Dowon HONG  Changho SEO  

     
    PAPER-Algorithms and Data Structures

      Vol:
    E99-A No:11
      Page(s):
    2060-2074

    We propose subquadratic space complexity multipliers for any finite field $\mathbb{F}_{q^n}$ over the base field $\mathbb{F}_q$ using the Dickson basis, where q is a prime power. It is shown that a field multiplication in $\mathbb{F}_{q^n}$ based on the Dickson basis results in computations of Toeplitz matrix vector products (TMVPs). Therefore, an efficient computation of a TMVP yields an efficient multiplier. In order to derive efficient $\mathbb{F}_{q^n}$ multipliers, we develop computational schemes for a TMVP over $\mathbb{F}_{q}$. As a result, the $\mathbb{F}_{2^n}$ multipliers, as special cases of the proposed $\mathbb{F}_{q^n}$ multipliers, have lower time complexities as well as space complexities compared with existing results. For example, in the case that n is a power of 3, the proposed $\mathbb{F}_{2^n}$ multiplier for an irreducible Dickson trinomial has about 14% reduced space complexity and lower time complexity compared with the best known results.
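The classic two-way TMVP split behind such subquadratic schemes trades four half-size products for three: writing the Toeplitz matrix as [[A, B], [C, A]], the product with (x, y) is (A(x+y) + (B-A)y, A(x+y) + (C-A)x). The sketch below works over the integers with n a power of two; the paper works over finite fields and uses more elaborate splits, and the indexing convention here (T[i][j] = t[i-j+n-1]) is an assumption for illustration.

```python
def tmvp(t, v):
    """Product of an n-by-n Toeplitz matrix and a vector, where
    T[i][j] = t[i - j + n - 1] (so t has 2n-1 entries) and n is a
    power of two.  Recursively applies the three-product split."""
    n = len(v)
    if n == 1:
        return [t[0] * v[0]]
    m = n // 2
    a = t[m:3 * m - 1]           # shared diagonal block A (2m-1 entries)
    b = t[0:2 * m - 1]           # top-right block B
    c = t[2 * m:4 * m - 1]       # bottom-left block C
    x, y = v[:m], v[m:]

    p1 = tmvp(a, [xi + yi for xi, yi in zip(x, y)])      # A(x+y)
    p2 = tmvp([bi - ai for bi, ai in zip(b, a)], y)      # (B-A)y
    p3 = tmvp([ci - ai for ci, ai in zip(c, a)], x)      # (C-A)x

    top = [u + w for u, w in zip(p1, p2)]                # Ax + By
    bot = [u + w for u, w in zip(p1, p3)]                # Cx + Ay
    return top + bot

# 4x4 Toeplitz matrix with first row [4,3,2,1], first column [4,5,6,7].
print(tmvp([1, 2, 3, 4, 5, 6, 7], [1, 0, 1, 0]))  # -> [6, 8, 10, 12]
```

Applying the split recursively gives the subquadratic multiplication complexity that the proposed multipliers exploit.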

  • Design of a Compact Sound Localization Device on a Stand-Alone FPGA-Based Platform

    Mauricio KUGLER  Teemu TOSSAVAINEN  Susumu KUROYANAGI  Akira IWATA  

     
    PAPER-Computer System

      Publicized:
    2016/07/26
      Vol:
    E99-D No:11
      Page(s):
    2682-2693

    Sound localization systems are widely studied and have several potential applications, including hearing aid devices, surveillance and robotics. However, few proposed solutions target portable systems, such as wearable devices, which require a small unnoticeable platform, or unmanned aerial vehicles, in which weight and low power consumption are critical aspects. The main objective of this research is to achieve real-time sound localization capability in a small, self-contained device, without having to rely on large shaped platforms or complex microphone arrays. The proposed device has two surface-mount microphones spaced only 20 mm apart. Such reduced dimensions present challenges for the implementation, as differences in level and spectra become negligible, and only the time-difference of arrival (TDoA) can be used as a localization cue. Three main issues have to be addressed in order to accomplish these objectives. To achieve real-time processing, the TDoA is calculated using zero-crossing spikes applied to the hardware-friendly Jeffress model. In order to make up for the reduction in resolution due to the small dimensions, the signal is upsampled several-fold within the system. Finally, a coherence-based spectral masking is used to select only frequency components with relevant TDoA information. The proposed system was implemented on a field-programmable gate array (FPGA) based platform, due to the large number of concurrent and independent tasks, which can be efficiently parallelized in reconfigurable hardware devices. Experimental results with white-noise and environmental sounds show high accuracies for both anechoic and reverberant conditions.
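The localization cue here is the TDoA between the two microphones. As a simplified software stand-in for the spike-based Jeffress coincidence detection used in the device, plain cross-correlation recovers the same quantity; the signal, sampling rate, and delay below are synthetic.

```python
import numpy as np

def estimate_tdoa(left, right, fs, max_lag):
    """Estimate the time-difference of arrival (in seconds) from the
    peak of the cross-correlation, searched over +/- max_lag samples."""
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.sum(left[max(0, -l):len(left) - max(0, l)] *
                   right[max(0, l):len(right) - max(0, -l)])
            for l in lags]
    return lags[int(np.argmax(corr))] / fs

# Synthetic test: the right channel lags the left by 7 samples.
rng = np.random.default_rng(1)
sig = rng.standard_normal(2000)
d = 7
left, right = sig[d:], sig[:len(sig) - d]
print(round(estimate_tdoa(left, right, fs=8000.0, max_lag=20) * 8000))  # -> 7
```

In the actual device, the correlation over candidate delays is realized in hardware by coincidence-detecting zero-crossing spike trains, and the signal is upsampled first so that a 20 mm baseline still yields usable delay resolution.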

  • Relating Crosstalk to Plane-Wave Field-to-Wire Coupling

    Flavia GRASSI  Giordano SPADACINI  Keliang YUAN  Sergio A. PIGNARI  

     
    PAPER-Electromagnetic Compatibility(EMC)

      Publicized:
    2016/05/25
      Vol:
    E99-B No:11
      Page(s):
    2406-2413

    In this work, a novel formulation of crosstalk (XT) is developed, in which the perturbation/loading effect that the generator circuit exerts on the passive part of the receptor circuit is elucidated. Practical conditions (i.e., weak coupling and matching/mismatching of the generator circuit) under which this effect can be neglected are then discussed and exploited to develop an alternative radiated susceptibility (RS) test procedure, which resorts to crosstalk to induce at the terminations of a cable harness the same disturbance that would be induced by an external uniform plane-wave field. The proposed procedure, here developed with reference to typical RS setups foreseen by standards of the aerospace sector, assures equivalence with field coupling without a priori knowledge and/or specific assumptions on the units connected to the terminations of the cable harness. The accuracy of the proposed equivalence scheme is assessed by virtual experiments carried out in a full-wave simulation environment.

  • Measurement Matrices Construction for Compressed Sensing Based on Finite Field Quasi-Cyclic LDPC Codes

    Hua XU  Hao YANG  Wenjuan SHI  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2016/06/16
      Vol:
    E99-B No:11
      Page(s):
    2332-2339

    Measurement matrix construction is critically important for signal sampling and reconstruction in compressed sensing. From a practical point of view, deterministic construction of the measurement matrix is preferable to random construction. In this paper, we propose a novel deterministic method to construct a measurement matrix for compressed sensing, the CS-FF (compressed sensing-finite field) algorithm. In the proposed algorithm, the measurement matrix is constructed from a finite-field quasi-cyclic low-density parity-check (QC-LDPC) code and thus has a quasi-cyclic structure. Furthermore, we construct three groups of measurement matrices. The first group comprises the proposed matrix and other matrices, including deterministically and randomly constructed matrices. The other two groups are both constructed by our method. We compare the recovery performance of these matrices. Simulation results demonstrate that the recovery performance of our matrix is superior to that of the other matrices. In addition, simulation results show that the compression ratio is an important parameter for analysing and predicting the recovery performance of the proposed measurement matrix. Moreover, these matrices have a smaller storage requirement than random ones, and they achieve a better trade-off between complexity and performance. Therefore, from a practical perspective, the proposed scheme is hardware-friendly and easily implemented, and it is well suited to compressed sensing owing to its quasi-cyclic structure and good recovery performance.

  • Small-World-Network Model Based Routing Method for Wireless Sensor Networks

    Nobuyoshi KOMURO  Sho MOTEGI  Kosuke SANADA  Jing MA  Zhetao LI  Tingrui PEI  Young-June CHOI  Hiroo SEKIYA  

     
    PAPER

      Vol:
    E99-B No:11
      Page(s):
    2315-2322

    This paper proposes a Watts-Strogatz-model based routing method for wireless sensor networks, along with a link-exchange operation. The proposed routing achieves low data-collection delay because of the existence of hub nodes. By applying link exchanges, a node with a low remaining battery level can escape from being a hub node. Therefore, the proposed routing method achieves fair battery-power consumption among sensor nodes. The proposed method can thus prolong the network lifetime while keeping the small-world properties. Simulation results show the effectiveness of the proposed method.
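The small-world effect such routing exploits is easy to demonstrate: adding a few random shortcuts to a ring lattice sharply reduces the average hop count. The sketch below uses the Newman-Watts "add shortcuts" variant rather than Watts-Strogatz rewiring so the network cannot disconnect; node count, neighborhood size, and shortcut probability are illustrative, not the paper's simulation parameters.

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path hop count over all node pairs (BFS per node)."""
    n = len(adj)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors per side."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def add_shortcuts(adj, p, seed=42):
    """Newman-Watts style shortcuts: with probability p, each node
    gains one extra link to a uniformly random node (no rewiring,
    so connectivity is preserved)."""
    rng = random.Random(seed)
    n = len(adj)
    new = [set(s) for s in adj]
    for i in range(n):
        if rng.random() < p:
            j = rng.randrange(n)
            if j != i:
                new[i].add(j)
                new[j].add(i)
    return new

ring = ring_lattice(100, 2)
sw = add_shortcuts(ring, p=0.3)
print(avg_path_length(sw) < avg_path_length(ring))  # -> True
```

Hub formation shortens data-collection paths the same way these shortcuts do; the paper's link-exchange operation then moves the hub role away from energy-depleted nodes while preserving this property.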

  • Micro-Vibration Patterns Generated from Shape Memory Alloy Actuators and the Detection of an Asymptomatic Tactile Sensation Decrease in Diabetic Patients

    Junichi DANJO  Sonoko DANJO  Yu NAKAMURA  Keiji UCHIDA  Hideyuki SAWADA  

     
    PAPER-Human-computer Interaction

      Publicized:
    2016/08/10
      Vol:
    E99-D No:11
      Page(s):
    2759-2766

    Diabetes mellitus is a group of metabolic diseases that cause high blood sugar due to functional problems with the pancreas or metabolism. Diabetic patients have few subjective symptoms and may experience decreased sensation without being aware of it. The commonly performed tests for sensory disorders are qualitative in nature. The authors focus on the decline in tactile sensitivity, and develop a non-invasive method to detect the level of tactile sensation using a novel micro-vibration actuator that employs shape-memory alloy wires. Previously, we performed a pilot study that applied the device to 15 diabetic patients and confirmed a significant reduction in tactile sensation in diabetic patients when compared to healthy subjects. In this study, we focus on the asymptomatic development of decreased sensation associated with diabetes mellitus. The objectives are to examine diabetic patients who are unaware of abnormal or decreased sensation using the quantitative tactile sensation measurement device and to determine whether tactile sensation is decreased in patients compared to healthy controls. The finger method is used to measure the Tactile Sensation Threshold (TST) score of the index and middle fingers using the new device and the following three procedures: TST-1, TST-4, and TST-8. TST scores, which range from 1 to 30, were compared between the two groups. The TST scores were significantly higher for the diabetic patients (P<0.05). The TST scores for the left fingers of diabetic patients and healthy controls were 5.9±6.2 and 2.7±2.9 for TST-1, 15.3±7.0 and 8.7±6.4 for TST-4, and 19.3±7.8 and 12.7±9.1 for TST-8. Our data suggest that the use of the new quantitative tactile sensation measurement device enables the detection of decreased tactile sensation in diabetic patients who are unaware of abnormal or decreased sensation compared to controls.

  • ePec-LDPC HARQ: An LDPC HARQ Scheme with Targeted Retransmission

    Yumei WANG  Jiawei LIANG  Hao WANG  Eiji OKI  Lin ZHANG  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2016/04/12
      Vol:
    E99-B No:10
      Page(s):
    2168-2178

    In 3GPP (3rd Generation Partnership Project) LTE (Long Term Evolution) systems, when HARQ (Hybrid Automatic Repeat request) retransmission is invoked, the data at the transmitter are retransmitted randomly or sequentially regardless of their relationship to the wrongly decoded data. Such practice is inefficient, since precious transmission resources will be spent to retransmit data that may be of no use in error correction at the receiver. This paper proposes an incremental redundancy HARQ scheme based on Error Position Estimating Coding (ePec) and LDPC (Low Density Parity Check) channel coding, which is called ePec-LDPC HARQ. The proposed scheme enables the receiver to feed back which code blocks within a specific MAC (Media Access Control) PDU (Protocol Data Unit) were wrongly decoded. The transmitter gets the feedback information and then performs targeted retransmission. That is, only the data related to the wrongly decoded code blocks are retransmitted, which can improve the retransmission efficiency and thus reduce the retransmission overhead. An enhanced incremental redundancy LDPC coding approach, called EIR-LDPC, together with a physical layer framing method, is developed to implement ePec-LDPC HARQ. Performance evaluations show that ePec-LDPC HARQ reduces the overall transmission resources by 15% compared to a conventional LDPC HARQ scheme. Moreover, the average number of retransmissions per MAC PDU and the transmission delay are also reduced considerably.

  • Re-Ranking Approach of Spoken Term Detection Using Conditional Random Fields-Based Triphone Detection

    Naoki SAWADA  Hiromitsu NISHIZAKI  

     
    PAPER-Spoken term detection

      Publicized:
    2016/07/19
      Vol:
    E99-D No:10
      Page(s):
    2518-2527

    This study proposes a two-pass spoken term detection (STD) method. The first pass uses a phoneme-based dynamic time warping (DTW)-based STD, and the second pass recomputes the detection scores produced by the first pass using conditional random fields (CRF)-based triphone detectors. In the second pass, we treat STD as a sequence labeling problem. We use CRF-based triphone detection models based on features generated from multiple types of phoneme-based transcriptions. The models learn recognition error patterns, such as phoneme-to-phoneme confusions, in the CRF framework. Consequently, the models can detect a triphone comprising a query term with a detection probability. In an experimental evaluation on two types of test collections, the CRF-based approach worked well in the re-ranking process for the DTW-based detections. CRF-based re-ranking showed 2.1% and 2.0% absolute improvements in F-measure for the two test collections, respectively.
