
Keyword Search Result

[Keyword] (42807 hits)

Results 6601-6620 of 42807 hits

  • Modeling of Field-Plate Effect on Gallium-Nitride-Based High Electron Mobility Transistors for High-Power Applications

    Takeshi MIZOGUCHI  Toshiyuki NAKA  Yuta TANIMOTO  Yasuhiro OKADA  Wataru SAITO  Mitiko MIURA-MATTAUSCH  Hans Jürgen MATTAUSCH  

     
    PAPER-Semiconductor Materials and Devices
    Vol: E100-C No:3, Page(s): 321-328

    The major task in compact modeling for high-power devices is to predict the switching waveform accurately, because it determines the energy loss of circuits. Device capacitance mainly determines the switching characteristics, which makes accurate capacitance modeling indispensable. This paper presents a newly developed compact model, HiSIM-GaN [Hiroshima University STARC IGFET Model for Gallium-Nitride-based High Electron Mobility Transistors (GaN-HEMTs)], with a focus on accurate modeling of the field plate (FP), which is introduced to delocalize the electric-field peak that occurs at the electrode edge. We demonstrate that the proposed model reproduces capacitance measurements of a GaN-HEMT accurately without fitting parameters. Furthermore, the influence of the field plate on the studied circuit performance is analyzed.

  • Two Classes of New Zero Difference Balanced Functions from Difference Balanced Functions and Perfect Ternary Sequences

    Wei SU  

     
    PAPER-Coding Theory
    Vol: E100-A No:3, Page(s): 839-845

    In this paper, we present two classes of zero difference balanced (ZDB) functions, derived from difference balanced functions and from a class of perfect ternary sequences, respectively. The proposed functions have parameters not covered in the literature and can be used to design optimal constant composition codes and perfect difference systems of sets.
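
    For readers unfamiliar with the terminology, the generally used definition of a zero difference balanced function is recalled below as a short LaTeX note; this is background only, and the concrete parameters of the two new classes are those given in the paper itself.

        % Standard ZDB definition (background; not the paper's new constructions)
        \text{Let } (A,+) \text{ be an abelian group of order } n \text{ and let } B \text{ be a set of size } m.
        \text{A function } f\colon A \to B \text{ is an } (n, m, \lambda)\text{-ZDB function if}
        \bigl|\{\, x \in A : f(x+a) = f(x) \,\}\bigr| = \lambda \quad \text{for every } a \in A \setminus \{0\}.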

  • Superconducting Transition Edge Sensor for Gamma-Ray Spectroscopy Open Access

    Masashi OHNO  Tomoya IRIMATSUGAWA  Hiroyuki TAKAHASHI  Chiko OTANI  Takashi YASUMUNE  Koji TAKASAKI  Chikara ITO  Takashi OHNISHI  Shin-ichi KOYAMA  Shuichi HATAKEYAMA  R.M. Thushara. DAMAYANTHI  

     
    INVITED PAPER
    Vol: E100-C No:3, Page(s): 283-290

    A superconducting transition edge sensor (TES) coupled with a heavy-metal absorber is a promising microcalorimeter for gamma-ray (γ-ray) spectroscopy with ultra-high energy resolution and high detection efficiency. It is very useful for the non-destructive inspection of nuclear materials. Its high resolving power for γ-ray peaks enables precise identification of multiple nuclides, such as plutonium (Pu) and other actinides, with high efficiency and safety. For this purpose, we have developed a TES coupled with a tin absorber. We propose a new device structure using a gold bump post that connects the tin absorber to the thermometer, a superconducting Ir/Au bilayer. The high thermal conductivity of the gold bump post provides strong thermal coupling between the thermometer and the γ-ray absorber, which yields a large pulse height and a fast decay time. Our TES achieved a good energy resolution of 84 eV FWHM at 59.5 keV. Using this TES device, we also demonstrated nuclear-material measurements. In the measurement of a Pu sample, we detected sharp γ-ray peaks from 239Pu and 240Pu; in the measurement of a fission-products (FP) sample, we observed fluorescence X-ray peaks emitted by the elements contained in the FP. The TES could resolve the fine structure of each fluorescence X-ray line, such as Kα1 and Kα2. In addition, we developed a TES coupled with a tantalum absorber, which is expected to have higher absorption efficiency for γ-rays. This device achieved a best energy resolution of 465 eV at 662 keV.

  • Theoretical Analyses on 2-Norm-Based Multiple Kernel Regressors

    Akira TANAKA  Hideyuki IMAI  

     
    PAPER-Neural Networks and Bioengineering
    Vol: E100-A No:3, Page(s): 877-887

    The solution of the standard 2-norm-based multiple kernel regression problem and the theoretical limit of the considered model space are discussed in this paper. We prove that 1) the solution of the 2-norm-based multiple kernel regressor constructed from a given training data set does not, in general, attain the theoretical limit of the considered model space in terms of generalization error, even if the training data set is noise-free; and 2) the solution of the 2-norm-based multiple kernel regressor is identical to the solution of the single kernel regressor under a noise-free setting, in which the adopted single kernel is the sum of the kernels used in the multiple kernel regressor; this also holds for a noisy setting with the 2-norm-based regularizer. The first result motivates us to develop a novel framework for multiple kernel regression problems that yields a better solution, closer to the theoretical limit, and the second result implies that it is enough to use single kernel regressors with the sum of the given multiple kernels instead of multiple kernel regressors, as long as the 2-norm-based criterion is used.
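
    The equivalence stated in 2) can be pictured with ordinary 2-norm-regularized (ridge) kernel regression in which the Gram matrix is simply the sum of the individual kernels' Gram matrices. The sketch below only illustrates that standard formulation with placeholder Gaussian kernels and toy data; it is not the paper's experimental setup.

        import numpy as np

        # Minimal sketch (illustration only): 2-norm-regularized kernel regression in
        # which the Gram matrix is the sum of several kernels' Gram matrices.
        # The Gaussian kernels, their widths, and the toy data are placeholders.
        def gaussian_kernel(X, Y, gamma):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def fit_sum_kernel_regressor(X, y, gammas, lam=1e-3):
            # "sum kernel": k(x, x') = sum_j k_j(x, x')
            K = sum(gaussian_kernel(X, X, g) for g in gammas)
            return np.linalg.solve(K + lam * np.eye(len(X)), y)

        def predict(X_train, X_test, alpha, gammas):
            K_test = sum(gaussian_kernel(X_test, X_train, g) for g in gammas)
            return K_test @ alpha

        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, size=(50, 1))
        y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(50)
        alpha = fit_sum_kernel_regressor(X, y, gammas=[1.0, 10.0])
        y_hat = predict(X, X, alpha, gammas=[1.0, 10.0])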

  • Link Quality Information Sharing by Compressed Sensing and Compressed Transmission for Arbitrary Topology Wireless Mesh Networks

    Hiraku OKADA  Shuhei SUZAKI  Tatsuya KATO  Kentaro KOBAYASHI  Masaaki KATAYAMA  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies
    Publicized: 2016/09/20, Vol: E100-B No:3, Page(s): 456-464

    We previously proposed applying compressed sensing to realize link quality information sharing for wireless mesh networks (WMNs) with grid topology. In this paper, we extend the link quality sharing method to WMNs with arbitrary topologies. For arbitrary-topology WMNs, we introduce a link quality matrix and a matrix formulation for compressed sensing. By employing a diffusion wavelet basis, the link quality matrix is converted to a sparse equivalent. Based on the sparse matrix, information sharing is achieved by compressed sensing. In addition, we propose compressed transmission for arbitrary-topology WMNs, in which only the compressed link quality information is transmitted. Experiments and simulations clarify that the proposed methods can reduce the amount of data transmitted for information sharing while maintaining the quality of the shared information.
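
    The paper's diffusion-wavelet sparsifying basis and link-quality matrix construction are not reproduced here; purely as an illustration of the compressed-sensing recovery step, the sketch below recovers a sparse vector from random linear measurements with Orthogonal Matching Pursuit. All sizes and the measurement matrix are placeholders.

        import numpy as np

        # Illustration only: generic compressed-sensing recovery via Orthogonal
        # Matching Pursuit (OMP). The random measurement matrix and sizes are
        # placeholders; the paper's diffusion-wavelet basis is not reproduced here.
        def omp(A, y, sparsity):
            m, n = A.shape
            residual = y.copy()
            support = []
            coef = np.zeros(0)
            for _ in range(sparsity):
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(n)
            x[support] = coef
            return x

        rng = np.random.default_rng(1)
        n, m, k = 100, 40, 5                      # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x_true
        x_hat = omp(A, y, k)
        print("max abs recovery error:", np.abs(x_hat - x_true).max())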

  • FOREWORD Open Access

    Takeshi KOSHIBA  

     
    FOREWORD
    Vol: E100-D No:3, Page(s): 413-413

  • Polynomial Time Inductive Inference of Languages of Ordered Term Tree Patterns with Height-Constrained Variables from Positive Data

    Takayoshi SHOUDAI  Kazuhide AIKOH  Yusuke SUZUKI  Satoshi MATSUMOTO  Tetsuhiro MIYAHARA  Tomoyuki UCHIDA  

     
    PAPER-Algorithms and Data Structures
    Vol: E100-A No:3, Page(s): 785-802

    An efficient means of learning tree-structural features from tree-structured data would enable us to construct effective mining methods for tree-structured data. Here, a pattern representing rich tree-structural features common to tree-structured data and a polynomial-time algorithm for learning important tree patterns are necessary for mining knowledge from tree-structured data. As such a tree pattern, we introduce a term tree pattern t such that any edge label of t belongs to a finite alphabet Λ, any internal vertex of t has ordered children, and t has a new kind of structured variable, called a height-constrained variable. A height-constrained variable has a pair of integers (i, j) as constraints, and it can be replaced with a tree whose trunk length is at least i and whose height is at most j. This replacement is called a height-constrained replacement. A sequence of consecutive height-constrained variables is called a variable-chain. In this paper, we present polynomial-time algorithms for solving the membership problem and the minimal language (MINL) problem for term tree patterns having no variable-chain. The membership problem for term tree patterns is to decide whether or not a given tree can be obtained from a given term tree pattern by applying height-constrained replacements to all height-constrained variables in the term tree pattern. The MINL problem for term tree patterns is to find a term tree pattern t such that the language generated by t is minimal among the languages generated by term tree patterns that contain all the given tree-structured data. Finally, we show that the class, i.e., the set of all term tree patterns having no variable-chain, is polynomial-time inductively inferable from positive data if |Λ| ≥ 2.

  • Link Weight Optimization Scheme for Link Reinforcement in IP Networks

    Stephane KAPTCHOUANG  Hiroki TAHARA  Eiji OKI  

     
    PAPER-Internet
    Publicized: 2016/10/06, Vol: E100-B No:3, Page(s): 417-425

    Link duplication is widely used in Internet protocol networks to tackle the increase in network congestion caused by link failure. Network congestion here denotes the highest link utilization over all the links in the network. Due to capital expenditure constraints, not every link can be duplicated to reduce congestion after a link fails, so giving priority to selected links makes sense. Meanwhile, traffic routes are determined by link weights that are configured in advance. Therefore, choosing an appropriate set of link weights reduces the number of links that actually need to be duplicated in order to keep congestion manageable under failure; a manageable congestion is one under which Service Level Agreements can be met. The conventional scheme fixes link weights before determining which links to duplicate. In this scheme, the fixed link weights are optimized to minimize the worst network congestion, i.e., the highest network congestion over all single non-duplicated link failures. As the selection of links for protection depends on the fixed link weights, some suitable protection patterns, which are not considered because they correspond to other possible link weights, might be skipped, leading to overprotection. This paper proposes a scheme that considers multiple protection scenarios before optimizing link weights in order to reduce the overall number of protected links. Simulation results show that the proposed scheme uses fewer link protections than the conventional scheme.
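
    As a small illustration of the "worst network congestion over all single non-duplicated link failures" metric described above, the sketch below evaluates link utilization under shortest-path routing (no ECMP splitting) for every single-link failure of a toy topology. The graph, weights, capacities, and demands are made-up placeholders, and networkx is used instead of the authors' tooling.

        import networkx as nx

        # Illustration only: the "worst network congestion over all single-link
        # failures" metric under shortest-path routing on given link weights.
        # Topology, weights, capacities, and demands are made-up placeholders.
        def congestion(G, demands):
            load = {tuple(sorted(e)): 0.0 for e in G.edges}
            for (s, t), volume in demands.items():
                path = nx.shortest_path(G, s, t, weight="weight")
                for u, v in zip(path, path[1:]):
                    load[tuple(sorted((u, v)))] += volume
            return max(load[tuple(sorted((u, v)))] / G[u][v]["cap"] for u, v in G.edges)

        def worst_case_congestion(G, demands):
            worst = congestion(G, demands)              # no-failure case
            for e in list(G.edges):                     # each single-link failure
                H = G.copy()
                H.remove_edge(*e)
                if nx.is_connected(H):
                    worst = max(worst, congestion(H, demands))
            return worst

        G = nx.Graph()
        for u, v, w, c in [(0, 1, 1, 10), (1, 2, 1, 10), (2, 3, 1, 10),
                           (3, 0, 1, 10), (0, 2, 3, 10)]:
            G.add_edge(u, v, weight=w, cap=c)
        demands = {(0, 2): 4.0, (1, 3): 3.0}
        print("worst-case congestion:", worst_case_congestion(G, demands))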

  • FOREWORD Open Access

    Keizo CHO  

     
    FOREWORD
    Vol: E100-B No:2, Page(s): 194-194

  • Thermal Treatment Effect on Morphology and Photo-Physical Properties of Bis-Styrylbenzene Derivatives

    Hiroyuki MOCHIZUKI  

     
    BRIEF PAPER
    Vol: E100-C No:2, Page(s): 145-148

    The characteristics of bis-styrylbenzene derivatives with trifluoromethyl or methyl moieties were evaluated for the as-vapor-deposited film, the thermally treated film, and the crystal obtained from solution. Thermal treatment dramatically changed the morphology and photo-physical properties of the vapor-deposited film.

  • Field Experimental Evaluation of Mobile Terminal Velocity Estimation Based on Doppler Spread Detection for Mobility Control in Heterogeneous Cellular Networks

    Sourabh MAITI  Manabu MIKAMI  Kenji HOSHINO  

     
    PAPER
    Vol: E100-B No:2, Page(s): 252-261

    To deal with the recent explosion of mobile data traffic, heterogeneous cellular networks, in which a large number of small cells are deployed in a macro-cell coverage area, are considered a promising approach. However, when a mobile terminal (MT) traveling at high velocity moves through several small cells in a short period of time, the frequent handovers (HOs) that occur between small cells degrade the user quality of experience. To avoid such HO problems while improving the network capacity of the heterogeneous cellular network, it is effective to introduce an inter-layer HO control policy in which MTs traveling at high velocities are connected to the macro-cell layer to reduce the number of HOs, and MTs traveling at low velocities or which are stationary are connected to the small-cell layer to offload traffic from the macro-cells to the small-cells. However, to realize such an inter-layer HO control policy in the heterogeneous cellular network, it is crucial to estimate the velocity of each MT. Due to the technological constraints of MT velocity estimation based on the Global Positioning System (GPS), we focus on MT velocity estimation algorithms that do not require information provided by GPS. First, we discuss the issues of existing MT velocity estimation algorithms and then focus on an algorithm based on conventional Doppler spread detection using the Fast Fourier Transform (FFT). Since few studies have evaluated Doppler-spread-based MT velocity estimation techniques for practical communication systems in actual radio propagation environments, we implement the MT velocity estimation algorithm in a Long Term Evolution (LTE) based experimental system and perform field experiments. Based on these experimental results, we also evaluate the high/low-velocity decision accuracy for the inter-layer HO control policy and show that good decision accuracy is achieved in both line-of-sight (LOS) and non-line-of-sight (NLOS) outdoor propagation environments. These results show the feasibility of the algorithm for practical mobile communication systems in actual radio propagation environments.
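
    The abstract describes FFT-based Doppler spread detection for velocity estimation. The sketch below is only a toy illustration of that idea on a synthetic sum-of-sinusoids fading process: it estimates the Doppler spread from the width of the fading power spectrum and converts it to a velocity. The carrier frequency, sampling rate, and power threshold are illustrative assumptions, not the paper's settings.

        import numpy as np

        # Illustration only (not the paper's implementation): FFT-based Doppler-spread
        # detection on a synthetic sum-of-sinusoids fading process. Carrier frequency,
        # sampling rate, and the power threshold are illustrative assumptions.
        c = 3.0e8                         # speed of light [m/s]
        fc = 2.1e9                        # assumed carrier frequency [Hz]
        fs = 1000.0                       # channel sampling rate [Hz]
        v_true = 30.0                     # true terminal velocity [m/s]
        fd = v_true * fc / c              # maximum Doppler frequency [Hz]

        rng = np.random.default_rng(0)
        t = np.arange(4096) / fs
        n_paths = 64
        theta = rng.uniform(0.0, 2.0 * np.pi, n_paths)      # angles of arrival
        phi = rng.uniform(0.0, 2.0 * np.pi, n_paths)        # path phases
        doppler = fd * np.cos(theta)                         # per-path Doppler shifts
        h = np.exp(1j * (2.0 * np.pi * doppler[:, None] * t[None, :] + phi[:, None])).sum(axis=0)

        # Doppler power spectrum of the fading samples
        H = np.fft.fftshift(np.fft.fft(h * np.hanning(len(h))))
        f = np.fft.fftshift(np.fft.fftfreq(len(h), d=1.0 / fs))
        psd = np.abs(H) ** 2

        # Doppler spread: frequency extent holding the significant part of the power
        mask = psd > 0.01 * psd.max()
        fd_est = np.max(np.abs(f[mask]))
        v_est = fd_est * c / fc
        print(f"estimated velocity: {v_est:.1f} m/s (true: {v_true:.1f} m/s)")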

  • Face Hallucination by Learning Local Distance Metric

    Yuanpeng ZOU  Fei ZHOU  Qingmin LIAO  

     
    LETTER-Image Processing and Video Processing
    Publicized: 2016/11/07, Vol: E100-D No:2, Page(s): 384-387

    In this letter, we propose a novel method for face hallucination by learning a new distance metric in the low-resolution (LR) patch space (source space). Local patch-based face hallucination methods usually assume that the two manifolds formed by LR and high-resolution (HR) image patches have similar local geometry. However, this assumption does not hold well in practice. Motivated by metric learning in machine learning, we propose to learn a new distance metric in the source space under the supervision of the true local geometry in the target space (HR patch space). The learned metric gives more freedom to the representation of local geometry in the source space, and thus the local geometries of the source and target spaces become more consistent. Experiments conducted on two datasets demonstrate that the proposed method is superior to state-of-the-art face hallucination and image super-resolution (SR) methods.

  • TCP Network Coding with Enhanced Retransmission for Heavy and Bursty Loss

    Nguyen VIET HA  Kazumi KUMAZOE  Masato TSURU  

     
    PAPER-Network
    Publicized: 2016/08/09, Vol: E100-B No:2, Page(s): 293-303

    In general, the Transmission Control Protocol (TCP), e.g., TCP NewReno, considers every loss to be a sign of congestion and decreases the sending rate whenever a loss is detected. Integrating network coding (NC) into the protocol stack and making it cooperate with TCP (TCP/NC) provides the benefit of masking packet losses in lossy networks, e.g., wireless networks. TCP/NC complements the packet loss recovery capability without retransmission at the sink by sending redundant combination packets encoded at the source. However, TCP/NC is less effective under heavy and bursty loss, which often occurs on fast-fading channels, because the retransmission mechanism of TCP/NC relies entirely on the TCP layer. Our solution is TCP/NC with enhanced retransmission (TCP/NCwER), in which a new retransmission mechanism is developed to retransmit more than one lost packet quickly and efficiently, to allow encoding of the retransmitted packets to reduce repeated losses, and to handle dependent combination packets to avoid decoding failure. We implement and test our proposal in Network Simulator 3. The results show that TCP/NCwER overcomes the deficiencies of the original TCP/NC and improves TCP goodput on both random-loss and burst-loss channels.
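
    As background for the combination packets mentioned above, the sketch below shows the generic network-coding idea: the source transmits random linear combinations of the original packets, and the sink decodes by Gaussian elimination once enough linearly independent combinations arrive. For simplicity the toy works over GF(2); practical TCP/NC-style schemes typically use a larger field, and this is not the paper's TCP/NCwER mechanism.

        import numpy as np

        # Illustration only: random linear network coding over GF(2) with Gaussian-
        # elimination decoding. Practical schemes usually work over a larger field
        # (e.g. GF(256)); this toy is not the paper's TCP/NCwER mechanism.
        def encode(packets, n_out, rng):
            k = packets.shape[0]
            coeffs = rng.integers(0, 2, (n_out, k), dtype=np.uint8)
            combos = (coeffs @ packets) % 2
            return coeffs, combos

        def decode(coeffs, combos, k):
            # Gaussian elimination over GF(2); returns the original packets if the
            # received coefficient rows have rank k, otherwise None.
            A = np.concatenate([coeffs, combos], axis=1).astype(np.uint8)
            row = 0
            for col in range(k):
                pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
                if pivot is None:
                    return None
                A[[row, pivot]] = A[[pivot, row]]
                for r in range(A.shape[0]):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]
                row += 1
            return A[:k, k:]

        rng = np.random.default_rng(0)
        k, packet_len = 4, 8
        packets = rng.integers(0, 2, (k, packet_len), dtype=np.uint8)   # original packets (bits)
        coeffs, combos = encode(packets, n_out=6, rng=rng)              # 2 redundant combinations
        received = [0, 2, 3, 5]                                         # pretend two combinations were lost
        recovered = decode(coeffs[received], combos[received], k)
        if recovered is not None:
            print("decoded correctly:", bool(np.array_equal(recovered, packets)))
        else:
            print("received combinations were not linearly independent; need more")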

  • Real-Time UHD Background Modelling with Mixed Selection Block Updates

    Axel BEAUGENDRE  Satoshi GOTO  Takeshi YOSHIMURA  

     
    PAPER-IMAGE PROCESSING
    Vol: E100-A No:2, Page(s): 581-591

    The vast majority of foreground detection methods require heavy hardware optimization to process standard-definition video in real time. Indeed, those methods process the whole frame not only for detection but also for background modelling, which makes them too resource-intensive (in time, memory, etc.) to be applied to Ultra High Definition (UHD) video. This paper presents a real-time background modelling method called Mixed Block Background Modelling (MBBM). It is a spatio-temporal approach that updates the background model by carefully selecting blocks in both a linear and a pseudo-random order and updating the corresponding parts of the model. The two block-selection orders ensure that every block is eventually updated. For foreground detection purposes, the method is combined with a foreground detector designed for UHD video, such as the Adaptive Block-Propagative Background Subtraction method. Experimental results show that the proposed MBBM can process 50 minutes of 4K UHD video in less than 6 hours, while other methods are estimated to take from 8 days to more than 21 years. Compared with 10 state-of-the-art foreground detection methods, the proposed MBBM shows the best quality, with an average global quality score of 0.597 (1 being the maximum) on a dataset of 4K UHDTV sequences containing various situations such as illumination variation. Finally, the per-pixel processing time of the MBBM is the lowest of all compared methods, with an average of 3.18×10⁻⁸ s.
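
    The details of MBBM (block scheduling, model form, and parameters) are in the paper itself; the sketch below only illustrates the general idea of updating a running-average background model on a mixed set of blocks chosen partly in linear (raster) order and partly pseudo-randomly each frame. Block size, block counts, and the learning rate are placeholders.

        import numpy as np

        # Illustration only: block-wise background updating with a mixed selection of
        # blocks (some chosen in linear/raster order, some pseudo-randomly) per frame.
        # Block size, block counts, and the learning rate are placeholders, not MBBM's.
        class BlockBackgroundModel:
            def __init__(self, height, width, block=60, n_linear=8, n_random=8,
                         alpha=0.05, seed=0):
                self.block = block
                self.bh, self.bw = height // block, width // block
                self.alpha = alpha
                self.n_linear = n_linear
                self.n_random = n_random
                self.cursor = 0
                self.model = None
                self.rng = np.random.default_rng(seed)

            def update(self, frame):
                if self.model is None:
                    self.model = frame.astype(np.float32)       # initialize from first frame
                    return
                n_blocks = self.bh * self.bw
                linear = [(self.cursor + i) % n_blocks for i in range(self.n_linear)]
                self.cursor = (self.cursor + self.n_linear) % n_blocks
                random = self.rng.integers(0, n_blocks, self.n_random).tolist()
                for idx in set(linear + random):                # linear cursor reaches every block eventually
                    by, bx = divmod(idx, self.bw)
                    ys, xs = by * self.block, bx * self.block
                    blk = frame[ys:ys + self.block, xs:xs + self.block].astype(np.float32)
                    self.model[ys:ys + self.block, xs:xs + self.block] *= (1.0 - self.alpha)
                    self.model[ys:ys + self.block, xs:xs + self.block] += self.alpha * blk

        # toy usage on random 4K-UHD-sized frames
        bg = BlockBackgroundModel(height=2160, width=3840)
        for _ in range(3):
            frame = np.random.randint(0, 256, (2160, 3840), dtype=np.uint8)
            bg.update(frame)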

  • FOREWORD

    Satoshi TANAKA  

     
    FOREWORD
    Vol: E100-A No:2, Page(s): 515-515

  • Clutter Suppression Method of Iron Tunnel Using Cepstral Analysis for Automotive Radars

    Han-Byul LEE  Jae-Eun LEE  Hae-Seung LIM  Seong-Hee JEONG  Seong-Cheol KIM  

     
    PAPER-Sensing
    Publicized: 2016/08/17, Vol: E100-B No:2, Page(s): 400-406

    In this paper, we propose an efficient clutter suppression algorithm for automotive radar systems in iron-tunnel environments. In general, the clutter in iron tunnels makes it highly likely that automotive radar systems will fail to detect targets. In order to overcome this drawback, we first analyze the cepstral characteristics of iron-tunnel clutter to determine the periodic properties of the clutter in the frequency domain. Based on this observation, we suggest removing the periodic components induced by iron-tunnel clutter in the cepstral domain by using a cepstrum editing process. To verify the clutter suppression of the proposed method experimentally, we performed measurements using 77GHz frequency-modulated continuous-wave radar sensors for an adaptive cruise control (ACC) system. Experimental results show that the proposed method effectively suppresses the clutter in iron-tunnel environments, in the sense that it significantly improves early target detection performance for ACC.
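
    The authors' exact cepstrum editing chain is not reproduced here; as a generic illustration of the idea, the sketch below takes a placeholder spectrum with a periodic ripple, computes its cepstrum, zeroes the dominant non-zero-quefrency components, and transforms back. The signal model and threshold are illustrative assumptions.

        import numpy as np

        # Illustration only (not the authors' processing chain): a periodic ripple on a
        # log-magnitude spectrum shows up as isolated cepstral peaks away from quefrency
        # zero; zeroing those bins and transforming back suppresses the periodic,
        # clutter-like component. The signal model and threshold are placeholders.
        rng = np.random.default_rng(0)
        n = 1024
        f = np.arange(n)

        # placeholder "range spectrum": smooth target response times a periodic ripple
        envelope = 1.0 + 5.0 * np.exp(-0.5 * ((f - 300) / 8.0) ** 2)
        ripple = 1.0 + 0.3 * np.cos(2.0 * np.pi * f / 16.0)        # period-16 ripple
        spectrum = envelope * ripple * (1.0 + 0.01 * rng.standard_normal(n))

        log_mag = np.log(np.abs(spectrum))
        cepstrum = np.fft.ifft(log_mag)          # conjugate-symmetric, since log_mag is real

        # cepstrum editing: zero dominant components away from quefrency zero
        q = np.arange(n)
        away_from_zero = (q > 4) & (q < n - 4)
        threshold = 4.0 * np.std(np.abs(cepstrum[away_from_zero]))
        edited = cepstrum.copy()
        edited[away_from_zero & (np.abs(cepstrum) > threshold)] = 0.0

        cleaned_spectrum = np.exp(np.fft.fft(edited).real)
        print("edited quefrency bins:", np.flatnonzero(edited != cepstrum))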

  • Band Splitting Permutations for Spatially Coupled LDPC Codes Achieving Asymptotically Optimal Burst Erasure Immunity

    Hiroki MORI  Tadashi WADAYAMA  

     
    PAPER-Coding Theory
    Vol: E100-A No:2, Page(s): 663-669

    It is well known that spatially coupled (SC) codes with erasure-BP decoding have powerful error-correcting capability over memoryless erasure channels. However, the decoding performance of SC codes significantly degrades when they are used over burst erasure channels. In this paper, we propose band splitting permutations (BSP) suitable for (l, r, L) SC codes. The BSP splits a diagonal band in a base matrix into multiple bands in order to enhance the span of the stopping sets in the base matrix. As theoretical performance guarantees, lower and upper bounds on the maximal correctable burst length of the permuted (l, r, L) SC codes are presented. These bounds indicate that the maximal correctable burst ratio of the permuted SC codes is given by λmax ≃ 1/k, where k = r/l. This implies the asymptotic optimality of the permuted SC codes in terms of burst erasure correction.

  • A Wideband Printed Elliptical Monopole Antenna for Circular Polarization

    Takafumi FUJIMOTO  Takaya ISHIKUBO  Masaya TAKAMURA  

     
    PAPER
    Vol: E100-B No:2, Page(s): 203-210

    In this paper, a printed elliptical monopole antenna for wideband circular polarization is proposed. The antenna structure is asymmetric with respect to the microstrip line, and the section of the ground plane that overlaps the elliptical patch is removed. Through simulations, the relationship between the antenna's geometrical parameters and the axial ratio of the circularly polarized wave is clarified, and the operational principle for wideband circular polarization is explained by the simulated electric current distributions. The simulated and measured bandwidths of the 3dB axial ratio with a 2-VSWR are approximately 88.4% (2.12GHz-5.47GHz) and 83.6% (2.20GHz-5.36GHz), respectively.

  • Sparse Recovery Using Sparse Sensing Matrix Based Finite Field Optimization in Network Coding

    Ganzorig GANKHUYAG  Eungi HONG  Yoonsik CHOE  

     
    LETTER-Information Network
    Publicized: 2016/11/04, Vol: E100-D No:2, Page(s): 375-378

    Network coding (NC) is considered a new paradigm for distributed networks. However, NC has an all-or-nothing property. In this paper, we propose a sparse recovery approach using a sparse sensing matrix to solve the NC all-or-nothing problem over a finite field. The effectiveness of the proposed approach is evaluated based on a sensor network.

  • Driver Behavior Assessment in Case of Critical Driving Situations

    Oussama DERBEL  René LANDRY, Jr.  

     
    PAPER
    Vol: E100-A No:2, Page(s): 491-498

    Driver behavior assessment is a hard task since it involves distinct, interconnected factors of different types. Especially in the case of insurance applications, the trade-off between application cost and data accuracy remains a challenge. Data uncertainty and noise make smartphone or low-cost sensor platforms unreliable. In order to deal with such problems, this paper proposes combining belief theory and fuzzy theory in a two-level fusion-based architecture. This enables the propagation of information errors from the lower to the higher fusion level using the belief and/or plausibility functions at the decision step. The newly developed driver and environment risk models are based on an analysis of accident statistics for each significant driving risk parameter. The developed vehicle risk models are based on the longitudinal and lateral accelerations (G-G diagram) and the velocity, and qualify the driving behavior in critical events (e.g., a zig-zag scenario). In over-speed and/or accident scenarios, the risk is evaluated using our newly developed fuzzy inference system model based on the Equivalent Energy Speed (EES). The proposed approach and risk models are illustrated by two examples of driving scenarios using the CarSim vehicle simulator. The results show the validity of the developed risk models and their coherence with the a priori risk assessment.
