
Keyword Search Result

[Keyword] Z (5900 hits)

Results 2261-2280 of 5900

  • Arc Erosion of Silver/Tungsten Contact Material under Low Voltage and Small Current and Resistive Load at 400 Hz and 50 Hz

    Jing LI  Zhiying MA  Jianming LI  Lizhan XU  

     
    PAPER

      Vol:
    E94-C No:9
      Page(s):
    1356-1361

    Using a self-developed ASTM test system for the electrical properties of contact materials under low voltage (LV), small capacity, and variable current frequency, together with a photoelectric analytical balance, comparative experiments on the electrical performance and weighing of silver-based electrical contact materials, such as silver/tungsten and silver/cadmium oxide, were completed under LV, pure resistive load, and small current at 400 Hz and 50 Hz. The surface profiles and constituents of the silver/tungsten contact material were observed and analyzed by SEM and EDAX. The results indicate that the arc burnout of the contact material at 400 Hz takes the form of stasis rather than the eddy-flow style observed at 50 Hz; moreover, the burnout area at 400 Hz is smaller than that at 50 Hz, while the local ablation of the surface layer at 400 Hz is more severe. Comparing silver-based contact materials with different second elements, such as CAgW50, CAgNi10, CAgC4 and CAgCdO15, at 400 Hz, the silver/tungsten material performs best in terms of both arc erosion resistance and welding resistance.

  • A Cross Polarization Suppressed Sequential Array with L-Probe Fed Rectangular Microstrip Antennas

    Kazuki IKEDA  Keigo SATO  Ken-ichi KAGOSHIMA  Shigeki OBOTE  Atsushi TOMIKI  Tomoaki TODA  

     
    LETTER-Antennas and Propagation

      Vol:
    E94-B No:9
      Page(s):
    2653-2655

    In this paper, we present a sequentially rotated array antenna with rectangular patch MSAs fed by L-probes. Since it is important to reduce coupling between patch elements in order to suppress the cross-polarization level, rectangular patches with an aspect ratio of k are adopted. We investigate the cross-polarization level of the sequential array and discuss its relationship to the mutual coupling. As a result, a bandwidth of 14.6% is obtained for the antenna element for VSWR less than 1.5, and the directivity and cross-polarization level of a 4-patch sequential array are 10.8 dBic and 1.7 dBic, respectively, where k=0.6 and the patch spacing is d=0.5 wavelength. These values are 5.6 dB and 5.8 dB better than the corresponding values of a square-patch sequential array antenna.

  • Global Selection vs Local Ordering of Color SIFT Independent Components for Object/Scene Classification

    Dan-ni AI  Xian-hua HAN  Guifang DUAN  Xiang RUAN  Yen-wei CHEN  

     
    PAPER-Pattern Recognition

      Vol:
    E94-D No:9
      Page(s):
    1800-1808

    This paper addresses the problem of ordering color SIFT descriptors in independent component analysis for image classification. Component ordering is of great importance for image classification, since it is the foundation of feature selection. To select distinctive and compact independent components (ICs) of the color SIFT descriptors, we propose two ordering approaches based on local variation, named localization-based IC ordering and sparseness-based IC ordering. We evaluate the performance of the proposed methods, the conventional IC selection method (global-variation-based component selection), and the original color SIFT descriptors on object and scene databases, and obtain the following two main results. First, the proposed methods obtain acceptable classification results in comparison with the original color SIFT descriptors. Second, the highest classification rate is obtained by the global selection method on the scene database, while the local ordering methods give the best performance on the object database.
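    As a concrete illustration of a sparseness-based ordering criterion, the sketch below ranks components by Hoyer's sparseness measure. This is only one plausible choice, assumed here for illustration; the abstract does not specify the paper's exact measure.

```python
import numpy as np

def hoyer_sparseness(v):
    """Hoyer's sparseness measure in [0, 1]: 1 for a one-hot vector,
    0 for a perfectly uniform one."""
    v = np.asarray(v, float)
    n = v.size
    l1 = np.abs(v).sum()
    l2 = np.sqrt((v ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def order_components(components):
    """Return indices of the components sorted most-sparse first."""
    scores = [hoyer_sparseness(c) for c in components]
    return sorted(range(len(components)), key=lambda i: -scores[i])
```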

  • The Relationship between Aging and Photic Driving EEG Response

    Tadanori FUKAMI  Takamasa SHIMADA  Fumito ISHIKAWA  Bunnoshin ISHIKAWA  Yoichi SAITO  

     
    LETTER-Biological Engineering

      Vol:
    E94-D No:9
      Page(s):
    1839-1842

    The present study examined the evaluation of aging using the photic driving response, a measure used in routine EEG examinations. We examined 60 normal participants without EEG abnormalities, classified into three age groups (20–29, 30–59 and over 60 years; 20 participants per group). EEG was measured at rest and during photic stimulation (PS). We calculated Z-scores as a measure of enhancement and suppression due to visual stimulation at rest and during PS and tested for between-group and intraindividual differences. We examined responses in the alpha frequency and harmonic frequency ranges separately, because alpha suppression can affect harmonic frequency responses that overlap the alpha frequency band. We found a negative correlation between Z-scores for harmonics and age by fitting the data to a linear function (correlation coefficient, CC: -0.740). In contrast, Z-scores and alpha frequency were positively correlated (CC: 0.590).
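    The Z-score comparison of stimulation-period band power against a resting baseline can be sketched as follows; the study's actual windowing and band definitions are not given in the abstract, so this is a generic illustration.

```python
import numpy as np

def photic_z_score(rest_power, ps_power):
    """Z-score of band power during photic stimulation (PS) against the
    resting baseline: positive values indicate enhancement, negative
    values suppression."""
    rest_power = np.asarray(rest_power, float)
    mu = rest_power.mean()
    sigma = rest_power.std(ddof=1)      # sample standard deviation
    return (np.mean(ps_power) - mu) / sigma
```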

  • Cross Low-Dimension Pursuit for Sparse Signal Recovery from Incomplete Measurements Based on Permuted Block Diagonal Matrix

    Zaixing HE  Takahiro OGAWA  Miki HASEYAMA  

     
    PAPER-Digital Signal Processing

      Vol:
    E94-A No:9
      Page(s):
    1793-1803

    In this paper, a novel algorithm, Cross Low-dimension Pursuit, based on a new structured sparse matrix, the Permuted Block Diagonal (PBD) matrix, is proposed in order to recover sparse signals from incomplete linear measurements. The main idea of the proposed method is to use the PBD matrix to convert a high-dimension sparse recovery problem into two (or more) groups of very low-dimension problems and to crossly recover the entries of the original signal from them in an iterative way. By sampling a sufficiently sparse signal with a PBD matrix, the proposed algorithm can recover it efficiently. It has the following advantages over conventional algorithms: (1) low complexity, i.e., the algorithm has linear complexity, which is much lower than that of existing algorithms, including greedy algorithms such as Orthogonal Matching Pursuit; and (2) high recovery ability, i.e., the proposed algorithm can recover signals that are much less sparse than those recoverable by even ℓ1-norm minimization algorithms. Moreover, we demonstrate both theoretically and empirically that the proposed algorithm can reliably recover a sparse signal from highly incomplete measurements.
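    A minimal sketch of a permuted-block-diagonal measurement matrix is given below. The block layout and Gaussian entries are assumptions for illustration; the paper's precise PBD construction and the cross-recovery iteration are not reproduced here.

```python
import numpy as np

def pbd_matrix(m, n, groups, rng):
    """Sketch of a Permuted Block Diagonal (PBD) measurement matrix:
    the m x n matrix is split into `groups` diagonal blocks of dense
    Gaussian entries, and a random column permutation then spreads the
    signal entries across the blocks."""
    A = np.zeros((m, n))
    rows = np.array_split(np.arange(m), groups)
    cols = np.array_split(np.arange(n), groups)
    for r, c in zip(rows, cols):
        A[np.ix_(r, c)] = rng.standard_normal((len(r), len(c)))
    perm = rng.permutation(n)           # permute columns
    return A[:, perm]
```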

  • On the Security of BioEncoding Based Cancelable Biometrics

    Osama OUDA  Norimichi TSUMURA  Toshiya NAKAGUCHI  

     
    PAPER-Information Network

      Vol:
    E94-D No:9
      Page(s):
    1768-1777

    Proving the security of cancelable biometrics and other template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a cancelable biometrics scheme that has been proposed recently to protect biometric templates represented as binary strings like iris codes. Unlike other template protection schemes, BioEncoding does not require user-specific keys or tokens. Moreover, it satisfies the requirements of untraceable biometrics without sacrificing the matching accuracy. However, the security of BioEncoding against smart attacks, such as correlation and optimization-based attacks, has to be proved before recommending it for practical deployment. In this paper, the security of BioEncoding, in terms of both non-invertibility and privacy protection, is analyzed. First, resistance of protected templates generated using BioEncoding against brute-force search attacks is revisited rigorously. Then, vulnerabilities of BioEncoding with respect to correlation attacks and optimization-based attacks are identified and explained. Furthermore, an important modification to the BioEncoding algorithm is proposed to enhance its security against correlation attacks. The effect of integrating this modification into BioEncoding is validated and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed modification and show that it has no negative impact on the matching accuracy.

  • New Encoding Method of Parameter for Dynamic Encoding Algorithm for Searches (DEAS)

    Youngsu PARK  Jong-Wook KIM  Johwan KIM  Sang Woo KIM  

     
    PAPER-Numerical Analysis and Optimization

      Vol:
    E94-A No:9
      Page(s):
    1804-1816

    The dynamic encoding algorithm for searches (DEAS) is a recently developed algorithm that comprises a series of global optimization methods based on variable-length binary strings that represent real variables. It has been successfully applied to various optimization problems, exhibiting outstanding search efficiency and accuracy. Because DEAS manages binary strings or matrices, the decoding rules applied to the binary strings and the algorithm's structure determine the aspects of local search. The decoding rules used thus far in DEAS have some drawbacks in terms of efficiency and mathematical analysis. This paper proposes a new decoding rule and applies it to univariate DEAS (uDEAS), validating its performance against several benchmark functions. The overall optimization results of the modified uDEAS indicate that it outperforms other metaheuristic methods and clearly improves upon earlier versions of the DEAS series.
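    For context, the classic fixed-point rule for decoding a variable-length binary string into a real value looks like this; it stands in for the paper's new decoding rule, whose exact form the abstract does not give.

```python
def decode_binary(bits, lo, hi):
    """Decode a variable-length binary string into a real value in
    [lo, hi].  Appending bits refines the representable grid, which is
    what drives local search in binary-encoded optimizers like DEAS."""
    value = int("".join(str(b) for b in bits), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)
```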

  • Low Complexity Algorithms for Multi-Cell Joint Channel Estimation in TDD-CDMA Systems

    Peng XUE  Jae Hyun PARK  Duk Kyung KIM  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E94-B No:8
      Page(s):
    2431-2434

    In this letter, we propose two low-complexity algorithms for least squares (LS) and minimum mean square error (MMSE) based multi-cell joint channel estimation (MJCE). The algorithm for LS-MJCE achieves the same complexity and mean square error (MSE) performance as the most efficient previously proposed algorithm, while the algorithm for MMSE-MJCE is superior to conventional ones in terms of either complexity or MSE performance.
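    The underlying LS channel estimate that such algorithms accelerate can be written as a standard least-squares solve; the low-complexity multi-cell structure exploited by the letter is not reproduced in this generic sketch.

```python
import numpy as np

def ls_channel_estimate(S, r):
    """Generic least-squares channel estimate h = argmin ||r - S h||^2,
    with S the known training (midamble) convolution matrix and r the
    received samples."""
    h, *_ = np.linalg.lstsq(S, r, rcond=None)
    return h
```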

  • Wideband Inductor-Less Linear LNA Using Post Distortion Technique

    Amir AMIRABADI  Mahmoud KAMAREI  

     
    PAPER-Nonlinear Problems

      Vol:
    E94-A No:8
      Page(s):
    1662-1670

    In this paper, a third-order inter-modulation cancellation technique using pre-/post-distortion is proposed to design a wideband, highly linear, low-power LNA in a deep-submicron process. IM3 cancellation is achieved by post-distorting the signal inversely after it is pre-distorted in the input transconductance stage during amplification. The operating frequency range of the LNA is 800 MHz–5 GHz. The proposed technique increases the input-referred third-order intercept point (IIP3) and the input 1-dB compression point (P1dB) to 12–25 dBm and -1.18 dBm, respectively. Post-layout simulation results show a noise figure (NF) of 4.1–4.5 dB, a gain of 13.7–13.9 dB, and S11 lower than -13 dB while consuming 8 mA from a 1.2 V supply. The LNA is designed in a 65 nm standard CMOS technology. The layout shows that the LNA occupies 0.15 × 0.11 mm² of silicon area.

  • Stabilization of a Class of Feedforward and Non-feedforward Nonlinear Systems with a Large Delay in the Input via LMI Approach

    Ho-Lim CHOI  

     
    LETTER-Systems and Control

      Vol:
    E94-A No:8
      Page(s):
    1753-1755

    We consider a stabilization problem for a class of input-delayed nonlinear systems that have not only feedforward but also some non-feedforward nonlinearity. While there are existing results that deal with input-delayed non-feedforward nonlinear systems, they often assume a small input delay; for a large input delay, the results have typically been limited to feedforward systems. In this letter, combining the LMI approach in [3] with the reduction method in [5], we show that some feedforward and non-feedforward systems with a large delay in the input can be stabilized via the proposed controller.

  • A Construction of Quaternary Low Correlation Zone Sequence Sets from Binary Low Correlation Zone Sequence Sets Improving Optimality

    Ji-Woong JANG  Sang-Hyo KIM  Young-Sik KIM  

     
    LETTER-Coding Theory

      Vol:
    E94-A No:8
      Page(s):
    1768-1771

    In this letter, we propose a new construction of quaternary low correlation zone (LCZ) sequence set using binary LCZ sequence sets and an inverse Gray mapping. The new construction method provides optimal quaternary LCZ sequence sets even if the employed binary LCZ sequence set is suboptimal. The optimality is improved at the price of alphabet extension.
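    The inverse Gray mapping that combines two binary sequences into one quaternary sequence can be sketched as follows (the standard map from bit pairs to Z4 symbols is assumed to be the one referred to in the abstract):

```python
# Inverse Gray mapping from bit pairs to Z4 symbols:
# Gray(0)=00, Gray(1)=01, Gray(2)=11, Gray(3)=10, inverted here.
INV_GRAY = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def quaternary_from_binary(a, b):
    """Combine two binary sequences element-wise into one quaternary
    sequence via the inverse Gray mapping."""
    return [INV_GRAY[(x, y)] for x, y in zip(a, b)]
```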

  • Adaptive Bare Bones Particle Swarm Inspired by Cloud Model

    Junqi ZHANG  Lina NI  Jing YAO  Wei WANG  Zheng TANG  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E94-D No:8
      Page(s):
    1527-1538

    Kennedy proposed the bare bones particle swarm (BBPS) by eliminating the velocity formula and replacing it with a Gaussian sampling strategy that requires no parameter tuning. However, a delicate balance between exploitation and exploration is the key to the success of an optimizer. This paper first analyzes the sampling distribution in BBPS, based on which we propose an adaptive BBPS inspired by the cloud model (ACM-BBPS). The cloud model adaptively produces a different standard deviation of the Gaussian sampling for each particle according to the evolutionary state of the swarm, which provides an adaptive balance between exploitation and exploration on different objective functions. Meanwhile, the diversity of the swarm is further enhanced by the randomness of the cloud model itself. Experimental results show that the proposed ACM-BBPS achieves faster convergence and more accurate solutions than five other contenders on twenty-five unimodal, basic multimodal, extended multimodal and hybrid composition benchmark functions. The diversity enhancement provided by the randomness of the cloud model itself is also illustrated.
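    Kennedy's original bare-bones sampling rule, which the ACM-BBPS variant adapts, can be sketched in a few lines; the cloud-model standard deviation itself is not shown.

```python
import numpy as np

def bbps_sample(pbest_i, gbest, rng):
    """One bare-bones position update: each coordinate is drawn from a
    Gaussian centred midway between the particle's personal best and
    the global best, with the absolute gap as standard deviation.
    ACM-BBPS replaces this fixed sigma with one produced adaptively by
    the cloud model."""
    pbest_i = np.asarray(pbest_i, float)
    gbest = np.asarray(gbest, float)
    mu = (pbest_i + gbest) / 2.0
    sigma = np.abs(pbest_i - gbest)
    return rng.normal(mu, sigma)
```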

  • Nonparametric Regression Method Based on Orthogonalization and Thresholding

    Katsuyuki HAGIWARA  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E94-D No:8
      Page(s):
    1610-1619

    In this paper, we consider a nonparametric regression problem using a learning machine defined by a weighted sum of fixed basis functions, where the number of basis functions, or equivalently, the number of weights, is equal to the number of training data. For the learning machine, we propose a training scheme that is based on orthogonalization and thresholding. On the basis of the scheme, vectors of basis function outputs are orthogonalized and coefficients of the orthogonalized vectors are estimated instead of weights. A coefficient is set to zero if it is less than a predetermined threshold level assigned component-wise to each coefficient. We then obtain the resulting weight vector by transforming the thresholded coefficients. In this training scheme, we propose asymptotically reasonable threshold levels to distinguish contributing components from unnecessary ones. To see how this works in a simple case, we derive an upper bound for the generalization error of the training scheme with the given threshold levels. It tells us that the increase in the generalization error is of O(log n/n) when there is a sparse representation of a target function in an orthogonal domain. In implementing the training scheme, eigen-decomposition or the Gram–Schmidt procedure is employed for orthogonalization, and the corresponding training methods are referred to as OHTED and OHTGS. Furthermore, modified versions of OHTED and OHTGS, called OHTED2 and OHTGS2 respectively, are proposed to reduce estimation bias. On real benchmark datasets, OHTED2 and OHTGS2 are found to exhibit relatively good generalization performance. In addition, OHTGS2 is found to obtain a sparse representation of a target function in terms of the basis functions.
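    A compact sketch of the orthogonalize-and-threshold idea (the Gram–Schmidt flavour, via QR) is given below; the paper's asymptotically derived component-wise threshold levels are replaced here by a single user-chosen value.

```python
import numpy as np

def oht_fit(Phi, y, threshold):
    """Orthogonalize-and-hard-threshold sketch: project y onto an
    orthonormal basis of the design matrix Phi, zero the small
    coefficients, and map the survivors back to weights on Phi's
    columns."""
    Q, R = np.linalg.qr(Phi)            # orthogonalization
    c = Q.T @ y                         # coefficients in orthogonal basis
    c[np.abs(c) < threshold] = 0.0      # hard thresholding
    return np.linalg.solve(R, c)        # back to the original basis
```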

  • Stackelberg Game-Based Power Control Scheme for Efficiency and Fairness Tradeoff

    Sungwook KIM  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E94-B No:8
      Page(s):
    2427-2430

    In this paper, a new power control scheme is proposed to maximize network throughput with fairness provisioning. Based on the Stackelberg game model, the proposed scheme consists of two control mechanisms: a user-level mechanism and a system-level mechanism. Control decisions in the two mechanisms act cooperatively and collaborate with each other to satisfy both efficiency and fairness requirements. Simulation results demonstrate that the proposed scheme achieves excellent network performance, while other schemes cannot offer such an attractive performance balance.

  • Precoding and Power Allocation for Full-Duplex MIMO Relays

    Jong-Ho LEE  Oh-Soon SHIN  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E94-B No:8
      Page(s):
    2316-2327

    In this paper, we propose precoding and power allocation strategies for full-duplex multiple input multiple output (MIMO) relays. The precoding scheme for full-duplex MIMO relays is derived based on the block diagonalization (BD) method to suppress the self-interference in full-duplex relaying, so that each relay station (RS) can receive multiple data streams from the base station (BS) while simultaneously forwarding the decoded data streams to mobile stations (MSs). We also develop the optimal power allocation scheme for full-duplex MIMO relays. Numerical results verify that the proposed scheme provides substantial performance improvement compared with the conventional half-duplex relay (HDR), provided that sufficient physical isolation between the transmit and receive antennas keeps the self-interference within a tolerable range.
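    The null-space idea behind block-diagonalization precoding can be sketched as follows; stream selection and the paper's optimal power allocation are omitted.

```python
import numpy as np

def bd_precoders(H_list):
    """Block-diagonalization sketch: the precoder for each receiver is
    an orthonormal basis of the null space of the stacked channels of
    all *other* receivers, so each link sees zero interference from
    the others' streams."""
    precoders = []
    for k in range(len(H_list)):
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        # right singular vectors beyond the rank span null(H_others)
        _, s, Vh = np.linalg.svd(H_others)
        rank = int(np.sum(s > 1e-10))
        precoders.append(Vh[rank:].conj().T)
    return precoders
```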

  • Progressive Side Information Refinement Algorithm for Wyner-Ziv Codec

    Chan-Hee HAN  Si-Woong LEE  Hamid GHOLAMHOSSEINI  Yun-Ho KO  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E94-D No:8
      Page(s):
    1641-1652

    In this paper, side information refinement methods for Wyner-Ziv video codec are presented. In the proposed method, each block of a Wyner-Ziv frame is separated into a predefined number of groups, and these groups are interleaved to be coded. The side information for the first group is generated by the motion compensated temporal interpolation using adjacent key frames only. Then, the side information for remaining groups is gradually refined using the knowledge of the already decoded signal of the current Wyner-Ziv frame. Based on this basic concept, two progressive side information refinement methods are proposed. One is the band-wise side information refinement (BW-SIR) method which is based on transform domain interleaving, while the other is the field-wise side information refinement (FW-SIR) method which is based on pixel domain interleaving. Simulation results show that the proposed methods improve the quality of the side information and rate-distortion performance compared to the conventional side information refinement methods.

  • Reliability of Generalized Normal Laplacian Distribution Model in TH-BPSK UWB Systems

    Sangchoon KIM  

     
    LETTER-Communication Theory and Signals

      Vol:
    E94-A No:8
      Page(s):
    1772-1775

    In this letter, the reliability of the generalized normal-Laplace (GNL) distribution used for modeling multiple-access interference (MAI) plus noise in time-hopping (TH) binary phase-shift keying (BPSK) ultra-wideband (UWB) systems is evaluated in terms of the probability density function and the BER. The multiple-access performance of TH-BPSK UWB systems based on the GNL model is analyzed. The average BER obtained using the GNL approximation matches well with the exact BER results of TH-BPSK UWB systems. Parameter estimation for the GNL distribution based on the method of moments is also presented.

  • Regularization of the RLS Algorithm

    Jacob BENESTY  Constantin PALEOLOGU  Silviu CIOCHIN  

     
    LETTER

      Vol:
    E94-A No:8
      Page(s):
    1628-1629

    Regularization plays a fundamental role in adaptive filtering. There are, very likely, many different ways to regularize an adaptive filter. In this letter, we propose one possible way to do so, based on a condition that makes intuitive sense. From this condition, we show how to regularize the recursive least-squares (RLS) algorithm.
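    A textbook exponentially weighted RLS update with an initial regularization delta*I on the inverse correlation matrix is sketched below; the letter's specific condition for choosing the regularization parameter is not reproduced, so delta is left as a plain argument.

```python
import numpy as np

class RegularizedRLS:
    """Exponentially weighted RLS with initial regularization:
    P(0) = I / delta, i.e. the normal equations carry a delta*I term."""

    def __init__(self, order, lam=0.999, delta=1.0):
        self.w = np.zeros(order)        # filter weights
        self.P = np.eye(order) / delta  # inverse correlation matrix
        self.lam = lam                  # forgetting factor

    def update(self, x, d):
        """One update with input vector x and desired sample d;
        returns the a-priori error."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)    # gain vector
        e = d - self.w @ x              # a-priori error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e
```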

  • Enhanced DV-Hop Algorithm with Reduced Hop-Size Error in Ad Hoc Networks

    Sang-Woo LEE  Dong-Yul LEE  Chae-Woo LEE  

     
    LETTER-Network

      Vol:
    E94-B No:7
      Page(s):
    2130-2132

    The DV-Hop algorithm produces location-estimation errors due to inaccurate hop sizes. We propose a novel DV-Hop-based localization scheme that improves positioning accuracy using the least-error hop sizes of anchors and the average hop sizes of unknown nodes. The least-error hop size of an anchor minimizes that anchor's location error, but it may be far too small or too large. To cope with such inconsistent hop sizes, each unknown node averages the hop sizes received from the anchors. Simulation results show that the proposed algorithm outperforms the DV-Hop algorithm in location estimation.
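    The basic DV-Hop position fix that the letter refines can be sketched as hop-count-times-hop-size lateration; the proposal's least-error and averaged hop sizes are collapsed into a single hop_size argument here.

```python
import numpy as np

def dv_hop_estimate(anchors, hop_counts, hop_size):
    """DV-Hop position fix: approximate the distance to each anchor as
    hop_count * hop_size, then solve for the position by linearizing
    ||p - a_i||^2 = d_i^2 against the last anchor (lateration)."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(hop_counts, float) * hop_size
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2)
         + d[-1] ** 2 - d[:-1] ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```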

  • Optimization of OSPF Link Weights to Counter Network Failure

    Mohammad Kamrul ISLAM  Eiji OKI  

     
    PAPER-Internet

      Vol:
    E94-B No:7
      Page(s):
    1964-1972

    A key traffic engineering problem in Open Shortest Path First (OSPF)-based networks is the determination of optimal link weights. From the network operators' point of view, there are two approaches to determining a set of link weights: Start-time Optimization (SO) and Run-time Optimization (RO). We previously presented a Preventive Start-time Optimization (PSO) scheme that determines an appropriate set of link weights at start time. It can counter both unexpected network congestion and network instability and thus overcomes the drawbacks of SO and RO, respectively. The previous work adopts a preventive start-time optimization algorithm with limited candidates, named PSO-L (PSO for Limited candidates). Although PSO-L relaxes the worst-case congestion, it does not guarantee optimal worst-case performance. To pursue this optimality, this paper proposes a preventive start-time optimization algorithm with a wide range of candidates, named PSO-W (PSO for Wide-range candidates). PSO-W upgrades the objective function of SO, determining the set of link weights at start time by considering all possible single-link failures; its goal is to minimize the worst-case congestion. Numerical results from simulations show that PSO-W effectively relaxes the worst-case network congestion compared to SO, while avoiding the network instability caused by RO's run-time changes of link weights. At the same time, PSO-W yields performance superior to that of PSO-L.
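    The worst-case-congestion objective that PSO-W minimizes can be evaluated as follows for a toy network: for each single-link failure, route every demand on its shortest path under the given weights and record the maximum link utilization. The weight search itself is not shown, and this sketch ignores OSPF's equal-cost multipath splitting.

```python
import heapq

def shortest_path(links, weights, src, dst):
    """Dijkstra over the directed links present in `links`;
    assumes dst is reachable from src."""
    adj = {}
    for u, v in links:
        adj.setdefault(u, []).append(v)
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj.get(u, []):
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def worst_case_congestion(capacity, weights, demands):
    """Maximum link utilization over all single-link failures, routing
    each demand on its shortest path under the given OSPF weights."""
    worst = 0.0
    for failed in capacity:
        # load's keys are exactly the links that survive the failure
        load = {l: 0.0 for l in capacity if l != failed}
        alive = {l: w for l, w in weights.items() if l != failed}
        for (s, t), vol in demands.items():
            p = shortest_path(load, alive, s, t)
            for hop in zip(p, p[1:]):
                load[hop] += vol
        worst = max(worst, max(load[l] / capacity[l] for l in load))
    return worst
```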
