
Keyword Search Result

[Keyword] PAR(2741hit)

761-780hit(2741hit)

  • An Iterative Reweighted Least Squares Algorithm with Finite Series Approximation for a Sparse Signal Recovery

    Kazunori URUMA  Katsumi KONISHI  Tomohiro TAKAHASHI  Toshihiro FURUKAWA  

     
    LETTER-Fundamentals of Information Systems

      Vol:
    E97-D No:2
      Page(s):
    319-322

This letter deals with a sparse signal recovery problem and proposes a new algorithm based on the iterative reweighted least squares (IRLS) algorithm. We assume that the non-zero values of a sparse signal are always greater than a given constant and modify the IRLS algorithm to satisfy this assumption. Numerical results show that the proposed algorithm recovers a sparse vector efficiently.
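The reweighting step at the heart of IRLS can be sketched as follows. This is the classic unmodified iteration (the letter's lower-bound modification is not reproduced), and the problem sizes and data are hypothetical:

```python
import numpy as np

def irls_sparse(A, y, n_iter=50, eps=1e-8):
    """Classic IRLS sketch for sparse recovery (min ||x||_1 s.t. Ax = y).

    Each iteration solves a weighted minimum-norm problem; entries that
    are currently small receive large penalty weights and are driven
    toward zero.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]  # initial least-squares guess
    for _ in range(n_iter):
        W_inv = np.diag(np.abs(x) + eps)      # inverse of the weight matrix
        # weighted minimum-norm solution: x = W^-1 A^T (A W^-1 A^T)^-1 y
        x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, y)
    return x

# Recover a 2-sparse vector from 6 random measurements of a length-12 signal
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 12))
x_true = np.zeros(12)
x_true[3], x_true[8] = 1.0, -2.0
x_hat = irls_sparse(A, A @ x_true)
```

Every iterate satisfies the measurement equation exactly; the reweighting only redistributes the energy toward a sparse solution.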

  • Analysis of TV White Space Availability in Japan

    Tsuyoshi SHIMOMURA  Teppei OYAMA  Hiroyuki SEKI  

     
    PAPER

      Vol:
    E97-B No:2
      Page(s):
    350-358

Television white spaces (TVWS) are locally and/or temporally unused portions of the TV bands. After TVWS regulations were passed in the USA, more and more regulators have been considering efficient use of TVWS. Under the condition that the primary user, i.e., the terrestrial TV broadcasting system, is not interfered with, various secondary users (SUs) may be deployed in TVWS. In Japan, the TVWS regulations started with broadcast-type SUs and small-area broadcasting systems, followed by voice radio. This paper aims to provide useful insights for more efficient utilization of TVWS as one option for meeting the continuously increasing demand for wireless bandwidth. TVWS availability in Japan is analyzed using graphs and maps. Under the regulations in Japan, the protection contour for the TV broadcasting service is defined as 51dBµV/m, while the interference contour for SUs is defined as 12.3dBµV/m. We estimate TVWS availability using these two regulation parameters and the minimum separation distances calculated on the basis of the ITU-R P.1546 propagation models. Moreover, we investigate and explain the effect of two important factors on TVWS availability: one is the measures taken to avoid adjacent-channel interference, and the other is whether the SU has client devices whose interference ranges extend beyond the interference area of the master device. Furthermore, possible options for increasing the number of available TVWS channels are discussed.

  • Double-Layer Plate-Laminated Waveguide Slot Array Antennas for a 39GHz Band Fixed Wireless Access System

    Miao ZHANG  Jiro HIROKAWA  Makoto ANDO  

     
    PAPER-Antennas and Propagation

      Vol:
    E97-B No:1
      Page(s):
    122-128

A point-to-point fixed wireless access (FWA) system with a maximum throughput of 1Gbps has been developed in the 39GHz band. A double-layer plate-laminated waveguide slot array antenna is successfully realized with specific consideration of practical application. The antenna is designed to hold the VSWR under 1.5. The antenna input and the feeding network are configured to reduce both the antenna profile and the antenna weight. In addition, integration of the antenna into a wireless terminal is taken into account: a shielding wall, whose effectiveness is experimentally demonstrated, is set in the middle of the wireless terminal to achieve spatial isolation of more than 65dB between two antennas on the H-plane. Thirty test antennas are fabricated by diffusion bonding of thin metal plates to investigate the tolerance and mass-productivity of this process. An aluminum antenna, which has the advantages of light weight and aging resistance, is also fabricated and evaluated with an eye to the future.

  • Packetization and Unequal Erasure Protection for Transmission of SPIHT-Encoded Images

    Kuen-Tsair LAY  Lee-Jyi WANG  

     
    PAPER-Multimedia Systems for Communications

      Vol:
    E97-B No:1
      Page(s):
    226-237

Coupled with the discrete wavelet transform, SPIHT (set partitioning in hierarchical trees) is a highly efficient image compression technique that allows for progressive transmission. One problem, however, is that its decoding can be extremely sensitive to bit errors in the code sequence. In this paper, we address the issue of transmitting SPIHT-encoded images over noisy channels, wherein errors are inevitable. The communication scenario assumed in this paper is that the transmitter cannot get any acknowledgement from the receiver. In our scheme, the original SPIHT code sequence is first segmented into packets. Each packet is classified as either a CP (critical packet) or an RP (refinement packet). For error control, a cyclic redundancy check (CRC) is incorporated into each packet. By checking the CRC check sum, the receiver is able to tell whether a packet has been correctly received. In this way, the noisy channel can be effectively modeled as an erasure channel. For unequal error protection (UEP), each packet is transmitted repeatedly, the number of repetitions being determined by a process called diversity allocation (DA). Two DA algorithms are proposed. The first algorithm produces a nearly optimal decoded image (as measured by the expected signal-to-noise ratio), but its computation cost is extremely high. The second algorithm works in a progressive fashion and is naturally compatible with progressive transmission. Its computational complexity is extremely low; nonetheless, its decoded image is nearly as good. Experimental results show that the proposed scheme significantly improves the decoded images. They also show that making a distinction between CPs and RPs results in wiser diversity allocation to packets and thus produces higher quality in the decoded images.
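The CRC-based erasure detection described above can be sketched in a few lines; the packet contents here are hypothetical:

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    """Append a CRC-32 check sum so the receiver can verify the packet."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive_packet(packet: bytes):
    """Return the payload if the CRC verifies, else None (treat as erased)."""
    payload, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

pkt = make_packet(b"SPIHT code segment")
ok = receive_packet(pkt)                              # intact packet -> payload
bad = receive_packet(bytes([pkt[0] ^ 1]) + pkt[1:])   # single bit error -> None
```

Because a failed check sum simply discards the packet, the decoder only ever sees correct or missing packets, which is exactly the erasure-channel model the paper relies on.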

  • Formation of Soluble Ink Using Nanoparticles of Low Molecular EL Materials

    Naoaki SAKURAI  Hiroyasu KONDO  Shuzi HAYASE  

     
    PAPER-Electronic Displays

      Vol:
    E97-C No:1
      Page(s):
    85-90

We developed a method of fabricating an ink from low-molecular-weight organic electroluminescent (EL) materials with a long emission lifetime for application to the inkjet method. Although the emission lifetime is usually long for low-molecular-weight materials, their high manufacturing cost, due to the necessity of vapor deposition, is a disadvantage. We utilized the low-molecular-weight material tris-(8-hydroxyquinoline) aluminum (Alq3) and investigated its dispersibility in a solvent in which it has low solubility. In addition, we ascertained whether the material could maintain its photoluminescence characteristic under irradiation with ultraviolet rays by investigating its photoluminescence emission. Alq3 was crystallized into nanosize crystals, whose surfaces were then coated with a primary amine by the gas evaporation method. The fabricated ink contained crystals with an average size of 250nm and high dispersibility in tetradecane, in which Alq3 is insoluble. Thus, we made it possible to apply the inkjet method to low-molecular-weight EL materials.

  • Optimal Transform Order of Fractional Fourier Transform for Decomposition of Overlapping Ultrasonic Signals

    Zhenkun LU  Cui YANG  Gang WEI  

     
    LETTER-Ultrasonics

      Vol:
    E97-A No:1
      Page(s):
    393-396

The separation of time-overlapping ultrasonic signals is necessary to obtain accurate estimates of transit time and material properties. In this letter, a method to determine the optimal transform order of the fractional Fourier transform (FRFT) for the decomposition of overlapping ultrasonic signals is proposed. The optimal transform order is obtained by minimizing the mean square error (MSE) between the output and the reference signal. Furthermore, windowing in the FRFT domain is discussed. Numerical simulation results show the performance of the proposed method in separating signals that overlap in time.

  • Performance Comparisons of Subjective Quality Assessment Methods for Video

    Toshiko TOMINAGA  Masataka MASUDA  Jun OKAMOTO  Akira TAKAHASHI  Takanori HAYASHI  

     
    PAPER-Network

      Vol:
    E97-B No:1
      Page(s):
    66-75

    Many subjective assessment methods for video quality are provided by ITU-T and ITU-R recommendations, but the differences among these methods have not been sufficiently studied. We compare five subjective assessment methods using four quantitative performance indices for both HD and QVGA resolution video. We compare the Double-Stimulus Continuous Quality-Scale (DSCQS), Double-Stimulus Impairment Scale (DSIS), Absolute Category Rating method (ACR), and ACR with Hidden Reference (ACR-HR) as common subjective assessment methods for HD and QVGA resolution videos. Furthermore, we added ACR with an 11-grade scale (ACR11) for the HD test and Subjective Assessment of Multimedia Video Quality (SAMVIQ) for the QVGA test for quality scale variations. The performance indices are correlation coefficients, rank correlation coefficients, statistical reliability, and assessment time. For statistical reliability, we propose a performance index for comparing different quality scale tests. The results of the performance comparison showed that the correlation coefficients and rank correlation coefficients of the mean opinion scores between pairs of methods were high for both HD and QVGA tests. As for statistical reliability provided by the proposed index, DSIS of HD and ACR of QVGA outperformed the other methods. Moreover, ACR, ACR-HR, and ACR11 were the most efficient subjective quality assessment methods from the viewpoint of assessment time.
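Two of the performance indices used above, the correlation coefficient and the rank correlation coefficient between mean opinion scores, can be computed as follows; the MOS values are invented for illustration:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two score vectors."""
    return float(np.corrcoef(a, b)[0, 1])

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors.

    The double-argsort ranking used here assumes no tied scores.
    """
    return pearson(np.argsort(np.argsort(a)), np.argsort(np.argsort(b)))

# Hypothetical MOS values for the same five clips under two methods
mos_dsis = np.array([4.5, 3.8, 2.9, 2.1, 1.4])
mos_acr = np.array([4.3, 3.9, 3.0, 2.0, 1.2])
r = pearson(mos_dsis, mos_acr)
rho = spearman(mos_dsis, mos_acr)
```

A high Pearson coefficient indicates a near-linear relation between the two methods' scores, while the rank coefficient only checks that they order the clips the same way.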

  • State-Dependence of On-Chip Power Distribution Network Capacitance

    Koh YAMANAGA  Shiho HAGIWARA  Ryo TAKAHASHI  Kazuya MASU  Takashi SATO  

     
    PAPER-Integrated Electronics

      Vol:
    E97-C No:1
      Page(s):
    77-84

In this paper, we study the measured variation of on-chip power distribution network (PDN) capacitance due to changes in the internal states of a CMOS logic circuit. A state-dependent PDN-capacitance model that explains the measurement results is also proposed. The model is composed of capacitance elements related to MOS transistors, signal and power supply wires, and the substrate. Reflecting the changes of electrode potentials, the capacitance elements become state-dependent. The capacitive elements are then all connected in parallel between power supply and ground to form the proposed model. Using the proposed model, the state-dependence of the PDN capacitance for different logic circuits is studied in detail. The change of PDN capacitance exceeds 12% of the total capacitance in some cases, which corresponds to a 6% shift of the anti-resonance frequency. Consideration of the state-dependence is important for modeling the PDN capacitance.
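The quoted relation between the capacitance change and the anti-resonance shift follows from f = 1/(2π√(LC)): a relative capacitance change of ΔC/C shifts f by roughly half that amount. A quick check with hypothetical PDN values (not taken from the paper):

```python
import math

def anti_resonance(L, C):
    """Anti-resonance frequency of a parallel LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical values: 1 nH loop inductance, 10 nF on-chip PDN capacitance
L, C = 1e-9, 10e-9
f0 = anti_resonance(L, C)
f1 = anti_resonance(L, 1.12 * C)   # a state change adds 12% capacitance
shift = (f0 - f1) / f0             # relative downward shift, about 5.5%
```

Since f scales as C^(-1/2), a 12% capacitance increase gives a 1 - 1/√1.12 ≈ 5.5% frequency shift, consistent with the roughly 6% figure in the abstract.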

  • Parametric Wiener Filter with Linear Constraints for Unknown Target Signals

    Akira TANAKA  Hideyuki IMAI  

     
    PAPER-Digital Signal Processing

      Vol:
    E97-A No:1
      Page(s):
    322-330

    In signal restoration problems, we expect to improve the restoration performance with a priori information about unknown target signals. In this paper, the parametric Wiener filter with linear constraints for unknown target signals is discussed. Since the parametric Wiener filter is usually defined as the minimizer of the criterion not for the unknown target signal but for the filter, it is difficult to impose constraints for the unknown target signal in the criterion. To overcome this difficulty, we introduce a criterion for the parametric Wiener filter defined for the unknown target signal whose minimizer is equivalent to the solution obtained by the original formulation. On the basis of the newly obtained criterion, we derive a closed-form solution for the parametric Wiener filter with linear constraints.

  • Phase Unwrapping Algorithm Based on Extended Particle Filter for SAR Interferometry

    XianMing XIE  PengDa HUANG  QiuHua LIU  

     
    LETTER-Nonlinear Problems

      Vol:
    E97-A No:1
      Page(s):
    405-408

    This paper presents a new phase unwrapping algorithm, based on an extended particle filter (EPF) for SAR interferometry. This technique is not limited by the nonlinearity of the model, and is able to accurately unwrap noisy interferograms by applying EPF to simultaneously perform noise suppression and phase unwrapping. Results obtained from synthetic and real data validate the effectiveness of the proposed method.

  • A Sparse Modeling Method Based on Reduction of Cost Function in Regularized Forward Selection

    Katsuyuki HAGIWARA  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E97-D No:1
      Page(s):
    98-106

Regularized forward selection is viewed as a method for obtaining a sparse representation in a nonparametric regression problem. In regularized forward selection, the regression output is represented by a weighted sum of several significant basis functions that are selected from among a large number of candidates by using a greedy training procedure in terms of a regularized cost function and applying an appropriate model selection method. In this paper, we propose a model selection method for regularized forward selection. For this purpose, we focus on the reduction of the cost function brought about by appending a new basis function in the greedy training procedure. We first clarify a bias and variance decomposition of the cost reduction and then derive a probabilistic upper bound on the variance of the cost reduction under some conditions. The derived upper bound reflects an essential feature of the greedy training procedure, namely, that it selects the basis function which maximally reduces the cost function. We then propose a thresholding method for determining significant basis functions by applying the derived upper bound as a threshold level and effectively combining it with the leave-one-out cross-validation method. Several numerical experiments show that the generalization performance of the proposed method is comparable to that of other methods, while the number of basis functions selected by the proposed method is much smaller. We can therefore say that the proposed method yields a sparse representation while keeping relatively good generalization performance. Moreover, our method has the advantage that it does not require the selection of a regularization parameter.
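The greedy step that the upper bound characterizes, appending the basis function with the maximal cost reduction, can be sketched as follows. This is plain (unregularized) forward selection with a fixed number of steps, omitting the paper's regularization and bound-based stopping rule, on hypothetical data:

```python
import numpy as np

def greedy_forward_selection(Phi, y, k):
    """Select k columns of Phi greedily, each time taking the basis
    function whose inclusion maximally reduces the squared-error cost.
    """
    selected, residual, w = [], y.copy(), None
    for _ in range(k):
        # cost reduction of appending column j is (phi_j . r)^2 / ||phi_j||^2
        scores = (Phi.T @ residual) ** 2 / (Phi ** 2).sum(axis=0)
        scores[selected] = -np.inf            # never re-select a column
        selected.append(int(np.argmax(scores)))
        S = Phi[:, selected]
        w = np.linalg.lstsq(S, y, rcond=None)[0]  # refit on the chosen set
        residual = y - S @ w
    return selected, w

# Hypothetical dictionary of 20 basis functions, 2 of them truly active
rng = np.random.default_rng(2)
Phi = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[[2, 7]] = [3.0, -2.0]
y = Phi @ w_true
selected, w = greedy_forward_selection(Phi, y, 2)
```

On noiseless data like this, two greedy steps typically suffice to drive the residual to zero, mirroring how the paper's threshold decides when further basis functions stop paying for themselves.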

  • A Note on Pcodes of Partial Words

    Tetsuo MORIYA  Itaru KATAOKA  

     
    LETTER-Fundamentals of Information Systems

      Vol:
    E97-D No:1
      Page(s):
    139-141

In this paper, we study partial words in relation to pcodes, compatibility, and containment. First, we introduce C⊂(L), the set of all partial words contained by elements of L, and C⊃(L), the set of all partial words containing elements of L, for a set L of partial words. We discuss the relation among C(L), the set of all partial words compatible with elements of the set L, C⊂(L), and C⊃(L). Next, we consider the conditions for C(L), C⊂(L), and C⊃(L) to be pcodes when L is a pcode. Furthermore, we introduce some classes of pcodes: an infix pcode and a comma-free pcode are defined, and the inclusion relations among these classes are established.

  • Unified Coprocessor Architecture for Secure Key Storage and Challenge-Response Authentication

    Koichi SHIMIZU  Daisuke SUZUKI  Toyohiro TSURUMARU  Takeshi SUGAWARA  Mitsuru SHIOZAKI  Takeshi FUJINO  

     
    PAPER-Hardware Based Security

      Vol:
    E97-A No:1
      Page(s):
    264-274

In this paper we propose a unified coprocessor architecture that, by using a Glitch PUF and a block cipher, efficiently unifies the functions necessary for secure key storage and challenge-response authentication. Based on the fact that a Glitch PUF uses random logic for the purpose of generating glitches, the proposed architecture is designed around a block cipher circuit such that its round functions can be shared with the Glitch PUF as random logic. As a concrete example, a circuit structure using a Glitch PUF and an AES circuit is presented, and evaluation results for its implementation on an FPGA are provided. In addition, a physical random number generator using the same circuit is proposed. Evaluation results from the two major test suites for randomness, NIST SP 800-22 and Diehard, are provided, showing that the physical random number generator passes both test suites.

  • Comprehensive Study of Integral Analysis on LBlock

    Yu SASAKI  Lei WANG  

     
    PAPER-Symmetric Key Based Cryptography

      Vol:
    E97-A No:1
      Page(s):
    127-138

This paper presents an integral cryptanalysis in the single-key setting against the lightweight block cipher LBlock reduced to 22 rounds. Our attack uses the same 15-round integral distinguisher as the previous attacks, but many techniques are taken into consideration in order to achieve a comprehensive understanding of the attack: choosing the best balanced-byte position, a meet-in-the-middle technique to identify right key candidates, the partial-sum technique, relations among subkeys, and the combination of exhaustive search with the integral analysis. Our results indicate that integral cryptanalysis is particularly useful for LBlock-like structures. At the end of this paper, we discuss which factors make the LBlock structure weak against integral cryptanalysis. Because designing lightweight cryptographic primitives is an actively discussed topic, we believe that this paper provides useful feedback for future designs.

  • Portfolio Selection Models with Technical Analysis-Based Fuzzy Birandom Variables

    You LI  Bo WANG  Junzo WATADA  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E97-D No:1
      Page(s):
    11-21

Recently, fuzzy set theory has been widely employed in building portfolio selection models where uncertainty plays a role. In these models, future security returns are generally taken as fuzzy variables, and mathematical models are then built to maximize the investment profit according to a given risk level or to minimize the risk level based on a fixed profit level. Building on existing work, this paper proposes a portfolio selection model based on fuzzy birandom variables. The study makes two original contributions: first, the concept of technical analysis is combined with fuzzy set theory to model the security returns as fuzzy birandom variables; second, the fuzzy birandom Value-at-Risk (VaR) is used to build our model, which is called the fuzzy birandom VaR-based portfolio selection model (FBVaR-PSM). The VaR can directly reflect the largest loss of a selected case at a given confidence level, and it is more sensitive than other models and more acceptable to general investors than conventional risk measurements. To solve the FBVaR-PSM, several crisp equivalent models are derived for the special cases where the security returns are taken as trapezoidal, triangular or Gaussian fuzzy birandom variables; these can be handled by any linear programming solver. In general, the fuzzy birandom simulation-based particle swarm optimization algorithm (FBS-PSO) is designed to find an approximate optimal solution. To illustrate the proposed model and the behavior of the FBS-PSO, two numerical examples are introduced based on investors' different risk attitudes. Finally, we analyze the experimental results and provide a discussion of some existing approaches.
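The risk measure underlying the model can be illustrated with ordinary historical VaR on crisp return data; this is the classical sample estimator that the fuzzy birandom VaR generalizes, not the paper's construction, and the return series is simulated:

```python
import numpy as np

def value_at_risk(returns, alpha=0.95):
    """Historical VaR sketch: the loss level exceeded with probability 1 - alpha.

    Reported as a positive loss figure computed from the empirical
    distribution of past returns.
    """
    losses = -np.asarray(returns)
    return float(np.quantile(losses, alpha))

# Hypothetical daily portfolio returns: mean 0.05%, volatility 1%
rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.01, 10_000)
var95 = value_at_risk(daily_returns, alpha=0.95)
```

For these parameters the 95% VaR comes out near 1.6% of portfolio value, i.e., on roughly one day in twenty the loss is expected to exceed that level; the confidence level alpha plays the same role in the fuzzy birandom setting.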

  • Adaptive Channel Power Partitioning Scheme in WCDMA Femto Cell

    Tae-Won BAN  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Vol:
    E97-B No:1
      Page(s):
    190-195

Recently, small cell systems such as femto cells have been considered a good alternative for supporting the increasing demand for mobile data traffic because they can significantly enhance network capacity by increasing spatial reuse. In this paper, we analyze the coverage and capacity of a femto cell deployed in a hotspot to reduce the traffic loads of neighboring macro base stations (BSs). Our analysis results show that the coverage and capacity of a femto cell are seriously affected by the surrounding signal environment and can be greatly enhanced by adapting the channel power allocation to that environment. Thus, we propose an adaptive power partitioning scheme in which the power allocated to channels can be dynamically adjusted to suit the environment surrounding the femto cell. In addition, we numerically derive the optimal power allocation ratio for channels to optimize the performance of the femto cell in the proposed scheme. It is shown that the proposed scheme with the optimal channel power allocation significantly outperforms the conventional scheme with fixed channel power allocation.

  • Time Shift Parameter Setting of Temporal Decorrelation Source Separation for Periodic Gaussian Signals

    Takeshi AMISHIMA  Kazufumi HIRATA  

     
    PAPER-Sensing

      Vol:
    E96-B No:12
      Page(s):
    3190-3198

Temporal Decorrelation source SEParation (TDSEP) is a blind separation scheme that utilizes the time structure of the source signals, typically their periodicities. The advantage of TDSEP over non-Gaussianity-based methods is that it can separate Gaussian signals as long as they are periodic. However, its shortcoming is that its separation performance (SEP) heavily depends upon the values of the time shift parameters (TSPs). This paper proposes a method to automatically and blindly estimate a set of TSPs that achieves optimal SEP for periodic Gaussian signals. It is also shown that selecting the same number of TSPs as source signals is sufficient to obtain optimal SEP; adding more TSPs does not improve SEP but only increases the computational complexity. A simulation example showed that the SEP is approximately 20dB higher than that of the ordinary method. It is also shown that the proposed method successfully selects exactly as many TSPs as there are incoming signals.

  • A 5.83pJ/bit/iteration High-Parallel Performance-Aware LDPC Decoder IP Core Design for WiMAX in 65nm CMOS

    Xiongxin ZHAO  Zhixiang CHEN  Xiao PENG  Dajiang ZHOU  Satoshi GOTO  

     
    PAPER-High-Level Synthesis and System-Level Design

      Vol:
    E96-A No:12
      Page(s):
    2623-2632

In this paper, we propose a synthesizable LDPC decoder IP core for the WiMAX system with high parallelism and enhanced error-correcting performance. By taking advantage of both layered scheduling and a fully-parallel architecture, the decoder can fully support the multi-mode decoding specified in WiMAX with parallelism much higher than that of the commonly used partial-parallel layered LDPC decoder architecture. 6-bit quantized messages are split into bit-serial style, and 2-bit-width serial processing lines work concurrently so that only 3 cycles are required to decode one layer. As a result, 12∼24 cycles are enough to process one iteration for all the code rates specified in WiMAX. Compared to our previous bit-serial decoder, it doubles the parallelism and solves the message saturation problem of the bit-serial arithmetic with a minor gate count increase. The power synthesis result shows that the proposed decoder achieves 5.83pJ/bit/iteration energy efficiency, a 46.8% improvement over the state-of-the-art. Furthermore, an advanced dynamic quantization (ADQ) technique is proposed to enhance the error-correcting performance in the layered decoder architecture. With about 2% area overhead, 6-bit ADQ can achieve error-correcting performance close to that of 7-bit fixed quantization, with improved error floor performance.

  • Synchronization-Aware Virtual Machine Scheduling for Parallel Applications in Xen

    Cheol-Ho HONG  Chuck YOO  

     
    LETTER

      Vol:
    E96-D No:12
      Page(s):
    2720-2723

In this paper, we propose a synchronization-aware VM scheduler for parallel applications in Xen. The proposed scheduler prevents threads from waiting for a significant amount of time during synchronization. For this purpose, we propose an identification scheme that can identify threads that have waited for other threads for a long time. In this scheme, we developed a detection module that can infer the internal status of guest OSs. We also present a scheduling policy that can accelerate the bottlenecks of concurrent VMs. We implemented our VM scheduler in the recent Xen hypervisor with para-virtualized Linux-based operating systems. We show that our approach can improve the performance of concurrent VMs by up to 43% compared to the credit scheduler.

  • Construction of High Rate Punctured Convolutional Codes by Exhaustive Search and Partial Search

    Sen MORIYA  Hiroshi SASANO  

     
    PAPER-Coding Theory

      Vol:
    E96-A No:12
      Page(s):
    2374-2381

We consider two methods for constructing high-rate punctured convolutional codes. First, we present the best high-rate R=(n-1)/n punctured convolutional codes, for n=5,6,…,16, obtained by exhaustive searches. To obtain the best code, we use a regular convolutional code whose weight spectrum is equivalent to that of each punctured convolutional code, and we search these equivalent codes for the best one. Next, we present a method that finds good punctured convolutional codes by partial searches, starting from the rate-1/2 original codes obtained by the first method. With this method, we obtain good punctured convolutional codes considerably faster than by searching exhaustively for the best codes.
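The puncturing operation that raises a rate-1/2 mother code to rate (n-1)/n simply deletes encoder output bits according to a periodic pattern; a minimal sketch with a hypothetical bitstream:

```python
def puncture(coded_bits, pattern):
    """Delete bits of a mother code's output according to a puncturing pattern.

    pattern is a list of 0/1 flags applied periodically: 1 = transmit,
    0 = delete. E.g. [1, 1, 1, 0] keeps 3 of every 4 rate-1/2 output
    bits, turning rate 1/2 into rate 2/3.
    """
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

mother = [1, 0, 1, 1, 0, 0, 1, 0]       # 8 output bits from a rate-1/2 encoder
sent = puncture(mother, [1, 1, 1, 0])   # 6 of the 8 bits are transmitted
```

Here 4 information bits produced 8 coded bits, of which 6 survive, giving an effective rate of 4/6 = 2/3; the decoder reinserts erasures at the punctured positions before running the mother-code trellis.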
