
Keyword Search Result

[Keyword] MPO(945hit)

361-380hit(945hit)

  • Least-Squares Independence Test

    Masashi SUGIYAMA  Taiji SUZUKI  

     
    LETTER-Artificial Intelligence, Data Mining

    Vol: E94-D No:6    Page(s): 1333-1336

    Identifying the statistical independence of random variables is one of the important tasks in statistical data analysis. In this paper, we propose a novel non-parametric independence test based on a least-squares density ratio estimator. Our method, called least-squares independence test (LSIT), is distribution-free, and thus it is more flexible than parametric approaches. Furthermore, it is equipped with a model selection procedure based on cross-validation. This is a significant advantage over existing non-parametric approaches which often require manual parameter tuning. The usefulness of the proposed method is shown through numerical experiments.
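    The core idea can be illustrated with a small sketch (an assumption-laden approximation, not the authors' exact LSIT formulation): estimate the density ratio w(x, y) = p(x, y)/(p(x)p(y)) by regularized least squares over Gaussian kernel basis functions, use the fit as a squared-loss mutual information estimate, and calibrate it with a permutation test. The function names, fixed kernel width, and permutation-based p-value are illustrative choices; the cross-validation-based model selection mentioned in the abstract is omitted.

```python
import numpy as np

def lsit_statistic(x, y, n_basis=50, sigma=1.0, lam=1e-3, rng=None):
    """Squared-loss mutual information estimate from a least-squares density-ratio fit.

    x, y: 1-D arrays of paired samples.
    """
    rng = np.random.default_rng(rng)
    n = len(x)
    idx = rng.choice(n, size=min(n_basis, n), replace=False)
    cx, cy = x[idx], y[idx]                                  # Gaussian kernel centres
    Kx = np.exp(-(x[:, None] - cx[None, :]) ** 2 / (2 * sigma ** 2))
    Ky = np.exp(-(y[:, None] - cy[None, :]) ** 2 / (2 * sigma ** 2))
    # h_l = E_{p(x,y)}[phi_l],  H_lm = E_{p(x)p(y)}[phi_l phi_m] via all n^2 cross pairs.
    h = (Kx * Ky).mean(axis=0)
    H = (Kx.T @ Kx) * (Ky.T @ Ky) / n ** 2
    alpha = np.linalg.solve(H + lam * np.eye(len(idx)), h)   # ridge-regularized fit
    return 0.5 * h @ alpha - 0.5                             # SMI-style statistic

def lsit_pvalue(x, y, n_perm=200, seed=0):
    """Permutation p-value: shuffling y destroys any dependence on x."""
    rng = np.random.default_rng(seed)
    stat = lsit_statistic(x, y, rng=rng)
    null = [lsit_statistic(x, rng.permutation(y), rng=rng) for _ in range(n_perm)]
    return float(np.mean([s >= stat for s in null]))

rng = np.random.default_rng(1)
x = rng.normal(size=500)
print(lsit_pvalue(x, x + 0.3 * rng.normal(size=500)))   # dependent pair: small p-value
print(lsit_pvalue(x, rng.normal(size=500)))             # independent pair: large p-value
```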

  • Performance Analysis of Optical Packet Switches with Reconfiguration Overhead

    Kuan-Hung CHOU  Woei LIN  

     
    PAPER-Fundamental Theories for Communications

    Vol: E94-B No:6    Page(s): 1640-1647

    In optical packet switches, the overhead of reconfiguring the switch fabric is not negligible with respect to the packet transmission time and can adversely affect switch performance: it increases the average waiting time of packets and degrades throughput. Scheduling therefore requires additional consideration of the reconfiguration frequency. This work analytically finds the optimal reconfiguration frequency that minimizes the average waiting time of packets. It proposes an analytical model, based on a Markovian analysis, for input-buffered optical packet switches with reconfiguration overhead, and uses it to study the effects of various network parameters on the average waiting time. Of particular interest is the derivation of closed-form equations that quantify the effect of the reconfiguration frequency on the average waiting time. Quantitative examples show that properly balancing the reconfiguration frequency can significantly reduce the average waiting time: under heavy traffic, basic round-robin scheduling with the optimal reconfiguration frequency achieves as much as a 30% reduction in the average waiting time compared with basic round-robin scheduling at a fixed reconfiguration frequency.
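    To make the trade-off concrete, here is a toy slot-level simulation (a rough sketch under assumed parameters, not the paper's Markovian model): an N x N input-buffered switch cycles through round-robin configurations, holds each one for a number of slots, and pays an idle reconfiguration overhead between them. Sweeping the hold time shows the waiting time blowing up when reconfiguration is too frequent and growing again when it is too rare.

```python
import random
from collections import deque

def simulate(n=4, load=0.6, hold=8, overhead=2, slots=100_000, seed=0):
    """Average packet waiting time (in slots) for one round-robin hold length."""
    rng = random.Random(seed)
    voq = [[deque() for _ in range(n)] for _ in range(n)]   # VOQ[input][output]: arrival times
    waits, served, t, phase = 0.0, 0, 0, 0

    def arrivals():
        for i in range(n):
            if rng.random() < load:
                voq[i][rng.randrange(n)].append(t)

    while t < slots:
        for _ in range(overhead):          # reconfiguration slots: arrivals only, no service
            arrivals(); t += 1
        for _ in range(hold):              # hold the current round-robin permutation
            arrivals()
            for i in range(n):             # input i is matched to output (i + phase) mod n
                q = voq[i][(i + phase) % n]
                if q:
                    waits += t - q.popleft()
                    served += 1
            t += 1
        phase = (phase + 1) % n            # next permutation in the round-robin cycle
    return waits / max(served, 1)

for hold in (1, 2, 4, 8, 16, 32, 64):
    print(f"hold={hold:3d}  avg wait={simulate(hold=hold):8.1f} slots")
```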

  • Adaptive Selective Retransmission Algorithm for Video Communications in Congested Networks

    Bin SONG  Hao QIN  Xuelu PENG  Yanhui QIN  

     
    LETTER-Multimedia Systems for Communications

    Vol: E94-B No:6    Page(s): 1788-1791

    An adaptive selective retransmission algorithm for video communications, based on packet importance values, is proposed. The algorithm adaptively selects the retransmission threshold in real time and efficiently manages the retransmission process in heavily loaded networks while guaranteeing acceptable video quality at the receiver.
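    As an illustration only (the letter does not publish pseudocode, and the names, importance scale, and adaptation rule below are assumptions), the selection step might look like this: lost packets carry an importance value, and the threshold is nudged up or down each interval so that the retransmitted traffic fits within the bandwidth the congested network can spare.

```python
from dataclasses import dataclass

@dataclass
class LostPacket:
    seq: int
    importance: float   # e.g. derived from frame type / error-propagation impact

def select_retransmissions(lost, spare_bandwidth_pkts, threshold, step=0.1):
    """Return the packets to retransmit and the threshold for the next interval."""
    candidates = sorted((p for p in lost if p.importance >= threshold),
                        key=lambda p: p.importance, reverse=True)
    if len(candidates) > spare_bandwidth_pkts:
        threshold = min(1.0, threshold + step)      # network too loaded: be stricter
    elif len(candidates) < spare_bandwidth_pkts // 2:
        threshold = max(0.0, threshold - step)      # headroom available: relax
    return candidates[:spare_bandwidth_pkts], threshold

# Example: six lost packets, room to resend only three this interval.
lost = [LostPacket(i, imp) for i, imp in enumerate([0.9, 0.2, 0.7, 0.4, 0.95, 0.1])]
to_send, new_thr = select_retransmissions(lost, spare_bandwidth_pkts=3, threshold=0.5)
print([p.seq for p in to_send], new_thr)
```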

  • Universally Composable NBAC-Based Fair Voucher Exchange for Mobile Environments

    Kazuki YONEYAMA  Masayuki TERADA  Sadayuki HONGO  Kazuo OHTA  

     
    PAPER

    Vol: E94-A No:6    Page(s): 1263-1273

    Fair exchange is an important tool for achieving “fairness” in electronic commerce. Several previous schemes satisfy universally composable security, which guarantees that security is preserved under complex networks like the Internet. As the demand for electronic commerce increases, fair exchange of electronic vouchers (e.g., electronic tickets or electronic money) used to obtain services or contents is in the spotlight. The definition of fairness for electronic vouchers differs from that for general electronic items (e.g., the sender must not reuse an exchanged voucher). However, although there are universally composable schemes for electronic items, there has been no previous study for electronic vouchers. In this paper, we introduce a universally composable definition of fair voucher exchange, that is, an ideal functionality of fair voucher exchange. We also prove the equivalence between our universally composable definition and the conventional definition for electronic vouchers, which justifies our formulation of the ideal functionality. Finally, we propose a new fair voucher exchange scheme that uses non-blocking atomic commitment (NBAC) as a black box, satisfies our security definition, and is well suited to mobile environments. By instantiating the general building blocks with known practical ones, our scheme becomes practical as well, since no trusted third party is involved in normal executions.
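    The following is a highly simplified skeleton, not the paper's protocol and with no security claims: it only illustrates how a non-blocking atomic commitment (NBAC) primitive, treated as a black box, makes a two-party voucher swap all-or-nothing, with each voucher locked (and thus not reusable) while the exchange is pending. All class and message names are invented for illustration.

```python
class NBAC:                                  # idealized black box
    def __init__(self, parties):
        self.votes = {p: None for p in parties}
    def vote(self, party, yes):
        self.votes[party] = yes
    def decide(self):
        return all(self.votes.values())      # COMMIT iff every party voted yes

class Party:
    def __init__(self, name, voucher):
        self.name, self.voucher, self.locked = name, voucher, False
        self.received = None
    def start(self):
        self.locked = True                   # voucher unusable until the outcome is known
        return ("transfer", self.voucher, self.name)
    def on_transfer(self, msg, nbac):
        self.received = msg[1]
        nbac.vote(self.name, yes=True)       # vote yes only after receiving the counterpart
    def finish(self, committed):
        if committed:
            self.voucher = self.received     # exchange takes effect atomically
        self.locked = False                  # on abort, the original voucher is restored

alice, bob = Party("alice", "ticket-A"), Party("bob", "ticket-B")
nbac = NBAC(["alice", "bob"])
ma, mb = alice.start(), bob.start()
alice.on_transfer(mb, nbac); bob.on_transfer(ma, nbac)
ok = nbac.decide()
alice.finish(ok); bob.finish(ok)
print(ok, alice.voucher, bob.voucher)        # True ticket-B ticket-A
```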

  • Design Optimization of H-Plane Waveguide Component by Level Set Method

    Koichi HIRAYAMA  Yasuhide TSUJI  Shintaro YAMASAKI  Shinji NISHIWAKI  

     
    PAPER-Electromagnetic Theory

    Vol: E94-C No:5    Page(s): 874-881

    We present a design optimization method for H-plane waveguide components based on the level set method combined with the finite element method. In this paper, we propose a new formulation for improving the level set function, which describes the shape, location, and connectivity of the dielectric in the design region. Employing this optimization procedure, we demonstrate that optimized structures of an H-plane waveguide filter and a T-junction are obtained from an initial structure composed of several circular blocks of dielectric.
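    A minimal sketch of the level-set machinery (not the paper's formulation; in the paper the driving sensitivity comes from a finite-element solution of the H-plane waveguide problem, which is replaced here by a placeholder field): the dielectric occupies the region where the level set function is positive, and the function is advanced by an explicit Hamilton-Jacobi-type step starting from several circular blocks.

```python
import numpy as np

def level_set_step(phi, sensitivity, dt=0.1, kappa=0.05):
    """One explicit update: d(phi)/dt = sensitivity * |grad phi| + kappa * laplacian(phi)."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    return phi + dt * (sensitivity * grad_norm + kappa * lap)   # kappa term regularizes

# Initial structure: several circular dielectric blocks in the design region.
ny, nx = 40, 80
yy, xx = np.mgrid[0:ny, 0:nx]
phi = -np.ones((ny, nx))
for cx, cy in [(20, 20), (40, 20), (60, 20)]:
    phi = np.maximum(phi, 6.0 - np.hypot(xx - cx, yy - cy))     # phi > 0 inside each block

fake_sensitivity = np.sin(xx / 10.0)     # placeholder for the FEM-derived design sensitivity
for _ in range(50):
    phi = level_set_step(phi, fake_sensitivity)
dielectric = phi > 0                      # final material distribution
print(dielectric.sum(), "dielectric cells")
```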

  • Adaptive Script-Independent Text Line Extraction

    Majid ZIARATBAN  Karim FAEZ  

     
    PAPER-Pattern Recognition

    Vol: E94-D No:4    Page(s): 866-877

    In this paper, an adaptive block-based text line extraction algorithm is proposed. Three global and two local parameters are defined to adapt the method to various handwriting styles in different languages. A document image is segmented into several overlapping blocks, the skew of each block is estimated, and each block is de-skewed using the estimated skew angle. Text regions are then detected in the de-skewed block, and data points extracted from these regions are used to estimate the paths of the text lines. By thinning the background of the image containing the text line paths, text line boundaries (separators) are estimated. Furthermore, an algorithm is proposed to assign connected components that intersect the estimated separators to the appropriate extracted text lines. Extensive experiments on standard datasets in various languages demonstrate that the proposed algorithm outperforms previous methods.
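    A small sketch of the per-block de-skewing step (the angle range, step, and projection criterion are assumptions, not the paper's parameters): the skew of a block is taken as the rotation that makes its horizontal projection profile most sharply peaked, and the block is rotated accordingly.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_deskew_angle(block, angles=np.arange(-15, 15.5, 0.5)):
    """Rotation (degrees) that makes the block's text lines most horizontal."""
    scores = []
    for a in angles:
        profile = rotate(block, a, reshape=False, order=0).sum(axis=1)
        scores.append(profile.var())          # peaky row profile <=> horizontal lines
    return float(angles[int(np.argmax(scores))])

def deskew(block):
    return rotate(block, estimate_deskew_angle(block), reshape=False, order=0)

# Tiny synthetic check: two horizontal text-like bands, skewed by 5 degrees.
img = np.zeros((80, 200)); img[20:25, 10:190] = 1; img[50:55, 10:190] = 1
skewed = rotate(img, 5, reshape=False, order=0)
print(estimate_deskew_angle(skewed))          # close to -5
```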

  • Fast Performance Evaluation Method of LDPC Codes

    Takakazu SAKAI  Koji SHIBATA  

     
    PAPER-Coding Theory

    Vol: E94-A No:4    Page(s): 1116-1123

    This paper presents a fast method for estimating the very low error rates of low-density parity-check (LDPC) codes. No analytical tool is available to evaluate the performance of LDPC codes, and traditional Monte Carlo simulation cannot estimate such low error rates within a practical amount of time. To overcome this problem, we propose a simulation method based on an optimal simulation probability density function (PDF). The proposed simulation PDF also avoids the dependence of the simulation time on the number of dominant trapping sets, which is a drawback of some fast simulation methods based on error event simulation. Numerical examples demonstrate the effectiveness of the proposed method: its simulation time is reduced to less than about 1/10 of that of Cole et al.'s method at the same estimator accuracy.
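    For orientation, the generic importance-sampling estimator that such methods build on looks as follows (a sketch for uncoded BPSK with a simple mean-shifted simulation PDF; the paper's contribution, the design of the simulation PDF for LDPC decoding and its independence from dominant trapping sets, is not reproduced here): noise is drawn from a biased PDF q instead of the true PDF p, and each error event is weighted by the likelihood ratio p/q, so rare errors occur often in simulation while the estimate remains unbiased.

```python
import numpy as np

def is_ber_bpsk(snr_db, shift, n=100_000, seed=0):
    """Importance-sampling estimate of uncoded BPSK BER with a mean-shifted Gaussian as q."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10.0))
    noise = rng.normal(loc=shift, scale=sigma, size=n)   # sample from the biased PDF q
    error = (1.0 + noise) < 0                            # +1 transmitted, sign decision
    # Likelihood ratio p(noise)/q(noise) for N(0, s^2) versus N(shift, s^2).
    w = np.exp((shift ** 2 - 2 * shift * noise) / (2 * sigma ** 2))
    return float(np.mean(error * w))

# At 10 dB the true BER is Q(sqrt(20)) ~ 3.9e-6; plain Monte Carlo with 1e5 samples
# usually sees no errors at all, while the weighted estimate is already close.
print(is_ber_bpsk(10.0, shift=-1.0))
```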

  • Geometry Coding for Triangular Mesh Model with Structuring Surrounding Vertices and Connectivity-Oriented Multiresolution Decomposition

    Shuji WATANABE  Akira KAWANAKA  

     
    PAPER-Image Processing and Video Processing

    Vol: E94-D No:4    Page(s): 886-894

    In this paper, we propose a novel coding scheme for the geometry of triangular mesh models. Geometry coding schemes can be classified into two groups: schemes with a perfect reconstruction property that maintain the original connectivity, and schemes without it, in which a remeshing procedure converts the mesh into a semi-regular or regular mesh. The former schemes perform well at higher coding rates, while the latter give excellent performance at lower coding rates. We propose a geometry coding scheme that maintains the connectivity and has the perfect reconstruction property. We apply a method that successively structures, on a 2-D plane, the surrounding vertices obtained by expanding the vertex sequences neighboring the previous layer. A non-separable component decomposition is then applied, in which the 2-D structured data are decomposed into four components according to whether their horizontal and vertical indices on the 2-D plane are even or odd, and prediction and update steps are performed on the decomposed components. In the prediction step, the predicted value is obtained from the not-yet-processed vertices neighboring the target vertex in 3-D space. Zero-tree coding is introduced to remove the redundancy between coefficients at similar positions in different resolution levels, and SFQ (space-frequency quantization) is applied, which gives the optimal combination of coefficient pruning for the descendants of each tree element and uniform quantization for each coefficient. Experiments applying the proposed method to several polygon meshes of different resolutions show that it gives better coding performance at lower bit rates than conventional schemes.
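    The even/odd split can be illustrated with a short sketch (only the four-component polyphase decomposition is shown; the paper's 3-D neighbourhood prediction, update, zero-tree coding, and SFQ are omitted): the 2-D structured data are separated by the parity of their row and column indices and can be merged back losslessly.

```python
import numpy as np

def polyphase_split(grid):
    """grid: 2-D array of structured vertex data (one coordinate channel)."""
    ee = grid[0::2, 0::2]   # even row, even column -> coarse component
    eo = grid[0::2, 1::2]   # even row, odd column
    oe = grid[1::2, 0::2]   # odd row,  even column
    oo = grid[1::2, 1::2]   # odd row,  odd column
    return ee, eo, oe, oo

def polyphase_merge(ee, eo, oe, oo):
    h, w = ee.shape[0] + oe.shape[0], ee.shape[1] + eo.shape[1]
    grid = np.empty((h, w), dtype=ee.dtype)
    grid[0::2, 0::2], grid[0::2, 1::2] = ee, eo
    grid[1::2, 0::2], grid[1::2, 1::2] = oe, oo
    return grid

g = np.arange(36.0).reshape(6, 6)
assert np.array_equal(polyphase_merge(*polyphase_split(g)), g)   # perfect reconstruction
```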

  • Blind Source Separation Using Dodecahedral Microphone Array under Reverberant Conditions

    Motoki OGASAWARA  Takanori NISHINO  Kazuya TAKEDA  

     
    PAPER-Engineering Acoustics

    Vol: E94-A No:3    Page(s): 897-906

    The separation and localization of sound source signals are important techniques for many applications, such as highly realistic communication and speech recognition systems. These systems are expected to work without prior information such as the number of sound sources and the environmental conditions. In this paper, we developed a dodecahedral microphone array and propose a novel separation method that uses it. The method draws on human sound localization cues and exploits the acoustical characteristics produced by the shape of the dodecahedral microphone array. Moreover, it includes a method for estimating the number of sound sources that operates without prior information. Sound source separation performance was evaluated under simulated and actual reverberant conditions, and the results were compared with a conventional method. The experimental results showed that the proposed method outperforms the conventional method in separation performance.

  • Fast Simulation Method for Turbo Codes over Additive White Class A Noise Channel

    Takakazu SAKAI  Koji SHIBATA  

     
    LETTER-Coding Theory

    Vol: E94-A No:3    Page(s): 1034-1037

    This study presents a fast simulation method for turbo codes over an additive white Class A noise (AWAN) channel. The reduction of the estimation time is achieved by importance sampling (IS), a variance reduction simulation technique. To adapt IS to the AWAN channel, we propose a design method for the simulation probability density function (PDF) used in IS. The proposed simulation PDF is related to the Bhattacharyya bound and evaluates a wider area of the signal space than the conventional approach. Since the mean translation method, the conventional design of the simulation PDF used in IS, is optimized for an additive white Gaussian noise channel, it cannot reduce the time needed to evaluate the error performance of turbo codes over the AWAN channel. To evaluate a BER of 10^-8, the simulation time of the proposed method is reduced to 1/10^4 of that of the traditional Monte Carlo simulation method at the same estimator accuracy.
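    For readers unfamiliar with the channel model, the sketch below draws samples from a Middleton Class A distribution, a Poisson-weighted Gaussian mixture commonly used as the AWAN model (parameter names and values are illustrative; the paper's importance-sampling PDF design itself is not reproduced).

```python
import numpy as np

def class_a_noise(n, A=0.1, gamma=0.1, total_power=1.0, seed=0):
    """Samples of Middleton Class A noise: impulsive index A, Gaussian-to-impulsive ratio gamma."""
    rng = np.random.default_rng(seed)
    m = rng.poisson(A, size=n)                               # number of active impulse sources
    var = total_power * (m / A + gamma) / (1.0 + gamma)      # per-sample Gaussian variance
    return rng.normal(0.0, np.sqrt(var))

noise = class_a_noise(100_000)
print(noise.var())                   # close to total_power
print((np.abs(noise) > 5).mean())    # heavy tails compared with Gaussian noise of the same power
```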

  • Probabilistic Treatment for Syntactic Gaps in Analytic Language Parsing

    Prachya BOONKWAN  Thepchai SUPNITHI  

     
    PAPER

    Vol: E94-D No:3    Page(s): 440-447

    This paper presents a syntax-based framework for gap resolution in analytic languages. CCG, well known for its treatment of deletion under coordination, is extended with a memory mechanism similar to the slot-and-filler mechanism, resulting in wider coverage of syntactic gap patterns. Although the resulting grammar formalism is more expressive than canonical CCG, its generative power is bounded by Partially Linear Indexed Grammar. Despite the spurious ambiguity introduced by the memory mechanism, we also show that probabilistic parsing is feasible using the dual decomposition algorithm.

  • Temporal Coalescing on Window Extents over Data Streams

    Mohammed AL-KATEB  Sasi Sekhar KUNTA  Byung Suk LEE  

     
    PAPER

    Vol: E94-D No:3    Page(s): 489-503

    This paper focuses on the coalescing operator in the processing of continuous queries with temporal functions and predicates over windowed data streams. Coalescing is a key operation enabling the evaluation of interval predicates and functions on temporal tuples. Applying it to temporal query processing on windowed streams brings the challenge of coalescing the tuples in a window extent each time the window slides over the data stream, and this becomes even more involved when some tuples arrive out of order. The paper distinguishes between eager coalescing and lazy coalescing, the two known coalescing schemes: the former coalesces tuples when the window extent is updated, the latter when it is scanned. With these two schemes, the paper first presents algorithms for updating a window extent for both tuple-based and time-based windows. Then, the problem of optimally selecting between eager and lazy coalescing for concurrent queries is formulated as a 0-1 integer programming problem. Through an extensive performance study, the two schemes are compared and the optimal selection is demonstrated.
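    The coalescing operation itself can be sketched in a few lines (the paper's contribution is when to coalesce, eagerly on window-extent update or lazily on scan, and how to choose between the two for concurrent queries; that logic is not shown): value-equivalent tuples whose validity intervals overlap or are adjacent are merged into a single tuple with the union interval.

```python
from typing import List, Tuple

TemporalTuple = Tuple[str, int, int]   # (value, start, end), end exclusive

def coalesce(tuples: List[TemporalTuple]) -> List[TemporalTuple]:
    out: List[TemporalTuple] = []
    for value, start, end in sorted(tuples):          # sort by value, then start time
        if out and out[-1][0] == value and start <= out[-1][2]:
            prev = out.pop()
            out.append((value, prev[1], max(prev[2], end)))   # extend the merged interval
        else:
            out.append((value, start, end))
    return out

# Example: three adjacent/overlapping 'A' tuples collapse into one; the gap before t=20 is kept.
print(coalesce([("A", 1, 5), ("A", 5, 9), ("B", 3, 7), ("A", 8, 12), ("A", 20, 25)]))
# -> [('A', 1, 12), ('A', 20, 25), ('B', 3, 7)]
```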

  • Design of a Broadband Cruciform Substrate Integrated Waveguide Coupler

    Mitsuyoshi KISHIHARA  Isao OHTA  Kensuke OKUBO  

     
    LETTER-Microwaves, Millimeter-Waves

    Vol: E94-C No:2    Page(s): 248-250

    A broadband cruciform substrate integrated waveguide coupler is designed based on the planar circuit approach. The broadband property is obtained by widening the crossed region in the same way as rectangular waveguide cruciform couplers. As a result, a 3 dB coupler with fractional bandwidth of 30% is realized at 24 GHz.

  • Lighting Condition Adaptation for Perceived Age Estimation

    Kazuya UEKI  Masashi SUGIYAMA  Yasuyuki IHARA  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E94-D No:2    Page(s): 392-395

    In recent years, a great deal of effort has been made to estimate age from face images. It has been reported that age can be estimated accurately under controlled conditions such as frontal faces, neutral expression, and static lighting. However, it is not straightforward to achieve the same accuracy in a real-world environment due to considerable variations in camera settings, facial poses, and illumination conditions. In this paper, we apply a recently proposed machine learning technique called covariate shift adaptation to alleviate the lighting condition change between the laboratory and practical environments. Through real-world age estimation experiments, we demonstrate the usefulness of the proposed method.
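    The adaptation idea can be sketched generically (this is an importance-weighted least-squares toy example with crude Gaussian density estimates, not the authors' age-estimation pipeline): training samples are weighted by the ratio p_test(x)/p_train(x), so the model fits best where the test-time (differently lit) data actually lie.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def importance_weights(x_train, x_test):
    """Approximate p_test(x)/p_train(x) with Gaussian fits to each sample set."""
    w = np.exp(gaussian_logpdf(x_train, x_test.mean(), x_test.var())
               - gaussian_logpdf(x_train, x_train.mean(), x_train.var()))
    return w / w.mean()

def weighted_linear_fit(x, y, w):
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # importance-weighted least squares

rng = np.random.default_rng(0)
x_tr = rng.normal(0.0, 1.0, 300)                 # training inputs (e.g. laboratory lighting)
y_tr = np.sin(x_tr) + 0.1 * rng.normal(size=300)
x_te = rng.normal(1.5, 0.5, 300)                 # test inputs (shifted lighting conditions)
coef = weighted_linear_fit(x_tr, y_tr, importance_weights(x_tr, x_te))
print(coef)    # fit emphasising the region the test data actually occupies
```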

  • Non-reference Quality Estimation for Temporal Degradation of Coded Picture

    Kenji SUGIYAMA  Naoya SAGARA  Ryo OKAWA  

     
    PAPER-Evaluation

    Vol: E94-A No:2    Page(s): 519-524

    Non-reference methods are widely useful for picture quality estimation on the decoder side. In other work, we discussed pure non-reference estimation using only the decoded picture and proposed quantitative estimation methods for mosquito noise and block artifacts. In this paper, we discuss estimation of degradation in the temporal domain. In the proposed method, motion-compensated inter-picture differences and motion vector activity are the basic parameters of temporal degradation. To obtain these parameters, accurate but unstable motion estimation is used with a 1/16 reduction in processing power. In a stable original picture these parameters take similar values across pictures, but temporal degradation caused by coding increases them: for intra-coded pictures the values increase significantly, whereas for inter-coded pictures they stay the same or decrease. Therefore, by taking the ratio between the peak frame and the other frames, the absolute level of temporal degradation can be estimated, where the peak frame may be an intra-coded picture. Finally, we evaluate the proposed method using pictures coded with different quantization levels.

  • Universally Composable and Statistically Secure Verifiable Secret Sharing Scheme Based on Pre-Distributed Data

    Rafael DOWSLEY  Jorn MULLER-QUADE  Akira OTSUKA  Goichiro HANAOKA  Hideki IMAI  Anderson C.A. NASCIMENTO  

     
    PAPER-Cryptography and Information Security

    Vol: E94-A No:2    Page(s): 725-734

    This paper presents a non-interactive verifiable secret sharing (VSS) scheme tolerating a dishonest majority, based on data pre-distributed by a trusted authority. As an application of this VSS scheme, we present very efficient unconditionally secure protocols for performing multiplication of shares based on pre-distributed data, which generalize two-party computations based on linear pre-distributed bit commitments. The main results of this paper are a non-interactive VSS, a simplified multiplication protocol for shared values based on pre-distributed random products, and non-interactive zero-knowledge proofs for arbitrary polynomial relations. The security of the schemes is proved in the UC framework.

  • A Dynamic Phasor-Based Method for Measuring the Apparent Impedance of a Single-Line-to-Ground Fault

    Chi-Shan YU  

     
    LETTER-Measurement Technology

    Vol: E94-A No:1    Page(s): 461-463

    This letter proposes a dynamic phasor-based apparent impedance measuring method for a single-line-to-ground fault. Using the proposed method, the effects of the decaying DC components on the apparent impedance of a single-line-to-ground fault can be completely removed. Compared with previous works, the proposed method uses less computation to measure an accurate apparent impedance.

  • Low Power Bus Binding Exploiting Optimal Substructure

    Ji-Hyung KIM  Jun-Dong CHO  

     
    PAPER-VLSI Design Technology and CAD

    Vol: E94-A No:1    Page(s): 332-341

    The earlier the design stage at which low power techniques are applied, the greater the dynamic power reduction that can be achieved. In this paper, we focus on reducing switching activity in high-level synthesis, specifically in functional module binding, bus binding, and register binding. We propose an effective low power bus binding algorithm based on a table decomposition method that reduces switching activity. The proposed algorithm decomposes the original problem into sub-problems by exploiting its optimal substructure, and as a result finds an optimal or close-to-optimal binding solution in less computation time. Experimental results show that the proposed method obtains solutions 2.3-22.2% closer to the optimal one than a conventional heuristic method, while running 8.0-479.2 times faster than the optimal approach (at a threshold value of 1.0E+9).
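    The objective being minimized can be made concrete with a small sketch (illustration only: the exhaustive search below is exactly what the paper's table-decomposition approach avoids, and the per-step permutation model is a simplifying assumption): the switching activity of a bus is the number of bit toggles between consecutive words it carries, and a binding chooses which transfer drives which bus at each control step.

```python
from itertools import permutations, product

def toggles(words):
    """Total bit toggles when this word sequence is driven on one bus."""
    return sum(bin(a ^ b).count("1") for a, b in zip(words, words[1:]))

def best_binding(steps, n_bus):
    """steps: list of tuples with n_bus words each (the transfers of one control step).
    Exhaustively choose, for every step, which transfer drives which bus so that the
    total number of bit toggles over all buses is minimal."""
    perms = list(permutations(range(n_bus)))
    best = None
    for choice in product(perms, repeat=len(steps)):   # one bus permutation per step
        buses = [[step[p[b]] for step, p in zip(steps, choice)] for b in range(n_bus)]
        cost = sum(toggles(seq) for seq in buses)
        if best is None or cost < best[0]:
            best = (cost, choice)
    return best

steps = [(0b1010, 0b0101), (0b1011, 0b0111), (0b1111, 0b0000)]   # 2 transfers x 3 steps
print(best_binding(steps, n_bus=2))
```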

  • Separation of Mixtures of Complex Sinusoidal Signals with Independent Component Analysis

    Tetsuo KIRIMOTO  Takeshi AMISHIMA  Atsushi OKAMURA  

     
    PAPER-Wireless Communication Technologies

    Vol: E94-B No:1    Page(s): 215-221

    ICA (independent component analysis) has a remarkable capability for separating mixtures of stochastic random signals. However, we often face the problem of separating mixtures of deterministic signals, especially sinusoidal signals, in applications such as radar and communication systems, and one may ask whether ICA is effective for deterministic signals. In this paper, we analyze the basic performance of ICA, which utilizes the fourth-order cumulant as a criterion of independence, in separating mixtures of complex sinusoidal signals. We show theoretically that ICA can separate mixtures of deterministic sinusoidal signals, and we confirm the theoretical result through computer simulations and radio experiments with a linear array antenna. ICA is shown to successfully separate mixtures of sinusoidal signals whose frequency difference is smaller than the FFT resolution and whose DOA (direction of arrival) difference is smaller than the Rayleigh criterion.

  • Optimized Spatial Capacity by Eigenvalue Decomposition of Adjacency Matrix

    Fumie ONO  

     
    LETTER

    Vol: E93-B No:12    Page(s): 3514-3517

    In this letter, an eigenspace of the network topology is introduced to increase spatial capacity. The network topology is represented as an adjacency matrix, and using an eigenvector of the adjacency matrix, efficient two-way transmission can be realized in wireless distributed networks. Numerical analysis confirms that the scheme based on an eigenvector of the adjacency matrix provides higher spatial capacity and reliability than the conventional scheme.
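    A minimal sketch of the ingredient the letter builds on (how the eigenvector is turned into a two-way transmission schedule is specific to the letter and not shown here): the network topology is an adjacency matrix, and its principal eigenvector is obtained by eigendecomposition.

```python
import numpy as np

# Adjacency matrix of a small wireless distributed network (5 nodes, a ring with one
# chord); entry (i, j) = 1 if nodes i and j can hear each other.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 1, 0]], dtype=float)

eigvals, eigvecs = np.linalg.eigh(A)          # A is symmetric, so eigh applies
principal = eigvecs[:, np.argmax(eigvals)]    # eigenvector of the largest eigenvalue
print(np.round(principal, 3))                 # relative weight of each node in the eigenspace
```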
