
Keyword Search Result

[Keyword] Al (20498 hits)

3481-3500 hits (20498 hits)

  • Some Constructions for Fractional Repetition Codes with Locality 2

    Mi-Young NAM  Jung-Hyun KIM  Hong-Yeop SONG  

     
    PAPER-Coding Theory

      Vol:
    E100-A No:4
      Page(s):
    936-943

    In this paper, we examine the locality property of the original Fractional Repetition (FR) codes and propose two constructions for FR codes with better locality. For this, we first derive the capacity of FR codes with locality 2, that is, the maximum size of the file that can be stored. Construction 1 generates an FR code with repetition degree 2 and locality 2. This code is optimal in the sense of achieving the capacity we derived. Construction 2 generates an FR code with repetition degree 3 and locality 2 based on 4-regular graphs with girth g. This code is also optimal in the same sense.

  • Proposal of Dehazing Method and Quantitative Index for Evaluation of Haze Removal Quality

    Yi RU  Go TANAKA  

     
    PAPER-Image

      Vol:
    E100-A No:4
      Page(s):
    1045-1054

    When haze exists in an image of an outdoor scene, the visibility of objects in the image deteriorates. In recent years, to improve the visibility of objects in such images, many dehazing methods have been investigated. Most of these methods are based on the atmospheric scattering model. In such methods, the transmittance and global atmospheric light are estimated from an input image, and a dehazed image is obtained by substituting them into the model. To estimate the transmittance and global atmospheric light, the dark channel prior is a major and powerful concept that is employed in many dehazing methods. In this paper, we propose a new dehazing method in which the degree of haze removal can be adjusted by changing its parameters. Our method is also based on the atmospheric scattering model and employs the dark channel prior. In our method, the estimated transmittance is adjusted to a more suitable value by a transform function. By choosing appropriate parameter values for each input image, good haze removal results can be obtained with our method. In addition, a quantitative index for evaluating the quality of a dehazed image is proposed in this paper. Haze removal can be considered a type of saturation enhancement. On the other hand, an output image obtained using the atmospheric scattering model is generally darker than the input image. Therefore, we evaluate the quality of dehazed images by considering the balance between the brightness and saturation of the input and output images. The validity of the proposed index is examined using our dehazing method. Then a comparison between several dehazing methods is carried out using the index. Through these experiments, the effectiveness of our dehazing method and the quantitative index is confirmed.
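
    The atmospheric scattering model and dark channel prior the abstract refers to can be sketched generically. This is the standard dark-channel pipeline, not the authors' parameterized transform function; `omega`, `t0`, and `patch` are illustrative values, not values from the paper:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(I, omega=0.95, t0=0.1, patch=15):
    # I: HxWx3 float image in [0, 1].
    # Dark channel: per-pixel min over channels, then a local min filter.
    dark = minimum_filter(I.min(axis=2), size=patch)
    # Global atmospheric light A: color at the brightest dark-channel pixel
    # (guarded away from zero to keep the division below safe).
    y, x = np.unravel_index(dark.argmax(), dark.shape)
    A = np.maximum(I[y, x], 1e-3)
    # Transmittance estimate from the dark channel prior.
    t = 1.0 - omega * minimum_filter((I / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    # Invert the scattering model I = J*t + A*(1 - t) to recover J.
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0), t
```

The paper's contribution replaces the fixed `omega`-style adjustment with a parameterized transform of the estimated transmittance; the inversion step stays the same.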

  • Flexible Load-Dependent Soft-Start Method for Digital PID Control DC-DC Converter in 380Vdc System

    Hidenori MARUTA  Tsutomu SAKAI  Suguru SAGARA  Yuichiro SHIBATA  Keiichi HIROSE  Fujio KUROKAWA  

     
    PAPER-Energy in Electronics Communications

      Publicized:
    2016/10/17
      Vol:
    E100-B No:4
      Page(s):
    518-528

    The purpose of this paper is to propose a flexible load-dependent digital soft-start control method for dc-dc converters in a 380Vdc system. The soft-start operation is needed to prevent negative effects such as large inrush current and output overshoot to a power supply in the start-up process of dc-dc converters. In the conventional soft-start operation, a dc-dc converter has a very slow start-up to deal with the light load condition. Therefore, it always takes a long time, under any load condition, to start up a power supply and obtain the desired output. In the proposed soft-start control method, the speed of the start-up process is flexibly controlled depending on the load condition. To obtain the optimal speed for any load condition, the speed of the soft-start is determined from an approximated function of the load current, which is estimated from experimental results in advance. The proposed soft-start control method is evaluated in both simulations and experiments. From the results, it is confirmed that the proposed method has superior soft-start characteristics compared to the conventional one.
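
    The load-to-ramp-speed mapping can be illustrated with a small sketch: fit an approximated function to bench measurements offline, then evaluate it at start-up to pick the ramp time. The data points and the quadratic fit below are hypothetical stand-ins, not the paper's measurements:

```python
import numpy as np

# Hypothetical bench data: load current (A) vs. shortest soft-start ramp
# time (ms) that avoided inrush current and output overshoot.
load_current = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
ramp_time_ms = np.array([2.0, 3.5, 6.0, 11.0, 20.0])

# Approximated function of load current, fitted offline (quadratic here).
coeffs = np.polyfit(load_current, ramp_time_ms, deg=2)

def soft_start_ramp_time(i_load):
    """Ramp time chosen for the load current measured at start-up."""
    return float(np.polyval(coeffs, i_load))
```

A light load gets a short ramp and a heavy load a long one, instead of the conventional worst-case slow start-up applied to every load.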

  • Measurement and Stochastic Modeling of Vertical Handover Interruption Time of Smartphone Real-Time Applications on LTE and Wi-Fi Networks

    Sungjin SHIN  Donghyuk HAN  Hyoungjun CHO  Jong-Moon CHUNG  

     
    PAPER-Network

      Publicized:
    2016/11/16
      Vol:
    E100-B No:4
      Page(s):
    548-556

    Due to the rapid growth of applications based on the Internet of Things (IoT) and real-time communications, mobile traffic is increasing exponentially. In highly populated areas, a sudden concentration of traffic from numerous mobile users can cause a radio resource shortage, where traffic offloading is essential in preventing overload problems. Vertical handover (VHO) technology, which supports seamless connectivity across heterogeneous wireless networks, is a core technology of traffic offloading. In VHO, minimizing service interruption is a key design factor, since service interruption deteriorates service performance and degrades user experience (UX). Although 3GPP standard VHO procedures are designed to prevent service interruption, severe quality of service (QoS) degradation and severe interruption can occur in real network environments due to unintended disconnections from one's base station (BS) or access point (AP). In this article, the average minimum handover interruption time (HIT) (i.e., the guaranteed HIT influence) between LTE and Wi-Fi VHO is analyzed and measured based on 3GPP VHO access and decision procedures. In addition, the key parameters and procedures which affect HIT performance are analyzed, and a reference probability density function (PDF) for HIT prediction is derived using Kolmogorov-Smirnov test techniques.
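
    The last step, deriving a reference PDF and validating it with a Kolmogorov-Smirnov test, can be sketched as follows. The gamma family and the synthetic samples are stand-ins; the paper fits measured LTE/Wi-Fi interruption times, and its chosen distribution family may differ:

```python
import numpy as np
from scipy import stats

# Stand-in for measured handover interruption time samples (ms).
rng = np.random.default_rng(0)
hit_ms = rng.gamma(shape=2.0, scale=50.0, size=500)

# Fit a candidate distribution, then check the fit with the K-S test:
# a large p-value means the fitted PDF is not rejected as a model of the data.
shape, loc, scale = stats.gamma.fit(hit_ms, floc=0)
ks_stat, p_value = stats.kstest(hit_ms, "gamma", args=(shape, loc, scale))
```

The fitted PDF then serves as the reference distribution for predicting HIT in environments where direct measurement is impractical.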

  • User and Antenna Joint Selection in Multi-User Large-Scale MIMO Downlink Networks

    Moo-Woong JEONG  Tae-Won BAN  Bang Chul JUNG  

     
    PAPER-Network

      Publicized:
    2016/11/02
      Vol:
    E100-B No:4
      Page(s):
    529-535

    In this paper, we investigate a user and antenna joint selection problem in multi-user large-scale MIMO downlink networks, where a BS with N transmit antennas serves K users, and N is much larger than K. The BS activates only S (S≤N) antennas for data transmission to reduce hardware cost and computation complexity, and selects the set of users to which data is to be transmitted by maximizing the sum-rate. The optimal user and antenna joint selection scheme based on exhaustive search incurs considerable computation complexity. Thus, we propose a new joint selection algorithm with low complexity and analyze the performance of the proposed scheme in terms of sum-rate and complexity. When S=7, N=10, K=5, and SNR=10dB, the sum-rate of the proposed scheme is 5.1% lower than that of the optimal scheme, while its computation complexity is reduced by 99.0% compared to that of the optimal scheme.
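
    A generic low-complexity alternative to exhaustive search can be sketched as a two-stage greedy procedure; this is an illustration of the problem setting, not the paper's exact algorithm (the antenna criterion here, column energy, and the greedy stopping rule are common textbook choices):

```python
import numpy as np

def sum_rate(Hu, snr):
    # Sum-rate proxy log2 det(I + (snr/k) * H H^H) for the k selected users.
    k = Hu.shape[0]
    G = Hu @ Hu.conj().T
    val = np.linalg.det(np.eye(k) + (snr / k) * G).real
    return float(np.log2(val))

def greedy_selection(H, S, snr):
    # H: K x N channel matrix (K users, N BS antennas, S active antennas).
    K, N = H.shape
    # Stage 1: keep the S columns (antennas) with the largest energy.
    ant = np.argsort(-np.linalg.norm(H, axis=0))[:S]
    Hs = H[:, ant]
    # Stage 2: add users greedily while the sum-rate keeps increasing.
    chosen, rest, best = [], list(range(K)), 0.0
    while rest:
        u = max(rest, key=lambda j: sum_rate(Hs[chosen + [j]], snr))
        r = sum_rate(Hs[chosen + [u]], snr)
        if r <= best:
            break
        chosen.append(u)
        rest.remove(u)
        best = r
    return ant, chosen, best
```

Each greedy step evaluates at most K candidate sets instead of all 2^K user subsets times all antenna subsets, which is the kind of complexity reduction the abstract quantifies.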

  • Energy-Efficient Optimization for Device-to-Device Communication Underlaying Cellular Networks

    Haibo DAI  Chunguo LI  Luxi YANG  

     
    LETTER-Numerical Analysis and Optimization

      Vol:
    E100-A No:4
      Page(s):
    1079-1083

    In this letter, we focus on the subcarrier allocation problem for device-to-device (D2D) communication in cellular networks to improve the cellular energy efficiency (EE). Our goal is to maximize the weighted cellular EE, and the solution is obtained by using a game-theoretic learning approach. Specifically, we propose a lower bound instead of the original optimization objective, on the basis of the proven property that the gap goes to zero as the number of transmitting antennas increases. Moreover, we prove that the subcarrier allocation problem is an exact potential game and admits a best Nash equilibrium (NE), which is the optimal solution for the lower bound. To find the best NE point, a distributed learning algorithm is proposed and proved to converge to the best NE. Finally, numerical results verify the effectiveness of the proposed scheme.

  • Stochastic Dykstra Algorithms for Distance Metric Learning with Covariance Descriptors

    Tomoki MATSUZAWA  Eisuke ITO  Raissa RELATOR  Jun SESE  Tsuyoshi KATO  

     
    PAPER-Pattern Recognition

      Publicized:
    2017/01/13
      Vol:
    E100-D No:4
      Page(s):
    849-856

    In recent years, covariance descriptors have received considerable attention as a strong representation of a set of points. In this research, we propose a new metric learning algorithm for covariance descriptors based on the Dykstra algorithm, in which the current solution is projected onto a half-space at each iteration, and which runs in O(n³) time. We empirically demonstrate that randomizing the order of half-spaces in the proposed Dykstra-based algorithm significantly accelerates convergence to the optimal solution. Furthermore, we show that the proposed approach yields promising experimental results for pattern recognition tasks.
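
    The core iteration, Dykstra's algorithm with a randomized visiting order over half-spaces, can be sketched in a plain Euclidean setting. The paper works on covariance descriptors; this vector version only illustrates the projection scheme and the shuffling that the abstract reports as accelerating convergence:

```python
import numpy as np

def dykstra_halfspaces(x0, A, b, n_iter=100, seed=None):
    # Dykstra's algorithm: project x0 onto the intersection {x : A x <= b}
    # of half-spaces. The visiting order is re-shuffled every sweep.
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    x = np.asarray(x0, dtype=float).copy()
    p = np.zeros((m, x.size))  # one correction term per half-space
    for _ in range(n_iter):
        for i in rng.permutation(m):
            y = x + p[i]
            viol = A[i] @ y - b[i]
            # Euclidean projection of y onto {x : a_i . x <= b_i}.
            x = y - (viol / (A[i] @ A[i])) * A[i] if viol > 0 else y
            p[i] = y - x
    return x
```

Unlike plain cyclic projections, the correction terms `p[i]` make the iterates converge to the true projection of `x0` onto the intersection, not merely to some feasible point.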

  • Codebook Learning for Image Recognition Based on Parallel Key SIFT Analysis

    Feng YANG  Zheng MA  Mei XIE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2017/01/10
      Vol:
    E100-D No:4
      Page(s):
    927-930

    The quality of the codebook is very important in visual image classification. In order to boost classification performance, a scheme of codebook generation for scene image recognition based on parallel key SIFT analysis (PKSA) is presented in this paper. The method iteratively applies the classical k-means clustering algorithm and similarity analysis to evaluate key SIFT descriptors (KSDs) from the input images, and generates the codebook by a relaxed k-means algorithm according to the set of KSDs. To evaluate the performance of the PKSA scheme, the image feature vector is calculated by sparse coding with spatial pyramid matching (ScSPM) after the codebook is constructed. The PKSA-based ScSPM method is tested and compared on three public scene image datasets. The experimental results show that the proposed PKSA scheme can significantly reduce computational time and enhance the categorization rate.
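
    The baseline that PKSA refines, a k-means visual codebook over local descriptors, can be sketched as follows. This is plain k-means without the key-descriptor filtering or the relaxed variant the paper proposes:

```python
import numpy as np

def kmeans_codebook(descriptors, k=8, n_iter=20, seed=0):
    # descriptors: (n, d) array of local features (e.g. SIFT vectors).
    # Returns k cluster centers ("visual words") and the last assignment.
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    labels = np.zeros(len(descriptors), dtype=int)
    for _ in range(n_iter):
        # Assign each descriptor to its nearest center.
        d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels
```

An image is then encoded against the codebook (here it would be by sparse coding with spatial pyramid matching) to produce the feature vector used for classification.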

  • Fast Ad-Hoc Search Algorithm for Personalized PageRank Open Access

    Yasuhiro FUJIWARA  Makoto NAKATSUJI  Hiroaki SHIOKAWA  Takeshi MISHIMA  Makoto ONIZUKA  

     
    INVITED PAPER

      Publicized:
    2017/01/23
      Vol:
    E100-D No:4
      Page(s):
    610-620

    Personalized PageRank (PPR) is a typical similarity metric between nodes in a graph, and node searches based on PPR are widely used. In many applications, graphs change dynamically, and in such cases, it is desirable to perform ad hoc searches based on PPR. An ad hoc search involves performing searches by varying the search parameters or graphs. However, as the size of a graph increases, the computation cost of performing an ad hoc search can become excessive. In this paper, we propose a method called Castanet that offers fast ad hoc searches of PPR. The proposed method features (1) iterative estimation of the upper and lower bounds of PPR scores, and (2) dynamic pruning of nodes that are not needed to obtain a search result. Experiments confirm that the proposed method does offer faster ad hoc PPR searches than existing methods.
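
    For reference, the metric itself can be computed by plain power iteration. Castanet's contribution is avoiding exactly this full computation through upper/lower score bounds and pruning; the sketch below is only the baseline definition:

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, n_iter=100):
    # adj: (n, n) adjacency matrix; seed: restart node index.
    # Iterate x = alpha * e_seed + (1 - alpha) * P x with P column-stochastic.
    n = adj.shape[0]
    col = adj.sum(axis=0, keepdims=True)
    P = adj / np.maximum(col, 1e-12)
    e = np.zeros(n)
    e[seed] = 1.0
    x = e.copy()
    for _ in range(n_iter):
        x = alpha * e + (1 - alpha) * (P @ x)
    return x
```

Each ad hoc search over a changed graph or restart parameter would require rerunning this whole iteration, which is what makes bound-based pruning attractive on large graphs.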

  • XY-Separable Scale-Space Filtering by Polynomial Representations and Its Applications Open Access

    Gou KOUTAKI  Keiichi UCHIMURA  

     
    INVITED PAPER

      Publicized:
    2017/01/11
      Vol:
    E100-D No:4
      Page(s):
    645-654

    In this paper, we propose the application of principal component analysis (PCA) to scale-spaces. PCA is a standard method used in computer vision. Because the translation of an input image into scale-space is a continuous operation, it requires the extension of conventional finite matrix-based PCA to an infinite number of dimensions. Here, we use spectral theory to resolve this infinite eigenvalue problem through the use of integration, and we propose an approximate solution based on polynomial equations. In order to clarify its eigensolutions, we apply spectral decomposition to Gaussian scale-space and scale-normalized Laplacian of Gaussian (sLoG) space. As an application of this proposed method, we introduce a method for generating Gaussian blur images and sLoG images, demonstrating that the accuracy of such an image can be made very high by using an arbitrary scale calculated through simple linear combination. Furthermore, to make the scale-space filtering efficient, we approximate the basis filter set using a Gaussian-lobe approximation, obtaining XY-separable filters. As a more practical example, we propose a new Scale Invariant Feature Transform (SIFT) detector.

  • Perceptual Distributed Compressive Video Sensing via Reweighted Sampling and Rate-Distortion Optimized Measurements Allocation

    Jin XU  Yan ZHANG  Zhizhong FU  Ning ZHOU  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2017/01/06
      Vol:
    E100-D No:4
      Page(s):
    918-922

    Distributed compressive video sensing (DCVS) is a new paradigm for low-complexity video compression. To achieve the highest possible perceptual coding performance under the measurements budget constraint, we propose a perceptually optimized DCVS codec by jointly exploiting reweighted sampling and rate-distortion optimized measurements allocation techniques. A visual saliency modulated just-noticeable distortion (VS-JND) profile is first developed based on the side information (SI) at the decoder side. Then the estimated correlation noise (CN) between each non-key frame and its SI is suppressed by the VS-JND. Subsequently, the suppressed CN is utilized to determine the weighting matrix for the reweighted sampling as well as to design a perceptual rate-distortion optimization model to calculate the optimal measurements allocation for each non-key frame. Experimental results indicate that the proposed DCVS codec outperforms other existing DCVS codecs in terms of both objective and subjective performance.

  • A New Efficient Resource Management Framework for Iterative MapReduce Processing in Large-Scale Data Analysis

    Seungtae HONG  Kyongseok PARK  Chae-Deok LIM  Jae-Woo CHANG  

    This paper has been cancelled due to violation of duplicate submission policy on IEICE Transactions on Information and Systems on September 5, 2019.
     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    704-717
    • Errata [uploaded on March 1, 2018]

    To analyze large-scale data efficiently, studies on Hadoop, one of the most popular MapReduce frameworks, have been actively conducted. Meanwhile, most large-scale data analysis applications, e.g., data clustering, are required to execute the same map and reduce functions repeatedly. However, Hadoop cannot provide optimal performance for iterative MapReduce jobs because it derives a result by executing a single phase of map and reduce functions. To solve these problems, in this paper, we propose a new efficient resource management framework for iterative MapReduce processing in large-scale data analysis. For this, we first design an iterative job state-machine for managing the iterative MapReduce jobs. Secondly, we propose an invariant data caching mechanism for reducing the I/O costs of data accesses. Thirdly, we propose an iterative resource management technique for efficiently managing the resources of a Hadoop cluster. Fourthly, we devise a stop-condition check mechanism for preventing unnecessary computation. Finally, we show the performance superiority of the proposed framework by comparing it with existing frameworks.

  • Interdisciplinary Collaborator Recommendation Based on Research Content Similarity

    Masataka ARAKI  Marie KATSURAI  Ikki OHMUKAI  Hideaki TAKEDA  

     
    PAPER

      Publicized:
    2016/10/13
      Vol:
    E100-D No:4
      Page(s):
    785-792

    Most existing methods on research collaborator recommendation focus on promoting collaboration within a specific discipline and exploit a network structure derived from co-authorship or co-citation information. To find collaboration opportunities outside researchers' own fields of expertise and beyond their social network, we present an interdisciplinary collaborator recommendation method based on research content similarity. In the proposed method, we calculate textual features that reflect a researcher's interests using a research grant database. To find the most relevant researchers who work in other fields, we compare constructing a pairwise similarity matrix in a feature space and exploiting existing social networks with content-based similarity. We present a case study at the Graduate University for Advanced Studies in Japan in which actual collaborations across departments are used as ground truth. The results indicate that our content-based approach can accurately predict interdisciplinary collaboration compared with the conventional collaboration network-based approaches.

  • Quick Window Query Processing Using a Non-Uniform Cell-Based Index in Wireless Data Broadcast Environment

    SeokJin IM  HeeJoung HWANG  

     
    LETTER-Mobile Information Network and Personal Communications

      Vol:
    E100-A No:4
      Page(s):
    1092-1096

    This letter proposes a Non-uniform Cell-based Index (NCI) to enable clients to quickly process window queries in the wireless spatial data broadcast environment. To improve the access time, NCI reduces the probe wait time by equalizing the spacing between indexes, using non-uniformly partitioned cells of the data space. Through the performance evaluation, we show that the proposed NCI outperforms the existing index schemes for window queries over spatial data with respect to access time.

  • Multi-Valued Sequences Generated by Power Residue Symbols over Odd Characteristic Fields

    Begum NASIMA  Yasuyuki NOGAMI  Satoshi UEHARA  Robert H. MORELOS-ZARAGOZA  

     
    PAPER-Sequences

      Vol:
    E100-A No:4
      Page(s):
    922-929

    This paper proposes a new approach for generating pseudo-random multi-valued (including binary-valued) sequences. The approach uses a primitive polynomial over an odd characteristic prime field $\mathbb{F}_{p}$, where p is an odd prime number. Then, for the maximum-length sequence of vectors generated by the primitive polynomial, the trace function is used to map these vectors to scalars as elements in the prime field. The power residue symbol (the Legendre symbol in the binary case) is applied to translate the scalars to k-value scalars, where k is a prime factor of p-1. Finally, a pseudo-random k-value sequence is obtained. Some important properties of the resulting multi-valued sequences are shown, such as their period, autocorrelation, and linear complexity, together with proofs and small examples.
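
    The final translation step can be sketched in its simplest degree-one form: apply the k-th power residue symbol to the sequence g^t generated by a primitive root g of F_p (the paper instead traces an m-sequence down from an extension field before this step, which this sketch omits):

```python
def primitive_root(p):
    # Brute-force smallest generator of the multiplicative group F_p^*
    # (fine for small primes used in examples).
    for g in range(2, p):
        if len({pow(g, i, p) for i in range(p - 1)}) == p - 1:
            return g

def k_value_sequence(p, k):
    # Map the periodic sequence g^t through the k-th power residue symbol
    # chi(a) = a^((p-1)/k) mod p (Legendre symbol when k = 2), reported as
    # an index in {0, ..., k-1} via a table of k-th roots of unity.
    assert (p - 1) % k == 0
    g = primitive_root(p)
    zeta = pow(g, (p - 1) // k, p)           # primitive k-th root of unity
    dlog = {pow(zeta, i, p): i for i in range(k)}
    seq, a = [], 1
    for _ in range(p - 1):                   # one full period of g^t
        seq.append(dlog[pow(a, (p - 1) // k, p)])
        a = (a * g) % p
    return seq
```

Even in this toy form the output is balanced: each of the k values appears exactly (p-1)/k times per period, one of the properties the paper establishes for the full construction.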

  • A Novel Class of Quadriphase Zero-Correlation Zone Sequence Sets

    Takafumi HAYASHI  Yodai WATANABE  Toshiaki MIYAZAKI  Anh PHAM  Takao MAEDA  Shinya MATSUFUJI  

     
    LETTER-Sequences

      Vol:
    E100-A No:4
      Page(s):
    953-960

    The present paper introduces the construction of quadriphase sequences having a zero-correlation zone. For a zero-correlation zone sequence set of N sequences, each of length l, the cross-correlation function and the side lobe of the autocorrelation function of the proposed sequence set are zero for the phase shifts τ within the zero-correlation zone z, such that |τ|≤z (τ ≠ 0 for the autocorrelation function). The ratio $\frac{N(z+1)}{\ell}$ is theoretically limited to one. When l=N(z+1), the sequence set is called an optimal zero-correlation sequence set. The proposed zero-correlation zone sequence set can be generated from an arbitrary Hadamard matrix of order n. The length of the proposed sequence set can be extended by sequence interleaving, where m times interleaving can generate 4n sequences, each of length 2m+3n. The proposed sequence set is optimal for m=0,1 and almost optimal for m>1.

  • Achievable Error Rate Performance Analysis of Space Shift Keying Systems with Imperfect CSI

    Jinkyu KANG  Seongah JEONG  Hoojin LEE  

     
    LETTER-Communication Theory and Signals

      Vol:
    E100-A No:4
      Page(s):
    1084-1087

    In this letter, efficient closed-form formulas for the exact and asymptotic average bit error probability (ABEP) of space shift keying (SSK) systems are derived over Rayleigh fading channels with imperfect channel state information (CSI). Specifically, for a generic 2×NR multiple-input multiple-output (MIMO) system with the maximum likelihood (ML) detection, the impact of imperfect CSI is taken into consideration in terms of two types of channel estimation errors with the fixed variance and the variance as a function of the number of pilot symbols and signal-to-noise ratio (SNR). Then, the explicit evaluations of the bit error floor (BEF) and asymptotic SNR loss are carried out based on the derived asymptotic ABEP formula, which accounts for the impact of imperfect CSI on the SSK system. The numerical results are presented to validate the exactness of our theoretical analysis.

  • Antenna Array Arrangement for Massive MIMO to Reduce Channel Spatial Correlation in LOS Environment

    Takuto ARAI  Atsushi OHTA  Yushi SHIRATO  Satoshi KUROSAKI  Kazuki MARUTA  Tatsuhiko IWAKUNI  Masataka IIZUKA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2016/10/21
      Vol:
    E100-B No:4
      Page(s):
    594-601

    This paper proposes a new antenna array design of Massive MIMO for capacity enhancement in line-of-sight (LOS) environments. Massive MIMO has two key problems: the heavy overhead of feeding back the channel state information (CSI) for a very large number of transmission and reception antenna element pairs, and the huge computation complexity imposed by very large-scale matrices. We have already proposed a practical application of Massive MIMO, that is, Massive Antenna Systems for Wireless Entrance links (MAS-WE), which can clearly solve the two key problems of Massive MIMO. However, the conventional antenna array arrangements, e.g., uniform planar array (UPA) or uniform circular array (UCA), degrade the system capacity of MAS-WE due to the channel spatial correlation created by the inter-element spacing. When the LOS component dominates the propagation channel, the antenna array can be designed to minimize the inter-user channel correlation. We propose an antenna array arrangement that controls the grating-lobe positions and achieves very low channel spatial correlation. Simulation results show that the proposed arrangement can reduce the spatial correlation at the CDF=50% value by 80% compared to UCA and 75% compared to UPA.

  • A Nonparametric Estimation Approach Based on Apollonius Circles for Outdoor Localization

    Byung Jin LEE  Kyung Seok KIM  

     
    PAPER-Sensing

      Publicized:
    2016/11/07
      Vol:
    E100-B No:4
      Page(s):
    638-645

    When performing measurements in an outdoor field environment, various interference factors occur, so many studies have been conducted to increase the accuracy of localization. This paper presents a novel probability-based approach to estimating position based on Apollonius circles. The proposed algorithm is a modification of existing trilateration techniques. The method does not need to know the exact transmission power of the source and does not require a calibration procedure. The proposed algorithm is verified in several typical environments, and simulation results show that the proposed method outperforms existing algorithms.
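
    The geometric core, turning a ratio of distances to two anchors into an Apollonius circle, can be sketched directly. This is the generic construction; the paper's probabilistic combination of several such circles into a position estimate is not reproduced here:

```python
import numpy as np

def apollonius_circle(p1, p2, r):
    # Locus of points x with |x - p1| / |x - p2| = r (r != 1) is a circle.
    # In ratio-based localization, r comes from comparing received signal
    # strengths at two anchors, so the source's transmit power cancels out.
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    center = (p1 - r**2 * p2) / (1 - r**2)
    radius = r * np.linalg.norm(p1 - p2) / abs(1 - r**2)
    return center, radius
```

Intersecting circles built from several anchor pairs constrains the transmitter position without knowing its absolute transmission power, which is why no calibration is required.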

  • Analyzing Temporal Dynamics of Consumer's Behavior Based on Hierarchical Time-Rescaling

    Hideaki KIM  Noriko TAKAYA  Hiroshi SAWADA  

     
    PAPER

      Publicized:
    2016/10/13
      Vol:
    E100-D No:4
      Page(s):
    693-703

    Improvements in information technology have made it easier for industries to communicate with their customers, raising hopes for a scheme that can estimate when customers will want to make purchases. Although a number of models have been developed to estimate the time-varying purchase probability, they are based on very restrictive assumptions such as preceding purchase-event dependence and discrete-time effects of covariates. Our preliminary analysis of real-world data finds that these assumptions are invalid: self-exciting behavior, as well as marketing stimuli and preceding purchase dependence, should be examined as possible factors influencing purchase probability. In this paper, by employing the novel idea of hierarchical time rescaling, we propose a tractable but highly flexible model that can meld various types of intrinsic history dependency and marketing stimuli in a continuous-time setting. By employing the proposed model, which incorporates the three factors, we analyze actual data and show that our model can precisely track the temporal dynamics of purchase probability at the level of individuals. This enables us to take effective marketing actions such as advertising and recommendations on a timely and individual basis, leading to the construction of a profitable relationship with each customer.
