
Keyword Search Result

[Keyword] PA(8249hit)

1241-1260hit(8249hit)

  • Correlation-Based Optimal Chirp Rate Allocation for Chirp Spread Spectrum Using Multiple Linear Chirps

    Kwang-Yul KIM  Seung-Woo LEE  Yu-Min HWANG  Jae-Seang LEE  Yong-Sin KIM  Jin-Young KIM  Yoan SHIN  

     
    LETTER-Spread Spectrum Technologies and Applications

      Vol:
    E100-A No:4
      Page(s):
    1088-1091

    A chirp spread spectrum (CSS) system uses a chirp signal whose instantaneous frequency changes with time to spread the transmission bandwidth. In a CSS system, the transmission performance can be improved simply by increasing the time-bandwidth product, which is known as the processing gain. However, increasing the transmission bandwidth is limited by spectrum regulation. In this letter, we propose a correlation-based chirp rate allocation method that improves the transmission performance by analyzing the cross-correlation coefficient under the same time-bandwidth product. To analyze the transmission performance of the proposed method, we analytically derive the cross-correlation coefficient according to the time-bandwidth separation product and simulate the transmission performance. The simulation results show that the proposed method can analytically allocate the optimal chirp rate and improve the transmission performance.
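
    As a side note, the normalized cross-correlation between two linear chirps of different chirp rates, the quantity the letter analyzes, can be checked numerically. The following is a minimal sketch under assumed parameter values (sample rate, duration, bandwidth, chirp rates); it is not the authors' analytical derivation.

    ```python
    # Minimal sketch (assumed parameters, not the authors' derivation): numerically
    # estimate the normalized cross-correlation coefficient between two linear
    # chirps that share the same duration but use different chirp rates.
    import numpy as np

    fs = 1e6                      # sample rate [Hz] (assumed)
    T = 1e-3                      # chirp duration [s] (assumed)
    t = np.arange(0, T, 1 / fs)

    def linear_chirp(f0, k):
        """Complex baseband linear chirp with instantaneous frequency f0 + k*t."""
        return np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

    B = 125e3                               # total bandwidth [Hz] (assumed)
    c1 = linear_chirp(-B / 2, B / T)        # chirp rate +B/T
    c2 = linear_chirp(-B / 2, 0.5 * B / T)  # a slower chirp rate (example value)

    rho = np.abs(np.vdot(c1, c2)) / (np.linalg.norm(c1) * np.linalg.norm(c2))
    print(f"cross-correlation coefficient: {rho:.3f}")
    ```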

  • Grouping Methods for Pattern Matching over Probabilistic Data Streams

    Kento SUGIURA  Yoshiharu ISHIKAWA  Yuya SASAKI  

     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    718-729

    As the development of sensor and machine learning technologies has progressed, it has become increasingly important to detect patterns from probabilistic data streams. In this paper, we focus on complex event processing based on pattern matching. When we apply pattern matching to probabilistic data streams, numerous matches may be detected in the same time interval because of the uncertainty of the data. Although existing methods distinguish between such matches, they may derive inappropriate results when some of the matches correspond to the real-world event that has occurred during the time interval. Thus, we propose two grouping methods for matches. Our methods output groups that indicate the occurrence of complex events during the given time intervals. In this paper, we first define groups based on temporal overlap and propose two grouping algorithms, introducing the notions of complete overlap and single overlap. Then, we propose an efficient approach for calculating the occurrence probabilities of groups by using deterministic finite automata that are generated from the query patterns. Finally, we empirically evaluate the effectiveness of our methods by applying them to real and synthetic datasets.
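
    The occurrence-probability computation that the paper bases on deterministic finite automata can be illustrated with a toy example. The sketch below (a hypothetical two-symbol pattern and made-up event probabilities, not the paper's grouping algorithms) propagates state-occupancy probabilities of a DFA over a probabilistic symbol stream and reads off the probability that the pattern has been matched.

    ```python
    # Minimal sketch: DFA state-occupancy probabilities over a probabilistic stream.
    from collections import defaultdict

    # Hypothetical DFA for the pattern "a then b": states 0 -> 1 -> 2 (accepting).
    delta = {(0, 'a'): 1, (0, 'b'): 0,
             (1, 'a'): 1, (1, 'b'): 2,
             (2, 'a'): 2, (2, 'b'): 2}
    accepting = {2}

    # Each stream element is a distribution over symbols (an uncertain reading).
    stream = [{'a': 0.9, 'b': 0.1},
              {'a': 0.3, 'b': 0.7},
              {'a': 0.5, 'b': 0.5}]

    state_prob = defaultdict(float)
    state_prob[0] = 1.0
    for dist in stream:
        nxt = defaultdict(float)
        for q, pq in state_prob.items():
            for sym, ps in dist.items():
                nxt[delta[(q, sym)]] += pq * ps
        state_prob = nxt

    print("match probability:",
          sum(p for q, p in state_prob.items() if q in accepting))
    ```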

  • Dynamic Path Provisioning and Disruption-Free Reoptimization Algorithms for Bandwidth on-Demand Services Considering Fairness

    Masahiro NAKAGAWA  Hiroshi HASEGAWA  Ken-ichi SATO  

     
    PAPER-Network

      Publicized:
    2016/10/28
      Vol:
    E100-B No:4
      Page(s):
    536-547

    Adaptive and flexible network control technology is considered essential for efficient network resource utilization. Moreover, such technology is becoming a key to cost-effectively meeting diverse service requirements and accommodating heavier traffic with limited network resources, demands that conventional static operation cannot satisfy. To address this issue, we previously studied dynamic network control technology for large-capacity network services including on-demand broad bandwidth provisioning services and layer-one VPN. Our previous study introduced a simple weighting function for achieving fairness in terms of path length and proposed two dynamic Make Before Break Routing algorithms for reducing blocking probability. These algorithms enhance network utilization by rerouting existing paths to alternative routes while completely avoiding disruption for highly reliable services. However, the impact of this avoidance of service disruption on blocking probability has not been clarified. In this paper, we propose modified versions of the algorithms that enhance network utilization while slightly increasing disruption by rerouting, which enables us to elucidate the effectiveness of hitless rerouting. We also provide extensive evaluations, including a comparison of the original and modified algorithms. Numerical examples demonstrate that they achieve not only a high degree of fairness but also low service blocking probability, and that hitless rerouting is achieved with only a small increase in blocking probability.

  • Accent Sandhi Estimation of Tokyo Dialect of Japanese Using Conditional Random Fields Open Access

    Masayuki SUZUKI  Ryo KUROIWA  Keisuke INNAMI  Shumpei KOBAYASHI  Shinya SHIMIZU  Nobuaki MINEMATSU  Keikichi HIROSE  

     
    INVITED PAPER

      Publicized:
    2016/12/08
      Vol:
    E100-D No:4
      Page(s):
    655-661

    When synthesizing speech from Japanese text, correct assignment of accent nuclei for input text with arbitrary contents is indispensable in obtaining naturally-sounding synthetic speech. A phenomenon called accent sandhi occurs in utterances of Japanese; when a word is uttered in a sentence, its accent nucleus may change depending on the contexts of preceding/succeeding words. This paper describes a statistical method for automatically predicting the accent nucleus changes due to accent sandhi. First, as the basis of the research, a database of Japanese text was constructed with labels of accent phrase boundaries and accent nucleus positions when uttered in sentences. A single native speaker of Tokyo dialect Japanese annotated all the labels for 6,344 Japanese sentences. Then, a conditional-random-field-based method was developed using this database to predict accent phrase boundaries and accent nuclei. The proposed method predicted accent nucleus positions for accent phrases with 94.66% accuracy, clearly surpassing the 87.48% accuracy obtained using our rule-based method. A listening experiment was also conducted on synthetic speech obtained using the proposed method and that obtained using the rule-based method. The results show that our method significantly improved the naturalness of synthetic speech.
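
    For illustration, accent-nucleus prediction can be framed as linear-chain CRF sequence labeling roughly as sketched below. The sklearn-crfsuite toolkit, the toy data, the feature set, and the label names are all assumptions for illustration; they are not the authors' actual toolkit, features, or annotation scheme.

    ```python
    # Minimal CRF sequence-labeling sketch with made-up data and labels.
    import sklearn_crfsuite

    # Two toy "sentences"; surface form, POS, dictionary accent type, and gold
    # label are all invented for illustration.
    train_sentences = [
        [{'surface': 'hashi',  'pos': 'NOUN', 'base_accent': '1', 'label': 'NUCLEUS'},
         {'surface': 'ga',     'pos': 'PART', 'base_accent': '0', 'label': 'NO_NUCLEUS'}],
        [{'surface': 'taberu', 'pos': 'VERB', 'base_accent': '2', 'label': 'NUCLEUS'},
         {'surface': 'to',     'pos': 'PART', 'base_accent': '0', 'label': 'NO_NUCLEUS'}],
    ]

    def word_features(sent, i):
        w = sent[i]
        feats = {'surface': w['surface'], 'pos': w['pos'], 'base_accent': w['base_accent']}
        if i > 0:
            feats['prev_pos'] = sent[i - 1]['pos']
        if i + 1 < len(sent):
            feats['next_pos'] = sent[i + 1]['pos']
        return feats

    X = [[word_features(s, i) for i in range(len(s))] for s in train_sentences]
    y = [[w['label'] for w in s] for s in train_sentences]

    crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))
    ```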

  • Development and Evaluation of Online Infrastructure to Aid Teaching and Learning of Japanese Prosody Open Access

    Nobuaki MINEMATSU  Ibuki NAKAMURA  Masayuki SUZUKI  Hiroko HIRANO  Chieko NAKAGAWA  Noriko NAKAMURA  Yukinori TAGAWA  Keikichi HIROSE  Hiroya HASHIMOTO  

     
    INVITED PAPER

      Publicized:
    2016/12/22
      Vol:
    E100-D No:4
      Page(s):
    662-669

    This paper develops an online and freely available framework to aid teaching and learning of the prosodic control of Tokyo Japanese: how to generate its adequate word accent and phrase intonation. This framework is called OJAD (Online Japanese Accent Dictionary) [1] and it provides three features. 1) Visual, auditory, systematic, and comprehensive illustration of patterns of accent change (accent sandhi) of verbs and adjectives. Here, only the changes caused by twelve fundamental conjugations are covered. 2) Visual illustration of the accent pattern of a given verbal expression, which is a combination of a verb and its postpositional auxiliary words. 3) Visual illustration of the pitch pattern of any given sentence and the expected positions of accent nuclei in the sentence. The third feature is technically implemented by using an accent change prediction module that we developed for Japanese Text-To-Speech (TTS) synthesis [2],[3]. Experiments show that accent nucleus assignment to given texts by the proposed framework is much more accurate than that by native speakers. Subjective and objective assessments by teachers and learners show the extremely high pedagogical effectiveness of the developed framework.

  • Efficient Multiplexer Networks for Field-Data Extractors and Their Evaluations

    Koki ITO  Kazushi KAWAMURA  Yutaka TAMIYA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E100-A No:4
      Page(s):
    1015-1028

    As seen in stream data processing, it is necessary to extract a particular data field from bulk data, for which we can use a field-data extractor. In particular, an (M,N)-field-data extractor reads out any consecutive N bytes from an M-byte register by connecting its input/output using multiplexers (MUXs). However, the number of required MUXs increases rapidly as the input/output byte widths increase. It is known that partitioning a MUX network reduces the number of MUXs. In this paper, we first consider a multi-layered MUX network, which is generated by repeatedly partitioning a MUX network into a collection of single-layered MUX networks. We show that the multi-layered MUX network is equivalent to a barrel shifter from which redundant MUXs and wires are removed, and we prove that its number of required MUXs is the smallest among MUX-network-partitioning based field-data extractors. Next, we propose a rotator-based MUX network for a field-data extractor, which is based on reading out particular data from an input register into a rotator. The byte width of the rotator is the same as that of the output register, and hence no extra wires or MUXs are required. By rotating the input data appropriately, we finally obtain correctly ordered data in the output register. Experimental results show that a multi-layered MUX network reduces the number of gates required to construct a field-data extractor by up to 97.0% compared with a naive approach, with a delay of 1.8 ns-2.3 ns. A rotator-based MUX network with a control circuit also reduces the number of required gates by up to 97.3% compared with a naive approach, with a delay of 2.1 ns-2.9 ns.
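
    A behavioral software model of the rotator-based extractor (not RTL; byte widths chosen arbitrarily) makes the rotate-then-truncate idea concrete:

    ```python
    # Behavioral sketch of the rotator-based (M,N) field-data extractor: rotate
    # the M-byte input so the desired field starts at byte 0, then take the
    # first N bytes of the rotated word.
    def extract_field(reg: bytes, offset: int, n: int) -> bytes:
        """Read n consecutive bytes starting at 'offset' from an M-byte register."""
        rotated = reg[offset:] + reg[:offset]   # left-rotate by 'offset' bytes
        return rotated[:n]                      # output register is only n bytes wide

    reg = bytes(range(16))                          # 16-byte input register (M = 16)
    print(extract_field(reg, offset=5, n=4).hex())  # bytes 5..8 -> '05060708'
    ```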

  • Some Constructions for Fractional Repetition Codes with Locality 2

    Mi-Young NAM  Jung-Hyun KIM  Hong-Yeop SONG  

     
    PAPER-Coding Theory

      Vol:
    E100-A No:4
      Page(s):
    936-943

    In this paper, we examine the locality property of the original Fractional Repetition (FR) codes and propose two constructions for FR codes with better locality. To this end, we first derive the capacity of FR codes with locality 2, that is, the maximum size of the file that can be stored. Construction 1 generates an FR code with repetition degree 2 and locality 2. This code is optimal in the sense of achieving the capacity we derived. Construction 2 generates an FR code with repetition degree 3 and locality 2 based on 4-regular graphs with girth g. This code is also optimal in the same sense.

  • A Logarithmic Compression ADC Using Transient Response of a Comparator

    Yuji INAGAKI  Yusaku SUGIMORI  Eri IOKA  Yasuyuki MATSUYA  

     
    BRIEF PAPER

      Vol:
    E100-C No:4
      Page(s):
    359-362

    This paper describes a logarithmic compression ADC using a subranging TDC and the transient response of a comparator. We utilized the settling time of the comparator for logarithmic compression instead of a logarithmic amplifier. The settling time of the comparator is inversely proportional to the logarithm of the input voltage. In the proposed ADC, an input voltage is converted into a pulse whose width represents the settling time of the comparator. Subsequently, the TDC converts the pulse width into a binary code. The supply voltage of the proposed ADC can be made lower than that of a conventional logarithmic ADC because the analog-to-digital conversion takes place in the time domain. We confirmed through a 0.18-µm CMOS circuit simulation that the proposed ADC achieves a resolution of 11 bits, a sampling rate of 20 MS/s, a dynamic range of 59 dB, and a power consumption of 9.8 mW at 1.5 V operation.
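
    As a rough illustration of the conversion principle, the toy model below maps an input voltage to a logarithmically varying pulse width and quantizes it with an ideal TDC. All constants (time constant, full scale, TDC resolution) are hypothetical, and the model is not the authors' circuit.

    ```python
    # Toy behavioral model: a pulse width that varies logarithmically with the
    # input amplitude is quantized by an ideal TDC.
    import numpy as np

    TAU = 1e-9          # comparator time constant [s] (hypothetical)
    V_FS = 1.0          # full-scale input [V] (hypothetical)
    T_LSB = 5e-12       # TDC resolution [s] (hypothetical)

    def log_adc(vin):
        pulse_width = TAU * np.log(V_FS / vin)   # width grows as the input shrinks
        return int(round(pulse_width / T_LSB))   # TDC: pulse width -> binary code

    for vin in (0.9, 0.1, 0.01, 0.001):
        print(f"vin = {vin:7.3f} V -> code = {log_adc(vin)}")
    ```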

  • Particle Swarm Optimizer Networks with Stochastic Connection for Improvement of Diversity Search Ability to Solve Multimodal Optimization Problems

    Tomoyuki SASAKI  Hidehiro NAKANO  Arata MIYAUCHI  Akira TAGUCHI  

     
    PAPER-Nonlinear Problems

      Vol:
    E100-A No:4
      Page(s):
    996-1007

    The particle swarm optimizer network (PSON) is one of the multi-swarm PSOs. In PSON, a population is divided into multiple sub-PSOs, each of which searches the solution space independently. Although PSON has good search performance, it may become trapped in a local optimum. In this paper, we introduce into PSON a dynamic stochastic network topology called “PSON with stochastic connection” (PSON-SC). In PSON-SC, each sub-PSO can be connected to the global best (gbest) information memory and refer to gbest stochastically. We show that the diversity of PSON-SC is higher than that of PSON, and confirm the effectiveness of PSON-SC through extensive numerical simulations.
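
    The stochastic-connection idea can be sketched as a multi-swarm PSO in which each sub-swarm refers to the network-wide best position only with some probability, and otherwise to its own sub-swarm best. The sketch below uses a toy objective and hypothetical parameters; it illustrates the concept, not the PSON-SC algorithm as specified in the paper.

    ```python
    # Multi-swarm PSO with a stochastic reference to the global best (sketch).
    import numpy as np
    rng = np.random.default_rng(0)

    def sphere(x):                       # toy stand-in objective
        return np.sum(x ** 2, axis=-1)

    DIM, SUBSWARMS, PARTICLES, ITERS, P_CONNECT = 5, 4, 10, 200, 0.3
    w, c1, c2 = 0.7, 1.5, 1.5

    x = rng.uniform(-5, 5, (SUBSWARMS, PARTICLES, DIM))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), sphere(x)
    sub_best = pbest[np.arange(SUBSWARMS), pbest_f.argmin(axis=1)]   # per-sub-swarm best
    gbest = pbest.reshape(-1, DIM)[pbest_f.argmin()]                 # network-wide best

    for _ in range(ITERS):
        for s in range(SUBSWARMS):
            # stochastic connection: sometimes use gbest, otherwise the local sub-swarm best
            guide = gbest if rng.random() < P_CONNECT else sub_best[s]
            r1, r2 = rng.random((2, PARTICLES, DIM))
            v[s] = w * v[s] + c1 * r1 * (pbest[s] - x[s]) + c2 * r2 * (guide - x[s])
            x[s] += v[s]
        f = sphere(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        sub_best = pbest[np.arange(SUBSWARMS), pbest_f.argmin(axis=1)]
        gbest = pbest.reshape(-1, DIM)[pbest_f.argmin()]

    print("best value found:", pbest_f.min())
    ```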

  • Codebook Learning for Image Recognition Based on Parallel Key SIFT Analysis

    Feng YANG  Zheng MA  Mei XIE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2017/01/10
      Vol:
    E100-D No:4
      Page(s):
    927-930

    The quality of the codebook is very important in visual image classification. In order to boost the classification performance, a codebook generation scheme for scene image recognition based on parallel key SIFT analysis (PKSA) is presented in this paper. The method iteratively applies the classical k-means clustering algorithm and similarity analysis to evaluate key SIFT descriptors (KSDs) from the input images, and generates the codebook by a relaxed k-means algorithm according to the set of KSDs. To evaluate the performance of the PKSA scheme, the image feature vector is calculated by sparse coding with Spatial Pyramid Matching (ScSPM) after the codebook is constructed. The PKSA-based ScSPM method is tested and compared on three public scene image datasets. The experimental results show that the proposed PKSA scheme can significantly reduce computational time and enhance the categorization rate.
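
    The basic codebook step (k-means over local descriptors followed by hard assignment) can be sketched as below; the key-SIFT selection and the relaxed k-means of the PKSA scheme are not reproduced, and random vectors stand in for real SIFT descriptors.

    ```python
    # Minimal codebook sketch: k-means over stand-in descriptors, then a
    # hard-assignment histogram for one "image".
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    descriptors = rng.random((5000, 128)).astype(np.float32)  # stand-in SIFT descriptors

    K = 256                                   # codebook size (hypothetical)
    kmeans = KMeans(n_clusters=K, n_init=4, random_state=0).fit(descriptors)
    codebook = kmeans.cluster_centers_        # K x 128 visual words

    words = kmeans.predict(descriptors[:300]) # descriptors of one "image" (stand-in)
    hist = np.bincount(words, minlength=K).astype(float)
    hist /= hist.sum()                        # normalized bag-of-words vector
    ```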

  • Fast Ad-Hoc Search Algorithm for Personalized PageRank Open Access

    Yasuhiro FUJIWARA  Makoto NAKATSUJI  Hiroaki SHIOKAWA  Takeshi MISHIMA  Makoto ONIZUKA  

     
    INVITED PAPER

      Publicized:
    2017/01/23
      Vol:
    E100-D No:4
      Page(s):
    610-620

    Personalized PageRank (PPR) is a typical similarity metric between nodes in a graph, and node searches based on PPR are widely used. In many applications, graphs change dynamically, and in such cases, it is desirable to perform ad hoc searches based on PPR. An ad hoc search involves performing searches by varying the search parameters or graphs. However, as the size of a graph increases, the computation cost of performing an ad hoc search can become excessive. In this paper, we propose a method called Castanet that offers fast ad hoc searches of PPR. The proposed method features (1) iterative estimation of the upper and lower bounds of PPR scores, and (2) dynamic pruning of nodes that are not needed to obtain a search result. Experiments confirm that the proposed method does offer faster ad hoc PPR searches than existing methods.
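
    As background, a plain Personalized PageRank computation by power iteration on a toy graph is sketched below; Castanet's upper/lower-bound estimation and node pruning are not reproduced here.

    ```python
    # Baseline PPR by power iteration on a small directed graph (sketch only).
    import numpy as np

    adj = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # toy directed graph
    n, alpha, source = 4, 0.15, 0

    P = np.zeros((n, n))                            # row-stochastic transition matrix
    for u, outs in adj.items():
        for v in outs:
            P[u, v] = 1.0 / len(outs)

    e = np.zeros(n); e[source] = 1.0                # restart distribution
    ppr = e.copy()
    for _ in range(100):
        nxt = alpha * e + (1 - alpha) * P.T @ ppr
        if np.abs(nxt - ppr).sum() < 1e-12:
            break
        ppr = nxt

    print("PPR scores from node", source, ":", np.round(ppr, 4))
    ```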

  • XY-Separable Scale-Space Filtering by Polynomial Representations and Its Applications Open Access

    Gou KOUTAKI  Keiichi UCHIMURA  

     
    INVITED PAPER

      Publicized:
    2017/01/11
      Vol:
    E100-D No:4
      Page(s):
    645-654

    In this paper, we propose the application of principal component analysis (PCA) to scale-spaces. PCA is a standard method used in computer vision. Because the translation of an input image into scale-space is a continuous operation, it requires the extension of conventional finite matrix-based PCA to an infinite number of dimensions. Here, we use spectral theory to resolve this infinite eigenvalue problem through the use of integration, and we propose an approximate solution based on polynomial equations. In order to clarify its eigensolutions, we apply spectral decomposition to Gaussian scale-space and scale-normalized Laplacian of Gaussian (sLoG) space. As an application of the proposed method, we introduce a method for generating Gaussian blur images and sLoG images, demonstrating that such images can be generated with very high accuracy at an arbitrary scale through simple linear combination. Furthermore, to make the scale-space filtering efficient, we approximate the basis filter set with a Gaussian-lobe approximation, which yields XY-separable filters. As a more practical example, we propose a new Scale Invariant Feature Transform (SIFT) detector.
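
    The XY-separable filtering itself can be sketched simply: a 2-D Gaussian blur is applied as two 1-D passes, one along x and one along y, at several scales. The sketch below uses SciPy and arbitrary scale values; the PCA/spectral decomposition of the scale-space is not shown.

    ```python
    # Separable Gaussian scale-space construction (sketch only).
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    image = np.random.default_rng(0).random((128, 128))   # stand-in image
    scales = [1.0, 2.0, 4.0, 8.0]                         # example sigma values

    scale_space = []
    for sigma in scales:
        blurred = gaussian_filter1d(image, sigma, axis=1)     # filter along x
        blurred = gaussian_filter1d(blurred, sigma, axis=0)   # then along y
        scale_space.append(blurred)

    scale_space = np.stack(scale_space)    # shape: (num_scales, H, W)
    ```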

  • Dynamic Scheduling of Workflow for Makespan and Robustness Improvement in the IaaS Cloud

    Haiou JIANG  Haihong E  Meina SONG  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2017/01/13
      Vol:
    E100-D No:4
      Page(s):
    813-821

    The Infrastructure-as-a-Service (IaaS) cloud is attracting applications due to its scalability, dynamic resource provisioning, and pay-as-you-go cost model. Scheduling scientific workflows in the IaaS cloud is faced with uncertainties such as resource performance variations and unknown failures. A schedule is said to be robust if it is able to absorb some degree of uncertainty during the workflow execution. In this paper, we propose a novel workflow scheduling algorithm for the IaaS cloud called Dynamic Earliest-Finish-Time (DEFT), which improves both makespan and robustness. DEFT is a dynamic scheduling algorithm containing a set of list scheduling loops invoked when some tasks complete successfully and release resources. In each loop, unscheduled tasks are ranked, and the best virtual machine (VM) with the minimum estimated earliest finish time is selected for each task. A task is scheduled only when all its parents have completed and the selected best VM is ready. Intermediate data is sent from a finished task to the best VM selected for each of its children before the child is scheduled. Experiments show that DEFT produces shorter makespans with greater robustness than existing typical list and dynamic scheduling algorithms in the IaaS cloud.
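
    The earliest-finish-time selection step at the core of such list scheduling can be sketched as follows; the VM names, runtimes, and data-ready times are made up, and data-transfer modeling and the full DEFT loop are omitted.

    ```python
    # Earliest-finish-time VM selection for one ready task (sketch only).
    def pick_vm(task, vms, runtime, data_ready_time):
        """vms: {vm_id: time the VM becomes free}; runtime: {(task, vm_id): seconds}."""
        best_vm, best_finish = None, float('inf')
        for vm_id, free_at in vms.items():
            start = max(free_at, data_ready_time[task])   # wait for the VM and for inputs
            finish = start + runtime[(task, vm_id)]
            if finish < best_finish:
                best_vm, best_finish = vm_id, finish
        return best_vm, best_finish

    vms = {'vm1': 0.0, 'vm2': 3.0}
    runtime = {('t1', 'vm1'): 10.0, ('t1', 'vm2'): 6.0}
    data_ready_time = {'t1': 2.0}
    print(pick_vm('t1', vms, runtime, data_ready_time))   # -> ('vm2', 9.0)
    ```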

  • Quick Window Query Processing Using a Non-Uniform Cell-Based Index in Wireless Data Broadcast Environment

    SeokJin IM  HeeJoung HWANG  

     
    LETTER-Mobile Information Network and Personal Communications

      Vol:
    E100-A No:4
      Page(s):
    1092-1096

    This letter proposes a Non-uniform Cell-based Index (NCI) to enable clients to quickly process window queries in a wireless spatial data broadcast environment. To improve the access time, NCI reduces the probe wait time by equalizing the spacing between indexes using non-uniformly partitioned cells of the data space. Through a performance evaluation, we show that the proposed NCI outperforms existing index schemes for window queries over spatial data in terms of access time.

  • Classification of Gait Anomaly due to Lesion Using Full-Body Gait Motions

    Tsuyoshi HIGASHIGUCHI  Toma SHIMOYAMA  Norimichi UKITA  Masayuki KANBARA  Norihiro HAGITA  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2017/01/10
      Vol:
    E100-D No:4
      Page(s):
    874-881

    This paper proposes a method for evaluating a physical gait motion based on a 3D human skeleton measured by a depth sensor. While similar methods measure and evaluate the motion of only a body part of interest (e.g., the knee), the proposed method comprehensively evaluates the motion of the full body. Gait motions with a variety of physical disabilities due to lesioned body parts are recorded and modeled in advance for gait anomaly detection. This detection is achieved by identifying lesioned parts using a set of pose features extracted from gait sequences. In experiments, the proposed features extracted from the full body allowed us to identify where a subject was injured with 83.1% accuracy using a model optimized for the individual. The superiority of the full-body features was validated in contrast to local features extracted from only a body part of interest (77.1% with lower-body features and 65% with upper-body features). Furthermore, the effectiveness of the proposed full-body features was also validated with a single universal model used for all subjects: 55.2%, 44.7%, and 35.5% with the full-body, lower-body, and upper-body features, respectively.

  • SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    Semih YUMUSAK  Erdogan DOGDU  Halife KODAZ  Andreas KAMILARIS  Pierre-Yves VANDENBUSSCHE  

     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    758-767

    Linked data endpoints are online query gateways to semantically annotated linked data sources. In order to query these data sources, the SPARQL query language is used as a standard. Although a linked data endpoint (i.e., SPARQL endpoint) is a basic Web service, it provides a platform for federated online querying and data linking methods. For linked data consumers, SPARQL endpoint availability and discovery are crucial for live querying and semantic information retrieval. Current studies show that the availability of linked datasets is very low, while the locations of linked data endpoints change frequently. There are linked data repositories that collect and list the available linked data endpoints or resources. It is observed that around half of the endpoints listed in existing repositories are not accessible (temporarily or permanently offline). These endpoint URLs are shared through repository websites such as Datahub.io; however, they are weakly maintained and revised only by their publishers. In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a “search keyword” discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, the collected search keywords are used to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). Using this method, most of the SPARQL endpoints currently listed in existing endpoint repositories, as well as a significant number of new SPARQL endpoints, have been discovered. We analyze our findings in detail in comparison to the Datahub collection.
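
    Endpoint availability monitoring of the kind described can be illustrated with a liveness probe that sends a trivial ASK query over the standard SPARQL protocol. The sketch below shows only this monitoring idea, not the SpEnD metacrawler; the example endpoint URL is illustrative.

    ```python
    # Minimal SPARQL endpoint liveness probe (sketch only).
    import requests

    def endpoint_alive(url: str, timeout: float = 10.0) -> bool:
        try:
            r = requests.get(
                url,
                params={'query': 'ASK { ?s ?p ?o }'},
                headers={'Accept': 'application/sparql-results+json'},
                timeout=timeout,
            )
            # An available endpoint returns HTTP 200 and a JSON ASK result.
            return r.status_code == 200 and 'boolean' in r.json()
        except (requests.RequestException, ValueError):
            return False

    print(endpoint_alive('https://dbpedia.org/sparql'))   # example public endpoint
    ```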

  • Achievable Error Rate Performance Analysis of Space Shift Keying Systems with Imperfect CSI

    Jinkyu KANG  Seongah JEONG  Hoojin LEE  

     
    LETTER-Communication Theory and Signals

      Vol:
    E100-A No:4
      Page(s):
    1084-1087

    In this letter, efficient closed-form formulas for the exact and asymptotic average bit error probability (ABEP) of space shift keying (SSK) systems are derived over Rayleigh fading channels with imperfect channel state information (CSI). Specifically, for a generic 2×NR multiple-input multiple-output (MIMO) system with maximum likelihood (ML) detection, the impact of imperfect CSI is taken into account through two types of channel estimation error: one with a fixed variance and one whose variance is a function of the number of pilot symbols and the signal-to-noise ratio (SNR). Then, explicit evaluations of the bit error floor (BEF) and the asymptotic SNR loss are carried out based on the derived asymptotic ABEP formula, which accounts for the impact of imperfect CSI on the SSK system. Numerical results are presented to validate the exactness of our theoretical analysis.
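
    For context, the generic ML detection rule for SSK with estimated channels, together with one common estimation-error model consistent with the fixed-variance case, can be written as below. This illustrates the system model only and is not the letter's ABEP derivation.

    ```latex
    % Generic SSK ML detection with estimated channels (illustration only):
    % the receiver decides which transmit antenna was active from the received
    % vector y using channel estimates corrupted by an error term e_j.
    \[
      \hat{\jmath} \;=\; \arg\min_{j \in \{1,2\}} \bigl\| \mathbf{y} - \hat{\mathbf{h}}_j \bigr\|^2 ,
      \qquad
      \hat{\mathbf{h}}_j = \mathbf{h}_j + \mathbf{e}_j , \quad
      \mathbf{e}_j \sim \mathcal{CN}\!\bigl(\mathbf{0},\, \sigma_e^2 \mathbf{I}_{N_R}\bigr).
    \]
    ```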

  • Antenna Array Arrangement for Massive MIMO to Reduce Channel Spatial Correlation in LOS Environment

    Takuto ARAI  Atsushi OHTA  Yushi SHIRATO  Satoshi KUROSAKI  Kazuki MARUTA  Tatsuhiko IWAKUNI  Masataka IIZUKA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2016/10/21
      Vol:
    E100-B No:4
      Page(s):
    594-601

    This paper proposes a new antenna array design for Massive MIMO for capacity enhancement in line-of-sight (LOS) environments. Massive MIMO has two key problems: the heavy overhead of feeding back the channel state information (CSI) for a very large number of transmit and receive antenna element pairs, and the huge computational complexity imposed by very large-scale matrices. We have already proposed a practical application of Massive MIMO, namely Massive Antenna Systems for Wireless Entrance links (MAS-WE), which can clearly solve these two key problems. However, conventional antenna array arrangements, e.g., the uniform planar array (UPA) or uniform circular array (UCA), degrade the system capacity of MAS-WE due to the channel spatial correlation created by the inter-element spacing. When the LOS component dominates the propagation channel, the antenna array can be designed to minimize the inter-user channel correlation. We propose an antenna array arrangement that controls the grating-lobe positions and achieves very low channel spatial correlation. Simulation results show that the proposed arrangement can reduce the spatial correlation at the CDF = 50% point by 80% compared to UCA and 75% compared to UPA.
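
    The inter-user channel spatial correlation in a pure LOS model can be illustrated as the normalized inner product of two array response vectors. The sketch below evaluates this for a UCA with assumed geometry and angles; it shows the correlation metric being reduced, not the proposed arrangement itself.

    ```python
    # LOS spatial correlation between two users' steering vectors for a UCA (sketch).
    import numpy as np

    def uca_steering(theta, n_elem=64, radius_wavelengths=2.0):
        """Far-field LOS array response of a uniform circular array in the azimuth plane."""
        phi_m = 2 * np.pi * np.arange(n_elem) / n_elem          # element angular positions
        return np.exp(1j * 2 * np.pi * radius_wavelengths * np.cos(theta - phi_m))

    def spatial_correlation(theta1, theta2):
        a1, a2 = uca_steering(theta1), uca_steering(theta2)
        return np.abs(np.vdot(a1, a2)) / (np.linalg.norm(a1) * np.linalg.norm(a2))

    for sep_deg in (1, 5, 20):
        rho = spatial_correlation(0.0, np.deg2rad(sep_deg))
        print(f"angular separation {sep_deg:>2} deg -> correlation {rho:.3f}")
    ```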

  • A Nonparametric Estimation Approach Based on Apollonius Circles for Outdoor Localization

    Byung Jin LEE  Kyung Seok KIM  

     
    PAPER-Sensing

      Publicized:
    2016/11/07
      Vol:
    E100-B No:4
      Page(s):
    638-645

    When performing measurements in an outdoor field environment, various interference factors arise, so many studies have been conducted to increase localization accuracy. This paper presents a novel probability-based approach to position estimation based on Apollonius circles. The proposed algorithm is a modification of existing trilateration techniques. This method does not need to know the exact transmission power of the source and does not require a calibration procedure. The proposed algorithm is verified in several typical environments, and simulation results show that the proposed method outperforms existing algorithms.
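
    The Apollonius-circle building block can be sketched as follows: from two anchor positions and a distance ratio (which a log-distance path-loss model lets one estimate from the RSS difference without knowing the transmit power), compute the circle of candidate positions. Anchor positions, RSS values, and the path-loss exponent below are assumptions; the paper's full estimator is not shown.

    ```python
    # Apollonius circle from two anchors and a distance ratio (sketch only).
    import numpy as np

    def apollonius_circle(a, b, k):
        """Locus of points P with |P-a| / |P-b| = k (k != 1): returns center and radius."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        center = (a - k**2 * b) / (1 - k**2)
        radius = k * np.linalg.norm(a - b) / abs(1 - k**2)
        return center, radius

    def ratio_from_rss(rss1_dbm, rss2_dbm, path_loss_exp=3.0):
        """d1/d2 from an RSS difference under a log-distance path-loss model."""
        return 10 ** ((rss2_dbm - rss1_dbm) / (10 * path_loss_exp))

    k = ratio_from_rss(-70.0, -60.0)                      # example readings
    center, radius = apollonius_circle((0.0, 0.0), (20.0, 0.0), k)
    print(f"k = {k:.2f}, center = {center}, radius = {radius:.2f} m")
    ```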

  • Improving Dynamic Scaling Performance of Cassandra

    Saneyasu YAMAGUCHI  Yuki MORIMITSU  

     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    682-692

    The load on an Internet service changes remarkably from hour to hour. Thus, service systems are expected to scale dynamically according to the load. A KVS (key-value store) is a scalable DBMS (database management system) widely used in large-scale Internet services. In this paper, we focus on Cassandra, a popular open-source KVS implementation, and discuss methods for improving its dynamic scaling performance. First, we evaluate the node joining time, which is the time needed to complete adding a node to a running KVS system, and show that its bottleneck is disk I/O. Second, we analyze disk accesses in the nodes and show that a few heavily accessed files cause a large number of disk accesses. Third, we propose two methods for improving the elasticity of Cassandra, i.e., decreasing node adding and removing time. One method significantly reduces disk accesses by keeping the heavily accessed file in the page cache. The other method optimizes the I/O scheduler behavior. Lastly, we evaluate the elasticity of our methods. Our experimental results demonstrate that the methods can improve the scaling-up and scaling-down performance of Cassandra.
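
    The page-cache idea can be approximated from user space with an advisory prefetch hint, as sketched below; this is only a hint (not a hard pin), it is not Cassandra's or the paper's actual mechanism, and the SSTable path is hypothetical.

    ```python
    # Advisory page-cache warming of a heavily accessed file (Linux-only sketch).
    import os

    def warm_file(path: str, chunk: int = 64 * 1024 * 1024) -> None:
        fd = os.open(path, os.O_RDONLY)
        try:
            size = os.fstat(fd).st_size
            offset = 0
            while offset < size:
                length = min(chunk, size - offset)
                # Hint the kernel to prefetch this range into the page cache.
                os.posix_fadvise(fd, offset, length, os.POSIX_FADV_WILLNEED)
                offset += length
        finally:
            os.close(fd)

    warm_file('/var/lib/cassandra/data/ks/table/mc-1-big-Data.db')  # hypothetical path
    ```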
