
Keyword Search Result

[Keyword] SC(4570hit)

641-660hit(4570hit)

  • A 1.9GHz Low-Phase-Noise Complementary Cross-Coupled FBAR-VCO without Additional Voltage Headroom in 0.18µm CMOS Technology

    Guoqiang ZHANG  Awinash ANAND  Kousuke HIKICHI  Shuji TANAKA  Masayoshi ESASHI  Ken-ya HASHIMOTO  Shinji TANIGUCHI  Ramesh K. POKHAREL  

     
    PAPER

      Vol:
    E100-C No:4
      Page(s):
    363-369

    A 1.9GHz film bulk acoustic resonator (FBAR)-based low-phase-noise complementary cross-coupled voltage-controlled oscillator (VCO) is presented. The FBAR-VCO is designed and fabricated in a 0.18µm CMOS process. The DC latch issue and the low-frequency instability are resolved by employing an NMOS source-coupling capacitor and DC-blocked cross-coupled pairs. Since no additional voltage headroom is required, the proposed FBAR-VCO can operate at a low power supply voltage of 1.1V with a wide voltage swing of 0.9V. Effective phase-noise optimization is realized by a reasonable trade-off between the output resistance and the transconductance of the cross-coupled pairs. The measured performance shows that the proposed FBAR-VCO achieves a phase noise of -148dBc/Hz at 1MHz offset with a figure of merit (FoM) of -211.6dB.

  • User and Antenna Joint Selection in Multi-User Large-Scale MIMO Downlink Networks

    Moo-Woong JEONG  Tae-Won BAN  Bang Chul JUNG  

     
    PAPER-Network

      Publicized:
    2016/11/02
      Vol:
    E100-B No:4
      Page(s):
    529-535

    In this paper, we investigate a user and antenna joint selection problem in multi-user large-scale MIMO downlink networks, where a BS with N transmit antennas serves K users, and N is much larger than K. The BS activates only S(S≤N) antennas for data transmission to reduce hardware cost and computational complexity, and selects the set of users to which data is to be transmitted by maximizing the sum-rate. The optimal user and antenna joint selection scheme based on exhaustive search incurs considerable computational complexity. Thus, we propose a new joint selection algorithm with low complexity and analyze the performance of the proposed scheme in terms of sum-rate and complexity. When S=7, N=10, K=5, and SNR=10dB, the sum-rate of the proposed scheme is 5.1% lower than that of the optimal scheme, while the computational complexity of the proposed scheme is reduced by 99.0% compared to that of the optimal scheme.
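
    To illustrate why a greedy selection can approach exhaustive search at a fraction of the cost, here is a minimal Python sketch. The rate model (a per-user log term with a crude sharing penalty) and all names are hypothetical stand-ins, not the paper's MIMO sum-rate, which would require the full channel matrix:

```python
import math
from itertools import combinations

def sum_rate(user_set, gains):
    # Toy sum-rate: each scheduled user gets log2(1 + SNR/n), where n is the
    # number of co-scheduled users -- a crude stand-in for the MIMO sum-rate.
    n = len(user_set)
    return sum(math.log2(1 + gains[u] / n) for u in user_set)

def greedy_select(gains, k):
    # Greedy alternative to exhaustive search: repeatedly add the user that
    # most increases the toy sum-rate, up to k users.
    chosen = set()
    while len(chosen) < k:
        base = sum_rate(chosen, gains)
        best, best_gain = None, 0.0
        for u in gains:
            if u not in chosen:
                g = sum_rate(chosen | {u}, gains) - base
                if g > best_gain:
                    best, best_gain = u, g
        if best is None:
            break
        chosen.add(best)
    return chosen

def exhaustive_select(gains, k):
    # Optimal but exponential: try every subset of up to k users.
    return max((set(c) for r in range(1, k + 1)
                for c in combinations(gains, r)),
               key=lambda s: sum_rate(s, gains))

gains = {0: 10.0, 1: 9.0, 2: 0.1, 3: 8.0}   # hypothetical per-user SNRs
print(greedy_select(gains, 3), exhaustive_select(gains, 3))  # both pick {0, 1, 3}
```

    On this toy instance the greedy pass visits O(N·k) subsets instead of all binomial(N, ≤k) of them and still finds the optimum; the paper's measured 5.1% sum-rate gap shows the trade-off is not always free.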

  • Stochastic Dykstra Algorithms for Distance Metric Learning with Covariance Descriptors

    Tomoki MATSUZAWA  Eisuke ITO  Raissa RELATOR  Jun SESE  Tsuyoshi KATO  

     
    PAPER-Pattern Recognition

      Publicized:
    2017/01/13
      Vol:
    E100-D No:4
      Page(s):
    849-856

    In recent years, covariance descriptors have received considerable attention as a strong representation of a set of points. In this research, we propose a new metric learning algorithm for covariance descriptors based on the Dykstra algorithm, in which the current solution is projected onto a half-space at each iteration, and which runs in O(n³) time. We empirically demonstrate that randomizing the order of half-spaces in the proposed Dykstra-based algorithm significantly accelerates convergence to the optimal solution. Furthermore, we show that the proposed approach yields promising experimental results for pattern recognition tasks.
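
    The Dykstra iteration at the heart of the method, projecting onto half-spaces in a (possibly shuffled) order, can be sketched as follows. This is a toy on plain vectors, not the authors' covariance-descriptor formulation; the constraint data and names are hypothetical:

```python
import random

def proj_halfspace(x, a, b):
    # Euclidean projection of x onto the half-space {z : a.z <= b}.
    dot = sum(ai * xi for ai, xi in zip(a, x))
    if dot <= b:
        return list(x)
    scale = (dot - b) / sum(ai * ai for ai in a)
    return [xi - scale * ai for ai, xi in zip(a, x)]

def dykstra(x0, halfspaces, cycles=50, shuffle=True, seed=0):
    # Dykstra's algorithm: unlike plain cyclic projection, a correction term
    # is kept per constraint, so the limit is the *nearest* point of the
    # intersection.  shuffle=True randomizes the visiting order each cycle,
    # mirroring the randomization the abstract reports as accelerating
    # convergence.
    rng = random.Random(seed)
    x = list(x0)
    corrections = [[0.0] * len(x0) for _ in halfspaces]
    order = list(range(len(halfspaces)))
    for _ in range(cycles):
        if shuffle:
            rng.shuffle(order)
        for i in order:
            a, b = halfspaces[i]
            y = [xi + ci for xi, ci in zip(x, corrections[i])]
            x = proj_halfspace(y, a, b)
            corrections[i] = [yi - xi for yi, xi in zip(y, x)]
    return x

# Nearest point to (2, 2) in {x <= 1} ∩ {y <= 1}:
print(dykstra([2.0, 2.0], [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0)]))  # → [1.0, 1.0]
```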

  • Codebook Learning for Image Recognition Based on Parallel Key SIFT Analysis

    Feng YANG  Zheng MA  Mei XIE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2017/01/10
      Vol:
    E100-D No:4
      Page(s):
    927-930

    The quality of the codebook is very important in visual image classification. To boost classification performance, a codebook-generation scheme for scene image recognition based on parallel key SIFT analysis (PKSA) is presented in this paper. The method iteratively applies the classical k-means clustering algorithm and similarity analysis to evaluate key SIFT descriptors (KSDs) from the input images, and generates the codebook with a relaxed k-means algorithm over the set of KSDs. To evaluate the performance of the PKSA scheme, the image feature vector is calculated by sparse coding with spatial pyramid matching (ScSPM) after the codebook is constructed. The PKSA-based ScSPM method is tested and compared on three public scene image datasets. The experimental results show that the proposed PKSA scheme can significantly reduce computational time and enhance the categorization rate.
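
    The codebook pipeline rests on k-means clustering followed by nearest-codeword assignment; a minimal sketch (plain k-means on toy 2-D descriptors, not the paper's relaxed variant or actual SIFT features):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Plain k-means: the clustering step underlying codebook generation.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties
                centers[j] = [sum(d) / len(cl) for d in zip(*cl)]
    return centers

def encode(desc, centers):
    # Hard-assign a descriptor to its nearest codeword index.
    return min(range(len(centers)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(desc, centers[c])))

# Hypothetical 2-D "descriptors" forming two obvious clusters:
descs = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
centers = kmeans(descs, 2)
print(encode((0.05, 0.05), centers), encode((5.0, 5.0), centers))
```

    Descriptors from the two clusters land on different codewords, which is all a bag-of-words image representation needs from the codebook.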

  • XY-Separable Scale-Space Filtering by Polynomial Representations and Its Applications Open Access

    Gou KOUTAKI  Keiichi UCHIMURA  

     
    INVITED PAPER

      Publicized:
    2017/01/11
      Vol:
    E100-D No:4
      Page(s):
    645-654

    In this paper, we propose the application of principal component analysis (PCA) to scale-spaces. PCA is a standard method used in computer vision. Because the translation of an input image into scale-space is a continuous operation, it requires the extension of conventional finite matrix-based PCA to an infinite number of dimensions. Here, we use spectral theory to resolve this infinite eigenvalue problem through the use of integration, and we propose an approximate solution based on polynomial equations. In order to clarify its eigensolutions, we apply spectral decomposition to Gaussian scale-space and scale-normalized Laplacian of Gaussian (sLoG) space. As an application of this proposed method, we introduce a method for generating Gaussian blur images and sLoG images, demonstrating that the accuracy of such an image can be made very high by using an arbitrary scale calculated through simple linear combination. Furthermore, to make the scale-space filtering efficient, we approximate the basis filter set with a Gaussian-lobe approximation, obtaining XY-separable filters. As a more practical example, we propose a new Scale-Invariant Feature Transform (SIFT) detector.
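
    The XY-separability the paper exploits means a 2-D Gaussian filter factors into two 1-D passes; a small self-contained sketch (the replicate-border handling and all parameters are illustrative choices, not the paper's basis filters):

```python
import math

def gauss_kernel(sigma, radius):
    # Normalized 1-D Gaussian taps.
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_rows(img, k):
    # 1-D convolution along each row, replicating border pixels.
    r = len(k) // 2
    out = []
    for row in img:
        w = len(row)
        out.append([sum(k[j + r] * row[min(max(x + j, 0), w - 1)]
                        for j in range(-r, r + 1)) for x in range(w)])
    return out

def separable_gauss(img, sigma, radius):
    # 2-D Gaussian blur as two 1-D passes: rows, then (via transpose) columns.
    # This is what makes separable filtering O(r) instead of O(r^2) per pixel.
    k = gauss_kernel(sigma, radius)
    h = convolve_rows(img, k)
    t = [list(c) for c in zip(*h)]               # transpose
    return [list(c) for c in zip(*convolve_rows(t, k))]

img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0                                   # unit impulse
blurred = separable_gauss(img, 1.0, 2)
print(round(sum(sum(r) for r in blurred), 6))     # → 1.0 (mass preserved)
```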

  • A New Efficient Resource Management Framework for Iterative MapReduce Processing in Large-Scale Data Analysis

    Seungtae HONG  Kyongseok PARK  Chae-Deok LIM  Jae-Woo CHANG  

    This paper has been cancelled due to violation of duplicate submission policy on IEICE Transactions on Information and Systems on September 5, 2019.
     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    704-717
    • Errata [Uploaded on March 1, 2018]

    To analyze large-scale data efficiently, studies on Hadoop, one of the most popular MapReduce frameworks, have been actively conducted. Meanwhile, most large-scale data analysis applications, e.g., data clustering, need to execute the same map and reduce functions repeatedly. However, Hadoop cannot provide optimal performance for iterative MapReduce jobs because it derives a result from a single pass of the map and reduce functions. To solve these problems, in this paper, we propose a new efficient resource management framework for iterative MapReduce processing in large-scale data analysis. For this, we first design an iterative-job state machine for managing iterative MapReduce jobs. Secondly, we propose an invariant data caching mechanism for reducing the I/O costs of data accesses. Thirdly, we propose an iterative resource management technique for efficiently managing the resources of a Hadoop cluster. Fourthly, we devise a stop-condition check mechanism to prevent unnecessary computation. Finally, we show the performance superiority of the proposed framework by comparing it with existing frameworks.
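
    Two of the framework's ideas, invariant data caching and the stop-condition check, can be sketched with a toy iterative driver. This is a single-process illustration, not Hadoop code; the job shown (recomputing a mean until it stops changing) is a hypothetical example:

```python
def run_iterative_job(static_data, state, map_fn, reduce_fn, converged, max_iters=100):
    # Invariant data caching: the unchanging input is materialized once and
    # reused every iteration instead of being re-read from storage.
    cache = list(static_data)
    for it in range(max_iters):
        mapped = [map_fn(rec, state) for rec in cache]   # "map" phase
        new_state = reduce_fn(mapped)                    # "reduce" phase
        if converged(state, new_state):                  # stop-condition check
            return new_state, it + 1
        state = new_state
    return state, max_iters

# Hypothetical iterative job: recompute the mean until it stops changing.
state, iters = run_iterative_job(
    [1.0, 2.0, 3.0, 10.0], 0.0,
    map_fn=lambda x, s: x,
    reduce_fn=lambda xs: sum(xs) / len(xs),
    converged=lambda old, new: abs(old - new) < 1e-9,
)
print(state, iters)  # → 4.0 2
```

    Plain Hadoop would re-launch the job and re-read `static_data` on every iteration; caching it and checking convergence inside the loop is what saves the redundant I/O and computation.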

  • Interdisciplinary Collaborator Recommendation Based on Research Content Similarity

    Masataka ARAKI  Marie KATSURAI  Ikki OHMUKAI  Hideaki TAKEDA  

     
    PAPER

      Publicized:
    2016/10/13
      Vol:
    E100-D No:4
      Page(s):
    785-792

    Most existing methods on research collaborator recommendation focus on promoting collaboration within a specific discipline and exploit a network structure derived from co-authorship or co-citation information. To find collaboration opportunities outside researchers' own fields of expertise and beyond their social network, we present an interdisciplinary collaborator recommendation method based on research content similarity. In the proposed method, we calculate textual features that reflect a researcher's interests using a research grant database. To find the most relevant researchers who work in other fields, we compare constructing a pairwise similarity matrix in a feature space and exploiting existing social networks with content-based similarity. We present a case study at the Graduate University for Advanced Studies in Japan in which actual collaborations across departments are used as ground truth. The results indicate that our content-based approach can accurately predict interdisciplinary collaboration compared with the conventional collaboration network-based approaches.
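
    The content-based step reduces to comparing term-frequency vectors of researchers' grant texts; a minimal sketch (the tokenizer, the profiles, and the plain TF weighting are simplifications, not the authors' actual feature extraction):

```python
import math
from collections import Counter

def tf_vector(text):
    # Bag-of-words term frequencies -- a stand-in for the textual features
    # the authors extract from a research grant database.
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(c * v[t] for t, c in u.items() if t in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(query_text, researchers, top_n=2):
    # Rank candidate collaborators purely by content similarity,
    # independent of any co-authorship network.
    q = tf_vector(query_text)
    scored = [(name, cosine(q, tf_vector(desc)))
              for name, desc in researchers.items()]
    return sorted(scored, key=lambda s: -s[1])[:top_n]

# Hypothetical grant descriptions:
profiles = {
    "A": "machine learning for genome sequence analysis",
    "B": "medieval literature and manuscript studies",
    "C": "statistical learning theory for sequence models",
}
print(recommend("deep learning models for protein sequence data", profiles, top_n=1))
```

    Note that researcher C is recommended despite sharing no co-authors with the querying researcher; that is exactly the cross-network reach the content-based approach is after.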

  • Dynamic Scheduling of Workflow for Makespan and Robustness Improvement in the IaaS Cloud

    Haiou JIANG  Haihong E  Meina SONG  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2017/01/13
      Vol:
    E100-D No:4
      Page(s):
    813-821

    The Infrastructure-as-a-Service (IaaS) cloud is attracting applications due to its scalability, dynamic resource provisioning, and pay-as-you-go cost model. Scheduling scientific workflows in the IaaS cloud faces uncertainties such as resource performance variations and unknown failures. A schedule is said to be robust if it is able to absorb some degree of these uncertainties during workflow execution. In this paper, we propose a novel workflow scheduling algorithm called Dynamic Earliest-Finish-Time (DEFT) for the IaaS cloud that improves both makespan and robustness. DEFT is a dynamic scheduler consisting of a set of list-scheduling loops, each invoked when tasks complete successfully and release resources. In each loop, unscheduled tasks are ranked, and the best virtual machine (VM), the one with the minimum estimated earliest finish time, is selected for each task. A task is scheduled only when all its parents have completed and the selected best VM is ready. Intermediate data is sent from a finished task to each of its children and to the selected best VM before the child is scheduled. Experiments show that DEFT produces shorter makespans with greater robustness than existing typical list and dynamic scheduling algorithms in the IaaS cloud.
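
    The per-task VM choice inside each list-scheduling loop can be sketched as a minimum-estimated-EFT search. The VM model here (a free-at time plus a relative speed) is a hypothetical simplification of whatever estimates the algorithm actually uses:

```python
def earliest_finish_time(task_work, data_ready, vms):
    # Estimated EFT on each VM: the task starts once the VM is free and its
    # input data has arrived, then runs at the VM's relative speed.
    best_vm, best_eft = None, float("inf")
    for name, (free_at, speed) in vms.items():
        eft = max(free_at, data_ready) + task_work / speed
        if eft < best_eft:
            best_vm, best_eft = name, eft
    return best_vm, best_eft

# Hypothetical pool: name -> (time the VM becomes free, relative speed).
vms = {"vm1": (0.0, 1.0), "vm2": (5.0, 4.0)}
# A task with 20 units of work whose input data arrives at t = 2:
print(earliest_finish_time(20.0, 2.0, vms))  # → ('vm2', 10.0)
```

    The fast-but-busy vm2 still wins here; because the choice is re-evaluated dynamically each time tasks finish, such estimates self-correct when resource performance varies, which is where the robustness comes from.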

  • SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    Semih YUMUSAK  Erdogan DOGDU  Halife KODAZ  Andreas KAMILARIS  Pierre-Yves VANDENBUSSCHE  

     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    758-767

    Linked data endpoints are online query gateways to semantically annotated linked data sources. In order to query these data sources, the SPARQL query language is used as a standard. Although a linked data endpoint (i.e. SPARQL endpoint) is a basic Web service, it provides a platform for federated online querying and data linking methods. For linked data consumers, SPARQL endpoint availability and discovery are crucial for live querying and semantic information retrieval. Current studies show that availability of linked datasets is very low, while the locations of linked data endpoints change frequently. There are linked data repositories that collect and list the available linked data endpoints or resources. It is observed that around half of the endpoints listed in existing repositories are not accessible (temporarily or permanently offline). These endpoint URLs are shared through repository websites such as Datahub.io; however, they are weakly maintained and revised only by their publishers. In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a “search keyword” discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, the collected search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this method, most of the currently listed SPARQL endpoints in existing endpoint repositories, as well as a significant number of new SPARQL endpoints, have been discovered. We analyze our findings in comparison to the Datahub collection in detail.

  • Analyzing Temporal Dynamics of Consumer's Behavior Based on Hierarchical Time-Rescaling

    Hideaki KIM  Noriko TAKAYA  Hiroshi SAWADA  

     
    PAPER

      Publicized:
    2016/10/13
      Vol:
    E100-D No:4
      Page(s):
    693-703

    Improvements in information technology have made it easier for industry to communicate with their customers, raising hopes for a scheme that can estimate when customers will want to make purchases. Although a number of models have been developed to estimate the time-varying purchase probability, they are based on very restrictive assumptions such as preceding purchase-event dependence and discrete-time effect of covariates. Our preliminary analysis of real-world data finds that these assumptions are invalid: self-exciting behavior, as well as marketing stimulus and preceding purchase dependence, should be examined as possible factors influencing purchase probability. In this paper, by employing the novel idea of hierarchical time rescaling, we propose a tractable but highly flexible model that can meld various types of intrinsic history dependency and marketing stimuli in a continuous-time setting. By employing the proposed model, which incorporates the three factors, we analyze actual data, and show that our model has the ability to precisely track the temporal dynamics of purchase probability at the level of individuals. It enables us to take effective marketing actions such as advertising and recommendations on timely and individual bases, leading to the construction of a profitable relationship with each customer.
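
    The classical building block behind the hierarchical scheme is the time-rescaling theorem: integrating the intensity up to each event maps the process to unit rate. A sketch of that single rescaling step (the paper's hierarchical composition of history effects is not reproduced here):

```python
def rescaled_intervals(event_times, cumulative_intensity):
    # Classical time-rescaling: mapping each event time t_i to
    # Lambda(t_i) = integral of the intensity up to t_i turns the point
    # process into a unit-rate Poisson process, so the rescaled gaps are
    # Exp(1)-distributed when the intensity model fits the purchase record.
    taus = [cumulative_intensity(t) for t in event_times]
    return [b - a for a, b in zip([0.0] + taus[:-1], taus)]

# Toy check: purchases at a constant rate of 2 events per unit time have
# Lambda(t) = 2t, so rescaling stretches the 0.5-length gaps to exactly 1:
print(rescaled_intervals([0.5, 1.0, 1.5], lambda t: 2.0 * t))  # → [1.0, 1.0, 1.0]
```

    In practice the test runs the other way: fit an intensity model, rescale the observed purchase times, and check the gaps against Exp(1) to validate the model.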

  • Phoneme Set Design Based on Integrated Acoustic and Linguistic Features for Second Language Speech Recognition

    Xiaoyun WANG  Tsuneo KATO  Seiichi YAMAMOTO  

     
    PAPER-Speech and Hearing

      Publicized:
    2016/12/29
      Vol:
    E100-D No:4
      Page(s):
    857-864

    Recognition of second language (L2) speech is a challenging task even for state-of-the-art automatic speech recognition (ASR) systems, partly because pronunciation by L2 speakers is usually significantly influenced by the mother tongue of the speakers. Considering that the expressions of non-native speakers are usually simpler than those of native ones, and that second language speech usually includes mispronunciation and less fluent pronunciation, we propose a novel method that maximizes a unified acoustic and linguistic objective function to derive a phoneme set for second language speech recognition. The authors verify the efficacy of the proposed method using second language speech collected with a translation-game-type dialogue-based computer-assisted language learning (CALL) system. In this paper, the authors examine the performance based on acoustic likelihood, linguistic discrimination ability, and the integrated objective function for second language speech. Experiments demonstrate the validity of the phoneme set derived by the proposed method.

  • A Saturating-Integrator-Based Behavioral Model of Ring Oscillator Facilitating PLL Design

    Zule XU  Takayuki KAWAHARA  

     
    BRIEF PAPER

      Vol:
    E100-C No:4
      Page(s):
    370-372

    We propose a Simulink model of a ring oscillator using saturating integrators. The oscillator's period is tuned via the saturation time of the integrators. Thus, timing jitters due to white and flicker noises are easily introduced into the model, enabling an efficient phase noise evaluation before transistor-level circuit design.

  • A Novel Label Aggregation with Attenuated Scores for Ground-Truth Identification of Dataset Annotation with Crowdsourcing

    Ratchainant THAMMASUDJARIT  Anon PLANGPRASOPCHOK  Charnyote PLUEMPITIWIRIYAWEJ  

     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    750-757

    Ground-truth identification - the process, which infers the most probable labels, for a certain dataset, from crowdsourcing annotations - is a crucial task to make the dataset usable, e.g., for a supervised learning problem. Nevertheless, the process is challenging because annotations from multiple annotators are inconsistent and noisy. Existing methods require a set of data samples with corresponding ground-truth labels to precisely estimate annotator performance, but such samples are difficult to obtain in practice. Moreover, the process requires a post-editing step to validate indefinite labels, which are generally unidentifiable without thoroughly inspecting the whole annotated data. To address these challenges, this paper introduces: 1) the Attenuated score (A-score) - an indicator that locally measures annotator performance for segments of annotation sequences, and 2) a label aggregation method that applies the A-score for ground-truth identification. The experimental results demonstrate that A-score label aggregation outperforms majority vote in all datasets by accurately recovering more labels. It also achieves higher F1 scores than those of the strong baselines in all multi-class data. Additionally, the results suggest that the A-score is a promising indicator that helps identify indefinite labels for the post-editing procedure.
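
    The idea of scoring annotators locally rather than globally can be sketched with a windowed agreement score used as a vote weight. This is a simplified stand-in, not the paper's A-score formula; the window size and the toy annotations are arbitrary:

```python
from collections import Counter, defaultdict

def local_scores(annotations, window=2):
    # Windowed stand-in for the A-score: rate each annotator at each item by
    # their agreement with the plain majority vote over neighbouring items,
    # so performance is measured locally per segment rather than globally.
    n = len(annotations[0])
    majority = [Counter(a[i] for a in annotations).most_common(1)[0][0]
                for i in range(n)]
    scores = []
    for a in annotations:
        row = []
        for i in range(n):
            lo, hi = max(0, i - window), min(n, i + window + 1)
            agree = sum(a[j] == majority[j] for j in range(lo, hi))
            row.append(agree / (hi - lo))
        scores.append(row)
    return scores

def aggregate(annotations, scores):
    # Weighted vote: each annotator's label counts with their local score.
    labels = []
    for i in range(len(annotations[0])):
        tally = defaultdict(float)
        for a, s in zip(annotations, scores):
            tally[a[i]] += s[i]
        labels.append(max(tally, key=tally.get))
    return labels

# Three annotators over six items; the third drifts in the second half, so
# their votes there carry less weight than a global accuracy would give them.
ann = [list("AABBAB"), list("AABBAB"), list("AABABA")]
print(aggregate(ann, local_scores(ann)))  # → ['A', 'A', 'B', 'B', 'A', 'B']
```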

  • Feature Adaptive Correlation Tracking

    Yulong XU  Yang LI  Jiabao WANG  Zhuang MIAO  Hang LI  Yafei ZHANG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2016/11/28
      Vol:
    E100-D No:3
      Page(s):
    594-597

    The feature extractor plays an important role in visual tracking, but most state-of-the-art methods employ the same feature representation in all scenes. Taking this diversity into account, a tracker should choose different features according to the video. In this work, we propose a novel feature-adaptive correlation tracker, which decomposes the tracking task into translation and scale estimation. According to the luminance of the target, our approach automatically selects either hierarchical convolutional features or histogram-of-oriented-gradients features for translation estimation in varied scenarios. Furthermore, we employ a discriminative correlation filter to handle scale variations. Extensive experiments are performed on a challenging large-scale benchmark dataset, and the results show that the proposed algorithm outperforms state-of-the-art trackers in accuracy and robustness.

  • Time-to-Contact in Scattering Media

    Laksmita RAHADIANTI  Wooseong JEONG  Fumihiko SAKAUE  Jun SATO  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2016/12/06
      Vol:
    E100-D No:3
      Page(s):
    564-573

    In this paper we propose a method for estimating time-to-contact in scattering media. Images taken in scattering media are often unclear and blurry, making it difficult to detect appropriate geometric information from these images for computing the 3-dimensional properties of the scene. Therefore, instead of searching for geometric information, we attempt to use photometric information, namely the observed image intensity. The method proposed in this paper is able to utilize the effect of the scattering medium on the resulting image and estimate the time-to-contact toward objects without any prior knowledge of the scene, the cameras, or the scattering medium. The method is then evaluated using simulated and real images.

  • On Scheduling Delay-Sensitive SVC Multicast over Wireless Networks with Network Coding

    Shujuan WANG  Chunting YAN  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2016/09/12
      Vol:
    E100-B No:3
      Page(s):
    407-416

    In this work, we study efficient scheduling with network coding in a scalable video coding (SVC) multicast system. Transmission consists of two stages: the original SVC packets are multicast by the server in the first stage, and the lost packets are retransmitted in the second. Under a deadline constraint, a consumer is satisfied only when the requested packets are received before expiration. Further, the hierarchical encoding architecture of SVC introduces extra decoding delay, which poses a challenge for providing acceptable reconstructed video quality. To solve these problems, instantly decodable network coding is applied to reduce the decoding delay, and a novel packet weighting policy is designed to better describe the contribution a packet can make to upgrading the recovered video quality. Finally, an online packet scheduling algorithm based on the maximal weighted clique is proposed to improve delay, deadline miss ratio, and user experience. Multiple characteristics of SVC packets, such as packet utility, slack time, and the number of undelivered/wanted packets, are jointly considered. Simulation results show that the proposed algorithm requires fewer retransmissions and achieves a lower deadline miss ratio. Moreover, it achieves good recovered video quality and provides high user satisfaction.
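
    Instant decodability induces a compatibility graph over (receiver, wanted-packet) pairs, and packet selection becomes a weighted-clique search; a greedy sketch (the Have-sets and the urgency weights are hypothetical, and the greedy pass stands in for the paper's maximal-weighted-clique selection):

```python
def can_combine(v1, v2, has):
    # Two (receiver, wanted-packet) pairs fit in one coded transmission iff
    # they want the same packet, or each receiver already holds the packet
    # the other wants -- so each can XOR the combination open instantly.
    (r1, p1), (r2, p2) = v1, v2
    return p1 == p2 or (p2 in has[r1] and p1 in has[r2])

def greedy_weighted_clique(wants, weights, has):
    # Greedy clique construction: repeatedly admit the heaviest vertex
    # compatible with everything chosen so far.
    clique = []
    for v in sorted(wants, key=lambda v: -weights[v]):
        if all(can_combine(v, u, has) for u in clique):
            clique.append(v)
    return clique

# Hypothetical Have-sets over packets 1..3 after the first multicast stage:
has = {"r1": {2, 3}, "r2": {1, 3}, "r3": {1, 2}}
wants = [("r1", 1), ("r2", 2), ("r3", 3)]
w = {("r1", 1): 3.0, ("r2", 2): 2.0, ("r3", 3): 1.0}   # e.g. deadline urgency
print(greedy_weighted_clique(wants, w, has))  # XOR(1,2,3) serves all three
```

    Because pairwise compatibility in this graph guarantees each receiver in the clique is missing exactly one packet of the combination, a single XOR retransmission serves every clique member at once, which is how the retransmission count drops.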

  • Naturalization of Screen Content Images for Enhanced Quality Evaluation

    Xingge GUO  Liping HUANG  Ke GU  Leida LI  Zhili ZHOU  Lu TANG  

     
    LETTER-Information Network

      Publicized:
    2016/11/24
      Vol:
    E100-D No:3
      Page(s):
    574-577

    The quality assessment of screen content images (SCIs) has attracted much attention recently. Different from natural images, an SCI is usually a mixture of pictures and text. Traditional quality metrics are mainly designed for natural images and do not fit SCIs well. Motivated by this, this letter presents a simple and effective method to naturalize SCIs so that traditional quality models can be applied to SCI quality prediction. Specifically, bicubic-interpolation-based up-sampling is proposed to achieve this goal. Extensive experiments and comparisons demonstrate the effectiveness of the proposed method.

  • Recent Progress and Application of Superconducting Nanowire Single-Photon Detectors Open Access

    Taro YAMASHITA  Shigehito MIKI  Hirotaka TERAI  

     
    INVITED PAPER

      Vol:
    E100-C No:3
      Page(s):
    274-282

    In this review, we present recent advances relating to superconducting nanowire single-photon detectors (SSPDs or SNSPDs) and their broad range of applications. During a period exceeding ten years, the system performance of SSPDs has been drastically improved, and lately excellent detection efficiencies have been realized in practical systems for a wide range of target photon wavelengths. Owing to their advantages such as high system detection efficiency, low dark count rate, and excellent timing jitter, SSPDs have found application in various research fields such as quantum information, quantum optics, optical communication, and also in the life sciences. We summarize the photon detection principle and the current performance status of practical SSPD systems. In addition, we introduce application examples in which SSPDs have been applied.

  • A Wideband Noise-Cancelling Receiver Front-End Using a Linearized Transconductor

    Duksoo KIM  Byungjoon KIM  Sangwook NAM  

     
    BRIEF PAPER-Microwaves, Millimeter-Waves

      Vol:
    E100-C No:3
      Page(s):
    340-343

    A wideband noise-cancelling receiver front-end is proposed in this brief. As a basic architecture, a low-noise transconductance amplifier, a passive mixer, and a transimpedance amplifier are employed to compose the wideband receiver. To achieve wideband input matching for the transconductor, a global feedback method is adopted. Since the wideband receiver has to minimize linearity degradation if a large blocker signal exists out-of-band, a linearization technique is applied for the transconductor circuit. The linearization cancels third-order intermodulation distortion components and increases linearity; however, the additional circuits used in linearization generate excessive noise. A noise-cancelling architecture that employs an auxiliary path cancels noise signals generated in the main path. The designed receiver front-end is fabricated using a 65-nm CMOS process. The receiver operates in the frequency range of 25 MHz-2 GHz with a gain of 49.7 dB. The in-band input-referred third-order intercept point is improved by 12.3 dB when the linearization is activated, demonstrating the effectiveness of the linearization technique.

  • Decision Feedback Equalizer with Frequency Domain Bidirectional Noise Prediction for MIMO-SCFDE System

    Zedong XIE  Xihong CHEN  Xiaopeng LIU  Lunsheng XUE  Yu ZHAO  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2016/09/12
      Vol:
    E100-B No:3
      Page(s):
    433-439

    The impact of intersymbol interference (ISI) on single-carrier frequency-domain equalization with multiple-input multiple-output (MIMO-SCFDE) systems is severe, and most existing channel equalization methods fail to resolve it completely. In this paper, given the disadvantages of error propagation and the gap from the matched filter bound (MFB), we introduce a decision feedback equalizer with frequency-domain bidirectional noise prediction (DFE-FDBiNP) to tackle ISI in MIMO-SCFDE systems. The equalizer consists of two parts: normal-mode and time-reversal-mode decision feedback equalization with noise prediction (DFE-NP). Equal-gain combining is used to realize greatly simplified, low-complexity diversity combining. Analysis and simulation results validate the improved performance of the proposed method in a quasi-static frequency-selective fading MIMO channel for a typical urban environment.
