Keyword Search Results

[Keyword] (42807 hits)

6001-6020 hits (42807 hits)

  • An Algorithm to Evaluate Appropriateness of Still Images for Learning Concrete Nouns of a New Foreign Language

    Mohammad Nehal HASNINE  Masatoshi ISHIKAWA  Yuki HIRAI  Haruko MIYAKODA  Keiichi KANEKO  

     
    PAPER-Educational Technology

    Publicized: 2017/06/21
    Vol: E100-D No:9
    Page(s): 2156-2164

    Vocabulary acquisition based on the traditional pen-and-paper approach is outdated and has been superseded by multimedia-supported approaches. In a multimedia-supported foreign language learning environment, a learning material comprising a still image, a text, and the corresponding sound data is considered the most effective way to memorize a noun. However, extracting an appropriate still image for a noun has always been a challenging and time-consuming process for learners. Learners' burden would be reduced if a system could extract an appropriate image to represent a noun. Therefore, the present study aimed to extract an appropriate image for each noun in order to assist foreign language learners in acquiring foreign vocabulary. This study presumed that a learning material created with an appropriate image would be more effective for memory recall than one created with an inappropriate image. As a first step toward finding appropriate images for nouns, concrete nouns were chosen as the subject of investigation. This study therefore first proposed a definition of an appropriate image for a concrete noun. An image re-ranking algorithm was then designed and implemented that extracts an appropriate image from a finite set of candidate images for each concrete noun. Finally, the immediate, short-term, and long-term learning effects of those images on learners' memory retention were examined by conducting immediate, delayed, and extended-delayed posttests. The experimental results revealed that participants in the experimental group significantly outperformed the control group in long-term memory retention, while no significant differences were observed in immediate and short-term retention. This result indicates that our algorithm can extract images that have a higher learning effect. Furthermore, this paper briefly discusses an on-demand learning system that has been developed to assist foreign language learners in creating vocabulary learning materials.
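
    As a rough illustration of the re-ranking step, the sketch below scores each candidate image for a noun by its closeness to the centroid of the candidates' visual features. The centroid criterion and the feature input are assumptions made here for illustration; the paper defines appropriateness specifically for concrete-noun learning.

    ```python
    import numpy as np

    def rerank_by_centroid(features):
        """Rank candidate images by cosine similarity to the centroid of their
        visual features (a hypothetical appropriateness proxy, not the paper's
        learner-oriented criterion)."""
        F = np.asarray(features, dtype=float)
        F = F / np.linalg.norm(F, axis=1, keepdims=True)  # unit-normalize rows
        scores = F @ F.mean(axis=0)                       # similarity to centroid
        return np.argsort(-scores)                        # best candidates first
    ```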

  • Management of Information, Communications, and Networking: from the Past to the Future Open Access

    Shingo ATA  Toshio TONOUCHI  

     
    INVITED PAPER-Network Management/Operation

    Publicized: 2017/03/22
    Vol: E100-B No:9
    Page(s): 1614-1622

    As ICT (Information and Communication Technology) systems grow large-scale and complicated, the technologies for operating and managing ICT infrastructure and systems are changing to accommodate the variety and diversity of usage and communication characteristics. In retrospect, operations and management technologies range widely, from support of telecommunications operations and remote monitoring for maintaining network equipment to integrated network management frameworks for configuration, monitoring, testing, and control. Recently, the spread of network virtualization technologies has brought agility, integrity, and resilience to ICT services. Operations and management technologies will become even more important in the future for the integrated management of ICT infrastructure, including computing resources, and for the automation of service operations. In this paper, we review the research activities of the Technical Committee on Information and Communications Management (ICM) with a discussion of each research category. We then describe state-of-the-art topics and future directions in the area of ICM.

  • Long Period Sequences Generated by the Logistic Map over Finite Fields with Control Parameter Four

    Kazuyoshi TSUCHIYA  Yasuyuki NOGAMI  

     
    PAPER

    Vol: E100-A No:9
    Page(s): 1816-1824

    Pseudorandom number generators have been widely used in Monte Carlo methods, communication systems, cryptography, and so on. For cryptographic applications, pseudorandom number generators are required to generate sequences that have good statistical properties, a long period, and unpredictability. A Dickson generator is a nonlinear congruential generator whose recurrence function is a Dickson polynomial. Aly and Winterhof obtained a lower bound on the linear complexity profile of a Dickson generator. Moreover, Vasiga and Shallit studied the state diagram given by the Dickson polynomial of degree two. However, they did not specify sets of initial values that generate a long-period sequence. In this paper, we show conditions on the parameters and initial values that generate long-period sequences, and we investigate asymptotic properties of the periods by numerical experiments. We specify sets of initial values that generate a long-period sequence. For suitable parameters, every element of such a set occurs exactly once as a component of the generated sequence in one period. In order to obtain these sets of initial values, we consider a logistic generator proposed by Miyazaki, Araki, Uehara and Nogami, which is obtained from a Dickson generator of degree two by a linear transformation. Moreover, we remark on the linear complexity profile of the logistic generator. The sets of initial values are described by values of the Legendre symbol. The main idea is to introduce the structure of a hyperbola on the sets of initial values. Our results ensure that the sequences generated by a Dickson generator of degree two have a long period. As a consequence, the Dickson generator of degree two has good properties for cryptographic applications.
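
    For intuition, a minimal sketch of the sequence in the title: the logistic map x_{n+1} = 4x_n(1-x_n) iterated over the finite field F_p, with the period measured empirically by first revisit. The paper's actual contribution, characterizing via the Legendre symbol which parameters and initial values give long periods, is not reproduced here.

    ```python
    def logistic_over_fp(x0, p):
        """Iterate x -> 4*x*(1 - x) mod p from x0 and return (preperiod, period),
        detecting the cycle by the first revisited state."""
        seen, seq, x = {}, [], x0 % p
        while x not in seen:
            seen[x] = len(seq)
            seq.append(x)
            x = (4 * x * (1 - x)) % p
        return seen[x], len(seq) - seen[x]
    ```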

  • Synthesis and Enumeration of Generalized Shift Registers for Strongly Secure SR-Equivalents

    Hideo FUJIWARA  Katsuya FUJIWARA  

     
    LETTER-Dependable Computing

    Publicized: 2017/05/26
    Vol: E100-D No:9
    Page(s): 2232-2236

    In our previous work, we introduced new concepts of secure scan design: shift register equivalent circuits (SR-equivalents, for short) and strongly secure circuits; we also introduced generalized shift registers (GSRs, for short) to apply them to secure scan design. In this paper, we combine the concepts of SR-equivalents and strongly secure circuits, apply them to GSRs, and consider the problem of synthesizing strongly secure SR-equivalents using GSRs. We also consider the problem of enumerating GSRs that are strongly secure and SR-equivalent, i.e., the cardinality of the class of strongly secure SR-equivalent GSRs, to clarify the security level of the secure scan architecture.

  • FOREWORD Open Access

Toshiaki FUJII  

     
    FOREWORD

    Vol: E100-D No:9
    Page(s): 1943-1943

  • Quantification of Human Stress Using Commercially Available Single Channel EEG Headset

    Sanay MUHAMMAD UMAR SAEED  Syed MUHAMMAD ANWAR  Muhammad MAJID  

     
    LETTER-Human-computer Interaction

    Publicized: 2017/06/02
    Vol: E100-D No:9
    Page(s): 2241-2244

    A study on the quantification of human stress using low beta waves of electroencephalography (EEG) is presented. For the first time, the importance of low beta waves as a feature for quantifying human stress is highlighted. In this study, twenty-eight participants filled in the Perceived Stress Scale (PSS) questionnaire and had their EEG recorded in an eyes-closed condition using a commercially available single-channel EEG headset placed at the frontal site. Regression analysis of the beta waves extracted from the recorded EEG showed that low beta waves can predict PSS scores with a confidence level of 94%. Consequently, when the low beta wave is used as a feature with the naive Bayes algorithm for classifying stress level, it not only reduces the computational cost sevenfold but also improves the accuracy to 71.4%.
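
    A minimal sketch of extracting a low-beta feature from a single-channel trace with an FFT periodogram is shown below. The 12-16 Hz band edges and the relative-power normalization are assumptions made here, since conventions for the low-beta band vary; the paper's preprocessing may differ.

    ```python
    import numpy as np

    def low_beta_power(eeg, fs, band=(12.0, 16.0)):
        """Relative low-beta band power of a single-channel EEG trace."""
        f = np.fft.rfftfreq(len(eeg), 1.0 / fs)
        psd = np.abs(np.fft.rfft(eeg - np.mean(eeg))) ** 2  # periodogram
        in_band = (f >= band[0]) & (f <= band[1])
        return psd[in_band].sum() / psd[1:].sum()           # exclude the DC bin
    ```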

  • A Finite Automaton-Based String Matching Engine on Graphic Processing Unit

    JinMyung YOON  Kang-Il CHOI  HyunJin KIM  

     
    LETTER-VLSI Design Technology and CAD

    Vol: E100-A No:9
    Page(s): 2031-2033

    A non-deterministic finite automaton (NFA)-based parallel string matching scheme is proposed. To parallelize the operations of the NFAs, a graphics processing unit (GPU) is adopted. Considering the resource occupancy of threads and the size of the shared memory, optimized resource allocation is performed in the proposed string matching scheme. As a result, performance is enhanced significantly in all evaluations.
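
    For background, bit-parallel Shift-And is a standard way to simulate a pattern-matching NFA, one bit per state; a single-pattern CPU sketch follows. How the paper maps many such automata onto GPU threads and shared memory is not reproduced here.

    ```python
    def shift_and(text, pattern):
        """Report start positions of pattern in text by simulating the matching
        NFA with one bit per state (classic Shift-And)."""
        m = len(pattern)
        masks = {}
        for i, ch in enumerate(pattern):
            masks[ch] = masks.get(ch, 0) | (1 << i)
        state, accept, hits = 0, 1 << (m - 1), []
        for j, ch in enumerate(text):
            state = ((state << 1) | 1) & masks.get(ch, 0)  # advance all states
            if state & accept:
                hits.append(j - m + 1)
        return hits
    ```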

  • Pixel-Wise Interframe Prediction based on Dense Three-Dimensional Motion Estimation for Depth Map Coding

    Shota KASAI  Yusuke KAMEDA  Tomokazu ISHIKAWA  Ichiro MATSUDA  Susumu ITOH  

     
    LETTER

    Publicized: 2017/06/14
    Vol: E100-D No:9
    Page(s): 2039-2043

    We propose a method of interframe prediction in depth map coding that uses pixel-wise 3D motion estimated from previously encoded textures and depth maps. Using the 3D motion, an approximation of the depth map frame to be encoded is generated and used as a reference frame for block-wise motion compensation.

  • NerveNet Architecture and Its Pilot Test in Shirahama for Resilient Social Infrastructure Open Access

    Masugi INOUE  Yasunori OWADA  

     
    INVITED PAPER-Network

    Publicized: 2017/03/22
    Vol: E100-B No:9
    Page(s): 1526-1537

    From the past experience of the large-scale breakdown of existing networks caused by the Great East Japan Earthquake and tsunamis, and from previous research on stabilizing ad hoc networks that lack control mechanisms, we have strengthened the resilience of NerveNet. NerveNet was originally designed and developed as an access network for providing context-aware services with the use of sensors and actuators. At present, it therefore also enables resilient information sharing and communications in a region even if access to the Internet is impossible in emergency situations. NerveNet is composed of single or multiple base stations interconnected by a variety of Ethernet-based wired or wireless transmission systems. A network is formed in a line, star, tree, or mesh topology. Network and data management works in each base station in a distributed manner, which gives the system its resilience. In collaboration with the town of Shirahama in Wakayama prefecture in Japan, we have been conducting a pilot test with the NerveNet testbed. The test includes nine base stations interconnected by 5.6-GHz Wi-Fi and fixed wireless access (FWA), providing tourists and residents with Internet access. In the future, we expect that not only NerveNet but also other novel technologies will contribute to solving social problems and enriching people's lives.

  • A Low Capture Power Test Generation Method Based on Capture Safe Test Vector Manipulation

    Toshinori HOSOKAWA  Atsushi HIRAI  Yukari YAMAUCHI  Masayuki ARAI  

     
    PAPER-Dependable Computing

    Publicized: 2017/06/06
    Vol: E100-D No:9
    Page(s): 2118-2125

    In at-speed scan testing, capture power is a serious problem because the high power dissipation that can occur when the response to a test vector is captured by flip-flops results in excessive voltage drops, known as IR-drops, which may cause significant capture-induced yield loss. In low capture power test generation, test vectors that violate capture power constraints in an initial test set are defined as capture-unsafe test vectors, while faults that are detected solely by capture-unsafe test vectors are defined as unsafe faults. The test vectors used to detect unsafe faults must be regenerated to prevent unnecessary yield loss. In this paper, we propose a new low capture power test generation method based on fault simulation that uses the capture-safe test vectors in an initial test set. Experimental results show that this method reduces the number of unsafe faults by 94% while requiring just 18% more test vectors on average, and requires less test generation time than the conventional low capture power test generation method.
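
    As a toy illustration of the capture-safety notion, the sketch below classifies vectors by how many flip-flops toggle at capture, using the Hamming distance between a vector and its captured response as a stand-in for switching activity. The real constraint models IR-drop, so this proxy is an assumption made here for illustration.

    ```python
    def split_capture_safe(vectors, responses, limit):
        """Partition test vectors into capture-safe and capture-unsafe by the
        number of flip-flops that toggle when the response is captured."""
        safe, unsafe = [], []
        for v, r in zip(vectors, responses):
            toggles = sum(a != b for a, b in zip(v, r))
            (safe if toggles <= limit else unsafe).append(v)
        return safe, unsafe
    ```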

  • A ROM Driving Circuit for RFID Tags Based on a-IGZO TFTs

    Shaolong LIN  Ruohe YAO  Fei LUO  

     
    BRIEF PAPER-Electronic Circuits

    Vol: E100-C No:9
    Page(s): 746-748

    This paper proposes a read-only memory (ROM) driving circuit for RFID tags based on a-IGZO thin-film transistors. The circuit consists of a Johnson counter and monotype complementary gates. By utilizing complementary signals to drive a decoder based on monotype complementary gates, the propagation delay and the redundant current can be reduced. The Johnson counter reduces the number of registers. The new circuit effectively avoids glitch generation and reduces circuit power consumption and delay.

  • Sheared EPI Analysis for Disparity Estimation from Light Fields

    Takahiro SUZUKI  Keita TAKAHASHI  Toshiaki FUJII  

     
    PAPER

    Publicized: 2017/06/14
    Vol: E100-D No:9
    Page(s): 1984-1993

    Structure tensor analysis on epipolar plane images (EPIs) is a successful approach for estimating disparity from a light field, i.e., a dense set of multi-view images. However, the disparity range allowable for the light field is limited because the estimation becomes less accurate as the range of disparities becomes larger. To overcome this limitation, we developed a new method called sheared EPI analysis, in which EPIs are sheared before the structure tensor analysis. The results of analyses with different shear values are integrated into a final disparity map through a smoothing process, which is the key idea of our method. In this paper, we closely investigate the performance of sheared EPI analysis and demonstrate the effectiveness of the smoothing process by extensively evaluating the proposed method on 15 datasets with large disparity ranges.
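
    A minimal sketch of the shearing step: shifting each view of a views-by-pixels EPI by a disparity-proportional amount makes lines of that slope vertical, so the intensity variance along the view axis (used below as a cheap stand-in for the structure-tensor coherence the paper analyzes) dips where the shear matches the local disparity.

    ```python
    import numpy as np

    def shear_epi(epi, d):
        """Shear an EPI (rows = views, cols = pixels) so lines of slope d
        become vertical (integer shifts only, for simplicity)."""
        S, _ = epi.shape
        out = np.empty_like(epi)
        for s in range(S):
            out[s] = np.roll(epi[s], -int(round(d * (s - S // 2))))
        return out

    def shear_score(epi, d):
        # Low variance along the view axis suggests d matches the disparity.
        return np.var(shear_epi(epi, d), axis=0)
    ```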

  • Collaborative Quality Framework: QoE-Centric Service Operation in Collaboration with Users, Service Providers, and Network Operators Open Access

    Akira TAKAHASHI  Takanori HAYASHI  

     
    INVITED PAPER-Network

    Publicized: 2017/03/22
    Vol: E100-B No:9
    Page(s): 1554-1563

    We propose a framework called “QoE-centric Service Operation,” which enables end-users, service providers, and network providers to collaborate to achieve better QoE for telecommunication services. First, we give an overview of the transition in the quality factors of voice, video, and web-browsing applications. Then, taking into account the fact that many quality factors exist not only in networks but also in servers and terminals, we discuss how to measure, assess, analyze, and control QoE, along with the technical requirements of each component. We also propose approaches to meet these requirements: packet- and KPI-based QoE estimation, compensation for sparse measurements, and quality prediction based on human behavior and traffic estimation. Finally, we explain the results of our proof-of-concept study using an actual video delivery service in Japan.

  • Signatures from Trapdoor Commitments with Strong Openings

    Goichiro HANAOKA  Jacob C. N. SCHULDT  

     
    PAPER

    Vol: E100-A No:9
    Page(s): 1924-1931

    In this paper, we propose a new generic construction of signatures from trapdoor commitments with strong openings in the random oracle model. Our construction is very efficient in the sense that signatures consist of just a single decommitment of the underlying commitment scheme, and verification corresponds to verifying this decommitment against a commitment derived via a hash function. Furthermore, assuming the commitment scheme provides sufficiently strong statistical hiding and trapdoor opening properties, the reduction of the security of the signature scheme to the binding property of the commitment scheme is tight. To instantiate our construction, we propose two new commitment schemes with strong openings. Both of these are statistically hiding, and have binding properties based on a Diffie-Hellman inversion problem and factoring, respectively. The signature schemes obtained from these are very efficient; the first matches the performance of BLS signatures, which currently provide the shortest signatures, and the second provides signatures of similar length to the shortest version of Rabin-Williams signatures while still being tightly related to factoring.

  • A Smart City Based on Ambient Intelligence Open Access

    Tomoaki OHTSUKI  

     
    INVITED PAPER-Network

    Publicized: 2017/03/22
    Vol: E100-B No:9
    Page(s): 1547-1553

    The United Nations (UN) reports that the global population reached 7 billion in 2011 and today stands at about 7.3 billion. This dramatic increase has been driven largely by the extension of people's lifetimes. The urban population has also been increasing, which causes many issues for cities, such as congestion and increased demand for resources, including energy, water, sanitation, education, and healthcare services. Smart cities are widely expected to solve these issues. The concept of a smart city is not new. Owing to the progress of information and communication technology (ICT), including the Internet of Things (IoT) and big data (BD), the concept of a smart city is being realized in various respects. This paper introduces the concept and definition of a smart city. It then explains the ambient intelligence that supports a smart city. Moreover, it introduces several key components of a smart city.

  • Image Restoration with Multiple Hard Constraints on Data-Fidelity to Blurred/Noisy Image Pair

    Saori TAKEYAMA  Shunsuke ONO  Itsuo KUMAZAWA  

     
    PAPER

    Publicized: 2017/06/14
    Vol: E100-D No:9
    Page(s): 1953-1961

    Existing image deblurring methods that use a blurred/noisy image pair take a two-step approach: blur kernel estimation and image restoration. They achieve better and much more stable blur kernel estimation than single-image deblurring methods. On the other hand, in the image restoration step, they either do not exploit the information in the noisy image or require ad hoc tuning of interdependent parameters. This paper focuses on the image restoration step and proposes a new restoration method using a blurred/noisy image pair. In our method, the image restoration problem is formulated as a constrained convex optimization problem, where data-fidelity to the blurred image and to the noisy image are properly taken into account as multiple hard constraints. This offers (i) high-quality restoration when the blurred image also contains noise; (ii) robustness to estimation error in the blur kernel; and (iii) easy parameter setting. We also provide an efficient algorithm for solving our optimization problem based on the alternating direction method of multipliers (ADMM). Experimental results support our claims.
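
    To make the multiple-hard-constraint formulation concrete, here is a 1-D ADMM sketch under simplifying assumptions (circular blur, a quadratic smoothness objective in place of the paper's regularizer): both data-fidelity terms enter as norm-ball constraints handled purely by projections, so no fidelity weights need tuning.

    ```python
    import numpy as np

    def proj_ball(v, c, eps):
        """Euclidean projection onto the ball ||z - c||_2 <= eps."""
        d = v - c
        n = np.linalg.norm(d)
        return v if n <= eps else c + d * (eps / n)

    def admm_restore(b_blur, b_noisy, h, eps1, eps2, rho=1.0, iters=300):
        """Minimize ||Dx||^2 s.t. ||h*x - b_blur|| <= eps1, ||x - b_noisy|| <= eps2."""
        n = len(b_blur)
        H = np.fft.fft(h, n)                            # blur filter, Fourier domain
        D = 1 - np.exp(-2j * np.pi * np.arange(n) / n)  # first-difference filter
        z1, z2 = b_blur.copy(), b_noisy.copy()
        u1, u2 = np.zeros(n), np.zeros(n)
        for _ in range(iters):
            # x-update: exact quadratic solve, diagonal in the Fourier domain
            rhs = np.conj(H) * np.fft.fft(z1 - u1) + np.fft.fft(z2 - u2)
            x = np.real(np.fft.ifft(rhs / ((2 / rho) * np.abs(D) ** 2
                                           + np.abs(H) ** 2 + 1)))
            Hx = np.real(np.fft.ifft(H * np.fft.fft(x)))
            # z-updates: project onto each hard data-fidelity ball
            z1 = proj_ball(Hx + u1, b_blur, eps1)
            z2 = proj_ball(x + u2, b_noisy, eps2)
            u1 += Hx - z1                               # scaled dual updates
            u2 += x - z2
        return x
    ```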

  • Sufficient and Necessary Conditions of Distributed Compressed Sensing with Prior Information

    Wenbo XU  Yupeng CUI  Yun TIAN  Siye WANG  Jiaru LIN  

     
    PAPER-General Fundamentals and Boundaries

    Vol: E100-A No:9
    Page(s): 2013-2020

    This paper considers the recovery problem of distributed compressed sensing (DCS), where J (J≥2) signals all have a sparse common component and sparse innovation components. The decoder attempts to jointly recover each component from {M_j} random noisy measurements (j=1,…,J), given prior information on the support probabilities, i.e., the probabilities that the entries in each component are nonzero. We give both sufficient and necessary conditions on the total number of measurements $\sum_{j=1}^{J} M_j$ needed to recover the support set of each component perfectly. The results show that as the number of signals J increases, the required average number of measurements $\sum_{j=1}^{J} M_j/J$ decreases. Furthermore, we propose an extension of an existing algorithm for DCS that exploits the prior information, and simulations verify its improved performance.
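
    One common way to fold support-probability priors into greedy recovery is to bias the atom-selection step, as in the hypothetical single-signal sketch below. The paper's algorithm extends an existing joint-recovery method for DCS, which this sketch does not reproduce.

    ```python
    import numpy as np

    def weighted_omp(A, y, p, k):
        """OMP whose matching step is weighted by prior support probabilities p."""
        r, S = y.copy(), []
        for _ in range(k):
            corr = np.abs(A.T @ r) * p          # prior-weighted atom selection
            corr[S] = -np.inf                   # never pick the same atom twice
            S.append(int(np.argmax(corr)))
            xS, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
            r = y - A[:, S] @ xS                # update the residual
        x = np.zeros(A.shape[1])
        x[S] = xS
        return x, sorted(S)
    ```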

  • Visualizing Web Images Using Fisher Discriminant Locality Preserving Canonical Correlation Analysis

    Kohei TATENO  Takahiro OGAWA  Miki HASEYAMA  

     
    PAPER

    Publicized: 2017/06/14
    Vol: E100-D No:9
    Page(s): 2005-2016

    A novel dimensionality reduction method, Fisher discriminant locality preserving canonical correlation analysis (FDLP-CCA), for visualizing Web images is presented in this paper. FDLP-CCA can integrate two modalities and discriminate target items in terms of their semantics by considering the unique characteristics of the two modalities. In this paper, we focus on Web images with text uploaded to social networking services as these two modalities. Specifically, text features have high discriminative power in terms of semantics, whereas visual features of images capture their perceptual relationships. In order to account for both of these characteristics, FDLP-CCA estimates the correlation between the text and visual features while considering the cluster structure based on the text features and the local structure based on the visual features. Thus, FDLP-CCA can integrate the different modalities and provide separated manifolds that organize enhanced compactness within each natural cluster.
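
    For reference, the sketch below is plain CCA computed via an SVD of the whitened cross-covariance, the standard core that FDLP-CCA extends with Fisher-discriminant and locality-preserving structure; those extensions are not reproduced here.

    ```python
    import numpy as np

    def cca(X, Y, k, eps=1e-8):
        """Top-k canonical projections for paired views X (n x dx), Y (n x dy)."""
        Xc, Yc = X - X.mean(0), Y - Y.mean(0)
        n = len(X)
        Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])  # regularized covariances
        Cyy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
        Cxy = Xc.T @ Yc / n
        Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T   # whitening transforms
        Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
        U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
        return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]      # projections, correlations
    ```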

  • Flexible and Fast Similarity Search for Enriched Trajectories

    Hideaki OHASHI  Toshiyuki SHIMIZU  Masatoshi YOSHIKAWA  

     
    PAPER-Data Engineering, Web Information Systems

    Publicized: 2017/05/30
    Vol: E100-D No:9
    Page(s): 2081-2091

    In this study, we focus on a method to search for similar trajectories. The majority of previous works on similar-trajectory search use only raw trajectory data. However, to obtain deeper insights, additional time-dependent trajectory features should be utilized depending on the search intent. For instance, to identify similar combination plays in soccer games, such additional features include the movements of the team players. In this paper, we develop a framework for flexibly searching for similar trajectories associated with time-dependent features, which we call enriched trajectories. In this framework, weights representing the relative importance of each feature can be flexibly specified by users. Moreover, to facilitate fast searching, we first propose a lower bounding measure of the DTW distance between enriched trajectories, and then propose algorithms based on this lower bounding measure. We evaluate the effectiveness of the lower bounding measure and compare the performance of the algorithms under various conditions using soccer data and synthetic data. Our experimental results suggest that the proposed lower bounding measure is superior to the existing measure, and that one of the proposed algorithms, based on the threshold algorithm, is suitable for practical use.
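
    For reference, the classic LB_Keogh bound shows how a cheap lower bound on DTW lets a search prune candidates before computing the full distance; a 1-D sketch for equal-length sequences is below. The paper's measure instead bounds DTW between weighted multi-feature trajectories, which this sketch does not cover.

    ```python
    import numpy as np

    def lb_keogh(q, c, r):
        """Lower bound on DTW(q, c) under a Sakoe-Chiba band of radius r.
        If the bound already exceeds the best distance so far, skip full DTW."""
        total = 0.0
        for i, x in enumerate(q):
            lo, hi = max(0, i - r), min(len(c), i + r + 1)
            u, l = np.max(c[lo:hi]), np.min(c[lo:hi])   # band envelope of c
            if x > u:
                total += (x - u) ** 2
            elif x < l:
                total += (x - l) ** 2
        return total
    ```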

  • Design of Two Channel Biorthogonal Graph Wavelet Filter Banks with Half-Band Kernels

    Xi ZHANG  

     
    PAPER

    Vol: E100-A No:9
    Page(s): 1743-1750

    In this paper, we propose a novel method for designing two-channel critically sampled compactly supported biorthogonal graph wavelet filter banks with half-band kernels. First, we use polynomial half-band kernels to construct a class of biorthogonal graph wavelet filter banks that exactly satisfy the PR (perfect reconstruction) condition. We then present a design method for polynomial half-band kernels with a specified degree of flatness. The proposed design method utilizes the PBP (parametric Bernstein polynomial), which ensures that the half-band kernels have the specified zeros at λ=2. The flatness constraints are therefore satisfied at both λ=0 and λ=2, and the resulting graph wavelet filters have flat spectral responses in the passband and stopband. Furthermore, we apply the Remez exchange algorithm to minimize the spectral error of the lowpass (highpass) filter in the band of interest by using the remaining degrees of freedom. Finally, several examples are designed to demonstrate the effectiveness of the proposed design method.
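
    As a small illustration of a polynomial half-band kernel on the graph spectrum [0, 2], the sketch below evaluates the maximally flat member in Bernstein form; it satisfies p(λ) + p(2-λ) = 2 with a zero of order K at λ=2. The paper's PBP design is more general, freeing some Bernstein coefficients and tuning them with the Remez exchange algorithm, which is not reproduced here.

    ```python
    import numpy as np
    from math import comb

    def maxflat_halfband(lam, K):
        """Maximally flat polynomial half-band kernel on [0, 2]:
        p(lam) + p(2 - lam) = 2, with K-th order flatness at 0 and 2."""
        x = np.asarray(lam) / 2.0           # map the spectrum [0, 2] to [0, 1]
        N = 2 * K - 1
        return 2.0 * sum(comb(N, k) * x**k * (1 - x)**(N - k) for k in range(K))
    ```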
