The search functionality is under construction.

Keyword Search Results

[Keyword] (42807 hits)

Results 5281-5300 of 42807

  • Validity of Kit-Build Method for Assessment of Learner-Build Map by Comparing with Manual Methods

    Warunya WUNNASRI  Jaruwat PAILAI  Yusuke HAYASHI  Tsukasa HIRASHIMA  

     
    PAPER-Educational Technology

      Publicized:
    2018/01/11
      Vol:
    E101-D No:4
      Page(s):
    1141-1150

    This paper describes an investigation into the validity of an automatic assessment method for learner-built concept maps, conducted by comparing it with two well-known manual methods. We have previously proposed the Kit-Build (KB) concept map framework, in which a learner builds a concept map using only a provided set of components called a “kit”. In this framework, instant and automatic assessment of a learner-built concept map has been realized. We call this assessment method the “Kit-Build method” (KB method). The framework and assessment method have already been used practically in classrooms in various schools. To investigate the validity of the method, we conducted an experiment as a case study comparing its assessment results with those of two manual assessment methods. In this experiment, 22 university students participated as subjects and four as raters. We found that the scores of the KB method had a very strong correlation with the scores of the manual methods. The results of this experiment provide evidence that the automatic assessment of Kit-Build concept maps can attain almost the same level of validity as well-known manual assessment methods.
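The overlap-based scoring idea behind automatic assessment of learner-built maps can be pictured with a toy sketch. The proposition sets and the exact-match scoring rule below are illustrative assumptions, not the paper's actual KB rubric:

```python
# A concept map is modeled as a set of (concept, link, concept) propositions;
# a learner map is scored by its overlap with the goal map.
def kb_score(goal_map, learner_map):
    """Fraction of goal-map propositions reproduced by the learner."""
    goal = set(goal_map)
    return len(goal & set(learner_map)) / len(goal)

goal = {("rain", "causes", "flood"),
        ("heat", "causes", "evaporation"),
        ("cloud", "produces", "rain")}
learner = {("rain", "causes", "flood"),
           ("cloud", "produces", "rain"),
           ("heat", "causes", "rain")}      # one incorrect link

print(kb_score(goal, learner))  # 2 of 3 goal links matched
```

Because the kit fixes the available components, every learner proposition is directly comparable against the goal map, which is what makes instant automatic scoring possible.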

  • PON Convergence Open Access

    Frank EFFENBERGER  

     
    INVITED PAPER

      Publicized:
    2017/10/18
      Vol:
    E101-B No:4
      Page(s):
    947-951

    This paper discusses the concept of PON standards convergence. The history of PON standardization is reviewed in brief as a way to explain how the industry arrived at its current divergent form. The reasons why convergence is favorable are enumerated, with a focus on what has changed since the last round of standardization. Finally, some paths forward are proposed.

  • Name Binding is Easy with Hypergraphs

    Alimujiang YASEN  Kazunori UEDA  

     
    PAPER-Software System

      Publicized:
    2018/01/12
      Vol:
    E101-D No:4
      Page(s):
    1126-1140

    We develop a technique for representing variable names and name binding, the mechanism that associates a name with an entity in many formal systems, including logic, programming languages, and mathematics. The idea is to use a general form of graph links (or edges) called hyperlinks to represent variables, graph nodes as constructors of the formal systems, and a graph type called hlground to define substitutions. Our technique is based on simple notions of graph theory, in which graph types ensure correct substitutions and keep bound variables distinct. We encode strong reduction of the untyped λ-calculus to introduce our technique. We then encode a more complex formal system called System F<:, a polymorphic λ-calculus with subtyping that has been one of the important theoretical foundations of functional programming languages. The advantage of our technique is that the representation of terms, the definition of substitutions, and the implementation of formal systems are all straightforward. We formalized the graph type hlground, proved that it ensures correct substitutions in the λ-calculus, and implemented hlground in HyperLMNtal, a modeling language based on hypergraph rewriting. Experiments were conducted to test the technique. With it, one can implement formal systems simply by following the steps of their definitions as described in papers.

  • Energy-Efficient Resource Management in Mobile Cloud Computing

    Xiaomin JIN  Yuanan LIU  Wenhao FAN  Fan WU  Bihua TANG  

     
    PAPER-Energy in Electronics Communications

      Publicized:
    2017/10/16
      Vol:
    E101-B No:4
      Page(s):
    1010-1020

    Mobile cloud computing (MCC) has been proposed as a new approach to enhance mobile device performance via computation offloading. The growth in cloud computing energy consumption is placing pressure on both the environment and cloud operators. In this paper, we focus on energy-efficient resource management in MCC and aim to reduce cloud operators' energy consumption through resource management. We establish a deterministic resource management model by solving a combinatorial optimization problem with constraints. To obtain the resource management strategy in deterministic scenarios, we propose a deterministic strategy algorithm based on the adaptive group genetic algorithm (AGGA). Wireless networks are used to connect to the cloud in MCC, which causes uncertainty in resource management in MCC. Based on the deterministic model, we establish a stochastic model that involves a stochastic optimization problem with chance constraints. To solve this problem, we propose a stochastic strategy algorithm based on Monte Carlo simulation and AGGA. Experiments show that our deterministic strategy algorithm obtains approximate optimal solutions with low algorithmic complexity with respect to the problem size, and our stochastic strategy algorithm saves more energy than other algorithms while satisfying the chance constraints.
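The chance-constrained part of the stochastic model can be illustrated with a minimal Monte Carlo check. The Gaussian demand model, the capacity numbers, and the 95% confidence level are illustrative assumptions, not parameters from the paper:

```python
import random

def chance_constraint_ok(samples, capacity, confidence=0.95):
    """Check a chance constraint P(demand <= capacity) >= confidence
    by Monte Carlo: count how many sampled demands fit the capacity."""
    hits = sum(1 for d in samples if d <= capacity)
    return hits / len(samples) >= confidence

random.seed(0)
# Sampled uncertain demands (e.g. workloads arriving over an unreliable
# wireless link); mean 10.0, standard deviation 2.0.
demands = [random.gauss(mu=10.0, sigma=2.0) for _ in range(10000)]
print(chance_constraint_ok(demands, capacity=14.0))  # ~2 sigma of headroom
print(chance_constraint_ok(demands, capacity=10.0))  # only ~50% satisfied
```

In the paper's setting, a check of this kind would sit inside the genetic-algorithm fitness evaluation, rejecting candidate resource assignments whose simulated constraint-satisfaction rate falls below the required confidence.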

  • Multiple Speech Source Separation with Non-Sparse Components Recovery by Using Dual Similarity Determination

    Maoshen JIA  Jundai SUN  Feng DENG  Junyue SUN  

     
    PAPER-Elemental Technologies for human behavior analysis

      Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    925-932

    In this work, a multiple-source separation method with joint sparse and non-sparse component recovery is proposed using dual similarity determination. Specifically, a dual similarity coefficient is designed based on the normalized cross-correlation and Jaccard coefficients, and its reasonableness is validated via a statistical analysis of a quantitative effectiveness measure. Thereafter, using the sparse components as a guide, the non-sparse components are recovered with the dual similarity coefficient. Finally, a separated signal is obtained by synthesizing the sparse and non-sparse components. Experimental results demonstrate that the separation quality of the proposed method outperforms that of several existing blind source separation (BSS) methods, including sparse-component-separation-based methods, independent component analysis based methods, and soft-threshold-based methods.
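A dual similarity of the kind described, combining normalized cross-correlation with a Jaccard coefficient, can be sketched on toy spectra. The activity threshold and the equal-weight combination are illustrative assumptions; the paper's exact weighting may differ:

```python
import math

def ncc(x, y):
    """Normalized cross-correlation of two equal-length signals."""
    num = sum(a * b for a, b in zip(x, y))
    den = (math.sqrt(sum(a * a for a in x)) *
           math.sqrt(sum(b * b for b in y)))
    return num / den

def jaccard(x, y, thresh=0.1):
    """Jaccard coefficient over the sets of 'active' (above-threshold) bins."""
    sx = {i for i, a in enumerate(x) if abs(a) > thresh}
    sy = {i for i, b in enumerate(y) if abs(b) > thresh}
    return len(sx & sy) / len(sx | sy)

def dual_similarity(x, y, alpha=0.5):
    # Illustrative 50/50 combination of the two coefficients.
    return alpha * ncc(x, y) + (1 - alpha) * jaccard(x, y)

a = [0.0, 1.0, 0.9, 0.0]   # toy time-frequency magnitude vectors
b = [0.0, 0.8, 1.0, 0.0]
print(dual_similarity(a, b))
```

The cross-correlation term captures how similar the active components are in value, while the Jaccard term captures whether the same bins are active at all, which is the intuition for pairing a sparse component with the non-sparse components it should guide.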

  • Color Image Enhancement Method with Variable Emphasis Degree

    Hiromu ENDO  Akira TAGUCHI  

     
    PAPER-Image

      Vol:
    E101-A No:4
      Page(s):
    713-722

    In this paper, we propose a new enhancement method for color images. In color image processing, hue preservation is required. The proposed method operates in the HSI color space, whose gamut is the same as that of the RGB color space. Differential gray-level histogram equalization (DHE) is effective for gray-scale images. The proposed method extends DHE to color images and, furthermore, makes the enhancement degree variable by introducing two parameters. Since our method is applied not only to intensity but also to saturation, both the contrast and the colorfulness of the output image can be varied. How to determine the two parameters is an important issue, so we give a guideline for deciding them. By using the guideline, users can easily obtain their own enhanced images.
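The notion of a variable enhancement degree can be illustrated by blending plain histogram equalization with the identity mapping. This is a crude stand-in for the paper's differential-histogram transform, and the single blend parameter `alpha` is an illustrative assumption (the actual method uses two parameters over intensity and saturation):

```python
def equalize_with_emphasis(levels, alpha=1.0, lmax=255):
    """Histogram equalization blended with the identity map; alpha in [0,1]
    sets the emphasis degree (alpha=0: no change, alpha=1: full EQ)."""
    n = len(levels)
    hist = [0] * (lmax + 1)
    for v in levels:
        hist[v] += 1
    cdf, acc = [], 0
    for h in hist:                      # cumulative distribution in [0,1]
        acc += h
        cdf.append(acc / n)
    return [round((1 - alpha) * v + alpha * cdf[v] * lmax) for v in levels]

pixels = [10, 10, 20, 200]
print(equalize_with_emphasis(pixels, alpha=0.0))  # identity: unchanged
print(equalize_with_emphasis(pixels, alpha=1.0))  # full equalization
```

Intermediate `alpha` values interpolate between the two extremes, which is the sense in which the enhancement degree becomes user-tunable.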

  • Expansion of Optical Access Network to Rural Area Open Access

    Hideyuki IWATA  Yuji INOUE  

     
    INVITED PAPER

      Publicized:
    2017/10/18
      Vol:
    E101-B No:4
      Page(s):
    966-971

    The spread of optical access broadband networks using Fiber to the Home (FTTH) has not reached the rural areas of developing countries. The current state of global deployment of ICT indicates that it is difficult to sell network systems as stand-alone products due to prohibitive costs, and the demand is for total services that include construction, maintenance, and operation. Moreover, there is a need to offer proposals that include various solutions utilizing broadband networks, as well as for a business model that takes the sustainability of those solutions into consideration. In this paper, we discuss the issues in constructing broadband networks, introduce case studies of solutions using broadband networks for solving social issues in rural areas of developing countries, and discuss the challenges in the deployment of the solutions.

  • Reliability Analysis of Scaled NAND Flash Memory Based SSDs with Real Workload Characteristics by Using Real Usage-Based Precise Reliability Test

    Yusuke YAMAGA  Chihiro MATSUI  Yukiya SAKAKI  Ken TAKEUCHI  

     
    PAPER

      Vol:
    E101-C No:4
      Page(s):
    243-252

    In order to reduce memory cell errors in the real usage of NAND flash-based SSDs, a real usage-based precise reliability test for the NAND flash of SSDs has been proposed. The reliability of the NAND flash memories in SSDs is seriously degraded as memory cells are scaled down. However, conventional simple reliability tests of read-disturb and data-retention cannot reproduce the real-life VTH shifts and memory cell errors. To solve this problem, the proposed reliability test precisely reproduces real memory cell failures by emulating the complicated read, write, and data-retention behavior with an SSD emulator. In this paper, the real-life VTH shifts and memory cell errors of two generations of NAND flash memory under differently characterized real workloads are provided. Using the proposed test method, a 1.6-times BER difference is observed when a write-cold and read-hot workload (hm_1) and a write-hot and read-hot workload (prxy_1) are compared in 1Ynm MLC NAND flash. In addition, as NAND flash memory scales from the 1Xnm to the 1Ynm generation, the discrepancy in error numbers between the conventional reliability test result and the actual reliability measured by the proposed test increases by 6.3 times. Finally, guidelines for read reference voltage shifts and ECC strength are given to achieve high memory cell reliability for various workloads.

  • Approximate-DCT-Derived Measurement Matrices with Row-Operation-Based Measurement Compression and its VLSI Architecture for Compressed Sensing

    Jianbin ZHOU  Dajiang ZHOU  Takeshi YOSHIMURA  Satoshi GOTO  

     
    PAPER

      Vol:
    E101-C No:4
      Page(s):
    263-272

    The compressed sensing based CMOS image sensor (CS-CIS) is a new generation of CMOS image sensor that significantly reduces power consumption. For CS-CIS, the image quality and the output data volume are two important concerns. In this paper, we first propose an algorithm to generate a series of deterministic ternary matrices, which improve image quality, reduce data volume, and are compatible with CS-CIS. The proposed matrices are derived from the approximate DCT and trimmed in 2D-zigzag order, thus preserving the energy compaction property of the DCT. Moreover, we propose matrix row operations adapted to the proposed matrix to further compress the data (measurements) without any loss of image quality. Finally, a low-cost VLSI architecture for measurement compression with the proposed matrix row operations is implemented. Experimental results show that our proposed matrix significantly improves coding efficiency, with a BD-PSNR increase of 4.2 dB compared with the random binary matrix used in state-of-the-art CS-CIS. The proposed matrix row operations for measurement compression further increase coding efficiency by 0.24 dB BD-PSNR (a 4.8% BD-rate reduction). The VLSI architecture occupies only 4.3 K gates and consumes 0.3 mW.
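The flavor of a deterministic ternary measurement matrix derived from the DCT can be sketched as follows. The dead-zone sign quantization below is an illustrative assumption standing in for the paper's approximate-DCT construction, and the 2D-zigzag trimming step is omitted:

```python
import math

def dct_matrix(n):
    """Orthonormal DCT-II basis (rows are basis vectors)."""
    m = []
    for k in range(n):
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        m.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def ternarize(m, dead_zone=0.2):
    """Map each entry to -1, 0, or +1: small magnitudes are zeroed,
    the rest keep only their sign."""
    return [[0 if abs(v) < dead_zone else (1 if v > 0 else -1)
             for v in row] for row in m]

T = ternarize(dct_matrix(4))
for row in T:
    print(row)
```

A ternary matrix like this needs only additions and subtractions in the sensor's analog front end (no multiplications), while inheriting a DCT-like frequency ordering of its rows, which is the property the row-trimming and measurement-compression steps exploit.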

  • A High-Throughput Low-Energy Arithmetic Processor

    Hong-Thu NGUYEN  Xuan-Thuan NGUYEN  Cong-Kha PHAM  

     
    BRIEF PAPER

      Vol:
    E101-C No:4
      Page(s):
    281-284

    In this paper, the hardware architecture of a CORDIC-based arithmetic processor utilizing both the angle-recoding (ARD) and scaling-free (SCFE) CORDIC algorithms is proposed and implemented in 180 nm CMOS technology. The arithmetic processor is capable of calculating the sine, cosine, hyperbolic sine, hyperbolic cosine, and multiplication functions. The experimental results show that the design operates at 100 MHz and consumes 12.96 mW. Compared with previous work, the design is a good choice for high-throughput, low-energy applications.
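For reference, the basic rotation-mode CORDIC iteration that such processors build on can be sketched in a few lines. This is the textbook algorithm, not the paper's ARD or SCFE variant; floats are used for clarity where hardware would use fixed point:

```python
import math

def cordic_sin_cos(theta, iters=24):
    """Rotation-mode CORDIC: compute (cos, sin) using only shifts, adds,
    and a small arctangent table."""
    angles = [math.atan(2.0 ** -i) for i in range(iters)]
    k = 1.0                                  # cumulative gain compensation
    for i in range(iters):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta                  # start pre-scaled by 1/gain
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y

c, s = cordic_sin_cos(math.pi / 6)
print(round(c, 4), round(s, 4))  # ~cos(30 deg), ~sin(30 deg)
```

Angle recoding reduces the number of such iterations by skipping rotations with zero contribution, and scaling-free variants remove the gain-compensation constant `k`, which is how the two techniques trade off against each other in hardware.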

  • Development of Idea Generation Consistent Support System That Includes Suggestive Functions for Preparing Concreteness of Idea Labels and Island Names

    Jun MUNEMORI  Hiroki SAKAMOTO  Junko ITOU  

     
    PAPER-Creativity Support Systems and Decision Support Systems

      Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    838-846

    In recent years, networking has spread substantially owing to rapid developments in Information & Communication Technology (ICT). It has also become easy to share highly contextual data and information, including ideas, among people. On the other hand, there exists information that cannot be expressed in words (tacit knowledge), as well as useful knowledge or know-how that is not shared well within an organization. Idea generation methods enable tacit knowledge to be expressed in words as explicit knowledge, which organizations can then utilize as know-how. We propose an idea generation consistent support system, GUNGEN-Web II. This system has suggestion functions for concrete idea labels and concrete island names, which convey an idea and an island name to other participants more precisely. The system also has an illustration support function and a document support function. In this study, we aimed to improve the quality of the sentences obtained using the KJ method. We compared the results of the proposed system with those of the conventional GUNGEN-Web experimentally. The results are as follows: the sentence evaluations for GUNGEN-Web II differed significantly from those obtained with the conventional GUNGEN-Web.

  • Web-Based and Quality-Oriented Remote Collaboration Platform Tolerant to Severe Network Constraints

    Yasuhiro MOCHIDA  Daisuke SHIRAI  Tatsuya FUJII  

     
    PAPER-Technologies for Knowledge Support Platform

      Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    944-955

    Existing remote collaboration systems are not suitable for a collaboration style in which distributed users touch work tools at the same time, especially in demanding use cases or under severe network conditions. To cover a wider range of use cases, we propose a novel concept of a remote collaboration platform that enables users to share currently used work tools through a high-quality A/V transmission module, while maintaining the advantages of web-based systems. It also provides functions to deal with long transmission delay (using relay servers), packet transmission instability (using visual feedback of audio delivery), and limited bandwidth (using dynamic allocation of video bitrate). We implemented the platform and conducted evaluation tests. The results show the feasibility of the proposed concept and its tolerance to network constraints, indicating that the proposed platform can be used to construct unprecedented collaboration systems.

  • A Survey of Thai Knowledge Extraction for the Semantic Web Research and Tools Open Access

    Ponrudee NETISOPAKUL  Gerhard WOHLGENANNT  

     
    SURVEY PAPER

      Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    986-1002

    As the manual creation of domain models and of linked data is very costly, the extraction of knowledge from structured and unstructured data has been one of the central research areas in the Semantic Web field over the last two decades. Here, we look specifically at the extraction of formalized knowledge from natural language text, the most abundant source of human knowledge available. While many tools are available for information and knowledge extraction from English, the situation for written Thai is different. The goal of this work is to assess the state of the art of research on formal knowledge extraction from Thai text, and then to give suggestions and practical research ideas on how to improve that state of the art. To address this goal, we first distinguish nine Semantic Web knowledge extraction tasks defined in the literature on knowledge extraction from English text, for example taxonomy extraction, relation extraction, and named entity recognition. For each of the nine tasks, we analyze the publications and tools available for Thai text in the form of a comprehensive literature survey. In addition to our own assessment, we measure the self-assessment of the Thai research community with the help of a questionnaire-based survey on each of the tasks. Furthermore, the structure and size of the Thai community are analyzed using complex literature database queries. Combining all the collected information, we finally identify research gaps in knowledge extraction from Thai. An extensive list of practical research ideas is presented, focusing on concrete suggestions for every knowledge extraction task, each of which can be implemented and evaluated with reasonable effort. Besides the task-specific hints for improving the state of the art, we also include general recommendations on how to raise the efficiency of the respective research community.

  • Performance Evaluation of Pipeline-Based Processing for the Caffe Deep Learning Framework

    Ayae ICHINOSE  Atsuko TAKEFUSA  Hidemoto NAKADA  Masato OGUCHI  

     
    PAPER

      Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    1042-1052

    Many life-log analysis applications, which transfer data from cameras and sensors to a Cloud and analyze them in the Cloud, have been developed as the use of various sensors and Cloud computing technologies has spread. However, difficulties arise because of the limited network bandwidth between such sensors and the Cloud. In addition, sending raw sensor data to a Cloud may introduce privacy issues. Therefore, we propose a pipelined method for distributed deep learning processing between sensors and the Cloud to reduce the amount of data sent to the Cloud and protect the privacy of users. In this study, we measured the processing times and evaluated the performance of our method using two different datasets. In addition, we performed experiments using three types of machines with different performance characteristics on the client side and compared the processing times. The experimental results show that the accuracy of deep learning with coarse-grained data is comparable to that achieved with the default parameter settings, and the proposed distributed processing method has performance advantages in cases of insufficient network bandwidth between realistic sensors and a Cloud environment. In addition, it is confirmed that the process that most affects the overall processing time varies depending on the machine performance on the client side, and the most efficient distribution method similarly differs.

  • Analysis of Body Bias Control Using Overhead Conditions for Real Time Systems: A Practical Approach

    Carlos Cesar CORTES TORRES  Hayate OKUHARA  Nobuyuki YAMASAKI  Hideharu AMANO  

     
    PAPER-Computer System

      Publicized:
    2018/01/12
      Vol:
    E101-D No:4
      Page(s):
    1116-1125

    In the past decade, real-time systems (RTSs), which must meet time constraints to avoid catastrophic consequences, have been widely introduced into various embedded systems and the Internet of Things (IoT). RTSs are required to be energy efficient, as they are used in embedded devices for which battery life is important. In this study, we investigated RTS energy efficiency by analyzing the ability of body bias (BB) to provide a satisfactory tradeoff between performance and energy. We propose a practical and realistic model that includes the BB energy and timing overhead in addition to idle-region analysis. This study was conducted using accurate parameters extracted from a real chip fabricated in silicon-on-thin-box (SOTB) technology. Using BB control based on the proposed model, an energy reduction of about 34% was achieved.

  • Frame-Based Representation for Event Detection on Twitter

    Yanxia QIN  Yue ZHANG  Min ZHANG  Dequan ZHENG  

     
    PAPER-Natural Language Processing

      Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    1180-1188

    Large-scale first-hand tweets motivate automatic event detection on Twitter. Previous approaches model events by clustering tweets, words, or segments. Event clusters represented by tweets are easier to understand than those represented by words/segments; however, tweets are sparser than words/segments, which makes clustering less effective. This article proposes to represent events with triple structures called frames, which are as efficient as words/segments yet easier to understand. Frames are extracted from the shallow syntactic information of tweets with an unsupervised open information extraction method, introduced for domain-independent relation extraction in a single pass over web-scale data. Bursty frame element extraction then functions as feature selection, filtering frame elements with bursty frequency patterns via a probabilistic model. After clustering and ranking, high-quality events are yielded and then reported by linking frame elements back to frames. Experimental results show that frame-based event detection improves precision over a state-of-the-art segment-based event detection baseline. The superior readability of frame-based events compared with segment-based events is demonstrated in some example outputs.
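The burstiness filtering step can be sketched with a toy frequency-ratio test. The ratio threshold, minimum count, and add-one smoothing below are illustrative assumptions standing in for the paper's probabilistic model:

```python
from collections import Counter

def bursty_elements(window_tokens, background_tokens,
                    ratio=3.0, min_count=2):
    """Flag tokens whose frequency in the current time window is much
    higher than their rate in a background corpus."""
    win, bg = Counter(window_tokens), Counter(background_tokens)
    n_win, n_bg = len(window_tokens), len(background_tokens)
    out = []
    for tok, c in win.items():
        if c < min_count:
            continue
        p_win = c / n_win
        p_bg = (bg[tok] + 1) / (n_bg + 1)   # add-one smoothed background rate
        if p_win / p_bg >= ratio:
            out.append(tok)
    return sorted(out)

window = "quake hits city quake damage city quake".split()
background = "city traffic city news weather city sports".split()
print(bursty_elements(window, background))
```

Applied to frame elements rather than raw tokens, a test of this shape keeps only the subjects, predicates, and objects whose sudden frequency spike signals an event, before the surviving frames are clustered and ranked.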

  • A Joint Convolutional Bidirectional LSTM Framework for Facial Expression Recognition

    Jingwei YAN  Wenming ZHENG  Zhen CUI  Peng SONG  

     
    LETTER-Biocybernetics, Neurocomputing

      Publicized:
    2018/01/11
      Vol:
    E101-D No:4
      Page(s):
    1217-1220

    Facial expressions are generated by the actions of the facial muscles located in different facial regions. The spatial dependencies among different facial regions are worth exploring and can improve the performance of facial expression recognition. In this letter, we propose a joint convolutional bidirectional long short-term memory (JCBLSTM) framework to jointly model discriminative facial textures and the spatial relations between different regions. We treat each row or column of the feature maps output by the CNN as an individual ordered sequence and employ an LSTM to model the spatial dependencies within it. Moreover, a shortcut connection for the convolutional feature maps is introduced for joint feature representation. We conduct experiments on two databases to evaluate the proposed JCBLSTM method. The experimental results demonstrate that JCBLSTM achieves state-of-the-art performance on Multi-PIE and a very competitive result on FER-2013.

  • Filter Level Pruning Based on Similar Feature Extraction for Convolutional Neural Networks

    Lianqiang LI  Yuhui XU  Jie ZHU  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    1203-1206

    This paper introduces a filter-level pruning method based on similar feature extraction for compressing and accelerating convolutional neural networks using the k-means++ algorithm. In contrast to other pruning methods, the proposed method analyzes the similarities in the features recognized by filters, rather than evaluating filter importance, to prune the redundant ones. This strategy is more reasonable and effective. Furthermore, our method does not result in an unstructured network; consequently, it needs no extra sparse representation and can be efficiently supported by any off-the-shelf deep learning library. Experimental results show that our filter pruning method reduces the number of parameters and the computational cost of LeNet-5 by a factor of 17.9× with only 0.3% accuracy loss.
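The similarity-based pruning idea can be sketched with a small k-means++-style clustering over flattened filter weights, keeping one representative per cluster. The nearest-to-centroid selection rule and the toy 2-weight "filters" are illustrative assumptions; the paper's exact criterion operates on recognized features and may differ:

```python
import random

def kmeanspp_prune(filters, k, iters=10, seed=0):
    """Cluster filters by weight similarity (k-means++ seeding + a few
    Lloyd steps) and keep the index of one representative per cluster."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centers = [list(filters[0])]     # first seed fixed for reproducibility
    while len(centers) < k:          # k-means++: far filters likelier picks
        d2 = [min(dist(f, c) for c in centers) for f in filters]
        pick = rng.choices(range(len(filters)), weights=d2)[0]
        centers.append(list(filters[pick]))
    for _ in range(iters):           # Lloyd refinement
        groups = [[] for _ in range(k)]
        for f in filters:
            groups[min(range(k), key=lambda j: dist(f, centers[j]))].append(f)
        for j, g in enumerate(groups):
            if g:
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    keep = {min(range(len(filters)), key=lambda i: dist(filters[i], c))
            for c in centers}        # nearest real filter to each centroid
    return sorted(keep)

# Four 'filters': two near (0,0), two near (5,5) -> keep one of each group.
F = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
print(kmeanspp_prune(F, k=2))
```

Because whole filters are removed (along with their output channels), the pruned network stays dense and structured, which is why no sparse-matrix support is needed at inference time.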

  • Collaborative Ontology Development Approach for Multidisciplinary Knowledge: A Scenario-Based Knowledge Construction System in Life Cycle Assessment

    Akkharawoot TAKHOM  Sasiporn USANAVASIN  Thepchai SUPNITHI  Mitsuru IKEDA  

     
    PAPER-Knowledge Representation

      Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    892-900

    Creating an ontology from multidisciplinary knowledge is a challenge because it requires various domain experts to collaborate in knowledge construction and to verify the semantic meanings of cross-domain concepts. Confusion and misinterpretation of concepts during knowledge creation are usually caused by domain experts having different perspectives and different business goals. In this paper, we propose a community-driven ontology-based application management (CD-OAM) framework that provides a collaborative environment with supporting features for collaborative knowledge creation. It can also reduce confusion and misinterpretation among domain stakeholders during the knowledge construction process. We selected one multidisciplinary domain, Life Cycle Assessment (LCA), for our scenario-based knowledge construction. Constructing LCA knowledge requires many concepts from various fields, including environmental protection, economic development, and social development. The output of this collaborative knowledge construction is called the MLCA (multidisciplinary LCA) ontology. Our scenario-based experiment shows that the CD-OAM framework can support the collaborative activities of MLCA knowledge construction and also reduce the confusion and misinterpretation of cross-domain concepts that usually arise in general approaches.

  • Repeated Games for Generating Randomness in Encryption

    Kenji YASUNAGA  Kosuke YUZAWA  

     
    PAPER-Cryptography and Information Security

      Vol:
    E101-A No:4
      Page(s):
    697-703

    In encryption schemes, the sender may not generate randomness properly if generating randomness is costly, and the sender is not concerned about the security of a message. The problem was studied by the first author (2016), and was formalized in a game-theoretic framework. In this work, we construct an encryption scheme with an optimal round complexity on the basis of the mechanism of repeated games.
