
Keyword Search Result

[Keyword] TE(21534hit)

2761-2780hit(21534hit)

  • Static Representation Exposing Spatial Changes in Spatio-Temporal Dependent Data

    Hiroki CHIBA  Yuki HYOGO  Kazuo MISUE  

     
    PAPER-Elemental Technologies for human behavior analysis

  Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    933-943

    Spatio-temporal dependent data, such as weather observation data, are data whose attribute values depend on both time and space. Typical methods for visualizing such data plot the attribute values at each point in time on a map and then display the series of maps either in chronological order as an animation or juxtaposed horizontally or vertically. However, these methods compel readers who want to grasp the spatial changes of the attribute values to memorize the representations on the maps, and the longer the time period covered by the data, the higher the cognitive load. To solve these problems, the authors propose a visualization method capable of overlaying the representations of multiple instantaneous values on a single static map. This paper explains the design of the proposed method and reports two experiments conducted to investigate its usefulness. The experimental results show that the proposed method is useful in terms of the speed and accuracy with which readers can grasp spatial changes, and in its ability to present data with long time series efficiently.
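
    As an illustration of the overlay idea, the following minimal sketch (not the authors' implementation; the data, marker encoding, and parameters are invented for illustration) draws several time steps of station readings on one static map, with color encoding time so that spatial change is visible without animation:

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      n_stations, n_steps = 30, 6
      x = rng.uniform(0, 100, n_stations)          # station positions on the map
      y = rng.uniform(0, 100, n_stations)
      # synthetic attribute values (e.g., temperature) drifting over time
      values = rng.uniform(10, 20, n_stations)[:, None] + np.linspace(0, 5, n_steps)

      for t in range(n_steps):
          plt.scatter(x, y, s=values[:, t] * 10,                    # size: value
                      color=plt.cm.viridis(t / (n_steps - 1)),      # color: time
                      alpha=0.5, edgecolors="none", label=f"t={t}")
      plt.legend(title="time step")
      plt.title("Overlaid snapshots on a single static map")
      plt.show()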

  • Workflow Extraction for Service Operation Using Multiple Unstructured Trouble Tickets

    Akio WATANABE  Keisuke ISHIBASHI  Tsuyoshi TOYONO  Keishiro WATANABE  Tatsuaki KIMURA  Yoichi MATSUO  Kohei SHIOMOTO  Ryoichi KAWAHARA  

     
    PAPER

  Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    1030-1041

    In current large-scale IT systems, troubleshooting has become more complicated due to the diversified causes of failures, which has increased operational costs. Clarifying the troubleshooting process is therefore important, but doing so is also time-consuming. We propose a method of automatically extracting a workflow, a graph describing a troubleshooting process, from multiple trouble tickets. Our method extracts an operator's actions from free-format text and aligns related sentences across multiple trouble tickets. It uses a stochastic model to detect a resolution, a frequent action pattern that helps us understand how to solve a problem. We validated our method using trouble-ticket data captured from a real network operation and showed that it can extract a workflow that identifies the cause of a failure.
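
    A toy sketch of the workflow-graph idea (the ticket texts, the verb list, and the simple consecutive-pair counting are invented simplifications; the paper uses statistical sentence alignment across tickets):

      import re
      from collections import Counter

      tickets = [
          ["checked interface status", "restarted daemon", "verified alarm cleared"],
          ["checked interface status", "replaced cable", "restarted daemon"],
      ]
      ACTION = re.compile(r"^(check|restart|replac|verif|escalat)", re.I)

      edges = Counter()
      for ticket in tickets:
          actions = [line for line in ticket if ACTION.match(line)]
          edges.update(zip(actions, actions[1:]))   # consecutive action pairs

      for (src, dst), freq in edges.most_common():  # frequent edges ~ workflow
          print(f"{src} -> {dst}  (seen {freq}x)")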

  • Analysis of SCM-Based SSD Performance in Consideration of SCM Access Unit Size, Write/Read Latencies and Application Request Size

    Hirofumi TAKISHITA  Yutaka ADACHI  Chihiro MATSUI  Ken TAKEUCHI  

     
    PAPER

      Vol:
    E101-C No:4
      Page(s):
    253-262

    NAND flash memories used in solid-state drives (SSDs) are expected to be replaced with storage-class memories (SCMs), which are comparable with NAND flash in cost and with DRAM in speed. This paper describes the performance difference between sector-unit read (512 Byte) and page-unit read (16 KByte, the NAND flash page size) for an SCM/NAND flash hybrid SSD and an SCM-based SSD, using synthetic and real workloads. The effect of the SCM read-unit size on SSD performance is also analyzed. When the SCM write/read latency is 0.1 us, the performance difference of the SCM/NAND flash hybrid SSD between page- and sector-unit read is at most about 1% and 6% for write-intensive and read-intensive workloads, respectively. However, the performance of the SCM-based SSD is significantly improved with sector-unit read because no extra read latency occurs. In particular, the SCM-based SSD IOPS improves by 131% for proj_3 (read-hot-random), because its read requests are small but its read request ratio is large. This paper also shows that the write/read IOPS of the SCM-based SSD with sector-unit read can be predicted from the average write/read request size of the workload.

  • 82.5GS/s (8×10.3GHz Multi-Phase Clocks) Blind Over-Sampling Based Burst-Mode Clock and Data Recovery for 10G-EPON 10.3-Gb/s/1.25-Gb/s Dual-Rate Operation

    Naoki SUZUKI  Kenichi NAKURA  Takeshi SUEHIRO  Seiji KOZAKI  Junichi NAKAGAWA  Kuniaki MOTOSHIMA  

     
    PAPER

  Publicized:
    2017/10/18
      Vol:
    E101-B No:4
      Page(s):
    987-994

    We present an 82.5GS/s over-sampling based burst-mode clock and data recovery (BM-CDR) IC chip-set comprising an 82.5GS/s over-sampling IC using 8×10.3GHz multi-phase clocks and a dual-rate data selector logic IC to realize the 10.3Gb/s and 1.25Gb/s dual-rate burst-mode fast-lock operation required for 10-Gigabit based fiber-to-the-x (FTTx) services supported by 10-Gigabit Ethernet passive optical network (10G-EPON) systems. As the key issue in designing the proposed 82.5GS/s BM-CDR, a fresh study of the optimum number of multi-phase clocks, which is equivalent to the sampling resolution, is undertaken, and details of the 10.3Gb/s and 1.25Gb/s dual-rate optimum phase data selection logic based on a blind phase decision algorithm, which realizes a full single-platform dual-rate BM-CDR, are also presented. Using the proposed 82.5GS/s over-sampling BM-CDR in cooperation with our dual-rate burst-mode optical receiver, we further demonstrated a short dual-rate burst-mode preamble of 256ns covering the receiver settling and CDR recovery times, while obtaining high receiver sensitivities of -31.6dBm at 10.3Gb/s and -34.6dBm at 1.25Gb/s and a high pulse-width distortion tolerance of +/-0.53UI, which are superior to the 10G-EPON standard.

  • Optimal Design of Notch Filter with Principal Basic Vectors in Subspace

    Jinguang HAO  Gang WANG  Lili WANG  Honggang WANG  

     
    LETTER-Digital Signal Processing

      Vol:
    E101-A No:4
      Page(s):
    723-726

    In this paper, an optimal method is proposed to design sparse-coefficient notch filters using principal basic vectors in the column space of a matrix constituted from frequency samples. The proposed scheme performs in two stages. In the first stage, the principal vectors are determined in the least-squares sense. In the second stage, using some components of the principal vectors, the notch filter design is formulated as a linear optimization problem according to the desired specifications. Solving the linear optimization problem yields the sparse coefficients of the notch filter. Simulation results show that the proposed scheme achieves better performance in designing a small-order sparse-coefficient notch filter than other methods such as the equiripple method, the orthogonal matching pursuit based scheme, and the L1-norm based method.
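
    A minimal sketch of the optimization stage only, under the assumption that the design is posed as L1-norm minimization of the tap vector subject to the frequency response staying within a tolerance of the notch specification (the principal-vector stage and the paper's exact formulation are omitted):

      import numpy as np
      from scipy.optimize import linprog

      M = 15                                   # half-order of a 31-tap symmetric FIR
      w = np.linspace(0, np.pi, 128)
      notch = 0.3 * np.pi
      stop = np.abs(w - notch) < 0.05 * np.pi  # notch band, |H| ~ 0
      pas = np.abs(w - notch) > 0.20 * np.pi   # passband, |H| ~ 1
      keep = stop | pas                        # transition band is don't-care
      A = np.cos(np.outer(w[keep], np.arange(M + 1)))  # H(w) = sum_k c_k cos(kw)
      d = np.where(stop[keep], 0.0, 1.0)
      eps = 0.05

      # minimize sum(t) subject to -t <= c <= t and d-eps <= A c <= d+eps
      nv = M + 1
      I, Z = np.eye(nv), np.zeros((A.shape[0], nv))
      res = linprog(np.r_[np.zeros(nv), np.ones(nv)],
                    A_ub=np.block([[A, Z], [-A, Z], [I, -I], [-I, -I]]),
                    b_ub=np.r_[d + eps, -(d - eps), np.zeros(2 * nv)],
                    bounds=[(None, None)] * nv + [(0, None)] * nv)
      taps = res.x[:nv]
      print(res.success, "nonzero taps:", int((np.abs(taps) > 1e-6).sum()))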

  • An Ontology-Based Approach to Supporting Knowledge Management in Government Agencies: A Case Study of the Thai Excise Department

    Marut BURANARACH  Chutiporn ANUTARIYA  Nopachat KALAYANAPAN  Taneth RUANGRAJITPAKORN  Vilas WUWONGSE  Thepchai SUPNITHI  

     
    PAPER-Knowledge Representation

  Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    884-891

    Knowledge management is important for government agencies in improving service delivery to their customers and data interoperation within and across organizations. Building an organizational knowledge repository for a government agency poses unique challenges. In this paper, we propose that an enterprise ontology can support government agencies in capturing organizational taxonomy, best practices, and a global data schema. A case study of a large-scale adoption by Thailand's Excise Department is elaborated, and a modular design approach for the enterprise ontology of the excise tax domain is discussed. Two forms of organizational knowledge, the global schema and standard practices, were captured as ontology and rule-based knowledge. The organizational knowledge was deployed to support two KM systems: an excise recommender service and linked open data. Finally, we discuss some lessons learned in adopting the framework in the government agency.

  • A Transmission Control Protocol for Long Distance High-Speed Wireless Communications

    Yohei HASEGAWA  Jiro KATTO  

     
    PAPER-Network

  Publicized:
    2017/10/17
      Vol:
    E101-B No:4
      Page(s):
    1045-1054

    This paper proposes a transmission control protocol (TCP) for long-distance high-speed wireless communications, including free-space optical communications (FSOC). The extremely high frequencies used in such wireless communications enable high bit rates, but frequent signal errors, including burst errors, are a severe problem for ordinary high-speed TCPs. To achieve data transfer throughput of 10Gbps or higher on FSOC, the proposed TCP (designated “TCP-FSO”) has improved and new features including multi-layer congestion control, retransmission control with packet loss point estimation, delay-based ACK congestion control, and ACK retransmission control. We evaluated the data transfer throughput of TCP-FSO and other TCPs by throughput model analysis and by experiments on a real implementation. The results show that TCP-FSO achieves far higher data transfer throughput than other high-speed TCPs; for example, it achieved a thousand times higher throughput than the other high-speed TCPs in a real FSOC environment.

  • A 7GS/s Complete-DDFS-Solution in 65nm CMOS

    Abdel MARTINEZ ALONSO  Masaya MIYAHARA  Akira MATSUZAWA  

     
    PAPER

      Vol:
    E101-C No:4
      Page(s):
    206-217

    A 7GS/s complete-DDFS-solution featuring a two-times interleaved RDAC with 1.2Vpp-diff output swing was fabricated in 65nm CMOS. The frequency tuning and amplitude resolutions are 24 bits and 10 bits, respectively. The RDAC includes a mixed-signal, high-speed architecture for random swapping thermometer coding (RSTC) dynamic element matching that improves the narrowband SFDR by up to 8dB for output frequencies below 1.85GHz. The proposed techniques enable 7GS/s operation with a spurious-free dynamic range better than 32dBc over the full Nyquist bandwidth; the worst-case narrowband SFDR is 42dBc. The system consumes 87.9mW/(GS/s) from a 1.2V power supply when the RSTC-DEM method is enabled, resulting in a FoM of 458.9GS/s·2^(SFDR/6)/W. A proof-of-concept chip with an active area of only 0.22mm² was measured in prototypes encapsulated in a 144-pin low-profile quad flat package.
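
    The reported FoM can be sanity-checked from the quoted numbers alone, assuming the stated figure of merit FoM = f_s · 2^(SFDR/6) / P with the full-Nyquist SFDR:

      f_s = 7e9                # sample rate, samples/s
      power = 87.9e-3 * 7      # 87.9 mW/(GS/s) at 7 GS/s -> ~0.615 W
      sfdr_dbc = 32            # worst-case SFDR over the full Nyquist band, dBc
      fom = f_s * 2 ** (sfdr_dbc / 6) / power
      print(f"{fom:.4g} per watt")   # ~4.59e11, i.e. ~459 GS/s*2^(SFDR/6)/W

    which agrees with the reported 458.9GS/s·2^(SFDR/6)/W to within rounding of the quoted power figure.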

  • Name Binding is Easy with Hypergraphs

    Alimujiang YASEN  Kazunori UEDA  

     
    PAPER-Software System

  Publicized:
    2018/01/12
      Vol:
    E101-D No:4
      Page(s):
    1126-1140

    We develop a technique for representing variable names and name binding, a mechanism for associating a name with an entity that appears in many formal systems including logic, programming languages, and mathematics. The idea is to use a general form of graph links (or edges) called hyperlinks to represent variables, graph nodes as constructors of the formal systems, and a graph type called hlground to define substitutions. Our technique is based on simple notions of graph theory in which graph types ensure correct substitutions and keep bound variables distinct. We encode strong reduction of the untyped λ-calculus to introduce our technique, and then encode a more complex formal system called System F<:, a polymorphic λ-calculus with subtyping that has been one of the important theoretical foundations of functional programming languages. The advantage of our technique is that the representation of terms, the definition of substitutions, and the implementation of formal systems are all straightforward. We formalized the graph type hlground, proved that it ensures correct substitutions in the λ-calculus, and implemented hlground in HyperLMNtal, a modeling language based on hypergraph rewriting. Experiments were conducted to test the technique; with it, one can implement formal systems simply by following the steps of their definitions as described in papers.
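
    For contrast with the hypergraph encoding, the following Python sketch shows the conventional bookkeeping the paper avoids: capture-avoiding substitution in the untyped λ-calculus, where a binder must sometimes be renamed to keep bound variables distinct (the term representation and renaming scheme are illustrative only):

      import itertools

      fresh = (f"v{i}" for i in itertools.count())

      def free_vars(t):
          kind = t[0]
          if kind == "var": return {t[1]}
          if kind == "app": return free_vars(t[1]) | free_vars(t[2])
          return free_vars(t[2]) - {t[1]}          # ("lam", x, body)

      def subst(t, x, s):
          """Capture-avoiding substitution t[x := s]."""
          kind = t[0]
          if kind == "var":
              return s if t[1] == x else t
          if kind == "app":
              return ("app", subst(t[1], x, s), subst(t[2], x, s))
          y, body = t[1], t[2]
          if y == x:                   # x is shadowed under this binder
              return t
          if y in free_vars(s):        # substitution would capture y: rename it
              z = next(fresh)
              body, y = subst(body, y, ("var", z)), z
          return ("lam", y, subst(body, x, s))

      # (λy. x y)[x := y]: the classic capture case; the binder gets renamed
      print(subst(("lam", "y", ("app", ("var", "x"), ("var", "y"))), "x", ("var", "y")))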

  • Multiple Speech Source Separation with Non-Sparse Components Recovery by Using Dual Similarity Determination

    Maoshen JIA  Jundai SUN  Feng DENG  Junyue SUN  

     
    PAPER-Elemental Technologies for human behavior analysis

  Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    925-932

    In this work, a multiple source separation method with joint sparse and non-sparse component recovery is proposed using dual similarity determination. Specifically, a dual similarity coefficient is designed based on normalized cross-correlation and Jaccard coefficients, and its reasonability is validated via a statistical analysis of a quantitative effectiveness measure. Thereafter, using the sparse components as a guide, the non-sparse components are recovered with the dual similarity coefficient. Finally, the separated signal is obtained by synthesizing the sparse and non-sparse components. Experimental results demonstrate that the separation quality of the proposed method outperforms existing BSS methods, including sparse-component-separation based methods, independent component analysis based methods, and soft-threshold based methods.
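
    A minimal sketch of a dual similarity coefficient combining normalized cross-correlation (waveform similarity) with a Jaccard coefficient over active supports; the product combination rule and the activity threshold below are assumptions, not the paper's exact formulation:

      import numpy as np

      def dual_similarity(a, b, active_thresh=1e-3):
          a, b = np.asarray(a, float), np.asarray(b, float)
          a0, b0 = a - a.mean(), b - b.mean()
          ncc = abs(a0 @ b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12)
          sa, sb = np.abs(a) > active_thresh, np.abs(b) > active_thresh
          jaccard = (sa & sb).sum() / max((sa | sb).sum(), 1)
          return ncc * jaccard

      t = np.linspace(0, 1, 1000)
      s1 = np.sin(2 * np.pi * 5 * t) * (t < 0.5)        # active in first half only
      s2 = 0.8 * np.sin(2 * np.pi * 5 * t) * (t < 0.5)  # same shape, same support
      print(dual_similarity(s1, s2))                    # close to 1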

  • Color Image Enhancement Method with Variable Emphasis Degree

    Hiromu ENDO  Akira TAGUCHI  

     
    PAPER-Image

      Vol:
    E101-A No:4
      Page(s):
    713-722

    In this paper, we propose a new enhancement method for color images. In color image processing, hue preservation is required. The proposed method operates in the HSI color space, whose gamut is the same as that of the RGB color space. Differential gray-level histogram equalization (DHE) is effective for gray-scale images; the proposed method is an extended version of DHE for color images in which the enhancement degree is made variable by introducing two parameters. Since the processing is applied not only to intensity but also to saturation, the contrast and colorfulness of the output image can be varied. How to determine the two parameters is an important issue, so we give a guideline for deciding them; with it, users can easily obtain their own enhanced images.
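
    A rough sketch of the variable-emphasis idea for the intensity channel (the differential-histogram definition and the blending parameterization here are assumptions; the paper also applies the idea to saturation):

      import numpy as np

      def dhe_mapping(intensity, alpha=1.0, levels=256):
          img = intensity.astype(int)
          gy, gx = np.gradient(intensity.astype(float))
          grad = np.hypot(gx, gy)                 # gradient magnitude per pixel
          # differential histogram: total gradient mass at each gray level
          dh = np.bincount(img.ravel(), weights=grad.ravel(), minlength=levels)
          cdf = np.cumsum(dh)
          cdf /= cdf[-1]
          full = (levels - 1) * cdf               # full-DHE transfer curve
          ident = np.arange(levels, dtype=float)  # identity (no enhancement)
          curve = (1 - alpha) * ident + alpha * full   # variable emphasis degree
          return curve[img]

      img = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
      out = dhe_mapping(img, alpha=0.5)           # halfway between off and full DHE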

  • Reliability Analysis of Scaled NAND Flash Memory Based SSDs with Real Workload Characteristics by Using Real Usage-Based Precise Reliability Test

    Yusuke YAMAGA  Chihiro MATSUI  Yukiya SAKAKI  Ken TAKEUCHI  

     
    PAPER

      Vol:
    E101-C No:4
      Page(s):
    243-252

    In order to reduce memory cell errors in real usage of NAND flash-based SSDs, a real-usage-based precise reliability test for the NAND flash of SSDs is proposed. The reliability of NAND flash memories in SSDs degrades seriously as memory cells are scaled down, and conventional simple reliability tests of read-disturb and data-retention cannot reproduce the real-life VTH shift and memory cell errors. To solve this problem, the proposed reliability test precisely reproduces real memory cell failures by emulating the complicated read, write, and data-retention behavior with an SSD emulator. In this paper, the real-life VTH shift and memory cell errors of two generations of NAND flash memory under differently characterized real workloads are provided. Using the proposed test method, a 1.6-times BER difference is observed in 1Ynm MLC NAND flash when a write-cold and read-hot workload (hm_1) is compared with a write-hot and read-hot workload (prxy_1). In addition, with NAND flash memory scaling from the 1Xnm to the 1Ynm generation, the discrepancy between the error numbers of the conventional reliability test and the actual reliability measured by the proposed test increases by 6.3 times. Finally, guidelines for read reference voltage shifts and ECC strength are given to achieve high memory cell reliability for various workloads.

  • Approximate-DCT-Derived Measurement Matrices with Row-Operation-Based Measurement Compression and its VLSI Architecture for Compressed Sensing

    Jianbin ZHOU  Dajiang ZHOU  Takeshi YOSHIMURA  Satoshi GOTO  

     
    PAPER

      Vol:
    E101-C No:4
      Page(s):
    263-272

    Compressed-sensing based CMOS image sensors (CS-CIS) are a new generation of CMOS image sensors that significantly reduce power consumption. For CS-CIS, the image quality and the output data volume are two important concerns. In this paper, we first propose an algorithm to generate a series of deterministic ternary matrices that improve image quality, reduce data volume, and are compatible with CS-CIS. The proposed matrices are derived from the approximate DCT and trimmed in 2D-zigzag order, thus preserving the energy compaction property of the DCT. Moreover, we propose matrix row operations adapted to the proposed matrix to further compress the measurements without any loss of image quality. Finally, a low-cost VLSI architecture for measurement compression with the proposed matrix row operations is implemented. Experimental results show that the proposed matrix significantly improves coding efficiency, with a BD-PSNR increase of 4.2 dB over the random binary matrix used in state-of-the-art CS-CIS. The proposed matrix row operations for measurement compression further increase coding efficiency by 0.24 dB BD-PSNR (a 4.8% BD-rate reduction). The VLSI architecture occupies only 4.3 K gates and consumes 0.3 mW.
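
    A minimal sketch of deriving a deterministic ternary matrix from the DCT and trimming it in 2D-zigzag order (the ternarization threshold and this particular construction are assumptions, not the paper's parameters):

      import numpy as np

      def zigzag_order(n):
          # (u, v) pairs ordered by anti-diagonal, lowest frequencies first
          return sorted(((u, v) for u in range(n) for v in range(n)),
                        key=lambda p: (p[0] + p[1],
                                       p[1] if (p[0] + p[1]) % 2 else p[0]))

      def ternary_dct_matrix(n, m, thresh=0.35):
          k = np.arange(n)
          # DCT-II basis: dct1d[u, k] = cos(pi * (2k+1) * u / (2n))
          dct1d = np.cos(np.pi * np.outer(2 * k + 1, k).T / (2 * n))
          rows = []
          for u, v in zigzag_order(n)[:m]:        # keep m lowest-frequency bases
              basis = np.outer(dct1d[u], dct1d[v]).ravel()
              rows.append(np.sign(basis) * (np.abs(basis) > thresh))  # {-1,0,1}
          return np.array(rows)

      Phi = ternary_dct_matrix(8, 20)             # 20 measurements of an 8x8 block
      print(Phi.shape, np.unique(Phi))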

  • Development of Idea Generation Consistent Support System That Includes Suggestive Functions for Preparing Concreteness of Idea Labels and Island Names

    Jun MUNEMORI  Hiroki SAKAMOTO  Junko ITOU  

     
    PAPER-Creativity Support Systems and Decision Support Systems

  Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    838-846

    In recent years, networking has spread substantially owing to rapid developments in Information & Communication Technology (ICT), and it has become easy for people to share highly contextual data and information, including ideas. On the other hand, there exists information that cannot be expressed in words (tacit knowledge) and useful knowledge or know-how that is not shared well within an organization. Idea generation methods enable tacit knowledge to be expressed in words as explicit knowledge, which organizations can then utilize as know-how. We propose an idea generation consistent support system, GUNGEN-Web II. The system has suggestion functions for concrete idea labels and concrete island names, which convey an idea and an island name to other participants more precisely, as well as an illustration support function and a document support function. In this study, we aimed to improve the quality of the sentences obtained using the KJ method, and we compared the results of the proposed system with those of the conventional GUNGEN-Web in experiments. The evaluation of the sentences produced with GUNGEN-Web II differed significantly from that of the sentences obtained using the conventional GUNGEN-Web.

  • Web-Based and Quality-Oriented Remote Collaboration Platform Tolerant to Severe Network Constraints

    Yasuhiro MOCHIDA  Daisuke SHIRAI  Tatsuya FUJII  

     
    PAPER-Technologies for Knowledge Support Platform

  Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    944-955

    Existing remote collaboration systems are not suitable for a collaboration style in which distributed users touch work tools at the same time, especially in demanding use cases or in severe network situations. To cover a wider range of use cases, we propose a novel concept for a remote collaboration platform that enables users to share currently used work tools through a high-quality A/V transmission module while maintaining the advantages of web-based systems. It also provides functions to deal with long transmission delay (relay servers), packet transmission instability (visual feedback of audio delivery), and limited bandwidth (dynamic allocation of video bitrate). We implemented the platform and conducted evaluation tests. The results show the feasibility of the proposed concept and its tolerance to network constraints, indicating that the proposed platform can support unprecedented collaboration systems.

  • A Survey of Thai Knowledge Extraction for the Semantic Web Research and Tools

    Ponrudee NETISOPAKUL  Gerhard WOHLGENANNT  

     
    SURVEY PAPER

  Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    986-1002

    As the manual creation of domain models and of linked data is very costly, the extraction of knowledge from structured and unstructured data has been one of the central research areas in the Semantic Web field over the last two decades. Here, we look specifically at the extraction of formalized knowledge from natural language text, the most abundant source of human knowledge available. Many tools are on hand for information and knowledge extraction from English, but for written Thai the situation is different. The goal of this work is to assess the state of the art of research on formal knowledge extraction from Thai text, and then to give suggestions and practical research ideas on how to improve it. To address this goal, we first distinguish nine knowledge extraction tasks for the Semantic Web defined in the literature on knowledge extraction from English text, for example taxonomy extraction, relation extraction, and named entity recognition. For each of the nine tasks, we analyze the publications and tools available for Thai text in the form of a comprehensive literature survey. In addition to our assessment, we measure the self-assessment of the Thai research community with the help of a questionnaire-based survey on each of the tasks. Furthermore, the structure and size of the Thai community are analyzed using complex literature database queries. Combining all the collected information, we identify research gaps in knowledge extraction from Thai. An extensive list of practical research ideas is presented, focusing on concrete suggestions for every knowledge extraction task that can be implemented and evaluated with reasonable effort. Besides the task-specific hints for improving the state of the art, we also include general recommendations on how to raise the efficiency of the respective research community.

  • Performance Evaluation of Pipeline-Based Processing for the Caffe Deep Learning Framework

    Ayae ICHINOSE  Atsuko TAKEFUSA  Hidemoto NAKADA  Masato OGUCHI  

     
    PAPER

  Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    1042-1052

    Many life-log analysis applications, which transfer data from cameras and sensors to a Cloud and analyze it there, have been developed as the use of various sensors and Cloud computing technologies has spread. However, difficulties arise because of the limited network bandwidth between such sensors and the Cloud, and sending raw sensor data to a Cloud may introduce privacy issues. We therefore propose a pipelined method for distributed deep learning processing between sensors and the Cloud that reduces the amount of data sent to the Cloud and protects the privacy of users. In this study, we measured the processing times and evaluated the performance of our method using two different datasets. In addition, we performed experiments using three types of client-side machines with different performance characteristics and compared the processing times. The experimental results show that the accuracy of deep learning with coarse-grained data is comparable to that achieved with the default parameter settings, and that the proposed distributed processing method has performance advantages when the network bandwidth between realistic sensors and a Cloud environment is insufficient. We also confirmed that the process that most affects the overall processing time varies with the machine performance on the client side, and that the most efficient distribution method differs accordingly.
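
    A minimal PyTorch sketch of the split-processing idea (the paper pipelines Caffe; the network, split point, and sizes below are illustrative): the client runs the first layers, and only the smaller intermediate features cross the network to the Cloud:

      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),   # client
          nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # cloud
          nn.Flatten(), nn.Linear(32 * 28 * 28, 10),                   # cloud
      )
      client, cloud = model[:3], model[3:]

      x = torch.randn(1, 3, 224, 224)      # raw sensor frame stays on the client
      features = client(x)                 # only this crosses the network
      logits = cloud(features)
      print(x.numel(), "->", features.numel())   # 150528 -> 25088 values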

  • Frame-Based Representation for Event Detection on Twitter

    Yanxia QIN  Yue ZHANG  Min ZHANG  Dequan ZHENG  

     
    PAPER-Natural Language Processing

  Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    1180-1188

    Large-scale first-hand tweets motivate automatic event detection on Twitter. Previous approaches model events by clustering tweets, words, or segments. Event clusters represented by tweets are easier to understand than those represented by words or segments; however, tweets are sparser than words and segments, which makes clustering less effective. This article proposes representing events with triple structures called frames, which are as efficient as, yet easier to understand than, words and segments. Frames are extracted from the shallow syntactic information of tweets with an unsupervised open information extraction method introduced for domain-independent relation extraction in a single pass over web-scale data. Bursty frame elements are then extracted, acting as a form of feature selection, by filtering frame elements with bursty frequency patterns via a probabilistic model. After clustering and ranking, high-quality events are yielded and reported by linking frame elements back to frames. Experimental results show that frame-based event detection improves precision over a state-of-the-art segment-based event detection baseline, and the superior readability of frame-based events compared with segment-based events is demonstrated in some example outputs.
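
    A toy sketch of the burstiness filter alone: a frame element is flagged as bursty in a time window if its observed count is improbably high under a simple binomial background model (the background rate, threshold, and the binomial choice are assumptions, not the paper's model):

      from math import comb

      def burst_tail_prob(count, window_total, background_rate):
          """P(X >= count) for X ~ Binomial(window_total, background_rate)."""
          return sum(comb(window_total, k) * background_rate ** k
                     * (1 - background_rate) ** (window_total - k)
                     for k in range(count, window_total + 1))

      # a frame element seen 40 times in a 1000-tweet window, against a 0.5%
      # background rate -> vanishing tail probability, so it is bursty
      p = burst_tail_prob(40, 1000, 0.005)
      print(f"tail probability: {p:.2e}", "-> bursty" if p < 1e-3 else "")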

  • A Joint Convolutional Bidirectional LSTM Framework for Facial Expression Recognition

    Jingwei YAN  Wenming ZHENG  Zhen CUI  Peng SONG  

     
    LETTER-Biocybernetics, Neurocomputing

  Publicized:
    2018/01/11
      Vol:
    E101-D No:4
      Page(s):
    1217-1220

    Facial expressions are generated by the actions of facial muscles located in different facial regions, and the spatial dependencies among these regions are worth exploring because they can improve the performance of facial expression recognition. In this letter, we propose a joint convolutional bidirectional long short-term memory (JCBLSTM) framework to jointly model discriminative facial textures and the spatial relations between different regions. We treat each row or column of the feature maps output by a CNN as an individual ordered sequence and employ an LSTM to model the spatial dependencies within it. Moreover, a shortcut connection for the convolutional feature maps is introduced for joint feature representation. We conduct experiments on two databases to evaluate the proposed JCBLSTM method. The experimental results demonstrate that JCBLSTM achieves state-of-the-art performance on Multi-PIE and a very competitive result on FER-2013.
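
    A minimal PyTorch sketch of the core mechanism (shapes, hidden size, and the pooling at the end are illustrative only): each row of the CNN feature maps is treated as an ordered sequence of channel vectors and fed to a bidirectional LSTM:

      import torch
      import torch.nn as nn

      B, C, H, W = 4, 64, 14, 14
      feat = torch.randn(B, C, H, W)               # CNN feature maps

      lstm = nn.LSTM(input_size=C, hidden_size=32,
                     batch_first=True, bidirectional=True)

      rows = feat.permute(0, 2, 3, 1).reshape(B * H, W, C)  # one sequence per row
      out, _ = lstm(rows)                                   # (B*H, W, 2*32)
      pooled = out.reshape(B, H, W, 64).mean(dim=(1, 2))    # joint representation
      print(pooled.shape)                                   # torch.Size([4, 64])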

  • Filter Level Pruning Based on Similar Feature Extraction for Convolutional Neural Networks

    Lianqiang LI  Yuhui XU  Jie ZHU  

     
    LETTER-Artificial Intelligence, Data Mining

  Publicized:
    2018/01/18
      Vol:
    E101-D No:4
      Page(s):
    1203-1206

    This paper introduces a filter-level pruning method based on similar feature extraction for compressing and accelerating convolutional neural networks using the k-means++ algorithm. In contrast to other pruning methods, the proposed method analyzes the similarities in the features recognized by filters, rather than evaluating the importance of filters, to prune the redundant ones; this strategy is more reasonable and effective. Furthermore, our method does not produce an unstructured network, so it needs no extra sparse representation and can be efficiently supported by any off-the-shelf deep learning library. Experimental results show that our filter pruning method reduces the number of parameters and the computational cost of LeNet-5 by a factor of 17.9× with only 0.3% accuracy loss.
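
    A minimal sketch of the pruning step (clustering raw filter weights here is a simplification; the paper clusters filters by the similarity of the features they extract): k-means++ groups a layer's filters, and one representative per cluster is kept:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      filters = rng.normal(size=(64, 3 * 3 * 3))   # 64 conv filters, flattened

      k = 16                                       # number of filters to keep
      km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
      labels = km.fit_predict(filters)

      keep = []
      for c in range(k):                           # nearest filter to each centroid
          members = np.where(labels == c)[0]
          dists = np.linalg.norm(filters[members] - km.cluster_centers_[c], axis=1)
          keep.append(int(members[np.argmin(dists)]))
      print(sorted(keep))                          # indices of retained filters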
