
Keyword Search Result

[Keyword] CTI (8214 hits)

Showing 1261-1280 of 8214 hits

  • A Survey of Thai Knowledge Extraction for the Semantic Web - Research and Tools

    Ponrudee NETISOPAKUL  Gerhard WOHLGENANNT  

     
    SURVEY PAPER

    Publicized: 2018/01/18  Vol: E101-D No:4  Page(s): 986-1002

    As the manual creation of domain models and of linked data is very costly, the extraction of knowledge from structured and unstructured data has been one of the central research areas in the Semantic Web field over the last two decades. Here, we look specifically at the extraction of formalized knowledge from natural language text, which is the most abundant source of human knowledge available. While many tools are on hand for information and knowledge extraction from English, the situation is different for written Thai. The goal of this work is to assess the state of the art of research on formal knowledge extraction specifically from Thai language text, and then to give suggestions and practical research ideas on how to improve it. To address this goal, we first distinguish nine knowledge extraction tasks for the Semantic Web defined in the literature on knowledge extraction from English text, for example taxonomy extraction, relation extraction, and named entity recognition. For each of the nine tasks, we analyze the publications and tools available for Thai text in the form of a comprehensive literature survey. In addition to our assessment, we measure the self-assessment of the Thai research community with the help of a questionnaire-based survey on each of the tasks. Furthermore, the structure and size of the Thai community are analyzed using complex literature database queries. Combining all the collected information, we finally identify research gaps in knowledge extraction from Thai. An extensive list of practical research ideas is presented, focusing on concrete suggestions for every knowledge extraction task - suggestions which can be implemented and evaluated with reasonable effort. Besides the task-specific hints for improving the state of the art, we also include general recommendations on how to raise the efficiency of the respective research community.

  • A Joint Convolutional Bidirectional LSTM Framework for Facial Expression Recognition

    Jingwei YAN  Wenming ZHENG  Zhen CUI  Peng SONG  

     
    LETTER-Biocybernetics, Neurocomputing

    Publicized: 2018/01/11  Vol: E101-D No:4  Page(s): 1217-1220

    Facial expressions are generated by the actions of the facial muscles located at different facial regions. The spatial dependencies among different facial regions are worth exploring and can improve the performance of facial expression recognition. In this letter we propose a joint convolutional bidirectional long short-term memory (JCBLSTM) framework to jointly model the discriminative facial textures and the spatial relations between different regions. We treat each row or column of the feature maps output by the CNN as an individual ordered sequence and employ an LSTM to model the spatial dependencies within it. Moreover, a shortcut connection for the convolutional feature maps is introduced for joint feature representation. We conduct experiments on two databases to evaluate the proposed JCBLSTM method. The experimental results demonstrate that the JCBLSTM method achieves state-of-the-art performance on Multi-PIE and a very competitive result on FER-2013.
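
    To make the row/column-as-sequence idea concrete, here is a minimal PyTorch sketch that feeds each row of a CNN feature map to a bidirectional LSTM; the single-conv backbone, all layer sizes, and the pooling of row features are illustrative assumptions, not the paper's configuration.

    # Minimal sketch: treat each row of a CNN feature map as an ordered
    # sequence and model its spatial dependencies with a BiLSTM.
    import torch
    import torch.nn as nn

    class RowBiLSTM(nn.Module):
        def __init__(self, channels=64, hidden=128, num_classes=7):
            super().__init__()
            self.cnn = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1),
                                     nn.ReLU(), nn.MaxPool2d(2))
            self.lstm = nn.LSTM(channels, hidden, batch_first=True,
                                bidirectional=True)
            self.fc = nn.Linear(2 * hidden, num_classes)

        def forward(self, x):                    # x: (B, 1, H, W)
            f = self.cnn(x)                      # (B, C, H', W')
            b, c, h, w = f.shape
            seq = f.permute(0, 2, 3, 1).reshape(b * h, w, c)  # one seq per row
            out, _ = self.lstm(seq)              # (B*H', W', 2*hidden)
            feat = out[:, -1, :].reshape(b, h, -1).mean(dim=1)
            return self.fc(feat)

    logits = RowBiLSTM()(torch.randn(4, 1, 48, 48))  # -> shape (4, 7)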

  • Filter Level Pruning Based on Similar Feature Extraction for Convolutional Neural Networks

    Lianqiang LI  Yuhui XU  Jie ZHU  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2018/01/18  Vol: E101-D No:4  Page(s): 1203-1206

    This paper introduces a filter-level pruning method based on similar feature extraction for compressing and accelerating convolutional neural networks using the k-means++ algorithm. In contrast to other pruning methods, the proposed method analyzes the similarities in recognized features among filters, rather than evaluating the importance of filters, to prune the redundant ones. This strategy is more reasonable and effective. Furthermore, our method does not produce an unstructured network. As a result, it needs no extra sparse representation and can be efficiently supported by any off-the-shelf deep learning library. Experimental results show that our filter pruning method reduces the number of parameters and the computational cost of LeNet-5 by a factor of 17.9× with only 0.3% accuracy loss.
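
    As a rough illustration of similarity-based filter pruning, the sketch below clusters the flattened filters of one convolutional layer with k-means++ and keeps the filter nearest each cluster center; the keep-ratio, the flattened-weight similarity measure, and the representative-selection rule are assumptions for illustration, not the paper's procedure.

    # Sketch: cluster conv filters with k-means++ and keep one
    # representative per cluster; the rest are treated as redundant.
    import numpy as np
    from sklearn.cluster import KMeans

    def select_filters(weights, keep_ratio=0.5, seed=0):
        """weights: (num_filters, in_ch, kh, kw) conv weight array."""
        n = weights.shape[0]
        flat = weights.reshape(n, -1)
        k = max(1, int(n * keep_ratio))
        km = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=seed).fit(flat)
        keep = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            d = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
            keep.append(int(members[np.argmin(d)]))  # closest to the center
        return sorted(keep)

    w = np.random.randn(32, 16, 3, 3)
    print(select_filters(w, keep_ratio=0.25))        # indices of 8 kept filters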

  • Cyber-Physical Hybrid Environment Using a Large-Scale Discussion System Enhances Audiences' Participation and Satisfaction in the Panel Discussion

    Satoshi KAWASE  Takayuki ITO  Takanobu OTSUKA  Akihisa SENGOKU  Shun SHIRAMATSU  Tokuro MATSUO  Tetsuya OISHI  Rieko FUJITA  Naoki FUKUTA  Katsuhide FUJITA  

     
    PAPER-Creativity Support Systems and Decision Support Systems

    Publicized: 2018/01/19  Vol: E101-D No:4  Page(s): 847-855

    Performance based on multi-party discussion has been reported to be superior to that of individuals. However, it is impossible for all participants to express opinions simultaneously, due to the time and space limitations of a large-scale discussion. In particular, only a few representative discussants and audience members can speak in conventional unidirectional discussions (e.g., panel discussions), even though many participants gather for the discussion. To solve these problems, in this study we proposed a cyber-physical discussion using "COLLAGREE," which we developed for building consensus in large-scale online discussions. COLLAGREE is equipped with functions such as facilitation, a point ranking system, and display of the discussion in a tree structure. We focused on the relationship between satisfaction with the discussion and participants' desire to express opinions. We conducted the experiment in the panel discussion of an actual international conference. Participants who were audience members on the floor used COLLAGREE during the panel discussion and responded to questionnaires after the experiment. The main findings are as follows: (1) participation in the online discussion was associated with participant satisfaction; (2) participants who desired to actively express opinions joined the cyber-space discussion; and (3) the satisfaction of participants who expressed opinions in the cyber-space discussion was higher than that of participants who expressed opinions in the real-space discussion and of those who did not express opinions in either the cyber- or real-space discussion. Overall, active behavior in the cyber-space discussion was associated with participants' satisfaction with the entire discussion, suggesting that cyberspace provides useful alternative opportunities to express opinions for audience members who would otherwise listen passively to conventional unidirectional discussions. In addition, a complementary relationship exists between participation in the cyber-space and real-space discussions. These findings can serve to create a user-friendly discussion environment.

  • An 11.3-µA Physical Activity Monitoring System Using Acceleration and Heart Rate

    Motofumi NAKANISHI  Shintaro IZUMI  Mio TSUKAHARA  Hiroshi KAWAGUCHI  Hiromitsu KIMURA  Kyoji MARUMOTO  Takaaki FUCHIKAMI  Yoshikazu FUJIMORI  Masahiko YOSHIMOTO  

     
    PAPER

    Vol: E101-C No:4  Page(s): 233-242

    This paper presents an algorithm for physical activity (PA) classification and metabolic equivalents (METs) monitoring, and its System-on-a-Chip (SoC) implementation, realizing both power reduction and high estimation accuracy. Long-term PA monitoring is an effective means of preventing lifestyle-related diseases. Low power consumption and long battery life are key features supporting the wider dissemination of monitoring systems. As described herein, an adaptive sampling method is implemented for longer battery life by minimizing the active rate of acceleration sensing without decreasing accuracy. Furthermore, advanced PA classification using both the heart rate and acceleration is introduced. The proposed algorithms are evaluated by experimentation with eight subjects in actual conditions. Evaluation results show that the root mean square error with respect to the result of processing with a fixed sampling rate is less than 0.22 METs, and the mean absolute error is less than 0.06 METs. Furthermore, to minimize the system-level power dissipation, a dedicated SoC is implemented using a 130-nm CMOS process with FeRAM. A non-volatile CPU using non-volatile memory and flip-flops is used to reduce the standby power. The proposed algorithm, implemented using dedicated hardware, reduces the active rate of the CPU and the accelerometer. The current consumption of the SoC is less than 3 µA, and the evaluation system using the test chip achieves a 74% system-level power reduction. The total current consumption including that of the accelerometer is 11.3 µA on average.
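
    The adaptive-sampling idea can be pictured with a toy Python loop that lowers the accelerometer sampling rate while the motion signal is stable and raises it again when activity changes; the variance criterion, thresholds, and candidate rates are illustrative assumptions, not the algorithm implemented on the SoC.

    # Toy sketch: step the sampling rate down when the acceleration
    # window is stable (to save power) and up when activity rises.
    import numpy as np

    RATES = (1, 5, 25)                       # candidate rates in Hz (assumed)

    def adapt_rate(window, rate, lo=0.01, hi=0.1):
        v = np.var(window)                   # activity proxy (assumed)
        i = RATES.index(rate)
        if v > hi and i < len(RATES) - 1:
            return RATES[i + 1]              # activity rising: sample faster
        if v < lo and i > 0:
            return RATES[i - 1]              # stable: sample slower
        return rate

    rate = 25
    for _ in range(3):
        window = np.random.randn(32) * 0.05  # simulated accel magnitudes
        rate = adapt_rate(window, rate)
    print("current rate:", rate, "Hz")       # settles at the lowest rate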

  • A Mixture Model for Image Boundary Detection Fusion

    Yinghui ZHANG  Hongjun WANG  Hengxue ZHOU  Ping DENG  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2018/01/18  Vol: E101-D No:4  Page(s): 1159-1166

    Image boundary detection or image segmentation is an important step in image analysis. However, choosing appropriate parameters for boundary detection algorithms is necessary to achieve good results. Image boundary detection fusion with unsupervised parameters can output a final consensus boundary, which is generally better than that of unsupervised or supervised image boundary detection algorithms alone. In this study, we theoretically examine why image boundary detection fusion can work well, and we propose a mixture model for image boundary detection fusion (MMIBDF) to achieve a good consensus segmentation in an unsupervised manner. All the segmentation algorithms are treated as new features, and the segmentation results obtained by the algorithms are the values of the new features. MMIBDF is designed to sample the boundary according to a discrete distribution. We present an inference method for MMIBDF and describe the corresponding algorithm in detail. Extensive empirical results demonstrate that MMIBDF significantly outperforms other image boundary detection fusion algorithms and the base image boundary detection algorithms according to most performance indices.

  • Triangular Active Charge Injection Method for Resonant Power Supply Noise Reduction

    Masahiro KANO  Toru NAKURA  Tetsuya IIZUKA  Kunihiro ASADA  

     
    PAPER-Electronic Circuits

    Vol: E101-C No:4  Page(s): 292-298

    This paper proposes a triangular active charge injection method that reduces resonant power supply noise by injecting an adequate amount of charge into the supply line of an LSI in response to the current consumption of the core circuit. The proposed circuit is composed of three key components: a voltage drop detector, an injection controller circuit, and a canceling capacitor circuit. In addition to a theoretical analysis of the proposed method, measurement results indicate that the proposed method with the active capacitor realizes about 14% noise reduction compared with the original noise amplitude. The proposed circuit consumes 25.2 mW in the steady state and occupies 0.182 mm².

  • Low-Latency Communication in LTE and WiFi Using Spatial Diversity and Encoding Redundancy

    Yu YU  Stepan KUCERA  Yuto LIM  Yasuo TAN  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

    Publicized: 2017/09/29  Vol: E101-B No:4  Page(s): 1116-1127

    In mobile and wireless networks, controlling data delivery latency is one of the open problems, due to the stochastic nature of wireless channels, which are inherently unreliable. This paper explores how current best-effort, throughput-oriented wireless services might evolve into latency-sensitive enablers of new mobile applications such as remote three-dimensional (3D) graphical rendering for interactive virtual/augmented-reality overlays. Assuming that the signal propagation delay and achievable throughput meet the standard latency requirements of the user application, we examine the idea of trading excess/federated bandwidth for the elimination of the non-negligible delay of data re-ordering caused by temporary transmission failures and buffer overflows. The general system design is based on (i) spatially diverse data delivery over multiple paths with uncorrelated outage likelihoods; and (ii) forward packet-loss protection (FPP), which creates encoding redundancy for the proactive recovery of intolerably delayed data without end-to-end retransmissions. Because no such system or environment exists yet to verify the importance of spatial diversity and encoding redundancy, the analysis and evaluation are based on traces of real-life traffic measured in live carrier-grade long term evolution (LTE) networks and campus WiFi networks. The analysis and evaluation reveal the seriousness of the latency problem and show that the proposed FPP with spatial diversity and encoding redundancy can minimize the re-ordering delay. Moreover, a novel FPP effectiveness coefficient is proposed to explicitly represent the effectiveness of the FPP implementation.
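
    The encoding-redundancy half of the design can be illustrated with the simplest possible forward protection code, a single XOR parity packet per block, which lets a receiver rebuild one lost packet without an end-to-end retransmission. The paper's actual FPP coding scheme is not specified here, so the single-parity code is purely an assumption for illustration.

    # Sketch: XOR-parity forward packet-loss protection over one block.
    from functools import reduce

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(block):                        # block: equal-size data packets
        return block + [reduce(xor, block)]   # append one parity packet

    packets = [("pkt-%d-data" % i).encode() for i in range(4)]
    coded = encode(packets)                   # 4 data packets + 1 parity
    survivors = coded[:2] + coded[3:]         # packet 2 lost in transit
    assert reduce(xor, survivors) == packets[2]   # rebuilt, no retransmission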

  • Fully Automatic Optic Disc Boundary Extraction Based on Active Contour Model with Multiple Energies

    Yuan GAO  Chengdong WU  Xiaosheng YU  Wei ZHOU  Jiahui WU  

     
    LETTER-Vision

    Vol: E101-A No:3  Page(s): 658-661

    Efficient optic disc (OD) segmentation plays a significant role in retinal image analysis and retinal disease screening. In this paper, we present a fully automatic segmentation approach called double boundary extraction for OD segmentation. The proposed approach consists of two stages: first, we utilize an unsupervised learning technique and a statistical method based on OD boundary information to obtain the initial contour adaptively; second, the final optic disc boundary is extracted using the proposed LSO model. The performance of the proposed method is tested on the public DIARETDB1 database, and the experimental results demonstrate the effectiveness and advantages of the proposed method.

  • Energy-Efficient DRAM Selective Refresh Technique with Page Residence in a Memory Hierarchy of Hardware-Managed TLB

    Miseon HAN  Yeoul NA  Dongha JUNG  Hokyoon LEE  Seon WOOK KIM  Youngsun HAN  

     
    PAPER-Integrated Electronics

    Vol: E101-C No:3  Page(s): 170-182

    A memory controller refreshes DRAM rows periodically to prevent DRAM cells from losing data over time. Refreshes consume a large amount of energy, and the problem will become worse with future, larger DRAM capacities. Previously proposed selective-refresh techniques are either conservative in exploiting this opportunity or expensive in terms of implementation overhead. In this paper, we propose a novel DRAM selective refresh technique that uses page residence in a memory hierarchy with a hardware-managed TLB. Our technique maximizes the opportunity to optimize refreshing by activating/deactivating refreshes for DRAM pages when their PTEs are inserted into, or evicted from, the TLB or data caches, while the implementation cost is minimized by slightly extending the existing infrastructure. Our experiments show that the proposed technique can reduce DRAM refresh power by 43.6% on average and EDP by 3.5%, with a small hardware overhead.
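
    A toy software model of the page-residence idea looks like the following: refresh is enabled for a DRAM page while its PTE is cached and disabled when the PTE is evicted. The class and method names are hypothetical, and the real design also tracks residence in the data caches, which this sketch omits.

    # Toy model: refresh only pages whose PTEs currently sit in the TLB.
    class SelectiveRefresh:
        def __init__(self):
            self.active = set()              # pages with refresh enabled

        def on_tlb_insert(self, page):       # PTE cached: data may be live
            self.active.add(page)

        def on_tlb_evict(self, page):        # PTE evicted: stop refreshing
            self.active.discard(page)

        def refresh_tick(self, all_pages):
            return [p for p in all_pages if p in self.active]

    ctrl = SelectiveRefresh()
    ctrl.on_tlb_insert(0x1A); ctrl.on_tlb_insert(0x2B)
    ctrl.on_tlb_evict(0x1A)
    print(ctrl.refresh_tick(range(0x40)))    # only page 0x2B is refreshed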

  • Performance Comparison of Subjective Quality Assessment Methods for 4K Video

    Kimiko KAWASHIMA  Kazuhisa YAMAGISHI  Takanori HAYASHI  

     
    PAPER-Multimedia Systems for Communications

    Publicized: 2017/08/29  Vol: E101-B No:3  Page(s): 933-945

    Many subjective quality assessment methods have been standardized. Experimenters can select from these methods in accordance with the aim of the planned subjective assessment experiment. It is often argued that the results of subjective quality assessment are affected by range effects, which are caused by the quality distribution of the assessment videos. However, there are no studies on the double stimulus continuous quality-scale (DSCQS) and absolute category rating with hidden reference (ACR-HR) methods that investigate range effects in the high-quality range. Therefore, we conduct experiments using high-quality assessment videos (high-quality experiment) and low-to-high-quality assessment videos (low-to-high-quality experiment) and compare the DSCQS and ACR-HR methods in terms of accuracy, stability, and discrimination ability. Regarding accuracy, we find that the mean opinion scores of the DSCQS and ACR-HR methods were only marginally affected by range effects, and almost all common processed video sequences showed no significant difference between the high-quality and low-to-high-quality experiments. Second, the DSCQS and ACR-HR methods were equally stable in the low-to-high-quality experiment, whereas the DSCQS method was more stable than the ACR-HR method in the high-quality experiment. Finally, the DSCQS method had higher discrimination ability than the ACR-HR method in the low-to-high-quality experiment, whereas both methods had almost the same discrimination ability in the high-quality experiment. We thus determined that the DSCQS method is better at minimizing range effects than the ACR-HR method in the high-quality range.

  • The Estimation of Satellite Attitude Using the Radar Cross Section Sequence and Particle Swarm Optimization

    Jidong QIN  Jiandong ZHU  Huafeng PENG  Tao SUN  Dexiu HU  

     
    LETTER-Digital Signal Processing

    Vol: E101-A No:3  Page(s): 595-599

    Existing methods for estimating satellite attitude from a radar cross section (RCS) sequence suffer from problems such as low precision and high computational complexity. To overcome these problems, a novel model of satellite attitude estimation based on the local maximum points of the RCS sequence is established; it reduces the computational time by downscaling the dimension of the feature vector. Moreover, a particle swarm optimization method is adopted to improve the efficiency of the computation. Numerical simulations show that the proposed method is robust and efficient.
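
    For readers unfamiliar with the optimizer, a generic particle swarm optimization loop over candidate attitude angles looks like the sketch below; the quadratic fitness is a stand-in, whereas in the paper it would score the mismatch between observed and predicted RCS local-maximum features.

    # Generic PSO sketch: each particle is a candidate attitude (deg).
    import numpy as np

    def pso(fitness, dim=2, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-180, 180, (n, dim))   # particle positions
        v = np.zeros((n, dim))                 # particle velocities
        pbest = x.copy()
        pval = np.array([fitness(p) for p in x])
        g = pbest[pval.argmin()].copy()        # global best
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            f = np.array([fitness(p) for p in x])
            better = f < pval
            pbest[better], pval[better] = x[better], f[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    target = np.array([30.0, -45.0])           # toy "true" attitude
    best, err = pso(lambda a: float(np.sum((a - target) ** 2)))
    print(best, err)                           # converges near the target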

  • Classification of Utterances Based on Multiple BLEU Scores for Translation-Game-Type CALL Systems

    Reiko KUWA  Tsuneo KATO  Seiichi YAMAMOTO  

     
    PAPER-Speech and Hearing

    Publicized: 2017/12/04  Vol: E101-D No:3  Page(s): 750-757

    This paper proposes a classification method for second-language-learner utterances in interactive computer-assisted language learning systems. The classification method uses three types of bilingual evaluation understudy (BLEU) scores as features for a classifier. The three BLEU scores are calculated in accordance with three subsets of a learner corpus, divided according to the quality of the utterances. To overcome the data-sparseness problem, the method computes the BLEU scores over a mixture of word and part-of-speech (POS)-tag sequences, converted from word sequences based on a POS-replacement rule in which words are replaced with POS tags in n-grams. Experiments on classifying English utterances by Japanese learners demonstrated that the proposed method achieved a classification accuracy of 78.2%, which was 12.3 points higher than a baseline using a single BLEU score.
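
    A minimal sketch of the feature extraction step, assuming NLTK for BLEU: the learner utterance is scored against three corpus subsets split by quality, and the three scores become the classifier's input. The subset contents below are made-up stand-ins.

    # Sketch: three BLEU scores (one per quality subset) as features.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    subsets = {                                   # hypothetical corpus split
        "good": [["i", "would", "like", "a", "window", "seat"]],
        "fair": [["i", "like", "window", "seat"]],
        "poor": [["window", "seat", "like"]],
    }
    smooth = SmoothingFunction().method1

    def bleu_features(utterance):
        tokens = utterance.lower().split()
        return [sentence_bleu(refs, tokens, smoothing_function=smooth)
                for refs in subsets.values()]

    feats = bleu_features("I would like a window seat")
    print(feats)      # three values, e.g. input to an SVM or logistic model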

  • Improved MCAS-Based Spectrum Sensing in Cognitive Radio

    Shusuke NARIEDA  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

    Publicized: 2017/08/29  Vol: E101-B No:3  Page(s): 915-923

    This paper presents a computationally efficient cyclostationarity-detection-based spectrum sensing technique for cognitive radio. Several cyclostationarity-detection-based spectrum sensing techniques with low computational complexity have been presented previously, e.g., the peak detector (PD) and maximum cyclic autocorrelation selection (MCAS). PD can be affected by noise uncertainty because it requires a noise floor estimate, whereas MCAS does not. On the other hand, the computational complexity of MCAS is greater than that of PD because MCAS must compute several statistics for signal detection, whereas PD computes only one. In the presented MCAS-based technique, only one statistic must be computed explicitly; the other necessary statistics are obtained from the procedure that computes it. Therefore, the computational complexity of the presented technique is almost the same as that of PD, and it does not require a noise floor estimate for the threshold. Numerical examples validate the effectiveness of the presented technique.
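
    A toy version of cyclic-autocorrelation-based detection is sketched below: the statistic at the known candidate cyclic frequency is compared with the statistics at frequencies where no cyclostationarity is expected. The signal model, lag, and threshold are illustrative assumptions, not the decision rule of the presented technique.

    # Toy cyclostationarity detection via the cyclic autocorrelation.
    import numpy as np

    def caf(x, alpha, lag):
        n = np.arange(len(x) - lag)
        return np.abs(np.mean(x[n] * np.conj(x[n + lag])
                              * np.exp(-2j * np.pi * alpha * n)))

    rng = np.random.default_rng(1)
    sym = rng.choice([-1.0, 1.0], 256).repeat(8)   # BPSK, 8 samples/symbol
    x = sym + 0.5 * (rng.standard_normal(sym.size)
                     + 1j * rng.standard_normal(sym.size))

    peak = caf(x, alpha=1 / 8, lag=4)              # candidate cyclic freq.
    floor = max(caf(x, a, lag=4) for a in (0.137, 0.211, 0.293))
    print("signal present:", peak > 2 * floor)     # toy threshold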

  • On the Properties and Applications of Inconsistent Neighborhood in Neighborhood Rough Set Models

    Shujiao LIAO  Qingxin ZHU  Rui LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2017/12/20  Vol: E101-D No:3  Page(s): 709-718

    Rough set theory is an important branch of data mining and granular computing, within which the neighborhood rough set model was introduced to deal with numerical and hybrid data. In this paper, we propose a new concept called the inconsistent neighborhood, which extracts the inconsistent objects from a traditional neighborhood. First, a series of interesting properties is obtained for inconsistent neighborhoods. In particular, some of these properties yield new ways to compute the quantities used in neighborhood rough set models. Then, a fast forward attribute reduction algorithm is proposed by applying the obtained properties. Experiments undertaken on twelve UCI datasets show that the proposed algorithm obtains the same attribute reduction results as the existing algorithms in the neighborhood rough set domain, while running much faster. This validates that employing inconsistent neighborhoods is advantageous in applications of neighborhood rough sets. The study provides a new insight into neighborhood rough set theory.
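
    The core definition is easy to state in code: the inconsistent neighborhood of an object collects the neighbors within distance δ whose decision labels differ from the object's own. The data below is made up purely to illustrate the concept.

    # Illustration: inconsistent neighborhood of object i under radius delta.
    import numpy as np

    def inconsistent_neighborhood(X, y, i, delta=0.15):
        dist = np.linalg.norm(X - X[i], axis=1)
        nbhd = np.where(dist <= delta)[0]        # delta-neighborhood of i
        return nbhd[y[nbhd] != y[i]]             # neighbors with other labels

    X = np.array([[0.10, 0.20], [0.12, 0.22], [0.11, 0.30], [0.80, 0.90]])
    y = np.array([0, 1, 0, 1])
    print(inconsistent_neighborhood(X, y, 0))    # -> [1]: near i, label differs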

  • Polynomial Time Learnability of Graph Pattern Languages Defined by Cographs

    Takayoshi SHOUDAI  Yuta YOSHIMURA  Yusuke SUZUKI  Tomoyuki UCHIDA  Tetsuhiro MIYAHARA  

     
    PAPER

    Publicized: 2017/12/19  Vol: E101-D No:3  Page(s): 582-592

    A cograph (complement-reducible graph) is a graph which can be generated by disjoint union and complement operations on graphs, starting from a single-vertex graph. Cographs arise in many areas of computer science and have been studied extensively. With the goal of developing an effective data mining method for graph-structured data, in this paper we introduce a graph pattern expression, called a cograph pattern, which is a special type of cograph with structured variables. First, we show that the problem of deciding whether or not a given cograph pattern g matches a given cograph G is NP-complete. Given this result, we consider the polynomial-time learnability of cograph pattern languages defined by cograph patterns whose variables carry mutually distinct labels, called linear cograph patterns. Second, we present a polynomial-time matching algorithm for linear cograph patterns. Next, we give a polynomial-time algorithm for obtaining a minimally generalized linear cograph pattern that explains given positive data. Finally, we show that the class of linear cograph pattern languages is polynomial-time inductively inferable from positive data.
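
    The recursive definition of a cograph translates directly into code: start from single-vertex graphs and close under disjoint union and join (the complement of the union of complements). The sketch below, using networkx, only illustrates the definition; it is unrelated to the paper's matching and inference algorithms.

    # Building a cograph from the definition: single vertices, union, join.
    import networkx as nx

    def single(v):
        g = nx.Graph(); g.add_node(v); return g

    def union(g1, g2):
        return nx.disjoint_union(g1, g2)

    def join(g1, g2):          # complement of the union of complements
        return nx.complement(union(nx.complement(g1), nx.complement(g2)))

    g = join(union(single("a"), single("b")), single("c"))
    print(sorted(g.edges()))   # [(0, 2), (1, 2)]: new vertex joined to both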

  • Action Recognition Using Low-Rank Sparse Representation

    Shilei CHENG  Song GU  Maoquan YE  Mei XIE  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2017/11/24  Vol: E101-D No:3  Page(s): 830-834

    Human action recognition in videos has drawn huge research interest in computer vision. The Bag-of-Words (BoW) model is commonly used to obtain video-level representations; however, the BoW model roughly assigns each feature vector to its nearest visual word, and the collection of unordered words ignores the spatial information of the interest points, inevitably causing nontrivial quantization errors and limiting improvements in classification rates. To address these drawbacks, we propose an approach to action recognition that encodes spatio-temporal log-Euclidean covariance matrix (ST-LECM) features within a low-rank and sparse representation framework. Motivated by low-rank matrix recovery, local descriptors in a spatio-temporal neighborhood have similar representations and should be approximately low rank. The learned coefficients can not only capture the global data structure, but also preserve local consistency. Experimental results show that the proposed approach yields excellent recognition performance on synthetic video datasets and is robust to action variability, view variations, and partial occlusion.
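
    For orientation, a typical low-rank plus sparse representation objective of the kind such frameworks build on can be written as follows; the notation (dictionary D, coefficient matrix Z, error term E) is a common convention assumed here rather than taken from the letter:

    \min_{Z,E} \; \|Z\|_{*} + \lambda \|E\|_{1} \quad \text{s.t.} \quad X = DZ + E

    where X stacks the descriptors (here, the ST-LECM features), the nuclear norm \|Z\|_{*} pushes the coefficients of spatio-temporally neighboring descriptors toward a low-rank structure, and the l1 term absorbs sparse noise.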

  • Comparative Study between Two Approaches Using Edit Operations and Code Differences to Detect Past Refactorings

    Takayuki OMORI  Katsuhisa MARUYAMA  

     
    PAPER-Software Engineering

    Publicized: 2017/11/27  Vol: E101-D No:3  Page(s): 644-658

    Understanding which refactoring transformations were performed is in demand in modern software development. Traditionally, many researchers have tackled the understanding of code changes using history data derived from version control systems. Those studies point out problems of this traditional approach, such as the entanglement of multiple changes. To alleviate these problems, operation histories recorded on IDEs' code editors are nowadays available as a new source of software evolution data. By replaying such histories, we can investigate past code changes at a fine-grained level. However, prior studies did not provide enough evidence of their effectiveness for detecting refactoring transformations. This paper describes an experiment in which participants detect refactoring transformations performed by other participants after investigating the code changes with an operation-replay tool and with diff tools. The results show that both approaches have their respective factors that cause misunderstanding and overlooking of refactoring transformations. Two negative factors, concerning divided operations and generated compound operations, were observed in the operation-based approach, whereas all the negative factors in the difference-based approach resulted from three problems: tangling, shadowing, and out-of-order code changes. This paper also shows seven concrete examples of participants' mistakes in both approaches. These findings give us hints for improving existing tools for understanding code changes and detecting refactoring transformations.

  • Efficient Early Termination Criterion for ADMM Penalized LDPC Decoder

    Biao WANG  Xiaopeng JIAO  Jianjun MU  Zhongfei WANG  

     
    LETTER-Coding Theory

    Vol: E101-A No:3  Page(s): 623-626

    By tracking the rate of change of the hard decisions over every two consecutive iterations of alternating direction method of multipliers (ADMM) penalized decoding, an efficient early termination (ET) criterion is proposed to improve the convergence rate of the ADMM penalized decoder for low-density parity-check (LDPC) codes. Compared to the existing ET criterion for ADMM penalized decoding, the proposed method reduces the average number of iterations significantly at low signal-to-noise ratios, with negligible performance degradation.
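
    One plausible reading of the criterion, in a short Python sketch: decoding stops as soon as the hard decisions satisfy all parity checks, or early once the fraction of hard decisions flipping between two consecutive iterations falls below a threshold. The threshold value and the stopping-policy details are assumptions for illustration.

    # Sketch: early termination from the hard-decision change rate.
    import numpy as np

    def should_stop(x_prev, x_curr, H, thresh=0.01):
        hd_prev = (x_prev >= 0.5).astype(int)    # hard decisions, iter t-1
        hd_curr = (x_curr >= 0.5).astype(int)    # hard decisions, iter t
        change_rate = np.mean(hd_prev != hd_curr)
        valid = not np.any((H @ hd_curr) % 2)    # all parity checks hold?
        return valid or change_rate < thresh

    H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]])   # toy parity-check matrix
    x1 = np.array([0.9, 0.2, 0.4, 0.6])          # pseudo-posteriors, iter t-1
    x2 = np.array([0.9, 0.1, 0.4, 0.6])          # iter t: no decision flipped
    print(should_stop(x1, x2, H))                # True: decisions have settled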

  • Corpus Expansion for Neural CWS on Microblog-Oriented Data with λ-Active Learning Approach

    Jing ZHANG  Degen HUANG  Kaiyu HUANG  Zhuang LIU  Fuji REN  

     
    PAPER-Natural Language Processing

    Publicized: 2017/12/08  Vol: E101-D No:3  Page(s): 778-785

    Microblog data contains rich information about real-world events and has great commercial value, so microblog-oriented natural language processing (NLP) tasks have attracted considerable attention from researchers. However, the performance of microblog-oriented Chinese Word Segmentation (CWS) based on deep neural networks (DNNs) is still not satisfactory. One critical reason is that the existing microblog-oriented training corpora are inadequate for training effective weight matrices for DNNs. In this paper, we propose a novel active learning method to extend the scale of the training corpus for DNNs. However, due to the large number of partially overlapping sentences in microblogs, it is difficult to select samples with high annotation value from raw microblogs during the active learning procedure. To select samples with higher annotation value, a parameter λ is introduced to control the number of repeatedly selected samples. Meanwhile, various strategies are adopted to measure the overall annotation value of a sample during the active learning procedure. Experiments on the benchmark datasets of NLPCC 2015 show that our λ-active learning method outperforms the baseline system and the state-of-the-art method. Moreover, the results demonstrate that the performance of DNNs trained on the extended corpus is significantly improved.
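
    A schematic sketch of the selection step: candidate sentences are ranked by an annotation-value score, and λ caps how many samples overlapping with already-chosen ones may still enter the batch. Both the score and the overlap test below are crude stand-ins for the strategies in the paper.

    # Schematic lambda-controlled batch selection for active learning.
    def select_batch(pool, score, overlaps, lam=2, batch=3):
        chosen, repeats = [], 0
        for s in sorted(pool, key=score, reverse=True):
            dup = any(overlaps(s, c) for c in chosen)
            if dup and repeats >= lam:
                continue                  # lambda limit on overlaps reached
            chosen.append(s)
            repeats += dup
            if len(chosen) == batch:
                break
        return chosen

    pool = ["转发 微博 抽奖", "转发 微博 抽奖 啦", "今天 天气 不错", "新品 发布 会"]
    score = lambda s: len(set(s.split()))         # toy annotation-value proxy
    overlaps = lambda a, b: len(set(a.split()) & set(b.split())) >= 2
    print(select_batch(pool, score, overlaps))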

Showing 1261-1280 of 8214 hits