
Keyword Search Result

[Keyword] Y (22,683 hits)

Showing 1241-1260 of 22,683 hits

  • Improved Hybrid Feature Selection Framework

    Weizhi LIAO  Guanglei YE  Weijun YAN  Yaheng MA  Dongzhou ZUO  

     
    PAPER
    Publicized: 2021/05/12  Vol: E104-D No:8  Page(s): 1266-1273

    An efficient feature selection strategy is important for the dimension reduction of data. Extensive existing research efforts can be summarized into three classes: filter methods, wrapper methods, and embedded methods. In this work, we propose an integrated two-stage feature selection method, referred to as FWS, which combines the filter and wrapper methods to efficiently extract important features in an innovative hybrid mode. FWS conducts a first level of selection to filter out unrelated features using correlation analysis, and a second level of selection to find a near-optimal subset that captures valuable discrete features by evaluating the performance of a predictive model trained on that subset. Compared with technologies such as mRMR and Relief-F, FWS significantly improves detection performance through an integrated optimization strategy. Results show the performance superiority of the proposed solution over several well-known feature selection methods.
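
    A minimal sketch of the filter-then-wrapper idea on a public dataset, assuming a Pearson-correlation filter and scikit-learn's SequentialFeatureSelector as stand-ins for the FWS components (the threshold, model, and feature counts are illustrative, not the paper's settings):

      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)

      # Stage 1 (filter): drop features weakly correlated with the label.
      corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
      keep = corr > 0.3                       # illustrative threshold
      X_filtered = X[:, keep]

      # Stage 2 (wrapper): greedy forward selection scored by a predictive model.
      model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      sfs = SequentialFeatureSelector(model, n_features_to_select=5,
                                      direction="forward", cv=5)
      sfs.fit(X_filtered, y)
      print("filter kept", int(keep.sum()), "features; wrapper selected",
            int(sfs.get_support().sum()))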

  • Matrix Factorization Based Recommendation Algorithm for Sharing Patent Resource

    Xueqing ZHANG  Xiaoxia LIU  Jun GUO  Wenlei BAI  Daguang GAN  

     
    PAPER
    Publicized: 2021/04/26  Vol: E104-D No:8  Page(s): 1250-1257

    As scientific and technological resources experience information overload, it is quite expensive for users to find exactly the resources they are interested in. A personalized recommendation system is a good candidate to solve this problem, but data sparseness and the cold-start problem still hinder its application. Sparse data affects the quality of similarity measurement and consequently the quality of the recommender system. In this paper, we propose a matrix factorization recommendation algorithm based on similarity calculation (SCMF), which introduces potential similarity relationships to solve the problem of data sparseness. Furthermore, a penalty factor is adopted in the calculation of the latent item similarity matrix to capture more realistic relationships. We compared our approach with 6 other recommendation algorithms and conducted experiments on 5 public data sets. According to the experimental results, recommendation precision improves by 2% to 9% over the best traditional algorithm. On sparse data sets, prediction accuracy also improves by 0.17% to 18%. In addition, our approach was applied to patent resource exploitation in the Wanfang patent retrieval system. Experimental results show that our method performs better than commonly used algorithms, especially under cold-start conditions.
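
    An illustrative sketch of the two ingredients the abstract describes: a plain matrix-factorization core and a latent item-item similarity damped by a co-rating penalty factor. The penalty form and all hyperparameters are assumptions for illustration, not the SCMF formulation itself.

      import numpy as np

      rng = np.random.default_rng(0)
      R = rng.integers(0, 6, size=(50, 40)).astype(float)   # toy user-item ratings, 0 = unrated
      mask = R > 0

      # Plain matrix factorization by alternating gradient steps (stand-in for the MF core).
      k, lam, lr = 8, 0.05, 0.01
      U = 0.1 * rng.standard_normal((R.shape[0], k))
      V = 0.1 * rng.standard_normal((R.shape[1], k))
      for _ in range(200):
          E = mask * (R - U @ V.T)
          U += lr * (E @ V - lam * U)
          V += lr * (E.T @ U - lam * V)

      # Latent item-item similarity with a penalty factor that shrinks similarities
      # supported by few co-rating users (an assumed form of the penalty).
      def penalized_sim(i, j, penalty=10):
          co = int(np.sum(mask[:, i] & mask[:, j]))
          cos = float(V[i] @ V[j] / (np.linalg.norm(V[i]) * np.linalg.norm(V[j]) + 1e-12))
          return (min(co, penalty) / penalty) * cos

      print(round(penalized_sim(0, 1), 3))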

  • Collaborative Filtering Auto-Encoders for Technical Patent Recommending

    Wenlei BAI  Jun GUO  Xueqing ZHANG  Baoying LIU  Daguang GAN  

     
    PAPER
    Publicized: 2021/04/26  Vol: E104-D No:8  Page(s): 1258-1265

    Finding exactly the right items for users among massive patent resources is a matter of great urgency. Although recommender systems have alleviated this problem to a certain extent, some challenging problems remain, such as tracking user interests and improving recommendation quality when the rating matrix is extremely sparse. In this paper, we propose a novel method called Collaborative Filtering Auto-Encoder for top-N recommendation. This method employs auto-encoders to extract item features, converting a high-dimensional sparse vector into a low-dimensional dense vector, and then uses the dense vector for similarity calculation. At the same time, to bring the recommendation list closer to the user's recent interests, we divide the recommendation weight into time-based and recent similarity-based weights. In essence, the proposed method is an improved item-based collaborative filtering model with more flexible components. Experimental results show that the method consistently outperforms state-of-the-art top-N recommendation methods by a significant margin on standard evaluation metrics.
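
    A compact PyTorch sketch of the pipeline outlined above: an auto-encoder compresses each item's sparse rating column into a dense vector, cosine similarity is computed on the dense vectors, and a time-decay weight biases the ranking toward recently rated items. The exponential decay is an assumed form; the paper's exact split into time-based and recent-similarity-based weights is not reproduced.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      n_users, n_items, latent = 200, 100, 16
      R = (torch.rand(n_users, n_items) < 0.05).float()       # toy sparse implicit ratings

      # Auto-encoder over each item's (sparse, high-dimensional) rating column.
      enc = nn.Sequential(nn.Linear(n_users, 64), nn.ReLU(), nn.Linear(64, latent))
      dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_users))
      opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

      items = R.t()                                            # (n_items, n_users)
      for _ in range(300):
          opt.zero_grad()
          loss = nn.functional.mse_loss(dec(enc(items)), items)
          loss.backward()
          opt.step()

      # Dense embeddings -> cosine similarity, damped by an assumed exponential
      # time-decay weight so that recently rated candidate items rank higher.
      z = nn.functional.normalize(enc(items), dim=1).detach()
      sim = z @ z.t()                                          # item-item similarity
      days_since_rated = torch.randint(0, 60, (n_items,)).float()
      scores = sim * torch.exp(-days_since_rated / 30.0)       # weight each candidate item by recency
      print(scores.topk(5, dim=1).indices[0])                  # top-5 neighbours of item 0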

  • Two-Stage Fine-Grained Text-Level Sentiment Analysis Based on Syntactic Rule Matching and Deep Semantic

    Weizhi LIAO  Yaheng MA  Yiling CAO  Guanglei YE  Dongzhou ZUO  

     
    PAPER
    Publicized: 2021/04/28  Vol: E104-D No:8  Page(s): 1274-1280

    Traditional text-level sentiment analysis methods usually ignore the emotional tendency associated with a specific object or attribute. To address this problem, a novel two-stage fine-grained text-level sentiment analysis model based on syntactic rule matching and deep semantics is proposed in this paper. After analyzing the characteristics and difficulties of fine-grained sentiment analysis, a two-stage fine-grained sentiment analysis algorithm framework is constructed. In the first stage, objects and their corresponding opinions are extracted by syntactic rule matching to obtain preliminary objects and opinions. The second stage uses a deep semantic network to extract more accurate objects and opinions. Because the extraction result may contain multiple objects and opinions to be matched, an object-opinion matching algorithm based on the minimum lexical separation distance is proposed to achieve accurate pairwise matching. Finally, the proposed algorithm is evaluated on several public datasets to demonstrate its practicality and effectiveness.
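
    A small sketch of minimum-lexical-separation-distance matching, assuming token positions are available for each extracted object and opinion; the greedy one-to-one pairing below is one illustrative realisation of such a matching rule, not the paper's exact algorithm.

      def match_object_opinion(objects, opinions):
          # Pair each object with the not-yet-used opinion whose token position is
          # closest, i.e. the minimum lexical separation distance.
          pairs, used = [], set()
          for obj, oi in objects:
              best = min(((abs(oi - pi), op, j)
                          for j, (op, pi) in enumerate(opinions) if j not in used),
                         default=None)
              if best is not None:
                  _, op, j = best
                  used.add(j)
                  pairs.append((obj, op))
          return pairs

      objects = [("battery life", 1), ("screen", 7)]
      opinions = [("excellent", 3), ("too dim", 9)]
      print(match_object_opinion(objects, opinions))
      # [('battery life', 'excellent'), ('screen', 'too dim')]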

  • Remote Dynamic Reconfiguration of a Multi-FPGA System FiC (Flow-in-Cloud)

    Kazuei HIRONAKA  Kensuke IIZUKA  Miho YAMAKURA  Akram BEN AHMED  Hideharu AMANO  

     
    PAPER-Computer System
    Publicized: 2021/05/12  Vol: E104-D No:8  Page(s): 1321-1331

    Multi-FPGA systems have been receiving a lot of attention as low-cost and energy-efficient systems for multi-access edge computing (MEC). For such purposes, a bare-metal multi-FPGA system called FiC (Flow-in-Cloud) is under development. In this paper, we introduce the FiC multi-FPGA cluster, which applies a partial reconfiguration (PR) FPGA design flow to support online replacement of user-defined accelerators while the FPGA interconnection network keeps running, together with its low-level multi-FPGA management software, the remote PR manager. With the remote PR manager, users can define the FiC FPGA cluster setup in JSON and control the cluster from their applications, through the cooperation of a simple cluster management tool/library called ficmgr on the client host and a REST API service provider called ficwww running on the Raspberry Pi 3 (RPi3) of each node. According to evaluation results on a prototype FiC FPGA cluster with 12 nodes, with online application replacement by PR and on-the-fly FPGA bitstream compression, the FPGA bitstream distribution time was reduced to 1/17 and the total cluster setup time was reduced by 21∼57% compared to a cluster setup with full-configuration FPGA bitstreams.
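
    A hypothetical sketch of how a JSON cluster description might be pushed to a per-node REST service; the JSON schema, port number, and /configure endpoint are assumptions made for illustration only and do not reproduce the real ficmgr/ficwww interface.

      import json
      import requests   # third-party HTTP client

      # Assumed cluster description in the JSON style the remote PR manager consumes.
      cluster = {
          "mode": "partial_reconfiguration",
          "nodes": [
              {"host": "fic00", "bitstream": "accel_a_partial.bin", "compress": True},
              {"host": "fic01", "bitstream": "accel_b_partial.bin", "compress": True},
          ],
      }

      for node in cluster["nodes"]:
          # Illustrative REST call to the service on each node's RPi3; the endpoint
          # name and payload are assumptions, not the real ficwww API.
          r = requests.post(f"http://{node['host']}:8080/configure",
                            data=json.dumps(node), timeout=30)
          r.raise_for_status()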

  • FCA-BNN: Flexible and Configurable Accelerator for Binarized Neural Networks on FPGA

    Jiabao GAO  Yuchen YAO  Zhengjie LI  Jinmei LAI  

     
    PAPER-Biocybernetics, Neurocomputing
    Publicized: 2021/05/19  Vol: E104-D No:8  Page(s): 1367-1377

    A series of binarized neural networks (BNNs) achieve acceptable accuracy in image classification tasks and excellent performance on field-programmable gate arrays (FPGAs). Nevertheless, we observe that existing BNN designs are quite time-consuming when the target BNN is changed or a new BNN must be accelerated. Therefore, this paper presents FCA-BNN, a flexible and configurable accelerator that employs a layer-level configurable technique to seamlessly execute each layer of the target BNN. First, to save resources and improve energy efficiency, hardware-oriented optimal formulas are introduced to design an energy-efficient computing array for different sizes of padded-convolution and fully-connected layers. Moreover, to accelerate target BNNs efficiently, we exploit an analytical model to explore the optimal design parameters for FCA-BNN. Finally, our proposed mapping flow changes the target network simply by entering its order, and accelerates a new network by compiling and loading the corresponding instructions, without generating or loading a new bitstream. Evaluations on three major BNN structures show that the differences between the inference accuracy of FCA-BNN and that of a GPU are just 0.07%, 0.31% and 0.4% for LFC, VGG-like and Cifar-10 AlexNet, respectively. Furthermore, our energy-efficiency results reach 0.8× those of existing customized FPGA accelerators for LFC and 2.6× for VGG-like. For Cifar-10 AlexNet, FCA-BNN is 188.2× and 60.6× more energy-efficient than a CPU and a GPU, respectively. To the best of our knowledge, FCA-BNN is the most efficient design for changing the target BNN and accelerating a new BNN, while keeping competitive performance.
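
    A tiny sketch of the arithmetic that BNN accelerators of this kind build on: with activations and weights constrained to ±1, a dot product reduces to XNOR plus popcount, which maps well to FPGA LUTs. This illustrates the computation only, not FCA-BNN's configurable arrays or mapping flow.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 64
      a = rng.choice([-1, 1], size=n)          # binarized activations
      w = rng.choice([-1, 1], size=n)          # binarized weights

      ref = int(a @ w)                         # reference +/-1 dot product

      # Bit-packed version: encode +1 as bit 1, -1 as bit 0, then
      # dot = 2 * popcount(XNOR(a, w)) - n.
      pack = lambda v: int("".join("1" if x > 0 else "0" for x in v), 2)
      mask = (1 << n) - 1
      xnor = ~(pack(a) ^ pack(w)) & mask
      dot = 2 * bin(xnor).count("1") - n
      print(ref, dot)                          # identical values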

  • Improvement of CT Reconstruction Using Scattered X-Rays

    Shota ITO  Naohiro TODA  

     
    PAPER-Biological Engineering
    Publicized: 2021/05/06  Vol: E104-D No:8  Page(s): 1378-1385

    A neural network that outputs reconstructed images based on projection data containing scattered X-rays is presented, and the proposed scheme exhibits better accuracy than conventional computed tomography (CT), in which the scatter information is removed. In medical X-ray CT, it is common practice to remove scattered X-rays using a collimator placed in front of the detector. In this study, the scattered X-rays were assumed to carry useful information, and a method was devised to exploit this information effectively using a neural network. To this end, we generated 70,000 projection data sets by Monte Carlo simulation, using as the target object a cube comprising 216 (6 × 6 × 6) smaller cubes with random density parameters. For each projection simulation, the densities of the smaller cubes were reset to different values, and detectors were deployed around the target object to capture the scattered X-rays from all directions. A neural network was then trained on these projection data to output the densities of the smaller cubes. We confirmed through numerical evaluations that the neural-network approach that utilized scattered X-rays reconstructed images with higher accuracy than the conventional method, in which the scattered X-rays were removed. The results of this study suggest that utilizing the scattered X-ray information can help significantly reduce the patient dose during imaging.
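
    A structural sketch of the learning setup (projection vector in, 216 densities out) using a small scikit-learn MLP on synthetic data; the random linear "forward model" with a crude nonlinear term below merely stands in for the paper's Monte Carlo simulation of scattered X-rays.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_samples, n_dens, n_det = 2000, 216, 300
      densities = rng.uniform(0.5, 1.5, size=(n_samples, n_dens))
      A = rng.uniform(0.0, 1.0, size=(n_dens, n_det))              # toy system matrix
      projections = densities @ A + 0.1 * np.sqrt(densities) @ A   # crude scatter-like term

      model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
      model.fit(projections[:1800], densities[:1800])              # projections -> densities
      print("held-out R^2:", round(model.score(projections[1800:], densities[1800:]), 3))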

  • Unified Likelihood Ratio Estimation for High- to Zero-Frequency N-Grams

    Masato KIKUCHI  Kento KAWAKAMI  Kazuho WATANABE  Mitsuo YOSHIDA  Kyoji UMEMURA  

     
    PAPER-Mathematical Systems Science
    Publicized: 2021/02/08  Vol: E104-A No:8  Page(s): 1059-1074

    Likelihood ratios (LRs), which are commonly used for probabilistic data processing, are often estimated based on the frequency counts of individual elements obtained from samples. In natural language processing, an element can be a continuous sequence of N items, called an N-gram, in which each item is a word, letter, etc. In this paper, we attempt to estimate LRs based on N-gram frequency information. A naive estimation approach that uses only N-gram frequencies is sensitive to low-frequency (rare) N-grams and not applicable to zero-frequency (unobserved) N-grams; these are known as the low- and zero-frequency problems, respectively. To address these problems, we propose a method for decomposing N-grams into item units and then applying their frequencies along with the original N-gram frequencies. Our method can obtain the estimates of unobserved N-grams by using the unit frequencies. Although using only unit frequencies ignores dependencies between items, our method takes advantage of the fact that certain items often co-occur in practice and therefore maintains their dependencies by using the relevant N-gram frequencies. We also introduce a regularization to achieve robust estimation for rare N-grams. Our experimental results demonstrate that our method is effective at solving both problems and can effectively control dependencies.
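
    A toy illustration of the low-/zero-frequency issue and of backing off to item-unit (unigram) frequencies; the simple interpolation below is only a stand-in for the paper's estimator and regularization.

      from collections import Counter

      corpus_a = "the cat sat on the mat the cat slept".split()
      corpus_b = "a dog ran in the park the dog barked".split()

      bigrams = lambda toks: list(zip(toks, toks[1:]))
      ba, bb = Counter(bigrams(corpus_a)), Counter(bigrams(corpus_b))
      ua, ub = Counter(corpus_a), Counter(corpus_b)

      def lr(bg, lam=0.5, eps=1e-6):
          # Naive LR from bigram counts alone: unstable for rare bigrams and
          # undefined (without eps) for unseen ones.
          naive = (ba[bg] / sum(ba.values()) + eps) / (bb[bg] / sum(bb.values()) + eps)
          # Unit-level estimate under an item-independence assumption.
          unit = ((ua[bg[0]] * ua[bg[1]]) / len(corpus_a) ** 2 + eps) / \
                 ((ub[bg[0]] * ub[bg[1]]) / len(corpus_b) ** 2 + eps)
          return lam * naive + (1 - lam) * unit   # illustrative interpolation

      print(lr(("the", "cat")))    # frequent in corpus A, absent in B
      print(lr(("the", "park")))   # absent in A, observed in B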

  • Extracting Knowledge Entities from Sci-Tech Intelligence Resources Based on BiLSTM and Conditional Random Field

    Weizhi LIAO  Mingtong HUANG  Pan MA  Yu WANG  

     
    PAPER
    Publicized: 2021/04/22  Vol: E104-D No:8  Page(s): 1214-1221

    There are many knowledge entities in sci-tech intelligence resources. Extracting these knowledge entities is of great importance for building knowledge networks, exploring the relationships between knowledge, and optimizing search engines. Many existing methods, which are mainly based on rules and traditional machine learning, require significant human involvement but still suffer from unsatisfactory extraction accuracy. This paper proposes a novel approach to knowledge entity extraction based on BiLSTM and a conditional random field (CRF). A BiLSTM neural network is used to obtain the contextual information of sentences, and a CRF is then employed to integrate global label information to achieve optimal labels. This approach does not require the manual construction of features and outperforms conventional methods. In the experiments presented in this paper, the titles and abstracts of 20,000 items in the existing sci-tech literature are processed, from which 50,243 items are used to build benchmark datasets. Based on these datasets, comparative experiments are conducted to evaluate the effectiveness of the proposed approach. Knowledge entities are extracted and the corresponding knowledge networks are established, with a further elaboration on the correlation between two different types of knowledge entities. The proposed research has the potential to improve the quality of sci-tech information services.
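
    A minimal PyTorch sketch of the BiLSTM-CRF tagging architecture: the BiLSTM produces per-token emission scores, and a learned transition matrix is used for Viterbi decoding. CRF training (the forward-algorithm loss) is omitted, and all sizes are illustrative.

      import torch
      import torch.nn as nn

      class BiLSTMTagger(nn.Module):
          def __init__(self, vocab, tags, emb=32, hid=64):
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
              self.emit = nn.Linear(2 * hid, tags)
              self.trans = nn.Parameter(torch.randn(tags, tags) * 0.01)  # CRF transitions

          def emissions(self, x):                   # x: (1, seq_len) of token ids
              h, _ = self.lstm(self.embed(x))
              return self.emit(h).squeeze(0)        # (seq_len, tags)

          def viterbi(self, x):                     # best label sequence under emissions + transitions
              e = self.emissions(x)
              score, back = e[0], []
              for t in range(1, e.size(0)):
                  total = score.unsqueeze(1) + self.trans + e[t].unsqueeze(0)
                  score, idx = total.max(dim=0)
                  back.append(idx)
              best = [int(score.argmax())]
              for idx in reversed(back):
                  best.append(int(idx[best[-1]]))
              return list(reversed(best))

      tagger = BiLSTMTagger(vocab=1000, tags=5)
      print(tagger.viterbi(torch.randint(0, 1000, (1, 12))))   # 12 predicted tag ids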

  • Construction of Ternary Bent Functions by FFT-Like Permutation Algorithms

    Radomir S. STANKOVIĆ  Milena STANKOVIĆ  Claudio MORAGA  Jaakko T. ASTOLA  

     
    PAPER-Logic Design
    Publicized: 2021/04/01  Vol: E104-D No:8  Page(s): 1092-1102

    Binary bent functions have a strictly specified number of non-zero values. In the same way, ternary bent functions satisfy certain requirements on the elements of their value vectors. These requirements can be used to specify six classes of ternary bent functions. The classes are mutually related by encodings of function values. Given a basic ternary bent function, other functions in the same class can be constructed by permutation matrices having a block structure similar to that of the factor matrices appearing in the Good-Thomas decomposition of the Cooley-Tukey fast Fourier transform and related algorithms.
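
    A short check of the bentness criterion these requirements encode: a ternary function f on Z_3^2 is bent when every Vilenkin-Chrestenson spectral coefficient has magnitude 3^(n/2). The permutation-based construction itself is not reproduced; the snippet only verifies the classical bent function f(x1, x2) = x1*x2 mod 3.

      import cmath
      from itertools import product

      q, n = 3, 2
      omega = cmath.exp(2j * cmath.pi / q)

      def vc_spectrum(f):
          # |W_f(w)| for all w; f is (generalized) bent iff every value equals 3^(n/2) = 3.
          return [abs(sum(omega ** (f(x) - (w[0] * x[0] + w[1] * x[1]))
                          for x in product(range(q), repeat=n)))
                  for w in product(range(q), repeat=n)]

      f = lambda x: (x[0] * x[1]) % q               # a classical ternary bent function
      print([round(v, 6) for v in vc_spectrum(f)])  # nine entries, all 3.0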

  • Generation of Large-Amplitude Pulses through the Pulse Shortening Superposed in Series-Connected Tunnel-Diode Transmission Line

    Koichi NARAHARA  

     
    BRIEF PAPER-Electronic Circuits
    Publicized: 2021/02/08  Vol: E104-C No:8  Page(s): 394-397

    A scheme is proposed for generating large-amplitude short pulses using a transmission line with regularly spaced series-connected tunnel diodes (TDs). For the case where a single TD is loaded, it is established that the leading edge of the input pulse moves more slowly than the trailing edge when the pulse amplitude exceeds the peak voltage of the loaded TD; therefore, the pulse width is autonomously reduced as the pulse propagates along the line. In this study, we find that this property still holds when several series-connected TDs are loaded periodically. Through this mechanism, the TD line succeeds in generating large and short pulses. Herein, we clarify the design criteria of the TD line, together with numerical and experimental validation.

  • Transmission Loss of Optical Fibers; Achievements in Half a Century (Open Access)

    Hiroo KANAMORI  

     
    INVITED PAPER-Optical Fiber for Communications
    Publicized: 2021/02/15  Vol: E104-B No:8  Page(s): 922-933

    This paper reviews the evolutionary process that reduced the transmission loss of silica optical fibers from the 20 dB/km reported by Corning in 1970 to the current record-low loss. At an early stage, the main effort was to remove impurities, especially hydroxyl groups, from fibers with GeO2-SiO2 cores, resulting in a loss of 0.20 dB/km in 1980. To suppress Rayleigh scattering due to composition fluctuation, pure-silica-core fibers were developed, and a loss of 0.154 dB/km was achieved in 1986. As the remaining main loss factor, Rayleigh scattering due to density fluctuation was actively investigated using IR and Raman spectroscopy in the 1990s and early 2000s. Today, ultra-low-loss fibers with a loss of 0.150 dB/km are commercially available and used in trans-oceanic submarine cable systems.

  • Performance Evaluation of Online Machine Learning Models Based on Cyclic Dynamic and Feature-Adaptive Time Series

    Ahmed Salih AL-KHALEEFA  Rosilah HASSAN  Mohd Riduan AHMAD  Faizan QAMAR  Zheng WEN  Azana Hafizah MOHD AMAN  Keping YU  

     
    PAPER
    Publicized: 2021/05/14  Vol: E104-D No:8  Page(s): 1172-1184

    Machine learning is becoming an attractive topic for researchers and industrial firms in the area of computational intelligence because of its proven effectiveness and performance in resolving real-world problems. However, some challenges such as precise search, intelligent discovery, and intelligent learning need to be addressed and solved. One of the most important challenges is the non-steady performance of various machine learning models during online learning and operation. Online learning is the ability of a machine-learning model to incorporate new information without retraining the scheme when such information becomes available. To address this challenge, we evaluate and analyze four widely used online machine learning models: Online Sequential Extreme Learning Machine (OSELM), Feature Adaptive OSELM (FA-OSELM), Knowledge Preserving OSELM (KP-OSELM), and Infinite Term Memory OSELM (ITM-OSELM). Specifically, we provide a testbed for the models by building a framework and configuring various evaluation scenarios given different factors in the topological and mathematical aspects of the models. Furthermore, we generate time series with different characteristics to be learned. The results prove the real impact of the tested parameters and scenarios on the models. In terms of accuracy, KP-OSELM and ITM-OSELM are superior to OSELM and FA-OSELM. With regard to time efficiency related to the percentage of decrease in active features, ITM-OSELM is superior to KP-OSELM.
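
    A minimal NumPy sketch of the plain OS-ELM recursion that the four evaluated models extend (FA-/KP-/ITM-OSELM add feature-adaptive and memory mechanisms not shown here); the small ridge term in the initialization is an added numerical-stability assumption.

      import numpy as np

      def os_elm_init(X0, T0, n_hidden, rng):
          W = rng.normal(size=(X0.shape[1], n_hidden))          # fixed random input weights
          b = rng.normal(size=n_hidden)
          H0 = np.tanh(X0 @ W + b)
          P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
          beta = P @ H0.T @ T0
          return W, b, P, beta

      def os_elm_update(X, T, W, b, P, beta):
          H = np.tanh(X @ W + b)                                # new data chunk
          K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
          P = P - P @ H.T @ K @ H @ P                           # recursive covariance update
          beta = beta + P @ H.T @ (T - H @ beta)                # output-weight update
          return P, beta

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 10))
      T = (X @ rng.normal(size=(10, 1)) > 0).astype(float)
      W, b, P, beta = os_elm_init(X[:100], T[:100], n_hidden=40, rng=rng)
      for s in range(100, 500, 50):                             # stream the rest in chunks
          P, beta = os_elm_update(X[s:s+50], T[s:s+50], W, b, P, beta)
      print("rough accuracy:", float(((np.tanh(X @ W + b) @ beta > 0.5) == T).mean()))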

  • A Novel Multi-AP Diversity for Highly Reliable Transmissions in Wireless LANs

    Toshihisa NABETANI  Masahiro SEKIYA  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies
    Publicized: 2021/01/08  Vol: E104-B No:7  Page(s): 913-921

    With the development of the IEEE 802.11 standard for wireless LANs, there has been an enormous increase in the use of wireless LANs in factories, plants, and other industrial environments. In industrial applications, wireless LAN systems require high reliability for stable real-time communications. In this paper, we propose a multi-access-point (AP) diversity method that contributes to robust data transmission, toward the realization of ultra-reliable low-latency communications (URLLC) in wireless LANs. The proposed method obtains a diversity effect over multiple paths with independent transmission errors and collisions, without modifying the IEEE 802.11 standard or increasing the overhead of communication resources. We evaluate the effects of the proposed method by numerical analysis, develop a prototype to demonstrate its feasibility, and perform experiments using the prototype in a factory wireless environment. These numerical evaluations and experiments show that the proposed method increases reliability and decreases transmission delay.
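
    The core reliability argument can be illustrated with a back-of-the-envelope calculation under an independence assumption (the per-path error rates below are made up, and this is not the paper's numerical analysis):

      # A frame duplicated over two APs is lost only if BOTH copies fail,
      # so with independent per-path frame error rates the residual loss
      # rate is roughly their product.
      p_ap1, p_ap2 = 0.05, 0.08
      print("best single path :", p_ap1)
      print("multi-AP diversity:", p_ap1 * p_ap2)   # 0.004, an order of magnitude lower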

  • Correlation of Centralities: A Study through Distinct Graph Robustness

    Xin-Ling GUO  Zhe-Ming LU  Yi-Jia ZHANG  

     
    LETTER-Artificial Intelligence, Data Mining
    Publicized: 2021/04/05  Vol: E104-D No:7  Page(s): 1054-1057

    The robustness of complex networks is an essential subject for improving their performance when vertices or links are removed due to potential threats. In recent years, many researchers have achieved significant advancements in this field. In this paper, we give an overview from a novel statistical perspective. We first present a brief review of complex networks, covering 2 primary network models, 12 popular attack strategies, and the most convincing network robustness metrics. We then focus on the correlations of the 12 attack strategies with each other, and on how these correlations differ from one network model to the other. We also examine the robustness of networks when vertices are removed according to different attack strategies, and how this robustness differs between the two network models. Our aim is to observe the correlation mechanism of centralities for distinct network models, and to compare network robustness when different centralities are used to direct attacks on distinct network models. What inspires us is that we may be able to find a paradigm that combines several highly destructive attack strategies to find the optimal strategy based on a deep learning framework.
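
    A small sketch of the kind of correlation study described, assuming a scale-free Barabási-Albert model and four NetworkX centralities as stand-ins for the paper's 2 network models and 12 attack strategies:

      import networkx as nx
      from scipy.stats import spearmanr

      G = nx.barabasi_albert_graph(500, 3, seed=1)          # scale-free model
      centralities = {
          "degree": nx.degree_centrality(G),
          "betweenness": nx.betweenness_centrality(G),
          "closeness": nx.closeness_centrality(G),
          "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
      }
      nodes = list(G.nodes())
      names = list(centralities)
      for i, a in enumerate(names):
          for b in names[i + 1:]:
              rho, _ = spearmanr([centralities[a][v] for v in nodes],
                                 [centralities[b][v] for v in nodes])
              print(f"{a:12s} vs {b:12s}  Spearman rho = {rho:.3f}")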

  • Attention Voting Network with Prior Distance Augmented Loss for 6DoF Pose Estimation

    Yong HE  Ji LI  Xuanhong ZHOU  Zewei CHEN  Xin LIU  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2021/03/26  Vol: E104-D No:7  Page(s): 1039-1048

    6DoF pose estimation from a monocular RGB image is a challenging but fundamental task. Methods based on a unit direction vector-field representation and a Hough voting strategy have achieved state-of-the-art performance. Nevertheless, they apply the smooth l1 loss to learn the two elements of the unit vector separately, so the prior distance between the pixel and the keypoint is not taken into account, even though the positioning error is significantly affected by this prior distance. In this work, we propose a Prior Distance Augmented Loss (PDAL) that exploits the prior distance for a more accurate vector-field representation. Furthermore, we propose a lightweight channel-level attention module for adaptive feature fusion. Embedding this Adaptive Fusion Attention Module (AFAM) into a U-Net, we build an Attention Voting Network to further improve the performance of our method. We conduct extensive experiments to demonstrate the effectiveness and performance improvement of our methods on the LINEMOD, OCCLUSION and YCB-Video datasets. Our experiments show that the proposed methods bring significant performance gains and outperform state-of-the-art RGB-based methods without any post-refinement.
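
    A hypothetical PyTorch form of a prior-distance-augmented smooth-l1 loss, where each pixel's vector loss is re-weighted by its distance to the keypoint; the weighting function is an assumption for illustration and is not the paper's exact PDAL.

      import torch
      import torch.nn.functional as F

      def prior_distance_augmented_l1(pred_vec, gt_vec, pixel_xy, keypoint_xy, beta=1.0):
          # Far-away pixels get a larger weight, since the same angular error there
          # produces a larger keypoint positioning error.
          dist = torch.norm(pixel_xy - keypoint_xy, dim=-1, keepdim=True)      # (N, 1)
          weight = 1.0 + dist / (dist.mean() + 1e-8)
          per_elem = F.smooth_l1_loss(pred_vec, gt_vec, reduction="none", beta=beta)
          return (weight * per_elem).mean()

      N = 1024
      pred = torch.randn(N, 2)
      gt = F.normalize(torch.randn(N, 2), dim=1)
      pix = torch.rand(N, 2) * 480
      kp = torch.tensor([240.0, 240.0])
      print(prior_distance_augmented_l1(pred, gt, pix, kp))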

  • Alleviating File System Journaling Problem in Containers for DBMS Consolidation

    Asraa ABDULRAZAK ALI MARDAN  Kenji KONO  

     
    PAPER-Software System
    Publicized: 2021/04/01  Vol: E104-D No:7  Page(s): 931-940

    Containers offer a lightweight alternative to virtual machines and have become a preferred choice for application consolidation in the clouds. However, the sharing of kernel components can degrade I/O performance and isolation in containers. It is widely recognized that file system journaling has terrible performance side effects in containers, especially when consolidating database management systems (DBMSs). The sharing of the journaling module among containers causes performance dependency among them. This dependency violates the resource limits enforced by the resource controller and degrades I/O performance due to contention for the journaling module. Operating system developers have been working on novel file system designs or new journaling mechanisms to solve these journaling problems. This paper shows that it is possible to overcome the journaling problems without re-designing file systems or implementing a new journaling method. A careful configuration of containers on existing file systems can gracefully solve the problems. Our recommended configuration consists of 1) per-container journaling, by presenting each container with a virtual block device so that it has its own journaling module, and 2) accounting journaling I/Os separately for each container. Our experimental results show that our configuration resolves the journaling-related problems, improves MySQL performance by 3.4x, and achieves reasonable performance isolation among containers.

  • Complete l-Diversity Grouping Algorithm for Multiple Sensitive Attributes and Its Applications

    Yuelei XIAO  Shuang HUANG  

     
    LETTER-Cryptography and Information Security
    Publicized: 2021/01/12  Vol: E104-A No:7  Page(s): 984-990

    In the first stage of the multi-sensitive bucketization (MSB) method, the l-diversity grouping for multiple sensitive attributes is incomplete, causing more information loss. To solve this problem, we define the l-diversity avoidance set for multiple sensitive attributes and the avoiding of a multi-dimensional bucket, and propose a complete l-diversity grouping (CLDG) algorithm for multiple sensitive attributes. We then improve the first stage of the MSB algorithms by applying the CLDG algorithm to them. The experimental results show that the grouping ratio of the improved first stage of the MSB algorithms is significantly higher than that of the original first stage, decreasing the information loss of the published microdata.
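
    A minimal check of the grouping condition underlying these algorithms: a bucket is acceptable only if every sensitive attribute takes at least l distinct values. The avoidance-set machinery of CLDG is not reproduced here.

      def is_l_diverse(bucket, sensitive_cols, l):
          # True if every sensitive attribute has at least l distinct values in the bucket,
          # i.e. the basic l-diversity condition for multiple sensitive attributes.
          return all(len({r[c] for r in bucket}) >= l for c in sensitive_cols)

      records = [
          {"disease": "flu",      "salary": "low"},
          {"disease": "diabetes", "salary": "high"},
          {"disease": "flu",      "salary": "mid"},
      ]
      print(is_l_diverse(records, ["disease", "salary"], l=2))   # True
      print(is_l_diverse(records, ["disease", "salary"], l=3))   # False: only 2 distinct diseases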

  • Energy Efficient Approximate Storing of Image Data for MTJ Based Non-Volatile Flip-Flops and MRAM

    Yoshinori ONO  Kimiyoshi USAMI  

     
    PAPER
    Publicized: 2021/01/06  Vol: E104-C No:7  Page(s): 338-349

    A non-volatile memory (NVM) employing MTJ has many strong points, such as read/write performance, endurance, and operating-voltage compatibility with standard CMOS. However, it consumes a lot of energy when writing data, which becomes an obstacle when it is applied to battery-operated mobile devices. To solve this problem, we propose an approach that augments the precision scaling technique for the write operation in NVM. Precision scaling is an approximate computing technique that reduces the bit width of data (i.e., precision) for energy reduction. When writing image data to NVM with precision scaling, the write energy and the image quality change according to the write time and the targeted bit range. We propose an energy-efficient approximate storing scheme for non-volatile flip-flops and magnetic random-access memory (MRAM) that writes the data by optimizing the bit positions at which the data are split and the write time for each bit range. Using a statistical model, we obtained optimal values for the write time and the targeted bit range under the trade-off between write-energy reduction and image-quality degradation. Simulation results demonstrate that by using these optimal values the write energy can be reduced by up to 50% while maintaining acceptable image quality. We also investigated in detail the relationship between the input images and the output image quality under this approach. In addition, we evaluated the energy benefits of applying our approach to nine types of image processing, including linear filters and edge detectors. The results show that the write energy is reduced by a further 12.5% at the maximum.
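
    A small NumPy illustration of the precision-scaling side of the scheme: dropping low-order bits of 8-bit pixels (a crude stand-in for writing those bit positions with a shorter, lower-energy and less reliable MTJ write) and measuring the image-quality cost as PSNR. The statistical write-time/energy model itself is not reproduced.

      import numpy as np

      def truncate_low_bits(img, drop_bits):
          # Zero out the drop_bits least-significant bits of each 8-bit pixel.
          mask = 0xFF & ~((1 << drop_bits) - 1)
          return img & mask

      def psnr(a, b):
          mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
          return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
      for k in range(5):
          print(k, "bits dropped -> PSNR",
                round(psnr(img, truncate_low_bits(img, k)), 1), "dB")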

  • Multi-Input Functional Encryption with Controlled Decryption

    Nuttapong ATTRAPADUNG  Goichiro HANAOKA  Takato HIRANO  Yutaka KAWAI  Yoshihiro KOSEKI  Jacob C. N. SCHULDT  

     
    PAPER-Cryptography and Information Security
    Publicized: 2021/01/12  Vol: E104-A No:7  Page(s): 968-978

    In this paper, we put forward the notion of a token-based multi-input functional encryption (token-based MIFE) scheme, a notion intended to give encryptors a mechanism to control the decryption of encrypted messages by extending the encryption and decryption algorithms to additionally use tokens. The basic idea is that a decryptor must hold an appropriate decryption token in addition to his secret key to be able to decrypt. This type of scheme can address security concerns that potentially arise in applications of functional encryption aimed at privacy-preserving data analysis. We first formalize token-based MIFE and then provide two basic schemes; both are based on an ordinary MIFE scheme, but the first additionally makes use of a public key encryption scheme, whereas the second makes use of a pseudorandom function (PRF). Lastly, we extend the latter construction to allow decryption tokens to be restricted to a specified set of encryptions, even if all encryptions have been produced with the same encryption token. This is achieved by using a constrained PRF.
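
    A toy sketch (not secure, and not the paper's construction) of the PRF-based idea that decryption requires both the secret key and a matching token: a session key is derived from the token with a PRF, so a decryptor without the token cannot recover the message.

      import hashlib
      import hmac
      import os

      def prf(key, msg):
          # HMAC-SHA256 used as a PRF for illustration.
          return hmac.new(key, msg, hashlib.sha256).digest()

      def xor_stream(key, data):
          # Toy XOR "cipher" keyed by PRF outputs; for illustration only, NOT secure.
          stream, counter = b"", 0
          while len(stream) < len(data):
              stream += prf(key, counter.to_bytes(8, "big"))
              counter += 1
          return bytes(a ^ b for a, b in zip(data, stream))

      enc_token = os.urandom(32)                       # held by the encryptor
      secret_key = os.urandom(32)                      # stands in for the decryptor's secret key
      msg = b"slot-1 input"

      session_key = prf(enc_token, secret_key)         # depends on BOTH token and secret key
      ct = xor_stream(session_key, msg)

      dec_token = enc_token                            # decryptor must also be given the token
      print(xor_stream(prf(dec_token, secret_key), ct))  # b'slot-1 input'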
