
Keyword Search Result

[Keyword] CTI (8,214 hits)

Showing results 521-540 of 8,214

  • Watermarkable Signature with Computational Function Preserving

    Kyohei SUDO  Keisuke HARA  Masayuki TEZUKA  Yusuke YOSHIDA  Keisuke TANAKA  

     
    PAPER-Cryptography and Information Security

    Publicized: 2021/03/19 | Vol: E104-A No:9 | Page(s): 1255-1270

    Software watermarking enables one to embed some information called a “mark” into a program while preserving its functionality, and to read the mark from the program. As a definition of function preserving, Cohen et al. (STOC 2016) proposed statistical function preserving, which requires that the input/output behavior of the marked circuit be identical almost everywhere to that of the original unmarked circuit. They showed how to construct watermarkable cryptographic primitives with statistical function preserving, including pseudorandom functions (PRFs) and public-key encryption, from indistinguishability obfuscation. Recently, Goyal et al. (CRYPTO 2019) introduced a more relaxed definition of function preserving for watermarkable signatures. A watermarkable signature scheme embeds a mark into the signing circuit of a digital signature scheme. The relaxed function preserving only requires that the marked signing circuit output valid signatures. They provide a watermarkable signature scheme with relaxed function preserving based only on (standard) digital signatures. In this work, we introduce an intermediate notion of function preserving for watermarkable signatures, called computational function preserving. We then examine the relationship among our computational function preserving, the relaxed function preserving of Goyal et al., and the statistical function preserving of Cohen et al. Furthermore, we propose a generic construction of a watermarkable signature scheme satisfying computational function preserving based on public key encryption and (standard) digital signatures.

  • Efficient DLT-Based Method for Solving PnP, PnPf, and PnPfr Problems

    Gaku NAKANO  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2021/06/17 | Vol: E104-D No:9 | Page(s): 1467-1477

    This paper presents an efficient method for solving the PnP, PnPf, and PnPfr problems, which are the problems of determining camera parameters from 2D-3D point correspondences. The proposed method is derived from a simple use of linear algebra, similarly to the classical DLT methods. Therefore, the new method is easier to understand, easier to implement, and several times faster than the state-of-the-art methods using Gröbner bases. Unlike the existing Gröbner basis methods, the proposed method consists of three algorithms depending on the number of points and the 3D point configuration. Experimental results show that the proposed method is as accurate as the state-of-the-art methods even in near-planar scenes while running up to three times faster.
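
    The classical DLT idea this work builds on can be illustrated with a minimal sketch (not the paper's proposed solver): each 2D-3D correspondence contributes two linear equations in the 12 entries of the 3×4 projection matrix, and the matrix is recovered from the null space via SVD.

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences.

    X : (n, 3) array of 3D points, x : (n, 2) array of 2D image points.
    Minimal DLT sketch: two linear equations per point, then the right
    singular vector of the smallest singular value gives P up to scale.
    """
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])      # homogeneous 3D points
    A = np.zeros((2 * n, 12))
    for i in range(n):
        u, v = x[i]
        A[2 * i,     0:4]  = Xh[i]
        A[2 * i,     8:12] = -u * Xh[i]       # P1.X - u * (P3.X) = 0
        A[2 * i + 1, 4:8]  = Xh[i]
        A[2 * i + 1, 8:12] = -v * Xh[i]       # P2.X - v * (P3.X) = 0
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)               # null-space solution
```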

  • Applying K-SVD Dictionary Learning for EEG Compressed Sensing Framework with Outlier Detection and Independent Component Analysis Open Access

    Kotaro NAGAI  Daisuke KANEMOTO  Makoto OHKI  

     
    LETTER-Biometrics

    Publicized: 2021/03/01 | Vol: E104-A No:9 | Page(s): 1375-1378

    This letter reports on the effectiveness of applying K-singular value decomposition (K-SVD) dictionary learning to an electroencephalogram (EEG) compressed sensing framework with outlier detection and independent component analysis. Using the K-SVD dictionary matrix with our design parameter optimization, for example, at a compression ratio of four, we improved the normalized mean square error value by 31.4% compared with that of the discrete cosine transform dictionary for the CHB-MIT Scalp EEG Database.
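
    For reference, one common definition of the normalized mean square error used to compare reconstruction quality can be computed as in this small sketch (variable names are illustrative, not taken from the letter):

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean square error between an EEG segment and its
    compressed-sensing reconstruction (lower is better)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
```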

  • Fabrication Process for Superconducting Digital Circuits Open Access

    Mutsuo HIDAKA  Shuichi NAGASAWA  

     
    INVITED PAPER

    Publicized: 2021/03/03 | Vol: E104-C No:9 | Page(s): 405-410

    This review provides a current overview of the fabrication processes for superconducting digital circuits at CRAVITY (clean room for analog and digital superconductivity) at the National Institute of Advanced Industrial Science and Technology (AIST), Japan. CRAVITY routinely fabricates superconducting digital circuits using three types of fabrication processes and supplies several thousand chips to its collaborators each year. Researchers at CRAVITY have focused on improving the controllability and uniformity of device parameters and on reliability, which means reducing defects. These three aspects are important for the correct operation of large-scale digital circuits. The current technologies used at CRAVITY permit ±10% controllability over the critical current density (Jc) of Josephson junctions (JJs) with respect to the design values, while the critical current (Ic) uniformity is within 1σ=2% for JJs with areas exceeding 1.0 µm² and the defect density is on the order of one defect for every 100,000 JJs.

  • Feature Detection Based on Significancy of Local Features for Image Matching

    TaeWoo KIM  

     
    LETTER-Pattern Recognition

    Publicized: 2021/06/03 | Vol: E104-D No:9 | Page(s): 1510-1513

    Feature detection and matching account for most of the processing time in image matching, and this time increases dramatically with the number of feature points. The number of features therefore needs to be controlled for specific applications. This paper proposes a feature detection method based on the significance of local features. The feature significance is computed for all pixels, and the most significant features are chosen while considering their spatial distribution. The method reduces the number of features needed to match two images while maintaining high matching accuracy. In the experiments, this approach was about two times faster in average processing time than the FAST detector on natural scene images.
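
    The selection step described above, keeping only the most significant features while spreading them spatially, can be sketched roughly as a greedy scheme (an illustrative reading with assumed inputs, not the authors' exact algorithm):

```python
import numpy as np

def select_features(points, significance, max_features, min_dist):
    """Greedy selection: visit candidates in decreasing significance and
    keep a point only if it lies at least min_dist away from points
    already kept, so the result stays spatially distributed.

    points : (n, 2) array of pixel coordinates
    significance : (n,) array of per-point significance scores
    """
    order = np.argsort(-significance)
    kept = []
    for idx in order:
        p = points[idx]
        if all(np.hypot(*(p - q)) >= min_dist for q in kept):
            kept.append(p)
            if len(kept) == max_features:
                break
    return np.array(kept)
```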

  • Realization of Multi-Terminal Universal Interconnection Networks Using Contact Switches

    Tsutomu SASAO  Takashi MATSUBARA  Katsufumi TSUJI  Yoshiaki KOGA  

     
    PAPER-Logic Design

    Publicized: 2021/04/01 | Vol: E104-D No:8 | Page(s): 1068-1075

    A universal interconnection network implements arbitrary interconnections among n terminals. This paper considers the problem of realizing such a network using contact switches. When n=2, it can be implemented with a single switch. The number of different connections among n terminals is given by the Bell number B(n), which is the total number of ways to partition n distinct elements. For n=2, 3, 4, 5 and 6, the corresponding Bell numbers are 2, 5, 15, 52, and 203, respectively. This paper shows a method to realize an n-terminal universal interconnection network with $\frac{3}{8}(n^2-1)$ contact switches when n=2m+1≥5, and $\frac{n}{8}(3n+2)$ contact switches when n=2m≥6. It also shows that a lower bound on the number of contact switches needed to realize an n-terminal universal interconnection network is $\lceil \log_2 B(n) \rceil$.
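
    The counts quoted above are easy to reproduce; the following sketch computes B(n) with the standard Bell-triangle recurrence and evaluates the paper's switch-count formulas together with the lower bound ⌈log2 B(n)⌉ (a worked check, not code from the paper):

```python
import math

def bell(n):
    """Bell number B(n) via the Bell triangle: B(n) is the last entry of row n."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]

# B(2)..B(6) = 2, 5, 15, 52, 203, as quoted in the abstract.
print([bell(n) for n in range(2, 7)])

def upper_bound_switches(n):
    """Switch counts from the paper's formulas (valid for n = 2m+1 >= 5 and n = 2m >= 6)."""
    return 3 * (n * n - 1) // 8 if n % 2 == 1 else n * (3 * n + 2) // 8

for n in (5, 6):
    print(n, upper_bound_switches(n), math.ceil(math.log2(bell(n))))
# n=5: 9 switches, lower bound ceil(log2 52) = 6; n=6: 15 switches, lower bound 8.
```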

  • Consumption Pricing Mechanism of Scientific and Technological Resources Based on Multi-Agent Game Theory: An Interactive Analytical Model and Experimental Validation

    Fanying ZHENG  Fu GU  Yangjian JI  Jianfeng GUO  Xinjian GU  Jin ZHANG  

     
    PAPER

    Publicized: 2021/04/16 | Vol: E104-D No:8 | Page(s): 1292-1301

    In the context of Web 2.0, interaction between users and resources is increasingly frequent in the process of resource sharing and consumption. However, current research on resource pricing mainly focuses on the attributes of the resource itself and does not weigh the interests of the resource-sharing participants. To deal with these problems, a pricing mechanism based on multi-agent game theory and resource-user interaction evaluation is established in this paper. Moreover, user similarity, evaluation bias based on link analysis, and punishment of academic group cheating are also included in the model. Based on data on 181 scholars and 509 articles from the Wanfang database, this paper conducts 5483 pricing experiments over 13 months, and the results show that this model is more effective than other pricing models: the pricing accuracy for resources is 94.2%, and the accuracy of user value evaluation is 96.4%. Besides, this model can intuitively show the relationships among users and among resources. The case study also shows that a user's knowledge level is not positively correlated with his or her authority. Discovering and punishing academic group cheating is conducive to objectively evaluating researchers and resources. The pricing mechanism for scientific and technological resources and users proposed in this paper is a prerequisite for fair trading of scientific and technological resources.

  • Construction of Multiple-Valued Bent Functions Using Subsets of Coefficients in GF and RMF Domains

    Miloš M. RADMANOVIĆ  Radomir S. STANKOVIĆ  

     
    PAPER-Logic Design

    Publicized: 2021/04/21 | Vol: E104-D No:8 | Page(s): 1103-1110

    Multiple-valued bent functions are functions with the highest nonlinearity, which makes them interesting for multiple-valued cryptography. Since the general structure of bent functions is still unknown, methods for constructing bent functions are often based on deterministic criteria. For practical applications, it is often necessary to be able to construct a bent function that does not belong to any specific class of functions. Thus, the construction criteria are combined with an exhaustive search over all possible functions, which can be very CPU-time consuming. A solution is to restrict the search space by conditions that should be satisfied by the produced bent functions. In this paper, we propose a construction method based on spectral subsets of multiple-valued bent functions satisfying certain appropriately formulated restrictions in the Galois field (GF) and Reed-Muller-Fourier (RMF) domains. Experimental results show that the proposed method efficiently constructs ternary and quaternary bent functions using these restrictions.

  • Out-of-Bound Signal Demapping for Lattice Reduction-Aided Iterative Linear Receivers in Overloaded MIMO Systems

    Takuya FUJIWARA  Satoshi DENNO  Yafei HOU  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2021/02/15 | Vol: E104-B No:8 | Page(s): 974-982

    This paper proposes out-of-bound signal demapping for lattice reduction-aided iterative linear receivers in overloaded MIMO channels. While lattice reduction-aided linear receivers sometimes output hard-decision signals that are not contained in the modulation constellation, the proposed demapping converts those hard-decision signals into binary digits that can be mapped onto the modulation constellation. Even though the proposed demapping can be implemented with almost no additional complexity, it achieves more gain as the linear reception is iterated. Furthermore, we show that the transmission performance depends on the bit mapping used in the modulation, such as Gray mapping or natural mapping. The transmission performance is confirmed by computer simulation in a 6 × 2 MIMO system, i.e., an overloading ratio of 3. One of the proposed demapping methods, called “modulo demapping”, attains a gain of about 2 dB at a packet error rate (PER) of 10^-1 when 64QAM is applied.
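
    The "modulo demapping" idea, folding hard decisions that fall outside the constellation back onto valid points, can be illustrated with a rough per-dimension sketch for a square QAM constellation; this is one common modulo operation given for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def modulo_fold(hard_decisions, levels_per_dim):
    """Fold out-of-bound per-dimension decisions back into the constellation.

    Square M-QAM uses odd-integer levels {-(L-1), ..., -1, +1, ..., L-1}
    per dimension (L = sqrt(M)); values outside this range are wrapped
    with a modulo of period 2L so they land on a valid level.
    """
    L = levels_per_dim
    period = 2 * L
    folded = hard_decisions - period * np.round(hard_decisions / period)
    # clip exact boundary cases into the valid range of levels
    return np.clip(folded, -(L - 1), L - 1)

# e.g. 64QAM has L = 8 levels per dimension; a decision of +9 folds to -7.
print(modulo_fold(np.array([9.0, -11.0, 3.0]), 8))
```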

  • Classification Functions for Handwritten Digit Recognition

    Tsutomu SASAO  Yuto HORIKAWA  Yukihiro IGUCHI  

     
    PAPER-Logic Design

    Publicized: 2021/04/01 | Vol: E104-D No:8 | Page(s): 1076-1082

    A classification function maps a set of vectors into several classes. A machine learning problem is treated as a design problem for partially defined classification functions. To realize classification functions for MNIST handwritten digits, three different architectures are considered: single-unit realization, 45-unit realization, and 45-unit ×r realization. The 45-unit realization consists of 45 ternary classifiers, 10 counters, and a max selector. The test accuracies of these architectures are compared using the MNIST data set.
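
    The 45-unit structure follows the usual one-vs-one pattern for 10 classes: 45 = C(10,2) pairwise units vote, counters accumulate the votes, and a max selector picks the winner. A simplified software analogue is sketched below; it treats each unit as a plain pairwise vote and abstracts away the ternary units and hardware details of the paper.

```python
from itertools import combinations

def one_vs_one_predict(x, pairwise_units):
    """pairwise_units maps each digit pair (i, j) to a classifier that
    returns i or j for input x; 45 units, 10 vote counters, max selector."""
    votes = [0] * 10
    for (i, j) in combinations(range(10), 2):      # 45 pairs for 10 classes
        votes[pairwise_units[(i, j)](x)] += 1      # each unit votes for one digit
    return max(range(10), key=lambda d: votes[d])  # max selector
```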

  • Energy-Efficient ECG Signals Outlier Detection Hardware Using a Sparse Robust Deep Autoencoder

    Naoto SOGA  Shimpei SATO  Hiroki NAKAHARA  

     
    PAPER-Logic Design

    Publicized: 2021/05/17 | Vol: E104-D No:8 | Page(s): 1121-1129

    Advancements in portable electrocardiographs have allowed electrocardiogram (ECG) signals to be recorded in everyday life. Machine-learning techniques, including deep learning, have been used in numerous studies to analyze ECG signals because they exhibit superior performance to conventional methods. A mobile ECG analysis device is needed so that abnormal ECG waves can be detected anywhere. Such a device requires real-time performance and low power consumption; however, deep-learning-based models often have too many parameters to implement on mobile hardware, making the circuit too large and its power consumption too high. We propose a design flow to implement an autoencoder-based outlier detector on a low-end FPGA. To shorten the preparation time of the ECG data used in training the autoencoder, an unsupervised learning technique is applied. Additionally, to minimize the volume of the weight parameters, a weight sparseness technique is applied, and all the parameters are converted into fixed-point values. We show that even if the parameters are reduced and converted into fixed-point values, the outlier detection performance degrades by only 0.83 points. By reducing the volume of the weight parameters, all the parameters can be stored in on-chip memory. We design the architecture according to the CRS format, a well-known data structure for sparse matrices, minimizing the hardware size and reducing the power consumption. We use weight sharing to further reduce the weight-parameter volume. With weight sharing, we could reduce the bit width of the memories by 60% while maintaining the outlier detection performance. We implemented the autoencoder on a Digilent Inc. ZedBoard and compared the results with those for an ARM mobile CPU for embedded devices. The results indicated that our FPGA implementation of the outlier detector was 12 times faster and 106 times more energy-efficient.
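
    The CRS (compressed row storage, also called CSR) layout mentioned above stores only the non-zero weights plus two index arrays; a small sketch of a CRS matrix-vector product, the core operation of a sparse autoencoder layer, is shown below (plain Python for illustration, not the FPGA design).

```python
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CRS/CSR form:
    values  - non-zero entries, row by row
    col_idx - column index of each non-zero entry
    row_ptr - row_ptr[i]:row_ptr[i+1] delimits the non-zeros of row i
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 2x3 example: [[1, 0, 2], [0, 3, 0]]
print(csr_matvec([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], np.array([1.0, 1.0, 1.0])))
# -> [3. 3.]
```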

  • The Fractional-N All Digital Frequency Locked Loop with Robustness for PVT Variation and Its Application for the Microcontroller Unit

    Ryoichi MIYAUCHI  Akio YOSHIDA  Shuya NAKANO  Hiroki TAMURA  Koichi TANNO  Yutaka FUKUCHI  Yukio KAWAMURA  Yuki KODAMA  Yuichi SEKIYA  

     
    PAPER-Circuit Technologies

    Publicized: 2021/04/01 | Vol: E104-D No:8 | Page(s): 1146-1153

    This paper describes a fractional-N all-digital frequency locked loop (ADFLL) with robustness against PVT variation and its application to a microcontroller unit. It is difficult for the conventional FLL to achieve the required specifications in a fine CMOS process. In particular, the conventional FLL has problems such as unexpected operation and long lock time caused by PVT variation. To overcome these problems, we propose a new ADFLL that dynamically selects its digital filter coefficients. The proposed ADFLL was evaluated through HSPICE simulations and fabricated chips using a 0.13 µm CMOS process. From these results, we observed that the proposed ADFLL is robust against PVT variation owing to the dynamic selection of digital filter coefficients; the lock time is improved by up to 57%, and the clock jitter is 0.85 ns.

  • Improved Hybrid Feature Selection Framework

    Weizhi LIAO  Guanglei YE  Weijun YAN  Yaheng MA  Dongzhou ZUO  

     
    PAPER

    Publicized: 2021/05/12 | Vol: E104-D No:8 | Page(s): 1266-1273

    An efficient feature selection strategy is important for dimension reduction of data. Existing research efforts can be summarized into three classes: filter methods, wrapper methods, and embedded methods. In this work, we propose an integrated two-stage feature extraction method, referred to as FWS, which combines the filter and wrapper methods to efficiently extract important features in a hybrid mode. FWS conducts a first level of selection that filters out unrelated features using correlation analysis, and a second level of selection that finds a near-optimal subset capturing valuable discrete features by evaluating the performance of a predictive model trained on such a subset. Compared with technologies such as mRMR and Relief-F, FWS significantly improves detection performance through an integrated optimization strategy. Results show the performance superiority of the proposed solution over several well-known feature selection methods.
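
    A rough sketch of the two-stage idea, correlation-based filtering followed by a wrapper search that evaluates a predictive model, might look like the following (the threshold, model, and subset bound are hypothetical choices, not the authors' implementation):

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fws_like_selection(X, y, corr_threshold=0.1, max_subset_size=5):
    """Stage 1 (filter): keep features whose absolute correlation with the
    label exceeds a threshold. Stage 2 (wrapper): among the survivors,
    pick the subset with the highest cross-validated model score."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    candidates = [j for j in range(X.shape[1]) if corr[j] > corr_threshold]

    best_subset, best_score = None, -np.inf
    for k in range(1, min(max_subset_size, len(candidates)) + 1):
        for subset in combinations(candidates, k):
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score
```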

  • Two-Stage Fine-Grained Text-Level Sentiment Analysis Based on Syntactic Rule Matching and Deep Semantic

    Weizhi LIAO  Yaheng MA  Yiling CAO  Guanglei YE  Dongzhou ZUO  

     
    PAPER

    Publicized: 2021/04/28 | Vol: E104-D No:8 | Page(s): 1274-1280

    Traditional text-level sentiment analysis methods usually ignore the emotional tendency corresponding to a specific object or attribute. To address this problem, a novel two-stage fine-grained text-level sentiment analysis model based on syntactic rule matching and deep semantics is proposed in this paper. Based on an analysis of the characteristics and difficulties of fine-grained sentiment analysis, a two-stage fine-grained sentiment analysis algorithm framework is constructed. In the first stage, objects and their corresponding opinions are extracted by syntactic rule matching to obtain preliminary objects and opinions. The second stage uses a deep semantic network to extract more accurate objects and opinions. Because the extraction results can contain multiple objects and opinions to be matched, an object-opinion matching algorithm based on the minimum lexical separation distance is proposed to achieve accurate pairwise matching. Finally, the proposed algorithm is evaluated on several public datasets to demonstrate its practicality and effectiveness.
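
    The matching step based on the minimum lexical separation distance can be sketched as a simple pairing over token positions (an illustrative reading of the idea, not the exact algorithm):

```python
def match_object_opinion(objects, opinions):
    """objects / opinions: lists of (word, token_position) tuples extracted
    from one sentence. Each object is paired with the opinion whose token
    position is closest, i.e., the minimum lexical separation distance."""
    pairs = []
    for obj_word, obj_pos in objects:
        opin_word, _ = min(opinions, key=lambda o: abs(o[1] - obj_pos))
        pairs.append((obj_word, opin_word))
    return pairs

# "The screen is sharp but the battery drains quickly"
objects = [("screen", 1), ("battery", 6)]
opinions = [("sharp", 3), ("drains quickly", 7)]
print(match_object_opinion(objects, opinions))
# -> [('screen', 'sharp'), ('battery', 'drains quickly')]
```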

  • Remote Dynamic Reconfiguration of a Multi-FPGA System FiC (Flow-in-Cloud)

    Kazuei HIRONAKA  Kensuke IIZUKA  Miho YAMAKURA  Akram BEN AHMED  Hideharu AMANO  

     
    PAPER-Computer System

    Publicized: 2021/05/12 | Vol: E104-D No:8 | Page(s): 1321-1331

    Multi-FPGA systems have been receiving a lot of attention as low-cost and energy-efficient systems for Multi-access Edge Computing (MEC). For this purpose, a bare-metal multi-FPGA system called FiC (Flow-in-Cloud) is under development. In this paper, we introduce a partial reconfiguration (PR) design flow for the FiC multi-FPGA cluster that supports online replacement of user-defined accelerators while the FPGA interconnection network keeps running, together with its low-level multi-FPGA management software called the remote PR manager. With the remote PR manager, the user can define the FiC FPGA cluster setup in JSON and control the cluster from a user application through the cooperation of a simple cluster management tool/library called ficmgr on the client host and a REST API service provider called ficwww running on the Raspberry Pi 3 (RPi3) of each node. According to evaluation results on a prototype FiC FPGA cluster with 12 nodes, using online application replacement by PR and on-the-fly FPGA bitstream compression, the FPGA bitstream distribution time was reduced to 1/17 and the total cluster setup time was reduced by 21∼57% compared with cluster setup using full-configuration FPGA bitstreams.

  • Improvement of CT Reconstruction Using Scattered X-Rays

    Shota ITO  Naohiro TODA  

     
    PAPER-Biological Engineering

    Publicized: 2021/05/06 | Vol: E104-D No:8 | Page(s): 1378-1385

    A neural network that outputs reconstructed images based on projection data containing scattered X-rays is presented, and the proposed scheme exhibits better accuracy than conventional computed tomography (CT), in which the scatter information is removed. In medical X-ray CT, it is common practice to remove scattered X-rays using a collimator placed in front of the detector. In this study, the scattered X-rays were assumed to carry useful information, and a method was devised to utilize this information effectively using a neural network. We generated 70,000 projection data sets by Monte Carlo simulations using a cube comprising 216 (6 × 6 × 6) smaller cubes with random density parameters as the target object. For each projection simulation, the densities of the smaller cubes were reset to different values, and detectors were deployed around the target object to capture the scattered X-rays from all directions. A neural network was then trained on these projection data to output the densities of the smaller cubes. We confirmed through numerical evaluations that the neural-network approach that utilized scattered X-rays reconstructed images with higher accuracy than the conventional method, in which the scattered X-rays are removed. The results of this study suggest that utilizing the scattered X-ray information can help significantly reduce the radiation dose to patients during imaging.
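
    A minimal sketch of the learning setup described above, a network that maps projection data (including scatter) directly to the 216 cube densities, could look like this in PyTorch; the layer sizes and input dimension are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ScatterAwareReconstructor(nn.Module):
    """Maps a flattened projection vector (primary plus scattered X-rays
    captured by detectors around the object) to the 216 cube densities."""
    def __init__(self, n_detector_readings=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_detector_readings, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 216),          # one density per small cube (6x6x6)
        )

    def forward(self, x):
        return self.net(x)

# Training-step sketch: minimize MSE between predicted and true densities.
model = ScatterAwareReconstructor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
projections = torch.randn(32, 4096)       # placeholder batch
true_density = torch.rand(32, 216)
loss = loss_fn(model(projections), true_density)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```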

  • Construction of Ternary Bent Functions by FFT-Like Permutation Algorithms

    Radomir S. STANKOVIĆ  Milena STANKOVIĆ  Claudio MORAGA  Jaakko T. ASTOLA  

     
    PAPER-Logic Design

    Publicized: 2021/04/01 | Vol: E104-D No:8 | Page(s): 1092-1102

    Binary bent functions have a strictly specified number of non-zero values. In the same way, ternary bent functions satisfy certain requirements on the elements of their value vectors. These requirements can be used to specify six classes of ternary bent functions. Classes are mutually related by encoding of function values. Given a basic ternary bent function, other functions in the same class can be constructed by permutation matrices having a block structure similar to that of the factor matrices appearing in the Good-Thomas decomposition of Cooley-Tukey Fast Fourier transform and related algorithms.
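
    For reference, the bentness requirement on the value vector can be checked numerically through the Vilenkin-Chrestenson spectrum: every spectral coefficient of a ternary bent function has absolute value 3^(n/2). The sketch below is a generic check of that property, not the permutation-based construction of the paper.

```python
import numpy as np
from itertools import product

def is_ternary_bent(f, n):
    """f maps a tuple in {0,1,2}^n to a value in {0,1,2}. The function is
    bent iff every Vilenkin-Chrestenson spectral coefficient has
    absolute value 3^(n/2)."""
    w3 = np.exp(2j * np.pi / 3)
    points = list(product(range(3), repeat=n))
    target = 3 ** (n / 2)
    for w in points:
        coeff = sum(w3 ** ((f(x) - sum(wi * xi for wi, xi in zip(w, x))) % 3)
                    for x in points)
        if not np.isclose(abs(coeff), target):
            return False
    return True

# The quadratic form f(x1, x2) = x1*x2 mod 3 is a classical ternary bent function.
print(is_ternary_bent(lambda x: (x[0] * x[1]) % 3, 2))   # -> True
```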

  • Transmission Loss of Optical Fibers; Achievements in Half a Century Open Access

    Hiroo KANAMORI  

     
    INVITED PAPER-Optical Fiber for Communications

    Publicized: 2021/02/15 | Vol: E104-B No:8 | Page(s): 922-933

    This paper reviews the evolutionary process that reduced the transmission loss of silica optical fibers from the 20 dB/km reported by Corning in 1970 to the current record-low loss. At an early stage, the main effort was to remove impurities, especially hydroxyl groups, from fibers with a GeO2-SiO2 core, resulting in a loss of 0.20 dB/km in 1980. In order to suppress Rayleigh scattering due to composition fluctuation, pure-silica-core fibers were developed, and a loss of 0.154 dB/km was achieved in 1986. Rayleigh scattering due to density fluctuation, the main residual loss factor, was actively investigated using IR and Raman spectroscopy in the 1990s and early 2000s. Now, ultra-low-loss fibers with a loss of 0.150 dB/km are commercially available and used in trans-oceanic submarine cable systems.

  • Creation of Temporal Model for Prioritized Transmission in Predictive Spatial-Monitoring Using Machine Learning Open Access

    Keiichiro SATO  Ryoichi SHINKUMA  Takehiro SATO  Eiji OKI  Takanori IWAI  Takeo ONISHI  Takahiro NOBUKIYO  Dai KANETOMO  Kozo SATODA  

     
    PAPER-Network

    Publicized: 2021/02/01 | Vol: E104-B No:8 | Page(s): 951-960

    Predictive spatial-monitoring, which predicts spatial information such as road traffic, has attracted much attention in the context of smart cities. Machine learning enables predictive spatial-monitoring by using a large amount of aggregated sensor data. Since the capacity of mobile networks is strictly limited, serious transmission delays occur when communication traffic loads are heavy. If some of the data used for predictive spatial-monitoring do not arrive on time, prediction accuracy degrades because the prediction has to be done using only the received data, which implies that the data for prediction are ‘delay-sensitive’. A utility-based allocation technique has suggested modeling the temporal characteristics of such delay-sensitive data for prioritized transmission. However, no study has addressed a temporal model for prioritized transmission in predictive spatial-monitoring. Therefore, this paper proposes a scheme that enables the creation of a temporal model for predictive spatial-monitoring. The scheme is roughly composed of two steps: the first involves creating training data from the original time-series data and a machine learning model that can use the data, while the second involves building the temporal model using feature selection in the learning model. Feature selection enables estimation of the importance of data in terms of how much the data contribute to the prediction accuracy of the machine learning model. This paper considers road-traffic prediction as a scenario and shows that the temporal models created with the proposed scheme can handle real spatial datasets. A numerical study demonstrated how our temporal model works effectively in prioritized transmission for predictive spatial-monitoring in terms of prediction accuracy.
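
    The second step, deriving a temporal model from feature importance, can be sketched with any learner that exposes importances; the toy setup below uses a random forest over lagged traffic samples, where the features and their interpretation are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy setup: predict road traffic at time t from the previous 6 time slots.
rng = np.random.default_rng(0)
series = rng.random(500)
lags = 6
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The importance of each lag indicates how much that (older or newer) sample
# contributes to prediction accuracy; this ordering acts as the temporal
# model used to prioritize which data to transmit first under congestion.
priority = np.argsort(-model.feature_importances_)
print("transmit lags in this order:", priority)
```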

  • Energy Efficient Approximate Storing of Image Data for MTJ Based Non-Volatile Flip-Flops and MRAM

    Yoshinori ONO  Kimiyoshi USAMI  

     
    PAPER

    Publicized: 2021/01/06 | Vol: E104-C No:7 | Page(s): 338-349

    Non-volatile memory (NVM) employing MTJs has many strong points, such as good read/write performance, high endurance, and operating-voltage compatibility with standard CMOS. However, it consumes a large amount of energy when writing data, which becomes an obstacle when it is applied to battery-operated mobile devices. To solve this problem, we propose an approach that augments the precision scaling technique for the write operation in NVM. Precision scaling is an approximate computing technique that reduces the bit width of data (i.e., precision) for energy reduction. When writing image data to NVM with precision scaling, the write energy and the image quality change according to the write time and the target bit range. We propose an energy-efficient approximate storing scheme for non-volatile flip-flops and magnetic random-access memory (MRAM) that writes the data by optimizing the bit positions at which the data are split and the write time for each bit range. Using a statistical model, we obtained optimal values for the write time and the targeted bit range under the trade-off between write energy reduction and image quality degradation. Simulation results demonstrated that, by using these optimal values, the write energy can be reduced by up to 50% while maintaining acceptable image quality. We also investigated in detail the relationship between the input images and the output image quality when using this approach. In addition, we evaluated the energy benefits when applying our approach to nine types of image processing, including linear filters and edge detectors. Results showed that the write energy is reduced by a further 12.5% at the maximum.
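
    The simplest form of precision scaling, dropping low-order bits of pixel data before the costly MTJ write, can be modeled in software as below; this is only an illustration of the precision/quality trade-off, not the paper's write-time and bit-splitting scheme.

```python
import numpy as np

def precision_scale(pixels, kept_bits):
    """Keep only the kept_bits most significant bits of 8-bit pixel data;
    the dropped low-order bits are the ones whose write energy is saved."""
    shift = 8 - kept_bits
    return (pixels >> shift) << shift

def psnr(original, approx):
    mse = np.mean((original.astype(float) - approx.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder image
for bits in (8, 6, 4):
    print(bits, "bits kept, PSNR =", round(psnr(img, precision_scale(img, bits)), 1), "dB")
```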
