
Keyword Search Result

[Keyword] Z (5900 hits)

441-460 hits (5900 hits)

  • Presenting Walking Route for VR Zombie

    Nobuchika SAKATA  Kohei KANAMORI  Tomu TOMINAGA  Yoshinori HIJIKATA  Kensuke HARADA  Kiyoshi KIYOKAWA  

     
    PAPER-Human-computer Interaction

      Publicized:
    2020/09/30
      Vol:
    E104-D No:1
      Page(s):
    162-173

    The aim of this study is to calculate optimal walking routes in real space for users partaking in immersive virtual reality (VR) games without compromising their immersion. To this end, we propose a navigation system that automatically determines the route a VR user should take to avoid collisions with surrounding obstacles. The proposed method is evaluated by simulating a real environment. It is verified to be capable of calculating and displaying walking routes that safely guide users to their destinations without compromising their VR immersion. In addition, users walking in real space while experiencing VR content can choose between 6-DoF (six degrees of freedom) and 3-DoF (three degrees of freedom) conditions; however, we expect users to prefer the 3-DoF condition, as they tend to walk longer under it while using VR content. In dynamic situations, when two pedestrians are added to a designated computer-generated real environment, it is necessary to calculate the walking route using moving-body prediction and to display the moving body in virtual space to preserve immersion.
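
    The abstract does not name the route-planning algorithm, so the following is only a minimal illustration of computing a collision-free walking route on a simulated floor grid with A* search; the grid layout and obstacle map are hypothetical stand-ins for the simulated room.

      # Hedged sketch: a generic A* grid search returning a collision-free route.
      # Not the paper's method; the room grid below is made up.
      import heapq

      def plan_route(grid, start, goal):
          """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
          rows, cols = len(grid), len(grid[0])
          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
          frontier = [(h(start), 0, start, [start])]
          visited = set()
          while frontier:
              _, cost, pos, path = heapq.heappop(frontier)
              if pos == goal:
                  return path                      # collision-free route to the destination
              if pos in visited:
                  continue
              visited.add(pos)
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = pos[0] + dr, pos[1] + dc
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                      nxt = (nr, nc)
                      heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
          return None                              # no safe route exists

      room = [[0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0]]
      print(plan_route(room, (0, 0), (2, 3)))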

  • Faster Rotation-Based Gauss Sieve for Solving the SVP on General Ideal Lattices Open Access

    Shintaro NARISADA  Hiroki OKADA  Kazuhide FUKUSHIMA  Shinsaku KIYOMOTO  

     
    PAPER

      Vol:
    E104-A No:1
      Page(s):
    79-88

    The hardness of solving the shortest vector problem (SVP) is a fundamental assumption for the security of lattice-based cryptographic algorithms. In 2010, Micciancio and Voulgaris proposed an algorithm named the Gauss Sieve, which is a fast heuristic algorithm for solving the SVP. Schneider presented another algorithm named the Ideal Gauss Sieve in 2011, which is applicable to a special class of lattices called ideal lattices. The Ideal Gauss Sieve speeds up the Gauss Sieve by using some properties of ideal lattices. However, the algorithm is applicable only if the dimension n of the ideal lattice is a power of two or n+1 is a prime. Ishiguro et al. proposed an extension of the Ideal Gauss Sieve algorithm in 2014, which is applicable only if every prime factor of n is 2 or 3. In this paper, we first generalize the dimensions to which the ideal-lattice properties can be applied to the case in which every prime factor of n is 2, p, or q for two primes p and q. To the best of our knowledge, no algorithm using ideal-lattice properties has been proposed so far for dimensions such as 20, 44, 80, 84, and 92. We then present an algorithm that speeds up the Gauss Sieve for these dimensions. Our experiments show that the proposed algorithm is 10 times faster than the original Gauss Sieve in solving an 80-dimensional SVP instance. Moreover, we propose a rotation-based Gauss Sieve that is approximately 1.5 times faster than the Ideal Gauss Sieve.
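
    For readers new to sieving, the sketch below shows the pairwise-reduction loop of a generic Gauss Sieve iteration on integer lattice vectors; it is a plain illustration only and does not include the ideal-lattice rotation structure that gives the Ideal Gauss Sieve and the paper's algorithm their speedup.

      # Hedged sketch of generic Gauss Sieve iterations (no ideal-lattice rotations):
      # a popped vector is shortened against the list, and list vectors that become
      # reducible are pushed back onto the stack.
      import numpy as np

      def reduce_pair(v, w):
          """Return v minus the integer multiple of w that minimizes its norm."""
          denom = float(np.dot(w, w))
          if denom == 0.0:
              return v
          q = int(round(np.dot(v, w) / denom))
          return v - q * w

      def gauss_sieve_step(stack, lst):
          v = stack.pop()
          changed = True
          while changed:                          # shorten v against the current list
              changed = False
              for w in lst:
                  r = reduce_pair(v, w)
                  if np.dot(r, r) < np.dot(v, v):
                      v, changed = r, True
          if not np.any(v):
              return stack, lst                   # v collapsed to zero: a collision
          keep = []
          for w in lst:                           # move now-reducible list vectors back
              r = reduce_pair(w, v)
              if np.dot(r, r) < np.dot(w, w):
                  stack.append(r)
              else:
                  keep.append(w)
          keep.append(v)
          return stack, keep

      rng = np.random.default_rng(0)
      stack, lst = [rng.integers(-50, 50, size=4) for _ in range(30)], []
      while stack:
          stack, lst = gauss_sieve_step(stack, lst)
      print(min(int(np.dot(v, v)) for v in lst))  # squared norm of the shortest vector found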

  • 2.65Gbps Downlink Communications with Polarization Multiplexing in X-Band for Small Earth Observation Satellite Open Access

    Tomoki KANEKO  Noriyuki KAWANO  Yuhei NAGAO  Keishi MURAKAMI  Hiromi WATANABE  Makoto MITA  Takahisa TOMODA  Keiichi HIRAKO  Seiko SHIRASAKA  Shinichi NAKASUKA  Hirobumi SAITO  Akira HIROSE  

     
    POSITION PAPER-Satellite Communications

      Publicized:
    2020/07/01
      Vol:
    E104-B No:1
      Page(s):
    1-12

    This paper reports our new communication components and downlink tests for realizing 2.65Gbps by utilizing two circular polarizations. We have developed an on-board X-band transmitter, an on-board dual circularly polarized-wave antenna, and a ground station. In the on-board transmitter, we optimized the bias conditions of the GaN High Power Amplifier (HPA) to linearize its AM-AM performance. We have also designed and fabricated a dual circularly polarized-wave antenna for low-crosstalk polarization multiplexing. The antenna is composed of a corrugated horn antenna and a septum-type polarizer and achieves a Cross Polarization Discrimination (XPD) of 37-43dB in the target X-band. We also modified an existing 10m ground station antenna by replacing its primary radiator and adding a polarizer. We put the polarizer and Low Noise Amplifiers (LNAs) in a cryogenic chamber to reduce thermal noise. The total system noise temperature of the antenna is 58K (maximum) at a physical temperature of 18K when the angle of elevation is 90° on a fine winter day. The dual circularly polarized-wave ground station antenna has a gain-to-system-noise-temperature ratio (G/T) of 39.0dB/K and an XPD higher than 37dB. The downlinked signals are stored in a data recorder at the antenna site; afterwards, we decoded the signals by using our non-real-time software demodulator. Our system achieves high spectral efficiency with a roll-off factor of α=0.05 and polarization multiplexing of 64APSK, corresponding to 8.41bit/s/Hz (2.65Gbit/315MHz). The system was demonstrated in orbit on board the RAPid Innovative payload demonstration Satellite (RAPIS-1), which was launched from Uchinoura Space Center on January 19th, 2019. We decoded 10^10 bits of downlinked R- and L-channel signals and found that the downlinked binary data were error free. Consequently, we have achieved a 2.65Gbps communication speed in the X-band for earth observation satellites at 300 Mega symbols per second (Msps) with polarization multiplexing of 64APSK (coding rate: 4/5) on right- and left-hand circular polarizations.
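
    A quick back-of-the-envelope check of the quoted figures; the roughly 8% gap between the raw coded rate and the reported net rate is an inference about framing and pilot overhead, not a number stated in the abstract.

      # Sanity check of the abstract's figures. The ~8% overhead implied by the
      # numbers is inferred here, not stated in the abstract.
      symbol_rate = 300e6          # 300 Msps per polarization
      bits_per_symbol = 6          # 64APSK
      code_rate = 4 / 5
      polarizations = 2

      raw_rate = symbol_rate * bits_per_symbol * code_rate * polarizations
      print(raw_rate / 1e9)        # 2.88 Gbps before framing/pilot overhead

      net_rate = 2.65e9            # reported net throughput
      bandwidth = 315e6            # occupied bandwidth with roll-off 0.05
      print(net_rate / bandwidth)  # ~8.41 bit/s/Hz, matching the abstract
      print(net_rate / raw_rate)   # ~0.92, i.e. roughly 8% overhead (inferred)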

  • IND-CCA1 Secure FHE on Non-Associative Ring

    Masahiro YAGISAWA  

     
    PAPER-Cryptography and Information Security

      Publicized:
    2020/07/08
      Vol:
    E104-A No:1
      Page(s):
    275-282

    Fully homomorphic encryption (FHE) is an important cryptosystem because it can serve as a basic scheme for cloud computing. Since Gentry discovered the first fully homomorphic encryption scheme in 2009, several fully homomorphic encryption schemes have been proposed. In the systems proposed so far, the bootstrapping process is the main bottleneck, and a large amount of computation is required to process ciphertexts. In 2011, Brakerski et al. proposed a leveled FHE scheme without bootstrapping; however, circuits of arbitrary level cannot be evaluated in their scheme, whereas circuits of any level can be evaluated in ours. The existence of an efficient fully homomorphic cryptosystem would have great practical implications for the outsourcing of private computations, for instance, in the field of cloud computing. In this paper, an IND-CCA1-secure FHE scheme based on the difficulty of prime factorization is proposed that does not need bootstrapping, and our scheme is considered to be more efficient than the previous schemes. In particular, the computational overhead for homomorphic evaluation is O(1).

  • Transparent Glass Quartz Antennas on the Windows of 5G-Millimeter-Wave-Connected Cars

    Osamu KAGAYA  Yasuo MORIMOTO  Takeshi MOTEGI  Minoru INOMATA  

     
    PAPER-Antennas and Propagation

      Publicized:
    2020/07/14
      Vol:
    E104-B No:1
      Page(s):
    64-72

    This paper proposes a transparent glass quartz antenna for 5G-millimeter-wave-connected vehicles and clarifies the characteristics of signal reception when the glass antennas are placed on the windows of a vehicle traveling in an urban environment. Synthetic fused quartz is a material particularly suited for millimeter-wave devices owing to its exceptionally low transmission loss. Realizing synthetic fused quartz devices requires accurate micromachining technology specialized for the material, coupled with the corresponding material technology. This paper presents a transparent antenna comprising a thin mesh pattern on a quartz substrate for installation on a vehicle window. A comparison of distributed transparent antennas and an omnidirectional antenna shows that the relative received power of the distributed antenna system is higher than that of the omnidirectional antenna. In addition, the results show that the received power is similar when using vertically and horizontally polarized antennas. The design is verified in a field test using transparent antennas on the windows of a real vehicle.

  • Preventing Fake Information Generation Against Media Clone Attacks Open Access

    Noboru BABAGUCHI  Isao ECHIZEN  Junichi YAMAGISHI  Naoko NITTA  Yuta NAKASHIMA  Kazuaki NAKAMURA  Kazuhiro KONO  Fuming FANG  Seiko MYOJIN  Zhenzhong KUANG  Huy H. NGUYEN  Ngoc-Dung T. TIEU  

     
    INVITED PAPER

      Publicized:
    2020/10/19
      Vol:
    E104-D No:1
      Page(s):
    2-11

    Fake media has been spreading due to remarkable advances in media processing and machine learning technologies, causing serious problems in society. We are conducting a research project called Media Clone aimed at developing methods for protecting people from fake but skillfully fabricated replicas of real media, called media clones. Such media can be created from fake information about a specific person. Our goal is to develop a trusted communication system that can defend against attacks by media clones. This paper describes some research results of the Media Clone project, in particular, various methods for protecting personal information against the generation of fake information. We focus on 1) fake information generation in the physical world, 2) anonymization and abstraction in the cyber world, and 3) modeling of media clone attacks.

  • Fuzzy Output Support Vector Machine Based Incident Ticket Classification

    Libo YANG  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2020/10/14
      Vol:
    E104-D No:1
      Page(s):
    146-151

    Incident ticket classification plays an important role in complex system maintenance; however, low classification accuracy results in high maintenance costs. To solve this issue, this paper proposes a fuzzy output support vector machine (FOSVM) based incident ticket classification approach, which can be implemented in the context of both two-class SVMs and multi-class SVMs such as one-versus-one and one-versus-rest. Our purpose is to resolve the unclassifiable regions of multi-class SVMs and to output reliable and robust results through more fine-grained analysis. Experiments on both benchmark data sets and real-world ticket data demonstrate that our method performs better than commonly used multi-class SVM and fuzzy SVM methods.
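
    As a hedged illustration of the fuzzy one-versus-one idea (not the paper's exact FOSVM formulation), the sketch below builds per-class memberships from pairwise SVM decision values so that ties in ordinary majority voting, i.e., unclassifiable regions, are resolved by the smallest pairwise margin.

      # Hedged sketch: fuzzy one-versus-one SVM with memberships built from
      # pairwise decision values (clipped at 1). Illustrative only.
      from itertools import combinations
      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.svm import SVC

      class FuzzyOvoSVM:
          def fit(self, X, y):
              self.classes_ = np.unique(y)
              self.models_ = {}
              for i, j in combinations(self.classes_, 2):
                  mask = np.isin(y, [i, j])
                  self.models_[(i, j)] = SVC(kernel="rbf").fit(X[mask], y[mask])
              return self

          def membership(self, X):
              # membership of class c = min over its pairwise margins (clipped to <= 1)
              m = {c: np.full(len(X), np.inf) for c in self.classes_}
              for (i, j), clf in self.models_.items():
                  d = clf.decision_function(X)           # > 0 favors clf.classes_[1]
                  sign = 1.0 if clf.classes_[1] == j else -1.0
                  m[j] = np.minimum(m[j], np.minimum(sign * d, 1.0))
                  m[i] = np.minimum(m[i], np.minimum(-sign * d, 1.0))
              return np.column_stack([m[c] for c in self.classes_])

          def predict(self, X):
              return self.classes_[np.argmax(self.membership(X), axis=1)]

      X, y = load_iris(return_X_y=True)
      model = FuzzyOvoSVM().fit(X, y)
      print(model.predict(X[:5]), model.membership(X[:5]).round(2))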

  • A Scheme of Reversible Data Hiding for the Encryption-Then-Compression System

    Masaaki FUJIYOSHI  Ruifeng LI  Hitoshi KIYA  

     
    PAPER

      Publicized:
    2020/10/21
      Vol:
    E104-D No:1
      Page(s):
    43-50

    This paper proposes an encryption-then-compression (EtC) system-friendly data hiding scheme for images, where an EtC system compresses images after they are encrypted. The EtC system divides an image into non-overlapping blocks and applies four block-based processes independently and randomly to the image for visual encryption. The proposed scheme hides data in a plain, i.e., unencrypted, image, and the hidden data can be extracted from the image even after it has been encrypted by the EtC system. Furthermore, the scheme provides reversible data hiding, so it can perfectly recover the unmarked image from the marked image, even though hiding data temporarily distorts the unmarked image. The proposed scheme copes with three of the four processes in the EtC system, namely, block permutation, rotation/flipping of blocks, and inverting brightness in blocks, whereas conventional schemes for the system do not cope with the last one. In addition, conventional schemes have to identify the encrypted image so that image-dependent side information can be used to extract the embedded data and to restore the unmarked image, but the proposed scheme needs no such identification. Moreover, whereas the data hiding process must know the encryption block size in conventional schemes, the proposed scheme needs no prior knowledge of the block size used for encryption. Experimental results show the effectiveness of the proposed scheme.
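
    For concreteness, the sketch below reproduces the three block-based EtC operations named in the abstract (block permutation, rotation/flipping of blocks, and brightness inversion in blocks) on an 8-bit grayscale image; the block size and random key are illustrative choices, not values from the paper.

      # Minimal sketch of the three EtC block operations the scheme copes with.
      # Image dimensions are assumed to be multiples of the block size.
      import numpy as np

      def etc_encrypt(img, block=16, seed=0):
          rng = np.random.default_rng(seed)
          h, w = img.shape
          bh, bw = h // block, w // block
          blocks = [img[r*block:(r+1)*block, c*block:(c+1)*block].copy()
                    for r in range(bh) for c in range(bw)]
          order = rng.permutation(len(blocks))            # 1) block permutation
          out = np.empty_like(img)
          for dst, src in enumerate(order):
              b = blocks[src]
              b = np.rot90(b, k=rng.integers(4))          # 2) random rotation
              if rng.integers(2):
                  b = np.fliplr(b)                        #    and flipping
              if rng.integers(2):
                  b = 255 - b                             # 3) brightness inversion
              r, c = divmod(dst, bw)
              out[r*block:(r+1)*block, c*block:(c+1)*block] = b
          return out

      img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
      enc = etc_encrypt(img)
      print(enc.shape, enc.dtype)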

  • Body Part Connection, Categorization and Occlusion Based Tracking with Correction by Temporal Positions for Volleyball Spike Height Analysis

    Xina CHENG  Ziken LI  Songlin DU  Takeshi IKENAGA  

     
    PAPER-Vision

      Vol:
    E103-A No:12
      Page(s):
    1503-1511

    The spike height of volleyball players is important in volleyball analysis as a quantitative criterion for evaluating players' motions; it not only provides rich information to audiences in live broadcasts of sports events but also contributes to evaluating and improving player performance in strategy analysis and player training. In volleyball game scenes, the high similarity between hands, deformation, and occlusion are the three main problems that affect the accuracy of spike height acquisition. To solve these problems, this paper proposes an observation model based on body part connection, categorization, and occlusion, together with a temporal position based correction method. Firstly, skin pixel filter based connection detection solves the problem of high similarity between hands by judging whether a hand is connected to the spiking player. Secondly, the body part categorization based observation uses the probability distribution map of the hand to determine the category of each body part, which solves the deformation problem. Thirdly, the occlusion part detection based observation eliminates the influence of views with occluded body parts by detecting occluded views with a trained body part classifier. Finally, the temporal position based result correction combines the estimated result, which refers to historical positions, with the posterior result to obtain an optimal result according to the degree of confidence. The experiments are based on videos of the final and semi-final games of the 2014 Japan Inter High School Men's Volleyball tournament in Tokyo Metropolitan Gymnasium, which include 196 spike sequences from 4 teams. The experimental results show that the spike height can be successfully detected in 93.37% of the test sequences, with an average spike height error of 5.96cm.

  • PCA-LDA Based Color Quantization Method Taking Account of Saliency

    Yoshiaki UEDA  Seiichi KOJIMA  Noriaki SUETAKE  

     
    LETTER-Image

      Vol:
    E103-A No:12
      Page(s):
    1613-1617

    In this letter, we propose a color quantization method based on saliency. In the proposed method, salient colors are preferentially selected as representative colors by using saliency values as weights. Through experiments, we verify the effectiveness of the proposed method.
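
    A minimal sketch of what "using saliency as weights" can look like when choosing representative colors: a saliency-weighted k-means over RGB pixels. The letter's actual PCA-LDA construction is not reproduced here.

      # Hedged sketch: saliency-weighted k-means palette selection (illustrative
      # only, not the letter's PCA-LDA method).
      import numpy as np

      def saliency_weighted_palette(pixels, saliency, k=16, iters=20, seed=0):
          """pixels: (N, 3) float RGB; saliency: (N,) non-negative weights."""
          rng = np.random.default_rng(seed)
          centers = pixels[rng.choice(len(pixels), k, replace=False)]
          for _ in range(iters):
              d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
              label = d.argmin(1)
              for c in range(k):
                  m = label == c
                  if m.any():
                      w = saliency[m][:, None]
                      centers[c] = (pixels[m] * w).sum(0) / max(w.sum(), 1e-9)
          return centers  # k representative colors biased toward salient pixels

      rng = np.random.default_rng(1)
      pixels = rng.random((5000, 3)) * 255.0
      saliency = rng.random(5000)
      print(saliency_weighted_palette(pixels, saliency, k=8).round(1))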

  • A 32GHz 68dBΩ Low-Noise and Balance Operation Transimpedance Amplifier in 130nm SiGe BiCMOS for Optical Receivers

    Chao WANG  Xianliang LUO  Mohamed ATEF  Pan TANG  

     
    PAPER

      Vol:
    E103-A No:12
      Page(s):
    1408-1416

    In this paper, a low-noise, balanced-operation Transimpedance Amplifier (TIA) for optical receivers has been implemented in 130 nm SiGe BiCMOS technology, in which the optimal emitter current density tradeoff and the location of the high-frequency noise corner were analyzed to achieve low-noise performance. An Auto-Zero Feedback Loop (AZFL) that introduces no unnecessary noise at the TIA input, a highly symmetric tail current sink, and a balanced-operation TIA sharing the output of the Operational Amplifier (OpAmp) in the AZFL were designed to keep the TIA operating in balance. Moreover, cascode and shunt-feedback topologies were employed to expand the bandwidth and decrease the input-referred noise. In addition, the formula for calculating the high-frequency noise corner of a shunt-feedback Heterojunction Bipolar Transistor (HBT) TIA was derived. Electrical measurements were performed to validate the design, showing an input-referred noise current Power Spectral Density (PSD) of 9.6 pA/√Hz, balanced operation (VIN1=896mV, VIN2=896mV, VOUT1=1.978V, VOUT2=1.979V), a bandwidth of 32GHz, an overall transimpedance gain of 68.6dBΩ, a total power consumption of 117mW, and a chip area of 484µm × 486µm.

  • Optimization Methods during RTL Conversion from Synchronous RTL Models to Asynchronous RTL Models

    Shogo SEMBA  Hiroshi SAITO  Masato TATSUOKA  Katsuya FUJIMURA  

     
    PAPER

      Vol:
    E103-A No:12
      Page(s):
    1417-1426

    In this paper, we propose four optimization methods for use during the Register Transfer Level (RTL) conversion from synchronous RTL models into asynchronous RTL models. The modularization of data-path resources and the use of appropriate D flip-flops reduce the circuit area. Fixing the control signals of the multiplexers and inserting latches for the data-path resources reduce the dynamic power consumption. In the experiment, we evaluated the effect of the proposed optimization methods: combining all of them reduced the energy consumption by 21.9% on average compared to converted models without the proposed optimizations.

  • Retinex-Based Image Enhancement with Particle Swarm Optimization and Multi-Objective Function

    Farzin MATIN  Yoosoo JEONG  Hanhoon PARK  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2020/09/15
      Vol:
    E103-D No:12
      Page(s):
    2721-2724

    Multiscale retinex is one of the most popular image enhancement methods. However, its control parameters, such as the Gaussian kernel sizes, gain, and offset, must be tuned carefully according to the image content. In this letter, we propose a new method that optimizes the parameters using particle swarm optimization and a multi-objective function. The method iteratively evaluates the visual quality (i.e., brightness, contrast, and colorfulness) of the enhanced image using the multi-objective function while subtly adjusting the parameters. Experimental results show that the proposed method achieves better image quality, both qualitatively and quantitatively, than other image enhancement methods.
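
    A hedged sketch of the optimization loop described above: a multi-objective quality score (brightness, contrast, colorfulness) drives a small particle swarm over enhancement parameters. The weights, parameter ranges, and the enhance() placeholder are illustrative choices, not the letter's exact settings or its retinex pipeline.

      # Hedged sketch: PSO over enhancement parameters guided by a combined
      # brightness/contrast/colorfulness score. enhance() is a gain/offset/gamma
      # stand-in for multiscale retinex; weights and ranges are assumed.
      import numpy as np

      def quality(img):                         # img: float RGB in [0, 255]
          gray = img.mean(axis=2)
          brightness, contrast = gray.mean(), gray.std()
          rg = img[..., 0] - img[..., 1]
          yb = 0.5 * (img[..., 0] + img[..., 1]) - img[..., 2]
          colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
          # prefer mid brightness, high contrast and colorfulness (assumed weighting)
          return -abs(brightness - 128.0) + contrast + colorfulness

      def enhance(img, p):                      # placeholder for the retinex pipeline
          gain, offset, gamma = p
          return np.clip(gain * 255.0 * (img / 255.0) ** gamma + offset, 0.0, 255.0)

      def pso(img, lo, hi, particles=10, iters=30, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (particles, len(lo)))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_f = np.array([quality(enhance(img, p)) for p in x])
          gbest = pbest[pbest_f.argmax()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2,) + x.shape)
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([quality(enhance(img, p)) for p in x])
              better = f > pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[pbest_f.argmax()].copy()
          return gbest                          # tuned parameters

      demo = np.random.default_rng(1).random((64, 64, 3)) * 255.0
      print(pso(demo, lo=np.array([0.5, -20.0, 0.5]), hi=np.array([2.0, 20.0, 1.5])))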

  • Subchannel and Power Allocation with Fairness Guaranteed for the Downlink of NOMA-Based Networks

    Qingyuan LIU  Qi ZHANG  Xiangjun XIN  Ran GAO  Qinghua TIAN  Feng TIAN  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2020/06/08
      Vol:
    E103-B No:12
      Page(s):
    1447-1461

    This paper investigates the resource allocation problem for the downlink of non-orthogonal multiple access (NOMA) networks. A novel resource allocation method is proposed to maximize the system capacity while taking user fairness into account. Since the optimization problem is nonconvex and intractable, we adopt the idea of step-by-step optimization, decomposing it into user pairing, subchannel allocation, and power allocation subproblems. First, all users are paired according to their different channel gains. Then, subchannel allocation is performed by the proposed subchannel selection algorithm (SSA) based on channel priority. Once the subchannel allocation is fixed, to further improve the system capacity, subchannel power allocation is carried out by the successive convex approximation (SCA) approach, in which the nonconvex optimization problem is transformed into an approximated convex optimization problem in each iteration. To ensure user fairness, the upper and lower bounds of the power allocation coefficients are derived and combined by introducing tuning coefficients. The power allocation coefficients can be dynamically adjusted via the tuning coefficients, so that diversified quality of service (QoS) requirements can be satisfied. Finally, simulation results demonstrate the superiority of the proposed method over existing methods in terms of system performance; furthermore, a good tradeoff between system capacity and user fairness can be achieved.
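
    As a two-user illustration of the power-allocation idea (not the paper's SCA-based solver), the sketch below scans the power split for the weak user within assumed fairness bounds and keeps the split that maximizes the pair's sum rate; the channel gains, noise level, and bound values are made-up numbers.

      # Hedged two-user NOMA illustration with standard downlink SIC rate formulas.
      import numpy as np

      def pair_rates(a, p, g_weak, g_strong, n0):
          """a: fraction of the subchannel power p allocated to the weak user."""
          # weak user treats the strong user's signal as interference
          r_weak = np.log2(1 + a * p * g_weak / ((1 - a) * p * g_weak + n0))
          # strong user cancels the weak user's signal via SIC before decoding
          r_strong = np.log2(1 + (1 - a) * p * g_strong / n0)
          return r_weak, r_strong

      def best_split(p=1.0, g_weak=0.2, g_strong=1.0, n0=0.05, a_lo=0.6, a_hi=0.9):
          grid = np.linspace(a_lo, a_hi, 301)   # fairness bounds on a (assumed)
          sums = [sum(pair_rates(a, p, g_weak, g_strong, n0)) for a in grid]
          a_best = grid[int(np.argmax(sums))]
          return a_best, pair_rates(a_best, p, g_weak, g_strong, n0)

      print(best_split())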

  • Efficient Secure Neural Network Prediction Protocol Reducing Accuracy Degradation

    Naohisa NISHIDA  Tatsumi OBA  Yuji UNAGAMI  Jason PAUL CRUZ  Naoto YANAI  Tadanori TERUYA  Nuttapong ATTRAPADUNG  Takahiro MATSUDA  Goichiro HANAOKA  

     
    PAPER-Cryptography and Information Security

      Vol:
    E103-A No:12
      Page(s):
    1367-1380

    Machine learning models inherently memorize significant amounts of information, and thus hiding not only the prediction process but also the trained model, i.e., model obliviousness, is desirable in the cloud setting. Several works have achieved model obliviousness on the MNIST dataset, but datasets that include more complicated samples, e.g., CIFAR-10 and CIFAR-100, are also used in actual applications such as face recognition. Secret sharing-based secure prediction for CIFAR-10 is difficult to achieve: when a deep architecture such as a CNN is used, the calculation error introduced by secret computation becomes large and the accuracy deteriorates, and if more detailed calculations are performed to improve accuracy, a large amount of computation is required. Therefore, even if a conventional method is applied to a CNN as it is, results as good as those described in previous papers cannot be obtained. In this paper, we propose two approaches to solve this problem. First, we propose a new protocol named Batch-normalized Activation that combines BatchNormalization and Activation. Since BatchNormalization includes real-number operations, its parameters must be converted into integers for secret computation, which causes calculation errors and decreases accuracy. Our protocol eliminates these calculation errors, and hence the accuracy degradation; furthermore, the processing is simplified and the amount of computation is reduced. Second, we explore a high-accuracy architecture that is friendly to secret computation. Related works use simple, low-accuracy architectures, but in practice a high-accuracy architecture should be used, so we also explore such an architecture for the CIFAR-10 dataset. Our proposed protocol can compute a prediction on CIFAR-10 within 15.05 seconds with 87.36% accuracy while providing model obliviousness.
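
    A plaintext sketch of one reason combining BatchNormalization with the following activation can avoid real-number operations: for ReLU with a positive BN scale, the sign of BN(x) depends only on x relative to a precomputed threshold, so the on/off decision reduces to a single comparison. This shows only that folding idea under those assumptions; it is not the paper's secret-sharing protocol.

      # Plaintext sketch (assumptions: ReLU activation, gamma > 0):
      # sign(BN(x)) = sign(x - t) with t = mu - beta * sigma / gamma, so the
      # combined BN+ReLU on/off decision needs one comparison against a
      # precomputed per-channel threshold. Not the paper's MPC protocol.
      import numpy as np

      def bn_relu_reference(x, gamma, beta, mu, sigma):
          return np.maximum(gamma * (x - mu) / sigma + beta, 0.0)

      def bn_relu_folded_mask(x, gamma, beta, mu, sigma):
          assert np.all(gamma > 0)
          t = mu - beta * sigma / gamma          # precomputed offline, per channel
          return x > t                           # the only operation needed online

      rng = np.random.default_rng(0)
      x = rng.normal(size=(4, 8)) * 10
      gamma, beta = np.full(8, 1.5), np.full(8, -0.3)
      mu, sigma = np.zeros(8), np.full(8, 2.0)
      ref_mask = bn_relu_reference(x, gamma, beta, mu, sigma) > 0
      print(np.array_equal(ref_mask, bn_relu_folded_mask(x, gamma, beta, mu, sigma)))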

  • Advanced Antlion Optimizer with Discrete Ant Behavior for Feature Selection

    Mengmeng LI  Xiaoguang REN  Yanzhen WANG  Wei QIN  Yi LIU  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2020/09/04
      Vol:
    E103-D No:12
      Page(s):
    2717-2720

    Feature selection is important for learning algorithms, and it is still an open problem. The antlion optimizer is an excellent nature-inspired method, but it does not work well for feature selection. This paper proposes a hybrid approach called the Ant-Antlion Optimizer, which combines the advantages of the antlion's smart behavior in the antlion optimizer and the ant's powerful searching movement in ant colony optimization. A mutation operator is also adopted to strengthen the exploration ability. Comprehensive experiments on binary classification problems show that the proposed algorithm is superior to other state-of-the-art methods on four performance indicators.
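
    A hedged sketch of the wrapper-style binary feature selection such an optimizer performs, including a bit-flip mutation operator; the simple hill-climbing loop below merely stands in for the Ant-Antlion update rules, which are not reproduced.

      # Hedged sketch: wrapper feature selection with bit-flip mutation.
      # The search loop is a stand-in, not the letter's Ant-Antlion Optimizer.
      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      X, y = load_breast_cancer(return_X_y=True)
      rng = np.random.default_rng(0)

      def fitness(mask):
          if not mask.any():
              return 0.0
          acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
          return acc - 0.01 * mask.mean()        # prefer accurate, small subsets

      best = rng.random(X.shape[1]) < 0.5
      best_f = fitness(best)
      for _ in range(30):                        # mutation-driven search
          cand = best.copy()
          flip = rng.random(X.shape[1]) < 0.1    # bit-flip mutation
          cand[flip] = ~cand[flip]
          f = fitness(cand)
          if f > best_f:
              best, best_f = cand, f
      print(best.sum(), round(best_f, 4))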

  • Multi-Layered DP Quantization Algorithm Open Access

    Yukihiro BANDOH  Seishi TAKAMURA  Hideaki KIMATA  

     
    PAPER-Image

      Vol:
    E103-A No:12
      Page(s):
    1552-1561

    Designing an optimum quantizer can be treated as the optimization problem of finding the quantization indices that minimize the quantization error. One solution to this optimization problem, DP quantization, is based on dynamic programming. Some applications, such as bit-depth scalable codecs and tone mapping, require the construction of multiple quantizers with different numbers of quantization levels, for example, from 12bit/channel to 10bit/channel and 8bit/channel. Unfortunately, the above-mentioned DP quantization optimizes the quantizer for just one quantization level; that is, it is unable to optimize multiple quantizers simultaneously. Therefore, when DP quantization is used to design multiple quantizers, there are many redundant computations in the optimization process. This paper proposes an extended DP quantization with a complexity reduction algorithm for the optimal design of multiple quantizers. Experiments show that the proposed algorithm reduces complexity by 20.8%, on average, compared to conventional DP quantization.
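
    For reference, a sketch of single-level DP quantization: dynamic programming over an intensity histogram finds the L-bin partition that minimizes squared error. The paper's contribution, sharing work across multiple quantization levels, is not shown here.

      # Hedged sketch: optimal L-level scalar quantizer via dynamic programming
      # (single level only; the multi-layer sharing of the paper is omitted).
      import numpy as np

      def dp_quantize(hist, levels):
          """hist: pixel counts per intensity value; returns upper bin boundaries."""
          n = len(hist)
          x = np.arange(n, dtype=float)
          w = np.concatenate(([0.0], np.cumsum(hist)))          # prefix sums of counts
          wx = np.concatenate(([0.0], np.cumsum(hist * x)))     # ... of count * value
          wx2 = np.concatenate(([0.0], np.cumsum(hist * x * x)))

          def sse(i, j):                    # squared error of a single bin [i, j)
              cnt = w[j] - w[i]
              if cnt == 0:
                  return 0.0
              s, s2 = wx[j] - wx[i], wx2[j] - wx2[i]
              return s2 - s * s / cnt       # weighted sum of (x - bin mean)^2

          cost = np.full((levels + 1, n + 1), np.inf)
          prev = np.zeros((levels + 1, n + 1), dtype=int)
          cost[0, 0] = 0.0
          for k in range(1, levels + 1):
              for j in range(k, n + 1):
                  for i in range(k - 1, j):
                      c = cost[k - 1, i] + sse(i, j)
                      if c < cost[k, j]:
                          cost[k, j], prev[k, j] = c, i
          bounds, j = [], n                 # backtrack the optimal boundaries
          for k in range(levels, 0, -1):
              bounds.append(j)
              j = prev[k, j]
          return sorted(bounds)

      rng = np.random.default_rng(0)
      samples = np.clip(rng.normal(128, 30, 10000).astype(int), 0, 255)
      hist = np.bincount(samples, minlength=256)
      print(dp_quantize(hist, 8))           # 8-level quantizer boundaries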

  • A Data-Centric Directive-Based Framework to Accelerate Out-of-Core Stencil Computation on a GPU

    Jingcheng SHEN  Fumihiko INO  Albert FARRÉS  Mauricio HANZICH  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2020/09/07
      Vol:
    E103-D No:12
      Page(s):
    2421-2434

    Graphics processing units (GPUs) are highly efficient architectures for parallel stencil code; however, the small device (i.e., GPU) memory capacity (several tens of GBs) necessitates the use of out-of-core computation to process excess data. Great programming effort is needed to manually implement efficient out-of-core stencil code. To relieve such programming burdens, directive-based frameworks have emerged, such as the pipelined accelerator (PACC); however, they usually lack specific optimizations to reduce data transfer. In this paper, we extend PACC with two data-centric optimizations to address data transfer problems. The first is a direct-mapping scheme that eliminates the host (i.e., CPU) buffers that intermediate between the original data and device buffers. The second is a region-sharing scheme that significantly reduces host-to-device data transfer. The extended PACC was applied to an acoustic wave propagator, automatically extending the length of the original serial code 2.3-fold to obtain the out-of-core code. Experimental results revealed that on a Tesla V100 GPU, the generated code ran 41.0, 22.1, and 3.6 times as fast as implementations based on Open Multi-Processing (OpenMP), Unified Memory, and the previous PACC, respectively. The generated code also demonstrated its usefulness with small datasets that fit in the device memory, running 1.3 times as fast as an in-core implementation.
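
    A CPU/NumPy sketch of the out-of-core pattern itself: the domain is streamed through a small "device-sized" buffer in chunks, each carrying a one-cell halo so a 3-point stencil is computed correctly at chunk edges. PACC's directives, pipelining, and actual GPU transfers are not modeled here.

      # Hedged sketch: chunked (out-of-core style) 1-D 3-point stencil sweep.
      import numpy as np

      def stencil_chunked(u, chunk=1024):
          """One Jacobi-style sweep, processed chunk by chunk with one-cell halos."""
          n = len(u)
          out = u.copy()
          for start in range(0, n, chunk):
              stop = min(start + chunk, n)
              lo, hi = max(start - 1, 0), min(stop + 1, n)   # halo of one cell
              tile = u[lo:hi]                                # "host-to-device" copy
              core = 0.25 * tile[:-2] + 0.5 * tile[1:-1] + 0.25 * tile[2:]
              first = start if start > 0 else 1              # skip domain boundaries
              last = stop if stop < n else n - 1
              out[first:last] = core[first - lo - 1:last - lo - 1]
          return out

      u = np.random.default_rng(0).random(5000)
      ref = u.copy()
      ref[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
      print(np.allclose(stencil_chunked(u), ref))            # chunked sweep matches in-core sweep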

  • Relationship between Recognition Accuracy and Numerical Precision in Convolutional Neural Network Models

    Yasuhiro NAKAHARA  Masato KIYAMA  Motoki AMAGASAKI  Masahiro IIDA  

     
    LETTER-Computer System

      Publicized:
    2020/08/13
      Vol:
    E103-D No:12
      Page(s):
    2528-2529

    Quantization is an important technique for implementing convolutional neural networks on edge devices. Quantization often requires relearning, but relearning sometimes cannot be applied because of issues such as cost or privacy. In such cases, it is important to know the numerical precision required to maintain accuracy. We accurately simulate calculations on hardware and measure the relationship between accuracy and numerical precision.
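
    A minimal sketch of what simulating hardware numerical precision can look like: values are rounded to a signed fixed-point grid with a given fractional bit width, and a layer is re-run at several precisions to observe the error. The bit widths and the toy dense layer are illustrative only, not the letter's setup.

      # Hedged sketch: fixed-point rounding applied to a toy ReLU dense layer,
      # swept over fractional bit widths to observe the precision/error tradeoff.
      import numpy as np

      def to_fixed_point(x, int_bits=2, frac_bits=6):
          scale = 2.0 ** frac_bits
          lo = -(2.0 ** (int_bits + frac_bits - 1)) / scale
          hi = (2.0 ** (int_bits + frac_bits - 1) - 1) / scale
          return np.clip(np.round(x * scale) / scale, lo, hi)

      def quantized_dense(x, w, b, frac_bits):
          xq = to_fixed_point(x, frac_bits=frac_bits)
          wq = to_fixed_point(w, frac_bits=frac_bits)
          bq = to_fixed_point(b, frac_bits=frac_bits)
          return np.maximum(xq @ wq + bq, 0.0)          # ReLU dense layer

      rng = np.random.default_rng(0)
      x = rng.normal(size=(32, 16))
      w, b = rng.normal(size=(16, 8)) * 0.1, rng.normal(size=8) * 0.1
      ref = np.maximum(x @ w + b, 0.0)
      for bits in (8, 6, 4, 2):
          err = np.abs(quantized_dense(x, w, b, bits) - ref).mean()
          print(f"frac_bits={bits}: mean abs error {err:.4f}")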

  • Multiple Subspace Model and Image-Inpainting Algorithm Based on Multiple Matrix Rank Minimization

    Tomohiro TAKAHASHI  Katsumi KONISHI  Kazunori URUMA  Toshihiro FURUKAWA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2020/08/31
      Vol:
    E103-D No:12
      Page(s):
    2682-2692

    This paper proposes an image inpainting algorithm based on multiple linear models and matrix rank minimization. Several inpainting algorithms have been previously proposed based on the assumption that an image can be modeled using autoregressive (AR) models. However, these algorithms perform poorly when applied to natural photographs because they assume that an image is modeled by a position-invariant linear model with a fixed model order. In order to improve inpainting quality, this work introduces a multiple AR model and proposes an image inpainting algorithm based on multiple matrix rank minimization with sparse regularization. In doing so, a practical algorithm is provided based on the iterative partial matrix shrinkage algorithm, with numerical examples showing the effectiveness of the proposed algorithm.
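
    A hedged sketch of the rank-minimization building block: singular-value shrinkage iteratively fills missing entries of a matrix while keeping observed entries fixed. This is the generic primitive only; the paper's multiple-AR formulation and the iterative partial matrix shrinkage algorithm are not reproduced.

      # Hedged sketch: singular-value thresholding for matrix completion
      # (generic low-rank inpainting primitive, not the paper's IPMS method).
      import numpy as np

      def svt_inpaint(M, mask, tau=5.0, iters=200):
          """M: matrix with arbitrary values where mask is False; mask: observed entries."""
          X = np.where(mask, M, M[mask].mean())        # initialize missing entries
          for _ in range(iters):
              U, s, Vt = np.linalg.svd(X, full_matrices=False)
              s = np.maximum(s - tau, 0.0)             # shrink singular values
              X = (U * s) @ Vt                         # low-rank estimate
              X = np.where(mask, M, X)                 # keep observed entries fixed
          return X

      rng = np.random.default_rng(0)
      low_rank = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 40))
      mask = rng.random(low_rank.shape) > 0.4          # 60% of entries observed
      rec = svt_inpaint(low_rank, mask)
      print(np.abs((rec - low_rank)[~mask]).mean())    # error on the missing entries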

441-460 hits (5900 hits)