
Keyword Search Result

[Keyword] Z(5900hit)

501-520hit(5900hit)

  • Lattice-Based Cryptanalysis of RSA with Implicitly Related Keys

    Mengce ZHENG  Noboru KUNIHIRO  Honggang HU  

     
    PAPER-Cryptography and Information Security

    Vol: E103-A No:8
    Page(s): 959-968

    We address the security issue of RSA with implicitly related keys in this paper. Informally, we investigate under what conditions it is possible to factorize RSA moduli in polynomial time, given the implicit relation that certain portions of the bit patterns of the related private keys are the same. We formulate concrete attack scenarios and propose lattice-based cryptanalysis using lattice reduction algorithms. A subtle lattice technique is adapted to represent an unknown private key with the help of the known implicit relation. We first analyze the simple case of two RSA instances with a known amount of shared most significant bits (MSBs) and least significant bits (LSBs) of the private keys, and then extend this to a generic lattice-based attack for more RSA instances with implicitly related keys. Our theoretical results indicate that RSA with implicitly related keys is more insecure, and that better asymptotic results are achieved as the number of RSA instances increases. Furthermore, we conduct numerical experiments to verify the validity of the proposed attacks.

  • A Novel Multi-Satellite Multi-Beam System with Single Frequency Reuse Applying MIMO

    Daisuke GOTO  Fumihiro YAMASHITA  

     
    PAPER-Antennas and Propagation

    Publicized: 2020/02/03
    Vol: E103-B No:8
    Page(s): 842-851

    This paper introduces a new multi-satellite multi-beam system with single frequency reuse; it uses the MIMO (Multi Input Multi Output) technique to improve frequency efficiency because the satellite communication band is limited. MIMO is one of the most important approaches for improving spectral efficiency in support of broadband communications. Since it is difficult to achieve high spectral efficiency by simply combining conventional MIMO satellite techniques, i.e. combining a multi-beam system with single frequency reuse with a multiple-satellite system, this paper proposes transmitter pre-coding and receiver equalization techniques that enhance the channel capacity even under time/frequency asynchronous conditions. A channel capacity comparison shows that the proposed system is superior to conventional alternatives.
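
    The abstract does not give the precise pre-coding and equalization design, so the following is only a generic zero-forcing precoding sketch under an ideal, perfectly synchronized channel; the channel matrix, dimensions, and normalization are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 flat-fading channel: 4 transmit beams (spread across satellites) to 4 terminals.
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

# Zero-forcing precoder W = H^H (H H^H)^{-1}, scaled to unit total transmit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)

# With ideal synchronization the effective channel H @ W is a scaled identity,
# i.e. inter-beam interference between the single-frequency-reuse beams is removed.
print(np.round(np.abs(H @ W), 3))
```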

  • Link Prediction Using Higher-Order Feature Combinations across Objects

    Kyohei ATARASHI  Satoshi OYAMA  Masahito KURIHARA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2020/05/14
    Vol: E103-D No:8
    Page(s): 1833-1842

    Link prediction, the computational problem of determining whether there is a link between two objects, is important in machine learning and data mining. Feature-based link prediction, in which the feature vectors of the two objects are given, is of particular interest because it can also be used for various identification-related problems. Although the factorization machine and the higher-order factorization machine (HOFM) are widely used for feature-based link prediction, they use feature combinations not only across the two objects but also from the same object. Feature combinations from the same object are irrelevant to major link prediction problems such as predicting identity because using them increases computational cost and degrades accuracy. In this paper, we present novel models that use higher-order feature combinations only across the two objects. Since there were no algorithms for efficiently computing higher-order feature combinations only across two objects, we derive one by leveraging reported and newly obtained results on computing the ANOVA kernel. We present an efficient coordinate descent algorithm for the proposed models. We also improve the effectiveness of the existing one for the HOFM. Furthermore, we extend the proposed models to a deep neural network. Experimental results demonstrated the effectiveness of our proposed models.
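
    The abstract builds on efficient evaluation of the ANOVA kernel. The sketch below shows only the standard O(dm) dynamic program for the degree-m ANOVA kernel mentioned there; the paper's restriction to cross-object combinations and its coordinate descent updates are not reproduced, and the function and variable names are illustrative.

```python
import numpy as np

def anova_kernel(x, y, m):
    """Degree-m ANOVA kernel: sum over index sets {j1<...<jm} of prod_t x_{j_t} y_{j_t}.

    Standard O(d*m) dynamic program: a[t][j] = a[t][j-1] + x_j*y_j*a[t-1][j-1].
    """
    d = len(x)
    a = np.zeros((m + 1, d + 1))
    a[0, :] = 1.0                      # empty product contributes 1
    for t in range(1, m + 1):
        for j in range(1, d + 1):
            a[t, j] = a[t, j - 1] + x[j - 1] * y[j - 1] * a[t - 1, j - 1]
    return a[m, d]

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])
# Degree-2 brute-force check over the pairs (1,2), (1,3), (2,3) of element-wise products.
brute = (0.5 * -2.0) + (0.5 * 6.0) + (-2.0 * 6.0)
print(anova_kernel(x, y, 2), brute)    # both -10.0
```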

  • Development of a Low Frequency Electric Field Probe Integrating Data Acquisition and Storage

    Zhongyuan ZHOU  Mingjie SHENG  Peng LI  Peng HU  Qi ZHOU  

     
    PAPER-Electromagnetic Theory

    Publicized: 2020/02/27
    Vol: E103-C No:8
    Page(s): 345-352

    A low frequency electric field probe that integrates data acquisition and storage is developed in this paper. An electrically small monopole antenna printed on a circuit board is used as the receiving antenna; the rear end of the monopole antenna is connected to an integrator circuit to achieve a flat frequency response, and logarithmic detection is applied to obtain a high measurement dynamic range. In addition, a Microprogrammed Control Unit is built in to realize data acquisition and storage. The probe does not exceed 20 mm × 20 mm × 30 mm in size. Field strengths from 0.2 V/m to 261 V/m can be measured over the frequency range 500 Hz to 10 MHz, achieving a dynamic range of over 62 dB. The probe is suitable for low frequency electric field strength measurements and shielding effectiveness tests of small shields.

  • Improving Faster R-CNN Framework for Multiscale Chinese Character Detection and Localization

    Minseong KIM  Hyun-Chul CHOI  

     
    LETTER-Pattern Recognition

    Publicized: 2020/04/06
    Vol: E103-D No:7
    Page(s): 1777-1781

    Faster R-CNN uses a region proposal network, which consists of a single-scale convolution filter and fully connected networks, to localize detected regions. However, a single-scale filter is not enough to detect the full regions of characters. In this letter, we propose a simple but effective approach, i.e., utilizing variously sized convolution filters, to accurately detect Chinese characters of multiple scales in documents. We experimentally verified that our method improves IoU by 4% and the detection rate by 3% over the previous single-scale Faster R-CNN method.
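
    As a rough illustration of the multi-scale idea, the sketch below replaces the single 3×3 RPN convolution with several kernel sizes and sums the resulting feature maps; the kernel sizes, channel counts, anchor count, and fusion by summation are assumptions made for illustration, not the letter's exact configuration.

```python
import torch
import torch.nn as nn

class MultiScaleRPNHead(nn.Module):
    """RPN-style head using several kernel sizes so characters of different scales are covered."""
    def __init__(self, in_channels=512, mid_channels=256, kernel_sizes=(3, 5, 7), num_anchors=9):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_channels, mid_channels, k, padding=k // 2) for k in kernel_sizes]
        )
        self.cls = nn.Conv2d(mid_channels, num_anchors * 2, 1)   # objectness scores
        self.reg = nn.Conv2d(mid_channels, num_anchors * 4, 1)   # box offsets

    def forward(self, feat):
        fused = torch.stack([torch.relu(b(feat)) for b in self.branches]).sum(dim=0)
        return self.cls(fused), self.reg(fused)

scores, boxes = MultiScaleRPNHead()(torch.randn(1, 512, 38, 50))
print(scores.shape, boxes.shape)   # (1, 18, 38, 50) and (1, 36, 38, 50)
```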

  • DomainScouter: Analyzing the Risks of Deceptive Internationalized Domain Names

    Daiki CHIBA  Ayako AKIYAMA HASEGAWA  Takashi KOIDE  Yuta SAWABE  Shigeki GOTO  Mitsuaki AKIYAMA  

     
    PAPER-Network and System Security

    Publicized: 2020/03/19
    Vol: E103-D No:7
    Page(s): 1493-1511

    Internationalized domain names (IDNs) are abused to create domain names that are visually similar to those of legitimate/popular brands. In this work, we systematize such domain names, which we call deceptive IDNs, and analyze the risks associated with them. In particular, we propose a new system called DomainScouter to detect various deceptive IDNs and calculate a deceptive IDN score, a new metric indicating the number of users that are likely to be misled by a deceptive IDN. We perform a comprehensive measurement study on the identified deceptive IDNs using over 4.4 million registered IDNs under 570 top-level domains (TLDs). The measurement results demonstrate that there are many previously unexplored deceptive IDNs targeting non-English brands or combining other domain squatting methods. Furthermore, we conduct online surveys to examine and highlight vulnerabilities in user perceptions when encountering such IDNs. Finally, we discuss the practical countermeasures that stakeholders can take against deceptive IDNs.
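
    DomainScouter itself is not described in implementable detail in this abstract, so the following is only a toy illustration of the underlying homograph problem: it decodes a punycode label and maps a handful of Cyrillic look-alikes to Latin letters before comparing against a brand list. The confusables table and brand set are placeholder assumptions; real systems use far richer visual-similarity features.

```python
import unicodedata

# Tiny, illustrative subset of Cyrillic-to-Latin confusables (the real Unicode table is much larger).
CONFUSABLES = {"а": "a", "е": "e", "о": "o", "р": "p", "с": "c", "х": "x", "ӏ": "l"}
BRANDS = {"apple", "google", "amazon"}

def skeleton(label: str) -> str:
    """Replace each character with a Latin look-alike when one is known."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in unicodedata.normalize("NFC", label))

def looks_deceptive(punycode_label: str) -> bool:
    unicode_label = punycode_label.removeprefix("xn--").encode("ascii").decode("punycode")
    return skeleton(unicode_label) in BRANDS

print(looks_deceptive("xn--80ak6aa92e"))   # True: the Cyrillic label renders like "apple"
```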

  • Optimization Approach to Minimize Backup Capacity Considering Routing in Primary and Backup Networks for Random Multiple Link Failures

    Soudalin KHOUANGVICHIT  Nattapong KITSUWAN  Eiji OKI  

     
    PAPER-Network

    Publicized: 2020/01/17
    Vol: E103-B No:7
    Page(s): 726-735

    This paper proposes an optimization approach that designs the backup network with the minimum total capacity to protect the primary network from random multiple link failures, each occurring with a given link failure probability. In the conventional approach, the routing in the primary network is not considered as a factor in minimizing the total capacity of the backup network. Treating the primary routing as a decision variable when designing the backup network can reduce the total backup capacity compared to the conventional approach. The optimization problem examined here employs robust optimization to provide probabilistic survivability guarantees for different link capacities in the primary network. The proposed approach formulates the optimization problem as a mixed integer linear programming (MILP) problem with robust optimization. A heuristic implementation is introduced because the MILP problem cannot be solved in practical time as the network size increases. Numerical results show that the proposed approach achieves lower total backup capacity than the conventional approach.

  • Control Vector Selection for Extended Packetized Predictive Control in Wireless Networked Control Systems

    Keisuke NAKASHIMA  Takahiro MATSUDA  Masaaki NAGAHARA  Tetsuya TAKINE  

     
    PAPER-Network

    Publicized: 2020/01/15
    Vol: E103-B No:7
    Page(s): 748-758

    We study wireless networked control systems (WNCSs), where controllers (CLs), controlled objects (COs), and other devices are connected through wireless networks. In WNCSs, COs can become unstable due to bursty packet losses and random delays on wireless networks. To reduce these network-induced effects, we utilize the packetized predictive control (PPC) method, where future control vectors that compensate for bursty packet losses are generated in a receding horizon manner, packed into packets, and transferred to a CO unit. In this paper, we extend the PPC method so as to compensate for random delays as well as bursty packet losses. In the extended PPC method, generating many control vectors improves the robustness against both problems, but it also increases traffic on the wireless network. Therefore, we consider control vector selection to improve the robustness effectively under the constraint of single-packet transmission. We first reconsider the input strategy for control vectors received by COs and propose a control vector selection scheme suitable for that strategy. In our selection scheme, control vectors are selected based on the estimated average and variance of round-trip delays. Moreover, we resolve the problem that the CL may misestimate the CO's state due to insufficient information for state estimation. Simulation results show that our selection scheme achieves higher robustness against both bursty packet losses and delays in terms of the 2-norm of the CO's state.
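
    The abstract only states that control vectors are chosen from the estimated mean and variance of the round-trip delay, so the selection rule below is a hypothetical illustration of that idea: it keeps, within a per-packet budget, the predicted control vectors whose time offsets the delay is most likely to hit. The window width, budget, and tie-breaking are assumptions, not the paper's scheme.

```python
import math

def select_control_indices(mu_delay, sigma_delay, horizon, budget, spread=2.0):
    """Pick at most `budget` of the predicted control vectors u[0..horizon-1],
    preferring time offsets near the estimated round-trip delay (mean `mu_delay`,
    standard deviation `sigma_delay`, both expressed in control periods)."""
    lo = max(0, math.floor(mu_delay - spread * sigma_delay))
    hi = min(horizon - 1, math.ceil(mu_delay + spread * sigma_delay))
    candidates = sorted(range(lo, hi + 1), key=lambda k: abs(k - mu_delay))
    return sorted(candidates[:budget])

# Delay averaging ~3 control periods with std ~1.5; pack 4 of the 10 predicted vectors.
print(select_control_indices(mu_delay=3.0, sigma_delay=1.5, horizon=10, budget=4))  # [1, 2, 3, 4]
```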

  • Clustering for Interference Alignment with Cache-Enabled Base Stations under Limited Backhaul Links

    Junyao RAN  Youhua FU  Hairong WANG  Chen LIU  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2019/12/25
    Vol: E103-B No:7
    Page(s): 796-803

    We propose clustered interference alignment for MIMO interference channels in which the backhaul link capacity is limited and the base stations are cache-enabled, for the case where the number of Tx-Rx pairs exceeds the feasibility constraint of interference alignment. We optimize clustering by extending the soft cluster size constraint algorithm with a cluster size balancing process. In addition, the CSI overhead is quantified as a system performance indicator along with the average throughput. Simulation results show that the cluster size balancing algorithm generates clusters that are more balanced and attains higher long-term throughput than the soft cluster size constraint algorithm. The long-term throughput is further improved under high SNR by reallocating the capacity of the backhaul links based on the clustering results.

  • Instruction Filters for Mitigating Attacks on Instruction Emulation in Hypervisors

    Kenta ISHIGURO  Kenji KONO  

     
    PAPER-Dependable Computing

    Publicized: 2020/04/06
    Vol: E103-D No:7
    Page(s): 1660-1671

    Vulnerabilities in hypervisors are crucial in multi-tenant clouds and attractive to attackers because a vulnerability in the hypervisor can undermine the security of all virtual machines (VMs). This paper focuses on vulnerabilities in the instruction emulators inside hypervisors. Vulnerabilities in instruction emulators are not rare: CVE-2017-2583, CVE-2016-9756, CVE-2015-0239, and CVE-2014-3647, to name a few. For backward compatibility with legacy x86 CPUs, conventional hypervisors emulate arbitrary instructions at any time if requested. This design leads to a large attack surface, making it hard to get rid of vulnerabilities in the emulator. This paper proposes FWinst, which narrows the attack surface against vulnerabilities in the emulator. The key insight behind FWinst is that the emulator should emulate only a small subset of instructions, depending on the underlying CPU micro-architecture and the hypervisor configuration. FWinst recognizes emulation contexts in which the instruction emulator is invoked and identifies a legitimate subset of instructions that are allowed to be emulated in the current context. By filtering out illegitimate instructions, FWinst narrows the attack surface. In particular, FWinst is effective on recent x86 micro-architectures because the legitimate subset becomes very small. Our experimental results demonstrate that FWinst prevents existing vulnerabilities in the emulator from being exploited on the Westmere and Skylake micro-architectures, and that the runtime overhead is negligible.
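
    Conceptually, the approach keeps a per-context whitelist of instructions that may legitimately reach the emulator. The sketch below captures only that filtering idea in plain Python; the context names and whitelists are invented placeholders, not the actual hypervisor implementation.

```python
# Per-context whitelists of instruction mnemonics that are legitimate to emulate (illustrative only).
ALLOWED = {
    "mmio":       {"mov", "movzx", "stos"},         # memory-mapped I/O exits
    "pio":        {"in", "out", "ins", "outs"},     # port I/O exits
    "desc_table": {"lgdt", "lidt", "sgdt", "sidt"}, # descriptor-table exits
}

def filter_instruction(context: str, mnemonic: str) -> bool:
    """Permit emulation only if the decoded instruction is legitimate for the current exit context."""
    return mnemonic in ALLOWED.get(context, set())

print(filter_instruction("mmio", "mov"))    # True  -> emulate as usual
print(filter_instruction("mmio", "lgdt"))   # False -> reject, shrinking the attack surface
```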

  • Analysis and Minimization of Roundoff Noise for Generalized Direct-Form II Realization of 2-D Separable-Denominator Filters

    Takao HINAMOTO  Akimitsu DOI  Wu-Sheng LU  

     
    PAPER-Digital Signal Processing

    Vol: E103-A No:7
    Page(s): 873-884

    Based on the concept of polynomial operators, this paper explores a generalized direct-form II structure and its state-space realization for two-dimensional separable-denominator digital filters of order (m, n), where a structure with 3(m+n)+mn+1 fixed parameters plus m+n free parameters is introduced and analyzed. An l2-scaling method utilizing different coupling coefficients at different branch nodes to avoid overflow is presented. Expressions for evaluating the roundoff noise of the filter structure as well as its state-space realization are derived and investigated. The availability of the m+n free parameters is shown to be beneficial because the roundoff noise measures can be minimized with respect to these free parameters by means of an exhaustive search over a set with a finite number of candidate elements. The important role these parameters can play in roundoff noise reduction is demonstrated by numerical experiments.

  • Millimeter-Wave Radio Channel Characterization Using Multi-Dimensional Sub-Grid CLEAN Algorithm

    Minseok KIM  Tatsuki IWATA  Shigenobu SASAKI  Jun-ichi TAKADA  

     
    PAPER-Antennas and Propagation

    Publicized: 2020/01/10
    Vol: E103-B No:7
    Page(s): 767-779

    In radio channel measurements and modeling, directional scanning with highly directive antennas is the most popular method for obtaining the angular channel characteristics needed to develop and evaluate advanced wireless systems for high frequency bands. However, it is often insufficient for ray-/cluster-level characterization because the angular resolution of the measured data is limited by the angular sampling interval over a given scanning angle range and the antenna half-power beamwidth. This study proposes the sub-grid CLEAN algorithm, a novel technique for high-resolution multipath component (MPC) extraction from a multi-dimensional power image, the so-called double-directional angular delay power spectrum. Simulations and measurements showed that the proposed technique can extract MPCs for ray-/cluster-level characterization and channel modeling. Further, by applying the proposed method to data captured at 58.5 GHz in an atrium entrance hall, an indoor hotspot access scenario for fifth-generation mobile systems, the multipath clusters and the corresponding scattering processes were identified.
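
    For readers unfamiliar with CLEAN-type deconvolution, the sketch below runs a minimal serial CLEAN loop on a synthetic 2-D power image (e.g., one angle axis and one delay axis): it repeatedly picks the strongest residual cell and subtracts a scaled, shifted point spread function. The sub-grid refinement and the additional angular dimensions of the proposed algorithm are deliberately omitted, and all sizes and gains are illustrative.

```python
import numpy as np

def clean_2d(power_image, psf, gain=0.3, n_iter=200, threshold=1e-3):
    """Minimal serial CLEAN: peak-pick, record a component, subtract gain * amp * shifted PSF."""
    residual = power_image.astype(float).copy()
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    components = []
    for _ in range(n_iter):
        peak = np.unravel_index(np.argmax(residual), residual.shape)
        amp = residual[peak]
        if amp < threshold:
            break
        components.append((peak, gain * amp))
        shifted = np.roll(psf, (peak[0] - cy, peak[1] - cx), axis=(0, 1))
        residual -= gain * amp * shifted
    return components, residual

# Synthetic example: two point "multipaths" blurred by a Gaussian beam pattern.
y, x = np.mgrid[-16:16, -16:16]
psf = np.exp(-(x**2 + y**2) / 8.0)
truth = np.zeros((32, 32))
truth[10, 12], truth[20, 25] = 1.0, 0.5
image = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
components, _ = clean_2d(image, psf)
print(sorted({(int(i), int(j)) for (i, j), _ in components}))   # [(10, 12), (20, 25)]
```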

  • Stochastic Discrete First-Order Algorithm for Feature Subset Selection

    Kota KUDO  Yuichi TAKANO  Ryo NOMURA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2020/04/13
    Vol: E103-D No:7
    Page(s): 1693-1702

    This paper addresses the problem of selecting a significant subset of candidate features to use for multiple linear regression. Bertsimas et al. [5] recently proposed the discrete first-order (DFO) algorithm to efficiently find near-optimal solutions to this problem. However, this algorithm is unable to escape from locally optimal solutions. To resolve this, we propose a stochastic discrete first-order (SDFO) algorithm for feature subset selection. In this algorithm, random perturbations are added to a sequence of candidate solutions as a means to escape from locally optimal solutions, which broadens the range of discoverable solutions. Moreover, we derive the optimal step size in the gradient-descent direction to accelerate convergence of the algorithm. We also make effective use of the L2-regularization term to improve the predictive performance of a resultant subset regression model. The simulation results demonstrate that our algorithm substantially outperforms the original DFO algorithm. Our algorithm was superior in predictive performance to lasso and forward stepwise selection as well.
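
    To make the idea concrete, here is a small, self-contained sketch of a stochastic discrete first-order iteration for best-subset selection: a gradient step on the least-squares loss, a random perturbation to escape local optima, and a hard-thresholding step back to k nonzero coefficients. The step size, noise level, and stopping rule are simplifications; the paper's derived optimal step size and L2-regularized variant are not reproduced.

```python
import numpy as np

def sdfo(X, y, k, noise=0.1, n_iter=200, seed=0):
    """Stochastic discrete first-order sketch for selecting k features in least squares."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    step = 1.0 / np.linalg.norm(X, 2) ** 2                     # 1/L for the smooth part of the loss
    w = np.zeros(d)
    best_w, best_loss = w, np.inf
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - step * grad + noise * rng.standard_normal(d)   # perturbed gradient step
        keep = np.argsort(np.abs(z))[-k:]                      # hard threshold: keep top-k entries
        w = np.zeros(d)
        w[keep] = z[keep]
        loss = 0.5 * np.sum((X @ w - y) ** 2)
        if loss < best_loss:
            best_w, best_loss = w.copy(), loss
    return best_w

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
true_w = np.zeros(20)
true_w[[2, 7, 11]] = [3.0, -2.0, 1.5]
y = X @ true_w + 0.1 * rng.standard_normal(100)
print(np.nonzero(sdfo(X, y, k=3))[0])   # ideally recovers the informative features {2, 7, 11}
```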

  • An Overview of De-Identification Techniques and Their Standardization Directions

    Heung Youl YOUM  

     
    INVITED PAPER

    Publicized: 2020/05/14
    Vol: E103-D No:7
    Page(s): 1448-1461

    De-identification [1]-[5], [30]-[71] is the process that organizations can use to remove personal information from data that they collect, use, archive, and share with other organizations. It is recognized as an important tool for organizations to balance the use of data against the privacy protection of personal information. Its objective is to remove the association between a set of identifying attributes and the data principal, where an identifying attribute is an attribute in a dataset that can contribute to uniquely identifying a data principal within a specific operational context, and a data principal is the entity to which the data relates. This paper provides an overview of de-identification techniques, including the data release models. It also describes the current standardization activities of the standards development organizations with respect to de-identification, and it suggests future standardization directions, including potential future work items.
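
    As a minimal illustration of two techniques such overviews typically cover, the sketch below suppresses a direct identifier and generalizes quasi-identifiers (age bands, truncated postal codes) on invented toy records; the column names, bins, and data are assumptions made purely for the example and are not drawn from the paper.

```python
import pandas as pd

# Invented toy records: one direct identifier (name) and two quasi-identifiers (age, zip).
records = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol", "Dave"],
    "age": [34, 37, 52, 58],
    "zip": ["14850", "14853", "60601", "60614"],
    "diagnosis": ["flu", "flu", "asthma", "flu"],
})

deidentified = pd.DataFrame({
    # Generalization: coarsen quasi-identifiers instead of publishing exact values.
    "age_band": pd.cut(records["age"], bins=[0, 40, 60, 120], labels=["<=40", "41-60", ">60"]),
    "zip3": records["zip"].str[:3] + "**",
    "diagnosis": records["diagnosis"],
})  # Suppression: the direct identifier "name" is simply dropped.
print(deidentified)
```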

  • Byzantine-Tolerant Gathering of Mobile Agents in Asynchronous Arbitrary Networks with Authenticated Whiteboards

    Masashi TSUCHIDA  Fukuhito OOSHITA  Michiko INOUE  

     
    PAPER-Dependable Computing

    Publicized: 2020/04/15
    Vol: E103-D No:7
    Page(s): 1672-1682

    We propose two algorithms for the gathering of k mobile agents in asynchronous Byzantine environments. For both algorithms, we assume that graph topology is arbitrary, each node is equipped with an authenticated whiteboard, agents have unique IDs, and at most f weakly Byzantine agents exist. Here, a weakly Byzantine agent can make arbitrary behavior except falsifying its ID. Under these assumptions, the first algorithm achieves a gathering without termination detection in O(m+fn) moves per agent (m is the number of edges and n is the number of nodes). The second algorithm achieves a gathering with termination detection in O(m+fn) moves per agent by additionally assuming that agents on the same node are synchronized, $f

  • Locally Repairable Codes from Cyclic Codes and Generalized Quadrangles

    Qiang FU  Ruihu LI  Luobin GUO  

     
    LETTER-Coding Theory

    Vol: E103-A No:7
    Page(s): 947-950

    Locally repairable codes (LRCs) with locality r and availability t are a class of codes that can recover data from erasures by accessing t other disjoint repair groups, where every group contains at most r other code symbols. This letter investigates constructions of LRCs derived from cyclic codes and generalized quadrangles. On the one hand, two classes of cyclic LRCs with given locality m-1 and availability em are proposed via the trace function. Our LRCs have the same locality, availability, minimum distance, and code rate, but have short length and low dimension. On the other hand, an LRC with $(r,t)=(2,(p+1)\lfloor\frac{s}{2}\rfloor)$ is presented, based on sets of points in PG(k, q) that form generalized quadrangles of order (s, p). For k=3, 4, 5, LRCs with r=2 and different t are determined.

  • A Semantic Similarity Supervised Autoencoder for Zero-Shot Learning

    Fengli SHEN  Zhe-Ming LU  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2020/03/03
    Vol: E103-D No:6
    Page(s): 1419-1422

    This letter proposes an autoencoder model supervised by semantic similarity for zero-shot learning. With the help of semantic similarity vectors of seen and unseen classes and a classification branch, our experimental results on two datasets are 7.3% and 4% better than the state-of-the-art in conventional zero-shot learning in terms of averaged top-1 accuracy.

  • Non-Steady Trading Day Detection Based on Stock Index Time-Series Information

    Hideaki IWATA  

     
    PAPER-Numerical Analysis and Optimization

    Vol: E103-A No:6
    Page(s): 821-828

    Outlier detection in a data set is very important for proper data mining. In this paper, we propose a method for efficiently detecting outliers by performing cluster analysis with the DS algorithm, an improvement of the k-means algorithm. This method detects outliers more simply than traditional methods, and the detected outliers can quantitatively indicate their “degree of outlier”. Using this method, we detect abnormal trading days from the OHLCs of S&P500 and FTSA, which are typical worldwide stock indexes, from the beginning of 2005 to the end of 2015. These days are defined as non-steady trading days, and the conditions under which markets become non-steady are mined as new knowledge. Applying the mined knowledge to OHLCs from the beginning of 2016 to the end of 2018, we predict the non-steady trading days during that period. By verifying the predicted content, we show that appropriate knowledge has been successfully mined and demonstrate the effectiveness of the proposed outlier detection method. Furthermore, we cross-reference and comparatively analyze the results of applying this method to multiple stock indexes. This analysis makes it possible to visualize when and where social and economic impacts occur and how they propagate around the world, which is one of the new applications of this method.
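
    The sketch below conveys the general idea with plain k-means (scikit-learn) rather than the paper's DS algorithm: cluster a "steady" window of per-day feature vectors, then score later days by their distance to the nearest centroid and standardize that distance as a rough "degree of outlier". The features, cluster count, and scoring rule are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy per-day feature vectors (e.g., derived from OHLC values); fit clusters on a "steady" window.
steady = rng.normal(0.0, 1.0, (200, 4))
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(steady)

# Score later days by the distance to the nearest cluster centroid.
later = np.vstack([rng.normal(0.0, 1.0, (48, 4)), [[8, 8, 8, 8], [-9, 7, -8, 9]]])
dists = np.linalg.norm(later[:, None, :] - km.cluster_centers_[None, :, :], axis=2).min(axis=1)

# "Degree of outlier": how many standard deviations a day's distance sits above the average distance.
degree = (dists - dists.mean()) / dists.std()
print(np.argsort(degree)[-2:])   # the two injected extreme days (indices 48 and 49) rank highest
```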

  • Improving the Accuracy of Spectrum-Based Fault Localization Using Multiple Rules

    Rongcun WANG  Shujuan JIANG  Kun ZHANG  Qiao YU  

     
    PAPER-Software Engineering

    Publicized: 2020/02/26
    Vol: E103-D No:6
    Page(s): 1328-1338

    Software fault localization, one of the essential activities in program debugging, helps software developers identify the locations of faults in a program, thus reducing the cost of debugging. Spectrum-based fault localization (SBFL), one of the representative localization techniques, has been intensively studied. The technique calculates the probability that each program entity is faulty using a suspiciousness formula. The accuracy of SBFL is not always as satisfactory as expected because it neglects the contextual information of statement executions. Therefore, we propose 5 rules, i.e., random, maximum coverage, minimum coverage, maximum distance, and minimum distance, to further improve the accuracy of SBFL. The 5 rules can effectively use the contextual information of statement executions. Moreover, they can be implemented on top of traditional SBFL techniques using suspiciousness formulas with little effort. We empirically evaluated the impact of the rules on 17 suspiciousness formulas. The results show that all 5 rules can significantly improve the ranking of faulty statements. In particular, the improvement is more remarkable for faults that are difficult to locate. In general, the rules effectively reduce the number of statements examined by an average of more than 19%. Compared with the other rules, the minimum coverage rule generates better results, which indicates that applying the test case with the minimum coverage capability is more effective for fault localization.
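
    For readers unfamiliar with SBFL, the sketch below computes one widely used suspiciousness formula (Ochiai) from a small, invented coverage spectrum; the proposed coverage- and distance-based rules from the paper are not reproduced here, and the example matrix is purely illustrative.

```python
import numpy as np

def ochiai(coverage, failed):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep)).

    coverage[t, s] = 1 if test t executes statement s; failed[t] = 1 if test t fails.
    ef/ep count failing/passing tests covering s; nf counts failing tests missing s.
    """
    failed = failed.astype(bool)
    ef = coverage[failed].sum(axis=0)
    ep = coverage[~failed].sum(axis=0)
    nf = failed.sum() - ef
    denom = np.sqrt((ef + nf) * (ef + ep))
    return np.divide(ef, denom, out=np.zeros_like(denom), where=denom > 0)

# 4 tests x 5 statements; statement 2 is covered by both failing tests and only one passing test.
coverage = np.array([[1, 1, 1, 0, 0],
                     [1, 0, 1, 1, 0],
                     [0, 0, 1, 0, 1],
                     [1, 1, 0, 0, 1]])
failed = np.array([1, 0, 1, 0])
print(ochiai(coverage, failed))   # statement 2 gets the highest score (about 0.82)
```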

  • Survivable Virtual Network Topology Protection Method Based on Particle Swarm Optimization

    Guangyuan LIU  Daokun CHEN  

     
    LETTER-Information Network

    Publicized: 2020/03/04
    Vol: E103-D No:6
    Page(s): 1414-1418

    Survivable virtual network embedding (SVNE) is one of the major challenges of network virtualization. In order to improve the utilization of substrate network (SN) resources while guaranteeing virtual network (VN) topology connectivity under link failures in the SN, we first establish an Integer Linear Programming (ILP) model for this problem under the assumption that the SN supports path splitting. We then design a novel survivable VN topology protection method based on particle swarm optimization (VNE-PSO), which redefines the parameters and related operations of particles and uses the embedding overhead as the fitness function. Simulation results show that the solution significantly improves the long-term average revenue of the SN and the acceptance rate of VN requests, and reduces the embedding time compared with existing work.
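
    The abstract redefines the PSO particles and operations for the embedding problem but does not spell them out, so the sketch below only shows a generic particle swarm optimization loop with a placeholder objective standing in for the embedding overhead; the inertia and acceleration coefficients and the continuous encoding are assumptions made for illustration.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO over a continuous encoding; `fitness` plays the role of the embedding overhead."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v                                                   # position update
        val = np.array([fitness(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Placeholder "embedding overhead": a quadratic whose minimum is at (1, 2, 3).
best, cost = pso_minimize(lambda p: float(np.sum((p - np.array([1.0, 2.0, 3.0])) ** 2)), dim=3)
print(np.round(best, 2), round(cost, 4))
```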
