
Keyword Search Result


6161-6180hit(42807hit)

  • Revocable Group Signatures with Compact Revocation List Using Vector Commitments

    Shahidatul SADIAH  Toru NAKANISHI  

     
    PAPER-Cryptography and Information Security

      Vol:
    E100-A No:8
      Page(s):
    1672-1682

    A group signature allows any group member to anonymously sign a message. One of the important issues is efficient membership revocation. The scheme proposed by Libert et al. achieves O(1) signature and membership certificate size, O(1) signing and verification times, and O(log N) public key size, where N is the total number of members. However, the Revocation List (RL) data is large, due to the O(R) signatures in the RL, where R is the number of revoked members. The scheme proposed by Nakanishi et al. achieves a compact RL of O(R/T) signatures for any integer T, but it increases the membership certificate size to O(T). In this paper, we extend the scheme of Libert et al. by reducing the RL size to O(R/T), using a vector commitment to compress the revocation entries, while the membership certificate size remains O(1).
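    The compression idea can be illustrated with a toy vector commitment. The sketch below is a minimal Merkle-tree-based commitment over a block of T revocation entries, so a revocation list only needs to publish one commitment per block (O(R/T) values) while any single entry can still be opened with a short proof. This only illustrates the data-compression principle; the paper uses an algebraic vector commitment with the security properties required for group signatures, which this hash-based sketch does not provide, and all names and sizes here are placeholders.

```python
# Toy Merkle-tree vector commitment: commit to T revocation entries with one hash.
# Illustrative only; the paper's scheme uses an algebraic vector commitment.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(entries):
    """Return the root commitment and the full tree (kept by the issuer).
    Assumes len(entries) is a power of two for simplicity."""
    level = [h(e.encode()) for e in entries]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree

def open_entry(tree, index):
    """Produce a logarithmic-size opening proof for one entry."""
    proof = []
    for level in tree[:-1]:
        sibling = index ^ 1
        proof.append((index & 1, level[sibling]))
        index //= 2
    return proof

def verify(root, entry, proof):
    node = h(entry.encode())
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# A block of T = 8 revoked member identifiers (hypothetical values).
block = [f"member-{i}" for i in range(8)]
root, tree = commit(block)                 # one value stands in for 8 entries
proof = open_entry(tree, 5)
print(verify(root, "member-5", proof))     # True
print(verify(root, "member-6", proof))     # False
```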

  • Semi-Supervised Speech Enhancement Combining Nonnegative Matrix Factorization and Robust Principal Component Analysis

    Yonggang HU  Xiongwei ZHANG  Xia ZOU  Meng SUN  Yunfei ZHENG  Gang MIN  

     
    LETTER-Speech and Hearing

      Vol:
    E100-A No:8
      Page(s):
    1714-1719

    Nonnegative matrix factorization (NMF) is one of the most popular machine learning tools for speech enhancement. Supervised NMF-based speech enhancement updates the factorization iteratively using prior knowledge of the clean speech and noise spectral bases. However, in many real-world scenarios such prior training is not always possible. The traditional semi-supervised NMF (SNMF) overcomes this shortcoming, but its performance degrades. In this letter, without any prior knowledge of the speech and noise, we present an improved semi-supervised NMF-based speech enhancement algorithm combining NMF and robust principal component analysis (RPCA). In this approach, fixed speech bases are obtained offline from training samples chosen from a public dataset. The noise samples used for noise basis training, instead of being characterized a priori as usual, are obtained on the fly via the RPCA algorithm. This letter also studies whether the time length of the estimated noise samples affects the performance of the algorithm. Three metrics, PESQ, SDR and SNR, are used to evaluate the algorithms in experiments on TIMIT with 20 noise types at various signal-to-noise ratio levels. Extensive experimental results demonstrate the superiority of the proposed algorithm over competing speech enhancement algorithms.
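    As a rough illustration of the semi-supervised factorization step, the sketch below runs multiplicative-update NMF on a magnitude spectrogram while keeping a pre-trained speech basis matrix fixed and updating only the noise basis and both activation matrices. The spectrogram, the basis sizes, and the way noise frames would actually be supplied by RPCA are all placeholders; this is a minimal sketch of the general technique, not the authors' exact algorithm.

```python
# Minimal semi-supervised NMF sketch: speech bases W_s are fixed (trained offline),
# noise bases W_n and all activations are updated with multiplicative rules.
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-12

F, T = 257, 200                     # frequency bins x frames (placeholder sizes)
V = np.abs(rng.standard_normal((F, T)))          # stand-in magnitude spectrogram

K_s, K_n = 32, 8                    # numbers of speech / noise basis vectors
W_s = np.abs(rng.standard_normal((F, K_s)))      # pretend pre-trained speech bases
W_n = np.abs(rng.standard_normal((F, K_n)))      # noise bases, estimated online
H_s = np.abs(rng.standard_normal((K_s, T)))
H_n = np.abs(rng.standard_normal((K_n, T)))

for _ in range(100):                # Euclidean-distance multiplicative updates
    V_hat = W_s @ H_s + W_n @ H_n + eps
    H_s *= (W_s.T @ V) / (W_s.T @ V_hat + eps)
    H_n *= (W_n.T @ V) / (W_n.T @ V_hat + eps)
    V_hat = W_s @ H_s + W_n @ H_n + eps
    W_n *= (V @ H_n.T) / (V_hat @ H_n.T + eps)   # only the noise bases move

speech_estimate = W_s @ H_s         # a Wiener-style mask could refine this
print(np.linalg.norm(V - (W_s @ H_s + W_n @ H_n)) / np.linalg.norm(V))
```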

  • Backscatter Assisted Wireless Powered Communication Networks with Non-Orthogonal Multiple Access

    Bin LYU  Zhen YANG  Guan GUI  

     
    LETTER-Digital Signal Processing

      Vol:
    E100-A No:8
      Page(s):
    1724-1728

    This letter considers a backscatter-assisted wireless powered communication network (BAWPCN) with non-orthogonal multiple access (NOMA). The model consists of a hybrid access point (HAP) and multiple users, each of which can operate under either the backscatter or the harvest-then-transmit (HTT) protocol. To fully exploit time for information transmission, the users operating under the backscatter protocol are scheduled to reflect modulated signals during the first phase of the HTT protocol, which is dedicated to energy transfer. During the second phase, all users operating under the HTT protocol transmit information to the HAP simultaneously, since NOMA is adopted. Considering both short-term and long-term optimization problems that maximize the system throughput, the optimal resource allocation policies are obtained. Simulation results show that the proposed model significantly improves the system performance.
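    The time-allocation trade-off behind this model can be sketched numerically. In the toy script below, backscatter users reflect during the energy-transfer phase of length t0, HTT users then transmit simultaneously with NOMA in the remaining 1-t0 using the energy harvested during t0, and a grid search finds the t0 that maximizes the sum throughput. All channel gains, the energy-conversion efficiency, and the rate expressions are illustrative assumptions, not the letter's actual formulation or its optimal policy.

```python
# Toy throughput model for a backscatter-assisted WPCN with NOMA (grid search over
# the energy-transfer phase length t0). All parameters are illustrative assumptions.
import numpy as np

P_hap = 1.0                 # HAP transmit power (normalized)
eta = 0.6                   # energy harvesting efficiency
noise = 1e-3
h_backscatter = np.array([0.05, 0.03])          # backscatter users' effective gains
h_down = np.array([0.20, 0.15, 0.10])           # downlink gains of HTT users
g_up = np.array([0.18, 0.12, 0.08])             # uplink gains of HTT users

def throughput(t0):
    # Backscatter users reflect during the first phase of length t0.
    r_bs = t0 * np.log2(1.0 + P_hap * h_backscatter / noise).sum()
    # HTT users harvest during t0, then all transmit together (NOMA) during 1 - t0.
    energy = eta * P_hap * h_down * t0
    power = energy / max(1.0 - t0, 1e-9)
    r_htt = (1.0 - t0) * np.log2(1.0 + (power * g_up).sum() / noise)
    return r_bs + r_htt

grid = np.linspace(0.01, 0.99, 981)
best = grid[np.argmax([throughput(t) for t in grid])]
print(f"best t0 = {best:.2f}, sum throughput = {throughput(best):.3f} bit/s/Hz")
```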

  • Kernel CCA Based Transfer Learning for Software Defect Prediction

    Ying MA  Shunzhi ZHU  Yumin CHEN  Jingjing LI  

     
    LETTER-Software Engineering

      Publicized:
    2017/04/28
      Vol:
    E100-D No:8
      Page(s):
    1903-1906

    A transfer learning method, called Kernel Canonical Correlation Analysis plus (KCCA+), is proposed for heterogeneous cross-company defect prediction. By combining the kernel method with transfer learning techniques, this method improves the performance of the predictor and adapts better to nonlinearly separable scenarios. Experiments validate its effectiveness.
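    A minimal sketch of the core building block, regularized kernel CCA between paired source- and target-company feature views, is given below; it solves the standard generalized eigenproblem on centered kernel matrices. How the pairs are formed across heterogeneous metric sets, and the additional elements that make up KCCA+, are beyond this sketch; the kernel, regularization value, and data are placeholders.

```python
# Regularized kernel CCA sketch (standard dual formulation) for two paired views.
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.full((n, n), 1.0 / n)
    return H @ K @ H

def kcca(Kx, Ky, reg=1e-2, n_components=2):
    n = Kx.shape[0]
    Z = np.zeros((n, n))
    A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    B = np.block([[Kx @ Kx + reg * np.eye(n), Z],
                  [Z, Ky @ Ky + reg * np.eye(n)]])
    vals, vecs = eigh(A, B)                      # generalized symmetric eigenproblem
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:n, order], vecs[n:, order]      # dual coefficients alpha, beta

rng = np.random.default_rng(1)
n, d_src, d_tgt = 60, 10, 6                      # paired instances, differing metrics
latent = rng.standard_normal((n, 3))
X_src = np.tanh(latent @ rng.standard_normal((3, d_src)))   # source-company view
X_tgt = np.tanh(latent @ rng.standard_normal((3, d_tgt)))   # target-company view

Kx, Ky = center(rbf_kernel(X_src, X_src)), center(rbf_kernel(X_tgt, X_tgt))
alpha, beta = kcca(Kx, Ky)
proj_src, proj_tgt = Kx @ alpha, Ky @ beta       # shared space for a defect classifier
corr = np.corrcoef(proj_src[:, 0], proj_tgt[:, 0])[0, 1]
print(f"first canonical correlation (in-sample): {corr:.3f}")
```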

  • Indoor and Outdoor Experiments of Downlink Transmission at 15-GHz Band for 5G Radio Access

    Kiichi TATEISHI  Daisuke KURITA  Atsushi HARADA  Yoshihisa KISHIYAMA  Takehiro NAKAMURA  Stefan PARKVALL  Erik DAHLMAN  Johan FURUSKOG  

     
    PAPER-Antennas and Propagation

      Publicized:
    2017/02/08
      Vol:
    E100-B No:8
      Page(s):
    1238-1246

    This paper presents indoor and outdoor experiments that confirm 4-Gbps throughput based on 400-MHz bandwidth transmission when applying carrier aggregation (CA) with 4 component carriers (CCs) and 4-by-4 single-user multiple-input multiple-output (MIMO) multiplexing in the 15-GHz frequency band in the downlink of 5G cellular radio access. A new radio interface with time division duplexing (TDD) and radio access based on orthogonal frequency-division multiple access (OFDMA) is implemented in a 5G testbed to confirm ultra-high-speed transmission with low latency. The indoor experiment in an entrance hall shows that the peak throughput is 4.3Gbps in front of the base station (BS) antenna, where the reference signal received power (RSRP) is -40dBm, even though the channel correlation at the user equipment (UE) antenna is 0.8. The outdoor experiment in an open-space parking area shows that the peak throughput is 2.8Gbps in front of a BS antenna with a high RSRP, although rank 2 is selected due to the high channel correlation. The results also show that an average throughput of 2Gbps is achieved 120m from the BS antenna. In a courtyard enclosed by building walls, 3.6Gbps is achieved both in an outdoor-to-outdoor environment with a high RSRP and in an outdoor-to-indoor environment where the RSRP is lower due to the penetration loss of glass windows but the multipath-rich environment keeps the channel correlation low.

  • Node Selection for Belief Propagation Based Channel Equalization

    Mitsuyoshi HAGIWARA  Toshihiko NISHIMURA  Takeo OHGANE  Yasutaka OGAWA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/02/08
      Vol:
    E100-B No:8
      Page(s):
    1285-1292

    Recently, much progress has been made in the study of belief propagation (BP) based signal detection with large-scale factor graphs. When the BP algorithm is applied to equalization in a SISO multipath channel, the corresponding factor graph has many short loops and regular patterns in edge connections and strengths, so proper convergence may not be achieved. In general, the log-likelihood ratio (LLR) oscillates in ill-converged cases. Therefore, avoiding LLR oscillation is important for BP-based equalization. In this paper, we propose applying node selection (NS) to prevent the LLR from oscillating. NS virtually extends the loop length through a serial LLR update, so some performance improvement is expected. Simulation results show that the error floor is significantly reduced by NS in the uncoded case and that NS works very well in the coded case.

  • Decentralized Iterative User Association Method for (p,α)-Proportional Fair-Based System Throughput Maximization in Heterogeneous Cellular Networks

    Kenichi HIGUCHI  Yasuaki YUDA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/02/08
      Vol:
    E100-B No:8
      Page(s):
    1323-1333

    This paper proposes a new user association method to maximize the downlink system throughput in a cellular network, where the system throughput is defined based on (p,α)-proportional fairness. The proposed method assumes a fully decentralized approach, which is practical in a real system as complicated inter-base station (BS) cooperation is not required. In the proposed method, each BS periodically and individually broadcasts supplemental information regarding its bandwidth allocation to newly connected users. Assisted by this information, each user calculates the expected throughput that would be obtained by connecting to each BS. Each user terminal feeds back the metric for user association to the currently best BS, which represents the relative increase in throughput obtained through re-association to that BS. Based on the reported metrics from multiple users, each BS individually updates the user association. The proposed method gives a general framework for optimal user association under (p,α)-proportional fairness-based system throughput maximization and is especially effective in heterogeneous cellular networks where low transmission-power pico BSs overlay a high transmission-power macro BS. Computer simulation results show that the proposed method maximizes the system throughput from the viewpoint of the given (p,α)-proportional fairness.
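    The iterative structure of such a method can be mimicked with a small greedy simulation. In the sketch below, each BS shares its bandwidth equally among its users, each user computes the proportional-fair (α = 1, equal weights) utility gain it would obtain by re-associating to another BS, and in each round the association with the largest positive gain is updated. The rates, loads, and update rule are simplified placeholders; the paper's broadcast information, reported metric, and (p,α) generalization are richer than this.

```python
# Greedy toy of decentralized user association for proportional fairness (alpha = 1).
# Each BS shares bandwidth equally among its users; a user's throughput is
# spectral_efficiency / load. All numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_bs = 12, 3
se = rng.uniform(0.5, 6.0, size=(n_users, n_bs))     # bit/s/Hz of user u at BS b
assoc = se.argmax(axis=1)                             # start with max-SINR association

def loads(assoc):
    return np.maximum(np.bincount(assoc, minlength=n_bs), 1)

for _ in range(50):
    load = loads(assoc)
    best_gain, best_move = 0.0, None
    for u in range(n_users):
        cur = assoc[u]
        cur_util = np.log(se[u, cur] / load[cur])
        for b in range(n_bs):
            if b == cur:
                continue
            # User u estimates its utility if it joined BS b (its load grows by one).
            gain = np.log(se[u, b] / (load[b] + 1)) - cur_util
            if gain > best_gain:
                best_gain, best_move = gain, (u, b)
    if best_move is None:                             # no user reports a positive gain
        break
    u, b = best_move
    assoc[u] = b

print("final association:", assoc)
print("sum log-throughput:", sum(np.log(se[u, assoc[u]] / loads(assoc)[assoc[u]])
                                 for u in range(n_users)))
```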

  • Investigation on Non-Orthogonal Multiple Access with Reduced Complexity Maximum Likelihood Receiver and Dynamic Resource Allocation

    Yousuke SANO  Kazuaki TAKEDA  Satoshi NAGATA  Takehiro NAKAMURA  Xiaohang CHEN  Anxin LI  Xu ZHANG  Jiang HUILING  Kazuhiko FUKAWA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/02/08
      Vol:
    E100-B No:8
      Page(s):
    1301-1311

    Non-orthogonal multiple access (NOMA) is a promising multiple access scheme for further improving the spectrum efficiency compared to orthogonal multiple access (OMA) in 5th Generation (5G) mobile communication systems. As inter-user interference cancellers for NOMA, two kinds of receiver structures are considered: the reduced-complexity maximum likelihood receiver (R-ML) and the codeword level interference canceller (CWIC). In this paper, we show that the R-ML is superior to the CWIC in terms of scheduling flexibility. In addition, we propose a link-to-system (L2S) mapping scheme for the R-ML to conduct a system level evaluation, and show that the proposed scheme accurately predicts the block error rate (BLER) performance of the R-ML. The proposed L2S mapping scheme also demonstrates that the system level throughput of the R-ML is higher than that of the CWIC thanks to its scheduling flexibility.

  • Iterative Reduction of Out-of-Band Power and Peak-to-Average Power Ratio for Non-Contiguous OFDM Systems Based on POCS

    Yanqing LIU  Liang DONG  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Publicized:
    2017/02/17
      Vol:
    E100-B No:8
      Page(s):
    1489-1497

    Non-contiguous orthogonal frequency-division multiplexing (OFDM) is a promising technique for cognitive radio systems. The secondary users transmit on the selected subcarriers to avoid the frequencies being used by the primary users. However, the out-of-band power (OBP) of the OFDM-modulated tones induces interference to the primary users. Another major drawback of OFDM-based systems is their high peak-to-average power ratio (PAPR). In this paper, algorithms are proposed to jointly reduce the OBP and the PAPR of non-contiguous OFDM based on the method of alternating projections onto convex sets (POCS). Several OFDM subcarriers are selected to accommodate the adjusting weights for OBP and PAPR reduction. The frequency-domain OFDM symbol is projected onto two convex sets defined by the OBP requirements and the PAPR limits. Each projection iteration solves a convex optimization problem, and the projection onto the set constrained by the OBP requirement can be computed with an iterative algorithm of low computational complexity. Simulation results show good joint reduction of the OBP and the PAPR, and the proposed algorithms converge within a few iterations.
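    The alternating-projection idea can be sketched for the PAPR part alone. In the toy below, one convex set is the set of time-domain signals whose peak amplitude does not exceed a threshold (projection = clipping), and the other is the set of frequency-domain symbols that keep the original data on the data subcarriers, so adjustments live only on a few reserved subcarriers (projection = restoring the data tones). The OBP constraint of the paper, which adds a second frequency-domain set built from the emission requirements, is omitted here; subcarrier choices and thresholds are arbitrary.

```python
# POCS-style PAPR reduction toy: alternate between (1) clipping the time-domain
# signal (PAPR set) and (2) restoring the data subcarriers so that only reserved
# subcarriers carry the adjustment. The OBP projection of the paper is omitted.
import numpy as np

rng = np.random.default_rng(3)
N = 256
data_idx = np.arange(16, 240)                     # active subcarriers (arbitrary)
reserved_idx = np.array([16, 48, 80, 112, 144, 176, 208, 239])
data_idx = np.setdiff1d(data_idx, reserved_idx)

X = np.zeros(N, dtype=complex)
qpsk = (rng.integers(0, 2, data_idx.size) * 2 - 1 +
        1j * (rng.integers(0, 2, data_idx.size) * 2 - 1)) / np.sqrt(2)
X[data_idx] = qpsk

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, ratio=1.4):
    a = ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.abs(x)
    return np.where(mag > a, a * x / np.maximum(mag, 1e-12), x)  # PAPR-set projection

Y = X.copy()
print(f"PAPR before: {papr_db(np.fft.ifft(X)):.2f} dB")
for _ in range(20):
    y = clip(np.fft.ifft(Y))                      # project in the time domain
    Y = np.fft.fft(y)
    Y[data_idx] = X[data_idx]                     # project back: data tones unchanged
    mask = np.ones(N, bool)
    mask[data_idx] = False
    mask[reserved_idx] = False
    Y[mask] = 0                                   # keep unused tones empty
print(f"PAPR after:  {papr_db(np.fft.ifft(Y)):.2f} dB")
```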

  • Experimental Investigation of Space Division Multiplexing on Massive Antenna Systems for Wireless Entrance

    Kazuki MARUTA  Atsushi OHTA  Satoshi KUROSAKI  Takuto ARAI  Masataka IIZUKA  

     
    PAPER-Antennas and Propagation

      Publicized:
    2017/01/20
      Vol:
    E100-B No:8
      Page(s):
    1436-1448

    This paper experimentally verifies the potential of higher-order space division multiplexing in line-of-sight (LOS) channels for multiuser massive MIMO. We previously proposed an inter-user interference (IUI) cancellation scheme and a simplified user scheduling method for Massive Antenna Systems for Wireless Entrance (MAS-WE). In order to verify the effectiveness of the proposed techniques, channel state information (CSI) for a 1×32 SIMO channel was measured in a real propagation environment with simplified test equipment. Evaluations of the measured CSI data confirm the effectiveness of our proposals: good equal gain transmission (EGT) performance, reduced spatial correlation with an enlarged angular gap between users, and quite small channel state fluctuation. Link-level simulations show that the simple IUI cancellation method is stable under practical conditions. The degradation in symbol error rate with the measured CSI, relative to that obtained with the theoretical LOS channel model, is insignificant.

  • Exact Intersymbol Interference Analysis for Upsampled OFDM Signals with Symbol Timing Errors

    Heon HUH  Feng LU  James V. KROGMEIER  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/01/20
      Vol:
    E100-B No:8
      Page(s):
    1472-1479

    In OFDM systems, link performance depends heavily on the estimation of symbol-timing and frequency offsets, and sensitivity to these estimates is a major drawback of OFDM. Timing errors destroy the orthogonality of OFDM signals and lead to inter-symbol interference (ISI) and inter-carrier interference (ICI). The interference due to timing errors can be exploited as a metric for symbol-timing synchronization. In this paper, we propose a novel method to extract the interference components using a DFT of the upsampled OFDM signals. A mathematical analysis and formulation of the dependence of the interference on timing errors are given. Numerical analysis shows that the proposed interference estimation is robust against channel dispersion.
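    The effect being analyzed can be reproduced numerically. The short script below builds two consecutive CP-OFDM symbols, shifts the receiver FFT window by a timing error, and measures the resulting signal-to-interference ratio against the ideal phase-rotated constellation; an early window that stays inside the cyclic prefix only rotates the constellation, while a late window creates ISI and ICI. Parameters (FFT size, CP length, offsets) are arbitrary, and the paper's upsampling/DFT-based extraction is not reproduced here.

```python
# Measure the ISI/ICI caused by a symbol-timing error in CP-OFDM (toy experiment).
import numpy as np

rng = np.random.default_rng(4)
N, cp = 64, 8

def ofdm_symbol():
    X = (rng.integers(0, 2, N) * 2 - 1 + 1j * (rng.integers(0, 2, N) * 2 - 1)) / np.sqrt(2)
    x = np.fft.ifft(X) * np.sqrt(N)
    return X, np.concatenate([x[-cp:], x])        # prepend the cyclic prefix

X1, s1 = ofdm_symbol()
_, s2 = ofdm_symbol()
stream = np.concatenate([s1, s2])

def sir_db(timing_error):
    # The ideal FFT window for symbol 1 starts at index cp. A negative error (early
    # window) stays inside the CP and only rotates the constellation; a positive
    # error slides into the next symbol and creates ISI/ICI.
    start = cp + timing_error
    Y = np.fft.fft(stream[start:start + N]) / np.sqrt(N)
    k = np.arange(N)
    ideal = X1 * np.exp(1j * 2 * np.pi * k * timing_error / N)
    err = Y - ideal
    return 10 * np.log10(np.sum(np.abs(ideal) ** 2) / (np.sum(np.abs(err) ** 2) + 1e-30))

for tau in (-8, -2, 2, 10):
    print(f"timing error {tau:3d} samples -> SIR {sir_db(tau):7.1f} dB")
```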

  • Subarray Based Low Computational Design of Multiuser MIMO System Adopting Massive Transmit Array Antenna

    Tetsuki TANIGUCHI  Yoshio KARASAWA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/02/08
      Vol:
    E100-B No:8
      Page(s):
    1205-1214

    Massive multiple-input multiple-output (MIMO) communication systems offer high-rate transmission and/or support for a large number of users by invoking the power of a large array antenna, but one of their problems is the heavy computational burden of design and signal processing. Assuming a large array on the transmitter side and far fewer users than the maximum supportable number, this paper first presents a subarray-based design approach for MIMO systems with a low computational load, taking into account efficient subarray grouping to realize higher performance: the large transmit array is first divided into subarrays based on channel gain or channel correlation, block diagonalization is then applied to each of them, and finally the large array weight is reconstructed by maximal ratio combining (MRC). In addition, an extension of the proposed method to a two-stage design is studied in order to support a larger number of users; in the process of reconstructing the large array, subarrays are again divided into groups, and block diagonalization is applied to those subarray groups. Computer simulations show that both the channel-gain-based and the correlation-based grouping strategies are effective under certain conditions, and that the number of supported users can be increased by the two-stage design if a certain level of performance degradation is acceptable.
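    The per-subarray design step can be sketched for single-antenna users, where block diagonalization reduces to projecting each user's channel onto the null space of the other users' channels inside that subarray. The sketch below forms such precoders for each subarray and checks that inter-user interference vanishes; the grouping of antennas by gain or correlation and the MRC-based recombination into one large array weight are left out, and all channels are random placeholders.

```python
# Block diagonalization inside each subarray for single-antenna users (toy check).
import numpy as np

rng = np.random.default_rng(5)
M, K, n_sub = 64, 4, 4                    # transmit antennas, users, subarrays
M_sub = M // n_sub
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def bd_precoders(H_sub):
    """Per-user unit-norm precoders lying in the null space of the other users."""
    K, M_sub = H_sub.shape
    W = np.zeros((M_sub, K), dtype=complex)
    for k in range(K):
        others = np.delete(H_sub, k, axis=0)
        _, _, Vh = np.linalg.svd(others)
        null = Vh[others.shape[0]:].conj().T          # basis of the null space
        w = null @ (null.conj().T @ H_sub[k].conj())  # project user k's channel
        W[:, k] = w / np.linalg.norm(w)
    return W

signal, interference = 0.0, 0.0
for s_idx in range(n_sub):
    cols = slice(s_idx * M_sub, (s_idx + 1) * M_sub)
    H_sub = H[:, cols]
    W = bd_precoders(H_sub)
    G = H_sub @ W                                     # K x K effective channel
    signal += np.sum(np.abs(np.diag(G)) ** 2)
    interference += np.sum(np.abs(G - np.diag(np.diag(G))) ** 2)

print(f"desired-signal power: {signal:.3f}")
print(f"inter-user leakage  : {interference:.3e}")    # ~1e-30, i.e. nulled
```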

  • APPraiser: A Large Scale Analysis of Android Clone Apps

    Yuta ISHII  Takuya WATANABE  Mitsuaki AKIYAMA  Tatsuya MORI  

     
    PAPER-Program Analysis

      Publicized:
    2017/05/18
      Vol:
    E100-D No:8
      Page(s):
    1703-1713

    Android is one of the most popular mobile device platforms. However, since Android apps can be disassembled easily, attackers inject additional advertisements or malicious code into original apps and redistribute them, and there are a non-negligible number of such repackaged apps. We generally call those malicious repackaged apps “clones.” However, there are also apps that are not clones but are similar to each other; we call such apps “relatives.” In this work, we developed a framework called APPraiser that extracts similar apps from a large dataset and classifies them into clones and relatives. We used the APPraiser framework to study over 1.3 million apps collected from both official and third-party marketplaces. Our extensive analysis revealed the following findings: in the official marketplace, 79% of similar apps were attributed to relatives, while in the third-party marketplace, 50% of similar apps were attributed to clones. The majority of relatives are apps developed by prolific developers in both marketplaces. We also found that in the third-party market, of the clones that were originally published in the official market, 76% are malware.

  • A Novel Channel Assignment Method to Ensure Deadlock-Freedom for Deterministic Routing

    Ryuta KAWANO  Hiroshi NAKAHARA  Seiichi TADE  Ikki FUJIWARA  Hiroki MATSUTANI  Michihiro KOIBUCHI  Hideharu AMANO  

     
    PAPER-Computer System

      Publicized:
    2017/05/19
      Vol:
    E100-D No:8
      Page(s):
    1798-1806

    Inter-switch networks for HPC systems and data centers can be improved by applying random shortcut topologies with a reduced number of hops. With minimal routing in such networks, however, deadlock-freedom is not guaranteed. Multiple Virtual Channels (VCs) are used to avoid this problem, but previous works do not provide good trade-offs between the number of required VCs and the time and memory complexities of the algorithm. In this work, a novel and fast algorithm, named ACRO, is proposed to provide arbitrary routing functions with deadlock-freedom while consuming only a small number of VCs. A heuristic for reducing the number of VCs is implemented with a hash table, which improves the scalability of the algorithm compared with our previous work. Moreover, experimental results show that ACRO can reduce the average number of VCs by up to 63% compared with a conventional algorithm that has the same time complexity. Furthermore, ACRO reduces the time complexity by a factor of O(|N|⋅log|N|) compared with another conventional algorithm that requires almost the same number of VCs.
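    A generic, much simpler version of the underlying idea, assigning each hop of every route to a virtual-channel layer so that each layer's channel dependency graph stays acyclic and the layer index never decreases along a route, can be sketched as below. This greedy layering is not the ACRO algorithm and has none of its complexity guarantees or hash-table optimization; it only illustrates what endowing arbitrary deterministic routes with deadlock-freedom via VC assignment means, on fabricated example routes.

```python
# Greedy VC layering: keep one channel-dependency graph per VC layer and force the
# layer index to be non-decreasing along every route. Each layer stays acyclic and
# cross-layer dependencies only go upward, which is sufficient for deadlock-freedom.
# Illustrative only; this is not the ACRO algorithm.
import networkx as nx

def assign_vcs(routes):
    layers = [nx.DiGraph()]                      # channel-dependency graph per VC
    hop_vc = {}                                  # (route id, hop index) -> VC
    for rid, route in enumerate(routes):
        channels = list(zip(route, route[1:]))   # directed links used by the route
        vc = 0
        hop_vc[(rid, 0)] = vc
        for hop in range(1, len(channels)):
            c_prev, c_cur = channels[hop - 1], channels[hop]
            g = layers[vc]
            g.add_edge(c_prev, c_cur)            # packet holds c_prev, requests c_cur
            if not nx.is_directed_acyclic_graph(g):
                g.remove_edge(c_prev, c_cur)     # would close a cycle: escalate VC
                vc += 1
                if vc == len(layers):
                    layers.append(nx.DiGraph())
            hop_vc[(rid, hop)] = vc
    return hop_vc, len(layers)

# Hypothetical deterministic routes (node sequences) on a small network.
routes = [
    [0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1], [3, 0, 1, 2],
    [0, 2, 3, 1], [3, 1, 0, 2],
]
hop_vc, n_vcs = assign_vcs(routes)
print(f"virtual channels required: {n_vcs}")
for (rid, hop), vc in sorted(hop_vc.items()):
    print(f"route {rid}, hop {hop}: VC {vc}")
```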

  • Incidence Rate Prediction of Diabetes from Medical Checkup Data

    Masakazu MORIMOTO  Naotake KAMIURA  Yutaka HATA  Ichiro YAMAMOTO  

     
    PAPER-Soft Computing

      Publicized:
    2017/05/19
      Vol:
    E100-D No:8
      Page(s):
    1642-1646

    To promote effective guidance based on health checkup results, this paper predicts the likelihood of developing lifestyle-related diseases from health checkup data. We focus on the fluctuation of the hemoglobin A1c (HbA1c) value, which is deeply connected with the onset of diabetes. We predict increases in the HbA1c value and examine which health checkup items play an important role in HbA1c fluctuation. Our experimental results show that, when subjects are classified according to their gender and triglyceride (TG) fluctuation value, the risk of diabetes onset can be evaluated effectively for each class.

  • Fine-Grained Analysis of Compromised Websites with Redirection Graphs and JavaScript Traces

    Yuta TAKATA  Mitsuaki AKIYAMA  Takeshi YAGI  Takeshi YADA  Shigeki GOTO  

     
    PAPER-Internet Security

      Publicized:
    2017/05/18
      Vol:
    E100-D No:8
      Page(s):
    1714-1728

    An incident response organization such as a CSIRT contributes to preventing the spread of malware infection by analyzing compromised websites and sending abuse reports with detected URLs to webmasters. However, abuse reports that contain only URLs are not sufficient for cleaning up the websites. In addition, it is difficult to analyze malicious websites across different client environments because these websites change their behavior depending on the client environment. To expedite compromised-website clean-up, it is important to provide fine-grained information such as malicious URL relations, the precise position of compromised web content, and the targeted range of client environments. In this paper, we propose a new method of constructing a redirection graph with context, such as which web content redirects to malicious websites. The proposed method analyzes a website in a multi-client environment to identify which client environments are exposed to threats. We evaluated our system using crawling datasets of approximately 2,000 compromised websites. The results show that our system successfully identified malicious URL relations and compromised web content, and reduced the number of URLs and the amount of web content to be analyzed to 15.0% and 0.8%, respectively, a volume small enough for incident responders to handle. Furthermore, by leveraging the target information, it identified the targeted range of client environments for 30.4% of the websites as well as the vulnerabilities exploited by the malicious websites. This fine-grained analysis by our system will contribute to improving the daily work of incident responders.
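    As a minimal illustration of a redirection graph with context, the sketch below stores each observed redirection as a directed edge labeled with the web content that triggered it and the client environment in which it was observed, then lists the URLs reached only under one client profile. The crawl records are fabricated placeholders; the detection and analysis pipeline of the paper is far more involved.

```python
# Build a redirection graph whose edges carry context: which web content triggered
# the redirection and under which client environment it was observed.
import networkx as nx

# Hypothetical multi-client crawl records:
# (source URL, destination URL, triggering content, client environment).
observations = [
    ("http://example.com/",       "http://example.com/lib.js", "script tag",  "IE8"),
    ("http://example.com/",       "http://example.com/lib.js", "script tag",  "Chrome"),
    ("http://example.com/lib.js", "http://evil.example/redir", "injected JS", "IE8"),
    ("http://evil.example/redir", "http://evil.example/ek",    "iframe",      "IE8"),
]

G = nx.MultiDiGraph()
for src, dst, content, client in observations:
    G.add_edge(src, dst, content=content, client=client)

# URLs reached only under the IE8 profile hint at environment-targeted threats.
per_client = {}
for src, dst, data in G.edges(data=True):
    per_client.setdefault(data["client"], set()).add(dst)
others = set().union(*(urls for c, urls in per_client.items() if c != "IE8"))
print("URLs reached only by IE8:", sorted(per_client.get("IE8", set()) - others))
for src, dst, data in G.edges(data=True):
    print(f"{src} -> {dst}  [{data['content']}, {data['client']}]")
```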

  • Node-to-Node Disjoint Paths Problem in Möbius Cubes

    David KOCIK  Keiichi KANEKO  

     
    PAPER-Dependable Computing

      Publicized:
    2017/04/25
      Vol:
    E100-D No:8
      Page(s):
    1837-1843

    The Möbius cube is a variant of the hypercube. Its advantage is that it can connect the same number of nodes as a hypercube with almost half the diameter. We propose an algorithm that solves the node-to-node disjoint paths problem in n-Möbius cubes in time polynomial in n. We provide a proof of correctness of the algorithm and estimate that the time complexity is O(n²) and the maximum path length is 3n-5.
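    For concreteness, the sketch below builds a small 0-Möbius cube from the usual neighbor rule (flip bit i if bit i+1 is 0, flip bits i down to 0 if bit i+1 is 1, with the virtual bit x_n fixed by the variant) and extracts node-disjoint paths with networkx's generic max-flow-based routine. This is only a structural reference check; it does not implement the paper's O(n²) algorithm or guarantee its 3n-5 path-length bound.

```python
# Build an n-dimensional 0-Mobius cube and find node-disjoint paths with a generic
# max-flow routine (not the paper's dedicated O(n^2) algorithm).
import networkx as nx

def mobius_neighbors(x, n, variant=0):
    """Neighbors of node x (0 <= x < 2**n); variant 0/1 fixes the virtual bit x_n."""
    for i in range(n):
        upper = variant if i == n - 1 else (x >> (i + 1)) & 1
        if upper == 0:
            yield x ^ (1 << i)                    # flip bit i only
        else:
            yield x ^ ((1 << (i + 1)) - 1)        # flip bits i, i-1, ..., 0

def mobius_cube(n, variant=0):
    g = nx.Graph()
    for x in range(2 ** n):
        for y in mobius_neighbors(x, n, variant):
            g.add_edge(x, y)
    return g

n = 5
G = mobius_cube(n)
src, dst = 0, 2 ** n - 1
paths = list(nx.node_disjoint_paths(G, src, dst))
print(f"{len(paths)} internally node-disjoint paths from {src} to {dst}:")
for p in paths:
    print("  ", p)
longest = max(len(p) - 1 for p in paths)
print(f"longest path found: {longest} hops "
      f"(the paper's dedicated algorithm guarantees at most {3 * n - 5})")
```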

  • Feature Selection Based on Modified Bat Algorithm

    Bin YANG  Yuliang LU  Kailong ZHU  Guozheng YANG  Jingwei LIU  Haibo YIN  

     
    PAPER-Pattern Recognition

      Publicized:
    2017/05/01
      Vol:
    E100-D No:8
      Page(s):
    1860-1869

    The rapid development of information techniques has led to more and more high-dimensional datasets, making classification more difficult. However, not all of the features are useful for classification, and some of them may even lower the classification accuracy. Feature selection, which aims to reduce the dimensionality of datasets, is a useful technique for solving classification problems. In this paper, we propose a modified bat algorithm (BA) for feature selection, called MBAFS, using an SVM. Several mechanisms are designed to avoid premature convergence. On the one hand, in order to maintain the diversity of bats, they are guided by a combination of a random bat and the global best bat. On the other hand, to enhance the ability to escape from local optima, MBAFS employs a mutation mechanism when the algorithm is trapped in a local optimum. Furthermore, the performance of MBAFS was tested on twelve benchmark datasets and compared with other BA-based algorithms and some well-known BPSO-based algorithms. Experimental results indicate that the proposed algorithm outperforms the other methods. The comparison details also show that MBAFS is competitive in terms of computational time.
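    A bare-bones binary bat algorithm wrapped around an SVM, the general family of methods the paper improves on, can be sketched as follows. It uses a sigmoid transfer to binarize positions and cross-validated accuracy (with a small penalty on the number of selected features) as fitness. The diversity-guidance and mutation mechanisms that define MBAFS are not implemented here, and all hyperparameters and the dataset choice are arbitrary.

```python
# Bare-bones binary bat algorithm for feature selection with an SVM fitness.
# Generic BA-based wrapper only, not the MBAFS variant of the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X, y = load_breast_cancer(return_X_y=True)
n_feat = X.shape[1]

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum() / n_feat        # small penalty on feature count

n_bats, n_iter = 10, 15
f_min, f_max, alpha_loud = 0.0, 2.0, 0.95
pos = rng.integers(0, 2, size=(n_bats, n_feat)).astype(float)
vel = np.zeros((n_bats, n_feat))
loudness = np.ones(n_bats)
pulse = np.full(n_bats, 0.5)
fit = np.array([fitness(p) for p in pos])
best, best_fit = pos[fit.argmax()].copy(), fit.max()

for t in range(n_iter):
    for i in range(n_bats):
        freq = f_min + (f_max - f_min) * rng.random()
        vel[i] += (pos[i] - best) * freq
        prob = 1.0 / (1.0 + np.exp(-vel[i]))       # sigmoid transfer to {0, 1}
        cand = (rng.random(n_feat) < prob).astype(float)
        if rng.random() > pulse[i]:                # local search around the best bat
            cand = best.copy()
            flips = rng.random(n_feat) < 0.05
            cand[flips] = 1.0 - cand[flips]
        f_new = fitness(cand)
        if f_new > fit[i] and rng.random() < loudness[i]:
            pos[i], fit[i] = cand, f_new
            loudness[i] *= alpha_loud
        if f_new > best_fit:
            best, best_fit = cand.copy(), f_new

print(f"selected {int(best.sum())}/{n_feat} features, CV fitness {best_fit:.3f}")
```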

  • Rapid Generation of the State Codebook in Side Match Vector Quantization

    Hanhoon PARK  Jong-Il PARK  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2017/05/16
      Vol:
    E100-D No:8
      Page(s):
    1934-1937

    Side match vector quantization (SMVQ) was originally developed for image compression and is also useful for steganography. SMVQ requires creating a separate state codebook for each block in both the encoding and decoding phases. Since the conventional method of state codebook generation is extremely time-consuming, this letter proposes a fast generation method. The proposed method is tens of times faster than the conventional one without loss of perceptual visual quality.
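    The conventional state-codebook generation that the letter accelerates can be sketched as follows: for the current block, each master codeword is scored by how well its top row and left column match the adjacent border pixels of the already-reconstructed upper and left neighbor blocks, and the N_s best-matching codewords form the state codebook. Block size, codebook size, and data below are placeholders, and the proposed fast method itself is not reproduced.

```python
# Conventional SMVQ state-codebook generation (the step the letter speeds up):
# rank master codewords by side-match distortion against the upper/left neighbors.
import numpy as np

rng = np.random.default_rng(8)
block = 4                                   # 4x4 image blocks
codebook = rng.integers(0, 256, size=(256, block, block)).astype(float)

# Already-reconstructed neighbor blocks of the block being encoded (placeholders).
upper = rng.integers(0, 256, size=(block, block)).astype(float)
left = rng.integers(0, 256, size=(block, block)).astype(float)

def side_match_distortion(cw, upper, left):
    d_top = np.sum((cw[0, :] - upper[-1, :]) ** 2)    # codeword top row vs. upper
    d_left = np.sum((cw[:, 0] - left[:, -1]) ** 2)    # codeword left column vs. left
    return d_top + d_left

def state_codebook(codebook, upper, left, n_state=16):
    dist = np.array([side_match_distortion(cw, upper, left) for cw in codebook])
    return codebook[np.argsort(dist)[:n_state]]       # N_s best-matching codewords

sc = state_codebook(codebook, upper, left)
print("state codebook shape:", sc.shape)              # (16, 4, 4)
```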

  • Stochastic Number Generation with the Minimum Inputs

    Ritsuko MUGURUMA  Shigeru YAMASHITA  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E100-A No:8
      Page(s):
    1661-1671

    For some applications, stochastic computing (SC) is known to have many potential advantages over conventional computation based on binary radix encoding. Thus, many design methodologies for realizing SC have been proposed. Recently, a general design method that realizes SC operations by designing Boolean circuits (functions) has been proposed. As a central part of that method, we need to design a logic circuit whose output becomes 1 with a certain desired probability with respect to random inputs. Also, to realize an SC arithmetic operation with a constant value, in some situations we need to prepare a random bit-stream that becomes 1 with a desired probability from a set of predetermined physical random sources. We call such a bit-stream a stochastic number (SN). We can utilize the above-mentioned previous method to prepare stochastic numbers by designing Boolean circuits; that method assumes all the random sources become 1 with the same probability 1/2. In this paper, we investigate a different framework in which each physical random source may become 1 with a different probability. We then present the necessary and sufficient condition on the given random inputs for producing a stochastic number with a given specified precision. Based on this condition, we propose a method to generate a stochastic number using the minimum number of random inputs. Indeed, our method uses far fewer inputs than the previous method, and our preliminary experiments show that the circuits generated by our method also tend to be smaller than those generated by the previous method.
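    For the baseline case of fair (probability-1/2) random sources, the comparator construction below shows how k input bits yield a stochastic number whose probability of being 1 is any multiple of 2^-k, which is the kind of circuit the previous method synthesizes. The paper's contribution, handling sources with unequal probabilities and minimizing their number, is not captured by this sketch.

```python
# Generate a stochastic number with probability m / 2**k from k fair random bits:
# interpret the bits as an integer and output 1 when it is below the threshold m.
# Baseline equal-probability setting only, not the paper's generalization.
import random

def stochastic_number(target, k, length):
    """Bit-stream whose bits are 1 with probability round(target * 2**k) / 2**k."""
    m = round(target * (1 << k))              # threshold encodes the target probability
    stream = []
    for _ in range(length):
        r = random.getrandbits(k)             # k independent fair random bits
        stream.append(1 if r < m else 0)
    return stream, m / (1 << k)

random.seed(0)
bits, realized = stochastic_number(target=0.3, k=6, length=100_000)
print(f"requested 0.3, representable {realized:.4f}, empirical {sum(bits)/len(bits):.4f}")
```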
