
Keyword Search Results

[Keyword] ATI (18690 hits)

Showing results 2821-2840 of 18690

  • An Efficient GPU Implementation of CKY Parsing Using the Bitwise Parallel Bulk Computation Technique

    Toru FUJITA  Koji NAKANO  Yasuaki ITO  Daisuke TAKAFUJI  

     
    PAPER-GPU computing

  Publicized:
    2017/08/04
      Vol:
    E100-D No:12
      Page(s):
    2857-2865

    The main contribution of this paper is an efficient GPU implementation of bulk CKY parsing, which determines, for a given context-free grammar, whether the grammar derives each of a large number of input strings. Bulk computation executes the same algorithm for many inputs, either sequentially or concurrently; CKY parsing determines whether a context-free grammar derives a given string. We show that bulk CKY parsing can be implemented efficiently on the GPU using the Bitwise Parallel Bulk Computation (BPBC) technique, and we further accelerate it with a rule minimization technique and a dynamic scheduling method. Experimental results on an NVIDIA TITAN X GPU show that our bitwise-parallel CKY parser processes strings of length 32 in 395µs per string for a grammar with 131072 production rules over 512 non-terminal symbols.
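    The idea underlying the bitwise approach can be sketched with a minimal single-string CKY recognizer in which each table cell is an integer bitmask of non-terminals. This is only an illustration of the bitset representation; the paper's BPBC technique and GPU kernels are not reproduced, and the toy grammar and function names below are invented for this sketch.

```python
def cky_recognizes(word, terminal_rules, binary_rules, start=0):
    """CKY recognition for a grammar in Chomsky normal form.
    terminal_rules: dict terminal -> bitmask of non-terminals A with A -> a.
    binary_rules: list of (A, B, C) for rules A -> B C; non-terminals are
    numbered 0..k-1 and encoded as bit positions.
    Returns True iff the start symbol derives `word`."""
    n = len(word)
    # table[i][j]: bitmask of non-terminals deriving word[i:i+j+1]
    table = [[0] * n for _ in range(n)]
    for i, a in enumerate(word):
        table[i][0] = terminal_rules.get(a, 0)
    for span in range(2, n + 1):            # substring length
        for i in range(n - span + 1):       # start position
            cell = 0
            for split in range(1, span):    # split point
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for a, b, c in binary_rules:
                    if (left >> b) & 1 and (right >> c) & 1:
                        cell |= 1 << a
            table[i][span - 1] = cell
    return bool((table[0][n - 1] >> start) & 1)

# Toy grammar over flat parenthesis pairs: S -> L R | S S, L -> "(", R -> ")"
# (nesting is deliberately omitted to keep the grammar tiny).
S, L, R = 0, 1, 2
terminal_rules = {"(": 1 << L, ")": 1 << R}
binary_rules = [(S, L, R), (S, S, S)]
print(cky_recognizes("()()", terminal_rules, binary_rules))  # True
print(cky_recognizes("()(", terminal_rules, binary_rules))   # False
```

    In the bitwise-parallel setting, the inner rule loop is replaced by word-level Boolean operations over many strings at once, which is where the GPU speedup comes from.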

  • On Asymptotically Good Ramp Secret Sharing Schemes

    Olav GEIL  Stefano MARTIN  Umberto MARTÍNEZ-PEÑAS  Ryutaroh MATSUMOTO  Diego RUANO  

     
    PAPER-Cryptography and Information Security

      Vol:
    E100-A No:12
      Page(s):
    2699-2708

    Asymptotically good sequences of linear ramp secret sharing schemes have been intensively studied by Cramer et al. in terms of sequences of pairs of nested algebraic geometric codes [4]-[8], [10]. Those works focus on full privacy and full reconstruction. In this paper we analyze additional parameters describing the asymptotic behavior of partial information leakage, and possibly also partial reconstruction, giving a more complete picture of the access structure for sequences of linear ramp secret sharing schemes. Our study involves a detailed treatment of the (relative) generalized Hamming weights of the codes under consideration.

  • A New Algorithm to Determine Covariance in Statistical Maximum for Gaussian Mixture Model

    Daiki AZUMA  Shuji TSUKIYAMA  

     
    PAPER

      Vol:
    E100-A No:12
      Page(s):
    2834-2841

    In statistical approaches such as statistical static timing analysis, the distribution of the maximum of several distributions is computed by repeating the maximum operation on two distributions at a time. Moreover, since each distribution is represented by a linear combination of several explanatory random variables so as to handle correlations efficiently, the sensitivity of the maximum of two distributions to each explanatory random variable, that is, the covariance between the maximum and an explanatory random variable, must be calculated in every maximum operation. Since the distribution of the maximum of two Gaussian distributions is not Gaussian, a Gaussian mixture model is used to represent each distribution. However, with Gaussian mixture models it is not always possible to make both the variance and the covariance of the maximum correct simultaneously. We propose a new algorithm that determines the covariance without deteriorating the accuracy of the variance of the maximum, and we present experimental results evaluating its performance.
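    The moment computation that such statistical-maximum analyses build on is the classical one (often attributed to Clark) for the max of two jointly Gaussian variables. The paper's covariance-determination algorithm for Gaussian mixtures is not reproduced here; the sketch below only shows the underlying two-Gaussian moment formulas, with function names invented for this illustration.

```python
import math

def phi(x):
    """Standard normal probability density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def max_moments(mu1, s1, mu2, s2, rho):
    """Mean and variance of max(X1, X2) for jointly Gaussian X1 ~ N(mu1, s1^2),
    X2 ~ N(mu2, s2^2) with correlation rho (Clark's classical formulas)."""
    theta = math.sqrt(s1 * s1 + s2 * s2 - 2 * rho * s1 * s2)
    a = (mu1 - mu2) / theta
    mean = mu1 * Phi(a) + mu2 * Phi(-a) + theta * phi(a)
    second = ((mu1 * mu1 + s1 * s1) * Phi(a)
              + (mu2 * mu2 + s2 * s2) * Phi(-a)
              + (mu1 + mu2) * theta * phi(a))
    return mean, second - mean * mean

# For two independent standard normals the known values are
# E[max] = 1/sqrt(pi) and Var[max] = 1 - 1/pi:
m, v = max_moments(0.0, 1.0, 0.0, 1.0, 0.0)
```

    Note that max(X1, X2) itself is not Gaussian, which is precisely why the abstract turns to Gaussian mixture models and why matching both variance and covariance simultaneously becomes nontrivial.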

  • Energy-Efficient Resource Allocation Strategy for Low Probability of Intercept and Anti-Jamming Systems

    Yu Min HWANG  Jun Hee JUNG  Kwang Yul KIM  Yong Sin KIM  Jae Seang LEE  Yoan SHIN  Jin Young KIM  

     
    LETTER-Digital Signal Processing

      Vol:
    E100-A No:11
      Page(s):
    2498-2502

    The aim of this letter is to guarantee low probability of intercept (LPI) and anti-jamming (AJ) capability by maximizing energy efficiency (EE), thereby improving wireless communication survivability in jamming environments. We study a scenario with one transceiver pair and a partial-band noise jammer in a Rician fading channel, and propose an EE optimization algorithm to solve the resulting optimization problem. With the proposed algorithm, LPI and AJ can be guaranteed simultaneously while satisfying the constraints of a maximum signal-to-jamming-and-noise ratio and a combinatorial subchannel allocation condition, respectively. Simulation results indicate that the proposed algorithm is more energy-efficient than the baseline schemes and guarantees LPI and AJ performance in a jamming environment.

  • Detecting Semantic Communities in Social Networks

    Zhen LI  Zhisong PAN  Guyu HU  Guopeng LI  Xingyu ZHOU  

     
    LETTER-Graphs and Networks

      Vol:
    E100-A No:11
      Page(s):
    2507-2512

    Community detection is an important task in the social network analysis field. Many detection methods have been developed; however, they provide little semantic interpretation for the discovered communities. We develop a framework based on joint matrix factorization to integrate network topology and node content information, such that the communities and their semantic labels are derived simultaneously. Moreover, to improve the detection accuracy, we attempt to make the community relationships derived from two types of information consistent. Experimental results on real-world networks show the superior performance of the proposed method and demonstrate its ability to semantically annotate communities.

  • Network Function Virtualization: A Survey Open Access

    Malathi VEERARAGHAVAN  Takehiro SATO  Molly BUCHANAN  Reza RAHIMI  Satoru OKAMOTO  Naoaki YAMANAKA  

     
    INVITED PAPER

  Publicized:
    2017/05/16
      Vol:
    E100-B No:11
      Page(s):
    1978-1991

    The objectives of this survey are to provide in-depth coverage of a few selected research papers that have made significant contributions to the development of Network Function Virtualization (NFV), and to give readers insight into the key advantages and disadvantages of NFV and Software Defined Networks (SDN) when compared to traditional networks. The papers covered are classified into four categories: NFV Infrastructure (NFVI), Network Functions (NFs), Management And Network Orchestration (MANO), and service chaining. The NFVI papers describe “framework” software that implements common functions, such as dynamic scaling and load balancing, required by NF developers. Papers on NFs are classified as offering solutions for software switches or middleboxes. The MANO papers covered in this survey are primarily on resource allocation (virtual network embedding), which is an orchestrator function. Finally, service chaining papers that offer examples and extensions are reviewed. Our conclusion is that, with the current level of investment in NFV from cloud and Internet service providers, the promised cost savings are likely to be realized, though many challenges remain.

  • Joint Transmission and Coding Scheme for High-Resolution Video Streams over Multiuser MIMO-OFDM Systems

    Koji TASHIRO  Leonardo LANANTE  Masayuki KUROSAKI  Hiroshi OCHI  

     
    PAPER-Communication Systems

      Vol:
    E100-A No:11
      Page(s):
    2304-2313

    High-resolution image and video communication in home networks is expected to proliferate with the spread of Wi-Fi devices and the introduction of multiple-input multiple-output (MIMO) systems. This paper proposes a joint transmission and coding scheme for broadcasting high-resolution video streams over multiuser MIMO systems with an eigenbeam-space division multiplexing (E-SDM) technique. Scalable video coding produces a code stream comprising multiple layers that contribute unequally to image quality. The proposed scheme jointly assigns the data of scalable code streams to subcarriers and spatial streams based on their signal-to-noise ratio (SNR) values, so that visually important data is transmitted with high reliability. Simulation results show that the proposed scheme surpasses the conventional unequal power allocation (UPA) approach in terms of both the peak signal-to-noise ratio (PSNR) of received images and the correct decoding probability. The PSNR of the proposed scheme exceeds 35dB with a probability of over 95% when the received SNR is higher than 6dB, and the improvement in average PSNR over the conventional UPA reaches approximately 20dB at a received SNR of 6dB. Furthermore, the correct decoding probability reaches 95% when the received SNR is greater than 4dB.
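    PSNR, the image-quality metric used in evaluations like the one above, is a standard function of the mean squared error between reference and received pixels. A minimal sketch (the function name and flat pixel-list interface are choices made for this illustration, not the paper's code):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences: 10*log10(peak^2 / MSE). Identical inputs give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak * peak / mse)
```

    A worst-case 8-bit error (0 vs. 255 everywhere) gives 0dB, and each halving of the RMS error adds about 6dB, which gives a feel for what a 20dB average improvement over UPA means.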

  • Subcarrier-Selectable Short Preamble for OFDM Channel Estimation in Real-Time Wireless Control Systems

    Theerat SAKDEJAYONT  Chun-Hao LIAO  Makoto SUZUKI  Hiroyuki MORIKAWA  

     
    PAPER-Communication Systems

      Vol:
    E100-A No:11
      Page(s):
    2323-2331

    Real-time and reliable radio communication is essential for wireless control systems (WCS). In WCS, preambles create significant overhead and hurt real-time capability, since payloads are typically small. To shorten the preamble transmission time in OFDM systems, previous works have adopted either time-direction extrapolation (TDE) or frequency-direction interpolation (FDI) for channel estimation, which, however, perform poorly in fast fading channels and frequency-selective fading channels, respectively. In this work, we propose a subcarrier-selectable short preamble (SSSP) that makes the subcarrier sampling pattern of a preamble selectable, so that full sampling coverage of all subcarriers is achieved over several preamble transmissions. In addition, we make the channel estimation algorithm for the SSSP adaptive, so that it conforms to both fast and frequency-selective channels. Simulation results validate the feasibility of the proposed method in terms of reliability and real-time capability. In particular, the SSSP scheme shows its advantage in flexibility, as it provides a low error rate and short communication time in various channel conditions.
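    The FDI baseline mentioned above estimates the channel on pilot subcarriers and interpolates across frequency to the rest. A minimal sketch of that step, assuming complex per-subcarrier channel estimates, simple linear interpolation between pilots, and nearest-pilot extension at the band edges (the SSSP pattern selection and its adaptive estimator are not reproduced; the function name is invented here):

```python
import bisect

def fdi_interpolate(pilots, n):
    """Frequency-direction interpolation of channel estimates.
    pilots: dict mapping pilot subcarrier index -> complex channel estimate.
    Returns a length-n list covering all subcarriers."""
    idx = sorted(pilots)
    out = []
    for k in range(n):
        if k <= idx[0]:
            out.append(pilots[idx[0]])       # extend below lowest pilot
        elif k >= idx[-1]:
            out.append(pilots[idx[-1]])      # extend above highest pilot
        elif k in pilots:
            out.append(pilots[k])            # pilot subcarrier itself
        else:
            j = bisect.bisect_left(idx, k)   # bracketing pilots
            lo, hi = idx[j - 1], idx[j]
            t = (k - lo) / (hi - lo)
            out.append((1 - t) * pilots[lo] + t * pilots[hi])
    return out
```

    The weakness the abstract points at is visible here: if the true channel varies faster across frequency than the pilot spacing (frequency-selective fading), linear interpolation between sparse pilots misses the variation, which is what motivates covering all subcarriers over several preambles.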

  • Mitigating Throughput Starvation in Dense WLANs through Potential Game-Based Channel Selection

    Bo YIN  Shotaro KAMIYA  Koji YAMAMOTO  Takayuki NISHIO  Masahiro MORIKURA  Hirantha ABEYSEKERA  

     
    PAPER-Communication Systems

      Vol:
    E100-A No:11
      Page(s):
    2341-2350

    Distributed channel selection schemes are proposed in this paper to mitigate flow-in-the-middle (FIM) starvation in dense wireless local area networks (WLANs). FIM starvation occurs when a middle transmitter is within the carrier sense range of two exterior transmitters that are not within the carrier sense range of each other. Since each exterior transmitter sends frames regardless of the other, the middle transmitter detects the channel as occupied with high probability; under heavy traffic it obtains extremely few transmission opportunities, i.e., it suffers throughput starvation. The basic idea of the proposed schemes is to let each access point (AP) select the channel with fewer three-node-chain topologies within its two-hop neighborhood. The proposed schemes are formulated as strategic form games, with payoff functions designed so that the games are proved to be potential games. Convergence is therefore guaranteed when the schemes are run in a distributed manner using unilateral improvement dynamics. Moreover, we conduct evaluations through graph-based simulations and the ns-3 simulator. The simulations confirm that FIM starvation is mitigated, since the number of three-node-chain topologies is significantly reduced, and that the 5th-percentile throughput is improved.

  • Off-Grid Frequency Estimation with Random Measurements

    Xushan CHEN  Jibin YANG  Meng SUN  Jianfeng LI  

     
    LETTER-Digital Signal Processing

      Vol:
    E100-A No:11
      Page(s):
    2493-2497

    In order to significantly reduce the time and space required, compressive sensing builds upon the fundamental assumption of sparsity under a suitable discrete dictionary. In many signal processing applications, however, there is a mismatch between the assumed and the true sparsity bases, so that the actual representative coefficients do not lie on the finite grid discretized by the assumed dictionary. Unlike previous work, this paper introduces a unified compressive measurement operator into atomic norm denoising and investigates the problem of recovering the frequency support of a combination of multiple sinusoids from sub-Nyquist samples. We provide some useful properties to ensure the optimality of the unified framework via semidefinite programming (SDP), as well as a sufficient condition that guarantees the uniqueness of the optimizer with high probability. Theoretical results demonstrate that the proposed method can locate the nonzero coefficients on an infinitely dense grid over a wide range of SNRs.

  • A Scaling and Non-Negative Garrote in Soft-Thresholding

    Katsuyuki HAGIWARA  

     
    PAPER-Artificial Intelligence, Data Mining

      Pubricized:
    2017/07/27
      Vol:
    E100-D No:11
      Page(s):
    2702-2710

    Soft-thresholding is a sparse modeling method, typically applied to wavelet denoising in statistical signal processing. It is also important in machine learning, since it is an essential ingredient of the well-known LASSO (Least Absolute Shrinkage and Selection Operator). It is known that soft-thresholding, and thus LASSO, suffers from a dilemma between sparsity and generalization, caused by excessive shrinkage at a sparse representation. Several methods for alleviating this problem exist in signal processing and machine learning. In this paper, we extend and analyze a method of scaling soft-thresholding estimators. In the setting of a non-parametric orthogonal regression problem including the discrete wavelet transform, we introduce component-wise, data-dependent scaling, which is in fact identical to the non-negative garrote. We consider the case where the soft-thresholding parameter is chosen from the absolute values of the least squares estimates, by which the model selection problem reduces to determining the number of non-zero coefficient estimates. For this case, we first derive the risk and construct SURE (Stein's unbiased risk estimator), which can be used to determine the number of non-zero coefficient estimates. We also analyze properties of the risk curve and find that our scaling method with the derived SURE can yield a model with lower risk and higher sparsity than a naive soft-thresholding method with SURE. This theoretical finding is verified by a simple numerical experiment on wavelet denoising.
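    The two shrinkage rules being compared have simple closed forms on a single coefficient. A minimal sketch (the paper's component-wise scaling analysis and SURE derivation are not reproduced; only the standard textbook rules are shown, with invented function names):

```python
import math

def soft_threshold(x, lam):
    """Soft-thresholding: coefficients inside [-lam, lam] become zero,
    others are shrunk toward zero by a constant lam."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def nonnegative_garrote(x, lam):
    """Non-negative garrote shrinkage for an orthogonal design:
    x * max(1 - lam^2 / x^2, 0). Large coefficients are shrunk by roughly
    lam^2 / x, which vanishes as |x| grows -- unlike the constant bias lam
    of soft-thresholding. This data-dependent scaling is the kind of
    excessive-shrinkage fix the abstract discusses."""
    if x == 0.0:
        return 0.0
    return max(1.0 - (lam / x) ** 2, 0.0) * x
```

    For example, with lam = 1, a coefficient of 3 is shrunk to 2 by soft-thresholding but only to 3 - 1/3 by the garrote, while both rules still zero out coefficients of magnitude below 1, preserving sparsity.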

  • Hue-Preserving Color Image Processing with a High Arbitrariness in RGB Color Space

    Minako KAMIYAMA  Akira TAGUCHI  

     
    PAPER-Image Processing

      Vol:
    E100-A No:11
      Page(s):
    2256-2265

    Preserving hue is an important issue in color image processing. To preserve hue, color image processing is often carried out in the HSI or HSV color space, obtained by transforming the RGB color space. However, transforming from RGB to another color space and processing there usually creates a gamut problem. In this paper we propose image enhancement methods that preserve hue and keep the R, G, B channels within their range (gamut). First, we show an intensity processing method that preserves hue and saturation, in which arbitrary gray-scale transformation functions can be applied to the intensity component. Next, we propose a saturation processing method that preserves hue and intensity, to whose saturation component arbitrary gray-scale transforms can likewise be applied. The two processing methods are completely independent, so they are easily combined by applying them in succession. The combined method realizes highly flexible hue-preserving color image processing without the gamut problem. Furthermore, a concrete enhancement algorithm based on the proposed processing methods is presented. Numerical results confirm our theoretical results and show that our algorithm performs much better than conventional hue-preserving methods.
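    One way to see why intensity can be processed without leaving RGB is that scaling all three channels by the same factor leaves both HSI hue and HSI saturation (S = 1 - min(R,G,B)/I) unchanged. The sketch below illustrates only this invariance with a naive clip at the gamut boundary; the paper's gamut-preserving construction is not reproduced, and the function name is invented here.

```python
def scale_intensity(rgb, f):
    """Apply a gray-scale transform f to the HSI intensity I = (R+G+B)/3
    by scaling all three channels by f(I)/I, which preserves HSI hue and
    saturation. Channels are floats in [0, 1]; the result is naively
    clipped to 1.0, so a transform that pushes a channel out of range
    would need proper gamut handling instead of this clip."""
    r, g, b = rgb
    i = (r + g + b) / 3.0
    if i == 0.0:
        return (0.0, 0.0, 0.0)
    k = f(i) / i
    return tuple(min(c * k, 1.0) for c in (r, g, b))
```

    For instance, brightening (0.2, 0.4, 0.6) with f(I) = 1.5 I gives (0.3, 0.6, 0.9): the channel ratios, and hence hue and saturation, are untouched, while clipping (the naive escape from the gamut problem) would distort exactly those ratios.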

  • Convex Filter Networks Based on Morphological Filters and their Application to Image Noise and Mask Removal

    Makoto NAKASHIZUKA  Kei-ichiro KOBAYASHI  Toru ISHIKAWA  Kiyoaki ITOI  

     
    PAPER-Image Processing

      Vol:
    E100-A No:11
      Page(s):
    2238-2247

    This paper presents convex filter networks obtained as extensions of morphological filters. The proposed filter network consists of convex and concave filters that extend the dilation and erosion of mathematical morphology with the maxout activation function. Maxout can approximate arbitrary convex functions, including the max function, as piecewise linear functions. The resulting class of convex functions hence includes morphological dilation and can be trained for specific image processing tasks. In this paper, the closing filter is extended to a convex-concave filter network with maxout, trained by the stochastic gradient method for noise and mask removal. Examples of noise and mask removal show that, without mask information for the corrupted pixels, the convex-concave filter recovers images whose quality is comparable to inpainting by total variation minimization, at reduced computational cost.
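    The classical operations being generalized can be sketched in one dimension with a flat structuring element: dilation is a sliding-window maximum, erosion a sliding-window minimum, and closing their composition. This is only the textbook baseline, not the paper's trainable maxout network; the function names are invented for this sketch.

```python
def dilate(signal, size):
    """Flat grayscale dilation: sliding-window maximum over a centered
    window of `size` samples (truncated at the edges)."""
    h, n = size // 2, len(signal)
    return [max(signal[max(i - h, 0):min(i + h + 1, n)]) for i in range(n)]

def erode(signal, size):
    """Flat grayscale erosion: sliding-window minimum, same window."""
    h, n = size // 2, len(signal)
    return [min(signal[max(i - h, 0):min(i + h + 1, n)]) for i in range(n)]

def closing(signal, size):
    """Morphological closing = dilation followed by erosion; it fills
    negative impulses (e.g. dropped-out pixels) narrower than the window."""
    return erode(dilate(signal, size), size)
```

    For example, closing([5, 5, 0, 5, 5], 3) restores the dropped sample to 5. Replacing the hard max/min with trainable maxout units, as the paper does, keeps this convex-concave structure while letting the filter be fitted to a noise or mask model by gradient descent.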

  • The Crosscorrelation of Binary Interleaved Sequences of Period 4N

    Tongjiang YAN  Ruixia YUAN  Xiao MA  

     
    LETTER-Cryptography and Information Security

      Vol:
    E100-A No:11
      Page(s):
    2513-2517

    In this paper, we consider the crosscorrelation of two interleaved sequences of period 4N constructed by Gong and Tang, which have been proved to possess optimal autocorrelation. Our results show that the interleaved sequences achieve a largest crosscorrelation value of 4.
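    The quantity studied here is the periodic crosscorrelation of two ±1 sequences of equal period. A minimal sketch of that measure (the Gong-Tang interleaved construction itself is not reproduced; the function name and the toy sequences are choices made for this illustration):

```python
def periodic_crosscorrelation(u, v):
    """Periodic crosscorrelation of two ±1 sequences of equal period N:
    C(tau) = sum_t u[t] * v[(t + tau) mod N], for tau = 0..N-1.
    With v = u this is the periodic autocorrelation."""
    n = len(u)
    return [sum(u[t] * v[(t + tau) % n] for t in range(n))
            for tau in range(n)]

# The length-4 sequence (1, -1, 1, 1) has perfect periodic autocorrelation:
# in-phase value 4 and all out-of-phase values 0.
print(periodic_crosscorrelation([1, -1, 1, 1], [1, -1, 1, 1]))
```

    Low crosscorrelation magnitudes between distinct sequences (here, at most 4 for period 4N) are what make such families useful for multi-user spreading and synchronization.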

  • Generating Questions for Inquiry-Based Learning of History in Elementary Schools by Using Stereoscopic 3D Images Open Access

    Takashi SHIBATA  Kazunori SATO  Ryohei IKEJIRI  

     
    INVITED PAPER

      Vol:
    E100-C No:11
      Page(s):
    1012-1020

    We conducted experimental classes in an elementary school to examine how the advantages of stereoscopic 3D images can be applied in education. Specifically, we selected a sixth-grade unit on the Tumulus period in Japan, part of the coursework on Japanese history, as the source of our 3D educational materials. The materials included stereoscopic 3D images for examining the stone chambers and Haniwa (terracotta clay figures) of the Tumulus period. The results of our experimental class showed that the 3D educational materials helped students focus on specific parts of the images, such as objects attached to the Haniwa, and understand 3D spaces and concavo-convex shapes. The experimental class also revealed that the 3D materials helped students come up with novel questions about the attached objects and about the Haniwa's spatial balance and alignment. These results suggest that the educational use of stereoscopic 3D images is worthwhile in that it leads to question and hypothesis generation and supports an inquiry-based approach to learning history.

  • Evaluation of Phase Retardation of Curved Thin Polycarbonate Substrates for Wide-viewing Angle Flexible Liquid Crystal Displays Open Access

    Shuichi HONDA  Takahiro ISHINABE  Yosei SHIBATA  Hideo FUJIKAKE  

     
    INVITED PAPER

      Vol:
    E100-C No:11
      Page(s):
    992-997

    We investigated the effects of bending stress on the change in phase retardation of curved polycarbonate substrates and on the optical characteristics of flexible liquid crystal displays (LCDs). We clarified that the change in phase retardation is extremely small even for substrates with a small radius of curvature, because the bending stresses occurring in the inner and outer surfaces cancel each other out. We compensated for the phase retardation of the polycarbonate substrates with a positive C-plate and successfully suppressed light leakage in both the non-curved and curved states. These results indicate the feasibility of high-quality flexible LCDs using polycarbonate substrates, even in curved states.

  • Maximizing the Throughput of Wi-Fi Mesh Networks with Distributed Link Activation

    Jae-Young YANG  Ledan WU  Yafeng ZHOU  Joonho KWON  Han-You JEONG  

     
    PAPER-Mobile Information Network and Personal Communications

      Vol:
    E100-A No:11
      Page(s):
    2425-2438

    In this paper, we study Wi-Fi mesh networks (WMNs) as a promising candidate for a wireless networking infrastructure that interconnects a variety of access networks. The main performance bottleneck of a WMN is its limited capacity, due to packet collisions under the contention-based IEEE 802.11s MAC. To mitigate this problem, we present the distributed link-activation (DLA) protocol, which activates a set of collision-free links for a fixed amount of time by exchanging a few control packets between neighboring mesh routers (MRs). Through a rigorous proof, it is shown that the number of DLA rounds is upper-bounded by O(Smax), where Smax is the maximum number of simultaneous interference-free links in a WMN topology. Based on the DLA, we also design the distributed throughput-maximal scheduling (D-TMS) scheme, which overlays the DLA protocol on a new frame architecture based on the IEEE 802.11 power saving mode. To mitigate its high latency, we propose D-TMS adaptive data-period control (D-TMS-ADPC), which adjusts the data period depending on the traffic load of the WMN. Numerical results show that the D-TMS-ADPC scheme achieves much higher throughput than the IEEE 802.11s MAC.

  • Rational Proofs against Rational Verifiers

    Keita INASAWA  Kenji YASUNAGA  

     
    PAPER-Cryptography and Information Security

      Vol:
    E100-A No:11
      Page(s):
    2392-2397

    Rational proofs, introduced by Azar and Micali (STOC 2012), are a variant of interactive proofs in which the prover is rational and may deviate from the protocol to increase his reward. Guo et al. (ITCS 2014) demonstrated that rational proofs are relevant to delegation of computation: by restricting the prover to be computationally bounded, they presented a one-round delegation scheme with sublinear verification for functions computable by log-space uniform circuits of logarithmic depth. In this work, we study rational proofs in which the verifier is also rational and may deviate from the protocol to decrease the prover's reward. We construct a three-message delegation scheme with sublinear verification for functions computable by log-space uniform circuits of polylogarithmic depth, in the random oracle model.

  • AIGIF: Adaptively Integrated Gradient and Intensity Feature for Robust and Low-Dimensional Description of Local Keypoint

    Songlin DU  Takeshi IKENAGA  

     
    PAPER-Vision

      Vol:
    E100-A No:11
      Page(s):
    2275-2284

    Establishing local visual correspondences between images taken under different conditions is an important and challenging task in computer vision. A common solution is to detect keypoints in the images and then match them with a feature descriptor. This paper proposes a robust, low-dimensional local feature descriptor named Adaptively Integrated Gradient and Intensity Feature (AIGIF). The AIGIF descriptor partitions the support region surrounding each keypoint into sub-regions and classifies each sub-region as edge-dominated or smoothness-dominated. For edge-dominated sub-regions, gradient magnitude and orientation features are extracted; for smoothness-dominated sub-regions, an intensity feature is extracted. The gradient and intensity features are integrated to generate the descriptor. Image matching experiments were conducted to evaluate the proposed AIGIF. Compared with SIFT, AIGIF reduces the feature dimension by 75% (from 128 bytes to 32 bytes); compared with SURF, it reduces the feature dimension by 87.5% (from 256 bytes to 32 bytes); and compared with the state-of-the-art ORB descriptor, which has the same feature dimension as AIGIF, it achieves higher accuracy and robustness. In summary, AIGIF combines the advantages of gradient and intensity features, achieving relatively high accuracy and robustness with a low feature dimension.

  • Distortion Control and Optimization for Lossy Embedded Compression in Video Codec System

    Li GUO  Dajiang ZHOU  Shinji KIMURA  Satoshi GOTO  

     
    PAPER-Coding Theory

      Vol:
    E100-A No:11
      Page(s):
    2416-2424

    For mobile video codecs, the huge energy dissipated by external memory traffic is a critical challenge under the battery power constraint. Lossy embedded compression (EC) is considered in this paper as a solution to this challenge. While previous studies of lossy EC mostly focused on algorithm optimization to reduce distortion, this work is, to the best of our knowledge, the first to address distortion control. Firstly, from both theoretical analysis and distortion-optimization experiments, we conclude that, at the frame level, allocating memory traffic evenly is a reliable approximation to the optimal solution for minimizing quality loss. Then, to avoid the complexity of decoding twice, the distortion between two sequences is estimated by a linear function of the distortion calculated within one sequence. Finally, on the basis of even allocation, we propose a distortion control that determines the amount of memory traffic according to a given distortion limit. With adaptive target setting and updating of the estimating function in each group of pictures (GOP), scene changes in the video stream are supported without adding a detector or a retraining process. Experimental results show that the proposed distortion control accurately fixes the quality loss to the target. Compared to a baseline that applies negative feedback on non-referenced B frames, it achieves about twice the memory-traffic reduction.
