
Keyword Search Result

[Keyword] source (799 hits)

Showing hits 121-140 of 799

  • Lossy Source Coding for Non-Uniform Binary Source with Trellis Codes

    Junya HIRAMATSU  Motohiko ISAKA  

     
    LETTER-Information Theory
    Vol: E101-A No:2  Page(s): 531-534

    This letter presents numerical results of lossy source coding for a non-uniformly distributed binary source with trellis codes. The results show how the performance of trellis codes approaches the rate-distortion function as the number of states increases.
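
    As a point of reference (not material from the letter itself), the rate-distortion function that trellis-code performance is compared against has the closed form R(D) = h(p) - h(D) for a Bernoulli(p) source under Hamming distortion, where h is the binary entropy. A minimal Python sketch, with the source bias p chosen arbitrarily for illustration:

      import math

      def binary_entropy(x):
          """Binary entropy h(x) in bits, with h(0) = h(1) = 0."""
          if x <= 0.0 or x >= 1.0:
              return 0.0
          return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

      def rate_distortion_bernoulli(p, d):
          """R(D) of a Bernoulli(p) source under Hamming distortion."""
          if d >= min(p, 1.0 - p):
              return 0.0
          return binary_entropy(p) - binary_entropy(d)

      # Example: a biased binary source (p = 0.2 is an assumed value).
      for d in (0.01, 0.05, 0.1):
          print(f"D = {d:.2f}: R(D) = {rate_distortion_bernoulli(0.2, d):.4f} bits/symbol")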

  • Enabling FPGA-as-a-Service in the Cloud with hCODE Platform

    Qian ZHAO  Motoki AMAGASAKI  Masahiro IIDA  Morihiro KUGA  Toshinori SUEYOSHI  

     
    PAPER-Design Methodology and Platform
    Publicized: 2017/11/17  Vol: E101-D No:2  Page(s): 335-343

    Major cloud service providers, including Amazon and Microsoft, have started employing field-programmable gate arrays (FPGAs) to build high-performance, low-power cloud infrastructure. However, utilizing an FPGA-enabled cloud is still challenging for two main reasons. First, the introduction of software and hardware co-design leads to high development complexity. Second, FPGA virtualization and accelerator scheduling techniques for cluster deployment are not yet fully researched. In this paper, we propose an open-source FPGA-as-a-service (FaaS) platform, hCODE, to simplify the design, management, and deployment of FPGA accelerators at cluster scale. The proposed platform implements a Shell-and-IP design pattern and an open accelerator repository to reduce the design and management costs of FPGA projects. Efficient FPGA virtualization and accelerator scheduling techniques are proposed to deploy accelerators on an FPGA-enabled cluster easily. With the proposed hCODE, hardware designers and accelerator users can be brought together on one platform to efficiently build an open-hardware ecosystem.

  • Availability of Reference Sound Sources for Qualification of Hemi-Anechoic Rooms Based on Deviation of Sound Pressure Level from Inverse Square Law

    Keisuke YAMADA  Hironobu TAKAHASHI  Ryuzo HORIUCHI  

     
    PAPER-Engineering Acoustics
    Vol: E101-A No:1  Page(s): 211-218

    The sound power level is a physical quantity indispensable for evaluating the amount of sound energy radiated from electrical and mechanical apparatuses. The precise determination of the sound power level requires qualification of the measurement environment, such as a hemi-anechoic room, by estimating the deviation of the sound pressure level from the inverse-square law. In this respect, Annex A of ISO 3745 specifies the procedure for room qualification and defines a tolerance limit for the directivity of the sound source used for the qualification. However, it is impractical to prepare a special loudspeaker only for room qualification. Thus, we developed a simulation method to investigate the influence of the sound source directivity on the measured deviation of the sound pressure level from the inverse-square law, introducing a quantitative index for that influence. In this study, a Brüel & Kjær type 4202 reference sound source was used as a directional sound source because it has been widely used as a reference standard for the measurement of sound power levels. We experimentally obtained the directivity of the sound source by measuring the sound pressure level over the measurement surface. Moreover, the proposed method was applied to the qualification of several hemi-anechoic rooms, and we discussed the availability of a directional sound source for the process. Analytical results showed that available reference sound sources may be used for the evaluation of hemi-anechoic rooms, depending on the sound energy absorption coefficient of the inner walls, the directionality of the microphone traverse, and the size of the space to be qualified. In other words, the results revealed that a reference sound source, once characterized by the proposed method, can be used for qualifying hemi-anechoic rooms.
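
    The core numerical check behind room qualification is the comparison of measured sound pressure levels along a microphone traverse with the ideal inverse-square decay (-6 dB per doubling of distance). The sketch below illustrates only this comparison; the distances and levels are invented for illustration, and the paper's directivity index is not reproduced.

      import math

      def inverse_square_deviation(distances_m, measured_spl_db, ref_index=0):
          """Per-point deviation of measured SPL from the inverse-square law,
          referenced to the measurement point at ref_index."""
          r0, spl0 = distances_m[ref_index], measured_spl_db[ref_index]
          deviations = []
          for r, spl in zip(distances_m, measured_spl_db):
              expected = spl0 - 20.0 * math.log10(r / r0)  # ideal free-field decay
              deviations.append(spl - expected)
          return deviations

      distances = [1.0, 1.5, 2.0, 3.0, 4.0]        # metres (hypothetical traverse)
      measured = [85.0, 81.6, 79.2, 75.8, 73.4]    # dB (hypothetical levels)
      for r, dev in zip(distances, inverse_square_deviation(distances, measured)):
          print(f"r = {r:.1f} m: deviation = {dev:+.2f} dB")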

  • Research Challenges for Network Function Virtualization - Re-Architecting Middlebox for High Performance and Efficient, Elastic and Resilient Platform to Create New Services - Open Access

    Kohei SHIOMOTO  

     
    INVITED SURVEY PAPER-Network
    Publicized: 2017/07/21  Vol: E101-B No:1  Page(s): 96-122

    Today's enterprise, data-center, and internet-service-provider networks deploy different types of network devices, including switches, routers, and middleboxes such as network address translators and firewalls. These devices are vertically integrated monolithic systems. Software-defined networking (SDN) and network function virtualization (NFV) are promising technologies for disaggregating vertically integrated systems into components through “softwarization”. SDN separates the control plane from the data plane of switches and routers, while NFV decouples high-layer service functions (SFs) or network functions (NFs) implemented in the data plane of a middlebox, enabling innovation in policy implementation through SF chaining. Even though there have been several survey studies in this area, the area is continuing to grow rapidly. In this paper, we present a recent survey of this area. In particular, we survey research activities in the areas of re-architecting middleboxes, state management, high-performance platforms, service chaining, resource management, and troubleshooting. Efforts in these research areas will enable the development of future virtual-network-function platforms and innovation in service management while maintaining acceptable capital and operational expenditure.

  • Learning Supervised Feature Transformations on Zero Resources for Improved Acoustic Unit Discovery

    Michael HECK  Sakriani SAKTI  Satoshi NAKAMURA  

     
    PAPER-Speech and Hearing
    Publicized: 2017/10/20  Vol: E101-D No:1  Page(s): 205-214

    In this work we utilize feature transformations that are common in supervised learning, but without any prior supervision, with the goal of improving Dirichlet process Gaussian mixture model (DPGMM) based acoustic unit discovery. The motivation for using such transformations is to create feature vectors that are more suitable for clustering. The need for labels normally makes these methods difficult to use in a zero-resource setting. To overcome this issue, we use a first iteration of DPGMM clustering to generate frame-level class labels for the target data. These labels serve as the basis for learning linear discriminant analysis (LDA), maximum likelihood linear transform (MLLT), and feature-space maximum likelihood linear regression (fMLLR) based feature transformations. The novelty of our approach is the way we use a traditional acoustic model training pipeline for supervised learning to estimate feature transformations in a zero-resource scenario. We show that the learned transformations greatly help the DPGMM sampler find better clusters, as measured by the performance of the DPGMM posteriorgrams on the ABX sound class discriminability task. We also introduce a method for combining the posteriorgram outputs of multiple clusterings and demonstrate that such combinations can further improve sound class discriminability.
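
    A rough scikit-learn sketch of the "label bootstrap" idea described above, under stated substitutions: a Bayesian Gaussian mixture stands in for the DPGMM sampler, LDA is the only supervised transform learned from the first-pass labels, and synthetic features replace speech frames. MLLT and fMLLR are not reproduced here.

      import numpy as np
      from sklearn.datasets import make_blobs
      from sklearn.mixture import BayesianGaussianMixture
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Stand-in "frames": synthetic 13-dimensional features with latent structure.
      features, _ = make_blobs(n_samples=2000, n_features=13, centers=8, random_state=0)

      # First pass: unsupervised clustering yields frame-level pseudo-labels.
      first_pass = BayesianGaussianMixture(n_components=20, max_iter=200, random_state=0)
      pseudo_labels = first_pass.fit_predict(features)

      # Supervised transform (LDA) estimated from the pseudo-labels.
      n_classes = len(np.unique(pseudo_labels))
      lda = LinearDiscriminantAnalysis(n_components=min(10, n_classes - 1, features.shape[1]))
      transformed = lda.fit_transform(features, pseudo_labels)

      # Second pass: re-cluster in the transformed feature space.
      second_pass = BayesianGaussianMixture(n_components=20, max_iter=200, random_state=0)
      refined_labels = second_pass.fit_predict(transformed)
      print("units found in second pass:", len(np.unique(refined_labels)))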

  • A Variable-to-Fixed Length Lossless Source Code Attaining Better Performance than Tunstall Code in Several Criterions

    Mitsuharu ARIMURA  

     
    PAPER-Information Theory
    Vol: E101-A No:1  Page(s): 249-258

    The Tunstall code is known as an optimal variable-to-fixed length (VF) lossless source code under the criterion of average coding rate, defined as the codeword length divided by the average phrase length. In this paper we define the average coding rate of a VF code as the expectation of the pointwise coding rate, i.e., the codeword length divided by the phrase length, and call this quantity the average pointwise coding rate. We then propose a new VF code, together with an incremental parsing-tree construction algorithm similar to the one that builds the Tunstall parsing tree. It is proved that this code is optimal under the criterion of the average pointwise coding rate, and that its average pointwise coding rate converges asymptotically to the entropy of the stationary memoryless source emitting the data to be encoded. Moreover, it is proved that the proposed code attains a better worst-case coding rate than the Tunstall code.
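
    For comparison, the classical Tunstall construction referred to above builds its parsing tree for a memoryless source by repeatedly expanding the most probable leaf until the dictionary reaches the target size. A small sketch of that baseline (the proposed code's own incremental construction differs and is not reproduced here):

      import heapq

      def tunstall_dictionary(symbol_probs, num_phrases):
          """Variable-to-fixed parsing dictionary with up to num_phrases phrases.
          symbol_probs: dict mapping symbol -> probability (summing to 1)."""
          # Heap entries are (-probability, phrase); start from single symbols.
          heap = [(-p, s) for s, p in symbol_probs.items()]
          heapq.heapify(heap)
          # Each expansion replaces one leaf by len(symbol_probs) children.
          while len(heap) + len(symbol_probs) - 1 <= num_phrases:
              neg_p, phrase = heapq.heappop(heap)
              for s, p in symbol_probs.items():
                  heapq.heappush(heap, (neg_p * p, phrase + s))
          return sorted(phrase for _, phrase in heap)

      # Example: biased binary memoryless source, 8-phrase dictionary.
      print(tunstall_dictionary({"0": 0.8, "1": 0.2}, num_phrases=8))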

  • Evaluation of Overflow Probability of Bayes Code in Moderate Deviation Regime

    Shota SAITO  Toshiyasu MATSUSHIMA  

     
    LETTER-Shannon Theory
    Vol: E100-A No:12  Page(s): 2728-2731

    This letter treats the problem of lossless fixed-to-variable length source coding in the moderate deviation regime. We investigate the behavior of the overflow probability of the Bayes code. Our result clarifies that the behavior of the overflow probability of the Bayes code is similar to that of the optimal non-universal code for i.i.d. sources.

  • Second-Order Intrinsic Randomness for Correlated Non-Mixed and Mixed Sources

    Tomohiko UYEMATSU  Tetsunao MATSUTA  

     
    PAPER-Shannon Theory
    Vol: E100-A No:12  Page(s): 2615-2628

    We consider the intrinsic randomness problem for correlated sources. Specifically, there are three correlated sources, and we want to extract two mutually independent random numbers by using two separate mappings, where each mapping converts the output sequence of one of two correlated sources into a random number. In addition, we require that the obtained pair of random numbers also be independent of the output sequence of the third source. We first characterize the δ-achievable rate region, i.e., the region that a rate pair of the two mappings must satisfy in order to keep the approximation error within δ ∈ [0,1), as well as the second-order achievable rate region for correlated general sources. We then apply our results to non-mixed and mixed independently and identically distributed (i.i.d.) correlated sources, and reveal that the second-order achievable rate region for these sources can be represented in terms of the sum of normal distributions.

  • Joint Transmission and Coding Scheme for High-Resolution Video Streams over Multiuser MIMO-OFDM Systems

    Koji TASHIRO  Leonardo LANANTE  Masayuki KUROSAKI  Hiroshi OCHI  

     
    PAPER-Communication Systems
    Vol: E100-A No:11  Page(s): 2304-2313

    High-resolution image and video communication in home networks is expected to proliferate with the spread of Wi-Fi devices and the introduction of multiple-input multiple-output (MIMO) systems. This paper proposes a joint transmission and coding scheme for broadcasting high-resolution video streams over multiuser MIMO systems with an eigenbeam-space division multiplexing (E-SDM) technique. Scalable video coding produces a code stream comprising multiple layers with unequal contributions to image quality. The proposed scheme jointly assigns the data of the scalable code streams to subcarriers and spatial streams based on their signal-to-noise ratio (SNR) values, so that visually important data are transmitted with high reliability. Simulation results show that the proposed scheme surpasses the conventional unequal power allocation (UPA) approach in terms of both the peak signal-to-noise ratio (PSNR) of received images and the probability of correct decoding. The PSNR of the proposed scheme exceeds 35 dB with a probability of over 95% when the received SNR is higher than 6 dB. The improvement in average PSNR over the conventional UPA reaches approximately 20 dB at a received SNR of 6 dB. Furthermore, the correct decoding probability reaches 95% when the received SNR is greater than 4 dB.
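
    The mapping idea in the abstract (important layers onto reliable slots) can be pictured with a toy assignment routine; the layer names, slot labels, and SNR values below are hypothetical, and the paper's actual joint assignment over E-SDM spatial streams is not reproduced.

      def map_layers_to_slots(layer_names, slot_snrs_db):
          """layer_names: most important first; slot_snrs_db: {(stream, subcarrier): SNR in dB}.
          Assigns the best slots to the most important layers, proportionally."""
          slots = sorted(slot_snrs_db, key=slot_snrs_db.get, reverse=True)
          return {slot: layer_names[min(i * len(layer_names) // len(slots),
                                        len(layer_names) - 1)]
                  for i, slot in enumerate(slots)}

      layers = ["base", "enhancement-1", "enhancement-2"]
      snrs = {(0, 0): 18.0, (0, 1): 9.5, (1, 0): 14.2, (1, 1): 5.1}  # dB, made up
      for slot, layer in sorted(map_layers_to_slots(layers, snrs).items()):
          print(f"stream {slot[0]}, subcarrier {slot[1]} -> {layer}")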

  • Maximizing the Throughput of Wi-Fi Mesh Networks with Distributed Link Activation

    Jae-Young YANG  Ledan WU  Yafeng ZHOU  Joonho KWON  Han-You JEONG  

     
    PAPER-Mobile Information Network and Personal Communications
    Vol: E100-A No:11  Page(s): 2425-2438

    In this paper, we study Wi-Fi mesh networks (WMNs) as a promising candidate for a wireless networking infrastructure that interconnects a variety of access networks. The main performance bottleneck of a WMN is its limited capacity due to packet collisions under the contention-based IEEE 802.11s MAC. To mitigate this problem, we present the distributed link-activation (DLA) protocol, which activates a set of collision-free links for a fixed amount of time by exchanging a few control packets between neighboring mesh routers (MRs). Through a rigorous proof, it is shown that the number of DLA rounds is upper-bounded by O(Smax), where Smax is the maximum number of simultaneous interference-free links in a WMN topology. Based on the DLA, we also design the distributed throughput-maximal scheduling (D-TMS) scheme, which overlays the DLA protocol on a new frame architecture based on the IEEE 802.11 power saving mode. To mitigate its high latency, we propose the D-TMS adaptive data-period control (D-TMS-ADPC), which adjusts the data period depending on the traffic load of the WMN. Numerical results show that the D-TMS-ADPC scheme achieves much higher throughput than the IEEE 802.11s MAC.
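
    The scheduling step that DLA approximates in a distributed way can be pictured with a toy centralized greedy routine that picks a collision-free link set by backlog; the link names and conflict pairs below are hypothetical, and the distributed control-packet exchange of the paper is not modeled.

      def greedy_collision_free_links(backlog, conflicts):
          """backlog: {link: queue length}; conflicts: set of frozenset({link_a, link_b})."""
          chosen = []
          for link in sorted(backlog, key=backlog.get, reverse=True):
              if all(frozenset((link, c)) not in conflicts for c in chosen):
                  chosen.append(link)
          return chosen

      queues = {"A-B": 7, "B-C": 5, "C-D": 4, "D-E": 2}
      conflict_pairs = {frozenset(("A-B", "B-C")), frozenset(("B-C", "C-D")),
                        frozenset(("C-D", "D-E"))}
      print(greedy_collision_free_links(queues, conflict_pairs))  # ['A-B', 'C-D']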

  • Energy-Efficient Resource Allocation Strategy for Low Probability of Intercept and Anti-Jamming Systems

    Yu Min HWANG  Jun Hee JUNG  Kwang Yul KIM  Yong Sin KIM  Jae Seang LEE  Yoan SHIN  Jin Young KIM  

     
    LETTER-Digital Signal Processing
    Vol: E100-A No:11  Page(s): 2498-2502

    The aim of this letter is to guarantee low probability of intercept (LPI) and anti-jamming (AJ) capability by maximizing energy efficiency (EE), thereby improving wireless communication survivability and sustaining communication in jamming environments. We study a scenario with one transceiver pair and a partial-band noise jammer in a Rician fading channel and propose an EE optimization algorithm to solve the resulting optimization problem. With the proposed algorithm, LPI and AJ can be guaranteed simultaneously while satisfying the maximum signal-to-jamming-and-noise-ratio constraint and the combinatorial subchannel allocation condition, respectively. Simulation results indicate that the proposed algorithm is more energy-efficient than the baseline schemes and guarantees LPI and AJ performance in a jamming environment.

  • Simulating the Three-Dimensional Room Transfer Function for a Rotatable Complex Source

    Bing BU  Changchun BAO  Maoshen JIA  

     
    LETTER-Engineering Acoustics
    Vol: E100-A No:11  Page(s): 2487-2492

    This letter proposes an extended image-source model to simulate the room transfer function of a rotatable complex source in a three-dimensional reverberant room. The proposed model uses spherical harmonic decomposition to describe the exterior sound field of the complex source. Based on the “axis flip” concept, the mirroring relations between the source and its images are summarized by a unified mirroring operator acting on the sound-field coefficients. The rotation of the source is taken into account by exploiting the rotation property of spherical harmonics. The accuracy of the proposed model is verified through appropriate simulation examples.
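
    For orientation, a sketch of the classical shoebox image-source model with an omnidirectional point source and a uniform wall reflection coefficient; the letter's spherical-harmonic extension for a rotatable directional source is not reproduced, and all dimensions below are illustrative.

      import itertools, math

      def image_source_taps(room, src, rcv, beta=0.8, max_order=2, c=343.0):
          """Return (delay_s, amplitude) pairs for image sources up to max_order reflections."""
          taps = []
          orders = range(-max_order, max_order + 1)
          for (nx, ny, nz) in itertools.product(orders, repeat=3):
              for (qx, qy, qz) in itertools.product((0, 1), repeat=3):
                  # Image coordinate per axis: (-1)^q * source + 2 * n * wall length.
                  img = [(1 - 2 * q) * s + 2 * n * L
                         for q, s, n, L in zip((qx, qy, qz), src, (nx, ny, nz), room)]
                  refl = sum(abs(n - q) + abs(n)
                             for n, q in zip((nx, ny, nz), (qx, qy, qz)))
                  if refl > max_order:
                      continue
                  dist = math.dist(img, rcv)
                  taps.append((dist / c, beta ** refl / (4 * math.pi * dist)))
          return sorted(taps)

      room, src, rcv = (5.0, 4.0, 3.0), (1.0, 1.5, 1.2), (3.5, 2.0, 1.5)  # metres
      for delay, amp in image_source_taps(room, src, rcv)[:5]:
          print(f"delay {delay * 1000:6.2f} ms, amplitude {amp:.5f}")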

  • Optimizing the System Performance of Relay Enhanced Cellular Networks through Time Partitioning

    Liqun ZHAO  Hongpeng WANG  

     
    LETTER-Communication Theory and Signals
    Vol: E100-A No:10  Page(s): 2204-2206

    In this letter, an effective algorithm is proposed to improve the performance of relay-enhanced cellular networks by allocating appropriate resources to each access point under quality-of-service constraints. We first derive the ergodic rate of the backhaul link based on a Poisson point process model, and then allocate resources to each link according to the quality-of-service requirements and the ergodic rates of the links. Numerical results show that the proposed algorithm improves not only the system throughput but also the rate distribution across user equipment.
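
    The ergodic-rate ingredient can be illustrated by Monte-Carlo averaging of log2(1 + SINR) when interferers are drawn from a Poisson point process; the constants, path-loss exponent, and the absence of fading below are simplifying assumptions, not the letter's model.

      import numpy as np

      def ergodic_rate_ppp(density, radius, link_dist, alpha=3.5, tx_power=1.0,
                           noise=1e-9, trials=5000, seed=0):
          """Average spectral efficiency of a link of length link_dist (metres)
          with interferers forming a PPP of given density in a disc of given radius."""
          rng = np.random.default_rng(seed)
          area = np.pi * radius ** 2
          rates = []
          for _ in range(trials):
              n = rng.poisson(density * area)
              r = radius * np.sqrt(rng.random(n))        # uniform distances in the disc
              interference = np.sum(tx_power * np.maximum(r, 1.0) ** (-alpha))
              signal = tx_power * link_dist ** (-alpha)
              rates.append(np.log2(1.0 + signal / (interference + noise)))
          return float(np.mean(rates))

      print("ergodic rate ≈",
            round(ergodic_rate_ppp(density=1e-4, radius=500.0, link_dist=50.0), 3),
            "bit/s/Hz")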

  • A Compact Tree Representation of an Antidictionary

    Takahiro OTA  Hiroyoshi MORITA  

     
    PAPER-Information Theory
    Vol: E100-A No:9  Page(s): 1973-1984

    In both theoretical analysis and practical use of an antidictionary coding algorithm, an important problem is how to encode the antidictionary of an input source. This paper proposes a compact tree representation of an antidictionary built from a circular string for an input source. We use a technique for encoding a tree in compression via substring enumeration to encode the tree representation of the antidictionary. Moreover, we propose a new two-pass universal antidictionary coding algorithm based on the proposed tree representation. We prove that the proposed algorithm is asymptotically optimal for a stationary ergodic source.

  • An Efficient Resource Allocation Algorithm for Underlay Cognitive Radio Multichannel Multicast Networks

    Qun LI  Ding XU  

     
    LETTER-Communication Theory and Signals
    Vol: E100-A No:9  Page(s): 2065-2068

    In underlay cognitive radio (CR) multicast networks, the cognitive base station (CBS) must transmit at the lowest rate among all the secondary users (SUs) within the multicast group. Existing works have shown that the sum rate of such networks saturates as the number of SUs increases. In this letter, for CR multicast networks with multiple channels, we group the SUs into different subgroups, each with an exclusive channel. We then investigate the problem of joint user grouping and power allocation that maximizes the sum rate of all subgroups under the interference power constraint and the transmit power constraint. Compared to the exponential complexity in the number of SUs required by the optimal algorithm, the proposed algorithm has only linear complexity. Simulation results confirm that the proposed algorithm achieves a sum rate very close to that of the optimal algorithm and greatly outperforms both the maximum signal-to-noise-ratio based user grouping algorithm and the conventional algorithm without user grouping.
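
    The flavor of the grouping problem (each subgroup's multicast rate is the minimum rate of its members) can be shown with a toy greedy heuristic; the per-user rates below are invented, and this is not the linear-complexity algorithm of the letter.

      def greedy_grouping(user_rates):
          """user_rates[u][c]: achievable rate of user u on channel c."""
          n_channels = len(next(iter(user_rates.values())))
          groups = {c: [] for c in range(n_channels)}

          def total_rate(assign):
              return sum(min(user_rates[u][c] for u in members)
                         for c, members in assign.items() if members)

          for u in user_rates:  # place each SU where the sum rate suffers least
              best_c = max(range(n_channels),
                           key=lambda c: total_rate({**groups, c: groups[c] + [u]}))
              groups[best_c].append(u)
          return groups, total_rate(groups)

      rates = {"SU1": [3.0, 1.0], "SU2": [2.5, 2.0], "SU3": [0.5, 2.2]}  # bit/s/Hz, made up
      grouping, total = greedy_grouping(rates)
      print(grouping, "sum rate =", round(total, 2))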

  • Radio Resource Management Based on User and Network Characteristics Considering 5G Radio Access Network in a Metropolitan Environment

    Akira KISHIDA  Yoshifumi MORIHIRO  Takahiro ASAI  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies
    Publicized: 2017/02/08  Vol: E100-B No:8  Page(s): 1352-1365

    In this paper, we clarify the issues in a metropolitan environment involving overlaid frequency bands with various bandwidths and propose a cell selection scheme that improves communication quality based on user and network characteristics. Different frequency bands with various signal bandwidths will be overlaid on each other in forthcoming fifth-generation (5G) radio access networks. At the same time, the services, applications, and features of user equipment (UE) will become more diversified, and the requirements on communication quality will become more varied. Moreover, in real environments, roads and buildings have irregular layouts. Especially in an urban or metropolitan environment, the complex architecture of a metropolis directly affects radio propagation. Under these conditions, communication quality degrades because cell radio resources are depleted by many UE connections and by the mismatch between service requirements and cell capabilities. The proposed scheme prevents this degradation. Its effectiveness is evaluated in an ideal regular deployment and in a non-regular metropolitan environment through computer simulations. Simulation results show that the average time from the start of transmission to the completion of reception at the UE is improved by approximately 40% with the proposed scheme compared to an existing cell selection scheme based on the maximum signal-to-interference-plus-noise power ratio (SINR).

  • Variable-Length Coding with Cost Allowing Non-Vanishing Error Probability

    Hideki YAGI  Ryo NOMURA  

     
    PAPER-Information Theory
    Vol: E100-A No:8  Page(s): 1683-1692

    We consider fixed-to-variable length coding with a regular cost function, allowing the error probability up to any constant ε. We first derive finite-length upper and lower bounds on the average codeword cost, which are used to derive general formulas for two kinds of minimum achievable rates. For a fixed-to-variable length code, we call the set of source sequences that can be decoded without error the dominant set of source sequences. For any two regular cost functions, it is revealed that the dominant set of source sequences for a code attaining the minimum achievable rate under one cost function is also the dominant set for a code attaining the minimum achievable rate under the other cost function. We also give general formulas for the second-order minimum achievable rates.

  • Affinity Propagation Algorithm Based Multi-Source Localization Method for Binary Detection

    Yan WANG  Long CHENG  Jian ZHANG  

     
    LETTER-Information Network
    Publicized: 2017/05/10  Vol: E100-D No:8  Page(s): 1916-1919

    Wireless sensor networks (WSNs) have attracted many researchers in recent years. They can be widely used in areas such as surveillance, health care, and agriculture. Location information is very important for WSN applications such as geographic routing, data fusion, and tracking, so localization is one of the key technologies for WSNs. Since the computational complexity of traditional source localization is high, such methods cannot run on the sensor nodes themselves. In this paper, we first introduce a detection model based on the Neyman-Pearson criterion. This model considers the effects of the false alarm and missed detection rates, so it is more realistic than the binary and probabilistic models. We then propose a localization method based on the affinity propagation algorithm. Simulation results show that the proposed method provides high localization accuracy.
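
    A rough sketch of the clustering step suggested by the abstract, with several simplifications: detection is modeled as a plain distance threshold rather than the Neyman-Pearson model, and each affinity-propagation cluster centre is read as a crude source-location estimate. Sensor layout, detection radius, and source positions are hypothetical.

      import numpy as np
      from sklearn.cluster import AffinityPropagation

      rng = np.random.default_rng(1)
      sensors = rng.uniform(0, 100, size=(400, 2))        # sensor positions (m)
      sources = np.array([[25.0, 30.0], [70.0, 65.0]])    # true source positions (m)

      # Binary detection stand-in: a sensor fires if it lies within 15 m of a source.
      dists = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2)
      detecting = sensors[(dists < 15.0).any(axis=1)]

      clusters = AffinityPropagation(random_state=0).fit(detecting)
      for centre in clusters.cluster_centers_:
          print("estimated source near", np.round(centre, 1))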

  • Latency-Aware Selection of Check Variables for Soft-Error Tolerant Datapath Synthesis

    Junghoon OH  Mineo KANEKO  

     
    LETTER
    Vol: E100-A No:7  Page(s): 1506-1510

    This letter proposes a heuristic algorithm to select check variables, which are the points of comparison for error detection, for soft-error tolerant datapaths. Our soft-error tolerance scheme is based on check-and-retry computation and an efficient resource management technique named speculative resource sharing (SRS). Starting with the smallest set of check variables, the proposed algorithm repeatedly adds new check variables one by one and finds the minimum-latency solution among the series of generated solutions. During this process, each new check variable is selected so as to enlarge the opportunity for SRS. Experimental results show that improvements in latency are achieved compared with the choice of the smallest set of check variables.

  • Low-Complexity Recursive-Least-Squares-Based Online Nonnegative Matrix Factorization Algorithm for Audio Source Separation

    Seokjin LEE  

     
    LETTER-Music Information Processing
    Publicized: 2017/02/06  Vol: E100-D No:5  Page(s): 1152-1156

    An online nonnegative matrix factorization (NMF) algorithm based on recursive least squares (RLS) is described in matrix form, and a simplified algorithm with low computational complexity is developed for a frame-by-frame online audio source separation system. First, the online NMF algorithm based on the RLS method is described as solving the NMF problem recursively. Next, a simplified algorithm is developed to approximate the RLS-based online NMF algorithm with low complexity. The proposed algorithms are evaluated in terms of audio source separation, and the results show that their performance is superior to that of the conventional online NMF algorithm, with significantly reduced complexity.
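
    A minimal per-frame online NMF sketch (plain multiplicative updates for the activations and a small gradient step on the basis, Euclidean cost) to make the frame-by-frame setting concrete; the RLS-based updates actually proposed in the paper are not reproduced, and the spectra below are random stand-ins.

      import numpy as np

      EPS = 1e-12

      def activation_for_frame(W, v, n_iter=50):
          """Nonnegative activation h with v ≈ W @ h for one spectrum frame v."""
          h = np.full(W.shape[1], 1.0 / W.shape[1])
          for _ in range(n_iter):
              h *= (W.T @ v) / (W.T @ (W @ h) + EPS)
          return h

      def online_nmf(frames, n_components=8, basis_lr=0.05, seed=0):
          """Yield (W, h) for each incoming nonnegative spectrum frame."""
          rng = np.random.default_rng(seed)
          W = None
          for v in frames:
              if W is None:
                  W = rng.random((v.shape[0], n_components)) + EPS
              h = activation_for_frame(W, v)
              W = np.maximum(W + basis_lr * np.outer(v - W @ h, h), EPS)
              yield W, h

      spectra = np.abs(np.random.default_rng(2).normal(size=(100, 64)))
      for i, (W, h) in enumerate(online_nmf(spectra)):
          if i % 25 == 0:
              print(f"frame {i}: reconstruction error = {np.linalg.norm(spectra[i] - W @ h):.3f}")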
