
Keyword Search Result

[Keyword] Al (20498 hits)

6281-6300 hits (20498 hits)

  • Comparing Operating Systems Scalability on Multicore Processors by Microbenchmarking

    Yan CUI  Yu CHEN  Yuanchun SHI  

     
    PAPER-Computer System and Services

      Vol:
    E95-D No:12
      Page(s):
    2810-2820

    Multicore processor architectures have become ubiquitous in today's computing platforms, especially in parallel computing installations, thanks to their power and cost advantages. While the technology trend continues towards hundreds of cores on a chip in the foreseeable future, an urgent question for system designers as well as application users is whether applications can receive sufficient support on today's operating systems to scale to many cores. To this end, one needs to understand the strengths and weaknesses of each system's support for scalability and to identify the major bottlenecks limiting scalability, if any. As open-source operating systems are of particular interest in the research and industry communities, in this paper we choose three operating systems (Linux, Solaris and FreeBSD) and systematically evaluate and compare their scalability using a set of highly focused microbenchmarks on an AMD 32-core system, to gain a broad and detailed understanding of their scalability. We use system profiling tools and analyze kernel source code to find the root cause of each observed scalability bottleneck. Our results reveal that no single operating system among the three stands out on all system aspects, though some prevail on particular aspects. For example, Linux outperforms Solaris and FreeBSD significantly for file-descriptor- and process-intensive operations. For applications with intensive socket creation and deletion operations, Solaris leads FreeBSD, which scales better than Linux. With the help of performance tools and source code instrumentation and analysis, we find that the synchronization primitives protecting shared data structures in the kernels are the major bottleneck limiting system scalability.

  • Incremental Non-Gaussian Analysis on Multivariate EEG Signal Data

    Kam Swee NG  Hyung-Jeong YANG  Soo-Hyung KIM  Sun-Hee KIM  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E95-D No:12
      Page(s):
    3010-3016

    In this paper, we propose a novel incremental method for discovering latent variables from multivariate data with high efficiency. It integrates non-Gaussianity and an adaptive incremental model in an unsupervised way to extract informative features. Our proposed method discovers a small number of compact features from a very large number of features and still achieves good predictive performance on EEG signals. The promising EEG signal classification results of our experiments demonstrate that this approach can successfully extract important features. Our proposed method also has low memory requirements and computational costs.

  • Mastering Signal Processing in MPEG SAOC

    Kwangki KIM  Minsoo HAHN  Jinsul KIM  

     
    PAPER-Speech and Hearing

      Vol:
    E95-D No:12
      Page(s):
    3053-3059

    MPEG spatial audio object coding (SAOC) is a new audio coding standard that efficiently represents various audio objects as a down-mix signal and spatial parameters. MPEG SAOC is backward compatible with existing playback systems through the down-mix signal. If a mastering signal is used instead of the down-mix signal to provide CD-like sound quality, the output signal decoded with the mastering signal may easily be degraded by the difference between the down-mix and mastering signals. To use the mastering signal successfully in MPEG SAOC, this difference must be eliminated. As a simple approach, we propose a mastering signal processing scheme using the mastering down-mix gain (MDG), which is similar to the arbitrary down-mix gain of MPEG Surround. We also propose an enhanced mastering signal processing scheme using an MDG bias to reduce the quantization errors of the MDG. Experimental results show that the proposed schemes improve the sound quality of the output signal decoded with the mastering signal. In particular, the enhanced method outperforms the simple method in terms of both quantization error and sound quality.

  • On d-Asymptotics for High-Dimensional Discriminant Analysis with Different Variance-Covariance Matrices

    Takanori AYANO  Joe SUZUKI  

     
    LETTER-Artificial Intelligence, Data Mining

      Vol:
    E95-D No:12
      Page(s):
    3106-3108

    In this paper we consider the two-class classification problem with high-dimensional data. It is important to find a class of distributions such that we cannot expect good performance in classification for any classifier. In this paper, when two population variance-covariance matrices are different, we give a reasonable sufficient condition for distributions such that the misclassification rate converges to the worst value as the dimension of data tends to infinity for any classifier. Our results can give guidelines to decide whether or not an experiment is worth performing in many fields such as bioinformatics.

  • Software FMEA for Safety-Critical System Based on Co-analysis of System Model and Software Model

    Guoqi LI  

     
    LETTER-Dependable Computing

      Vol:
    E95-D No:12
      Page(s):
    3101-3105

    Software FMEA is valuable and practically used for the embedded software of safety-critical systems. In this paper, a novel method for software FMEA is presented based on co-analysis of a system model and a software model. The method is expected to detect the quantitative and dynamic effects of a targeted software failure. A typical application of the method is provided to illustrate the procedure and the applicable scenarios. In addition, a pattern is refined from the application for further reuse.

  • Traffic Engineering of Peer-Assisted Content Delivery Network with Content-Oriented Incentive Mechanism

    Naoya MAKI  Takayuki NISHIO  Ryoichi SHINKUMA  Tatsuya MORI  Noriaki KAMIYAMA  Ryoichi KAWAHARA  Tatsuro TAKAHASHI  

     
    PAPER-Network and Communication

      Vol:
    E95-D No:12
      Page(s):
    2860-2869

    In content services where people purchase and download large-volume contents, minimizing network traffic is crucial for the service provider and the network operator, since they want to lower the cost charged for bandwidth and the cost of network infrastructure, respectively. Traffic localization is an effective way of reducing network traffic. Network traffic is localized when a client can obtain the requested content files from a nearby altruistic client instead of from the source servers. The concept of the peer-assisted content delivery network (CDN) reduces overall traffic with this mechanism and enables service providers to minimize traffic without deploying or borrowing distributed storage. To localize traffic effectively, content files that are likely to be requested by many clients should be cached locally. This paper presents a novel traffic engineering scheme for peer-assisted CDN models. Its key idea is to control the behavior of clients through a content-oriented incentive mechanism. This approach enables us to optimize traffic flows by letting altruistic clients download the content files most likely to contribute to localizing traffic among clients. To induce altruistic clients to request the desired files, we combine content files while keeping the price equal to that of a single content file. This paper presents a solution for optimizing the selection of content files to be combined so that cross traffic in the network is minimized. We also give a model for analyzing the upper-bound performance, together with numerical results.

  • A Forced Alignment Based Approach for English Passage Reading Assessment

    Junbo ZHANG  Fuping PAN  Bin DONG  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Speech and Hearing

      Vol:
    E95-D No:12
      Page(s):
    3046-3052

    This paper presents our investigation into improving the performance of our previous automatic reading quality assessment system. The baseline system calculates the average Phone Log-Posterior Probability (PLPP) over all phones in the voice to be assessed and uses this average as the reading quality assessment feature. In this paper, we present three improvements. First, we cluster the triphones, calculate the average normalized PLPP for each cluster separately, and use these averages as a multi-dimensional assessment feature vector instead of the original one-dimensional feature. This method is simple but effective, reducing the difference between machine and manual scores by 30.2% relative. Second, to assess reading rhythm, we train Gaussian Mixture Models (GMMs) that capture each triphone's relative duration under standard pronunciation. Using the GMMs, we calculate the probability that the relative duration of each phone conforms to standard pronunciation, and the average of these probabilities is added to the assessment feature vector as another dimension, reducing the score difference by a further 9.7% relative. Third, we detect Filled Pauses (FPs) by analyzing the formant curve and add the relative duration of the FPs to the assessment feature vector as another dimension, reducing the score difference by a further 10.2% relative. Finally, when the features from all three methods are used together, the difference between machine and manual scores is reduced by 43.9% relative compared to the baseline system.
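    The first improvement (per-cluster averages of normalized PLPP instead of one global average) can be sketched as follows. The function name, input format, and sample values are illustrative assumptions, not taken from the paper:

```python
def cluster_plpp_features(phone_plpps, phone_clusters, n_clusters):
    """Average the normalized Phone Log-Posterior Probabilities (PLPP)
    per triphone cluster, yielding a multi-dimensional assessment
    feature vector instead of one global average (illustrative sketch).

    phone_plpps    -- one normalized PLPP value per phone in the utterance
    phone_clusters -- the triphone-cluster index of each phone
    """
    sums = [0.0] * n_clusters
    counts = [0] * n_clusters
    for plpp, cluster in zip(phone_plpps, phone_clusters):
        sums[cluster] += plpp
        counts[cluster] += 1
    # Clusters with no observed phones contribute a neutral 0.0 feature.
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

    For example, phones with PLPPs [-1.0, -2.0, -3.0] assigned to clusters [0, 0, 1] out of 3 clusters yield the feature vector [-1.5, -3.0, 0.0].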

  • On Improving the Tradeoff between Symbol Rate and Diversity Gain Using Quasi-Orthogonal Space-Time Block Codes with Linear Receivers

    Kazuyuki MORIOKA  David ASANO  

     
    LETTER

      Vol:
    E95-B No:12
      Page(s):
    3763-3767

    In this letter, the tradeoff between symbol rate and diversity gain of Space-Time Block Codes (STBCs) with linear receivers is considered. It is known that Group Orthogonal-Toeplitz Codes (GOTCs) can achieve a good tradeoff with linear receivers. However, the symbol rate of GOTCs is limited to that of the base Orthogonal Space-Time Block Codes (OSTBCs). We propose to simply change the GOTC base codes from OSTBCs to Quasi-Orthogonal Space-Time Block Codes (Q-OSTBCs). Q-OSTBCs can improve the symbol rate of GOTCs at the expense of diversity gain. Simulation results show that Q-OSTBC based GOTCs can improve the tradeoff between symbol rate and diversity gain over that of the original GOTCs.

  • Lossless Compression of Double-Precision Floating-Point Data for Numerical Simulations: Highly Parallelizable Algorithms for GPU Computing

    Mamoru OHARA  Takashi YAMAGUCHI  

     
    PAPER-Parallel and Distributed Computing

      Vol:
    E95-D No:12
      Page(s):
    2778-2786

    In numerical simulations using massively parallel computers like GPGPUs (General-Purpose computing on Graphics Processing Units), we often need to transfer computational results from external devices such as GPUs to the main memory or secondary storage of the host machine. Since the computational results are sometimes unacceptably large to hold, it is desirable to compress the data before storing it. In addition, considering the overhead of transferring data between the devices and host memories, it is preferable that the data be compressed as part of the parallel computation performed on the devices. Traditional compression methods for floating-point numbers do not always exhibit good parallelism. In this paper, we propose a new compression method for massively parallel simulations running on GPUs, in which we combine a few successive floating-point numbers and interleave them to improve compression efficiency. We also present numerical examples of the compression ratio and throughput obtained from experimental implementations of the proposed method running on CPUs and GPUs.
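    The core idea, grouping a few successive doubles and interleaving their bytes so that bytes of equal significance become contiguous and thus more compressible, can be sketched as below. This is a minimal CPU illustration of the byte-interleaving step only (group size and function names are assumptions), not the authors' GPU implementation:

```python
import struct

def interleave_doubles(values, group=4):
    """Pack `group` successive doubles and interleave their bytes so that
    bytes of equal significance (e.g. all exponent bytes) are contiguous,
    which tends to expose redundancy to a downstream entropy coder."""
    assert len(values) % group == 0
    out = bytearray()
    for i in range(0, len(values), group):
        raw = [struct.pack('<d', v) for v in values[i:i + group]]
        for byte_pos in range(8):          # 8 bytes per IEEE-754 double
            for r in raw:
                out.append(r[byte_pos])
    return bytes(out)

def deinterleave_doubles(data, group=4):
    """Invert interleave_doubles, recovering the original doubles."""
    values = []
    for i in range(0, len(data), 8 * group):
        block = data[i:i + 8 * group]
        for j in range(group):
            raw = bytes(block[byte_pos * group + j] for byte_pos in range(8))
            values.append(struct.unpack('<d', raw)[0])
    return values
```

    A downstream entropy coder then compresses the interleaved stream; the transform itself is lossless and exactly invertible.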

  • Analytical Modeling of Network Throughput Prediction on the Internet

    Chunghan LEE  Hirotake ABE  Toshio HIROTSU  Kyoji UMEMURA  

     
    PAPER-Network and Communication

      Vol:
    E95-D No:12
      Page(s):
    2870-2878

    Predicting network throughput is important for network-aware applications. Network throughput depends on a number of factors, and many throughput prediction methods have been proposed. However, many of these methods suffer from the facts that the distribution of traffic fluctuation is unclear and that the scale and bandwidth of networks are rapidly increasing. Furthermore, virtual machines are used as platforms in many network research and service fields, and they can affect network measurement. A prediction method that uses pairs of differently sized connections has been proposed. This method, which we call the connection pair, uses a small probe transfer over TCP to predict the throughput of a large data transfer. We focus on measurements, analyses, and modeling for precise prediction. We first clarified that the actual throughput for the connection pair changes non-linearly and monotonically, with noise. Second, we built a previously proposed predictor using the same training data sets as our proposed method and found it unsuitable for capturing these characteristics. We propose a throughput prediction method based on the connection pair that uses ν-support vector regression with a polynomial kernel to handle prediction models represented as non-linear, continuous monotonic functions. The predictions of our method are more accurate than those of the previous predictor. Moreover, under an unstable network state, the drop in accuracy is also smaller than that of the previous predictor.
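    The paper fits the probe-to-throughput relationship with ν-support vector regression and a polynomial kernel; as a dependency-light stand-in, the sketch below fits a least-squares polynomial to synthetic probe/actual throughput pairs to illustrate modeling a non-linear, monotonic mapping. All data values and the polynomial degree are invented for illustration:

```python
import numpy as np

# Synthetic training data: throughput of the small probe connection vs.
# measured throughput of the paired large transfer (Mbit/s).
probe = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
actual = 10.0 * np.sqrt(probe)     # non-linear, monotonic ground truth

# Fit a degree-3 polynomial as a stand-in for the nu-SVR polynomial kernel.
coeffs = np.polyfit(probe, actual, deg=3)
predict = np.poly1d(coeffs)

# Predict the large-transfer throughput for an unseen probe measurement.
estimate = float(predict(2.5))
```

    A real predictor would be trained on measured connection-pair samples; the point here is only that the fitted model is a continuous, monotonic non-linear function of the probe throughput.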

  • SLA-Driven Adaptive Resource Allocation for Virtualized Servers

    Wei ZHANG  Li RUAN  Mingfa ZHU  Limin XIAO  Jiajun LIU  Xiaolan TANG  Yiduo MEI  Ying SONG  Yuzhong SUN  

     
    PAPER-Computer System and Services

      Vol:
    E95-D No:12
      Page(s):
    2833-2843

    In order to reduce cost and improve efficiency, many data centers adopt virtualization solutions. Virtualization allows multiple virtual machines to be hosted on a single physical server. However, this poses new challenges for resource management. Web workloads, which are dominant in data centers, are known to vary dynamically with time. In order to meet applications' service level agreements (SLAs), how to allocate resources to virtual machines has become an important challenge in virtualized server environments, especially when dealing with fluctuating workloads and complex server applications. User experience is an important manifestation of the SLA and attracts increasing attention. In this paper, the SLA is defined by server-side response time. Traditional resource allocation based on resource utilization has some drawbacks. We argue that dynamic resource allocation based directly on real-time user experience is more reasonable and also of practical significance. To address the problem, we propose a system architecture that combines response time measurements and analysis of user experience for resource allocation. An optimization model is introduced to dynamically allocate the resources among virtual machines. When resources are insufficient, we provide service differentiation and first guarantee the resource requirements of applications with higher priorities. We evaluate our proposal using TPC-W and Webbench. The experimental results show that our system can judiciously allocate system resources. The system helps stabilize applications' user experience and reduces the mean deviation of user experience from desired targets.

  • Asymptotically Optimal Merging on ManyCore GPUs

    Arne KUTZNER  Pok-Son KIM  Won-Kwang PARK  

     
    PAPER-Parallel and Distributed Computing

      Vol:
    E95-D No:12
      Page(s):
    2769-2777

    We propose a family of algorithms for efficient merging on contemporary GPUs, each requiring O(m log(n/m + 1)) element comparisons, where m and n are the sizes of the input sequences with m ≤ n. According to the lower bounds for merging, all proposed algorithms are asymptotically optimal with respect to the number of necessary comparisons. First we introduce a parallel algorithm that splits a merging problem of size 2^l into 2^i subproblems of size 2^(l-i), for some arbitrary i with 0 ≤ i ≤ l. This algorithm represents a merger for i = l, but is rather inefficient in that case. Efficiency is boosted by moving to a two-stage approach in which the splitting process stops at some predetermined level and transfers control to several block mergers operating in parallel. We formally prove the asymptotic optimality of the splitting process and show that for symmetrically sized inputs our approach delivers runtimes up to 4 times faster than the thrust::merge function that is part of the Thrust library. To assess the value of our merging technique in the context of sorting, we construct and evaluate a MergeSort on top of it. In our benchmarks the resulting MergeSort clearly outperforms the MergeSort implementation provided by the Thrust library, as well as Cederman's GPU-optimized variant of QuickSort.
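    The splitting stage can be illustrated with the standard merge-path (co-rank) binary search, which finds, for any output position d, how many elements come from each input; each search costs only a logarithmic number of comparisons. This is a sequential Python sketch of the two-stage idea (split, then independent block merges), not the authors' CUDA code:

```python
def corank(d, a, b):
    """Find (i, j) with i + j == d such that merging a[:i] and b[:j]
    yields the first d elements of the merge of a and b (merge path)."""
    lo, hi = max(0, d - len(b)), min(d, len(a))
    while lo < hi:
        i = (lo + hi) // 2
        if a[i] < b[d - i - 1]:   # too few elements taken from a
            lo = i + 1
        else:
            hi = i
    return lo, d - lo

def parallel_merge(a, b, parts=4):
    """Split the merge of sorted a and b into `parts` independent
    subproblems, merge each one, and concatenate the results."""
    total = len(a) + len(b)
    chunk = (total + parts - 1) // parts
    bounds = [corank(min(k * chunk, total), a, b) for k in range(parts + 1)]
    out = []
    for (i0, j0), (i1, j1) in zip(bounds, bounds[1:]):
        out.extend(sorted(a[i0:i1] + b[j0:j1]))  # stand-in for a block merger
    return out
```

    On a GPU, each (i0, j0)-(i1, j1) subproblem would be handed to an independently running block merger, since the partitions can be computed without any coordination.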

  • An Energy-Balancing Unequal Clustering and TDMA-Like Scheduling Mechanism in Wireless Sensor Networks

    Tao LIU  

     
    LETTER-Network

      Vol:
    E95-B No:12
      Page(s):
    3882-3885

    In wireless sensor networks, unbalanced energy consumption and transmission collisions are two inherent problems that can significantly reduce network lifetime. This letter proposes an unequal clustering and TDMA-like scheduling mechanism (UCTSM) based on a gradient sinking model in wireless sensor networks. It integrates unequal clustering and TDMA-like transmission scheduling to balance the energy consumption among cluster heads and reduce transmission collisions. Simulation results show that UCTSM balances the energy consumption among the cluster heads, saves the nodes' energy and thus improves the network lifetime.

  • Trust-Based Bargaining Game Model for Cognitive Radio Spectrum Sharing Scheme

    Sungwook KIM  

     
    LETTER-Terrestrial Wireless Communication/Broadcasting Technologies

      Vol:
    E95-B No:12
      Page(s):
    3925-3928

    Recently, cooperative spectrum sensing has been studied to greatly improve the sensing performance of cognitive radio networks. To develop an adaptive cooperative sensing algorithm, an important issue is how to properly induce selfish users to participate in the spectrum sensing work. In this paper, a new cognitive radio spectrum sharing scheme is developed by employing a trust-based bargaining model. The proposed scheme dynamically adjusts bargaining powers and adaptively shares the available spectrum in a real-time online manner. Under widely differing and diversified network situations, this approach is dynamic and flexible enough to adaptively respond to current network conditions. Simulation results demonstrate that the proposed scheme obtains better network performance and bandwidth efficiency than existing schemes.

  • Pro-Detection of Atrial Fibrillation Using Mixture of Experts

    Mohamed Ezzeldin A. BASHIR  Kwang Sun RYU  Unil YUN  Keun Ho RYU  

     
    PAPER-Data Engineering, Web Information Systems

      Vol:
    E95-D No:12
      Page(s):
    2982-2990

    Reliable detection of atrial fibrillation (AF) in electrocardiogram (ECG) monitoring systems is significant for early treatment and health risk reduction. Various ECG mining and analysis studies have addressed a wide variety of clinical and technical issues. However, there is still room for improvement, mostly in two areas. First, the morphological descriptors potentially change, not only between different patients or patient clusters but also within the same patient. As a result, a model constructed using old training data needs to be adjusted in order to identify new concepts. Second, determining the number and types of ECG parameters necessary for detecting AF arrhythmia with high quality poses substantial challenges in terms of computational effort and time consumption. We propose a mixture technique that addresses these limitations. It combines an active learning method with an ECG parameter customization technique to achieve better AF arrhythmia detection in real-time applications. Our proposed technique achieved a sensitivity of 95.2%, a specificity of 99.6%, and an overall accuracy of 99.2%.

  • Research and Development on Satellite Positioning and Navigation in China Open Access

    Weixiao MENG  Enxiao LIU  Shuai HAN  Qiyue YU  

     
    INVITED PAPER

      Vol:
    E95-B No:11
      Page(s):
    3385-3392

    With the development of Global Navigation Satellite Systems (GNSS), the amount of related research is growing rapidly in China. Many accomplishments have been achieved in all branches of the satellite navigation field, especially motivated by the BeiDou Program. In this paper, the current status, technologies and developments of satellite positioning and navigation in China are introduced. First, an overview and update of the BeiDou Program is presented, known as the three-step development strategy for different services. Then the signal design for the BeiDou system is discussed, including the generation of pseudo-random noise (PRN) codes for the currently available signal B1, and the investigation of a new signal modulation scheme for interoperability at the open frequency B1C. The B1C signal should comply with the Multiplexed Binary Offset Carrier (MBOC) constraints, and a modulation called Quadrature Multiplexed BOC (QMBOC) is presented, which is equivalent to time-multiplexed BOC (TMBOC) for GPS and composite BOC (CBOC) for Galileo, while overcoming the drawback of CBOC. In addition, inter- and intra-system compatibility is discussed, based on the effective C/N0 proposed by the International Telecommunication Union (ITU). After that, receiver technologies for challenging environments are introduced, such as weak signal acquisition and assisted GNSS (A-GNSS). Moreover, a method of ambiguity mitigation for adaptive digital beam forming (ADBF) in large-spacing antenna arrays is proposed, by which interference suppression becomes available. Furthermore, cutting-edge technologies are introduced, including seamless indoor/outdoor navigation and collaborative navigation. Finally, GNSS applications in China for industry and daily life are shown, as well as the market prospects.

  • Burst Error Resilient Channel Coding for SVC over Mobile Networks

    GunWoo KIM  Yongwoo CHO  Jihyeok YUN  DougYoung SUH  

     
    LETTER-Multimedia Environment Technology

      Vol:
    E95-A No:11
      Page(s):
    2032-2035

    This paper proposes a Burst Error Resilient coding (BRC) technology for mobile broadcasting networks. The proposed method utilizes Scalable Video Coding (SVC) and Forward Error Correction (FEC) to overcome service outages due to burst losses in mobile networks. The performance is evaluated by comparing the PSNR of SVC and the proposed method under an MBSFN simulation channel. The simulation results compare the PSNR of SVC with equal error protection (EEP), with unequal error protection (UEP), and with the proposed BRC using Raptor FEC codes.

  • Development of Optically Controlled Beam-Forming Network

    Akira AKAISHI  Takashi TAKAHASHI  Yoshiyuki FUJINO  Mitsugu OHKAWA  Toshio ASAI  Ryutaro SUZUKI  Tomohiro AKIYAMA  Hirofumi MATSUZAWA  

     
    PAPER

      Vol:
    E95-B No:11
      Page(s):
    3404-3411

    NICT has developed a test model of an optically controlled beam-forming network (OBF) for a future multiple-beam antenna. The OBF test model consists of an electro-optic converter unit, an OBF unit, and an optoelectronic converter unit. A Ka-band OBF test model was manufactured to demonstrate the OBF. Radiation patterns obtained from the measured OBF data confirmed agreement between the expected and calculated results. Communication tests of the bit error rate (BER) for the digital communication link were performed. The results confirmed that the OBF caused no serious degradation, below 1 dB of Eb/N0 in BER performance at a BER of 1×10^-8.

  • Balanced Switching Schemes for Gradient-Error Compensation in Current-Steering DACs

    Xueqing LI  Qi WEI  Fei QIAO  Huazhong YANG  

     
    PAPER-Electronic Circuits

      Vol:
    E95-C No:11
      Page(s):
    1790-1798

    This paper introduces balanced switching schemes to compensate for linear and quadratic gradient errors in the unary current source array of a current-steering digital-to-analog converter (DAC). A novel algorithm is proposed to avoid the accumulation of gradient errors, yielding much smaller integral nonlinearities (INLs) than conventional switching schemes. Switching scheme examples with different numbers of current cells are also exhibited, including symmetric and non-symmetric arrays in round and square outlines. (a) For symmetric arrays, where each cell is divided into two parallel concentric ones, the simulated INL of the proposed round/square switching scheme is less than 25%/40% of that of conventional switching schemes, respectively. This improvement is achieved by cancelling linear errors and reducing the accumulated quadratic errors to near the absolute lower bound, using the proposed balanced algorithm. (b) For non-symmetric arrays, i.e. arrays whose cells are not divided into parallel ones, linear errors cannot be cancelled, and the accumulated INL varies with different quadratic error distribution centers. In this case, the proposed algorithm strictly controls the accumulation of quadratic gradient errors and, unlike the algorithm for symmetric arrays, also strictly controls linear errors in two orthogonal directions simultaneously. Therefore, the INLs of the proposed non-symmetric switching schemes are less than 64% of those of conventional switching schemes.
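    The effect the paper exploits, namely that the switching order determines how gradient errors accumulate into INL, can be reproduced with a toy one-dimensional simulation. The cell count, error slope, and the simple end-pairing order below are illustrative assumptions, not the proposed algorithm:

```python
def inl(switching_order, errors):
    """INL profile: running sum of per-cell current errors in the order
    the cells are switched on, relative to the ideal transfer curve."""
    acc, profile = 0.0, []
    for cell in switching_order:
        acc += errors[cell]
        profile.append(acc)
    return profile

n = 16
# Zero-mean linear gradient error across a 1-D row of current cells.
errors = [(i - (n - 1) / 2) * 0.01 for i in range(n)]

sequential = list(range(n))                      # 0, 1, 2, ..., 15
# Symmetric order: pair cells from opposite ends so linear errors cancel.
symmetric = [c for pair in zip(range(n // 2), range(n - 1, n // 2 - 1, -1))
             for c in pair]                      # 0, 15, 1, 14, ...

worst_seq = max(abs(v) for v in inl(sequential, errors))
worst_sym = max(abs(v) for v in inl(symmetric, errors))
```

    For this linear gradient the symmetric order keeps the worst-case INL at a single pair's imbalance, far below the sequential order's accumulated error; the paper's balanced algorithm generalizes such cancellation to quadratic errors and two-dimensional arrays.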

  • A Downlink Multi-Relay Transmission Scheme Employing Tomlinson-Harashima Precoding and Interference Alignment

    Heng LIU  Pingzhi FAN  Li HAO  

     
    PAPER-Mobile Information Network

      Vol:
    E95-A No:11
      Page(s):
    1904-1911

    This paper proposes a downlink multi-user transmission scheme for the amplify-and-forward (AF)-based multi-relay cellular network, in which Tomlinson-Harashima precoding (TH precoding) and interference alignment (IA) are jointly applied. The transmission process is divided into two phases: TH precoding is first performed at the base station (BS) to support the multiplexing of data streams transmitted to both mobile stations (MSs) and relay stations (RSs), and IA is then performed at both the BS and the RSs to achieve interference-free communication. During the whole process, neither data exchange nor strict synchronization is required between the BS and the RSs, reducing the cooperative complexity as well as improving system performance. A theoretical analysis of the channel capacity for the different types of users is provided, yielding upper bounds on the channel capacity. Our analysis and simulation results show that the joint application of TH precoding and IA outperforms other schemes in the presented multi-relay cellular network.
