
Keyword Search Result

[Keyword] (42807hit)

6421-6440hit(42807hit)

  • SMT-Based Scheduling for Overloaded Real-Time Systems

    Zhuo CHENG  Haitao ZHANG  Yasuo TAN  Yuto LIM  

     
    PAPER-Dependable Computing

    Publicized: 2017/01/23  Vol: E100-D No:5  Page(s): 1055-1066

    In a real-time system, tasks are required to be completed before their deadlines. Under normal workload conditions, a scheduler with a proper scheduling policy can make all the tasks meet their deadlines. However, in practical environments, the system workload may vary widely. Once the workload becomes so heavy that no feasible schedule can make all the tasks meet their deadlines, we say the system is overloaded, and some tasks will miss their deadlines. To alleviate the performance degradation caused by tasks missing their deadlines, the design of the scheduling policy is crucial. Many design objectives can be considered. In this paper, we first focus on maximizing the total number of tasks that can be completed before their deadlines. A scheduling method based on satisfiability modulo theories (SMT) is proposed. In the method, the scheduling problem is treated as a satisfiability problem. The key work is to formalize the satisfiability problem in a first-order language. After the formalization, an SMT solver (e.g., Z3, Yices) is employed to solve the satisfiability problem, and an optimal schedule can be generated from the solution model returned by the solver. The correctness of the method and the optimality of the generated schedule can be verified in a straightforward manner. The time efficiency of the proposed method is demonstrated through various simulations. Moreover, in the proposed SMT-based scheduling method, we separate the scheduling constraints into system constraints and target constraints, so that designing scheduling for other objectives only requires modifying the target constraints. To demonstrate this advantage, we adapt the SMT-based scheduling method to other design objectives: maximizing effective processor utilization and maximizing the obtained values of completed tasks. Only minor changes are needed in the adaptation procedure, which shows that the proposed SMT-based scheduling method is flexible and sufficiently general.
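
    As a rough illustration of how such a scheduling problem can be handed to an SMT solver, the sketch below encodes a tiny non-preemptive, single-processor task set with the Z3 Python bindings and maximizes the number of tasks finished by their deadlines. The task data, the single-processor non-preemptive setting, and the use of Z3's Optimize engine are illustrative assumptions, not the constraint system of the paper.

    ```python
    # Hedged sketch: encode a tiny non-preemptive, single-processor scheduling
    # problem as SMT constraints and maximize the number of on-time tasks.
    # Requires the z3-solver package; the task data below is made up.
    from z3 import Int, Bool, Optimize, Implies, And, Or, If, Sum, sat

    tasks = {            # name: (release time, execution time, deadline)
        "t1": (0, 3, 6),
        "t2": (1, 2, 5),
        "t3": (2, 4, 12),
    }

    opt = Optimize()
    start = {n: Int(f"start_{n}") for n in tasks}
    done = {n: Bool(f"done_{n}") for n in tasks}    # completed before its deadline?

    for n, (rel, exe, dl) in tasks.items():
        opt.add(start[n] >= rel)                            # system constraint: release time
        opt.add(Implies(done[n], start[n] + exe <= dl))     # count only on-time tasks

    # system constraint: counted tasks must not overlap on the single processor
    names = list(tasks)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ea, eb = tasks[a][1], tasks[b][1]
            opt.add(Implies(And(done[a], done[b]),
                            Or(start[a] + ea <= start[b],
                               start[b] + eb <= start[a])))

    # target constraint: maximize the number of tasks completed on time
    opt.maximize(Sum([If(done[n], 1, 0) for n in tasks]))

    if opt.check() == sat:
        m = opt.model()
        for n in tasks:
            print(n, "done:", m[done[n]], "start:", m[start[n]])
    ```

    Swapping the maximized expression for, e.g., a sum of task values would change the objective while leaving the other constraints untouched, which mirrors the separation into system constraints and target constraints described in the abstract.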

  • An Improved Perceptual MBSS Noise Reduction with an SNR-Based VAD for a Fully Operational Digital Hearing Aid

    Zhaoyang GUO  Xin'an WANG  Bo WANG  Shanshan YONG  

     
    PAPER-Speech and Hearing

    Publicized: 2017/02/17  Vol: E100-D No:5  Page(s): 1087-1096

    This paper first reviews state-of-the-art noise reduction methods and points out their weaknesses in noise reduction performance and speech quality, especially in low signal-to-noise ratio (SNR) environments. It then presents an improved perceptual multiband spectral subtraction (MBSS) noise reduction algorithm (NRA) and a novel robust voice activity detection (VAD) scheme based on an amended sub-band SNR. The proposed SNR-based VAD considerably increases the accuracy of discrimination between noise and speech frames. Simulation results show that the proposed NRA achieves better segmental SNR (segSNR) and perceptual evaluation of speech quality (PESQ) performance than other noise reduction algorithms, especially in low SNR environments. In addition, a fully operational digital hearing aid chip based on the proposed NRA is designed and fabricated in a 0.13 µm CMOS process. The chip implementation shows that the whole chip draws 1.3 mA at 1.2 V operation. Acoustic tests show a maximum output sound pressure level (OSPL) of 114.6 dB SPL, an equivalent input noise of 5.9 dB SPL, and a total harmonic distortion of 2.5%. The proposed digital hearing aid chip is therefore a promising candidate for high-performance hearing-aid systems.
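
    The sketch below illustrates, with numpy only, the general shape of a sub-band-SNR VAD feeding multiband spectral subtraction: the noise estimate is updated in frames judged to be speech-free and subtracted band by band with a spectral floor. The frame handling, band splitting, thresholds, and over-subtraction factor are illustrative choices, not the amended VAD or MBSS parameters of the paper.

    ```python
    # Hedged sketch of the general idea only: a frame-wise sub-band SNR estimate
    # used as a simple VAD, followed by basic multiband spectral subtraction.
    # Frame length, band edges, thresholds and the over-subtraction rule are
    # illustrative choices, not the parameters or amendments used in the paper.
    import numpy as np

    def mbss_denoise(frames, n_bands=4, snr_threshold_db=3.0, alpha=2.0, floor=0.01):
        """frames: 2-D array (n_frames, frame_len) of windowed time-domain frames."""
        spec = np.fft.rfft(frames, axis=1)
        power = np.abs(spec) ** 2
        bands = np.array_split(np.arange(power.shape[1]), n_bands)

        # initial noise estimate from the first few frames (assumed speech-free)
        noise_power = power[:5].mean(axis=0)

        cleaned = np.empty_like(spec)
        for i, frame_power in enumerate(power):
            # sub-band SNR based VAD: average per-band SNR in dB
            band_snr = [10 * np.log10(frame_power[b].mean() / (noise_power[b].mean() + 1e-12))
                        for b in bands]
            is_speech = np.mean(band_snr) > snr_threshold_db

            if not is_speech:   # update the noise estimate only in noise-only frames
                noise_power = 0.9 * noise_power + 0.1 * frame_power

            # per-band spectral subtraction with a spectral floor
            gain = np.empty_like(frame_power)
            for b in bands:
                sub = frame_power[b] - alpha * noise_power[b]
                gain[b] = np.sqrt(np.maximum(sub, floor * frame_power[b]) /
                                  (frame_power[b] + 1e-12))
            cleaned[i] = spec[i] * gain
        return np.fft.irfft(cleaned, n=frames.shape[1], axis=1)
    ```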

  • Correcting Syntactic Annotation Errors Based on Tree Mining

    Kanta SUZUKI  Yoshihide KATO  Shigeki MATSUBARA  

     
    PAPER-Natural Language Processing

    Publicized: 2017/01/23  Vol: E100-D No:5  Page(s): 1106-1113

    This paper provides a new method to correct annotation errors in a treebank. The previous error correction method constructs a pseudo parallel corpus where incorrect partial parse trees are paired with correct ones, and extracts error correction rules from the parallel corpus. By applying these rules to a treebank, the method corrects errors. However, this method does not achieve wide coverage of error correction. To achieve wide coverage, our method adopts a different approach. In our method, we consider that if an infrequent pattern can be transformed to a frequent one, then it is an annotation error pattern. Based on a tree mining technique, our method seeks such infrequent tree patterns, and constructs error correction rules each of which consists of an infrequent pattern and a corresponding frequent pattern. We conducted an experiment using the Penn Treebank. We obtained 1,987 rules which are not constructed by the previous method, and the rules achieved good precision.
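
    As a much-simplified illustration of pairing infrequent patterns with frequent ones to form correction rules, the toy sketch below counts local parent-children patterns, flags those under a frequency threshold, and pairs each with its most similar frequent pattern. The pattern shape, threshold, and similarity measure are arbitrary illustrative choices standing in for the paper's tree mining, which operates on full partial parse trees.

    ```python
    # Hedged toy sketch: pair infrequent local tree patterns with similar frequent
    # ones as candidate correction rules. The pattern shape (parent -> children),
    # the threshold and the similarity measure are illustrative assumptions only.
    from collections import Counter
    from difflib import SequenceMatcher

    # each "pattern" is a parent label with its ordered child labels (made-up data)
    patterns = [("NP", ("DT", "NN")), ("NP", ("DT", "NN")), ("NP", ("DT", "NN")),
                ("VP", ("VB", "NP")), ("VP", ("VB", "NP")),
                ("NP", ("DT", "NNS", "NN"))]      # a rare, possibly erroneous pattern

    counts = Counter(patterns)
    threshold = 2
    frequent = [p for p, c in counts.items() if c >= threshold]
    infrequent = [p for p, c in counts.items() if c < threshold]

    def similarity(p, q):
        # crude similarity between two patterns sharing the same parent label
        if p[0] != q[0]:
            return 0.0
        return SequenceMatcher(None, p[1], q[1]).ratio()

    rules = []
    for p in infrequent:
        best = max(frequent, key=lambda q: similarity(p, q), default=None)
        if best is not None and similarity(p, best) > 0.5:
            rules.append((p, best))   # candidate rule: rewrite p as best

    for src, dst in rules:
        print("correction rule:", src, "->", dst)
    ```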

  • RRWL: Round Robin-Based Wear Leveling Using Block Erase Table for Flash Memory

    Seon Hwan KIM  Ju Hee CHOI  Jong Wook KWAK  

     
    LETTER-Software System

    Publicized: 2017/01/30  Vol: E100-D No:5  Page(s): 1124-1127

    In this letter, we propose round robin-based wear leveling (RRWL) for flash memory systems. RRWL uses a block erase table (BET), a bit array that records the erasure history of blocks. A BET can use one-to-one mode to increase wear-leveling performance or one-to-many mode to reduce memory consumption. However, one-to-many mode decreases the accuracy of cold-block information, which degrades the lifetime of the flash memory. To solve this problem, RRWL consistently uses one-to-one mode in a round-robin manner, which increases the accuracy of cold-block identification while keeping the BET memory size as small as in one-to-many mode. Experiments show that RRWL increases the lifetime of flash memory by up to 47% and 14% compared with BET and HaWL, respectively.
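
    A minimal sketch of this kind of bookkeeping is given below: a bit array records erase events for a round-robin window of blocks, and blocks whose bit is still clear at the end of a round are reported as cold candidates. The window size, reset policy, and cold-block criterion are illustrative assumptions, not RRWL itself.

    ```python
    # Hedged sketch: a bit-array erase-history table over a round-robin window of
    # blocks, illustrating the kind of bookkeeping described above. Window size,
    # wear-leveling trigger and cold-block criterion are illustrative, not RRWL.
    class EraseTableWindow:
        def __init__(self, total_blocks, window_size):
            self.total_blocks = total_blocks
            self.window_size = window_size
            self.window_start = 0
            self.bits = [0] * window_size       # one bit per block in the window

        def in_window(self, block):
            offset = (block - self.window_start) % self.total_blocks
            return offset < self.window_size

        def record_erase(self, block):
            if self.in_window(block):
                offset = (block - self.window_start) % self.total_blocks
                self.bits[offset] = 1

        def advance(self):
            """End one round: report cold candidates, then move the window."""
            cold = [(self.window_start + i) % self.total_blocks
                    for i, bit in enumerate(self.bits) if bit == 0]
            self.window_start = (self.window_start + self.window_size) % self.total_blocks
            self.bits = [0] * self.window_size
            return cold

    bet = EraseTableWindow(total_blocks=16, window_size=4)
    for b in (0, 2, 3, 9):      # block 9 is outside the current window
        bet.record_erase(b)
    print("cold candidates in this round:", bet.advance())   # prints [1]
    ```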

  • Learning Corpus-Invariant Discriminant Feature Representations for Speech Emotion Recognition

    Peng SONG  Shifeng OU  Zhenbin DU  Yanyan GUO  Wenming MA  Jinglei LIU  Wenming ZHENG  

     
    LETTER-Speech and Hearing

    Publicized: 2017/02/02  Vol: E100-D No:5  Page(s): 1136-1139

    Speech emotion recognition is a hot topic in speech signal processing, and methods for it have developed rapidly in recent years, with some satisfactory results. However, most of these methods are trained and evaluated on the same corpus. In reality, the training and testing data are often collected from different corpora, whose feature distributions differ, and these discrepancies can greatly affect recognition performance. To tackle this problem, a novel corpus-invariant discriminant feature representation algorithm, called transfer discriminant analysis (TDA), is presented for speech emotion recognition. The basic idea of TDA is to integrate the kernel LDA algorithm and a similarity measurement between distributions into one objective function. Experimental results under cross-corpus conditions show that the proposed method significantly improves recognition rates.
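
    The abstract does not give the objective function itself; as a hedged, generic illustration of folding kernel LDA and a distribution-similarity term into one objective, a common form is

    $$\max_{\mathbf{W}} \; \frac{\operatorname{tr}\!\left(\mathbf{W}^{\top}\mathbf{S}_b\,\mathbf{W}\right)}{\operatorname{tr}\!\left(\mathbf{W}^{\top}\mathbf{S}_w\,\mathbf{W}\right)} \;-\; \lambda \, D\!\left(\mathbf{W}^{\top}\mathbf{X}_s,\; \mathbf{W}^{\top}\mathbf{X}_t\right),$$

    where $\mathbf{S}_b$ and $\mathbf{S}_w$ are the between- and within-class scatter matrices in the kernel-induced feature space, $\mathbf{X}_s$ and $\mathbf{X}_t$ are source- and target-corpus features, $D(\cdot,\cdot)$ is a distribution-discrepancy measure (e.g., maximum mean discrepancy), and $\lambda$ balances the two terms. The symbols and the trace-ratio form are illustrative, not the exact TDA formulation.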

  • Traffic Anomaly Detection Based on Robust Principal Component Analysis Using Periodic Traffic Behavior

    Takahiro MATSUDA  Tatsuya MORITA  Takanori KUDO  Tetsuya TAKINE  

     
    PAPER-Network

    Publicized: 2016/11/21  Vol: E100-B No:5  Page(s): 749-761

    In this paper, we study robust principal component analysis (PCA)-based anomaly detection techniques for network traffic, which detect traffic anomalies by projecting measured traffic data onto a normal subspace and an anomalous subspace. In PCA-based anomaly detection, outliers, i.e., anomalies with excessively large traffic volume, may contaminate the subspaces and degrade the performance of the detector. To solve this problem, robust PCA methods have been studied; in a robust PCA-based anomaly detection scheme, outliers can be removed from the measured traffic data before constructing the subspaces. Although robust PCA methods are promising, they incur high computational cost to obtain the optimal location vector and scatter matrix for the subspace. We propose a novel anomaly detection scheme that extends the minimum covariance determinant (MCD) estimator, a robust PCA method. The proposed scheme utilizes the daily periodicity of traffic volume and attempts to detect anomalies in every period of measured traffic. In each period, before constructing the subspace, outliers are removed from the measured traffic data using the location vector and scatter matrix obtained in the preceding period. We validate the proposed scheme by applying it to traffic data measured in the Abilene network. Numerical results show that the proposed scheme provides robust anomaly detection with less computational cost.
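
    A compact sketch of the generic MCD-plus-subspace idea is shown below using scikit-learn's MinCovDet: points with large robust Mahalanobis distances are dropped before the normal subspace is built, and new traffic vectors are scored by their residual energy outside that subspace. The thresholds, dimensions, and toy data are illustrative, and the paper's reuse of the preceding period's location vector and scatter matrix is only indicated by a comment.

    ```python
    # Hedged sketch of robust-PCA-style anomaly detection with an MCD estimator.
    # Thresholds, subspace dimension and the toy data are illustrative; the
    # paper's per-period reuse of the previous location/scatter is only noted.
    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(0)
    traffic = rng.normal(size=(500, 8))            # stand-in for measured traffic vectors
    traffic[::50] += 15.0                          # inject a few volume outliers

    # robust location vector and scatter matrix (in the proposed scheme these could
    # be taken from the preceding period instead of being re-estimated here)
    mcd = MinCovDet(random_state=0).fit(traffic)
    dist = mcd.mahalanobis(traffic)
    inliers = traffic[dist < np.quantile(dist, 0.9)]   # drop likely outliers

    # build the normal subspace from the cleaned data
    centered = inliers - inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal_basis = vt[:3].T                            # top principal directions

    def anomaly_score(x):
        """Residual energy of x outside the normal subspace."""
        r = x - inliers.mean(axis=0)
        residual = r - normal_basis @ (normal_basis.T @ r)
        return float(residual @ residual)

    print(anomaly_score(traffic[0]), anomaly_score(rng.normal(size=8) + 15.0))
    ```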

  • Removal of Salt-and-Pepper Noise Using a High-Precision Frequency Analysis Approach

    Masaya HASEGAWA  Kazuki SAKASHITA  Kousei UCHIKOSHI  Shigeki HIROBAYASHI  Tadanobu MISAWA  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2017/01/24  Vol: E100-D No:5  Page(s): 1097-1105

    A digital image is often deteriorated by impulse noise that may occur during processes such as transmission. Impulse noise converts pixel values in the image to black (0) or white (255) at random, and is also called salt-and-pepper noise. In this paper, we recover the details of pixels damaged by impulse noise by analyzing the frequency components of the noisy image with non-harmonic analysis (NHA). Experimental results confirm that the method achieves superior PSNR performance compared with recent denoising methods. In addition, we show that the proposed method is particularly effective at removing impulse noise from images with high noise rates.
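
    As a small illustration of the detection step only, the sketch below flags candidate salt-and-pepper pixels (extreme values that also differ strongly from their 3x3 neighborhood) and estimates the noise rate. The neighborhood rule and threshold are illustrative; the NHA-based frequency analysis and reconstruction proposed in the paper are not implemented here.

    ```python
    # Hedged sketch: detect candidate salt-and-pepper pixels in a grayscale image.
    # The neighborhood rule and threshold are illustrative; the NHA-based
    # reconstruction described in the paper is not implemented here.
    import numpy as np

    def salt_pepper_mask(img, diff_threshold=60):
        """img: 2-D uint8 array. Returns a boolean mask of suspected noise pixels."""
        img = img.astype(np.float64)
        padded = np.pad(img, 1, mode="edge")
        # median of the 3x3 neighborhood around every pixel
        windows = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                            for r in range(3) for c in range(3)], axis=0)
        local_median = np.median(windows, axis=0)
        extreme = (img == 0) | (img == 255)
        return extreme & (np.abs(img - local_median) > diff_threshold)

    noisy = np.full((64, 64), 128, dtype=np.uint8)
    noisy[10, 10], noisy[20, 20] = 0, 255          # inject two impulse pixels
    mask = salt_pepper_mask(noisy)
    print("estimated noise rate:", mask.mean())
    ```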

  • Variance Analysis for Least p-Norm Estimator in Mixture of Generalized Gaussian Noise

    Yuan CHEN  Long-Ting HUANG  Xiao Long YANG  Hing Cheung SO  

     
    LETTER-Digital Signal Processing

    Vol: E100-A No:5  Page(s): 1226-1230

    Variance analysis is an important research topic to assess the quality of estimators. In this paper, we analyze the performance of the least ℓp-norm estimator in the presence of mixture of generalized Gaussian (MGG) noise. In the case of known density parameters, the variance expression of the ℓp-norm minimizer is first derived, for the general complex-valued signal model. Since the formula is a function of p, the optimal value of p corresponding to the minimum variance is then investigated. Simulation results show the correctness of our study and the near-optimality of the ℓp-norm minimizer compared with Cramér-Rao lower bound.
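
    For reference, the least ℓp-norm estimator analyzed here can be written generically (with illustrative notation) as

    $$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \sum_{k=1}^{N} \bigl|x_k - s_k(\boldsymbol{\theta})\bigr|^{p},$$

    where $x_k$ are the noisy observations and $s_k(\boldsymbol{\theta})$ is the signal model parameterized by $\boldsymbol{\theta}$. The paper derives the variance of this minimizer as a function of $p$ under MGG noise and then investigates the $p$ that minimizes it.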

  • Dual-DCT-Lifting-Based Lapped Transform with Improved Reversible Symmetric Extension

    Taizo SUZUKI  Masaaki IKEHARA  

     
    PAPER-Digital Signal Processing

    Vol: E100-A No:5  Page(s): 1109-1118

    We present a lifting-based lapped transform (L-LT) and a reversible symmetric extension (RSE) in the boundary processing for more effective lossy-to-lossless image coding of data with various qualities from only one piece of lossless compressed data. The proposed dual-DCT-lifting-based LT (D2L-LT) parallel processes two identical LTs and consists of 1-D and 2-D DCT-liftings which allow the direct use of a DCT matrix in each lifting coefficient. Since the DCT-lifting can utilize any existing DCT software or hardware, it has great potential for elegant implementations that are dependent on the architecture and DCT algorithm used. In addition, we present an improved RSE (IRSE) that works by recalculating the boundary processing and solves the boundary problem that the DCT-lifting-based L-LT (DL-LT) has. We show that D2L-LT with IRSE mostly outperforms conventional L-LTs in lossy-to-lossless image coding.
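
    The sketch below shows only the generic integer-reversible lifting mechanism on which DCT-lifting relies: a predict/update pair whose operators are DCTs, with rounding, so the forward step is exactly invertible. It is not the paper's D2L-LT structure; the operators, normalization, and update weight are illustrative.

    ```python
    # Hedged sketch of an integer-reversible lifting step whose predict/update
    # operators are DCTs. Rounding keeps the step exactly invertible regardless
    # of the operator; this only illustrates the DCT-lifting idea, not D2L-LT.
    import numpy as np
    from scipy.fft import dct, idct

    def forward_lifting(x):
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        detail = odd - np.round(dct(even, norm="ortho"))             # predict step
        approx = even + np.round(0.5 * idct(detail, norm="ortho"))   # update step
        return approx, detail

    def inverse_lifting(approx, detail):
        even = approx - np.round(0.5 * idct(detail, norm="ortho"))
        odd = detail + np.round(dct(even, norm="ortho"))
        return even, odd

    x = np.arange(16, dtype=float)
    a, d = forward_lifting(x)
    even, odd = inverse_lifting(a, d)
    print(np.array_equal(even, x[0::2]), np.array_equal(odd, x[1::2]))  # True True
    ```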

  • Highly Robust Double Node Upset Resilient Hardened Latch Design

    Huaguo LIANG  Xin LI  Zhengfeng HUANG  Aibin YAN  Xiumin XU  

     
    PAPER-Electronic Circuits

    Vol: E100-C No:5  Page(s): 496-503

    With the scaling of technology, nanoscale CMOS integrated circuits are becoming more sensitive to single-event double-node upsets induced by charge sharing. A novel, highly robust hardened latch design is presented that is fully resilient to single-event double-node upsets as well as single-node upsets. The proposed latch employs multiple redundant C-elements to form a dual-interlocked structure in which the redundant C-elements bring affected nodes back to their correct states regardless of the energy of the striking particle. Detailed HSPICE simulations confirm that the proposed latch is completely resilient to double-node upsets and achieves an improved trade-off among robustness, area, delay, and power compared with previous latches. Extensive Monte Carlo simulations further show that the proposed latch is less sensitive to process, supply voltage, and temperature variations.
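
    The behavior that gives C-element-based latches their upset resilience can be pictured with a tiny behavioral model: a C-element drives its output only when its inputs agree and otherwise holds the previous value, so a transient flip on a single node feeding it cannot change the stored state. This is a behavioral toy in Python, not the proposed latch circuit.

    ```python
    # Hedged behavioral toy: a 2-input Muller C-element holds its output while its
    # inputs disagree, so a transient flip on one input node does not propagate.
    # This only illustrates the masking principle, not the proposed latch design.
    class CElement:
        def __init__(self, initial=0):
            self.out = initial

        def update(self, a, b):
            if a == b:            # inputs agree: drive the output
                self.out = a
            return self.out       # inputs disagree: hold the previous value

    c = CElement(initial=0)
    node_a, node_b = 1, 1
    print(c.update(node_a, node_b))      # 1: both redundant nodes store the value

    node_a ^= 1                          # a particle strike upsets one node only
    print(c.update(node_a, node_b))      # still 1: the upset is masked

    node_a ^= 1                          # feedback restores the struck node
    print(c.update(node_a, node_b))      # 1
    ```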

  • Fast Montgomery Modular Multiplication and Squaring on Embedded Processors

    Yang LI  Jinlin WANG  Xuewen ZENG  Xiaozhou YE  

     
    PAPER-Fundamental Theories for Communications

    Publicized: 2016/12/06  Vol: E100-B No:5  Page(s): 680-690

    Montgomery modular multiplication is one of the most efficient algorithms for modular multiplication of large integers. On resource-constrained embedded processors, memory-access operations play as important a role as arithmetic operations in modular multiplication. To improve the efficiency of Montgomery modular multiplication on embedded processors, this paper concentrates on reducing memory-access operations by adding a few working registers. We first revisit popular Montgomery modular multiplication algorithms, and then present improved algorithms for Montgomery modular multiplication and squaring over arbitrary prime fields. The algorithms adopt the general ideas of the hybrid multiplication algorithm proposed by Gura and the lazy doubling algorithm proposed by Lee. Through careful optimization and redesign, we propose novel implementations of Montgomery multiplication and squaring called the coarsely integrated product and operand hybrid scanning algorithm (CIPOHS) and the coarsely integrated lazy doubling algorithm (CILD). We implement the algorithms on a general MIPS64 processor and on an OCTEON CN6645 processor equipped with specific multiply-add instructions. Experiments show that CIPOHS and CILD offer the best performance on both the general MIPS64 and the OCTEON CN6645 processors, and that the proposed algorithms have clear advantages on processors with specific multiply-add instructions such as the OCTEON CN6645. When the modulus is 2048 bits, CIPOHS and CILD outperform the CIOS algorithm by 47% and 58%, respectively.
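
    For readers unfamiliar with the underlying operation, a short Python sketch of textbook Montgomery multiplication is given below. The radix, variable names, and the final conditional subtraction follow the textbook form; none of the CIPOHS/CILD register and memory-access scheduling described in the paper is reflected here.

    ```python
    # Hedged sketch of textbook Montgomery multiplication: computes a*b*R^{-1} mod N
    # for an odd modulus N and R = 2^k. This shows the operation itself, not the
    # CIPOHS/CILD memory-access optimizations proposed in the paper.
    def montgomery_setup(N, k):
        R = 1 << k
        assert N % 2 == 1 and N < R
        N_inv_neg = (-pow(N, -1, R)) % R      # N' such that N*N' ≡ -1 (mod R)
        return R, N_inv_neg

    def montgomery_multiply(a, b, N, R, N_inv_neg):
        t = a * b
        m = (t * N_inv_neg) % R               # m = t*N' mod R
        u = (t + m * N) // R                  # exact division by R
        return u - N if u >= N else u

    # usage: compute x*y mod N through the Montgomery domain
    N, k = 101, 8
    R, N_inv_neg = montgomery_setup(N, k)
    x, y = 57, 83
    x_bar = (x * R) % N                        # map into the Montgomery domain
    y_bar = (y * R) % N
    prod_bar = montgomery_multiply(x_bar, y_bar, N, R, N_inv_neg)
    prod = montgomery_multiply(prod_bar, 1, N, R, N_inv_neg)   # map back out
    print(prod, (x * y) % N)                   # both print the same value
    ```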

  • Detecting Transportation Modes Using Deep Neural Network

    Hao WANG  GaoJun LIU  Jianyong DUAN  Lei ZHANG  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2017/02/15  Vol: E100-D No:5  Page(s): 1132-1135

    Existing studies on transportation mode detection from global positioning system (GPS) trajectories mainly adopt handcrafted features. These features require researchers with a professional background and do not always work well because of the complexity of traffic behavior. To address these issues, we propose a model that uses a sparse autoencoder to extract point-level deep features from point-level handcrafted features. A convolutional neural network then aggregates the point-level deep features and generates a trajectory-level deep feature. A deep neural network combines the trajectory-level handcrafted features with the trajectory-level deep feature to detect the users' transportation modes. Experiments conducted on Microsoft's GeoLife data show that our model can automatically extract effective features and improve the accuracy of transportation mode detection. Compared with a model using only handcrafted features and shallow classifiers, the proposed model increases the maximum accuracy by 6%.
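
    A hedged PyTorch sketch of this kind of pipeline is shown below: an autoencoder's encoder produces point-level deep features, a 1-D convolution with pooling aggregates them over the trajectory, and a final classifier takes the concatenation with trajectory-level handcrafted features. All layer sizes, the simple L1 sparsity penalty, and the dimensions are illustrative assumptions, not the paper's architecture or settings.

    ```python
    # Hedged sketch of a point-feature autoencoder + CNN aggregation + DNN classifier
    # pipeline. Dimensions, the L1 sparsity penalty and layer choices are
    # illustrative assumptions, not the architecture or settings used in the paper.
    import torch
    import torch.nn as nn

    class ModeDetector(nn.Module):
        def __init__(self, point_dim=6, deep_dim=16, traj_dim=10, n_modes=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(point_dim, deep_dim), nn.ReLU())
            self.decoder = nn.Linear(deep_dim, point_dim)   # for autoencoder pretraining
            self.conv = nn.Conv1d(deep_dim, 32, kernel_size=3, padding=1)
            self.classifier = nn.Sequential(
                nn.Linear(32 + traj_dim, 64), nn.ReLU(), nn.Linear(64, n_modes))

        def forward(self, point_feats, traj_feats):
            # point_feats: (batch, n_points, point_dim), traj_feats: (batch, traj_dim)
            z = self.encoder(point_feats)                   # point-level deep features
            h = self.conv(z.transpose(1, 2))                # aggregate along the trajectory
            traj_deep = h.max(dim=2).values                 # trajectory-level deep feature
            return self.classifier(torch.cat([traj_deep, traj_feats], dim=1))

        def reconstruction_loss(self, point_feats, sparsity_weight=1e-3):
            z = self.encoder(point_feats)
            recon = self.decoder(z)
            # plain L1 activation penalty as a simple stand-in for a sparsity constraint
            return nn.functional.mse_loss(recon, point_feats) + sparsity_weight * z.abs().mean()

    model = ModeDetector()
    logits = model(torch.randn(8, 50, 6), torch.randn(8, 10))
    print(logits.shape)    # torch.Size([8, 4])
    ```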

  • FOREWORD Open Access

    Tatsuya KUNIKIYO  

     
    FOREWORD

    Vol: E100-C No:5  Page(s): 416-416

  • SDN-Based Self-Organizing Energy Efficient Downlink/Uplink Scheduling in Heterogeneous Cellular Networks Open Access

    Seungil MOON  Thant Zin OO  S. M. Ahsan KAZMI  Bang Ju PARK  Choong Seon HONG  

     
    INVITED PAPER

    Publicized: 2017/02/18  Vol: E100-D No:5  Page(s): 939-947

    The increase in network access devices and users' demand for high quality of service (QoS) have left network operators with insufficient capacity. Moreover, the existing control equipment and mechanisms are not flexible and agile enough for the dynamically changing environment of heterogeneous cellular networks (HetNets). This non-agile control plane is hard to scale with ever-increasing traffic demand and has become the performance bottleneck. Furthermore, the new HetNet architecture requires tight coordination and cooperation among the densely deployed small-cell base stations, particularly for interference mitigation and dynamic frequency reuse and sharing. These issues further complicate the existing control plane and can cause serious inefficiencies in terms of users' quality of experience and network performance. This article presents an SDN control framework for energy-efficient downlink/uplink scheduling in HetNets. The framework decouples the control plane from the data plane by means of a logically centralized controller with distributed agents implemented in separate entities of the network (users and base stations). The scheduling problem consists of four coupled sub-problems that must be solved simultaneously: (i) user association, (ii) power control, (iii) resource allocation, and (iv) interference mitigation. We formulate DL/UL scheduling in a HetNet as an optimization problem and use the Markov approximation framework to derive a distributed, economical algorithm. We then divide the algorithm into sub-routines for user association, power control, resource allocation, and interference mitigation, and implement these sub-routines on different agents of the SDN framework. We run extensive simulations to validate our proposal and present a performance analysis.
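
    As a hedged toy illustration of the Markov approximation idea referenced above, the snippet below weights candidate configurations by exp(beta x utility), which is the form of the stationary distribution such frameworks target: higher-utility configurations dominate as beta grows. The configurations, utilities, and beta are made up, and this is not the paper's DL/UL scheduling formulation or its distributed sub-routines.

    ```python
    # Hedged toy sketch of the Markov-approximation idea: sample configurations
    # with probability proportional to exp(beta * utility). Utilities and beta are
    # made up; this is not the paper's scheduling problem or its sub-routines.
    import math
    import random

    configs = {"cfgA": 3.0, "cfgB": 5.0, "cfgC": 4.5}   # configuration -> system utility
    beta = 2.0

    weights = {c: math.exp(beta * u) for c, u in configs.items()}
    total = sum(weights.values())
    probs = {c: w / total for c, w in weights.items()}
    print(probs)   # cfgB receives most of the probability mass

    # sample a configuration from the (approximate) stationary distribution
    choice = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
    print("selected configuration:", choice)
    ```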

  • Simulation Study of Low Latency Network Architecture Using Mobile Edge Computing

    Krittin INTHARAWIJITR  Katsuyoshi IIDA  Hiroyuki KOGA  

     
    PAPER

    Publicized: 2017/02/08  Vol: E100-D No:5  Page(s): 963-972

    Attaining extremely low latency in 5G cellular networks is an important challenge in communication research. Higher QoS in next-generation networks could enable several unprecedented services, such as the Tactile Internet, augmented reality, and virtual reality. However, these services all need support from powerful computational resources provided through cloud computing. Unfortunately, the geolocation of cloud data centers may be insufficient to satisfy the latency targeted in 5G networks: the physical distance between servers and users will sometimes be too great to enable quick reactions within the service time boundary. Mobile Edge Computing (MEC), which places many servers along the edges of networks, can solve this problem of long latency caused by long communication distances. MEC provides shorter communication latency, but total latency consists of both transmission and processing times, and always selecting the closest edge server can lead to longer computing latency, especially when many users cluster around particular edge servers. Therefore, this study examines the effects of both latencies. The communication latency is represented by hop count, and the computation latency is modeled by processor sharing (PS). An optimization model and selection policies are also proposed. Quantitative evaluations using simulations show that selecting a server according to the lowest total latency leads to the best performance, and that permitting an over-latency barrier would further improve the results.
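
    A small numeric sketch of this total-latency view (hop-count transmission delay plus a processor-sharing computing delay) is given below; the per-hop delay, service demand, and current loads are illustrative numbers, not the paper's simulation parameters or its PS model.

    ```python
    # Hedged sketch: total latency = hop-count transmission delay + a simple
    # processor-sharing computing delay. Per-hop delay, service demand and loads
    # are illustrative numbers, not the parameters of the paper's simulations.
    def total_latency(hops, active_jobs, per_hop_delay=2.0, service_demand=10.0):
        transmission = hops * per_hop_delay
        # under processor sharing, a job's completion time grows with the number
        # of jobs sharing the server (the new job included)
        processing = service_demand * (active_jobs + 1)
        return transmission + processing

    # candidate edge servers: (hop count from the user, jobs already in service)
    edges = {"edge1": (1, 6), "edge2": (3, 1), "edge3": (5, 0)}

    nearest = min(edges, key=lambda e: edges[e][0])
    best = min(edges, key=lambda e: total_latency(*edges[e]))
    for name, (h, n) in edges.items():
        print(name, "total latency:", total_latency(h, n))
    print("nearest server:", nearest, "| lowest-total-latency server:", best)
    ```

    With these made-up numbers the nearest server is the busiest one, so the lowest-total-latency choice differs from the nearest-server choice, which is the effect the abstract describes.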

  • LTDE: A Layout Tree Based Approach for Deep Page Data Extraction

    Jun ZENG  Feng LI  Brendan FLANAGAN  Sachio HIROKAWA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2017/02/21  Vol: E100-D No:5  Page(s): 1067-1078

    Content extraction from deep Web pages has received great attention in recent years. However, the increasingly complicated HTML structure of Web documents makes it more difficult to recognize data records by analyzing the HTML source code alone. In this paper, we propose a method named LTDE to extract data records from deep Web pages. Instead of analyzing the HTML source code, LTDE utilizes the visual features of data records in deep Web pages. A Web page is considered a finite set of visual blocks, and the data records are the visual blocks that have similar layouts. We also propose a pattern-recognition method named the layout tree to cluster visual blocks with similar layouts. The weight of each cluster is calculated, and the visual blocks in the cluster with the highest weight are chosen as the data records to be extracted. Experimental results show that LTDE achieves higher effectiveness and better robustness for Web data extraction than previous works.
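
    The clustering-and-weighting step can be pictured with the toy sketch below: visual blocks are grouped by a layout signature, each cluster is weighted, and the blocks in the heaviest cluster are returned as the data records. The signature and weight definitions are deliberately crude illustrative stand-ins for the paper's layout-tree patterns, not LTDE itself.

    ```python
    # Hedged toy sketch: cluster visual blocks by a layout signature and return the
    # highest-weight cluster as data-record candidates. The signature (tag path
    # plus block width) and the weight (block count x area) are illustrative
    # stand-ins for the paper's layout-tree patterns, not LTDE itself.
    from collections import defaultdict

    # each visual block: (tag path, width, height) -- made-up values
    blocks = [("div/ul/li", 300, 80), ("div/ul/li", 300, 78), ("div/ul/li", 300, 82),
              ("div/div", 900, 40), ("div/span", 120, 20)]

    clusters = defaultdict(list)
    for block in blocks:
        path, width, _ = block
        clusters[(path, width)].append(block)        # similar layout -> same cluster

    def cluster_weight(members):
        return len(members) * sum(w * h for _, w, h in members)

    records = max(clusters.values(), key=cluster_weight)
    print("extracted data records:", records)
    ```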

  • Design Differences in Pedestrian Navigation Systems Depending on the Availability of Carriable Navigation Information

    Tetsuya MANABE  Takaaki HASEGAWA  

     
    PAPER-Intelligent Transport System

    Vol: E100-A No:5  Page(s): 1197-1205

    In this paper, the differences in navigation information design, which is important for kiosk-type pedestrian navigation systems, were examined experimentally depending on whether carriable navigation information is available, in order to obtain knowledge that contributes to design guidelines for kiosk-type pedestrian navigation systems. In particular, we used route complexity information calculated with a regression equation containing multiple factors. In the absence of carriable navigation information, both the destination arrival rate and the route deviation rate improved; easy routes were presented at information level M (17 to 39 characters in Japanese), while complicated routes used level L (40 or more characters in Japanese). In contrast, in the presence of carriable navigation information, the user's memory load was reduced by carrying the same navigation information as shown on the kiosk-type terminal. Thus, the design of kiosk-type pedestrian navigation systems, e.g., the means of presenting navigation information, needs to be reconsidered. For example, if the system attaches importance to a high destination arrival rate, L_Carrying regardless of route complexity is better; if the system attaches importance to a low route deviation rate, M_Carrying for easy routes and L_Carrying for complicated routes are better. Consequently, this paper presents the differences in the designs of pedestrian navigation systems depending on whether carriable navigation information is absent or present.

  • Data-Adapted Volume Rendering for Scattered Point Data

    Junda ZHANG  Libing JIANG  Longxing KONG  Li WANG  Xiao'an TANG  

     
    LETTER-Computer Graphics

    Publicized: 2017/02/15  Vol: E100-D No:5  Page(s): 1148-1151

    In this letter, we present a novel method for reconstructing a continuous data field from scattered point data, which leads to volume-rendered results with more distinctive features. The gradient distribution of the scattered point data is analyzed via singular-value decomposition to investigate local features. A data-adaptive, ellipsoidally shaped function is constructed as the penalty function to evaluate the point weight coefficients in the moving least squares (MLS) approximation. Experimental results show that the proposed method reduces the reconstruction error and yields visualizations with better feature discrimination.
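
    The abstract does not state the weighting function explicitly; a generic anisotropic (ellipsoidal) weight of the kind it describes can be written, with illustrative symbols, as

    $$w_i(\mathbf{x}) = \exp\!\bigl(-(\mathbf{x}-\mathbf{x}_i)^{\top}\mathbf{\Sigma}_i^{-1}(\mathbf{x}-\mathbf{x}_i)\bigr), \qquad \mathbf{\Sigma}_i = \mathbf{U}_i\,\operatorname{diag}(\sigma_{i,1}^2,\sigma_{i,2}^2,\sigma_{i,3}^2)\,\mathbf{U}_i^{\top},$$

    where $\mathbf{x}_i$ is a scattered data point and $\mathbf{U}_i$, $\sigma_{i,k}$ come from the singular-value decomposition of the local gradient distribution, so the support of the weight stretches along directions of low gradient variation. This generic form only illustrates the idea, not the paper's exact penalty function.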

  • Transition Mappings between De Bruijn Sequences

    Ming LI  Yupeng JIANG  Dongdai LIN  Qiuyan WANG  

     
    LETTER-Cryptography and Information Security

    Vol: E100-A No:5  Page(s): 1254-1256

    We regard a De Bruijn sequence of order n as a bijection on $\mathbb{F}_2^n$ and consider the transition mappings between them. It is shown that there are only two conjugate transformations that always transfer De Bruijn sequences to De Bruijn sequences.

  • TOA Based Recalibration Systems for Improving LOS/NLOS Identification

    Yu Min HWANG  Yuchan SONG  Kwang Yul KIM  Yong Sin KIM  Jae Seang LEE  Yoan SHIN  Jin Young KIM  

     
    LETTER-Communication Theory and Signals

    Vol: E100-A No:5  Page(s): 1267-1270

    In this paper, we propose a non-cooperative line-of-sight (LOS)/non-LOS channel identification algorithm based on time-of-arrival statistics obtained from single-node channel measurements. To improve the accuracy of channel identification, we incorporate a recalibration interval, defined in terms of measured distance, into the proposed algorithm. Experimental results are presented in terms of identification probability and recalibration interval. The proposed algorithm involves a trade-off between channel identification quality and recalibration rate; however, depending on the recalibration interval, the sensitivity of the channel identification system can be greatly improved.

6421-6440hit(42807hit)