Kazuaki TAKEDA Hiromichi TOMEBA Fumiyuki ADACHI
Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, which requires no guard interval (GI) insertion, was proposed. However, the residual inter/intra-block interference (IBI) cannot be completely removed. In addition, for multicode direct sequence code division multiple access (DS-CDMA), the presence of residual inter-chip interference (ICI) after FDE distorts orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA that suppresses both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE)-FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.
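The joint MMSE-FDE and cancellation loop can be pictured on a toy cyclic block. This is a hypothetical noise-free 2-tap channel with plain block FDE; the overlap processing and spreading of the actual scheme are not reproduced.

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive DFT, adequate for a toy block length."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def mmse_weights(H, noise_var):
    """One-tap MMSE-FDE weight per frequency bin."""
    return [h.conjugate() / (abs(h) ** 2 + noise_var) for h in H]

# Toy setup: a BPSK block through a 2-tap channel (circular, noise-free).
data = [1, -1, 1, 1, -1, 1, -1, -1]
n = len(data)
h = [1.0, 0.3]
H = dft(h + [0.0] * (n - len(h)))
r = [sum(h[l] * data[(k - l) % n] for l in range(len(h))) for k in range(n)]
R = dft(r)

# Initial MMSE-FDE.
W = mmse_weights(H, noise_var=0.01)
est = [v.real for v in dft([W[k] * R[k] for k in range(n)], inverse=True)]

# Iterative cancellation: reconstruct the signal from hard decisions,
# subtract it to isolate the residual interference, and correct the
# estimate. Repeated a fixed number of times.
for _ in range(3):
    hard = [1.0 if e >= 0 else -1.0 for e in est]
    D = dft(hard)
    resid = [R[k] - H[k] * D[k] for k in range(n)]
    corr = dft([W[k] * resid[k] for k in range(n)], inverse=True)
    est = [hard[k] + corr[k].real for k in range(n)]

decided = [1 if e >= 0 else -1 for e in est]
```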
Augusto FORONDA Chikara OHTA Hisashi TAMAKI
Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum rate capacity as that of DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance the throughput and the fairness among the users, respectively. However, they are not throughput optimal, and fairness and throughput decrease when user queue lengths differ due to differing user channel qualities. Therefore, we propose two different scheduling algorithms: a throughput optimal scheduling algorithm (ZFBF-TO) and a reduced complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms have to produce the rate allocation and power allocation for the selected users based on a modified water filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
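The modified water-filling method itself is not detailed above; as a point of reference, a minimal sketch of classic water-filling power allocation (all names illustrative) looks like:

```python
def water_filling(gains, total_power, noise=1.0):
    """Classic water-filling: allocate p_i = max(0, mu - noise/g_i)
    so that the active powers sum to total_power."""
    # Sort inverse channel qualities and search for the water level mu,
    # dropping the worst channels until all selected powers are positive.
    inv = sorted(noise / g for g in gains)
    mu = 0.0
    for k in range(len(inv), 0, -1):
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:          # all k selected channels get positive power
            break
    return [max(0.0, mu - noise / g) for g in gains]
```

For example, two channels with gains 4 and 1 under unit noise and unit total power receive 0.875 and 0.125; with only half the power budget, the weaker channel is shut off entirely.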
Chirawat KOTCHASARN Poompat SAENGUDOMLERT
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users, each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions, which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. In contrast, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
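As a rough illustration of the minimax idea (not the paper's KKT derivation), bisection on a common target MSE for a hypothetical single-tap per-user model might look like:

```python
def minimax_mse_powers(gains, pmax, noise=1.0, iters=60):
    """Bisection on a common target MSE t for a toy single-tap model with
    MSE_k = noise / (noise + g_k * p_k); user k needs
    p_k = noise * (1 - t) / (g_k * t) to reach MSE t."""
    def power_needed(t, g):
        return noise * (1.0 - t) / (g * t)
    lo, hi = 1e-9, 1.0                    # achievable MSE lies in (0, 1]
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        if all(power_needed(t, g) <= p for g, p in zip(gains, pmax)):
            hi = t                        # feasible: try a smaller common MSE
        else:
            lo = t
    t = hi
    return t, [min(power_needed(t, g), p) for g, p in zip(gains, pmax)]
```

With gains [1, 4] and unit power caps, the weaker user transmits at full power while the stronger one needs only a quarter of its budget, mirroring the observation above that under the minimax criterion some users do not transmit at full power.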
This paper presents a real-time decision support system (RDSS) based on artificial intelligence (AI) for voltage collapse avoidance (VCA) in power supply networks. The RDSS scheme employs a fuzzy hyperrectangular composite neural network (FHRCNN) to carry out voltage risk identification (VRI). In the event that a threat to the security of the power supply network is detected, an evolutionary programming (EP)-based algorithm is triggered to determine the operational settings required to restore the power supply network to a secure condition. The effectiveness of the RDSS methodology is demonstrated through its application to the American Electric Power Provider System (AEP, 30-bus system) under various heavy load conditions and contingency scenarios. In general, the numerical results confirm the ability of the RDSS scheme to minimize the risk of voltage collapse in power supply networks. In other words, RDSS provides Power Provider Enterprises (PPEs) with a viable tool for performing on-line voltage risk assessment and power system security enhancement functions.
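The EP component can be pictured with a generic sketch: Gaussian mutation plus truncation selection on a stand-in objective. The RDSS encoding, the FHRCNN, and the actual fitness function are not reproduced here.

```python
import random

def evolutionary_programming(f, dim, pop_size=30, gens=200, seed=7):
    """Minimal EP loop: each parent produces one Gaussian-mutated
    offspring, and the best pop_size individuals survive (truncation
    selection for simplicity; classic EP uses tournaments)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for g in range(gens):
        sigma = 1.0 * (1.0 - g / gens) + 0.01    # simple annealed step size
        offspring = [[x + rng.gauss(0.0, sigma) for x in ind] for ind in pop]
        pop = sorted(pop + offspring, key=f)[:pop_size]
    best = pop[0]
    return best, f(best)
```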
Hong-Wei SUN Kwok-Yan LAM Dieter GOLLMANN Siu-Leung CHUNG Jian-Bin LI Jia-Guang SUN
In this paper, we present an efficient fingerprint classification algorithm, which is an essential component in many critical security application systems, e.g. systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates the various fast search algorithms in vector quantization (VQ) and their potential application to fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
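A single-level version of the pyramid-style rejection idea can be sketched as follows; the sum-based lower bound follows from the Cauchy-Schwarz inequality, and all names are illustrative.

```python
def nn_search_mean_reject(query, codebook):
    """Nearest-neighbor search with a mean-based rejection test:
    by Cauchy-Schwarz, (sum(x) - sum(c))**2 / d <= ||x - c||**2, so a
    codeword whose element sum differs too much from the query's can
    be skipped without computing the full distance. (A simplified,
    single-level form of pyramid-based search; real implementations
    precompute the codeword sums.)"""
    d = len(query)
    sq = sum(query)
    best_idx, best_d2 = -1, float("inf")
    for i, c in enumerate(codebook):
        lower = (sq - sum(c)) ** 2 / d
        if lower >= best_d2:
            continue                     # cheap rejection, no full distance
        d2 = sum((a - b) ** 2 for a, b in zip(query, c))
        if d2 < best_d2:
            best_idx, best_d2 = i, d2
    return best_idx, best_d2
```

Because the test is a true lower bound on the squared Euclidean distance, the search returns exactly the full-search result, only faster.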
Seog Chung SEO Dong-Guk HAN Hyung Chan KIM Seokhie HONG
In this paper, we revisit a generally accepted opinion: implementing an Elliptic Curve Cryptosystem (ECC) over GF(2^m) on sensor motes using a small word size is not appropriate because XOR multiplication over GF(2^m) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF(2^m) on sensor motes, their performance is not satisfactory enough for use in wireless sensor networks (WSNs). We have found that field multiplication over GF(2^m) involves a number of redundant memory accesses and that its inefficiency originates from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. With the proposed strategies, the running times of field multiplication and reduction over GF(2^163) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease the execution times of Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15-19%. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve, a TinyOS package supporting elliptic curve operations), which is, as far as we know, the first implementation of a Koblitz curve on sensor motes. Through comparisons with existing software implementations of ECC built in C or a hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size and supported services. Furthermore, we show that field multiplication over GF(2^m) can be faster than that over GF(p) on the 8-bit ATmega128 processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF(p). TinyECCK with sect163k1 can generate a signature and verify it in 1.37 and 2.32 secs on a MICAz mote with 13,748 bytes of ROM and 1,004 bytes of RAM.
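Field multiplication over GF(2^m) itself can be sketched with a plain shift-and-add loop. This toy version shows only the arithmetic; it does not model the word-level memory-access optimizations that are the paper's contribution.

```python
def gf2m_mul(a, b, poly, m):
    """Multiply a and b in GF(2^m), where `poly` is the reduction
    polynomial as a bitmask including the x^m term (e.g. for
    sect163k1: x^163 + x^7 + x^6 + x^3 + 1). Shift-and-add with
    interleaved reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a                # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:          # overflowed past x^(m-1): reduce
            a ^= poly
    return r
```

For instance, in GF(2^4) with the polynomial x^4 + x + 1, (x + 1)^2 = x^2 + 1, and the same routine works unchanged for the 163-bit sect163k1 field thanks to Python's arbitrary-precision integers.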
Hyun-Chool SHIN Hyoung-Nam KIM Woo-Jin SONG
In this letter we propose a simple adaptive algorithm which solves the unit-norm constrained optimization problem. Instead of the conventional normalization based on the full parameter norm, the proposed algorithm incorporates single-parameter normalization, which is computationally much simpler. The simulation results illustrate that the proposed algorithm performs as well as conventional ones while being computationally simpler.
In this paper, we propose a reduced-complexity radial basis function (RBF)-assisted decision-feedback equalizer (DFE)-based turbo equalization (TEQ) scheme using a novel extended fuzzy c-means (FCM) algorithm, which is not only comparable in performance to the Jacobian RBF DFE-based TEQ but also of lower complexity. Previous TEQ research has shown that the Jacobian RBF DFE TEQ considerably reduces the computational complexity while achieving performance similar to that of the logarithmic maximum a posteriori (Log-MAP) TEQ. In this study, the proposed reduced-complexity RBF DFE TEQ further reduces the computational complexity and is capable of attaining performance similar to that of the Jacobian RBF DFE TEQ in the context of both binary phase-shift keying (BPSK) modulation and 4 quadrature amplitude modulation (QAM). With this proposal, the materialization of the RBF-assisted TEQ scheme becomes more feasible.
Seok Gyu CHOI Young Hyun BAEK Jung Hun OH Min HAN Seok Ho BANG Jin-Koo RHEE
In this study, we have performed both a channel modification of the conventional MHEMT (Metamorphic High Electron Mobility Transistor) and a variation of the gate recess width to improve the breakdown and RF characteristics. The modified channel consists of In_xGa_(1-x)As and InP layers. Since InP has a lower impact ionization coefficient than In_0.53Ga_0.47As, we have adopted an InP-composite channel in the modified MHEMT. The gate recess width also affects both the breakdown and RF characteristics of a HEMT structure. Therefore, we have studied the breakdown and RF characteristics for various gate recess widths in the MHEMT. We have compared the breakdown characteristic of the InP-composite channel with that of the conventional MHEMT. It is shown that the on- and off-state breakdown voltages of the InP-composite channel MHEMT were increased by about 20% and 27%, respectively, compared with the conventional structure. The breakdown voltage of the InP-composite channel MHEMT also increased with increasing gate recess width. The f_T increased with decreasing gate recess width, whereas f_max increased with increasing gate recess width. We also extracted small-signal parameters; it was shown that G_d of the InP-composite channel MHEMT is decreased by about 30% compared with the conventional MHEMT. Therefore, the suppression of impact ionization in the InP-composite channel increases the breakdown voltage and decreases the output conductance.
Muhammad ZUBAIR Muhammad A.S. CHOUDHRY Aqdas NAVEED Ijaz Mansoor QURESHI
Since the computational complexity of the optimum maximum likelihood detector (OMD) grows exponentially with the number of users, suboptimum techniques have received significant attention. We propose particle swarm optimization (PSO) for multiuser detection (MUD) in an asynchronous multicarrier code division multiple access (MC-CDMA) system. The performance of the PSO-based MUD is near optimum, while its computational complexity is far less than that of the OMD. The performance of PSO-MUD is also shown to be better than that of genetic-algorithm-based MUD (GA-MUD) at practical SNRs.
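A minimal generic PSO loop (textbook hyperparameters and a stand-in objective rather than the MUD likelihood) might look like:

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal particle swarm optimization: each velocity is pulled
    toward the particle's personal best and the swarm's global best.
    Hyperparameters (w, c1, c2) are typical textbook values, not those
    of the referenced detector."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In a MUD setting, the continuous positions would be mapped to candidate bit vectors (soft decisions) and f would be the negative likelihood metric.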
Shunsuke YAMAKI Masahide ABE Masayuki KAWAMATA
This paper proposes a closed form solution to L2-sensitivity minimization of second-order state-space digital filters. Restricting ourselves to the second-order case of state-space digital filters, we can express the L2-sensitivity as a simple linear combination of exponential functions and formulate the L2-sensitivity minimization problem as a simple polynomial equation. As a result, the L2-sensitivity minimization problem can be converted into the problem of finding the solution to a fourth-degree polynomial equation with constant coefficients, which can be solved algebraically in closed form without iterative calculations.
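The paper's point is that such a quartic is solvable algebraically (Ferrari's method); as a purely numerical cross-check, the real roots of a quartic with given constant coefficients can be located by scanning for sign changes and bisecting:

```python
def real_roots_quartic(c, lo=-100.0, hi=100.0, steps=4000, tol=1e-10):
    """Real roots of c[0]*x^4 + c[1]*x^3 + c[2]*x^2 + c[3]*x + c[4] = 0,
    found by a sign-change scan plus bisection. A numerical stand-in
    for the closed-form solution discussed above."""
    def p(x):
        return (((c[0] * x + c[1]) * x + c[2]) * x + c[3]) * x + c[4]
    roots = []
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        fa, fb = p(a), p(b)
        if fa == 0.0:
            roots.append(a)          # grid point happens to be a root
            continue
        if fa * fb < 0:              # sign change: bisect the bracket
            while b - a > tol:
                m = 0.5 * (a + b)
                if p(a) * p(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots
```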
Muhammad ZUBAIR Muhammad A.S. CHOUDHRY Aqdas NAVEED Ijaz Mansoor QURESHI
The computation involved in multiuser detection (MUD) for multicarrier CDMA (MC-CDMA) based on the maximum likelihood (ML) principle grows exponentially with the number of users. Particle swarm optimization (PSO) with soft decisions is proposed to mitigate this problem. The computational complexity of PSO is comparable with that of the genetic algorithm (GA) but much less than that of the optimal ML detector, and yet its performance is much better than that of the GA.
Quoc Tuan TRAN Shinsuke HARA Atsushi HONDA Yuuta NAKAYA Ichirou IDA Yasuyuki OISHI
Phased array antennas are attractive in terms of low cost and power consumption. This paper proposes a controlling scheme based on a bisection method for phased array antennas employing phase shifters with slow switching speed, which is typical for Micro Electro Mechanical Systems (MEMS) switches. Computer simulation results, assuming the IEEE 802.11a Wireless Local Area Network (WLAN) standard, show that the proposed scheme has good gain enhancement capability in multipath fading channels.
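A bisection-style interval-reduction search over a single phase shifter, assuming a unimodal received-power measurement, can be sketched as below. The power model is a stand-in for a hardware RSSI reading, and the parameter values are illustrative, not the paper's.

```python
import math

def measured_power(phase, optimum=1.234):
    """Stand-in for a hardware power measurement: maximal when the
    phase matches the (unknown to the search) optimum."""
    return 0.5 * (1.0 + math.cos(phase - optimum))

def phase_search(power, lo=0.0, hi=3.0, iters=30):
    """Interval-reduction search for the phase maximizing power(),
    assuming power is unimodal on [lo, hi]. Each iteration needs only
    two settings of the phase shifter, which suits devices with slow
    switching such as MEMS phase shifters."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if power(m1) < power(m2):
            lo = m1                 # maximum cannot lie in [lo, m1]
        else:
            hi = m2                 # maximum cannot lie in [m2, hi]
    return 0.5 * (lo + hi)
```

The interval shrinks by a factor of 2/3 per iteration, so the number of slow phase updates grows only logarithmically with the required resolution.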
Young-Hwan YOU Sang-Tae KIM Kyung-Taek LEE Hyoung-Kyu SONG
In this letter, a robust pilot-assisted synchronization scheme is proposed for estimation of the residual frequency offset (RFO) in an OFDM-based Digital Radio Mondiale (DRM) system. The RFO estimator uses the gain reference pilots mainly reserved for channel tracking in the DRM standard. To demonstrate the efficiency of the proposed RFO estimator, comparisons are made with the conventional RFO estimator using the frequency reference pilots in terms of mean square error (MSE) performance.
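The basic idea of estimating an RFO from the common phase drift of known pilots between consecutive OFDM symbols can be sketched as follows; this is a toy version that does not reproduce the DRM gain-reference layout or any weighting.

```python
import cmath
import math

def estimate_rfo(pilots_prev, pilots_curr, symbol_duration):
    """Estimate the residual frequency offset (Hz) from the common
    phase rotation of known pilot cells between two consecutive
    symbols: sum the pairwise correlations and take the angle."""
    corr = sum(c * p.conjugate() for p, c in zip(pilots_prev, pilots_curr))
    return cmath.phase(corr) / (2.0 * math.pi * symbol_duration)
```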
Rong RAN JangHoon YANG DongKu KIM
In this letter, a simple but effective antenna selection algorithm for orthogonal space-time block codes with a linear complex precoder (OSTBC-LCP) is proposed and compared with two conventional algorithms in temporally and spatially correlated fading channels. The proposed algorithm, which minimizes the pairwise error probability (MinPEP) with an error codebook (EC) constructed by error vector quantization, is shown to provide nearly the same performance as MinPEP based on all possible error vectors, while keeping the complexity close to that of the antenna selection algorithm based on the maximum power criterion (Maxpower).
Erwan LE MALECOT Masayoshi KOHARA Yoshiaki HORI Kouichi SAKURAI
With the multiplication of attacks against computer networks, system administrators are required to carefully monitor the traffic exchanged by the networks they manage. However, this monitoring task is increasingly laborious because of the growing amount of data to analyze, a trend that will intensify with the explosion of the number of devices connected to computer networks and the global rise in available network bandwidth. System administrators therefore rely heavily on automated tools to assist them and simplify the analysis of the data. Yet these tools provide limited support and, most of the time, require highly skilled operators. Recently, some research teams have started to study the application of visualization techniques to the analysis of network traffic data. We believe that this approach can also help system administrators deal with the large amount of data they have to process. In this paper, we introduce a tool for network traffic monitoring using visualization techniques that we developed to assist the system administrators of our corporate network. We explain how we designed the tool and some of the choices we made regarding the visualization techniques used. The resulting tool offers two linked representations of the network traffic and activity, one in 2D and the other in 3D. As 2D and 3D visualization techniques have different assets, we combined them in our tool to take advantage of their complementarity. Finally, we tested our tool in order to evaluate the accuracy of our approach.
Jun YAO Shinobu MIWA Hajime SHIMADA Shinji TOMITA
Recently, a method called pipeline stage unification (PSU) has been proposed to reduce the energy consumption of mobile processors by deactivating and bypassing some of the pipeline registers, thus adopting a shallower pipeline. It is designed to be an energy-efficient method, especially for processors under future process technologies. In this paper, we present a mechanism for the PSU controller which can dynamically predict a suitable configuration based on program phase detection. Our results show that the designed predictor can achieve a PSU degree prediction accuracy of 84.0%, averaged over the SPEC CPU2000 integer benchmarks. With this dynamic control mechanism, we can obtain an 11.4% Energy-Delay-Product (EDP) reduction in a processor that adopts a PSU pipeline, compared to the baseline processor, even after the application of complex clock gating.
Sooyong CHOI Jong-Moon CHUNG Wun-Cheol JEONG
A new blind adaptive equalization method for constant modulus signals based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer is presented. We consider the approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve the performance of a linear equalizer using the conventional constant modulus algorithm (CMA). Negentropy includes higher order statistical information and its minimization provides improved convergence, performance, and accuracy compared to traditional methods, such as the CMA, in terms of the bit error rate (BER). Also, the proposed equalizer shows faster convergence characteristics than the CMA equalizer and is more robust to nonlinear distortion than the CMA equalizer.
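The baseline CMA update that the proposed negentropy criterion is compared against can be sketched for real-valued BPSK (step size, tap count and the test channel are illustrative):

```python
import random

def cma_equalize(received, num_taps=5, mu=0.01, R2=1.0):
    """Baseline CMA(2,2): each update pushes |y|^2 toward the constant
    modulus R2. Real-valued version for BPSK."""
    w = [0.0] * num_taps
    w[num_taps // 2] = 1.0                           # center-spike initialization
    for n in range(num_taps - 1, len(received)):
        x = received[n - num_taps + 1:n + 1][::-1]   # regressor, newest first
        y = sum(wi * xi for wi, xi in zip(w, x))     # equalizer output
        e = y * (y * y - R2)                         # gradient of (y^2 - R2)^2 / 4
        w = [wi - mu * e * xi for wi, xi in zip(w, x)]
    return w
```

Run on BPSK data through a mild 2-tap channel, the adapted taps reduce the dispersion E[(y^2 - R2)^2] relative to the initial center-spike equalizer.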
Ken TANAKA Hiromichi TOMEBA Fumiyuki ADACHI
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
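The orthogonal time-domain spreading component can be illustrated with Walsh-Hadamard codes; this is a toy flat-channel example, with the OFDM modulation and the FDE omitted.

```python
def walsh(n):
    """Length-2**n Walsh-Hadamard codes via the Sylvester construction."""
    H = [[1]]
    for _ in range(n):
        H = ([row + row for row in H]
             + [row + [-x for x in row] for row in H])
    return H

codes = walsh(2)                     # four orthogonal length-4 codes
s1, s2 = 1, -1                       # one data symbol per user
# Each user's symbol is spread in time by its own code; the chips superpose.
chips = [s1 * c1 + s2 * c2 for c1, c2 in zip(codes[1], codes[2])]
# Despreading (correlating with the user's own code) separates the users
# exactly, as long as the channel does not distort the chips.
r1 = sum(ch * c for ch, c in zip(chips, codes[1])) / 4
r2 = sum(ch * c for ch, c in zip(chips, codes[2])) / 4
```

It is precisely this orthogonality that residual interference after equalization distorts in a frequency-selective channel, which motivates the FDE design above.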
Jang-Won LEE Mung CHIANG A. Robert CALDERBANK
We use the network utility maximization (NUM) framework to create an efficient and fair medium access control (MAC) protocol for wireless networks. By adjusting the parameters in the utility objective functions of NUM problems, we control the tradeoff between efficiency and fairness of radio resource allocation through a rigorous and systematic design. In this paper, we propose a scheduling-based MAC protocol. Since it provides an upper bound on the achievable performance, it establishes optimality benchmarks for comparison with other algorithms in related work.
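For a single shared resource, the efficiency/fairness tradeoff controlled by the utility parameters can be illustrated with the closed-form alpha-fair allocation. This is a generic NUM sketch, not the proposed MAC protocol.

```python
def alpha_fair_allocation(weights, capacity, alpha):
    """KKT solution of max sum_i w_i * U_alpha(x_i) s.t. sum_i x_i <= capacity,
    where U_alpha(x) = x**(1 - alpha) / (1 - alpha) (log x when alpha = 1).
    Stationarity w_i * x_i**(-alpha) = lambda gives x_i proportional to
    w_i ** (1 / alpha); alpha = 1 is proportional fairness, and larger
    alpha moves the allocation toward max-min fairness."""
    scaled = [w ** (1.0 / alpha) for w in weights]
    total = sum(scaled)
    return [capacity * s / total for s in scaled]
```

With alpha = 1 the allocation is proportional to the weights; as alpha grows, the shares equalize regardless of the weights, trading total utility for fairness.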