Ying MA Guangchun LUO Hao CHEN
A kernel-based asymmetric learning method is developed for software defect prediction. Because it builds on kernel principal component analysis, the method improves predictor performance on class-imbalanced data. Experiments validate its effectiveness.
Fanxin ZENG Xiaoping ZENG Zhenyu ZHANG Guixin XUAN
Based on quadriphase perfect sequences and their cyclically shifted versions, three families of almost perfect 16-QAM sequences are presented. When one of the two chosen time shifts equals half a period of the quadriphase sequence employed and the other is zero, two of the three proposed sequence families have the property that all but one of their out-of-phase autocorrelation function values vanish. For the other time shifts, all but two or four of the nontrivial autocorrelation function values in the three families are zero. In addition, two classes of periodic complementary sequence (PCS) pairs over the 16-QAM constellation, whose autocorrelation is similar to that of conventional PCS pairs, are constructed as well.
This letter presents a criterion for selecting a transmit antenna subset when ZF detectors followed by Rake combiners are employed for spatial multiplexing (SM) ultra-wideband (UWB) multiple-input multiple-output (MIMO) systems. The presented criterion selects the subset with the largest minimum post-processing signal-to-interference-plus-noise ratio over the multiplexed streams, which is obtained on the basis of QR decomposition. Simulation results show that the proposed antenna selection algorithm considerably improves the BER performance of SM UWB MIMO systems when the number of multipath diversity branches is not large, and thus offers diversity advantages on a log-normal multipath fading channel.
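As an illustration of the selection criterion described above, the following sketch picks the transmit-antenna subset that maximizes the minimum ZF post-processing SNR, computed directly from the diagonal of (HᵀH)⁻¹ for a toy real-valued channel (the letter works via QR decomposition with complex UWB channels; all numbers here are hypothetical):

```python
import itertools

# Toy real-valued channel: 2 receive antennas x 3 transmit antennas.
# Column j is the channel from transmit antenna j (hypothetical values).
H = [[0.9, 0.2, 1.1],
     [0.3, 1.0, 0.4]]

def min_post_snr(cols, snr=10.0):
    """Minimum ZF post-processing SNR over streams for a 2-antenna subset.
    For ZF, stream i's post-processing SNR is snr / [(H^T H)^{-1}]_{ii}."""
    h = [[H[r][c] for c in cols] for r in range(2)]
    # A = H^T H for the 2-column submatrix (2x2, symmetric).
    a = sum(h[r][0] * h[r][0] for r in range(2))
    b = sum(h[r][0] * h[r][1] for r in range(2))
    d = sum(h[r][1] * h[r][1] for r in range(2))
    det = a * d - b * b
    inv_diag = (d / det, a / det)      # diagonal of (H^T H)^{-1}
    return min(snr / v for v in inv_diag)

# Pick the 2-antenna subset with the largest minimum post-processing SNR.
best = max(itertools.combinations(range(3), 2), key=min_post_snr)
print("selected transmit antennas:", best)
```

Subsets whose columns are nearly collinear produce a near-singular HᵀH and hence a tiny minimum post-processing SNR, so the criterion avoids them automatically.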
Chi GUO Li-na WANG Xiao-ying ZHANG
Network structure has a great impact on both hazard spread and network immunization. The vulnerabilities of network nodes are correlated with one another, either assortatively or disassortatively. First, an algorithm for vulnerability relevance clustering is proposed to show that the vulnerability community phenomenon clearly exists in complex networks. On this basis, a new indicator called network “hyper-betweenness” is given for evaluating the vulnerability of a network node. Network hyper-betweenness better reflects the importance of a node in hazard spread. Finally, the dynamic stochastic process of hazard spread is simulated with a Monte-Carlo sampling method, and a two-player, non-cooperative, constant-sum game model is designed to obtain an equilibrium network immunization strategy.
In multimedia communication, due to the limited computational capability of the personal information machine, a coder with low computational complexity is needed to integrate services from several media sources. This paper presents two efficient candidate schemes to simplify the most computationally demanding operation, the excitation codebook search procedure. For fast adaptive codebook search, we propose an algorithm that uses residual signals to predict the candidate gain-vectors of the adaptive codebook. For the fixed codebook, we propose a fast search algorithm that uses an energy function to predict the candidate pulses, and we redesign the codebook structure into a twin multi-track position architecture. Overall simulation results indicate that our proposed methods reduce total computational complexity by about 67% relative to the original G.723.1 encoder, while the average perceptual evaluation of speech quality (PESQ) score degrades only slightly, by 0.049. Objective and subjective evaluations verify that the proposed schemes provide speech quality comparable to that of the original coder.
Ryo NISHIMAKI Eiichiro FUJISAKI Keisuke TANAKA
This paper presents a new non-interactive string-commitment scheme that achieves universally composable security. Security is proven under the decisional composite residuosity (DCR) assumption (or the decisional Diffie-Hellman (DDH) assumption) in the common reference string (CRS) model. Universal composability (UC) is a very strong security notion: if cryptographic protocols are proven secure in the UC framework, then they remain secure even when composed with arbitrary protocols and when polynomially many copies of the protocols are run concurrently. Many UC commitment schemes in the CRS model have been proposed, but they are either interactive commitment schemes or bit-commitment (not string-commitment) schemes. We note, however, that although our scheme is the first non-interactive UC string-commitment scheme, its CRS is not reusable. We use an extension of the all-but-one trapdoor functions (ABO-TDFs) proposed by Peikert and Waters at STOC 2008 as an essential building block. Our main idea is to extend (originally deterministic) ABO-TDFs to probabilistic ones by using the homomorphic properties of their function indices. The function indices of ABO-TDFs consist of ciphertexts of homomorphic encryption schemes (such as ElGamal and Damgård-Jurik encryption). Therefore we can re-randomize the output of ABO-TDFs by re-randomizing the ciphertexts. This is a new application of ABO-TDFs.
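The re-randomization idea rests on the homomorphic property of the underlying encryption: multiplying a ciphertext by a fresh encryption of 1 changes its randomness but not its plaintext. A minimal sketch with textbook ElGamal over a toy group (insecure demo parameters, not the paper's instantiation):

```python
import random

# Toy ElGamal over the order-11 subgroup of Z_23^* (insecure demo parameters).
p, q, g = 23, 11, 4

x = 5                 # secret key (hypothetical)
h = pow(g, x, p)      # public key h = g^x

def encrypt(m, r=None):
    r = random.randrange(1, q) if r is None else r
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(c):
    """Multiply by a fresh encryption of 1: same plaintext, new randomness."""
    c1, c2 = c
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p) % p, c2 * pow(h, s, p) % p)

def decrypt(c):
    c1, c2 = c
    # m = c2 / c1^x; the inverse is computed via Fermat's little theorem.
    return c2 * pow(pow(c1, x, p), p - 2, p) % p

ct = encrypt(18)              # 18 = g^3 lies in the subgroup
ct2 = rerandomize(ct)
print(decrypt(ct), decrypt(ct2))
```

Both ciphertexts decrypt to the same message, yet are distributed like independent fresh encryptions; this is the property the scheme exploits to make ABO-TDF outputs probabilistic.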
Ikki FUJIWARA Kento AIDA Isao ONO
This paper proposes a combinatorial auction-based marketplace mechanism for cloud computing services, which allows users to reserve arbitrary combinations of services at requested timeslots, prices, and quality of service. The proposed mechanism helps enterprise users build workflow applications in a cloud computing environment, specifically on the platform-as-a-service, where users need to compose multiple types of services at different timeslots. The proposed marketplace mechanism consists of a forward market for advance reservation and a spot market for immediate allocation of services. Each market employs mixed integer programming to enforce a Pareto-optimal allocation with maximized social welfare, as well as a double-sided auction design to encourage both users and providers to compete for buying and selling the services. The evaluation results show that (1) the proposed forward/combinatorial mechanism outperforms other non-combinatorial and/or non-reservation (spot) mechanisms in both user-centric rationality and global efficiency, and (2) running both a forward market and a spot market improves utilization without disturbing advance reservations, depending on the provider's policy.
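At the heart of such a combinatorial auction is winner determination: choosing a conflict-free set of bundle bids that maximizes social welfare. A toy brute-force sketch with hypothetical bids over (service, timeslot) units (a MIP solver replaces the exhaustive search at realistic scale):

```python
import itertools

# Hypothetical bids: each requests a bundle of (service, timeslot) units
# at a bundle price. Each unit has capacity one here.
bids = [
    ({("vm", 1), ("db", 1)}, 9),
    ({("vm", 1)}, 5),
    ({("db", 1), ("db", 2)}, 7),
    ({("vm", 2), ("db", 2)}, 6),
]

def welfare(chosen):
    """Total price of a bid set, or -1 if two bids need the same unit."""
    used, total = set(), 0
    for bundle, price in chosen:
        if bundle & used:          # resource conflict -> infeasible
            return -1
        used |= bundle
        total += price
    return total

# Exhaustive winner determination over all bid subsets.
best = max(
    (combo for r in range(len(bids) + 1)
           for combo in itertools.combinations(bids, r)),
    key=welfare,
)
print("winning welfare:", welfare(best))
```

Here the first and fourth bids win (welfare 15): accepting the cheaper single-unit bids would block the higher-valued bundles, which is exactly the combinatorial effect the mechanism's MIP formulation captures.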
Rui MIN Yating HU Yiming PI Zongjie CAO
Tomo-SAR imaging with sparse baselines can be formulated as a sparse signal recovery problem, which suggests the use of the Compressive Sensing (CS) method. In this paper, a novel Tomo-SAR imaging approach based on Sparse Bayesian Learning (SBL) is presented to obtain super-resolution in the elevation direction; it is validated by simulation results.
Kengo YAGYU Takeshi NAKAMORI Hiroyuki ISHII Mikio IWAMURA Nobuhiko MIKI Takahiro ASAI Junichiro HAGIWARA
In Long-Term Evolution-Advanced (LTE-A), which is currently being standardized in the 3rd Generation Partnership Project (3GPP), carrier aggregation (CA) was introduced as a main feature for bandwidth extension while maintaining backward compatibility with LTE Release 8 (Rel. 8). In the CA mode of operation, since two or more component carriers (CCs), each of which is compatible with LTE Rel. 8, are aggregated, mobility management is needed for CCs, such as inter/intra-frequency handover, CC addition, and CC removal, to provide sufficient coverage and better overall signal quality. Therefore, the signaling overhead for Radio Resource Control (RRC) reconfiguration for the mobility management of CCs in LTE-A is expected to be larger than that in LTE Rel. 8. In addition, CA allows the aggregation of cells with different types of coverage, so the signaling overhead may depend on the coverage of each CC assumed in a CA deployment scenario. Furthermore, especially in a picocell-overlaid scenario, the amount of signaling overhead may differ according to whether the aggregation of CCs between a macrocell and a picocell, i.e., transmission and reception from multiple sites, is allowed. Therefore, this paper investigates the CC control overhead under several CC management policies in several CA deployment scenarios, including a scenario with overlaid picocells. Simulation results show that the control overhead is almost the same irrespective of the management policy when almost the same coverage is provided for the CCs. In addition, it is shown that the increase in the control overhead is not significant even in a CA deployment scenario with overlaid picocells. We also show that the amount of signaling overhead in a picocell-overlaid scenario with CA between a macrocell and a picocell is almost twice that without such CA.
Zhenghao ZHANG Husheng LI Changxing PEI Qi ZENG
There are two major challenges in wide-band spectrum sensing in a heterogeneous spectrum environment. One is spectrum acquisition in the wide-band scenario under limited sampling capability; the other is how to collaborate in a heterogeneous spectrum environment. Compressed spectrum sensing is a promising technology for wide-band signal acquisition, but it requires effective collaboration to combat noise. However, most collaboration methods assume that all the secondary users share the same occupancy of primary users, which is invalid in a heterogeneous spectrum environment where secondary users at different locations may be affected by different primary users. In this paper, we propose an automatic clustering collaborative compressed spectrum sensing (ACCSS) algorithm. A hierarchical probabilistic model is proposed to represent the compressed reconstruction procedure, and a Dirichlet process mixture model is introduced to cluster the compressed measurements. Cluster membership estimation and compressed spectrum reconstruction are jointly implemented in the fusion center. Based on the probabilistic model, the compressed measurements from the same cluster can be effectively fused and used to jointly reconstruct the corresponding primary user's spectrum signal. Consequently, the spectrum occupancy status of each primary user can be attained. Numerical simulation results demonstrate that the proposed ACCSS algorithm can effectively estimate the cluster membership of each secondary user and improve compressed spectrum sensing performance under low signal-to-noise ratio.
Pulung WASKITO Shinobu MIWA Yasue MITSUKURA Hironori NAKAJO
In off-line analysis, the demand for high-precision signal processing has introduced a method called Empirical Mode Decomposition (EMD), which is used for analyzing complex data sets. Unfortunately, EMD is highly compute-intensive. In this paper, we present a parallel implementation of Empirical Mode Decomposition on a GPU. We propose the use of a “partial+total” switching method to increase performance while maintaining precision. We also focus on reducing the computational complexity of the above method from O(N) on a single CPU to O(N/P log(N)) on a GPU. Evaluation results show that our single-GPU implementation using a Tesla C2050 (Fermi architecture) achieves a 29.9x speedup partially, and an 11.8x speedup totally, compared with a single Intel dual-core CPU.
Masayuki CHIKAMATSU Yoshinori HORII Ming LU Yuji YOSHIDA Reiko AZUMI Kiyoshi YASE
We fabricated solution-processed organic complementary inverters based on α,ω-bis(2-hexyldecyl)sexithiophene (BHD6T) for p-channel and C60-fused N-methylpyrrolidine-meta-dodecyl phenyl (C60MC12) for n-channel. The BHD6T and C60MC12 thin-film transistors showed high field-effect mobilities of 0.035 and 0.057 cm2/Vs, respectively. The complementary inverter with a supply voltage of 50 V exhibited inverting voltages of 26.8 V for forward and 27.0 V for backward sweeps and a high gain of 76.
Sho ENDO Jun SONODA Motoyuki SATO Takafumi AOKI
The finite difference time domain (FDTD) method has been accelerated on the Cell Broadband Engine (Cell B.E.). However, for large-scale analyses, speedup is limited by the bandwidth of the main memory. In this paper, we propose a novel algorithm and implement FDTD with it. We compared the novel algorithm with results obtained using region segmentation, demonstrating that the proposed algorithm yields shorter calculation times than region segmentation.
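For readers unfamiliar with the method, a minimal 1-D FDTD loop illustrates the interleaved stencil sweeps whose memory traffic becomes the bottleneck on large grids (this is an illustration only, not the paper's proposed algorithm):

```python
import math

# Minimal 1-D FDTD: two interleaved stencil sweeps over the E and H fields.
# Every time step streams both arrays through memory, which is why large
# grids become memory-bandwidth bound. Courant number 0.5 keeps it stable.
N, STEPS = 200, 300
ez = [0.0] * N   # electric field
hy = [0.0] * N   # magnetic field
for t in range(STEPS):
    for i in range(N - 1):                 # H-field update (sweep 1)
        hy[i] += 0.5 * (ez[i + 1] - ez[i])
    ez[N // 2] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft Gaussian source
    for i in range(1, N):                  # E-field update (sweep 2)
        ez[i] += 0.5 * (hy[i] - hy[i - 1])
print("peak |Ez|:", max(abs(v) for v in ez))
```

Each grid point is touched twice per step but reused almost never within a step, so arithmetic intensity is low; reorganizing this access pattern is the kind of problem the paper's algorithm targets.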
Kenichi OHHATA Hiroki DATE Mai ARITA
We propose a capacitive averaging technique, applied to a double-tail latched comparator without a preamplifier, as an offset reduction technique. Capacitive averaging can be introduced by treating the first stage of the double-tail latched comparator as a capacitively loaded amplifier. This makes it possible to reduce the offset voltage while preventing an increase in power dissipation. A positive feedback technique is also used for the first stage, which maximizes the effectiveness of the capacitive averaging. The capacitive averaging mechanism and the relationship between offset reduction and amplifier linearity are discussed in detail. Simulation results for a 90-nm CMOS process show that the proposed technique can reduce the offset voltage by a factor of 3.5 (to 3 mV) at a power dissipation of only 45 µW.
Junichi OHMURA Takefumi MIYOSHI Hidetsugu IRIE Tsutomu YOSHINAGA
In this paper, we propose an approach to enhancing the performance of the Linpack benchmark on a GPU-accelerated PC cluster connected via relatively slow inter-node links. For one node with a quad-core Intel Xeon W3520 processor and an NVIDIA Tesla C1060 GPU card, we implement a CPU–GPU parallel double-precision general matrix–matrix multiplication (dgemm) operation, and achieve a performance improvement of 34% compared with the GPU-only case and 64% compared with the CPU-only case. For an entire 16-node cluster, each node of which is as above and is connected by two gigabit Ethernet links, we use a computation–communication overlap scheme with GPU acceleration for the Linpack benchmark, and achieve a performance improvement of 28% compared with the GPU-accelerated high-performance Linpack benchmark (HPL) without overlapping. Our solution overlaps the main inter-node communication and the data transfer to GPU device memory with the main computation task on the CPU cores. These overlaps exploit the multi-core processors used in almost all of today's high-performance computers. In particular, besides dedicating a CPU core to communication tasks, we simultaneously use the other CPU cores and the GPU for computation tasks. To enable overlap between inter-node communication and computation, we eliminate their close dependence by breaking the main computation task into smaller tasks and rescheduling them. Based on this scheme, in which part of the CPU computation power is simultaneously used for tasks other than computation, we experimentally find the optimal computation ratio for the CPUs; this ratio differs from that for the single-node parallel dgemm operation.
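The overlap principle can be sketched in a few lines: one worker handles the (simulated) inter-node communication while the remaining workers compute the independent chunks in parallel. This is a toy illustration of the scheduling idea, not the authors' HPL implementation:

```python
import concurrent.futures
import time

# Toy overlap scheme: a "communication" task (simulated by a delay) runs
# concurrently with computation on independent chunks, mirroring how the
# paper hides inter-node transfers behind CPU/GPU computation.
def communicate():
    time.sleep(0.05)           # stand-in for an inter-node panel transfer
    return "panel received"

def compute(chunk):
    return sum(i * i for i in chunk)

chunks = [range(k * 1000, (k + 1) * 1000) for k in range(4)]
with concurrent.futures.ThreadPoolExecutor() as pool:
    comm = pool.submit(communicate)        # overlapped with the map below
    results = list(pool.map(compute, chunks))
    status = comm.result()                 # join the communication task
print(status, len(results))
```

Splitting the computation into chunks is what removes the tight dependence between communication and computation: no chunk needs the transferred panel, so neither side waits for the other.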
Downlink multi-point transmission is studied in this paper as a capacity enhancement method for users at the cell edge and for operators. It is based on a so-called aggregate base station architecture using distributed antennas and cloud computing. Its advantages are analyzed both architecturally and by simulation. The simulation results show that the capacity may be affected by the number of cells belonging to an aggregate base station and by the parameters related to its operation.
Omur OZEL Elif UYSAL-BIYIKOGLU Tolga GIRICI
A finite buffer shared by multiple packet queues is considered. Partitioning the buffer to maximize total throughput is formulated as a resource allocation problem, and the solution is shown to be achieved by a greedy incremental algorithm in polynomial time. The optimal buffer allocation strategy is applied to different models of a wireless downlink. First, a set of parallel M/M/1/m_i queues, corresponding to a downlink with orthogonal channels, is considered. It is verified that at high load, optimal buffer partitioning can boost throughput significantly with respect to complete sharing of the buffer. Next, the combined optimal buffer allocation and channel assignment problem is shown to be separable in an outage scenario. Motivated by this observation, buffer allocation is considered in a system where users must be multiplexed and scheduled based on channel state. It is observed that under finite buffers in the high-load regime, scheduling with respect to channel state alone, over a partitioned buffer, achieves throughput comparable to combined channel- and queue-aware scheduling.
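A minimal sketch of the greedy incremental allocation for parallel M/M/1/m queues: each additional buffer slot goes to the queue whose throughput gains the most from it (the per-queue arrival rates below are hypothetical):

```python
# Greedy incremental buffer partitioning across parallel M/M/1/m queues.
# Throughput is concave in the buffer size, so greedy per-slot allocation
# reaches the optimum for this resource allocation problem.

def throughput(lam, mu, m):
    """Effective throughput lam*(1 - P_block) of an M/M/1/m queue."""
    if m == 0:
        return 0.0
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_block = 1.0 / (m + 1)
    else:
        p_block = (1 - rho) * rho**m / (1 - rho**(m + 1))
    return lam * (1 - p_block)

def greedy_partition(lams, mu, budget):
    """Assign each of `budget` slots to the queue with the largest gain."""
    alloc = [0] * len(lams)
    for _ in range(budget):
        gains = [throughput(l, mu, a + 1) - throughput(l, mu, a)
                 for l, a in zip(lams, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

alloc = greedy_partition([0.9, 0.5, 0.2], mu=1.0, budget=6)
print(alloc)   # more heavily loaded queues receive more buffer slots
```

With these rates the allocation comes out as [3, 2, 1]: the queue closest to saturation benefits most from extra buffering, which is the effect that makes partitioning outperform complete sharing at high load.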
Tetsushi ABE Yoshihisa KISHIYAMA Yoshikazu KAKURA Daichi IMAMURA
This paper presents an overview of radio interface technologies for cooperative transmission in 3GPP LTE-Advanced, i.e., coordinated multi-point (CoMP) transmission, enhanced inter-cell interference coordination (eICIC) for heterogeneous deployments, and relay transmission techniques. This paper covers not only the technical components in the 3GPP specifications that have already been released, but also those that were discussed in the Study Item phase of LTE-Advanced, and those that are currently being discussed in 3GPP for potential specification in future LTE releases.
Morihiro HAYASHIDA Tatsuya AKUTSU
For measuring the similarity of biological sequences and structures such as DNA sequences, protein sequences, and tertiary structures, several compression-based methods have been developed. However, they are based on compression algorithms only for sequential data. Protein structures, for instance, can be represented by two-dimensional distance matrices, so image compression is expected to be useful for measuring their similarity because image compression algorithms compress data both horizontally and vertically. This paper proposes a series of methods for measuring the similarity of protein structures. In these methods, an original protein structure is transformed into a distance matrix, which is regarded as a two-dimensional image. The similarity of two protein structures is then measured by a kind of compression ratio of the concatenated image. We employed several image compression algorithms: JPEG, GIF, PNG, IFS, and SPC. Since SPC often gave better results than the other image compression methods, and it is simple and easy to modify, we modified SPC to obtain MSPC. We applied the proposed methods to the clustering of protein structures and performed Receiver Operating Characteristic (ROC) analysis. The results of computational experiments suggest that MSPC has the best performance among existing compression-based methods. We also present some theoretical results on the time complexity and Kolmogorov complexity of image compression-based protein structure comparison.
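The compression-ratio idea can be approximated with a general-purpose compressor: if two distance matrices are similar, their concatenation compresses almost as well as either alone. A rough NCD-style sketch using zlib instead of an image codec (toy matrices, not real protein data):

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes at maximum compression level."""
    return len(zlib.compress(data, 9))

def matrix_bytes(m):
    """Serialize a distance matrix row by row (values clipped to 0-255)."""
    return bytes(min(255, int(v)) for row in m for v in row)

def ncd(m1, m2):
    """Normalized compression distance between two distance matrices."""
    a, b = matrix_bytes(m1), matrix_bytes(m2)
    return (c(a + b) - min(c(a), c(b))) / max(c(a), c(b))

# Toy matrices: m_b is a lightly perturbed copy of m_a; m_c is unrelated.
m_a = [[abs(i - j) for j in range(16)] for i in range(16)]
m_b = [[abs(i - j) + (1 if (i + j) % 7 == 0 else 0) for j in range(16)]
       for i in range(16)]
m_c = [[(i * 37 + j * 91) % 256 for j in range(16)] for i in range(16)]

print(ncd(m_a, m_b), ncd(m_a, m_c))
```

The similar pair yields a much smaller distance than the unrelated pair, because the compressor can describe the second matrix largely in terms of the first; the paper's SPC/MSPC codecs exploit the same redundancy in both image dimensions.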
Raul FERNANDEZ-GARCIA Ignacio GIL Alexandre BOYER Sonia BENDHIA Bertrand VRIGNON
A simple analytical model to predict the DC MOSFET behavior under electromagnetic interference (EMI) is presented. The model is able to describe MOSFET performance in the linear and saturation regions under an EMI disturbance applied to the gate. It consists of a single simple equivalent circuit, based on a voltage-dependent current source and a reduced number of parameters, which can accurately predict the drift in the drain current due to the EMI source. The analytical approach has been validated by means of electrical simulation and measurements, and can easily be introduced into circuit simulators. The proposed modeling technique, combined with the nth-power-law model of the MOSFET without EMI, significantly improves accuracy in comparison with the nth-power law applied directly to a MOSFET under EMI.
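The DC drift predicted by such models stems from the device nonlinearity rectifying the EMI. As a hedged illustration with a plain square-law model (not the letter's equivalent circuit), a sinusoidal disturbance of amplitude A on the gate shifts the average drain current by K·A²/2:

```python
import math

# Illustrative square-law model (not the letter's model): in saturation,
# I_D = K * (V_GS - V_T)^2. A sinusoidal EMI term on the gate averages to a
# DC drift of K*A^2/2 because the square law rectifies the disturbance.
K, VT, VGS, A = 2e-4, 0.5, 1.2, 0.1   # hypothetical device/EMI parameters

def i_d(vgs):
    return K * max(0.0, vgs - VT) ** 2

# Numerical time average of the drain current over one EMI period.
N = 10000
avg = sum(i_d(VGS + A * math.sin(2 * math.pi * n / N)) for n in range(N)) / N

drift = avg - i_d(VGS)
print(drift, K * A**2 / 2)   # numerical drift matches the analytic K*A^2/2
```

With the chosen bias the transistor never leaves saturation, so the drift is exactly the second-order term; larger disturbances that push the device across regions are where a dedicated EMI model such as the letter's becomes necessary.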