Ryoichi ISHIHARA Jin ZHANG Miki TRIFUNOVIC Jaber DERAKHSHANDEH Negin GOLSHANI Daniel M. R. TAJARI MOFRAD Tao CHEN Kees BEENAKKER Tatsuya SHIMODA
We review our recent achievements in monolithic 3D-ICs and flexible electronics based on single-grain (SG) Si TFTs that are fabricated inside a single grain with a low-temperature process. Using pulsed-laser crystallization and submicron-sized cavities made in the substrate, an amorphous-Si precursor film was converted into poly-Si with grains formed at predetermined positions. Using this method, called the µ-Czochralski process, and an LPCVD a-Si precursor film, two SG Si TFT layers with grains 6 µm in diameter were vertically stacked at a maximum process temperature of 550°C. The mobilities for electrons and holes were 600 cm2/Vs and 200 cm2/Vs, respectively. As a demonstration of monolithic 3D-ICs, the two SG-TFT layers were successfully implemented in a CMOS inverter, a 3D 6T-SRAM, and a single-grain lateral PIN photodiode with an in-pixel amplifier. The SG Si TFTs were also applied to flexible electronics. In this case, the a-Si precursor was prepared by doctor-blade coating of liquid Si based on pure cyclopentasilane (CPS) on a polyimide (PI) substrate, with a maximum process temperature of 350°C. The µ-Czochralski process provided location-controlled Si grains 3 µm in diameter, and mobilities of 460 and 121 cm2/Vs were obtained for electrons and holes, respectively. The devices on PI were transferred to a plastic foil and can operate at a bending diameter of 6 mm. These results indicate that SG TFTs are attractive for use in both monolithic 3D-ICs and flexible electronics.
As a popular social medium that many people have turned to in recent years, the collaborative encyclopedia Wikipedia provides information from a more “Neutral Point of View” than other platforms. Toward this core principle, a great deal of effort has been put into collaborative contribution and editing. The trajectories of how such collaboration unfolds through revisions are valuable for group dynamics and social media research, which suggests that the underlying derivation relationships among revisions should be extracted precisely from the chronologically sorted revision history. In this paper, we propose a revision graph extraction method based on supergram decomposition in a document collection of near-duplicates. The plain text of each revision is characterized by its frequency distribution of supergrams, the variable-length token sequences that stay the same through revisions. We show that this method performs the task more effectively than existing methods.
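The abstract above defines supergrams as variable-length token sequences that stay the same through revisions. As a rough, illustrative sketch (not the paper's actual decomposition), the matching blocks of `difflib.SequenceMatcher` can stand in for supergrams shared by two consecutive revisions, from which a frequency profile is built; the `min_len` cutoff and function names are assumptions:

```python
from collections import Counter
from difflib import SequenceMatcher

def shared_runs(a_tokens, b_tokens, min_len=2):
    # Maximal token runs common to two revisions: a rough stand-in for
    # the paper's "supergrams" (variable-length token sequences that
    # keep the same through revisions).
    sm = SequenceMatcher(a=a_tokens, b=b_tokens, autojunk=False)
    return [tuple(a_tokens[blk.a:blk.a + blk.size])
            for blk in sm.get_matching_blocks() if blk.size >= min_len]

def supergram_profile(tokens, supergrams):
    # Frequency distribution of supergrams within one revision.
    text = " ".join(tokens)
    return Counter(g for g in supergrams if " ".join(g) in text)

r1 = "the quick brown fox jumps over the lazy dog".split()
r2 = "a quick brown fox leaps over the lazy dog".split()
grams = shared_runs(r1, r2)
profile = supergram_profile(r2, grams)
print(grams)    # runs such as ('quick', 'brown', 'fox') survive the edit
```

Comparing such profiles between revision pairs is one way to guess which earlier revision a new revision was derived from.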
Tao QIN Wei LI Chenxu WANG Xingjun ZHANG
With the ever-growing prevalence of Web 2.0, users can access information and resources easily and ubiquitously, so it becomes increasingly important to understand the characteristics of users' complex behavior for efficient network management and security monitoring. In this paper, we develop a novel method to visualize and measure users' web communication behavior in large-scale networks. First, we employ active and passive monitoring to collect more than 20,000 IP addresses providing web services, divide them into 12 types according to the content they provide (e.g., news, music, and movies), and establish an IP address library whose elements are (service type, IP address) pairs. Because users' behaviors are complex, with users visiting multiple service types during any specific time period, we propose the behavior spectrum to model these behavior characteristics in an easily understandable way. Second, two kinds of user behavior characteristics are analyzed: the characteristics at particular time instants and the dynamic changes between consecutive time points. We then employ the Rényi cross entropy to classify users into groups, with the expectation that users in the same group have similar behavior profiles. Finally, we demonstrate the application of the behavior spectrum to profiling network traffic patterns and finding illegal users. The efficiency and correctness of the proposed methods are verified by experimental results on actual traffic traces collected from the Northwest Regional Center of the China Education and Research Network (CERNET).
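The grouping step above compares behavior spectra with the Rényi cross entropy. As a minimal sketch, here is the order-α Rényi divergence in one common form; whether this matches the paper's exact definition, and the toy three-type spectra, are assumptions:

```python
import math

def renyi_divergence(p, q, alpha=2.0, eps=1e-12):
    # Rényi divergence of order alpha between two discrete distributions
    # (one common definition; the paper's exact "Rényi cross entropy"
    # may differ). eps guards against zero probabilities.
    s = sum((pi + eps) ** alpha * (qi + eps) ** (1.0 - alpha)
            for pi, qi in zip(p, q))
    return math.log(s) / (alpha - 1.0)

# Toy "behavior spectra": time fractions spent on 3 of the 12 service types.
user_a = [0.7, 0.2, 0.1]
user_b = [0.7, 0.2, 0.1]
user_c = [0.1, 0.2, 0.7]

same_group = renyi_divergence(user_a, user_b)   # near 0: similar profiles
diff_group = renyi_divergence(user_a, user_c)   # clearly larger
```

Users whose pairwise divergence falls below a threshold would land in the same behavior group.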
Yan DING Huaimin WANG Lifeng WEI Songzheng CHEN Hongyi FU Xinhai XU
MapReduce is commonly used as a parallel massive-data processing model. When it is deployed as a service over open systems, the computational integrity of the participants becomes an important issue because of untrustworthy workers. Current duplication-based solutions can effectively defend against non-collusive attacks, yet most of them require a centralized worker to re-compute additional sampled tasks to defend against collusive attacks, which makes that worker a bottleneck. In this paper, we explore a trusted worker scheduling framework, named VAWS, that detects collusive attackers and assures the integrity of data processing without extra re-computation. Based on the historical results of verification, VAWS constructs an Integrity Attestation Graph (IAG) to identify malicious mappers and remove them from the framework. To further improve the efficiency of identification, a verification-couple selection method guided by the IAG is introduced to detect the potential accomplices of a confirmed malicious worker. We prove the effectiveness of our method in improving system performance through theoretical analysis. Intensive experiments show that the accuracy of VAWS is over 97% and that the computation overhead approaches the ideal value of 2 as the number of map tasks increases.
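The abstract does not give the IAG construction in detail, so the following is only a loose illustration of the duplication-based idea behind it: duplicated tasks whose outputs disagree create conflict edges between workers, and workers accumulating conflicts are flagged. All names, thresholds and data here are hypothetical:

```python
from collections import defaultdict

def build_attestation_graph(results):
    # results: {task_id: {worker: output}}. Add a conflict edge between
    # any two workers that returned different outputs for the same
    # duplicated task (a much-simplified stand-in for VAWS's IAG).
    conflicts = defaultdict(set)
    for outputs in results.values():
        workers = list(outputs)
        for i, w1 in enumerate(workers):
            for w2 in workers[i + 1:]:
                if outputs[w1] != outputs[w2]:
                    conflicts[w1].add(w2)
                    conflicts[w2].add(w1)
    return conflicts

def suspect_workers(conflicts, threshold=2):
    # Flag workers that conflict with at least `threshold` distinct peers.
    return {w for w, peers in conflicts.items() if len(peers) >= threshold}

results = {
    "t1": {"w1": 10, "w2": 10, "w3": 99},   # w3 disagrees with both
    "t2": {"w1": 7,  "w3": 42},             # w3 disagrees again
    "t3": {"w2": 5,  "w4": 5},              # consistent pair
}
suspects = suspect_workers(build_attestation_graph(results))
print(suspects)  # {'w3'}
```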
Sumxin JIANG Rendong YING Peilin LIU Zhenqi LU Zenghui ZHANG
This paper describes a new method for lossy audio signal compression via compressive sensing (CS). In this method, a structured shrinkage operator decomposes the audio signal into three layers, two sparse layers, tonal and transient, plus additive noise, and both the tonal and transient layers are then compressed using CS. Since the shrinkage operator takes into account the structure of the coefficients in the transform domain, it achieves a better sparse approximation of the audio signal than traditional methods do. In addition, we propose a sparsity allocation algorithm that adjusts the sparsity between the two layers, thus improving the performance of CS. Experimental results demonstrate that the new method provides better compression performance than conventional methods.
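For intuition, the building block that a structured shrinkage operator extends is plain elementwise soft-thresholding, which splits transform coefficients into a sparse layer and a small residual; the coefficient values below are made up:

```python
def soft_threshold(x, lam):
    # Elementwise soft-thresholding: the basic, unstructured shrinkage
    # operator. The paper's structured variant additionally exploits
    # neighbourhood structure among transform coefficients.
    return [max(abs(v) - lam, 0.0) * (1 if v >= 0 else -1) for v in x]

# Toy transform coefficients: a few large "tonal" peaks plus small noise.
coeffs = [5.0, -0.1, 0.05, 4.0, -0.2, 0.08, -3.5]
sparse_layer = soft_threshold(coeffs, lam=0.5)
noise_layer = [c - s for c, s in zip(coeffs, sparse_layer)]

nonzero = sum(1 for v in sparse_layer if v != 0.0)
print(sparse_layer)  # only the three large peaks survive, shrunk toward 0
```

Only the surviving (sparse) layer would then be measured and compressed via CS.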
He LIU Mangui LIANG Haoliang SUN
In this letter, we propose a new secure and efficient certificateless aggregate signature scheme that combines the advantages of certificateless public key cryptosystems and aggregate signatures. Based on the computational Diffie-Hellman problem, our scheme can be proven existentially unforgeable against adaptive chosen-message attacks. Most importantly, our scheme requires only short group elements for the aggregate signature and a constant number of pairing computations for aggregate verification, neither of which depends on the number of signers, which makes it highly efficient.
Nan SHA Yuanyuan GAO Xiaoxin YI Wenlong LI Weiwei YANG
A joint continuous phase frequency shift keying (CPFSK) modulation and physical-layer network coding (PNC) scheme, i.e., CPFSK-PNC, is proposed for two-way relay channels (TWRCs). This letter discusses the signal detection of the CPFSK-PNC scheme, with emphasis on the maximum-likelihood sequence detection (MLSD) algorithm for the relay receiver. The end-to-end error performance of the proposed CPFSK-PNC scheme is evaluated through simulations.
Today's large and complicated safety-critical systems must keep changing to accommodate ever-changing objectives and environments. Accordingly, runtime analysis for safe reconfiguration or evaluation is currently a hot topic in the field, and acquiring information about the external environment is crucial for runtime safety analysis. With the rapid development of web services, mobile networks and ubiquitous computing, abundant real-time environmental information is available on the Internet. To integrate this public information into the runtime safety analysis of critical systems, this paper puts forward a framework that can be implemented with open-source, cross-platform modules and that is, encouragingly, applicable to various safety-critical systems.
Seng KHEANG Kouichi KATSURADA Yurie IRIBE Tsuneo NITTA
To achieve high-quality output in speech synthesis systems, data-driven grapheme-to-phoneme (G2P) conversion is usually used to generate the phonetic transcription of out-of-vocabulary (OOV) words. To improve the performance of G2P conversion, this paper addresses the problem of conflicting phonemes, where an input grapheme can, in the same context, produce several possible output phonemes. To this end, we propose a two-stage neural network-based approach that converts the input text to phoneme sequences in the first stage and then predicts each output phoneme in the second stage using the phonemic information obtained. The first-stage neural network is fundamentally implemented as a many-to-many mapping model for the automatic conversion of words to phoneme sequences, while the second stage uses a combination of the obtained phoneme sequences to predict the output phoneme corresponding to each input grapheme in a given word. We evaluate the performance of this approach on the auto-aligned CMUDict corpus [1], an American English pronunciation dictionary. In terms of the phoneme and word accuracy of OOV words, comparison with several baseline approaches shows that our proposed approach improves on the previous one-stage neural network-based approach to G2P conversion. Comparison with another existing approach indicates that ours provides higher phoneme accuracy but lower word accuracy on a general dataset, and slightly higher phoneme and word accuracy on a selection of words containing more than one phoneme conflict.
Jian GAO Fang-Wei FU Linzhi SHEN Wenli REN
Generalized quasi-cyclic (GQC) codes with arbitrary lengths over the ring $\mathbb{F}_{q}+u\mathbb{F}_{q}$, where $u^2=0$, $q=p^n$, $n$ is a positive integer and $p$ is a prime number, are investigated. By the Chinese Remainder Theorem, structural properties and the decomposition of GQC codes are given. For 1-generator GQC codes, minimum generating sets and lower bounds on the minimum distance are given.
Yeong Jun KIM Tae Hwan HONG Yong Soo CHO
In this paper, a new technique is proposed to reduce the frequency of cell searches by user equipment (UE) in the presence of femtocells. A new common signal (CS) and a separate set of primary synchronization signals (PSSs) are employed to facilitate efficient cell search in a next-generation LTE-based system. The velocity of the UE is also used to determine the cell search mode. A slow UE recognizes the presence of femtocells using the CS, so that it can search separately for macrocells and femtocells. A fast UE does not search for femtocells, since femtocell coverage is restricted to a small region; instead, it detects the macrocell boundary using the PSSs transmitted from neighboring macrocells, so that it searches for macrocells only at the macrocell boundary. The effects of the CS and the UE velocity on the number of cell searches are analyzed, and the performance of the proposed technique is evaluated by computer simulations.
Abdulla Al MARUF Hung-Hsuan HUANG Kyoji KAWAGOE
A great deal of work has been conducted on time series classification and similarity search over the past decades. However, time series classification accuracy is still insufficient in applications such as ubiquitous or sensor systems. In this paper, a novel textual approximation of time series, called TAX, is proposed to achieve highly accurate time series classification. l-TAX, an extended version of TAX that shows promising classification accuracy over TAX and other existing methods, is also proposed. We also provide a comprehensive comparison between TAX and l-TAX and discuss the benefits of both methods. Both TAX and l-TAX transform a time series into a textual structure using existing document retrieval methods and bioinformatics algorithms. In TAX, a time series is represented as a document-like structure, whereas l-TAX uses a sequence of textual symbols. This paper provides a comprehensive overview of the textual approximation and of the techniques used by TAX and l-TAX.
Yong REN Nobuhiro KAJI Naoki YOSHINAGA Masaru KITSUREGAWA
In sentiment classification, conventional supervised approaches rely heavily on a large amount of linguistic resources, which are costly to obtain for under-resourced languages. To overcome this scarce-resource problem, several methods exploit graph-based semi-supervised learning (SSL). However, fundamental issues such as controlling label propagation, choosing the initial seeds, and selecting edges have barely been studied. Our evaluation on three real datasets demonstrates that manipulating the label propagation behavior and choosing labeled seeds appropriately play a critical role in adapting graph-based SSL approaches to this task.
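A minimal sketch of the graph-based SSL setting discussed above: iterative label propagation on a small word graph, with seed nodes clamped and a spread factor `alpha` as one of the knobs the paper argues matters. The graph, weights and parameter values are invented for illustration:

```python
def propagate_labels(adj, seeds, alpha=0.8, iters=50):
    # Plain iterative label propagation: each unlabeled node's score is
    # pulled toward the weighted mean of its neighbours, while labeled
    # seeds stay clamped at +1 (positive) or -1 (negative).
    # adj: {node: [(neighbour, weight), ...]}; seeds: {node: +1 or -1}.
    scores = {n: float(seeds.get(n, 0.0)) for n in adj}
    for _ in range(iters):
        new = {}
        for n, nbrs in adj.items():
            if n in seeds:
                new[n] = float(seeds[n])
                continue
            total = sum(w for _, w in nbrs)
            mean = sum(scores[m] * w for m, w in nbrs) / total if total else 0.0
            new[n] = alpha * mean
        scores = new
    return scores

# Tiny word graph: 'good' and 'bad' are seeds; 'fine' sits nearer 'good'.
adj = {
    "good":  [("fine", 1.0)],
    "bad":   [("awful", 1.0)],
    "fine":  [("good", 2.0), ("awful", 0.5)],
    "awful": [("bad", 2.0), ("fine", 0.5)],
}
seeds = {"good": 1, "bad": -1}
scores = propagate_labels(adj, seeds)
```

Changing `alpha`, the seed set, or the edge weights changes which polarity each unlabeled node ends up with, which is exactly the sensitivity the paper studies.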
Naomi YAGI Tomomoto ISHIKAWA Yutaka HATA
This paper describes an ultrasonic system that estimates the cell quantity in an artificial culture bone, which is effective for appropriate treatment with a composite of this material and bone marrow stromal cells. For this system, we examine two approaches for analyzing the ultrasound waves transmitted through the cultured bone containing stem cells to estimate the cell quantity: multiple regression and fuzzy inference. We employ two characteristics of the obtained wave in each method: the amplitude, measured directly from the wave, and the frequency, calculated by the cross-spectrum method. The results confirm that the fuzzy inference method yields accurate estimates of the cell quantity in artificial culture bone. Using this ultrasonic estimation system, orthopaedic surgeons can choose composites that contain a favorable number of cells before implantation.
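Of the two approaches, the multiple-regression one has a standard least-squares form: fit cell quantity against the two wave features. The calibration numbers below are entirely made up, purely to show the mechanics:

```python
import numpy as np

# Hypothetical calibration data: wave amplitude, cross-spectrum peak
# frequency, and known cell quantity. These values are illustrative,
# not the paper's measurements.
amplitude = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
frequency = np.array([2.0, 2.1, 2.4, 2.6, 2.8])
cells     = np.array([1.0e5, 2.0e5, 3.0e5, 4.0e5, 5.0e5])

# Multiple regression: cells ~ b0 + b1*amplitude + b2*frequency.
X = np.column_stack([np.ones_like(amplitude), amplitude, frequency])
coef, *_ = np.linalg.lstsq(X, cells, rcond=None)

predicted = X @ coef   # in-sample predictions of cell quantity
```

A new composite's cell quantity would then be estimated by plugging its measured amplitude and frequency into the fitted model.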
Shun UMETSU Akinobu SHIMIZU Hidefumi WATANABE Hidefumi KOBATAKE Shigeru NAWANO
This paper presents a novel liver segmentation algorithm that achieves higher performance than conventional algorithms in segmenting cases with unusual liver shapes and/or large liver lesions. An L1 norm was introduced into the mean squared difference to find the cases in a training dataset most relevant to an input case. A patient-specific probabilistic atlas was generated from the retrieved cases to compensate for livers with unusual shapes; it accounts for liver shape more specifically than a conventional probabilistic atlas averaged over many training cases. To make this process robust against large pathological lesions, we incorporated a novel term based on a set of “lesion bases” proposed in this study that account for the differences from normal liver parenchyma. Subsequently, the patient-specific probabilistic atlas was forwarded to a graph-cuts-based fine segmentation step, in which a penalty function was computed from the atlas. A leave-one-out test using clinical abdominal CT volumes validated the performance and proved that the proposed algorithm, with the patient-specific atlas reinforced by the lesion bases, outperformed the conventional algorithm with a statistically significant difference.
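The core atlas-construction idea, retrieving the most similar training cases and averaging only their masks, can be sketched in a few lines. This deliberately simplifies the paper's similarity measure (which combines an L1 norm with the mean squared difference) and uses tiny synthetic images:

```python
import numpy as np

def patient_specific_atlas(input_img, train_imgs, train_masks, k=2):
    # Rank training cases by a simple L1 image difference, then average
    # the liver masks of the k most similar cases instead of all cases,
    # yielding a probabilistic atlas specific to the input patient.
    dists = [np.abs(input_img - t).mean() for t in train_imgs]
    order = np.argsort(dists)[:k]
    return np.mean([train_masks[i] for i in order], axis=0)

rng = np.random.default_rng(0)
input_img = rng.random((8, 8))
# Two training cases close to the input, one very different.
train_imgs = [input_img + rng.normal(0, s, (8, 8)) for s in (0.01, 0.02, 0.5)]
train_masks = [np.ones((8, 8)), np.ones((8, 8)), np.zeros((8, 8))]

atlas = patient_specific_atlas(input_img, train_imgs, train_masks, k=2)
```

The dissimilar third case is excluded, so the atlas reflects only the relevant shapes; in the paper this atlas then drives the graph-cuts penalty function.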
Lijian ZHOU Wanquan LIU Zhe-Ming LU Tingyuan NIE
In this letter, a new face recognition approach based on curvelets and local ternary patterns (LTP) is proposed. First, we observe that the curvelet transform is a new anisotropic multi-resolution transform that can efficiently represent edge discontinuities in face images, and that the LTP operator is one of the best texture descriptors for characterizing the details of face images. This motivated us to decompose the image using the curvelet transform and extract features in different frequency bands. As the properties of the curvelet transform reveal, the highest frequency band mainly carries noise, so we drop it from feature selection. The lowest frequency band mainly contains coarse image information, so we process it more precisely, using LTP to extract detailed facial features. The remaining frequency bands mainly represent edge information, and we normalize them to obtain explicit structure information. All the extracted features are then combined into the elementary feature set, whose dimensionality we reduce using PCA before applying sparse sensing for face recognition. Experiments on the Yale database, the extended Yale B database, and the CMU PIE database show the effectiveness of the proposed methods.
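The LTP operator itself is simple to state: each neighbour of a pixel maps to +1 if it exceeds the centre by more than a threshold t, to -1 if it falls more than t below, and to 0 otherwise. A minimal sketch (returning raw ternary codes; in practice they are split into upper/lower binary patterns and histogrammed):

```python
import numpy as np

def ltp_codes(img, t=5):
    # Local ternary pattern at each interior pixel of a grayscale image.
    h, w = img.shape
    # 8-neighbour offsets, clockwise from top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2, 8), dtype=int)
    c = img[1:h - 1, 1:w - 1].astype(int)
    for k, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
        codes[:, :, k] = np.where(nb > c + t, 1, np.where(nb < c - t, -1, 0))
    return codes

patch = np.array([[100, 120, 100],
                  [ 90, 100, 103],
                  [ 50, 100, 100]])
code = ltp_codes(patch, t=5)[0, 0]   # ternary code of the centre pixel
```

The threshold t makes LTP less sensitive to noise than LBP, which is why it suits the coarse low-frequency curvelet band.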
Chengqian XU Xiaoyu CHEN Kai LIU
This letter presents new methods for transforming perfect ternary sequences into perfect 8-QAM+ sequences. First, based on perfect ternary sequences with even period, two mappings that map two ternary variables to an 8-QAM+ symbol are employed to construct new perfect 8-QAM+ sequences; in this case, the proposed construction generalizes the existing one. Then, based on perfect ternary sequences with odd period, perfect 8-QAM sequences are generated. Compared with perfect 8-QAM+ sequences, the resulting sequences have no energy loss.
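A sequence is called perfect when all of its out-of-phase periodic autocorrelations vanish, which is the property the constructions above preserve. A small checker, demonstrated on a toy binary (hence trivially ternary) example rather than the paper's 8-QAM+ mappings:

```python
def periodic_autocorrelation(seq, shift):
    # Periodic (cyclic) autocorrelation of a complex-valued sequence.
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n].conjugate() for i in range(n))

def is_perfect(seq, tol=1e-9):
    # Perfect: every out-of-phase periodic autocorrelation is zero.
    return all(abs(periodic_autocorrelation(seq, s)) < tol
               for s in range(1, len(seq)))

# A small perfect sequence; the paper instead maps pairs of ternary
# symbols to 8-QAM+ constellation points before applying such a check.
seq = [complex(x) for x in (1, 1, 1, -1)]
perfect = is_perfect(seq)
```

The same `is_perfect` test applies unchanged to the constructed 8-QAM+ sequences, since it only relies on complex symbol values.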
Arunee RATIKAN Mikifumi SHIKIDA
Online Social Networks (OSNs) have recently been playing an important role in communication. From the audience's perspective, they provide unlimited information via the information feeding mechanism (IFM), an important part of an OSN; the audience relies on the quantity and quality of the information it serves. We found that existing IFMs can cause two problems: information overload and cultural ignorance. In this paper, we propose a new type of IFM that solves these problems. The advantage of our proposed IFM is that it can filter out irrelevant information while taking the audience's culture into account, using the Naïve Bayes (NB) algorithm together with features and factors, and then dynamically serve interesting and important information based on the audience's current situation and preferences. This mechanism reduces the time the audience spends finding interesting information, and it can be applied to other cultures, societies and businesses. In the near future, it could provide audiences with excellent and less annoying communication. Through our studies, we found that our proposed IFM is most appropriate for Thai audiences and some groups of Japanese audiences when the audience's culture is considered.
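At the heart of the proposed IFM is an NB classifier deciding whether a post is relevant to the audience. A stripped-down sketch with word features only (the paper's real feature set also encodes cultural factors; the documents and labels here are invented):

```python
from collections import Counter
import math

def train_nb(docs):
    # docs: list of (tokens, label). Returns class counts, per-class word
    # counts, and the vocabulary, for Laplace-smoothed Naive Bayes.
    labels = Counter(lbl for _, lbl in docs)
    words = {lbl: Counter() for lbl in labels}
    vocab = set()
    for tokens, lbl in docs:
        words[lbl].update(tokens)
        vocab.update(tokens)
    return labels, words, vocab

def classify(tokens, labels, words, vocab):
    # Pick the label maximising log prior + sum of smoothed log likelihoods.
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for lbl, cnt in labels.items():
        lp = math.log(cnt / total)
        denom = sum(words[lbl].values()) + len(vocab)
        for tok in tokens:
            lp += math.log((words[lbl][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

docs = [
    (["concert", "tickets", "bangkok"], "relevant"),
    (["festival", "bangkok", "food"], "relevant"),
    (["lottery", "win", "click"], "irrelevant"),
    (["click", "free", "win"], "irrelevant"),
]
model = train_nb(docs)
verdict = classify(["bangkok", "festival"], *model)
```

Posts classified as irrelevant would simply be withheld from the audience's feed.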
A soft-decision recursive decoding algorithm (RDA) for the class of binary linear block codes recursively generated by the u|u+v construction is proposed. It is well known that Reed-Muller (RM) codes are in this class. A code in this class can be decomposed into left and right components. At each recursion level of the RDA, if the component is decomposable, the RDA is performed for the left component and then for the cosets generated from the left decoding result and the right component; the result of this level is obtained by concatenating the left and right decoding results. If the component is indecomposable, a proposed iterative bounded-distance decoding algorithm is performed. Computer simulations were conducted to evaluate the RDA for RM codes over an additive white Gaussian-noise channel with binary phase-shift keying modulation. The results show that the block error rates of the RDA are relatively close to those of maximum-likelihood decoding for the third-order RM code of length 2^6, and better than those of Chase II decoding for the third-order RM codes of lengths 2^6 and 2^7 and the fourth-order RM code of length 2^8.
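The u|u+v construction that generates this code class can be shown concretely on the smallest RM codes: every codeword u of the left code is combined with every codeword v of the right code into (u, u+v) over GF(2).

```python
def u_u_plus_v(left, right):
    # u|u+v construction over GF(2): codewords (u, u+v) for all
    # u in the left code and v in the right code.
    return [u + tuple((a + b) % 2 for a, b in zip(u, v))
            for u in left for v in right]

# RM(1,1) is all of GF(2)^2; RM(0,1) is the repetition code {00, 11}.
rm_1_1 = [(0, 0), (0, 1), (1, 0), (1, 1)]
rm_0_1 = [(0, 0), (1, 1)]

# RM(1,2) = {(u, u+v) : u in RM(1,1), v in RM(0,1)}, a [4,3,2] code.
rm_1_2 = u_u_plus_v(rm_1_1, rm_0_1)
min_wt = min(sum(c) for c in rm_1_2 if any(c))
```

The RDA exploits exactly this recursive structure: decode the left half, form cosets from that result and the right half, and concatenate.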
Bu-Ching LIN Juinn-Dar HUANG Jing-Yang JOU
The notion of multiple constant multiplication (MCM) is extensively adopted in digital signal processing (DSP) applications such as finite impulse response (FIR) filter designs. A set of adders replaces regular multipliers for the multiplications between input data and constant filter coefficients. Though many algorithms have been proposed to reduce the total number of adders in an MCM block for area minimization, they do not consider the actual bitwidth of each adder and thus may not estimate the hardware cost well. Therefore, in this article we propose a bitwidth-aware MCM optimization algorithm that minimizes the total number of adder bits rather than the adder count. It first builds a subexpression graph from the given coefficients, derives a set of constraints for adder bitwidth minimization, and then optimally solves the problem through integer linear programming (ILP). Experimental results show that the proposed algorithm effectively reduces the required adder bit count and outperforms existing state-of-the-art techniques.
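A concrete baseline for replacing a constant multiplier with adders is canonical signed-digit (CSD) recoding, one standard technique (not necessarily the paper's algorithm): a constant becomes digits in {-1, 0, +1} with no two adjacent nonzeros, so x*c needs one adder or subtractor per extra nonzero digit.

```python
def csd_digits(c):
    # Canonical signed-digit recoding of a positive constant, LSB first.
    digits = []
    while c:
        if c & 1:
            d = 2 - (c % 4)   # +1 if c = 1 (mod 4), -1 if c = 3 (mod 4)
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def shift_add_multiply(x, c):
    # Evaluate x*c using only shifts and adds/subtracts.
    return sum(d * (x << i) for i, d in enumerate(csd_digits(c)))

for c in (7, 23, 105):
    assert shift_add_multiply(3, c) == 3 * c

adders = sum(1 for d in csd_digits(105) if d) - 1  # nonzero digits minus one
```

The paper's point is that counting adders alone, as above, ignores each adder's bitwidth; its ILP formulation minimizes total adder bits instead.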