Huaning WU Yalong YAN Chao LIU Jing ZHANG
This paper introduces spider monkey optimization (SMO) for the synthesis of sparse linear arrays, which are composed of a uniformly spaced core subarray and an extended sparse subarray. The amplitudes of all elements and the locations of the elements in the extended sparse subarray are optimized by the SMO algorithm to reduce the sidelobe level of the whole array under a set of practical constraints. To show the efficiency of SMO, different examples are presented and solved. Simulation results of the sparse arrays designed by SMO are compared with published results to verify the effectiveness of the method.
Given an undirected graph G, an edge dominating set is a subset F of edges such that each edge not in F is adjacent to some edge in F; computing the minimum size of an edge dominating set is known to be NP-hard. Since the size of any edge dominating set is at least half of the maximum size µ(G) of a matching in G, we study the problem of testing whether a given graph G has an edge dominating set of size ⌈µ(G)/2⌉ or not. In this paper, we prove that the problem is NP-complete, and we design an O*(2.0801^(µ(G)/2))-time, polynomial-space algorithm for the problem.
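As a small illustration of the decision problem above, the following Python sketch (helper names are ours; brute force, so suitable only for tiny graphs) computes µ(G) and tests whether an edge dominating set of size ⌈µ(G)/2⌉ exists:

```python
from itertools import combinations

def max_matching_size(edges):
    """Maximum matching size mu(G), by brute force (tiny graphs only)."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):  # edges pairwise disjoint
                return k
    return 0

def is_edge_dominating(edges, subset):
    """Every edge outside subset must share an endpoint with some edge in subset."""
    covered = {v for e in subset for v in e}
    return all(e in subset or e[0] in covered or e[1] in covered for e in edges)

def has_eds_of_size(edges, k):
    """Does G admit an edge dominating set with k edges?"""
    return any(is_edge_dominating(edges, set(sub))
               for sub in combinations(edges, k))

# Path on 4 vertices: mu = 2, so the target size is ceil(2/2) = 1,
# and the middle edge (1,2) alone dominates both outer edges.
edges = [(0, 1), (1, 2), (2, 3)]
mu = max_matching_size(edges)
print(mu, has_eds_of_size(edges, -(-mu // 2)))  # -> 2 True
```

The exponential-time algorithm in the paper is of course far more sophisticated; this sketch only makes the yes/no question concrete.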
In this paper, we present two classes of zero difference balanced (ZDB) functions, derived from difference balanced functions, and a class of perfect ternary sequences. The proposed functions have parameters not covered in the literature and can be used to design optimal constant composition codes and perfect difference systems of sets.
Ganzorig GANKHUYAG Eungi HONG Yoonsik CHOE
Network coding (NC) is considered a new paradigm for distributed networks. However, NC suffers from an all-or-nothing property. In this paper, we propose a sparse recovery approach using a sparse sensing matrix to solve the NC all-or-nothing problem over a finite field. The effectiveness of the proposed approach is evaluated on a sensor network.
Xiaoxia DAI Wei XIA Wenlong HE
Much attention has recently been paid to sparsity-aware space-time adaptive processing (STAP) algorithms. The idea of sparsity-aware technology is commonly based on the convex l1-norm penalty. However, some works investigate the lq (0 < q < 1) penalty which induces more sparsity owing to its lack of convexity. We herein consider the design of an lq penalized STAP processor with a generalized sidelobe canceler (GSC) architecture. The lq cyclic descent (CD) algorithm is utilized with the least squares (LS) design criterion. It is validated through simulations that the lq penalized STAP processor outperforms the existing l1-based counterparts in both convergence speed and steady-state performance.
In this paper, we propose a method for finding similar sentences based on language resources, to build a parallel corpus between English and Korean from Wikipedia. Instead of a traditional machine-readable dictionary, we use a Wiki-dictionary consisting of document titles from Wikipedia and bilingual example sentence pairs from a Web dictionary. We calculate the similarity between sentences using sequential matching of the language resources, and evaluate the extracted parallel sentences. In the experiments, the proposed parallel sentence extraction method achieves an F1-score of 65.4%.
In this paper, we present an FPGA hardware implementation of phylogenetic tree reconstruction with a maximum parsimony algorithm. We base our approach on a stochastic local search algorithm that uses the Progressive Neighborhood and the Indirect Calculation of Tree Lengths method, which is widely used to accelerate phylogenetic tree reconstruction in software. In our implementation, we define a tree structure and accelerate the search by parallel and pipeline processing. We show results for eight real-world biological datasets, comparing execution times against our previous hardware approach and against TNT, the fastest available parsimony program, which is also accelerated by the Indirect Calculation of Tree Lengths method. Acceleration rates of 34 to 45 per rearrangement and 2 to 6 for the whole search are obtained over our previous hardware approach. Acceleration rates of 2 to 36 per rearrangement and 18 to 112 for the whole search are obtained over TNT.
Workflow nets (WF-nets for short) are a standard way to automate business processes. Well-Structured WF-nets (WS WF-nets for short) are an important subclass of WF-nets because they offer a good balance between expressive power and analysis power. In this paper, we reveal structural and behavioral properties of WS WF-nets. Our results on structural properties are: (i) there is no EFC but non-FC WF-net in WS WF-nets; (ii) a WS WF-net is sound iff it is an ST-net in the sense of van Hee et al. Our results on behavioral properties are: (i) any WS WF-net is safe; (ii) any WS WF-net is separable; (iii) we give a necessary and sufficient condition on the reachability of a sound WS WF-net (N,[pIk]). Finally, we illustrate the usefulness of the proposed properties with an application example of analyzing workflow evolution.
Haiyang LIU Hao ZHANG Lianrong MA
Based on the codewords of the [q,2,q-1] extended Reed-Solomon (RS) code over the finite field Fq, we can construct a regular binary γq×q^2 matrix H(γ,q), where q is a power of 2 and γ≤q. The matrix H(γ,q) defines a regular low-density parity-check (LDPC) code C(γ,q), called a full-length RS-LDPC code. Using analytical methods, we completely determine the values of s(H(4,q)), s(H(5,q)), and d(C(5,q)) in this letter, where s(H(γ,q)) and d(C(γ,q)) denote the stopping distance of H(γ,q) and the minimum distance of C(γ,q), respectively.
Wei HAN Xiongwei ZHANG Meng SUN Li LI Wenhua SHI
In this letter, we propose a novel speech separation method based on a perceptually weighted deep recurrent neural network (DRNN) that incorporates the masking properties of the human auditory system. In the supervised training stage, we first utilize the clean label speech of two different speakers to calculate two perceptual weighting matrices. The obtained weighting matrices are then used to adjust the mean squared error between the network outputs and the reference features of the two clean speech signals so that the two speakers' speech can mask each other. Experimental results on the TSP speech corpus demonstrate that the proposed speech separation approach achieves significant improvements over state-of-the-art methods when tested with different mixing cases.
Xibin WANG Fengji LUO Chunyan SANG Jun ZENG Sachio HIROKAWA
With the rapid development of information and Web technologies, people are facing ‘information overload’ in their daily lives. The personalized recommendation system (PRS) is an effective tool for helping users extract meaningful information from big data. Collaborative filtering (CF) is one of the most widely used personalized recommendation techniques for recommending personalized products to users. However, the conventional CF technique has some limitations, such as the low accuracy of similarity calculation and the cold start problem. In this paper, a PRS model based on the Support Vector Machine (SVM) is proposed. The proposed model considers not only the items' content information but also the users' demographic and behavior information to fully capture the users' interests and preferences. An improved Particle Swarm Optimization (PSO) algorithm is also proposed to improve the performance of the model. The efficiency of the proposed method is verified on multiple benchmark datasets.
Shuai LIU Licheng JIAO Shuyuan YANG Hongying LIU
Restoration is important for improving visual quality and lays the foundation for accurate object detection and terrain classification in image analysis. In this paper, we introduce Beta process priors into hierarchical sparse Bayesian learning for recovering degraded hyperspectral images (HSI), including suppressing various noises and inferring missing data. The proposed method decomposes the HSI into a weighted summation of dictionary elements, a Gaussian noise term, and a sparse noise term, with which the latent information and the noise characteristics of the HSI can be learned and represented well. Solved by a Gibbs sampler, the underlying dictionary and the noise can be efficiently estimated without tuning any parameters. The performance of the proposed method is compared with state-of-the-art methods and validated on two hyperspectral datasets that are contaminated with Gaussian noise, impulse noise, stripes, and dead pixel lines, or with a large fraction of data missing uniformly at random. The visual and quantitative results demonstrate the superiority of the proposed method.
Recently, Static Heterogeneous Particle Swarm Optimization (SHPSO) has been studied by more and more researchers. In SHPSO, the different search behaviours assigned to particles during initialization do not change during the search process. As a consequence, an inappropriate population size of exploratory particles can leave SHPSO with great difficulty escaping local optima. This motivated our attempt to improve the performance of SHPSO by introducing dynamic heterogeneity. The self-adaptive heterogeneity is able to alter its heterogeneous structure according to events caused by the behaviour of the swarm. The proposed triggering events are confirmed by keeping track of how many iterations the global best position (pg) has remained unchanged. This information is then used to select a new heterogeneous structure when pg is considered stagnant. According to the different types of heterogeneity, DHPSO-d and DHPSO-p are proposed in this paper. In DHPSO-d, particles dynamically use different rules for updating their positions when the triggering events are confirmed. In DHPSO-p, a global gbest model and a pairwise connection model are automatically selected by the triggering configuration. In order to investigate the scalability of DHPSO-d and DHPSO-p, a series of experiments with four state-of-the-art algorithms are performed on ten well-known optimization problems. The scalability analysis of DHPSO-d and DHPSO-p reveals that the dynamic self-adaptive heterogeneous structure is able to address the exploration-exploitation trade-off in PSO while providing excellent solutions.
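The stagnation-triggered idea can be sketched in a few lines. This is a minimal illustration, not the authors' DHPSO-d: it runs a plain PSO on the sphere function and, whenever the global best stalls for a fixed number of iterations, toggles the update rule of half the swarm (all parameter values and rule choices below are our assumptions):

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def dhpso(f, dim=2, n=20, iters=200, stall_limit=15, seed=1):
    """Toy dynamically-heterogeneous PSO: behaviour switch on gbest stagnation."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    explorer = [True] * n          # initially homogeneous behaviour
    stall = 0
    for _ in range(iters):
        improved = False
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                if explorer[i]:
                    # standard pbest/gbest update (exploratory)
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
                else:
                    # exploitative rule: contract toward gbest only
                    vel[i][d] = 0.4 * vel[i][d] + 1.5 * r1 * (gbest[d] - pos[i][d])
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest_val[i], pbest[i] = v, pos[i][:]
                if v < gbest_val:
                    gbest_val, gbest = v, pos[i][:]
                    improved = True
        # triggering event: gbest unchanged for stall_limit iterations
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:
            for i in rng.sample(range(n), n // 2):
                explorer[i] = not explorer[i]   # switch behaviour
            stall = 0
    return gbest_val

print(dhpso(sphere))
```

The real DHPSO variants use more refined triggering configurations and topology switches; the sketch only shows where the stagnation counter and the behaviour switch sit in the loop.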
This paper presents a set of procedures to blend GNSS and V2V communication to improve the performance of a stand-alone on-board GNSS receiver and to assure mutual positioning with a bounded error. A particle filter algorithm is applied to enhance the mutual positioning of vehicles; it fuses the information provided by the GNSS receiver, wireless measurements in vehicular environments, the odometer, and digital road map data including reachability and zone probabilities. A measurement-based statistical model of relative distance as a function of Time-of-Arrival is obtained experimentally. The effect of the number of collaborating vehicles on the mutual positioning procedure is investigated in terms of positioning accuracy and network performance through realistic simulation studies, and the proposed mutual positioning procedure is experimentally evaluated with a fleet of five vehicles equipped with IEEE 802.11p radio modems. Collaboration in a VANET improves the availability of position measurements and their accuracy by up to 40% compared with the stand-alone GNSS receiver.
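As a rough illustration of the particle-filter fusion idea, the following is a minimal bootstrap particle filter estimating a static 1-D position from noisy range-like measurements. It is a generic textbook sketch, not the paper's filter: it omits odometry, map constraints, and TOA modeling, and all names and parameters are our assumptions:

```python
import random, math

def particle_filter_1d(measurements, n=500, meas_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a static 1-D position."""
    rng = random.Random(seed)
    particles = [rng.uniform(-50, 50) for _ in range(n)]
    for z in measurements:
        # importance weights: Gaussian likelihood of measurement z
        w = [math.exp(-0.5 * ((p - z) / meas_std) ** 2) for p in particles]
        if sum(w) == 0:              # guard against numerical underflow
            w = [1.0] * n
        # resample in proportion to the weights, then add process jitter
        particles = [p + rng.gauss(0, 0.1)
                     for p in rng.choices(particles, weights=w, k=n)]
    return sum(particles) / n        # posterior mean estimate

# Simulated noisy measurements of a vehicle at position 12.3
true_pos = 12.3
meas_rng = random.Random(42)
zs = [true_pos + meas_rng.gauss(0, 1.0) for _ in range(30)]
print(round(particle_filter_1d(zs), 1))
```

In the paper's setting each vehicle would additionally propagate particles with odometry and weight them with GNSS, V2V range, and map terms; the weighting/resampling loop above is the common core.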
Yilong ZHANG Yuehua LI Safieddin SAFAVI-NAEINI
Object detection in millimeter-wave Interferometric Synthetic Aperture Radiometer (InSAR) imaging is a crucial task. Facing unpredictable and numerous objects, traditional object detection models that run after the InSAR system has finished imaging suffer from complex clutter backgrounds, weak object intensity, and Gibbs ringing, which makes a general-purpose saliency detection system for InSAR necessary. This letter proposes a spectrum-based saliency detection algorithm that extracts salient regions from unknown backgrounds in cooperation with the sparse-sensing InSAR imaging procedure. Directly using the interferometric values and the sparse representation of scenes in the Discrete Cosine Transform (DCT) domain adopted by the InSAR imaging procedure, the proposed algorithm isolates the support of the saliency region and then inversely transforms it to calculate the saliency map. Compared with detection algorithms that run after imaging is complete, the proposed algorithm is not affected by the information loss caused by the imaging procedure. Experimental results show that it is effective and adaptable for millimeter-wave InSAR imaging.
Shanqi PANG Ying WANG Jiao DU Wenju XU
Orthogonal arrays and orthogonal partitions are of great significance in communications and coding theory. In this letter, using a generalized orthogonal partition, Latin squares, and orthogonal Latin squares, we present an iterative construction method for orthogonal arrays of strength t and orthogonal partitions. As an application, the method constructs more orthogonal arrays of strength t and orthogonal partitions than existing methods.
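To illustrate the classical connection between orthogonal Latin squares and orthogonal arrays that constructions of this kind build on, the sketch below juxtaposes two orthogonal Latin squares of order 3 into an OA(9, 4, 3, 2) and verifies strength 2 (the helper names are ours, not the letter's):

```python
from itertools import product, combinations

# Two orthogonal Latin squares of order 3 over {0, 1, 2}:
L1 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
L2 = [[(i + 2 * j) % 3 for j in range(3)] for i in range(3)]

# Juxtapose row index, column index, and both square entries:
# 9 runs, 4 factors, 3 levels, strength 2 -> OA(9, 4, 3, 2).
OA = [(i, j, L1[i][j], L2[i][j]) for i, j in product(range(3), repeat=2)]

def has_strength(oa, t, levels):
    """Every t-column projection contains each level t-tuple equally often."""
    k = len(oa[0])
    for cols in combinations(range(k), t):
        counts = {}
        for row in oa:
            key = tuple(row[c] for c in cols)
            counts[key] = counts.get(key, 0) + 1
        if len(counts) != levels ** t or len(set(counts.values())) != 1:
            return False
    return True

print(has_strength(OA, 2, 3))  # -> True
```

Each additional mutually orthogonal Latin square would contribute one more column; the iterative construction in the letter generalizes this juxtaposition idea using generalized orthogonal partitions.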
Card-based protocols enable us to easily perform cryptographic tasks such as secure multiparty computation using a deck of physical cards. Since the first card-based protocol appeared in 1989, many protocols have been designed. A protocol is usually described with a series of somewhat intuitive and verbal descriptions, such as “turn over this card,” “shuffle these two cards,” “apply a random cut to these five cards,” and so on. On the other hand, a formal computational model of card-based protocols via abstract machine was constructed in 2014. By virtue of the formalization, card-based protocols can be treated more rigorously; for example, it enables one to discuss the lower bounds on the number of cards required for secure computations. In this paper, an overview of the computational model with its applications to designing protocols and a survey of the recent progress in card-based protocols are presented.
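As a concrete example of the kind of protocol being formalized, the following simulates the classic five-card AND protocol (den Boer's "five-card trick" from 1989): two committed bits, one extra heart, a random cut, and a public reveal. The encoding convention below is one of several equivalent choices:

```python
import random

def commit(bit):
    """Commit a bit as an ordered pair of face-down cards:
    1 -> (club, heart), 0 -> (heart, club)."""
    return ['C', 'H'] if bit else ['H', 'C']

def five_card_and(a, b, rng):
    """Simulate the five-card trick: outputs a AND b without revealing a, b."""
    # Alice's pair, one extra heart in the middle, Bob's pair in reverse order.
    seq = commit(a) + ['H'] + list(reversed(commit(b)))
    # A random cut (cyclic rotation) hides the inputs; it cannot change
    # whether the three hearts are cyclically consecutive.
    r = rng.randrange(5)
    seq = seq[r:] + seq[:r]
    # Reveal: the three hearts are cyclically consecutive iff a AND b = 1.
    doubled = seq + seq
    return any(doubled[i:i + 3] == ['H', 'H', 'H'] for i in range(5))

rng = random.Random(0)
for a in (0, 1):
    for b in (0, 1):
        # repeat to exercise many different random cuts
        results = {five_card_and(a, b, rng) for _ in range(20)}
        assert results == {bool(a and b)}
print("five-card trick consistent for all inputs")
```

The formal computational model mentioned above makes exactly these ingredients precise: card sequences as states, and "turn over" and "shuffle/cut" as operations of an abstract machine, which is what enables lower-bound arguments on the number of cards.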
Kazuto OGAWA Goichiro HANAOKA Hideki IMAI
A lot of encryption and watermarking schemes have been developed as countermeasures to protect the copyrights of broadcast or multicast content from malicious subscribers (traitors) who build pirate receivers (PRs) to use the content illegally. However, solo use of these schemes does not necessarily work well. Traitor tracing encryption schemes are a type of broadcast encryption and have been developed for broadcasting and multicast services. There are multiple distinct decryption keys for each encryption key, and each service subscriber is given a unique decryption key. Any subscriber who redistributes his or her decryption key to a third party, or who uses it and possibly other keys to build a PR, can be identified using the tracing algorithm of the scheme employed by the service. However, almost all previous schemes share the same weakness: they are vulnerable to the content comparison attack, a concrete example of solo use of a scheme not working well. The attack involves multiple distinct decryption keys and a content-data comparison mechanism. We have developed a method, called the complementary traitor tracing method (CTT), that makes traitor tracing schemes secure against content comparison attacks. It makes it impossible for PRs to distinguish ordinary content data from test data and makes traitor tracing schemes effective against all PRs, even those with multiple distinct decryption keys. CTT is built from a simple combination of indispensable schemes, and it makes broadcasting and multicast services secure.
Minoru KURIBAYASHI Masakatu MORII
The Quick Response (QR) code is a two-dimensional barcode widely used in many applications. A standard QR code consists of black and white square modules and appears as a randomized pattern. By modifying the modules according to a certain rule, it is possible to display a logo image on the QR code; such a QR code is called an aesthetic QR code. In this paper, we change the encoding method of the Reed-Solomon (RS) code to produce an aesthetic QR code without sacrificing its error-correcting capability. The proposed method randomly produces candidate RS blocks and finds the best one during encoding. Considering the image to be displayed, we also introduce a weighting function during random selection that reflects the visually important regions of the image. We further investigate the shape of the modules that represent the image and consider the trade-off between visual quality and readability. As a result, we can produce a beautiful aesthetic QR code that can still be decoded by a standard QR code reader.
Video data mining based on topic models has recently become a very popular research topic. In this paper, we present a novel topic model named sequential correspondence hierarchical Dirichlet processes (Seq-cHDP) to learn the hidden structure within video data. The Seq-cHDP model can be viewed as an extended hierarchical Dirichlet processes (HDP) model with two important features: a time-dependency mechanism that connects neighboring video frames on the basis of a time-dependent Markov assumption, and a correspondence mechanism that handles multimodal data such as the mixture of visual words and speech words extracted from video files. A cascaded Gibbs sampling method is applied for the inference task of Seq-cHDP. We present a comprehensive evaluation of Seq-cHDP through experiments and demonstrate that it outperforms other baseline models.