Zhibao LIN Zhengqian LI Pinhui KE
Zero-difference balanced (ZDB) functions, which have many applications in coding theory and sequence design, have received a lot of attention in recent years. In this letter, based on two known classes of ZDB functions, a new class of ZDB functions defined on the group (Z2e-1×Zn,+) is presented, where e is a prime, n=p1m1p2m2…pkmk, and each pi is an odd prime satisfying e|(pi-1) for 1≤i≤k. When gcd(2e-1,n)=1, the newly constructed ZDB functions are cyclic.
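The defining ZDB property, that every nonzero shift a produces the same number λ of collisions f(x+a)=f(x), can be verified by brute force on a small cyclic group. The sketch below is illustrative only; the function name and the examples are ours, not the letter's construction.

```python
def is_zdb(f, n):
    """Check whether f on the cyclic group Z_n is zero-difference balanced:
    for every a != 0, |{x : f((x+a) mod n) == f(x)}| equals the same constant.
    Returns that constant (lambda) if f is ZDB, otherwise None."""
    counts = set()
    for a in range(1, n):
        counts.add(sum(1 for x in range(n) if f((x + a) % n) == f(x)))
        if len(counts) > 1:  # two different collision counts: not ZDB
            return None
    return counts.pop()
```

For example, x ↦ x² on Z_7 is ZDB with λ = 1, because (x+a)² ≡ x² (mod 7) reduces to 2ax + a² ≡ 0, which has exactly one solution x for each a ≠ 0.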
In this paper, we study self-dual cyclic codes of length n over the ring R=Z4[u]/
Tao YU Azril HANIZ Kentaro SANO Ryosuke IWATA Ryouta KOSAKA Yusuke KUKI Gia Khanh TRAN Jun-ichi TAKADA Kei SAKAGUCHI
Location information is essential to a variety of applications. It is one of the most important contexts to be detected by wireless distributed sensors, a key technology in the Internet of Things. Fingerprint-based methods, which compare location-unique fingerprints collected beforehand with the fingerprint measured from the target, have recently attracted much attention in both academia and industry. They have been successfully used for many location-based applications. From the viewpoint of practical applications, this paper introduces four typical approaches to fingerprint-based radio emitter localization, each with a representative application: localization of LTE smartphones for anti-cheating in exams, indoor localization of Wi-Fi terminals, localized light control in BEMS using the location of occupants, and localization of illegal radios in outdoor environments. Based on these different practical application scenarios, solutions designed to enhance localization performance are discussed in detail. To the best of the authors' knowledge, this is the first paper to give readers a guideline for fingerprint-based localization systems in terms of fingerprint selection, hardware architecture design, and algorithm enhancement.
Kazuya ANAZAWA Toshiaki MIYAZAKI Peng LI
After large-scale disasters, information sharing among people becomes more important than usual. This, however, is extremely difficult to achieve in disaster zones due to serious damage to the existing network infrastructure, power outages, and high traffic congestion. For the quick provision of alternative networks to serve heavy communication demands after disasters, establishing local area networks (LANs) consisting of portable servers with data storage has been considered one of the most promising solutions. Based on the established LAN and a data server in each area, people can share many kinds of disaster-related information, such as emergency information and supply/demand information, via deployed neighboring servers. However, due to the lack of a stable Internet connection, these servers are isolated and cannot be synchronized in real time. To enable and guarantee more efficient information sharing across the whole disaster-hit area, the data stored on each server should be synchronized without the Internet. Our solution is an intermittent data synchronization scheme that uses moving vehicles as relays to exchange data between isolated servers after disasters. With the objective of maximizing the total number of synchronized high-priority data items under the capability constraints of mobile relays, we first propose a data allocation scheme (DAS) from a server to a mobile relay. We then propose a trajectory planning scheme for the relays, formulated as a Mixed Integer Linear Fractional Programming (MILFP) problem, together with an algorithm to solve it efficiently. Extensive simulations and comparisons with other methods show the superior performance of our proposals.
Yo YAMAGUCHI Yosuke FUJINO Hajime KATSUDA Marina NAKANO Hiroyuki FUKUMOTO Shigeru TERUHI Kazunori AKABANE Shuichi YOSHINO
This paper presents a water leakage monitoring system that gathers acoustic data from water pipes using wireless communication technology and identifies the sound of water leakage using machine learning technology. To collect acoustic data effectively, the system combines three types of data-collection methods: drive-by, walk-by, and static. To design such a system, it is important to ascertain the wireless communication distance that can be achieved with sensors installed underground. This paper also reports on radio propagation in the 920 MHz band from underground manholes made of reinforced concrete and resin concrete in residential and commercial areas. We show that it is possible to design a practical system that uses radio communication from underground sensors.
Keiichi ITOH Haruka NAKAJIMA Hideaki MATSUDA Masaki TANAKA Hajime IGARASHI
This paper reports a novel 3D topology optimization method based on the finite-difference time-domain (FDTD) method for a dielectric lens antenna. To obtain an optimal lens with a smooth boundary, we apply normalized Gaussian networks (NGnet) to 3D topology optimization. Using the proposed method, a dielectric lens with the desired radiation characteristics can be designed. As an example of optimization using the proposed method, the width of the main beam is minimized assuming spatial symmetry. In the optimization, the lens is assumed to be loaded on the aperture of a waveguide slot antenna and is small compared with the wavelength. It is shown that the optimized lens has a narrower main-beam width than the conventional lens.
This paper introduces a technique for automatically generating potential training data for relation extraction from sentences in which entity pairs are not explicitly present. Most previous work on relation extraction by distant supervision has ignored cases in which a relationship is expressed via null subjects or anaphora. However, natural language text basically has a network structure composed of several sentences; even when sentences are closely related, the relationship is often not expressed explicitly in the text, which can make relation extraction difficult. This paper describes a new model that augments a paragraph with a “salient entity” that is determined without parsing. This entity can create additional tuple extraction environments as a potential subject in paragraphs. Including the salient entity as part of the sentential input may allow the proposed method to identify relationships that conventional methods cannot. The method also has promising applicability to languages for which advanced natural language processing tools are lacking.
Yong WANG Zhiqiu HUANG Rongcun WANG Qiao YU
Spectrum-based fault localization (SFL) is a lightweight approach that aims to help debuggers identify the root causes of failures by measuring the suspiciousness of each program component being a fault and generating a hypothetical fault ranking list. Although SFL techniques have been shown to be effective, the faulty component in a buggy program cannot always be ranked at the top due to its complex fault triggering model, and it is extremely difficult to model the complex triggering models of all buggy programs. To address this issue, we propose two simple fault triggering models (RIPRα and RIPRβ) and a refinement technique that improves a fault's absolute ranking based on the two models, by ruling out some higher-ranked components according to their fault triggering models. Intuitively, our approach is effective if a faulty component is ranked within the top k in the two fault ranking lists output by the two fault localization strategies. Experimental results show that our approach can significantly improve the fault's absolute ranking in the three cases.
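The abstract does not spell out the suspiciousness measure; as an illustration of how a basic SFL ranking list is produced, here is a sketch using the well-known Ochiai formula. The function name and data layout are our own; this is a generic SFL baseline, not the paper's RIPR-based refinement.

```python
import math

def ochiai_ranking(coverage, failed):
    """Rank components by Ochiai suspiciousness, a standard SFL formula.
    coverage[i][j] = 1 if test i executes component j; failed[i] = True if test i fails.
    Returns a list of (score, component) pairs, most suspicious first."""
    n_comp = len(coverage[0])
    total_failed = sum(failed)
    ranking = []
    for j in range(n_comp):
        ef = sum(1 for i in range(len(coverage)) if coverage[i][j] and failed[i])      # covered by failing tests
        ep = sum(1 for i in range(len(coverage)) if coverage[i][j] and not failed[i])  # covered by passing tests
        denom = math.sqrt(total_failed * (ef + ep))
        ranking.append((ef / denom if denom else 0.0, j))
    ranking.sort(reverse=True)
    return ranking
```

A component executed by every failing test and no passing test gets score 1.0 and lands at the top of the ranking list; the refinement the paper proposes would then rule out some higher-ranked components.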
Yusuke YAGI Keita TAKAHASHI Toshiaki FUJII Toshiki SONODA Hajime NAGAHARA
A light field, which is often understood as a set of dense multi-view images, has been utilized in various 2D/3D applications. Efficient light field acquisition using a coded aperture camera is the target problem considered in this paper. Specifically, the entire light field, which consists of many images, should be reconstructed from only a few images captured through different aperture patterns. In previous work, this problem has often been discussed in the context of compressed sensing (CS), where sparse representations on a pre-trained dictionary or basis are explored to reconstruct the light field. In contrast, we formulated this problem from the perspective of principal component analysis (PCA) and non-negative matrix factorization (NMF), where only a small number of basis vectors are selected in advance based on an analysis of the training dataset. From this formulation, we derived optimal non-negative aperture patterns and a straightforward reconstruction algorithm. Even though our method is based on conventional techniques, it has proven to be more accurate and much faster than a state-of-the-art CS-based method.
Su LIU Xingguang GENG Yitao ZHANG Shaolong ZHANG Jun ZHANG Yanbin XIAO Chengjun HUANG Haiying ZHANG
The quality of edge detection is related to the detection angle, scale, and threshold. Many algorithms improve edge detection quality through rules about detection angles; however, these algorithms do not provide rules for detecting edges at arbitrary angles, so they simply use different numbers of angles without indicating an optimal number. In this paper, a novel edge detection algorithm is proposed that detects edges at arbitrary angles, and an optimal number of angles for the algorithm is introduced. The algorithm combines singularity detection using the Gaussian wavelet transform with edge detection in arbitrary directions, and consists of five steps: 1) An image is divided into pixel lines at a certain angle in the range from 45° to 90° according to the decomposition rules given in this paper. 2) Singularities of the pixel lines are detected to form an edge image at that angle. 3) Edge images at different angles are combined into a final edge image. 4) The detection angles in the range from 45° to 90° are extended to the range from 0° to 360°. 5) An optimal number of angles for the algorithm is proposed. The algorithm with the optimal number of angles then shows better performance.
Hirofumi SUZUKI Shin-ichi MINATO
Finding Pareto-optimal solutions is a basic approach in multi-objective combinatorial optimization. In this paper, we focus on the 0-1 multi-objective knapsack problem and present an algorithm to enumerate all its Pareto-optimal solutions, which improves upon the method proposed by Bazgan et al. Our algorithm is based on dynamic programming techniques using an efficient data structure called the zero-suppressed binary decision diagram (ZDD), which represents a set of combinations compactly. In our algorithm, we utilize ZDDs to store all feasible solutions compactly and to prune inessential partial solutions as quickly as possible. As an output of the algorithm, we obtain a useful ZDD indexing all the Pareto-optimal solutions. The results of our experiments show that our algorithm is faster than the previous method on various types of three- and four-objective instances, which are difficult problems to solve.
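The paper's algorithm builds the solution set inside a ZDD; as a simplified illustration of the underlying idea, dynamic programming over partial solutions with dominance pruning, here is a bi-objective sketch using plain Python sets instead of a ZDD (all names are our own, and this finds Pareto-optimal profit vectors rather than indexing all solutions):

```python
def prune(states):
    """Discard partial solutions dominated by another state with
    no more weight and no less profit in both objectives."""
    return {s for s in states
            if not any(t != s and t[0] <= s[0] and t[1] >= s[1] and t[2] >= s[2]
                       for t in states)}

def pareto_knapsack(items, capacity):
    """Enumerate the Pareto-optimal profit vectors of a bi-objective 0-1 knapsack.
    items: list of (weight, profit1, profit2). States are (weight, p1, p2) tuples."""
    states = {(0, 0, 0)}
    for w, p1, p2 in items:
        extended = set(states)
        for cw, c1, c2 in states:
            if cw + w <= capacity:              # item fits: branch on taking it
                extended.add((cw + w, c1 + p1, c2 + p2))
        states = prune(extended)                # keep only non-dominated states
    profits = {(a, b) for _, a, b in states}
    return {p for p in profits
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in profits)}
```

Pruning is safe here because a dominated state (heavier, with no better profits) can never lead to a Pareto-optimal completion; a ZDD replaces these explicit sets with a compact shared representation.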
Derviş AYGÖR Shafqat Ur REHMAN Fatih Vehbi ÇELEBİ
This paper is primarily concerned with the performance of Medium Access Control (MAC) layer plans for Wireless Sensor Networks (WSNs) in the context of buffer management solutions. We propose a novel buffer management solution that improves the general performance of MAC layer plans, in particular those crafted for WSNs. An analytical model is introduced to evaluate the cost of different buffer management solutions. The proposed solution, Single Queue Multi Priority (SQMP), is compared with the well-known Single Queue Single Priority (SQSP) and Multi Queue Multi Priority (MQMP) buffer management solutions. All solutions are investigated in terms of throughput performance, buffer utilization, and prioritization capabilities. Despite the relatively good performance of the different buffer management solutions in uncongested networks, the characteristic features of WSNs cause performance degradation. Under bursty conditions, SQMP controls and manages this degradation more effectively than the other two solutions. Simulations based on OMNeT++ and Castalia confirm the performance improvements of our buffer management solution.
Xiang ZHAO Zishu HE Yikai WANG Yuan JIANG
This letter addresses the problem of space-time adaptive processing (STAP) for airborne nonuniform linear array (NLA) radar using a generalized sidelobe canceller (GSC). Because it is difficult to determine the spatial nulls for NLAs, a valid blocking matrix (BM) of the GSC cannot be obtained directly. To solve this problem and improve STAP performance, a BM modification method based on the modified Gram-Schmidt orthogonalization algorithm is proposed. The modified GSC processor achieves optimal STAP performance as well as a faster convergence rate than the orthogonal subspace projection method. Numerical simulations validate the effectiveness of the proposed method.
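The building block named in the abstract, modified Gram-Schmidt orthogonalization, can be sketched in a few lines. This is only the core orthogonalization, not the GSC processor: it is real-valued for simplicity (radar snapshots would be complex-valued, with conjugated inner products), assumes linearly independent columns, and the function name is our own.

```python
import numpy as np

def modified_gram_schmidt(A):
    """Orthonormalize the columns of A with the modified Gram-Schmidt algorithm,
    which is numerically more stable than the classical variant because each
    projection is subtracted from the already partially orthogonalized vector."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    for k in range(n):
        v = A[:, k].copy()
        for j in range(k):
            v -= (Q[:, j] @ v) * Q[:, j]   # remove component along earlier basis vector
        Q[:, k] = v / np.linalg.norm(v)    # assumes columns are linearly independent
    return Q
```

In a GSC, such an orthogonalized set spanning the signal-blocking subspace plays the role of the blocking matrix.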
Kazuyuki MORIOKA Satoshi YAMAZAKI David ASANO
We consider space-time block coded continuous phase modulation (STBC-CPM), which has the advantages of both STBC and CPM. A weak point of STBC-CPM is that the normalized spectral efficiency (NSE) is limited by the orthogonality of the STBC and the CPM parameters. The purpose of this study is to improve the NSE of STBC-CPM. The NSE depends on the transmission rate (TR), the bit error rate (BER), and the occupied bandwidth (OBW). First, to improve the TR, we adopt quasi-orthogonal STBC (QO-STBC) for four transmit antennas and the quasi-group orthogonal Toeplitz code (Q-GOTC) for eight transmit antennas, at the expense of orthogonality. Second, to evaluate the BER, we derive a BER approximation of STBC-CPM with non-orthogonal STBC (NO-STBC). The theoretical analysis and simulation results show that the NSE can be improved by using QO-STBC and Q-GOTC. Third, since the OBW depends on the CPM parameters, the tradeoff between the NSE and the CPM parameters is considered. A computer simulation provides a candidate set of CPM parameters with better NSE. Finally, the adaptation of non-orthogonal STBC to STBC-CPM can be viewed as a generalization of the study by Silvester et al., because orthogonal STBC can be thought of as a special case of non-orthogonal STBC. Similarly, the adaptation of Q-GOTC to CPM can be viewed as a generalization of our previous letter, because a linear modulation scheme can be thought of as a special case of non-linear modulation.
Peerasak INTARAPAIBOON Thanaruk THEERAMUNKONG
Multi-slot information extraction, also known as frame extraction, is a task that identifies several related entities simultaneously. Most research on this task is concerned with applying IE patterns (rules) to extract related entities from unstructured documents. An important obstacle to success in this task is not knowing where the text portions containing the information of interest are. This problem is more complicated for languages with sentence boundary ambiguity, e.g., the Thai language. Applying IE rules to all reasonable text portions can mitigate this obstacle, but it raises another problem: incorrect (unwanted) extractions. This paper presents a method for removing these incorrect extractions. In the method, extractions are represented as intuitionistic fuzzy sets (IFSs), and a similarity measure for IFSs is used to calculate the distance between the IFS of an unclassified extraction and that of each already-classified extraction. The concept of the k nearest neighbors is adopted to decide whether the unclassified extraction is correct or not. In experiments on various domains, the proposed technique improves extraction precision while satisfactorily preserving recall.
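As an illustration of the classification step, here is a minimal sketch combining an IFS similarity measure with k-nearest-neighbor voting. The paper's actual similarity measure and features are not given in the abstract; the Hamming-style measure below and all names are our own assumptions.

```python
def ifs_similarity(A, B):
    """Similarity between two intuitionistic fuzzy sets, each given as a list of
    (membership, non-membership) pairs over the same universe; this uses a
    simple normalized-Hamming-distance-based measure."""
    n = len(A)
    dist = sum(abs(ma - mb) + abs(na - nb)
               for (ma, na), (mb, nb) in zip(A, B)) / (2 * n)
    return 1.0 - dist

def knn_classify(x, labeled, k=3):
    """Decide whether an unclassified extraction x is correct by majority vote
    among the k most similar already-classified extractions.
    labeled: list of (ifs, is_correct) pairs."""
    neighbors = sorted(labeled, key=lambda item: ifs_similarity(x, item[0]),
                       reverse=True)[:k]
    votes = sum(1 if is_correct else -1 for _, is_correct in neighbors)
    return votes > 0
```

An extraction whose membership/non-membership profile sits close to known correct extractions is kept; one closer to known incorrect extractions is removed.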
Ryo IWAKI Hiroki YOKOYAMA Minoru ASADA
The step size is a parameter of fundamental importance in learning algorithms, particularly for natural policy gradient (NPG) methods. We derive an upper bound for the step size in incremental NPG estimation and propose an adaptive step size that implements the derived upper bound. The proposed adaptive step size guarantees that an updated parameter does not overshoot the target, which is achieved by weighting the learning samples according to their relative importance. We also provide tight upper and lower bounds for the step size, though they are not suitable for incremental learning. We confirm the usefulness of the proposed step size on classical benchmarks. To the best of our knowledge, this is the first adaptive step size method for NPG estimation.
Keisuke NONAKA Houari SABIRIN Jun CHEN Hiroshi SANKOH Sei NAITO
A free-viewpoint application has been developed that yields an immersive user experience. One simple free-viewpoint approach, the “billboard method,” is suitable for displaying a synthesized 3D view on a mobile device, but it suffers from the limitation that a billboard can be placed at only one position in the world. This gives users an unacceptable impression when the object being shot is situated at multiple points. To solve this problem, we propose an optimal deformation of the billboard. The deformation is designed as a mapping of grid points in the input billboard silhouette that produces an optimal silhouette from an accurate voxel model of the object. We formulate and solve this procedure as a nonlinear optimization problem based on grid-point constraints and some a priori information. Our results show that the proposed method generates a synthesized virtual image with a natural appearance and a better objective score in terms of silhouette and structural similarity.
Takahiro FUJITA Kohei HATANO Shuji KIJIMA Eiji TAKIMOTO
We consider combinatorial online prediction problems and propose a new method of constructing efficient algorithms for them. One previous approach to such problems is to apply an online prediction method in which two external procedures, the projection and the metarounding, are assumed to be implemented. In this work, we generalize the projection to multiple projections. As an application of our framework, we show an algorithm for an online job scheduling problem on a single machine with precedence constraints.
Kazuhiro KURITA Kunihiro WASA Takeaki UNO Hiroki ARIMURA
In this study, we address the problem of induced matching enumeration. An edge set M is an induced matching of a graph G=(V,E) if no two edges in M share an endpoint and no edge of G connects the endpoints of two distinct edges in M. The enumeration of matchings has been widely studied in the literature; however, there are few studies on induced matchings. A straightforward algorithm takes O(Δ2) time per solution, which comes from the time needed to generate a subproblem, where Δ is the maximum degree of the input graph. To generate a subproblem, the algorithm picks an edge e and generates two graphs: one obtained by removing e from G, and the other obtained by removing e, the edges adjacent to e, and the edges adjacent to those edges. Since this operation needs O(Δ2) time, the straightforward algorithm enumerates all induced matchings in O(Δ2) time per solution. We investigated local structures that enable us to generate subproblems within a short time and proved that the time complexity becomes O(1) per solution if the input graph is C4-free. A graph is C4-free if it contains no cycle of length four as a subgraph.
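To make the enumerated objects concrete, here is a brute-force sketch that tests the induced matching property and enumerates all induced matchings of a small graph. It runs in exponential time over all edge subsets; it is an illustration of the definition only, not the authors' efficient algorithm, and all names are our own.

```python
from itertools import combinations

def is_induced_matching(G, M):
    """M is an induced matching of G if its edges are pairwise vertex-disjoint
    and the subgraph of G induced by M's endpoints contains exactly M's edges.
    G: set of frozenset({u, v}) edges; M: iterable of such edges."""
    M = list(M)
    verts = [v for e in M for v in e]
    if len(verts) != len(set(verts)):           # edges of M must not share endpoints
        return False
    endpoints = set(verts)
    induced = {e for e in G if e <= endpoints}  # edges of G inside the endpoint set
    return induced == set(M)                    # no extra edge may join two M-edges

def enumerate_induced_matchings(G):
    """Yield every induced matching of G by checking all edge subsets."""
    edges = list(G)
    for r in range(len(edges) + 1):
        for M in combinations(edges, r):
            if is_induced_matching(G, M):
                yield set(M)
```

On the path 1-2-3-4, for example, {12, 34} is a matching but not an induced matching, because edge 23 joins the endpoints of its two edges; the path has only four induced matchings (the empty set and each single edge).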
Jingjing SI Jing XIANG Yinbo CHENG Kai LIU
Generalized approximate message passing (GAMP) can be applied to compressive phase retrieval (CPR) with excellent phase-transition behavior. In this paper, we introduce the cartoon-texture model into denoising-based phase retrieval GAMP (D-prGAMP) and propose a cartoon-texture model based D-prGAMP (C-T D-prGAMP) algorithm. Then, based on experiments and analyses of how the performance of D-prGAMP algorithms varies with iterations, we propose a 2-stage D-prGAMP algorithm, which makes a tradeoff between the C-T D-prGAMP algorithm and general D-prGAMP algorithms. Finally, to address the non-convergence issues of D-prGAMP, we incorporate adaptive damping into 2-stage D-prGAMP and propose the adaptively damped 2-stage D-prGAMP (2-stage ADD-prGAMP) algorithm. Simulation results show that the runtime of 2-stage D-prGAMP is roughly equivalent to that of BM3D-prGAMP, but 2-stage D-prGAMP achieves higher image reconstruction quality than BM3D-prGAMP. 2-stage ADD-prGAMP requires more reconstruction time than 2-stage D-prGAMP and BM3D-prGAMP, but it achieves PSNRs 0.2∼3dB higher than those of 2-stage D-prGAMP and 0.3∼3.1dB higher than those of BM3D-prGAMP.