Masaya TAMURA Yasumasa NAKA Kousuke MURAI
This paper presents the design of a capacitive coupler for underwater wireless power transfer (U-WPT), focusing on the kQ product. Power transfer efficiency hinges on the coupling coefficient k between the couplers and the Q-factor of water, calculated from its complex permittivity. High efficiency can be achieved by handling k and the Q-factor effectively. First, the pivotal parameters affecting k are derived from the equivalent circuit of the coupler. Next, the frequency characteristic of the Q-factor in tap water is calculated from measured results. Then, the design parameters that maximize the kQ product are determined. Finally, it is demonstrated that the efficiency of U-WPT with a capacitive coupler designed by our method reaches approximately 80%.
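The efficiency figures above can be illustrated with the standard two-port WPT relations (a generic sketch with hypothetical numbers, not the authors' coupler design): the Q-factor of a lossy medium is the inverse loss tangent ε′/ε″, and the maximum achievable transfer efficiency depends only on the kQ product.

```python
import math

def medium_q(eps_r: float, eps_i: float) -> float:
    """Q-factor of a lossy dielectric: Q = eps' / eps'' (inverse loss tangent)."""
    return eps_r / eps_i

def max_efficiency(kq: float) -> float:
    """Textbook maximum power-transfer efficiency for a figure of merit kQ."""
    return kq**2 / (1.0 + math.sqrt(1.0 + kq**2))**2

# Illustrative values only: a coupler with k = 0.5 in a medium with Q = 18.
kq = 0.5 * 18
eff = max_efficiency(kq)  # ≈ 0.80
```

With these hypothetical values, kQ = 9 gives an efficiency ceiling of about 80%, consistent with the efficiency reported in the abstract.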
Yusuke YAGI Keita TAKAHASHI Toshiaki FUJII Toshiki SONODA Hajime NAGAHARA
A light field, which is often understood as a set of dense multi-view images, has been utilized in various 2D/3D applications. Efficient light field acquisition using a coded aperture camera is the target problem considered in this paper. Specifically, the entire light field, which consists of many images, should be reconstructed from only a few images captured through different aperture patterns. In previous work, this problem has often been discussed in the context of compressed sensing (CS), where sparse representations on a pre-trained dictionary or basis are explored to reconstruct the light field. In contrast, we formulate this problem from the perspective of principal component analysis (PCA) and non-negative matrix factorization (NMF), where only a small number of basis vectors are selected in advance based on an analysis of the training dataset. From this formulation, we derive optimal non-negative aperture patterns and a straightforward reconstruction algorithm. Even though our method is based on conventional techniques, it proves to be more accurate and much faster than a state-of-the-art CS-based method.
Phuc V. TRINH Thanh V. PHAM Anh T. PHAM
Both spatial diversity and multihop relaying are considered effective methods for mitigating the impact of atmospheric turbulence-induced fading on the performance of free-space optical (FSO) systems. Multihop relaying can significantly reduce the impact of fading by relaying the information over a number of shorter hops; however, it is not feasible or economical to deploy relays in many practical scenarios. Spatial diversity can substantially reduce the fading variance by introducing additional degrees of freedom in the spatial domain, but its benefit diminishes when the fading sub-channels are correlated. In this paper, our aim is to study the fundamental performance limits of spatial diversity over correlated Gamma-Gamma (G-G) fading channels in multihop coherent FSO systems. For the performance analysis, we propose to approximate the sum of correlated G-G random variables (RVs) as a single G-G RV, an approximation that is then verified by the Kolmogorov-Smirnov (KS) goodness-of-fit statistical test. Performance metrics, including the outage probability and the ergodic capacity, are newly derived in closed form and thoroughly investigated. Monte Carlo simulations are also performed to validate the analytical results.
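The KS verification step can be sketched as follows. For simplicity, this toy version sums independent (rather than correlated) G-G variates and picks the approximating G-G by a coarse grid search over shape parameters, which is not the paper's analytical fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def gg_samples(alpha, beta, n):
    """Unit-mean Gamma-Gamma variates: the product of two independent
    unit-mean Gamma variates with shape parameters alpha and beta."""
    return rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (maximum ECDF gap)."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

# Normalized sum of M independent G-G variates (independence is a
# simplification of the correlated case treated in the paper).
M, n = 4, 20000
s = sum(gg_samples(4.0, 2.0, n) for _ in range(M)) / M

# Coarse grid search for a single G-G that best fits the sum.
best = min(
    ((a, b, ks_two_sample(s, gg_samples(a, b, n)))
     for a in (4.0, 8.0, 16.0, 32.0) for b in (2.0, 4.0, 8.0, 16.0)),
    key=lambda t: t[2],
)
```

A small KS statistic for the best candidate indicates that a single G-G RV is a reasonable stand-in for the distribution of the sum.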
We discuss Nash equilibria in combinatorial auctions with item bidding. Specifically, we give a characterization of the existence of a Nash equilibrium in a combinatorial auction with item bidding when the valuations of the n bidders are symmetric and subadditive. Based on this characterization, we obtain an algorithm for deciding whether a Nash equilibrium exists in such a combinatorial auction.
In recent years, deep-learning-based approaches have substantially improved the performance of face recognition. Most existing deep learning techniques work well but neglect the effective utilization of face correlation information. The resulting performance loss is noteworthy for personal appearance variations caused by factors such as illumination, pose, occlusion, and misalignment. We believe that face correlation information should be introduced to address this performance problem, which originates from intra-personal variations. Recently, graph deep learning approaches have emerged for representing structured graph data, and a graph is a powerful tool for representing the complex information of a face image. In this paper, we survey recent research on the graph structure of convolutional neural networks and devise a definition of the graph structure underlying compressed sensing and deep learning. The paper focuses on two properties of our graph: sparsity and depth. Sparsity is advantageous because sparse features are more likely to be linearly separable and more robust; depth means that the graph supports a multi-resolution, multi-channel learning process. We argue that a sparse-graph-based deep neural network can more effectively make similar objects attract each other and different objects repel each other, analogous to sparse multi-resolution clustering. Based on this concept, we propose a sparse graph representation built on face correlation information, embedded via sparse reconstruction and deep learning within an irregular domain. The resulting classification is remarkably robust. The proposed method achieves high recognition rates of 99.61% (94.67%) on the benchmark LFW (YTF) facial evaluation databases.
Yitong LIU Wang TIAN Yuchen LI Hongwen YANG
High Efficiency Video Coding (HEVC) achieves better coding efficiency than H.264/AVC. However, this enhancement comes at the cost of increased computational complexity, mainly introduced by the quadtree-based coding tree unit (CTU) structure. In this paper, an early termination algorithm based on an AdaBoost classifier for coding units (CUs) is proposed to accelerate the search for the best CTU partition. Experimental results indicate that our method saves 39% of the computational complexity on average at the cost of increasing the Bjontegaard delta rate (BD-rate) by 0.18.
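A minimal stand-in for the classifier component is sketched below: a hand-rolled AdaBoost over decision stumps deciding "split further" versus "terminate early" on synthetic CU features. The features, labels, and hyperparameters are hypothetical; the paper's actual feature set and training setup are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CU features (stand-ins for, e.g., residual variance or gradient
# energy) and labels: +1 = "split further", -1 = "terminate early".
X = rng.normal(size=(300, 4))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)

def train_adaboost(X, y, rounds=20):
    """AdaBoost with threshold decision stumps as weak learners."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):
            for thr in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
                for sign in (1, -1):
                    pred = np.where(X[:, f] > thr, sign, -sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign, pred)
        err, f, thr, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak-learner weight
        w *= np.exp(-alpha * y * pred)          # reweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, f, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(X[:, f] > t, s, -s) for a, f, t, s in stumps)
    return np.where(score > 0, 1, -1)

stumps = train_adaboost(X, y)
acc = (predict(stumps, X) == y).mean()
```

In an encoder, such a classifier would be queried per CU: a confident "terminate early" decision skips the remaining quadtree partition search.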
Yi LIU Qingkun MENG Xingtong LIU Jian WANG Lei ZHANG Chaojing TANG
Electronic payment protocols provide secure services for electronic commerce transactions and protect private information from malicious entities in a network. Formal methods have been introduced to verify the security of electronic payment protocols; however, these methods concentrate on the accountability and fairness of the protocols without considering the impact of timeliness. To remedy this deficiency, we present a formal method to analyze the security properties of electronic payment protocols, namely accountability, fairness, and timeliness. We add a concise time expression to an existing logical reasoning method to represent event times, and we extend the logical inference rules with time characteristics. We then analyze the Netbill protocol with our formal method and find that the fairness of the protocol is not satisfied because of a timeliness problem. The results illustrate that our formal method can analyze the key properties of electronic payment protocols, and it can also be used to verify the time properties of other security protocols.
Di YAO Xin ZHANG Qiang YANG Weibo DENG
In small-aperture high-frequency surface wave radar, the main-lobe clutter appears as severely space-spread clutter owing to the small array aperture. This compromises the detection of moving vessels, especially when a target is submerged in the clutter. To tackle this issue, an improved spread-clutter estimated canceller, which combines the spread-clutter estimated canceller with an adaptive strategy for selecting the optimal training samples and a rotating spatial beam method, is presented to suppress main-lobe clutter in both the angle and range domains. Experimental results on real data show that the proposed algorithm achieves far superior clutter suppression performance.
Ting WU Yong FENG JiaXing SANG BaoHua QIANG YaNan WANG
Recommender systems (RS) exploit user ratings on items and side information to make personalized recommendations. In order to recommend the right products to users, an RS must accurately model the implicit preferences of each user and the properties of each product. In reality, both user preferences and item properties change dynamically over time, so treating the historical decisions of a user or the received comments on an item as static is inappropriate. Besides, the review text accompanying a rating score can help us understand why a user likes or dislikes an item, so temporal dynamics and the text information in reviews are important side information for recommender systems. Moreover, compared with the large number of available items, the number of items a user can buy is very limited, which is known as the sparsity problem; utilizing item correlation provides a promising solution to this problem. Although well-known methods such as TimeSVD++, TopicMF, and CoFactor partially take temporal dynamics, reviews, and correlation into consideration, none of them combines all of this information for accurate recommendation. Therefore, in this paper we propose a novel combined model called TmRevCo, which is based on matrix factorization. Our model combines the dynamic user factors of TimeSVD++ with the hidden topics of each review text mined by the topic model of TopicMF through a new transformation function. Meanwhile, to suit our five-point rating datasets, we use a more appropriate item correlation measure in CoFactor and associate the item factors of CoFactor with those of matrix factorization. Our model thus combines temporal dynamics, review information, and item correlation simultaneously. Experimental results on three real-world datasets show that our proposed model leads to significant improvements over the baseline methods.
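The matrix factorization backbone that such combined models build on can be sketched in a few lines (synthetic ratings and hypothetical hyperparameters; the TimeSVD++/TopicMF/CoFactor components described above are not shown).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings from a hypothetical low-rank model, rounded to a 1-5 scale.
n_users, n_items, k_true = 20, 50, 3
tP = rng.normal(scale=0.6, size=(n_users, k_true))
tQ = rng.normal(scale=0.6, size=(n_items, k_true))
ratings = [(u, i, float(np.clip(np.rint(3.0 + tP[u] @ tQ[i]), 1, 5)))
           for u in range(n_users) for i in rng.choice(n_items, 5, replace=False)]

def train_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02, epochs=300):
    """Plain matrix factorization trained with SGD: r_ui ≈ mu + p_u · q_i."""
    mu = np.mean([r for _, _, r in ratings])   # global rating mean
    P = 0.1 * rng.normal(size=(n_users, k))
    Q = 0.1 * rng.normal(size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - (mu + P[u] @ Q[i])         # prediction error
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * P[u] - reg * Q[i])
    return mu, P, Q

mu, P, Q = train_mf(ratings, n_users, n_items)
rmse = np.sqrt(np.mean([(r - (mu + P[u] @ Q[i])) ** 2 for u, i, r in ratings]))
```

Time-aware and review-aware extensions replace the static factors p_u and q_i with time-dependent or topic-coupled versions, which is the direction the combined model takes.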
Koichi ISHIDA Yoshiaki TANIGUCHI Nobukazu IGUCHI
We have proposed a fish farm monitoring system for achieving efficient fish farming. In our system, sensor nodes are attached to fish to monitor their health status. In this letter, we propose a method for gathering sensor data from sensor nodes to sink nodes when the transmission range of a sensor node is shorter than the size of the fish cage. In our proposed method, some sensor nodes become leader nodes and forward the gathered sensor data to the sink nodes. Through simulation evaluations, we show that the data gathering performance of the proposed method is higher than that of traditional methods.
Hieu Ngoc QUANG Hiroshi SHIRAI
In this study, the electromagnetic scattering from conducting bodies has been investigated via the surface equivalence theorem. When one formulates equivalent electric and magnetic currents from the geometrical optics (GO) reflected field on the illuminated surface and the GO incident field on the shadowed surface, the asymptotically derived radiation fields are found to be the same as those formulated from the physical optics (PO) approximation.
Chunxiao FAN Xiaopeng HONG Lei TIAN Yue MING Matti PIETIKÄINEN Guoying ZHAO
PCANet, a notable shallow network, employs a histogram representation for feature pooling. However, there are three main problems with this pooling method. First, histogram-based pooling binarizes the feature maps, leading to inevitable loss of discriminative information. Second, it is difficult to effectively combine other visual cues into a compact representation, because simple concatenation of various visual cues leads to an inefficient feature representation. Third, the dimensionality of the histogram-based output grows exponentially with the number of feature maps used. To overcome these problems, we propose a novel shallow network model named PCANet-II, which combines second-order statistical pooling with the shallow PCANet architecture. Compared with the histogram-based output, second-order pooling not only provides more discriminative information by preserving both the magnitude and sign of the convolutional responses, but also dramatically reduces the size of the output features. Moreover, other discriminative and robust cues are easy to combine via second-order pooling, so we introduce a binary feature difference encoding scheme into PCANet-II to further improve robustness. Experiments demonstrate the effectiveness and robustness of our proposed PCANet-II method.
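The core idea of second-order pooling, and why its output stays compact, can be sketched in a few lines (a generic covariance-pooling illustration on synthetic maps, not the exact PCANet-II encoding):

```python
import numpy as np

def second_order_pool(feature_maps, eps=1e-6):
    """Second-order (covariance) pooling of C feature maps of size H x W.

    Returns the upper triangle of the C x C covariance matrix, so the output
    size grows as C(C+1)/2, versus 2**C bins for histogram encoding over
    binarized maps.
    """
    C, H, W = feature_maps.shape
    X = feature_maps.reshape(C, H * W)
    X = X - X.mean(axis=1, keepdims=True)        # center each response map
    cov = (X @ X.T) / (H * W - 1) + eps * np.eye(C)
    iu = np.triu_indices(C)
    return cov[iu]

rng = np.random.default_rng(0)
maps = rng.normal(size=(8, 16, 16))   # 8 convolutional response maps
desc = second_order_pool(maps)        # descriptor of length 8*9/2 = 36
```

The covariance keeps both the magnitude and the sign structure of the responses, which is what the binarized histogram discards.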
Wei LI Yi WU Chunlin SHEN Huajun GONG
We present a system to improve the robustness of real-time 3D surface reconstruction by utilizing a non-inertial localization sensor. Benefiting from this sensor, our easy-to-build system can effectively avoid tracking drift and loss compared with conventional dense tracking and mapping systems. To best fuse the sensor data, we first perform hand-eye calibration and a performance analysis for our setup, and then propose a novel optimization framework based on an adaptive criterion function to improve robustness as well as accuracy. We apply our system to several challenging reconstruction tasks, which show significant improvements in scanning robustness and reconstruction quality.
Ken-ichiro MORIDOMI Kohei HATANO Eiji TAKIMOTO
We prove generalization error bounds for classes of low-rank matrices with norm constraints for collaborative filtering tasks. Our bounds are tighter than known bounds that use only the rank or related quantities, because we take the additional L1 and L∞ constraints into account. We also show that our bounds on the Rademacher complexity of these classes are optimal.
Lei ZHANG Qingfu FAN Guoxing ZHANG Zhizheng LIANG
Existing trajectory prediction methods suffer from data sparsity and neglect time awareness, which leads to low accuracy. To address this problem, we propose a fast time-aware sparse-trajectory prediction method with tensor factorization (TSTP-TF). First, we synthesize trajectories based on trajectory entropy and add the synthesized trajectories to the original trajectory space. This alleviates the sparsity of the trajectory data and makes the new trajectory space more reliable. Then, we introduce multidimensional tensor modeling into the Markov model to add a time dimension. Tensor factorization is adopted to infer the missing region transition probabilities, further mitigating data sparsity. Because of the scale of the tensor, we design a divide-and-conquer tensor factorization model to reduce memory consumption and speed up the decomposition. Experiments with a real dataset show that TSTP-TF improves prediction accuracy by as much as 9% and 2% compared with the baseline algorithm and the ESTP-MF algorithm, respectively.
Shogo SEKI Tomoki TODA Kazuya TAKEDA
This paper proposes a semi-supervised source separation method for stereophonic music signals containing multiple recorded or processed signals, focusing on synthesized stereophonic music. As synthesized music signals are often generated as linear combinations of many individual source signals with their respective mixing gains, the phase or inter-channel phase difference information that represents the spatial characteristics of the recording environment cannot be utilized as an acoustic clue for source separation. Non-negative Tensor Factorization (NTF) is an effective technique for resolving this problem by decomposing the amplitude spectrograms of the stereo-channel music signal into basis vectors and activations of the individual music source signals, along with their corresponding mixing gains. However, it is difficult to achieve sufficient separation performance with this method alone, as the acoustic clues available for separation are limited. To address this issue, this paper proposes a Cepstral Distance Regularization (CDR) method for NTF-based stereo-channel separation, which makes the cepstrum of the separated source signals follow Gaussian Mixture Models (GMMs) of the corresponding music source signals. These GMMs are trained in advance using available samples. Experimental evaluations separating three and four sound sources are conducted to investigate the effectiveness of the proposed method in both supervised and semi-supervised separation frameworks, and its performance is compared with that of a conventional NTF method. Experimental results demonstrate that the proposed method yields significant improvements in both separation frameworks, and that cepstral distance regularization provides better separation parameters.
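The NTF decomposition described above can be sketched with plain multiplicative updates (a generic Frobenius-cost version on toy data; the paper's CDR regularization term is not included). The model is V[c,f,t] ≈ Σ_k G[c,k] W[f,k] H[k,t], where G holds per-channel mixing gains, W spectral bases, and H activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stereo amplitude spectrograms V[c, f, t] built from a known rank-4 model.
C, F, T, K = 2, 32, 40, 4
Gt, Wt, Ht = rng.random((C, K)), rng.random((F, K)), rng.random((K, T))
V = np.einsum('ck,fk,kt->cft', Gt, Wt, Ht)

def ntf(V, K, iters=300, eps=1e-9):
    """Multiplicative-update NTF for the Frobenius cost."""
    C, F, T = V.shape
    G = rng.random((C, K)) + eps
    W = rng.random((F, K)) + eps
    H = rng.random((K, T)) + eps
    for _ in range(iters):
        Vh = np.einsum('ck,fk,kt->cft', G, W, H)
        G *= np.einsum('cft,fk,kt->ck', V, W, H) / (np.einsum('cft,fk,kt->ck', Vh, W, H) + eps)
        Vh = np.einsum('ck,fk,kt->cft', G, W, H)
        W *= np.einsum('cft,ck,kt->fk', V, G, H) / (np.einsum('cft,ck,kt->fk', Vh, G, H) + eps)
        Vh = np.einsum('ck,fk,kt->cft', G, W, H)
        H *= np.einsum('cft,ck,fk->kt', V, G, W) / (np.einsum('cft,ck,fk->kt', Vh, G, W) + eps)
    return G, W, H

G, W, H = ntf(V, K)
rel_err = np.linalg.norm(V - np.einsum('ck,fk,kt->cft', G, W, H)) / np.linalg.norm(V)
```

Each factor update multiplies by a ratio of non-negative quantities, so all factors stay non-negative throughout, which is what makes the amplitude-spectrogram interpretation valid.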
Shohei KAMAMURA Aki FUKUDA Hiroki MORI Rie HAYASHI Yoshihiko UEMATSU
Focusing on the recent swing toward the centralized approach of the software-defined network (SDN), this paper presents a novel network architecture for refactoring the current distributed Internet Protocol (IP) network, not only by utilizing the SDN itself but also by implementing its cooperation with the optical transport layer. The first IP refactoring is for flexible network topology reconfiguration: the global routing and explicit routing functions are transferred from the distributed routers to the centralized SDN. The second IP refactoring is for cost-efficient maintenance migration: we introduce a resource-portable IP router that can act as a shared backup router by cooperating with optical transport path switching. Extensive evaluations show that our architecture makes the current IP network easier to configure and more scalable. We also validate the feasibility of our proposal.
Koichiro ADACHI Takanori SUZUKI Shigehisa TANAKA
A lens-integrated surface-emitting DFB laser (LISEL) and its application to low-cost single-mode optical sub-assemblies (OSAs) are discussed. Using the LISEL, highly efficient optical coupling with a reduced number of optical components and non-hermetic packaging are demonstrated. Designing the integrated lens of the LISEL makes it possible to achieve passive-alignment optical coupling to a single-mode fiber (SMF) without an additional lens. For silicon photonics (SiP) coupling, the light-emission angle from the LISEL can be controlled by the mirror angle and by displacing the lens. A low coupling loss of 3.9 dB between the LISEL and a grating coupler on the SiP platform was demonstrated. The LISEL with a facet-free structure, integrating a DBR mirror, a PD, and a window structure on its end facet, showed the same lasing performance as a conventional laser with an AR facet coating. A storage test (a 200-hour saturated pressure-cooker test (PCT) at 138°C and 85% RH) showed that the lasing characteristics did not degrade under high humidity, demonstrating the potential for non-hermetic packaging. Our results indicate that the LISEL is a promising light source for cost-effective OSAs.
Ruicong ZHI Ghada ZAMZMI Dmitry GOLDGOF Terri ASHMEADE Tingting LI Yu SUN
The accurate assessment of infants' pain is important for understanding their medical conditions and developing suitable treatments. Pediatric studies have reported that inadequate treatment of infants' pain can cause various neuroanatomical and psychological problems. The fact that infants cannot communicate verbally motivates increasing interest in developing automatic pain assessment systems that provide continuous and accurate pain assessment. In this paper, we propose a new set of pain facial activity features to describe infants' facial expressions of pain. Both dynamic facial texture features and dynamic geometric features are extracted from video sequences and used to classify the facial expressions of infants as pain or no pain. For the dynamic analysis of facial expression, we construct a spatiotemporal-domain representation for the texture features and a time-series representation (i.e., a time series of frame-level features) for the geometric features. Multiple facial features are combined through both feature fusion and decision fusion schemes to evaluate their effectiveness for infants' pain assessment. Experiments are conducted on videos acquired from NICU infants, and the best accuracy of the proposed pain assessment approaches is 95.6%. Moreover, we find that although decision fusion does not outperform feature fusion in accuracy, its false negative rate (6.2%) is much lower than that of feature fusion (25%).
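The difference between the two fusion schemes can be illustrated with a toy nearest-centroid classifier on synthetic two-modality data (hypothetical features, not the paper's texture/geometric descriptors): feature fusion concatenates the modalities before classification, while decision fusion classifies each modality separately and combines the scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical feature modalities per video clip ("texture", "geometry")
# and binary labels (1 = pain, 0 = no pain); purely synthetic stand-in data.
n = 200
labels = rng.integers(0, 2, n)
texture = rng.normal(size=(n, 6)) + labels[:, None] * 1.5
geometry = rng.normal(size=(n, 4)) + labels[:, None] * 1.5

def centroid_scores(X, y):
    """Score = distance to the 'no pain' centroid minus distance to 'pain'."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return np.linalg.norm(X - c0, axis=1) - np.linalg.norm(X - c1, axis=1)

# Feature fusion: concatenate the modalities, then classify once.
feat_pred = (centroid_scores(np.hstack([texture, geometry]), labels) > 0).astype(int)

# Decision fusion: classify each modality, then average the decision scores.
avg_score = 0.5 * (centroid_scores(texture, labels) + centroid_scores(geometry, labels))
dec_pred = (avg_score > 0).astype(int)
```

Decision fusion also allows asymmetric combination rules (e.g., flagging pain if either modality is confident), which is one way a fusion scheme can trade accuracy for a lower false negative rate.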
Panita MEANANEATRA Songsakdi RONGVIRIYAPANISH Taweesup APIWATTANAPONG
An important step in improving software analyzability is applying refactorings during the maintenance phase to remove bad smells, especially the long method bad smell, which occurs most frequently and is a root cause of other bad smells. However, no previous research has proposed an approach that repeats refactoring identification, suggestion, and application until all long method bad smells have been removed completely without reducing software analyzability. This paper proposes an effective approach to identifying refactoring opportunities and suggesting an effective refactoring set for the complete removal of the long method bad smell without reducing code analyzability. This approach, called the long method remover (LMR), uses refactoring enabling conditions based on program analysis and code metrics to identify four refactoring techniques, and it uses a technique embedded in JDeodorant to identify extract-method opportunities. To suggest an effective refactoring set, LMR uses two criteria: the code analyzability level and the number of statements impacted by the refactorings. LMR also uses side-effect analysis to ensure behavior preservation. To evaluate LMR, we applied it to the core package of a real-world Java application. Our evaluation criteria were 1) preservation of code functionality, 2) the removal rate of long method characteristics, and 3) the improvement in analyzability. The results showed that the methods to which the suggested refactoring sets were applied completely removed the long method bad smell, preserved behavior, and did not decrease analyzability. We conclude that LMR meets its objectives in almost all classes, and we discuss the issues found during the evaluation as lessons learned.