Akio MATOBA Narutoshi HORIMOTO Toshimichi SAITO
This letter studies a digital return map, a mapping from a set of lattice points to itself. Such a digital map can exhibit a variety of periodic orbits. As a typical example, we present the digital logistic map, constructed from the logistic map. Two fundamental results are shown. When the logistic map has a unique periodic orbit, the digital map can have multiple periodic orbits. When the logistic map has an unstable period-3 orbit that causes chaos, the digital map can have a stable period-3 orbit with various domains of attraction.
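Because a digital map acts on a finite lattice, every trajectory must eventually fall onto a cycle, so all of its periodic orbits can be enumerated exhaustively. The sketch below illustrates this (the lattice size N = 64 and parameter μ = 3.7 are illustrative choices, not the letter's settings):

```python
def logistic(x, mu=3.7):
    # The continuous logistic map f(x) = mu * x * (1 - x)
    return mu * x * (1.0 - x)

def digital_logistic(i, n_points=64, mu=3.7):
    # Quantize the logistic map onto the lattice {0, 1, ..., n_points}
    return round(n_points * logistic(i / n_points, mu))

def find_periodic_orbits(n_points=64, mu=3.7):
    """Enumerate every periodic orbit of the finite digital map by
    iterating from each lattice point until a state repeats."""
    seen = set()
    orbits = []
    for start in range(n_points + 1):
        trail = []
        i = start
        while i not in trail:
            trail.append(i)
            i = digital_logistic(i, n_points, mu)
        cycle = trail[trail.index(i):]   # the tail of the trajectory is the cycle
        key = frozenset(cycle)
        if key not in seen:
            seen.add(key)
            orbits.append(cycle)
    return orbits
```

Running `find_periodic_orbits()` for different lattice sizes shows how the orbit structure of the digital map can differ from that of the underlying continuous map.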
Nhat-Phuong TRAN Myungho LEE Sugwon HONG Seung-Jae LEE
Data encryption and decryption are common operations in network-based application programs that must offer security. To keep pace with the high data input rate of network-based applications such as multimedia data streaming, real-time processing of data encryption/decryption is crucial. In this paper, we propose a new parallelization approach to improve the throughput of the de facto standard data encryption and decryption algorithm, AES-CTR (Counter mode of AES). The new approach extends the size of the block encrypted at one time across unit block boundaries, effectively encrypting multiple unit blocks at the same time. This reduces the associated parallelization overheads, such as the number of procedure calls, scheduling, and synchronization, compared with previous approaches, and thus leads to significant throughput improvements on a computing platform with a general-purpose multi-core processor and a Graphics Processing Unit (GPU).
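The reason CTR mode parallelizes so well is that each keystream block depends only on its counter value, so a worker can encrypt a chunk spanning many consecutive unit blocks in a single call. The following sketch illustrates that structure only; it substitutes a SHA-256-based keyed function for the real AES block cipher (the Python standard library has no AES), so it is not the paper's implementation:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16  # AES block size in bytes

def keystream_block(key: bytes, counter: int) -> bytes:
    """Stand-in for AES(key, nonce||counter): CTR mode only needs a
    keyed pseudorandom function of the counter value."""
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()[:BLOCK]

def ctr_encrypt_chunk(key: bytes, data: bytes, first_counter: int) -> bytes:
    """Encrypt a chunk spanning several unit blocks in one call.
    This is the 'extended block' idea: one procedure call handles many
    unit blocks, cutting per-call parallelization overhead."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        ks = keystream_block(key, first_counter + i // BLOCK)
        out.extend(b ^ k for b, k in zip(data[i:i + BLOCK], ks))
    return bytes(out)

def ctr_encrypt_parallel(key: bytes, data: bytes, workers=4, blocks_per_chunk=64):
    # Split the input into multi-block chunks; each chunk's starting
    # counter is its block offset, so workers are fully independent.
    chunk = BLOCK * blocks_per_chunk
    jobs = [(data[off:off + chunk], off // BLOCK)
            for off in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda j: ctr_encrypt_chunk(key, j[0], j[1]), jobs)
    return b"".join(parts)
```

As in real CTR mode, decryption is the same operation as encryption, and the parallel result is bit-identical to a serial pass because the counters line up across chunk boundaries.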
Tsutomu FUJII Takafumi SAWAUMI Atsushi AIKAWA
This study investigated the test-retest reliability and the criterion-related validity of the Implicit Association Test (IAT [1]) developed for measuring shyness among Japanese people. The IAT has been used to measure implicit stereotypes as well as self-concepts, such as implicit shyness and implicit self-esteem. We administered the shyness IAT and the self-esteem IAT to participants (N = 59) on two occasions over a one-week interval (Time 1 and Time 2) and examined test-retest reliability by correlating the shyness IAT scores between the two time points. We also assessed criterion-related validity by calculating the correlation between implicit shyness and implicit self-esteem. The results indicated a sufficiently high positive correlation between the implicit shyness scores over the one-week interval (r = .67, p < .01). Moreover, a strong negative correlation was found between implicit shyness and implicit self-esteem (r = -.72, p < .01). These results confirmed the test-retest reliability and the criterion-related validity of the Japanese version of the shyness IAT, indicating its validity for assessing implicit shyness.
Yoshihiko SUSUKI Ryoya KAZAOKA Takashi HIKIHARA
This paper proposes a physical architecture for an electric power system with multiple homes. Here, a home is a unit of a small-scale power system that includes a local energy source, energy storage, a load, power conversion circuits, and control systems. The entire power system consists of multiple homes that are interconnected via a distribution network and connected to the commercial power grid. The interconnection is achieved autonomously with a recently developed technology of grid-connected inverters. A mathematical model of the slow dynamics of the power system is also developed in this paper. The developed model enables the evaluation of steady-state and transient characteristics of such power systems.
Tsuyoshi SAWAGASHIRA Tatsuro HAYASHI Takeshi HARA Akitoshi KATSUMATA Chisako MURAMATSU Xiangrong ZHOU Yukihiro IIDA Kiyoji KATAGI Hiroshi FUJITA
The purpose of this study is to develop an automated scheme for detecting carotid artery calcification (CAC) on dental panoramic radiographs (DPRs). CAC is one of the indices for predicting the risk of arteriosclerosis. First, regions of interest (ROIs) that include the carotid arteries are determined on the basis of inflection points of the mandibular contour. Initial CAC candidates are then detected by using a grayscale top-hat filter and a simple grayscale thresholding technique. Finally, a rule-based approach and a support vector machine are applied to reduce the number of false positive (FP) findings, using features such as area, location, and circularity. One hundred DPRs were used to evaluate the proposed scheme. The sensitivity for the detection of CACs was 90% with 4.3 FPs (80% with 1.9 FPs) per image. The experiments show that our computer-aided detection scheme may be useful for detecting CACs.
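A white top-hat filter subtracts the morphological opening (erosion followed by dilation) from the image, leaving only bright structures narrower than the structuring element, which is why small calcifications stand out from smooth anatomy. A minimal 1-D analogue of this candidate-detection step (the window width and threshold below are arbitrary illustrative values, not the paper's parameters):

```python
def erode(sig, w=3):
    # Grayscale erosion: sliding-window minimum
    r = w // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, w=3):
    # Grayscale dilation: sliding-window maximum
    r = w // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def tophat(sig, w=3):
    """White top-hat: signal minus its morphological opening.
    Smooth, wide structures cancel out; narrow bright ones remain."""
    opened = dilate(erode(sig, w), w)
    return [s - o for s, o in zip(sig, opened)]

def candidates(sig, w=5, thresh=10):
    """Initial candidates = top-hat response above a grayscale threshold."""
    return [i for i, v in enumerate(tophat(sig, w)) if v > thresh]
```

On a smooth intensity ramp with one narrow bright spike, only the spike survives the top-hat and thresholding, mirroring how calcification candidates are isolated from the slowly varying background.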
Many High-Dynamic-Range (HDR) rendering techniques have been developed. Of these, the image color appearance model, iCAM, is a typical HDR image rendering algorithm. HDR rendering methods normally require a tone compression process and involve many color space transformations, from the RGB signal of an input image to the RGB signal of the output device, for the realistic depiction of a captured image. iCAM06, a refined iCAM, also contains a tone compression step and several color space conversions for HDR image reproduction. However, the tone compression and frequent color space changes in iCAM06 cause color distortion, such as a hue shift and saturation reduction in the output image. To solve these problems, this paper proposes a separate color correction method that has no effect on the output luminance values, controlling only the saturation and hue color attributes. The color saturation of the output image is compensated for using a compensation gain, and the hue shift is corrected using a rotation matrix. The separate color correction method reduces the color changes present in iCAM06. The compensation gain and rotation matrix for the color correction were formulated based on the relationship between the input and output tristimulus values through the tone compression. The experimental results show that iCAM06 revised with the proposed method outperforms the default iCAM06.
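The gain-plus-rotation idea can be seen in an opponent chroma plane, where hue is the angle and saturation the radius: a hue error is undone by a 2-D rotation matrix and a saturation loss by a radial gain, while lightness is untouched. This is only a geometric sketch of that decomposition, not the paper's formulation (which derives the gain and rotation from the tone-compression relationship):

```python
import math

def correct_color(a, b, hue_shift_deg, sat_gain):
    """Undo a hue shift and saturation loss in an opponent chroma plane.

    (a, b) are opponent-color coordinates (e.g. CIELAB a*, b*).
    Rotating by -hue_shift_deg cancels the hue error; multiplying the
    radius by sat_gain restores the lost saturation. Luminance is not
    involved, matching the idea of a correction that leaves the
    tone-compressed lightness unchanged.
    """
    t = math.radians(-hue_shift_deg)        # rotate back by the shift
    a2 = math.cos(t) * a - math.sin(t) * b  # 2-D rotation matrix row 1
    b2 = math.sin(t) * a + math.cos(t) * b  # 2-D rotation matrix row 2
    return sat_gain * a2, sat_gain * b2
```

Applying the correction with the measured shift and the reciprocal of the saturation loss composes to the identity, i.e., the distorted chroma returns exactly to its original position.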
Zezhong LI Hideto IKEDA Junichi FUKUMOTO
In most phrase-based statistical machine translation (SMT) systems, the translation model relies on word alignment, which serves as a constraint for the subsequent building of a phrase table. Word alignment is usually inferred by GIZA++, which implements all the IBM models and the HMM model in the Expectation Maximization (EM) framework. In this paper, we present a fully Bayesian inference for word alignment. Unlike the EM approach, Bayesian inference makes use of all possible parameter values rather than estimating a single parameter value, from which we expect a more robust inference. After inferring the word alignment, current SMT systems usually train the phrase table from the Viterbi word alignment, which is prone to learning incorrect phrases due to word alignment mistakes. To overcome this drawback, a new phrase extraction method is proposed based on multiple Gibbs samples from the Bayesian inference for word alignment. Empirical results show promising improvements over baselines in alignment quality as well as translation performance.
Takuya TOJO Hiroyuki KITADA Kimihide MATSUMOTO
Estimating the packet loss ratio of TCP transfers is essential for passively measuring the Quality of Service (QoS) of Internet traffic. However, only a few studies have been conducted on this issue. The Benko-Veres algorithm is one technique for estimating the packet loss ratio of two networks separated by a measurement point. However, this study shows that it leads to an estimation error of a few hundred percent in environments where the packet loss probabilities of the two networks are asymmetric. We propose a passive method for packet loss estimation that offers improved accuracy by introducing classification conditions for the TCP retransmission timeout. An experiment shows that our proposed algorithm suppresses the maximum estimation error to less than 15%.
Xin WANG Filippos BALASIS Sugang XU Yoshiaki TANAKA
It is believed that wavelength switched optical network (WSON) technology is moving towards adoption by large-scale networks. Wavelength conversion and signal regeneration through reamplifying, reshaping, and retiming (3R) are beneficial for supporting the expansion of WSON. In many cases, these two functions can be technically integrated into a single shared physical component, namely the wavelength convertible 3R regenerator (WC3R). However, fully deploying such devices is infeasible due to their excessive cost. This motivates the investigation of the sparse placement of WC3Rs presented in this paper. A series of strategies are proposed based on knowledge of the network. Moreover, a novel adaptive routing and joint resource assignment algorithm is presented to provision lightpaths in a WSON with sparsely placed WC3Rs. Extensive simulation trials are conducted under even and uneven distributions of WC3R resources. Each strategic feature is examined for its efficiency in lowering the blocking probability. The results reveal that a carefully designed sparse placement of WC3Rs can achieve performance comparable to that of the full WC3R placement scenario. Furthermore, the expenditure of WC3R deployment also depends on the type of WC3R used, characterized by its wavelength convertibility, i.e., fixed or tunable WC3R. This paper also investigates WSON from a cost-benefit perspective by employing different types of WC3Rs in order to find the possibility of more efficient WC3R investment.
Khamphao SISAAT Hiroaki KIKUCHI Shunji MATSUO Masato TERADA Masashi FUJIWARA Surin KITTITORNKUN
A botnet attacks victim hosts via multiple Command and Control (C&C) servers, which are controlled by a botmaster. This makes it more difficult to detect botnet attacks and harder to trace the source country of the botmaster due to the lack of logged data about the attacks. To locate the C&C servers during the malware/bot downloading phase, we analyzed the source IP addresses of downloads to more than 90 independent honeypots in Japan in the CCC (Cyber Clean Center) 2010 dataset, comprising over 1 million data records and almost 1 thousand malware names. Based on GeoIP services, a Time Zone Correlation model is proposed to determine the correlation coefficient between bot downloads from Japan and those from other source countries. We found a strong correlation between active malware/bot downloads and the time zone of the C&C servers. As a result, our model confirms that malware/bot downloads are synchronized with the time zone (country) of the corresponding C&C servers, so that the botmaster can possibly be traced.
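The core of a time-zone correlation is simple: re-index each country's hourly download histogram from UTC to its local clock, then compute a Pearson correlation; activity driven by the same diurnal rhythm correlates strongly once viewed in local time. A minimal sketch of that computation (the histograms and offsets below are synthetic illustrations, not the CCC dataset):

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def shift_to_local(hourly_utc, utc_offset):
    """Re-index a 24-bin UTC histogram onto a country's local clock."""
    return [hourly_utc[(h - utc_offset) % 24] for h in range(24)]
```

If two countries' downloads follow the same local-time pattern, their UTC histograms look shifted and correlate weakly, but after `shift_to_local` with each country's UTC offset the correlation approaches 1, which is the signal used to associate downloads with the C&C servers' time zone.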
Takeshi USUI Kiyohide NAKAUCHI Yozo SHOJI Yoshinori KITATSUJI Hidetoshi YOKOTA Nozomu NISHINAGA
This paper proposes a session state migration architecture for flexible server consolidation. One of the technical challenges is how to split a session state from a connection and bind it to another connection on an arbitrary server. Conventional server and client applications assume that a session state is statically bound to a connection once the connection has been established. By splitting the session state from the connection, the proposed architecture reduces migration latency compared to an existing study. This paper classifies common procedures of session state migration for various services. The session state migration architecture enables service providers to conduct server maintenance at their own convenience and to conserve energy at servers by consolidating them. A simulation of server consolidation reveals that session state migration reduces the number of servers needed to accommodate users, compared to virtual machine migration. This paper also presents an implementation of the session state migration architecture. Experimental results reveal that the impact of the proposed architecture on real-time applications is small.
Yufei LIN Xuejun YANG Xinhai XU Xiaowei GUO
Scaling up the system size has been the common approach to achieving high performance in parallel computing. However, designing and implementing a large-scale parallel system can be very costly in terms of money and time. When building a target system, it is desirable to initially build a smaller version using processing nodes with the same architecture as those in the target system. This allows us to achieve efficient and scalable prediction by using the smaller system to predict the performance of the target system. Such scalability prediction is critical because it enables system designers to evaluate different design alternatives so that a given performance goal can be achieved. As the de facto standard for writing parallel applications, MPI is widely used in large-scale parallel computing. By categorizing the discrete event simulation methods for MPI programs and analyzing the characteristics of scalability prediction, we propose a novel simulation method, called virtual-actual combined execution-driven (VACED) simulation, to achieve scalable prediction for MPI programs. The basic idea is to predict the execution time of an MPI program on a target machine by running it on a smaller system: the communication time is predicted by virtual simulation, while the sequential computation time is obtained by actual execution. We introduce a model for the VACED simulation as well as the design and implementation of VACED-SIM, a lightweight simulator based on fine-grained activity and event definitions. We have validated our approach on a sub-system of Tianhe-1A. Our experimental results show that VACED-SIM exhibits higher accuracy and efficiency than MPI-SIM. In particular, for a target system with 1024 cores, the relative errors of VACED-SIM are less than 10% and the slowdowns are close to 1.
Haeng-Gon LEE Jungsuk SONG Sang-Soo CHOI Gi-Hwan CHO
In order to cope with the continuous evolution of cyber threats, many security products (e.g., IDS/IPS, TMS, firewalls) are deployed in the networks of organizations, but it is not easy to monitor and analyze the security events triggered by these products constantly and effectively. Thus, in many cases, real-time incident analysis and response activities for each organization are assigned to an external dedicated security center. However, since the external security center deploys its security appliances only at the boundary or at a single point of the network, it is very difficult to understand the entire network situation and respond to security incidents rapidly and accurately when depending on only a single type of security information. In addition, security appliances trigger an unmanageable number of alerts; by some estimates, several thousand alerts are raised every day, and about 99% of them are false positives. This makes it difficult for analysts to investigate all of them and to identify which alerts are serious and which are not. In this paper, we therefore propose an advanced incident response methodology to overcome the limitations of the existing incident response scheme. The main idea of our methodology is to utilize polymorphic security events, which can be easily obtained from the security appliances deployed in each organization, and to subject them to correlation analysis. We evaluate the proposed methodology using diverse types of real security information, and the results show the effectiveness and superiority of the proposed incident response methodology.
Wittawat JITKRITTUM Hirotaka HACHIYA Masashi SUGIYAMA
Feature selection is a technique to screen out less important features. Many existing supervised feature selection algorithms use redundancy and relevancy as the main criteria to select features. However, feature interaction, potentially a key characteristic in real-world problems, has not received much attention. As an attempt to take feature interaction into account, we propose
Yuzo TAENAKA Kazuya TSUKAMOTO Shigeru KASHIHARA Suguru YAMAGUCHI Yuji OIE
In order to prevent the degradation of TCP performance while moving across two WLANs, we present an implementation design of an inter-domain TCP handover method based on cross-layer cooperation and multi-homing. The proposed handover manager (HM) in the transport layer uses two TCP connections previously established via the two WLANs (multi-homing) and switches the communication path between the two connections according to a handover trigger and a comparison of the new and old APs. The handover trigger and the comparison are based on assessing the wireless link quality using frame-retry information obtained from the MAC layer (cross-layer). In a previous study, we proposed a preliminary concept for this method and evaluated its functional effectiveness through simulations. In the present study, we design an implementation for a real system and then examine its effective performance in a real environment, because a real system has several system constraints and suffers from fluctuations in the actual wireless environment. Indeed, depending on the cross-layer design, an implementation often degrades system performance even if the method exhibits good functional performance. Moreover, the simple assessments of wireless link quality in the previous study led to unnecessary handovers and inappropriate AP selection in a real environment. Therefore, we herein propose a new architecture that performs cross-layer collaboration between the MAC layer and the transport layer while avoiding degradation of system performance. In addition, we use a new scheme for assessing wireless link quality, i.e., double thresholds on frame retries and a comparison of frame retry ratios, in order to prevent handover oscillation caused by fluctuations in the wireless environment. The experimental results demonstrate that the prototype system works well by controlling two TCP connections based on assessments of wireless link quality, thereby achieving efficient inter-domain TCP handover in a real WLAN environment.
Eunji PAK Sang-Hoon KIM Jaehyuk HUH Seungryoul MAENG
Although shared caches allow the dynamic allocation of limited cache capacity among cores, traditional LRU replacement policies often cannot prevent negative interference among cores. To address the contention problem in shared caches, cache partitioning and application scheduling techniques have been studied extensively. Partitioning explicitly allocates cache capacity to each core to maximize the overall throughput. On the other hand, application scheduling by the operating system groups the least interfering applications onto each shared cache when multiple shared caches exist in a system. Although application scheduling can mitigate the contention problem without any extra hardware support, its effect can be limited under severe contention. This paper proposes a low-cost solution based on application scheduling with simple cache insertion control. Instead of using a full hardware-based cache partitioning mechanism, the proposed technique mostly relies on application scheduling. It selectively uses LRU insertion into the shared caches, which can be added with negligible hardware changes to current commercial processor designs. For the completeness of the cache interference evaluation, this paper examines all possible mixes from a set of applications, instead of just a few selected mixes. The evaluation shows that the proposed technique can mitigate the cache contention problem effectively, performing close to ideal scheduling and partitioning.
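The intuition behind LRU insertion is that a newly fetched line is placed at the LRU position instead of the MRU position, so lines that are never reused evict one another instead of flushing reusable lines. A small, hypothetical fully associative cache model (the paper targets real set-associative hardware; this is only a behavioral sketch of the insertion policy) makes the effect visible:

```python
from collections import OrderedDict

class Cache:
    """Fully associative cache with a selectable insertion position.

    Standard LRU inserts a missing line at the MRU end; with
    lru_insert=True, new lines enter at the LRU end instead, so they
    must earn a hit before being promoted (the LRU-insertion idea).
    """
    def __init__(self, capacity, lru_insert=False):
        self.capacity = capacity
        self.lru_insert = lru_insert
        self.lines = OrderedDict()   # first item = LRU position
        self.hits = self.misses = 0

    def access(self, addr):
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)              # promote on hit
            return True
        self.misses += 1
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)            # evict the LRU line
        self.lines[addr] = None                       # insert at MRU end...
        if self.lru_insert:
            self.lines.move_to_end(addr, last=False)  # ...or at the LRU end
        return False
```

On a cyclic access pattern slightly larger than the cache (ten lines, eight-line cache), standard LRU thrashes and never hits, while LRU insertion retains most of the working set and hits on it every cycle.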
Ittetsu TANIGUCHI Kazutoshi SAKAKIBARA Shinya KATO Masahiro FUKUI
The large-scale introduction of renewable energy sources such as photovoltaic and wind power is a major motivation for renovating conventional grid systems. To be independent from existing power grids and to use renewable energy as much as possible, a decentralized energy network is proposed as a new grid system. The decentralized energy network connects houses with each other, and each house has a PV panel and a battery. The contribution of this paper is an exploration of network topologies and battery sizes for the decentralized energy network, in order to make effective use of renewable energy. The proposed exploration method is inspired by the design methodology of VLSI systems, especially design space exploration in system-level design. It is based on mixed integer programming (MIP)-based power flow optimization and was evaluated over all design instances. Experimental results show that the decentralized energy network has the following features: 1) the energy loss and the energy purchased due to power shortages were affected little by individual battery sizes but strongly by the sum of all battery sizes in the network, and 2) the network topology did not largely affect the energy loss or the purchased energy. These results provide a useful guide for designing an optimal decentralized energy network for each region.
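A design space exploration of this kind sweeps candidate battery sizes and records the resulting energy loss and purchased energy for each. The sketch below replaces the paper's MIP power flow optimization with a simple greedy hourly balance for a single house, so it only illustrates the exploration loop and its metrics (the PV and demand profiles are invented):

```python
def simulate(pv, demand, battery_size, eff=0.9):
    """Hour-by-hour energy balance of one house with a PV panel and battery.

    Surplus PV charges the battery (with conversion loss eff); deficits
    are served from the battery first and purchased from the grid
    otherwise. Returns (energy lost, energy purchased) over the horizon.
    """
    soc = 0.0                 # battery state of charge
    lost = bought = 0.0
    for p, d in zip(pv, demand):
        balance = p - d
        if balance >= 0:
            charge = min(balance * eff, battery_size - soc)
            soc += charge
            lost += balance - charge   # conversion loss plus spilled surplus
        else:
            deficit = -balance
            draw = min(deficit, soc)
            soc -= draw
            bought += deficit - draw   # shortfall purchased from the grid
    return lost, bought

def explore(pv, demand, sizes):
    """Design space exploration: evaluate every candidate battery size."""
    return {s: simulate(pv, demand, s) for s in sizes}
```

Sweeping the size axis shows the expected qualitative trend: with no battery, every nighttime deficit is purchased, and purchased energy decreases monotonically as capacity grows.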
Yuan CAO Wei XU Hideo NAKAMURA
This paper investigates a preprocessing technique for a multiuser MIMO downlink system. An efficient joint precoder design with adaptive power allocation is proposed by adopting the channel-diagonalization technique and the minimum mean square error (MMSE) criterion. By exploiting an MMSE-based decoder, we propose an iterative algorithm to design the precoder, with further derived closed-form solutions for implementing the adaptive power allocation. Simulation results verify the effectiveness of our approach and show that, compared with conventional benchmark schemes, our proposal matches their performance with reduced computational complexity.
Zheng-qiang WANG Ling-ge JIANG Chen HE
This letter investigates price-based power control for cognitive radio networks (CRNs) with interference cancellation. The base station (BS) of the primary users (PUs) admits secondary users (SUs) by pricing their interference power under the interference power constraint (IPC). We derive the optimal price for the BS to maximize its revenue and the optimal interference cancellation order to minimize the total transmit power of the SUs. Simulation results show the effectiveness of the proposed pricing scheme.
Yinqiang ZHENG Shigeki SUGIMOTO Masatoshi OKUTOMI
We propose an accurate and scalable solution to the perspective-n-point problem, referred to as ASPnP. Our main idea is to estimate the orientation and position parameters by directly minimizing a properly defined algebraic error. By using a novel quaternion representation of the rotation, our solution is immune to any parametrization degeneracy. To obtain the global optimum, we use the Gröbner basis technique to solve the polynomial system derived from the first-order optimality condition. The main advantages of our proposed solution lie in its accuracy and scalability. Extensive experimental results, with both synthetic and real data, demonstrate that our proposed solution is more accurate than state-of-the-art noniterative solutions. More importantly, by exploiting vectorization operations, the computational cost of our ASPnP solution is almost constant, independent of the number of point correspondences n over the wide range from 4 to 1000. In our experimental settings, the ASPnP solution takes about 4 milliseconds, making it well suited for real-time applications with a drastically varying number of 3D-to-2D point correspondences.