Haruo HAYASHI Munenari INOGUCHI
ICT has advanced rapidly in recent years and is likely to contribute to effective disaster response. However, ICT is not utilized effectively in disaster response because the environment for ICT management has not been adequately considered. In this paper, we extract lessons learned from actual responses to past disasters in Japan and present them in terms of a disaster response process model based on human psychological behavior. In addition, we discuss the significance of a Common Operational Picture with spatial information, drawing on an advanced case study in the United States of America, and identify two essential issues for effective information and technology management. One is information status, i.e., whether information is static or dynamic. The other is the five elements of ICT management in disaster response: Governance, Standard Operating Procedures, Technology, Training and Exercise, and Use.
Shinnosuke YOSHIDA Youhua SHI Masao YANAGISAWA Nozomu TOGAWA
As process technologies advance, timing-error correction techniques have become increasingly important. A suspicious timing-error prediction (STEP) technique has recently been proposed, which predicts timing errors by monitoring the middle points, or check points, of several speed-paths in a circuit. However, if we insert STEP circuits (STEPCs) at the middle points of all the paths from primary inputs to primary outputs, we need many STEPCs and thus incur too much area overhead. How these check points are determined is therefore very important. In this paper, we propose an effective STEPC insertion algorithm that minimizes area overhead. Our algorithm moves the STEPC insertion positions so as to minimize the number of inserted STEPCs. We apply a max-flow/min-cut approach to determine the optimal positions of the inserted STEPCs, reducing the required number of STEPCs to 1/10-1/80 and their area to 1/5-1/8 compared with a naive algorithm. Furthermore, our algorithm realizes 1.12X-1.5X overclocking compared with just inserting STEPCs into several speed-paths.
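The core placement idea, that moving monitors to a shared cut point lets one STEPC cover many input-to-output paths, can be sketched with a standard max-flow/min-cut computation. The graph, node names, and node-splitting trick below are illustrative assumptions, not the paper's exact formulation:

```python
from collections import defaultdict, deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max flow on `cap` (dict of dicts of capacities);
    returns the set of edges crossing the minimum s-t cut."""
    flow = defaultdict(lambda: defaultdict(int))

    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    while True:
        parent = bfs()
        if parent is None:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug
    # source side of the cut = nodes still reachable in the residual graph
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in reach and cap[u][v] - flow[u][v] > 0:
                reach.add(v)
                q.append(v)
    return {(u, v) for u in reach for v in cap[u]
            if v not in reach and cap[u][v] > 0}

def add_edge(cap, u, v, c):
    cap[u][v] = c
    cap[v].setdefault(u, 0)   # residual back-edge

# Toy circuit DAG: inputs a, b feed gate m, which fans out to x, y.
# Splitting m into m_in -> m_out with capacity 1 lets the min cut pick
# a single monitoring point that covers every input-to-output path.
cap = defaultdict(dict)
for u, v, c in [("S", "a", 9), ("S", "b", 9), ("a", "m_in", 9),
                ("b", "m_in", 9), ("m_in", "m_out", 1),
                ("m_out", "x", 9), ("m_out", "y", 9),
                ("x", "T", 9), ("y", "T", 9)]:
    add_edge(cap, u, v, c)

cut = max_flow_min_cut(cap, "S", "T")   # one STEPC position suffices
```

Naively, every path would get its own monitor (four STEPCs here); the min cut collapses them to the one shared point.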
Yoshio SHIMOMURA Hiroki YAMAMOTO Hayato USUI Ryotaro KOBAYASHI Hajime SHIMADA
Modern processors use a Branch Target Buffer (BTB)[1] to relax control dependence. Unfortunately, the energy consumption of the BTB is high. To fetch instructions effectively, a branch prediction must be performed at the fetch stage regardless of whether the fetched instruction is a branch or a nonbranch. The number of accesses to the BTB is therefore large, and its energy consumption is high. However, accesses to the BTB by nonbranch instructions waste energy. In this paper, we focus on these accesses, which we call useless accesses from the viewpoint of power. To reduce energy consumption without performance loss, we present a method that reduces useless accesses by using information that indicates whether a fetched instruction is a branch or not. To realize this approach, we propose a branch bit called the B-Bit. A B-Bit is associated with an instruction, indicates whether it is a branch, and is available at the beginning of the fetch stage. If a B-Bit is “1”, signifying a branch, the BTB is accessed; if it is “0”, signifying a nonbranch, the BTB is not accessed. The experimental results show that the total energy consumption can be reduced by 54.3% without performance loss.
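As a sketch of the gating idea (the instruction stream and counts below are invented for illustration, not taken from the paper's experiments):

```python
def btb_accesses(stream, use_bbit):
    """Count BTB lookups over a fetch stream of (pc, is_branch) pairs.
    Without B-Bits every fetch probes the BTB; with B-Bits only
    instructions whose bit is 1 (branches) do."""
    accesses = 0
    for pc, is_branch in stream:
        b_bit = is_branch          # bit assumed available at fetch
        if not use_bbit or b_bit:
            accesses += 1
    return accesses

stream = [(0x100, False), (0x104, False), (0x108, True),
          (0x10c, False), (0x110, True), (0x114, False)]
assert btb_accesses(stream, use_bbit=False) == 6   # baseline: every fetch
assert btb_accesses(stream, use_bbit=True) == 2    # only the two branches
```

Since nonbranches dominate typical instruction mixes, gating most lookups out is what drives the reported energy savings.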
Leigang HUO Xiangchu FENG Chunlei HUO Chunhong PAN
With traditional single-layer dictionary learning methods, it is difficult to reveal the complex structures hidden in hyperspectral images. Motivated by deep learning techniques, a deep dictionary learning approach is proposed for hyperspectral image denoising, which consists of hierarchical dictionary learning, feature denoising, and fine-tuning. Hierarchical dictionary learning helps uncover the hidden factors in the spectral dimension, and fine-tuning helps preserve the spectral structure. Experiments demonstrate the effectiveness of the proposed approach.
For the electric demand prediction problem, a modification mechanism for predicted demand data was proposed in our previous work. In this paper, we analyze the performance of this modification mechanism in power balancing control. We then analytically derive an upper bound on the performance, which is characterized by system parameters and prediction precision.
Yuping SU Ying LI Guanghui SONG
Information-theoretic limits of a multi-way relay channel with direct links (MWRC-DL), where multiple users exchange their messages through a relay terminal and direct links, are discussed in this paper. Under the assumption that a restricted encoder is employed at each user, an outer bound on the capacity region is derived first. Then, a decode-and-forward (DF) strategy is proposed and the corresponding rate region is characterized. The explicit outer bound and the achievable rate region for the Gaussian MWRC-DL are also derived. Numerical examples are provided to demonstrate the performance of the proposed DF strategy.
Tatsuya OTOSHI Yuichi OHSITA Masayuki MURATA Yousuke TAKAHASHI Noriaki KAMIYAMA Keisuke ISHIBASHI Kohei SHIOMOTO Tomoaki HASHIMOTO
In recent years, the time variation of Internet traffic has increased due to the growth of streaming and cloud services. Backbone networks must accommodate such traffic without congestion. Traffic engineering with traffic prediction is one approach to stably accommodating time-varying traffic. In this approach, routes are calculated from predicted traffic to avoid congestion, but predictions may include errors that cause congestion. We propose prediction-based traffic engineering that is robust against prediction errors. To achieve robust control, our method uses model predictive control, a process control method based on prediction of system dynamics. Routes are calculated so that future congestion is avoided without sudden route changes. We apply the calculated routes for the next time slot and observe the traffic. Using the newly observed traffic, we again predict traffic and re-calculate the routes. Repeating these steps mitigates the impact of prediction errors, because traffic predictions are corrected in each time slot. Through simulations using backbone network traffic traces, we demonstrate that our method can avoid congestion that the other methods cannot.
An LIU Maoyin CHEN Donghua ZHOU
Robust crater recognition is a research focus in deep space exploration missions, and sparse representation methods can achieve the desired robustness and accuracy. To handle the degradation and noise caused by complex topography and varied illumination in planetary images, a robust crater recognition approach is proposed based on dictionary learning with a low-rank error correction model in a sparse representation framework. In this approach, all the training images are learned as a compact and discriminative dictionary, and a low-rank error correction term is introduced into the dictionary learning to deal with gross error and corruption. Experimental results on crater images show that the proposed method achieves competitive performance in both recognition accuracy and efficiency.
Cesar CARRIZO Kentaro KOBAYASHI Hiraku OKADA Masaaki KATAYAMA
This manuscript presents a simple scheme to improve the performance of a feedback control system that uses power line channels for its feedback loop. The noise and attenuation of power lines, and thus the signal-to-noise ratio, are known to be cyclostationary. Such cyclic features in the channel allow us to predict virtually error-free transmission instants as well as instants with a high probability of errors. This paper introduces and evaluates the effectiveness of a packet transmission scheduling that collaborates with a predictive control scheme adapted to this cyclostationary environment. In other words, we explore cooperation between the physical and application layers of the system in order to achieve an overall optimization. To assess the control quality of the system, we evaluate its stability as well as its ability to follow control commands accurately. We compare a scheme of increased packet rate against our proposed scheme, which emulates a high packet rate with the use of predictive control. Through this comparison, we verify the effectiveness of the proposed scheme in improving the control quality of the system, even under low signal-to-noise ratio conditions in the cyclostationary channel.
Yuling LIU Xinxin QU Guojiang XIN Peng LIU
A novel ROI-based reversible data hiding scheme is proposed for medical images, which can hide an electronic patient record (EPR) and protect the region of interest (ROI) with tamper localization and recovery. The proposed scheme combines prediction-error expansion with a sorting technique to embed the EPR into the ROI, and the recovery information is embedded into the region of non-interest (RONI) using a histogram shifting (HS) method, which rarely causes overflow and underflow problems. The experimental results show that the proposed scheme can not only embed a large amount of information with low distortion, but also localize and recover tampered areas inside the ROI.
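The prediction-error-expansion step can be sketched as follows; this minimal version uses a previous-pixel predictor and omits the sorting and the overflow/underflow handling that the actual scheme relies on:

```python
def pee_embed(pixels, bits):
    """Embed bits by prediction-error expansion (predictor: previous
    pixel). The error e becomes 2*e + b, so the decoder can recover
    both the bit and the original pixel exactly."""
    out, it = [pixels[0]], iter(bits)
    for i in range(1, len(pixels)):
        e = pixels[i] - pixels[i - 1]          # prediction error
        out.append(pixels[i - 1] + 2 * e + next(it, 0))
    return out

def pee_extract(marked):
    bits, rec = [], [marked[0]]
    for i in range(1, len(marked)):
        ep = marked[i] - rec[-1]               # expanded error
        bits.append(ep & 1)
        rec.append(rec[-1] + (ep >> 1))        # undo the expansion
    return bits, rec

pixels, bits = [100, 101, 99, 99], [1, 0, 1]
marked = pee_embed(pixels, bits)
out_bits, recovered = pee_extract(marked)
assert out_bits == bits and recovered == pixels   # fully reversible
```

Because the expansion doubles small prediction errors, distortion stays low exactly where prediction is good, which is why sorting pixels by predictability helps.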
Aram KIM Junhee PARK Byung-Uk LEE
In patch-based super-resolution algorithms, a low-resolution patch is influenced by surrounding patches due to blurring. We propose to remove this boundary effect by subtracting the blur contribution of the surrounding high-resolution patches, which enables more accurate sparse representation. We demonstrate improved performance through experimentation. The proposed algorithm can be applied to most patch-based super-resolution algorithms to achieve additional improvement.
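The subtraction rests on the linearity of blurring: removing the neighbors' blurred contribution leaves exactly the blur of the isolated patch that a dictionary of isolated patch pairs models. A 1-D sketch, where the kernel and patch layout are illustrative and downsampling is omitted for clarity (in practice the neighbor content comes from previously reconstructed high-resolution patches):

```python
import numpy as np

def blur(x, k):
    return np.convolve(x, k, mode="same")

# toy 1-D "image": three adjacent high-resolution patches of length 4
hr = np.arange(12, dtype=float)
k = np.array([0.25, 0.5, 0.25])        # assumed blur kernel
lr = blur(hr, k)                       # observed blurred signal

# the middle patch (indices 4:8) is contaminated by its neighbors
neighbors = hr.copy()
neighbors[4:8] = 0.0
leak = blur(neighbors, k)[4:8]         # blur leaking in from neighbors
cleaned = lr[4:8] - leak               # boundary effect removed

isolated = np.zeros(12)
isolated[4:8] = hr[4:8]
# `cleaned` now equals the blur of the isolated patch, i.e. what the
# sparse-coding dictionary actually represents
assert np.allclose(cleaned, blur(isolated, k)[4:8])
```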
Resource Description Framework (RDF) access control suffers from an authorization conflict problem caused by RDF inference. When an access authorization is specified, it can conflict with other access authorizations of the opposite security sign as a result of RDF inference. In our earlier study, we analyzed the authorization conflict problem caused by subsumption inference, which is the key inference in RDF. The Rule Interchange Format (RIF) is a Web standard rule language recommended by the W3C and can be combined with RDF data. Therefore, as with RDF inference, an authorization conflict can be caused by RIF inference. In addition, this conflict can arise from the interaction of RIF inference and RDF inference rather than from RIF inference alone. In this paper, we analyze the authorization conflict problem caused by RIF inference and propose an efficient authorization conflict detection algorithm. The algorithm exploits the graph labeling-based algorithm proposed in our earlier paper. Through experiments, we show that the performance of the graph labeling-based algorithm is outstanding for large RDF data.
Akihiro SATOH Yutaka NAKAMURA Takeshi IKENAGA
A dictionary attack against SSH is a common security threat. Many methods rely on network traffic to detect SSH dictionary attacks because the connections of remote login, file transfer, and TCP/IP forwarding are visibly distinct from those of attacks. However, these methods mistakenly judge the connections of automated operation tasks to be attacks because of their mutual similarity. In this paper, we propose a new approach that identifies the user authentication methods used on SSH connections and removes connections that employ non-keystroke-based authentication. This approach is based on two observations: (1) an SSH dictionary attack targets a host that provides keystroke-based authentication; and (2) automated tasks over SSH need to be supported by non-keystroke-based authentication. Keystroke-based authentication relies on a character string input by a human; in contrast, non-keystroke-based authentication relies on information other than a character string. We evaluated the effectiveness of our approach through experiments on real network traffic at the edges of four campus networks, and the experimental results showed that our approach provides high identification accuracy with only a few errors.
Mohamed RIHAN Maha ELSABROUTY Osamu MUTA Hiroshi FURUKAWA
This paper presents a downlink interference mitigation framework for two-tier heterogeneous networks that consist of spectrum-sharing macrocells and femtocells. This framework establishes cooperation between the two tiers through two algorithms, namely the restricted waterfilling (RWF) algorithm and the iterative reweighted least squares interference alignment (IRLS-IA) algorithm. The proposed framework models the macrocell-femtocell two-tier cellular system as an overlay cognitive radio system in which the macrocell system plays the role of the primary user (PU) while the femtocell networks play the role of the cognitive secondary users (SUs). Through the RWF algorithm, the macrocell base station (MBS) cooperates with the femtocell base stations (FBSs) by releasing some of its eigenmodes to the FBSs for their transmissions, even if the traffic is heavy and the MBS's signal-to-noise power ratio (SNR) is high. The FBSs are then expected to achieve a near-optimum sum rate by employing the IRLS-IA algorithm to mitigate both the co-tier and cross-tier interference at the femtocell users' (FUs) receivers. Simulation results show that the proposed IRLS-IA approach provides an improved sum rate for the femtocell users compared with conventional IA techniques, such as the leakage minimization approach and the nuclear norm based rank-constrained rank minimization approach. Additionally, the proposed framework involving both the IRLS-IA and RWF algorithms provides an improved total system sum rate compared with the legacy approaches for the case of multiple femtocell networks.
To improve motion control performance, a new friction determination method using the LuGre model is proposed. The model parameters are determined by performing two-step closed-loop experiments using a proportional-integral observer (PIO). The PIO is also used to develop a robust motion controller that deals with additional uncertainties, including the effect of inaccurate estimation of the friction. The experimental results reveal improved performance compared with that of a single-PIO-based controller.
Mengmeng ZHANG Yang ZHANG Huihui BAI
The high efficiency video coding (HEVC) standard has significantly improved compression performance for many applications, including remote desktop and desktop sharing. Screen content video coding is widely used in applications with a high demand for real-time performance. However, HEVC usually introduces high computational complexity, which makes fast algorithms necessary to offset the limited computing power of HEVC encoders. In this study, a statistical analysis of several screen content sequences is first performed to account for the completely different statistics of natural images and videos. Second, a fast coding unit (CU) splitting method is proposed to reduce the computational complexity of HEVC intra coding, especially for screen content. In the proposed scheme, the CU size decision is made by checking the smoothness of the luminance values in every coding tree unit. Experiments demonstrate that under the HEVC range extension standard, the proposed scheme saves an average of 29% of computational complexity with a 0.9% Bjøntegaard Delta rate (BD-rate) increase compared with the HM13.0+RExt6.0 anchor for screen content sequences. For default HEVC, the proposed scheme reduces encoding time by an average of 38% with negligible loss of coding efficiency.
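A minimal sketch of a smoothness-driven CU split decision; the distinct-value test and threshold below are illustrative stand-ins for the paper's criterion:

```python
import numpy as np

def decide_split(luma, max_distinct=4):
    """Toy CU-size decision: screen content is largely piecewise-flat,
    so a block with few distinct luma values is kept as one large CU;
    otherwise it is split into quadrants for further checking."""
    if len(np.unique(luma)) <= max_distinct:
        return [luma]                        # smooth: no split
    h, w = luma.shape
    return [luma[:h // 2, :w // 2], luma[:h // 2, w // 2:],
            luma[h // 2:, :w // 2], luma[h // 2:, w // 2:]]

flat = np.full((8, 8), 200)                  # e.g. a text background
busy = np.arange(64).reshape(8, 8)           # many distinct values
assert len(decide_split(flat)) == 1          # kept as one CU
assert len(decide_split(busy)) == 4          # split into quadrants
```

Pruning the split early for smooth blocks is what removes most of the recursive rate-distortion checks in the encoder.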
Masashi KOUDA Ryuji HIRASE Takeshi YAMAO Shu HOTTA Yuji YOSHIDA
We deposited thin films of thiophene/phenylene co-oligomers (TPCOs) onto poly(tetrafluoroethylene) (PTFE) layers that were friction-transferred onto substrates. The molecules in these films were aligned such that the polarization of their emission and absorbance was larger along the drawing direction than perpendicular to it. Organic field-effect transistors (OFETs) fabricated with these films exhibited large mobilities when the drawing direction of the PTFE was parallel to the channel length direction. The friction-transfer technique thus yields TPCO films with anisotropic optical and electronic properties.
Huy Nhat TRAN Hyungsuk OH Wonha KIM Wook PARK
We present a new method for generating thumbnail images from H.264/AVC coded bit streams. What distinguishes our approach from previous works is that it determines each thumbnail image pixel by summing the residual and estimated block averages. The residual block averages are acquired directly in the transform domain, and the estimated block averages are calculated in the spatial domain. Because the reference pixels are constructed in the spatial domain, the proposed method eliminates the source of mismatch error, so the result does not suffer any degradation. The thumbnail images produced by the proposed method are indistinguishable from those produced by decoding the H.264/AVC intra coded bit streams and then scaling them down. For most images, the proposed method also executes almost 3 times faster than the down-scaling method at frequently used bandwidths.
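The key identity is that a reconstructed block equals its intra prediction plus its residual, so by linearity its average splits into the two block averages the method computes separately, and no full decode is needed. A numerical sketch with synthetic values (in the codec, the residual average comes straight from the DC transform coefficient):

```python
import numpy as np

rng = np.random.default_rng(0)
pred = rng.integers(0, 256, (4, 4)).astype(float)     # intra prediction
residual = rng.integers(-16, 16, (4, 4)).astype(float)
block = pred + residual                               # decoded block

# thumbnail pixel for this 4x4 block: just the sum of two averages
thumb_pixel = pred.mean() + residual.mean()
assert np.isclose(thumb_pixel, block.mean())
```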
Jigisha N PATEL Jerin JOSE Suprava PATNAIK
The concept of sparse representation has been gaining momentum in image processing applications, especially image compression, over the last decade. Sparse coding algorithms represent signals as a sparse linear combination of atoms of an overcomplete dictionary. Earlier work shows that sparse coding of images using learned dictionaries outperforms the JPEG standard for image compression. The conventional method of image compression based on sparse coding, though successful, does not adapt the compression rate to the local block characteristics of the image. Here, we propose a new framework in which the image is classified into three classes by measuring block activity, followed by sparse coding of each class using dictionaries learned specifically for that class. The K-SVD algorithm is used for dictionary learning. The sparse coefficients for each class are Huffman encoded and combined to form a single bit stream. The model imparts rate-distortion attributes to the compression, as there is provision for setting a different constraint for each class depending on its characteristics. We analyse and compare this model with the conventional model. The outcomes are encouraging, and the model paves the way for efficient sparse representation based image compression.
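A minimal sketch of the block-classification front end; the variance measure and thresholds are illustrative, not the paper's exact activity criterion:

```python
import numpy as np

def classify_blocks(blocks, t_low=50.0, t_high=500.0):
    """Assign each image block to one of three activity classes by
    luminance variance (thresholds are illustrative). Each class would
    then be sparse-coded with its own K-SVD-learned dictionary and its
    own sparsity/rate constraint."""
    classes = {"smooth": [], "medium": [], "textured": []}
    for b in blocks:
        v = float(np.var(b))
        key = "smooth" if v < t_low else "medium" if v < t_high else "textured"
        classes[key].append(b)
    return classes

flat = np.full((8, 8), 120.0)                 # variance 0
edge = np.zeros((8, 8))
edge[:, 4:] = 255.0                           # strong vertical edge
c = classify_blocks([flat, edge])
assert len(c["smooth"]) == 1 and len(c["textured"]) == 1
```

Routing each class to its own dictionary is what lets the framework spend fewer bits on smooth regions and more on textured ones.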
Xi CHANG Zhuo ZHANG Yan LEI Jianjun ZHAO
Concurrency bugs significantly affect system reliability. Although many efforts have been made to address this problem, many bugs still go undetected because of the complexity of concurrent programs. Compared with atomicity violations, order violations are often neglected, so efficient and effective approaches to detecting them are urgently needed. This paper presents a bidirectional predictive trace analysis approach, BIPED, which can detect order violations in parallel based on a recorded program execution. BIPED collects an expected-order execution trace into a layered bidirectional prediction model, which represents two types of expected-order data flows in the bottom layer and combines the lock sets and the bidirectional order constraints in the upper layer. BIPED then recognizes two types of candidate violation intervals driven by the bottom-layer model and checks these intervals bidirectionally against the upper-layer constraint model. Consequently, concrete schedules can be generated to expose order violation bugs. Our experimental results show that BIPED can effectively detect real order violation bugs, with analysis speeds 2.3x-10.9x and 1.24x-1.8x those of state-of-the-art predictive dynamic analysis approaches and hybrid-model-based static prediction approaches, respectively.