Takahiko KATO Masaki BANDAI Miki YAMAMOTO
Congestion control is a hot topic in named data networking (NDN). Congestion control methods for NDN fall into two approaches: the rate-based approach and the window-based approach. In the window-based approach, the optimum window size cannot be determined because the round-trip time varies widely. The rate-based approach is therefore considered more suitable for NDN and has been studied actively. However, there is still room for improvement in the window-based approach, because hop-by-hop control has not been explored in it. In this paper, we propose a hop-by-hop window-based congestion control method for NDN (HWCC). The proposed method introduces window-size control for per-hop Interest transmission using hop-by-hop acknowledgments. In addition, we extend HWCC to support multipath forwarding (M-HWCC) in order to increase network resource utilization. The simulation results show that both HWCC and M-HWCC achieve high throughput as well as max-min fairness, while effectively avoiding congestion.
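As a rough illustration of per-hop window control driven by hop-by-hop acknowledgments, the sketch below uses an AIMD-style update; the update rule, the limits, and the variable names are assumptions for illustration and do not reproduce the actual HWCC algorithm.

    # Illustrative per-hop Interest window control (assumed AIMD rule, not the HWCC spec).
    class HopWindow:
        def __init__(self, init_window=1.0, max_window=64.0):
            self.cwnd = init_window      # Interests allowed in flight on this hop
            self.max_window = max_window
            self.in_flight = 0

        def can_send(self):
            # An Interest may be forwarded while in-flight Interests are below the window.
            return self.in_flight < int(self.cwnd)

        def on_send(self):
            self.in_flight += 1

        def on_hop_ack(self):
            # Hop-by-hop acknowledgment received: grow the window additively.
            self.in_flight = max(0, self.in_flight - 1)
            self.cwnd = min(self.max_window, self.cwnd + 1.0 / self.cwnd)

        def on_congestion_signal(self):
            # Negative acknowledgment or timeout: shrink multiplicatively.
            self.cwnd = max(1.0, self.cwnd / 2.0)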
Wei ZHAO Pengpeng YANG Rongrong NI Yao ZHAO Haorui WU
Recently, the image forensics community has paid attention to the design of effective algorithms based on deep learning techniques, and it has been shown that combining the domain knowledge of image forensics with deep learning achieves more robust and better performance than traditional schemes. Rather than improving algorithm performance, this paper considers the safety of deep learning-based methods in the field of image forensics. To the best of our knowledge, this is the first work focusing on this topic. Specifically, we experimentally find that deep learning-based methods fail when slight noise is added to the images (adversarial images). Furthermore, two strategies are proposed to enhance the security of deep learning-based methods. First, a penalty term is added to the loss function, namely the 2-norm of the gradient of the loss with respect to the input images; second, a novel training method is adopted that trains the model on a fusion of normal and adversarial images. Experimental results show that the proposed algorithm achieves good performance even on adversarial images and provides a security consideration for deep learning-based image forensics.
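A minimal PyTorch-style sketch of the two strategies, a gradient-norm penalty on the input images and training on a fusion of normal and adversarial images, is given below; the model interface, the FGSM-style perturbation, and the weighting factor lam are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def loss_with_gradient_penalty(model, images, labels, lam=0.1):
        # Penalize the 2-norm of the gradient of the loss w.r.t. the input images.
        images = images.detach().clone().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad, = torch.autograd.grad(loss, images, create_graph=True)
        penalty = grad.flatten(1).norm(p=2, dim=1).mean()
        return loss + lam * penalty

    def train_step(model, optimizer, images, labels, eps=2.0 / 255):
        # Fuse normal and adversarial images (FGSM-style perturbation assumed here).
        images = images.detach().clone().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad, = torch.autograd.grad(loss, images)
        adv_images = (images + eps * grad.sign()).clamp(0, 1).detach()
        batch = torch.cat([images.detach(), adv_images], dim=0)
        targets = torch.cat([labels, labels], dim=0)
        optimizer.zero_grad()
        total = loss_with_gradient_penalty(model, batch, targets)
        total.backward()
        optimizer.step()
        return total.item()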
Jingjie YAN Guanming LU Xiaodong BAI Haibo LI Ning SUN Ruiyu LIANG
In this letter, we propose a supervised bimodal emotion recognition approach based on two important human emotion modalities: facial expression and body gesture. An effective supervised feature fusion algorithm named supervised multiset canonical correlation analysis (SMCCA) is presented to establish the linear connection among three sets of matrices, which contain the feature matrices of the two modalities and their common category matrix. Test results on bimodal emotion recognition with the FABO database show that the SMCCA algorithm achieves better or comparable performance relative to unsupervised feature fusion algorithms, including canonical correlation analysis (CCA), sparse canonical correlation analysis (SCCA), and multiset canonical correlation analysis (MCCA).
Minsu KIM Kunwoo LEE Katsuhiko GONDOW Jun-ichi IMURA
The main purpose of Codemark is to distribute digital contents using offline media; because of this, Codemark cannot be used on digital images and is highly robust only on printed images. This paper presents a new color code called Robust Index Code (RIC for short), which is highly robust against JPEG compression and resizing of digital images. RIC embeds a remote database index into digital images so that users can reach any digital content. Experimental results, obtained with our implemented RIC encoder and decoder, show that the proposed codemark is highly robust against JPEG compression and resizing: the embedded database indexes can be extracted with 100% accuracy from images compressed down to 30%. In conclusion, RIC can handle all types of digital products by embedding database indexes into digital images, which realizes a superdistribution system based on digital images. RIC therefore has potential for new Internet image services, since any image encoded with RIC can be used to access the original product anywhere.
This paper presents an adaptive least-significant-bit (LSB) steganography for spatial color images on smartphones. For the red, green, and blue color components, combinations of All-4bit, One-4bit+Two-2bit, and Two-3bit+One-2bit LSB replacements are proposed for content adaptivity and natural histograms. A high capacity of 8.4 bpp with an average peak signal-to-noise ratio (PSNR) of 43.7 dB and fast processing times on smartphones are also demonstrated.
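A minimal sketch of plain LSB replacement is given below to make the capacity figures concrete; the adaptive choice among the All-4bit, One-4bit+Two-2bit, and Two-3bit+One-2bit combinations is not reproduced here, and the fixed 4-bit depth is an assumption.

    import numpy as np

    def embed_lsb(cover, payload_bits, k=4):
        # Replace the k least-significant bits of each 8-bit channel value with payload
        # bits (fixed k here; the paper varies the bit depth per channel adaptively).
        flat = cover.astype(np.uint8).flatten()
        mask = 0xFF ^ ((1 << k) - 1)
        for i in range(min(len(flat), len(payload_bits) // k)):
            chunk = payload_bits[i * k:(i + 1) * k]
            value = int("".join(map(str, chunk)), 2)
            flat[i] = (flat[i] & mask) | value
        return flat.reshape(cover.shape)

    def extract_lsb(stego, n_values, k=4):
        flat = stego.flatten()
        bits = []
        for i in range(n_values):
            bits.extend(int(b) for b in format(flat[i] & ((1 << k) - 1), f"0{k}b"))
        return bits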
Lijing ZHU Kun WANG Duan ZHOU Liangkai LIU Huaxi GU
Ring-based topologies are popular for optical networks-on-chip (ONoCs). However, network congestion is serious in ring topologies, especially when optical circuit switching is employed. In this paper, we propose an algorithm to build a low-congestion multi-ring architecture for optical networks-on-chip without additional wavelength or scheduling overhead. A network congestion model is established in which a new network congestion factor is defined, and an algorithm is developed to optimize the low-congestion multi-ring topology. Finally, a case study is shown, and OPNET simulation results verify its superiority over the traditional ONoC architecture.
This paper presents a novel TV event detection method for automatically generating TV program digests from Twitter data. Previous studies of TV program digest generation based on Twitter data have developed TV event detection methods that analyze the frequency time series of tweets that users posted while watching a given TV program; however, most of these studies did not take into account differences in how Twitter is used, e.g., sharing information versus conversing. When these different types of Twitter data are lumped together into one category, it is difficult to detect highlight scenes of TV programs and correctly extract their content from the Twitter data. Therefore, this paper presents a highlight scene detection method that automatically generates TV program digests from Twitter data classified by Twitter user behavior. To confirm the effectiveness of the proposed method, experiments using 49 soccer game TV programs were conducted.
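As a rough illustration of the tweet-frequency analysis that such methods build on, the sketch below flags minutes whose tweet count exceeds a moving-average threshold; the window length, the threshold factor, and the omission of the user-behavior classification are all simplifying assumptions.

    import numpy as np

    def detect_bursts(tweet_counts, window=5, k=2.0):
        # tweet_counts: tweets per minute aligned with the broadcast timeline.
        counts = np.asarray(tweet_counts, dtype=float)
        bursts = []
        for t in range(window, len(counts)):
            history = counts[t - window:t]
            mean, std = history.mean(), history.std()
            if counts[t] > mean + k * max(std, 1.0):
                bursts.append(t)    # candidate highlight scene around minute t
        return bursts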
Hiroki CHIBA Yuki HYOGO Kazuo MISUE
Spatio-temporal dependent data, such as weather observation data, are data whose attribute values depend on both time and space. Typical methods for visualizing such data plot the attribute values at each point in time on a map and display the series of maps either in chronological order with animation or juxtaposed horizontally or vertically. However, these methods are problematic in that they compel readers who want to grasp the spatial changes of the attribute values to memorize the representations on the maps. The problem is exacerbated because the longer the time period covered by the data, the higher the cognitive load. To solve these problems, the authors propose a visualization method that overlays the representations of multiple instantaneous values on a single static map. This paper explains the design of the proposed method and reports two experiments conducted by the authors to investigate its usefulness. The experimental results show that the proposed method is useful in terms of the speed and accuracy with which readers can grasp spatial changes, and in its ability to present data with long time series efficiently.
Bimal CHANDRA DAS Satoshi TAKAHASHI Eiji OKI Masakazu MURAMATSU
This paper introduces robust optimization models for minimization of the network congestion ratio that can handle the fluctuation in traffic demands between nodes. The simplest and widely used model to minimize the congestion ratio, called the pipe model, is based on precisely specified traffic demands. However, in practice, network operators are often unable to estimate exact traffic demands as they can fluctuate due to unpredictable factors. To overcome this weakness, we apply robust optimization to the problem of minimizing the network congestion ratio. First, we review existing models as robust counterparts of certain uncertainty sets. Then we consider robust optimization assuming ellipsoidal uncertainty sets, and derive a tractable optimization problem in the form of second-order cone programming (SOCP). Furthermore, we take uncertainty sets to be the intersection of ellipsoid and polyhedral sets, and considering the mirror subproblems inherent in the models, obtain tractable optimization problems, again in SOCP form. Compared to the previous model that assumes an error interval on each coordinate, our models have the advantage of being able to cope with the total amount of errors by setting a parameter that determines the volume of the ellipsoid. We perform numerical experiments to compare our SOCP models with the existing models which are formulated as linear programming problems. The results demonstrate the relevance of our models in terms of congestion ratio and computation time.
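To make the ellipsoidal robust counterpart concrete, the following cvxpy sketch minimizes the congestion ratio on a toy instance with two parallel links and splittable demands; the topology, nominal demands, ellipsoid shape, and the parameter Omega are illustrative assumptions, and the sketch does not reproduce the paper's full network formulation.

    import numpy as np
    import cvxpy as cp

    # Toy instance: 3 demands that can be split over 2 parallel links (assumed topology).
    d0 = np.array([10.0, 6.0, 8.0])      # nominal demands
    P = np.diag([2.0, 1.0, 1.5])         # ellipsoid shape: d = d0 + P u with ||u||_2 <= Omega
    cap = np.array([30.0, 25.0])         # link capacities
    Omega = 1.0                          # parameter controlling the volume of the ellipsoid

    x = cp.Variable((2, 3), nonneg=True) # x[e, k]: fraction of demand k routed on link e
    r = cp.Variable()                    # congestion ratio
    constraints = [cp.sum(x, axis=0) == 1]
    for e in range(2):
        # Worst-case load on link e over the ellipsoidal uncertainty set (robust counterpart).
        worst_load = x[e, :] @ d0 + Omega * cp.norm(P.T @ x[e, :], 2)
        constraints.append(worst_load <= r * cap[e])

    cp.Problem(cp.Minimize(r), constraints).solve()
    print("worst-case congestion ratio:", r.value)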
Object detection has been a hot topic in image processing, computer vision, and pattern recognition. In recent years, training a model from labeled images using machine learning techniques has become popular. However, the relationship between training samples is usually ignored by existing approaches. To address this problem, a novel approach is proposed that trains a Siamese convolutional neural network on feature pairs and fine-tunes the network with a small amount of training samples. Since the proposed method considers not only the discriminative information between objects and background but also the relationship between intra-class features, it outperforms the state of the art on real images.
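A minimal PyTorch-style sketch of a Siamese branch trained on feature pairs with a contrastive loss is given below; the layer sizes, the margin, and the loss form are assumptions, not the architecture used in the paper.

    import torch
    import torch.nn as nn

    class SiameseNet(nn.Module):
        # Shared embedding branch applied to both members of a feature pair (assumed architecture).
        def __init__(self, in_dim=256, emb_dim=64):
            super().__init__()
            self.branch = nn.Sequential(
                nn.Linear(in_dim, 128), nn.ReLU(),
                nn.Linear(128, emb_dim))

        def forward(self, a, b):
            return self.branch(a), self.branch(b)

    def contrastive_loss(za, zb, same, margin=1.0):
        # same = 1 for pairs from the same class, 0 otherwise.
        d = torch.norm(za - zb, dim=1)
        return (same * d.pow(2) + (1 - same) * torch.clamp(margin - d, min=0).pow(2)).mean()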
The exchanged hypercube, denoted by EH(s,t), is a graph obtained by systematically removing edges from the corresponding hypercube while preserving many of the hypercube's attractive properties. Moreover, the ring-connected topology is one of the most promising topologies in Wavelength Division Multiplexing (WDM) optical networks. Let R_n denote a ring-connected topology. In this paper, we address the routing and wavelength assignment problem for implementing the EH(s,t) communication pattern on R_n, where n=s+t+1. We design an embedding scheme and, based on it, propose a near-optimal wavelength assignment algorithm using 2^{s+t-2}+⌊2^t/3⌋ wavelengths. We also show that the wavelength assignment algorithm uses no more than an additional 25 percent of wavelengths (or ⌊2^{t-1}/3⌋ more), compared to the optimal wavelength assignment algorithm.
Fumiya TESHIMA Hiroyasu OBATA Ryo HAMAMOTO Kenji ISHIDA
Streaming services that use TCP have increased; however, when TCP is used, throughput becomes unstable due to congestion control triggered by packet loss. To address this, TCP-AFEC, a TCP control scheme that secures the required transmission rate for streaming communication using Forward Error Correction (FEC), has been proposed. TCP-AFEC can set an appropriate transmission rate according to network conditions by combining TCP congestion control and FEC. However, TCP-AFEC was not designed for wireless Local Area Network (LAN) environments; it requires a certain amount of time to set the appropriate redundancy and cannot achieve the required throughput. In this paper, we first demonstrate the drawbacks of TCP-AFEC in wireless LAN environments. Then, we propose a redundancy setting method that can secure the required throughput with FEC, called TCP-TFEC. Finally, we show that TCP-TFEC achieves more stable throughput than TCP-AFEC.
Ryo OYAMA Shouhei KIDERA Tetsuo KIRIMOTO
Microwave imaging techniques, in particular synthetic aperture radar (SAR), are promising tools for terrain surface measurement irrespective of weather conditions. The coherent change detection (CCD) method is widely applied to detect surface changes by comparing multiple complex SAR images captured from the same scanning orbit. However, for general damage assessment after a natural disaster such as an earthquake or mudslide, additional information about surface changes, such as the change in surface height, is strongly required. Given this background, the current study proposes a novel height change estimation method using a CCD model based on the Pauli decomposition of fully polarimetric SAR images. The notable feature of this method is that, by introducing a frequency band-divided approach, it can offer accurate height change estimates beyond the assumed wavelength, and so is significantly better than InSAR-based approaches. Experiments in an anechoic chamber on a 1/100 scale model of an X-band SAR system show that the proposed method outputs more accurate height change estimates than a similar method that uses single-polarimetric data, even if the height change exceeds the assumed wavelength.
Image steganalysis determines whether an image contains secret messages. In practice, the number of cover images is far greater than that of stego images, so it is very important to solve the detection problem on imbalanced image sets. Currently, SMOTE, Borderline-SMOTE, and ADASYN are three important synthetic oversampling algorithms used to solve the imbalance problem; in these methods, new sample points are synthesized from the minority class samples. However, such research is seldom seen in image steganalysis. In this paper, based on the distribution of image features in steganalysis, we find that the features of majority class samples are similar to those of minority class samples, so both the majority and minority class samples are used to synthesize the new sample points. In experiments, compared with SMOTE, Borderline-SMOTE, and ADASYN, this approach improves detection accuracy using the FLD ensemble classifier.
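A minimal sketch of the oversampling idea is given below: in the spirit of the paper, each synthetic point is interpolated between a minority sample and one of its nearby majority-class samples; the neighbor count and mixing range are assumptions rather than the paper's exact synthesis rule.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def synthesize(minority, majority, n_new, k=5, rng=None):
        # minority, majority: (n, d) feature arrays. For each synthetic point, move from a
        # minority sample part of the way toward one of its k nearest majority-class
        # neighbors (assumed variant; classical SMOTE interpolates between two minority samples).
        rng = rng or np.random.default_rng(0)
        nn = NearestNeighbors(n_neighbors=k).fit(majority)
        _, idx = nn.kneighbors(minority)
        new_points = []
        for _ in range(n_new):
            i = rng.integers(len(minority))
            j = idx[i][rng.integers(k)]
            lam = rng.uniform(0.0, 0.5)          # stay closer to the minority sample
            new_points.append(minority[i] + lam * (majority[j] - minority[i]))
        return np.array(new_points)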
Takashi SHIBATA Kazunori SATO Ryohei IKEJIRI
We conducted experimental classes in an elementary school to examine how the advantages of stereoscopic 3D images can be applied in education. More specifically, we selected a unit on the Tumulus period in Japan for sixth-graders as the source of our 3D educational materials. This unit is part of the coursework on Japanese history. The educational materials used in our study included stereoscopic 3D images for examining the stone chambers and Haniwa (i.e., terracotta clay figures) of the Tumulus period. The results of our experimental class showed that the 3D educational materials helped students focus on specific parts of images, such as objects attached to the Haniwa, and understand 3D spaces and concavo-convex shapes. The experimental class also revealed that the 3D educational materials helped students come up with novel questions regarding the attached objects of the Haniwa and the Haniwa's spatial balance and alignment. The results suggest that the educational use of stereoscopic 3D images is worthwhile in that it leads to question and hypothesis generation and an inquiry-based approach to learning history.
The problem of reproducing high dynamic range (HDR) images on devices with a restricted dynamic range has gained a lot of interest in the computer graphics community. Various approaches to this issue exist, spanning several research areas, including computer graphics, image processing, color vision, and physiology. However, most of these approaches suffer from several serious, well-known color distortion problems. Accordingly, this article presents a tone-mapping method comprising a tone-mapping operator and a chromatic adaptation transform. The tone-mapping operator combines linear and non-linear mapping using a visual gamma based on the contrast sensitivity function (CSF) and the key value of the scene, where the visual gamma is adopted to control the dynamic range automatically, without parameters, and to avoid both luminance shift and hue shift in the displayed images. Furthermore, the key value of the scene is used to represent whether the scene is subjectively light, normal, or dark. The resulting image is then processed through a chromatic adaptation transform whose emphasis lies in human visual perception (HVP). The experimental results show that the proposed method yields better color rendering performance than conventional methods in terms of subjective and quantitative quality and color reproduction.
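A minimal sketch of two ingredients mentioned above, the key value of the scene (log-average luminance) and a global gamma tone curve, is given below; the thresholds and fixed gamma values are assumptions, and the CSF-driven visual gamma of the paper is not reproduced.

    import numpy as np

    def key_of_scene(luminance, eps=1e-6):
        # Log-average luminance, commonly used to judge whether a scene is light, normal, or dark.
        return float(np.exp(np.mean(np.log(luminance + eps))))

    def choose_gamma(key, l_max):
        # Illustrative choice of gamma from the key value (assumed thresholds).
        k = key / l_max
        if k < 0.05:
            return 0.6   # dark scene: brighten
        if k > 0.5:
            return 1.2   # light scene: compress highlights more
        return 0.8       # normal scene

    def tone_map(luminance, gamma):
        # Simple global mapping: normalize and apply a gamma curve
        # (the paper derives its gamma automatically from the CSF and the key value).
        return (luminance / luminance.max()) ** gamma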
Shu KONDO Yuto KOBAYASHI Keita TAKAHASHI Toshiaki FUJII
A layered light-field display based on light-field factorization is considered. In the original work, the factorization is formulated under the assumption that the light field is captured with orthographic cameras. In this paper, we introduce a generalized framework for light-field factorization that can handle both the orthographic and perspective camera projection models. With our framework, a light field captured with perspective cameras can be displayed accurately.
Junsuk PARK Nobuhiro SEKI Keiichi KANEKO
In topologies for interconnecting nodes, it is desirable to have a low degree and a small diameter. For the same number of nodes, a dual-cube topology has almost half the degree of a hypercube while increasing the diameter by just one. Hence, it is a promising topology for interconnection networks of massively parallel systems. We propose here a stochastic fault-tolerant routing algorithm to find a non-faulty path from a source node to a destination node in a dual-cube.
Stephane KAPTCHOUANG Ihsen AZIZ OUÉDRAOGO Eiji OKI
This paper proposes Preventive Start-time Optimization with No Penalty (PSO-NP). PSO-NP determines, at network operation start time, a suitable set of Open Shortest Path First (OSPF) link weights that can preventively handle any link failure scenario while considering both failure and non-failure scenarios. Preventive Start-time Optimization (PSO) was designed to minimize the worst-case congestion ratio (the maximum link utilization over all links in the network) in case of link failure. PSO considers all failure patterns to determine a link weight set that counters the worst-case failure. Unfortunately, when there is no link failure, that link weight set leads to a higher congestion ratio than the conventional start-time optimization scheme; this penalty is perpetual and thus a burden, especially in networks with few failures. In this work, we suppress that penalty while reducing the worst-case congestion ratio by considering both failure and non-failure scenarios; our proposed scheme, PSO-NP, is simple and effective in that regard. We further expand PSO-NP into a Generalized Preventive Start-time Optimization (GPSO) to find a link weight set that balances the penalty under no failure against the congestion ratio under the worst-case failure. Simulation results show that PSO-NP achieves substantial congestion reduction for any failure case while suppressing the penalty when there is no failure in the network. In addition, GPSO as a framework is effective in determining a suitable link weight set that considers the trade-off between the penalty under non-failure and the reduction of the worst-case congestion ratio.
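For reference, the sketch below evaluates the congestion ratio (maximum link utilization) induced by a given OSPF link-weight set using networkx; the single-shortest-path routing without ECMP splitting and the input format are simplifying assumptions, and the start-time weight optimization itself is not shown.

    import networkx as nx

    def congestion_ratio(graph, weights, demands):
        # graph: directed graph with a 'capacity' attribute on each edge;
        # weights: {(u, v): OSPF weight}; demands: {(src, dst): traffic volume}.
        # Routes each demand on its shortest path (single path assumed; ECMP omitted)
        # and returns the maximum link utilization.
        for (u, v), w in weights.items():
            graph[u][v]["weight"] = w
        load = {e: 0.0 for e in graph.edges}
        for (s, t), vol in demands.items():
            path = nx.shortest_path(graph, s, t, weight="weight")
            for u, v in zip(path, path[1:]):
                load[(u, v)] += vol
        return max(load[e] / graph.edges[e]["capacity"] for e in graph.edges)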
Kang WU Yijun CHEN Huiling HOU Wenhao CHEN Xuwen LIANG
In this letter, a new and accurate frequency estimation method for complex exponential signals is proposed. The proposed method divides the signal samples into several equal-length segments and sums the samples within each segment. It then utilizes the fast Fourier transform (FFT) with zero padding to obtain a coarse estimate, and exploits three Fourier coefficients to interpolate a fine estimate based on the least square error (LSE) criterion. Numerical results show that the proposed method closely approaches the Cramer-Rao bound (CRB) at low signal-to-noise ratios (SNRs) for different estimation ranges. Furthermore, the computational complexity of the proposed method is proportional to the estimation range, demonstrating its practicality. The proposed method is useful in applications involving carrier frequency offset (CFO) estimation for burst-mode satellite communications.
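A minimal numpy sketch of the coarse-then-fine idea is given below: block-wise summation, a zero-padded FFT peak search, and a standard three-coefficient parabolic interpolation used here as a stand-in for the paper's LSE interpolation; the segment count and padding factor are assumptions.

    import numpy as np

    def estimate_frequency(x, fs, n_segments=16, pad_factor=8):
        # 1) Sum the samples within each of n_segments equal-length blocks.
        L = len(x) // n_segments
        sums = x[:L * n_segments].reshape(n_segments, L).sum(axis=1)
        # 2) Coarse estimate: peak of a zero-padded FFT of the block sums.
        n_fft = pad_factor * n_segments
        spectrum = np.fft.fft(sums, n_fft)
        k = int(np.argmax(np.abs(spectrum)))
        # 3) Fine estimate: three-coefficient parabolic interpolation around the peak
        #    (a standard stand-in for the LSE interpolation used in the paper).
        a = np.abs(spectrum[(k - 1) % n_fft])
        b = np.abs(spectrum[k])
        c = np.abs(spectrum[(k + 1) % n_fft])
        delta = 0.5 * (a - c) / (a - 2 * b + c)
        k_hat = (k + delta) % n_fft
        if k_hat > n_fft / 2:               # map to the signed frequency range
            k_hat -= n_fft
        return k_hat * fs / (L * n_fft)     # block sums are effectively sampled at fs / L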