Takanori HARA Masahiro SASABE Kento SUGIHARA Shoji KASAHARA
To establish a network service in network functions virtualization (NFV) networks, the orchestrator addresses the challenge of service chaining and virtual network function placement (SC-VNFP) by mapping virtual network functions (VNFs) and virtual links onto physical nodes and links. Unlike in traditional networks, network operators in NFV networks must contend with both hardware and software failures to ensure resilient network services, because NFV networks consist of physical nodes and software-based VNFs. To guarantee network service quality, existing work has proposed an approach to the SC-VNFP problem that considers VNF diversity and redundancy. VNF diversity splits a single VNF into multiple lightweight replica instances with the same functionality as the original VNF, which are then executed in a distributed manner. VNF redundancy, on the other hand, deploys standby backup instances on physical nodes to prepare for potential VNF failures. However, the existing approach does not adequately consider the tradeoff between resource efficiency and service availability under VNF diversity and redundancy. In this paper, we formulate the SC-VNFP problem with VNF diversity and redundancy as a two-step integer linear program (ILP) that adjusts the balance between service availability and resource efficiency. Through numerical experiments, we demonstrate the fundamental characteristics of the proposed ILP, including the tradeoff between resource efficiency and service availability.
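As a toy illustration of the availability-versus-resources tradeoff that such a formulation balances (not the paper's actual ILP model), the availability of a VNF realized by n parallel instances can be computed under an independent-failure assumption; the function names and the independence assumption are ours:

```python
def service_availability(a, n):
    """Availability of a VNF realized by n parallel instances, each with
    independent availability a; the service survives if at least one
    instance is up (an illustrative assumption, not the paper's model)."""
    return 1.0 - (1.0 - a) ** n

def resource_cost(n, c):
    """Total resource cost of n instances, each consuming c units."""
    return n * c

# Sweeping the number of instances exposes the tradeoff: availability
# saturates while resource cost grows linearly.
for n in range(1, 5):
    print(n, round(service_availability(0.9, n), 4), resource_cost(n, 1.0))
```

Each additional instance multiplies the unavailability by (1 - a), so the marginal availability gain shrinks while the cost keeps rising, which is exactly the balance the two-step ILP adjusts.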
Yifan GUO Zhijun WANG Wu GUAN Liping LIANG Xin QIU
This letter presents an efficient massive multiple-input multiple-output (MIMO) detector based on quasi-Newton methods that speeds up convergence under realistic scenarios, such as high user load and spatially correlated channels. The proposed method leverages Hessian information by merging the Barzilai-Borwein method and the limited-memory BFGS (L-BFGS) method. In addition, an efficient initial solution based on constellation mapping is proposed. Simulation results demonstrate that, for a 128×32 antenna configuration, the proposed method limits the performance loss to 0.7 dB at a bit-error rate of 10⁻² with low complexity, surpassing state-of-the-art (SOTA) algorithms.
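The Barzilai-Borwein idea the letter builds on can be sketched on a small real-valued system. The following is a generic BB1 gradient solver for a symmetric positive-definite system (standing in for the MMSE normal equations), not the authors' detector; all names are ours:

```python
def matvec(A, x):
    """Dense matrix-vector product using plain lists."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def bb_solve(A, b, iters=50):
    """Gradient descent with Barzilai-Borwein (BB1) step sizes for the
    SPD system A x = b, i.e. min_x 0.5 x'Ax - b'x."""
    n = len(b)
    x = [0.0] * n
    g = [ai - bi for ai, bi in zip(matvec(A, x), b)]          # gradient A x - b
    alpha = 1.0 / max(sum(abs(v) for v in row) for row in A)  # conservative first step
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = [ai - bi for ai, bi in zip(matvec(A, x_new), b)]
        s = [xn - xo for xn, xo in zip(x_new, x)]             # s_k = x_k - x_{k-1}
        yv = [gn - go for gn, go in zip(g_new, g)]            # y_k = g_k - g_{k-1}
        sy = sum(si * yi for si, yi in zip(s, yv))
        if sy > 0.0:
            alpha = sum(si * si for si in s) / sy             # BB1 step: s's / s'y
        x, g = x_new, g_new
    return x
```

The BB1 step approximates curvature from the last two iterates, which is why it typically converges much faster than a fixed-step gradient method without forming or inverting the Hessian.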
Xingyu WANG Ruilin ZHANG Hirofumi SHINOHARA
This paper introduces an inverter-based true random number generator (I-TRNG). It uses a single CMOS inverter to amplify thermal noise multiple times. An adaptive calibration mechanism based on clock tuning provides robust operation across a wide range of supply voltage (0.5∼1.1 V) and temperature (-40∼140°C). An 8-bit Von Neumann post-processing circuit (VN8W) is implemented for maximum raw-entropy extraction. In a 130 nm CMOS technology, the I-TRNG entropy source occupies only 635 μm² and consumes 0.016 pJ/raw-bit at 0.6 V. The whole I-TRNG occupies 13,406 μm², including the entropy source, adaptive calibration circuit, and post-processing circuit. The minimum energy consumption of the I-TRNG is 1.38 pJ/bit at 0.5 V, while passing all NIST 800-22 and 800-90B tests. Moreover, an equivalent 15-year lifetime at 0.7 V and 25°C is confirmed by an accelerated NBTI aging test.
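The VN8W block is an 8-bit variant of Von Neumann post-processing; the classic pairwise extractor it generalizes can be sketched as follows (the 8-bit hardware details are not reproduced here):

```python
def von_neumann(bits):
    """Classic pairwise Von Neumann extractor: map the pair 01 -> 0 and
    10 -> 1, and discard 00/11. For independent but biased raw bits this
    yields an unbiased output stream at a reduced rate."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

print(von_neumann([0, 1, 1, 0, 0, 0, 1, 1, 1, 0]))  # prints [0, 1, 1]
```

The rate loss (at best one output bit per two input bits) is the price of removing bias, which is why hardware implementations such as VN8W process wider windows to recover more of the raw entropy.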
Seen from the Internet Service Provider (ISP) side, network traffic monitoring is an indispensable part of network service provisioning, as it helps maintain the security and reliability of communication networks. Among the numerous traffic conditions, traffic anomalies deserve particular attention because they significantly affect network performance. With the advancement of Machine Learning (ML), data-driven traffic anomaly detection algorithms have earned a high reputation for their accuracy and generality. However, they face challenges of inefficient traffic feature extraction and high computational complexity, especially when the evolving nature of the traffic process is taken into consideration. In this paper, we propose an online learning framework for traffic anomaly detection that embraces Gaussian Process (GP) and Sparse Representation (SR) in two steps: 1) to extract traffic features from past records and better understand them, we adopt a GP with a special kernel, i.e., a mixture of Gaussians in the spectral domain, which makes it possible to model network traffic more accurately and thereby improve anomaly detection performance; 2) to combat noise and modeling error, and observing the inherent self-similarity and periodicity of network traffic, we manually design a feature vector, based on which SR is adopted to perform robust binary classification. Finally, we demonstrate the superiority of the proposed framework in terms of detection accuracy through simulation.
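A "mixture of Gaussians in the spectral domain" corresponds in the time domain to a weighted sum of Gaussian-envelope cosines (the spectral mixture kernel form of Wilson and Adams). A minimal one-dimensional sketch, with parameter names of our own choosing, not the paper's exact kernel:

```python
from math import exp, cos, pi

def spectral_mixture_kernel(tau, weights, means, variances):
    """Stationary 1-D spectral mixture kernel evaluated at lag tau.
    Each spectral component (weight w, mean frequency m, variance v)
    contributes w * exp(-2*pi^2*tau^2*v) * cos(2*pi*m*tau)."""
    return sum(w * exp(-2.0 * pi ** 2 * tau ** 2 * v) * cos(2.0 * pi * m * tau)
               for w, m, v in zip(weights, means, variances))
```

The cosine terms let the kernel capture periodic structure (such as daily traffic cycles), while the Gaussian envelopes control how quickly those correlations decay with lag.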
Aditya RAKHMADI Kazuyuki SAITO
Transcatheter renal denervation (RDN) is a novel treatment that reduces blood pressure in patients with resistant hypertension by eliminating the renal sympathetic nerves with an energy-based catheter, mostly radio-frequency (RF) current. However, several inconsistent RDN treatments have been reported, mainly due to the narrow heating area of RF current and the inability to confirm a successful nerve ablation in a deep area. We proposed microwave energy as an alternative for creating a wider ablation area. However, confirming a successful ablation remains a problem. In this paper, we design a prediction method for deep renal nerve ablation sites using hybrid numerical-calculation-driven machine learning (ML) in combination with a microwave catheter. This work is a first-step investigation of the hybrid ML prediction capability in a real-world situation. A catheter with a single-slot coaxial antenna at 2.45 GHz and a balloon catheter, combined with a thin thermometer probe on the balloon surface, is proposed. The lumen temperature measured by the probe is used as the ML input to predict the temperature rise at the ablation site. Heating experiments using 6 and 8 mm hole phantoms with 41.3 W excitation power, and an 8 mm phantom with 36.4 W excitation power, were performed eight times each to check the feasibility and accuracy of the ML algorithm. In addition, the temperature at the ablation site was measured for reference. The prediction by the ML algorithm agrees well with the reference, with maximum differences of 6°C and 3°C for the 6 and 8 mm phantoms (both powers), respectively. Overall, the proposed ML algorithm is capable of predicting the ablation-site temperature rise with high accuracy.
In this survey, we summarize properties of pseudorandomness and non-randomness of some number-theoretic sequences and present results on their behaviour under the following measures of pseudorandomness: balance, linear complexity, correlation measure of order k, expansion complexity, and 2-adic complexity. The number-theoretic sequences are the Legendre sequence and the two-prime generator, the Thue-Morse sequence and its sub-sequence along squares, and the prime omega sequences for integers and polynomials.
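For instance, the Thue-Morse sequence mentioned above is easy to generate, and its balance (one of the surveyed measures) can be checked directly; a small sketch with helper names of our own:

```python
def thue_morse(n):
    """First n terms of the Thue-Morse sequence:
    t(k) = parity of the number of ones in the binary expansion of k."""
    return [bin(k).count("1") % 2 for k in range(n)]

def balance(seq):
    """Count of ones minus count of zeros; a small absolute value over
    long prefixes indicates a well-balanced binary sequence."""
    ones = sum(seq)
    return ones - (len(seq) - ones)

print(thue_morse(8))  # prints [0, 1, 1, 0, 1, 0, 0, 1]
```

Over any prefix whose length is a power of two, the Thue-Morse sequence is perfectly balanced, which is one of the pseudorandomness properties the survey discusses (alongside measures where it behaves badly, such as the correlation measure of order 2).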
Shinya MATSUFUJI Sho KURODA Yuta IDA Takahiro MATSUMOTO Naoki SUEHIRO
A set consisting of K subsets of M sequences of length L is called a complementary sequence set, expressed by A(L, K, M), if the sum of the out-of-phase aperiodic autocorrelation functions of the sequences within each subset and the sum of the cross-correlation functions between the corresponding sequences in any two subsets are zero at any phase shift. Suehiro et al. first proposed the complementary set A(N^n, N, N), where N and n are positive integers greater than or equal to 2. Recently, several complementary sets related to Suehiro's construction, such as those for N a power of a prime, have been proposed. However, their inclusion relations and the properties of their sequences have not been discussed. This paper rigorously formulates and investigates the (generalized) logic functions of the complementary sets of Suehiro et al. in order to understand the construction method and the properties of the sequences. As a result, it is shown that there exists a case where the logic function is bent when n is even. This means that each sequence can be guaranteed to have pseudo-random properties to some extent, and hence that the complementary set can be successfully applied to communication over fluctuating channels. The logic functions also allow simplification of the sequence generators and their matched filters.
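The defining zero-sum autocorrelation property can be checked numerically. The sketch below verifies it for classic binary Golay complementary pairs, i.e. the simplest case of two subsets of one sequence each (function names are ours):

```python
def aperiodic_acf(seq, shift):
    """Aperiodic autocorrelation of seq at the given phase shift."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

def is_complementary_pair(a, b):
    """True if the sums of the out-of-phase aperiodic autocorrelations
    of a and b cancel at every nonzero shift."""
    return all(aperiodic_acf(a, u) + aperiodic_acf(b, u) == 0
               for u in range(1, len(a)))

# Classic binary Golay pairs of lengths 2 and 4:
print(is_complementary_pair([1, 1], [1, -1]))          # prints True
print(is_complementary_pair([1, 1, 1, -1], [1, 1, -1, 1]))  # prints True
```

Extending the check to a full A(L, K, M) set amounts to summing within each subset and additionally verifying the zero cross-correlation condition between corresponding sequences of different subsets.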
Yu ZHOU Jianyong HU Xudong MIAO Xiaoni DU
Low confusion coefficient values can make side-channel attacks harder for vector Boolean functions in block ciphers. In this paper, we give new results on the confusion coefficients of f ⊞ g, f ⊡ g, f ⊕ g, and fg for different Boolean functions f and g. We also deduce a relationship between the sum-of-squares of the confusion coefficient of one n-variable function and those of its two (n - 1)-variable decomposition functions. Finally, we find that the confusion coefficient of vector Boolean functions is affine invariant.
Synthetic aperture radar (SAR) is a device for observing the ground surface and is one of the important technologies in the field of microwave remote sensing. In SAR observation, a platform equipped with a small-aperture antenna flies in a straight line and continuously radiates pulse waves toward the ground during the flight. By synthesizing the series of observation data obtained during the flight, high-resolution ground surface observation is realized. In SAR observation, two spatial resolutions are defined, in the range and azimuth directions, and both are limited by the bandwidth of the SAR system. The purpose of this study is to improve the resolution of SAR by sparse reconstruction. In particular, we aim to improve the resolution without changing the frequency parameters. In this paper, we propose to improve the resolution of SAR using a deconvolution iterative shrinkage-thresholding algorithm (ISTA) and verify the proposed method through an experimental analysis using an actual SAR dataset. Experimental results show that the proposed method can improve the resolution of SAR with low computational complexity.
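A deconvolution ISTA of this kind can be sketched in one dimension. This is the generic textbook ISTA for min ½‖h∗x−y‖² + λ‖x‖₁ (step size, regularization weight, and names chosen by us), not the paper's SAR-specific implementation:

```python
def conv(h, x):
    """Full linear convolution of h and x."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y

def corr(h, y, n):
    """Adjoint of conv(h, .): correlation of y with h, trimmed to length n."""
    return [sum(h[i] * y[i + j] for i in range(len(h))) for j in range(n)]

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return max(abs(v) - t, 0.0) * (1 if v > 0 else -1 if v < 0 else 0)

def ista(h, y, n, lam=0.01, step=0.2, iters=500):
    """ISTA for min_x 0.5*||h*x - y||^2 + lam*||x||_1; step must be
    below 1/L, where L is the Lipschitz constant of the gradient."""
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - ci for ci, yi in zip(conv(h, x), y)]   # residual y - Hx
        g = corr(h, r, n)                                # ascent direction H^T r
        x = [soft(xi + step * gi, step * lam) for xi, gi in zip(x, g)]
    return x
```

Each iteration is one gradient step on the data-fit term followed by soft-thresholding, which promotes the sparse scene estimates that sharpen resolution beyond the bandwidth limit.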
Daisuke AMAYA Takuji TACHIBANA
Network function virtualization (NFV) technology significantly changes traditional communication network environments by providing network functions as virtual network functions (VNFs) on commercial off-the-shelf (COTS) servers. Moreover, service chaining, which uses VNFs in a pre-determined sequence, is essential for providing each network service. A VNF can serve multiple service chains with the corresponding network function, reducing the number of VNFs. However, VNFs might be the source or the target of a cyberattack. If the node where a VNF is installed is attacked, the VNF can also be easily attacked because of its security vulnerabilities. Conversely, a malicious VNF may attack the node where it is installed, and other VNFs installed on that node may also be attacked. Few studies have addressed the security of VNFs and nodes in service chaining. This study proposes a service chain construction with security-level management. The security-level management concept is introduced to build many service chains. Moreover, the cost optimization problem for service chaining is formulated, and a heuristic algorithm is proposed. We demonstrate the effectiveness of the proposed method on several network topologies using numerical examples.
Jiansheng BAI Jinjie YAO Yating HOU Zhiliang YANG Liming WANG
Modulated signal detection has been advancing rapidly in various wireless communication systems, as it is a core technology of spectrum sensing. To address the non-Gaussian statistics of noise in radio channels, especially its impulsive characteristics in the time/frequency domain, this paper proposes a method based on Information Geometric Difference Mapping (IGDM) to solve the signal detection problem under Alpha-stable distribution (α-stable) noise and to improve performance under low Generalized Signal-to-Noise Ratio (GSNR). Scale mixtures of Gaussians are used to approximate the probability density function (PDF) of the signals and to model the statistical moments of the observed data. Drawing on the principles of information geometry, we map the PDFs of different types of data into a manifold space. Through the statistical moment models, each signal is projected as a coordinate point within the manifold structure. We then design a dual-threshold mechanism based on the geometric mean and use the Kullback-Leibler divergence (KLD) to measure the information distance between coordinates. Numerical simulations and experiments were conducted to show the superiority of IGDM for detecting multiple modulated signals in non-Gaussian noise; the results show that IGDM is adaptive and effective under extremely low GSNR.
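The KLD between coordinates has a closed form when each coordinate is summarized as a univariate Gaussian; a minimal sketch of that distance and of a generic dual-threshold decision (the paper's manifold coordinates and thresholds are richer than this):

```python
from math import log

def kld_gauss(mu0, var0, mu1, var1):
    """Closed-form KL(N(mu0, var0) || N(mu1, var1))."""
    return 0.5 * (var0 / var1 + (mu0 - mu1) ** 2 / var1 - 1.0
                  + log(var1 / var0))

def dual_threshold_decide(d, low, high):
    """Dual-threshold rule on an information distance d: declare 'signal'
    above high, 'noise' below low, 'uncertain' in between (illustrative
    thresholds, not the paper's geometric-mean construction)."""
    return 'signal' if d > high else 'noise' if d < low else 'uncertain'
```

Note that the KLD is asymmetric in its arguments, which is why a detector must fix which distribution plays the role of the noise-only reference.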
Quantum key distribution, or secret key distribution (SKD), has been studied to deliver a secret key for secure communications whose security is physically guaranteed. For practical deployment, such systems should be overlaid onto existing wavelength-multiplexing transmission systems without using a dedicated transmission line. This study analytically investigates the feasibility of the intensity-modulation/direct-detection (IM/DD) SKD scheme being wavelength-multiplexed with conventional wavelength-division-multiplexed (WDM) signals, focusing on spontaneous Raman scattering light from the conventional optical signals. Simulation results indicate that IM/DD SKD systems are not degraded when overlaid onto practically deployed dense WDM transmission systems in the C-band, owing to a feature of the IM/DD SKD scheme: unlike conventional quantum key distribution schemes, it uses a signal light with an intensity level comparable to conventional optical signals.
This paper introduces heuristic approaches and a deep reinforcement learning approach to a joint virtual network function deployment and scheduling problem in a dynamic scenario. We formulate the problem as an optimization problem whose objective is to maximize the ratio of delay-satisfied requests while minimizing the average resource cost. Based on the mathematical description of the optimization problem, we introduce three heuristic approaches and a deep reinforcement learning approach. The two greedy approaches are named finish-time greedy and computational-resource greedy: the finish-time greedy approach finishes each request as soon as possible regardless of its resource cost, whereas the computational-resource greedy approach makes each request occupy as few resources as possible regardless of its finish time. The simulated annealing approach generates feasible solutions randomly and converges to an approximate solution. In the learning-based approach, neural networks are trained to make decisions. We use a simulated environment to evaluate the introduced approaches. Numerical results show that the deep reinforcement learning approach performs best in terms of benefit in our examined cases.
Yanming CHEN Bin LYU Zhen YANG Fei LI
In this paper, we investigate a wireless-powered, relay-assisted batteryless IoT network based on a non-linear energy harvesting model, in which an energy service provider is constituted by a hybrid access point (HAP) and an IoT service provider is constituted by multiple clusters. The HAP provides energy signals to the batteryless devices for information backscattering and to the wireless-powered relays for energy harvesting. The relays are deployed to assist the batteryless devices in transmitting information to the HAP using the harvested energy. To model the energy interactions between the energy service provider and the IoT service provider, we propose a Stackelberg-game-based framework that aims to maximize the respective utilities of the two providers. Since the utility maximization problem of the IoT service provider is non-convex, we employ fractional programming theory and propose a block coordinate descent (BCD) based algorithm with successive convex approximation (SCA) and semi-definite relaxation (SDR) techniques to solve it. Numerical simulation results confirm that, compared with the benchmark schemes, the proposed scheme achieves larger utilities for both the energy service provider and the IoT service provider.
Shinichi MURATA Takahiro MATSUDA
To localize an unknown wave source in non-line-of-sight environments, a wave source localization scheme using multiple unmanned aerial vehicles (UAVs) has been proposed, in which each UAV estimates the directions of arrival (DoAs) of received signals and the wave source is localized from the estimated DoAs by means of maximum likelihood estimation. In this study, by extending this concept, we propose a novel wave source localization scheme using a single UAV. In the proposed scheme, the UAV moves along a path comprising multiple measurement points, and the wave source is sequentially localized from the DoA distributions estimated at these points. At each measurement point, a moving-path planning algorithm determines the next measurement point from the estimated DoA distributions and the measurement points the UAV has already visited. We consider two moving-path planning algorithms and validate the proposed scheme through simulation experiments.
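Under simplifying assumptions (2-D geometry, a single bearing per measurement point, equal angular noise everywhere), DoA-based localization reduces to intersecting bearing lines in the least-squares sense. A sketch with names of our own, not the paper's estimator:

```python
from math import sin, cos, pi

def localize(points, bearings):
    """Least-squares source position from bearing (DoA) lines: each
    measurement point p_i with bearing theta_i defines a line, and we
    minimize the sum of squared perpendicular distances to all lines."""
    A11 = A12 = A22 = b1 = b2 = 0.0
    for (px, py), th in zip(points, bearings):
        nx, ny = -sin(th), cos(th)        # unit normal to the bearing line
        c = nx * px + ny * py             # line equation: n . q = c
        A11 += nx * nx; A12 += nx * ny; A22 += ny * ny
        b1 += nx * c;   b2 += ny * c
    det = A11 * A22 - A12 * A12           # singular if all bearings are parallel
    return ((b1 * A22 - b2 * A12) / det, (A11 * b2 - A12 * b1) / det)
```

With two non-parallel bearings this recovers the exact intersection; with noisy bearings from many measurement points it returns the least-squares compromise, which is the geometric core that the maximum likelihood formulation weights by the DoA uncertainty.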
Guojin LIAO Yongpeng ZUO Qiao LIAO Xiaofeng TIAN
Frame synchronization detection before data transmission is an important module that directly affects the lifetime and coexistence of underwater acoustic communication (UAC) networks, where a linear frequency modulation (LFM) signal is commonly used as the frame preamble for synchronization. Unlike in terrestrial wireless communications, strong bursty noise frequently appears in UAC. Owing to the long transmission distance and the low signal-to-noise ratio, strong short-distance bursty noise greatly reduces the accuracy of conventional fractional Fourier transform (FrFT) detection. We propose a multi-segment verification fractional Fourier transform (MFrFT) preamble detection algorithm to address this challenge. In the proposed algorithm, four adjacent FrFT operations are carried out, and the LFM signal is identified by observing the linear correlation between the two lines connecting three adjacent peak points, called the 'dual-line-correlation mechanism'. The accurate starting time of the LFM signal can then be found from the peak frequencies of the adjacent FrFTs. More importantly, MFrFT does not increase the computational complexity. Compared with the conventional FrFT detection method, experimental results show that the proposed algorithm can effectively distinguish signal starting points from bursty noise with a much lower error detection rate, which in turn minimizes the cost of retransmission.
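The dual-line-correlation test amounts to checking whether three adjacent (window time, peak frequency) points lie on one line, since a true LFM preamble drifts linearly in frequency while a noise burst does not. A minimal sketch (the tolerance and names are ours):

```python
def collinear(p1, p2, p3, tol=1e-6):
    """Check whether three (time, peak-frequency) points are collinear,
    i.e. the two lines connecting them in pairs have the same slope.
    The cross-product form avoids dividing by possibly-zero time gaps."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((y2 - y1) * (x3 - x1) - (y3 - y1) * (x2 - x1)) < tol
```

In practice the tolerance would be set from the expected FrFT peak-frequency jitter, so that chirps pass the test while isolated noise peaks fail it.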
Yushi OGIWARA Ayanori YOROZU Akihisa OHYA Hideyuki KAWASHIMA
In the Robot Operating System (ROS), a major middleware for robots, the Transform Library (TF) is a mandatory package that manages transformation information between coordinate systems, using a directed-forest data structure and providing methods for registering and computing the information. However, the structure has two fundamental problems. The first is poor scalability: since a single giant lock is used for mutual exclusion, only one thread can access the tree at a time, so access is sequential. The second is a lack of data freshness: TF retrieves non-latest synthetic data when computing coordinate transformations because it prioritizes temporal consistency over freshness. In this paper, we propose methods based on transactional techniques that avoid anomalies, achieve high performance, and obtain fresh data. These transactional methods show up to 429 times higher throughput than the conventional method on a read-only workload and up to 1276 times higher freshness on a read-write combined workload.
Yuya DEGAWA Toru KOIZUMI Tomoki NAKAMURA Ryota SHIOYA Junichiro KADOMOTO Hidetsugu IRIE Shuichi SAKAI
One of the performance bottlenecks of a processor is the front-end that supplies instructions. Various techniques, such as cache replacement algorithms and hardware prefetching, have been investigated to facilitate smooth instruction supply at the front-end and to improve processor performance. In these approaches, one of the most important factors has been the reduction in the number of instruction cache misses. By using the number of instruction cache misses or derived factors, previous studies have explained the performance improvements achieved by their proposed methods. However, we found that the number of instruction cache misses does not always explain performance changes well in modern processors. This is because the front-end in modern processors handles subsequent instruction cache misses in overlap with earlier ones. Based on this observation, we propose a novel factor: the number of miss regions. We define a region as a sequence of instructions from one branch misprediction to the next, while we define a miss region as a region that contains one or more instruction cache misses. At the boundary of each region, the pipeline is flushed owing to a branch misprediction. Thus, cache misses after this boundary are not handled in overlap with cache misses before the boundary. As a result, the number of miss regions is equal to the number of cache misses that are processed without overlap. In this paper, we demonstrate that the number of miss regions can well explain the variation in performance through mathematical models and simulation results. The results show that the model explains cycles per instruction with an average error of 1.0% and maximum error of 4.1% when applying an existing prefetcher to the instruction cache. The idea of miss regions highlights that instruction cache misses and branch mispredictions interact with each other in processors with a decoupled front-end. 
We hope that considering this interaction will motivate the development of fast performance estimation methods and new microarchitectural methods.
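The miss-region count defined above can be computed from an abstract trace of front-end events; a sketch with an event encoding of our own (not the authors' simulator interface):

```python
def count_miss_regions(events):
    """Count miss regions in a front-end event trace.
    events: iterable of 'M' (instruction cache miss) or 'B' (branch
    misprediction, which flushes the pipeline and ends the current
    region). A miss region is a region with at least one icache miss,
    so all misses inside one region count only once."""
    regions = 0
    miss_in_region = False
    for e in events:
        if e == 'M':
            miss_in_region = True
        elif e == 'B':                 # region boundary: pipeline flush
            if miss_in_region:
                regions += 1
            miss_in_region = False
    if miss_in_region:                 # account for the trailing region
        regions += 1
    return regions

print(count_miss_regions("MMBMBBM"))  # prints 3
```

Note how "MM" inside one region contributes a single miss region: consecutive misses between two mispredictions overlap in a decoupled front-end, which is exactly why this count tracks performance better than the raw miss count.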
Rin OISHI Junichiro KADOMOTO Hidetsugu IRIE Shuichi SAKAI
As more and more programs handle personal information, the demand for secure handling of data is increasing. The protocol that satisfies this demand is called secure function evaluation (SFE) and has attracted much attention from a privacy-protection perspective. In two-party SFE, two mutually untrusting parties compute an arbitrary function on their respective secret inputs without disclosing any information other than the output of the function. For example, it is possible to execute a program while protecting private information, such as genomic information. The garbled circuit (GC), a method of program obfuscation in which the program is divided into gates and each gate's output is computed using a symmetric-key cipher, is an efficient method for this purpose. However, GC is computationally expensive and has significant overhead even with an accelerator. We focus on hardware acceleration because of the nature of GC, which is limited to certain types of calculations, such as encryption and XOR. In this paper, we propose an architecture that accelerates garbling by running multiple garbling engines simultaneously, building on the latest FPGA-based GC accelerator. In this architecture, managers are introduced to process multiple pipeline rows simultaneously. We also propose an optimized RAM implementation for this FPGA accelerator. As a result, the proposed architecture achieves an average performance improvement of 26% in garbling the same set of programs compared with the state-of-the-art (SOTA) garbling accelerator.
Chee Siang LEOW Hideaki YAJIMA Tomoki KITAGAWA Hiromitsu NISHIZAKI
Text detection is a crucial pre-processing step in optical character recognition (OCR) for the accurate recognition of text, including both printed fonts and handwritten characters, in documents. While current deep-learning-based text detection tools can detect text regions with high accuracy, they often treat multiple lines of text as a single region. To perform line-based character recognition, the text must be divided into individual lines, which requires a line detection technique. This paper develops a new approach to single-line detection in OCR that is based on the existing Character Region Awareness For Text detection (CRAFT) model and incorporates a deep neural network specialized in line segmentation. However, this method may still detect multiple lines as a single text region when multi-line text with narrow spacing is present. To address this, we also introduce a post-processing algorithm that detects single text regions using the output of the single-line segmentation. Our proposed method successfully detects single lines, even in multi-line text with narrow line spacing, and hence improves the accuracy of OCR.