In this paper, to improve the robustness of D2D-based SNS by avoiding cascading failures, we propose an autonomous decentralized friendship management method called virtual temporal friendship creation. In our proposed method, virtual temporal friendships are created among users based on an optimization problem to improve robustness, although these friendships cannot be used for message exchange in the SNS. We investigate the impact of creating a new friendship on node resilience for the optimization problem. Then we design an autonomous decentralized algorithm for the optimization problem of virtual temporal friendship creation based on the obtained results. We evaluate the performance of virtual temporal friendship creation with simulation and investigate its effectiveness by comparing it with a method based on a meta-heuristic algorithm. Numerical examples show that virtual temporal friendship creation can improve robustness quickly in an autonomous and decentralized way.
Xiao HONG Yuehong GAO Hongwen YANG
Computer networks are increasingly subject to a proliferation of mobile demands, which poses a great challenge to guaranteeing the quality of network service. For real-time systems, QoS performance bound analysis is often difficult because of the complex network topologies and background traffic in modern networks. Network calculus, however, converts a complex non-linear network system into an analyzable linear system, enabling more accurate delay bound analysis. Because the existing network environment contains complex resource allocation schemes and delay bound analysis is generally pessimistic, the analysis model must be modified to improve bound accuracy. In this paper, the main research approach is to obtain measurement results from an actual network by building a measurement environment, together with the corresponding theoretical results from network calculus. The measurement data and theoretical results are compared in order to clarify the bandwidth scheduling scheme. The measurement and theoretical analysis results are verified and corrected in order to propose an accurate per-flow end-to-end delay bound analytic model for a large-scale scheduling network. On this basis, the significance of the analysis results for engineering practice is discussed.
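As a toy illustration of the delay-bound machinery this abstract builds on (not the paper's corrected model), the classic network-calculus result bounds the delay of a token-bucket-constrained flow served by a rate-latency server; the parameter values below are made-up examples.

```python
# Classic network-calculus delay bound: arrival curve alpha(t) = b + r*t
# (token bucket) served by a rate-latency service curve beta(t) = R*max(t-T, 0).
# The worst-case delay is the horizontal deviation between the two curves.

def delay_bound(b, r, R, T):
    """Latency T plus the time to drain a full burst b at service rate R.
    Valid only when the sustained rate r does not exceed the service rate R."""
    assert r <= R, "system must be stable (r <= R)"
    return T + b / R

# Example: 15 kbit burst, 5 Mbit/s sustained rate, 10 Mbit/s server, 2 ms latency
print(delay_bound(b=15e3, r=5e6, R=10e6, T=2e-3))   # ~0.0035 s (2 ms + 1.5 ms)
```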
Masaya KUMAZAKI Masaki OGURA Takuji TACHIBANA
For the beyond-5G era, in network function virtualization (NFV) environments, service chaining can be utilized to provide the flexible network infrastructures needed to support the creation of various application services. In this paper, we propose a dynamic service chain construction method based on model predictive control (MPC) to utilize network resources effectively. In the proposed method, the number of data packets in the buffer at each node is modeled as a dynamical system for MPC. Then, we formulate an optimization problem for this dynamical system using the predicted amount of traffic injected into each service chain from users. In the optimization problem, the transmission route of each service chain, the node where each VNF is placed, and the amount of resources for each VNF are determined simultaneously by MPC so that the amount of resources allocated to VNFs and the number of VNF migrations are minimized. In addition, the performance of data transmission is controlled by considering the maximum number of data packets stored in buffers. The performance of the proposed method is evaluated by simulation, and its effectiveness with different parameter values is investigated.
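A minimal, hypothetical sketch of the MPC idea described above: buffer occupancy is a discrete-time dynamical system, and at each step the controller picks the cheapest service-rate plan over a short prediction horizon that keeps the predicted buffer below capacity. The names, rate set, and cost term are illustrative assumptions, not the paper's formulation.

```python
# Receding-horizon control of a single buffer: x(k+1) = max(x(k) + a(k) - u(k), 0).
# We exhaustively search tiny discrete plans; real MPC would solve an optimization.

from itertools import product

def mpc_step(x0, predicted_arrivals, rates, cap):
    """Return the first action of the cheapest plan that keeps the buffer <= cap."""
    horizon = len(predicted_arrivals)
    best_cost, best_u = None, None
    for plan in product(rates, repeat=horizon):
        x, feasible = x0, True
        for a, u in zip(predicted_arrivals, plan):
            x = max(x + a - u, 0)
            if x > cap:                  # predicted overflow -> infeasible plan
                feasible = False
                break
        cost = sum(plan)                 # minimize total allocated resources
        if feasible and (best_cost is None or cost < best_cost):
            best_cost, best_u = cost, plan[0]
    return best_u                        # apply only the first action (receding horizon)

# Example: buffer holds 10 packets, 3 predicted arrival batches, rates in {0, 2, 4}
print(mpc_step(x0=8, predicted_arrivals=[4, 4, 4], rates=(0, 2, 4), cap=10))
```

Applying only the first action and re-solving at the next step is the defining MPC pattern the abstract relies on.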
Xiaoling YU Yuntao WANG Chungen XU Tsuyoshi TAKAGI
Due to its ability to support arbitrary operations over encrypted data, fully homomorphic encryption (FHE) has drawn considerable attention since it appeared. Some FHE schemes have been constructed based on the general approximate common divisor (GACD) problem, which is widely believed to be intractable. Therefore, studying the hardness of the GACD problem can provide proper security parameters for these FHE schemes and their variants. This paper studies an orthogonal lattice algorithm introduced by Ding and Tao (the Ding-Tao algorithm) for solving the GACD problem. We revisit the condition under which the Ding-Tao algorithm works and obtain a new bound on the number of GACD samples based on the geometric series assumption. We also give an analysis of the bound given in previous work. To further verify the theoretical results, we conduct experiments on the Ding-Tao algorithm under our bound. A comparison with the experimental results under the previous bound indicates that the success probability under our bound is higher than that under the previous bound as the bound grows.
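For readers unfamiliar with the underlying problem, here is a toy GACD instance: each sample is a near-multiple x_i = p*q_i + r_i of a secret p, with small noise r_i. The parameter sizes below are for illustration only; real FHE parameters are vastly larger.

```python
# Toy general approximate common divisor (GACD) samples: x = p*q + r with
# secret p and small noise r. An attacker is given only the x values.

import secrets

def gacd_samples(p, n, q_bits=32, noise_bits=4):
    """Generate n GACD samples hiding the common approximate divisor p."""
    out = []
    for _ in range(n):
        q = secrets.randbits(q_bits) | 1        # random nonzero multiplier
        r = secrets.randbits(noise_bits)        # small additive noise
        out.append(p * q + r)
    return out

p = 1000003                                     # secret (a ~20-bit prime here)
xs = gacd_samples(p, n=5)
# Each sample is close to a multiple of p: its residue mod p is the small noise
print(all(x % p < 2**4 for x in xs))            # True
```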
Koki YAMADA Taishin NAKAMURA Hisashi YAMAMOTO
In the field of reliability engineering, many studies on the relationship between the reliability of components and that of the entire system have been conducted since the 1960s. Various properties of large-scale systems can be studied via limit theorems, which can also provide approximate system reliability. Existing studies have established limit theorems for a connected-(r, s)-out-of-(m, n):F lattice system consisting of components with identical reliability. However, the existing limit theorems are constrained in terms of (a) the system shape and (b) the conditions under which they can be applied. Therefore, this study generalizes the existing limit theorems in these two directions. The limit theorem established in this paper can be useful for revealing properties of the reliability of a large-scale connected-(r, s)-out-of-(m, n):F lattice system.
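To make the system model concrete, a connected-(r, s)-out-of-(m, n):F lattice system fails if and only if some r x s subgrid of the m x n component lattice has all components failed. The brute-force Monte-Carlo estimator below is an illustrative assumption for checking intuition, not the paper's analytical approach.

```python
# Monte-Carlo estimate of the reliability of a connected-(r,s)-out-of-(m,n):F
# lattice system with i.i.d. components (1 = working, 0 = failed).

import random

def system_fails(grid, r, s):
    """True iff some r x s block of the lattice is entirely failed."""
    m, n = len(grid), len(grid[0])
    for i in range(m - r + 1):
        for j in range(n - s + 1):
            if all(grid[i + di][j + dj] == 0
                   for di in range(r) for dj in range(s)):
                return True
    return False

def estimate_reliability(m, n, r, s, p_work, trials=20000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        grid = [[1 if rng.random() < p_work else 0 for _ in range(n)]
                for _ in range(m)]
        ok += not system_fails(grid, r, s)
    return ok / trials

# Example: 4x5 lattice, fails when any 2x2 block is down, components 90% reliable
print(estimate_reliability(4, 5, 2, 2, 0.9))
```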
The 2020 International Conference on Emerging Technologies for Communications (ICETC2020) was held online on December 2nd-4th, 2020, and 213 research papers were accepted and presented in its sessions. The accepted papers are expected to contribute to the development and extension of research in multiple areas. In this survey paper, all accepted papers are classified into four research areas: Physical & Fundamental, Communications, Network, and Information Technology & Application, and then into individual research topics. For each research area and topic, this survey briefly introduces the presented technologies and methods.
Akifumi MARU Akifumi MATSUDA Satoshi KUBOYAMA Mamoru YOSHIMOTO
In order to predict single event occurrences in highly integrated CMOS memory circuits, quantitative evaluation of charge sharing between memory cells is needed. In this study, the charge sharing area induced by heavy-ion incidence is quantitatively calculated using a device-simulation-based method. The validity of this method is experimentally confirmed using a heavy-ion accelerator.
We analyze the effect of window choice on the zero-padding method and corrected quadratically interpolated fast Fourier transform using a harmonic signal in noise at both high and low signal-to-noise ratios (SNRs) on a theoretical basis. Then, we validate the theoretical analysis using simulations. The theoretical analysis and simulation results using four traditional window functions show that the optimal window is determined depending on the SNR; the estimation errors are the smallest for the rectangular window at low SNR, the Hamming and Hanning windows at mid SNR, and the Blackman window at high SNR. In addition, we analyze the simulation results using the signal-to-noise floor ratio, which appears to be more effective than the conventional SNR in determining the optimal window.
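As a hedged sketch of the estimator family this abstract analyzes, the snippet below implements plain (uncorrected) quadratically interpolated frequency estimation: fit a parabola through the log-magnitudes of the peak DFT bin and its two neighbors. A Hann window and a noise-free sinusoid are assumed; the paper's corrections, zero-padding, and noise analysis are not reproduced.

```python
# Quadratically interpolated DFT peak estimation on a Hann-windowed sinusoid.
# The direct O(N^2) DFT keeps the example dependency-free.

import cmath, math

def dft_mag(x):
    """Magnitude spectrum via a direct DFT (fine for a small demo)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

def qi_peak(x):
    """Frequency estimate (in bins) from a parabola through the log-magnitudes
    of the peak bin and its two neighbors."""
    mag = dft_mag(x)
    k = max(range(1, len(x) // 2), key=lambda i: mag[i])    # coarse peak bin
    a, b, c = math.log(mag[k - 1]), math.log(mag[k]), math.log(mag[k + 1])
    return k + 0.5 * (a - c) / (a - 2 * b + c)              # parabola vertex

# A Hann-windowed sinusoid at 5.3 bins, N = 64, noise-free
N, f = 64, 5.3
x = [(0.5 - 0.5 * math.cos(2 * math.pi * n / N)) *
     math.cos(2 * math.pi * f * n / N) for n in range(N)]
print(qi_peak(x))   # close to 5.3
```

With other windows (rectangular, Blackman) the interpolation bias differs, which is exactly why the window choice studied in the paper matters.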
Qin CHENG Linghua ZHANG Bo XUE Feng SHU Yang YU
As an emerging technology, device-free localization (DFL), which uses wireless sensor networks to detect targets not carrying any electronic device, has spawned extensive applications, such as security safeguards and smart homes or hospitals. Previous studies formulate DFL as a classification problem, but there are still challenges in terms of accuracy and robustness. In this paper, we exploit a generalized thresholding algorithm with parameter p as a penalty function to solve inverse problems with sparsity constraints for DFL. The function applies less bias to the large coefficients and penalizes small coefficients by reducing the value of p. By exploiting the distinctive capability of the p thresholding function to measure sparsity, the proposed approach achieves accurate and robust localization performance in challenging environments. Extensive experiments show that the algorithm outperforms current alternatives.
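An illustrative sketch (not the paper's exact operator) of the thresholding behavior described: soft thresholding (the p = 1 endpoint of the family) shrinks every surviving coefficient, biasing large ones, while hard thresholding (the p -> 0 endpoint) keeps large coefficients untouched, which is the effect the abstract attributes to decreasing p.

```python
# Two endpoints of the thresholding family used in sparse recovery.

def soft_threshold(x, t):
    """p = 1: shrink every coefficient toward zero by t."""
    return [v - t if v > t else v + t if v < -t else 0.0 for v in x]

def hard_threshold(x, t):
    """p -> 0: keep large coefficients exactly, zero out the rest."""
    return [v if abs(v) > t else 0.0 for v in x]

coeffs = [5.0, -3.0, 0.4, -0.2]
print(soft_threshold(coeffs, 0.5))   # large coefficients shrunk by 0.5
print(hard_threshold(coeffs, 0.5))   # large coefficients preserved exactly
```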
Masayuki FUKUMITSU Shingo HASEGAWA
Multisignatures enable multiple users to sign a message interactively. Many instantiations of multisignatures have been proposed; however, most of them are not quantum-secure because they are based on the integer factoring assumption or the discrete logarithm assumption. Although some constructions based on lattice problems, which are believed to be quantum-secure, do exist, their security reductions are loose. In this paper, we aim to improve the tightness of the security reduction of lattice-based multisignature schemes. Our basic strategy is to combine the multisignature scheme proposed by El Bansarkhani and Sturm with the lattice-based signature scheme by Abdalla, Fouque, Lyubashevsky, and Tibouchi, which has a tight security reduction from the Ring-LWE (Ring Learning with Errors) assumption. Our result shows that proof techniques for standard signature schemes can be applied to multisignature schemes, allowing us to improve the polynomial loss factor with respect to the Ring-LWE assumption. Our second result addresses the problem with the security proofs of existing lattice-based multisignature schemes pointed out by Damgård, Orlandi, Takahashi, and Tibouchi. We employ a new cryptographic assumption, called the Rejected-Ring-LWE assumption, to complete the security proof.
Seyed Mohammadhossein TABATABAEE Jean-Yves LE BOUDEC Marc BOYER
Weighted Round-Robin (WRR) is often used, due to its simplicity, for scheduling packets or tasks. With WRR, a number of packets equal to the weight allocated to a flow can be served consecutively, which leads to a bursty service. Interleaved Weighted Round-Robin (IWRR) is a variant that mitigates this effect. We are interested in finding bounds on the worst-case delay obtained with IWRR. To this end, we use a network calculus approach and find a strict service curve for IWRR. The result is obtained using the pseudo-inverse of a function. We show that the strict service curve is the best obtainable one, and that delay bounds derived from it are tight (i.e., worst-case) for flows of packets of constant size. Furthermore, the IWRR strict service curve dominates the previously published strict service curve for WRR. We provide numerical examples to illustrate the reduction in worst-case delays achieved by IWRR compared to WRR.
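The burstiness difference between the two schedulers is easy to see by simulating one round with saturated flows and unit-size packets; this small sketch (an illustration, not the paper's model) prints the per-round service order of each discipline.

```python
# Service order over one round for plain WRR vs. interleaved WRR (IWRR),
# assuming every flow is always backlogged and packets have unit size.

def wrr_round(weights):
    """WRR serves each flow's whole weight back to back (bursty)."""
    order = []
    for flow, w in enumerate(weights):
        order += [flow] * w
    return order

def iwrr_round(weights):
    """IWRR serves at most one packet per flow in each inner cycle."""
    order = []
    for cycle in range(1, max(weights) + 1):
        for flow, w in enumerate(weights):
            if w >= cycle:            # flow still has credit in this cycle
                order.append(flow)
    return order

weights = [3, 1, 2]
print(wrr_round(weights))    # [0, 0, 0, 1, 2, 2]
print(iwrr_round(weights))   # [0, 1, 2, 0, 2, 0]
```

Interleaving spreads flow 0's three packets across the round instead of serving them consecutively, which is what reduces the worst-case delay of competing flows.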
Xiang WANG Xin LU Meiming FU Jiayi LIU Hongyan YANG
Leveraging Network Function Virtualization (NFV) and Software Defined Networking (SDN), network slicing (NS) is recognized as a key technology that enables a 5G Infrastructure Provider (InP) to support diversified vertical services over a shared physical infrastructure. A 5G end-to-end (E2E) NS is a logical virtual network that spans the 5G network. Existing works on improving the reliability of 5G mainly focus on reliable wireless communications; however, the reliability of an NS also refers to the ability of the NS system to provide continued service. Hence, in this work, we focus on enhancing the reliability of the NS to cope with physical network node failures, and we investigate the NS deployment problem to improve the reliability of the system represented by the NS. The reliability of an NS is enhanced by two means: first, by considering the topology information of an NS, critical virtual nodes are backed up to allow failure recovery; second, the embedding of the augmented NS virtual network is optimized for failure avoidance. We formulate the embedding of the augmented virtual network (AVN) to maximize the survivability of the NS system as the survivable AVN embedding (S-AVNE) problem through an Integer Linear Program (ILP) formulation. Due to the complexity of the problem, a heuristic algorithm is introduced. Finally, we conduct intensive simulations to evaluate the performance of our algorithm with regard to improving the reliability of the NS system.
Xin LU Xiang WANG Lin PANG Jiayi LIU Qinghai YANG Xingchen SONG
Network Slicing (NS) is recognized as a key technology for the 5G network in providing tailored network services towards various types of verticals over a shared physical infrastructure. It offers the flexibility of on-demand provisioning of diverse services based on tenants' requirements in a dynamic environment. In this work, we focus on two important issues related to 5G Core slices: the deployment and the reconfiguration of 5G Core NSs. First, for slice deployment, balancing the workloads of the underlying network is beneficial in mitigating resource fragmentation when accommodating future, unknown network slice requests. In this vein, we formulate a load-balancing-oriented 5G Core NS deployment problem through an Integer Linear Program (ILP) formulation. Second, for slice reconfiguration, we propose a reactive strategy to accommodate a rejected NS request by reorganizing the already-deployed NSs. Specifically, the NS deployment algorithm is reutilized with slacked physical resources to find the congested part of the network due to which the NS was rejected. Then, these congested physical nodes and links are reconfigured by migrating virtual network functions and virtual links, to re-balance the utilization of the whole physical network. To evaluate the performance of the proposed deployment and reconfiguration algorithms, extensive simulations have been conducted. The results show that our deployment algorithm performs better in resource balancing and hence achieves a higher acceptance ratio than existing works. Moreover, our reconfiguration algorithm improves resource utilization by accommodating more NSs in a dynamic environment.
Zhentian WU Feng YAN Zhihua YANG Jingya YANG
This paper studies the use of price incentives to shift bandwidth demand from peak to non-peak periods. In particular, cost discounts decrease as peak monthly usage increases. We take into account the delay sensitivity of different applications: during peak hours, the usage of hard real-time applications (HRAs) is not counted against the user's monthly data cap, while the usage of other applications (OAs) is. As a result, users may voluntarily delay or abandon other applications in order to get a higher fee discount. A new data rate control algorithm is then proposed. The algorithm allocates the data rate according to the priority of the source, which is determined by two factors: (i) the allocated data rate and (ii) the waiting time.
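A hypothetical sketch of the priority rule described: a source's priority grows with its waiting time and falls with the rate it has already been allocated, and spare capacity goes to the highest-priority source at each step. The linear form and the weighting constants are assumptions, not the paper's values.

```python
# Toy priority-based rate allocation: priority rises with waiting time and
# falls with already-allocated rate (alpha, beta are assumed weights).

def priority(allocated_rate, waiting_time, alpha=1.0, beta=1.0):
    return beta * waiting_time - alpha * allocated_rate

sources = {"A": {"rate": 4.0, "wait": 1.0},    # well-served, short wait
           "B": {"rate": 1.0, "wait": 3.0}}    # starved, long wait
winner = max(sources, key=lambda s: priority(sources[s]["rate"],
                                             sources[s]["wait"]))
print(winner)   # the long-waiting, low-rate source wins the next allocation
```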
Hyungjin CHO Seongmin PARK Youngkwon PARK Bomin CHOI Dowon KIM Kangbin YIM
As of February 2021, with intensifying competition for the commercialization of 5G mobile communication, 5G SA networks and Vo5G are expected to be commercialized soon. 5G mobile communication aims to provide a transmission speed of 20 Gbps, 20 times faster than 4G, connection of at least 1 million devices per km2, and a transmission delay of 1 ms, 10 times shorter than 4G. Meeting these goals required various technological developments, and technologies such as Massive MIMO (Multiple-Input and Multiple-Output), mmWave, and small cell networks were developed and applied in the 5G access network. In the core network area, however, the components constituting the LTE (Long Term Evolution) core network are reused as-is in the NSA (Non-Standalone) architecture, and changes have occurred only in the SA (Standalone) architecture. Also, in the network area for providing voice service, the IMS (IP Multimedia Subsystem) infrastructure is still used in the SA architecture. The issue here is that while 5G mobile communication is evolving openly to provide various services, its security elements remain vulnerable to various cyber-attacks because they maintain the same form as before. Therefore, in this paper, we examine what the network standards for providing 5G voice service consist of and which of their aspects are vulnerable in terms of security. We then suggest possible attack scenarios exploiting these security issues, consider whether such attacks can actually occur, and discuss countermeasures.
Kazunari TAKASAKI Ryoichi KIDA Nozomu TOGAWA
With the widespread use of Internet of Things (IoT) devices in recent years, we utilize a variety of hardware devices in our daily life. On the other hand, hardware security issues are emerging. Power analysis is one method of detecting anomalous behaviors, but it is hard to apply to IoT devices on which an operating system and various software programs are running. In this paper, we propose an anomalous behavior detection method for an IoT device that extracts application-specific power behaviors. First, we measure the power consumption of an IoT device and obtain the power waveform. Next, we extract an application-specific power waveform by eliminating the steady factor from the obtained waveform. Finally, we extract feature values from the application-specific power waveform and detect an anomalous behavior by utilizing the local outlier factor (LOF) method. We conduct two experiments to show how our proposed method works: one runs three application programs and an anomalous application program at random, and the other runs three application programs in series with an anomalous application program appearing very rarely. The application programs in both experiments are implemented on a single-board computer. The experimental results demonstrate that the proposed method successfully detects anomalous behaviors by extracting application-specific power behaviors, while existing approaches cannot.
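The detection stage uses the standard local outlier factor (LOF). Below is a self-contained sketch of LOF scoring on toy feature vectors (imagined here as, say, mean and peak of a power waveform); the feature values are made up, and the paper's feature extraction is not reproduced.

```python
# Minimal local outlier factor (LOF): a point whose local density is much lower
# than that of its neighbors gets a score well above 1.

import math

def knn(data, i, k):
    """The k nearest neighbors of point i as (distance, index) pairs."""
    dists = sorted((math.dist(data[i], data[j]), j)
                   for j in range(len(data)) if j != i)
    return dists[:k]

def lof_scores(data, k=3):
    neigh = [knn(data, i, k) for i in range(len(data))]
    kdist = [neigh[i][-1][0] for i in range(len(data))]     # k-distance
    def lrd(i):                                             # local reachability density
        reach = [max(kdist[j], d) for d, j in neigh[i]]
        return len(reach) / sum(reach)
    dens = [lrd(i) for i in range(len(data))]
    return [sum(dens[j] for _, j in neigh[i]) / (k * dens[i])
            for i in range(len(data))]

# Four normal runs cluster together; the last point is an anomalous behavior
features = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.0, 1.2), (5.0, 5.0)]
scores = lof_scores(features, k=3)
print(scores[-1] > max(scores[:-1]))   # the outlier has the largest LOF score
```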
In [31], Shin et al. proposed a Leakage-Resilient and Proactive Authenticated Key Exchange (LRP-AKE) protocol for credential services, which provides not only a higher level of security against leakage of stored secrets but also secrecy of the private key with respect to the involved server. In this paper, we discuss a problem in the security proof of the LRP-AKE protocol and then propose a modified LRP-AKE protocol with a simple and effective countermeasure for the problem. We also formally prove the AKE security and mutual authentication of the entire modified LRP-AKE protocol. In addition, we describe several extensions of the (modified) LRP-AKE protocol, including 1) the synchronization issue between the client's and server's stored secrets; 2) a randomized ID for the provision of client privacy; and 3) a solution for preventing server compromise-impersonation attacks. Finally, we evaluate the performance overhead of the LRP-AKE protocol and show its test vectors. The performance evaluation confirms that the LRP-AKE protocol has almost the same efficiency as the (plain) Diffie-Hellman protocol, which provides no authentication at all.
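For scale, the (plain) unauthenticated Diffie-Hellman exchange that the performance comparison refers to looks like the toy sketch below; the modulus is deliberately tiny and illustrative only, whereas real deployments use 2048-bit-plus groups or elliptic curves.

```python
# Plain (unauthenticated) Diffie-Hellman over a small toy prime group.

import secrets

p = 0xFFFFFFFB            # 2^32 - 5, a toy prime (far too small for real use)
g = 5

a = secrets.randbelow(p - 2) + 1     # client's ephemeral secret in [1, p-2]
b = secrets.randbelow(p - 2) + 1     # server's ephemeral secret
A = pow(g, a, p)                     # client -> server
B = pow(g, b, p)                     # server -> client

# Both sides derive g^(ab) mod p without ever sending a or b
print(pow(B, a, p) == pow(A, b, p))  # True
```

Nothing here authenticates either party, which is exactly the gap that AKE protocols such as LRP-AKE close.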
Yoichi MATSUO Tatsuaki KIMURA Ken NISHIMATSU
When a failure occurs in a network element, such as a switch, router, or server, network operators need to recognize the service impact, such as the time to recovery from the failure or the severity of the failure, since the service impact is essential information for handling failures. In this paper, we propose a Deep learning based Service Impact Prediction system (DeepSIP), which predicts the service impact of a network failure in a network element using a temporal multimodal convolutional neural network (CNN). More precisely, DeepSIP predicts the time to recovery from the failure and the loss of traffic volume due to the failure on the basis of information from syslog messages and traffic volume. Since the time to recovery is useful information for a service level agreement (SLA) and the loss of traffic volume is directly related to the severity of the failure, we regard the time to recovery and the loss of traffic volume as the service impact. The service impact is challenging to predict, since it depends on the type of network failure and the traffic volume when the failure occurs. Moreover, network elements do not explicitly contain any information about the service impact. To extract the type of network failure and predict the service impact, we use syslog messages and past traffic volume. However, syslog messages and traffic volume are also challenging to analyze because these data are multimodal, strongly correlated, and temporally dependent. To extract useful features for prediction, we develop a temporal multimodal CNN. We experimentally evaluated DeepSIP in terms of accuracy by comparing it with other NN-based methods using synthetic and real datasets. For both datasets, the results show that DeepSIP outperformed the baselines.
Hongwei YANG Fucheng XUE Dan LIU Li LI Jiahui FENG
Service composition optimization is a classic NP-hard problem. How to quickly select high-quality services that meet user needs from a large number of candidates is a hot topic in cloud service composition research. In this study, an efficient second-order beetle swarm optimization with global search ability is proposed to solve the cloud service composition optimization problem. First, the beetle antennae search algorithm is introduced into a modified particle swarm optimization algorithm, the population is initialized using a chaotic sequence, and modified nonlinear dynamic trigonometric learning factors are adopted to control the particles' expansion capacity and global convergence capability. Second, modified secondary oscillation factors are incorporated, increasing the search precision and global searching ability of the algorithm, and an adaptive step adjustment is utilized to improve its stability. Experimental results based on a real data set indicate that the proposed global optimization algorithm can solve web service composition optimization problems in a cloud environment; it exhibits excellent global searching ability, comparatively fast convergence, favorable stability, and low time cost.
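For orientation, here is a minimal sketch of the beetle antennae search (BAS) update that the abstract injects into particle swarm optimization: the "beetle" senses the objective at two antennae and steps toward the better side. The step schedule, constants, and test function are illustrative assumptions, not the paper's hybrid algorithm.

```python
# One-beetle antennae search on a convex test function: sense the objective on
# both sides of a random direction, step toward the better side, shrink the step.

import math, random

def bas_minimize(f, x0, step=1.0, d=0.5, iters=100, seed=0):
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        b = [rng.uniform(-1, 1) for _ in x]              # random antenna direction
        norm = math.sqrt(sum(v * v for v in b)) or 1.0
        b = [v / norm for v in b]
        left = [xi + d * bi for xi, bi in zip(x, b)]     # sense each antenna
        right = [xi - d * bi for xi, bi in zip(x, b)]
        sign = 1.0 if f(left) < f(right) else -1.0
        x = [xi + sign * step * bi for xi, bi in zip(x, b)]
        step *= 0.95                                     # decay step and antenna length
        d *= 0.95
    return x

sphere = lambda v: sum(vi * vi for vi in v)
x = bas_minimize(sphere, [3.0, -2.0])
print(sphere(x))   # small residual on this convex test
```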
Hiroki OKADA Atsushi TAKAYASU Kazuhide FUKUSHIMA Shinsaku KIYOMOTO Tsuyoshi TAKAGI
We propose a new lattice-based digital signature scheme MLWRSign by modifying Dilithium, which is one of the second-round candidates of NIST's call for post-quantum cryptographic standards. To the best of our knowledge, our scheme MLWRSign is the first signature scheme whose security is based on the (module) learning with rounding (LWR) problem. Due to the simplicity of the LWR, the secret key size is reduced by approximately 30% in our scheme compared to Dilithium, while achieving the same level of security. Moreover, we implemented MLWRSign and observed that the running time of our scheme is comparable to that of Dilithium.
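A toy illustration of the learning-with-rounding (LWR) operation underlying MLWRSign: instead of adding explicit noise as in LWE, the inner product is deterministically rounded from modulus q down to p. The parameters below are tiny illustrative values, nowhere near the scheme's.

```python
# One LWR sample: b = round((p/q) * <a, s>) mod p, with no explicit noise term.

import random

q, p, n = 1024, 256, 8
rng = random.Random(42)

s = [rng.randrange(q) for _ in range(n)]    # secret vector
a = [rng.randrange(q) for _ in range(n)]    # public random vector

inner = sum(ai * si for ai, si in zip(a, s)) % q
b = (p * inner + q // 2) // q % p           # round from Z_q down to Z_p
print(0 <= b < p)                           # True: the sample lies in Z_p
```

The determinism of the rounding (no sampled error) is what lets LWR-based schemes shave key size, as the abstract notes for MLWRSign versus Dilithium.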