To mitigate the interference caused by frequency reuse between inter-layer and intra-layer users in Non-Orthogonal Multiple Access (NOMA) based device-to-device (D2D) communication underlaying cellular systems, this paper proposes a joint optimization strategy that combines user grouping and resource allocation. Specifically, the optimization problem is formulated to maximize the sum rate while guaranteeing the minimum rate of cellular users, with three optimization variables: user grouping, sub-channel allocation, and power allocation. This problem is a mixed-integer nonlinear programming (MINLP) problem and is hard to solve directly, so we divide it into two sub-problems: user grouping and resource allocation. First, we classify D2D users into D2D pairs or D2D NOMA groups using a greedy algorithm. Then, for resource allocation, we allocate sub-channels to D2D users with a swap-matching algorithm to reduce co-channel interference, and optimize the D2D transmission power with a local search algorithm. Simulation results show that, compared to other schemes, the proposed algorithm significantly improves the system sum rate and spectral utilization.
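The swap-matching step can be sketched as follows: a minimal illustration, not the paper's exact algorithm. The `rate` callback and the approval rule (neither user loses, at least one gains) are assumptions for the sketch.

```python
import itertools

def swap_matching(users, rate, assign):
    """Repeatedly approve beneficial channel swaps between pairs of users.

    rate(u, c, assign) -> achievable rate of user u on channel c under the
    full assignment (so co-channel interference can be modeled).  A swap is
    approved only if neither user's rate decreases and the pair's sum rate
    strictly improves; the loop stops when no such 'swap-blocking' pair exists.
    """
    improved = True
    while improved:
        improved = False
        for u, v in itertools.combinations(users, 2):
            cu, cv = assign[u], assign[v]
            if cu == cv:
                continue
            trial = dict(assign)
            trial[u], trial[v] = cv, cu
            if (rate(u, cv, trial) >= rate(u, cu, assign)
                    and rate(v, cu, trial) >= rate(v, cv, assign)
                    and rate(u, cv, trial) + rate(v, cu, trial)
                        > rate(u, cu, assign) + rate(v, cv, assign)):
                assign = trial
                improved = True
    return assign
```

Because every approved swap strictly increases the sum rate, the procedure terminates at a two-sided exchange-stable matching.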
Recently, multivariate time-series data has been generated in various environments, such as sensor networks and IoT, making anomaly detection in time-series data an essential research topic. Unsupervised learning anomaly detectors identify anomalies by training a model on normal data so that it produces high residuals for abnormal observations. However, a fundamental issue arises because anomalies do not consistently result in high residuals, necessitating a focus on the time-series patterns of residuals rather than individual residual sizes. In this paper, we present a novel framework comprising two serialized anomaly detectors: the first model calculates residuals as usual, while the second one evaluates the time-series pattern of the computed residuals to determine whether they are normal or abnormal. Experiments conducted on real-world time-series data demonstrate the effectiveness of our proposed framework.
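The two-stage idea can be sketched with toy stand-ins for both detectors (a moving-average predictor and a window-drift test; both are illustrative assumptions, not the paper's models):

```python
def residuals(series, k=3):
    """First-stage detector (illustrative): residual = observation minus
    the mean of the previous k observations."""
    res = []
    for t in range(len(series)):
        hist = series[max(0, t - k):t]
        pred = sum(hist) / len(hist) if hist else series[t]
        res.append(series[t] - pred)
    return res

def pattern_flags(res, w=4, thresh=1.0):
    """Second-stage detector (illustrative): rather than thresholding each
    residual individually, flag windows whose residuals drift consistently
    in one direction -- a time-series pattern of residuals that a per-point
    threshold can miss."""
    flags = [False] * len(res)
    for t in range(w, len(res) + 1):
        win = res[t - w:t]
        if abs(sum(win)) / w > thresh:       # mean signed residual in window
            for i in range(t - w, t):
                flags[i] = True
    return flags
```

On a level-shift anomaly, the individual residuals decay quickly, but the windowed drift statistic keeps the whole anomalous region flagged.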
Kota HISAFURU Kazunari TAKASAKI Nozomu TOGAWA
In recent years, with the widespread adoption of Internet of Things (IoT) devices, security issues for hardware devices have been increasing, and detecting their anomalous behaviors has become quite important. One effective method for detecting anomalous behaviors of IoT devices is to utilize the consumed energy and operation duration time extracted from their power waveforms. However, the existing methods do not consider the shape of the time-series data and cannot distinguish between power waveforms with similar consumed energy and duration time but different shapes. In this paper, we propose a method for detecting anomalous behaviors based on the shape of time-series data by incorporating a shape-based distance (SBD) measure. The proposed method first obtains the entire power waveform of the target IoT device and extracts several application power waveforms. After that, we apply the required invariances to them so that the SBD between every two application power waveforms can be obtained effectively. Based on the SBD values, the local outlier factor (LOF) method can finally distinguish between normal application behaviors and anomalous application behaviors. Experimental results demonstrate that the proposed method successfully detects anomalous application behaviors, while the existing state-of-the-art method fails to detect them.
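SBD is commonly defined as one minus the maximum normalized cross-correlation over all shifts; a minimal sketch of that definition (the LOF step and the paper's specific invariances are omitted):

```python
import math

def sbd(x, y):
    """Shape-based distance: 1 - max normalized cross-correlation over all
    alignments.  0 for identical shapes (up to amplitude scaling and shift),
    up to 2 for exactly opposite shapes."""
    nx = math.sqrt(sum(v * v for v in x))
    ny = math.sqrt(sum(v * v for v in y))
    if nx == 0 or ny == 0:
        return 1.0
    n = len(x)
    best = -1.0
    for shift in range(-(n - 1), n):
        # cross-correlation at this shift: sum over indices where both
        # sequences are defined
        cc = sum(x[i] * y[i - shift]
                 for i in range(max(0, shift), min(n, n + shift)))
        best = max(best, cc / (nx * ny))
    return 1.0 - best
```

Two waveforms with the same energy and duration but different shapes yield a large SBD, which is exactly the case the abstract says energy/duration features cannot separate.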
Seen from the Internet Service Provider (ISP) side, network traffic monitoring is an indispensable part of network service provisioning, facilitating the security and reliability of communication networks. Among the numerous traffic conditions, traffic anomalies deserve extra attention, since they significantly affect network performance. With the advancement of Machine Learning (ML), data-driven traffic anomaly detection algorithms have earned a strong reputation for their high accuracy and generality. However, they face challenges in efficient traffic feature extraction and computational complexity, especially when the evolving nature of the traffic process is taken into consideration. In this paper, we propose an online learning framework for traffic anomaly detection that embraces Gaussian Processes (GP) and Sparse Representation (SR) in two steps. 1) To extract traffic features from past records and better understand them, we adopt a GP with a special kernel, i.e., a mixture of Gaussians in the spectral domain, which makes it possible to model the network traffic more accurately and thereby improve the performance of traffic anomaly detection. 2) To combat noise and modeling error, and observing the inherent self-similarity and periodicity of network traffic, we manually design a feature vector, based on which SR performs robust binary classification. Finally, we demonstrate the superiority of the proposed framework in terms of detection accuracy through simulation.
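A kernel that is a mixture of Gaussians in the spectral domain is commonly written, for a single input dimension, as k(τ) = Σ_q w_q exp(−2π²τ²v_q) cos(2πτμ_q); a minimal sketch of that form (whether the paper uses exactly this parameterization is an assumption):

```python
import math

def sm_kernel(tau, weights, means, variances):
    """Spectral mixture kernel evaluated at lag tau.

    Each component q is a Gaussian bump in the spectral density centered at
    frequency means[q] with variance variances[q]; in the time domain it
    becomes a cosine at that frequency under a Gaussian decay envelope, so
    periodic traffic components appear directly as spectral peaks."""
    return sum(w * math.exp(-2.0 * math.pi ** 2 * tau ** 2 * v)
               * math.cos(2.0 * math.pi * tau * mu)
               for w, mu, v in zip(weights, means, variances))
```

At τ = 0 the kernel equals the sum of the component weights, and a component with a near-zero spectral variance behaves like a pure cosine, capturing strict periodicity such as daily traffic cycles.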
Non-orthogonal multiple access based multiple-input multiple-output (MIMO-NOMA) systems have been widely used to improve users' achievable rates in millimeter wave (mmWave) communication. To meet the different requirements of each user in multi-user beams, this paper proposes a power allocation algorithm that satisfies the quality of service (QoS) of the head user while maximizing the minimum rate of the edge users from the perspective of max-min fairness. In this paper, the user closest to the base station (BS) in each beam is the head user, and the other users are the edge users. An optimization problem under the max-min fairness criterion is then formulated under the constraints of the users' minimum rate requirements and the total transmit power of the BS. The bisection method and the Karush-Kuhn-Tucker (KKT) conditions are used to solve this complex non-convex problem, and simulation results show that both the minimum achievable rate of the edge users and the average rate of all users are significantly improved compared with traditional MIMO-NOMA, which considers only max-min fairness among users.
Ryota KOBAYASHI Takanori HARA Yasuaki YUDA Kenichi HIGUCHI
This paper extends our previously reported non-orthogonal multiple access (NOMA)-based highly-efficient and low-latency hybrid automatic repeat request (HARQ) method for ultra-reliable low latency communications (URLLC) to the case with inter-base station cooperation. In the proposed method, delay-sensitive URLLC packets are preferentially multiplexed with best-effort enhanced mobile broadband (eMBB) packets in the same channel using superposition coding to reduce the transmission latency of the URLLC packet while alleviating the throughput loss in eMBB. Although data transmission to the URLLC terminal is conducted by multiple base stations based on inter-base station cooperation, the proposed method allocates radio resources to URLLC terminals, including scheduling (bandwidth allocation) and power allocation, at each base station independently to achieve the short transmission latency required for URLLC. To avoid excessive radio resource assignment to URLLC terminals due to independent resource assignment at each base station, which may result in throughput degradation for eMBB terminals, we employ an adaptive path-loss-dependent weighting approach in the scheduling-metric calculation. This achieves appropriate radio resource assignment to URLLC terminals while reducing the packet error rate (PER) and transmission delay time thanks to the inter-base station cooperation. We show that the proposed method significantly improves the overall performance of a system that provides simultaneous eMBB and URLLC services.
Ryota KOBAYASHI Yasuaki YUDA Kenichi HIGUCHI
Hybrid automatic repeat request (HARQ) is an essential technology that efficiently reduces the transmission error rate. However, for ultra-reliable low latency communications (URLLC) in the 5th generation mobile communication systems and beyond, the increase in latency due to retransmission must be minimized in HARQ. In this paper, we propose a highly-efficient low-latency HARQ method built on non-orthogonal multiple access (NOMA) for URLLC while minimizing the performance loss for coexisting services (use cases) such as enhanced mobile broadband (eMBB). The proposed method can be seen as an extension of the conventional link-level non-orthogonal HARQ to a system-level protocol. This mitigates the problems of the conventional link-level non-orthogonal HARQ, namely decoding errors under poor channel conditions and increased transmission delay due to restrictions on retransmission timing. In the proposed method, delay-sensitive URLLC packets are preferentially multiplexed with best-effort eMBB packets in the same channel using superposition coding to reduce the transmission latency of the URLLC packet while alleviating the throughput loss in eMBB. This is achieved using a weighted channel-aware resource allocator (scheduler). The inter-packet interference multiplexed in the same channel is removed using a successive interference canceller (SIC) at the receiver. Furthermore, the transmission rates for the initial transmission and retransmission are controlled appropriately for each service in order to deal with decoding errors caused by errors in transmission rate control originating from a time-varying channel. We show that the proposed method significantly improves the overall performance of a system that simultaneously provides eMBB and URLLC services.
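A weighted channel-aware scheduling metric can be sketched, for illustration, as a proportional-fair metric with a multiplicative URLLC priority weight; the metric form and the weight value are assumptions, not the paper's design:

```python
def scheduling_metric(inst_rate, avg_rate, is_urllc, urllc_weight=8.0):
    """Illustrative weighted proportional-fair metric: instantaneous rate
    over long-term average rate, scaled up for delay-sensitive URLLC
    packets so they are preferentially multiplexed."""
    base = inst_rate / max(avg_rate, 1e-9)
    return urllc_weight * base if is_urllc else base

def pick_user(users):
    """users: list of (name, inst_rate, avg_rate, is_urllc).
    Returns the name with the largest scheduling metric."""
    return max(users, key=lambda u: scheduling_metric(u[1], u[2], u[3]))[0]
```

The weight lets a URLLC packet win the channel even when its instantaneous rate is worse, while an eMBB user with a much better channel can still be served, which is the throughput/latency trade-off the abstract describes.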
Non-orthogonal multiple access (NOMA), which combines multiple user signals and transmits the combined signal over one channel, can achieve high spectral efficiency for mobile communications. However, combining the multiple signals can lead to degradation of bit error rates (BERs) of NOMA under severe channel conditions. In order to improve the BER performance of NOMA, this paper proposes a new NOMA scheme based on orthogonal space-time block codes (OSTBCs). The proposed scheme transmits several multiplexed signals over their respective orthogonal time-frequency channels, and can gain diversity effects due to the orthogonality of OSTBC. Furthermore, the new scheme can detect the user signals using low-complexity linear detection in contrast with the conventional NOMA. The paper focuses on the Alamouti code, which can be considered the simplest OSTBC, and theoretically analyzes the performance of the linear detection. Computer simulations under the condition of the same bit rate per channel show that the Alamouti code based scheme using two channels is superior to the conventional NOMA using one channel in terms of BER performance. As shown by both the theoretical and simulation analyses, the linear detection for the proposed scheme can maintain the same BER performance as that of the maximum likelihood detection, when the two channels have the same frequency response and do not bring about any diversity effects, which can be regarded as the worst case.
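The low-complexity linear detection for the Alamouti code rests on its orthogonality; a minimal noiseless sketch of the standard combining rule (the NOMA superposition layer of the proposed scheme is not reproduced here):

```python
def alamouti_combine(y1, y2, h1, h2):
    """Standard linear Alamouti combining.

    With y1 = h1*s1 + h2*s2 + n1 received in slot 1 and
    y2 = -h1*conj(s2) + h2*conj(s1) + n2 in slot 2, the combiner outputs
    are proportional to (|h1|^2 + |h2|^2) * s_i, i.e., the two symbols
    decouple and each enjoys two-branch diversity gain."""
    s1_hat = h1.conjugate() * y1 + h2 * y2.conjugate()
    s2_hat = h2.conjugate() * y1 - h1 * y2.conjugate()
    return s1_hat, s2_hat
```

Because the cross terms cancel exactly, symbol-by-symbol detection after this combiner matches maximum likelihood for the orthogonal code, which is why linear detection suffices.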
Wan Yeon LEE Yun-Seok CHOI Tong Min KIM
We propose a quantitative measurement technique for video forgery that eliminates the burden of deciding on the subtle boundary between normal and tampered patterns. We also propose an automatic adjustment scheme for the spatial and temporal target zones that maximizes the abnormality measurement of forged videos. Evaluation shows that the proposed scheme provides clear detection capability against both inter-frame and intra-frame forgeries.
Kengo TAJIRI Ryoichi KAWAHARA Yoichi MATSUO
Machine learning (ML) has been used for various tasks in network operations in recent years. However, since the scale of networks has grown and the amount of data generated has increased, it has become increasingly difficult for network operators to conduct their tasks with a single server using ML. Thus, ML with edge-cloud cooperation has been attracting attention for efficiently processing and analyzing large amounts of data. In the edge-cloud cooperation setting, transmission latency, bandwidth congestion, and the accuracy of ML tasks depend on how the data-processing load is balanced between edge servers and a cloud server, but the relationship is too complex to estimate. In this paper, we focus on monitoring anomalous traffic as an example of an ML task for network operations and formulate transmission latency, bandwidth congestion, and task accuracy under edge-cloud cooperation in terms of the ratio of the amount of data preprocessed at edge servers to that at a cloud server. Moreover, we formulate an optimization problem under latency and bandwidth constraints to select the proper ratio using our formulation. By solving this optimization problem, the optimal load balance between edge servers and a cloud server can be selected, and the accuracy of anomalous traffic monitoring can be estimated. Our formulation and optimization framework can be applied to other ML tasks by considering the generating distribution of the data and the type of ML model. In accordance with our formulation, we simulated the optimal load balance of edge-cloud cooperation in a topology that mimics a Japanese network and conducted an anomalous traffic detection experiment using real traffic data to compare the accuracy estimated by our formulation with the actual accuracy from the experiment.
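The ratio-selection step can be sketched as a constrained grid search (a stand-in for the paper's optimization; the model functions `latency`, `bandwidth`, and `accuracy` are assumed inputs, their exact forms from the paper's formulation are not reproduced):

```python
def choose_ratio(ratios, latency, bandwidth, accuracy,
                 latency_max, bandwidth_max):
    """Among candidate edge/cloud preprocessing ratios r that satisfy the
    transmission-latency and bandwidth-congestion constraints, return the
    one with the highest estimated task accuracy (None if none feasible).

    latency(r), bandwidth(r), accuracy(r) are model functions of the ratio
    of data preprocessed at edge servers versus the cloud server."""
    feasible = [r for r in ratios
                if latency(r) <= latency_max and bandwidth(r) <= bandwidth_max]
    return max(feasible, key=accuracy, default=None)
```

With monotone latency/bandwidth models this reduces to intersecting two intervals and maximizing accuracy over the intersection, which is how the constraints trade off against estimated accuracy.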
Thin Tharaphe THEIN Yoshiaki SHIRAISHI Masakatu MORII
With a rapidly escalating number of sophisticated cyber-attacks, protecting Internet of Things (IoT) networks against unauthorized activity is a major concern. The detection of malicious attack traffic is thus crucial for IoT security to prevent unwanted traffic. However, existing traditional malicious traffic detection systems, which rely on a supervised machine learning approach, need a considerable number of benign and malware traffic samples to train the machine learning models. Moreover, in the case of zero-day attacks, only a few labeled traffic samples are accessible for analysis. To deal with this, we propose a few-shot malicious IoT traffic detection system with a prototypical graph neural network. The proposed approach does not require prior knowledge of network payload binaries or network traffic signatures. The model is trained on labeled traffic data and tested to evaluate its ability to detect new types of attacks when only a few labeled traffic samples are available. The proposed detection system first organizes the network traffic into bidirectional flows and visualizes each binary traffic flow as a color image. A neural network is then applied to the visualized traffic to extract important features. After that, using the proposed few-shot graph neural network approach, the model is trained on different few-shot tasks to generalize it to new unseen attacks. The proposed model is evaluated on a network traffic dataset consisting of benign traffic and traffic corresponding to six types of attacks. The results revealed that our proposed model achieved an F1 score of 0.91 and 0.94 in 5-shot and 10-shot classification, respectively, and outperformed the baseline models.
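The prototypical decision rule behind such few-shot classifiers can be sketched as follows (the embedding network and the graph component are omitted; this shows only the standard prototype-and-nearest-distance step):

```python
def prototypes(support):
    """support: dict class -> list of embedding vectors from the few labeled
    ('shot') samples; each class prototype is the mean embedding."""
    return {c: [sum(col) / len(vecs) for col in zip(*vecs)]
            for c, vecs in support.items()}

def classify(query, protos):
    """Assign a query embedding to the class of its nearest prototype
    (squared Euclidean distance) -- the decision rule of prototypical
    networks, which needs no retraining for new attack classes."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda c: d2(query, protos[c]))
```

A new attack type is handled by computing one more prototype from its few labeled flows, which is what makes the approach suitable for zero-day settings with scarce labels.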
Megumi ASADA Nobuhide NONAKA Kenichi HIGUCHI
We propose an efficient hybrid automatic repeat request (HARQ) method that simultaneously achieves packet combining and resolution of the collisions of random access identifiers (RAIDs) during retransmission in a non-orthogonal multiple access (NOMA)-based random access system. Here, the RAID functions as a separator for simultaneously received packets that use the same channel in NOMA. An example of this is a scrambling code used in 4G and 5G systems. Since users independently select a RAID from the candidate set prepared by the system, the decoding of received packets fails when multiple users select the same RAID. Random RAID reselection by each user when attempting retransmission can resolve a RAID collision; however, packet combining between the previous and retransmitted packets is not possible in this case because the base station receiver does not know the relationship between the RAID of the previously transmitted packet and that of the retransmitted packet. To address this problem, we propose a HARQ method that employs novel hierarchical tree-structured RAID groups in which the RAID for the previous packet transmission has a one-to-one relationship with the set of RAIDs for retransmission. The proposed method resolves RAID collisions at retransmission by randomly reselecting for each user a RAID from the dedicated RAID set from the previous transmission. Since the relationship between the RAIDs at the previous transmission and retransmission is known at the base station, packet combining is achieved simultaneously. Computer simulation results show the effectiveness of the proposed method.
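The hierarchical tree-structured RAID groups can be sketched with a fixed fanout, so that each initial-transmission RAID owns a disjoint block of retransmission RAIDs (the fanout and the block-index layout are illustrative assumptions):

```python
import random

def retransmission_set(raid, fanout):
    """Dedicated RAID set for retransmitting after initial RAID `raid`:
    its block of `fanout` children in the tree.  Blocks are disjoint, so
    the mapping from a retransmission RAID back to its parent is
    one-to-one, as the proposed method requires."""
    return list(range(raid * fanout, raid * fanout + fanout))

def parent_raid(retx_raid, fanout):
    """Base-station side: recover which initial RAID a retransmission
    RAID belongs to, enabling packet combining across (re)transmissions."""
    return retx_raid // fanout

def reselect(raid, fanout, rng=random):
    """Collision resolution: each colliding user independently re-picks a
    RAID uniformly from the parent's dedicated child set."""
    return rng.choice(retransmission_set(raid, fanout))
```

Two users that collided on the same initial RAID reselect independently within the same child block, so with probability (fanout−1)/fanout per pair they separate, while the base station can still combine each retransmission with the correct earlier packet via `parent_raid`.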
In this letter, we propose a feature-based knowledge distillation scheme that transfers knowledge between intermediate blocks of the teacher and student in flow-based architectures, specifically normalizing flows in our implementation. In addition to the knowledge transfer scheme, we examine how the configuration of the distillation positions impacts knowledge transfer performance. To evaluate the proposed ideas, we choose two knowledge distillation baseline models based on normalizing flows in different domains: CS-Flow for anomaly detection and SRFlow-DA for super-resolution. Performance comparisons with the baseline models on popular benchmark datasets show promising results along with improved inference speed, including an analysis of various configurations of the distillation positions in the proposed scheme.
He TIAN Kaihong GUO Xueting GUAN Zheng WU
In order to improve the efficiency of network traffic anomaly detection, we first model network flows based on complex networks. To handle the uncertainty and fuzziness between network traffic characteristics and network states, the deviation from the normal network state is measured uniformly using deviation intervals, and intuitionistic fuzzy sets (IFSs) are established for the various characteristics of the network model; the membership degree, non-membership degree, and hesitation margin of the IFSs quantify how strongly a value under test belongs to the corresponding network state. Then, the knowledge measure (KM) is introduced into the intuitionistic fuzzy weighted geometric (IFWG) operator to weight and combine the IFS results of the different characteristics for the same network state, so that network anomalies are detected comprehensively. Finally, experiments are carried out on different network traffic datasets to analyze the evaluation indicators of the network characteristics under our method and to compare it with other existing anomaly detection methods. The experimental results demonstrate that the various network characteristics change inconsistently under anomalous attacks and that our method achieves higher anomaly detection accuracy, verifying its better detection performance.
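The aggregation step can be sketched with the standard IFWG operator (a minimal illustration; the paper additionally derives the weights from the knowledge measure, which is omitted here):

```python
def ifwg(pairs, weights):
    """Intuitionistic fuzzy weighted geometric operator.

    pairs: IFS values (mu_i, nu_i) for one network state under different
    traffic characteristics; weights: nonnegative, summing to 1.
    Aggregate: mu = prod(mu_i ** w_i), nu = 1 - prod((1 - nu_i) ** w_i);
    the hesitation margin is pi = 1 - mu - nu."""
    mu = 1.0
    one_minus_nu = 1.0
    for (m, n), w in zip(pairs, weights):
        mu *= m ** w
        one_minus_nu *= (1.0 - n) ** w
    nu = 1.0 - one_minus_nu
    return mu, nu, 1.0 - mu - nu
```

Aggregating identical IFS values reproduces them unchanged (idempotency), which is a quick sanity check on the operator.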
Xiaoyu WAN Yu WANG Zhengqiang WANG Zifu FAN Bin DUO
In this paper, we investigate the sum rate (SR) maximization problem for a downlink cooperative non-orthogonal multiple access (C-NOMA) system under in-phase and quadrature-phase (IQ) imbalance at the base station (BS) and the destinations. The BS communicates with the users via a half-duplex amplify-and-forward (HD-AF) relay under IQ imbalance. The SR maximization problem is formulated as a non-convex optimization with a quality of service (QoS) constraint for each user. We first use the variable substitution method to transform the non-convex SR maximization problem into an equivalent problem. Then, a joint power and rate allocation algorithm based on successive convex approximation (SCA) is proposed to maximize the SR of the system. Simulation results verify that the algorithm improves the SR of C-NOMA compared with the cooperative orthogonal multiple access (C-OMA) scheme.
Shuang WANG Hui CHEN Lei DING He SUI Jianli DING
To address the low minority-class identification rate caused by data imbalance in anomaly detection tasks, we propose a GAN-SR-based intrusion detection model for industrial control systems. First, to correct the minority-class imbalance in the dataset, a generative adversarial network (GAN) processes the dataset to reconstruct new minority-class training samples. Second, high-dimensional feature extraction is performed by a stacked non-symmetric deep auto-encoder (SNDAE), achieving low reconstruction error and short training times. A random forest (RF) is then built, and intrusion detection is carried out using the features retrieved by SNDAE. Experimental validation on the UNSW-NB15, SWaT, and Gas Pipeline datasets shows that the GAN-SR model outperforms SNDAE-SVM and SNDAE-KNN in terms of detection performance and stability.
Tao LIU Meiyue WANG Dongyan JIA Yubo LI
In the massive machine-type communication scenario, to address the problems of active user detection and channel estimation in grant-free non-orthogonal multiple access (NOMA) systems, new sets of non-orthogonal spreading sequences are proposed using zero/low correlation zone sequence sets with low correlation among multiple sets. Simulation results show that the resulting sequence set has low coherence, yielding reliable performance for compressed-sensing-based channel estimation and active user detection. Compared with traditional Zadoff-Chu (ZC) sequences, the new non-orthogonal spreading sequences have more flexible lengths, a lower peak-to-average power ratio (PAPR), and a smaller alphabet size. Consequently, these sequences effectively mitigate the high PAPR of time-domain signals and are better suited to low-cost devices in massive machine-type communication.
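For reference, the PAPR metric and the classical ZC baseline can be sketched as follows (the proposed sequences themselves are not reproduced here; the odd-length ZC form is the standard one):

```python
import cmath
import math

def zadoff_chu(root, length):
    """Classical ZC sequence for odd length:
    x[n] = exp(-j * pi * root * n * (n + 1) / length)."""
    return [cmath.exp(-1j * math.pi * root * n * (n + 1) / length)
            for n in range(length)]

def papr(signal):
    """Peak-to-average power ratio of a discrete-time signal
    (linear scale; take 10*log10 for dB)."""
    powers = [abs(s) ** 2 for s in signal]
    return max(powers) / (sum(powers) / len(powers))
```

A raw ZC sequence has a constant envelope, i.e., PAPR of exactly 1 (0 dB); the comparison in the abstract concerns the transmitted time-domain signals after spreading, where the proposed sequences are claimed to keep the PAPR lower.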
Feng LIU Xianlong CHENG Conggai LI Yanli XU
This letter solves the energy efficiency optimization problem for simultaneous wireless information and power transfer (SWIPT) systems with non-orthogonal multiple access (NOMA), multiple-input single-output (MISO) transmission, and power-splitting structures, where each user may have different individual quality of service (QoS) requirements for information and energy. A nonlinear energy harvesting model is used. An alternating optimization approach is adopted to find the solution, which exhibits fast convergence. Simulation results show that the proposed scheme achieves higher energy efficiency than existing dual-layer iteration and throughput maximization methods.
Yuto MUROKI Yotaro MURAKAMI Yoshihisa KISHIYAMA Kenichi HIGUCHI
This paper proposes a novel random access identifier (RAID)-linked receiver beamforming method for time division duplex (TDD)-based random access. When the number of receiver antennas at the base station is large in a massive multiple-input multiple-output (MIMO) scenario, the channel estimation accuracy per receiver antenna at the base station receiver is degraded due to the limited received signal power per antenna from the user terminal. This degrades the receiver beamforming (BF) or antenna diversity combining and the active RAID detection. The purpose of the proposed method is to achieve accurate active RAID detection and channel estimation with a reasonable level of computational complexity at the base station receiver. In the proposed method, a unique receiver BF vector applied at the base station is linked to each of the M RAIDs prepared by the system. The user terminal selects an appropriate pair comprising a receiver BF vector and a RAID in advance based on the downlink channel estimation results, assuming channel reciprocity in a TDD system. Therefore, per-receiver-antenna channel estimation for receiver BF is not necessary in the proposed method. Furthermore, in order to fully utilize the knowledge of the channel at the user transmitter, we propose applying transmitter filtering (TF) to the proposed method for effective channel shortening, in order to increase the number of orthogonal preambles for active RAID detection and channel estimation prepared for each RAID. Computer simulation results show that the proposed method greatly improves the accuracy of active RAID detection and channel estimation. This results in lower error rates than those of the conventional method performing channel estimation at each antenna in a massive MIMO environment.
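The user-side selection of a (RAID, BF vector) pair can be sketched under the assumption that the terminal picks the vector maximizing the beamforming gain |w^H h| from its downlink channel estimate; the criterion is an illustrative assumption, not stated in the abstract:

```python
def best_pair(channel, bf_vectors):
    """Given the downlink channel estimate `channel` (valid for the uplink
    via TDD reciprocity) and the system's M RAID-linked receiver BF vectors,
    return the index m of the vector maximizing |w_m^H h|.  Selecting RAID m
    implicitly tells the base station which BF vector to apply, so no
    per-antenna channel estimation is needed at the receiver."""
    def gain(w):
        return abs(sum(wi.conjugate() * hi for wi, hi in zip(w, channel)))
    return max(range(len(bf_vectors)), key=lambda m: gain(bf_vectors[m]))
```

The base station then applies the BF vector linked to each detected active RAID, which is the coupling the method's name describes.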
Xiaolin HOU Wenjia LIU Juan LIU Xin WANG Lan CHEN Yoshihisa KISHIYAMA Takahiro ASAI
5G has achieved large-scale commercialization across the world and the global 6G research and development is accelerating. To support more new use cases, 6G mobile communication systems should satisfy extreme performance requirements far beyond 5G. The physical layer key technologies are the basis of the evolution of mobile communication systems of each generation, among which three key technologies, i.e., duplex, waveform and multiple access, are the iconic characteristics of mobile communication systems of each generation. In this paper, we systematically review the development history and trend of the three key technologies and define the Non-Orthogonal Physical Layer (NOPHY) concept for 6G, including Non-Orthogonal Duplex (NOD), Non-Orthogonal Multiple Access (NOMA) and Non-Orthogonal Waveform (NOW). Firstly, we analyze the necessity and feasibility of NOPHY from the perspective of capacity gain and implementation complexity. Then we discuss the recent progress of NOD, NOMA and NOW, and highlight several candidate technologies and their potential performance gain. Finally, combined with the new trend of 6G, we put forward a unified physical layer design based on NOPHY that well balances performance against flexibility, and point out the possible direction for the research and development of 6G physical layer key technologies.