
Keyword Search Result

[Keyword] CTI (8,214 hits)

Results 161-180 of 8,214

  • Smart Radio Environments with Intelligent Reflecting Surfaces for 6G Sub-Terahertz-Band Communications Open Access

    Yasutaka OGAWA  Shuto TADOKORO  Satoshi SUYAMA  Masashi IWABUCHI  Toshihiko NISHIMURA  Takanori SATO  Junichiro HAGIWARA  Takeo OHGANE  

     
    INVITED PAPER (Publicized: 2023/05/23, Vol: E106-B No:9, Page(s): 735-747)

    Technology for sixth-generation (6G) mobile communication systems is now being widely studied. The sub-Terahertz band is expected to play a major role in 6G by enabling extremely high data-rate transmission. This paper has two goals: (1) to introduce the 6G concept and the propagation characteristics of sub-Terahertz-band radio waves, and (2) to evaluate the performance of intelligent reflecting surfaces (IRSs) based on beamforming in a sub-Terahertz band for smart radio environments (SREs). We briefly review research on SREs with reconfigurable intelligent surfaces (RISs) and describe the requirements and key features of 6G in a sub-Terahertz band. We then explain the propagation characteristics of sub-Terahertz-band radio waves. An important feature is that the number of multipath components in indoor office environments is small in a sub-Terahertz band. This motivates an IRS control method based on beamforming, because few radio waves fall outside the optimum beam and little power is lost between the IRS and the user equipment (UE) in such environments. We use beams generated by a Butler matrix or a DFT matrix. In simulations, we compare the received power at a UE with the upper-bound value. Simulation results show that the proposed method performs well in the sense that the received power is not much lower than the upper bound.
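
    As an illustration of the beam-selection idea summarized above, the following minimal Python sketch builds a DFT beamforming codebook and picks the beam that maximizes the received power for a given channel vector. The 64-element array, the random channel, and all function names are assumptions for illustration; this is not the authors' simulation code.

    ```python
    import numpy as np

    def dft_codebook(n_elements: int) -> np.ndarray:
        """Columns are DFT beamforming weight vectors for an n-element array/IRS."""
        n = np.arange(n_elements)
        return np.exp(-2j * np.pi * np.outer(n, n) / n_elements) / np.sqrt(n_elements)

    def select_best_beam(channel: np.ndarray, codebook: np.ndarray):
        """Return the codebook index and received power of the beam maximizing |w^H h|^2."""
        powers = np.abs(codebook.conj().T @ channel) ** 2
        best = int(np.argmax(powers))
        return best, float(powers[best])

    # Toy example: a random 64-element channel (illustrative only).
    rng = np.random.default_rng(0)
    h = (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)
    W = dft_codebook(64)
    idx, p_rx = select_best_beam(h, W)
    print(f"best beam index: {idx}, received power: {p_rx:.3f}")
    ```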

  • Proof of Concept of Optimum Radio Access Technology Selection Scheme with Radars for Millimeter-Wave Networks Open Access

    Mitsuru UESUGI  Yoshiaki SHINAGAWA  Kazuhiro KOSAKA  Toru OKADA  Takeo UETA  Kosuke ONO  

     
    PAPER (Publicized: 2023/05/23, Vol: E106-B No:9, Page(s): 778-785)

    With the rapid increase in the amount of data communication in 5G networks, there is strong demand to reduce the power consumption of the entire network, so the use of highly power-efficient millimeter-wave (mm-wave) networks is being considered. However, while mm-wave communication is power efficient, it propagates in nearly straight lines and is easily blocked, making it difficult to secure stable communication in environments with blockage. In use cases such as autonomous driving, where streaming data such as video captured by vehicles must be transmitted continuously, the blocking problem must be compensated for. For this reason, the authors examined an optimum radio access technology (RAT) selection scheme that selects mm-wave communication when mm-wave can be used and wide-area macro communication when the mm-wave link may be blocked. The authors implemented the scheme on a prototype device, conducted field tests, and confirmed that mm-wave and macro communication were switched at appropriate timings.
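
    The selection logic described above boils down to a simple rule: use mm-wave while the link is usable, and fall back to macro when blockage is expected. The sketch below is a hypothetical illustration; the RSRP threshold, the `LinkState` fields, and the radar-based blockage flag are assumptions, not the authors' implementation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class LinkState:
        mmwave_rsrp_dbm: float      # measured mm-wave signal strength
        obstacle_predicted: bool    # e.g., a radar predicts a blocker entering the path

    def select_rat(state: LinkState, rsrp_threshold_dbm: float = -95.0) -> str:
        """Pick mm-wave when the link is strong and no blockage is predicted; otherwise use macro."""
        if state.obstacle_predicted or state.mmwave_rsrp_dbm < rsrp_threshold_dbm:
            return "macro"
        return "mmwave"

    print(select_rat(LinkState(mmwave_rsrp_dbm=-80.0, obstacle_predicted=False)))  # -> mmwave
    print(select_rat(LinkState(mmwave_rsrp_dbm=-80.0, obstacle_predicted=True)))   # -> macro
    ```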

  • Service Deployment Model with Virtual Network Function Resizing Based on Per-Flow Priority

    Keigo AKAHOSHI  Eiji OKI  

     
    PAPER-Network (Publicized: 2023/03/24, Vol: E106-B No:9, Page(s): 786-797)

    This paper investigates a service deployment model for network function virtualization that handles per-flow priority to minimize the deployment cost. Service providers need to implement network services, each consisting of one or more virtual network functions (VNFs), while satisfying service delay requirements. In our previous work, we studied a service deployment model with per-host priority: flows belonging to the same service, for the same VNF, and handled on the same host have the same priority. We formulated the model as an optimization problem and developed a heuristic algorithm named FlexSize to solve it in practical time. In this paper, we address per-flow priority, in which flows of the same service, VNF, and host can have different priorities, and we extend FlexSize to handle per-flow priority. We evaluate per-flow and per-host priorities, and the numerical results show that per-flow priority reduces the deployment cost compared with per-host priority.

  • Backup Resource Allocation Model with Probabilistic Protection Considering Service Delay

    Shinya HORIMOTO  Fujun HE  Eiji OKI  

     
    PAPER-Network (Publicized: 2023/03/24, Vol: E106-B No:9, Page(s): 798-816)

    This paper proposes a backup resource allocation model for virtual network functions (VNFs) that minimizes the total allocated computing capacity for backup while considering the service delay. If failures occur on primary hosts, the VNFs on the failed hosts are recovered by backup hosts whose allocation is pre-determined. We introduce probabilistic protection, in which the probability that the protection by a backup host fails is limited within a given value; this allows backup resources to be shared, which reduces the total allocated computing capacity. Previous work does not consider a service delay constraint in the backup resource allocation problem. The proposed model constrains, within a given value, the probability that the service delay, which consists of the networking delay between hosts and the processing delay in each VNF, exceeds its threshold. We introduce a basic algorithm to solve our formulated delay-constrained optimization problem. For problems too large to be solved by the basic algorithm within an acceptable computation time, we develop a simulated annealing algorithm that incorporates Yen's algorithm to handle the delay constraint heuristically. We observe that both algorithms in the proposed model reduce the total allocated computing capacity by up to 56.3% compared to a baseline, and that the simulated annealing algorithm can obtain feasible solutions for problems that the basic algorithm cannot solve.
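
    As a rough illustration of the heuristic described above, the following sketch shows a generic simulated-annealing loop that rejects candidate allocations violating the delay constraint. The cooling schedule, parameters, and callable interfaces are assumptions; the paper's algorithm additionally uses Yen's k-shortest-path algorithm inside the delay check, which is not shown here.

    ```python
    import math
    import random

    def simulated_annealing(init_solution, neighbor, cost, is_delay_feasible,
                            t0=1.0, t_min=1e-3, alpha=0.95, iters_per_temp=100):
        """Generic simulated-annealing skeleton with a feasibility (delay) check."""
        current = init_solution
        best = current
        t = t0
        while t > t_min:
            for _ in range(iters_per_temp):
                candidate = neighbor(current)
                if not is_delay_feasible(candidate):
                    continue  # reject candidates that violate the delay constraint
                delta = cost(candidate) - cost(current)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = candidate
                    if cost(current) < cost(best):
                        best = current
            t *= alpha
        return best
    ```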

  • Optimizing Edge-Cloud Cooperation for Machine Learning Accuracy Considering Transmission Latency and Bandwidth Congestion Open Access

    Kengo TAJIRI  Ryoichi KAWAHARA  Yoichi MATSUO  

     
    PAPER-Network Management/Operation (Publicized: 2023/03/24, Vol: E106-B No:9, Page(s): 827-836)

    Machine learning (ML) has been used for various network operation tasks in recent years. However, as networks have grown in scale and the amount of generated data has increased, it has become increasingly difficult for network operators to conduct these tasks with ML on a single server. ML with edge-cloud cooperation has therefore been attracting attention for efficiently processing and analyzing large amounts of data. In this setting, transmission latency, bandwidth congestion, and the accuracy of tasks using ML all depend on how the data-processing load is balanced between edge servers and a cloud server, but the relationship is too complex to estimate directly. In this paper, we focus on anomalous traffic monitoring as an example ML task for network operations and formulate transmission latency, bandwidth congestion, and task accuracy as functions of the ratio of the amount of data preprocessed at edge servers to that preprocessed at a cloud server. We then formulate an optimization problem with constraints on transmission latency and bandwidth congestion to select the proper ratio. By solving this problem, the optimal load balance between edge servers and a cloud server can be selected, and the accuracy of anomalous traffic monitoring can be estimated. The formulation and optimization framework can be applied to other ML tasks by considering the data-generating distribution and the type of ML model. Based on the formulation, we simulated the optimal load balance of edge-cloud cooperation on a topology that mimics a Japanese network and conducted an anomalous traffic detection experiment with real traffic data to compare the accuracy estimated by our formulation with the accuracy obtained in the experiment.
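
    To make the ratio-selection step concrete, the following sketch performs a simple grid search over the edge/cloud preprocessing ratio and keeps the feasible ratio with the highest estimated accuracy. The latency, congestion, and accuracy models shown are toy stand-ins, not the paper's formulation.

    ```python
    import numpy as np

    def optimal_edge_ratio(latency, congestion, accuracy,
                           latency_budget, congestion_budget, grid=101):
        """Pick the edge/cloud split r that maximizes estimated accuracy under latency and congestion budgets."""
        best_r, best_acc = None, -np.inf
        for r in np.linspace(0.0, 1.0, grid):
            if latency(r) > latency_budget or congestion(r) > congestion_budget:
                continue  # infeasible split
            if accuracy(r) > best_acc:
                best_r, best_acc = r, accuracy(r)
        return best_r, best_acc

    # Toy illustrative models (not the paper's formulation); r = fraction preprocessed at the edges.
    r_opt, acc = optimal_edge_ratio(
        latency=lambda r: 10 * (1 - r) + 2,      # more cloud preprocessing -> more transfer latency
        congestion=lambda r: 0.8 * (1 - r),      # raw data sent to the cloud congests links
        accuracy=lambda r: 0.95 - 0.1 * r,       # heavier edge reduction may cost some accuracy
        latency_budget=8.5, congestion_budget=0.5)
    print(r_opt, acc)
    ```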

  • Protection Mechanism of Kernel Data Using Memory Protection Key

    Hiroki KUZUNO  Toshihiro YAMAUCHI  

     
    PAPER (Publicized: 2023/06/30, Vol: E106-D No:9, Page(s): 1326-1338)

    Memory corruption can modify kernel data by exploiting kernel vulnerabilities, allowing privilege escalation and the defeat of security mechanisms. Several security mechanisms have been proposed to prevent memory corruption: kernel address space layout randomization randomizes the virtual address layout of the kernel, kernel control flow integrity verifies the order in which kernel code is invoked, and the additional kernel observer focuses on unintended privilege modifications. However, these existing mechanisms do not prevent illegal writes to kernel data, so an adversary can still achieve privilege escalation and defeat security mechanisms. This study proposes a kernel data protection mechanism (KDPM), a novel security design that restricts writes to specific kernel data. The KDPM adopts a memory protection key (MPK) to control the write restriction of kernel data. Using the MPK, the KDPM restricts writes to the privilege information of user processes and to kernel data related to mandatory access control; these writes are dynamically restricted during the invocation of specific system calls and the execution of specific kernel code. The KDPM is implemented on the latest Linux with an MPK emulator. The evaluation results indicate that illegal writes to kernel data can be prevented. The KDPM showed an acceptable performance cost: the overhead was 2.96% to 9.01% for system call invocations, and the load of the MPK operations was 22.1 ns to 1347.9 ns. Additionally, the KDPM requires 137 to 176 instructions for its implementation.

  • Few-Shot Learning-Based Malicious IoT Traffic Detection with Prototypical Graph Neural Networks

    Thin Tharaphe THEIN  Yoshiaki SHIRAISHI  Masakatu MORII  

     
    PAPER (Publicized: 2023/06/22, Vol: E106-D No:9, Page(s): 1480-1489)

    With a rapidly escalating number of sophisticated cyber-attacks, protecting Internet of Things (IoT) networks against unauthorized activity is a major concern, and detecting malicious attack traffic is crucial for IoT security to prevent unwanted traffic. However, existing malicious traffic detection systems, which rely on supervised machine learning, need a considerable number of benign and malicious traffic samples to train their models, and in the case of zero-day attacks only a few labeled traffic samples are available for analysis. To deal with this, we propose a few-shot malicious IoT traffic detection system based on a prototypical graph neural network. The proposed approach does not require prior knowledge of network payload binaries or network traffic signatures. The model is trained on labeled traffic data and tested to evaluate its ability to detect new types of attacks when only a few labeled traffic samples are available. The detection system first organizes the network traffic into bidirectional flows and visualizes each binary traffic flow as a color image. A neural network is then applied to the visualized traffic to extract important features. Using the proposed few-shot graph neural network approach, the model is trained on different few-shot tasks to generalize to new, unseen attacks. The proposed model is evaluated on a network traffic dataset consisting of benign traffic and traffic corresponding to six types of attacks. The results show that the model achieves F1 scores of 0.91 and 0.94 in 5-shot and 10-shot classification, respectively, and outperforms the baseline models.
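
    The prototypical-classification step described above can be sketched as follows: class prototypes are the mean embeddings of the support samples, and queries are assigned to the nearest prototype. The embedding dimension, the Euclidean metric, and the toy data are assumptions for illustration; the paper's graph-neural-network embedding is not reproduced here.

    ```python
    import numpy as np

    def prototypes(support_embeddings: np.ndarray, support_labels: np.ndarray) -> dict:
        """Average the support embeddings of each class to form its prototype."""
        return {c: support_embeddings[support_labels == c].mean(axis=0)
                for c in np.unique(support_labels)}

    def classify(query_embeddings: np.ndarray, protos: dict) -> np.ndarray:
        """Assign each query to the class whose prototype is nearest in Euclidean distance."""
        classes = list(protos)
        dists = np.stack([np.linalg.norm(query_embeddings - protos[c], axis=1) for c in classes], axis=1)
        return np.array(classes)[dists.argmin(axis=1)]

    # Toy 5-shot example with 2 classes in a 4-D embedding space (illustrative only).
    rng = np.random.default_rng(1)
    support = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(3, 1, (5, 4))])
    labels = np.array([0] * 5 + [1] * 5)
    queries = np.vstack([rng.normal(0, 1, (3, 4)), rng.normal(3, 1, (3, 4))])
    print(classify(queries, prototypes(support, labels)))
    ```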

  • Malicious Domain Detection Based on Decision Tree

    Thin Tharaphe THEIN  Yoshiaki SHIRAISHI  Masakatu MORII  

     
    LETTER (Publicized: 2023/06/22, Vol: E106-D No:9, Page(s): 1490-1494)

    Different types of malicious attacks have been increasing simultaneously and have become a serious issue for cybersecurity. Most attacks leverage domain URLs as a communication medium and turn users into victims of phishing or spam. We take advantage of machine learning methods to detect the maliciousness of a domain automatically using three groups of features: DNS-based, lexical, and semantic features. The proposed approach exhibits high performance even with a small training dataset. The experimental results demonstrate that the proposed scheme achieves an accuracy of approximately 0.927 when using a random forest classifier.
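
    A minimal sketch of the classification stage is shown below, using only a few illustrative lexical features with scikit-learn's random forest. The DNS-based and semantic features, the toy domains, and the feature choices are assumptions, not the authors' feature set.

    ```python
    import math
    from collections import Counter
    from sklearn.ensemble import RandomForestClassifier

    def lexical_features(domain: str) -> list:
        """A few illustrative lexical features; the paper also uses DNS-based and semantic features."""
        counts = Counter(domain)
        entropy = -sum(c / len(domain) * math.log2(c / len(domain)) for c in counts.values())
        digits = sum(ch.isdigit() for ch in domain)
        return [len(domain), domain.count("."), domain.count("-"), digits, entropy]

    # Tiny toy dataset (1 = malicious, 0 = benign); real training would use labeled domain feeds.
    domains = ["example.com", "mail.google.com", "x9f-paypa1-login.win", "update-acc0unt-verify.top"]
    labels = [0, 0, 1, 1]
    X = [lexical_features(d) for d in domains]

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print(clf.predict([lexical_features("secure-l0gin-bank.example")]))
    ```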

  • On Gradient Descent Training Under Data Augmentation with On-Line Noisy Copies

    Katsuyuki HAGIWARA  

     
    PAPER-Artificial Intelligence, Data Mining (Publicized: 2023/06/12, Vol: E106-D No:9, Page(s): 1537-1545)

    In machine learning, data augmentation (DA) is a technique for improving the generalization performance of models. In this paper, we mainly consider gradient descent for linear regression under DA using noisy copies of datasets, in which noise is injected into the inputs. We analyze the situation where noisy copies are newly generated and injected at each epoch, i.e., the case of using on-line noisy copies. This article can therefore also be viewed as an analysis of a method that injects noise into the training process via DA. We consider three training situations: full-batch training under the sum of squared errors, and full-batch and mini-batch training under the mean squared error. We show that, in all cases, training for DA with on-line copies is approximately equivalent to l2-regularized training, for which the variance of the injected noise is important, whereas the number of copies is not. Moreover, we show that DA with on-line copies apparently increases the learning rate in the full-batch condition under the sum of squared errors and in the mini-batch condition under the mean squared error. The apparent increase in learning rate and the regularization effect can be attributed to the original input and the additive noise in the noisy copies, respectively. These results are confirmed in a numerical experiment, in which we find that our result can be applied to usual off-line DA in an under-parameterization scenario but not in an over-parameterization scenario. Moreover, we experimentally investigate the training process of neural networks under DA with off-line noisy copies and find that our analysis of linear regression can be qualitatively applied to neural networks.
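
    The claimed equivalence can be illustrated numerically: full-batch gradient descent with fresh noisy input copies each epoch should land close to the ridge solution whose regularization strength is set by the injected-noise variance. The sketch below uses the classical input-noise/l2 correspondence with toy data; the learning rate, dimensions, and the lambda = n*sigma^2 mapping are illustrative assumptions, not the paper's exact setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, sigma = 200, 5, 0.5            # samples, input dimension, std of injected input noise
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    # Full-batch gradient descent where a fresh noisy copy of the inputs is drawn every epoch.
    w = np.zeros(d)
    lr, epochs = 0.01 / n, 5000
    for _ in range(epochs):
        Xn = X + sigma * rng.standard_normal(X.shape)   # on-line noisy copy
        w -= lr * Xn.T @ (Xn @ w - y)

    # Ridge solution with lambda = n * sigma^2 (the strength suggested by the classical correspondence).
    w_ridge = np.linalg.solve(X.T @ X + n * sigma**2 * np.eye(d), X.T @ y)
    print(np.round(w, 3))
    print(np.round(w_ridge, 3))
    ```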

  • Shadow Detection Based on Luminance-LiDAR Intensity Uncorrelation

    Shogo SATO  Yasuhiro YAO  Taiga YOSHIDA  Shingo ANDO  Jun SHIMAMURA  

     
    PAPER-Image Recognition, Computer Vision (Publicized: 2023/06/20, Vol: E106-D No:9, Page(s): 1556-1563)

    In recent years, there has been a growing demand for urban digitization using cameras and light detection and ranging (LiDAR). Shadows are one of the conditions that most affect such measurements, so shadow detection technology is essential. In this study, we propose shadow detection utilizing the LiDAR intensity, which depends on the surface properties of objects but not on illumination from other light sources. Unlike conventional LiDAR-intensity-aided shadow detection methods, our method embeds the lack of correlation between luminance and LiDAR intensity at each position into the optimization: an energy defined by this uncorrelation is minimized by graph-cut segmentation to detect shadows. In evaluations on the KITTI and Waymo datasets, our shadow detection method outperformed previous methods on multiple evaluation indices.

  • Siamese Transformer for Saliency Prediction Based on Multi-Prior Enhancement and Cross-Modal Attention Collaboration

    Fazhan YANG  Xingge GUO  Song LIANG  Peipei ZHAO  Shanhua LI  

     
    PAPER-Image Recognition, Computer Vision (Publicized: 2023/06/20, Vol: E106-D No:9, Page(s): 1572-1583)

    Visual saliency prediction has improved dramatically since the advent of convolutional neural networks (CNNs). Although CNNs achieve excellent performance, they still cannot learn global and long-range contextual information well and lack interpretability due to the locality of convolution operations. We propose a saliency prediction model based on multi-prior enhancement and cross-modal attention collaboration (ME-CAS). Concretely, we design a transformer-based Siamese network architecture as the backbone for feature extraction. One transformer branch captures the contextual information of the image under the self-attention mechanism to obtain a global saliency map. At the same time, a prior learning module learns the human visual center bias prior, the contrast prior, and the frequency prior. These multi-priors are input to the other Siamese branch to learn detailed low-level visual features and obtain a saliency map of local information. Finally, an attention calibration module guides the cross-modal collaborative learning of global and local information and generates the final saliency map. Extensive experimental results demonstrate that the proposed ME-CAS achieves superior results on public benchmarks compared with competing saliency prediction models. Moreover, the multi-prior learning modules enhance the expression of salient details and improve model interpretability.

  • Reconfigurable Pedestrian Detection System Using Deep Learning for Video Surveillance

    M.K. JEEVARAJAN  P. NIRMAL KUMAR  

     
    LETTER-Image Processing and Video Processing (Publicized: 2023/06/09, Vol: E106-D No:9, Page(s): 1610-1614)

    We present a reconfigurable deep learning pedestrian detection system for video surveillance that detects people, including those in shadow, under different lighting and heavily occluded conditions. This work proposes a region-based CNN combined with CMOS and thermal cameras to obtain human features even under poor lighting conditions. The main advantage of a reconfigurable system over processor-based systems is its high performance and parallelism when processing large amounts of data such as video frames. We discuss the details of the hardware implementation of the proposed real-time pedestrian detection algorithm on a Zynq FPGA. Simulation results show that the proposed integrated approach of an R-CNN architecture with both cameras provides better performance in terms of accuracy, precision, and F1-score. The Zynq FPGA implementation was compared with other works, showing that the proposed architecture offers a good trade-off among quality, accuracy, speed, and resource utilization.

  • Dual Cuckoo Filter with a Low False Positive Rate for Deep Packet Inspection

    Yixuan ZHANG  Meiting XUE  Huan ZHANG  Shubiao LIU  Bei ZHAO  

     
    PAPER-Algorithms and Data Structures (Publicized: 2023/01/26, Vol: E106-A No:8, Page(s): 1037-1042)

    Network traffic control and classification have become increasingly dependent on deep packet inspection (DPI) approaches, which are the most precise techniques for intrusion detection and prevention. However, increasing traffic volumes and link speeds put considerable pressure on DPI techniques to process packets with high performance within restricted available memory. To overcome this problem, we propose the dual cuckoo filter (DCF), a data structure based on the cuckoo filter (CF); the CF can also be extended to a parallel mode called the parallel cuckoo filter (PCF). The proposed data structure employs an extra hash function to obtain two potential indices for entries. The DCF amplifies the advantages of the CF with no additional memory. Moreover, it can be extended to a parallel mode, resulting in a data structure referred to as the parallel dual cuckoo filter (PDCF). The implementation results show that using the DCF and PDCF as identification tools in a DPI system yields time improvements of up to 2% and 30% over the CF and PCF, respectively.
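
    For background on the underlying data structure, the sketch below implements a minimal standard cuckoo filter (two candidate buckets per fingerprint via partial-key cuckoo hashing); the DCF described above additionally derives extra candidate indices with another hash function, which is not reproduced here. The bucket sizes, SHA-1-based hashing, and 8-bit fingerprints are illustrative assumptions.

    ```python
    import hashlib
    import random

    class MiniCuckooFilter:
        """Minimal standard cuckoo filter: each fingerprint may live in one of two candidate buckets."""

        def __init__(self, n_buckets=1024, bucket_size=4, max_kicks=500):
            self.n, self.bs, self.max_kicks = n_buckets, bucket_size, max_kicks
            self.buckets = [[] for _ in range(n_buckets)]

        def _hash(self, data: bytes) -> int:
            return int.from_bytes(hashlib.sha1(data).digest()[:4], "big")

        def _fingerprint(self, item: str) -> int:
            return (self._hash(item.encode()) & 0xFF) or 1   # non-zero 8-bit fingerprint

        def _indices(self, item: str, fp: int):
            i1 = self._hash(item.encode()) % self.n
            i2 = (i1 ^ self._hash(bytes([fp]))) % self.n      # partial-key cuckoo hashing
            return i1, i2

        def insert(self, item: str) -> bool:
            fp = self._fingerprint(item)
            i1, i2 = self._indices(item, fp)
            for i in (i1, i2):
                if len(self.buckets[i]) < self.bs:
                    self.buckets[i].append(fp)
                    return True
            i = random.choice((i1, i2))                       # both full: evict and relocate
            for _ in range(self.max_kicks):
                j = random.randrange(len(self.buckets[i]))
                fp, self.buckets[i][j] = self.buckets[i][j], fp
                i = (i ^ self._hash(bytes([fp]))) % self.n    # alternate bucket of the evicted fingerprint
                if len(self.buckets[i]) < self.bs:
                    self.buckets[i].append(fp)
                    return True
            return False                                      # filter is considered full

        def contains(self, item: str) -> bool:
            fp = self._fingerprint(item)
            i1, i2 = self._indices(item, fp)
            return fp in self.buckets[i1] or fp in self.buckets[i2]

    cf = MiniCuckooFilter()
    cf.insert("signature:sql-injection")
    print(cf.contains("signature:sql-injection"), cf.contains("signature:xss"))
    ```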

  • Signal Detection for OTFS System Based on Improved Particle Swarm Optimization

    Jurong BAI  Lin LAN  Zhaoyang SONG  Huimin DU  

     
    PAPER-Fundamental Theories for Communications (Publicized: 2023/02/16, Vol: E106-B No:8, Page(s): 614-621)

    The orthogonal time frequency space (OTFS) technique proposed in recent years has excellent robustness to Doppler shift and time delay, enabling its application in high-speed communication scenarios. In this article, a particle swarm optimization (PSO) signal detection algorithm for the OTFS system is proposed. An adaptive mechanism for the individual and global learning factors in the velocity update formula is designed, and the position update method of the particles is improved, so as to increase convergence accuracy and prevent the particles from falling into local optima. The simulation results show that the improved PSO algorithm achieves a lower bit error rate (BER) and higher convergence accuracy than the traditional PSO algorithm, and performs similarly to ideal maximum likelihood (ML) detection with lower complexity. In the case of high Doppler shift, OTFS outperforms orthogonal frequency division multiplexing (OFDM) when the improved PSO algorithm is used.
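
    A generic PSO skeleton with linearly varying individual (c1) and global (c2) learning factors is sketched below to illustrate the kind of adaptation described above; the linear schedule, inertia weight, and the sphere test function are assumptions, not the authors' adaptive rule or the OTFS detection objective.

    ```python
    import numpy as np

    def pso(fitness, dim, n_particles=30, iters=100, bounds=(-1.0, 1.0), w=0.7,
            c1_range=(2.5, 0.5), c2_range=(0.5, 2.5), seed=0):
        """Generic PSO minimizer with linearly varying individual (c1) and global (c2) learning factors."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros((n_particles, dim))
        pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for t in range(iters):
            frac = t / max(iters - 1, 1)
            c1 = c1_range[0] + (c1_range[1] - c1_range[0]) * frac   # individual factor decays
            c2 = c2_range[0] + (c2_range[1] - c2_range[0]) * frac   # global factor grows
            r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([fitness(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Toy use: minimize a sphere function (a real OTFS detector would minimize ||y - Hx||^2 over symbols).
    best, val = pso(lambda p: float(np.sum(p**2)), dim=4)
    print(np.round(best, 3), round(val, 6))
    ```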

  • Intrusion Detection Model of Internet of Things Based on LightGBM Open Access

    Guosheng ZHAO  Yang WANG  Jian WANG  

     
    PAPER-Fundamental Theories for Communications (Publicized: 2023/02/20, Vol: E106-B No:8, Page(s): 622-634)

    Internet of Things (IoT) devices are widely used in various fields. However, their limited computing resources make them extremely vulnerable and difficult to protect effectively. Traditional intrusion detection systems (IDS) focus on high accuracy and a low false alarm rate (FAR), which often gives them spatiotemporal complexity too high for deployment on IoT devices. In response to these problems, this paper proposes an IoT intrusion detection model based on the light gradient boosting machine (LightGBM). First, a one-dimensional convolutional neural network (CNN) is used to extract features from network traffic and reduce the feature dimensions. Then, LightGBM is used for classification to detect the type to which the network traffic belongs. LightGBM inherits the advantages of gradient boosting trees while being more lightweight and building decision trees faster. Experiments on the TON-IoT and BoT-IoT datasets show that the proposed model performs better and is more lightweight than the comparison models. The proposed model shortens the prediction time by 90.66% and outperforms the comparison models in accuracy and other performance metrics. It has strong detection capability for denial of service (DoS) and distributed denial of service (DDoS) attacks. Experimental results on a testbed built with IoT devices such as a Raspberry Pi show that the proposed model can perform effective, real-time intrusion detection on IoT devices.
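
    The classification stage can be sketched with the lightgbm package as below. The random feature matrix stands in for the output of the paper's 1-D CNN feature extractor, and the hyperparameters and label set are illustrative assumptions.

    ```python
    import numpy as np
    from lightgbm import LGBMClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Stand-in for the low-dimensional representations produced by the CNN feature extractor;
    # the data here are random, so the printed accuracy only demonstrates the pipeline, not performance.
    rng = np.random.default_rng(0)
    features = rng.standard_normal((1000, 32))
    labels = rng.integers(0, 3, size=1000)          # e.g., benign / DoS / DDoS

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2, random_state=0)
    clf = LGBMClassifier(n_estimators=200, num_leaves=31, learning_rate=0.1)
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    ```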

  • Development of a Simple and Lightweight Phantom for Evaluating Human Body Avoidance Technology in Microwave Wireless Power Transfer Open Access

    Kazuki SATO  Kazuyuki SAITO  

     
    PAPER-Energy in Electronics Communications (Publicized: 2023/02/15, Vol: E106-B No:8, Page(s): 645-651)

    In recent years, microwave wireless power transfer (WPT) has attracted considerable attention due to the increasing demand for various sensors and Internet of Things (IoT) applications. Microwave WPT requires technology that can detect and avoid human bodies in the transmission path. Using a phantom is essential for developing such technology in terms of standardization and human body protection from electromagnetic radiation. In this study, a simple and lightweight phantom was developed focusing on its radar cross-section (RCS) to evaluate human body avoidance technology for use in microwave WPT systems. The developed phantom's RCS is comparable to that of the human body.

  • Data Gathering Scheme for Event Detection and Recognition in Low Power Wide Area Networks

    Taiki SUEHIRO  Tsuyoshi KOBAYASHI  Osamu TAKYU  Yasushi FUWA  

     
    PAPER-Wireless Communication Technologies (Publicized: 2023/01/31, Vol: E106-B No:8, Page(s): 669-685)

    Event detection and recognition are important for environmental monitoring in the Internet of Things and cyber-physical systems. Low power wide area (LPWA) networks are among the most powerful wireless sensor networks for data gathering; however, they cannot accommodate the peak of wireless access from sensors that detect significant changes in sensing data. Various data gathering schemes for event detection and recognition have been proposed, but they do not satisfy the requirements of three functions: detecting the occurrence of an event, recognizing the position of an event, and recognizing the spillover of the event's impact. This study proposes a three-stage data gathering scheme for LPWA. In the first stage, access is limited to sensors whose readings exceed a high-level threshold, which reduces the number of simultaneously accessing sensors and enables high-speed recognition of the starting event. In the second stage, the data centre station designates sensors to report their sensing data so that the event position can be estimated with high accuracy. In the third stage, all sensors except those that accessed in the earlier stages report to the data centre. Owing to this exhaustive gathering of sensing data, the spillover of the event's impact can be recognised with high accuracy. We implement the proposed data gathering scheme on an actual LPWA wireless sensor system. Through computer simulation and experimental evaluation, we show the advantage of the proposed scheme over the conventional scheme.
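
    The three stages described above can be summarized by the toy sketch below. The threshold value, the designation rule (largest reading), and the data layout are assumptions for illustration, not the authors' protocol details.

    ```python
    HIGH_THRESHOLD = 80.0   # stage-1 access-limitation threshold (assumed value)

    def stage1_reporters(readings: dict) -> list:
        """Only sensors whose reading exceeds the high-level threshold access the channel (fast event detection)."""
        return [sid for sid, v in readings.items() if v > HIGH_THRESHOLD]

    def stage2_designate(stage1_ids: list, readings: dict) -> str:
        """The data centre designates a sensor (here: the largest reading) to refine the event position estimate."""
        return max(stage1_ids, key=lambda sid: readings[sid])

    def stage3_reporters(all_ids, already_reported) -> list:
        """All remaining sensors report so the spillover of the event's impact can be recognised."""
        return [sid for sid in all_ids if sid not in set(already_reported)]

    readings = {"s1": 12.0, "s2": 91.5, "s3": 85.2, "s4": 40.0}
    s1 = stage1_reporters(readings)
    designated = stage2_designate(s1, readings)
    s3 = stage3_reporters(readings, s1)
    print(s1, designated, s3)
    ```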

  • Distilling Distribution Knowledge in Normalizing Flow

    Jungwoo KWON  Gyeonghwan KIM  

     
    LETTER-Artificial Intelligence, Data Mining (Publicized: 2023/04/26, Vol: E106-D No:8, Page(s): 1287-1291)

    In this letter, we propose a feature-based knowledge distillation scheme that transfers knowledge between intermediate blocks of teacher and student networks with flow-based architectures, specifically normalizing flows in our implementation. In addition to the knowledge transfer scheme, we examine how the configuration of the distillation positions impacts knowledge transfer performance. To evaluate the proposed ideas, we choose two knowledge distillation baseline models based on normalizing flows in different domains: CS-Flow for anomaly detection and SRFlow-DA for super-resolution. A set of performance comparisons with the baseline models on popular benchmark datasets shows promising results along with improved inference speed. The comparison includes performance analysis based on various configurations of the distillation positions in the proposed scheme.
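
    A minimal feature-based distillation loss of the kind described above is sketched below in PyTorch: the student's intermediate feature map is regressed onto the (detached) teacher feature map, with a 1x1 convolution as a channel adapter. The adapter and tensor shapes are assumptions, not the paper's exact distillation head.

    ```python
    import torch
    import torch.nn as nn

    class FeatureDistillationLoss(nn.Module):
        """MSE between selected intermediate feature maps of teacher and student.
        A 1x1 conv adapts channel counts when the blocks differ in width (an assumed adapter)."""
        def __init__(self, student_channels: int, teacher_channels: int):
            super().__init__()
            self.adapt = (nn.Conv2d(student_channels, teacher_channels, kernel_size=1)
                          if student_channels != teacher_channels else nn.Identity())

        def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
            return nn.functional.mse_loss(self.adapt(student_feat), teacher_feat.detach())

    # Toy usage: distill at one intermediate position with mismatched channel widths.
    loss_fn = FeatureDistillationLoss(student_channels=32, teacher_channels=64)
    s = torch.randn(2, 32, 16, 16)
    t = torch.randn(2, 64, 16, 16)
    print(loss_fn(s, t).item())
    ```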

  • Temporal-Based Action Clustering for Motion Tendencies

    Xingyu QIAN  Xiaogang CHEN  Aximu YUEMAIER  Shunfen LI  Weibang DAI  Zhitang SONG  

     
    LETTER-Artificial Intelligence, Data Mining (Publicized: 2023/05/02, Vol: E106-D No:8, Page(s): 1292-1295)

    Video-based action recognition encompasses the recognition of appearance and the classification of action types. This work proposes a discrete-temporal-sequence-based motion tendency clustering framework that implements motion clustering by extracting motion tendencies and applying self-supervised learning. A published traffic intersection dataset (inD) and a self-produced gesture video set are used for evaluation and to validate the motion-tendency action recognition hypothesis.

  • A Note on the Transformation Behaviors between Truth Tables and Algebraic Normal Forms of Boolean Functions

    Jianchao ZHANG  Deng TANG  

     
    LETTER-Cryptography and Information Security (Publicized: 2023/01/18, Vol: E106-A No:7, Page(s): 1007-1010)

    Let f be a Boolean function in n variables. The Möbius transform of f and its converse describe the transformation between the truth table of f and the coefficients of the monomials in the algebraic normal form (ANF) representation of f. In this letter, we develop the Möbius transform and its converse into a more generalized form, which also includes the known result given by Reed in 1954. We hope that our new result can be used in the design of decoding schemes for linear codes and in the cryptanalysis of symmetric cryptography. We also apply our new result to verify the basic idea of the cube attack, a powerful technique in the cryptanalysis of symmetric cryptography, in a very simple way.
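
    For reference, the classical fast Möbius transform that maps a truth table to its ANF coefficients (and back, since the transform is an involution) can be written in a few lines; the sketch below implements this standard transform, not the generalized form developed in the letter.

    ```python
    def mobius_transform(table: list) -> list:
        """Map a truth table of an n-variable Boolean function to its ANF coefficients over GF(2).
        The transform is an involution, so applying it again recovers the truth table."""
        coeffs = list(table)
        n = len(coeffs).bit_length() - 1
        assert len(coeffs) == 1 << n
        for i in range(n):
            step = 1 << i
            for j in range(0, len(coeffs), step << 1):
                for k in range(j, j + step):
                    coeffs[k + step] ^= coeffs[k]   # XOR butterfly over GF(2)
        return coeffs

    # Example: f(x1, x2) = x1 AND x2, truth table ordered by (x2, x1) = 00, 01, 10, 11.
    tt = [0, 0, 0, 1]
    anf = mobius_transform(tt)          # -> [0, 0, 0, 1], i.e. the single monomial x1*x2
    print(anf, mobius_transform(anf))   # applying it twice returns the truth table
    ```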
