
Keyword Search Result

[Keyword] machine learning (172 hits)

Showing 1-20 of 172 hits

  • LSTM Neural Network Algorithm for Handover Improvement in a Non-Ideal Network Using O-RAN Near-RT RIC Open Access

    Baud Haryo PRANANTO   ISKANDAR   HENDRAWAN  Adit KURNIAWAN  

     
    PAPER-Network Management/Operation

    Vol: E107-B No:6  Page(s): 458-469

    Handover is an important feature of cellular communication that enables a user to move from one cell to another without losing the connection. It is crucial to the quality of the user's experience because it may interrupt data transmission. Therefore, good handover management is very important in current and future cellular systems. Several techniques have been employed to improve handover performance, usually by increasing the probability of a successful handover. One such technique is predictive handover, which predicts the target cell using methods other than the traditional measurement-based algorithm, including machine learning. Several studies have implemented predictive handover, most of them by modifying the internal algorithms of existing network elements such as the base station. We implemented a predictive handover algorithm in an intelligent node outside the existing network elements to minimize modification of the network and to create modularity in the system. Using the recently standardized Open Radio Access Network (O-RAN) Near Real-Time RAN Intelligent Controller (Near-RT RIC), we created a modular application that improves handover performance by determining the target cell with machine learning techniques. In our previous research, we modified the original Near-RT RIC software, which uses vector autoregression, to determine the target cell by predicting the throughput of each neighboring cell; we also replaced that method with a Multi-Layer Perceptron (MLP) neural network. In this paper, we redesign the neural network using Long Short-Term Memory (LSTM), which better handles time-series data. We show that our proposed LSTM-based machine learning algorithm used in the Near-RT RIC improves handover performance compared with the traditional measurement-based algorithm.
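
    The abstract does not disclose the model configuration, but the core idea, predicting each neighboring cell's near-future throughput from its recent time series and handing over to the cell with the highest prediction, can be sketched as follows. This is a minimal Keras sketch; the window length, layer sizes, cell count, and training data are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: predict next-step throughput per neighboring cell with an LSTM
# and pick the cell with the highest prediction as the handover target.
# Window length, layer sizes, and training setup are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 10          # past samples per cell (assumed)
N_CELLS = 4          # number of neighboring cells (assumed)

def build_model():
    model = models.Sequential([
        layers.Input(shape=(WINDOW, N_CELLS)),
        layers.LSTM(32),
        layers.Dense(N_CELLS),   # next-step throughput estimate for each cell
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def select_target_cell(model, throughput_history):
    """throughput_history: array of shape (WINDOW, N_CELLS)."""
    pred = model.predict(throughput_history[np.newaxis, ...], verbose=0)[0]
    return int(np.argmax(pred))   # index of the predicted-best neighboring cell

# Toy usage with synthetic data.
model = build_model()
X = np.random.rand(256, WINDOW, N_CELLS)
y = np.random.rand(256, N_CELLS)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("handover target cell:", select_target_cell(model, np.random.rand(WINDOW, N_CELLS)))
```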

  • A Multiobjective Approach for Side-Channel Based Hardware Trojan Detection Using Power Traces Open Access

    Priyadharshini MOHANRAJ  Saravanan PARAMASIVAM  

     
    PAPER-Cryptography and Information Security

    Publicized: 2023/08/23  Vol: E107-A No:5  Page(s): 825-835

    The detection of hardware Trojans has been studied extensively. In this article, we propose a side-channel analysis technique that uses a wrapper-based feature selection method for hardware Trojan detection. The whale optimization algorithm is modified to carefully extract the best feature subset. The proposed technique has two objectives: improving accuracy and minimizing the number of features. The power consumption traces measured from AES-128 Trojan circuits are used as features in this experiment. The stabilizing property of the feature selection method yields a mutual trade-off between precision and recall, thereby minimizing the number of false negatives. The proposed hardware Trojan detection scheme achieves up to a 10.3% improvement in accuracy and a reduction to as few as a single feature by employing the modified whale optimization technique. Evaluation results on various Trust-Hub cryptographic benchmark circuits show that the proposed method is more efficient than existing state-of-the-art methods.
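
    The modified whale optimization algorithm itself is not described in the abstract, so the sketch below shows only the general wrapper idea with a plain (unmodified) binary whale optimization loop: a binary feature mask is searched so that a weighted sum of classification error and the fraction of selected features is minimized. The classifier, weighting constant, population size, and data are assumptions for illustration.

```python
# Sketch of wrapper-based feature selection with a plain binary whale optimization
# loop (not the authors' modified variant). Fitness trades off classification
# error against the fraction of selected features; all constants are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)
D, POP, ITERS, ALPHA = X.shape[1], 12, 20, 0.99

def fitness(mask):
    if mask.sum() == 0:
        return 1.0
    acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    return ALPHA * (1 - acc) + (1 - ALPHA) * mask.sum() / D   # error + feature ratio

pos = rng.random((POP, D))                      # continuous whale positions in [0, 1]
best = min((p.copy() for p in pos), key=lambda p: fitness(p > 0.5))
for t in range(ITERS):
    a = 2 - 2 * t / ITERS                       # control parameter decreases from 2 to 0
    for i in range(POP):
        A = 2 * a * rng.random(D) - a
        C = 2 * rng.random(D)
        if rng.random() < 0.5:                  # encircling prey or random search
            ref = best if np.all(np.abs(A) < 1) else pos[rng.integers(POP)]
            pos[i] = ref - A * np.abs(C * ref - pos[i])
        else:                                   # spiral update toward the best whale
            l = rng.uniform(-1, 1)
            pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        pos[i] = np.clip(pos[i], 0, 1)
        if fitness(pos[i] > 0.5) < fitness(best > 0.5):
            best = pos[i].copy()

print("selected features:", np.flatnonzero(best > 0.5))
```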

  • Implementing Optical Analog Computing and Electrooptic Hopfield Network by Silicon Photonic Circuits Open Access

    Guangwei CONG  Noritsugu YAMAMOTO  Takashi INOUE  Yuriko MAEGAMI  Morifumi OHNO  Shota KITA  Rai KOU  Shu NAMIKI  Koji YAMADA  

     
    INVITED PAPER

    Publicized: 2024/01/05  Vol: E107-A No:5  Page(s): 700-708

    Wide deployment of artificial intelligence (AI) is causing exponential growth in energy consumption. Traditional digital platforms are finding it increasingly difficult to fulfill these ever-growing demands on energy efficiency and computing latency, which necessitates the development of high-efficiency analog hardware platforms for AI. Recently, optical and electrooptic hybrid computing has been revived as a promising analog hardware alternative because it can accelerate information processing in an energy-efficient way. Integrated photonic circuits offer such an analog hardware solution for implementing photonic AI and machine learning. For this purpose, we proposed a photonic analog of the support vector machine and experimentally demonstrated low-latency, low-energy classification computing, which evidences the latency and energy advantages of optical analog computing over traditional digital computing. We also proposed an electrooptic Hopfield network for classifying and recognizing time-series data. This paper reviews our work on implementing classification computing and Hopfield networks with silicon photonic circuits.

  • Batch Updating of a Posterior Tree Distribution Over a Meta-Tree

    Yuta NAKAHARA  Toshiyasu MATSUSHIMA  

     
    LETTER-Learning

    Publicized: 2023/08/23  Vol: E107-A No:3  Page(s): 523-525

    Previously, we proposed a probabilistic data generation model represented by an unobservable tree and a sequential updating method to calculate a posterior distribution over a set of trees. The set is called a meta-tree. In this paper, we propose a more efficient batch updating method.

  • Device Type Classification Based on Two-Stage Traffic Behavior Analysis Open Access

    Chikako TAKASAKI  Tomohiro KORIKAWA  Kyota HATTORI  Hidenari OHWADA  

     
    PAPER

    Publicized: 2023/10/17  Vol: E107-B No:1  Page(s): 117-125

    In beyond-5G and 6G networks, the number and variety of connected devices will greatly increase, including not only user devices such as smartphones but also the Internet of Things (IoT). Moreover, non-terrestrial networks (NTN) introduce dynamic changes in the types of connected devices because base stations or access points are moving objects. Therefore, continuous network capacity design is required to fulfill the network requirements of each device. However, continuously optimizing the network capacity design for each device within a short time span is difficult because of the heavy calculation load. We introduce device types, groups of devices whose traffic characteristics are similar, and optimize network capacity per device type for efficient network capacity design. This paper proposes a method to classify device types by analyzing only encrypted traffic behavior, without using payloads or packets of specific protocols. In the first stage, general device types, such as IoT and non-IoT, are classified by analyzing packet header statistics using machine learning. In the second stage, devices classified as IoT in the first stage are further classified into IoT device types by analyzing a time series of traffic behavior using deep learning. We demonstrate that the proposed method classifies device types by analyzing traffic datasets and outperforms existing IoT-only device classification methods in terms of the number of types and accuracy. In addition, the proposed model performs comparably to a state-of-the-art traffic classification model, the 1D ResNet model. The proposed method is suitable for grasping device types in terms of traffic characteristics toward efficient network capacity design in networks where massive numbers of devices for various services are connected and the connected devices continuously change.
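
    As a rough illustration of the two-stage cascade (not the authors' architecture or feature set), the sketch below uses a random forest on header statistics for the IoT/non-IoT decision and an LSTM on a traffic time series for the IoT device type; all shapes, models, and data are assumed.

```python
# Sketch of the two-stage idea: stage 1 classifies IoT vs. non-IoT from
# packet-header statistics; stage 2 classifies IoT devices into finer types
# from a traffic time series. Feature shapes, models, and data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras import layers, models

# --- Stage 1: header statistics -> IoT (1) / non-IoT (0) ---
hdr_stats = np.random.rand(1000, 12)          # e.g. packet-size/interval statistics (assumed)
is_iot = np.random.randint(0, 2, 1000)
stage1 = RandomForestClassifier(n_estimators=100).fit(hdr_stats, is_iot)

# --- Stage 2: per-device traffic time series -> IoT device type ---
SEQ_LEN, N_FEAT, N_TYPES = 60, 8, 5           # assumed time-series shape and type count
ts = np.random.rand(500, SEQ_LEN, N_FEAT)
iot_type = np.random.randint(0, N_TYPES, 500)
stage2 = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEAT)),
    layers.LSTM(64),
    layers.Dense(N_TYPES, activation="softmax"),
])
stage2.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
stage2.fit(ts, iot_type, epochs=2, verbose=0)

def classify_device(header_vec, traffic_seq):
    """Run the cascade for one device."""
    if stage1.predict(header_vec[np.newaxis])[0] == 0:
        return "non-IoT"
    probs = stage2.predict(traffic_seq[np.newaxis], verbose=0)[0]
    return f"IoT type {int(np.argmax(probs))}"

print(classify_device(np.random.rand(12), np.random.rand(SEQ_LEN, N_FEAT)))
```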

  • Hardware-Trojan Detection at Gate-Level Netlists Using a Gradient Boosting Decision Tree Model and Its Extension Using Trojan Probability Propagation

    Ryotaro NEGISHI  Tatsuki KURIHARA  Nozomu TOGAWA  

     
    PAPER

    Publicized: 2023/08/16  Vol: E107-A No:1  Page(s): 63-74

    Technological devices have become deeply embedded in people's lives, and demand for them grows every year. It has been pointed out that outsourcing the design and manufacturing of integrated circuits, which are essential for such devices, may lead to the insertion of malicious circuitry, called hardware Trojans (HTs). This paper proposes an HT detection method at gate-level netlists based on XGBoost, one of the best gradient boosting decision tree models. We first propose the optimal set of HT features among many netlist-level feature candidates through thorough evaluations. Then, we construct an XGBoost-based HT detection method with optimized hyperparameters. Evaluation experiments on the netlists from Trust-HUB benchmarks showed an average F-measure of 0.842 with the proposed method. We also newly propose a Trojan probability propagation method that effectively corrects the HT detection results and apply it to the results obtained by the XGBoost-based detection. Evaluation experiments showed that the average F-measure improves to 0.861, which is 0.194 points higher than that of the best existing method.
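
    The paper's optimized feature set is not listed in the abstract, so the following sketch only illustrates the overall flow: per-net structural features from a gate-level netlist are fed to an XGBoost classifier and scored with the F-measure. The feature names and toy labels are assumptions.

```python
# Sketch of gate-level hardware-Trojan detection with XGBoost. The feature
# names (fan-in, logic-level depth, distance to a flip-flop, ...) are
# illustrative stand-ins for netlist features, not the paper's optimized set.
import numpy as np
import xgboost as xgb
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_nets = 5000
X = np.column_stack([
    rng.integers(1, 8, n_nets),        # fan-in of the driving gate (assumed feature)
    rng.integers(1, 30, n_nets),       # logic-level depth from primary inputs
    rng.integers(1, 10, n_nets),       # distance to the nearest flip-flop
    rng.random(n_nets),                # estimated signal transition probability
])
y = rng.integers(0, 2, n_nets)         # 1 = Trojan net, 0 = normal net (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = xgb.XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1,
                        eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("F-measure:", f1_score(y_te, clf.predict(X_te)))
```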

  • Demodulation Framework Based on Machine Learning for Unrepeated Transmission Systems

    Ryuta SHIRAKI  Yojiro MORI  Hiroshi HASEGAWA  

     
    PAPER

    Publicized: 2023/09/14  Vol: E107-B No:1  Page(s): 39-48

    We propose a demodulation framework to extend the maximum distance of unrepeated transmission systems, in which the simplest back propagation (BP), polarization and phase recovery, data arrangement for machine learning (ML), and ML-based symbol decision are rationally combined. The deterministic waveform distortion caused by fiber nonlinearity and chromatic dispersion is partially eliminated by BP, whose calculation cost is minimized by adopting the single-step Fourier method in a pre-processing step. The non-deterministic waveform distortion, i.e., polarization and phase fluctuations, can be eliminated precisely. Finally, the optimized ML model makes the symbol decision under the influence of the residual deterministic waveform distortion that cannot be canceled by the simplest BP. Extensive numerical simulations confirm that a DP-16QAM signal can be transmitted over 240 km of standard single-mode fiber without optical repeaters. The maximum transmission distance is extended by 25 km.
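
    A single BP step of the kind referred to above can be illustrated roughly as one frequency-domain chromatic-dispersion compensation followed by one lumped nonlinear phase de-rotation; the sketch below assumes generic fiber parameters and a toy waveform, not the paper's configuration.

```python
# Rough sketch of a single-step back propagation (BP) stage: one frequency-
# domain chromatic-dispersion compensation followed by one lumped nonlinear
# phase de-rotation. Fiber parameters and the toy signal are assumptions; the
# paper's actual BP configuration is not specified in the abstract.
import numpy as np

def single_step_bp(x, fs, length_m, beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2):
    """x: complex baseband samples, fs: sample rate [Hz], length_m: span length [m]."""
    # Linear step: undo chromatic dispersion of the whole span in one FFT pass.
    omega = 2 * np.pi * np.fft.fftfreq(len(x), d=1 / fs)
    x = np.fft.ifft(np.fft.fft(x) * np.exp(1j * beta2 / 2 * omega**2 * length_m))
    # Nonlinear step: one lumped phase de-rotation using the effective length.
    alpha = alpha_db_km / 4.343 / 1e3                      # attenuation [1/m]
    l_eff = (1 - np.exp(-alpha * length_m)) / alpha
    return x * np.exp(-1j * gamma * l_eff * np.abs(x) ** 2)

# Toy usage: a random QAM-like waveform at 64 GSa/s over 240 km.
rng = np.random.default_rng(0)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
y = single_step_bp(x, fs=64e9, length_m=240e3)
print("output power (normalized):", float(np.mean(np.abs(y) ** 2)))
```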

  • Feasibility Study of Numerical Calculation and Machine Learning Hybrid Approach for Renal Denervation Temperature Prediction

    Aditya RAKHMADI  Kazuyuki SAITO  

     
    PAPER-Electromagnetic Theory

    Publicized: 2023/05/22  Vol: E106-C No:12  Page(s): 799-807

    Transcatheter renal denervation (RDN) is a novel treatment that reduces blood pressure in patients with resistant hypertension by ablating the renal sympathetic nerves with an energy-based catheter, mostly using radio-frequency (RF) current. However, several inconsistent RDN treatments have been reported, mainly due to the narrow heating area of RF current and the inability to confirm successful nerve ablation in deep regions. We have proposed microwave energy as an alternative for creating a wider ablation area; however, confirming a successful ablation remains a problem. In this paper, we design a method for predicting the temperature at deep renal nerve ablation sites using hybrid numerical-calculation-driven machine learning (ML) in combination with a microwave catheter. This work is a first-step investigation of the hybrid ML prediction capability in a real-world situation. A catheter with a single-slot coaxial antenna at 2.45 GHz and a balloon catheter, combined with a thin thermometer probe on the balloon surface, is proposed. The lumen temperature measured by the probe is used as the ML input to predict the temperature rise at the ablation site. Heating experiments using phantoms with 6 mm and 8 mm holes at 41.3 W excitation power, and with an 8 mm hole at 36.4 W, were each performed eight times to check the feasibility and accuracy of the ML algorithm. In addition, the temperature at the ablation site was measured as a reference. The predictions of the ML algorithm agree well with the reference, with maximum differences of 6°C and 3°C for the 6 mm and 8 mm phantoms (at both power levels), respectively. Overall, the proposed ML algorithm can predict the temperature rise at the ablation site with high accuracy.
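
    The hybrid idea, training a regressor on numerically simulated pairs of lumen-probe temperature curves and deep ablation-site temperature rises and then applying it to measured probe data, can be sketched as follows; the regressor, curve length, and synthetic data are assumptions, not the paper's model.

```python
# Sketch of the hybrid approach: train a regressor on simulated pairs
# (lumen-probe temperature curve -> deep ablation-site temperature rise) and
# apply it to measured probe data. Shapes and the regressor are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
N_SIM, N_STEPS = 500, 30                      # simulated cases, probe samples per case (assumed)
probe_curves = rng.random((N_SIM, N_STEPS)) * 20 + 37    # lumen temperature vs. time [deg C]
site_rise = probe_curves.mean(axis=1) * 1.4 - 40          # toy stand-in for simulated rise

reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=1)
reg.fit(probe_curves, site_rise)              # training uses only simulated data

measured_curve = rng.random((1, N_STEPS)) * 20 + 37        # stand-in for an experiment
print("predicted ablation-site temperature rise [deg C]:",
      float(reg.predict(measured_curve)[0]))
```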

  • A Unified Software and Hardware Platform for Machine Learning Aided Wireless Systems

    Dody ICHWANA PUTRA  Muhammad HARRY BINTANG PRATAMA  Ryotaro ISSHIKI  Yuhei NAGAO  Leonardo LANANTE JR  Hiroshi OCHI  

     
    PAPER-Digital Signal Processing

    Publicized: 2023/08/22  Vol: E106-A No:12  Page(s): 1493-1503

    This paper presents a unified software and hardware wireless AI platform (USHWAP) for developing and evaluating machine learning in wireless systems. The platform integrates multiple software development environments, such as MATLAB and Python, with hardware platforms such as FPGAs and SDRs, allowing flexible and scalable development of device and edge computing applications. The USHWAP is implemented and validated using FPGAs and SDRs. Wireless signal classification, wireless LAN sensing, and rate adaptation are used as examples to showcase the platform's capabilities. The platform enables versatile development, from software simulation to real-time hardware implementation, offering flexibility and scalability for multiple applications. It is intended to be used by wireless-AI researchers to develop and evaluate intelligent algorithms in a laboratory environment.

  • Analysis and Identification of Root Cause of 5G Radio Quality Deterioration Using Machine Learning

    Yoshiaki NISHIKAWA  Shohei MARUYAMA  Takeo ONISHI  Eiji TAKAHASHI  

     
    PAPER

    Publicized: 2023/06/02  Vol: E106-B No:12  Page(s): 1286-1292

    It has become increasingly important for industries to promote digital transformation by utilizing 5G and the industrial internet of things (IIoT) to improve productivity. To protect IIoT application performance (work speed, productivity, etc.), it is often necessary to satisfy quality of service (QoS) requirements precisely. For this purpose, there is an increasing need to automatically identify the root causes of radio-quality deterioration so that prompt measures can be taken when QoS deteriorates. In this paper, a machine learning method for identifying the root cause of 5G radio-quality deterioration is proposed. This random-forest-based method detects the root cause, such as distance attenuation, shielding, fading, or their combination, by analyzing the coefficients of a quadratic polynomial approximation in addition to the mean values of time-series radio quality indicators. The detection accuracy of the proposed method was evaluated in a simulation using the MATLAB 5G Toolbox and found to be 98.30% when any single root cause occurs and 83.13% when multiple root causes occur simultaneously. The proposed method was compared with deep-learning methods, such as bidirectional long short-term memory (bidirectional LSTM) and one-dimensional convolutional neural networks (1D-CNN), that directly analyze the time-series radio-quality data, and it was found to be more accurate than those methods.
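
    The feature construction described above (quadratic-polynomial coefficients plus the mean of each radio-quality time series, fed to a random forest) can be sketched as follows; the indicator names and toy labels are assumptions, since the real labels come from the MATLAB 5G Toolbox simulation.

```python
# Sketch of the feature construction: fit a quadratic polynomial to each
# radio-quality time series and feed its coefficients, together with the
# series mean, to a random forest. Indicator names and labels are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CAUSES = ["distance_attenuation", "shielding", "fading"]

def series_features(series):
    """Quadratic-fit coefficients plus mean for one radio-quality indicator."""
    t = np.arange(len(series))
    a2, a1, a0 = np.polyfit(t, series, deg=2)
    return [a2, a1, a0, series.mean()]

def sample_features(rsrp, sinr):
    return np.concatenate([series_features(rsrp), series_features(sinr)])

rng = np.random.default_rng(0)
X = np.array([sample_features(rng.normal(-90, 3, 50), rng.normal(15, 2, 50))
              for _ in range(300)])
y = rng.integers(0, len(CAUSES), 300)          # toy labels; real ones come from simulation
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print("detected root cause:",
      CAUSES[clf.predict(sample_features(rng.normal(-95, 3, 50),
                                         rng.normal(10, 2, 50))[np.newaxis])[0]])
```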

  • GNSS Spoofing Detection Using Multiple Sensing Devices and LSTM Networks

    Xin QI  Toshio SATO  Zheng WEN  Yutaka KATSUYAMA  Kazuhiko TAMESUE  Takuro SATO  

     
    PAPER

    Publicized: 2023/08/03  Vol: E106-B No:12  Page(s): 1372-1379

    The rise of next-generation logistics systems featuring autonomous vehicles and drones has brought to light the severe problem of Global Navigation Satellite System (GNSS) location data spoofing. While signal-based anti-spoofing techniques have been studied, they are often difficult to apply to current commercial GNSS modules. In this study, we explore the use of multiple sensing devices and machine learning techniques, such as decision tree classifiers and long short-term memory (LSTM) networks, for detecting GNSS location data spoofing. We acquire sensing data from six trajectories and generate spoofing data based on software-defined radio (SDR) behavior for evaluation. We define multiple features using GNSS, beacon, and inertial measurement unit (IMU) data and develop models to detect spoofing. Our experimental results indicate that LSTM networks using ten sequential past samples exhibit higher performance, with accuracy and F1 scores above 0.92 when appropriate features, including beacons, are used, as well as generalization ability on untrained test data. Additionally, our results suggest that the distance from beacons is a valuable metric for detecting GNSS spoofing and demonstrate the potential of beacon installation along future drone highways.
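
    A minimal sketch of an LSTM detector over windows of the ten most recent samples is shown below; the three features used here (GNSS position delta, distance to the nearest beacon, IMU acceleration norm), the network size, and the toy data are assumptions rather than the paper's feature set.

```python
# Sketch of an LSTM spoofing detector over windows of the ten most recent
# samples. The feature set and the network size are assumptions.
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, N_FEAT = 10, 3                      # ten sequential past samples, assumed features
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEAT)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),   # 1 = spoofed, 0 = genuine
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy training data standing in for trajectories with SDR-generated spoofing.
X = np.random.rand(2000, SEQ_LEN, N_FEAT)    # [GNSS delta, beacon distance, IMU norm]
y = np.random.randint(0, 2, 2000)
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

window = np.random.rand(1, SEQ_LEN, N_FEAT)  # latest 10 samples from the sensors
print("spoofing probability:", float(model.predict(window, verbose=0)[0, 0]))
```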

  • Machine Learning-Based Compensation Methods for Weight Matrices of SVD-MIMO Open Access

    Kiminobu MAKINO  Takayuki NAKAGAWA  Naohiko IAI  

     
    PAPER-Antennas and Propagation

    Publicized: 2023/07/24  Vol: E106-B No:12  Page(s): 1441-1454

    This paper proposes and evaluates machine learning (ML)-based compensation methods for the transmit (Tx) weight matrices of actual singular value decomposition (SVD) multiple-input multiple-output (MIMO) transmissions. These methods train ML models and compensate the Tx weight matrices using a large amount of training data created from statistical distributions. Moreover, this paper proposes simplified channel metrics based on the channel quality of actual SVD-MIMO transmissions to evaluate compensation performance. The optimal parameters are determined from many ML parameter candidates using these metrics, and the metrics used for this determination are themselves evaluated. Finally, a comprehensive computer simulation shows that the optimal parameters improve performance by up to 7.0 dB compared with the conventional method.

  • Comments on Quasi-Linear Support Vector Machine for Nonlinear Classification

    Sei-ichiro KAMATA  Tsunenori MINE  

     
    WRITTEN DISCUSSION-General Fundamentals and Boundaries

    Publicized: 2023/05/08  Vol: E106-A No:11  Page(s): 1444-1445

    In 2014, the above paper, entitled 'Quasi-Linear Support Vector Machine for Nonlinear Classification', was published by Zhou et al. [1]. They proposed a quasi-linear kernel function for the support vector machine (SVM). In this letter, however, we point out that the proposed kernel function is part of the family of kernel functions generated by the well-known multiple kernel learning framework proposed by Bach et al. [2] in 2004. Since then, there have been many related papers on multiple kernel learning and its applications [3]. This letter verifies that the main kernel function proposed by Zhou et al. [1] can be derived using multiple kernel learning algorithms [3]. In the kernel construction, Zhou et al. [1] used Gaussian kernels, but the locality of additive Gaussian and other kernels had already been discussed within the multiple kernel learning framework [4], [5]. In particular, additive Gaussian and other kernels were discussed in a tutorial at the major international conference ECCV 2012 [6]. The authors did not discuss these matters.

  • Authors' Reply to the Comments by Kamata et al.

    Bo ZHOU  Benhui CHEN  Jinglu HU  

     
    WRITTEN DISCUSSION

    Publicized: 2023/05/08  Vol: E106-A No:11  Page(s): 1446-1449

    We thank Kamata et al. (2023) [1] for their interest in our work [2] and for providing an explanation of the quasi-linear kernel from the viewpoint of multiple kernel learning. In this letter, we first give a summary of the quasi-linear SVM. We then discuss the novelty of quasi-linear kernels with respect to multiple kernel learning. Finally, we explain the contributions of our work [2].

  • Physical Status Representation in Multiple Administrative Optical Networks by Federated Unsupervised Learning

    Takahito TANIMURA  Riu HIRAI  Nobuhiko KIKUCHI  

     
    PAPER

    Publicized: 2023/08/01  Vol: E106-B No:11  Page(s): 1084-1092

    We present our data-collection and deep neural network (DNN) training scheme for extracting the optical status from signals received by digital coherent optical receivers in fiber-optic networks. The DNN is trained with unlabeled datasets across multiple administrative network domains by combining federated learning and unsupervised learning. The scheme allows network administrators to train a common DNN-based encoder that extracts the optical status of their networks without revealing their private datasets. An early-stage proof of concept was demonstrated numerically by simulation, estimating the optical signal-to-noise ratio and modulation format of 64-GBd 16QAM and quadrature phase-shift keying signals.
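
    The federated part of the scheme can be sketched roughly as follows: each administrative domain trains a small autoencoder on its own unlabeled data, and only the model weights are averaged centrally, so raw datasets never leave the domains. The model size, feature dimension, and round count are assumptions; the paper's encoder and training details are not given in the abstract.

```python
# Sketch of federated unsupervised training: per-domain autoencoders are
# trained locally on private data and only their weights are averaged.
# Model size, feature dimension, and round count are assumptions.
import numpy as np
from tensorflow.keras import layers, models

def make_autoencoder(dim=32):
    m = models.Sequential([
        layers.Input(shape=(dim,)),
        layers.Dense(8, activation="relu", name="encoder"),  # optical-status embedding
        layers.Dense(dim),
    ])
    m.compile(optimizer="adam", loss="mse")
    return m

domains = [np.random.rand(500, 32) for _ in range(3)]   # private per-domain datasets
global_model = make_autoencoder()

for rnd in range(5):                                      # federated averaging rounds
    local_weights = []
    for data in domains:
        local = make_autoencoder()
        local.set_weights(global_model.get_weights())
        local.fit(data, data, epochs=1, verbose=0)        # unsupervised: reconstruct input
        local_weights.append(local.get_weights())
    avg = [np.mean([w[i] for w in local_weights], axis=0)
           for i in range(len(local_weights[0]))]
    global_model.set_weights(avg)                         # only weights are shared

print("training done; encoder weights shared, raw data kept in each domain")
```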

  • Practical Improvement and Performance Evaluation of Road Damage Detection Model using Machine Learning

    Tomoya FUJII  Rie JINKI  Yuukou HORITA  

     
    LETTER-Image

    Publicized: 2023/06/13  Vol: E106-A No:9  Page(s): 1216-1219

    The social infrastructure of Japan, including roads and bridges built during the period of rapid economic growth, is now aging and needs to be strategically maintained and renewed. On the other hand, road maintenance in rural areas faces serious problems such as reduced maintenance budgets and a shortage of engineers due to the declining birthrate and aging population. It is therefore difficult for maintenance engineers to visually inspect all roads in rural areas, and a system that automatically detects road damage is required. This paper reports practical improvements to a road damage detection model using YOLOv5, an object detection model capable of real-time operation, focusing on road image features.

  • Fish School Behaviour Classification for Optimal Feeding Using Dense Optical Flow

    Kazuki FUKAE  Tetsuo IMAI  Kenichi ARAI  Toru KOBAYASHI  

     
    PAPER

    Publicized: 2023/06/20  Vol: E106-D No:9  Page(s): 1472-1479

    With the growing global demand for seafood, sustainable aquaculture is attracting more attention than conventional capture fishing, which causes overfishing and damage to the marine environment. However, a major problem facing the aquaculture industry is the cost of feeding, which accounts for about 60% of expenditure. Excessive feeding increases costs, and the accumulation of residual feed on the seabed degrades the water environment (e.g., by causing red tides). Therefore, it is increasingly important to raise fish efficiently with less feed by optimizing the timing and quantity of feeding. We thus developed a system that quantifies the amount of fish activity from captured images to determine the optimal feeding time and feed quantity. For quantification, we used optical flow, a method for tracking individual objects. However, it is difficult to track individual fish and quantify their activity when many fish are present. Therefore, all fish in the frame are treated as a single school, and the amount of change over the entire frame is used as the amount of school activity. Specifically, we divided the entire image into fixed regions and vectorized the amount of change in each region using optical flow. A vector represents the moving distance and direction. We used histogram data as the indicator of fish activity by dividing the vectors into classes and recording the number of occurrences in each class. We verified the effectiveness of the indicator by quantifying eating and non-eating movements during feeding. We evaluated the performance of the quantified indicators using support vector classification, a form of machine learning, and confirmed that the two activities can be correctly classified.
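
    The pipeline described above can be sketched roughly as follows: dense optical flow per frame pair, per-region averaging of the flow magnitude, a histogram over the regions as the activity indicator, and a support vector classifier. The grid size, histogram bins, and toy frames are assumptions, not the paper's settings.

```python
# Sketch of the described pipeline: dense optical flow per frame pair,
# per-region vectorization, a magnitude histogram as the activity indicator,
# and a support vector classifier. Grid size, bins, and data are assumed.
import cv2
import numpy as np
from sklearn.svm import SVC

GRID, BINS = 8, 16                                   # regions per side, histogram bins (assumed)

def activity_histogram(prev_gray, curr_gray):
    """Histogram of per-region mean flow magnitudes for one frame pair."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    region_means = [mag[i*h//GRID:(i+1)*h//GRID, j*w//GRID:(j+1)*w//GRID].mean()
                    for i in range(GRID) for j in range(GRID)]
    hist, _ = np.histogram(region_means, bins=BINS, range=(0, 10))
    return hist.astype(float)

# Toy frames standing in for the underwater video; labels 1 = eating, 0 = not eating.
rng = np.random.default_rng(0)
frames = rng.integers(0, 255, (40, 120, 160), dtype=np.uint8)
X = np.array([activity_histogram(frames[k], frames[k + 1]) for k in range(39)])
y = rng.integers(0, 2, 39)
clf = SVC().fit(X, y)
print("predicted behaviour:", "eating" if clf.predict(X[:1])[0] == 1 else "not eating")
```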

  • Few-Shot Learning-Based Malicious IoT Traffic Detection with Prototypical Graph Neural Networks

    Thin Tharaphe THEIN  Yoshiaki SHIRAISHI  Masakatu MORII  

     
    PAPER

    Publicized: 2023/06/22  Vol: E106-D No:9  Page(s): 1480-1489

    With a rapidly escalating number of sophisticated cyber-attacks, protecting Internet of Things (IoT) networks against unauthorized activity is a major concern. The detection of malicious attack traffic is thus crucial for IoT security to prevent unwanted traffic. However, existing traditional malicious traffic detection systems, which rely on supervised machine learning, need a considerable number of benign and malware traffic samples to train the models. Moreover, in the case of zero-day attacks, only a few labeled traffic samples are available for analysis. To deal with this, we propose a few-shot malicious IoT traffic detection system based on a prototypical graph neural network. The proposed approach does not require prior knowledge of network payload binaries or network traffic signatures. The model is trained on labeled traffic data and tested to evaluate its ability to detect new types of attacks when only a few labeled traffic samples are available. The proposed detection system first categorizes network traffic as bidirectional flows and visualizes each binary traffic flow as a color image. A neural network is then applied to the visualized traffic to extract important features. After that, using the proposed few-shot graph neural network approach, the model is trained on different few-shot tasks to generalize it to new, unseen attacks. The proposed model is evaluated on a network traffic dataset consisting of benign traffic and traffic corresponding to six types of attacks. The results reveal that our proposed model achieves F1 scores of 0.91 and 0.94 in 5-shot and 10-shot classification, respectively, and outperforms the baseline models.
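
    The prototypical few-shot step can be sketched as follows: a small CNN embeds the traffic-flow images, support embeddings are averaged per class into prototypes, and queries are classified by distance to those prototypes. The encoder, image size, and episode layout below are assumptions, not the paper's graph neural network.

```python
# Sketch of a prototypical few-shot episode on traffic-flow images: embed
# support and query images, average support embeddings per class into
# prototypes, and score queries by negative squared distance to prototypes.
import torch
import torch.nn as nn

embed = nn.Sequential(                       # toy CNN encoder for 32x32 flow images
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

def prototypical_logits(support, support_labels, query, n_classes):
    """Negative squared distance of each query embedding to each class prototype."""
    zs, zq = embed(support), embed(query)
    protos = torch.stack([zs[support_labels == c].mean(0) for c in range(n_classes)])
    return -torch.cdist(zq, protos) ** 2

# One toy 3-way 5-shot episode with 6 queries.
n_way, k_shot = 3, 5
support = torch.randn(n_way * k_shot, 3, 32, 32)
support_labels = torch.arange(n_way).repeat_interleave(k_shot)
query = torch.randn(6, 3, 32, 32)
query_labels = torch.randint(0, n_way, (6,))

logits = prototypical_logits(support, support_labels, query, n_way)
loss = nn.functional.cross_entropy(logits, query_labels)
loss.backward()                              # an episode gradient step would follow
print("episode loss:", float(loss))
```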

  • Malicious Domain Detection Based on Decision Tree

    Thin Tharaphe THEIN  Yoshiaki SHIRAISHI  Masakatu MORII  

     
    LETTER

    Publicized: 2023/06/22  Vol: E106-D No:9  Page(s): 1490-1494

    Different types of malicious attacks have been increasing simultaneously and have become a serious issue for cybersecurity. Most attacks leverage domain URLs as the attack communication medium and turn users into victims of phishing or spam. We take advantage of machine learning methods to detect the maliciousness of a domain automatically using three types of features: DNS-based, lexical, and semantic. The proposed approach exhibits high performance even with a small training dataset. The experimental results demonstrate that the proposed scheme achieves an approximate accuracy of 0.927 when using a random forest classifier.
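
    The lexical part of the feature set can be illustrated with a short sketch: simple string-derived features (the DNS-based and semantic features used in the paper would be added in the same way) are fed to a random forest classifier. The specific features and toy labels are assumptions.

```python
# Sketch of lexical domain features fed to a random forest. The feature list
# and the toy labels are illustrative assumptions, not the paper's feature set.
import math
from collections import Counter

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def lexical_features(domain):
    name = domain.lower()
    counts = Counter(name)
    entropy = -sum(c / len(name) * math.log2(c / len(name)) for c in counts.values())
    return [
        len(name),                                     # domain length
        sum(ch.isdigit() for ch in name) / len(name),  # digit ratio
        name.count("-"),                               # hyphen count
        name.count("."),                               # subdomain depth proxy
        entropy,                                       # character entropy
    ]

domains = ["google.com", "wikipedia.org", "xj9-paypa1-login.verify-acct.biz",
           "a0f3kqzm.top", "github.com", "secure-update-bank0.info"]
labels = [0, 0, 1, 1, 0, 1]                            # toy labels: 1 = malicious

X = np.array([lexical_features(d) for d in domains])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(np.array([lexical_features("login-micros0ft-verify.cc")])))
```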

  • Low-Cost Learning-Based Path Loss Estimation Using Correlation Graph CNN

    Keita IMAIZUMI  Koichi ICHIGE  Tatsuya NAGAO  Takahiro HAYASHI  

     
    LETTER-Communication Theory and Signals

    Publicized: 2023/01/26  Vol: E106-A No:8  Page(s): 1072-1076

    In this paper, we propose a method for predicting radio wave propagation using a correlation graph convolutional neural network (C-Graph CNN). We examine which parameters are suitable as system parameters in C-Graph CNN. The performance of the proposed method is evaluated through simulation in terms of path loss estimation accuracy and computational cost.
