
Keyword Search Result

[Keyword] ACH (1072 hits)

Results 41-60 of 1072

  • Intrusion Detection Model of Internet of Things Based on LightGBM Open Access

    Guosheng ZHAO  Yang WANG  Jian WANG  

     
    PAPER-Fundamental Theories for Communications

    Publicized: 2023/02/20
    Vol: E106-B No:8
    Page(s): 622-634

    Internet of Things (IoT) devices are widely used in various fields. However, their limited computing resources make them extremely vulnerable and difficult to protect effectively. Traditional intrusion detection systems (IDS) pursue high accuracy and a low false alarm rate (FAR), which often makes their spatiotemporal complexity too high for deployment on IoT devices. To address these problems, this paper proposes an IoT intrusion detection model based on the light gradient boosting machine (LightGBM). First, a one-dimensional convolutional neural network (CNN) extracts features from network traffic to reduce the feature dimensionality. Then, LightGBM classifies the type of network traffic. LightGBM inherits the advantages of gradient boosted trees while remaining more lightweight and constructing decision trees faster. Experiments on the TON-IoT and BoT-IoT datasets show that the proposed model performs better and is more lightweight than the comparison models: it shortens the prediction time by 90.66% and surpasses the comparison models in accuracy and other performance metrics, with strong detection capability for denial of service (DoS) and distributed denial of service (DDoS) attacks. Experimental results on a testbed built with IoT devices such as a Raspberry Pi show that the proposed model can perform effective, real-time intrusion detection on IoT devices.
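
    As a rough illustration of this two-stage pipeline, the sketch below pairs a Keras 1-D CNN feature extractor with a LightGBM classifier. It is a minimal sketch, assuming TensorFlow and the lightgbm package; the layer sizes, feature count, and random placeholder data are illustrative, not the paper's configuration (in the paper the CNN would first be trained before serving as a fixed extractor).

      # 1-D CNN reduces the per-flow feature vector, LightGBM classifies.
      import numpy as np
      import lightgbm as lgb
      from tensorflow import keras

      n_features = 40                             # per-flow features (illustrative)
      inputs = keras.Input(shape=(n_features, 1))
      x = keras.layers.Conv1D(32, 3, activation="relu")(inputs)
      x = keras.layers.MaxPooling1D(2)(x)
      x = keras.layers.Flatten()(x)
      embedding = keras.layers.Dense(16, activation="relu")(x)
      extractor = keras.Model(inputs, embedding)  # assumed pre-trained in practice

      X = np.random.rand(1000, n_features, 1)     # placeholder traffic records
      y = np.random.randint(0, 5, size=1000)      # placeholder attack-type labels

      Z = extractor.predict(X)                    # reduced feature vectors
      clf = lgb.LGBMClassifier(n_estimators=100).fit(Z, y)
      print(clf.predict(Z[:5]))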

  • Constructions of Low/Zero Correlation Zone Sequence Sets and Their Application in Grant-Free Non-Orthogonal Multiple Access System

    Tao LIU  Meiyue WANG  Dongyan JIA  Yubo LI  

     
    PAPER-Information Theory

    Publicized: 2022/12/16
    Vol: E106-A No:6
    Page(s): 907-915

    In the massive machine-type communication scenario, aiming at the problems of active user detection and channel estimation in the grant-free non-orthogonal multiple access (NOMA) system, new sets of non-orthogonal spreading sequences are proposed by using zero/low correlation zone sequence sets with low correlation among multiple sets. The simulation results show that the resulting sequence set has low coherence, which yields reliable performance for channel estimation and active user detection based on compressed sensing. Compared with the traditional Zadoff-Chu (ZC) sequences, the new non-orthogonal spreading sequences have more flexible lengths, lower peak-to-average power ratio (PAPR), and smaller alphabet sizes. Consequently, these sequences effectively mitigate the high PAPR of time-domain signals and are more suitable for low-cost devices in massive machine-type communication.
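
    To make the PAPR and correlation comparison concrete, here is a small NumPy check of the classical ZC baseline; a minimal sketch, with the proposed set construction left to the paper. The sequence length and roots are illustrative.

      import numpy as np

      def zadoff_chu(root, length):
          n = np.arange(length)
          return np.exp(-1j * np.pi * root * n * (n + 1) / length)

      def papr_db(x):                    # peak-to-average power ratio
          p = np.abs(x) ** 2
          return 10 * np.log10(p.max() / p.mean())

      def periodic_xcorr(a, b):          # correlation over all cyclic shifts
          return np.array([np.vdot(np.roll(b, s), a) for s in range(len(a))])

      N = 63
      print("ZC PAPR (dB):", papr_db(zadoff_chu(1, N)))
      corr = np.abs(periodic_xcorr(zadoff_chu(1, N), zadoff_chu(2, N))) / N
      print("max normalized cross-correlation:", corr.max())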

  • An Efficient Reference Image Sharing Method for the Image-Division Parallel Video Encoding Architecture

    Ken NAKAMURA  Yuya OMORI  Daisuke KOBAYASHI  Koyo NITTA  Kimikazu SANO  Masayuki SATO  Hiroe IWASAKI  Hiroaki KOBAYASHI  

     
    PAPER

    Publicized: 2022/11/29
    Vol: E106-C No:6
    Page(s): 312-320

    This paper proposes an efficient reference image sharing method for the image-division parallel video encoding architecture. The method reduces the amount of data transfer by combining pre-transfer with area prediction and on-demand transfer with a transfer management table. Experimental results show that data transfer can be reduced to 19.8-35.3% of that of the conventional method on average without major degradation of coding performance, which in turn reduces the required bandwidth of the inter-chip transfer interface.
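
    The interplay of the two mechanisms can be pictured as follows; a minimal Python illustration in which the block granularity, the predicted area, and the transfer interface are all placeholders rather than elements of the paper's hardware design.

      # Blocks in a predicted search area are pre-transferred in bulk;
      # any miss during motion estimation triggers an on-demand transfer.
      class ReferenceShareTable:
          def __init__(self):
              self.present = set()          # blocks already held locally

          def pre_transfer(self, predicted_area):
              for blk in predicted_area:    # bulk transfer ahead of encoding
                  self.present.add(blk)

          def fetch(self, blk):
              if blk not in self.present:   # miss: fetch via the table
                  self.present.add(blk)
                  return "transferred on demand"
              return "hit (no transfer)"

      table = ReferenceShareTable()
      table.pre_transfer({(x, y) for x in range(4) for y in range(2)})
      print(table.fetch((1, 1)))            # hit
      print(table.fetch((9, 9)))            # miss -> on-demand transfer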

  • Construction of a Support Tool for Japanese User Reading of Privacy Policies and Assessment of its User Impact

    Sachiko KANAMORI  Hirotsune SATO  Naoya TABATA  Ryo NOJIMA  

     
    PAPER

    Publicized: 2023/02/08
    Vol: E106-D No:5
    Page(s): 856-867

    To protect user privacy and establish rights of control over one's own information, service providers must notify users of their privacy policies and obtain their consent in advance, and the frameworks that impose these requirements are mandatory. Although originally designed to protect user privacy, obtaining user consent in advance has become a mere formality. These problems stem from the gap between service providers' privacy policies, which prioritize observance of laws and guidelines, and users' expectation of easily understanding how their data will be handled. To reduce this gap, we construct a tool that supports users in reading privacy policies in Japanese. The tool presents users with separately extracted unique expressions containing relevant information, improving the display format of the privacy policy and rendering it more comprehensible for Japanese users. To accurately extract the unique expressions from privacy policies, we created machine learning training data for the constructed tool. The tool provides a summary of privacy policies to help users understand the policies of interest. We then assess its effectiveness in experiments and follow-up questionnaires. Our findings reveal that the constructed tool enhances users' subjective understanding of the services they read about and their awareness of the related risks. We expect the developed tool to help users better understand privacy policy content and make educated decisions based on their understanding of how service providers intend to use their personal data.
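
    The extraction step could look roughly like the following, assuming a Hugging Face token-classification pipeline; the model name is a placeholder (the paper trains its own Japanese model on the authors' annotated data), and the entity labels mentioned are hypothetical.

      from transformers import pipeline

      # placeholder model name; the paper trains a dedicated Japanese NER model
      ner = pipeline("ner", model="example/japanese-policy-ner",
                     aggregation_strategy="simple")

      policy_text = "..."   # one section of a privacy policy (Japanese)
      for ent in ner(policy_text):
          # e.g., data items, purposes of use, third-party recipients
          print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))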

  • Study of FIT Dedicated Computer with Dataflow Architecture for High Performance 2-D Magneto-Static Field Simulation

    Chenxu WANG  Hideki KAWAGUCHI  Kota WATANABE  

     
    PAPER

    Publicized: 2022/08/23
    Vol: E106-C No:4
    Page(s): 136-143

    An approach to dedicated computers is discussed in this study as a route to portable, low-cost, and low-power-consumption high-performance computing. In particular, a dedicated computer with a dataflow architecture implementing the finite integration technique (FIT) for 2D magnetostatic field simulation is considered for industrial applications. The dataflow circuit of the BiCG-Stab matrix solver for the FIT matrix computation is designed in the very high-speed integrated circuit hardware description language (VHDL), and the operation of the designed circuit is verified by VHDL logic circuit simulation.
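
    For reference, the BiCG-Stab iteration that such a circuit implements is shown below in NumPy; each matrix-vector product, dot product, and vector update corresponds to a unit in the dataflow pipeline. A minimal sketch with a toy 2x2 system; the FIT system matrix itself follows the paper.

      import numpy as np

      def bicgstab(A, b, tol=1e-8, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x                  # initial residual
          r_hat = r.copy()
          rho = alpha = omega = 1.0
          v = p = np.zeros_like(b)
          for _ in range(max_iter):
              rho_new = r_hat @ r
              beta = (rho_new / rho) * (alpha / omega)
              p = r + beta * (p - omega * v)
              v = A @ p
              alpha = rho_new / (r_hat @ v)
              s = r - alpha * v
              if np.linalg.norm(s) < tol:
                  return x + alpha * p
              t = A @ s
              omega = (t @ s) / (t @ t)
              x = x + alpha * p + omega * s
              r = s - omega * t
              rho = rho_new
              if np.linalg.norm(r) < tol:
                  break
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(bicgstab(A, b))              # approx. [0.0909, 0.6364]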

  • A Methodology on Converting 10-K Filings into a Machine Learning Dataset and Its Applications

    Mustafa SAMI KACAR  Semih YUMUSAK  Halife KODAZ  

     
    PAPER

    Publicized: 2022/10/12
    Vol: E106-D No:4
    Page(s): 477-487

    Companies listed on the stock exchange are required to share their annual reports with the U.S. Securities and Exchange Commission (SEC) within the first three months following the fiscal year. These reports, namely 10-K filings, are made available to the public by the SEC through the Electronic Data Gathering, Analysis, and Retrieval database. 10-K filings use standard file formats (XBRL, HTML, PDF) to publish the companies' financial reports. Although the file formats propose a standard structure, the content and the metadata of the financial reports (e.g., tag names) are not strictly bound to a pre-defined schema. This study proposes a data collection and preprocessing method to semantify the financial reports and use the collected data for further analysis (i.e., machine learning). The analysis of eight different datasets created during the study is presented using the proposed data transformation methods. As a use case, five different machine learning algorithms were applied to the datasets to predict whether the corresponding company belongs to the S&P 500 index. The strong machine learning results indicate that the dataset generation methodology is successful and that the datasets are ready for further use.
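
    Once the filings are semantified, the use case reduces to a standard supervised setup; a minimal scikit-learn sketch in which a random placeholder matrix stands in for one of the eight generated datasets and a random forest stands in for one of the five algorithms.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import f1_score

      X = np.random.rand(500, 120)            # placeholder filing-derived features
      y = np.random.randint(0, 2, size=500)   # 1 = company in the S&P 500

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(X_tr, y_tr)
      print("F1:", f1_score(y_te, clf.predict(X_te)))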

  • An Efficient Combined Bit-Width Reducing Method for Ising Models

    Yuta YACHI  Masashi TAWADA  Nozomu TOGAWA  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2023/01/12
    Vol: E106-D No:4
    Page(s): 495-508

    Annealing machines such as quantum annealing machines and semiconductor-based annealing machines have been attracting attention as an efficient computing alternative for solving combinatorial optimization problems. They solve original combinatorial optimization problems by transforming them into a data structure called an Ising model. At that point, the bit-widths of the coefficients of the Ising model have to be kept within the range that an annealing machine can deal with. However, reducing the Ising-model bit-widths may change its minimum energy state, or ground state, from that of the original model, so that the targeted combinatorial optimization problem can no longer be solved well. This paper proposes an effective method for reducing an Ising model's bit-widths. The proposed method is composed of two processes: first, given an Ising model with large coefficient bit-widths, the shift method is applied to reduce the bit-widths roughly; second, the spin-adding method is applied to further reduce the bit-widths to those that annealing machines can deal with. Without adding too many extra spins, we efficiently reduce the coefficient bit-widths of the original Ising model, and in most practical cases the ground state is not much changed by the reduction. Experimental evaluations demonstrate the effectiveness of the proposed method compared with existing methods.
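
    The shift step can be pictured as repeatedly halving all coefficients until the largest magnitude fits the target bit-width, as sketched below; a minimal illustration, with the rounding behavior and the spin-adding step (which splits residual out-of-range coefficients across auxiliary spins) following the paper rather than this code.

      # Roughly reduce Ising coefficient bit-widths by repeated halving.
      def shift_reduce(coeffs, max_abs):
          shifts = 0
          while max(abs(c) for c in coeffs.values()) > max_abs:
              coeffs = {k: round(c / 2) for k, c in coeffs.items()}
              shifts += 1
          return coeffs, shifts

      # J[(i, j)] are Ising coupling coefficients (illustrative values)
      J = {(0, 1): 900, (1, 2): -350, (0, 2): 60}
      J_reduced, n_shifts = shift_reduce(J, max_abs=127)   # e.g., 8-bit machine
      print(J_reduced, "after", n_shifts, "shifts")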

  • CAMRI Loss: Improving the Recall of a Specific Class without Sacrificing Accuracy

    Daiki NISHIYAMA  Kazuto FUKUCHI  Youhei AKIMOTO  Jun SAKUMA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2023/01/23
    Vol: E106-D No:4
    Page(s): 523-537

    In real-world applications of multiclass classification models, misclassification in an important class (e.g., stop sign) can be significantly more harmful than in other classes (e.g., no parking). Thus, it is crucial to improve the recall of an important class while maintaining overall accuracy. For this problem, we found that improving the separation of important classes relative to other classes in the feature space is effective. Existing methods that add a class-sensitive penalty to the cross-entropy loss do not improve this separation, and methods designed to improve the separation between all classes are unsuitable for our purpose because they do not single out the important classes. To achieve the separation, we propose a loss function that explicitly penalizes the feature space, called class-sensitive additive angular margin (CAMRI) loss. CAMRI loss is expected to reduce the variance of the important class by adding a penalty to the angle between the important class's features and the corresponding weight vectors in the feature space. In addition, concentrating the penalty on only the important class hardly sacrifices the separation of the other classes. Experiments on CIFAR-10, GTSRB, and AwA2 showed that CAMRI loss can improve the recall of a specific class without sacrificing accuracy. In particular, compared with GTSRB's second-worst class recall when trained with cross-entropy loss, CAMRI loss improved recall by 9%.
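
    In the spirit of CAMRI loss, an additive angular margin applied only to the important class might be sketched as follows in PyTorch; the margin, scale, and single-important-class setup are illustrative, and the exact formulation is the paper's.

      import torch
      import torch.nn.functional as F

      def camri_like_loss(features, weights, labels, important, m=0.2, s=16.0):
          # cosine similarity between L2-normalized features and class weights
          cos = F.normalize(features) @ F.normalize(weights).t()
          theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
          # add the angular margin only where the label is the important class
          mask = (labels == important).float().unsqueeze(1) \
                 * F.one_hot(labels, weights.size(0)).float()
          logits = s * torch.cos(theta + m * mask)
          return F.cross_entropy(logits, labels)

      features = torch.randn(8, 64)            # batch of feature vectors
      weights = torch.randn(10, 64)            # last-layer class weights
      labels = torch.randint(0, 10, (8,))
      loss = camri_like_loss(features, weights, labels, important=3)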

  • Electromagnetic Wave Pattern Detection with Multiple Sensors in the Manufacturing Field

    Ayano OHNISHI  Michio MIYAMOTO  Yoshio TAKEUCHI  Toshiyuki MAEYAMA  Akio HASEGAWA  Hiroyuki YOKOYAMA  

     
    PAPER

    Publicized: 2022/08/23
    Vol: E106-B No:2
    Page(s): 109-116

    Multiple wireless communication systems are often operated together in the same area at manufacturing sites such as factories, where wideband noise may be emitted from industrial equipment over the channels used by wireless communication systems. To perform highly reliable wireless communication in such environments, the radio wave environment specific to each manufacturing site must be monitored to find channels and timings that enable stable communication. The authors studied technologies that use machine learning to efficiently analyze a large amount of monitoring data, including signals whose spectrum shape is undefined, such as wideband electromagnetic noise. In this paper, we generated common supervised data for multiple sensors by jointly clustering the features calculated at each sensor after normalization, in order to recognize the reception timing of signals from identical sources and to eliminate the complexity of supervised-data management. We confirmed our method's effectiveness through signal models and actual data sampled by sensors that we developed.
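
    The labeling idea, normalize per sensor, pool, and cluster jointly so that one source receives one common label across sensors, can be sketched with scikit-learn as follows; the feature dimensions, sensor count, and choice of clustering algorithm are illustrative.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      sensor_feats = [np.random.rand(200, 12) for _ in range(3)]  # 3 sensors

      # normalize within each sensor, then pool for joint clustering
      normalized = [StandardScaler().fit_transform(f) for f in sensor_feats]
      pooled = np.vstack(normalized)

      labels = KMeans(n_clusters=5, n_init=10).fit_predict(pooled)
      per_sensor_labels = np.split(labels, 3)   # common labels, per sensor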

  • Virtual Reality Campuses as New Educational Metaverses

    Katashi NAGAO  

     
    INVITED PAPER

    Publicized: 2022/10/13
    Vol: E106-D No:2
    Page(s): 93-100

    This paper focuses on the potential value and future prospects of using virtual reality (VR) technology in online education. In detailing online education and the latest VR technology, we focus on metaverse construction and artificial intelligence (AI) for educational VR use. In particular, we describe a virtual university campus featuring on-demand VR lectures in virtual lecture halls, automated machine-learning-based evaluation of student learning and training, and the linking of multiple digital campuses.

  • Combining Spiking Neural Networks with Artificial Neural Networks for Enhanced Image Classification

    Naoya MURAMATSU  Hai-Tao YU  Tetsuji SATOH  

     
    PAPER-Biocybernetics, Neurocomputing

    Publicized: 2022/11/07
    Vol: E106-D No:2
    Page(s): 252-261

    With the continued innovation of deep neural networks, spiking neural networks (SNNs), which more closely resemble biological brain synapses, have attracted attention because of their low power consumption. Unlike artificial neural networks (ANNs), they must employ an encoding process to convert continuous data values to spike trains, which suppresses the SNN's performance. To avoid this degradation, the incoming analog signal must be regulated prior to the encoding process, something also realized in living things (e.g., the basilar membrane of the human ear mechanically performs a Fourier transform). To this end, we combine an ANN and an SNN to build ANN-to-SNN hybrid neural networks (HNNs) that improve the performance in question. To quantify this performance and its robustness, the MNIST and CIFAR-10 image datasets are used for various classification tasks in which the training and encoding methods change. In addition, we present simultaneous and separate training methods for the artificial and spiking layers, considering the encoding methods of each. We find that increasing the number of artificial layers at the expense of spiking layers improves HNN performance. For straightforward datasets such as MNIST, performance similar to that of ANNs is achieved by using duplicate coding and separate learning. However, for more complex tasks, the use of Gaussian coding and simultaneous learning is found to improve the accuracy of the HNN while lowering power consumption.
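
    Gaussian coding, one of the encodings compared, can be illustrated as a population of Gaussian receptive fields in which stronger activation fires earlier; a minimal sketch, with the neuron count and time window chosen arbitrarily.

      import numpy as np

      def gaussian_encode(x, n_neurons=8, t_max=20.0):
          # each analog value excites several neurons via Gaussian tuning curves
          centers = np.linspace(0.0, 1.0, n_neurons)
          sigma = 1.0 / (n_neurons - 1)
          activation = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
          return t_max * (1.0 - activation)     # high activation -> early spike

      print(gaussian_encode(0.3))               # spike times for one pixel value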

  • Machine Learning in 6G Wireless Communications Open Access

    Tomoaki OHTSUKI  

     
    INVITED PAPER

    Publicized: 2022/08/10
    Vol: E106-B No:2
    Page(s): 75-83

    Mobile communication systems are the core not only of the Information and Communication Technology (ICT) infrastructure but also of our social infrastructure. The 5th generation mobile communication system (5G) has already started and is in use, and it is expected to serve various use cases in industry and society. Thus, many companies and research institutes are now trying to improve the performance of 5G, that is, 5G Enhancement, and to develop the next generation of mobile communication systems (Beyond 5G (6G)). 6G is expected to meet requirements that are highly demanding even compared with 5G, such as extremely high data rates, extremely large coverage, extremely low latency, extremely low energy, extremely high reliability, extremely massive connectivity, and so on. Artificial intelligence (AI) and machine learning (ML), AI/ML, will have more important roles than ever in 6G wireless communications under these extreme requirements, across a diversity of applications, including new combinations of requirements for new use cases. We can say that AI/ML will be essential for 6G wireless communications. This paper introduces some ML techniques and applications in 6G wireless communications, mainly focusing on the physical layer.

  • Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network

    Hyun KWON  

     
    LETTER-Information Network

    Publicized: 2022/11/07
    Vol: E106-D No:2
    Page(s): 262-266

    Deep neural networks (DNNs) perform well for image recognition, speech recognition, and pattern analysis. However, such neural networks are vulnerable to adversarial examples. An adversarial example is a data sample created by adding a small amount of noise to an original sample in such a way that it is difficult for humans to perceive but causes the sample to be misclassified by a target model. In a military environment, adversarial examples that are correctly classified by a friendly model while deceiving an enemy model may be useful. In this paper, we propose a method for generating a selective adversarial example that is correctly classified by a friendly gait recognition system and misclassified by an enemy gait recognition system. The proposed scheme generates the selective adversarial example by combining the loss for correct classification by the friendly gait recognition system with the loss for misclassification by the enemy gait recognition system. In our experiments, we used the CASIA Gait Database as the dataset and TensorFlow as the machine learning library. The results show that the proposed method can generate selective adversarial examples that achieve a 98.5% attack success rate against an enemy gait recognition system while being classified with 87.3% accuracy by a friendly gait recognition system.
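
    The combined objective can be sketched as a gradient step that lowers the friendly model's loss while raising the enemy model's, shown here in PyTorch for brevity (the paper's experiments use TensorFlow); the stand-in models, weighting, and FGSM-style update are illustrative.

      import torch
      import torch.nn.functional as F

      def selective_step(x, y, friendly, enemy, eps=0.01, lam=1.0):
          x = x.clone().detach().requires_grad_(True)
          # friendly stays correct (minimize CE); enemy misclassifies (maximize CE)
          loss = F.cross_entropy(friendly(x), y) \
                 - lam * F.cross_entropy(enemy(x), y)
          loss.backward()
          return (x - eps * x.grad.sign()).detach()

      friendly = torch.nn.Linear(10, 3)     # stand-ins for gait recognizers
      enemy = torch.nn.Linear(10, 3)
      x_adv = torch.randn(4, 10)
      y = torch.tensor([0, 1, 2, 0])
      for _ in range(20):                   # iterate the combined-loss step
          x_adv = selective_step(x_adv, y, friendly, enemy)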

  • A Comparative Study of Data Collection Periods for Just-In-Time Defect Prediction Using the Automatic Machine Learning Method

    Kosuke OHARA  Hirohisa AMAN  Sousuke AMASAKI  Tomoyuki YOKOGAWA  Minoru KAWAHARA  

     
    LETTER

    Publicized: 2022/11/11
    Vol: E106-D No:2
    Page(s): 166-169

    This paper focuses on the "data collection period" for training a better Just-In-Time (JIT) defect prediction model (the early commit data vs. the recent data) and conducts a large-scale comparative study to explore an appropriate data collection period. Since there are many possible machine learning algorithms for training defect prediction models, the selection of a machine learning algorithm can become a threat to validity. Hence, this study adopts an automatic machine learning method to mitigate selection bias in the comparative study. The empirical results on 122 open-source software projects confirm the trend that a dataset composed of recent commits makes a better training set for JIT defect prediction models.
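
    The comparison itself is straightforward once commit metrics are available; a minimal sketch using FLAML as one concrete automatic ML method (the letter does not prescribe this library), with random placeholder data in place of real commit histories.

      import numpy as np
      from flaml import AutoML
      from sklearn.metrics import roc_auc_score

      X = np.random.rand(3000, 14)              # placeholder commit metrics
      y = np.random.randint(0, 2, 3000)         # 1 = defect-inducing commit

      # chronologically ordered: early window, recent window, held-out test
      early, recent, test = slice(0, 1000), slice(1000, 2000), slice(2000, 3000)
      for name, window in [("early", early), ("recent", recent)]:
          automl = AutoML()
          automl.fit(X[window], y[window], task="classification",
                     time_budget=30, verbose=0)
          auc = roc_auc_score(y[test], automl.predict_proba(X[test])[:, 1])
          print(name, "window AUC:", round(auc, 3))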

  • Projection-Based Physical Adversarial Attack for Monocular Depth Estimation

    Renya DAIMO  Satoshi ONO  

     
    LETTER

    Publicized: 2022/10/17
    Vol: E106-D No:1
    Page(s): 31-35

    Monocular depth estimation has improved drastically owing to the development of deep neural networks (DNNs). However, recent studies have revealed that DNNs for monocular depth estimation contain vulnerabilities that can lead to misestimation when perturbations are added to the input. This study investigates whether DNNs for monocular depth estimation can be made to misestimate when patterned light is projected onto an object using a video projector. To this end, this study proposes an evolutionary adversarial attack method with a multi-fidelity evaluation scheme that creates adversarial examples under black-box conditions while suppressing the computational cost. Experiments in both simulated and real scenes showed that the designed light pattern caused a DNN to misestimate objects as if they had moved toward the back.
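
    The attack loop can be pictured as a simple evolution strategy in which a cheap low-fidelity estimate screens mutated light patterns before the expensive full evaluation; a minimal sketch, with the pattern shape, mutation rule, and both error functions (wrappers around the projector/scene simulation and the target DNN) as placeholders.

      import numpy as np

      def evolve(err_lofi, err_hifi, shape=(32, 32), sigma=0.1, iters=200):
          pattern = np.random.rand(*shape)        # projected light pattern
          best = err_hifi(pattern)                # depth-misestimation score
          for _ in range(iters):
              cand = np.clip(pattern + sigma * np.random.randn(*shape), 0, 1)
              if err_lofi(cand) <= err_lofi(pattern):
                  continue                        # rejected by the cheap screen
              score = err_hifi(cand)              # expensive render + DNN query
              if score > best:                    # maximize misestimation
                  pattern, best = cand, score
          return pattern, best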

  • Robust Optimization Model for Primary and Backup Capacity Allocations against Multiple Physical Machine Failures under Uncertain Demands in Cloud

    Mitsuki ITO  Fujun HE  Kento YOKOUCHI  Eiji OKI  

     
    PAPER-Network

    Publicized: 2022/07/05
    Vol: E106-B No:1
    Page(s): 18-34

    This paper proposes a robust optimization model for probabilistic protection under uncertain capacity demands that minimizes the total required capacity against multiple simultaneous failures of physical machines. The proposed model determines both primary and backup virtual machine allocations simultaneously under the probabilistic protection guarantee. To express the uncertainty of capacity demands, we introduce an uncertainty set that considers the upper bound of the total demand and the upper and lower bounds of each demand. The robust optimization technique is applied to the optimization model to deal with two uncertainties: failure events and capacity demands. With this technique, the model is formulated as a mixed integer linear programming (MILP) problem. To solve larger-sized problems, a simulated annealing (SA) heuristic is introduced, in which the capacity demands are obtained by solving maximum flow problems. Numerical results show that our proposed model reduces the total required capacity compared with the conventional model by determining both primary and backup virtual machine allocations simultaneously. We also compare the results of MILP, SA, and a baseline greedy algorithm; for larger-sized problems, SA and the greedy algorithm yield approximate solutions in practical time.
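
    The SA heuristic follows the usual skeleton sketched below; a minimal illustration in which a toy load-balancing cost stands in for the total required capacity (which the paper evaluates by solving maximum flow problems) and the neighbor move reassigns one virtual machine.

      import math, random

      def simulated_annealing(init, neighbor, cost, t0=1.0, alpha=0.995,
                              iters=5000):
          cur, cur_cost = init, cost(init)
          best, best_cost = cur, cur_cost
          t = t0
          for _ in range(iters):
              nxt = neighbor(cur)
              d = cost(nxt) - cur_cost
              if d < 0 or random.random() < math.exp(-d / t):
                  cur, cur_cost = nxt, cur_cost + d      # accept the move
              if cur_cost < best_cost:
                  best, best_cost = cur, cur_cost
              t *= alpha                                 # cool down
          return best, best_cost

      # toy usage: assign 10 VMs to 4 physical machines, balancing load
      def neighbor(a):
          a = a[:]
          a[random.randrange(len(a))] = random.randrange(4)
          return a

      def cost(a):                       # placeholder proxy: max PM load
          return max(a.count(m) for m in range(4))

      print(simulated_annealing([0] * 10, neighbor, cost))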

  • Intelligent Dynamic Channel Assignment with Small-Cells for Uplink Machine-Type Communications

    Se-Jin KIM  

     
    LETTER-Mobile Information Network and Personal Communications

    Publicized: 2022/06/27
    Vol: E106-A No:1
    Page(s): 88-91

    This letter proposes a novel intelligent dynamic channel assignment (DCA) scheme with small-cells to improve the system performance of uplink machine-type communications (MTC) based on OFDMA-FDD. Outdoor MTC devices (OMDs) suffer serious interference from indoor MTC devices (IMDs) served by small-cell access points (SAPs) under frequency reuse. Thus, in the proposed DCA scheme, the macro base station (MBS) first measures the received signal strength from both OMDs and IMDs after setting the transmission power. Then, the MBS dynamically assigns subchannels to each SAP, taking into account strong interference from IMDs at the MBS. Simulation results show that the proposed DCA scheme outperforms other schemes in terms of the capacity of OMDs and IMDs.
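
    The assignment rule reduces to ranking subchannels by the IMD interference the MBS measures; a minimal sketch in which the measurements, the number of subchannels per SAP, and the greedy rule are all illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n_saps, n_subch = 4, 8
      # interference[i, k]: measured IMD interference at the MBS if SAP i
      # reuses subchannel k (placeholder measurements)
      interference = rng.random((n_saps, n_subch))

      assignment = {}
      for i in range(n_saps):
          order = np.argsort(interference[i])     # least-interfering first
          assignment[i] = order[:2].tolist()      # e.g., 2 subchannels per SAP
      print(assignment)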

  • Multilayer Perceptron Training Accelerator Using Systolic Array

    Takeshi SENOO  Akira JINGUJI  Ryosuke KURAMOCHI  Hiroki NAKAHARA  

     
    PAPER

    Publicized: 2022/07/21
    Vol: E105-D No:12
    Page(s): 2048-2056

    Multilayer perceptron (MLP) is a basic neural network model that is used in practical industrial applications, such as network intrusion detection (NID) systems, and as a building block in newer models such as gMLP. Currently, there is a demand for fast training in NID and other areas, but training with numerous GPUs raises problems of power consumption and long training times. Many of the latest deep neural network (DNN) models and MLPs are trained using a backpropagation algorithm, which transmits the error gradient from the output layer back to the input layer, so that in the sequential computation the next input cannot be processed until the weights of all layers have been updated from the last layer; this is known as backward locking. In this study, a weight parameter update mechanism with time delays is proposed that accommodates the weight update delay and thereby allows simultaneous forward and backward computation. To this end, a one-dimensional systolic array structure was designed on a Xilinx Alveo U50 FPGA card, in which each layer of the MLP is assigned to a processing element (PE). The time-delay backpropagation algorithm executes all layers in parallel and transfers data between layers in a pipeline. The design is 3 times faster than an Intel Core i9 CPU and 2.5 times faster than an NVIDIA RTX 3090 GPU, and its processing speed per unit of power consumption is 11.5 times better than the CPU's and 21.4 times better than the GPU's. From these results, it is concluded that a training accelerator on an FPGA can achieve high speed and high energy efficiency.
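
    Conceptually, the time-delay update lets each layer apply a gradient that is a few steps old instead of waiting for the full backward pass, which is what keeps every PE in the systolic pipeline busy; a minimal software sketch of that idea (the real design implements it per layer in FPGA hardware, and the delay depth is illustrative here).

      from collections import deque
      import numpy as np

      class DelayedSGDLayer:
          def __init__(self, n_in, n_out, delay=2, lr=0.01):
              self.W = np.random.randn(n_in, n_out) * 0.1
              self.queue = deque([np.zeros_like(self.W)] * delay, maxlen=delay)
              self.lr = lr

          def apply_gradient(self, grad):
              stale = self.queue.popleft()   # gradient from `delay` steps ago
              self.queue.append(grad)        # enqueue the fresh gradient
              self.W -= self.lr * stale      # update proceeds without waiting

      layer = DelayedSGDLayer(4, 3)
      for step in range(5):
          layer.apply_gradient(np.ones((4, 3)) * step)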

  • Robust Speech Recognition Using Teacher-Student Learning Domain Adaptation

    Han MA  Qiaoling ZHANG  Roubing TANG  Lu ZHANG  Yubo JIA  

     
    PAPER-Speech and Hearing

    Publicized: 2022/09/09
    Vol: E105-D No:12
    Page(s): 2112-2118

    Recently, robust speech recognition for real-world applications has attracted much attention. This paper proposes a robust speech recognition method based on the teacher-student learning framework for domain adaptation. In particular, the student network is trained with a novel optimization criterion defined over the encoder outputs of both the teacher and student networks rather than over the final output posterior probabilities, which aims to map noisy audio to the same embedding space as clean audio so that the student network adapts to the noise domain. Comparative experiments demonstrate that the proposed method achieves good robustness against noise.
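
    The criterion amounts to matching encoder outputs on parallel clean/noisy inputs; a minimal PyTorch sketch with GRU encoders standing in for the acoustic-model front ends and an MSE distance standing in for the paper's criterion.

      import torch
      import torch.nn.functional as F

      teacher_enc = torch.nn.GRU(40, 128, batch_first=True)  # clean-trained
      student_enc = torch.nn.GRU(40, 128, batch_first=True)
      for p in teacher_enc.parameters():      # teacher stays frozen
          p.requires_grad_(False)

      clean = torch.randn(8, 100, 40)         # parallel clean/noisy features
      noisy = clean + 0.1 * torch.randn_like(clean)

      t_out, _ = teacher_enc(clean)
      s_out, _ = student_enc(noisy)
      loss = F.mse_loss(s_out, t_out)         # match the embedding spaces
      loss.backward()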

  • Reinforcement Learning for QoS-Constrained Autonomous Resource Allocation with H2H/M2M Co-Existence in Cellular Networks

    Xing WEI  Xuehua LI  Shuo CHEN  Na LI  

     
    PAPER

    Publicized: 2022/05/27
    Vol: E105-B No:11
    Page(s): 1332-1341

    Machine-to-Machine (M2M) communication plays a pivotal role in the evolution of the Internet of Things (IoT). Cellular networks, originally designed mainly for Human-to-Human (H2H) communications, are considered a key enabler for M2M communications. The introduction of M2M users causes a series of problems for traditional H2H users, i.e., interference between the various types of traffic, and resource allocation is an effective solution to these problems. In this paper, we consider shared resource block (RB) and power allocation in an H2H/M2M coexistence scenario in which M2M users are subdivided into delay-tolerant and delay-sensitive types. We first model the RB-power allocation problem as maximization of capacity under the Quality-of-Service (QoS) constraints of the different types of traffic. Then, a learning framework is introduced wherein a complex agent is built from simpler subagents, which provides the basis for a distributed deployment scheme. Further, we propose a distributed Q-learning based autonomous RB-power allocation algorithm (DQ-ARPA), which enables machine-type communication gateways (MTCGs) to act as agents that learn the wireless environment and choose the RB-power autonomously to maximize the capacity of M2M pairs while ensuring the QoS requirements of critical services. Simulation results indicate that, with an appropriate reward design, our proposed scheme succeeds in reducing the impact of delay-tolerant machine-type users on critical services in terms of SINR thresholds and outage ratios.
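
    A tabular skeleton of one agent's loop is sketched below; the state and action spaces are toy-sized, and the environment stub replaces the reward design (capacity minus QoS penalties) and network dynamics described in the paper.

      import numpy as np

      n_states, n_rb, n_pw = 16, 4, 3
      Q = np.zeros((n_states, n_rb * n_pw))    # one action = one (RB, power) pair
      alpha, gamma, eps = 0.1, 0.9, 0.1
      rng = np.random.default_rng(1)

      state = 0
      for step in range(10000):
          if rng.random() < eps:               # epsilon-greedy exploration
              action = rng.integers(n_rb * n_pw)
          else:
              action = int(Q[state].argmax())
          # environment stub: reward/next state would come from the network
          reward = rng.random()
          next_state = rng.integers(n_states)
          Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                       - Q[state, action])
          state = next_state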

Results 41-60 of 1072