Keyword Search Result

[Keyword] MAC (837 hits)

Showing results 41-60 of 837

  • Study of FIT Dedicated Computer with Dataflow Architecture for High Performance 2-D Magneto-Static Field Simulation

    Chenxu WANG  Hideki KAWAGUCHI  Kota WATANABE  

     
    PAPER
    Publicized: 2022/08/23
    Vol: E106-C No:4
    Page(s): 136-143

    An approach to dedicated computers is discussed in this study as a possibility for portable, low-cost, and low-power-consumption high-performance computing. In particular, a dedicated computer with a dataflow architecture implementing the finite integration technique (FIT) for 2-D magnetostatic field simulation is considered for industrial applications. The dataflow circuit for the BiCG-Stab solver of the FIT matrix equation is designed in the Very High Speed Integrated Circuit Hardware Description Language (VHDL), and the operation of the designed circuit is verified by VHDL logic-circuit simulation.
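
    As a software point of reference for the hardware design, a minimal BiCG-Stab solve might look as follows. This sketch uses SciPy; the 2-D Laplacian is a stand-in for the actual FIT system matrix, which the abstract does not specify.

```python
import numpy as np
from scipy.sparse import diags, eye, kron
from scipy.sparse.linalg import bicgstab

# Stand-in system: a 2-D Laplacian on an n-by-n grid. The actual FIT matrix
# for magnetostatic fields differs, but it is likewise large and sparse,
# which is what the dataflow circuit exploits.
n = 64
T = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (kron(eye(n), T) + kron(T, eye(n))).tocsr()
b = np.ones(A.shape[0])  # stand-in for the excitation (source) vector

# BiCG-Stab solve; on the dedicated computer, the vector operations of this
# iteration are mapped onto pipelined dataflow hardware instead.
x, info = bicgstab(A, b)
print("info:", info, "residual:", np.linalg.norm(A @ x - b))
```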

  • A Methodology on Converting 10-K Filings into a Machine Learning Dataset and Its Applications

    Mustafa SAMI KACAR  Semih YUMUSAK  Halife KODAZ  

     
    PAPER
    Publicized: 2022/10/12
    Vol: E106-D No:4
    Page(s): 477-487

    Companies listed on the stock exchange are required to share their annual reports with the U.S. Securities and Exchange Commission (SEC) within the first three months following the fiscal year. These reports, namely 10-K Filings, are made available to the public by the SEC through its Electronic Data Gathering, Analysis, and Retrieval database. 10-K Filings use standard file formats (XBRL, HTML, PDF) to publish the financial reports of the companies. Although the file formats impose a standard structure, the content and the metadata of the financial reports (e.g., tag names) are not strictly bound to a pre-defined schema. This study proposes a data collection and preprocessing method to semantify the financial reports and use the collected data for further analysis (i.e., machine learning). Analyses of the eight datasets created during the study are presented using the proposed data transformation methods. As a use case, five machine learning algorithms were applied to the datasets to predict whether the corresponding company belongs to the S&P 500 index. The strong machine learning results indicate that the dataset generation methodology is successful and that the datasets are ready for further use.

  • An Efficient Combined Bit-Width Reducing Method for Ising Models

    Yuta YACHI  Masashi TAWADA  Nozomu TOGAWA  

     
    PAPER-Fundamentals of Information Systems
    Publicized: 2023/01/12
    Vol: E106-D No:4
    Page(s): 495-508

    Annealing machines, such as quantum annealing machines and semiconductor-based annealing machines, have been attracting attention as an efficient computing alternative for solving combinatorial optimization problems. They solve an original combinatorial optimization problem by transforming it into a data structure called an Ising model, whose coefficient bit-widths must be kept within the range that the annealing machine can deal with. However, reducing the Ising model's bit-widths may change its minimum-energy state, or ground state, from that of the original model, so that the targeted combinatorial optimization problem can no longer be solved well. This paper proposes an effective method for reducing an Ising model's bit-widths. The proposed method is composed of two processes: first, given an Ising model with large coefficient bit-widths, the shift method is applied to reduce the bit-widths roughly; second, the spin-adding method is applied to further reduce the bit-widths to those that annealing machines can deal with. Without adding too many extra spins, we efficiently reduce the coefficient bit-widths of the original Ising model. Furthermore, the ground state is hardly changed by the reduction in most practical cases. Experimental evaluations demonstrate the effectiveness of the proposed method compared with existing methods.
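
    A minimal sketch of the first stage (the shift method) is given below, assuming integer couplings J and fields h; the rounding details and the spin-adding second stage are the paper's contribution and are not reproduced here.

```python
import numpy as np

def shift_reduce(J, h, target_bits):
    """Roughly reduce coefficient bit-widths by repeated halving (an
    arithmetic right shift) until every coefficient fits in target_bits
    signed bits. A sketch of the first stage only; the paper's second
    stage adds auxiliary spins to absorb the remaining error."""
    J, h = np.asarray(J, dtype=np.int64), np.asarray(h, dtype=np.int64)
    limit = 2 ** (target_bits - 1) - 1
    while max(np.abs(J).max(), np.abs(h).max()) > limit:
        # Halve with rounding; small coefficients may collapse to zero,
        # which is the distortion the spin-adding stage corrects.
        J = np.round(J / 2).astype(np.int64)
        h = np.round(h / 2).astype(np.int64)
    return J, h

J = np.array([[0, 120, -7], [120, 0, 33], [-7, 33, 0]])
h = np.array([501, -9, 64])
print(shift_reduce(J, h, target_bits=4))
```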

  • CAMRI Loss: Improving the Recall of a Specific Class without Sacrificing Accuracy

    Daiki NISHIYAMA  Kazuto FUKUCHI  Youhei AKIMOTO  Jun SAKUMA  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2023/01/23
    Vol: E106-D No:4
    Page(s): 523-537

    In real-world applications of multiclass classification models, misclassification of an important class (e.g., stop sign) can be significantly more harmful than that of other classes (e.g., no parking). Thus, it is crucial to improve the recall of an important class while maintaining overall accuracy. For this problem, we found that improving the separation of the important class from the other classes in the feature space is effective. Existing methods that add a class-sensitive penalty to the cross-entropy loss do not improve this separation, and methods designed to improve the separation between all classes are unsuitable for our purpose because they do not single out the important class. To achieve the separation, we propose a loss function that penalizes the feature space explicitly, called class-sensitive additive angular margin (CAMRI) loss. CAMRI loss is expected to reduce the angular variance of the important class by adding a penalty to the angle between the important class's features and the corresponding weight vectors in the feature space. In addition, concentrating the penalty on the important class hardly sacrifices the separation of the other classes. Experiments on CIFAR-10, GTSRB, and AwA2 showed that CAMRI loss can improve the recall of a specific class without sacrificing accuracy; in particular, on GTSRB, it improved the second-worst class recall under cross-entropy loss by 9%.
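
    A class-selective additive angular margin in the spirit described above might be sketched as follows in PyTorch; the margin and scale values and the exact per-class weighting are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def camri_like_loss(features, weights, labels, important_class,
                    margin=0.35, scale=30.0):
    """Class-selective additive angular margin in the spirit of CAMRI loss.
    A sketch only; the paper's exact formulation may differ. An angular
    margin is added to the true-class angle, but only for samples of the
    important class, which tightens that class in angular feature space."""
    cos = F.normalize(features) @ F.normalize(weights).t()
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    onehot = F.one_hot(labels, num_classes=weights.shape[0]).float()
    add = (labels == important_class).float().unsqueeze(1) * margin
    logits = scale * torch.cos(theta + add * onehot)
    return F.cross_entropy(logits, labels)

# Toy usage with random features, class weight vectors, and labels.
feats, W = torch.randn(8, 16), torch.randn(10, 16)
y = torch.randint(0, 10, (8,))
print(camri_like_loss(feats, W, y, important_class=3))
```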

  • Electromagnetic Wave Pattern Detection with Multiple Sensors in the Manufacturing Field

    Ayano OHNISHI  Michio MIYAMOTO  Yoshio TAKEUCHI  Toshiyuki MAEYAMA  Akio HASEGAWA  Hiroyuki YOKOYAMA  

     
    PAPER
    Publicized: 2022/08/23
    Vol: E106-B No:2
    Page(s): 109-116

    Multiple wireless communication systems are often operated together in the same area at manufacturing sites such as factories, where wideband noise emitted by industrial equipment may cover the channels used by those systems. To perform highly reliable wireless communication in such environments, the radio wave environment specific to each manufacturing site must be monitored to find channels and timings that enable stable communication. The authors have studied machine learning technologies for efficiently analyzing large amounts of monitoring data, including signals whose spectrum shape is undefined, such as wideband electromagnetic noise. In this paper, we generate common supervised data for multiple sensors by jointly clustering features, after normalizing the features calculated at each sensor, in order to recognize the reception timing of signals from identical sources and to eliminate the complexity of supervised-data management. We confirmed our method's effectiveness through signal models and actual data sampled by sensors that we developed.
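
    The normalize-then-cluster-jointly step could be sketched as follows with scikit-learn; the feature matrices, their shapes, and the cluster count are illustrative placeholders, not values from the paper.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-sensor feature matrices (rows: observation windows,
# columns: features such as per-band power).
rng = np.random.default_rng(0)
feats_a, feats_b = rng.random((200, 8)), rng.random((180, 8))

# Normalize the features computed at each sensor, then cluster the pooled
# set jointly, so receptions of the same emission at different sensors fall
# into a common cluster and yield shared supervised labels.
pooled = np.vstack([StandardScaler().fit_transform(f)
                    for f in (feats_a, feats_b)])
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pooled)
labels_a, labels_b = labels[:200], labels[200:]
```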

  • Virtual Reality Campuses as New Educational Metaverses

    Katashi NAGAO  

     
    INVITED PAPER
    Publicized: 2022/10/13
    Vol: E106-D No:2
    Page(s): 93-100

    This paper focuses on the potential value and future prospects of using virtual reality (VR) technology in online education. In detailing online education and the latest VR technology, we focus on metaverse construction and artificial intelligence (AI) for educational VR use. In particular, we describe a virtual university campus in which on-demand VR lectures are conducted in virtual lecture halls, student learning and training are evaluated automatically using machine learning, and multiple digital campuses are linked together.

  • Combining Spiking Neural Networks with Artificial Neural Networks for Enhanced Image Classification

    Naoya MURAMATSU  Hai-Tao YU  Tetsuji SATOH  

     
    PAPER-Biocybernetics, Neurocomputing
    Publicized: 2022/11/07
    Vol: E106-D No:2
    Page(s): 252-261

    With the continued innovation of deep neural networks, spiking neural networks (SNNs), which more closely resemble biological brain synapses, have attracted attention because of their low power consumption. Unlike artificial neural networks (ANNs), they must employ an encoding process to convert continuous data values into spike trains, which suppresses the SNN's performance. To avoid this degradation, the incoming analog signal must be regulated prior to the encoding process, a regulation that is also realized in living things; e.g., the basilar membrane in humans mechanically performs a Fourier transform. To this end, we combine an ANN and an SNN to build ANN-to-SNN hybrid neural networks (HNNs) that improve the performance in question. To quantify this performance and its robustness, the MNIST and CIFAR-10 image datasets are used for various classification tasks in which the training and encoding methods change. In addition, we present simultaneous and separate training methods for the artificial and spiking layers, considering the encoding methods of each. We find that increasing the number of artificial layers at the expense of spiking layers improves HNN performance. For straightforward datasets such as MNIST, performance similar to that of ANNs is achieved by using duplicate coding and separate learning. However, for more complex tasks, Gaussian coding and simultaneous learning improve the accuracy of the HNN while lowering power consumption.
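
    For readers unfamiliar with SNN encodings, a common rate-coding scheme can be sketched in a few lines; this shows Poisson-style coding as one representative example, whereas the paper evaluates duplicate and Gaussian coding.

```python
import numpy as np

def poisson_encode(values, n_steps, rng=None):
    """Rate-code analog values in [0, 1] into binary spike trains: at each
    time step, a neuron fires with probability equal to its input value.
    One common SNN encoding; the paper's duplicate and Gaussian codings
    are not shown here."""
    rng = np.random.default_rng(0) if rng is None else rng
    values = np.clip(values, 0.0, 1.0)
    return (rng.random((n_steps,) + values.shape) < values).astype(np.uint8)

pixels = np.array([0.0, 0.2, 0.9])   # e.g., normalized image intensities
spikes = poisson_encode(pixels, n_steps=50)
print(spikes.mean(axis=0))           # firing rates approximate the inputs
```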

  • Machine Learning in 6G Wireless Communications Open Access

    Tomoaki OHTSUKI  

     
    INVITED PAPER
    Publicized: 2022/08/10
    Vol: E106-B No:2
    Page(s): 75-83

    Mobile communication systems are the core not only of the Information and Communication Technology (ICT) infrastructure but also of our social infrastructure. The 5th generation mobile communication system (5G) has already started and is in use, and it is expected to serve various use cases in industry and society. Thus, many companies and research institutes are now trying to improve the performance of 5G, that is, working on 5G Enhancement and on the next generation of mobile communication systems (Beyond 5G, or 6G). 6G is expected to meet requirements that are even more demanding than those of 5G, such as extremely high data rates, extremely large coverage, extremely low latency, extremely low energy consumption, extremely high reliability, extremely massive connectivity, and so on. Artificial intelligence (AI) and machine learning (ML) will play more important roles than ever in 6G wireless communications under these extreme requirements, for a diversity of applications including new combinations of requirements for new use cases. We can say that AI/ML will be essential for 6G wireless communications. This paper introduces ML techniques and their applications in 6G wireless communications, mainly focusing on the physical layer.

  • Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network

    Hyun KWON  

     
    LETTER-Information Network
    Publicized: 2022/11/07
    Vol: E106-D No:2
    Page(s): 262-266

    Deep neural networks (DNNs) perform well in image recognition, speech recognition, and pattern analysis. However, such networks are vulnerable to adversarial examples. An adversarial example is a data sample created by adding a small amount of noise to an original sample in such a way that it is difficult for humans to detect but causes the sample to be misclassified by a target model. In a military environment, adversarial examples that are correctly classified by a friendly model while deceiving an enemy model may be useful. In this paper, we propose a method for generating a selective adversarial example that is correctly classified by a friendly gait recognition system and misclassified by an enemy gait recognition system. The proposed scheme generates the selective adversarial example by combining the loss for correct classification by the friendly system with the loss for misclassification by the enemy system. In our experiments, we used the CASIA Gait Database as the dataset and TensorFlow as the machine learning library. The results show that the proposed method generates selective adversarial examples that achieve a 98.5% attack success rate against the enemy gait recognition system while being classified with 87.3% accuracy by the friendly one.
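
    The combined objective the abstract describes admits a simple gradient-based sketch; the step size, weighting, and update rule below are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def selective_adversarial_step(x, y, friendly, enemy, alpha=0.01, lam=1.0):
    """One gradient step toward a selective adversarial example: keep the
    friendly model's loss on the true label y low while pushing the enemy
    model's loss high. This combined objective follows the abstract's
    description; the paper's exact formulation and optimizer may differ."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = (F.cross_entropy(friendly(x_adv), y)
            - lam * F.cross_entropy(enemy(x_adv), y))
    loss.backward()
    # A small signed step keeps the added noise hard for humans to notice.
    return (x_adv - alpha * x_adv.grad.sign()).detach()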

  • A Comparative Study of Data Collection Periods for Just-In-Time Defect Prediction Using the Automatic Machine Learning Method

    Kosuke OHARA  Hirohisa AMAN  Sousuke AMASAKI  Tomoyuki YOKOGAWA  Minoru KAWAHARA  

     
    LETTER
    Publicized: 2022/11/11
    Vol: E106-D No:2
    Page(s): 166-169

    This paper focuses on the “data collection period” for training a better Just-In-Time (JIT) defect prediction model (early commit data vs. recent commit data) and conducts a large-scale comparative study to explore an appropriate data collection period. Since there are many possible machine learning algorithms for training defect prediction models, the selection of a machine learning algorithm can become a threat to validity. Hence, this study adopts the automatic machine learning method to mitigate selection bias in the comparative study. Empirical results on 122 open-source software projects support the trend that a dataset composed of recent commits makes a better training set for JIT defect prediction models.
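
    The shape of the comparison can be sketched as follows; the synthetic features, labels, and fixed classifier are placeholders, and the study itself uses automatic machine learning over 122 real projects rather than the single model shown here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic commit features/labels in chronological order: train one JIT
# defect model on the earliest window and one on the most recent window,
# then evaluate both on held-out future commits.
rng = np.random.default_rng(0)
X = rng.random((3000, 14))
y = (rng.random(3000) < 0.2).astype(int)
windows = {"early": slice(0, 1000), "recent": slice(1000, 2000)}
X_test, y_test = X[2000:], y[2000:]
for name, w in windows.items():
    clf = RandomForestClassifier(random_state=0).fit(X[w], y[w])
    print(name, roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```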

  • Projection-Based Physical Adversarial Attack for Monocular Depth Estimation

    Renya DAIMO  Satoshi ONO  

     
    LETTER
    Publicized: 2022/10/17
    Vol: E106-D No:1
    Page(s): 31-35

    Monocular depth estimation has improved drastically due to the development of deep neural networks (DNNs). However, recent studies have revealed that DNNs for monocular depth estimation contain vulnerabilities that can lead to misestimation when perturbations are added to the input. This study investigates whether DNNs for monocular depth estimation are vulnerable to misestimation when a patterned light is projected onto an object using a video projector. To this end, this study proposes an evolutionary adversarial attack method with a multi-fidelity evaluation scheme that creates adversarial examples under black-box conditions while suppressing the computational cost. Experiments in both simulated and real scenes showed that the designed light pattern caused a DNN to misestimate objects as if they had moved to the back.
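
    A minimal black-box search of this kind can be sketched with a (1+1) evolution strategy; the score oracle is a hypothetical placeholder, and the paper's method additionally uses a multi-fidelity evaluation scheme that this loop omits.

```python
import numpy as np

def evolve_light_pattern(score, shape, iters=100, sigma=0.1):
    """(1+1)-evolution-strategy sketch of a black-box search for a projected
    light pattern that maximizes depth misestimation. `score(pattern)` is a
    hypothetical oracle that renders or simulates the projection and
    returns the resulting depth error."""
    rng = np.random.default_rng(0)
    best = rng.random(shape)
    best_score = score(best)
    for _ in range(iters):
        cand = np.clip(best + sigma * rng.standard_normal(shape), 0.0, 1.0)
        s = score(cand)
        if s > best_score:          # keep the candidate only if it improves
            best, best_score = cand, s
    return best

# Toy usage with a stand-in objective (closeness to an arbitrary pattern).
target = np.full((8, 8), 0.5)
print(evolve_light_pattern(lambda p: -np.abs(p - target).sum(), (8, 8)).round(1))
```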

  • Robust Optimization Model for Primary and Backup Capacity Allocations against Multiple Physical Machine Failures under Uncertain Demands in Cloud

    Mitsuki ITO  Fujun HE  Kento YOKOUCHI  Eiji OKI  

     
    PAPER-Network
    Publicized: 2022/07/05
    Vol: E106-B No:1
    Page(s): 18-34

    This paper proposes a robust optimization model for probabilistic protection under uncertain capacity demands that minimizes the total required capacity against multiple simultaneous failures of physical machines. The proposed model determines both primary and backup virtual machine allocations simultaneously under a probabilistic protection guarantee. To express the uncertainty of capacity demands, we introduce an uncertainty set that considers the upper bound of the total demand and the upper and lower bounds of each demand. The robust optimization technique is applied to the optimization model to deal with two kinds of uncertainty: failure events and capacity demands. With this technique, the model is formulated as a mixed-integer linear programming (MILP) problem. To solve larger problems, a simulated annealing (SA) heuristic is introduced, in which the capacity demands are obtained by solving maximum flow problems. Numerical results show that our proposed model reduces the total required capacity compared with the conventional model by determining primary and backup virtual machine allocations simultaneously. We also compare the results of MILP, SA, and a baseline greedy algorithm; for larger problems, SA and the greedy algorithm yield approximate solutions in practical time.
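
    A heavily simplified, deterministic toy of the primary/backup allocation idea can be written as a small MILP with PuLP; all numbers are placeholders, and the paper's actual model additionally handles probabilistic protection and the demand uncertainty set.

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

# Deterministic toy: place each VM's primary and backup on different
# physical machines (PMs) and minimize the capacity that must be
# provisioned on the most loaded PM.
vms, pms, demand = range(4), range(3), [2, 3, 1, 2]
prob = LpProblem("primary_backup_allocation", LpMinimize)
x = LpVariable.dicts("prim", (vms, pms), cat="Binary")
y = LpVariable.dicts("back", (vms, pms), cat="Binary")
cap = LpVariable("cap", lowBound=0)
prob += cap  # objective: capacity provisioned per PM
for v in vms:
    prob += lpSum(x[v][p] for p in pms) == 1   # exactly one primary
    prob += lpSum(y[v][p] for p in pms) == 1   # exactly one backup
    for p in pms:
        prob += x[v][p] + y[v][p] <= 1         # backup on a different PM
for p in pms:
    prob += lpSum(demand[v] * (x[v][p] + y[v][p]) for v in vms) <= cap
prob.solve()
print("required capacity per PM:", cap.value())
```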

  • Intelligent Dynamic Channel Assignment with Small-Cells for Uplink Machine-Type Communications

    Se-Jin KIM  

     
    LETTER-Mobile Information Network and Personal Communications
    Publicized: 2022/06/27
    Vol: E106-A No:1
    Page(s): 88-91

    This letter proposes a novel intelligent dynamic channel assignment (DCA) scheme with small cells to improve system performance for uplink machine-type communications (MTC) based on OFDMA-FDD. Outdoor MTC devices (OMDs) suffer serious interference from indoor MTC devices (IMDs) served by small-cell access points (SAPs) under frequency reuse. Thus, in the proposed DCA scheme, the macro base station (MBS) first measures the received signal strength from both OMDs and IMDs after setting the transmission power. Then, the MBS dynamically assigns subchannels to each SAP, taking into account strong interference from IMDs toward the MBS. Simulation results show that the proposed DCA scheme outperforms other schemes in terms of the capacity of OMDs and IMDs.
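
    A greedy caricature of the assignment step might look like this; the measurement matrix, counts, and least-interference rule are assumptions for illustration, not the letter's algorithm.

```python
import numpy as np

# After the MBS measures received signal strength, give each SAP the
# subchannels on which its IMDs interfere least toward the MBS.
rng = np.random.default_rng(0)
n_saps, n_subch, per_sap = 4, 12, 3
imd_to_mbs = rng.random((n_saps, n_subch))   # measured interference levels
assignment = {s: np.argsort(imd_to_mbs[s])[:per_sap].tolist()
              for s in range(n_saps)}
print(assignment)
```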

  • Multilayer Perceptron Training Accelerator Using Systolic Array

    Takeshi SENOO  Akira JINGUJI  Ryosuke KURAMOCHI  Hiroki NAKAHARA  

     
    PAPER
    Publicized: 2022/07/21
    Vol: E105-D No:12
    Page(s): 2048-2056

    The multilayer perceptron (MLP) is a basic neural network model used in practical industrial applications, such as network intrusion detection (NID) systems, and as a building block in newer models such as gMLP. Currently, there is a demand for fast training in NID and other areas. However, training with numerous GPUs raises the problems of power consumption and long training times. Many of the latest deep neural network (DNN) models and MLPs are trained using a backpropagation algorithm that transmits the error gradient from the output layer back to the input layer, so that, in the sequential computation, the next input cannot be processed until the weights of all layers have been updated from the last layer; this is known as backward locking. In this study, a weight parameter update mechanism with time delays is proposed that accommodates the weight update delay and thereby allows simultaneous forward and backward computation. To this end, a one-dimensional systolic array structure was designed on a Xilinx Alveo U50 FPGA card, in which each layer of the MLP is assigned to a processing element (PE). The time-delay backpropagation algorithm executes all layers in parallel and transfers data between layers in a pipeline. Compared with an Intel Core i9 CPU and an NVIDIA RTX 3090 GPU, the design is 3 times faster than the CPU and 2.5 times faster than the GPU, and its processing speed per unit of power consumption is 11.5 times better than the CPU's and 21.4 times better than the GPU's. From these results, it is concluded that a training accelerator on an FPGA can achieve high speed and high energy efficiency.
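
    The time-delay idea can be illustrated with a toy two-layer NumPy loop in which the earlier layer updates with a one-step-old gradient; the delay length, network size, and data are assumptions, and the paper realizes the scheduling in hardware rather than software.

```python
import numpy as np

# Toy illustration: the first layer consumes a stale gradient, so the
# forward pass of the next mini-batch need not wait for the full backward
# sweep. On the FPGA, each layer runs on its own systolic processing
# element in parallel.
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 8)), rng.standard_normal((8, 2))
delayed_g1 = [np.zeros_like(W1)]   # one-step delay buffer for layer 1
lr = 0.01
for step in range(100):
    x = rng.standard_normal((16, 4))
    t = rng.standard_normal((16, 2))
    h = np.tanh(x @ W1)
    e = h @ W2 - t                          # output error
    g2 = h.T @ e                            # fresh gradient, last layer
    g1 = x.T @ ((e @ W2.T) * (1 - h ** 2))  # fresh gradient, first layer
    W2 -= lr * g2                           # last layer updates immediately
    W1 -= lr * delayed_g1.pop(0)            # first layer uses stale gradient
    delayed_g1.append(g1)
```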

  • Reinforcement Learning for QoS-Constrained Autonomous Resource Allocation with H2H/M2M Co-Existence in Cellular Networks

    Xing WEI  Xuehua LI  Shuo CHEN  Na LI  

     
    PAPER
    Publicized: 2022/05/27
    Vol: E105-B No:11
    Page(s): 1332-1341

    Machine-to-Machine (M2M) communication plays a pivotal role in the evolution of the Internet of Things (IoT). Cellular networks, originally designed mainly for Human-to-Human (H2H) communications, are considered a key enabler for M2M communications. The introduction of M2M users causes a series of problems for traditional H2H users, e.g., interference between the various kinds of traffic, and resource allocation is an effective solution to these problems. In this paper, we consider shared resource block (RB) and power allocation in an H2H/M2M coexistence scenario, where M2M users are subdivided into delay-tolerant and delay-sensitive types. We first model the RB-power allocation problem as capacity maximization under the Quality-of-Service (QoS) constraints of the different traffic types. Then, a learning framework is introduced in which a complex agent is built from simpler subagents, providing the basis for a distributed deployment scheme. Further, we propose a distributed Q-learning based autonomous RB-power allocation algorithm (DQ-ARPA), which enables machine-type communication gateways (MTCGs), acting as agents, to learn the wireless environment and choose RB-power levels autonomously so as to maximize the capacity of M2M pairs while ensuring the QoS requirements of critical services. Simulation results indicate that, with an appropriate reward design, our proposed scheme succeeds in reducing the impact of delay-tolerant machine-type users on critical services in terms of SINR thresholds and outage ratios.
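
    The per-agent learning core is standard tabular Q-learning; the skeleton below uses placeholder state/action spaces and a dummy environment, since the abstract does not specify them.

```python
import numpy as np

# Tabular Q-learning skeleton for one MTCG agent. In DQ-ARPA the action is
# an (RB, power) pair and the reward is designed so that delay-tolerant
# users back off before critical services violate their SINR thresholds.
n_states, n_actions = 10, 6
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def env_step(state, action):                       # hypothetical environment
    return rng.integers(n_states), rng.random()    # next state, reward

s = 0
for _ in range(1000):
    # Epsilon-greedy action selection, then the standard Q-learning update.
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = env_step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```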

  • Multi-Targeted Poisoning Attack in Deep Neural Networks

    Hyun KWON  Sunghwan CHO  

     
    LETTER
    Publicized: 2022/08/09
    Vol: E105-D No:11
    Page(s): 1916-1920

    Deep neural networks show good performance in image recognition, speech recognition, and pattern analysis. However, they also have weaknesses, one of which is their vulnerability to poisoning attacks. A poisoning attack reduces the accuracy of a model by training the model on malicious data, and a number of studies have been conducted on such attacks. The existing type of poisoning attack causes misrecognition by a single classifier. In certain situations, however, it is necessary for multiple models to misrecognize certain data as different specific classes. For example, if there are enemy autonomous vehicles A, B, and C, a poisoning attack could mislead A into turning left, B into stopping, and C into turning right simply by using a traffic sign. In this paper, we propose a multi-targeted poisoning attack method that causes each of several models to misrecognize certain data as a different target class. This study used MNIST and CIFAR10 as datasets and TensorFlow as the machine learning library. The experimental results show that the proposed scheme achieves a 100% average attack success rate on MNIST and CIFAR10 when malicious data accounting for 5% of the training dataset are used for training.
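
    The data-preparation step of such an attack can be sketched as below; the trigger, label assignment, and poisoning ratio are illustrative placeholders rather than the paper's construction.

```python
import numpy as np

def poison_for_targets(x_clean, y_clean, trigger, model_targets,
                       ratio=0.05, rng=None):
    """Prepare per-model poisoned training sets: the same triggered samples
    are labeled differently for each victim model, so model A learns to map
    the trigger to one class and model B to another. Data is assumed to lie
    in [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    idx = rng.choice(len(x_clean), int(len(x_clean) * ratio), replace=False)
    poisoned = {}
    for model_name, target_class in model_targets.items():
        x, y = x_clean.copy(), y_clean.copy()
        x[idx] = np.clip(x[idx] + trigger, 0.0, 1.0)  # stamp the trigger
        y[idx] = target_class                         # model-specific label
        poisoned[model_name] = (x, y)
    return poisoned
```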

  • Toward Selective Membership Inference Attack against Deep Learning Model

    Hyun KWON  Yongchul KIM  

     
    LETTER
    Publicized: 2022/07/26
    Vol: E105-D No:11
    Page(s): 1911-1915

    In this paper, we propose a selective membership inference attack method that determines whether certain data corresponding to a specific class were used as training data for a machine learning model. With the proposed method, membership or non-membership is inferred by generating a decision model from the predictions of the inference models and training it on the confidence values for the data of the selected class. We used MNIST as the experimental dataset and TensorFlow as the machine learning library. Experimental results show that the proposed method achieves a 92.4% success rate with 5 inference models for data of a specific class.
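
    The decision-model idea can be sketched with synthetic confidence values; the distributions and the logistic-regression decision model below are placeholders, since the abstract does not name the classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Members of the training set tend to receive higher confidences from the
# inference models than non-members; a simple classifier over the five
# models' confidence values can pick up that gap.
rng = np.random.default_rng(0)
conf_members = np.clip(rng.normal(0.95, 0.05, (100, 5)), 0, 1)
conf_nonmembers = np.clip(rng.normal(0.70, 0.15, (100, 5)), 0, 1)
X = np.vstack([conf_members, conf_nonmembers])
y = np.array([1] * 100 + [0] * 100)            # 1 = member of training data
decision_model = LogisticRegression().fit(X, y)
print(decision_model.predict_proba(X[:1]))     # membership probability
```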

  • A COM Based High Speed Serial Link Optimization Using Machine Learning Open Access

    Yan WANG  Qingsheng HU  

     
    PAPER
    Publicized: 2022/05/09
    Vol: E105-C No:11
    Page(s): 684-691

    This paper presents a channel operating margin (COM) based high-speed serial link optimization using machine learning (ML). COM, which was proposed for evaluating serial links, is calculated first, and during the calculation several important equalization parameters corresponding to the best configuration are extracted; these parameters can be used for ML modeling of the serial link. Then a deep neural network with hidden layers is investigated to model the whole serial-link equalization, including the transmitter feed-forward equalizer (FFE), the receiver continuous-time linear equalizer (CTLE), and the decision feedback equalizer (DFE). By training, validating, and testing many samples that meet the COM specification of 400GAUI-8 C2C, an effective ML model is generated, whose maximum relative error is only 0.1 compared with the computed results. Finally, three link configurations are discussed from the viewpoint of the tradeoff between link performance and cost, illustrating that our COM-based ML modeling method can be applied to advanced serial link design for NRZ, PAM4, or even higher-order pulse amplitude modulation signaling.
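
    The surrogate-modeling step has roughly this shape; the feature layout and the linear stand-in for the COM computation are placeholders, and the paper trains on samples from actual COM calculations for the 400GAUI-8 C2C channel.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Learn COM as a function of the equalization parameters
# (FFE taps, CTLE gain, DFE taps).
rng = np.random.default_rng(0)
params = rng.uniform(-1, 1, (2000, 8))          # equalizer settings
com = params @ rng.uniform(-1, 1, 8) + 0.05 * rng.standard_normal(2000)
X_tr, X_te, y_tr, y_te = train_test_split(params, com, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)
print("max absolute error on held-out samples:", err.max())
```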

  • Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier

    Hyun KWON  Changhyun CHO  Jun LEE  

     
    PAPER
    Publicized: 2022/08/23
    Vol: E105-D No:11
    Page(s): 1880-1889

    Deep neural networks (DNNs) provide excellent service in machine learning tasks such as image recognition, speech recognition, pattern recognition, and intrusion detection. However, an adversarial example, created by adding a little noise to an original data sample, can cause misclassification by a DNN even though the human eye cannot tell the difference from the original. For example, if an attacker creates a modified right-turn traffic sign that is incorrectly categorized by a DNN, an autonomous vehicle equipped with the DNN will incorrectly classify it as a U-turn sign, while a human will correctly classify it as a right-turn sign. Such adversarial examples are a serious threat to DNNs. Recently, an adversarial example with multiple targets was introduced that causes misclassification by multiple models, each toward its own target class, using a single modified image. However, that approach has the weakness that the overall attack success rate decreases as the number of target models increases. Therefore, if there are multiple models that the attacker wishes to attack, the attacker must control the attack success rate for each model by considering each model's attack priority. In this paper, we propose a priority adversarial example that considers the attack priority of each model when targeting multiple models. The proposed method controls the attack success rate for each model by adjusting the weight of the attack function in the generation process while maintaining minimal distortion. We used MNIST and CIFAR10 as datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method can control the attack success rate for each model by considering each model's attack priority while maintaining minimal distortion (on average 3.95 and 2.45 with MNIST for targeted and untargeted attacks, respectively, and on average 51.95 and 44.45 with CIFAR10 for targeted and untargeted attacks, respectively).
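
    The priority idea reduces to a weighted multi-model objective; the sketch below shows only that weighting, under the assumption of targeted cross-entropy losses, and omits the paper's distortion constraint.

```python
import torch
import torch.nn.functional as F

def priority_attack_loss(x_adv, target_labels, models, weights):
    """Weighted multi-model attack objective: minimizing this drives each
    model toward its own target class, and a larger weight raises that
    model's attack success rate at the expense of the others. A sketch of
    the priority weighting only."""
    return sum(w * F.cross_entropy(m(x_adv), t)
               for m, t, w in zip(models, target_labels, weights))
```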

  • Frank-Wolfe for Sign-Constrained Support Vector Machines

    Kenya TAJIMA  Takahiko HENMI  Tsuyoshi KATO  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2022/06/27
    Vol: E105-D No:10
    Page(s): 1734-1742

    Domain knowledge is useful for improving the generalization performance of learning machines, and sign constraints are a handy representation for combining domain knowledge with a learning machine. In this paper, we consider constraining the signs of the weight coefficients when learning a linear support vector machine, and we develop an optimization algorithm for minimizing the empirical risk under the sign constraints. The algorithm is based on the Frank-Wolfe method, which converges sublinearly and possesses a clear termination criterion. We show that each Frank-Wolfe iteration requires only O(nd + d²) computation. Furthermore, we derive an explicit expression for the minimal number of iterations needed to ensure an ε-accurate solution by analyzing the curvature of the objective function. Finally, we empirically demonstrate that sign constraints are a promising technique when similarities to the training examples compose the feature vector.
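
    A minimal Frank-Wolfe sketch under sign constraints is given below; the squared hinge loss, the box radius R, and the step-size rule are assumptions for this sketch, not the paper's exact setting.

```python
import numpy as np

def frank_wolfe_sign_svm(X, y, signs, R=10.0, iters=500):
    """Frank-Wolfe sketch for a sign-constrained linear SVM with squared
    hinge loss. Assumed feasible set: |w_i| <= R, with w_i >= 0 where
    signs[i] = +1 and w_i <= 0 where signs[i] = -1. The linear minimization
    oracle over this box is closed-form, which keeps each iteration cheap."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(iters):
        margin = np.maximum(1.0 - y * (X @ w), 0.0)
        grad = -2.0 * X.T @ (y * margin) / n        # squared-hinge gradient
        s = np.where(grad < 0, R, -R)               # best corner of the box
        s = np.where(signs > 0, np.maximum(s, 0), np.minimum(s, 0))
        w += 2.0 / (t + 2.0) * (s - w)              # standard FW step size
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, 2.0, -1.0, 0.5, -0.3])
y = np.sign(X @ w_true)
print(frank_wolfe_sign_svm(X, y, signs=np.sign(w_true)))
```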

Showing results 41-60 of 837