
Keyword Search Result

[Keyword] Al (20498 hits)

Showing 101-120 of 20498 hits

  • MuSRGM: A Genetic Algorithm-Based Dynamic Combinatorial Deep Learning Model for Software Reliability Engineering Open Access

    Ning FU  Duksan RYU  Suntae KIM  

     
    PAPER-Software Engineering

    Publicized: 2024/02/06  Vol: E107-D No:6  Page(s): 761-771

    In the software testing phase, software reliability growth models (SRGMs) are commonly used to evaluate the reliability of software systems. Traditional SRGMs are restricted by their assumption of a continuous growth pattern for the failure detection rate (FDR) throughout the testing phase. However, this assumption is undermined by change-point phenomena, in which FDR fluctuations stem from variations in testing personnel or procedural modifications, leading to reduced prediction accuracy and compromised software reliability assessments. The objective of this study is therefore to improve software reliability prediction with a novel approach that combines a genetic algorithm (GA) and deep learning-based SRGMs to account for the change-point phenomenon. The proposed approach uses a GA to dynamically combine activation functions from various deep learning-based SRGMs into a new mutated SRGM called MuSRGM. MuSRGM inherits the advantages of both concave and S-shaped SRGMs, is better suited to capturing the change-point phenomenon during testing, and more accurately reflects actual testing situations. Additionally, failure data are treated as a time series and analyzed using a combination of Long Short-Term Memory (LSTM) and attention mechanisms. To assess the performance of MuSRGM, we conducted experiments on three distinct failure datasets. The results indicate that MuSRGM outperformed the baseline method, exhibiting low prediction error (MSE) on all three datasets. Furthermore, MuSRGM demonstrated remarkable generalization ability on these datasets, remaining unaffected by uneven data distribution. MuSRGM therefore represents a highly promising solution that can provide increased accuracy and applicability for software reliability assessment during the testing phase.
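
    The abstract treats the failure history as a time series fed to an LSTM with an attention mechanism. The sketch below, which is not the authors' code, shows one minimal way such a predictor could be wired up in PyTorch; the layer sizes, window length, and single-feature input are assumptions, and the GA-based combination of activation functions is not reproduced.

```python
# Minimal sketch (assumptions throughout): an LSTM with a simple attention layer
# over past cumulative failure counts, as the abstract describes.
import torch
import torch.nn as nn

class FailureLSTMAttention(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.attn = nn.Linear(hidden_size, 1)   # scores each time step
        self.out = nn.Linear(hidden_size, 1)    # predicts the next cumulative count

    def forward(self, x):                        # x: (batch, window, 1)
        h, _ = self.lstm(x)                      # (batch, window, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time steps
        context = (w * h).sum(dim=1)             # weighted summary of the window
        return self.out(context)

# Example: predict the next cumulative failure count from a window of 10 intervals.
model = FailureLSTMAttention()
window = torch.rand(4, 10, 1)                    # 4 dummy sequences
print(model(window).shape)                       # torch.Size([4, 1])
```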

  • Dataset of Functionally Equivalent Java Methods and Its Application to Evaluating Clone Detection Tools Open Access

    Yoshiki HIGO  

     
    PAPER-Software System

    Publicized: 2024/02/21  Vol: E107-D No:6  Page(s): 751-760

    Modern high-level programming languages have a wide variety of grammar and can implement the required functionality in different ways. The authors believe that a large amount of code that implements the same functionality in different ways exists even in open source software, where the source code is publicly available, and that by collecting such code a useful dataset can be constructed for various studies in software engineering. In this study, we construct a dataset of pairs of Java methods that have the same functionality but different structures from approximately 314 million lines of source code. To construct this dataset, the authors used an automated test generation technique, EvoSuite. Test cases generated by automated test generation techniques have the property that they always succeed on the method they were generated from. Using this property, test cases generated from two methods were executed against each other to automatically determine whether the behavior of the two methods is the same to some extent. Pairs of methods for which all cross-executed test cases succeeded were then manually investigated to confirm that they are functionally equivalent. This paper also reports the results of an accuracy evaluation of code clone detection tools using the constructed dataset. The purpose of this evaluation is to assess how accurately code clone detection tools can find the functionally equivalent methods, not to assess the accuracy of detecting ordinary clones. The constructed dataset is available on GitHub (https://github.com/YoshikiHigo/FEMPDataset).

  • Simulation of Scalar-Mode Optically Pumped Magnetometers to Search Optimal Operating Conditions Open Access

    Yosuke ITO  Tatsuya GOTO  Takuma HORI  

     
    INVITED PAPER

    Publicized: 2023/12/04  Vol: E107-C No:6  Page(s): 164-170

    In recent years, measuring biomagnetic fields in the Earth's field by differential measurements of scalar-mode optically pumped magnetometers (OPMs) has been actively attempted. In this study, the sensitivity of scalar-mode OPMs under the geomagnetic environment in the laboratory was studied by numerical simulation. Although the noise level of a scalar-mode OPM in the laboratory environment was calculated to be 104 pT/$\sqrt{\mathrm{Hz}}$, the noise levels using the first-order and second-order differential configurations were found to be 529 fT/cm/$\sqrt{\mathrm{Hz}}$ and 17.2 fT/cm²/$\sqrt{\mathrm{Hz}}$, respectively. This result indicates that scalar-mode OPMs can measure very weak magnetic fields such as MEG without high-performance magnetically shielded rooms. We also studied the operating conditions by varying the repetition frequency and temperature. We found that scalar-mode OPMs have upper limits on the repetition frequency and temperature, and that the repetition frequency should be set below 4 kHz and the temperature below 120°C.

  • A 0.13 mJ/Prediction CIFAR-100 Fully Synthesizable Raster-Scan-Based Wired-Logic Processor in 16-nm FPGA Open Access

    Dongzhu LI  Zhijie ZHAN  Rei SUMIKAWA  Mototsugu HAMADA  Atsutake KOSUGE  Tadahiro KURODA  

     
    PAPER

    Publicized: 2023/11/24  Vol: E107-C No:6  Page(s): 155-162

    A 0.13 mJ/prediction, 68.6%-accuracy wired-logic deep neural network (DNN) processor is developed on a single 16-nm field-programmable gate array (FPGA) chip. Compared with conventional von Neumann architecture DNN processors, the energy efficiency is greatly improved by eliminating DRAM/BRAM access. A technical challenge for conventional wired-logic processors is the large amount of hardware resources required to implement large-scale neural networks. To implement a large-scale convolutional neural network (CNN) on a single FPGA chip, two technologies are introduced: (1) a sparse neural network known as a non-linear neural network (NNN), and (2) a newly developed raster-scan wired-logic architecture. Furthermore, a novel high-level synthesis (HLS) technique for wired-logic processors is proposed. The proposed HLS technique enables the automatic generation of two key components: (1) Verilog hardware description language (HDL) code for a raster-scan-based wired-logic processor and (2) test bench code for conducting equivalence checking. The automated process significantly reduces the time and effort required for implementation and debugging. Compared with a state-of-the-art FPGA-based processor, 238 times better energy efficiency is achieved with only a slight decrease in accuracy on the CIFAR-100 task. In addition, 7 times better energy efficiency is achieved compared with a state-of-the-art network-optimized application-specific integrated circuit (ASIC).

  • A Novel Remote-Tracking Heart Rate Measurement Method Based on Stepping Motor and mm-Wave FMCW Radar Open Access

    Yaokun HU  Xuanyu PENG  Takeshi TODA  

     
    PAPER-Sensing

    Vol: E107-B No:6  Page(s): 470-486

    The subject must remain motionless for conventional radar-based non-contact vital-sign measurements. Additionally, the measurement range is limited by the design of the radar module itself. Although measurement accuracy has been improving, practical applications have been slow to develop. This paper proposes a novel radar-based adaptive tracking method for measuring the heart rate of a moving monitored person. The radar module is fixed on a circular plate that is rotated by stepping motors. In order to protect the user's privacy, the method uses radar signal processing to detect the subject's position and controls a stepping motor that adjusts the radar's measurement range. The results of the fixed-route experiments revealed that when the subject was moving at a speed of 0.5 m/s, the mean values of RMSE for heart rate measurements were all below 2.85 beats per minute (bpm), and when moving at a speed of 1 m/s, they were all below 4.05 bpm. When subjects walked along random routes at random speeds, the RMSE of the measurements was below 6.85 bpm, with a mean value of 4.35 bpm. The average RR-interval time of the reconstructed heartbeat signal was highly correlated with the electrocardiography (ECG) data, with a correlation coefficient of 0.9905. In addition, this study not only evaluated the potential effect of arm swing (a more natural walking motion) on heart rate measurement but also demonstrated the ability of the proposed method to measure heart rate in a multi-person scenario.
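
    The tracking loop described above (detect the subject's position from the radar signal, then command a stepping motor so the radar keeps pointing at the subject) can be illustrated with the following sketch. It is an assumption-laden illustration, not the paper's implementation: the step angle, the placeholder peak search over an angle spectrum, and the function names are hypothetical.

```python
# Minimal sketch (assumptions throughout): steer a radar mounted on a motor-driven
# plate toward the azimuth at which the subject is detected.
import numpy as np

STEP_ANGLE_DEG = 1.8   # full-step angle of a typical stepping motor (assumed value)

def detect_azimuth_deg(angle_spectrum: np.ndarray, angles_deg: np.ndarray) -> float:
    """Pick the angle bin with the strongest reflected power (placeholder for the
    radar-signal-processing-based position detection described in the abstract)."""
    power = np.abs(angle_spectrum) ** 2
    return float(angles_deg[int(np.argmax(power))])

def steps_to_target(current_deg: float, target_deg: float) -> int:
    """Signed number of motor steps needed to point the plate at the target angle."""
    return int(round((target_deg - current_deg) / STEP_ANGLE_DEG))

# Dummy usage: a fake angle spectrum over a +/-60 degree field of view.
angles = np.linspace(-60, 60, 121)
rng = np.random.default_rng(0)
spectrum = rng.standard_normal(121) + 1j * rng.standard_normal(121)
target = detect_azimuth_deg(spectrum, angles)
print(steps_to_target(current_deg=0.0, target_deg=target))
```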

  • LSTM Neural Network Algorithm for Handover Improvement in a Non-Ideal Network Using O-RAN Near-RT RIC Open Access

    Baud Haryo PRANANTO   ISKANDAR   HENDRAWAN  Adit KURNIAWAN  

     
    PAPER-Network Management/Operation

    Vol: E107-B No:6  Page(s): 458-469

    Handover is an important feature of cellular communication that enables the user to move from one cell to another without losing the connection. It is a crucial process for the quality of the user's experience because it may interrupt data transmission. Therefore, good handover management is very important in current and future cellular systems. Several techniques have been employed to improve handover performance, usually to increase the probability of a successful handover. One such technique is predictive handover, which predicts the target cell using methods other than the traditional measurement-based algorithm, including machine learning. Several studies have implemented predictive handover, most of them by modifying the internal algorithm of existing network elements such as the base station. We implemented a predictive handover algorithm using an intelligent node outside the existing network elements to minimize modification of the network and to create modularity in the system. Using the recently standardized Open Radio Access Network (O-RAN) Near Real-Time RAN Intelligent Controller (Near-RT RIC), we created a modular application that can improve handover performance by determining the target cell using machine learning techniques. In our previous research, we modified the original Near-RT RIC software to determine the target cell by predicting the throughput of each neighboring cell with vector autoregression. We also modified the method to use a Multi-Layer Perceptron (MLP) neural network. In this paper, we redesign the neural network using Long Short-Term Memory (LSTM), which can better handle time series data. We show that our proposed LSTM-based machine learning algorithm used in the Near-RT RIC can improve handover performance compared to the traditional measurement-based algorithm.
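
    A minimal sketch of the prediction step described above follows: an LSTM that maps a window of past per-cell throughput samples to a predicted next sample for each neighbouring cell, with the handover target taken as the cell with the highest prediction. This is not the authors' xApp; the network size, window length, and number of cells are assumptions.

```python
# Minimal sketch (assumptions throughout): LSTM-based per-cell throughput prediction
# used to pick a handover target, in the spirit of the abstract.
import torch
import torch.nn as nn

class ThroughputLSTM(nn.Module):
    def __init__(self, n_cells=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_cells, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_cells)   # one prediction per neighbouring cell

    def forward(self, history):                  # history: (batch, time, n_cells)
        out, _ = self.lstm(history)
        return self.head(out[:, -1, :])          # predicted next throughput per cell

model = ThroughputLSTM(n_cells=3)
history = torch.rand(1, 20, 3)                   # 20 past throughput samples, 3 cells
predicted = model(history)
target_cell = int(predicted.argmax(dim=1))       # choose the best predicted cell
print(target_cell)
```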

  • Federated Deep Reinforcement Learning for Multimedia Task Offloading and Resource Allocation in MEC Networks Open Access

    Rongqi ZHANG  Chunyun PAN  Yafei WANG  Yuanyuan YAO  Xuehua LI  

     
    PAPER-Network

    Vol: E107-B No:6  Page(s): 446-457

    With the maturation of 5G technology in recent years, multimedia services such as live video streaming and online gaming have flourished. These multimedia services frequently require low latency, which poses a significant challenge for executing multimedia tasks with stringent latency requirements. Mobile edge computing (MEC) is considered a key technology for addressing this challenge. It offloads computation-intensive tasks to edge servers placed close to mobile nodes, which reduces task execution latency and relieves the computing pressure on multimedia devices. To use the MEC paradigm reasonably and efficiently, resource allocation has become a new challenge. In this paper, we focus on multimedia tasks that need to be uploaded and processed in the network. We formulate an optimization problem with the goal of minimizing the latency and energy consumption required to perform tasks on multimedia devices. To solve this complex and non-convex problem, we cast it as a distributed deep reinforcement learning (DRL) problem and propose a federated Dueling deep Q-network (DDQN) based multimedia task offloading and resource allocation algorithm (FDRL-DDQN). In the algorithm, DRL is trained on the local device, while federated learning (FL) is responsible for aggregating and updating the parameters from the trained local models. Further, to address the non-independent and identically distributed (non-IID) data problem of multimedia devices, we develop a method for selecting participating federated devices. The simulation results show that the FDRL-DDQN algorithm can reduce the total cost by 31.3% compared to the DQN algorithm when the task data size is 1000 kbit, and by up to 35.3% compared to the traditional baseline algorithm.
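
    The FL side of the algorithm aggregates parameters from locally trained models. A minimal FedAvg-style sketch of that aggregation step is shown below; the weighting by local sample counts is an assumption, and the paper's device-selection rule for non-IID data is not reproduced.

```python
# Minimal sketch (assumptions throughout) of federated parameter aggregation.
import copy
import torch

def federated_average(state_dicts, sample_counts):
    """FedAvg-style weighted average of parameters from participating devices."""
    total = float(sum(sample_counts))
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key].float() * (n / total)
                       for sd, n in zip(state_dicts, sample_counts))
    return avg

# Tiny example with two "devices" each holding a single-layer model.
local_models = [torch.nn.Linear(4, 2), torch.nn.Linear(4, 2)]
global_state = federated_average([m.state_dict() for m in local_models],
                                 sample_counts=[300, 700])
print({k: v.shape for k, v in global_state.items()})
```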

  • Physical Layer Security Enhancement for mmWave System with Multiple RISs and Imperfect CSI Open Access

    Qingqing TU  Zheng DONG  Xianbing ZOU  Ning WEI  

     
    PAPER-Fundamental Theories for Communications

    Vol: E107-B No:6  Page(s): 430-445

    Despite the appealing advantages of reconfigurable intelligent surface (RIS) aided mmWave communications, practical issues remain that must be addressed before the large-scale deployment of RISs in future wireless networks. In this study, we jointly consider non-negligible practical issues in a multi-RIS-aided mmWave system that can significantly affect the secrecy performance, including high computational complexity, imperfect channel state information (CSI), and the finite resolution of phase shifters. To solve this challenging non-convex stochastic optimization problem, we propose a robust and low-complexity algorithm to maximize the achievable secrecy rate. Specifically, by combining the benefits of fractional programming and stochastic successive convex approximation techniques, we transform the joint optimization problem into a set of convex subproblems and solve them sub-optimally. The theoretical analysis and simulation results demonstrate that the proposed algorithm can mitigate the joint negative effects of the practical issues and yields a tradeoff between secrecy performance and complexity/overhead that outperforms non-robust benchmarks, which increases the robustness and flexibility of multi-RIS deployments in future wireless networks.
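
    For reference, the achievable secrecy rate that such designs maximize is conventionally defined as $R_{\mathrm{sec}} = \left[\log_2(1+\gamma_{\mathrm{B}}) - \log_2(1+\gamma_{\mathrm{E}})\right]^{+}$, where $\gamma_{\mathrm{B}}$ and $\gamma_{\mathrm{E}}$ are the received SINRs at the legitimate receiver and the eavesdropper and $[x]^{+} = \max(x, 0)$. This is the standard textbook definition rather than the paper's exact objective, which additionally accounts for imperfect CSI and finite-resolution phase shifts.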

  • An Adaptively Biased OFDM Based on Hartley Transform for Visible Light Communication Systems Open Access

    Menglong WU  Yongfa XIE  Yongchao SHI  Jianwen ZHANG  Tianao YAO  Wenkai LIU  

     
    LETTER-Communication Theory and Signals

    Publicized: 2023/09/20  Vol: E107-A No:6  Page(s): 928-931

    Direct-current biased optical orthogonal frequency division multiplexing (DCO-OFDM) converts bipolar OFDM signals into unipolar non-negative signals by introducing a high DC bias, which satisfies the requirement that signals transmitted by intensity modulation/direct detection (IM/DD) must be positive. However, the high DC bias results in low power efficiency for DCO-OFDM. In this letter, an adaptively biased optical OFDM is proposed in which the bias is designed according to the signal amplitude to improve power efficiency. The adaptive bias does not need to be removed explicitly at the receiver, and the interference caused by the adaptive bias falls only on the reserved subcarriers, so it does not affect the effective information. Moreover, the proposed OFDM uses the Hartley transform instead of the Fourier transform used in conventional optical OFDM, which gives it low computational complexity and high spectral efficiency. The simulation results show that the normalized optical bit energy to noise power ratio (Eb(opt)/N0) required by the proposed OFDM at a bit error rate (BER) of $10^{-3}$ is, on average, 7.5 dB and 3.4 dB lower than that of DCO-OFDM and superimposed asymmetrically clipped optical OFDM (ACO-OFDM), respectively.
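
    The Hartley transform mentioned above can be computed from a standard FFT, which is one reason for its low complexity: for a real signal, the discrete Hartley transform (DHT) is $H_k = \mathrm{Re}(X_k) - \mathrm{Im}(X_k)$, where $X_k$ is the DFT. The sketch below illustrates only this transform pair; the OFDM framing, adaptive bias, and reserved-subcarrier handling of the letter are not shown.

```python
# Minimal sketch: the discrete Hartley transform (DHT) and its inverse via the FFT.
import numpy as np

def dht(x: np.ndarray) -> np.ndarray:
    """Real-valued discrete Hartley transform of a real signal."""
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(H: np.ndarray) -> np.ndarray:
    """The DHT is (up to a 1/N factor) its own inverse."""
    return dht(H) / len(H)

x = np.random.randn(16)
print(np.allclose(idht(dht(x)), x))   # True
```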

  • Secrecy Outage Probability and Secrecy Diversity Order of Alamouti STBC with Decision Feedback Detection over Time-Selective Fading Channels Open Access

    Gyulim KIM  Hoojin LEE  Xinrong LI  Seong Ho CHAE  

     
    LETTER-Communication Theory and Signals

    Publicized: 2023/09/19  Vol: E107-A No:6  Page(s): 923-927

    This letter studies the secrecy outage probability (SOP) and the secrecy diversity order of Alamouti STBC with decision feedback (DF) detection over time-selective fading channels. For given temporal correlations, we derive the exact SOPs and their asymptotic approximations for all possible combinations of detection schemes, including joint maximum likelihood (JML), zero-forcing (ZF), and DF at Bob and Eve. We reveal that the SOP is mainly influenced by the detection scheme of the legitimate receiver rather than that of the eavesdropper, and that the achievable secrecy diversity order converges to two for JML at Bob (i.e., JML-JML/ZF/DF) and to one for the other cases (i.e., ZF-JML/ZF/DF, DF-JML/ZF/DF). Here, a p-q combination pair indicates that Bob and Eve adopt the detection methods p ∈ {JML, ZF, DF} and q ∈ {JML, ZF, DF}, respectively.

  • Dynamic Limited Variable Step-Size Algorithm Based on the MSD Variation Cost Function Open Access

    Yufei HAN  Jiaye XIE  Yibo LI  

     
    LETTER-Digital Signal Processing

    Publicized: 2023/09/11  Vol: E107-A No:6  Page(s): 919-922

    The steady-state and convergence performances are important indicators for evaluating adaptive algorithms, and the step-size directly affects both. Many researchers have proposed variable step-size adaptive algorithms to improve performance. However, existing variable step-size adaptive algorithms still suffer from problems such as insufficient theoretical analysis, imbalanced performance, and unachievable parameters, which greatly affect their practical performance. Therefore, we further explore the inherent relationship between the key performance indicators and the step-size in this paper. The variation of the mean square deviation (MSD) is adopted as the cost function. Based on theoretical analyses and derivations, a novel variable step-size algorithm with a dynamic limited function (DLF) is proposed. A thorough theoretical analysis of the weight deviation and the convergence stability is also conducted. The proposed algorithm is tested against several typical algorithms in many different environments. Both the theoretical analysis and the experimental results verify that the proposed algorithm achieves superior performance.
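
    To make the role of the step-size concrete, the sketch below implements a generic variable step-size (normalised) LMS filter in which the step-size grows with the error energy and is clipped to a fixed range. It illustrates the general idea only; the dynamic limited function (DLF) and the MSD-variation cost function proposed in the letter are not reproduced, and all parameter values are assumptions.

```python
# Minimal sketch (assumptions throughout): a generic variable step-size NLMS filter.
import numpy as np

def vss_lms(x, d, taps=8, mu_min=0.01, mu_max=0.5, alpha=0.97, gamma=0.01):
    w = np.zeros(taps)
    mu = mu_max
    y = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # most recent input vector
        y[n] = w @ u
        e = d[n] - y[n]
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)  # step-size update
        w += mu * e * u / (u @ u + 1e-8)          # normalised LMS weight update
    return w, y

# Example: identify an unknown 8-tap FIR system from noisy observations (w converges
# toward h as the error shrinks).
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = rng.standard_normal(8)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = vss_lms(x, d)
```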

  • FA-YOLO: A High-Precision and Efficient Method for Fabric Defect Detection in Textile Industry Open Access

    Kai YU  Wentao LYU  Xuyi YU  Qing GUO  Weiqiang XU  Lu ZHANG  

     
    PAPER-Neural Networks and Bioengineering

    Publicized: 2023/09/04  Vol: E107-A No:6  Page(s): 890-898

    Automatic defect detection in fabric images is an essential task in the textile industry. However, there are some inherent difficulties in the detection of fabric images, such as the complexity of the background and the highly uneven scales of defects. Moreover, the trade-off between accuracy and speed must be considered in real applications. To address these problems, we propose a novel model based on YOLOv4 to detect defects in fabric images, called Feature Augmentation YOLO (FA-YOLO). In terms of network structure, FA-YOLO adds an additional detection head to improve the detection of small defects and builds a powerful Neck structure to enhance feature fusion. First, to reduce information loss during feature fusion, we perform residual feature augmentation (RFA) on the features after dimensionality reduction by 1×1 convolution. Afterward, the SimAM attention module is embedded at locations with rich features to improve adaptation to complex backgrounds. Adaptive spatial feature fusion (ASFF) is also applied to the output of the Neck to filter inconsistencies across layers. Finally, the cross-stage partial (CSP) structure is introduced for optimization. Experimental results on three real industrial datasets, including the Tianchi fabric dataset (72.5% mAP), the ZJU-Leaper fabric dataset (average F1-score of 0.714), and the NEU-DET steel dataset (77.2% mAP), demonstrate that the proposed FA-YOLO achieves competitive results compared with other state-of-the-art (SoTA) methods.
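
    SimAM is a parameter-free attention module with a closed-form weighting. The sketch below follows its commonly used formulation (an inverse-energy term passed through a sigmoid); the regularisation constant and the exact placement inside FA-YOLO are assumptions for illustration.

```python
# Minimal sketch of the SimAM attention module in its commonly used closed form.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):                              # x: (B, C, H, W)
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n        # per-channel variance estimate
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5    # inverse energy per neuron
        return x * torch.sigmoid(e_inv)                # reweight features

feat = torch.rand(1, 64, 40, 40)
print(SimAM()(feat).shape)                             # torch.Size([1, 64, 40, 40])
```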

  • Operational Resilience of Network Considering Common-Cause Failures Open Access

    Tetsushi YUGE  Yasumasa SAGAWA  Natsumi TAKAHASHI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

    Publicized: 2023/09/11  Vol: E107-A No:6  Page(s): 855-863

    This paper discusses the resilience of networks based on graph theory and stochastic processes. The target network is an electric power network in which edges may fail simultaneously and whose performance is measured by the ratio of connected nodes. For restoration, under the constraint that resources are limited, failed edges are repaired one by one, and the repair order for several failed edges is determined by giving priority to the edge whose repair yields the largest increase in system performance. Two types of resilience are discussed: resilience in the recovery stage, following the conventional definition of resilience, and steady-state operational resilience, which considers long-term operation in which the network state changes stochastically. The latter represents a comprehensive resilience capacity of a system and is derived analytically by Markov analysis. In the analysis, we assume that a large-scale disruption occurs due to the simultaneous failure of edges caused by common-cause failures. A Marshall-Olkin type shock model and the α-factor method are incorporated to model the common-cause failures. Two resilience measures, "operational resilience" and "operational resilience in recovery stage", are then proposed. We also propose approximation methods to obtain these two operational resilience measures for complex networks.
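
    For orientation, one widely used formulation of recovery-stage resilience, to which the conventional definition mentioned above belongs, is the time-averaged performance over the recovery horizon, $R_{\mathrm{recovery}} = \frac{1}{t_r - t_0}\int_{t_0}^{t_r} Q(t)\,\mathrm{d}t$, where $Q(t)$ is the ratio of connected nodes, $t_0$ the time of disruption, and $t_r$ the completion of recovery. The authors' exact normalization and their steady-state operational measure may differ from this illustrative form.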

  • Fresh Tea Sprouts Segmentation via Capsule Network Open Access

    Chunhua QIAN  Xiaoyan QIN  Hequn QIANG  Changyou QIN  Minyang LI  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2024/01/17  Vol: E107-D No:5  Page(s): 728-731

    The segmentation performance for fresh tea sprouts is inadequate due to their uncontrollable posture. A novel method for Fresh Tea Sprouts Segmentation based on a Capsule Network (FTS-SegCaps) is proposed in this paper. The spatial relationship between local parts and the whole tea sprout is retained and effectively utilized by a deep encoder-decoder capsule network, which reduces the effect of tea sprouts with uncontrollable posture. Meanwhile, a patch-based local dynamic routing algorithm is also proposed to solve the parameter explosion problem. The experimental results indicate that tea sprouts segmented by FTS-SegCaps almost coincide with the ground truth, and also show that the proposed method outperforms state-of-the-art methods.

  • Investigating the Efficacy of Partial Decomposition in Kit-Build Concept Maps for Reducing Cognitive Load and Enhancing Reading Comprehension Open Access

    Nawras KHUDHUR  Aryo PINANDITO  Yusuke HAYASHI  Tsukasa HIRASHIMA  

     
    PAPER-Educational Technology

    Publicized: 2024/01/11  Vol: E107-D No:5  Page(s): 714-727

    This study investigates the efficacy of a partial decomposition approach in concept map recomposition tasks to reduce cognitive load while maintaining the benefits of traditional recomposition approaches. Prior research has demonstrated that concept map recomposition, involving the rearrangement of unconnected concepts and links, can enhance reading comprehension. However, this task often imposes a significant burden on learners' working memory. To address this challenge, this study proposes a partial recomposition approach in which learners recompose only a portion of the concept map, thereby reducing the problem space. The proposed approach aims to lower the cognitive load while maintaining the benefits of the traditional recomposition task, namely the learning effect and motivation. To investigate the differences in cognitive load, learning effect, and motivation between full decomposition (the traditional approach) and partial decomposition (the proposed approach), we conducted an experiment (N=78) in which the participants were divided into two groups, "full decomposition" and "partial decomposition". The full decomposition group was assigned the task of recomposing a concept map from a set of unconnected concept nodes and links, while the partial decomposition group worked with partially connected nodes and links. The experimental results show a significant reduction in the cognitive load imposed by concept map recomposition across different dimensions, while the learning effect and motivation remained similar between the conditions. On the basis of these findings, educators are recommended to incorporate partially disconnected concept maps in recomposition tasks to optimize time management and sustain learner motivation. By implementing this approach, instructors can conserve learners' cognitive resources and allocate the saved energy and time to other activities that enhance the overall learning process.

  • Weighted Generalized Hesitant Fuzzy Sets and Its Application in Ensemble Learning Open Access

    Haijun ZHOU  Weixiang LI  Ming CHENG  Yuan SUN  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2024/01/22  Vol: E107-D No:5  Page(s): 694-703

    Traditional intuitionistic fuzzy sets and hesitant fuzzy sets lose some information when representing vague information. To avoid this problem, this paper constructs weighted generalized hesitant fuzzy sets by retaining multiple intuitionistic fuzzy values and assigning them corresponding weights. For the weighted generalized hesitant fuzzy elements in weighted generalized hesitant fuzzy sets, the paper defines some basic operations and proves their properties. On this basis, the paper gives comparison rules for weighted generalized hesitant fuzzy elements and presents two kinds of aggregation operators. For the weighted generalized hesitant fuzzy preference relation, this paper proposes its definition and a method for computing its corresponding consistency index. Furthermore, the paper designs an ensemble learning algorithm based on weighted generalized hesitant fuzzy sets, carries out experiments on six datasets from the UCI database, and compares it with various classification algorithms. The experiments show that the ensemble learning algorithm based on weighted generalized hesitant fuzzy sets performs better across all indicators.

  • Multi-Dimensional Fused Gromov Wasserstein Discrepancy for Edge-Attributed Graphs Open Access

    Keisuke KAWANO  Satoshi KOIDE  Hiroaki SHIOKAWA  Toshiyuki AMAGASA  

     
    PAPER

    Publicized: 2024/01/12  Vol: E107-D No:5  Page(s): 683-693

    Graph dissimilarities provide a powerful and ubiquitous approach for applying machine learning algorithms to edge-attributed graphs. However, conventional optimal transport-based dissimilarities cannot handle edge attributes. In this paper, we propose an optimal transport-based dissimilarity between graphs with edge attributes. The proposed method, multi-dimensional fused Gromov-Wasserstein discrepancy (MFGW), naturally incorporates the mismatch of edge attributes into optimal transport theory. Unlike conventional optimal transport-based dissimilarities, MFGW can directly handle edge attributes in addition to the structural information of graphs. Furthermore, we propose an iterative algorithm, which can be computed on GPUs, to solve the non-convex quadratic programming problems involved in MFGW. Experimentally, we demonstrate that MFGW outperforms conventional optimal transport-based dissimilarities in several machine learning applications, including supervised classification, subgraph matching, and graph barycenter calculation.
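
    For reference, the scalar-structure fused Gromov-Wasserstein objective that MFGW generalizes to multi-dimensional edge attributes can be written as $\mathrm{FGW}_{q,\alpha}(\mu,\nu) = \min_{T\in\Pi(\mu,\nu)} \sum_{i,j,k,l}\left[(1-\alpha)\,d(a_i,b_j)^{q} + \alpha\,\lvert C_1(i,k)-C_2(j,l)\rvert^{q}\right] T_{ij}T_{kl}$, where $d(a_i,b_j)$ compares node attributes, $C_1$ and $C_2$ encode intra-graph structure, and $T$ is a coupling between the two node distributions. How MFGW extends the structure term to edge-attribute vectors follows the paper and is not reproduced here.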

  • Automated Labeling of Entities in CVE Vulnerability Descriptions with Natural Language Processing Open Access

    Kensuke SUMOTO  Kenta KANAKOGI  Hironori WASHIZAKI  Naohiko TSUDA  Nobukazu YOSHIOKA  Yoshiaki FUKAZAWA  Hideyuki KANUKA  

     
    PAPER

    Publicized: 2024/02/09  Vol: E107-D No:5  Page(s): 674-682

    Security-related issues have become more significant due to the proliferation of IT. Collating security-related information in a database improves security. For example, Common Vulnerabilities and Exposures (CVE) is a security knowledge repository containing descriptions of vulnerabilities in software or source code. Although the descriptions include various entities, there is no uniform entity structure, which makes security analysis using individual entities difficult. Developing a consistent entity structure will benefit the security field. Herein we propose a method to automatically label selected entities in CVE descriptions by applying the Named Entity Recognition (NER) technique. We manually labeled 3287 CVE descriptions and conducted experiments using a machine learning model called BERT to compare the proposed method with labeling based on regular expressions. Machine learning using the proposed method significantly improves labeling accuracy, achieving an F1 score of about 0.93, a precision of about 0.91, and a recall of about 0.95, demonstrating that our method has the potential to automatically label selected entities in CVE descriptions.
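
    As an illustration of the NER machinery involved (not the authors' fine-tuned model or CVE-specific label set), the sketch below runs a publicly available BERT-based token-classification model over a CVE-style sentence with the Hugging Face Transformers pipeline; the model name and the example description are placeholders.

```python
# Minimal sketch: BERT-based token classification (NER) via the Transformers pipeline.
from transformers import pipeline

# "dslim/bert-base-NER" is a general-domain NER model used here only to show the
# pipeline; the paper fine-tunes BERT on 3287 manually labeled CVE descriptions
# with its own CVE-specific label set.
ner = pipeline("token-classification", model="dslim/bert-base-NER",
               aggregation_strategy="simple")

description = ("Buffer overflow in the URL parser of ExampleServer 1.2 allows "
               "remote attackers to execute arbitrary code.")  # hypothetical text
for entity in ner(description):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 2))
```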

  • A Personalised Session-Based Recommender System with Sequential Updating Based on Aggregation of Item Embeddings Open Access

    Yuma NAGI  Kazushi OKAMOTO  

     
    PAPER

    Publicized: 2024/01/09  Vol: E107-D No:5  Page(s): 638-649

    This study proposes a personalised session-based recommender system that embeds items using Word2Vec and sequentially updates the session and user embeddings via hierarchicalization and aggregation of item embeddings. To process a recommendation request, the system constructs a real-time user embedding that considers the user's general preferences and sequential behaviour, handling short-term changes in user preferences with a low computational cost. System performance was experimentally evaluated in terms of the accuracy, diversity, and novelty of the ranking of recommended items and the training and prediction times of the system for three different datasets. The results of these evaluations were then compared with those of five baseline systems. According to the evaluation experiment, the proposed system achieved relatively high recommendation accuracy compared with the baseline systems, and its diversity and novelty scores did not fall below 90% for any dataset. Furthermore, the training times of the Word2Vec-based systems, including the proposed system, were shorter than those of FPMC and GRU4Rec. The evaluation results suggest that the proposed recommender system succeeds in keeping the computational cost of training low while maintaining high recommendation accuracy, diversity, and novelty.
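
    The item-embedding idea can be illustrated with a small gensim sketch: items act as "words" and sessions as "sentences" for Word2Vec, and a session embedding is formed by aggregating the item vectors, here with a plain average. The hierarchicalization, user embedding, and sequential-update scheme of the proposed system are not reproduced, and all hyperparameters are assumptions.

```python
# Minimal sketch (assumptions throughout): Word2Vec item embeddings and a simple
# averaged session embedding used to rank candidate items.
import numpy as np
from gensim.models import Word2Vec

sessions = [["item1", "item2", "item3"],
            ["item2", "item4"],
            ["item1", "item3", "item4", "item5"]]

w2v = Word2Vec(sentences=sessions, vector_size=32, window=5, min_count=1, epochs=20)

def session_embedding(items):
    """Average of the embeddings of items seen so far (simplest possible aggregation)."""
    return np.mean([w2v.wv[i] for i in items if i in w2v.wv], axis=0)

def recommend(items, k=2):
    """Rank candidate items by cosine similarity to the current session embedding."""
    return w2v.wv.similar_by_vector(session_embedding(items), topn=k)

print(recommend(["item1", "item2"]))
```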

  • Finformer: Fast Incremental and General Time Series Data Prediction Open Access

    Savong BOU  Toshiyuki AMAGASA  Hiroyuki KITAGAWA  

     
    PAPER

    Publicized: 2024/01/09  Vol: E107-D No:5  Page(s): 625-637

    Forecasting time-series data is useful in many fields, such as stock price prediction, autonomous driving, and weather forecasting. Many existing forecasting models work well when forecasting short-sequence time series; however, when working with long-sequence time series, their performance suffers significantly. Recently, there has been more intense research in this direction, and Informer is currently the most efficient predicting model. Informer's main drawback is that it does not allow for incremental learning. In this paper, we propose a Fast Informer called Finformer, which addresses this bottleneck by reducing the training/predicting time of Informer. Finformer can efficiently compute the positional/temporal/value embeddings and the Query/Key/Value of the self-attention incrementally. Theoretically, Finformer improves the speed of both training and prediction over the state-of-the-art model Informer. Extensive experiments show that Finformer is about 26% faster than Informer for both short- and long-sequence time series prediction. In addition, Finformer is about 20% faster than InTrans for the general Conv1d; InTrans is one of our previous works and the predecessor of Finformer.
