
Keyword Search Result

[Keyword] ACH (1072 hits)

Showing 1-20 of 1072 hits

  • Machine Learning-Based System for Heat-Resistant Analysis of Car Lamp Design Open Access

    Hyebong CHOI  Joel SHIN  Jeongho KIM  Samuel YOON  Hyeonmin PARK  Hyejin CHO  Jiyoung JUNG  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2024/04/03 | Vol: E107-D No:8 | Page(s): 1050-1058

    The design of automobile lamps requires accurate estimation of heat distribution to prevent overheating and deformation of the product. Traditional heat-resistance analysis using computational fluid dynamics (CFD) is time-consuming and requires expertise in thermofluid mechanics, making real-time temperature analysis inaccessible to lamp designers. We propose a machine learning-based temperature prediction system for automobile lamp design. We trained our machine learning models on CFD results for various lamp designs, providing lamp designers with real-time heat-resistance analysis. Comprehensive tests on real lamp products demonstrate that our prediction model estimates heat distribution comparably to CFD analysis within a minute. Our system visualizes the estimated heat distribution of a car lamp design, supporting quick decision-making by lamp designers. It is expected to shorten the product design process and improve market competitiveness.
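
    As a hedged illustration of the surrogate-modeling idea described above (not the authors' implementation), the sketch below regresses per-node temperature from design features using CFD outputs as training data; the file name, feature columns, and model choice are all assumptions.

    ```python
    # Minimal sketch of a CFD surrogate model. The CSV name and the feature
    # columns are hypothetical placeholders, not the paper's dataset.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    df = pd.read_csv("cfd_results.csv")  # one row per mesh node of a lamp design
    X = df[["node_x", "node_y", "node_z", "bulb_power_w", "vent_area_mm2"]]
    y = df["temperature_c"]              # steady-state temperature from CFD

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    print("MAE [deg C]:", mean_absolute_error(y_te, model.predict(X_te)))
    ```

    Once trained, such a model returns a full temperature field in well under a minute, which is what enables the interactive visualization the abstract describes.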

  • On Easily Reconstructable Logic Functions Open Access

    Tsutomu SASAO  

     
    PAPER
    Publicized: 2024/04/16 | Vol: E107-D No:8 | Page(s): 913-921

    This paper shows that sum-of-products expression (SOP) minimization provides generalization ability. We show this in three steps. First, various classes of SOPs are generated. Second, minterms of each SOP are randomly selected to generate partially defined functions. Third, from the partially defined functions, the original functions are reconstructed by SOP minimization. We consider Achilles heel functions, majority functions, monotone increasing cascade functions, functions generated from random SOPs, monotone increasing random SOPs, circle functions, and globe functions. In terms of generalization ability, the presented method is compared with naive Bayes, multilayer perceptron, support vector machine, JRip, J48, and random forest classifiers. For these functions, in many cases, only 10% of the input combinations are sufficient to reconstruct more than 90% of the truth tables of the original functions.
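
    A minimal sketch of the reconstruction experiment, here for the 6-variable majority function and using sympy's SOPform as a stand-in for the paper's SOP minimizer; the 10% sampling rate matches the abstract, while the bit convention and function choice are illustrative.

    ```python
    # Sample a fraction of a function's inputs, treat the unseen inputs as
    # don't-cares, minimize the SOP, and measure truth-table agreement.
    import random
    from sympy import symbols
    from sympy.logic import SOPform

    n = 6
    vars_ = symbols(f"x0:{n}")
    majority = {m for m in range(2**n) if bin(m).count("1") > n // 2}

    def bits(m):  # explicit bit vector so our convention is self-consistent
        return [(m >> (n - 1 - i)) & 1 for i in range(n)]

    known = random.sample(range(2**n), k=2**n // 10)   # observe only 10%
    minterms = [bits(m) for m in known if m in majority]
    dontcares = [bits(m) for m in range(2**n) if m not in known]

    recon = SOPform(vars_, minterms, dontcares)        # minimized SOP

    def truth(expr, m):
        return bool(expr.subs(dict(zip(vars_, bits(m)))))

    agree = sum(truth(recon, m) == (m in majority) for m in range(2**n))
    print(f"reconstructed {agree / 2**n:.1%} of the truth table")
    ```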

  • Video Reflection Removal by Modified EDVR and 3D Convolution Open Access

    Sota MORIYAMA  Koichi ICHIGE  Yuichi HORI  Masayuki TACHI  

     
    LETTER-Image
    Publicized: 2023/12/11 | Vol: E107-A No:8 | Page(s): 1430-1434

    In this paper, we propose a method for video reflection removal using the video restoration framework with enhanced deformable networks (EDVR). We examine the effect of each module in EDVR on video reflection removal and modify the models using 3D convolutions. The performance of each modified model is evaluated in terms of the RMSE between the structural similarity (SSIM) and the smoothed SSIM, which represents temporal consistency.
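
    The evaluation metric can be pictured as follows; this is one plausible reading of "RMSE between the SSIM and the smoothed SSIM", with the smoothing window length as an assumption.

    ```python
    # RMSE between the per-frame SSIM curve and its moving-average smoothing,
    # as a proxy for temporal consistency of the restored video.
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def temporal_consistency_rmse(restored, reference, win=5):
        """restored, reference: (T, H, W) grayscale video arrays in [0, 1]."""
        curve = np.array([ssim(r, g, data_range=1.0)
                          for r, g in zip(restored, reference)])
        kernel = np.ones(win) / win
        smoothed = np.convolve(curve, kernel, mode="same")  # smoothed SSIM
        return float(np.sqrt(np.mean((curve - smoothed) ** 2)))
    ```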

  • Mixed-Integer Linear Optimization Formulations for Feature Subset Selection in Kernel SVM Classification Open Access

    Ryuta TAMURA  Yuichi TAKANO  Ryuhei MIYASHIRO  

     
    PAPER-Numerical Analysis and Optimization
    Publicized: 2024/02/08 | Vol: E107-A No:8 | Page(s): 1151-1162

    We study a mixed-integer optimization (MIO) approach to feature subset selection in nonlinear kernel support vector machines (SVMs) for binary classification. To measure the performance of subset selection, we use the distance between two classes (DBTC) in a high-dimensional feature space based on the Gaussian kernel function. However, the DBTC to be maximized as an objective function is nonlinear, nonconvex, and nonconcave. Despite the difficulty of linearizing such a nonlinear function in general, our major contribution is a mixed-integer linear optimization (MILO) formulation that maximizes the DBTC for feature subset selection; this MILO problem can be solved to optimality using optimization software. We also derive a reduced version of the MILO problem to accelerate our MILO computations. Experimental results show good computational efficiency for our MILO formulation with the reduced problem. Moreover, our method often outperforms the linear-SVM-based MILO formulation and recursive feature elimination in prediction performance, especially when there are relatively few data instances.
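
    For intuition, the squared distance between the two class means in the Gaussian-kernel feature space has a closed form in kernel sums; the sketch below evaluates it for a candidate feature subset (the paper's exact DBTC definition and normalization may differ).

    ```python
    # ||mu+ - mu-||^2 in feature space = mean K(P,P) + mean K(N,N) - 2 mean K(P,N)
    import numpy as np

    def rbf(A, B, gamma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def dbtc_sq(X_pos, X_neg, mask, gamma=1.0):
        """mask: boolean feature-subset indicator (the MILO decision variables)."""
        P, N = X_pos[:, mask], X_neg[:, mask]
        return (rbf(P, P, gamma).mean() + rbf(N, N, gamma).mean()
                - 2 * rbf(P, N, gamma).mean())
    ```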

  • Real-Time Monitoring Systems That Provide M2M Communication between Machines Open Access

    Ya ZHONG  

     
    PAPER-Language, Thought, Knowledge and Intelligence
    Publicized: 2023/10/17 | Vol: E107-A No:7 | Page(s): 1019-1026

    Artificial intelligence and Internet of Things technologies have benefited from technological advances and new automated computer system technologies, and it is now possible to integrate them into a single offline industrial system. This is accomplished through machine-to-machine communication, which eliminates the human factor. The purpose of this article is to examine security systems for machine-to-machine communication that rely on identification and authentication algorithms for real-time monitoring. The article investigates security methods for quickly resolving data processing issues by using the Security Operations Center's main machine to identify and authenticate devices among 19 different machines. The results indicate that when machines run offline and perform various tasks, both the individual machines and the system as a whole can be exposed to data leaks and malware attacks. The study examines the operation of 19 computers, 7 of which were subjected to data leakage and malware attacks. AnyLogic software is used to create visual representations of the results using wireless networks and algorithms based on previously processed methods. The W76S is used as a protective element within intelligent sensors due to its built-in memory protection. For 4 machines, the data leakage time under malware attack was 70 s; for 10 machines, the duration was 150 s with 3 attacks. Machine 15 sustained an attack lasting 190 s with 6 malware attacks, while machine 19 sustained one lasting 200 s with 7 malware attacks. The highest numbers indicate that attempting to hack a system increases the risk of damaging a device, potentially causing the entire system of connected devices to fail. Thus, illegal attacks using malware can be identified over time, and their effects on data processing can be prevented by intelligent control. The results reveal that applying identification and authentication methods through a protocol increases cyber-physical system security while also allowing real-time monitoring of offline system security.

  • LSTM Neural Network Algorithm for Handover Improvement in a Non-Ideal Network Using O-RAN Near-RT RIC Open Access

    Baud Haryo PRANANTO   ISKANDAR   HENDRAWAN  Adit KURNIAWAN  

     
    PAPER-Network Management/Operation
    Vol: E107-B No:6 | Page(s): 458-469

    Handover is an important property of cellular communication that enables a user to move from one cell to another without losing the connection. It is a crucial process for the quality of the user's experience because it may interrupt data transmission, so good handover management is very important in current and future cellular systems. Several techniques have been employed to improve handover performance, usually by increasing the probability of a successful handover. One of them is predictive handover, which predicts the target cell using some method other than the traditional measurement-based algorithm, including machine learning. Several studies have implemented predictive handover, most of them by modifying the internal algorithm of existing network elements such as the base station. We instead implemented a predictive handover algorithm in an intelligent node outside the existing network elements, to minimize modification of the network and to create modularity in the system. Using the recently standardized Open Radio Access Network (O-RAN) Near Real-time RAN Intelligent Controller (Near-RT RIC), we created a modular application that improves handover performance by determining the target cell using machine learning. In our previous research, we modified the original Near-RT RIC software to determine the target cell by predicting the throughput of each neighboring cell with vector autoregression, and later with a multi-layer perceptron (MLP) neural network. In this paper, we redesign the neural network using long short-term memory (LSTM), which better handles time-series data. We show that our proposed LSTM-based machine learning algorithm running in the Near-RT RIC improves handover performance compared to the traditional measurement-based algorithm.
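
    A hedged PyTorch sketch of such a predictor; the shapes, window length, and cell count are assumptions, not the authors' xApp code.

    ```python
    # LSTM that maps a window of past per-neighbor-cell throughput samples to
    # the next sample; the handover target is the cell with the highest
    # predicted throughput.
    import torch
    import torch.nn as nn

    class ThroughputLSTM(nn.Module):
        def __init__(self, n_cells, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_cells, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_cells)

        def forward(self, x):             # x: (batch, window, n_cells)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])  # predicted next throughput per cell

    model = ThroughputLSTM(n_cells=4)
    window = torch.randn(1, 10, 4)        # last 10 samples for 4 neighbor cells
    target_cell = model(window).argmax(dim=1)  # pick the best predicted cell
    ```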

  • A POMDP-Based Approach to Assortment Optimization Problem for Vending Machine Open Access

    Gaku NEMOTO  Kunihiko HIRAISHI  

     
    PAPER-Mathematical Systems Science
    Publicized: 2023/09/05 | Vol: E107-A No:6 | Page(s): 909-918

    Assortment optimization is one of the main problems for retailers and has been widely studied. In this paper, we focus on vending machines, which have many characteristic issues to be considered. We first formulate an assortment optimization problem for vending machines, next propose a model that represents consumers' decision making, and then show a solution method based on a partially observable Markov decision process (POMDP). The problem includes incomplete state observation, stochastic consumer behavior, and policy decisions that maximize future expected rewards. Using computer simulation, we observe that sales increase compared to those obtained by heuristic methods under the same conditions. Moreover, sales approach the theoretical upper bound.
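
    The core POMDP machinery such a solution method relies on is the Bayesian belief update; below is a toy numpy version with illustrative transition and observation matrices, not the paper's model.

    ```python
    # Belief update: b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) b(s)
    import numpy as np

    def belief_update(b, T, O, a, o):
        """b: belief over states; T[a]: (S, S) transitions; O[a]: (S, obs) likelihoods."""
        pred = b @ T[a]               # predicted next-state distribution
        post = pred * O[a][:, o]      # weight by observation likelihood
        return post / post.sum()      # renormalize to a probability vector
    ```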

  • Effects of Electromagnetic Interference on Speed and Position Estimations of Sensorless SPMSM Open Access

    Yuanhe XUE  Wei YAN  Xuan LIU  Mengxia ZHOU  Yang ZHAO  Hao MA  

     
    PAPER-Electromechanical Devices and Components
    Publicized: 2023/11/10 | Vol: E107-C No:5 | Page(s): 124-131

    Model-based sensorless control of permanent magnet synchronous motors (PMSMs) is promising for high-speed operation: it estimates the motor state, namely the speed and position of the rotor, from the electrical signals of the stator, although estimation accuracy is inevitably degraded by electromagnetic interference (EMI) from the switching devices of the converter. In this paper, a simulation system based on a Luenberger observer and a phase-locked loop (PLL) is established to analyze the impact of EMI on motor state estimation theoretically, exploring the influence of EMI under different cutoff frequencies, rated speeds, frequencies, and amplitudes. The results show that the Luenberger observer and PLL have strong immunity, enabling the PMSM to operate stably even under certain degrees of interference. EMI produces sideband harmonics that enlarge the pulsation errors of the speed and position estimates. Additionally, estimation errors are positively correlated with the cutoff frequency of the low-pass filter and the amplitude of the EMI, and negatively correlated with the rated speed of the motor and the frequency of the EMI. When the EMI frequency is very high, its effect on motor state estimation is negligible. This work contributes to a comprehensive understanding of how EMI affects motor state estimation, which further enhances the practical application of sensorless PMSM control.
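
    Conceptually, the PLL tracks the rotor angle by driving a phase error to zero through a PI loop; here is a toy sketch with illustrative gains, not the paper's simulation system.

    ```python
    # PI-based PLL: the phase detector outputs sin(theta - theta_hat), the PI
    # loop filter produces the speed estimate, and its integral the position.
    import numpy as np

    def pll_track(theta_true, dt=1e-5, kp=2000.0, ki=4e6):
        theta_hat, integ = 0.0, 0.0
        est = []
        for th in theta_true:
            err = np.sin(th - theta_hat)   # phase detector (small-angle error)
            integ += ki * err * dt
            omega_hat = kp * err + integ   # PI loop filter -> speed estimate
            theta_hat += omega_hat * dt    # integrate speed -> position estimate
            est.append(theta_hat)
        return np.array(est)
    ```

    EMI entering the stator signals perturbs `err`, which is why the abstract's sideband harmonics show up as pulsation errors in both estimates.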

  • PopDCN: Popularity-Aware Dynamic Clustering Scheme for Distributed Caching in ICN Open Access

    Mikiya YOSHIDA  Yusuke ITO  Yurino SATO  Hiroyuki KOGA  

     
    PAPER-Network
    Vol: E107-B No:5 | Page(s): 398-407

    Information-centric networking (ICN) provides low-latency content delivery with in-network caching, but delivery latency depends on the distance between consumers and caches. To reduce delivery latency, a scheme has been proposed that clusters domains and retains the most popular content in each cluster within a cache distribution range, enabling consumers to retrieve content from neighboring clusters/caches. However, when the distribution of content popularity changes, content caches may no longer be distributed adequately within a cluster, so consumers cannot retrieve them from nearby caches. We therefore propose a dynamic clustering scheme that adjusts the cache distribution range in accordance with changes in content popularity, and we evaluate its effectiveness through simulation.

  • A Multiobjective Approach for Side-Channel Based Hardware Trojan Detection Using Power Traces Open Access

    Priyadharshini MOHANRAJ  Saravanan PARAMASIVAM  

     
    PAPER-Cryptography and Information Security
    Publicized: 2023/08/23 | Vol: E107-A No:5 | Page(s): 825-835

    The detection of hardware trojans has been extensively studied. In this article, we propose a side-channel analysis technique that uses a wrapper-based feature selection method for hardware trojan detection. The whale optimization algorithm is modified to carefully extract the best feature subset. The aim of the proposed technique is multiobjective: improving accuracy and minimizing the number of features. Power consumption traces measured from AES-128 trojan circuits are used as features in this experiment. The stabilizing property of the feature selection method brings a mutual trade-off between precision and recall, thereby minimizing the number of false negatives. The proposed hardware trojan detection scheme achieves up to a 10.3% improvement in accuracy and a reduction to as little as a single feature by employing the modified whale optimization technique. Evaluation results on various Trust-Hub cryptographic benchmark circuits show the method to be more efficient than existing state-of-the-art methods.
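
    A sketch of the wrapper-style multiobjective fitness such a method might optimize; the SVM classifier and the weight `alpha` are assumptions, not the paper's exact setup.

    ```python
    # Score a candidate feature subset: classification accuracy penalized by
    # subset size, so fewer power-trace features is better at equal accuracy.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def fitness(mask, X, y, alpha=0.01):
        """mask: boolean vector selecting power-trace features."""
        if not mask.any():
            return 0.0
        acc = cross_val_score(SVC(), X[:, mask], y, cv=5).mean()
        return acc - alpha * mask.sum() / mask.size
    ```

    The modified whale optimization algorithm would then search over binary masks to maximize this fitness.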

  • Implementing Optical Analog Computing and Electrooptic Hopfield Network by Silicon Photonic Circuits Open Access

    Guangwei CONG  Noritsugu YAMAMOTO  Takashi INOUE  Yuriko MAEGAMI  Morifumi OHNO  Shota KITA  Rai KOU  Shu NAMIKI  Koji YAMADA  

     
    INVITED PAPER
    Publicized: 2024/01/05 | Vol: E107-A No:5 | Page(s): 700-708

    Wide deployment of artificial intelligence (AI) is inducing exponentially growing energy consumption. Traditional digital platforms are finding it difficult to fulfill these ever-growing demands on energy efficiency as well as computing latency, which necessitates the development of highly efficient analog hardware platforms for AI. Recently, optical and electrooptic hybrid computing has been revived as a promising analog hardware alternative because it can accelerate information processing in an energy-efficient way. Integrated photonic circuits offer such an analog hardware solution for implementing photonic AI and machine learning. For this purpose, we proposed a photonic analog of the support vector machine and experimentally demonstrated low-latency and low-energy classification computing, which evidences the latency and energy advantages of optical analog computing over traditional digital computing. We also proposed an electrooptic Hopfield network for classifying and recognizing time-series data. This paper reviews our work on implementing classification computing and Hopfield networks by leveraging silicon photonic circuits.
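
    For reference, the classical discrete Hopfield recall that such an electrooptic circuit realizes in the analog domain; this is a textbook numpy sketch, not the photonic implementation.

    ```python
    # Hebbian weights from stored patterns, then asynchronous sign updates
    # relax the probe state toward the nearest stored attractor.
    import numpy as np

    def hopfield_recall(patterns, probe, steps=100, rng=np.random.default_rng(0)):
        P = np.array(patterns)            # (n_patterns, n_neurons), entries +/-1
        W = P.T @ P / P.shape[1]          # Hebbian outer-product weights
        np.fill_diagonal(W, 0.0)          # no self-coupling
        s = probe.copy()
        for _ in range(steps):
            i = rng.integers(len(s))      # asynchronous single-neuron update
            s[i] = 1 if W[i] @ s >= 0 else -1
        return s
    ```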

  • Batch Updating of a Posterior Tree Distribution Over a Meta-Tree

    Yuta NAKAHARA  Toshiyasu MATSUSHIMA  

     
    LETTER-Learning
    Publicized: 2023/08/23 | Vol: E107-A No:3 | Page(s): 523-525

    Previously, we proposed a probabilistic data generation model represented by an unobservable tree and a sequential updating method to calculate a posterior distribution over a set of trees. The set is called a meta-tree. In this paper, we propose a more efficient batch updating method.

  • Channel Capacity with Cost Constraint Allowing Cost Overrun

    Masaki HORI  Mikihiko NISHIARA  

     
    PAPER-Shannon Theory
    Publicized: 2023/10/10 | Vol: E107-A No:3 | Page(s): 458-463

    A channel coding problem with a cost constraint for general channels is considered. Verdú and Han derived the ϵ-capacity of general channels; following the same lines as their proof, we can also derive the ϵ-capacity with a cost constraint. In this paper, we derive a formula for the ϵ-capacity with a cost constraint that allows cost overruns. To prove this theorem, a new variation of Feinstein's lemma is applied to select both codewords satisfying the cost constraint and codewords not satisfying it.
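
    For orientation, the classical cost-constrained capacity of a stationary memoryless channel, which the general-channel ϵ-capacity formulas generalize:

    ```latex
    % Cost-constrained capacity with cost function c and budget \Gamma:
    C(\Gamma) = \sup_{P_X \,:\, \mathbb{E}[c(X)] \le \Gamma} I(X;Y)
    ```

    The paper's setting relaxes the hard constraint $\mathbb{E}[c(X)] \le \Gamma$ by permitting a controlled fraction of codewords to overrun the cost budget.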

  • RR-Row: Redirect-on-Write Based Virtual Machine Disk for Record/Replay

    Ying ZHAO  Youquan XIAN  Yongnan LI  Peng LIU  Dongcheng LI  

     
    PAPER-Data Engineering, Web Information Systems
    Publicized: 2023/11/06 | Vol: E107-D No:2 | Page(s): 169-179

    Record/replay is an essential tool in clouds that provides capabilities such as fault tolerance, software debugging, and security analysis by recording an execution into a log and replaying it deterministically later. However, in virtualized environments, the log file grows heavily because a considerable amount of I/O data is saved, introducing significant storage costs. To mitigate this problem, this paper proposes RR-Row, a redirect-on-write based virtual machine disk for record/replay scenarios. RR-Row appends written data to new blocks rather than overwriting the original blocks during normal execution, so all written data is preserved in the disk. In this way, the record system saves only the block id instead of the full content, and the replay system can fetch the data directly from the disk rather than the log, substantially reducing the log size. In addition, we propose several optimizations for improving I/O performance so that RR-Row is also suitable for normal execution. We implement RR-Row for QEMU and conduct a set of experiments. The results show that RR-Row reduces the log size by 68% compared to the currently used Raw/QCow2 disks without compromising I/O performance.
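
    A toy sketch of the redirect-on-write bookkeeping described above; the class and method names are illustrative, not QEMU's or RR-Row's API.

    ```python
    # Writes go to fresh physical blocks and only block ids enter the record
    # log, so replay can re-read the data from the disk instead of the log.
    class RowDisk:
        def __init__(self):
            self.blocks = {}      # physical block id -> data
            self.mapping = {}     # virtual block id -> physical block id
            self.next_phys = 0
            self.record_log = []  # replay log: (virtual, physical) id pairs only

        def write(self, vblock, data):
            phys = self.next_phys           # never overwrite: append a new block
            self.next_phys += 1
            self.blocks[phys] = data
            self.mapping[vblock] = phys
            self.record_log.append((vblock, phys))  # log ids, not contents

        def read(self, vblock):
            return self.blocks[self.mapping[vblock]]
    ```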

  • A Recommendation-Based Auxiliary Caching for Mapping Record

    Zhaolin MA  Jiali YOU  Haojiang DENG  

     
    PAPER-Fundamental Theories for Communications
    Vol: E107-B No:2 | Page(s): 286-295

    Due to the increase in data volume and intensified concurrent requests, distributed caching is commonly used to manage high-concurrency requests and alleviate pressure on databases. However, there is limited research on distributed caching of mapping records, and traditional caching algorithms have suboptimal resolution performance for mapping records, which typically follow a long-tail distribution. To address this issue, we propose a recommendation-based adaptive auxiliary caching method, AC-REC, which delivers the primary cache record along with a list of additional cache records. The method uses request correlations as the basis for recommendations, customizes the number of additional cache entries provided, and dynamically adjusts the time-to-live. We conducted evaluations to compare the performance of our method against various benchmark strategies. The results show that our proposed method increases the cache hit ratio by an average of 20% compared to the conventional LCE method, while effectively utilizing the cache space. We believe that our strategy will contribute an effective solution to related studies in both traditional network architectures and caching paradigms like ICN.
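
    An illustrative sketch of the auxiliary-caching idea (a primary record cached together with correlated records, with a popularity-scaled TTL); all policy constants are assumptions, not AC-REC's actual parameters.

    ```python
    # On resolution, cache the primary mapping record plus recommended records;
    # records that are hit often earn a longer time-to-live.
    import time

    class AuxCache:
        def __init__(self, base_ttl=60.0):
            self.store, self.hits, self.base_ttl = {}, {}, base_ttl

        def put(self, key, record, correlated):
            ttl = self.base_ttl * (1 + self.hits.get(key, 0))  # popular -> longer
            self.store[key] = (record, time.time() + ttl)
            for k, rec in correlated:                # recommended extra records
                self.store.setdefault(k, (rec, time.time() + self.base_ttl))

        def get(self, key):
            entry = self.store.get(key)
            if entry and entry[1] > time.time():
                self.hits[key] = self.hits.get(key, 0) + 1
                return entry[0]
            return None
    ```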

  • Inference Discrepancy Based Curriculum Learning for Neural Machine Translation

    Lei ZHOU  Ryohei SASANO  Koichi TAKEDA  

     
    PAPER-Natural Language Processing
    Publicized: 2023/10/18 | Vol: E107-D No:1 | Page(s): 135-143

    In practice, even a well-trained neural machine translation (NMT) model can still make biased inferences on the training set due to distribution shifts. In the human learning process, if we cannot reproduce something correctly after learning it multiple times, we consider it more difficult. Likewise, a training example causing a large discrepancy between inference and reference implies a higher learning difficulty for the MT model. Therefore, we propose to adopt the inference discrepancy of each training example as the difficulty criterion and to rank training examples from easy to hard accordingly. In this way, a trained model can guide the curriculum learning process of an initial model identical to itself. We describe this training scheme as guiding the learning process of a curriculum NMT model with a pretrained vanilla model. In this paper, we assess the effectiveness of the proposed training scheme and investigate the influence of translation direction, evaluation metrics, and different curriculum schedules. Experimental results on the translation benchmarks WMT14 English ⇒ German, WMT17 Chinese ⇒ English, and the Multitarget TED Talks Task (MTTT) English ⇔ German, English ⇔ Chinese, and English ⇔ Russian demonstrate that our proposed method consistently improves translation performance over the strong Transformer baseline.
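
    A hedged sketch of the difficulty criterion, using sentence BLEU from sacrebleu as one possible discrepancy measure; the paper considers several metrics, and the `translate` function stands in for the trained vanilla model.

    ```python
    # Score each training pair by the discrepancy between the trained model's
    # hypothesis and the reference, then sort from easy to hard.
    from sacrebleu import sentence_bleu

    def rank_by_discrepancy(pairs, translate):
        """pairs: list of (source, reference); translate: trained NMT model fn."""
        scored = [(100.0 - sentence_bleu(translate(src), [ref]).score, src, ref)
                  for src, ref in pairs]
        scored.sort(key=lambda t: t[0])   # small discrepancy first = easy first
        return [(src, ref) for _, src, ref in scored]
    ```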

  • Multi-Task Learning of Japanese How-to Tip Machine Reading Comprehension by a Generative Model

    Xiaotian WANG  Tingxuan LI  Takuya TAMURA  Shunsuke NISHIDA  Takehito UTSURO  

     
    PAPER-Natural Language Processing
    Publicized: 2023/10/23 | Vol: E107-D No:1 | Page(s): 125-134

    In research on machine reading comprehension for Japanese how-to tip QA tasks, conventional extractive machine reading comprehension methods have difficulty dealing with cases in which the answer string spans multiple locations in the context; fine-tuning a BERT model for machine reading comprehension is not suitable for such cases. In this paper, we trained a generative machine reading comprehension model for Japanese how-to tips by constructing a generative dataset based on the website "wikihow" as a source of information. We then propose two multi-task learning methods for fine-tuning the generative model. The first is multi-task learning with a hybrid generative and extractive training dataset, where both generative and extractive datasets are trained simultaneously on a single model. The second is multi-task learning with inter-sentence semantic similarity and answer generation, where, alongside the answer generation task, the model additionally learns the distance between the sentences of the question/context and the answer in the training examples. The evaluation results show that both multi-task learning methods significantly outperform single-task learning on generative question-and-answer examples. Of the two, the method with inter-sentence semantic similarity and answer generation performed best in terms of manual evaluation. The data and code are available at https://github.com/EternalEdenn/multitask_ext-gen_sts-gen.
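
    A minimal sketch of how the second scheme's joint objective could be composed; the weighting `lam` and the MSE similarity loss are assumptions, not the paper's exact formulation.

    ```python
    # Joint loss: answer-generation loss plus a weighted inter-sentence
    # similarity regression loss on a shared model.
    import torch.nn as nn

    class MultiTaskLoss(nn.Module):
        def __init__(self, lam=0.5):
            super().__init__()
            self.lam, self.mse = lam, nn.MSELoss()

        def forward(self, gen_loss, sim_pred, sim_target):
            return gen_loss + self.lam * self.mse(sim_pred, sim_target)
    ```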

  • Device Type Classification Based on Two-Stage Traffic Behavior Analysis Open Access

    Chikako TAKASAKI  Tomohiro KORIKAWA  Kyota HATTORI  Hidenari OHWADA  

     
    PAPER
    Publicized: 2023/10/17 | Vol: E107-B No:1 | Page(s): 117-125

    In beyond-5G and 6G networks, the number and types of connected devices will greatly increase, including not only user devices such as smartphones but also the Internet of Things (IoT). Moreover, non-terrestrial networks (NTN) introduce dynamic changes in the types of connected devices, as base stations or access points are moving objects. Therefore, continuous network capacity design is required to fulfill the network requirements of each device. However, continuously optimizing network capacity design for each device within a short time span is difficult because of the heavy calculation load. We introduce device types, groups of devices whose traffic characteristics resemble each other, and optimize network capacity per device type for efficient network capacity design. This paper proposes a method to classify device types by analyzing only encrypted traffic behavior, without using payloads or packets of specific protocols. In the first stage, general device types, such as IoT and non-IoT, are classified by analyzing packet header statistics using machine learning. In the second stage, devices classified as IoT in the first stage are further classified into IoT device types by analyzing a time series of traffic behavior using deep learning. We demonstrate that the proposed method classifies device types on real traffic datasets and outperforms existing IoT-only device classification methods in terms of the number of types and accuracy. In addition, the proposed model performs comparably to a state-of-the-art traffic classification model, the 1D ResNet model. The proposed method is suitable for grasping device types in terms of traffic characteristics, toward efficient network capacity design in networks where massive numbers of devices for various services are connected and the connected devices continuously change.
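
    A hedged sketch of the two-stage control flow; the feature layouts and toy training data are assumptions, and the paper's second stage is a deep time-series model, for which a second random forest stands in here.

    ```python
    # Stage 1: header statistics -> IoT / non-IoT.
    # Stage 2: traffic time series -> concrete IoT device type.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    stage1 = RandomForestClassifier()
    stage2 = RandomForestClassifier()   # stand-in for the deep time-series model

    rng = np.random.default_rng(0)      # toy training data for illustration only
    X1, y1 = rng.random((20, 8)), rng.choice(["iot", "non-iot"], 20)
    X2, y2 = rng.random((20, 32)), rng.choice(["camera", "sensor", "speaker"], 20)
    stage1.fit(X1, y1)
    stage2.fit(X2, y2)

    def classify(header_stats, traffic_series):
        if stage1.predict(header_stats.reshape(1, -1))[0] != "iot":
            return "non-iot"
        return stage2.predict(traffic_series.reshape(1, -1))[0]
    ```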

  • Virtualizing DVFS for Energy Minimization of Embedded Dual-OS Platform

    Takumi KOMORI  Yutaka MASUDA  Tohru ISHIHARA  

     
    PAPER
    Publicized: 2023/07/12 | Vol: E107-A No:1 | Page(s): 3-15

    Recent embedded systems require both traditional machinery control and information processing, such as network and GUI handling. A dual-OS platform consolidates a real-time OS (RTOS) and a general-purpose OS (GPOS) to realize efficient software development on one physical processor. Although the dual-OS platform is attracting increasing attention, it often suffers from energy inefficiency in the GPOS in order to guarantee the real-time responsiveness of the RTOS. This paper proposes an energy minimization method called DVFS virtualization, which allows multiple DVFS policies to run, dedicated to the RTOS and GPOS, respectively. An experimental evaluation using a commercial microcontroller showed that the proposed hardware can change the supply voltage within 500 ns and reduce the energy consumption of typical applications by 60% in the best case compared to conventional dual-OS platforms. Furthermore, an evaluation using a commercial microprocessor achieved a 15% energy reduction for practical open-source software in the best case.
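
    A toy model of the core idea: each OS carries its own DVFS policy and the hardware applies the active domain's operating point on a world switch. The names and voltage/frequency pairs are illustrative, not the paper's hardware interface.

    ```python
    # Per-OS DVFS policies applied independently at each world switch.
    OPPS = {"low": (0.8, 100e6), "mid": (1.0, 400e6), "high": (1.2, 1000e6)}

    class DvfsDomain:
        def __init__(self, policy):
            self.policy = policy            # maps load -> operating-point name

        def request(self, load):
            return OPPS[self.policy(load)]

    rtos = DvfsDomain(lambda load: "high")  # RTOS: always meet deadlines
    gpos = DvfsDomain(lambda load: "low" if load < 0.3 else "mid")

    def world_switch(active_domain, load):
        volt, freq = active_domain.request(load)  # hardware sets V/f quickly
        return volt, freq
    ```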

  • Ising-Machine-Based Solver for Constrained Graph Coloring Problems

    Soma KAWAKAMI  Yosuke MUKASA  Siya BAO  Dema BA  Junya ARAI  Satoshi YAGI  Junji TERAMOTO  Nozomu TOGAWA  

     
    PAPER
    Publicized: 2023/09/12 | Vol: E107-A No:1 | Page(s): 38-51

    Ising machines can find optimal or quasi-optimal solutions of combinatorial optimization problems efficiently and effectively. The graph coloring problem, one of the difficult combinatorial optimization problems, is to assign a color to each vertex of a graph such that no two vertices connected by an edge have the same color. Although methods to map the graph coloring problem onto the Ising model or the quadratic unconstrained binary optimization (QUBO) model have been proposed, none of them considers minimizing the number of colors. In addition, there is no Ising-machine-based method that considers additional constraints, which are needed for practical problems. In this paper, we propose a method for mapping the graph coloring problem, including minimization of the number of colors and additional constraints, to the QUBO model. In addition to the constraint terms for the graph coloring problem, we first propose an objective function term that minimizes the number of colors without an exponential increase in the number of spins used. Second, we propose two additional constraint terms: one requires that specific vertices be colored with specified colors; the other forbids using specific colors more than a number of times given in advance. We theoretically prove that if the energy of the proposed QUBO mapping is minimized, all the constraints are satisfied and the objective function is minimized. Experiments on an Ising machine showed that, when additional constraints are not considered, the proposed method reduces the number of used colors by up to 75.1% on average compared to the existing baseline method. When the additional constraints are considered, the proposed method can effectively find feasible solutions satisfying all the constraints.
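
    A hedged sketch of one known QUBO construction with a color-usage objective via auxiliary variables y[c] ("color c is used"); the penalty weights are assumptions (they must satisfy C < B for the coupling to work), and the paper's mapping differs, notably in how it avoids blowing up the number of spins.

    ```python
    # QUBO for graph coloring: one-hot assignment x[v,c], adjacency penalty,
    # and sum_c y[c] as the color-minimization objective.
    from itertools import product

    def coloring_qubo(edges, n_vertices, n_colors, A=4.0, B=4.0, C=1.0):
        Q = {}  # (var, var) -> coefficient; vars are ("x", v, c) or ("y", c)

        def add(i, j, w):
            key = (i, j) if i <= j else (j, i)
            Q[key] = Q.get(key, 0.0) + w

        for v in range(n_vertices):        # A * (sum_c x[v,c] - 1)^2 : one-hot
            for c in range(n_colors):
                add(("x", v, c), ("x", v, c), -A)
                for d in range(c + 1, n_colors):
                    add(("x", v, c), ("x", v, d), 2 * A)
        for (u, v), c in product(edges, range(n_colors)):
            add(("x", u, c), ("x", v, c), B)   # adjacent vertices, same color
        for v, c in product(range(n_vertices), range(n_colors)):
            # B * x[v,c] * (1 - y[c]) forces y[c] = 1 whenever color c is used
            add(("x", v, c), ("x", v, c), B)
            add(("x", v, c), ("y", c), -B)
        for c in range(n_colors):
            add(("y", c), ("y", c), C)         # objective: minimize used colors
        return Q
    ```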

Showing 1-20 of 1072 hits