
Keyword Search Result

[Keyword] EE(4079hit)

61-80hit(4079hit)

  • Location and History Information Aided Efficient Initial Access Scheme for High-Speed Railway Communications

    Chang SUN  Xiaoyu SUN  Jiamin LI  Pengcheng ZHU  Dongming WANG  Xiaohu YOU  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2023/09/14  Vol: E107-B No:1  Page(s): 214-222

    The application of millimeter wave (mmWave) directional transmission technology in high-speed railway (HSR) scenarios helps to achieve multiple-gigabit data rates with low latency. However, due to the high mobility of trains, the traditional initial access (IA) scheme, which is time-consuming, can hardly guarantee effective beam alignment. In addition, the high path loss at the coverage edge of the millimeter wave remote radio unit (mmW-RRU) also poses great challenges to the stability of IA performance. Fortunately, train trajectories in HSR scenarios are periodic and regular, and the cell-free network helps to improve system coverage. Based on these observations, this paper proposes an efficient IA scheme based on location and history information in cell-free networks, where the train can flexibly select a set of mmW-RRUs according to the received signal quality. We analyze the collaborative IA process based on exhaustive search and on location and history information, derive expressions for the IA success probability and delay, and perform a numerical analysis. The results show that the proposed scheme significantly reduces the IA delay and effectively improves the stability of the IA success probability.
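
    As a toy illustration of the selection step described in this abstract, the sketch below picks candidate mmW-RRUs for the train's current track position from a history table of previously observed signal quality. The table, threshold, and selection rule are our own assumptions, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical history table: average received signal quality (dBm) recorded
# from each mmW-RRU at discretized track positions on earlier runs.
history = rng.uniform(-100, -60, size=(200, 8))   # 200 positions x 8 mmW-RRUs

def candidate_rrus(position_idx, threshold=-75.0, k=3):
    """Return up to k mmW-RRUs whose recorded quality at this position
    exceeds a threshold, strongest first."""
    q = history[position_idx]
    best_first = np.argsort(q)[::-1][:k]
    return [int(r) for r in best_first if q[r] >= threshold]

# Beam training is then restricted to these RRUs instead of an exhaustive search.
print(candidate_rrus(42))
```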

  • MSLT: A Scalable Solution for Blockchain Network Transport Layer Based on Multi-Scale Node Management Open Access

    Longle CHENG  Xiaofeng LI  Haibo TAN  He ZHAO  Bin YU  

     
    PAPER-Network

    Publicized: 2023/09/12  Vol: E107-B No:1  Page(s): 185-196

    Blockchain systems rely on peer-to-peer (P2P) overlay networks to propagate transactions and blocks. The node management of the P2P network affects the overall performance and reliability of the system. The traditional structure is based on random connectivity, which is known to be inefficient. We therefore propose MSLT, a multi-scale blockchain P2P network node management method that improves transaction performance. The network is configured to operate at multiple scales, with blockchain nodes grouped into different ranges at each scale. To minimize redundancy and manage traffic efficiently, neighboring nodes are selected from each range according to a predetermined set of rules. Additionally, a node updating method is implemented to improve the reliability of the network. Compared with existing transmission models, MSLT improves data transmission performance in terms of efficiency, utilization, and maximum transaction throughput.
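
    A minimal sketch of the range-based neighbor selection idea: the node-ID space is cut into progressively finer ranges, and one peer is kept per range at each scale. The ID space, scales, and one-representative-per-range rule are our own simplifications, not MSLT's actual protocol.

```python
import random

ID_SPACE = 1 << 16                      # toy node-ID space

def select_neighbors(my_id, peers, scales=(2, 4, 8)):
    """Pick one representative peer per ID range at each scale."""
    chosen = set()
    rnd = random.Random(my_id)          # deterministic per node
    for n_ranges in scales:             # coarse to fine ranges
        width = ID_SPACE // n_ranges
        buckets = {}
        for p in peers:
            if p != my_id:
                buckets.setdefault(p // width, []).append(p)
        for members in buckets.values():
            chosen.add(rnd.choice(members))   # one peer per range
    return sorted(chosen)

peers = random.Random(1).sample(range(ID_SPACE), 40)
print(select_neighbors(peers[0], peers))      # small, multi-scale neighbor set
```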

  • Resource Allocation for Mobile Edge Computing System Considering User Mobility with Deep Reinforcement Learning

    Kairi TOKUDA  Takehiro SATO  Eiji OKI  

     
    PAPER-Network

    Publicized: 2023/10/06  Vol: E107-B No:1  Page(s): 173-184

    Mobile edge computing (MEC) is a key technology for providing low-latency services by migrating cloud functions to the network edge. The potentially low quality of the wireless channel must be taken into account when mobile users with limited computing resources offload tasks to an MEC server. To improve transmission reliability, resource allocation in the MEC server must account for both the current channel quality and the resource contention. Several works take a deep reinforcement learning (DRL) approach to such resource allocation. However, these approaches consider a fixed number of users offloading their tasks and do not handle situations where the number of users varies due to user mobility. This paper proposes Deep reinforcement learning model for MEC Resource Allocation with Dummy (DMRA-D), an online learning model that addresses resource allocation in an MEC server when the number of users varies. By adopting dummy states/actions, DMRA-D keeps the state/action representation fixed, so it can continue training a single model regardless of variations in the number of users during operation. Numerical results show that DMRA-D improves the task-submission success rate while continuing to learn as the number of users varies.
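
    The dummy state/action trick can be illustrated as fixed-size padding of a variable-length per-user feature list, so that the network input shape never changes. Sizes, features, and the filler value below are our own assumptions.

```python
import numpy as np

MAX_USERS = 8          # fixed size the DRL model is built for (assumed)
FEATS_PER_USER = 3     # e.g., channel quality, queue length, task size (illustrative)
DUMMY = -1.0           # filler marking "no user in this slot"

def build_state(user_feats):
    """Pad a variable number of per-user feature rows to a fixed-size state."""
    state = np.full((MAX_USERS, FEATS_PER_USER), DUMMY)
    n = min(len(user_feats), MAX_USERS)
    state[:n] = np.asarray(user_feats)[:n]
    return state.ravel()   # flat vector with a constant length for the network

# Three active users now; the state keeps the same shape when users join/leave.
print(build_state([[0.9, 2, 1.5], [0.4, 0, 0.7], [0.8, 5, 2.2]]).shape)  # (24,)
```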

  • Introduction to Compressed Sensing with Python Open Access

    Masaaki NAGAHARA  

     
    INVITED PAPER-Fundamental Theories for Communications

    Publicized: 2023/08/15  Vol: E107-B No:1  Page(s): 126-138

    Compressed sensing is a rapidly growing research field in signal and image processing, machine learning, statistics, and systems control. In this survey paper, we provide a review of the theoretical foundations of compressed sensing and present state-of-the-art algorithms for solving the corresponding optimization problems. Additionally, we discuss several practical applications of compressed sensing, such as group testing, sparse system identification, and sparse feedback gain design, and demonstrate their effectiveness through Python programs. This survey paper aims to contribute to the advancement of compressed sensing research and its practical applications in various scientific disciplines.
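
    As a flavor of the algorithms such a survey covers, here is a minimal NumPy sketch of the iterative shrinkage-thresholding algorithm (ISTA) for the LASSO problem min_x 0.5||Ax - y||^2 + lam||x||_1. It is our own illustration, not code from the paper.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth part
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Recover a sparse vector from a few random measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                       # dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true))      # small residual: recovery succeeded
```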

  • Device Type Classification Based on Two-Stage Traffic Behavior Analysis Open Access

    Chikako TAKASAKI  Tomohiro KORIKAWA  Kyota HATTORI  Hidenari OHWADA  

     
    PAPER

    Publicized: 2023/10/17  Vol: E107-B No:1  Page(s): 117-125

    In beyond-5G and 6G networks, the number and types of connected devices will greatly increase, encompassing not only user devices such as smartphones but also the Internet of Things (IoT). Moreover, non-terrestrial networks (NTN) introduce dynamic changes in the types of connected devices, as base stations or access points are moving objects. Continuous network capacity design is therefore required to fulfill the network requirements of each device. However, continuously optimizing the capacity design for each device within a short time span is difficult because of the heavy computational load. We introduce device types, groups of devices with similar traffic characteristics, and optimize network capacity per device type for efficient capacity design. This paper proposes a method that classifies device types by analyzing only encrypted traffic behavior, without using payloads or packets of specific protocols. In the first stage, general device types, such as IoT and non-IoT, are classified by analyzing packet-header statistics using machine learning. In the second stage, devices classified as IoT in the first stage are further classified into IoT device types by analyzing a time series of traffic behavior using deep learning. We demonstrate on traffic datasets that the proposed method outperforms existing IoT-only device classification methods in both the number of types and accuracy, and that it performs comparably to a state-of-the-art traffic classification model, the 1D ResNet model. The proposed method is suitable for grasping device types in terms of traffic characteristics toward efficient network capacity design in networks where massive numbers of devices for various services are connected and the connected devices continuously change.
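
    The two-stage flow can be sketched as a cascade of classifiers: stage 1 decides IoT vs. non-IoT from header statistics, and only the devices it flags as IoT are passed to stage 2 for fine-grained typing. All data here is synthetic, and both stages use a random forest stand-in (the paper's second stage uses deep learning on time series).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins: per-device features and labels (illustrative only).
X_stats = rng.standard_normal((200, 10))   # stage 1: packet-header statistics
y_iot = rng.integers(0, 2, 200)            # 1 = IoT, 0 = non-IoT
X_series = rng.standard_normal((200, 64))  # stage 2: traffic time-series features
y_type = rng.integers(0, 5, 200)           # fine-grained IoT device type

stage1 = RandomForestClassifier(random_state=0).fit(X_stats, y_iot)
iot_mask = stage1.predict(X_stats) == 1    # only predicted-IoT devices go on
stage2 = RandomForestClassifier(random_state=0).fit(X_series[iot_mask],
                                                    y_type[iot_mask])
print(stage2.predict(X_series[iot_mask][:3]))   # predicted IoT device types
```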

  • Hardware-Trojan Detection at Gate-Level Netlists Using a Gradient Boosting Decision Tree Model and Its Extension Using Trojan Probability Propagation

    Ryotaro NEGISHI  Tatsuki KURIHARA  Nozomu TOGAWA  

     
    PAPER

    Publicized: 2023/08/16  Vol: E107-A No:1  Page(s): 63-74

    Technological devices have become deeply embedded in people's lives, and demand for them grows every year. It has been pointed out that outsourcing the design and manufacturing of integrated circuits, which are essential for such devices, may lead to the insertion of malicious circuitry, called hardware Trojans (HTs). This paper proposes an HT detection method at the gate-level netlist based on XGBoost, one of the best gradient boosting decision tree models. We first identify the optimal set of HT features among many netlist-level feature candidates through thorough evaluations, and then construct an XGBoost-based HT detection method with optimized hyperparameters. Evaluation experiments on netlists from the Trust-HUB benchmarks show an average F-measure of 0.842 with the proposed method. We also propose a Trojan probability propagation method that effectively corrects the HT detection results, and apply it to the results obtained by the XGBoost-based detection. Evaluation experiments show that the average F-measure improves to 0.861, which is 0.194 points higher than that of the best existing method.
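
    The core classification step can be sketched as an XGBoost binary classifier over per-net feature vectors, scored with the F-measure as in the abstract. The features and labels below are synthetic stand-ins, not the paper's feature set; scale_pos_weight is shown because Trojan nets are typically a small minority class.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 25))                # stand-in per-net features
y = (X[:, 0] + 0.5 * X[:, 1] > 1.2).astype(int)    # synthetic "Trojan net" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = XGBClassifier(
    n_estimators=200, max_depth=6, learning_rate=0.1,
    # weight the rare positive (Trojan) class against the abundant normal nets
    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1))
clf.fit(X_tr, y_tr)
print("F-measure:", f1_score(y_te, clf.predict(X_te)))
```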

  • High Precision Fingerprint Verification for Small Area Sensor Based on Deep Learning

    Nabilah SHABRINA  Dongju LI  Tsuyoshi ISSHIKI  

     
    PAPER-Biometrics

    Publicized: 2023/06/26  Vol: E107-A No:1  Page(s): 157-168

    Fingerprint verification is widely used in mobile devices because of the fingerprint's distinctive features and ease of capture. Mobile devices typically capture fingerprints with small sensors of limited area, whereas conventional fingerprint feature extraction methods need detailed fingerprint information and are thus unsuitable for such sensors. This paper proposes a novel deep-learning-based fingerprint verification method for small-area sensors. It combines a deep convolutional neural network (DCNN) in a Siamese network for feature extraction with XGBoost for fingerprint similarity training. A padding technique is also introduced to avoid the wraparound error problem. Experimental results show accuracy improvements of 66.6% and 22.6% on the FingerPassDB7 and FVC2006DB1B datasets, respectively, compared with existing methods.

  • A Fast Intra Mode Decision Algorithm in VVC Based on Feature Cross for Screen Content Videos

    Zhi LIU  Siyuan ZHANG  Xiaohan GUAN  Mengmeng ZHANG  

     
    LETTER-Coding Theory

    Publicized: 2023/07/24  Vol: E107-A No:1  Page(s): 178-181

    In previous machine-learning-based fast intra mode decision algorithms for screen content videos, feature design is a key task, and obtaining distinguishable features is always difficult. In this paper, the idea of feature interaction is introduced into fast video coding, and a fast intra mode decision algorithm based on feature cross is proposed for screen content videos. Numeric and categorical features are designed based on the characteristics of screen content videos, and an improved adaptive factorization network (AFN) is adopted to perform feature interaction on the designed features and output distinguishable ones. Experimental results show that, for the All Intra (AI) configuration, the coding time is reduced by 29.64% compared with standard VVC/H.266 while the BD-rate increases by only 1.65%.

  • Recent Progress in Optical Network Design and Control towards Human-Centered Smart Society Open Access

    Takashi MIYAMURA  Akira MISAWA  

     
    INVITED PAPER

    Publicized: 2023/09/19  Vol: E107-B No:1  Page(s): 2-15

    In this paper, we investigate the evolution of an optical network architecture and discuss the future direction of research on optical network design and control. We review existing research on optical network design and control and present some open challenges. One of the important open challenges lies in multilayer resource optimization including IT and optical network resources. We propose an adaptive joint optimization method of IT resources and optical spectrum under time-varying traffic demand in optical networks while avoiding an increase in operation cost. We formulate the problem as mixed integer linear programming and then quantitatively evaluate the trade-off relationship between the optimality of reconfiguration and operation cost. We demonstrate that we can achieve sufficient network performance through the adaptive joint optimization while suppressing an increase in operation cost.

  • Adaptive K-Repetition Transmission with Site Diversity Reception for Energy-Efficient Grant-Free URLLC in 5G NR

    Arif DATAESATU  Kosuke SANADA  Hiroyuki HATANO  Kazuo MORI  Pisit BOONSRIMUANG  

     
    PAPER

    Publicized: 2023/10/11  Vol: E107-B No:1  Page(s): 74-84

    The fifth-generation (5G) new radio (NR) standard employs ultra-reliable low-latency communication (URLLC) to provide real-time wireless interactive capability for Internet of Things (IoT) applications. To satisfy the stringent latency and reliability demands of URLLC services, grant-free (GF) transmission with K-repetition transmission (K-Rep) has been introduced. However, fading fluctuations can degrade signal quality at the base station (BS), increasing the number of repetitions and raising concerns about interference and energy consumption for IoT user equipment (UE). To overcome these challenges, this paper proposes novel adaptive K-Rep control schemes that employ site diversity reception to enhance signal quality and reduce energy consumption. The performance evaluation demonstrates that the proposed schemes significantly improve communication reliability and reduce transmission energy consumption compared with the conventional K-Rep scheme, thereby satisfying the URLLC requirements at lower energy cost.

  • Minimization of Energy Consumption in TDMA-Based Wireless-Powered Multi-Access Edge Computing Networks

    Xi CHEN  Guodong JIANG  Kaikai CHI  Shubin ZHANG  Gang CHEN  Jiang LIU  

     
    PAPER-Communication Theory and Signals

    Publicized: 2023/06/19  Vol: E106-A No:12  Page(s): 1544-1554

    Many nodes in the Internet of Things (IoT) rely on batteries for power, while the demand for executing compute-intensive and latency-sensitive tasks on such nodes keeps increasing. In some practical scenarios, the computation tasks of wireless devices (WDs) are non-separable, so binary offloading strategies must be used. In this paper, we design an efficient binary offloading algorithm that minimizes system energy consumption (EC) for TDMA-based wireless-powered multi-access edge computing networks, where WDs either compute tasks locally or offload them to hybrid access points (H-APs). We formulate the EC minimization problem, which is non-convex, and decompose it into a master problem that optimizes the binary offloading decision and a subproblem that optimizes the wireless power transfer (WPT) duration and the task offloading transmission durations. For the master problem, a DRL-based method is applied to obtain a near-optimal offloading decision. For the subproblem, we first consider the scenario without completion time constraints and obtain the optimal analytical solution; we then consider the scenario with such constraints, where the optimal solution can be obtained by jointly using the golden section method and the bisection method thanks to the convexity of the constraint function. Simulation results show that the proposed DRL-based offloading algorithm achieves near-minimal EC.
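
    The one-dimensional search named in the abstract can be illustrated with a standard golden-section routine; the objective below is a stand-in energy-like trade-off, not the paper's actual EC function.

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Stand-in convex objective over a transmission duration t.
f = lambda t: 1.0 / t + 2.0 * t
print(golden_section(f, 0.05, 5.0))          # analytic minimum at t = 1/sqrt(2)
```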

  • Optimal (r, δ)-Locally Repairable Codes from Reed-Solomon Codes

    Lin-Zhi SHEN  Yu-Jie WANG  

     
    LETTER-Coding Theory

    Publicized: 2023/05/30  Vol: E106-A No:12  Page(s): 1589-1592

    For an [n, k, d] (r, δ)-locally repairable code ((r, δ)-LRC), the minimum distance d satisfies a Singleton-like bound. The construction of optimal (r, δ)-LRCs attaining this bound has been an important research problem in recent years because of their applications in distributed storage systems. In this letter, we use Reed-Solomon codes to construct two classes of optimal (r, δ)-LRCs. The optimal LRCs are given by the evaluations of multiple polynomials of degree at most r - 1 at some points in Fq. The first class gives the [(r + δ - 1)t, rt - s, δ + s] optimal (r, δ)-LRC over Fq provided that r + δ + s - 1 ≤ q, s ≤ δ, s
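
    For reference, the Singleton-like bound mentioned above is, in its standard form for an [n, k, d] (r, δ)-LRC:

```latex
d \le n - k + 1 - \left( \left\lceil \frac{k}{r} \right\rceil - 1 \right) (\delta - 1)
```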

  • Analysis and Identification of Root Cause of 5G Radio Quality Deterioration Using Machine Learning

    Yoshiaki NISHIKAWA  Shohei MARUYAMA  Takeo ONISHI  Eiji TAKAHASHI  

     
    PAPER

    Publicized: 2023/06/02  Vol: E106-B No:12  Page(s): 1286-1292

    It has become increasingly important for industries to promote digital transformation by utilizing 5G and the industrial Internet of Things (IIoT) to improve productivity. To protect IIoT application performance (work speed, productivity, etc.), quality of service (QoS) requirements often have to be satisfied precisely, so there is an increasing need to automatically identify the root causes of radio-quality deterioration and take prompt measures when QoS degrades. In this paper, a machine-learning method for identifying the root cause of 5G radio-quality deterioration is proposed. This Random-Forest-based method detects the root cause, such as distance attenuation, shielding, fading, or their combination, by analyzing the coefficients of a quadratic polynomial approximation in addition to the mean values of time-series radio quality indicators. The detection accuracy was evaluated in a simulation using the MATLAB 5G Toolbox and found to be 98.30% when any single root cause occurs and 83.13% when multiple root causes occur simultaneously. Compared with deep-learning methods that directly analyze the radio-quality time series, including bidirectional long short-term memory (bidirectional LSTM) and a one-dimensional convolutional neural network (1D-CNN), the proposed method was found to be more accurate.
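
    The feature construction described above (mean plus quadratic-fit coefficients of each trace) is easy to sketch; the traces and the two classes below are synthetic stand-ins, not MATLAB 5G Toolbox output.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(series):
    """Mean plus quadratic-fit coefficients of a radio-quality time series."""
    t = np.arange(len(series))
    a, b, c = np.polyfit(t, series, deg=2)   # quadratic approximation
    return [series.mean(), a, b, c]

rng = np.random.default_rng(0)
# Synthetic RSRP-like traces under two root causes (illustrative only).
fading = [rng.standard_normal(100) * 3 - 90 for _ in range(50)]
attenuation = [-90 - 0.1 * np.arange(100) + rng.standard_normal(100)
               for _ in range(50)]
X = np.array([features(s) for s in fading + attenuation])
y = np.array([0] * 50 + [1] * 50)            # 0 = fading, 1 = distance attenuation
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:3]))                    # predicted root causes
```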

  • Deep Neural Networks Based End-to-End DOA Estimation System Open Access

    Daniel Akira ANDO  Yuya KASE  Toshihiko NISHIMURA  Takanori SATO  Takeo OHGANE  Yasutaka OGAWA  Junichiro HAGIWARA  

     
    PAPER

    Publicized: 2023/09/11  Vol: E106-B No:12  Page(s): 1350-1362

    Direction of arrival (DOA) estimation is an antenna-array signal processing technique used in, for instance, radar and sonar systems, source localization, and channel state information retrieval. As new applications and use cases appear with the development of next-generation mobile communications systems, DOA estimation performance must be continually improved to support the ever-growing demand for wireless technologies. In previous works, we verified that a deep neural network (DNN) trained offline is a strong candidate tool with the promise of great on-grid DOA estimation performance, even compared with traditional algorithms. In this paper, we propose new techniques for further accuracy enhancement, incorporating signal-to-noise ratio (SNR) prediction and an end-to-end DOA estimation system that consists of three components: a source number estimator, a DOA angular spectrum grid estimator, and a DOA detector. We expand the performance of the DOA detector and angular spectrum estimator, and present a new DNN-based solution for source number estimation with a very simple design. The proposed DNN system with these enhancements shows great estimation performance in terms of the success-rate metric for two radio-wave sources, although the results for three sources are not yet fully satisfactory.

  • Joint Virtual Network Function Deployment and Scheduling via Heuristics and Deep Reinforcement Learning

    Zixiao ZHANG  Eiji OKI  

     
    PAPER-Network

    Publicized: 2023/08/01  Vol: E106-B No:12  Page(s): 1424-1440

    This paper introduces heuristic approaches and a deep reinforcement learning approach to solve a joint virtual network function deployment and scheduling problem in a dynamic scenario. We formulate the problem as an optimization problem whose objective is to maximize the ratio of delay-satisfied requests while minimizing the average resource cost. Based on the mathematical description, we introduce three heuristic approaches and a deep reinforcement learning approach. The two greedy approaches are finish-time greedy and computational-resource greedy: the former finishes each request as soon as possible regardless of its resource cost, while the latter makes each request occupy as few resources as possible regardless of its finish time. The simulated annealing approach generates feasible solutions randomly and converges to an approximate solution. In the learning-based approach, neural networks are trained to make the decisions. We evaluate the introduced approaches in a simulated environment. Numerical results show that the deep reinforcement learning approach performs best in terms of benefit in our examined cases.
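
    The finish-time-greedy idea can be sketched as always placing the next request on the server where it would finish earliest, ignoring cost. The server model and numbers are ours, purely illustrative of the rule, not the paper's VNF model.

```python
import heapq

def finish_time_greedy(requests, n_servers):
    """requests: processing times; returns (server, finish_time) per request."""
    free_at = [(0.0, s) for s in range(n_servers)]   # (time server frees up, id)
    heapq.heapify(free_at)
    placement = []
    for proc in requests:
        t_free, s = heapq.heappop(free_at)           # earliest-available server
        finish = t_free + proc
        placement.append((s, finish))
        heapq.heappush(free_at, (finish, s))
    return placement

print(finish_time_greedy([3.0, 1.0, 2.5, 0.5], n_servers=2))
```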

  • Improvement of Differential-GNSS Positioning by Estimating Code Double-Difference-Error Using Machine Learning

    Hirotaka KATO  Junichi MEGURO  

     
    PAPER-Pattern Recognition

    Publicized: 2023/09/12  Vol: E106-D No:12  Page(s): 2069-2077

    Recently, global navigation satellite system (GNSS) positioning has been widely used in various applications (e.g., car navigation systems, smartphone map applications, autonomous driving). In GNSS positioning, coordinates are calculated from observed satellite signals; since the observed signals contain various errors, the calculated coordinates are also erroneous. Double-differencing is a widely used technique for reducing the errors in the observed signals, but even after double-differencing some errors remain (e.g., multipath error). In this paper, we define the remaining error as the double-difference error (DDE) and propose a method for estimating the DDE using machine learning; we then attempt to improve DGNSS positioning by feeding back the estimated DDE. Previous research applying machine learning to GNSS has focused on classifying whether a signal is line-of-sight (LOS) or non-line-of-sight (NLOS); to the best of our knowledge, no study has attempted to estimate the amount of error itself. Furthermore, previous studies were limited to datasets recorded at only a few locations in the same city, because they mainly aimed at improving vehicle positioning accuracy and collecting large amounts of data with vehicles is costly. To avoid this problem, we train on a huge amount of openly available stationary-point data. Experiments confirmed that the proposed method reduces the DGNSS positioning error: even though the DDE estimator was trained only on stationary-point data, it improved the DGNSS positioning accuracy not only for stationary points but also for a mobile rover. A comparison with the previous detect-and-remove approach also confirmed the effectiveness of the DDE feedback approach.
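
    The estimate-and-feed-back idea amounts to regressing the residual error and subtracting the prediction from each code double-difference before the position solution. The features, regressor choice, and data below are our own stand-ins, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Stand-in per-double-difference features (e.g., elevation, SNR; illustrative).
X = rng.standard_normal((500, 4))
dde = 0.8 * X[:, 0] ** 2 + 0.1 * rng.standard_normal(500)   # synthetic DDE [m]

est = GradientBoostingRegressor(random_state=0).fit(X, dde)
# Feedback step: subtract the predicted DDE from each code double-difference
# before solving the DGNSS position (the "estimate and correct" idea).
corrected = dde - est.predict(X)
print(np.std(dde), np.std(corrected))        # residual spread shrinks
```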

  • Shift Quality Classifier Using Deep Neural Networks on Small Data with Dropout and Semi-Supervised Learning

    Takefumi KAWAKAMI  Takanori IDE  Kunihito HOKI  Masakazu MURAMATSU  

     
    PAPER-Pattern Recognition

    Publicized: 2023/09/05  Vol: E106-D No:12  Page(s): 2078-2084

    In this paper, we apply two machine learning techniques, dropout and semi-supervised learning, to a recently proposed method called CSQ-SDL, which uses deep neural networks to evaluate shift quality from time-series measurement data. When developing a new automatic transmission (AT), calibration takes place in which many parameters of the AT are adjusted to realize a pleasant driving experience in all situations occurring on all roads around the world. Calibration requires an expert to visually assess the shift quality from the time-series measurement data each time the parameters are changed, which is iterative and time-consuming. CSQ-SDL was developed to shorten the time consumed by this visual assessment, but its effectiveness depends on acquiring a sufficient number of data points, and in practice data are often insufficient. The methods proposed here handle such cases: where only a small number of labeled data points is available, we propose a method that uses dropout; where the number of labeled data points is small but unlabeled data are plentiful, we propose a method that uses semi-supervised learning. Experiments show that the former gives a moderate improvement, whereas the latter offers a significant performance improvement.

  • User Verification Using Evoked EEG by Invisible Visual Stimulation

    Atikur RAHMAN  Nozomu KINJO  Isao NAKANISHI  

     
    PAPER-Biometrics

    Publicized: 2023/06/19  Vol: E106-A No:12  Page(s): 1569-1576

    Person authentication using biometric information has recently attracted much research attention, since user management based on biometrics is more reliable than conventional methods. Securing private information requires continuous-authentication-based user management systems, and brain waves are a biometric modality well suited for continuous authentication. This study addresses biometric authentication using brain waves evoked by invisible visual stimuli. Invisible stimulation is preferred over visible stimulation because it does not disturb the user while using a system. Invisibility is achieved by reducing the intensity of the stimulus image and presenting it at high speed; stimuli of different intensities were tested, and an intensity of 5% was confirmed to be invisible. To improve the verification performance, a continuous wavelet transform was introduced instead of the Fourier transform, since it extracts both time and frequency information from the brain wave. The scalogram obtained by the wavelet transform was used as an individual feature and for synchronizing the template and test data. Furthermore, to improve the synchronization performance, the waveband was split based on the power distribution of the scalogram. A performance evaluation with 20 subjects showed an equal error rate of 3.8%.
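
    A minimal sketch of computing a scalogram and its per-band power distribution with PyWavelets; the synthetic "EEG" trace, sampling rate, scales, and wavelet are our assumptions, not the paper's setup.

```python
import numpy as np
import pywt

fs = 256                                   # assumed EEG sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)
# Synthetic stand-in for an evoked EEG trace: 10 Hz component plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + \
      0.5 * np.random.default_rng(0).standard_normal(t.size)

scales = np.arange(2, 64)
coef, freqs = pywt.cwt(eeg, scales, 'morl', sampling_period=1 / fs)
scalogram = np.abs(coef) ** 2              # time-frequency feature for matching
# Power distribution over bands (the basis for splitting the waveband).
band_power = scalogram.sum(axis=1)
print(scalogram.shape, freqs[band_power.argmax()])
```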

  • Deep Unrolling of Non-Linear Diffusion with Extended Morphological Laplacian

    Gouki OKADA  Makoto NAKASHIZUKA  

     
    PAPER-Image

    Publicized: 2023/07/21  Vol: E106-A No:11  Page(s): 1395-1405

    This paper presents a deep network obtained by unrolling the diffusion process with a morphological Laplacian. The diffusion process is an iterative algorithm that solves the diffusion equation, whose time evolution is expressed with the Laplacian; it is used for image smoothing and has been extended with non-linear operators for various image processing tasks. In this study, we introduce the morphological Laplacian into the basic diffusion process and unroll it into a deep network. Morphological filters are non-linear operators whose parameters are referred to as structuring elements, and the discrete Laplacian can be approximated with morphological filters without multiplications. Owing to the non-linearity of the morphological filter with trainable structuring elements, training uses error back-propagation, and the network can be adapted to specific image processing applications. We introduce two extensions of the morphological Laplacian for deep networks. Since the morphological filters are realized with additions, max, and min, errors caused by limited bit length are not amplified; consequently, the morphological parts of the network are implemented in unsigned 8-bit integers with single-instruction-multiple-data (SIMD) instructions to achieve fast computation on small devices. We applied the proposed network to image completion and Gaussian denoising, and compared the results and computation time with other denoising algorithms and deep networks.
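
    The building block can be sketched in a few lines: the morphological Laplacian is (dilation - f) - (f - erosion), built from max/min filters and additions, and one explicit diffusion step adds a fraction of it back to the image. This uses a fixed flat structuring element, whereas the paper trains the structuring elements; it is a sketch of the operator, not the paper's network.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morph_laplacian(img, size=3):
    """Morphological Laplacian: (dilation - f) - (f - erosion) = dil + ero - 2f."""
    d = grey_dilation(img, size=(size, size))
    e = grey_erosion(img, size=(size, size))
    # widen the type so the signed result does not overflow uint8
    return d.astype(np.int16) + e.astype(np.int16) - 2 * img.astype(np.int16)

def diffusion_step(img, step=0.25):
    """One explicit diffusion step f <- f + step * Laplacian(f); unrolling this
    update (with trainable structuring elements) yields the paper-style network."""
    return np.clip(img.astype(np.float32) + step * morph_laplacian(img), 0, 255)

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
out = img
for _ in range(5):                 # a few unrolled diffusion iterations
    out = diffusion_step(out).astype(np.uint8)
```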

  • Decomposition of P6-Free Chordal Bipartite Graphs

    Asahi TAKAOKA  

     
    LETTER-Graphs and Networks

    Publicized: 2023/05/17  Vol: E106-A No:11  Page(s): 1436-1439

    Canonical decomposition for bipartite graphs, which was introduced by Fouquet, Giakoumakis, and Vanherpe (1999), is a decomposition scheme for bipartite graphs associated with modular decomposition. Weak-bisplit graphs are bipartite graphs totally decomposable (i.e., reducible to single vertices) by canonical decomposition. Canonical decomposition comprises series, parallel, and K+S decomposition. This paper studies a decomposition scheme comprising only parallel and K+S decomposition. We show that bipartite graphs totally decomposable by this decomposition are precisely P6-free chordal bipartite graphs. This characterization indicates that P6-free chordal bipartite graphs can be recognized in linear time using the recognition algorithm for weak-bisplit graphs presented by Giakoumakis and Vanherpe (2003).
