
Keyword Search Result

[Keyword] RIN (2923 hits)

141-160 hits (2923 hits)

  • Anomaly Prediction for Wind Turbines Using an Autoencoder with Vibration Data Supported by Power-Curve Filtering

    Masaki TAKANASHI  Shu-ichi SATO  Kentaro INDO  Nozomu NISHIHARA  Hiroki HAYASHI  Toru SUZUKI  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2021/12/07
    Vol: E105-D No:3
    Page(s): 732-735

    The prediction of the malfunction timing of wind turbines is essential for maintaining the high profitability of the wind power generation industry. Studies have been conducted on machine learning methods that use condition monitoring system data, such as vibration data, and supervisory control and data acquisition (SCADA) data to detect and predict anomalies in wind turbines automatically. Autoencoder-based techniques that use unsupervised learning, where the anomaly pattern is unknown, have attracted significant interest in the area of anomaly detection and prediction. In particular, vibration data are considered useful because they capture the changes that occur in the early stages of a malfunction. However, when autoencoder-based techniques are applied for prediction purposes, it is difficult during training to distinguish between operating and non-operating condition data, which degrades the prediction performance. In this letter, we propose a method that utilizes both vibration data and SCADA data to improve the prediction performance, namely, filtering based on the power curve composed of active power and wind speed. We evaluated the method's performance using vibration and SCADA data obtained from an actual wind farm.
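    As a rough illustration of the idea, the sketch below keeps only training samples that lie near a nominal power curve, so that only operating-condition vibration data would feed the autoencoder. The cubic curve model, cut-in speed, and tolerance are illustrative assumptions, not the paper's filter.

    ```python
    import numpy as np

    def operating_mask(wind_speed, active_power, cut_in=3.0, tol=0.2):
        """Keep samples whose (wind speed, active power) pair lies near a nominal
        power curve, i.e. the turbine is actually operating.  The cubic curve
        model, cut-in speed, and tolerance are illustrative assumptions."""
        rated = active_power.max()
        shape = np.clip(wind_speed - cut_in, 0.0, None) ** 3     # crude cubic rise of the curve
        expected = rated * shape / max(shape.max(), 1e-9)
        near_curve = np.abs(active_power - expected) < tol * rated
        return near_curve & (active_power > 0.05 * rated)

    # Vibration samples with operating_mask(...) == True would form the autoencoder's
    # training set; the remaining samples are treated as non-operating data.
    ```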

  • Efficient Zero-Knowledge Proofs of Graph Signature for Connectivity and Isolation Using Bilinear-Map Accumulator

    Toru NAKANISHI  Hiromi YOSHINO  Tomoki MURAKAMI  Guru-Vamsi POLICHARLA  

     
    PAPER-Cryptography and Information Security

    Publicized: 2021/09/08
    Vol: E105-A No:3
    Page(s): 389-403

    To prove graph relations such as connectivity and isolation for a certified graph, a system of graph signatures and proofs has been proposed. In this system, an issuer generates a signature certifying the topology of an undirected graph and issues the signature to a prover. The prover can prove knowledge of the signature and the graph in zero-knowledge, i.e., the signature and the signed graph are hidden. In addition, the prover can prove relations on the certified graph, such as the connectivity and isolation between two vertices. The previous system proves the graph relations using integer commitments over an RSA modulus. However, the RSA modulus requires a long representation for each element. Furthermore, the proof size and verification cost depend on the total numbers of vertices and edges. In this paper, we propose a graph signature and proof system that is computed over bilinear groups without the RSA modulus. Moreover, using a bilinear-map accumulator, the prover can prove the connectivity and isolation on a graph with a proof size and verification cost that are independent of the total numbers of vertices and edges.
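    For context, a bilinear-map accumulator of the kind such constructions build on (the generic Nguyen-style form, shown as background rather than the paper's exact scheme) accumulates a set X ⊂ Z_p as follows, with a constant-size membership witness:

    ```latex
    \mathrm{acc}_X = g^{\prod_{x \in X}(x+s)}, \qquad
    W_x = g^{\prod_{y \in X \setminus \{x\}}(y+s)}, \qquad
    e\bigl(W_x,\; g^{x} g^{s}\bigr) = e\bigl(\mathrm{acc}_X,\; g\bigr)
    ```

    Here g generates a bilinear group of prime order p, e is the pairing, and s is a trapdoor used only at setup. Because the witness W_x is a single group element, verification does not grow with the size of the accumulated set, which is the property that lets proof size and verification cost become independent of the numbers of vertices and edges.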

  • Weakly Byzantine Gathering with a Strong Team

    Jion HIROSE  Junya NAKAMURA  Fukuhito OOSHITA  Michiko INOUE  

     
    PAPER

    Publicized: 2021/10/11
    Vol: E105-D No:3
    Page(s): 541-555

    We study the gathering problem, which requires a team of mobile agents to gather at a single node in arbitrary networks. The team consists of k agents with unique identifiers (IDs), and f of them are weakly Byzantine agents, which behave arbitrarily except for falsifying their identifiers. The agents move in synchronous rounds and cannot leave any information on nodes. If the number of nodes n is given to the agents, the existing fastest algorithm tolerates any number of weakly Byzantine agents and achieves gathering with simultaneous termination in O(n⁴·|Λgood|·X(n)) rounds, where |Λgood| is the length of the maximum ID of the non-Byzantine agents and X(n) is the number of rounds required to explore any network composed of n nodes. In this paper, we ask whether the time complexity can be reduced if we have a strong team, i.e., a team with few Byzantine agents, because in practice not so many agents are subject to faults. We give a positive answer to this question by proposing two algorithms for the case where at least 4f²+9f+4 agents exist. Both algorithms assume that an upper bound N of n is given to the agents. The first algorithm achieves gathering with non-simultaneous termination in O((f+|Λgood|)·X(N)) rounds. The second algorithm achieves gathering with simultaneous termination in O((f+|Λall|)·X(N)) rounds, where |Λall| is the length of the maximum ID of all agents. The second algorithm significantly reduces the time complexity compared to the existing one if n is given to the agents and |Λall|=O(|Λgood|) holds.

  • An Equivalent Expression for the Wyner-Ziv Source Coding Problem Open Access

    Tetsunao MATSUTA  Tomohiko UYEMATSU  

     
    PAPER-Information Theory

    Publicized: 2021/09/09
    Vol: E105-A No:3
    Page(s): 353-362

    We consider the coding problem for lossy source coding with side information at the decoder, which is known as the Wyner-Ziv source coding problem. The goal of the coding problem is to find the minimum rate such that the probability of exceeding a given distortion threshold is less than the desired level. We give an equivalent expression of the minimum rate by using the chromatic number and notions of covering of a set. This allows us to analyze the coding problem in terms of graph coloring and covering.
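    For background, the classical (asymptotic) Wyner-Ziv rate-distortion function with side information Y at the decoder is recalled below; the paper instead studies an excess-distortion formulation, so this is context rather than the paper's expression:

    ```latex
    R_{\mathrm{WZ}}(D)
      = \min_{\substack{P_{U\mid X}:\; U - X - Y\\ \exists f:\; \mathbb{E}[d(X, f(U,Y))] \le D}}
        \bigl( I(X;U) - I(U;Y) \bigr)
      = \min I(X;U \mid Y)
    ```

    where the minimization ranges over auxiliary variables U forming the Markov chain U−X−Y and reconstruction functions f that meet the distortion constraint D.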

  • GPGPU Implementation of Variational Bayesian Gaussian Mixture Models

    Hiroki NISHIMOTO  Renyuan ZHANG  Yasuhiko NAKASHIMA  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2021/11/24
    Vol: E105-D No:3
    Page(s): 611-622

    In this work, an efficient implementation strategy for speeding up high-quality clustering algorithms is developed on the basis of general-purpose graphics processing units (GPGPUs). Among various clustering algorithms, a sophisticated Gaussian mixture model (GMM), whose parameters are estimated through a variational Bayesian (VB) mechanism, is adopted because of its superior performance. Since the VB-GMM methodology is computation-hungry, the GPGPU is employed to carry out the massive matrix computations. To efficiently migrate the conventional CPU-oriented schemes of VB-GMM onto GPGPU platforms, an entire migration flow with thirteen stages is presented in detail. A CPU-GPGPU cooperation scheme, execution reordering, and memory-access optimization are proposed to optimize GPGPU utilization and maximize the clustering speed. Five types of real-world applications along with relevant data sets are introduced for cross-validation. The experimental results verify the feasibility and practical benefits of implementing the VB-GMM algorithm on a GPGPU. The proposed GPGPU migration achieves a speedup of up to 192x. Furthermore, it succeeded in identifying the proper number of clusters, which the EM algorithm can hardly achieve.
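    The heavy lifting in VB-GMM is responsibility computation over all samples and components, which maps naturally onto GPU matrix kernels. The sketch below shows a vectorized (diagonal-covariance) E-step in NumPy; swapping the import for CuPy would run the same array expressions on a GPGPU. This is a generic illustration under those assumptions, not the authors' thirteen-stage migration flow.

    ```python
    import numpy as np   # `import cupy as np` would run the same code on a GPGPU (assumes CuPy is installed)

    def responsibilities(X, weights, means, variances):
        """Vectorized E-step for a diagonal-covariance Gaussian mixture.
        X: (N, D) samples; weights: (K,); means, variances: (K, D)."""
        diff = X[:, None, :] - means[None, :, :]                          # (N, K, D)
        log_pdf = -0.5 * ((diff ** 2 / variances).sum(axis=2)
                          + np.log(2.0 * np.pi * variances).sum(axis=1))  # (N, K)
        log_r = np.log(weights) + log_pdf
        log_r -= log_r.max(axis=1, keepdims=True)                         # numerical stability
        r = np.exp(log_r)
        return r / r.sum(axis=1, keepdims=True)                           # (N, K) responsibilities
    ```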

  • Adaptive Binarization for Vehicle State Images Based on Contrast Preserving Decolorization and Major Cluster Estimation

    Ye TIAN  Mei HAN  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2021/12/07
    Vol: E105-D No:3
    Page(s): 679-688

    A new adaptive binarization method is proposed for the vehicle state images obtained from the intelligent operation and maintenance system of rail transit. The method makes it possible to check the corresponding vehicle status information in the intelligent operation and maintenance system more quickly and effectively, to track and monitor the vehicle operation status in real time, and to improve the emergency response capability of the system. The proposed method has two main advantages. For decolorization, we use contrast-preserving decolorization [1] to obtain an appropriate ratio of R, G, and B for converting the RGB image to grayscale, which retains the color information of the vehicle state image background as much as possible and maintains the contrast between the foreground and the background. For threshold selection, the mean and standard deviation of the gray values corresponding to the multi-color background of the vehicle state images are obtained using major cluster estimation [2], and an adaptive threshold is determined by the 2-sigma principle for binarization, which extracts text, identifiers, and other target information effectively. The experimental results show that, for vehicle state images with rich background color information, this method outperforms traditional binarization methods, namely the threshold-based global Otsu algorithm [3] and local Sauvola algorithm [4],[5], and the statistical-learning-based Mean-Shift [6], K-Means [7], and Fuzzy C-Means [8] algorithms. As an image preprocessing scheme for intelligent rail transit data verification, the method effectively improves the accuracy of text and identifier recognition, as verified by running optical character recognition on a data set containing images of different vehicle statuses.
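    A minimal sketch of the two steps follows, under simplifying assumptions: fixed decolorization weights stand in for the contrast-preserving ratio of [1], and the dominant histogram peak stands in for major cluster estimation of the background [2].

    ```python
    import numpy as np

    def binarize(rgb, w=(0.5, 0.3, 0.2), band=8.0):
        """Toy version of the pipeline: weighted decolorization, background
        statistics from the dominant gray-level cluster, 2-sigma thresholding."""
        gray = rgb[..., 0] * w[0] + rgb[..., 1] * w[1] + rgb[..., 2] * w[2]
        # Crude "major cluster" estimate: pixels near the dominant histogram
        # peak are treated as background.
        hist, edges = np.histogram(gray, bins=64)
        peak = hist.argmax()
        centre = 0.5 * (edges[peak] + edges[peak + 1])
        bg = gray[np.abs(gray - centre) < band]
        mu, sigma = bg.mean(), bg.std()
        # 2-sigma rule: pixels far from the background statistics are foreground.
        return (np.abs(gray - mu) > 2 * sigma).astype(np.uint8) * 255
    ```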

  • Efficiency and Accuracy Improvements of Secure Floating-Point Addition over Secret Sharing Open Access

    Kota SASAKI  Koji NUIDA  

     
    PAPER

    Publicized: 2021/09/09
    Vol: E105-A No:3
    Page(s): 231-241

    In secure multiparty computation (MPC), floating-point numbers need to be handled in many potential applications, but doing so is basically expensive. In particular, for MPC based on secret sharing (SS), floating-point addition takes many communication rounds even though addition is the most fundamental operation. In this paper, we propose an SS-based two-party protocol for floating-point addition with 13 rounds (for single/double-precision numbers), which is much fewer than the milestone work of Aliasgari et al. in NDSS 2013 (34 and 36 rounds, respectively) and also fewer than the state of the art in the literature. Moreover, in contrast to the existing SS-based protocols, which are all based on the “roundTowardZero” rounding mode of the IEEE 754 standard, we propose another protocol with 15 rounds, which is the first result realizing the more accurate “roundTiesToEven” rounding mode. We also discuss possible applications of the latter protocol to secure Validated Numerics (a.k.a. Rigorous Computation) by implementing a simple example.
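    For readers new to the setting, a toy sketch of additive secret sharing over a prime field is shown below: addition of shared integers is purely local, which is exactly why the alignment, normalization, and rounding steps of floating-point addition are what cost the communication rounds the paper reduces. This is generic background, not the paper's protocol.

    ```python
    import secrets

    P = 2**61 - 1        # a Mersenne prime modulus, chosen only for illustration

    def share(x):
        """Split x into two additive shares modulo P (two-party setting)."""
        r = secrets.randbelow(P)
        return r, (x - r) % P

    def add_local(share_a, share_b):
        """Each party adds its own shares; no interaction is needed."""
        return tuple((a + b) % P for a, b in zip(share_a, share_b))

    def reveal(shares):
        return sum(shares) % P

    assert reveal(add_local(share(12345), share(67890))) == 12345 + 67890
    ```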

  • Experimental Study of Fault Injection Attack on Image Sensor Interface for Triggering Backdoored DNN Models Open Access

    Tatsuya OYAMA  Shunsuke OKURA  Kota YOSHIDA  Takeshi FUJINO  

     
    PAPER

    Publicized: 2021/10/26
    Vol: E105-A No:3
    Page(s): 336-343

    A backdoor attack is a type of attack that induces misclassification in a deep neural network (DNN). An adversary mixes poison data, consisting of images tampered with adversarial marks at specific locations and labeled with adversarial target classes, into a training dataset. The backdoored model classifies only images with adversarial marks into the adversarial target class and classifies other images into the correct classes. However, the attack performance degrades sharply when the location of the adversarial marks is slightly shifted. Because an adversarial mark that induces DNN misclassification is usually applied when a picture is taken, the backdoor attack is difficult to carry out in the physical world, where the adversarial mark position fluctuates. This paper proposes a new approach in which an adversarial mark is applied using fault injection on the mobile industry processor interface (MIPI) between an image sensor and the image recognition processor. Two independent attack drivers are electrically connected to the MIPI data lane in our attack system. While almost all image signals are transferred from the sensor to the processor without tampering (by canceling the attack signal between the two drivers), the adversarial mark is injected into a given location of the image signal by activating the attack signal generated by the two attack drivers. In an experiment, the DNN was implemented on a Raspberry Pi 4 to classify MNIST handwritten images transferred from the image sensor over the MIPI. The adversarial mark successfully appeared in a specific small part of the MNIST images using our attack system. The success rate of the backdoor attack using this adversarial mark was 91%, which is much higher than the 18% rate achieved using conventional input image tampering.
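    The training-time half of such an attack (poison-data generation) is simple to picture; the sketch below stamps a mark at a fixed pixel location and relabels the images with the adversarial target class. The mark size, position, and target class are arbitrary choices here, and this does not model the MIPI fault-injection mechanism the paper uses at inference time.

    ```python
    import numpy as np

    def make_poison(images, target_class=7, pos=(24, 24), size=2, mark_value=255):
        """Stamp a small adversarial mark at a fixed location of (N, 28, 28) images
        and relabel them all with the adversarial target class."""
        poisoned = images.copy()
        r, c = pos
        poisoned[:, r:r + size, c:c + size] = mark_value
        labels = np.full(len(images), target_class)
        return poisoned, labels
    ```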

  • A Novel Method for Adaptive Beamforming under the Strong Interference Condition

    Zongli RUAN  Hongshu LIAO  Guobing QIAN  

     
    LETTER-Digital Signal Processing

    Publicized: 2021/08/02
    Vol: E105-A No:2
    Page(s): 109-113

    In this letter, we first propose a novel adaptive beamformer based on an independent component analysis (ICA) algorithm. In this algorithm, the amplitude and phase ambiguity resulting from blind source separation is removed by utilizing the special structure of the array manifold matrix. However, a large calibration error may arise when the interference powers are far larger than that of the desired signal, as in many applications such as sonar, radio astronomy, biomedical engineering, and earthquake detection. As a result, the separation performance is significantly reduced. We therefore propose a second method, based on the combination of ICA and principal component analysis (PCA), to recover the desired signal's amplitude under strong interference. Finally, computer simulations are carried out to demonstrate the effectiveness of our methods. The simulation results show that the proposed methods obtain a higher SNR and more accurate power estimation of the desired signal than the diagonal-loading sample matrix inversion (LSMI) and worst-case performance optimization (WCPO) methods.

  • Efficient Task Allocation Protocol for a Hybrid-Hierarchical Spatial-Aerial-Terrestrial Edge-Centric IoT Architecture Open Access

    Abbas JAMALIPOUR  Forough SHIRIN ABKENAR  

     
    INVITED PAPER

    Publicized: 2021/08/17
    Vol: E105-B No:2
    Page(s): 116-130

    In this paper, we propose a novel Hybrid-Hierarchical spatial-aerial-Terrestrial Edge-Centric (H2TEC) architecture for space-air integrated Internet of Things (IoT) networks. H2TEC comprises unmanned aerial vehicles (UAVs) that act as mobile fog nodes to provide the required services for terminal nodes (TNs) in cooperation with the satellites. TNs in H2TEC offload their generated tasks to the UAVs for further processing. Due to the limited energy budget of TNs, a novel task allocation protocol, named TOP, is proposed to minimize the energy consumption of TNs while guaranteeing the outage probability and network reliability, for which the transmission rate of the TNs is optimized. TOP also takes advantage of energy harvesting, whereby the low earth orbit satellites transfer energy to the UAVs when the remaining energy of the UAVs falls below a predefined threshold. To this end, the harvested power of the UAVs is optimized alongside the corresponding harvesting time so that the UAVs can improve the network throughput by processing more bits. Numerical results reveal that TOP outperforms the baseline method in critical situations where more power is required to process the tasks. It is also found that even in such situations, the energy harvesting mechanism provided in TOP yields higher network throughput.

  • Simulation-Based Understanding of “Charge-Sharing Phenomenon” Induced by Heavy-Ion Incident on a 65nm Bulk CMOS Memory Circuit

    Akifumi MARU  Akifumi MATSUDA  Satoshi KUBOYAMA  Mamoru YOSHIMOTO  

     
    BRIEF PAPER-Electronic Circuits

    Publicized: 2021/08/05
    Vol: E105-C No:1
    Page(s): 47-50

    In order to predict single-event occurrences in highly integrated CMOS memory circuits, a quantitative evaluation of charge sharing between memory cells is needed. In this study, the charge-sharing area induced by heavy-ion incidence is quantitatively calculated using a device-simulation-based method. The validity of this method is experimentally confirmed using a heavy-ion accelerator.

  • Balanced, Unbalanced, and One-Sided Distributed Teams - An Empirical View on Global Software Engineering Education

    Daniel Moritz MARUTSCHKE  Victor V. KRYSSANOV  Patricia BROCKMANN  

     
    PAPER

    Publicized: 2021/09/30
    Vol: E105-D No:1
    Page(s): 2-10

    Global software engineering education faces unique challenges in reflecting real-world distributed team development, in its various forms, as closely as possible. The complex nature of planning, collaborating, and upholding partnerships presents administrative difficulties on top of budgetary constraints. These lead to limited opportunities for students to gain international experience and for researchers to propagate educational and practical insights. This paper presents an empirical view on three different course structures conducted by the same research and educational team over a four-year time span. The courses were managed in Japan and Germany, facing cultural challenges, time-zone differences, language barriers, and heterogeneous and homogeneous team structures, among others. Three semesters were carried out before and one during the Covid-19 pandemic. Implications for the recent focus on online delivery of software engineering education and future directions are discussed. As administrative and institutional differences typically do not guarantee the same number of students on all sides, distributed teams can be 1. balanced, where the number of students on one side is less than double the other, 2. unbalanced, where the number of students on one side is significantly more than double the other, or 3. one-sided, where one side lacks students altogether. An approach for each of these three course structures is presented and discussed. Empirical analyses and recurring patterns in global software engineering education are reported. In the most recent three global software engineering classes, students were surveyed at the beginning and the end of the semester. The questionnaires asked students to rate how impactful they perceived factors related to global software development, such as cultural aspects, team structure, language, and interaction. Results of the shift in mean perception are compared and discussed for each of the three team structures.
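    Read literally, the three categories reduce to a simple rule on the two class sizes; the tiny helper below encodes it, treating "significantly more than double" simply as "at least double" (an assumed threshold, not the paper's).

    ```python
    def team_structure(students_side_a: int, students_side_b: int) -> str:
        """Classify a two-site course setup as balanced, unbalanced, or one-sided."""
        if students_side_a == 0 or students_side_b == 0:
            return "one-sided"
        smaller, larger = sorted((students_side_a, students_side_b))
        return "balanced" if larger < 2 * smaller else "unbalanced"

    print(team_structure(12, 20))   # balanced
    print(team_structure(5, 18))    # unbalanced
    print(team_structure(0, 25))    # one-sided
    ```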

  • Design of the Circularly Polarized Ring Microstrip Antenna with Shorting Pins

    Jun GOTO  Akimichi HIROTA  Kyosuke MOCHIZUKI  Satoshi YAMAGUCHI  Kazunari KIHIRA  Toru TAKAHASHI  Hideo SUMIYOSHI  Masataka OTSUKA  Naofumi YONEDA  Jiro HIROKAWA  

     
    PAPER-Antennas and Propagation

    Publicized: 2021/08/05
    Vol: E105-B No:1
    Page(s): 34-43

    We present a novel circularly polarized ring microstrip antenna and its design. Shorting pins discretely disposed along the inner edge of the ring microstrip antenna are introduced as a new degree of freedom for improving resonance frequency control. The number and diameter of the shorting pins control the resonance frequency; the resonance frequency can be kept almost constant with respect to the inner/outer diameter ratio, which expands the usability of the ring microstrip antenna. A dual-band antenna, in which the proposed antenna incorporates another ring microstrip antenna, is designed and measured, and the simulated results agree well with the measured ones.

  • Monitoring Trails Computation within Allowable Expected Period Specified for Transport Networks

    Nagao OGINO  Takeshi KITAHARA  

     
    PAPER-Network Management/Operation

    Publicized: 2021/07/09
    Vol: E105-B No:1
    Page(s): 21-33

    Active network monitoring based on Boolean network tomography is a promising technique for localizing link failures instantly in transport networks. However, the required set of monitoring trails must be recomputed after each link failure to handle succeeding link failures. Existing heuristic methods cannot compute the required monitoring trails in a sufficiently short time when multiple-link failures must be localized across the whole of a large-scale managed network. This paper proposes an approach for computing the required monitoring trails within an allowable expected period specified beforehand. A random-walk-based analysis estimates the number of monitoring trails to be computed in the proposed approach. The estimated number of monitoring trails is then computed by a lightweight method that only guarantees partial localization within restricted areas. The lightweight method is executed repeatedly until a set of monitoring trails achieving unambiguous localization in the entire managed network is obtained. This paper demonstrates that the proposed approach can compute a small number of monitoring trails for localizing all independent dual-link failures in managed networks made up of thousands of links within a given short expected period.

  • Study in CSI Correction Localization Algorithm with DenseNet Open Access

    Junna SHANG  Ziyang YAO  

     
    PAPER-Navigation, Guidance and Control Systems

    Publicized: 2021/06/23
    Vol: E105-B No:1
    Page(s): 76-84

    With the arrival of 5G and the popularity of smart devices, the technical feasibility of indoor localization has been verified, and its market demand is huge. The channel state information (CSI) extracted from Wi-Fi is physical-layer information that is more fine-grained than the received signal strength indication (RSSI). This paper proposes a CSI correction localization algorithm using DenseNet, termed CorFi. The method first uses an isolation forest to eliminate abnormal CSI and then constructs a CSI amplitude fingerprint containing time, frequency, and antenna-pair information. In the offline stage, densely connected convolutional networks (DenseNet) are trained to establish the correspondence between CSI and spatial position, and generalized extended interpolation is applied to construct the interpolated fingerprint database. In the online stage, DenseNet is used for position estimation, and the interpolated fingerprint database and K-nearest neighbor (KNN) are combined to correct the predicted positions whose maximum probability is low. In an indoor corridor environment, the average localization error is 0.536 m.
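    The outlier-removal step is easy to prototype; a hedged sketch using scikit-learn's IsolationForest is shown below (the contamination rate and the flattened amplitude layout are assumptions, not values from the paper).

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def clean_csi(amplitudes, contamination=0.05):
        """Drop abnormal CSI amplitude samples before building the fingerprint.
        `amplitudes` is (n_samples, n_subcarriers * n_antenna_pairs), flattened."""
        detector = IsolationForest(contamination=contamination, random_state=0)
        keep = detector.fit_predict(amplitudes) == 1     # +1 = inlier, -1 = outlier
        return amplitudes[keep]
    ```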

  • Tighter Reduction for Lattice-Based Multisignature Open Access

    Masayuki FUKUMITSU  Shingo HASEGAWA  

     
    PAPER-Cryptography and Information Security

    Publicized: 2021/05/25
    Vol: E104-A No:12
    Page(s): 1685-1697

    Multisignatures enable multiple users to sign a message interactively. Many instantiations of multisignatures have been proposed; however, most of them are not quantum-secure because they are based on the integer factoring assumption or the discrete logarithm assumption. Although some constructions based on lattice problems, which are believed to be quantum-secure, do exist, their security reductions are loose. In this paper, we aim to improve the tightness of the security reduction of lattice-based multisignature schemes. Our basic strategy is to combine the multisignature scheme proposed by El Bansarkhani and Sturm with the lattice-based signature scheme by Abdalla, Fouque, Lyubashevsky, and Tibouchi, which has a tight security reduction from the Ring-LWE (Ring Learning with Errors) assumption. Our result shows that proof techniques for standard signature schemes can be applied to multisignature schemes, allowing us to improve the polynomial loss factor with respect to the Ring-LWE assumption. Our second result addresses the problem with the security proofs of existing lattice-based multisignature schemes pointed out by Damgård, Orlandi, Takahashi, and Tibouchi. We employ a new cryptographic assumption called the Rejected-Ring-LWE assumption to complete the security proof.

  • Proposal and Evaluation of IO Concentration-Aware Mechanisms to Improve Efficiency of Hybrid Storage Systems

    Kazuichi OE  Takeshi NANRI  

     
    PAPER

    Publicized: 2021/07/30
    Vol: E104-D No:12
    Page(s): 2109-2120

    Hybrid storage techniques are useful for improving the cost performance of input-output (IO) intensive workloads. These techniques identify areas of concentrated IO accesses and migrate them to an upper tier to extract as much performance as possible through greater use of the upper-tier areas. Automated tiered storage with fast memory and slow flash storage (ATSMF) is a hybrid storage system situated between non-volatile memories (NVMs) and solid-state drives (SSDs). ATSMF aims to reduce the average response time for IO accesses by migrating areas of concentrated IO access from an SSD to an NVM. When a concentrated IO access finishes, the system migrates these areas from the NVM back to the SSD. Unfortunately, the published ATSMF implementation temporarily consumes much NVM capacity when migrating concentrated IO access areas to the NVM, because its algorithm executes NVM migration with high priority. As a result, it often delays evicting areas whose IO concentration has ended back to the SSD. Therefore, to reduce NVM consumption while maintaining the average response time, we developed new techniques to make ATSMF more practical. The first is a queue-handling technique based on the number of IO accesses for NVM migration and eviction. The second is an eviction method that selects only the write-accessed partial regions of finished areas. The third is a technique of variable eviction timing to balance NVM consumption and average response time. Experimental results indicate that the average response times of the proposed ATSMF are almost the same as those of the published ATSMF, while the NVM consumption is three times lower in the best case.

  • Lempel-Ziv Factorization in Linear-Time O(1)-Workspace for Constant Alphabets

    Weijun LIU  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2021/08/30
    Vol: E104-D No:12
    Page(s): 2145-2153

    Computing the Lempel-Ziv factorization (LZ77) of a string is one of the most important problems in computer science. It is widely used in many applications, such as data compression, text indexing, and pattern discovery, and has become the heart of many file compressors such as gzip and 7zip. In this paper, we present a linear-time algorithm called Xone for computing the LZ77 factorization, which has the same space requirement as BGone, the previous best linear-time LZ77 factorization algorithm with respect to space. Xone greatly improves the efficiency of BGone. Experiments show that the two versions of Xone, XoneT and XoneSA, are about 27% and 31% faster than BGoneT and BGoneSA, respectively.
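    For orientation, a plain quadratic-time reference factorization is sketched below; it produces LZ77 factors in the previous-occurrence form but is in no way the linear-time, O(1)-workspace algorithm of the paper.

    ```python
    def lz77_factorize(s: str):
        """Reference LZ77 factorization: each factor is either a fresh literal or
        the longest prefix of the remaining suffix that also starts at an earlier
        position (overlaps allowed).  Quadratic time; a readable baseline only."""
        factors, i, n = [], 0, len(s)
        while i < n:
            best_len, best_pos = 0, -1
            for j in range(i):                            # candidate earlier start positions
                l = 0
                while i + l < n and s[j + l] == s[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_pos = l, j
            if best_len == 0:
                factors.append((s[i], 0))                 # literal factor
                i += 1
            else:
                factors.append((best_pos, best_len))      # (previous occurrence, length)
                i += best_len
        return factors

    print(lz77_factorize("ababab"))   # [('a', 0), ('b', 0), (0, 4)]
    ```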

  • Weighted PCA-LDA Based Color Quantization Method Suppressing Saturation Decrease

    Seiichi KOJIMA  Momoka HARADA  Yoshiaki UEDA  Noriaki SUETAKE  

     
    LETTER-Image

    Publicized: 2021/06/02
    Vol: E104-A No:12
    Page(s): 1728-1732

    In this letter, we propose a new color quantization method that suppresses the decrease in saturation. In the proposed method, a saturation-based weight and an intensity-based weight are used so that vivid colors are preferentially selected as the representative colors. Experiments show that the proposed method tends to select vivid colors even if they occupy only a small area of the image.
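    To illustrate the weighting idea only (the actual method is a weighted PCA-LDA procedure, which is not reproduced here), the sketch below biases an ordinary k-means palette toward vivid pixels through a saturation-based sample weight.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def vivid_palette(rgb_image, n_colors=16):
        """Pick representative colors with saturation-based sample weights so that
        vivid pixels pull the palette toward them (k-means stands in for the
        paper's weighted PCA-LDA procedure)."""
        pixels = rgb_image.reshape(-1, 3).astype(float)
        mx, mn = pixels.max(axis=1), pixels.min(axis=1)
        saturation = (mx - mn) / (mx + 1e-9)              # HSV-style saturation in [0, 1]
        km = KMeans(n_clusters=n_colors, n_init=10, random_state=0)
        km.fit(pixels, sample_weight=1.0 + saturation)    # vivid pixels weigh more
        return km.cluster_centers_
    ```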

  • Joint Wireless and Computational Resource Allocation Based on Hierarchical Game for Mobile Edge Computing

    Weiwei XIA  Zhuorui LAN  Lianfeng SHEN  

     
    PAPER-Network

    Publicized: 2021/05/14
    Vol: E104-B No:11
    Page(s): 1395-1407

    In this paper, we propose a hierarchical Stackelberg-game-based resource allocation algorithm (HGRAA) to jointly allocate the wireless and computational resources of a mobile edge computing (MEC) system. The proposed HGRAA is composed of two levels: a lower-level evolutionary game (LEG) that minimizes the cost of mobile terminals (MTs) and an upper-level exact potential game (UEPG) that maximizes the utility of MEC servers. At the lower level, the MTs are divided into delay-sensitive MTs (DSMTs) and non-delay-sensitive MTs (NDSMTs) according to their different quality-of-service (QoS) requirements. The competition among DSMTs and NDSMTs in different service areas to share the limited available wireless and computational resources is formulated as a dynamic evolutionary game. Replicator dynamics are applied to obtain the evolutionary equilibrium that minimizes the costs imposed on the MTs. At the upper level, an exact potential game is formulated to solve the resource sharing problem among MEC servers, and the problem is transformed into a nonlinear complementarity problem. The existence of a Nash equilibrium (NE) is proved, and the NE is obtained through the Karush-Kuhn-Tucker (KKT) conditions. Simulations illustrate that substantial performance improvements, such as in the average utility and the resource utilization of the MEC servers, are achieved by applying the proposed HGRAA. Moreover, the cost of the MTs is significantly lower than with other existing algorithms as the size of the input data increases, and the QoS requirements of the different kinds of MTs are well guaranteed in terms of average delay and transmission data rate.
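    For reference, replicator dynamics of the standard textbook form below drive such a lower-level game toward its evolutionary equilibrium (the paper's specific payoff and cost definitions are not reproduced), where x_i is the population share of MTs choosing strategy i and π_i(x) its payoff:

    ```latex
    \dot{x}_i = x_i \bigl( \pi_i(\mathbf{x}) - \bar{\pi}(\mathbf{x}) \bigr),
    \qquad
    \bar{\pi}(\mathbf{x}) = \sum_{j} x_j \, \pi_j(\mathbf{x})
    ```

    The evolutionary equilibrium corresponds to the fixed points with ẋ_i = 0 for all strategies in use.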

141-160 hits (2923 hits)