Keyword Search Result

[Keyword] edge (504 hits)

Showing 1-20 of 504 hits

  • Improving Sliced Wasserstein Distance with Geometric Median for Knowledge Distillation Open Access

    Hongyun LU  Mengmeng ZHANG  Hongyuan JING  Zhi LIU  

     
    LETTER-Fundamentals of Information Systems

    Publicized: 2024/03/08 | Vol: E107-D No:7 | Page(s): 890-893

    Currently, the most advanced knowledge distillation models use a metric learning approach based on probability distributions. However, the correlation between supervised probability distributions is typically geometric and implicit, causing inefficiency and an inability to capture structural feature representations across different tasks. To overcome this problem, we propose a knowledge distillation loss using the robust sliced Wasserstein distance with geometric median (GMSW) to estimate the differences between the teacher and student representations. Owing to the intuitive geometric properties of GMSW, the student model can effectively learn to align its hidden states with those of the teacher model, thereby establishing a robust correlation among implicit features. In experiments, our method outperforms state-of-the-art models in both high-resource and low-resource settings.
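
    A minimal sketch of the core computation, under our own assumptions rather than the authors' released code: per-slice one-dimensional Wasserstein distances are taken over random projections, and a robust aggregate replaces the usual mean (for scalar per-slice distances the geometric median reduces to the ordinary median). All function names here are hypothetical.

    ```python
    import numpy as np

    def sliced_w1(teacher, student, n_proj=64, seed=0):
        """Per-slice 1-D Wasserstein-1 distances between two feature sets.

        teacher, student: (n_samples, dim) arrays of hidden representations
        (equal sample counts assumed for the sorted-sample formula).
        """
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(n_proj, teacher.shape[1]))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        t_proj = np.sort(teacher @ dirs.T, axis=0)   # sort along samples
        s_proj = np.sort(student @ dirs.T, axis=0)
        return np.abs(t_proj - s_proj).mean(axis=0)  # one W1 value per slice

    def gmsw_like_loss(teacher, student, n_proj=64):
        d = sliced_w1(teacher, student, n_proj)
        return np.median(d)  # robust aggregation over slices (assumption)

    t = np.random.randn(128, 32)
    s = t + 0.1 * np.random.randn(128, 32)
    print(gmsw_like_loss(t, s))
    ```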

  • Cloud-Edge-Device Collaborative High Concurrency Access Management for Massive IoT Devices in Distribution Grid Open Access

    Shuai LI  Xinhong YOU  Shidong ZHANG  Mu FANG  Pengping ZHANG  

     
    PAPER-Systems and Control

    Publicized: 2023/10/26 | Vol: E107-A No:7 | Page(s): 946-957

    Emerging data-intensive services in the distribution grid impose high-concurrency access requirements for massive internet of things (IoT) devices. However, the lack of effective high-concurrency access management results in severe performance degradation. To address this challenge, we propose a cloud-edge-device collaborative high-concurrency access management algorithm based on multi-timescale joint optimization of channel pre-allocation and load balancing degree. We formulate an optimization problem to minimize the weighted sum of edge-cloud load balancing degree and queuing delay under an access success rate constraint. The problem is decomposed into a large-timescale channel pre-allocation subproblem, solved by a device-edge collaborative access priority scoring mechanism, and a small-timescale data access control subproblem, solved by a discounted empirical matching mechanism (DEM) aware of the high-concurrency number and queue backlog. In particular, information uncertainty caused by externalities is tackled by exploiting discounted empirical performance, which captures the influence of historical time points on the present preference value. Simulation results demonstrate the effectiveness of the proposed algorithm in reducing edge-cloud load balancing degree and queuing delay.
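
    As a hedged illustration of the discounted empirical idea (the exponential weighting and the function name are our assumptions, not the paper's exact formulation), a preference value can weight historical performance so that recent time points influence the present more:

    ```python
    def discounted_empirical(history, gamma=0.9):
        """Discounted empirical performance over past observations.

        history: list of past performance samples, oldest first.
        gamma:   discount factor in (0, 1); smaller means faster forgetting.
        """
        weights = [gamma ** k for k in range(len(history) - 1, -1, -1)]
        return sum(w * h for w, h in zip(weights, history)) / sum(weights)

    # Example: recent good performance dominates the preference value.
    print(discounted_empirical([0.2, 0.4, 0.9]))
    ```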

  • A Sequential Approach to Detect Drifts and Retrain Neural Networks on Resource-Limited Edge Devices Open Access

    Kazuki SUNAGA  Takeya YAMADA  Hiroki MATSUTANI  

     
    PAPER-Software System

    Publicized: 2024/02/09 | Vol: E107-D No:6 | Page(s): 741-750

    A practical issue of edge AI systems is that the data distributions of the training dataset and the deployed environment may differ due to noise and environmental changes over time. This phenomenon is known as concept drift, and the resulting gap degrades the performance of edge AI systems and may cause system failures. To address this gap, retraining neural network models triggered by concept drift detection is a practical approach. However, available compute resources are strictly limited on edge devices, so in this paper we propose a fully sequential concept drift detection method that cooperates with an on-device sequential learning technique for neural networks. In this approach, both the neural network retraining and the proposed concept drift detection are performed only by sequential computation to reduce computation cost and memory utilization. We use three datasets for experiments and compare the proposed approach with existing batch-based detection methods, as well as with a DNN-based approach without concept drift detection. The evaluation results show that the proposed method is capable of detecting each of the four concept drift types. The results also show that, while the accuracy is decreased by up to 0.9% compared to the existing batch-based detection methods, our method decreases the memory size by 88.9%-96.4% and the execution time by 45.0%-87.6%. As a result, the combination of the neural network retraining and the proposed concept drift detection method is demonstrated on a Raspberry Pi Pico, which has 264 kB of memory.
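
    A minimal sketch in the spirit of a fully sequential detector (this is a generic online z-score test, not the paper's exact method; the monitored statistic, warm-up length, and threshold are assumptions):

    ```python
    class SequentialDriftDetector:
        """Online mean/variance (Welford's algorithm) with a z-score drift test.

        Requires O(1) memory and O(1) work per sample, which suits
        resource-limited edge devices such as microcontrollers.
        """
        def __init__(self, z_threshold=4.0, warmup=30):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0
            self.z_threshold = z_threshold
            self.warmup = warmup

        def update(self, x):
            """Feed one monitored value (e.g., prediction error); True = drift."""
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            if self.n < self.warmup:
                return False
            std = (self.m2 / (self.n - 1)) ** 0.5
            return std > 0 and abs(x - self.mean) / std > self.z_threshold
    ```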

  • Reservoir-Based 1D Convolution: Low-Training-Cost AI Open Access

    Yuichiro TANAKA  Hakaru TAMUKOH  

     
    LETTER-Neural Networks and Bioengineering

    Publicized: 2023/09/11 | Vol: E107-A No:6 | Page(s): 941-944

    In this study, we introduce a reservoir-based one-dimensional (1D) convolutional neural network that processes time-series data at a low computational cost, and we investigate its performance and training time. Experimental results show that the proposed network incurs lower training computational costs and outperforms conventional reservoir computing in a sound-classification task.
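
    For context, a hedged sketch of the reservoir part only (a standard echo-state update with fixed random weights; combining it with a trainable 1D convolutional readout, as the letter does, is left as the downstream step, and all parameter values here are assumptions):

    ```python
    import numpy as np

    def reservoir_states(u, n_res=100, rho=0.9, seed=0):
        """Echo-state reservoir: fixed random weights, no training needed.

        u: (T,) input time series. Returns a (T, n_res) state trajectory,
        which a lightweight trainable readout (a 1D convolution in the
        letter's setting) can consume.
        """
        rng = np.random.default_rng(seed)
        w_in = rng.uniform(-1, 1, size=n_res)
        w = rng.normal(size=(n_res, n_res))
        w *= rho / np.abs(np.linalg.eigvals(w)).max()  # set spectral radius
        x = np.zeros(n_res)
        states = []
        for t in range(len(u)):
            x = np.tanh(w_in * u[t] + w @ x)
            states.append(x.copy())
        return np.stack(states)

    states = reservoir_states(np.sin(np.linspace(0, 10, 200)))
    print(states.shape)  # (200, 100)
    ```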

  • Data-Quality Aware Incentive Mechanism Based on Stackelberg Game in Mobile Edge Computing Open Access

    Shuyun LUO  Wushuang WANG  Yifei LI  Jian HOU  Lu ZHANG  

     
    PAPER-Mobile Information Network and Personal Communications

    Publicized: 2023/09/14 | Vol: E107-A No:6 | Page(s): 873-880

    Crowdsourcing has become a popular data-collection method because it relieves the high cost and latency of data gathering. Since the users involved in crowdsourcing are volunteers, incentives are needed to encourage them to provide data. However, current incentive mechanisms mostly pay attention to data quantity while ignoring data quality. In this paper, we design a Data-quality awaRe IncentiVe mEchanism (DRIVE) for collaborative tasks based on the Stackelberg game to motivate users to contribute high-quality data; its highlight is a dynamic reward allocation scheme based on the proposed data quality evaluation method. To guarantee real-time response of the data quality evaluation, we introduce a mobile edge computing framework. Finally, a case study is given, and its real-data experiments demonstrate the superior performance of DRIVE.

  • Automated Labeling of Entities in CVE Vulnerability Descriptions with Natural Language Processing Open Access

    Kensuke SUMOTO  Kenta KANAKOGI  Hironori WASHIZAKI  Naohiko TSUDA  Nobukazu YOSHIOKA  Yoshiaki FUKAZAWA  Hideyuki KANUKA  

     
    PAPER

    Publicized: 2024/02/09 | Vol: E107-D No:5 | Page(s): 674-682

    Security-related issues have become more significant due to the proliferation of IT. Collating security-related information in a database improves security. For example, Common Vulnerabilities and Exposures (CVE) is a security knowledge repository containing descriptions of vulnerabilities in software or source code. Although the descriptions include various entities, there is no uniform entity structure, which makes security analysis based on individual entities difficult. Developing a consistent entity structure will enhance the security field. Herein we propose a method to automatically label selected entities in CVE descriptions by applying the Named Entity Recognition (NER) technique. We manually labeled 3287 CVE descriptions and conducted experiments using a machine learning model called BERT to compare the proposed method to labeling with regular expressions. Machine learning using the proposed method significantly improves the labeling accuracy, with an F1 score of about 0.93, precision of about 0.91, and recall of about 0.95, demonstrating that our method can automatically label selected entities in CVE descriptions.
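
    A hedged sketch of applying BERT-based token classification to a CVE-style description (the checkpoint below is a generic public NER model used as a stand-in, not the authors' fine-tuned model, and the sample text and its labels are illustrative only):

    ```python
    from transformers import pipeline

    # Generic pretrained NER checkpoint as a placeholder; the paper instead
    # fine-tunes BERT on 3287 manually labeled CVE descriptions with its
    # own vulnerability-specific label set.
    ner = pipeline("token-classification",
                   model="dslim/bert-base-NER",
                   aggregation_strategy="simple")

    text = ("Buffer overflow in the web interface of ExampleServer 2.1 "
            "allows remote attackers to execute arbitrary code.")
    for ent in ner(text):
        print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))
    ```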

  • An Extension of Physical Optics Approximation for Dielectric Wedge Diffraction for a TM-Polarized Plane Wave Open Access

    Duc Minh NGUYEN  Hiroshi SHIRAI  Se-Yun KIM  

     
    PAPER-Electromagnetic Theory

    Publicized: 2023/11/08 | Vol: E107-C No:5 | Page(s): 115-123

    In this study, the edge diffraction of a TM-polarized electromagnetic plane wave by two-dimensional dielectric wedges has been analyzed. An asymptotic solution for the radiation field has been derived from equivalent electric and magnetic currents, which can be determined by the geometrical optics (GO) rays. This method may be regarded as an extended version of physical optics (PO). The diffracted field has been represented in terms of cotangent functions whose singularity behaviors are closely related to the GO shadow boundaries. Numerical calculations are performed to compare the results with other reference solutions, such as the hidden rays of diffraction (HRD) and a numerical finite-difference time-domain (FDTD) simulation. Based on comparisons of the diffraction effect among these results, additional lateral waves in the denser media are proposed.

  • Conceptual Knowledge Enhanced Model for Multi-Intent Detection and Slot Filling Open Access

    Li HE  Jingxuan ZHAO  Jianyong DUAN  Hao WANG  Xin LI  

     
    PAPER

    Publicized: 2023/10/25 | Vol: E107-D No:4 | Page(s): 468-476

    In Natural Language Understanding, intent detection and slot filling have been widely used to understand user queries. However, current methods tend to rely on single words and sentences to understand complex semantic concepts and can only consider local information within the sentence. Therefore, they usually cannot capture long-distance dependencies well and are prone to problems where complex intentions in sentences are difficult to recognize. To solve the model's long-distance dependency problem, this paper uses ConceptNet as an external knowledge source and introduces its extensive semantic information into the multi-intent detection and slot filling model. Specifically, for a given sentence, the most relevant conceptual knowledge is selected based on confidence scores and semantic relationships, and a concept context map with rich information is constructed. Then, a multi-head graph attention mechanism is used to strengthen context correlation and improve the semantic understanding ability of the model. The experimental results indicate that the model's performance is significantly improved compared with other models on the MixATIS and MixSNIPS multi-intent datasets.

  • Research on Lightweight Acoustic Scene Perception Method Based on Drunkard Methodology

    Wenkai LIU  Lin ZHANG  Menglong WU  Xichang CAI  Hongxia DONG  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2023/10/23 | Vol: E107-D No:1 | Page(s): 83-92

    The goal of Acoustic Scene Classification (ASC) is to simulate human analysis of the surrounding environment and make accurate decisions promptly. Extracting useful information from audio signals in real-world scenarios is challenging and can lead to suboptimal performance in acoustic scene classification, especially in environments with relatively homogeneous backgrounds. To address this problem, we model the sobering-up process of “drunkards” in real life and the guiding behavior of normal people, and construct a high-precision lightweight model implementation methodology called the “drunkard methodology”. The core idea includes three parts: (1) designing a special feature transformation module based on the different mechanisms of information perception between drunkards and ordinary people, to simulate the process of gradually sobering up and the changes in feature perception ability; (2) studying a lightweight “drunken” model that matches the normal model's perception process. The model uses a multi-scale class residual block structure and can obtain finer feature representations by fusing information extracted at different scales; (3) introducing a guiding and fusion module from the conventional model into the “drunken” model to speed up the sobering-up process and achieve iterative optimization and accuracy improvement. Evaluation results on the official dataset of DCASE2022 Task1 demonstrate that our baseline system achieves 40.4% accuracy and 2.284 loss with 442.67K parameters and 19.40M MACs (multiply-accumulate operations). After adopting the “drunkard” mechanism, the accuracy improves to 45.2% and the loss decreases by 0.634 with 551.89K parameters and 23.6M MACs.

  • Resource Allocation for Mobile Edge Computing System Considering User Mobility with Deep Reinforcement Learning

    Kairi TOKUDA  Takehiro SATO  Eiji OKI  

     
    PAPER-Network

    Publicized: 2023/10/06 | Vol: E107-B No:1 | Page(s): 173-184

    Mobile edge computing (MEC) is a key technology for providing services that require low latency by migrating cloud functions to the network edge. The potentially low quality of the wireless channel should be noted when mobile users with limited computing resources offload tasks to an MEC server. To improve transmission reliability, it is necessary to perform resource allocation in an MEC server, taking into account the current channel quality and resource contention. Several works take a deep reinforcement learning (DRL) approach to address such resource allocation. However, these approaches consider a fixed number of users offloading their tasks and do not assume a situation where the number of users varies due to user mobility. This paper proposes Deep reinforcement learning model for MEC Resource Allocation with Dummy (DMRA-D), an online learning model that addresses resource allocation in an MEC server when the number of users varies. By adopting dummy states/actions, DMRA-D keeps the state/action representation fixed, so it can continue to learn a single model regardless of variation in the number of users during operation. Numerical results show that DMRA-D improves the success rate of task submission while continuing to learn as the number of users varies.
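
    A hedged sketch of the dummy state/action idea (the capacity, feature layout, and function names are our own illustrative assumptions, not the paper's definitions): pad per-user features up to a fixed capacity and mask out actions that would target dummy users, so the DRL input/output shapes never change.

    ```python
    import numpy as np

    MAX_USERS = 8   # fixed model capacity (assumption for illustration)
    FEAT_DIM = 4    # per-user features, e.g., channel quality, queue length

    def build_state(user_feats):
        """Pad per-user features with dummy rows so the DRL input shape
        stays constant even when the number of active users varies."""
        state = np.zeros((MAX_USERS, FEAT_DIM + 1))   # +1 validity flag
        for i, feats in enumerate(user_feats[:MAX_USERS]):
            state[i, :FEAT_DIM] = feats
            state[i, FEAT_DIM] = 1.0                  # mark as a real user
        return state.ravel()

    def mask_q_values(q_values, n_active):
        """Forbid actions that would allocate resources to dummy users."""
        q = np.array(q_values, dtype=float)
        q[n_active:] = -np.inf
        return q

    s = build_state([np.ones(FEAT_DIM), 2 * np.ones(FEAT_DIM)])  # 2 users
    print(s.shape, mask_q_values(np.zeros(MAX_USERS), n_active=2))
    ```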

  • Minimization of Energy Consumption in TDMA-Based Wireless-Powered Multi-Access Edge Computing Networks

    Xi CHEN  Guodong JIANG  Kaikai CHI  Shubin ZHANG  Gang CHEN  Jiang LIU  

     
    PAPER-Communication Theory and Signals

    Publicized: 2023/06/19 | Vol: E106-A No:12 | Page(s): 1544-1554

    Many Internet of Things (IoT) nodes rely on batteries for power. Additionally, the demand for executing compute-intensive and latency-sensitive tasks on IoT nodes is increasing. In some practical scenarios, the computation tasks of wireless devices (WDs) are non-separable, so binary offloading strategies should be used. In this paper, we focus on the design of an efficient binary offloading algorithm that minimizes system energy consumption (EC) for TDMA-based wireless-powered multi-access edge computing networks, where WDs either compute tasks locally or offload them to hybrid access points (H-APs). We formulate the EC minimization problem, which is non-convex, and decompose it into a master problem optimizing the binary offloading decision and a subproblem optimizing the wireless power transfer (WPT) duration and task offloading transmission durations. For the master problem, a DRL-based method is applied to obtain a near-optimal offloading decision. For the subproblem, we first consider the scenario where nodes have no completion time constraints and obtain the optimal analytical solution. We then consider the scenario with the constraints; by jointly using the golden-section method and the bisection method, the optimal solution can be obtained owing to the convexity of the constraint function. Simulation results show that the proposed DRL-based offloading algorithm achieves near-minimal EC.
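
    A minimal sketch of the golden-section method used in the subproblem (the search routine is standard; the toy objective standing in for the WPT-duration trade-off is purely our assumption):

    ```python
    import math

    def golden_section_min(f, lo, hi, tol=1e-6):
        """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
        inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
        a, b = lo, hi
        c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
        fc, fd = f(c), f(d)
        while b - a > tol:
            if fc < fd:                            # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - inv_phi * (b - a)
                fc = f(c)
            else:                                  # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + inv_phi * (b - a)
                fd = f(d)
        return (a + b) / 2

    # Toy trade-off: longer WPT duration raises harvest but delays offloading.
    tau_opt = golden_section_min(lambda t: (1 - t) ** 2 + 0.5 / (t + 0.1), 0.0, 1.0)
    print(round(tau_opt, 4))
    ```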

  • A Network Design Scheme in Delay Sensitive Monitoring Services Open Access

    Akio KAWABATA  Takuya TOJO  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network Management/Operation

    Publicized: 2023/04/19 | Vol: E106-B No:10 | Page(s): 903-914

    Mission-critical monitoring services, such as finding criminals with monitoring cameras, require rapid detection of newly updated data, so suppressing delay is desirable. In this direction, this paper proposes a network design scheme that minimizes this delay for monitoring services consisting of Internet-of-Things (IoT) devices located at terminal endpoints (TEs), databases (DB), and applications (APLs). The proposed scheme determines the allocation of the DB and APLs and the selection of the server to which each TE belongs; the DB and APLs are allocated to optimal servers among the multiple servers in the network. We formulate the proposed network design scheme as an integer linear programming problem. The delay reduction effect of the proposed scheme is evaluated on two network topologies and a monitoring camera system network. In the two network topologies, the delays of the proposed scheme are 78 and 80 percent of that of the conventional scheme, respectively. In the monitoring camera system network, the delay of the proposed scheme is 77 percent of that of the conventional scheme. These results indicate that the proposed scheme reduces the delay compared to the conventional scheme, in which APLs are located near TEs. The computation time of the proposed scheme is acceptable for the design phase before the service is launched. The proposed scheme can contribute to a network design that quickly detects newly added objects in monitoring services.
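
    A hedged toy sketch of the integer-linear-programming flavor of the design (a much-simplified stand-in: it only assigns TEs to servers by delay, whereas the paper's ILP also places the DB and APLs; all instance data below are invented):

    ```python
    import pulp

    tes, servers = ["te1", "te2", "te3"], ["s1", "s2"]
    delay = {("te1", "s1"): 3, ("te1", "s2"): 7,
             ("te2", "s1"): 6, ("te2", "s2"): 2,
             ("te3", "s1"): 4, ("te3", "s2"): 4}

    prob = pulp.LpProblem("te_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", delay.keys(), cat="Binary")
    prob += pulp.lpSum(delay[k] * x[k] for k in delay)   # total delay
    for te in tes:                      # each TE joins exactly one server
        prob += pulp.lpSum(x[(te, s)] for s in servers) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({k: 1 for k in delay if x[k].value() > 0.5})
    ```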

  • Local-to-Global Structure-Aware Transformer for Question Answering over Structured Knowledge

    Yingyao WANG  Han WANG  Chaoqun DUAN  Tiejun ZHAO  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2023/06/27 | Vol: E106-D No:10 | Page(s): 1705-1714

    Question-answering tasks over structured knowledge (i.e., tables and graphs) require the ability to encode structural information. Traditional pre-trained language models trained on linear-chain natural language cannot be directly applied to encode tables and graphs. Existing methods adopt pre-trained models for such tasks by flattening structured knowledge into sequences. However, this serialization leads to the loss of the structural information of the knowledge. To better employ pre-trained transformers for structured knowledge representation, we propose a novel structure-aware transformer (SATrans) that injects local-to-global structural information of the knowledge into the masks of different self-attention layers. Specifically, in the lower self-attention layers, SATrans focuses on the local structural information of each knowledge token to learn a more robust representation of it. In the upper self-attention layers, SATrans further injects global information of the structured knowledge to integrate information among knowledge tokens. In this way, SATrans can effectively learn the semantic representation and the structural information from the knowledge sequence and the attention mask, respectively. We evaluate SATrans on the table fact verification task and the knowledge base question-answering task. Furthermore, we explore two methods of combining symbolic and linguistic reasoning for these tasks to address the pre-trained models' lack of symbolic reasoning ability. The experimental results reveal that the methods consistently outperform strong baselines on the two benchmarks.
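
    A minimal sketch of the local-to-global masking idea (how many layers stay local and how adjacency is derived are our assumptions; the paper's masks are built from the knowledge structure itself):

    ```python
    import numpy as np

    def attention_masks(adjacency, n_local_layers, n_layers):
        """Local-to-global masks: lower layers attend only to graph
        neighbors (plus self), upper layers attend to every token."""
        n = adjacency.shape[0]
        local = adjacency | np.eye(n, dtype=bool)
        global_ = np.ones((n, n), dtype=bool)
        return [local if i < n_local_layers else global_
                for i in range(n_layers)]

    # 4 knowledge tokens in a chain; first 2 layers local, last 2 global.
    adj = np.zeros((4, 4), dtype=bool)
    for i in range(3):
        adj[i, i + 1] = adj[i + 1, i] = True
    masks = attention_masks(adj, n_local_layers=2, n_layers=4)
    print(masks[0].astype(int), masks[3].astype(int), sep="\n\n")
    ```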

  • Backup Resource Allocation Model with Probabilistic Protection Considering Service Delay

    Shinya HORIMOTO  Fujun HE  Eiji OKI  

     
    PAPER-Network

    Publicized: 2023/03/24 | Vol: E106-B No:9 | Page(s): 798-816

    This paper proposes a backup resource allocation model for virtual network functions (VNFs) that minimizes the total allocated computing capacity for backup while considering the service delay. If primary hosts fail, the VNFs in the failed hosts are recovered by backup hosts whose allocation is pre-determined. We introduce probabilistic protection, where the probability that the protection by a backup host fails is limited within a given value; this allows backup resource sharing, which reduces the total allocated computing capacity. Previous work does not consider the service delay constraint in the backup resource allocation problem. The proposed model constrains, within a given value, the probability that the service delay, which consists of the networking delay between hosts and the processing delay in each VNF, exceeds its threshold. We introduce a basic algorithm to solve our formulated delay-constrained optimization problem. For problem sizes that the basic algorithm cannot solve within an acceptable computation time limit, we develop a simulated annealing algorithm incorporating Yen's algorithm to handle the delay constraint heuristically. We observe that both algorithms in the proposed model reduce the total allocated computing capacity by up to 56.3% compared to a baseline; the simulated annealing algorithm obtains feasible solutions for problems that the basic algorithm cannot solve.
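
    A generic simulated-annealing skeleton of the kind the heuristic builds on (the cost and neighbor functions are placeholders; in the paper's setting the cost check would involve the delay constraint evaluated via paths from Yen's algorithm, which this sketch only gestures at in a comment):

    ```python
    import math
    import random

    def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.995, steps=5000):
        """Minimize cost(x) by annealed random local search.

        cost(x) should return float('inf') for allocations violating the
        delay constraint (e.g., checked with k-shortest paths from Yen's
        algorithm); neighbor(x) proposes a small random modification.
        """
        x, fx, t = x0, cost(x0), t0
        best, fbest = x, fx
        for _ in range(steps):
            y = neighbor(x)
            fy = cost(y)
            # Accept improvements always; worse moves with Boltzmann probability.
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= alpha                              # cool the temperature
        return best, fbest

    # Toy usage: minimize a 1-D bowl with +/-0.1 perturbations.
    sol, val = simulated_annealing(lambda x: (x - 3) ** 2,
                                   lambda x: x + random.uniform(-0.1, 0.1), 0.0)
    print(round(sol, 2), round(val, 4))
    ```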

  • Optimizing Edge-Cloud Cooperation for Machine Learning Accuracy Considering Transmission Latency and Bandwidth Congestion Open Access

    Kengo TAJIRI  Ryoichi KAWAHARA  Yoichi MATSUO  

     
    PAPER-Network Management/Operation

    Publicized: 2023/03/24 | Vol: E106-B No:9 | Page(s): 827-836

    Machine learning (ML) has been used for various tasks in network operations in recent years. However, as the scale of networks has grown and the amount of generated data has increased, it has become increasingly difficult for network operators to conduct their tasks with a single server using ML. Thus, ML with edge-cloud cooperation has been attracting attention for efficiently processing and analyzing a large amount of data. In the edge-cloud cooperation setting, transmission latency, bandwidth congestion, and the accuracy of tasks using ML depend on how the data-processing load is balanced between edge servers and a cloud server, but the relationship is too complex to estimate directly. In this paper, we focus on monitoring anomalous traffic as an example ML task for network operations and formulate transmission latency, bandwidth congestion, and task accuracy under edge-cloud cooperation as functions of the ratio of the amount of data preprocessed on edge servers to that on a cloud server. Moreover, we formulate an optimization problem under constraints on transmission latency and bandwidth congestion to select the proper ratio. By solving this optimization problem, the optimal load balance between edge servers and a cloud server can be selected, and the accuracy of anomalous traffic monitoring can be estimated. Our formulation and optimization framework can be applied to other ML tasks by considering the generating distribution of the data and the type of ML model. In accordance with our formulation, we simulated the optimal load balance of edge-cloud cooperation on a topology that mimics a Japanese network and conducted an anomalous traffic detection experiment using real traffic data to compare the accuracy estimated by our formulation with the actual accuracy obtained in the experiment.

  • Design of Enclosing Signing Keys by All Issuers in Distributed Public Key Certificate-Issuing Infrastructure

    Shohei KAKEI  Hiroaki SEKO  Yoshiaki SHIRAISHI  Shoichi SAITO  

     
    LETTER

    Publicized: 2023/05/25 | Vol: E106-D No:9 | Page(s): 1495-1498

    This paper first takes IoT as an example to motivate eliminating the single point of trust (SPOT) in a CA-based private PKI. It then describes a distributed public key certificate-issuing infrastructure that eliminates the SPOT, together with a limitation of that infrastructure deriving from how signing keys are generated. Finally, it proposes a method that addresses this limitation by involving all certificate issuers.

  • Distilling Distribution Knowledge in Normalizing Flow

    Jungwoo KWON  Gyeonghwan KIM  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2023/04/26 | Vol: E106-D No:8 | Page(s): 1287-1291

    In this letter, we propose a feature-based knowledge distillation scheme that transfers knowledge between intermediate blocks of a teacher and a student with flow-based architectures, specifically Normalizing Flow in our implementation. In addition to the knowledge transfer scheme, we examine how the configuration of the distillation positions affects knowledge transfer performance. To evaluate the proposed ideas, we choose two knowledge distillation baseline models based on Normalizing Flow in different domains: CS-Flow for anomaly detection and SRFlow-DA for super-resolution. Performance comparisons with the baseline models on popular benchmark datasets show promising results along with improved inference speed. The comparison includes performance analysis based on various configurations of the distillation positions in the proposed scheme.
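
    A hedged sketch of feature-based distillation at configurable positions (a plain MSE feature-matching loss is used here as a stand-in; the letter's actual loss and its flow-specific details may differ, and all names are illustrative):

    ```python
    import torch
    import torch.nn.functional as F

    def feature_kd_loss(teacher_feats, student_feats, positions):
        """Match intermediate block outputs at the chosen positions.

        teacher_feats / student_feats: lists of tensors from corresponding
        blocks; positions: indices of blocks where distillation is applied
        (the configuration choice the letter studies).
        """
        return sum(F.mse_loss(student_feats[i], teacher_feats[i].detach())
                   for i in positions) / len(positions)

    # Toy check with random "block outputs" of matching shapes.
    t = [torch.randn(2, 16, 8, 8) for _ in range(3)]
    s = [ti + 0.1 * torch.randn_like(ti) for ti in t]
    print(feature_kd_loss(t, s, positions=[0, 2]).item())
    ```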

  • Persymmetric Structured Covariance Matrix Estimation Based on Whitening for Airborne STAP

    Quanxin MA  Xiaolin DU  Jianbo LI  Yang JING  Yuqing CHANG  

     
    LETTER-Digital Signal Processing

    Publicized: 2022/12/27 | Vol: E106-A No:7 | Page(s): 1002-1006

    The estimation of the structured clutter covariance matrix (CCM) in space-time adaptive processing (STAP) for airborne radar systems is studied in this letter. By employing prior knowledge and the persymmetric covariance structure, a new estimation algorithm is proposed based on the whitening ability of the covariance matrix. The proposed algorithm is robust to prior knowledge of varying accuracy and can whiten the observed interference data to obtain the optimal solution. In addition, the extended factored approach (EFA) is used in the optimization for dimensionality reduction, which reduces the computational burden. Simulation results show that the proposed algorithm can effectively improve STAP performance even when the prior knowledge contains errors.
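
    As a hedged illustration of exploiting persymmetry (a standard construction, not necessarily the letter's exact estimator): a sample covariance can be projected onto the persymmetric set with the exchange matrix J.

    ```python
    import numpy as np

    def persymmetric_project(r_hat):
        """Project a Hermitian sample covariance onto the persymmetric set:
        R_p = (R + J conj(R) J) / 2, where J is the exchange matrix.
        This effectively doubles the sample support in persymmetric scenarios."""
        n = r_hat.shape[0]
        j = np.fliplr(np.eye(n))
        return 0.5 * (r_hat + j @ np.conj(r_hat) @ j)

    # Toy check: sample covariance from a few complex snapshots.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 10)) + 1j * rng.normal(size=(4, 10))
    r_hat = x @ x.conj().T / x.shape[1]
    r_p = persymmetric_project(r_hat)
    print(np.allclose(r_p, np.fliplr(np.flipud(r_p)).conj()))  # persymmetry holds
    ```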

  • A Low-Cost Neural ODE with Depthwise Separable Convolution for Edge Domain Adaptation on FPGAs

    Hiroki KAWAKAMI  Hirohisa WATANABE  Keisuke SUGIURA  Hiroki MATSUTANI  

     
    PAPER-Computer System

    Publicized: 2023/04/05 | Vol: E106-D No:7 | Page(s): 1186-1197

    High-performance deep neural network (DNN)-based systems are in high demand in edge environments. Due to their high computational complexity, it is challenging to deploy DNNs on edge devices with strict limitations on computational resources. In this paper, we derive a compact yet highly accurate DNN model, termed dsODENet, by combining recently proposed parameter reduction techniques: Neural ODE (Ordinary Differential Equation) and DSC (Depthwise Separable Convolution). Neural ODE exploits a similarity between ResNet and ODEs, sharing most of the weight parameters among multiple layers, which greatly reduces memory consumption. We apply dsODENet to domain adaptation as a practical use case with image classification datasets. We also propose a resource-efficient FPGA-based design for dsODENet, in which all the parameters and feature maps except for the pre- and post-processing layers can be mapped onto on-chip memories. The design is implemented on a Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, inference speed, FPGA resource utilization, and speedup rate over a software counterpart. The results demonstrate that dsODENet achieves comparable or slightly better domain adaptation accuracy than our baseline Neural ODE implementation, while the total parameter size without the pre- and post-processing layers is reduced by 54.2% to 79.8%. Our FPGA implementation accelerates inference by a factor of 23.8.
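
    A minimal sketch of the DSC building block (the standard depthwise-plus-pointwise decomposition; channel sizes here are arbitrary and this is not the paper's dsODENet architecture):

    ```python
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        """DSC: a per-channel (depthwise) 3x3 conv followed by a 1x1
        pointwise conv, cutting parameters versus a standard 3x3 conv."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    dsc = DepthwiseSeparableConv(64, 128)
    std = nn.Conv2d(64, 128, 3, padding=1)
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(dsc), "vs", count(std))  # far fewer parameters than standard conv
    ```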

  • ZGridBC: Zero-Knowledge Proof Based Scalable and Privacy-Enhanced Blockchain Platform for Electricity Tracking

    Takeshi MIYAMAE  Fumihiko KOZAKURA  Makoto NAKAMURA  Masanobu MORINAGA  

     
    PAPER-Information Network

    Publicized: 2023/04/14 | Vol: E106-D No:7 | Page(s): 1219-1229

    The total number of solar power-producing facilities whose Feed-in Tariff (FIT) Program-based ten-year contracts will expire by 2023 is expected to reach approximately 1.65 million in Japan. If the number of facilities that produce or consume renewable energy were to grow large, e.g., to two million, a blockchain would not be capable of processing all the transactions. In this work, we propose a blockchain-based electricity-tracking platform for renewable energy, called ‘ZGridBC,’ which consists of two mutually cooperative novel decentralized schemes that solve the scalability, storage cost, and privacy issues at the same time. One is electricity production resource management, an efficient data management scheme that manages electricity production resources (EPRs) on the blockchain by using UTXO tokens extended to two dimensions (period and electricity amount) to prevent double-spending. The other is electricity-tracking proof, a massive data aggregation scheme that significantly reduces the amount of data managed on the blockchain by using zero-knowledge proof (ZKP). We then present the architecture of ZGridBC, consider its scalability, security, and privacy, and describe its implementation. Finally, we evaluate the scalability of ZGridBC, which handles two million electricity facilities at far less cost per environmental value than the price of the environmental value proposed by METI (= 0.3 yen/kWh).
