
Keyword Search Results

[Keyword] edge (512 hits)

Showing 21-40 of 512 hits

  • Local-to-Global Structure-Aware Transformer for Question Answering over Structured Knowledge

    Yingyao WANG  Han WANG  Chaoqun DUAN  Tiejun ZHAO
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2023/06/27 | Vol: E106-D No:10 | Page(s): 1705-1714

    Question-answering tasks over structured knowledge (i.e., tables and graphs) require the ability to encode structural information. Traditional pre-trained language models, trained on linear-chain natural language, cannot be directly applied to encode tables and graphs. Existing methods adapt pre-trained models to such tasks by flattening structured knowledge into sequences. However, this serialization leads to the loss of the knowledge's structural information. To better employ pre-trained transformers for structured knowledge representation, we propose a novel structure-aware transformer (SATrans) that injects local-to-global structural information of the knowledge into the masks of different self-attention layers. Specifically, in the lower self-attention layers, SATrans focuses on the local structural information of each knowledge token to learn a more robust representation of it. In the upper self-attention layers, SATrans further injects the global information of the structured knowledge to integrate information among knowledge tokens. In this way, SATrans can effectively learn the semantic representation and the structural information from the knowledge sequence and the attention mask, respectively. We evaluate SATrans on the table fact verification task and the knowledge base question-answering task. Furthermore, we explore two methods to combine symbolic and linguistic reasoning for these tasks, addressing the lack of symbolic reasoning ability in pre-trained models. The experimental results reveal that these methods consistently outperform strong baselines on the two benchmarks.
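    As a rough illustration of the masking idea described above, the sketch below builds a local (one-hop) mask and a wider (here, two-hop) mask from an assumed adjacency matrix over knowledge tokens and applies them in scaled dot-product attention; lower layers would use the local mask and upper layers the wider one. This is a hypothetical sketch, not the authors' implementation.

    ```python
    # Hypothetical sketch (not the authors' code): structure-aware attention masks
    # built from an adjacency matrix over knowledge tokens.
    import torch

    def build_masks(adj: torch.Tensor):
        """adj: (N, N) 0/1 adjacency over knowledge tokens (assumed input)."""
        eye = torch.eye(adj.size(0))
        local_mask = (adj + eye) > 0                # each token sees itself and its neighbors
        global_mask = ((adj + eye) @ (adj + eye)) > 0  # crude two-hop stand-in for "global" reach
        return local_mask, global_mask

    def masked_attention(q, k, v, mask):
        scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
        scores = scores.masked_fill(~mask, float("-inf"))  # structure-aware masking
        return torch.softmax(scores, dim=-1) @ v

    # Lower self-attention layers would call masked_attention(q, k, v, local_mask);
    # upper layers would use the wider mask instead.
    ```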

  • Backup Resource Allocation Model with Probabilistic Protection Considering Service Delay

    Shinya HORIMOTO  Fujun HE  Eiji OKI
    PAPER-Network
    Publicized: 2023/03/24 | Vol: E106-B No:9 | Page(s): 798-816

    This paper proposes a backup resource allocation model for virtual network functions (VNFs) that minimizes the total computing capacity allocated for backup while considering the service delay. If failures occur on primary hosts, the VNFs in the failed hosts are recovered by backup hosts whose allocation is pre-determined. We introduce probabilistic protection, where the probability that the protection provided by a backup host fails is limited within a given value; this allows backup resources to be shared, reducing the total allocated computing capacity. Previous work does not consider the service delay constraint in the backup resource allocation problem. The proposed model constrains, within a given value, the probability that the service delay, which consists of the networking delay between hosts and the processing delay in each VNF, exceeds its threshold. We introduce a basic algorithm to solve the formulated delay-constrained optimization problem. For problems too large to be solved by the basic algorithm within an acceptable computation time, we develop a simulated annealing algorithm that incorporates Yen's algorithm to handle the delay constraint heuristically. We observe that both algorithms in the proposed model reduce the total allocated computing capacity by up to 56.3% compared to a baseline, and that the simulated annealing algorithm obtains feasible solutions for problems that the basic algorithm cannot solve.
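    The simulated annealing component can be pictured with the minimal sketch below, in which `cost`, `is_delay_feasible`, and `random_neighbor` are hypothetical placeholders for the paper's objective, delay check (which uses Yen's k-shortest-path algorithm), and allocation move; the cooling schedule is an arbitrary choice.

    ```python
    # Minimal simulated-annealing sketch for a delay-constrained allocation problem.
    # cost(), is_delay_feasible(), and random_neighbor() are hypothetical placeholders.
    import math, random

    def simulated_annealing(initial, cost, is_delay_feasible, random_neighbor,
                            t0=1.0, t_min=1e-3, alpha=0.95, iters_per_temp=100):
        current = best = initial
        t = t0
        while t > t_min:
            for _ in range(iters_per_temp):
                cand = random_neighbor(current)
                if not is_delay_feasible(cand):        # reject delay-violating allocations
                    continue
                delta = cost(cand) - cost(current)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = cand
                    if cost(current) < cost(best):
                        best = current
            t *= alpha                                 # geometric cooling
        return best
    ```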

  • Optimizing Edge-Cloud Cooperation for Machine Learning Accuracy Considering Transmission Latency and Bandwidth Congestion Open Access

    Kengo TAJIRI  Ryoichi KAWAHARA  Yoichi MATSUO
    PAPER-Network Management/Operation
    Publicized: 2023/03/24 | Vol: E106-B No:9 | Page(s): 827-836

    Machine learning (ML) has been used for various tasks in network operations in recent years. However, as networks have grown in scale and the amount of generated data has increased, it has become increasingly difficult for network operators to conduct these tasks with a single server. Thus, ML with edge-cloud cooperation has been attracting attention for efficiently processing and analyzing large amounts of data. In the edge-cloud cooperation setting, the transmission latency, bandwidth congestion, and accuracy of ML tasks depend on how the data-processing load is balanced between edge servers and the cloud server, but this relationship is too complex to estimate directly. In this paper, we focus on monitoring anomalous traffic as an example ML task for network operations and formulate the transmission latency, bandwidth congestion, and task accuracy under edge-cloud cooperation in terms of the ratio of the amount of data preprocessed on edge servers to that on the cloud server. Moreover, we formulate an optimization problem under constraints on transmission latency and bandwidth congestion to select the proper ratio using our formulation. By solving this optimization problem, the optimal load balance between edge servers and the cloud server can be selected, and the accuracy of anomalous traffic monitoring can be estimated. Our formulation and optimization framework can be applied to other ML tasks by considering the generating distribution of the data and the type of ML model. In accordance with our formulation, we simulated the optimal load balance of edge-cloud cooperation on a topology that mimics a Japanese network and conducted an anomalous traffic detection experiment with real traffic data to compare the accuracy estimated by our formulation with the actual accuracy obtained in the experiment.
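    A toy sketch of the ratio-selection step described above: the share r of data preprocessed at the edge is chosen by grid search so that estimated accuracy is maximized under latency and bandwidth constraints. The three model functions are placeholders, not the paper's formulation.

    ```python
    # Illustrative sketch (not the paper's exact formulation): choose the edge
    # preprocessing ratio r that maximizes estimated accuracy subject to latency
    # and bandwidth constraints; accuracy(), latency(), bandwidth() are placeholders.
    def choose_ratio(accuracy, latency, bandwidth, latency_max, bandwidth_max, steps=100):
        best_r, best_acc = None, float("-inf")
        for i in range(steps + 1):
            r = i / steps                        # fraction of data preprocessed at the edge
            if latency(r) <= latency_max and bandwidth(r) <= bandwidth_max:
                acc = accuracy(r)
                if acc > best_acc:
                    best_r, best_acc = r, acc
        return best_r, best_acc

    # Example with toy models:
    # choose_ratio(lambda r: 0.9 - 0.1 * r, lambda r: 5 + 10 * (1 - r),
    #              lambda r: 100 * (1 - r), latency_max=12, bandwidth_max=60)
    ```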

  • Design of Enclosing Signing Keys by All Issuers in Distributed Public Key Certificate-Issuing Infrastructure

    Shohei KAKEI  Hiroaki SEKO  Yoshiaki SHIRAISHI  Shoichi SAITO
    LETTER
    Publicized: 2023/05/25 | Vol: E106-D No:9 | Page(s): 1495-1498

    This paper first takes IoT as an example to motivate the elimination of the single point of trust (SPOT) in a CA-based private PKI. It then describes a distributed public key certificate-issuing infrastructure that eliminates the SPOT, together with a limitation of that infrastructure arising from how signing keys are generated. Finally, it proposes a method that addresses this limitation by having all certificate issuers enclose the signing keys.

  • Distilling Distribution Knowledge in Normalizing Flow

    Jungwoo KWON  Gyeonghwan KIM
    LETTER-Artificial Intelligence, Data Mining
    Publicized: 2023/04/26 | Vol: E106-D No:8 | Page(s): 1287-1291

    In this letter, we propose a feature-based knowledge distillation scheme that transfers knowledge between intermediate blocks of a teacher and a student with flow-based architectures, specifically Normalizing Flow in our implementation. In addition to the knowledge transfer scheme, we examine how the configuration of the distillation positions impacts knowledge transfer performance. To evaluate the proposed ideas, we choose two knowledge distillation baseline models based on Normalizing Flow in different domains: CS-Flow for anomaly detection and SRFlow-DA for super-resolution. A set of performance comparisons with the baseline models on popular benchmark datasets shows promising results along with improved inference speed. The comparison includes performance analysis of various configurations of the distillation positions in the proposed scheme.
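    The feature-based distillation between intermediate blocks can be sketched generically as below: a projection aligns student and teacher feature dimensions and an MSE loss pulls them together. This is a common pattern assumed for illustration; the flow-specific details of the paper are not reproduced.

    ```python
    # Generic feature-based distillation loss between teacher and student
    # intermediate blocks (a common pattern; not the paper's exact formulation).
    import torch
    import torch.nn as nn

    class FeatureKD(nn.Module):
        def __init__(self, student_dim, teacher_dim):
            super().__init__()
            # 1x1 projection so student features can be compared with teacher features
            self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

        def forward(self, student_feat, teacher_feat):
            # teacher features are treated as fixed targets
            return nn.functional.mse_loss(self.proj(student_feat), teacher_feat.detach())

    # total_loss = task_loss + sum of FeatureKD terms over the chosen distillation positions
    ```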

  • Persymmetric Structured Covariance Matrix Estimation Based on Whitening for Airborne STAP

    Quanxin MA  Xiaolin DU  Jianbo LI  Yang JING  Yuqing CHANG
    LETTER-Digital Signal Processing
    Publicized: 2022/12/27 | Vol: E106-A No:7 | Page(s): 1002-1006

    The estimation problem of the structured clutter covariance matrix (CCM) in space-time adaptive processing (STAP) for airborne radar systems is studied in this letter. By employing prior knowledge and the persymmetric covariance structure, a new estimation algorithm is proposed based on the whitening ability of the covariance matrix. The proposed algorithm is robust to prior knowledge of varying accuracy and can whiten the observed interference data to obtain the optimal solution. In addition, the extended factored approach (EFA) is used in the optimization for dimensionality reduction, which reduces the computational burden. Simulation results show that the proposed algorithm can effectively improve STAP performance even when the prior knowledge contains errors.
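    For background, the persymmetric structure mentioned above is commonly imposed on a sample covariance estimate as follows, where J is the exchange (anti-identity) matrix; the paper's whitening-based, knowledge-aided estimator builds on this structure, but its exact form is not reproduced here.

    ```latex
    % Standard persymmetric structuring of a sample covariance estimate (background form)
    \hat{\mathbf{R}} = \frac{1}{K} \sum_{k=1}^{K} \mathbf{x}_k \mathbf{x}_k^{H}, \qquad
    \hat{\mathbf{R}}_{\mathrm{per}} = \frac{1}{2}\left( \hat{\mathbf{R}} + \mathbf{J}\,\hat{\mathbf{R}}^{*}\,\mathbf{J} \right)
    ```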

  • A Low-Cost Neural ODE with Depthwise Separable Convolution for Edge Domain Adaptation on FPGAs

    Hiroki KAWAKAMI  Hirohisa WATANABE  Keisuke SUGIURA  Hiroki MATSUTANI
    PAPER-Computer System
    Publicized: 2023/04/05 | Vol: E106-D No:7 | Page(s): 1186-1197

    High-performance deep neural network (DNN)-based systems are in high demand in edge environments. Due to their high computational complexity, it is challenging to deploy DNNs on edge devices with strict limitations on computational resources. In this paper, we derive a compact yet highly accurate DNN model, termed dsODENet, by combining recently proposed parameter reduction techniques: Neural ODE (Ordinary Differential Equation) and DSC (Depthwise Separable Convolution). Neural ODE exploits a similarity between ResNet and ODE, and shares most of the weight parameters among multiple layers, which greatly reduces memory consumption. We apply dsODENet to domain adaptation as a practical use case with image classification datasets. We also propose a resource-efficient FPGA-based design for dsODENet, in which all the parameters and feature maps except for the pre- and post-processing layers can be mapped onto on-chip memories. It is implemented on a Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, inference speed, FPGA resource utilization, and speedup rate compared to a software counterpart. The results demonstrate that dsODENet achieves comparable or slightly better domain adaptation accuracy than our baseline Neural ODE implementation, while the total parameter size without the pre- and post-processing layers is reduced by 54.2% to 79.8%. Our FPGA implementation accelerates the inference speed by 23.8 times.
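    A hypothetical sketch of the two ingredients combined in dsODENet: a depthwise separable convolution used as the derivative function of a Neural-ODE-style block whose single set of weights is reused over several Euler steps. Channel count and step count are arbitrary choices, not the paper's configuration.

    ```python
    # Hypothetical sketch: depthwise separable convolution as the derivative function
    # of a Neural-ODE block, with one parameter set reused over several Euler steps.
    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
            self.pointwise = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    class ODEBlock(nn.Module):
        """One parameter set reused across N Euler steps: x <- x + (1/N) * f(x)."""
        def __init__(self, channels, steps=4):
            super().__init__()
            self.f = nn.Sequential(DepthwiseSeparableConv(channels), nn.ReLU())
            self.steps = steps

        def forward(self, x):
            for _ in range(self.steps):
                x = x + self.f(x) / self.steps
            return x

    # Example: y = ODEBlock(32)(torch.randn(1, 32, 56, 56))
    ```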

  • ZGridBC: Zero-Knowledge Proof Based Scalable and Privacy-Enhanced Blockchain Platform for Electricity Tracking

    Takeshi MIYAMAE  Fumihiko KOZAKURA  Makoto NAKAMURA  Masanobu MORINAGA
    PAPER-Information Network
    Publicized: 2023/04/14 | Vol: E106-D No:7 | Page(s): 1219-1229

    The total number of solar power-producing facilities whose Feed-in Tariff (FIT) Program-based ten-year contracts will expire by 2023 is expected to reach approximately 1.65 million in Japan. If the number of facilities that produce or consume renewable energy grows large, e.g., to two million, a blockchain would not be capable of processing all of their transactions. In this work, we propose a blockchain-based electricity-tracking platform for renewable energy, called ‘ZGridBC,’ which consists of two mutually cooperative novel decentralized schemes that solve scalability, storage cost, and privacy issues at the same time. One is electricity production resource management, an efficient data management scheme that manages electricity production resources (EPRs) on the blockchain using UTXO tokens extended to two dimensions (period and electricity amount) to prevent double-spending. The other is electricity-tracking proof, a massive data aggregation scheme that significantly reduces the amount of data managed on the blockchain using zero-knowledge proof (ZKP). We then illustrate the architecture of ZGridBC, consider its scalability, security, and privacy, and describe its implementation. Finally, we evaluate the scalability of ZGridBC, which handles two million electricity facilities at far less cost per unit of environmental value than the price of environmental value proposed by METI (0.3 yen/kWh).
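    The two-dimensional UTXO idea can be pictured with the toy sketch below, where spending a token is valid only if the outputs stay within the parent token's period and conserve its electricity amount; this is an assumed rule for illustration, not the paper's exact token design.

    ```python
    # Toy sketch of a UTXO-style token extended to two dimensions (period and amount).
    # The validity rule here is an illustrative assumption, not ZGridBC's actual design.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EPRToken:
        period_start: int   # epoch index of the generation period
        period_end: int
        amount_wh: int      # electricity amount in watt-hours

    def valid_spend(parent: EPRToken, outputs: list) -> bool:
        """Outputs must stay inside the parent's period and sum to its amount."""
        within = all(parent.period_start <= o.period_start <= o.period_end <= parent.period_end
                     for o in outputs)
        conserved = sum(o.amount_wh for o in outputs) == parent.amount_wh
        return within and conserved
    ```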

  • A Lightweight End-to-End Speech Recognition System on Embedded Devices

    Yu WANG  Hiromitsu NISHIZAKI
    PAPER-Speech and Hearing
    Publicized: 2023/04/13 | Vol: E106-D No:7 | Page(s): 1230-1239

    In industry, automatic speech recognition has become a competitive feature for embedded products with limited hardware resources. In this work, we propose a tiny end-to-end speech recognition model that is lightweight and easily deployable on edge platforms. First, instead of sophisticated network structures such as recurrent neural networks or transformers, our model mainly uses convolutional neural networks as its backbone. This ensures that it is supported by most software development kits for embedded devices. Second, we adopt the basic unit of MobileNet-v3, which performs well in computer vision tasks, and integrate the features of the hidden layers at different scales, compressing the number of parameters to less than 1M while achieving accuracy greater than that of some traditional models. Third, to further reduce CPU computation, we directly extract acoustic representations from 1-dimensional speech waveforms and use a self-supervised learning approach to encourage the convergence of the model. Finally, to cope with relatively weak hardware resources, we use a prefix beam search decoder that dynamically extends the search path with an optimized pruning strategy, together with an additional initialism language model that captures between-word probabilities in advance and thus avoids premature pruning of correct words. In our experiments, across a number of evaluation categories, our end-to-end model outperformed several tiny speech recognition models for embedded devices reported in related work.
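    A highly simplified beam-search sketch is shown below to illustrate pruning over per-frame token probabilities; real CTC prefix beam search additionally merges repeated tokens and handles the blank symbol, and the paper further adds an initialism language model, none of which is modeled here.

    ```python
    # Highly simplified beam search over per-frame token log-probabilities with
    # beam pruning; blank handling, prefix merging, and language-model scoring
    # used in real prefix beam search are intentionally omitted.
    import math

    def beam_search(frame_log_probs, beam_width=8):
        """frame_log_probs: list of dicts {token: log_prob}, one dict per frame."""
        beams = {(): 0.0}                         # prefix tuple -> accumulated log prob
        for frame in frame_log_probs:
            candidates = {}
            for prefix, score in beams.items():
                for token, lp in frame.items():
                    cand = prefix + (token,)
                    candidates[cand] = max(candidates.get(cand, -math.inf), score + lp)
            # keep only the best `beam_width` prefixes (pruning)
            beams = dict(sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)[:beam_width])
        return max(beams.items(), key=lambda kv: kv[1])
    ```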

  • Edge Computing Resource Allocation Algorithm for NB-IoT Based on Deep Reinforcement Learning

    Jiawen CHU  Chunyun PAN  Yafei WANG  Xiang YUN  Xuehua LI
    PAPER-Network
    Publicized: 2022/11/04 | Vol: E106-B No:5 | Page(s): 439-447

    Mobile edge computing (MEC) technology guarantees the privacy and security of large-scale data in the Narrowband-IoT (NB-IoT) by deploying MEC servers near base stations to provide sufficient computing, storage, and data processing capacity to meet the delay and energy consumption requirements of NB-IoT terminal equipment. For the NB-IoT MEC system, this paper proposes a resource allocation algorithm based on deep reinforcement learning to optimize the total cost of task offloading and execution. Since the formulated problem is a mixed-integer non-linear program (MINLP), we cast it as a multi-agent distributed deep reinforcement learning (DRL) problem and address it using a dueling Q-network algorithm. Simulation results show that, compared with the deep Q-network and the all-local and all-offload cost algorithms, the proposed algorithm can effectively guarantee the success rates of task offloading and execution. In addition, when the execution task volume is 200 Kbit, the total system cost of the proposed algorithm can be reduced by at least 1.3%, and when the execution task volume is 600 Kbit, the total cost of system execution tasks can be reduced by up to 16.7%.
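    The dueling Q-network named above combines a state-value stream and an advantage stream; a standard sketch of that head is shown below (layer sizes are illustrative, not the paper's configuration).

    ```python
    # Standard dueling Q-network head (the general technique named in the abstract).
    import torch
    import torch.nn as nn

    class DuelingQNet(nn.Module):
        def __init__(self, state_dim, n_actions, hidden=128):
            super().__init__()
            self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)              # state value V(s)
            self.advantage = nn.Linear(hidden, n_actions)  # action advantages A(s, a)

        def forward(self, s):
            h = self.feature(s)
            v, a = self.value(h), self.advantage(h)
            # subtract the mean advantage so V and A are identifiable
            return v + a - a.mean(dim=-1, keepdim=True)
    ```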

  • Chinese Named Entity Recognition Method Based on Dictionary Semantic Knowledge Enhancement

    Tianbin WANG  Ruiyang HUANG  Nan HU  Huansha WANG  Guanghan CHU
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2023/02/15 | Vol: E106-D No:5 | Page(s): 1010-1017

    Chinese Named Entity Recognition (NER) is a fundamental technology in the field of Chinese Natural Language Processing. It is extensively adopted in information extraction, intelligent question answering, and knowledge graphs. Nevertheless, due to the diversity and complexity of Chinese, most Chinese NER methods fail to sufficiently capture character-granularity semantics, which affects their performance. In this work, we propose DSKE-Chinese NER: Chinese Named Entity Recognition based on Dictionary Semantic Knowledge Enhancement. We integrate character-granularity semantic information into the character vector space and acquire vector representations containing semantic information through an attention mechanism. In addition, we verify the appropriate number of semantic layers through comparative experiments. Experiments on public Chinese datasets such as Weibo, Resume, and MSRA show that the model outperforms character-based LSTM baselines.
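    A hypothetical sketch of the enhancement idea: a character vector attends over the embeddings of its dictionary senses and is enriched with the attention-weighted summary. The paper's actual architecture and layer configuration are not reproduced.

    ```python
    # Hypothetical sketch: fuse a character representation with its dictionary sense
    # embeddings via attention (not the paper's exact architecture).
    import torch

    def fuse_with_senses(char_vec, sense_vecs):
        """char_vec: (d,); sense_vecs: (k, d) embeddings of dictionary senses."""
        scores = sense_vecs @ char_vec / char_vec.size(0) ** 0.5
        weights = torch.softmax(scores, dim=0)
        sense_summary = weights @ sense_vecs        # (d,) attention-weighted sense info
        return char_vec + sense_summary             # semantics-enriched character vector
    ```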

  • Secure Revocation Features in eKYC - Privacy Protection in Central Bank Digital Currency

    Kazuo TAKARAGI  Takashi KUBOTA  Sven WOHLGEMUTH  Katsuyuki UMEZAWA  Hiroki KOYANAGI
    PAPER
    Publicized: 2022/10/07 | Vol: E106-A No:3 | Page(s): 325-332

    Central bank digital currencies require eKYC to verify online whether a trading customer is eligible. When an organization issues an ID proof of a customer for eKYC, that proof is usually produced in practice by a hierarchy of issuers. However, the customer wants to disclose only part of the issuer chain and documents to the trading partner due to privacy concerns. In this research, delegatable anonymous credentials (DAC) and zero-knowledge range proofs (ZKRP) allow customers to replace arbitrary parts of the delegation chain and message body with range proofs expressed as inequalities. In this way, customers can protect the privacy they need under their own control. A zero-knowledge proof is applied to prove the inequality between two time stamps issued by the time-stamp server (signature presentation, public-key revocation, or non-revocation) without disclosing the signature content or the stamped time. This makes it possible to prove that the registration information of a national ID card is valid or invalid while keeping the user's personal information anonymous. This research aims to contribute to the realization of a sustainable financial system based on self-sovereign identity management with privacy-enhanced PKI.

  • A CFAR Detection Algorithm Based on Clutter Knowledge for Cognitive Radar

    Kaixuan LIU  Yue LI  Peng WANG  Xiaoyan PENG  Hongshu LIAO  Wanchun LI
    PAPER-Digital Signal Processing
    Publicized: 2022/09/13 | Vol: E106-A No:3 | Page(s): 590-599

    Against a background of non-homogeneous and dynamically time-varying clutter, both the processing ability and the detection performance of traditional constant false alarm rate (CFAR) detection algorithms are significantly reduced. This paper proposes a new CFAR detection algorithm based on clutter knowledge (CK-CFAR) to improve the adaptability of radar detection in complex clutter backgrounds. Using acquired prior knowledge of the clutter, the algorithm dynamically selects parameters according to changes in the background clutter and calculates the detection threshold. Simulation results show that, compared with detection algorithms such as CA-CFAR, GO-CFAR, SO-CFAR, and OS-CFAR, CK-CFAR has excellent detection performance against homogeneous clutter and clutter edges. The algorithm helps radar adapt to clutter with different distribution characteristics, effectively enhancing radar detection in complex environments, and is well aligned with the development direction of cognitive radar.
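    The knowledge-driven thresholding can be pictured with the simplified cell-averaging sketch below, in which the scaling factor is looked up from prior clutter knowledge for each cell; the paper's actual parameter-selection rule is not reproduced.

    ```python
    # Simplified cell-averaging CFAR sketch in which the scaling factor is chosen
    # from prior clutter knowledge (hypothetical interface), in the spirit of CK-CFAR.
    import numpy as np

    def ck_cfar(power, clutter_params, n_train=16, n_guard=2):
        """power: 1-D array of squared-magnitude samples.
        clutter_params: callable mapping a cell index to a scaling factor alpha
        chosen from prior clutter knowledge (assumed placeholder)."""
        detections = np.zeros_like(power, dtype=bool)
        half = n_train // 2 + n_guard
        for i in range(half, len(power) - half):
            left = power[i - half : i - n_guard]
            right = power[i + n_guard + 1 : i + half + 1]
            noise_level = np.mean(np.concatenate([left, right]))  # training-cell average
            alpha = clutter_params(i)                              # knowledge-driven scaling
            detections[i] = power[i] > alpha * noise_level
        return detections
    ```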

  • GUI System to Support Cardiology Examination Based on Explainable Regression CNN for Estimating Pulmonary Artery Wedge Pressure

    Yuto OMAE  Yuki SAITO  Yohei KAKIMOTO  Daisuke FUKAMACHI  Koichi NAGASHIMA  Yasuo OKUMURA  Jun TOYOTANI
    LETTER-Biocybernetics, Neurocomputing
    Publicized: 2022/12/08 | Vol: E106-D No:3 | Page(s): 423-426

    In this article, a GUI system is proposed to support clinical cardiology examinations. The proposed system estimates “pulmonary artery wedge pressure” from patients' chest radiographs using an explainable regression-based convolutional neural network. The GUI system was validated through an effectiveness survey of 23 licensed cardiology physicians. The results indicated that many physicians considered the GUI system to be effective.

  • Chinese Lexical Sememe Prediction Using CilinE Knowledge

    Hao WANG  Sirui LIU  Jianyong DUAN  Li HE  Xin LI
    PAPER-Language, Thought, Knowledge and Intelligence
    Publicized: 2022/08/18 | Vol: E106-A No:2 | Page(s): 146-153

    Sememes are the smallest semantic units of human languages, and their composition can represent the meaning of words. Sememes have been successfully applied to many downstream applications in the natural language processing (NLP) field. Annotating a word's sememes depends on language experts, which is both time-consuming and labor-intensive, limiting the large-scale application of sememes. Researchers have proposed sememe prediction methods to automatically predict sememes for words. However, existing sememe prediction methods focus on information about the word itself, ignoring expert-annotated knowledge bases, which indicate the relations between words and should be valuable in sememe prediction. Therefore, we aim to incorporate expert-annotated knowledge bases into the sememe prediction process. To achieve this, we propose a CilinE-guided sememe prediction model that employs an existing word knowledge base, CilinE, to remodel sememe prediction from a relational perspective. Experiments on HowNet, a widely used Chinese sememe knowledge base, show that CilinE has an obvious positive effect on sememe prediction. Furthermore, our proposed method can be integrated into existing methods and significantly improves prediction performance. We will release the data and code to the public.
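    One way to picture the relational view is the toy sketch below, which predicts a word's sememes by voting over the sememes of its CilinE-related words; this is an assumed simplification, not the paper's model.

    ```python
    # Toy relational sketch: predict a word's sememes from the sememes of its
    # CilinE-related words (an assumed simplification, not the paper's method).
    from collections import Counter

    def predict_sememes(word, cilin_neighbors, known_sememes, top_k=5):
        """cilin_neighbors(word) -> related words from CilinE (assumed interface);
        known_sememes: dict mapping a word to its set of annotated sememes."""
        votes = Counter()
        for neighbor in cilin_neighbors(word):
            for sememe in known_sememes.get(neighbor, ()):
                votes[sememe] += 1
        return [s for s, _ in votes.most_common(top_k)]
    ```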

  • Migration Model for Distributed Server Allocation

    Souhei YANASE  Fujun HE  Haruto TAKA  Akio KAWABATA  Eiji OKI
    PAPER-Network Management/Operation
    Publicized: 2022/07/05 | Vol: E106-B No:1 | Page(s): 44-56

    This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server to minimize the communication delay. In the conventional model, to avoid instability, a user cannot migrate to another server. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem and prove that the considered problem is NP-complete. We introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared to the conventional model.
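    For reference, a generic user-to-server assignment ILP of the kind described above (minimize total delay under single-assignment and capacity constraints) looks as follows; the paper's migration-specific variables and constraints are not reproduced.

    ```latex
    % Generic user-to-server assignment ILP (background form), where d_{us} is the
    % delay between user u and server s and C_s is the capacity of server s.
    \min \sum_{u \in U} \sum_{s \in S} d_{us}\, x_{us}
    \quad \text{s.t.} \quad
    \sum_{s \in S} x_{us} = 1 \;\; \forall u \in U, \qquad
    \sum_{u \in U} x_{us} \le C_s \;\; \forall s \in S, \qquad
    x_{us} \in \{0, 1\}
    ```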

  • Holmes: A Hardware-Oriented Optimizer Using Logarithms

    Yoshiharu YAMAGISHI  Tatsuya KANEKO  Megumi AKAI-KASAYA  Tetsuya ASAI
    PAPER
    Publicized: 2022/05/11 | Vol: E105-D No:12 | Page(s): 2040-2047

    Edge computing, which has been gaining attention in recent years, has many advantages, such as reducing the load on the cloud, not being affected by the communication environment, and providing excellent security. Therefore, many researchers have attempted to implement neural networks, which are representative of machine learning, in edge computing. Neural networks can be divided into inference and learning parts; however, in contrast to the inference part, there has been little research on implementing the learning part in edge computing. This is because learning requires more memory and computation than inference, easily exceeding the limit of resources available for edge computing. To overcome this problem, this research focuses on the optimizer, which is the heart of learning. In this paper, we introduce our new optimizer, hardware-oriented logarithmic momentum estimation (Holmes), which incorporates perspectives not found in existing optimizers in terms of the characteristics and strengths of hardware. The performance of Holmes was evaluated by comparing it with other optimizers with respect to learning progress and convergence speed. Important aspects of hardware implementation, such as memory and operation requirements, are also discussed. The results show that Holmes is a good match for edge computing, with relatively low resource requirements and fast learning convergence. Holmes will help create an era in which advanced machine learning can be realized on edge computing.
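    The abstract does not give the Holmes update rule, so the sketch below is speculative: it shows only the general idea of a momentum-style update whose learning rate, decay factor, and step magnitudes are restricted to powers of two, so that multiplications reduce to bit shifts in hardware.

    ```python
    # Speculative sketch only: the Holmes update rule is not given in the abstract.
    # This illustrates a generic momentum update with power-of-two coefficients and
    # power-of-two quantized step magnitudes (shift-friendly in hardware).
    import math

    def log2_quantize(x):
        """Round a value's magnitude to the nearest power of two (0 stays 0)."""
        if x == 0:
            return 0.0
        return math.copysign(2.0 ** round(math.log2(abs(x))), x)

    def momentum_step(params, grads, velocity, lr=2**-6, beta=1 - 2**-3):
        """params, grads, velocity: lists of floats; lr and beta are hypothetical
        power-of-two choices."""
        for i, g in enumerate(grads):
            velocity[i] = beta * velocity[i] + (1 - beta) * g
            params[i] -= log2_quantize(lr * velocity[i])   # shift-friendly update magnitude
        return params, velocity
    ```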

  • Edge Computing-Enhanced Network Redundancy Elimination for Connected Cars

    Masahiro YOSHIDA  Koya MORI  Tomohiro INOUE  Hiroyuki TANAKA
    PAPER
    Publicized: 2022/05/27 | Vol: E105-B No:11 | Page(s): 1372-1379

    Connected cars generate a huge amount of Internet of Things (IoT) sensor information called Controller Area Network (CAN) data. Recently, there has been growing interest in collecting CAN data from connected cars in a cloud system to enable life-critical use cases such as safe driving support. Although each CAN data packet is very small, a connected car generates thousands of CAN data packets per second. Therefore, real-time CAN data collection from connected cars in a cloud system is one of the most challenging problems in the current IoT. In this paper, we propose an Edge computing-enhanced network Redundancy Elimination service (EdgeRE) for CAN data collection. In developing EdgeRE, we designed a CAN data compression architecture that combines in-vehicle computers, edge datacenters, and a public cloud system. EdgeRE includes the ideas of hierarchical data compression and dynamic data buffering at edge datacenters for real-time CAN data collection. Across a wide range of field tests with connected cars and an edge computing testbed, we show that EdgeRE reduces bandwidth usage by 88% and the number of packets by 99%.
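    The buffering-and-compression idea can be sketched as below: an edge buffer accumulates many small CAN-like records and uploads them as one compressed batch, which is where the packet-count and bandwidth reductions come from. The batch size and codec here are arbitrary choices, not the paper's design.

    ```python
    # Illustrative sketch of buffering-then-compressing at an edge datacenter
    # (batch size and codec are arbitrary choices, not EdgeRE's actual design).
    import json
    import zlib

    class EdgeBuffer:
        def __init__(self, upload, batch_size=1000):
            self.upload = upload          # callable that ships one compressed blob to the cloud
            self.batch_size = batch_size
            self.records = []

        def add(self, record: dict):
            self.records.append(record)
            if len(self.records) >= self.batch_size:
                self.flush()

        def flush(self):
            if not self.records:
                return
            blob = zlib.compress(json.dumps(self.records).encode("utf-8"))
            self.upload(blob)             # one upload replaces thousands of tiny packets
            self.records = []
    ```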

  • Incentive-Stable Matching Protocol for Service Chain Placement in Multi-Operator Edge System

    Jen-Yu WANG  Li-Hsing YEN  Juliana LIMAN
    PAPER
    Publicized: 2022/05/27 | Vol: E105-B No:11 | Page(s): 1353-1360

    Network Function Virtualization (NFV) enables the embedding of Virtualized Network Functions (VNFs) into commodity servers. A sequence of VNFs can be chained in a particular order to form a service chain (SC). This paper considers placing multiple SCs in a geo-distributed edge system owned by multiple service providers (SPs). For a pair of an SC and an SP, minimizing the placement cost while meeting a latency constraint is formulated as an integer programming problem. As SC clients and SPs are self-interested, we study a matching between SCs and SPs that respects individual interests yet maximizes social welfare. The proposed matching approach excludes any blocking individual and blocking pair that could jeopardize the stability of the result. Simulation results show that the proposed approach performs well in terms of social welfare but is suboptimal with respect to the number of placed SCs.
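    A textbook deferred-acceptance sketch of matching SCs to SPs is given below: SCs propose in order of placement cost and each SP keeps the proposals it ranks best within its capacity. The paper's incentive-stable protocol additionally handles latency feasibility and strategic behavior not modeled here.

    ```python
    # Textbook many-to-one deferred-acceptance sketch (not the paper's exact protocol).
    def deferred_acceptance(sc_prefs, sp_rank, sp_capacity):
        """sc_prefs[sc]   : list of SPs ordered by that SC's preference (e.g., cost).
        sp_rank[sp][sc]   : SP's ranking value for an SC (lower is better).
        sp_capacity[sp]   : number of SCs the SP can host."""
        next_choice = {sc: 0 for sc in sc_prefs}
        held = {sp: [] for sp in sp_capacity}
        free = list(sc_prefs)
        while free:
            sc = free.pop()
            if next_choice[sc] >= len(sc_prefs[sc]):
                continue                            # SC exhausted its list; stays unmatched
            sp = sc_prefs[sc][next_choice[sc]]
            next_choice[sc] += 1
            held[sp].append(sc)
            held[sp].sort(key=lambda c: sp_rank[sp][c])
            if len(held[sp]) > sp_capacity[sp]:
                rejected = held[sp].pop()           # drop the SP's least-preferred proposal
                free.append(rejected)
        return held
    ```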

  • Operations Smart Contract to Realize Decentralized System Operations Workflow for Consortium Blockchain

    Tatsuya SATO  Taku SHIMOSAWA  Yosuke HIMURA
    PAPER
    Publicized: 2022/05/27 | Vol: E105-B No:11 | Page(s): 1318-1331

    Enterprises have paid attention to consortium blockchains such as Hyperledger Fabric, one of the most promising platforms, for efficient decentralized transactions that do not depend on any particular organization. A consortium blockchain-based system is typically built across multiple organizations. In such systems, operating the system across multiple organizations in a decentralized manner is essential to maintain the value of introducing consortium blockchains. Decentralized system operations have recently become realistic with the evolution of consortium blockchains. For instance, with the release of Hyperledger Fabric v2.x, individual operational tasks for a blockchain network, such as executing configuration changes of channels (Fabric's sub-networks) and upgrading chaincodes (Fabric's smart contracts), can be partially executed in a decentralized manner. However, the operations workflows also include the preceding procedure of pre-sharing, coordinating, and pre-agreeing on the operational information (e.g., configuration parameters) among organizations before operations can be executed, and this preceding procedure relies on costly manual tasks. To realize efficient decentralized operations workflows for consortium blockchain-based systems in general, we propose a decentralized inter-organizational operations method, called Operations Smart Contract (OpsSC), which defines an operations workflow as a smart contract. Furthermore, we design and implement OpsSC for blockchain network operations with Hyperledger Fabric v2.x. This paper presents OpsSC for operating channels and chaincodes, which are essential for managing blockchain networks, and clarifies the detailed workflows of those operations. A cost evaluation based on an estimation model shows that the total operational cost of executing a typical operational scenario, adding an organization to a blockchain network of ten organizations, could be reduced by 54 percent compared with a conventional script-based method. The implementation of OpsSC has been open-sourced and registered as one of the Hyperledger Labs projects, which host experimental projects approved by Hyperledger.
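    Although OpsSC itself is implemented as Hyperledger Fabric chaincode, the workflow idea can be sketched conceptually as below: an operations proposal is executed only after every participating organization has recorded its approval in shared state. All names and the approval rule are illustrative assumptions, not the actual OpsSC interface.

    ```python
    # Conceptual sketch only (real OpsSC runs as Fabric chaincode, not Python):
    # a proposed operation executes once all participating organizations approve.
    class OpsWorkflow:
        def __init__(self, orgs, execute):
            self.orgs = set(orgs)
            self.execute = execute        # callable performing the agreed operation
            self.proposals = {}           # proposal_id -> {"params": ..., "approvals": set()}

        def propose(self, proposal_id, params, proposer):
            self.proposals[proposal_id] = {"params": params, "approvals": {proposer}}

        def approve(self, proposal_id, org):
            p = self.proposals[proposal_id]
            p["approvals"].add(org)
            if p["approvals"] == self.orgs:   # all organizations agreed
                self.execute(p["params"])     # e.g., channel config update or chaincode upgrade
    ```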
