In Japan, research on spatial-transmission Wireless Power Transfer/Transmission (WPT) for long-distance power transmission has been conducted ahead of the rest of the world; however, until 2022 there was no category for it under the Radio Law, and it was treated as an experimental station. The authors are working on Japanese institutionalization (revision of ministerial ordinances) and global standardization of spatial-transmission WPT for social implementation. This paper describes Japanese and international institutionalization and standardization trends. In addition, as the latest R&D trend and the next step of institutionalization, the authors introduce two national projects being pursued by industry, academia, and government for Step 2, which relaxes the scope of use and the restrictions of Step 1 so that WPT can be used for a wider range of applications. The first is the Cross-ministerial Strategic Innovation Promotion Program (SIP) Phase 2, in which we conducted R&D on a “WPT system for sensor networks and mobile devices”. This R&D covers detecting and avoiding people so that radio exposure does not exceed protection guidelines, and detecting incumbent radios and avoiding harmful interference so that more power can be transmitted under coexistence conditions. The other is “Research and Development for Expansion of Radio Resources,” to be conducted by the Ministry of Internal Affairs and Communications (MIC) over four years from FY2022. Together with the SIP results mentioned above, this project concretely advances R&D toward Step 2 institutionalization.
Yoshinori ITOTAGAWA Koma ATSUMI Hikaru SEBE Daisuke KANEMOTO Tetsuya HIROSE
This paper describes a programmable differential bandgap reference (PD-BGR) for ultra-low-power IoT (Internet-of-Things) edge node devices. The PD-BGR consists of a current generator (CG) and a differential voltage generator (DVG). The CG is based on a bandgap reference (BGR) and generates an operating current and a voltage, while the DVG generates another voltage from the current. A differential voltage reference is obtained by taking the difference between the two voltages. The PD-BGR can produce a programmable differential output voltage by changing the multipliers of the MOSFETs in a differential pair and the resistance with digital codes. Simulation results showed that the proposed PD-BGR can generate 25- to 200-mV reference voltages in 25-mV steps within a ±0.7% temperature inaccuracy over a temperature range from -20 to 100°C. A Monte Carlo simulation showed that the coefficient of variation of the reference was within 1.1%. Measurement results demonstrated that our prototype chips generate stable programmable differential output voltages, in close agreement with the simulation results. The average power consumption was only 88.4 nW, with a voltage error of -4/+3 mV across 5 samples.
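As a rough behavioral illustration of this programmability (assuming a 3-bit code and a linear code-to-voltage mapping, which are illustrative rather than the chip's actual decoding), the 25- to 200-mV range in 25-mV steps implies eight selectable levels:

```python
# Behavioral sketch of the PD-BGR's programmable output (illustrative only).
# Assumption: a 3-bit code selects one of the 25-mV steps from 25 to 200 mV.
V_STEP = 0.025  # 25-mV step reported in the abstract

def pdbgr_vref(code: int) -> float:
    """Nominal differential reference voltage for a 3-bit digital code."""
    if not 0 <= code <= 7:
        raise ValueError("code must fit in 3 bits")
    return V_STEP * (code + 1)  # 25 mV (code 0) ... 200 mV (code 7)

for c in range(8):
    print(f"code {c}: {pdbgr_vref(c) * 1000:.0f} mV")
```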
Youquan XIAN Lianghaojie ZHOU Jianyong JIANG Boyi WANG Hao HUO Peng LIU
In recent years, blockchain has been widely applied in the Internet of Things (IoT). The blockchain oracle, as a bridge for data communication between the blockchain and the off-chain world, has also received significant attention. However, the numerous and heterogeneous devices in the IoT pose great challenges to the efficiency and security of data acquisition by oracles. We find that the matching relationship between data sources and oracle nodes greatly affects the efficiency and service quality of the entire oracle system. To address these issues, this paper proposes a distributed and efficient oracle solution tailored for the IoT, enabling fast acquisition of real-time off-chain data. Specifically, we first design a distributed oracle architecture that combines Trusted Execution Environment (TEE) devices and ordinary devices to improve system scalability, considering the heterogeneity of IoT devices. Second, based on the trusted node information provided by the TEE, we determine the matching relationship between nodes and data sources, assigning appropriate nodes to tasks to enhance system efficiency. Simulation experiments show that our proposed solution effectively improves the efficiency and service quality of the system, reducing the average response time by approximately 9.92% compared with conventional approaches.
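For illustration, the matching step could be sketched as a simple greedy assignment under estimated latencies; this is an assumed heuristic, not the paper's algorithm, and the preference for TEE-equipped nodes is likewise an assumption:

```python
# Illustrative greedy matching of oracle nodes to data sources (not the
# paper's exact algorithm). Each node is assigned the pending data source
# it can serve fastest; TEE-equipped nodes are assumed trusted and tried first.

def match_nodes_to_sources(nodes, sources, est_latency):
    """nodes: list of (node_id, has_tee); sources: list of source ids;
    est_latency: dict[(node_id, source_id)] -> estimated response time (s)."""
    assignment = {}
    pending = set(sources)
    # Try TEE nodes first, then ordinary ones.
    for node_id, _ in sorted(nodes, key=lambda n: not n[1]):
        if not pending:
            break
        best = min(pending, key=lambda s: est_latency[(node_id, s)])
        assignment[node_id] = best
        pending.discard(best)
    return assignment

nodes = [("n1", True), ("n2", False), ("n3", True)]
sources = ["api_a", "api_b"]
lat = {(n, s): 0.1 * (i + j) + 0.05
       for i, (n, _) in enumerate(nodes) for j, s in enumerate(sources)}
print(match_nodes_to_sources(nodes, sources, lat))
```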
Artificial intelligence and the introduction of Internet of Things technologies have benefited from technological advances and new automated computer system technologies. As a result, it is now possible to integrate them into a single offline industrial system. This is accomplished through machine-to-machine communication, which eliminates the human factor. The purpose of this article is to examine security systems for machine-to-machine communication that rely on identification and authentication algorithms for real-time monitoring. The article investigates security methods for quickly resolving data processing issues by using the Security Operations Center's main machine to identify and authenticate devices from 19 different machines. The results indicate that when machines are running offline and performing various tasks, both individual machines and the system as a whole can be exposed to data leaks and malware attacks. The study examines the operation of 19 computers, 7 of which were subjected to data leakage and malware attacks. AnyLogic software is used to create visual representations of the results using wireless networks and algorithms based on previously processed methods. The W76S is used as a protective element within intelligent sensors due to its built-in memory protection. For 4 machines, the data leakage time with malware attacks was 70 s. For 10 machines, the duration was 150 s with 3 attacks. Machine 15 experienced an attack lasting 190 s and involving 6 malware attacks, while machine 19 experienced an attack lasting 200 s and involving 7 malware attacks. The highest numbers indicate that attempting to hack a system increases the risk of damaging a device, potentially causing the entire system of connected devices to fail. Thus, illegal attacks using malware can be identified over time, and effects on data processing can be prevented by intelligent control. The results reveal that applying identification and authentication methods using a protocol increases cyber-physical system security while also allowing real-time monitoring of offline system security.
Yuhei YAMAMOTO Naoki SHIBATA Tokiyoshi MATSUDA Hidenori KAWANISHI Mutsumi KIMURA
The thermoelectric effect of Ga-Sn-O (GTO) thin films has been investigated for Internet-of-Things applications. It is found that amorphous GTO thin films provide higher power factors (PF) than polycrystalline ones, because grain boundaries block electron conduction in the polycrystalline films. It is also found that GTO thin films annealed in vacuum provide a higher PF than those annealed in air, because oxygen vacancies are terminated in the films annealed in air. The PF and the dimensionless figure of merit (ZT) are not outstanding, but the cost effectiveness is excellent, which is the most important factor for some Internet-of-Things applications.
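For reference, the quantities quoted above follow the standard thermoelectric definitions, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature:

\[ \mathrm{PF} = S^{2}\sigma, \qquad ZT = \frac{S^{2}\sigma}{\kappa}\,T \]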
Shota AKIYOSHI Yuzo TAENAKA Kazuya TSUKAMOTO Myung LEE
Cross-domain data fusion is becoming a key driver in the growth of numerous and diverse applications in the Internet of Things (IoT) era. We have proposed the concept of a new information platform, the Geo-Centric Information Platform (GCIP), that enables IoT data fusion based on geolocation, i.e., produces spatio-temporal content (STC) and then provides the STC to users. In this environment, users cannot know in advance “when,” “where,” or “what type” of STC is being generated, because the type and timing of STC generation vary dynamically with the diversity of IoT data generated in each geographical area. This makes it difficult to directly search for a specific STC requested by the user using a content identifier (the domain name of a URI or a content name). To solve this problem, a new content discovery method that does not directly specify content identifiers is needed, one that takes into account (1) spatial and (2) temporal constraints. Our previous study proposed a content discovery method that considered only spatial constraints, not temporal ones. This paper proposes a new content discovery method that matches user requests with content metadata (topic) characteristics while taking both spatial and temporal constraints into account. Simulation results show that the proposed method successfully discovers appropriate STC in response to a user request.
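A minimal sketch of such matching under both constraints might look as follows; the field names and catalog structure are assumptions for illustration, not the GCIP schema:

```python
# Minimal sketch of metadata (topic) matching under spatial and temporal
# constraints (illustrative; field names are assumptions, not the GCIP schema).
from dataclasses import dataclass

@dataclass
class STC:
    topic: str         # content topic keyword
    mesh_id: str       # geographical area identifier
    created_at: float  # generation time (UNIX seconds)

def discover(request_topic, request_mesh, now, max_age, catalog):
    """Return STC items whose topic matches the request, generated in the
    requested area (spatial) and recently enough (temporal)."""
    return [c for c in catalog
            if c.topic == request_topic           # topic match
            and c.mesh_id == request_mesh         # spatial constraint
            and now - c.created_at <= max_age]    # temporal constraint

catalog = [STC("traffic", "mesh42", 1000.0), STC("traffic", "mesh42", 400.0)]
print(discover("traffic", "mesh42", now=1100.0, max_age=300.0, catalog=catalog))
```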
Yuki ABE Kazutoshi KOBAYASHI Jun SHIOMI Hiroyuki OCHI
Energy harvesting has been widely investigated as a potential solution for supplying power to Internet of Things (IoT) devices. Computing devices must operate intermittently rather than continuously, because harvested energy is unstable and some IoT applications can be periodic. Therefore, processors for IoT devices with intermittent operation must feature a zero-standby-power hibernation mode in addition to an energy-efficient normal mode. In this paper, we describe the layout design and measurement results of a nonvolatile standard cell memory (NV-SCM) and nonvolatile flip-flops (NV-FF) with a nonvolatile memory using a Fishbone-in-Cage Capacitor (FiCC), suitable for IoT processors with intermittent operation. They can be fabricated in any conventional CMOS process without any additional mask. The NV-SCM and NV-FF were fabricated in a 180 nm CMOS process technology. The bit-cell area overheads due to nonvolatility are 74% in the NV-SCM and 29% in the NV-FF, respectively. We confirmed full functionality of the NV-SCM and NV-FF. Simulation shows that a nonvolatile system using the proposed NV-SCM and NV-FF can reduce energy consumption by 24.3% compared with a volatile system when the hibernation-to-normal operation time ratio is 500.
Thin Tharaphe THEIN Yoshiaki SHIRAISHI Masakatu MORII
With a rapidly escalating number of sophisticated cyber-attacks, protecting Internet of Things (IoT) networks against unauthorized activity is a major concern. The detection of malicious attack traffic is thus crucial for preventing unwanted traffic in IoT networks. However, existing traditional malicious traffic detection systems, which rely on a supervised machine learning approach, need a considerable number of benign and malware traffic samples to train the machine learning models. Moreover, in the case of zero-day attacks, only a few labeled traffic samples are available for analysis. To deal with this, we propose a few-shot malicious IoT traffic detection system based on a prototypical graph neural network. The proposed approach does not require prior knowledge of network payload binaries or network traffic signatures. The model is trained on labeled traffic data and tested to evaluate its ability to detect new types of attacks when only a few labeled traffic samples are available. The proposed detection system first organizes the network traffic into bidirectional flows and visualizes each binary traffic flow as a color image. A neural network is then applied to the visualized traffic to extract important features. After that, using the proposed few-shot graph neural network approach, the model is trained on different few-shot tasks to generalize it to new, unseen attacks. The proposed model is evaluated on a network traffic dataset consisting of benign traffic and traffic corresponding to six types of attacks. The results reveal that our proposed model achieves F1 scores of 0.91 and 0.94 in 5-shot and 10-shot classification, respectively, outperforming the baseline models.
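The prototypical classification step at the core of such a model can be sketched as follows; the embedding network over the visualized flows is abstracted away, and the 2-way 2-shot toy data are for illustration only:

```python
# Sketch of the prototypical classification step: class prototypes are the
# mean embeddings of the few labeled (support) samples, and a query is
# assigned to the nearest prototype. The embedding network itself (over the
# visualized traffic flows) is abstracted away here.
import numpy as np

def prototypes(support_embs, support_labels):
    """Mean embedding per class."""
    classes = sorted(set(support_labels))
    return classes, np.stack([
        np.mean([e for e, y in zip(support_embs, support_labels) if y == c],
                axis=0) for c in classes])

def classify(query_emb, classes, protos):
    dists = np.linalg.norm(protos - query_emb, axis=1)  # Euclidean distance
    return classes[int(np.argmin(dists))]

rng = np.random.default_rng(0)
embs = [rng.normal(c, 0.1, 8) for c in (0, 0, 1, 1)]  # 2-way 2-shot toy data
classes, protos = prototypes(embs, ["benign", "benign", "ddos", "ddos"])
print(classify(rng.normal(1, 0.1, 8), classes, protos))  # -> "ddos"
```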
Guosheng ZHAO Yang WANG Jian WANG
Internet of Things (IoT) devices are widely used in various fields. However, their limited computing resources make them extremely vulnerable and difficult to protect effectively. Traditional intrusion detection systems (IDS) focus on high accuracy and a low false alarm rate (FAR), which often gives them spatiotemporal complexity too high for deployment on IoT devices. To address these problems, this paper proposes an IoT intrusion detection model based on the light gradient boosting machine (LightGBM). First, a one-dimensional convolutional neural network (CNN) is used to extract features from network traffic and reduce the feature dimensionality. Then, LightGBM is used to classify the type of network traffic. LightGBM inherits the advantages of gradient boosting trees while being more lightweight, with a faster decision tree construction process. Experiments on the TON-IoT and BoT-IoT datasets show that the proposed model is stronger in performance and more lightweight than the comparison models. It shortens the prediction time by 90.66% and surpasses the comparison models in accuracy and other performance metrics, with strong detection capability for denial of service (DoS) and distributed denial of service (DDoS) attacks. Experimental results on a testbed built with IoT devices such as a Raspberry Pi show that the proposed model can perform effective, real-time intrusion detection on IoT devices.
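A minimal sketch of this two-stage pipeline is shown below, with illustrative shapes and hyperparameters rather than the paper's settings; it assumes PyTorch for the 1-D CNN and the lightgbm package for the classifier:

```python
# Minimal sketch of the two-stage pipeline: a 1-D CNN reduces each traffic
# record to a compact feature vector, then LightGBM classifies it.
# Shapes and hyperparameters are illustrative, not the paper's settings.
import numpy as np
import lightgbm as lgb
import torch
import torch.nn as nn

cnn = nn.Sequential(                        # 1-D CNN feature extractor
    nn.Conv1d(1, 8, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(4), nn.Flatten())  # -> 32-dim features

X_raw = torch.randn(200, 1, 64)             # 200 flows, 64 raw features each
y = np.random.randint(0, 2, 200)            # toy labels (benign/attack)

with torch.no_grad():
    X_feat = cnn(X_raw).numpy()             # reduced feature dimension

clf = lgb.LGBMClassifier(n_estimators=50)   # lightweight GBDT classifier
clf.fit(X_feat, y)
print(clf.predict(X_feat[:5]))
```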
Jiawen CHU Chunyun PAN Yafei WANG Xiang YUN Xuehua LI
Mobile edge computing (MEC) technology guarantees the privacy and security of large-scale data in the Narrowband-IoT (NB-IoT) by deploying MEC servers near base stations to provide sufficient computing, storage, and data processing capacity to meet the delay and energy consumption requirements of NB-IoT terminal equipment. For the NB-IoT MEC system, this paper proposes a resource allocation algorithm based on deep reinforcement learning to optimize the total cost of task offloading and execution. Since the formulated problem is a mixed-integer non-linear program (MINLP), we cast it as a multi-agent distributed deep reinforcement learning (DRL) problem and address it using a dueling Q-network algorithm. Simulation results show that, compared with a deep Q-network and the all-local and all-offload cost baselines, the proposed algorithm can effectively guarantee the success rates of task offloading and execution. In addition, when the execution task volume is 200 KBit, the total system cost of the proposed algorithm is reduced by at least 1.3%, and when the execution task volume is 600 KBit, the total cost of executing tasks is reduced by up to 16.7%.
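For concreteness, the dueling head used in such an algorithm typically splits the estimate into a state value and action advantages; the sketch below uses placeholder dimensions, not the paper's NB-IoT MEC state and action spaces:

```python
# Sketch of the dueling Q-network head used by each agent: the state value
# V(s) and advantages A(s,a) are estimated separately and recombined.
# Dimensions are placeholders, not the paper's state/action spaces.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim=10, n_actions=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)         # V(s)
        self.adv = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.adv(h)
        # Q(s,a) = V(s) + A(s,a) - mean_a A(s,a) (standard dueling combination)
        return v + a - a.mean(dim=-1, keepdim=True)

q = DuelingQNet()
offload_action = q(torch.randn(1, 10)).argmax(dim=-1)  # greedy offloading choice
print(offload_action.item())
```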
As the Internet has become prevalent, the popularity of net media has been growing, to the point that it has overtaken conventional mass media. However, TWtrends, the Twitter trends visualization system operated by our research team since 2019, indicates that many topics from TV programs frequently appear in Twitter trends. This study investigates the relationship between Twitter and TV programs by collecting information on Twitter trends and TV programs simultaneously. Although this study provides only a rough estimate of the volume of tweets that mention TV programs, the results show that tweets mention TV programs at a constant rate, which tends to increase on weekends. This tendency of TV-related tweets mirrors the audience rating survey results. The study outcome, together with the fact that many TV programs introduce topics popular on social media, implies a codependency between Internet media (social media) and mass media.
Ouyang JUNJIE Naoto YANAI Tatsuya TAKEMURA Masayuki OKADA Shingo OKAMURA Jason Paul CRUZ
The BGPsec protocol, an extension of the border gateway protocol (BGP) for Internet routing, uses digital signatures to guarantee the validity of routing information. However, the use of digital signatures in BGPsec routing information causes memory shortages in BGP routers, creating a gaping security hole in today's Internet. This problem hinders the practical realization and deployment of BGPsec. In this paper, we present APVAS (AS path validation based on aggregate signatures), a new protocol that reduces the memory consumption of routers running BGPsec when validating paths in routing information. APVAS relies on a novel aggregate signature scheme that compresses individually generated signatures into a single signature. Furthermore, we implement a prototype of APVAS on the BIRD Internet Routing Daemon and demonstrate its efficiency on actual BGP connections. Our results show that the routing tables of routers running BGPsec with APVAS have 20% lower memory consumption than those running conventional BGPsec. We also confirm the effectiveness of APVAS in the real world by using 800,000 routes, equivalent to the full route information on a global scale.
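To illustrate only the compression property that aggregate signatures provide, here is a toy multiplicative aggregation in Python. It is deliberately insecure, uses a placeholder modulus, and omits the pairing-based verification of a real scheme; it shows only how n signatures fold into a single value:

```python
# Toy illustration of the compression property of aggregate signatures:
# n individual signatures are folded into a single group element, so a
# router stores one value instead of n. This mimics the multiplicative
# aggregation of pairing-based schemes, but it is NOT secure and omits
# public (pairing-based) verification entirely.
import hashlib

P = 2**127 - 1  # toy prime modulus (placeholder group, not a real parameter)

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

def sign(sk: int, msg: bytes) -> int:
    return pow(h(msg), sk, P)      # sigma_i = H(m_i)^{sk_i}

def aggregate(sigs):
    agg = 1
    for s in sigs:                 # sigma = prod of sigma_i mod P
        agg = agg * s % P
    return agg

msgs = [b"AS64500->AS64501", b"AS64501->AS64502"]
sks = [11, 23]                     # toy secret keys
agg = aggregate(sign(sk, m) for sk, m in zip(sks, msgs))
print(agg.bit_length(), "bits stored instead of", 127 * len(msgs))
```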
Shuya ABE Go HASEGAWA Masayuki MURATA
It is now becoming important for mobile cellular networks to accommodate all kinds of Internet of Things (IoT) communications. However, the contention-based random access and radio resource allocation used in traditional cellular networks, which are optimized mainly for human communications, cannot efficiently handle large-scale IoT communications. For this reason, standardization activities such as Cellular-IoT (C-IoT) have emerged to serve IoT devices. However, few studies have evaluated the performance of C-IoT communications with periodic data transmissions, despite this being a common characteristic of many IoT communications. In this paper, we present performance analysis results for mobile cellular networks supporting periodic C-IoT communications, focusing on the performance differences between LTE and Narrowband-IoT (NB-IoT) networks. To achieve this, we first construct an analysis model for the end-to-end performance of both the control plane and the data plane, including random access procedures, radio resource allocation, bearer establishment in the Evolved Packet Core network, and user-data transmissions. In addition, we include the impact of the immediate release of radio resources proposed in 3GPP. Numerical evaluations show that NB-IoT can support up to 8.7 times more IoT devices than LTE, but imposes a significant delay on data transmissions. We also confirm that the immediate release of radio resources increases the network capacity by up to 17.7 times.
Hequn LI Die LIU Jiaxi LU Hai ZHAO Jiuqiang XU
Industrial networks need to provide reliable communication services, usually in a redundant transmission (RT) manner. In the past few years, several device-redundancy-based, layer 2 solutions have been proposed. However, with the evolution of industrial networks into the Industrial Internet, these methods can no longer work properly in non-redundant, layer 3 environments. In this paper, an SDN-based reliable communication framework is proposed for the Industrial Internet. It can provide reliable communication guarantees for mission-critical applications while serving non-critical applications in a best-effort manner. Specifically, it first implements an RT-based reliable communication method using the Industrial Internet's link-redundancy feature. Next, it presents a redundant synchronization mechanism to prevent end systems from receiving duplicate data. Finally, to maximize the number of critical flows (an NP-hard problem), two ILP-based routing and scheduling algorithms are also put forward: an optimal one (Scheduling with Unconstrained Routing, SUR) and a suboptimal one (Scheduling with Minimum-length Routing, SMR). Numerous simulations are conducted to evaluate the framework's effectiveness. The results show that it can provide reliable, duplicate-free services to end systems. Its reliable communication method outperforms the conventional best-effort transmission method in terms of packet delivery success ratio in layer 3 networks. In addition, its scheduling algorithm, SMR, performs well on the experimental topologies (with an average quality of 93% compared to SUR), and the time overhead is acceptable.
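As a small illustration of the duplicate-prevention idea behind such a redundant synchronization mechanism (the actual mechanism's bookkeeping is not specified here), a receiver can deliver only the first copy of each sequence number arriving over the redundant paths:

```python
# Sketch of the duplicate-elimination idea behind redundant transmission:
# a frame is sent over two disjoint paths with the same sequence number,
# and the receiver accepts the first copy only. The sliding-window details
# of a real mechanism are abstracted away.

class DedupReceiver:
    def __init__(self):
        self.seen = set()          # sequence numbers already delivered

    def receive(self, seq: int, payload: bytes):
        if seq in self.seen:
            return None            # duplicate from the redundant path: drop
        self.seen.add(seq)
        return payload             # first arrival: deliver to the end system

rx = DedupReceiver()
for seq, data in [(1, b"a"), (1, b"a"), (2, b"b")]:  # copies from 2 paths
    out = rx.receive(seq, data)
    print(seq, "delivered" if out else "dropped")
```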
Tetsuya IIZUKA Meikan CHIN Toru NAKURA Kunihiro ASADA
This paper proposes a reference-clock-less, quick-start-up CDR that resumes from a stand-by state with only a 4-bit preamble, utilizing a phase generator with an embedded Time-to-Digital Converter (TDC). The phase generator detects the 1-UI time interval using its internal TDC and works as a self-tunable digitally-controlled delay line. Once the phase generator coarsely tunes the recovered clock period, the residual time difference is finely tuned by a fine Digital-to-Time Converter (DTC). Since the tuning resolution of the fine DTC is matched by design to the time resolution of the TDC used as a phase detector, the fine tuning completes instantaneously. After the initial coarse and fine delay tuning, the feedback loop for frequency tracking is activated to improve the Consecutive Identical Digits (CID) tolerance of the CDR. With this frequency tracking architecture, the proposed CDR achieves a CID tolerance of more than 100 bits. A prototype implemented in a 65 nm bulk CMOS process operates at a 0.9-2.15 Gbps continuous rate. It consumes 5.1-8.4 mA in its active state and 42 μA of leakage current in its stand-by state from a 1.0 V supply.
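A simple behavioral model of the coarse-then-fine tuning might look as follows; the step sizes are arbitrary illustrative numbers, not the measured resolutions of the chip:

```python
# Behavioral sketch of the two-step tuning: the TDC first quantizes the
# 1-UI interval to a coarse code, then the fine DTC cancels the residue.
# Resolutions here are arbitrary illustrative numbers.
UI = 465.0        # target clock period, ps (~2.15 Gbps)
T_COARSE = 40.0   # assumed coarse delay-line step, ps
T_FINE = 2.5      # assumed fine DTC step, matched to the TDC resolution

coarse_code = int(UI // T_COARSE)         # TDC measurement of 1 UI
residue = UI - coarse_code * T_COARSE     # leftover after coarse tuning
fine_code = round(residue / T_FINE)       # one-shot fine correction
recovered = coarse_code * T_COARSE + fine_code * T_FINE
print(f"error after tuning: {recovered - UI:+.2f} ps")
```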
Peng YANG Yu YANG Puning ZHANG Dapeng WU Ruyan WANG
The integration of social networking concepts into the Internet of Things has led to the Social Internet of Things (SIoT) paradigm, in which trust evaluation is essential for secure interaction. In SIoT, when resource-constrained nodes respond to unexpected malicious services and malicious recommendations, the trust assessment is prone to inaccuracy, and the existing architecture carries a risk of privacy leakage. This paper proposes an edge-cloud collaborative trust evaluation architecture for SIoT that utilizes the resource advantages of the cloud and the edge to complete the trust assessment task collaboratively. An algorithm for evaluating the closeness of the relationship between nodes is designed to assess the reliability of neighbor nodes in SIoT. A trust computing algorithm with enhanced sensitivity is proposed, considering the fluctuation of trust values and the conflict between trust indicators, to enhance the sensitivity of identifying malicious behaviors. Simulation results show that, compared with traditional methods, the proposed trust evaluation method can effectively improve the success rate of interaction and reduce the false detection rate when dealing with malicious services and malicious recommendations.
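As an illustration of sensitivity-enhanced trust computing (the weights and the exact penalty form are assumptions, not the paper's formulation), trust can be updated so that malicious evidence is punished faster than good behavior is rewarded:

```python
# Illustrative trust update with enhanced sensitivity: negative evidence
# (a suspected malicious interaction) lowers trust faster than positive
# evidence raises it, and downward fluctuations are amplified. Weights are
# assumptions for illustration, not the paper's exact formulation.
def update_trust(trust, outcome, alpha=0.1, beta=0.4, fluct_penalty=0.2):
    """trust in [0,1]; outcome=1 for good service, 0 for malicious."""
    target = 1.0 if outcome else 0.0
    rate = alpha if outcome else beta               # punish faster than reward
    new = trust + rate * (target - trust)
    new -= fluct_penalty * abs(new - trust) * (1 - outcome)  # extra drop
    return min(max(new, 0.0), 1.0)

t = 0.8
for o in [1, 1, 0, 0]:                              # two good then two malicious
    t = update_trust(t, o)
    print(f"trust = {t:.3f}")
```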
Kyungmin KIM Jiung SONG Jong Wook KWAK
We propose a novel synthetic-benchmark generation model using partial time-series regression, called the Partial-Regression-Integrated Generic Model (PRIGM). PRIGM abstracts the unique characteristics of the input sensor data into generic time-series data, confirming the similarity of the generated data and evaluating the correctness of the synthetic benchmarks. The experimental results obtained with the proposed model and its formula verify that PRIGM preserves the time-series characteristics of empirical data, with an average difference in descriptive-statistics accuracy within 10.4% for complex time-series data.
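One generic reading of partial time-series regression can be sketched as follows: fit a low-order regression per segment and regenerate each segment from the fit plus residual noise. This is an illustration of the idea, not PRIGM's formula:

```python
# Illustrative sketch of partial time-series regression: the input series is
# split into segments, a low-order polynomial is fitted to each segment, and
# a synthetic series is regenerated from the fits plus residual noise.
# This is a generic reading of the idea, not PRIGM's exact formula.
import numpy as np

def generate_synthetic(series, seg_len=50, deg=2, rng=None):
    rng = rng or np.random.default_rng(0)
    out = []
    for i in range(0, len(series), seg_len):
        seg = series[i:i + seg_len]
        x = np.arange(len(seg))
        coef = np.polyfit(x, seg, deg)        # partial regression per segment
        fit = np.polyval(coef, x)
        noise = rng.normal(0, np.std(seg - fit), len(seg))
        out.append(fit + noise)               # keep trend, randomize residue
    return np.concatenate(out)

t = np.linspace(0, 10, 200)
sensor = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=200)
synthetic = generate_synthetic(sensor)
print(np.mean(sensor), np.mean(synthetic))    # similar descriptive statistics
```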
With the rapid growth of computation requirements in the Internet of Things, resource-limited edge servers usually need to cooperate to perform tasks. Most related studies assume a static cooperation approach, which might not suit the dynamic environment of edge computing. In this paper, we consider a dynamic cooperation approach that guides edge servers to form coalitions dynamically. This raises two issues: 1) how to guide them to form coalitions optimally, and 2) how to cope with the dynamic nature of the environment, where server statuses change as tasks are performed. The coalitional Markov decision process (CMDP) model proposed in our previous work can handle these issues well. However, its basic solution, coalitional Q-learning, cannot handle the large-scale problems that arise in edge computing when the number of tasks is large. Our response is a novel algorithm called deep coalitional Q-learning (DCQL). To sum up, we first formulate the dynamic cooperation problem of edge servers as a CMDP: each edge server is regarded as an agent, and the dynamic process is modeled as an MDP in which the agents observe the current state to form several coalitions. Each coalition takes an action that affects the environment, which then transitions to the next state, and the process repeats. We then propose DCQL, which incorporates a deep neural network and can therefore cope well with large-scale problems. DCQL guides the edge servers to form coalitions dynamically while optimizing a given objective. Furthermore, we run experiments to verify the effectiveness of our proposed algorithm in different settings.
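As a minimal illustration of the coalition-selection step (the training loop, exploration, and the actual CMDP state encoding are omitted, and all sizes are placeholders), a network can score candidate coalition structures for the observed state:

```python
# Minimal sketch of the coalition-selection step in a DCQL-style learner:
# a neural network estimates the value of each candidate coalition structure
# for the observed state, and the best structure is chosen. Epsilon-greedy
# exploration, replay, and training are omitted; sizes are placeholders.
import torch
import torch.nn as nn

N_SERVERS, N_STRUCTURES = 4, 6   # assumed numbers of servers / structures
qnet = nn.Sequential(            # state -> value of each coalition structure
    nn.Linear(N_SERVERS * 3, 64), nn.ReLU(),
    nn.Linear(64, N_STRUCTURES))

state = torch.randn(1, N_SERVERS * 3)  # e.g. load/queue/energy per server
with torch.no_grad():
    best_structure = qnet(state).argmax(dim=-1).item()
print("form coalition structure", best_structure)
```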
Distributed edge cloud computing is an important computation infrastructure for the Internet of Things (IoT), and its task offloading problem has attracted much attention recently. Most existing work on task offloading in distributed edge cloud computing assumes that each self-interested user owns one edge server and chooses whether to execute its tasks locally or to offload them to cloud servers. The goal of each edge server is to maximize its own interest, such as low delay cost, which corresponds to a non-cooperative setting. However, with the strong development of smart IoT communities, such as smart hospitals and smart factories, all edge and cloud servers can belong to one organization, such as a technology company. This corresponds to a cooperative setting in which the goal of the organization is to maximize the team interest of the overall edge cloud computing system. In this paper, we consider a new problem called cooperative task offloading, in which all edge servers cooperate to make the entire edge cloud computing system achieve good performance, such as low delay cost and low energy cost. However, this problem is hard to solve due to two issues: 1) each edge server's status changes dynamically and task arrival is uncertain; 2) each edge server can observe only its own status, which makes it hard to optimize the team interest because global information is unavailable. To solve these issues, we formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP), which can handle the dynamic features under partial observations. We then apply a multi-agent reinforcement learning algorithm called the value decomposition network (VDN) and propose a VDN-based task offloading algorithm (VDN-TO). Specifically, we use a team value function to evaluate the team interest, which is then divided into individual value functions for each edge server. Each edge server updates its individual value function in the direction that maximizes the team interest. Finally, we evaluate our algorithm on part of a real dataset, and the results show its effectiveness in comparison with existing methods.
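To make the value decomposition concrete, the following sketch (with placeholder observation and action sizes, not the paper's model) shows the VDN additivity: the team Q-value is the sum of per-server Q-values, each computed from that server's own observation:

```python
# Sketch of the value decomposition at the heart of a VDN-style learner:
# the team Q-value is the sum of per-server individual Q-values, so each
# server can act on its own observation while training optimizes the team
# interest. Observation/action sizes are placeholders.
import torch
import torch.nn as nn

class AgentQ(nn.Module):
    def __init__(self, obs_dim=8, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_actions))

    def forward(self, obs):
        return self.net(obs)

agents = [AgentQ() for _ in range(3)]        # one network per edge server
obs = [torch.randn(1, 8) for _ in agents]    # partial observations
actions = [a(o).argmax(dim=-1) for a, o in zip(agents, obs)]
# Team value = sum of the chosen individual Q-values (VDN additivity)
q_team = sum(a(o).gather(1, act.unsqueeze(1))
             for a, o, act in zip(agents, obs, actions))
print(actions, q_team.item())
```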
Hequn LI Jiaxi LU Jinfa WANG Hai ZHAO Jiuqiang XU Xingchi CHEN
Real-time and scalable multicast services are of paramount importance to Industrial Internet of Things (IIoT) applications. To realize these services, the multicast algorithm should, on the one hand, ensure that the maximum delay of a multicast session does not exceed its upper delay bound. On the other hand, the algorithm should minimize session costs. As an emerging networking paradigm, Software-Defined Networking (SDN) can provide a global view of the network to multicast algorithms, bringing new opportunities for realizing the desired multicast services in IIoT environments. Unfortunately, existing SDN-based multicast (SDM) algorithms cannot meet the real-time and scalability requirements simultaneously. Therefore, in this paper, we focus on SDM algorithm design for IIoT environments. Specifically, the paper first converts the multicast tree construction problem for SDM in IIoT environments into a delay-bounded least-cost shared tree problem and proves that it is NP-complete. Then, the paper puts forward a shared tree (ST) algorithm called SDM4IIoT to compute suboptimal solutions to the problem. The algorithm consists of five steps: 1) construct a delay-optimal shared tree; 2) divide the tree into a set of subpaths and a subtree; 3) optimize the cost of each subpath by relaxing the delay constraint; 4) optimize the subtree cost in the same manner; and 5) recombine them into a shared tree. Simulation results show that the algorithm can provide real-time support that other ST algorithms cannot, as well as good scalability. Its cost is only 20.56% higher than that of the cost-optimal ST algorithm, and its computation time is acceptable. The algorithm can help realize real-time and scalable multicast services for IIoT applications.
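The following sketch implements only step 1 of this outline, building a delay-optimal shared tree as the union of minimum-delay paths with networkx; the topology and edge attributes are illustrative, and steps 2-5 are summarized in comments:

```python
# Sketch of step 1 of the outline above: build a delay-optimal shared tree
# as the union of minimum-delay paths from the source to each receiver.
# The topology and the "delay"/"cost" edge attributes are illustrative.
import networkx as nx

G = nx.Graph()
for u, v, d, c in [("s", "a", 1, 5), ("a", "r1", 1, 5), ("a", "b", 1, 1),
                   ("b", "r2", 1, 1), ("s", "b", 3, 1)]:
    G.add_edge(u, v, delay=d, cost=c)

def delay_optimal_shared_tree(G, src, receivers):
    tree = nx.Graph()
    for r in receivers:
        path = nx.shortest_path(G, src, r, weight="delay")  # min-delay path
        nx.add_path(tree, path)
    return tree

tree = delay_optimal_shared_tree(G, "s", ["r1", "r2"])
print(sorted(tree.edges()))
# Steps 2-5 would split this tree into subpaths and a subtree, re-route each
# piece for lower cost within the residual delay budget, and recombine them.
```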