Non-orthogonal multiple access (NOMA), which combines multiple user signals and transmits the combined signal over one channel, can achieve high spectral efficiency for mobile communications. However, combining the multiple signals can degrade the bit error rate (BER) of NOMA under severe channel conditions. To improve the BER performance of NOMA, this paper proposes a new NOMA scheme based on orthogonal space-time block codes (OSTBCs). The proposed scheme transmits several multiplexed signals over their respective orthogonal time-frequency channels and gains diversity effects from the orthogonality of the OSTBC. Furthermore, unlike conventional NOMA, the new scheme can detect the user signals using low-complexity linear detection. The paper focuses on the Alamouti code, the simplest OSTBC, and theoretically analyzes the performance of the linear detection. Computer simulations under the same bit rate per channel show that the Alamouti-code-based scheme using two channels is superior to conventional NOMA using one channel in terms of BER performance. Both the theoretical and simulation analyses show that the linear detection for the proposed scheme maintains the same BER performance as maximum likelihood detection even in the worst case, where the two channels have the same frequency response and provide no diversity effect.
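As background, the orthogonality that makes low-complexity linear detection possible can be seen in a few lines of NumPy. This is a minimal sketch of standard 2x1 Alamouti encoding and combining over a noise-free flat-fading channel, not the proposed NOMA scheme itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols sent over two time slots
s1, s2 = (1 + 1j) / np.sqrt(2), (-1 + 1j) / np.sqrt(2)

# Flat-fading channel gains, assumed constant over the two slots
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
h1, h2 = h

# Slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*)  [Alamouti]
r1 = h1 * s1 + h2 * s2                         # received, slot 1 (noise-free)
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)      # received, slot 2

# Linear combining: the code's orthogonality decouples the two symbols
y1 = np.conj(h1) * r1 + h2 * np.conj(r2)
y2 = np.conj(h2) * r1 - h1 * np.conj(r2)

gain = abs(h1) ** 2 + abs(h2) ** 2             # diversity gain |h1|^2 + |h2|^2
print(np.isclose(y1, gain * s1), np.isclose(y2, gain * s2))   # True True
```

Each combined output equals the transmitted symbol scaled by |h1|^2 + |h2|^2, which is the source of the diversity effect the abstract refers to.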
High-performance deep-learning-based object detection models can help reduce traffic accidents by analyzing dashcam images during nighttime driving. Deep learning requires a large-scale dataset to obtain a high-performance model, but existing object detection datasets mostly contain daytime scenes and only a few nighttime scenes. Expanding a nighttime dataset is laborious and time-consuming. In such cases, daytime images can be converted to nighttime images by an image-to-image translation model to augment the nighttime dataset with less effort, so that the translated dataset can reuse the annotations of the daytime dataset. Therefore, in this study, a GAN-based image-to-image translation model is proposed that incorporates self-attention with cycle consistency and content/style separation for nighttime data augmentation, showing high fidelity to the annotations of the daytime dataset. Experimental results highlight the effectiveness of the proposed model compared with other models in terms of translated images and FID scores. Moreover, the high fidelity of the translated images to the annotations is verified with a small object detection model through detection results and mAP. Ablation studies confirm the effectiveness of self-attention in the proposed model. As a contribution to GAN-based data augmentation, the source code of the proposed image translation model is publicly available at https://github.com/subecky/Image-Translation-With-Self-Attention
Even theoretically secure cryptosystems and digital signatures may become insecure after being implemented on Internet of Things (IoT) devices and PCs because of side-channel attacks (SCAs). Because RSA key generation and ECDSA require GCD computations or modular inversions, which are often computed using the binary Euclidean algorithm (BEA) or binary extended Euclidean algorithm (BEEA), the SCA weaknesses of BEA and BEEA become a serious concern. Constant-time GCD (CT-GCD) and constant-time modular inversion (CTMI) algorithms are effective countermeasures in such situations. Modular inversion based on Fermat's little theorem (FLT) can work in constant time, but it is not efficient for general inputs. Two CTMI algorithms, named BOS and BY in this paper, were proposed by Bos and by Bernstein and Yang, respectively. Both are based on the concept of BEA. However, one iteration of BOS involves complicated computations, while BY requires more iterations. A small number of iterations and simple computations within each iteration are desirable characteristics of a constant-time algorithm. From this viewpoint, this study proposes new short-iteration CT-GCD and CTMI algorithms over Fp, borrowing a simple concept from BEA. Our algorithms are evaluated from a theoretical perspective. Compared with BOS, BY, and the improved version of BY, our short-iteration algorithms are experimentally demonstrated to be faster.
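For readers unfamiliar with BEA, the following sketch shows the shift-and-subtract iteration structure that the CT-GCD/CTMI work builds on. Note that this plain-Python version is deliberately *not* constant-time: its branches depend on the operands, which is exactly the side-channel weakness the constant-time algorithms are designed to remove.

```python
def binary_gcd(a, b):
    """Binary Euclidean algorithm (BEA): gcd via shifts and subtractions.

    Illustrates the iteration structure only; the data-dependent branches
    below are the SCA leakage source that CT-GCD algorithms eliminate.
    """
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:        # factor out common powers of two
        a, b, shift = a >> 1, b >> 1, shift + 1
    while a & 1 == 0:              # make a odd
        a >>= 1
    while b != 0:
        while b & 1 == 0:          # strip factors of two from b
            b >>= 1
        if a > b:                  # keep a <= b
            a, b = b, a
        b -= a                     # odd - odd, so b becomes even
    return a << shift

print(binary_gcd(48, 18))   # 6
```

A constant-time variant would replace every branch with arithmetic selection and run a fixed number of iterations regardless of the inputs.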
With the support of emerging technologies such as 5G, machine learning, edge computing and Industry 4.0, the Internet of Things (IoT) continues to evolve and promote the construction of future networks. Existing work on IoT mainly focuses on its practical applications, but there is little research on modeling the interactions among components in IoT systems and verifying the correctness of the network deployment. Therefore, the Calculus of the Internet of Things (CaIT) has previously been proposed to formally model and reason about IoT systems. In this paper, the CaIT calculus is extended by introducing broadcast communications. For modeling convenience, we provide explicit operations to model node mobility as well as the interactions between sensors (or actuators) and the environment. To support the use of UPPAAL to verify the temporal properties of IoT networks described by the CaIT calculus, we establish a relationship between timed automata and the CaIT calculus. Using UPPAAL, we verify six temporal properties of a simple “smart home” example, including Boiler On Manually, Boiler Off Automatically, Boiler On Automatically, Lights On, Lights Mutually, and Windows Simultaneously. The verification results show that the “smart home” can work properly.
Shangdong LIU Chaojun MEI Shuai YOU Xiaoliang YAO Fei WU Yimu JI
The thermal imaging pedestrian segmentation system performs well under different illumination conditions, but it has some drawbacks (e.g., weak pedestrian texture information and blurred object boundaries). Meanwhile, high-performance large models incur high latency on edge devices with limited computing performance. To solve these problems, we propose a real-time thermal infrared pedestrian segmentation method in this paper. The feature extraction layers of our method consist of two paths. First, we utilize lossless spatial downsampling to obtain boundary texture details on the spatial path. On the context path, we use atrous convolutions to enlarge the receptive field and obtain more contextual semantic information. Then, a parameter-free attention mechanism is introduced at the end of each path for effective feature selection. A Feature Fusion Module (FFM) is added to fuse the semantic information of the two paths after selection. Finally, we accelerate inference through multi-threading techniques on the edge computing device. In addition, we create a high-quality infrared pedestrian segmentation dataset to facilitate research. Comparative experiments against other methods on the self-built dataset and two public datasets demonstrate the effectiveness of our method. Our code is available at https://github.com/mcjcs001/LEIPNet.
Qianhui WEI Zengqing LI Hongyu HAN Hanzhou WU
In frequency-hopping communication, time delay and Doppler shift cause interference. In view of increasingly complicated interference, this paper discusses the time-frequency two-dimensional (TFTD) partial Hamming correlation (PHC) properties of wide-gap frequency-hopping sequences (WGFHSs) with frequency shift. A bound on the maximum TFTD partial Hamming auto-correlation (PHAC) and two bounds on the maximum TFTD PHC of WGFHSs are derived. The Li-Fan-Yang bounds are special cases of the new bounds when the frequency shift is zero.
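For context, the quantity being bounded is typically formalized as follows; the notation here is illustrative and may differ from the paper's own definitions.

```latex
% Partial Hamming correlation of FH sequences X=(x_t), Y=(y_t) with delay
% \tau, frequency shift f, over a window of length L starting at position j:
H_{XY}(\tau, f;\, j, L) \;=\; \sum_{t=j}^{j+L-1} h\!\left[x_t + f,\; y_{t+\tau}\right],
\qquad
h[a,b] \;=\;
\begin{cases}
1, & a = b,\\
0, & a \neq b,
\end{cases}
```

where the frequency addition is taken modulo the alphabet size, and the maxima of this quantity over the admissible delays, frequency shifts, and windows are what the derived bounds constrain.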
Feng TIAN Wan LIU Weibo FU Xiaojun HUANG
Intelligent traffic monitoring provides information support for autonomous driving and is widely used in intelligent transportation systems (ITSs). A method for estimating the parameters of moving vehicle targets based on millimeter-wave radar is proposed to solve the problem of low detection accuracy caused by velocity ambiguity and Doppler-angle coupling in traffic monitoring. First, a MIMO antenna array with overlapping elements is constructed by introducing overlapping elements into the typical design of MIMO radar array antennas. The motion-induced phase errors are eliminated by the phase difference among the overlapping elements. Then, the position errors among the overlapping elements are corrected through an iterative method, and the angles of multiple targets are estimated. Finally, velocity disambiguation is performed using the error-corrected phase difference among the overlapping elements, achieving accurate estimation of the angle and velocity of moving vehicle targets. In Monte Carlo simulation experiments, the angle error is 0.1° and the velocity error is 0.1 m/s. The simulation results show that the method effectively solves the problems of velocity ambiguity and Doppler-angle coupling while improving the accuracy of velocity and angle estimation. The improved algorithm is also tested on vehicle datasets gathered in the forward direction in ordinary public scenes of a city. The experimental results further verify the feasibility of the method, which meets the real-time and accuracy requirements of ITSs for vehicle information monitoring.
Kundjanasith THONGLEK Kohei ICHIKAWA Keichi TAKAHASHI Chawanat NAKASAN Kazufumi YUASA Tadatoshi BABASAKI Hajimu IIDA
Solar power is the most widely used renewable energy source, and it reduces the pollution caused by conventional fossil fuels. However, supplying stable power from solar generation remains challenging because power generation is difficult to forecast. Accurate prediction of solar power generation would allow effective control of the amount of electricity stored in batteries, leading to a stable supply of electricity. Although the number of power plants is increasing, building a solar power prediction model for a newly constructed power plant usually requires collecting a new training dataset for that plant, and collecting a sufficient amount of data takes time. This paper aims to develop a highly accurate solar power prediction model applicable to both new and existing power plants. The proposed method trains the model on multiple existing power plants to generate a general prediction model, and then uses it for a new power plant while the new plant's data are being collected. In addition, the proposed method fine-tunes the general prediction model on the newly collected dataset to improve accuracy for the new power plant. We evaluated the proposed method on 55 power plants in Japan with a dataset collected over two and a half years. The pre-trained models of our proposed method reduce the average RMSE of the baseline method by 73.19%. This indicates that the model generalizes over multiple power plants and that training with datasets from other power plants is effective in reducing the RMSE. Fine-tuning the pre-trained model further reduces the RMSE by 8.12%.
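The pre-train-then-fine-tune idea can be illustrated with a toy model. The sketch below uses a deliberately simple linear regressor and synthetic "plants" (a shared irradiance-to-power slope plus a plant-specific bias); the actual paper uses real plant data and a richer model, so everything below the imports is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(w, X, y):
    return np.sqrt(np.mean((X @ w - y) ** 2))

def fit(X, y, w=None, lr=0.1, steps=600):
    """Least-squares via gradient descent; passing `w` warm-starts
    the optimization, which is how fine-tuning is realized here."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def make_plant(bias, n):
    """Synthetic plant: shared irradiance->power slope, plant-specific bias."""
    X = np.c_[rng.uniform(0, 1, n), np.ones(n)]          # [irradiance, 1]
    y = 3.0 * X[:, 0] + bias + 0.05 * rng.standard_normal(n)
    return X, y

# Pre-train a general model on pooled data from existing plants
Xs, ys = zip(*[make_plant(b, 200) for b in (0.2, 0.4, 0.6)])
w_general = fit(np.vstack(Xs), np.concatenate(ys))

# New plant: only a little data collected so far; fine-tune from the general model
Xn, yn = make_plant(0.9, 30)
w_tuned = fit(Xn, yn, w=w_general, steps=300)

Xtest, ytest = make_plant(0.9, 500)
print("general:", rmse(w_general, Xtest, ytest))
print("tuned:  ", rmse(w_tuned, Xtest, ytest))
```

The general model is usable immediately for the new plant, and fine-tuning on the small new-plant dataset further reduces its test RMSE, mirroring the two-stage result reported in the abstract.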
Shota NAKABEPPU Nobuyuki YAMASAKI
It is very important to design an embedded real-time system as a fault-tolerant system to ensure dependability. In particular, when a power failure occurs, restart processing after power restoration is required in a real-time system using a conventional processor. Even if power is restored quickly, the restart process takes a long time and causes deadline misses. In order to design a fault-tolerant real-time system, it is necessary to have a processor that can resume operation in a short time immediately after power is restored, even if a power failure occurs at any time. Since current embedded real-time systems are required to execute many tasks, high schedulability for high throughput is also important. This paper proposes a non-stop microprocessor architecture to achieve a fault-tolerant real-time system. The non-stop microprocessor is designed to resume normal operation even if a power failure occurs at any time, to suffer little performance degradation (for high schedulability) even if checkpoint creations and restorations are performed many times, to flexibly control non-volatile devices through software configuration, and to ensure data consistency no matter when a checkpoint restoration is performed. The evaluation shows that the non-stop microprocessor can restore a checkpoint within 5 µs and almost completely hide the overhead of checkpoint creations. The non-stop microprocessor with such capabilities will be an essential component of a fault-tolerant real-time system with high schedulability.
Time-series prediction has extensive applications and is an important problem in many fields, such as stock prediction, sales prediction, and loan prediction, all of great value in production and life. It requires that the model effectively capture the long-term feature dependence between output and input. Recent studies show that the Transformer can improve time-series prediction. However, the Transformer has problems that prevent it from being directly applied to time-series prediction: (1) Local agnosticism: self-attention in the Transformer is not sensitive to short-term feature dependence, which leads to anomalies when modeling time series; (2) Memory bottleneck: the space complexity of the standard Transformer grows quadratically with sequence length, making direct modeling of long time series infeasible. To solve these problems, this paper designs an efficient model for long time-series prediction: a double-pyramid bidirectional feature fusion network with a parallel Temporal Convolutional Network (TCN) and FastFormer. This structure combines the fine-grained time-series information captured by the TCN with the global interaction information captured by FastFormer, and thus handles the time-series prediction problem well.
Takayoshi SHOUDAI Satoshi MATSUMOTO Yusuke SUZUKI Tomoyuki UCHIDA Tetsuhiro MIYAHARA
A formal graph system (FGS for short) is a logic program consisting of definite clauses whose arguments are graph patterns instead of first-order terms. The definite clauses are referred to as graph rewriting rules. An FGS is shown to be a useful unifying framework for learning graph languages. In this paper, we show the polynomial-time PAC learnability of a subclass of FGS languages defined by parameterized hereditary FGSs with bounded degree, from the viewpoint of computational learning theory. That is, we consider VH-FGSLk,Δ(m, s, t, r, w, d) as the class of FGS languages consisting of graphs of treewidth at most k and of maximum degree at most Δ which is defined by variable-hereditary FGSs consisting of m graph rewriting rules having TGP patterns as arguments. The parameters s, t, and r denote the maximum numbers of variables, atoms in the body, and arguments of each predicate symbol of each graph rewriting rule in an FGS, respectively. The parameters w and d denote the maximum number of vertices of each hyperedge and the maximum degree of each vertex of TGP patterns in each graph rewriting rule in an FGS, respectively. VH-FGSLk,Δ(m, s, t, r, w, d) has infinitely many languages even if all the parameters are bounded by constants. Then we prove that the class VH-FGSLk,Δ(m, s, t, r, w, d) is polynomial-time PAC learnable if all m, s, t, r, w, d, Δ are constants except for k.
Juan LIU Xiaolin HOU Wenjia LIU Lan CHEN Yoshihisa KISHIYAMA Takahiro ASAI
To achieve the extremely high data rates and extreme coverage extension required for 6G wireless communication, new spectrum in the sub-THz band (100-300GHz) and non-terrestrial networks (NTNs) are two macro trends among 6G candidate technologies. However, the non-linearity of power amplifiers (PAs) is a critical challenge for both sub-THz and NTN. Therefore, waveform design for high power efficiency (PE) or low peak-to-average power ratio (PAPR) has become one of the most significant 6G research topics, while high spectral efficiency (SE) and low out-of-band emission (OOBE) remain important key performance indicators (KPIs) for 6G waveform design. The single-carrier waveform discrete Fourier transform spread orthogonal frequency division multiplexing (DFT-s-OFDM) has attracted much research interest due to its high PE, and it is supported in 5G New Radio (NR) when uplink coverage is limited, so DFT-s-OFDM can be regarded as a candidate waveform for 6G. Many enhancement schemes based on DFT-s-OFDM have been proposed, including null cyclic prefix (NCP)/unique word (UW), frequency-domain spectral shaping (FDSS), and time-domain compression and expansion (TD-CE). However, there has been no unified framework compatible with all these enhancement schemes. This paper first provides a general description of the 6G candidate waveforms based on DFT-s-OFDM enhancement. Second, more flexible TD-CE supporting methods for a unified non-orthogonal waveform (uNOW) are proposed and discussed. Third, a unified waveform framework based on the DFT-s-OFDM structure is proposed. By designing the pre-processing and post-processing modules before and after the DFT in this framework, the three techniques (NCP/UW, FDSS, and TD-CE) can be integrated to improve the three KPIs of DFT-s-OFDM simultaneously with high flexibility. The implementation complexity of the 6G candidate waveforms is then analyzed and compared.
The performance of the different DFT-s-OFDM enhancement schemes is investigated by link-level simulation, which reveals that uNOW achieves the best PAPR performance among all the 6G candidate waveforms considered. When PA back-off is taken into account, uNOW achieves a 124% throughput gain over traditional DFT-s-OFDM.
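The PE advantage of DFT precoding can be demonstrated numerically. The sketch below compares the PAPR of plain OFDM against DFT-s-OFDM with localized subcarrier mapping; the subcarrier counts and QPSK modulation are illustrative choices, not the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym, m_data, n_fft = 2000, 48, 64   # symbols, data tones, IFFT size

def papr_db(x):
    """Per-symbol peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max(axis=1) / p.mean(axis=1))

# Random QPSK data
bits = rng.integers(0, 2, size=(n_sym, m_data, 2))
qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

# Plain OFDM: map QPSK symbols straight onto the first m_data subcarriers
grid = np.zeros((n_sym, n_fft), dtype=complex)
grid[:, :m_data] = qpsk
ofdm = np.fft.ifft(grid, axis=1)

# DFT-s-OFDM: DFT-precode the data block before localized subcarrier mapping,
# which restores a single-carrier-like envelope and lowers the PAPR
grid_sc = np.zeros((n_sym, n_fft), dtype=complex)
grid_sc[:, :m_data] = np.fft.fft(qpsk, axis=1) / np.sqrt(m_data)
dfts = np.fft.ifft(grid_sc, axis=1)

print(f"mean PAPR  OFDM      : {papr_db(ofdm).mean():.2f} dB")
print(f"mean PAPR  DFT-s-OFDM: {papr_db(dfts).mean():.2f} dB")
```

The DFT-s-OFDM signal shows a markedly lower mean PAPR than plain OFDM for the same subcarrier occupancy, which is why it tolerates less PA back-off.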
Weisong LIAO Akira KAINO Tomoaki MASHIKO Sou KUROMASA Masatoshi SAKAI Kazuhiro KUDO
We observed dynamic carrier motion in an OLED device under external reverse bias using ExTDR measurement. Rectangular wave pulses were used in our ExTDR setup to observe the transient impedance of the OLED sample. The falling edge of the transmission waveform reflects the transient impedance after the pulse voltage is applied, over the duration of the pulse width. The observed variation of the falling-edge waveform with pulse width indicates that the front of the hole distribution in the hole transport layer was forced to move backward toward the ITO electrode.
Daichi WATARI Ittetsu TANIGUCHI Francky CATTHOOR Charalampos MARANTOS Kostas SIOZIOS Elham SHIRAZI Dimitrios SOUDRIS Takao ONOYE
Energy management in buildings is vital for reducing electricity costs and maximizing occupant comfort. Excess solar generation can be exploited by combining a battery storage system with a heating, ventilation, and air-conditioning (HVAC) system so that occupants remain comfortable. Despite several studies on the scheduling of appliances, batteries, and HVAC, comprehensive and time-scalable approaches are required that integrate predictive information such as renewable generation and thermal comfort. In this paper, we propose a thermal-comfort-aware online co-scheduling framework that incorporates optimal energy scheduling and prediction models of PV generation and thermal comfort within a model predictive control (MPC) approach. We introduce a photovoltaic (PV) energy nowcasting and thermal-comfort estimation model that provides useful information for optimization. The energy management problem is formulated as three coordinated optimization problems that cover fast and slow time scales by considering predicted information. This approach reduces the time complexity without significantly compromising the global nature or quality of the result. Experimental results show that our proposed framework achieves optimal energy management that accounts for the trade-off between electricity expenses and thermal comfort. Our sensitivity analysis indicates that introducing a battery significantly improves this trade-off.
This paper proposes an algorithm for estimating the locations of wireless access points (APs) in indoor environments to realize Wi-Fi-based smartphone positioning without a pre-constructed database. The proposed method overcomes the main limitation of existing positioning methods, which require the advance construction of a database of coordinates or precise AP location measurements. The proposed algorithm constructs a local coordinate system with the first four APs, activated in turn, and estimates the AP installation locations using Wi-Fi round-trip time (RTT) lateration and the ranging results between the APs. The effectiveness of the proposed algorithm is confirmed by experiments in a real indoor environment consisting of two rooms of different sizes. The experimental results showed that the proposed algorithm using Wi-Fi RTT lateration delivers high smartphone positioning performance without a pre-constructed database or precise AP location measurements.
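The lateration step can be sketched as a small least-squares problem. The example below assumes the anchor (AP) coordinates are already known and the ranges are noise-free; the paper's contribution is precisely that it also estimates the AP positions themselves from inter-AP RTT ranging, which this sketch does not cover.

```python
import numpy as np

def laterate(anchors, dists):
    """Least-squares lateration from anchor positions and measured ranges.

    Linearizes |p - a_i|^2 = d_i^2 against the first anchor, giving
    2 (a_i - a_0) . p = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - dists[1:] ** 2 + d0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four hypothetical APs at the corners of an 8 m x 6 m room
aps = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0), (8.0, 6.0)]
true = np.array([3.0, 2.0])
d = [np.hypot(*(true - np.array(a))) for a in aps]
print(laterate(aps, d))   # -> approximately [3. 2.]
```

With noisy RTT ranges the same least-squares formulation applies; the extra (over-determined) equations then average out the ranging noise.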
Qianhui WEI Hongyu HAN Limengnan ZHOU Hanzhou WU
In quasi-synchronous frequency-hopping multiple-access (QS-FHMA) systems, no-hit-zone frequency-hopping sequences (NHZ-FHSs) can offer interference-free FHMA performance. However, outside the no-hit zone (NHZ), the Hamming correlation of traditional NHZ-FHSs may be so large that performance degrades. Moreover, in high-speed mobile environments, the Doppler shift phenomenon appears. To ensure FHMA performance, it is therefore necessary to study NHZ-FHSs in the presence of both transmission delay and frequency offset. In this paper, we derive a lower bound on the maximum time-frequency two-dimensional Hamming correlation of NHZ-FHSs outside the NHZ. The Zeng-Zhou-Liu-Liu bound is a special case of the new bound when the frequency shift is zero.
Time-series forecasting is essential in many real-life fields. Recent studies have shown that the Transformer has certain advantages for such problems, especially for long-sequence inputs and long-sequence forecasting. To improve the efficiency and local stability of the Transformer, these studies combine the Transformer with CNNs of different structures. However, previous Transformer-based time-series forecasting models cannot make full use of CNNs and have not combined the two effectively. In response to this problem, we propose a time-series forecasting algorithm based on a convolutional Transformer. (1) ES attention mechanism: external attention is combined with the traditional self-attention mechanism through a two-branch network, reducing the computational cost of self-attention and yielding higher forecasting accuracy. (2) Frequency enhanced block: a frequency enhanced block is added in front of the ES attention module to capture important structures in the time series through frequency-domain mapping. (3) Causal dilated convolution: the self-attention modules are connected by replacing the traditional standard convolution layer with a causal dilated convolution layer, giving an exponentially growing receptive field without increasing computational consumption. (4) Multi-layer feature fusion: the outputs of the different self-attention modules are extracted, and convolutional layers adjust the feature-map sizes for fusion, obtaining more fine-grained feature information at negligible computational cost.
Experiments on real-world datasets show that the proposed forecasting model greatly improves the real-time forecasting performance of the current state-of-the-art Transformer models, with significantly lower computation and memory costs. Compared with previous algorithms, the proposed algorithm achieves greater improvements in both effectiveness and forecasting accuracy.
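The "exponentially growing receptive field" of stacked causal dilated convolutions can be verified directly. The sketch below is a generic NumPy illustration of the standard technique, not the paper's network: with kernel size 2 and dilations 1, 2, 4, 8, the receptive field is 1 + (2-1)(1+2+4+8) = 16 samples, and causality guarantees no future sample leaks into the output.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal dilated convolution: output[t] mixes x[t], x[t-d], ...
    Left zero-padding only, so no future samples are used."""
    k, pad = len(w), (len(w) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# Feed an impulse through four stacked layers with doubling dilations;
# the support of the impulse response equals the receptive field.
impulse = np.zeros(32)
impulse[0] = 1.0
y = impulse
for d in (1, 2, 4, 8):
    y = causal_dilated_conv(y, np.array([0.5, 0.5]), d)

print(np.flatnonzero(y).max())   # -> 15: the response spans 16 taps
```

Doubling the dilation per layer thus buys a receptive field that grows exponentially with depth at a constant per-layer cost, which is the property the abstract's item (3) relies on.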
High-precision indoor positioning technology has gradually become a research hotspot for indoor mobile robots. Relax and Recover (RAR) is an indoor positioning algorithm that uses distance observations. It restores the robot's trajectory through curve fitting, does not require time synchronization of the observations, and can succeed with few observations. However, the algorithm is not robust against gross errors and cannot be used for real-time positioning. In this paper, while retaining the advantages of the original algorithm, the RAR algorithm is improved with an adaptive Kalman filter (AKF) based on the innovation sequence to strengthen its resistance to gross errors. The improved algorithm can be used for real-time navigation and positioning. Experimental validation shows that the improved algorithm is significantly more accurate than the original RAR. Compared with the extended Kalman filter (EKF), accuracy is also increased by 12.5%, so the method can be used for high-precision positioning of indoor mobile robots.
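The innovation-based adaptation idea can be sketched in one dimension. This is a minimal illustration of the general technique only, under the assumption of a scalar constant-state model; it is not the paper's RAR+AKF positioning pipeline, and the gating and window parameters are arbitrary choices.

```python
import numpy as np

def akf_1d(zs, q=1e-4, r0=1.0, gate=9.0, window=20):
    """Scalar adaptive Kalman filter: the measurement-noise variance R is
    re-estimated from the recent innovation sequence, and an innovation
    (chi-square) test rejects gross errors."""
    x, p, r = zs[0], 1.0, r0
    recent, out = [], []
    for z in zs:
        p += q                       # predict (constant-state model)
        v = z - x                    # innovation
        s = p + r                    # predicted innovation variance
        if len(recent) >= window and v * v > gate * s:
            out.append(x)            # innovation test failed: likely a gross
            continue                 # error, so skip the update entirely
        recent = (recent + [v * v])[-window:]
        # E[v^2] = P + R  =>  estimate R from the innovation sequence
        r = max(np.mean(recent) - p, 1e-6)
        k = p / (p + r)              # Kalman gain
        x += k * v
        p *= 1 - k
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
z = 5.0 + 0.1 * rng.standard_normal(300)
z[150] = 50.0                        # inject one gross error
est = akf_1d(z)
print(round(est[150], 2), round(est[-1], 2))
```

The gross error at index 150 fails the innovation test and is simply skipped, so the state estimate stays near the true value; a fixed-R filter would instead be pulled toward the outlier.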
This research develops a new automatic path-following control method for a car model based on just-in-time modeling. The idea is that a large amount of basic driving data for various situations is accumulated in a database, and automatic path following on unknown roads is realized using only the data in this database. In particular, just-in-time modeling is applied repeatedly in order to follow the desired points on the given road. Numerical simulation results show that the proposed method makes the car follow the desired points on the given road with small error and high computational efficiency.
Zijie LIU Can CHEN Yi CHENG Maomao JI Jinrong ZOU Dengyin ZHANG
Common schedulers for long-running services perform task-level optimization and fail to accommodate short-lived batch processing (BP) jobs. Thus, many efficient job-level scheduling strategies have been proposed for BP jobs. However, existing scheduling strategies perform time-consuming objective optimization, which yields non-negligible scheduling delays. Moreover, they tend to assign BP jobs in a centralized manner to reduce monetary cost and synchronization overhead, which can easily cause resource contention due to task co-location. To address these problems, this paper proposes TEBAS, a time-efficient balance-aware scheduling strategy that spreads all tasks of a BP job across the cluster according to the resource specifications of a single task, based on the observation that the computing tasks of a BP job commonly possess similar features. The experimental results show the effectiveness of TEBAS in terms of scheduling efficiency and load-balancing performance.
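The spreading idea can be illustrated with a greedy least-loaded placement. This is a sketch of the balance-aware principle only, not the TEBAS algorithm itself; the node names and CPU figures are hypothetical.

```python
def spread_schedule(cpu_per_task, n_tasks, free_cpu):
    """Spread the identical tasks of one BP job over the cluster: each task
    goes to the node with the most free CPU, keeping the load balanced and
    avoiding the contention caused by co-locating tasks on one node."""
    free = dict(free_cpu)
    placement = []
    for _ in range(n_tasks):
        node = max(free, key=free.get)       # node with the most free CPU
        if free[node] < cpu_per_task:
            raise RuntimeError("cluster cannot fit the job")
        free[node] -= cpu_per_task
        placement.append(node)
    return placement

# A 4-task job on two equally loaded nodes alternates between them
print(spread_schedule(1.0, 4, {"n1": 4.0, "n2": 4.0}))
```

Because every task of a BP job has the same resource specification, a single greedy pass like this keeps per-node load within one task's demand of perfectly balanced, with no per-task objective optimization.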