Manlin XIAO Zhibo DUAN Zhenglong YANG
Based on the TLS-ESPRIT algorithm, this paper proposes a weighted spatial smoothing DOA estimation algorithm to address the problem that the conventional TLS-ESPRIT algorithm fails to estimate the direction of arrival (DOA) of coherent sources. The proposed method divides the received array into several subarrays with a special structural feature and then uses these subarrays to construct a new weighted covariance matrix for DOA estimation based on TLS-ESPRIT. The algorithm fully exploits the auto-correlation and cross-correlation information of the subarrays, improving the orthogonality between the signal subspace and the noise subspace so that the DOAs of coherent sources can be estimated accurately. Simulations show that the proposed algorithm outperforms conventional spatial smoothing algorithms for coherent sources under various signal-to-noise ratios (SNRs) and snapshot numbers.
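As background for this abstract, conventional forward spatial smoothing, on which the proposed weighted scheme builds, can be sketched as follows; the function name and the plain unweighted averaging are illustrative, not the authors' algorithm:

```python
import numpy as np

def forward_spatial_smoothing(R, m, L):
    """Average the covariances of L overlapping m-element subarrays
    taken from an M-element array covariance R (M = m + L - 1).
    Plain forward smoothing; the paper's method adds weighting."""
    Rs = np.zeros((m, m), dtype=complex)
    for l in range(L):
        Rs += R[l:l + m, l:l + m]
    return Rs / L
```

Averaging over shifted subarrays restores the rank of the source covariance that coherence collapses, which is what re-enables subspace methods such as TLS-ESPRIT.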
Jinwoo LEE Tae Gu KANG Kookrae CHO Dae Hyun YUM
SPHINCS+ is a state-of-the-art post-quantum hash-based signature scheme and a candidate for the NIST post-quantum cryptography standard. For a target bit security, SPHINCS+ supports many different tradeoffs between signature size and signing speed. SPHINCS+ provides six parameter sets: three optimized for size and three optimized for speed. We propose new parameter sets with better performance. Specifically, SPHINCS+ implementations with our parameter sets are up to 26.5% faster with slightly shorter signatures.
Shakhnaz AKHMEDOVA Vladimir STANOVOV Sophia VISHNEVSKAYA Chiori MIYAJIMA Yukihiro KAMIYA
This study focuses on the automated detection of a complex system operator's condition; for example, a person's reaction while listening to music (or not listening at all) was determined. For this purpose, various well-known data mining tools, as well as tools developed by the authors, were used. Specifically, the following techniques were developed and applied: artificial neural networks and fuzzy rule-based classifiers. The neural networks were generated by two modifications of the Differential Evolution algorithm based on the NSGA and MOEA/D schemes, proposed for solving multi-objective optimization problems. The fuzzy logic systems were generated by the population-based algorithm Co-Operation of Biology Related Algorithms (COBRA). Each person's state was first monitored, and the databases for the problems described in this study were obtained using non-contact Doppler sensors. Experimental results demonstrate that the automatically generated neural networks and fuzzy rule-based classifiers can properly determine the human condition and reaction, and that the proposed approaches outperform alternative data mining tools. Moreover, the fuzzy rule-based classifiers proved more accurate and interpretable than the neural networks, so they can be used for more complex problems related to the automated detection of an operator's condition.
Cuffless blood pressure (BP) monitors are noninvasive devices that measure systolic and diastolic BP without an inflatable cuff. They are easy to use, safe, and relatively accurate for resting-state BP measurement. Although commercially available from online retailers, BP monitors must be approved or certified by medical regulatory bodies for clinical use. Cuffless BP monitoring devices also need approval; however, only the Institute of Electrical and Electronics Engineers (IEEE) certifies these devices. In this paper, the principles of cuffless BP monitors are described, and the current situation regarding BP monitor standards and approval for medical use is discussed.
As NAND flash-based storage has become established, a flash translation layer (FTL) has been in charge of mapping data addresses onto NAND flash memory. Many FTLs implement various mapping schemes, and the amount of mapping data depends on the mapping level. Regardless of how much mapping data resides in the storage, however, the FTL must maintain mapping consistency, and the recovery cost incurred by inconsistency must be considered to achieve a faster storage reboot time. This letter proposes a novel method that enhances consistency for a page-mapping FTL running a legacy logging policy while also reducing the recovery cost of page mappings. The method adopts a virtually shrunk segment and deactivates page-mapping logs by assembling and storing the segments. In our previous study, this segment scheme already improved the response time of embedded NAND flash-based storage. Building on that result, the proposed method maximizes page-mapping consistency and thereby improves the recovery cost compared with the legacy page-mapping FTL.
Hideya SO Takafumi FUJITA Kento YOSHIZAWA Maiko NAYA Takashi SHIMIZU
This paper proposes a novel radio access scheme that uses duplicated transmission over multiple frequency channels to achieve mission-critical Internet of Things (IoT) services requiring highly reliable wireless communications, and reveals the interference constraints that yield the required reliability. To realize mission-critical IoT services over wireless links, reliability must be improved in addition to satisfying the required transmission delay time. Reliability is defined as the packet arrival rate within the desired transmission delay time. Traffic from the system itself and interference from other systems sharing the same frequency channel, such as in unlicensed bands, degrade reliability. One solution is the frequency/time diversity technique; however, such techniques may not achieve the required reliability because of the time taken to achieve correct reception. This paper proposes a novel scheme that transmits duplicate packets over multiple frequency channels using multiple wireless interfaces. It also proposes a suppressed duplicate transmission (SDT) scheme, which prevents the wastage of radio resources. The proposed scheme achieves the same reliability as the conventional scheme but has higher tolerance to interference than retransmission. We evaluate the relationship between reliability and the interference occupation time ratio, defined as the usage ratio of the frequency resources occupied by other systems, and we reveal the upper bound of the interference occupation time ratio for each frequency channel that channel selection control must satisfy to achieve the required reliability.
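The reliability gain from duplicated transmission can be illustrated with a simple independence model; the independence assumption and the function are illustrative, whereas the paper's analysis accounts for interference occupation time:

```python
def duplicated_arrival_rate(p_single, n_channels):
    """Packet arrival rate when the same packet is duplicated over
    n frequency channels: success if any one copy arrives in time.
    Assumes independent channels, which real interference may violate."""
    return 1.0 - (1.0 - p_single) ** n_channels
```

For example, duplicating over two channels that each deliver a packet in time with probability 0.9 raises the arrival rate to 0.99 under this model.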
Quang Quan PHUNG Tuan Hung NGUYEN Naobumi MICHISHITA Hiroshi SATO Yoshio KOYANAGI Hisashi MORISHITA
In this study, a novel decoupling method using parasitic elements (PEs) connected by a bridge line (BL) for two planar inverted-F antennas (PIFAs) is proposed. The proposed method is developed from a well-known decoupling method that uses a BL to directly connect antenna elements. When antenna elements are connected directly by a BL, strong mutual coupling can be reduced, but the resonant frequency shifts away from the design frequency; hence, to shift the resonant frequency back to the desired frequency, the original size of the antenna elements must be adjusted. This is disadvantageous when the design conditions make it difficult to connect the antennas directly or to adjust the original antenna size. Therefore, to easily reduce mutual coupling in such cases, a decoupling method that requires neither connecting the antennas directly nor adjusting the original antenna size is needed. This study demonstrates that using PEs connected by a BL reduces the mutual coupling from -6.6 to -14.1 dB while the resonant frequency is maintained at the desired 2.0 GHz without adjusting the original PIFA size. In addition, impedance matching can be adjusted to the desired frequency, improving the total antenna efficiency from 77.4% to 94.6%. This method is expected to be a simple and effective approach for reducing the mutual coupling between larger numbers of PIFA elements in the future.
Miho SHINOHARA Reiko KOYAMA Shinya MOCHIDUKI Mitsuho YAMADA
We focused on the amount of change at each resolution by specifying the gaze position within images, and measured accommodation and convergence eye movements while subjects watched high-resolution images. When the images were presented at high resolution, the changes in convergence angle and accommodation followed the actual depth composition of the image.
Haisong JIANG Mahmoud NASEF Kiichi HAMAMOTO
This paper reports a single-dimensional mode-based multiplexer/de-multiplexer using a slab waveguide to realize high-order mode multiplexing and high integration in non-MIMO (multiple-input multiple-output) multimode transmission systems. Mode crosstalk as low as -20 dB was obtained by selecting suitable parameters: the spacing Di between the connecting positions of each arrayed waveguide, the slab waveguide radius R0, and the lateral V-parameter.
Chikako TAKASAKI Atsuko TAKEFUSA Hidemoto NAKADA Masato OGUCHI
With the development of cameras and sensors and the spread of cloud computing, life logs can easily be acquired and stored in ordinary households for the various services that utilize them. However, it is difficult to analyze moving images acquired by home sensors in real time using machine learning, because the data size is too large and the computational complexity is too high. Moreover, collecting and accumulating in the cloud moving images that are captured at home and can identify individuals may invade the privacy of application users. We propose a distributed edge-cloud processing method that addresses both the processing latency and the privacy concerns. On the edge (sensor) side, we extract feature vectors of human key points from moving images using OpenPose, a pose estimation library. On the cloud side, we recognize actions by machine learning using only the feature vectors. In this study, we compare the action recognition accuracies of multiple machine learning methods, and we measure the analysis processing time at the sensor and in the cloud to investigate the feasibility of real-time action recognition. We then evaluate the proposed system by comparing it with the 3D ResNet model in recognition experiments. The experimental results demonstrate that the action recognition accuracy is highest with LSTM, and that introducing dropout in action recognition over 100 categories alleviates overfitting because the models learn more generic human actions as the variety of actions increases. In addition, preprocessing with OpenPose on the sensor side substantially reduces the transfer volume from the sensor to the cloud.
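The edge-side preprocessing described above can be sketched as follows. OpenPose's JSON output contains one `pose_keypoints_2d` array of (x, y, confidence) triples per detected person; this exact helper and its flattening choice are illustrative, not the paper's implementation:

```python
def keypoints_to_feature(person, num_keypoints=25):
    """Flatten one person's OpenPose (x, y, confidence) triples into
    a fixed-length per-frame feature vector, dropping confidences.
    num_keypoints=25 corresponds to the BODY_25 model."""
    kp = person["pose_keypoints_2d"]  # [x0, y0, c0, x1, y1, c1, ...]
    feat = []
    for i in range(num_keypoints):
        feat.extend(kp[3 * i:3 * i + 2])  # keep x, y only
    return feat
```

Only such compact vectors, not the raw video, would leave the home, which is the source of both the bandwidth saving and the privacy benefit.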
Emerging byte-addressable non-volatile memory devices are attracting much attention. A non-volatile main memory (NVMM) built on them enables larger memory sizes and lower power consumption than traditional DRAM main memory. To fully utilize an NVMM, software and hardware must be optimized cooperatively. At the same time, even though real non-volatile memory modules such as Intel Optane DC persistent memory (DCPMM) are on the market, the micro-architecture of such modules is still being developed. Among existing NVMM evaluation environments, software simulators can evaluate various micro-architectures but suffer from long simulation times, while emulators evaluate the whole system quickly but offer less configuration flexibility than simulators. Thus, an NVMM emulator that enables flexible and fast system evaluation still has an important role in exploring the optimal system. In this paper, we introduce an NVMM emulator for embedded systems and use it to explore optimization techniques for NVMMs. It is implemented on an SoC-FPGA board and employs three NVMM behaviour models: coarse-grain, fine-grain and DCPMM-based. The coarse and fine models enable NVMM performance evaluation based on extensions of traditional DRAM behaviour, while the DCPMM-based model emulates the behaviour of a real DCPMM. A complete evaluation environment is also provided, including Linux kernel modifications and several runtime functions. We first validate the developed emulator against an existing NVMM emulator, a cycle-accurate NVMM simulator and a real DCPMM. Then, the differences in program behaviour among the three models are evaluated with SPEC CPU programs. The fine-grain model reveals that program execution time is affected by the frequency of NVMM memory requests rather than by the cache hit ratio. Moreover, even when the fine-grain model is configured with a longer total write latency than the coarse-grain model, it yields shorter execution times for four of fourteen programs, because it exploits bank-level parallelism and row-buffer access locality.
To erase data containing confidential information from storage devices, an unrelated random sequence is usually overwritten, which prevents the data from being restored. The problem of minimizing the cost of information erasure, when the amount of information leakage of the confidential information must asymptotically be at most a constant, was introduced by T. Matsuta and T. Uyematsu. Whereas the minimum overwriting cost has been given for general sources, a single-letter characterization for stationary memoryless sources is not easily derived. In this paper, we give single-letter characterizations for stationary memoryless sources under two types of restrictions: one requires the output distribution of the encoder to be independent and identically distributed (i.i.d.), and the other requires it to be memoryless but not necessarily i.i.d. asymptotically. The characterizations clarify the relation among the amount of information leakage, the minimum cost of information erasure, and the rate of the size of uniformly distributed sequences. The obtained results show that the minimum costs differ between the two restrictions.
Yoshihiro MURASHIMA Taishin NAKAMURA Hisashi YAMAMOTO Xiao XIAO
In network topology design, it is important to analyze the reliability and construction cost of complex network systems. This paper addresses a topological optimization problem of minimizing the total cost of a network system with separate subsystems under a reliability constraint. To solve this problem, we develop three algorithms: the first finds an exact solution; the second finds an exact solution specialized for systems with identical subsystems; and the third is a heuristic that finds an approximate solution when a network system has several identical subsystems. Numerical experiments demonstrate the efficacy and efficiency of the developed algorithms.
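The cost-versus-reliability tradeoff in such problems can be illustrated with a toy exact search over parallel-redundancy levels in a series system; the model and function are illustrative stand-ins, not the authors' algorithms:

```python
from itertools import product

def min_cost_design(p, c, r_req, max_units=5):
    """Brute-force the cheapest redundancy allocation: k[i] parallel
    units in subsystem i (unit reliability p[i], unit cost c[i]),
    subject to the series-system reliability being at least r_req.
    Returns (cost, allocation) or None if infeasible."""
    best = None
    for ks in product(range(1, max_units + 1), repeat=len(p)):
        rel = 1.0
        for pi, k in zip(p, ks):
            rel *= 1.0 - (1.0 - pi) ** k  # subsystem i works if any unit does
        cost = sum(ci * k for ci, k in zip(c, ks))
        if rel >= r_req and (best is None or cost < best[0]):
            best = (cost, ks)
    return best
```

Exhaustive search like this is only viable for small instances, which is why the abstract's heuristic third algorithm exists for larger systems.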
In this letter, a two-stage QR decomposition scheme based on Givens rotation with a novel modified real-value decomposition (RVD) is presented. By applying the modified RVD to the result of the complex Givens rotation in the first stage, the number of non-zero terms that must be eliminated by real Givens rotations in the second stage decreases greatly, and the computational complexity is thereby reduced significantly compared with the scheme using the conventional RVD. In addition, the proposed scheme is well suited to the hardware design of QR decomposition. Evaluation shows that the proposed scheme is superior to related works in terms of computational complexity.
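For reference, the basic real Givens-rotation QR that the two-stage scheme accelerates can be sketched as a plain textbook version, without the complex first stage or the RVD step:

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real matrix by Givens rotations:
    each subdiagonal entry is zeroed by a 2x2 plane rotation."""
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):  # zero column j bottom-up
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            cth, sth = a / r, b / r
            G = np.array([[cth, sth], [-sth, cth]])  # G @ [a, b] = [r, 0]
            R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T  # accumulate A = Q R
    return Q, R
```

Each rotation touches only two rows, which is why Givens-based QR maps well to hardware; the letter's contribution is reducing how many such rotations a complex-valued input requires.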
A modified whale optimization algorithm (MWOA) with a dynamic leader selection mechanism and a novel population updating procedure is introduced for pattern synthesis of linear antenna arrays. The current best solution is changed dynamically for each whale agent to avoid premature convergence to local optima. A hybrid crossover operator is embedded in the original algorithm to improve the convergence accuracy of solutions, and the population updating flow is optimized to balance exploitation and exploration. The modified algorithm is tested on a 28-element uniform linear antenna array to reduce its side lobe level and null depth level. The simulation results show that MWOA clearly improves the performance of WOA compared with other algorithms.
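A typical ingredient of the fitness function in such synthesis, the peak side lobe level of a weighted uniform linear array, can be sketched as follows; the sampling grid is a simplification and the null-depth term is omitted, so this is not the authors' exact objective:

```python
import numpy as np

def sidelobe_level_db(weights, d=0.5, n_theta=2048):
    """Peak side lobe level (dB relative to the main lobe) of a
    uniform linear array with given element weights and element
    spacing d in wavelengths."""
    n = np.arange(len(weights))
    theta = np.linspace(0.0, np.pi, n_theta)
    psi = 2 * np.pi * d * np.cos(theta)
    af = np.abs(np.exp(1j * np.outer(psi, n)) @ weights)
    af /= af.max()
    main = af.argmax()
    lo = main                      # walk down both sides of the
    while lo > 0 and af[lo - 1] <= af[lo]:   # main lobe to its nulls
        lo -= 1
    hi = main
    while hi < n_theta - 1 and af[hi + 1] <= af[hi]:
        hi += 1
    side = np.concatenate([af[:lo], af[hi + 1:]])
    return 20 * np.log10(side.max())
```

An optimizer such as MWOA would search the weight vector to push this value down while also constraining null depths at interference directions.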
Shota ISHIMURA Kosuke NISHIMURA Yoshiaki NAKANO Takuo TANEMURA
Coherent transceivers are now regarded as promising candidates for upgrading the current 400 Gigabit Ethernet (400GbE) transceivers to 800G. However, due to the complicated structure of a dual-polarization IQ modulator (DP-IQM) with its bulky polarization-beam splitter/combiner (PBS/PBC), an increase in transmitter size and cost is inevitable. In this paper, we propose a compact PBS/PBC-free transmitter structure with a straight-line configuration. By using the concept of polarization differential modulation, the proposed transmitter is capable of generating a DP phase-shift-keyed (DP-PSK) signal, which makes it directly applicable to current coherent systems. A detailed analysis of the system performance reveals that imperfect equalization and the bandwidth limitation at the receiver are the dominant penalty factors. Although such a penalty is usually unacceptable in long-haul applications, the proposed transmitter is attractive for its significant simplicity and compactness in short-reach applications, where cost and footprint are the primary concerns.
Li TAN Haoyu WANG Xiaofeng LIAN Jiaqi SHI Minji WANG
As the nodes of aerial wireless sensor networks (AWSNs) fly around, the network topology changes frequently, leading to high energy consumption and high cluster head mortality, and some sensor nodes may fly away from their original cluster and interrupt network communication. To ensure normal network communication, this paper proposes an improved LEACH-M protocol for aerial wireless sensor networks, building on the traditional LEACH-M protocol and the MCR protocol. A cluster head selection method based on maximum energy and an efficient solution for outlier nodes are proposed to ensure that cluster heads are replaced before their death and that outlier nodes re-home quickly and efficiently. Experiments show that, compared with the LEACH-M and MCR protocols, the improved LEACH-M protocol significantly increases network data transmission efficiency, improves energy utilization, and extends network lifetime.
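The core of a maximum-energy cluster head selection rule can be sketched in a few lines; the data layout and function are illustrative, and the full protocol additionally handles head rotation and outlier re-homing:

```python
def select_cluster_heads(clusters):
    """For each cluster, pick the node with the highest residual
    energy as its head. Each cluster maps a cluster id to a list
    of (node_id, residual_energy) pairs."""
    return {cid: max(nodes, key=lambda n: n[1])[0]
            for cid, nodes in clusters.items()}
```

Re-running such a selection before the current head's energy is exhausted is what lets heads be replaced prior to their death.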
Young-Kyoon SUH Seounghyeon KIM Joo-Young LEE Hawon CHU Junyoung AN Kyong-Ha LEE
In this letter, we analyze the economic worth of GPUs for analytical processing in GPU-accelerated database management systems (DBMSes). To this end, we conducted rigorous experiments with TPC-H across three popular GPU DBMSes. We show that co-processing with CPU and GPU in the GPU DBMSes was cost-effective despite the concerns that have been raised.
Kagome NAYA Toshiaki MIYAZAKI Peng LI
In recent years, checking sleep quality has become essential from a healthcare perspective. In this paper, we propose a respiratory rate (RR) monitoring system that can be used in the bedroom without the user wearing any sensor devices. In the proposed system, passive radio-frequency identification (RFID) tags are attached to a blanket instead of to the human body. The received signal strength indicator (RSSI) and phase values of the passive RFID tags are continuously obtained by an RFID reader through antennas located at the bedside. Because the RSSI and phase values change with the respiration of the person under the blanket, the RR can be estimated from these values. After providing an overview of the proposed system, the RR estimation flow, including noise elimination and irregular breathing period estimation methods, is explained in detail. The evaluation demonstrates that the proposed system can estimate the RR and respiratory status regardless of the user's body posture, body type, gender, or changes in RR.
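A minimal version of the estimation idea, finding the dominant spectral peak of a tag phase signal inside a plausible breathing band, can be sketched as follows; the band limits and function are assumptions for illustration, and the paper's flow additionally performs noise elimination and irregular-breathing handling:

```python
import numpy as np

def estimate_rr(phase, fs, f_lo=0.1, f_hi=0.7):
    """Estimate respiratory rate (breaths/min) as the dominant
    spectral peak of a tag phase signal, sampled at fs Hz, within
    an assumed breathing band of f_lo..f_hi Hz."""
    x = phase - np.mean(phase)          # remove the DC offset
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f_peak = freqs[band][spec[band].argmax()]
    return 60.0 * f_peak
```

Restricting the search to a breathing band suppresses peaks from body movement and reader noise outside plausible respiratory frequencies.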
Wei LIU Yuan HU Tsung-Hsuan HSIEH Jiansen ZHAO Shengzheng WANG
To improve tracking, interference mitigation, and multipath mitigation performance beyond that possible with existing signals, a new Global Navigation Satellite System (GNSS) signal is needed that offers additional degrees of freedom for shaping its pulse waveform and spectrum. In this paper, a new modulation scheme called Quinary Offset Carrier (QOC) modulation is proposed for GNSS signal design. The pulse waveforms of QOC modulation are divided into two types, convex and concave, and QOC modulations can easily be constructed by selecting different modulation parameters. The spectra and autocorrelation characteristics of QOC modulations are investigated and discussed. Simulations and analyses show that QOC modulation achieves performance similar to traditional BOC modulation in terms of code tracking, multipath mitigation, and compatibility, providing a new option for satellite navigation signal design.