This paper reviews the evolutionary process that reduced the transmission loss of silica optical fibers from the 20 dB/km reported by Corning in 1970 to the current record-low loss. At an early stage, the main effort was to remove impurities, especially hydroxyl groups, from fibers with a GeO2-SiO2 core, resulting in a loss of 0.20 dB/km in 1980. To suppress Rayleigh scattering due to composition fluctuation, pure-silica-core fibers were developed, and a loss of 0.154 dB/km was achieved in 1986. Rayleigh scattering due to density fluctuation, the main residual loss factor, was actively investigated using IR and Raman spectroscopy in the 1990s and early 2000s. Today, ultra-low-loss fibers with a loss of 0.150 dB/km are commercially available and deployed in trans-oceanic submarine cable systems.
In this paper, we focus on developing efficient multi-configuration selection mechanisms by exploiting the spatial degrees of freedom (DoF) and leveraging the simple design benefits of spatial modulation (SM). Notably, the SM technique and its variants face the following critical challenges: (i) performance degradation and difficulty in improving system performance for higher-level QAM constellations, and (ii) the vast complexity cost of precoder designs, particularly as the system dimension and amplitude-phase modulation (APM) constellation dimension increase. Given this situation, we first investigate two independent modulation domains, i.e., the original signal and spatial constellations. By exploiting analog shift weighting and virtual spatial signature technologies, we introduce the signature spatial modulation (SSM) concept, which is capable of guaranteeing superior trade-offs among spectral efficiency, cost efficiency, and system bit error rate (BER) performance. In addition, we develop an analog beamforming scheme for SSM by solving the introduced unconstrained Lagrange dual function minimization problem. Numerical results demonstrate the performance gain brought by the developed analog beamforming for SSM.
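As background, the bit-to-(antenna, symbol) mapping of conventional SM, on which SSM builds, can be sketched as follows. This is a minimal illustration, not the paper's SSM scheme: the antenna count, the QPSK constellation, and the function name are illustrative assumptions.

```python
import math

# Hypothetical QPSK constellation; any square QAM of power-of-two size works.
QPSK = (-1 - 1j, -1 + 1j, 1 - 1j, 1 + 1j)

def sm_map(bits, n_tx=4, apm=QPSK):
    """Conventional SM mapping: the first log2(n_tx) bits select the active
    transmit antenna, the remaining log2(len(apm)) bits select the APM symbol."""
    na = int(math.log2(n_tx))       # bits carried by the antenna index
    ns = int(math.log2(len(apm)))   # bits carried by the APM symbol
    assert len(bits) == na + ns
    antenna = int(''.join(map(str, bits[:na])), 2)
    symbol = apm[int(''.join(map(str, bits[na:])), 2)]
    return antenna, symbol
```

For n_tx = 4 and QPSK, each channel use carries 2 + 2 = 4 bits, e.g. `sm_map([1, 0, 1, 1])` activates antenna 2 and transmits 1+1j.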
Souhei YANASE Shuto MASUDA Fujun HE Akio KAWABATA Eiji OKI
This paper presents a distributed server allocation model with preventive start-time optimization against a single server failure. The presented model preventively determines the assignment of servers to users under each failure pattern to minimize the largest maximum delay among all failure patterns. We formulate the proposed model as an integer linear programming (ILP) problem and prove the NP-completeness of the considered problem. As the numbers of users and servers increase, the size of the ILP problem grows, and the computation time to solve it becomes excessively large. We therefore develop a heuristic approach that applies simulated annealing and the ILP approach in a hybrid manner to obtain the solution. Numerical results reveal that the developed heuristic approach reduces the computation time by 26% compared to the ILP approach while increasing the largest maximum delay by just 3.4% on average. It reduces the largest maximum delay compared with the conventional start-time optimization model, and it avoids the instability caused by the unnecessary disconnection permitted by the run-time optimization model.
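The simulated-annealing half of such a hybrid heuristic can be sketched on a toy instance. The delay matrix, failure patterns, neighborhood move, and cooling schedule below are illustrative assumptions, not the paper's algorithm: the objective is the largest maximum user-server delay over all single-failure patterns.

```python
import math
import random

# Toy instance: delay[u][s] = delay between user u and server s (hypothetical values).
delay = [[3, 8, 5],
         [7, 2, 6],
         [4, 9, 3],
         [6, 5, 2]]
n_users, n_servers = 4, 3
# Failure patterns: no failure (None) plus each single-server failure.
patterns = [None] + list(range(n_servers))

def objective(assign):
    """Largest maximum delay over all failure patterns.
    assign[p][u] = server assigned to user u under pattern p."""
    worst = 0
    for p, failed in enumerate(patterns):
        for u in range(n_users):
            s = assign[p][u]
            if s == failed:
                return float('inf')  # infeasible: user on the failed server
            worst = max(worst, delay[u][s])
    return worst

def random_assignment():
    return [[random.choice([s for s in range(n_servers) if s != failed])
             for _ in range(n_users)] for failed in patterns]

def anneal(steps=5000, t0=5.0, alpha=0.999):
    """Simulated annealing: reassign one user under one pattern per step,
    accepting worse moves with the Metropolis probability."""
    random.seed(0)
    cur = random_assignment()
    cur_obj = objective(cur)
    best, best_obj = [row[:] for row in cur], cur_obj
    t = t0
    for _ in range(steps):
        p = random.randrange(len(patterns))
        u = random.randrange(n_users)
        old = cur[p][u]
        cur[p][u] = random.choice([s for s in range(n_servers) if s != patterns[p]])
        new_obj = objective(cur)
        if new_obj <= cur_obj or random.random() < math.exp((cur_obj - new_obj) / t):
            cur_obj = new_obj
            if new_obj < best_obj:
                best, best_obj = [row[:] for row in cur], new_obj
        else:
            cur[p][u] = old  # reject the move
        t *= alpha
    return best, best_obj
```

In the hybrid scheme described by the paper, such a search would be combined with exact ILP solves; here only the annealing step is sketched.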
Keiichiro SATO Ryoichi SHINKUMA Takehiro SATO Eiji OKI Takanori IWAI Takeo ONISHI Takahiro NOBUKIYO Dai KANETOMO Kozo SATODA
Predictive spatial monitoring, which predicts spatial information such as road traffic, has attracted much attention in the context of smart cities. Machine learning enables predictive spatial monitoring by using a large amount of aggregated sensor data. Since the capacity of mobile networks is strictly limited, serious transmission delays occur when communication traffic loads are heavy. If some of the data used for predictive spatial monitoring do not arrive on time, prediction accuracy degrades because the prediction has to be done using only the received data; in this sense, data for prediction are 'delay-sensitive'. A utility-based allocation technique has been suggested for modeling the temporal characteristics of such delay-sensitive data for prioritized transmission. However, no study has addressed a temporal model for prioritized transmission in predictive spatial monitoring. Therefore, this paper proposes a scheme that enables the creation of a temporal model for predictive spatial monitoring. The scheme consists of two steps: the first creates training data from the original time-series data and a machine learning model that can use the data, while the second derives a temporal model by applying feature selection to the learning model. Feature selection enables the estimation of the importance of data in terms of how much they contribute to the prediction accuracy of the machine learning model. This paper considers road-traffic prediction as a scenario and shows that the temporal models created with the proposed scheme can handle real spatial datasets. A numerical study demonstrates how our temporal model works effectively in prioritized transmission for predictive spatial monitoring in terms of prediction accuracy.
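The two-step idea can be illustrated on a toy time series. As a stand-in for the paper's model-based feature selection, the sketch below ranks lag features by absolute Pearson correlation with the prediction target; the function names, lag construction, and scoring rule are all illustrative assumptions.

```python
def lagged_dataset(series, n_lags):
    """Step 1: build training pairs where the features are the n_lags
    previous values and the target is the next value."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return X, y

def importance(X, y):
    """Step 2 (stand-in): score each lag feature by its absolute Pearson
    correlation with the target, as a proxy for model-based importance."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
        sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
        return cov / (sa * sb) if sa and sb else 0.0
    return [abs(corr([row[j] for row in X], y)) for j in range(len(X[0]))]
```

On a period-3 series, the lag-3 feature determines the target exactly, so its importance score dominates; in a prioritized-transmission setting, such scores would indicate which data ages matter most to prediction accuracy.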
Kiana DZIUBINSKI Masaki BANDAI
The automation of the home through Internet of Things (IoT) devices presents security challenges for protecting the safety and privacy of its inhabitants. Despite standard wireless communication security protocols, an attacker inside the wireless communication range of a smart home can extract identifier and statistical information, such as MAC addresses and packet lengths, from the encrypted wireless traffic of IoT devices to make inferences about the private activities of the user. In this paper, to prevent this breach of privacy in the wireless LAN, we make the following three contributions. First, we demonstrate that traffic shaping must be performed simultaneously on the upload and download nodes; second, we demonstrate that traffic shaping by random packet generation is impractical due to its excessive bandwidth requirement; third, we propose traffic shaping with variable padding durations, which reduces the bandwidth required for injecting dummy traffic during periods of user activity and inactivity and lowers the confidence with which a local attacker can identify genuine user activity traffic. Our performance evaluation shows that, at low attacker confidence, the proposed variable padding durations reduce the data generated on several WiFi- and ZigBee-enabled IoT devices by over 15% compared to the conventional method of fixed padding durations.
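The variable-padding idea can be sketched as follows. This is a minimal model, not the paper's mechanism: the per-second trace representation, the duration set, the constant shaped rate, and the function name are illustrative assumptions.

```python
import random

def shape(real_counts, durations=(2, 4, 8), rate=5, seed=0):
    """Shape per-second real packet counts so an eavesdropper sees only
    constant-rate bursts. Whenever real traffic appears outside an open
    padding window, a window of randomly chosen duration is opened and a
    constant rate is emitted throughout it (real packets padded with
    dummies), hiding where genuine activity starts and stops."""
    rng = random.Random(seed)
    shaped = [0] * len(real_counts)
    window_end = -1
    for t, count in enumerate(real_counts):
        if count > 0 and t > window_end:
            window_end = t + rng.choice(durations) - 1  # open a new window
        if t <= window_end:
            shaped[t] = rate  # pad up to the constant rate inside the window
    return shaped
```

Because the window length is drawn at random rather than fixed, short activity bursts do not always pay for the longest padding duration, which is the intuition behind the bandwidth saving over fixed padding durations.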
Takuya FUJIWARA Satoshi DENNO Yafei HOU
This paper proposes out-of-bound signal demapping for lattice-reduction-aided iterative linear receivers in overloaded MIMO channels. While lattice-reduction-aided linear receivers sometimes output hard-decision signals that are not contained in the modulation constellation, the proposed demapping converts those hard-decision signals into binary digits that can be mapped onto the modulation constellation. Even though the proposed demapping can be implemented with almost no additional complexity, it achieves more gain as the linear reception is iterated. Furthermore, we show that the transmission performance depends on the bit mapping of the modulation, such as Gray mapping and natural mapping. The transmission performance is confirmed by computer simulation in a 6 × 2 MIMO system, i.e., an overloading ratio of 3. One of the proposed demapping techniques, called "modulo demapping", attains a gain of about 2 dB at a packet error rate (PER) of 10⁻¹ when 64QAM is applied.