We consider an asymptotic stabilization problem for a chain of integrators using an event-triggered controller. The time intervals between event-triggered executions and the corresponding controller updates are uncertain, time-varying, and not necessarily small. We show that the considered system can be asymptotically stabilized by an event-triggered gain-scaling controller. We also show that the interexecution times are lower bounded and that these lower bounds can be adjusted by a gain-scaling factor. Some future extensions are discussed, and an example is given for illustration.
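A minimal simulation sketch of the general idea, assuming a double integrator (a chain of two integrators), an illustrative state-feedback gain, and a common relative-threshold event-triggering rule; this is not the paper's gain-scaling controller.

```python
# Minimal sketch (not the paper's controller): event-triggered stabilization of a
# double integrator x1' = x2, x2' = u with a relative-threshold trigger.
# The gain K, the threshold sigma, and the step size are illustrative assumptions.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # chain of two integrators
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -2.0]])             # assumed stabilizing state feedback
sigma, dt, T = 0.1, 1e-3, 10.0

x = np.array([[1.0], [-0.5]])            # initial state
x_event = x.copy()                       # state sampled at the last event
u = K @ x_event
inter_event_times, last_event_t = [], 0.0

for k in range(int(T / dt)):
    t = k * dt
    # Trigger rule: update the control when the sampling error grows too large
    # relative to the current state (a common event-triggering condition).
    if np.linalg.norm(x - x_event) > sigma * np.linalg.norm(x):
        x_event = x.copy()
        u = K @ x_event
        inter_event_times.append(t - last_event_t)
        last_event_t = t
    x = x + dt * (A @ x + B @ u)          # forward-Euler integration

print("final state:", x.ravel())
print("min inter-event time:", min(inter_event_times) if inter_event_times else None)
```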
Yuya KAMATAKI Yusuke KAMEDA Yasuyo KITA Ichiro MATSUDA Susumu ITOH
This paper proposes a lossless coding method for HDR color images stored in a floating-point format called Radiance RGBE. In this method, the three mantissa parts and the common exponent part, each represented with 8-bit depth, are encoded using a block-adaptive prediction technique with some modifications that account for the data structure.
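A sketch of the Radiance RGBE pixel layout the coder operates on, assuming the commonly used Radiance conversion convention (implementations vary slightly); the proposed coding method itself is not shown.

```python
# Sketch of the Radiance RGBE pixel layout: three 8-bit mantissas (R, G, B) share one
# 8-bit exponent E.  This follows the common Radiance convention; it is not the
# proposed coding method itself.
import numpy as np

def rgbe_to_float(rgbe: np.ndarray) -> np.ndarray:
    """rgbe: (..., 4) uint8 array -> (..., 3) float RGB."""
    rgbe = rgbe.astype(np.float32)
    mantissa, exponent = rgbe[..., :3], rgbe[..., 3:]
    scale = np.where(exponent > 0, np.ldexp(1.0, exponent.astype(np.int32) - 136), 0.0)
    return mantissa * scale              # value = m / 256 * 2**(E - 128)

pixel = np.array([[128, 64, 32, 129]], dtype=np.uint8)
print(rgbe_to_float(pixel))              # -> approximately [1.0, 0.5, 0.25]
```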
Yasunori SUZUKI Shoichi NARAHASHI
This paper presents linearization technologies for high-efficiency power amplifiers of cellular base stations. These technologies are important for realizing highly efficient power amplifiers that reduce the power consumption of base station equipment while achieving a sufficient non-linear distortion compensation level. It is well known that it is very difficult for a power amplifier using linearization technologies to achieve high efficiency and a sufficient non-linear distortion compensation level simultaneously. This paper presents two approaches to addressing this technical issue. The first is a feed-forward power amplifier that uses a Doherty amplifier as the main amplifier. The second is a digital predistortion linearizer that compensates for frequency-dependent intermodulation distortion components. Experimental results validate both approaches as effective for power amplification in base stations.
Masayuki TEZUKA Keisuke TANAKA
A redactable signature allows anyone to remove parts of a signed message without invalidating the signature. The need to prove the validity of digital documents issued by governments is increasing. When governments disclose documents, they must remove private information concerning individuals, and redactable signatures are useful in such situations. However, in most redactable signature schemes, removing parts of the signed message requires a piece of information for each part to be removed: if a signed message consists of ℓ elements, the original signature contains at least a number of elements linear in ℓ. As far as we know, a few redactable signature schemes keep the number of elements in the original signature constant, regardless of the number of elements in the message to be signed. However, these constructions have the drawback of relying on the random oracle model or the generic group model. In this paper, we construct an efficient redactable signature that overcomes these drawbacks. Our redactable signature is obtained by combining the set-commitment scheme proposed by Fuchsbauer et al. (JoC 2019) with digital signatures.
Takashi YASUI Jun-ichiro SUGISAKA Koichi HIRAYAMA
In this study, we conduct guided-mode analyses of chalcogenide glass channel waveguides with an As2Se3 core and an As2S3 lower cladding to determine their single-mode conditions across the astronomical N-band (8-12 µm). The results reveal that single-mode operation over the entire band can be achieved by choosing a suitable core thickness.
A neural network that outputs reconstructed images based on projection data containing scattered X-rays is presented, and the proposed scheme exhibits better accuracy than conventional computed tomography (CT), in which the scatter information is removed. In medical X-ray CT, it is common practice to remove scattered X-rays using a collimator placed in front of the detector. In this study, the scattered X-rays were assumed to carry useful information, and a method was devised to exploit this information using a neural network. To this end, we generated 70,000 projection datasets by Monte Carlo simulation, using as the target object a cube comprising 216 (6 × 6 × 6) smaller cubes with random density parameters. For each projection simulation, the densities of the smaller cubes were reset to different values, and detectors were deployed around the target object to capture the scattered X-rays from all directions. A neural network was then trained on these projection data to output the densities of the smaller cubes. Numerical evaluations confirmed that the neural-network approach utilizing scattered X-rays reconstructed images with higher accuracy than the conventional method, in which the scattered X-rays were removed. These results suggest that utilizing the scattered X-ray information can help significantly reduce the patient dose during imaging.
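A minimal sketch of the kind of regression network that could map detector readings (including scatter) to the 216 cube densities; the layer sizes and the number of detector channels are assumptions, not the paper's architecture.

```python
# Minimal sketch (layer sizes and the 1024 detector channels are assumptions, not the
# paper's architecture): a fully connected regressor mapping projection data that
# includes scattered X-rays to the densities of the 216 (6x6x6) small cubes.
import torch
import torch.nn as nn

n_detectors, n_cubes = 1024, 216

model = nn.Sequential(
    nn.Linear(n_detectors, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_cubes),          # one density value per small cube
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for Monte Carlo projection data and ground-truth densities.
projections = torch.randn(32, n_detectors)
densities = torch.rand(32, n_cubes)

for _ in range(5):                    # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(projections), densities)
    loss.backward()
    optimizer.step()
print("training loss:", loss.item())
```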
Keiichiro SATO Ryoichi SHINKUMA Takehiro SATO Eiji OKI Takanori IWAI Takeo ONISHI Takahiro NOBUKIYO Dai KANETOMO Kozo SATODA
Predictive spatial-monitoring, which predicts spatial information such as road traffic, has attracted much attention in the context of smart cities. Machine learning enables predictive spatial-monitoring by using a large amount of aggregated sensor data. Since the capacity of mobile networks is strictly limited, serious transmission delays occur when communication traffic loads are heavy. If some of the data used for predictive spatial-monitoring do not arrive on time, prediction accuracy degrades because the prediction must be made using only the received data; in this sense, the data used for prediction are ‘delay-sensitive’. A utility-based allocation technique has suggested modeling the temporal characteristics of such delay-sensitive data for prioritized transmission. However, no study has addressed a temporal model for prioritized transmission in predictive spatial-monitoring. This paper therefore proposes a scheme for creating such a temporal model. The scheme consists of two main steps: the first creates training data from the original time-series data and a machine learning model that can use those data, while the second builds the temporal model by applying feature selection to the learning model. Feature selection estimates, from the machine learning model, the importance of each piece of data in terms of how much it contributes to prediction accuracy. Taking road-traffic prediction as a scenario, this paper shows that temporal models created with the proposed scheme can handle real spatial datasets. A numerical study demonstrates that our temporal model works effectively, in terms of prediction accuracy, for prioritized transmission in predictive spatial-monitoring.
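A sketch of the feature-selection idea under stated assumptions: a random forest on lagged time-series features is used here as one possible learning model (not necessarily the paper's), and its per-lag importances are read off as a temporal importance profile.

```python
# Sketch of the feature-selection idea (the random forest and the lag features are
# illustrative choices, not necessarily those of the paper): train a model on lagged
# time-series features and read off per-lag importances as a temporal model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500))          # synthetic road-traffic-like series
n_lags = 12

# Build training data: predict the next value from the previous n_lags values.
X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
y = series[n_lags:]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Importance of each lag = how much that (older) sample contributes to prediction
# accuracy; such a profile can be used to prioritize which data to transmit first.
for lag, importance in enumerate(model.feature_importances_[::-1], start=1):
    print(f"lag -{lag}: importance {importance:.3f}")
```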
While online communities are important platforms for various social activities, many of them fail to survive, which motivates researchers to investigate the factors affecting the growth and survival of online communities. We comprehensively examine the effects of a wide variety of social network features on the growth and survival of communities in Reddit. We show that several social network features, including clique ratio, density, clustering coefficient, reciprocity, and centralization, have significant effects on the survival of communities. In contrast, we also show that the social network features examined in this paper have only weak effects on the growth of communities. Moreover, we conducted experiments predicting the future growth and survival of online communities using social network features as well as content and activity features of the communities. The results show that prediction models using social network features in addition to content and activity features achieve an approximately 30% higher F1 measure, which evaluates prediction accuracy, than models using only content and activity features. In contrast, social network features are shown to be ineffective for predicting the growth of communities.
Hiroaki KUDO Tetsuya MATSUMOTO Kentaro KUTSUKAKE Noritaka USAMI
In this paper, we evaluate a method for predicting regions that include dislocation clusters, which are crystallographic defects, in photoluminescence (PL) images of multicrystalline silicon wafers. We apply transfer learning of a convolutional neural network to this task. Given a sub-region of a whole PL image as input, the network outputs whether dislocation cluster regions are included in the corresponding upper-wafer image. The network was trained using images of lower wafers, located at the bottom of dislocation clusters, as positive examples. We experimented with three choices of negative examples: images of wafers at a certain depth, randomly selected images, and both. We examined accuracy and Youden's J statistic for two cases: predicting the occurrence of dislocation clusters 10 wafers or 20 wafers above the input wafer. The results show that the accuracy and Youden's J values are not very high, but they are higher than those of a bag-of-features (visual words) method. For our purpose of finding dislocation clusters in upper wafers from the input wafer, randomly selected negative examples proved appropriate for the 10-wafers-above prediction, since they consistently gave better results than the other negative-example conditions.
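For reference, a small sketch of the Youden's J statistic used in the evaluation, computed with its standard definition from binary labels and predictions.

```python
# Youden's J statistic: J = sensitivity + specificity - 1 (standard definition),
# computed from binary ground-truth labels and binary predictions.
import numpy as np

def youdens_j(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 1])
print(youdens_j(y_true, y_pred))   # sensitivity 0.75, specificity 0.75 -> J = 0.5
```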
Takeharu IKEZOE Takuya KOJIMA Hideharu AMANO
Recent IoT devices require extremely low standby power consumption, while a certain level of performance is needed during the active time, and Coarse-Grained Reconfigurable Arrays (CGRAs) have received attention because of their high energy efficiency. To further reduce the standby energy consumption of CGRAs, the leakage power of their configuration memory must be reduced. Although power gating is a common technique, the data lost from flip-flops and memory must be restored after wake-up. Recovering everything requires numerous state transitions and considerable overhead in both execution time and energy. To address this problem, Non-volatile Cool Mega Array (NVCMA), a CGRA providing non-volatile flip-flops (NVFFs) based on spin-transfer-torque non-volatile memory (NVM) technology, has been developed. However, non-volatile memory technologies generally have reliability problems: some NVFFs are stuck-at-0/1 and fail to store data with a certain probability. To improve the chip yield, we propose a mapping algorithm that avoids the faulty processing elements of the CGRA caused by erroneous configuration data. We also propose a method that adds an error-correcting code (ECC) mechanism to the NVFFs for the configuration and constant memory. The proposed methods were applied to NVCMA to evaluate the availability ratio and the reduction in write time. Using both methods, an average availability ratio of 94.2% was achieved over the nine applications, compared with 0.056% without them, when the FF failure probability was 0.01. The energy for storing data increases by a factor of about 2.3 because of the ECC hardware overhead, but the proposed method saves 8.6% of the writing power on average.
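A sketch of the ECC idea using a Hamming(7,4) code as a simple stand-in; the actual code and word widths of the proposed NVFF ECC mechanism are not taken from the paper.

```python
# Sketch of the ECC idea for configuration words, using a Hamming(7,4) code as a
# simple stand-in (the code and word widths are assumptions, not the paper's design):
# 4 configuration bits are protected by 3 parity bits, so any single stuck-at error
# in an NVFF can be corrected on read-out.
import numpy as np

G = np.array([[1,0,0,0,0,1,1],      # generator matrix (systematic: data | parity)
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],      # parity-check matrix
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(code7):
    code7 = np.array(code7).copy()
    syndrome = (H @ code7) % 2
    if syndrome.any():                               # locate and flip the faulty bit
        pos = np.where((H.T == syndrome).all(axis=1))[0][0]
        code7[pos] ^= 1
    return code7[:4]

word = [1, 0, 1, 1]
stored = encode(word)
stored[2] ^= 1                                       # emulate a single stuck-at fault
print(decode(stored).tolist())                       # -> [1, 0, 1, 1]
```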
Yahui WANG Wenxi ZHANG Xinxin KONG Yongbiao WANG Hongxin ZHANG
Laser speech detection uses a non-contact Laser Doppler Vibrometry (LDV)-based acoustic sensor to obtain speech signals by precisely measuring voice-generated surface vibrations. Over long distances, however, the detected signal is very weak and full of speckle noise. To enhance the quality and intelligibility of the detected signal, we designed a two-sided Linear Prediction Coding (LPC)-based locator and interpolator to detect and replace speckle noise. We first studied the characteristics of speckle noise in detected signals and developed a binary-state statistical model of speckle noise generation. A two-sided LPC-based locator, composed of an inverse decorrelator, a nonlinear filter, and a threshold estimator, was then designed to locate the polluted samples. It greatly improves the detectability of speckle noise and avoids false or missed detections by improving the noise-to-signal ratio (NSR). Finally, samples from both sides of the speckle noise were used to estimate the parameters of the interpolator and to generate samples that replace the polluted ones. Real-world speckle noise removal experiments and simulation-based comparative experiments show that the proposed method is better able to locate speckle noise in laser-detected speech and highly effective at replacing it.
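A simplified sketch of the LPC idea under stated assumptions (one-sided prediction, residual thresholding, synthetic data); the paper's two-sided locator and interpolator with an inverse decorrelator and nonlinear filter are more elaborate.

```python
# Simplified sketch (not the paper's two-sided method): fit LPC coefficients, flag
# samples whose prediction residual is impulsively large, and replace them with the
# linear prediction from neighbouring clean samples.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(x, order):
    """Autocorrelation-method LPC: solve the normal equations R a = r."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz(r[:-1], r[1:])             # a[k], k = 1..order

rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 2000))
signal = clean.copy()
spikes = rng.choice(len(signal), size=20, replace=False)
signal[spikes] += rng.normal(scale=5.0, size=20)     # speckle-like impulses

order = 10
a = lpc_coefficients(signal, order)
pred = np.zeros_like(signal)
for n in range(order, len(signal)):
    pred[n] = a @ signal[n - order:n][::-1]          # forward linear prediction
residual = signal - pred
flags = np.abs(residual) > 4 * np.std(residual)      # locate polluted samples

restored = signal.copy()
for n in np.where(flags)[0]:
    if n >= order:
        restored[n] = a @ restored[n - order:n][::-1]   # replace by prediction
print("flagged samples:", int(flags.sum()))
```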
Hiroyuki UZAWA Kazuhiko TERADA Koyo NITTA
The power consumption of optical network units (ONUs) is a major issue in optical access networks. The downstream buffer is one of the largest power consumers among the functional blocks of an ONU. A cyclic sleep scheme for reducing power has been reported, which periodically powers off not only the downstream buffer but also other components, such as optical transceivers, when the idle period is long. However, when the idle period is short, it cannot power off those components even if the input data rate is low. Therefore, as continuous traffic, such as video, increases, the power-reduction effect decreases. To resolve this issue, we propose another sleep scheme in which the downstream buffer can be partially powered off by cooperative operation with an optical line terminal. Simulation and experimental results indicate that the proposed scheme reduces ONU power consumption without causing frame loss even while the ONU continuously receives traffic and the idle period is short.
Taku SUZUKI Mikihito SUZUKI Kenichi HIGUCHI
This paper proposes a parallel peak cancellation (PC) process for the computationally efficient algorithm called PC with a channel-null constraint (PCCNC), an adaptive peak-to-average power ratio (PAPR) reduction method that uses the null space of a multiple-input multiple-output (MIMO) channel for MIMO-orthogonal frequency division multiplexing (OFDM) signals. By simultaneously adding multiple PC signals to the time-domain transmission signal vector, the required number of iterations of the iterative algorithm is effectively reduced along with the PAPR. We impose a constraint under which the PC signal is transmitted only into the null space of the MIMO channel by beamforming (BF), so that the data streams do not experience interference from the PC signal on the receiver side. Since, unlike the previous algorithm, fast Fourier transform (FFT) and inverse FFT (IFFT) operations are not required at each iteration, and thanks to the newly introduced parallel processing approach, the enhanced PCCNC algorithm reduces the required total computational complexity and number of iterations compared to the previous algorithms while achieving the same throughput-vs.-PAPR performance.
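For context, a small sketch of the quantity being reduced, namely the PAPR of an oversampled OFDM time-domain signal; the subcarrier count, modulation, and oversampling factor are assumptions, and the PCCNC algorithm itself (PC signals beamformed into the MIMO channel null space) is not reproduced.

```python
# Sketch of PAPR measurement for one OFDM symbol (illustrative parameters only;
# the PCCNC algorithm itself is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers, oversampling = 256, 4

# Random QPSK symbols on the subcarriers, zero-padded for an oversampled IFFT.
symbols = (rng.choice([-1, 1], n_subcarriers)
           + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
spectrum = np.zeros(n_subcarriers * oversampling, dtype=complex)
spectrum[:n_subcarriers // 2] = symbols[:n_subcarriers // 2]
spectrum[-n_subcarriers // 2:] = symbols[n_subcarriers // 2:]
x = np.fft.ifft(spectrum) * np.sqrt(len(spectrum))

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR of one OFDM symbol: {papr_db:.2f} dB")
```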
Yuta KAMIKAWA Atsushi HASHIMOTO Motoharu SONOGASHIRA Masaaki IIYAMA
An encoder-decoder (Enc-Dec) model is one of the fundamental architectures in many computer vision applications. One desired property of a trained Enc-Dec model is the ability to encode (and decode) diverse input patterns. Aiming to obtain such a model, we propose in this paper a simple method called curiosity-guided fine-tuning (CurioFT), which puts more weight on uncommon input patterns without explicitly knowing their frequency. In an experiment on future frame generation with the CUHK Avenue dataset, CurioFT reduced the mean square error by 7.4% for anomalous scenes, 4.8% for common scenes, and 6.6% in total. Additional experiments with the UCSD dataset further supported the validity of the proposed method.
Zhi LIU Yifan SU Shuzhong YANG Mengmeng ZHANG
Cross-component linear model (CCLM) chroma prediction is a new technique introduced in Versatile Video Coding (VVC); it utilizes the reconstructed luma component to predict the chroma components and can improve coding performance, but it also increases coding complexity. In this paper, how to accelerate the chroma intra-prediction process is studied based on texture characteristics. First, two observations were obtained through experimental statistics on the process: the choice of the chroma intra-prediction candidate modes is closely related to the texture complexity of the coding unit (CU), and whether the direct mode (DM) is selected is closely related to the texture similarity between the current chroma CU and the corresponding luma CU. Second, a fast chroma intra-prediction mode decision algorithm is proposed based on these observations. A modified metric named sum modulus difference (SMD) is introduced to measure the texture complexity of a CU and guide the filtering of irrelevant candidate modes. Meanwhile, the structural similarity index measure (SSIM) is adopted to help judge the selection of the DM mode. The experimental results show that, compared with the reference model VTM8.0, the proposed algorithm reduces the coding time by 12.92% on average while increasing the BD-rate of the Y, U, and V components by only 0.05%, 0.32%, and 0.29%, respectively.
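An illustrative sketch only: one plausible form of a sum-modulus-difference texture measure (the paper's modified SMD definition may differ) alongside SSIM, which the abstract uses to judge DM selection.

```python
# Illustrative sketch: a plausible sum-modulus-difference (SMD) texture measure
# (assumed form, not necessarily the paper's modified metric) together with SSIM.
import numpy as np
from skimage.metrics import structural_similarity

def smd(block: np.ndarray) -> float:
    """Texture complexity as the sum of absolute differences between neighbours."""
    block = block.astype(np.float64)
    return np.abs(np.diff(block, axis=1)).sum() + np.abs(np.diff(block, axis=0)).sum()

rng = np.random.default_rng(0)
flat_cu = np.full((16, 16), 128, dtype=np.uint8)            # low texture complexity
noisy_cu = rng.integers(0, 256, (16, 16), dtype=np.uint8)   # high texture complexity

print("SMD flat:", smd(flat_cu), " SMD noisy:", smd(noisy_cu))
print("SSIM of identical blocks:",
      structural_similarity(noisy_cu, noisy_cu, data_range=255))   # -> 1.0
```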
Riku AKEMA Masao YAMAGISHI Isao YAMADA
Approximate Simultaneous Diagonalization (ASD) is the problem of finding a common similarity transformation that approximately diagonalizes a given tuple of square matrices. Many data science problems have been reduced to ASD through ingenious modelling. For ASD, the so-called Jacobi-like methods have been used extensively. However, these methods have no guarantee of suppressing the magnitude of the off-diagonal entries of the transformed tuple even if the given tuple has an exact common diagonalizer, i.e., even if the given tuple is simultaneously diagonalizable. In this paper, to establish an alternative powerful strategy for ASD, we present a novel two-step strategy called the Approximate-Then-Diagonalize-Simultaneously (ATDS) algorithm. The ATDS algorithm decomposes ASD into (Step 1) finding a simultaneously diagonalizable tuple near the given one; and (Step 2) finding a common similarity transformation that exactly diagonalizes the tuple obtained in Step 1. The proposed approach to Step 1 is realized by solving a Structured Low-Rank Approximation (SLRA) with Cadzow's algorithm. In Step 2, by exploiting the idea in the constructive proof of the conditions for exact simultaneous diagonalizability, we obtain an exact common diagonalizer of the tuple obtained in Step 1 as a solution to the original ASD problem. Unlike the Jacobi-like methods, the ATDS algorithm is guaranteed to find an exact common diagonalizer if the given tuple happens to be simultaneously diagonalizable. Numerical experiments show that the ATDS algorithm achieves better performance than the Jacobi-like methods.
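A small numerical illustration of the underlying notion, under the assumption of a generic simultaneously diagonalizable tuple; this is not the ATDS algorithm or the Cadzow/SLRA step.

```python
# Illustration of simultaneous diagonalizability (not the ATDS algorithm): for a
# simultaneously diagonalizable tuple, the eigenvectors of a random linear combination
# give a common similarity transformation, and the remaining off-diagonal magnitude
# measures how well the whole tuple is diagonalized.
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
P = rng.normal(size=(n, n))                       # common (unknown) diagonalizer
tuple_A = [P @ np.diag(rng.normal(size=n)) @ np.linalg.inv(P) for _ in range(m)]

# Eigenvectors of a random linear combination of the tuple.
combo = sum(rng.normal() * A for A in tuple_A)
_, V = np.linalg.eig(combo)
V_inv = np.linalg.inv(V)

def off_diagonal_norm(M):
    return np.linalg.norm(M - np.diag(np.diag(M)))

for i, A in enumerate(tuple_A):
    D = V_inv @ A @ V
    print(f"matrix {i}: off-diagonal norm after transformation = {off_diagonal_norm(D):.2e}")
```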
Yuriko TAKAISHI Shouhei KIDERA
A noise-robust and accuracy-enhanced microwave imaging algorithm is presented for monitoring microwave ablation in cancer treatment. The ablation-induced dielectric change can be assessed by microwave inverse scattering analysis, where estimating the size and dielectric drop of the ablation zone enables safe ablation monitoring. We focus on the distorted Born iterative method (DBIM), which is applicable to highly heterogeneous and highly contrasted dielectric profiles. As the reconstruction accuracy and convergence speed of the DBIM depend largely on the initial estimate of the dielectric profile and on the noise level, this study exploits a prior DBIM estimate of the pre-ablation state to accelerate convergence and introduces a matched-filter-based noise reduction scheme into the DBIM framework. Two-dimensional finite-difference time-domain numerical tests with realistic breast phantoms show that our method significantly enhances reconstruction accuracy at a lower computational cost.
The PCHS (Park-Chang-Hong-Seo) algorithm is a variant of the Karatsuba algorithm (KA) that utilizes a different splitting strategy and requires no overlap module. This algorithm has been applied to develop efficient hybrid GF(2^m) multipliers for irreducible trinomials and pentanomials. However, compared with KA-based hybrid multipliers, these multipliers usually match the space complexity but require a larger gate delay. In this paper, we propose a new hybrid multiplier design using the PCHS algorithm for irreducible all-one polynomials. The proposed scheme skillfully utilizes a redundant representation to combine and simplify the subexpression computations, which results in a significant speedup of the implementation. As a main contribution, the proposed multiplier has exactly the same space and time complexities as the KA-based scheme. This is the first demonstration that a different splitting strategy for KA can also yield an equally efficient multiplier.
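For reference, a sketch of the classic two-way Karatsuba algorithm for polynomial multiplication over GF(2), with polynomials encoded as Python integers (bit i holds the coefficient of x^i); the PCHS splitting and the redundant-representation simplifications of the proposed multiplier are not reproduced.

```python
# Classic two-way Karatsuba (KA) over GF(2): three half-size multiplications per split.
# This is the baseline KA only, not the PCHS splitting or the proposed multiplier.
def gf2_mul_schoolbook(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def gf2_mul_karatsuba(a: int, b: int, threshold_bits: int = 8) -> int:
    n = max(a.bit_length(), b.bit_length())
    if n <= threshold_bits:
        return gf2_mul_schoolbook(a, b)
    half = n // 2
    a_hi, a_lo = a >> half, a & ((1 << half) - 1)
    b_hi, b_lo = b >> half, b & ((1 << half) - 1)
    lo = gf2_mul_karatsuba(a_lo, b_lo)
    hi = gf2_mul_karatsuba(a_hi, b_hi)
    mid = gf2_mul_karatsuba(a_lo ^ a_hi, b_lo ^ b_hi) ^ lo ^ hi   # third product
    return (hi << (2 * half)) ^ (mid << half) ^ lo

a, b = 0b1011011110100101, 0b110111010011
assert gf2_mul_karatsuba(a, b) == gf2_mul_schoolbook(a, b)
print(bin(gf2_mul_karatsuba(a, b)))
```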
Haiyang LIU Lianrong MA Hao ZHANG
Let G_11 (resp., G_12) be the ternary Golay code of length 11 (resp., 12). In this letter, we investigate the separating redundancies of G_11 and G_12. In particular, we determine the values of s_l(G_11) for l = 1, 3, 4 and s_l(G_12) for l = 1, 4, 5, where s_l(G_11) (resp., s_l(G_12)) is the l-th separating redundancy of G_11 (resp., G_12). We also provide lower and upper bounds on s_2(G_11), s_2(G_12), and s_3(G_12).
Teruki HAYAKAWA Masateru TSUNODA Koji TODA Keitaro NAKASAI Amjed TAHIR Kwabena Ebo BENNIN Akito MONDEN Kenichi MATSUMOTO
Various software fault prediction models have been proposed in the past twenty years. Many studies have compared and evaluated existing prediction approaches in order to identify the most effective ones. However, in most cases such models and techniques provide varying results, and their outcomes do not yield the best possible performance across different datasets. This is mainly due to the diverse nature of software development projects; there is therefore a risk that the selected models lead to inconsistent results across multiple datasets. In this work, we propose the use of bandit algorithms in cases where the accuracy of the models is inconsistent across multiple datasets. In the experiment discussed in this work, we used four conventional prediction models, tested them on three different datasets, and then selected the best possible model dynamically by applying bandit algorithms. We then compared our results with those obtained using majority voting. Epsilon-greedy with ϵ=0.3 showed the best or second-best prediction performance compared with using only one prediction model and with majority voting. Our results show that bandit algorithms can provide promising outcomes when used in fault prediction.
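A minimal epsilon-greedy sketch in which each prediction model is treated as a bandit arm and ϵ = 0.3 as in the abstract; the model names and their hit rates are placeholders, and the real reward would come from fault-prediction performance on the datasets.

```python
# Minimal epsilon-greedy sketch (the reward here is a placeholder; in practice the
# reward reflects fault-prediction performance): each conventional prediction model
# is a bandit arm, and the best arm is selected dynamically with epsilon = 0.3.
import random

random.seed(0)
models = ["logistic_regression", "naive_bayes", "random_forest", "svm"]
true_hit_rate = {"logistic_regression": 0.55, "naive_bayes": 0.50,
                 "random_forest": 0.65, "svm": 0.60}       # hypothetical accuracies
epsilon = 0.3
counts = {m: 0 for m in models}
values = {m: 0.0 for m in models}                          # running mean reward

for _ in range(1000):
    if random.random() < epsilon:                          # explore a random arm
        chosen = random.choice(models)
    else:                                                  # exploit the best-so-far arm
        chosen = max(models, key=lambda m: values[m])
    reward = 1.0 if random.random() < true_hit_rate[chosen] else 0.0
    counts[chosen] += 1
    values[chosen] += (reward - values[chosen]) / counts[chosen]

print("selection counts:", counts)
print("estimated values:", {m: round(v, 3) for m, v in values.items()})
```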