Takashi YAMAMOTO Shigemasa TAKAI
In this paper, we study conjunctive decentralized diagnosis of discrete event systems (DESs). Most existing works on decentralized diagnosis of DESs implicitly assume that the diagnosis decisions of all local diagnosers are available for detecting a failure. In practice, however, some local diagnosis decisions may be unavailable for various reasons. Letting n be the number of local diagnosers, the notion of (n,k)-conjunctive codiagnosability guarantees that the occurrence of any failure is detected in the conjunctive architecture as long as at least k of the n local diagnosis decisions are available. We propose an algorithm for verifying (n,k)-conjunctive codiagnosability. To construct a reliable conjunctive decentralized diagnoser, we also need to compute the delay bound within which the occurrence of any failure is guaranteed to be detected as long as at least k of the n local diagnosis decisions are available, and we show how to compute this bound.
In this letter, a novel antenna selection (AS) technique is proposed for the downlink of large-scale multi-user multiple-input multiple-output (MU-MIMO) networks, where a base station (BS) equipped with a large number of antennas (N) communicates simultaneously with K (K ≪ N) mobile stations (MSs). In the proposed scheme, S antennas (S ≤ N) are selected by utilizing the concept of a sliding window. It is shown that the sum-rate of the proposed scheme is comparable to that of the conventional scheme, while the proposed scheme significantly reduces the complexity at the BS.
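The sliding-window idea can be illustrated with a short sketch. The sum-rate proxy log2 det(I + (SNR/S)·HsHsᴴ), the window step of one antenna, and the function name are illustrative assumptions rather than the letter's exact algorithm; the point is that only N − S + 1 contiguous candidate subsets are scored, instead of C(N, S) arbitrary ones.

```python
import numpy as np

def sliding_window_as(H, S, snr=10.0):
    """Select S contiguous antennas (columns of H) by sliding a window
    over the N antenna indices and keeping the window with the best
    sum-rate proxy log2 det(I + (snr/S) * Hs Hs^H).

    H : (K, N) complex channel matrix (K users, N BS antennas).
    Returns (best_start, best_rate)."""
    K, N = H.shape
    best_start, best_rate = 0, -np.inf
    for start in range(N - S + 1):           # only N-S+1 candidates,
        Hs = H[:, start:start + S]           # vs. C(N, S) for exhaustive search
        M = np.eye(K) + (snr / S) * Hs @ Hs.conj().T
        rate = np.log2(np.linalg.det(M).real)
        if rate > best_rate:
            best_start, best_rate = start, rate
    return best_start, best_rate
```

Scoring is linear in N for fixed S and K, which is the complexity saving the letter attributes to the window structure.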
Masashi FUSHIKI Takeo OHSEKI Satoshi KONISHI
Single-carrier frequency-division multiple access (SC-FDMA) is the multiple access technique employed in LTE uplink transmission. SC-FDMA can improve system throughput through frequency selective scheduling (FSS). In cellular systems using SC-FDMA in the uplink, interference from user equipments (UEs) in neighboring cells degrades system throughput, especially the throughput of cell-edge UEs. To overcome this drawback, many papers have considered fractional frequency reuse (FFR) techniques and analyzed their effectiveness. However, these studies have reached different conclusions, because the throughput gain of FFR depends on the frequency reuse design and the evaluation conditions. Previous papers have focused on the frequency reuse design; few have examined the conditions under which FFR is effective, and those that did evaluated only UE traffic conditions. This paper reveals further conditions under which FFR is effective by demonstrating the throughput gain of FFR. To analyze this gain, we focus on the throughput relationship between FFR and FSS. System-level simulation results demonstrate that FFR is effective when the following conditions are met: (i) the number of UEs is small, and (ii) the multipath delay spread is large or close to 0.
Shinichiro OHNUKI Kenichiro KOBAYASHI Seiya KISHIMOTO Tsuneki YAMASAKI
Electromagnetic scattering problems of canonical 2D structures can be analyzed with a high degree of accuracy by using the point matching method with mode expansion. In this paper, we will extend our previous method to 3D electromagnetic scattering problems and investigate the radar cross section of spherical shells and the computational accuracy.
Song GAO Chunheng WANG Baihua XIAO Cunzhao SHI Wen ZHOU Zhong ZHANG
This paper models spatial layout beyond the traditional spatial pyramid (SP) in the coding/pooling scheme for scene text character recognition. Specifically, we propose a novel method to build a dictionary, called the spatiality embedded dictionary (SED), in which each codeword represents a particular character stroke and is associated with a local response region. Experimental results show that the proposed method outperforms other state-of-the-art algorithms.
Shinsuke ODAGIRI Hiroyuki GOTO
For a fixed number of nodes, we focus on directed acyclic graphs that contain no shortcut edges. We identify the case in which the number of maximal paths is greatest and derive the corresponding count. Considering this case is essential when solving large-scale scheduling problems using a PERT chart.
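The quantity in question, the number of maximal (source-to-sink) paths, can be computed for any given DAG by a standard topological-order DP; the following sketch (function name assumed) shows the counting, which is useful for checking candidate extremal cases.

```python
from collections import defaultdict, deque

def count_maximal_paths(n, edges):
    """Count source-to-sink (maximal) paths in a DAG on nodes 0..n-1.
    DP in topological order: ways[v] = number of paths from any source
    to v; the answer sums ways[v] over all sinks."""
    succ = defaultdict(list)
    indeg = [0] * n
    outdeg = [0] * n
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        outdeg[u] += 1
    ways = [1 if indeg[v] == 0 else 0 for v in range(n)]  # each source starts one path
    remaining = indeg[:]
    q = deque(v for v in range(n) if indeg[v] == 0)       # Kahn's topological order
    while q:
        u = q.popleft()
        for v in succ[u]:
            ways[v] += ways[u]
            remaining[v] -= 1
            if remaining[v] == 0:
                q.append(v)
    return sum(ways[v] for v in range(n) if outdeg[v] == 0)
```

For instance, a layered graph with layers of size 2, 2, 2 and all edges between consecutive layers has 2 × 2 × 2 = 8 maximal paths.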
Lossy identification schemes are used to construct tightly secure signature schemes via the Fiat-Shamir heuristic in the random oracle model. Several lossy identification schemes have been instantiated from the short discrete logarithm assumption, the ring-LWE assumption, and the subset sum assumption, respectively. For assumptions related to integer factoring, Abdalla, Ben Hamouda and Pointcheval [3] recently presented lossy identification schemes based on the φ-hiding assumption, the QR assumption and the DCR assumption, respectively. In this paper, we propose new instantiations of lossy identification schemes. We first construct a variant of Schnorr's identification scheme and show its lossiness under the subgroup decision assumption. We also construct a lossy identification scheme based on the DCR assumption. Our DCR-based scheme has an advantage over the DCR-based scheme of Abdalla et al. in that it needs no modular exponentiation in the response phase. Our scheme is therefore well suited to transformation into an online/offline signature scheme.
Keishi TSUBAKI Tetsuya HIROSE Yuji OSAKI Seiichiro SHIGA Nobutaka KUROKI Masahiro NUMA
A fully on-chip CMOS relaxation oscillator (ROSC) with a PVT variation compensation circuit is proposed in this paper. The circuit is based on a conventional ROSC, and its distinctive feature is a compensation circuit that compensates for the comparator's non-idealities caused not only by offset voltage but also by delay time. Measurement results demonstrated that the circuit can generate a stable clock frequency of 6.66kHz. The current dissipation was 320nA at a 1.0-V power supply. The measured line regulation and temperature coefficient were 0.98%/V and 56ppm/°C, respectively.
Keigo KUBO Sakriani SAKTI Graham NEUBIG Tomoki TODA Satoshi NAKAMURA
Grapheme-to-phoneme (g2p) conversion, used to estimate the pronunciations of out-of-vocabulary (OOV) words, is a highly important part of recognition systems, as well as text-to-speech systems. The current state-of-the-art approach in g2p conversion is structured learning based on the Margin Infused Relaxed Algorithm (MIRA), an online discriminative training method for multiclass classification. However, MIRA's aggressive weight update rule is known to be prone to overfitting when the current example is an outlier or noisy. Adaptive Regularization of Weight Vectors (AROW) has been proposed to resolve this problem for binary classification. In addition, AROW's update rule is simpler and more efficient than that of MIRA, allowing for more efficient training. Despite these advantages, AROW has not yet been applied to g2p conversion. In this paper, we apply AROW to the g2p conversion task, which is a structured learning problem. In an evaluation that employed a dataset generated from the collective knowledge on the Web, our proposed approach achieves a 6.8% error reduction rate compared to MIRA in terms of phoneme error rate. The learning time of our proposed approach was also shorter than that of MIRA on almost all datasets.
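For reference, AROW's closed-form update in the binary case (the setting in which it was originally proposed) can be sketched as below; the class name and hyperparameter default are illustrative assumptions, and the paper's contribution lies in extending this idea to structured g2p prediction.

```python
import numpy as np

class ArowBinary:
    """Minimal AROW for binary labels y in {-1, +1}.
    mu is the weight mean, Sigma the (full) covariance over weights;
    r controls how conservative the updates are. Directions with large
    variance are updated aggressively, well-estimated ones cautiously."""
    def __init__(self, dim, r=1.0):
        self.mu = np.zeros(dim)
        self.Sigma = np.eye(dim)
        self.r = r

    def update(self, x, y):
        margin = y * self.mu.dot(x)
        if margin >= 1.0:                  # no hinge loss: no update
            return
        Sx = self.Sigma.dot(x)
        beta = 1.0 / (x.dot(Sx) + self.r)
        alpha = (1.0 - margin) * beta      # closed-form step size
        self.mu += alpha * y * Sx          # mean update
        self.Sigma -= beta * np.outer(Sx, Sx)  # shrink variance along x

    def predict(self, x):
        return 1 if self.mu.dot(x) >= 0 else -1
```

Unlike MIRA's constrained optimization, each update is a pair of closed-form expressions, which underlies the shorter training times reported above.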
Takeshi MITSUNAKA Masafumi YAMANOUE Kunihiko IIZUKA Minoru FUJISHIMA
In this paper, we present a differential dual-modulus prescaler based on an injection-locked frequency divider (ILFD) for satellite low-noise block (LNB) down-converters. We fabricated three-stage differential latches using an ILFD and a cascaded differential divider in a 130-nm CMOS process. The prototype chip core area occupies 40µm × 20µm. The proposed prescaler achieved the locking range of 2.1-10GHz with both divide-by-10 and divide-by-11 operations at a supply voltage of 1.4V. Normalized energy consumptions are 0.4pJ (=mW/GHz) at a 1.4-V supply voltage and 0.24pJ at a 1.2-V supply voltage. To evaluate the tolerance of phase-difference deviation of the input differential pair from the perfect differential phase-difference, 180 degrees, we measured the operational frequencies for various phase-difference inputs. The proposed prescaler achieved the operational frequency range of 2.1-10GHz with an input phase-difference deviation of less than 90 degrees. However, the range of operational frequency decreases as the phase-difference deviation increases beyond 90 degrees and reaches 3.9-7.9GHz for the phase-difference deviation of 180 degrees (i.e. no phase difference). In addition, to confirm the fully locking operation, we measured the spurious noise and the phase noise degradation while reducing the supply voltage. The sensitivity analysis of the prescaler for various supply voltages can explain the above degradation of spectral purity. Spurious noise arises and the phase noise degrades with decreasing supply voltage due to the quasi- and non-locking operations. We verified the fully-locking operation for the LNB down-converter at a 1.4-V supply voltage.
Xiaohong YANG Mingxing XU Yufang YANG
The research reported in this paper is an attempt to elucidate the predictors of pause duration in read-aloud discourse. Through simple linear regression analysis and stepwise multiple linear regression, we examined how different factors (namely, syntactic structure, discourse hierarchy, topic structure, preboundary length, and postboundary length) influenced pause duration both separately and jointly. Results from simple regression analysis showed that discourse hierarchy, syntactic structure, topic structure, and postboundary length had significant impacts on boundary pause duration. However, when these factors were tested in a stepwise regression analysis, only discourse hierarchy, syntactic structure, and postboundary length were found to have significant impacts on boundary pause duration. The regression model that best predicted boundary pause duration in discourse context was the one that first included syntactic structure, and then included discourse hierarchy and postboundary length. This model could account for about 80% of the variance of pause duration. Tests of mediation models showed that the effects of topic structure and discourse hierarchy were significantly mediated by syntactic structure, which was most closely correlated with pause duration. These results support an integrated model combining the influence of several factors and can be applied to text-to-speech systems.
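The stepwise procedure can be sketched as a greedy forward selection over candidate predictors; the R² criterion, the stopping threshold, and the synthetic predictor names below are illustrative assumptions, not the study's exact statistical protocol.

```python
import numpy as np

def forward_stepwise(X, y, names, min_gain=0.01):
    """Forward stepwise linear regression: greedily add the predictor
    that most increases R^2, stopping when the gain drops below
    min_gain. Returns (ordered selected names, final R^2)."""
    def r2(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        tss = (y - y.mean()) @ (y - y.mean())
        return 1.0 - (resid @ resid) / tss

    selected, best = [], 0.0
    while len(selected) < X.shape[1]:
        gains = {c: r2(selected + [c])
                 for c in range(X.shape[1]) if c not in selected}
        c, score = max(gains.items(), key=lambda kv: kv[1])
        if score - best < min_gain:        # negligible gain: stop
            break
        selected.append(c)
        best = score
    return [names[c] for c in selected], best
```

The order in which predictors enter mirrors the paper's finding that the best model admits syntactic structure first, then the remaining significant factors.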
Kazuto OGAWA Go OHTAKE Arisa FUJII Goichiro HANAOKA
For the sake of privacy preservation, services that are offered with reference to individual user preferences should do so with a sufficient degree of anonymity. We surveyed various tools that meet requirements of such services and decided that group signature schemes with weakened anonymity (without unlinkability) are adequate. Then, we investigated a theoretical gap between unlinkability of group signature schemes and their other requirements. We show that this gap is significantly large. Specifically, we clarify that if unlinkability can be achieved from any other property of group signature schemes, it becomes possible to construct a chosen-ciphertext secure cryptosystem from any one-way function. This result implies that the efficiency of group signature schemes can be drastically improved if unlinkability is not taken into account. We also demonstrate a way to construct a scheme without unlinkability that is significantly more efficient than the best known full-fledged scheme.
Naoki NISHIKAWA Keisuke IWAI Hidema TANAKA Takakazu KUROKAWA
Computer systems with GPUs are expected to become a powerful platform for high-speed encryption processing, while power consumption remains a primary concern for such processing on devices of all sizes. GPU vendors have announced their roadmaps for GPU architecture development: Nvidia Corp. promotes the Kepler architecture and AMD Corp. the GCN architecture. We therefore evaluated the throughput and power efficiency of three 128-bit block ciphers on GPUs with the recent Nvidia Kepler and AMD GCN architectures. In our experiments, the throughput and per-watt throughput of AES-128 on a Radeon HD 7970 (2048 cores) with the GCN architecture were 205.0Gbps and 1.3Gbps/W, respectively, whereas those on a GeForce GTX 680 (1536 cores) with the Kepler architecture were 63.9Gbps and 0.43Gbps/W; the two GPUs thus differ in AES-128 throughput by a factor of approximately 3.2. We then investigated the reasons for this throughput difference using our micro-benchmark suites. Based on the results, we speculate that, for Kepler GPUs to serve better as co-processors for block ciphers, their arithmetic and logical instructions must be improved in terms of both software and hardware.
Daisaburo YOSHIOKA Akio TSUNEDA
Since substitution boxes (S-boxes) are the only nonlinear portion of most block ciphers, the design of cryptographically strong and low-complexity S-boxes is of great importance in cryptosystems. In this paper, a new kind of S-box obtained by iterating a discretized piecewise linear map is proposed. The S-box can be implemented efficiently in both software and hardware. Moreover, performance tests show that the proposed S-box has good cryptographic properties.
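A generic flavor of this construction can be sketched as follows: iterate a skew tent map (a piecewise linear chaotic map) and rank the trajectory to obtain a bijective 8-bit S-box. The specific map, seed, parameter, and ranking step are illustrative assumptions, not the paper's exact discretization.

```python
import numpy as np

def tent_sbox(x0=0.37, a=0.61, n=256, burn=100):
    """Build an n-entry S-box by iterating a skew tent map and ranking
    the trajectory values: the rank of each sample in sorted order
    yields a permutation of 0..n-1, hence a bijective S-box."""
    x = x0
    for _ in range(burn):                        # discard the transient
        x = x / a if x < a else (1 - x) / (1 - a)
    traj = np.empty(n)
    for i in range(n):
        x = x / a if x < a else (1 - x) / (1 - a)
        traj[i] = x
    return np.argsort(np.argsort(traj))          # ranks -> permutation
```

Bijectivity follows from the ranking step regardless of the trajectory, which is one reason map-iteration constructions are attractive for low-complexity implementations.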
Chuchart PINTAVIROOJ Fernand S. COHEN Woranut IAMPA
This paper addresses the problems of fingerprint identification and verification when a query fingerprint is taken under conditions that differ from those under which the fingerprint of the same person stored in a database was constructed. This occurs when using a different fingerprint scanner with a different pressure, resulting in a fingerprint impression that is smeared and distorted in accordance with a geometric transformation (e.g., affine or even non-linear). Minutiae points on a query fingerprint are matched and aligned to those on one of the fingerprints in the database, using a set of absolute invariants constructed from the shape and/or size of minutiae triangles depending on the assumed map. Once the best candidate match is declared and the corresponding minutiae points are flagged, the query fingerprint image is warped against the candidate fingerprint image in accordance with the estimated warping map. An identification/verification cost function using a combination of distance map and global directional filterbank (DFB) features is then utilized to verify and identify a query fingerprint against candidate fingerprint(s). Performance of the algorithm yields an area of 0.99967 (perfect classification is a value of 1) under the receiver operating characteristic (ROC) curve based on a database consisting of a total of 1680 fingerprint images captured from 240 fingers. The average probability of error was found to be 0.713%. Our algorithm also yields the smallest false non-match rate (FNMR) for a comparable false match rate (FMR) when compared to the well-known technique of DFB features and triangulation-based matching integrated with modeling non-linear deformation. This work represents an advance in resolving the fingerprint identification problem beyond the state-of-the-art approaches in both performance and robustness.
In holographic data storage, information is recorded within the volume of a holographic medium. Typically, the data is presented as an array of pixels with modulation in amplitude and/or phase. In the 4-f orientation, the Fourier domain representation of the data array is produced optically, and this image is recorded. If the Fourier image contains large peaks, the recording material can saturate, which leads to errors in the read-out data array. In this paper, we present a coding process that produces sparse ternary data arrays. Ternary modulation is used because it inherently provides Fourier domain smoothing and allows more data to be stored per array in comparison to binary modulation. Sparse arrays contain fewer on-pixels than dense arrays, and thus contain less power overall, which reduces the severity of peaks in the Fourier domain. The coding process first converts binary data to a sequence of ternary symbols via a high-rate block code, and then uses guided scrambling to produce a set of candidate codewords, from which the most sparse is selected to complete the encoding process. Our analysis of the guided scrambling division and selection processes demonstrates that, with primitive scrambling polynomials, a sparsity greater than 1/3 is guaranteed for all encoded arrays, and that the probability of this worst-case sparsity decreases with increasing block size.
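The selection step of guided scrambling can be illustrated in miniature. Here the candidate set is just the three constant offsets mod 3 rather than quotients under a scrambling polynomial (an illustrative simplification); even this tiny candidate set guarantees sparsity of at least 1/3 by pigeonhole, mirroring the worst-case bound discussed above.

```python
def sparsest_candidate(block):
    """Guided-scrambling-style selection over ternary symbols {0,1,2}:
    form candidate codewords (here, the three constant offsets mod 3)
    and keep the one with the most zero ('off') symbols. Each position
    is zeroed by exactly one offset, so summing zero counts over the
    three candidates gives len(block); the best has >= len(block)/3."""
    candidates = [[(s + c) % 3 for s in block] for c in range(3)]
    return max(candidates, key=lambda w: w.count(0))
```

In the real scheme the guide symbols select among scrambled quotients, but the encode-many/keep-sparsest structure is the same.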
One of the technological achievements that has enabled the VLSI semiconductor industry to reduce transistor size, increase the number of transistors per die, and follow Moore's law year after year is that equivalent yield and equivalent testing quality have been ensured for the same die size. This has contributed to reducing the economically optimum production cost (the production cost per component) advocated by Moore. In this paper, we verify Moore's law using actual values from VLSI manufacturing sites while introducing some of the technical progress made from 1970 to 2010.
Rogene LACANIENTA Shingo TAKADA Haruto TANNO Morihide OINUMA
Over the past two decades, the Web has become an immensely popular platform for deploying software products. Web applications have become more prevalent, as well as more complex. Countless Web applications have been designed, developed, tested, and deployed on the Internet, and many common functionalities recur across this vast number of applications. This paper proposes an approach based on a database containing information from previous test artifacts. The information is used to generate test scenarios for Web applications under test. We have developed a tool based on our proposed approach, with the aim of reducing the effort required from software test engineers and professionals during the test planning and creation stage of software engineering. We evaluated our approach from three viewpoints: a comparison between our approach and manual generation, a qualitative evaluation by professional software engineers, and a comparison between our approach and two open-source tools.
In this paper we apply angle recoding to the CORDIC-based processing elements in a scalable architecture for complex matrix inversion. We extend the processing elements of the scalable real matrix inversion architecture to the complex domain and obtain a novel scalable complex matrix inversion architecture that significantly reduces computational complexity. We rearrange the CORDIC elements to make one half of the processing elements simple and compact. For the other half of the processing elements, the efficient use of angle recoding reduces the number of microrotation steps of the CORDIC elements to 3/4 of the original. Consequently, only 3 CORDIC elements are required for the processing elements with full utilization.
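The step-saving effect of angle recoding can be sketched with a scalar CORDIC rotation. The greedy skip rule (set the direction to 0 when a stage angle would overshoot the residual) and the software-side scale correction are illustrative assumptions; hardware implementations handle scaling differently, and the paper applies recoding inside matrix-inversion processing elements.

```python
import math

def cordic_rotate_recoded(x, y, theta, n=20):
    """Rotate (x, y) by theta using CORDIC microrotations with greedy
    angle recoding: at stage i the direction sigma_i in {-1, 0, +1} is
    chosen to best cancel the residual angle, so stages with
    sigma_i = 0 are skipped entirely -- the source of the step savings.
    Returns (x', y', number of stages actually used)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    scale = 1.0
    used = 0
    for i, a in enumerate(angles):
        if abs(theta) < a / 2:          # sigma_i = 0: rotation would overshoot
            continue
        s = 1.0 if theta > 0 else -1.0
        x, y = x - s * y * 2.0 ** -i, y + s * x * 2.0 ** -i
        theta -= s * a
        scale *= math.sqrt(1.0 + 4.0 ** -i)   # gain of the applied stage
        used += 1
    return x / scale, y / scale, used
```

Because the applied-stage set varies per angle, the scale factor must be tracked per input here; fixed-hardware designs instead constrain the recoding so scaling stays manageable.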
This letter presents a new entropy measure for electroencephalograms (EEGs), which reflects the underlying dynamics of EEG over multiple time scales. The motivation behind this study is that neurological signals such as EEG possess distinct dynamics over different spectral modes. To deal with the nonlinear and nonstationary nature of EEG, the recently developed empirical mode decomposition (EMD) is incorporated, allowing an EEG to be decomposed into its inherent spectral components, referred to as intrinsic mode functions (IMFs). By calculating Shannon entropy of IMFs in a time-dependent manner and summing them over adaptive multiple scales, the result is an adaptive subscale entropy measure of EEG. Simulation and experimental results show that the proposed entropy properly reveals the dynamical changes over multiple scales.
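The summing of time-dependent Shannon entropies over spectral components can be sketched as follows. The window length, bin count, histogram-based entropy estimate, and the use of a generic (m, T) component array in place of a full EMD implementation are all illustrative assumptions.

```python
import numpy as np

def subscale_entropy(imfs, win=128, bins=16):
    """Time-dependent Shannon entropy summed over spectral components.
    `imfs` is an (m, T) array of components (IMFs from EMD in the
    letter; any band decomposition works for this sketch). For each
    component and non-overlapping window, amplitudes are histogrammed
    and H = -sum p log2 p is computed; entropies are then summed across
    components per window, giving one value per time window."""
    m, T = imfs.shape
    n_win = T // win
    H = np.zeros(n_win)
    for comp in imfs:
        for w in range(n_win):
            seg = comp[w * win:(w + 1) * win]
            counts, _ = np.histogram(seg, bins=bins)
            p = counts[counts > 0] / len(seg)
            H[w] += -(p * np.log2(p)).sum()
    return H
```

A flat component contributes zero entropy while an irregular one contributes positively, so the summed measure tracks dynamical changes per scale and per window.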