To erase data containing confidential information stored on storage devices, the data is usually overwritten with an unrelated random sequence, which prevents it from being restored. T. Matsuta and T. Uyematsu introduced the problem of minimizing the cost of information erasure under the constraint that the amount of information leakage about the confidential information be asymptotically at most a given constant. Whereas the minimum cost of overwriting has been given for general sources, a single-letter characterization for stationary memoryless sources is not easily derived. In this paper, we give single-letter characterizations for stationary memoryless sources under two types of restrictions: one requires the output distribution of the encoder to be independent and identically distributed (i.i.d.), and the other requires it to be memoryless but not necessarily i.i.d. asymptotically. The characterizations reveal the relationship among the amount of information leakage, the minimum cost of information erasure, and the rate of the size of uniformly distributed sequences. The obtained results show that the minimum costs differ between the two restrictions.
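As a rough sketch of the type of optimization involved (the definitions below are assumptions for illustration, not necessarily Matsuta and Uyematsu's exact formulation): let X^n be the stored confidential sequence, Y^n the overwriting output produced by an encoder driven by a uniformly distributed sequence U_n, c_n a cost function, and δ the allowed leakage.

```latex
% Plausible formalization (assumed): minimize the asymptotic per-symbol
% overwriting cost subject to an asymptotic leakage constraint; the rate
% \limsup_n \frac{1}{n}\log|\mathcal{U}_n| of the uniform randomness is
% the third quantity related by the characterizations.
\begin{align*}
  \text{minimize}\quad   & \limsup_{n\to\infty} \frac{1}{n}\,
                           \mathbb{E}\big[c_n(Y^n)\big] \\
  \text{subject to}\quad & \limsup_{n\to\infty} I(X^n; Y^n) \le \delta .
\end{align*}
```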
Xinbo REN Haiyuan WU Qian CHEN Toshiyuki IMAI Takashi KUBO Takashi AKASAKA
Clinical research shows that the morbidity of coronary artery disease (CAD) is gradually increasing in many countries every year, and that it causes hundreds of thousands of deaths worldwide each year. Since optical coherence tomography, with its high resolution and superior contrast, has been applied to the investigation of lesion tissue in human vessels, many more micro-structures of the vessel have become easily and clearly visible to doctors, which helps to improve the effectiveness of CAD treatment. Manual qualitative analysis and classification of vessel lesion tissue are time-consuming for doctors because a single intravascular optical coherence tomography (IVOCT) data set of a patient usually contains hundreds of in-vivo vessel images. To overcome this problem, we focus on the superficial layer of the lesion region and propose a model based on local multi-layer regions for characterizing and extracting the features of vessel lesion components (lipid, fibrous and calcified plaque). At the pre-processing stage, we applied two novel automatic methods to remove the catheter and the guide-wire, respectively. Based on the detected lumen boundary, the multi-layer model in the proximity lumen boundary region (PLBR) was built. In the multi-layer model, features extracted from the A-line sub-region (ALSR) of each layer were employed to characterize the type of tissue present in the ALSR. We used 7 human data sets containing a total of 490 OCT images to assess our tissue classification method. Validation was obtained by comparing the manual assessment with the automatic results derived by our method. The proposed automatic tissue classification method achieved an average accuracy of 89.53%, 93.81% and 91.78% for fibrous, calcified and lipid plaque, respectively.
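To illustrate the multi-layer ALSR idea, here is a minimal Python sketch under assumed conventions (a polar IVOCT image with rows as A-lines and columns as depth, a detected lumen boundary index per A-line); the statistics are placeholders, not the paper's actual feature set.

```python
import numpy as np

def alsr_features(polar_img, lumen_idx, n_layers=3, layer_depth=20, width=8):
    """For each block of `width` A-lines, split the region just below the
    detected lumen boundary into `n_layers` layers of `layer_depth` pixels
    and compute simple intensity statistics per layer."""
    n_alines, depth = polar_img.shape
    feats = []
    for start in range(0, n_alines - width + 1, width):
        base = int(np.median(lumen_idx[start:start + width]))
        block = []
        for layer in range(n_layers):
            lo = min(base + layer * layer_depth, depth - 1)
            hi = min(lo + layer_depth, depth)
            roi = polar_img[start:start + width, lo:hi]
            block += [roi.mean(), roi.std(), np.percentile(roi, 90)]
        feats.append(block)
    return np.asarray(feats)  # one feature vector per ALSR
```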
Song BIAN Michihiro SHINTANI Masayuki HIROMOTO Takashi SATO
As technology scaling further shrinks semiconductor devices, aging-induced device degradation has become one of the major threats to device reliability. Hence, taking aging-induced degradation into account during the design phase can greatly improve the reliability of the manufactured devices. However, accurately estimating the aging effect for extremely large circuits, such as processors, is time-consuming. In this research, we focus on negative bias temperature instability (NBTI) as the aging-induced degradation mechanism, and propose a fast and efficient way of estimating NBTI-induced delay degradation by utilizing static timing analysis (STA) and simulation-based lookup tables (LUTs). We modeled each type of gate at different degradation levels, load capacitances and input slews. Using these gate-delay models, the path delays of arbitrary circuits can be efficiently estimated. Taking a typical five-stage pipelined processor as the design target and comparing the delays calculated from the LUTs with reference delays obtained from a commercial circuit simulator, we achieved a 4114-fold speedup with a delay error within 5.6%.
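The following Python fragment sketches the LUT-and-sum idea with made-up grid points and delay values (the actual tables come from circuit simulation; the axes and numbers here are assumptions for illustration only).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# LUT axes: NBTI degradation level (Vth shift, V), load capacitance (fF),
# input slew (ps). The grid and delay values below are made up.
deg  = np.array([0.0, 0.05, 0.10])
cap  = np.array([1.0, 2.0, 4.0])
slew = np.array([10.0, 20.0, 40.0])
delay = (15.0 + 100.0 * deg[:, None, None]
              + 3.0 * cap[None, :, None]
              + 0.2 * slew[None, None, :])          # gate delay in ps

inv_lut = RegularGridInterpolator((deg, cap, slew), delay)

def path_delay(luts, operating_points):
    """Sum the interpolated gate delays along a timing path."""
    return sum(lut(pt).item() for lut, pt in zip(luts, operating_points))

# Three-gate example path, each gate at its own (degradation, load, slew).
print(path_delay([inv_lut] * 3,
                 [(0.05, 2.0, 15.0), (0.08, 1.5, 20.0), (0.02, 3.0, 30.0)]))
```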
Balgeun YOO Seongjin LEE Youjip WON
SSDs consist of non-mechanical components (host interface, control core, DRAM, flash memory, etc.) whose integrated behavior is not well known, which makes an SSD appear as a black box to users. We analyzed the power consumption of four SSDs under standard I/O operations and found the following: (a) the power consumption of SSDs is not significantly lower than that of HDDs, and (b) all SSDs we tested showed similar power consumption patterns, which we attribute to their internal parallelism. SSDs have a parallel architecture that connects flash memories by channel and by way; this parallelism can improve SSD performance if the relevant information is known to the file system. This paper proposes three SSD characterization algorithms that infer the characteristics of an SSD, such as its internal parallelism, I/O unit, and page allocation scheme, by measuring its power consumption under workloads of various sizes. These algorithms are applied to four real SSDs to find: (i) the internal parallelism, which decides whether I/Os are performed in a concurrent or an interleaved manner, (ii) the I/O unit size, which determines the maximum size that can be assigned to a flash memory, and (iii) the page allocation method, which maps the logical addresses of write operations requested from the host to physical addresses of the flash memory. We also developed a data sampling method to provide consistency in collecting the power consumption patterns of each SSD. Applying the three algorithms to four real SSDs, we found their flash memory configurations, I/O unit sizes, and page allocation schemes. We show that SSD performance can be improved by aligning the record size of the file system with the I/O unit of the SSD found by our algorithm. For example, we found that the Q Pro has an I/O unit of 32 KB; aligning the file system record size to 32 KB increased performance by 201% and decreased energy consumption by 85% compared to a 4 KB record size.
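The workload-sweep idea can be sketched as follows; note the paper infers the I/O unit from power-consumption patterns, whereas this stand-in uses write latency, and the device path and sizes are hypothetical (running it requires root and will write to the device).

```python
# Issue writes of increasing size and look for the request size beyond
# which per-request latency starts growing, hinting at the internal I/O
# unit. Timing is a crude stand-in for the paper's power measurements.
import os, time

DEV = "/dev/sdX"                      # hypothetical raw device (needs root)
SIZES_KB = [4, 8, 16, 32, 64, 128]

def avg_write_latency(fd, size_kb, trials=64):
    buf = os.urandom(size_kb * 1024)
    t0 = time.perf_counter()
    for i in range(trials):
        os.pwrite(fd, buf, i * size_kb * 1024)
    os.fsync(fd)
    return (time.perf_counter() - t0) / trials

fd = os.open(DEV, os.O_WRONLY | getattr(os, "O_SYNC", 0))
for s in SIZES_KB:
    print(s, "KB:", avg_write_latency(fd, s))
os.close(fd)
```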
Eiji UCHINO Ryosuke KUBOTA Takanori KOGA Hideaki MISAWA Noriaki SUETAKE
In this paper we propose a novel classification method, the multiple k-nearest neighbor (MkNN) classifier, and show its practical application to medical image processing. The proposed method performs fine-grained classification when each observation is given as a pair consisting of its spatial coordinate in the observation space and the corresponding feature vector in the feature space. The proposed MkNN classifier exploits the continuity of the distribution of features of the same class not only in the feature space but also in the observation space. To validate the performance of the present method, it is applied to the tissue characterization problem of coronary plaque. The quantitative and qualitative validity of the proposed MkNN classifier has been confirmed by actual experiments.
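A minimal sketch of the underlying idea (illustrative, not the authors' exact algorithm): select neighbors by a distance that mixes feature-space and observation-space distances, so spatially contiguous tissue of the same class reinforces the vote. The weight `w` is an assumed parameter.

```python
import numpy as np
from collections import Counter

def mknn_predict(X_feat, X_pos, y, q_feat, q_pos, k=5, w=0.5):
    """Classify a query by voting among the k nearest training samples,
    where 'near' blends feature-space and observation-space distances."""
    d_feat = np.linalg.norm(X_feat - q_feat, axis=1)
    d_pos  = np.linalg.norm(X_pos - q_pos, axis=1)
    d = (1 - w) * d_feat / d_feat.std() + w * d_pos / d_pos.std()
    nearest = np.argsort(d)[:k]
    return Counter(np.asarray(y)[nearest]).most_common(1)[0][0]
```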
Hyun-chong CHO Lubomir HADJIISKI Berkman SAHINER Heang-Ping CHAN Chintana PARAMAGUL Mark HELVIE Alexis V. NEES Hyun Chin CHO
To study the similarity between queries and retrieved masses, we designed an interactive CBIR (Content-based Image Retrieval) CADx (Computer-aided Diagnosis) system that uses relevance feedback for the characterization of breast masses in ultrasound (US) images, based on radiologists' visual similarity assessment. The CADx system retrieves masses that are similar to query masses from a reference library based on six computer-extracted features that describe the texture, width-to-height ratio, and posterior shadowing of the mass. k-NN retrieval with the Euclidean distance similarity measure and the Rocchio relevance feedback algorithm (RRF) are used. To train the RRF parameters, the similarities of 1891 image pairs from 62 masses (31 malignant and 31 benign) were rated by 3 MQSA (Mammography Quality Standards Act) radiologists on a 9-point scale (9 = most similar). The best RRF parameters were chosen based on 3 observer experiments. For testing, 100 independent query masses (49 malignant and 51 benign) and 121 reference masses on 230 images (79 malignant and 151 benign) were collected. Three radiologists rated the similarity between the query masses and the computer-retrieved masses. Average similarity ratings without and with RRF were 5.39 and 5.64 for the training set and 5.78 and 6.02 for the test set, respectively. Average AUC values without and with RRF were, respectively, 0.86±0.03 and 0.87±0.03 for the training set and 0.91±0.03 and 0.90±0.03 for the test set. On average, masses retrieved using the CBIR system were moderately similar to the query masses based on the radiologists' similarity assessments, and RRF improved the similarity of the retrieved masses.
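The retrieval-plus-feedback loop follows the standard Rocchio form, sketched below; the weights alpha/beta/gamma are generic defaults, not the trained values from the observer experiments.

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query feature vector toward rated-similar retrieved masses
    and away from rated-dissimilar ones (standard Rocchio update)."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q

def knn_retrieve(library, query, k=5):
    """k-NN retrieval with the Euclidean distance similarity measure."""
    d = np.linalg.norm(library - query, axis=1)
    return np.argsort(d)[:k]
```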
Hai Huy NGUYEN PHAM Shintaro HISATAKE Tadao NAGATSUMA
We demonstrate the characterization of a horn antenna over the full F-band (90 ∼ 140 GHz) based on a far-field transformation of near-field electro-optic (EO) measurements. Our nonpolarimetric self-heterodyne EO sensing system enables us to simultaneously measure the spatial distributions of the amplitude and phase of the RF signal. Because free-running lasers are used to generate and detect the RF signal, our EO sensing system has wide frequency tunability. Owing to the stable and reliable amplitude and phase measurements with minimal field perturbation, the estimated far-field patterns agree well with the simulated results. We evaluated the estimation errors of the 3-dB beamwidth and the position of the first sidelobe; the largest standard error of the measurements was 1.1° for the 3-dB beamwidth and 3.5° for the position of the first sidelobe, both at 90 GHz. Our EO sensing system can be used to characterize and evaluate terahertz antennas for indoor communication applications such as small-size slot array antennas.
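Conceptually, a planar near-field-to-far-field transformation Fourier-transforms the measured complex aperture field into a plane-wave spectrum whose visible region maps to radiation angles. The sketch below assumes a square scan grid and a hypothetical data file; it is not the authors' processing chain.

```python
import numpy as np

c, f = 3e8, 100e9               # 100 GHz, inside the F-band
lam = c / f
dx = lam / 4                    # assumed EO probe scan step

E = np.load("nearfield.npy")    # hypothetical complex aperture field (N, N)
N = E.shape[0]

A = np.fft.fftshift(np.fft.fft2(E))                   # plane-wave spectrum
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
k = 2 * np.pi / lam

# One principal-plane cut: spectrum magnitude vs. angle, visible region only.
visible = np.abs(kx) <= k
theta = np.degrees(np.arcsin(kx[visible] / k))
cut = np.abs(A[N // 2, visible])
pattern_db = 20 * np.log10(cut / cut.max() + 1e-12)   # normalized far field
```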
Korkut Kaan TOKGOZ Kimsrun LIM Seitarou KAWAI Nurul FAJRI Kenichi OKADA Akira MATSUZAWA
A multi-port device is characterized using the measurement results of a two-port Vector Network Analyzer (VNA) with four different termination structures. The loads used as terminations are open- or short-circuited transmission lines (TLs), which are characterized together with the Ground-Signal-Ground pads by the L-2L de-embedding method. A new characterization method for a four-port device is introduced along with its theory, and is validated using simulation and measurement results. The characterized four-port device is a Crossing Transmission Line (CTL), mainly used for the over-pass or under-pass of RF signals. Four measurement results are used to characterize the CTL and obtain its S-parameter response. For validation, the reconstructed responses are compared with the measurements, showing good agreement between the measured and modeled results from 1 GHz to 110 GHz.
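The pad-extraction step of L-2L de-embedding can be sketched at a single frequency in ABCD-matrix form (this illustrates only the L-2L idea, not the paper's full four-port method, and assumes identical, reciprocal input/output pads): with pad network P and line segment T_L, the measured cascades are M_L = P·T_L·P and M_2L = P·T_L·T_L·P, so M_L·inv(M_2L)·M_L = P·P.

```python
import numpy as np
from scipy.linalg import sqrtm

def pads_from_L2L(M_L, M_2L):
    """Recover the pad ABCD matrix from measured L and 2L thru structures.
    Returns one branch of the matrix square root of P @ P; the correct
    branch must be selected from physical constraints (passivity, etc.)."""
    PP = M_L @ np.linalg.inv(M_2L) @ M_L
    return sqrtm(PP)
```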
Hiromitsu AWANO Hiroshi TSUTSUI Hiroyuki OCHI Takashi SATO
Random telegraph noise (RTN) is a phenomenon considered to limit the reliability and performance of circuits using advanced devices. The time constants of carrier capture and emission and the associated change in the threshold voltage are important parameters commonly included in various models, but their extraction from time-domain observations has been a difficult task. In this study, we propose a statistical method for simultaneously estimating these interrelated parameters: the time constants and the magnitude of the threshold voltage shift. Our method is based on a graphical network representation, and the parameters are estimated using the Markov chain Monte Carlo method. Experimental application of the proposed method to synthetic and measured time-domain RTN signals was successful. The proposed method can handle the interrelated parameters of multiple traps and thereby contributes to the construction of more accurate RTN models.
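A drastically simplified, single-trap illustration of MCMC-based time-constant estimation follows (the paper handles multiple interrelated traps via a graphical model; here the trap state is assumed directly observable, so dwell times in each state are exponential with means tau_c and tau_e).

```python
import numpy as np

rng = np.random.default_rng(1)
true_tau_c, true_tau_e = 2.0, 5.0
dwell_lo = rng.exponential(true_tau_c, 200)  # capture (trap empty) dwells
dwell_hi = rng.exponential(true_tau_e, 200)  # emission (trap occupied) dwells

def log_lik(tau, d):
    """Exponential log-likelihood of dwell times with mean tau."""
    return -d.size * np.log(tau) - d.sum() / tau

def metropolis(d, n_iter=5000, step=0.1):
    tau, ll = d.mean(), log_lik(d.mean(), d)
    out = []
    for _ in range(n_iter):
        prop = tau * np.exp(step * rng.normal())      # log-scale random walk
        llp = log_lik(prop, d)
        # Hastings correction log(prop/tau) for the multiplicative proposal.
        if np.log(rng.random()) < llp - ll + np.log(prop / tau):
            tau, ll = prop, llp
        out.append(tau)
    return np.asarray(out)

print("tau_e ~", metropolis(dwell_hi)[1000:].mean())   # close to 5
print("tau_c ~", metropolis(dwell_lo)[1000:].mean())   # close to 2
```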
This letter proposes a method for reusing unit test cases, which characterize the internal behavior of a called function, to enhance the capability of automatic test case generation. Existing test case generation tools have difficulty finding solutions when the source code has a deep call structure. In our approach, the complex call structure is simplified by reusing the unit test cases of the called functions: since unit test cases represent the characteristics of a called function, the internal behavior of the called function is replaced by its test cases. This approach is applicable to existing test tools, simplifying the generation process and enhancing their capabilities.
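A toy illustration of the idea (not the letter's actual tool or language): input/output pairs recorded from the callee's unit tests stand in for its internals, so a generator only needs to solve the caller's own branches.

```python
from unittest.mock import patch

def callee(x):                 # deep, hard-to-analyze internals
    ...                        # imagine complex logic here

def caller(x):
    return "big" if callee(x) > 10 else "small"

# Hypothetical input/output pairs taken from callee's unit test cases
# act as a behavioral summary of the function.
unit_cases = {3: 2, 7: 42}

with patch(f"{__name__}.callee", side_effect=lambda x: unit_cases[x]):
    assert caller(7) == "big"      # covers the 'big' branch
    assert caller(3) == "small"    # covers the 'small' branch
```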
Yosuke HIMURA Kensuke FUKUDA Patrice ABRY Kenjiro CHO Hiroshi ESAKI
In this paper, we discuss the validity of the multi-scale gamma model and characterize the differences in host-level application traffic with this model, using a real traffic trace collected on a 150-Mbps transpacific link. First, we investigate the dependency of the model (parameters α and β, and fitting accuracy ε) on the time scale Δ, and find suitable time scales for the model. Second, we inspect the relations among α, β, and ε in order to characterize the differences among types of applications. The main findings of the paper are as follows. (1) Different types of applications show different dependencies of α, β, and ε on Δ, and display different suitable Δs for the model; the model is more accurate when the traffic consists of intermittently sent packets. (2) More appropriate models are obtained with specific α and β values (e.g., 0.1 < α < 1 and β < 2 for Δ = 500 ms). Moreover, application-specific traffic presents specific ranges of α, β, and ε for each Δ, so these characteristics can be used in application identification methods such as anomaly detection and other machine learning methods.
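The per-scale fit can be sketched as follows, taking α as the shape and β as the scale of a gamma distribution fitted to per-bin packet counts by the method of moments (the timestamps here are synthetic; the paper's exact fitting and accuracy measure ε may differ).

```python
import numpy as np

def gamma_fit_at_scale(timestamps, delta):
    """Fit Gamma(alpha, beta) to per-bin packet counts at time scale delta
    by the method of moments: mean = alpha*beta, var = alpha*beta**2."""
    t = np.asarray(timestamps)
    bins = np.arange(t.min(), t.max() + delta, delta)
    counts, _ = np.histogram(t, bins)
    m, v = counts.mean(), counts.var()
    return m**2 / v, v / m          # (alpha, beta)

rng = np.random.default_rng(0)
arrivals = np.cumsum(rng.exponential(0.01, 20000))  # synthetic packet times (s)
for delta in (0.1, 0.5, 1.0):
    print(delta, gamma_fit_at_scale(arrivals, delta))
```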
Wimol SAN-UM Masayoshi TACHIBANA
An analog circuit testing scheme is presented. The testing technique is a sinusoidal fault signature characterization, involving the measurement of DC offset, amplitude, frequency and phase shift, and the realization of two crossing level voltages. The testing system extends the IEEE 1149.4 standard through the modification of an analog boundary module, providing both on-chip testing capability and accessibility to internal components for off-chip testing. A demonstration circuit-under-test, a 4th-order Gm-C low-pass filter, and the proposed analog testing scheme are implemented at the physical level using 0.18-µm CMOS technology and simulated using HSPICE. Both catastrophic and parametric faults are detectable at a minimum parameter variation of 0.5%. The fault coverages associated with the CMOS transconductance operational amplifiers and the capacitors are 94.16% and 100%, respectively. This work enhances the standardized test approach, reducing the complexity of the test circuitry and providing non-intrusive analog circuit testing.
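Extracting the sinusoidal signature parameters (DC offset, amplitude, phase) from a sampled response can be done with a standard three-parameter least-squares sine fit at a known test frequency, sketched below on a synthetic signal; this shows the measurement principle, not the paper's on-chip circuitry.

```python
import numpy as np

def sine_fit(t, y, f):
    """Solve y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c in least squares."""
    w = 2 * np.pi * f
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(-b, a)        # y = amplitude*cos(w*t + phase) + c
    return c, amplitude, phase

t = np.linspace(0, 1e-3, 1000)
y = 0.2 + 0.9 * np.cos(2 * np.pi * 5e3 * t + 0.4)
print(sine_fit(t, y, 5e3))           # -> (~0.2, ~0.9, ~0.4)
```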
Hyunjin YOO Kang Y. KIM Kwan H. LEE
High Dynamic Range Imaging (HDRI) refers to a set of techniques that can represent the full dynamic range of real-world luminance; an HDR image can therefore be used to measure the reflectance properties of materials. To reproduce the original colors of materials using such an HDR image, characterization of the HDR imaging is needed. In this study, we propose a new HDRI characterization method that operates at the HDR level under a known illumination condition. The proposed method normalizes the HDR image using the HDR image of the light source and balances the tone using the reference of the color chart. We demonstrate that our method outperforms the previous LDR-level method in terms of average color difference and BRDF rendering results, giving a much better reproduction of the original color of a given material.
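The normalization step can be illustrated as follows; the file names, region of interest, and reference value are hypothetical placeholders, and the actual tone-balancing procedure in the paper may be more elaborate.

```python
import numpy as np

scene = np.load("scene_hdr.npy")     # hypothetical linear HDR image (H, W, 3)
light = np.load("light_hdr.npy")     # HDR capture of the illuminant

# Normalize the scene by the illuminant image (guard against division by 0).
normalized = scene / np.maximum(light, 1e-6)

# Tone balance: scale so a measured gray patch matches its chart reference.
patch = normalized[100:120, 200:220].mean(axis=(0, 1))   # hypothetical ROI
reference = np.array([0.18, 0.18, 0.18])                 # assumed gray value
balanced = normalized * (reference / patch)
```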
Chien-Tsun CHEN Yu Chin CHENG Chin-Yun HSIEH
Design by Contract (DBC), which originated in the Eiffel programming language, is generally accepted as a practical method for building reliable software. Currently, however, few languages have built-in support for it. In recent years, several methods have been proposed to support DBC in Java. We compare eleven DBC tools for Java by analyzing their impact on the developer's programming activities, which we characterize by seven quality attributes identified in this paper. It is shown that each of the existing tools fails to achieve some of the quality attributes. This motivated us to develop ezContract, an open-source DBC tool for Java that achieves all seven quality attributes. ezContract integrates smoothly with the working environment: notably, standard Java language is used, and advanced IDE features that work for standard Java programs also work for contract-enabled programs, including incremental compilation, automatic refactoring, and code assist.
Mutsumi KIMURA Yoshitaka NISHIZAKI Takehiko YAMASHITA Takehiro SHIMA Tomohisa HACHIDA
Two types of thin-film phototransistors (TFPTs), the p/i/n TFPT and the n/i/n TFPT, are characterized from the viewpoints of operating condition and device behavior. It is found that in the saturation region of the p/i/n TFPT, the detected current can be both independent of the applied voltage (Vapply) and linearly dependent on the photo-illuminance. This is because, even if Vapply increases, the depletion layer extends through the whole intrinsic region, and the electric field changes only near the p-type/intrinsic and intrinsic/n-type interfaces while remaining unchanged in most of the intrinsic region. This characteristic is preferable for some kinds of photosensor applications. Finally, an application example of the p/i/n TFPT, an artificial retina, is introduced.
Sung-Hak LEE Jong-Hyub LEE Kyu-Ik SOHNG
An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between RGB output signals and CIE XYZ tristimulus values. This paper proposes an adaptive measuring method that obtains the chromaticity of colored scenes and illumination through a 3×3 camera transfer matrix under a given illuminant. Camera RGB outputs, sensor status values, and the photoelectric characteristics are used to obtain the chromaticity. Experimental results show that the proposed method achieves valid measurement performance.
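The least-squares derivation of the 3×3 transfer matrix amounts to solving an overdetermined linear system from measured (RGB, XYZ) training pairs, as in the sketch below (the patch data are synthetic placeholders, not the paper's measurements).

```python
import numpy as np

rng = np.random.default_rng(0)
RGB = rng.random((24, 3))                 # e.g. 24 color-chart patches
M_true = np.array([[0.41, 0.36, 0.18],    # made-up ground-truth matrix
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
XYZ = RGB @ M_true.T                      # simulated tristimulus readings

# Least squares: XYZ ~ RGB @ M.T, solved column-wise by lstsq.
Mt, *_ = np.linalg.lstsq(RGB, XYZ, rcond=None)
M = Mt.T
print(np.allclose(M, M_true))             # recovers the transfer matrix
```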
Weidong TIAN Joe R. TROGOLO Bob TODD
Capacitor mismatch is an important device parameter for precision analog applications. Over the last ten years, the floating-gate measurement technique has been widely used for its characterization. In this paper we describe the impact of leakage current on this technique; the leakage can come from, for example, thin-gate-oxide MOSFETs or high-dielectric-constant capacitors in advanced technologies. SPICE simulations, bench measurements, an analytical model and numerical analyses are presented to illustrate the problem and its key contributing factors. Criteria for accurate characterization of systematic and random capacitor mismatch are developed, and practical methods of increasing measurement accuracy are discussed.
Hongbin SUO Ming LI Ping LU Yonghong YAN
Robust automatic language identification (LID) is the task of identifying the language of a short utterance spoken by an unknown speaker. Mainstream approaches include parallel phone recognition language modeling (PPRLM), support vector machines (SVMs) and general Gaussian mixture models (GMMs). These systems map the cepstral features of spoken utterances into high-level scores using classifiers. In this paper, in order to increase the dimension of the score vector and alleviate inter-speaker variability within the same language, multiple data groups based on supervised speaker clustering are employed to generate discriminative language characterization score vectors (DLCSV). Back-end SVM classifiers are used to model the probability distribution of each target language in the DLCSV space, and the output scores of the back-end classifiers are calibrated by a pair-wise posterior probability estimation (PPPE) algorithm. The proposed language identification frameworks are evaluated on the 2003 NIST Language Recognition Evaluation (LRE) database, and the experiments show that the system described in this paper produces results comparable to those of existing systems. In particular, the SVM framework achieves an equal error rate (EER) of 4.0% in the 30-second task, outperforming state-of-the-art systems by more than 30% relative error reduction. In addition, the proposed PPRLM and GMM systems achieve EERs of 5.1% and 5.0%, respectively.
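The back-end stage can be sketched as follows: each utterance is represented by a score vector and per-language SVMs are trained on those vectors. The data are synthetic, the DLCSV construction via speaker clustering is not reproduced, and scikit-learn's built-in pairwise Platt-style probability coupling is only loosely analogous to the paper's PPPE calibration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_lang, dim = 4, 12                       # e.g. 4 target languages
scores = rng.normal(size=(400, dim))      # stand-in DLCSV vectors
labels = rng.integers(0, n_lang, 400)
scores[np.arange(400), labels] += 2.0     # make the classes separable

backend = SVC(probability=True).fit(scores, labels)
posteriors = backend.predict_proba(scores[:5])   # calibrated-style outputs
print(posteriors.round(2))
```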
Maria Rosario de OLIVEIRA Rui VALADAS Antonio PACHECO Paulo SALVADOR
Internet access traffic follows hourly patterns that depend on various factors, such as the periods during which users stay online at the access point (e.g. at home or in the office) and their preferences for applications. Clustering Internet users may provide important information for traffic engineering and billing; for example, it can be used to set up service differentiation according to hourly behavior, resource optimization based on multi-hour routing, and tariffs that promote Internet access during off-peak hours. In this work, we propose a methodology for clustering Internet users with similar patterns of Internet utilization, according to their hourly traffic usage. The methodology employs three statistical multivariate analysis techniques: cluster analysis, principal component analysis and discriminant analysis. It is illustrated with measured data from two distinct ISPs, one using a CATV access network and the other an ADSL one, offering distinct traffic contracts. Principal component analysis is used as an exploratory tool. Cluster analysis is used to identify the relevant Internet usage profiles, with partitioning around medoids and Ward's method being the preferred clustering methods; for both data sets, these methods lead to the choice of 3 clusters with different hourly traffic utilization profiles. The cluster structure is validated through discriminant analysis, and is also evaluated in terms of several characteristics of the user traffic not used in the cluster analysis, such as the type of applications, the amount of downloaded traffic, the activity duration and the transfer rate, resulting in coherent outcomes.
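A compressed sketch of the pipeline on synthetic 24-dimensional hourly profiles follows (PCA for exploration, Ward's method cut at 3 clusters); partitioning around medoids and the discriminant-analysis validation are omitted, and the data stand in for the ISP measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
profiles = np.vstack([rng.normal(m, 0.3, (50, 24))       # three synthetic
                      for m in (0.2, 0.8, 1.5)])         # usage profiles

# Exploratory PCA via SVD on centered data.
X = profiles - profiles.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
pcs = X @ Vt[:2].T                        # first two principal components

# Ward's hierarchical clustering, cut at 3 clusters.
labels = fcluster(linkage(profiles, method="ward"), t=3, criterion="maxclust")
print(np.bincount(labels)[1:])            # cluster sizes
```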
Masahiko OMURA Toshiki KANAMOTO Michiko TSUKAMOTO Mitsutoshi SHIROTA Takashi NAKAJIMA Masayuki TERAI
This paper proposes a new, efficient method of characterizing a memory compiler that reduces computation time and removes human error. Three new features make our method highly efficient: (1) high-speed circuit simulation of the whole memory module using a hierarchical LPE (Layout Parasitic Extractor) and a hierarchical circuit simulator, (2) automatic generation of circuit simulation input data from a corresponding parameterized description termed the template file, and (3) careful selection of the environmental conditions of the circuit-level simulator and minimization of the number of its runs. We demonstrate the effectiveness of the proposed method by applying it to single-port SRAM generators using 90 nm CMOS technology.