Kanji YASUI Yutaka OOSHIMA Yuichiro KUROKI Hiroshi NISHIYAMA Masasuke TAKATA Tadashi AKAHANE
Al-doped zinc oxide (AZO) films were deposited using a radio frequency (rf) magnetron sputtering apparatus with a mesh grid electrode. Improved crystalline uniformity was achieved by using an appropriate negative grid bias to effectively suppress the bombardment of the film surface by high-energy charged particles. The uniformity of the films' electronic properties, such as resistivity, carrier concentration, and Hall mobility, was also improved by this sputtering method. Hydrogen plasma annealing was investigated to further decrease the resistivity of the ZnO films; the carrier concentration was increased by 1-2×10^20 cm^-3 without a decrease in Hall mobility.
Taek-Young YOUN Young-Ho PARK Jongin LIM
Trapdoor commitment schemes are widely used for adding valuable properties to ordinary signatures or for enhancing the security of weakly secure signatures. In this letter, we propose a trapdoor commitment scheme based on the RSA function and prove its security under the hardness of integer factoring. Our scheme is very efficient in computing a commitment. In particular, it requires only three multiplications to evaluate a commitment when e=3 is used as the public exponent of the RSA function. Moreover, our scheme has two useful properties, key-exposure freeness and strong trapdoor opening, which are useful for designing secure chameleon signature schemes and for converting a weakly secure signature into a strongly secure one, respectively.
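The letter does not spell out the construction here; as a point of reference, the textbook RSA-based trapdoor (chameleon) commitment can be sketched as follows. All parameters are insecure toy values, and this is a generic construction for illustration, not necessarily the authors' exact scheme.

```python
# Textbook RSA-based trapdoor commitment (illustrative sketch only).
# Public key: (N, e, y) with y = x^e mod N; the trapdoor is x.

def commit(N, e, y, m, r):
    """Commitment C = y^m * r^e mod N."""
    return (pow(y, m, N) * pow(r, e, N)) % N

def trapdoor_open(N, x, m, r, m_new):
    """With the trapdoor x, find r' so that (m_new, r') opens the same C:
    y^m * r^e = y^m_new * r'^e  =>  r' = r * x^(m - m_new) mod N."""
    return (r * pow(x, m - m_new, N)) % N

# Toy parameters: N = 11 * 23 = 253, e = 3 (gcd(3, phi(N)) = gcd(3, 220) = 1).
N, e = 253, 3
x = 5                      # trapdoor
y = pow(x, e, N)           # public value
C = commit(N, e, y, m=7, r=2)
r_new = trapdoor_open(N, x, m=7, r=2, m_new=4)
assert commit(N, e, y, 4, r_new) == C   # equivocated opening verifies
```

With a small public exponent such as e=3, evaluating r^e costs only two multiplications, which is consistent with the efficiency the abstract claims for commitment evaluation.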
Ya-Shih HUANG Yu-Ju HONG Juinn-Dar HUANG
In deep-submicron technology, several state-of-the-art architectural synthesis flows have adopted the distributed register architecture to cope with increasing wire delay by allowing multicycle communication. In this article, we regard communication synthesis targeting a refined regular distributed register architecture, named RDR-GRS, as a problem of simultaneous data-transfer routing and scheduling for global interconnect resource minimization. We present an innovative algorithm that takes both spatial and temporal perspectives into account. It features a concentration-oriented path router that gathers wire-sharable data transfers and a channel-based time scheduler that resolves contention for wires in a channel, operating in the spatial and temporal domains, respectively. The experimental results show that the proposed algorithm significantly outperforms existing related work.
Yonghee PARK Junghoe CHOI Jisuk HONG Sanghoon LEE Moonhyun YOO Jundong CHO
Research on predicting and removing lithographic hot-spots has become prevalent in the semiconductor industry and is known to pose one of the most difficult challenges for achieving high-quality detection coverage. To let physical design implementation reflect the designer's preferences when fixing hot-spots, in this paper we present a novel and accurate hot-spot detection method, a so-called "leveling and scoring" algorithm based on a weighted combination of image quality parameters (i.e., normalized image log-slope (NILS), mask error enhancement factor (MEEF), and depth of focus (DOF)) from lithography simulation. In our algorithm, a hot-spot scoring function reflecting severity level is first calibrated with process-window qualification, and a least-squares regression method is then used to calibrate the weighting coefficient of each image quality parameter. Once the scoring function has been calibrated against wafer results, our method can be applied to future designs using the same process. Using this calibrated scoring function, we can generate fixing guidance and rules to detect hot-spot areas by locating the edge bias value that leads to a hot-spot-free score level. Finally, we integrate the hot-spot fixing guidance into a layout editor to provide a designer-friendly environment. Applying our method to memory devices at the 60 nm node and below, we attained a process window margin sufficient for high-yield mass production.
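The least-squares calibration of the weighting coefficients can be sketched as follows. The image-quality values and severity levels below are invented toy data (here chosen to lie on an exact linear rule so the fit is exact); the actual calibration uses process-window qualification results.

```python
import numpy as np

# Hypothetical calibration data: rows are simulated layout sites with image
# quality parameters [NILS, MEEF, DOF]; `severity` is the hot-spot level
# assigned from process-window qualification (toy values only).
X = np.array([[2.1, 3.5, 0.10],
              [1.4, 5.2, 0.06],
              [2.8, 2.1, 0.15],
              [1.1, 6.0, 0.04],
              [2.5, 2.8, 0.12]])
severity = np.array([3.6, 5.9, 1.35, 6.95, 2.45])

# Least-squares regression for the weighting coefficients (plus an intercept).
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, severity, rcond=None)

def score(nils, meef, dof):
    """Hot-spot score from the calibrated weighted combination."""
    return w[0] * nils + w[1] * meef + w[2] * dof + w[3]

assert np.allclose(A @ w, severity)   # calibration reproduces the toy levels
```

Once `w` is fixed, `score` can be swept over candidate edge-bias values to find the bias at which a site's score crosses into the hot-spot-free level, which is the fixing-guidance step the abstract describes.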
This paper presents an extended Relief-F algorithm for nominal attribute estimation, for application to small-document classification. Relief algorithms are general and successful instance-based feature-filtering algorithms for data classification and regression. Many improved Relief algorithms have been introduced to address redundant and irrelevant noisy features and the limitations of the algorithms on multiclass datasets. However, these algorithms have rarely been applied to text classification, because the numerous features in multiclass datasets lead to great time complexity. Therefore, considering their application to text feature filtering and classification, we presented an extended Relief-F algorithm for numerical attribute estimation (E-Relief-F) in 2007, but found it had limitations and some problems. In this paper, we therefore identify additional problems with Relief algorithms for text feature filtering, including the negative influence on similarity and weight computation caused by the small number of features in an instance, the absence of nearest hits and misses for some instances, and great time complexity. We then propose a new extended Relief-F algorithm for nominal attribute estimation (E-Relief-Fd) to solve these problems, and apply it to small text-document classification. In experiments, we used the algorithm to estimate feature quality for various datasets, applied it to classification, and compared its performance with that of existing Relief algorithms. The experimental results show that the new E-Relief-Fd algorithm offers better performance than previous Relief algorithms, including E-Relief-F.
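For readers unfamiliar with the Relief family, the core instance-based weight update can be sketched in its simplest binary-class form (the original Relief, not the paper's E-Relief-Fd): sample an instance, find its nearest hit (same class) and nearest miss (other class), and reward features that separate the miss while agreeing with the hit. Data and distances below are toy values.

```python
import random

def relief(X, y, n_iter=100, seed=0):
    """Minimal binary-class Relief sketch: returns one weight per feature."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    w = [0.0] * n_feat

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(len(X))
        # nearest hit (same class) and nearest miss (other class)
        hit = min((j for j in range(len(X)) if j != i and y[j] == y[i]),
                  key=lambda j: dist(X[i], X[j]))
        miss = min((j for j in range(len(X)) if y[j] != y[i]),
                   key=lambda j: dist(X[i], X[j]))
        for f in range(n_feat):
            w[f] += abs(X[i][f] - X[miss][f]) - abs(X[i][f] - X[hit][f])
    return [wf / n_iter for wf in w]

# Feature 0 separates the two classes; feature 1 is noise.
X = [[0.0, 0.3], [0.1, 0.9], [0.9, 0.2], [1.0, 0.8]]
y = [0, 0, 1, 1]
w = relief(X, y)
assert w[0] > w[1]   # the informative feature gets the larger weight
```

The problems the paper targets are visible even in this sketch: with few features per instance the distances become unreliable, and a class with a single instance has no nearest hit at all.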
Kazuhisa YAMAGISHI Takanori HAYASHI
A non-intrusive packet-layer model is needed to passively monitor the quality of experience (QoE) of a service. We propose a packet-layer model that estimates the video quality of IPTV using quality parameters derived from transmitted packet headers. Its computational load is lighter than that of models that take video signals and/or video-related bitstream information, such as motion vectors, as input. The model is applicable even when the transmitted bitstream is encrypted, because it uses only packet headers rather than bitstream information. To develop the model, we conducted three extensive subjective quality assessments covering different encoders and decoders (codecs) and different video content. We then modeled the subjective video quality assessment characteristics based on objective features affected by coding and packet loss. Finally, we verified the model's validity by applying it to unknown data sets distinct from the training data sets.
Koichi TAKEUCHI Hideyuki TAKAHASHI
The extraction of verb synonyms is a key technology for building a verb dictionary as a language resource. This paper presents a co-clustering-based verb synonym extraction approach that increases the number of extracted meanings of polysemous verbs from a large text corpus. In clustering-based verb synonym extraction, polysemous verbs pose a problem because each polysemous verb should be categorized into different clusters depending on its meanings; there is thus a high risk of failing to extract some of those meanings. Our proposed approach extracts the different meanings of polysemous verbs by recursively eliminating the extracted clusters from the initial data set. The experimental results show that the proposed approach increases the number of correct verb clusters by about 50%, with a 0.9% increase in precision and a 1.5% increase in recall over the previous approach.
Masayuki UKISHIMA Hitomi KANEKO Toshiya NAKAGUCHI Norimichi TSUMURA Markku HAUTA-KASARI Jussi PARKKINEN Yoichi MIYAKE
The image quality of a halftone print is significantly influenced by the optical characteristics of the paper. Light scattering in paper produces optical dot gain, which has a significant influence on the tone and color reproduction of halftone prints. The light scattering can be quantified by the modulation transfer function (MTF) of the paper. Several methods have been proposed to measure the MTF of paper; however, these methods have problems with measurement efficiency or accuracy. In this article, a new method is proposed to measure the MTF of paper efficiently and accurately, and the dot gain effect on halftone prints is analyzed. The MTF is calculated as the ratio, in the spatial frequency domain, between the responses of incident pencil light to the paper and to a perfect specular reflector. Since the spatial frequency characteristic of the input pencil light can be obtained from the response of the perfect specular reflector, there is no need to produce input illumination with an "ideal" impulse characteristic. Our method is experimentally efficient since only two images need to be measured, and it is accurate since the data can be fitted by the conventional MTF model. Next, we predict the reflectance distribution of a halftone print under microscopy using the measured MTF, in order to analyze the dot gain effect, since it can be clearly observed in the halftone microstructure. Finally, a simulation is carried out to remove the light scattering effect from the predicted image. Since the simulated image is not affected by optical dot gain, it can be used to analyze the real dot coverage.
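The frequency-domain ratio at the heart of the method can be sketched in one dimension. The two line-spread profiles below are synthetic Gaussians standing in for the measured responses (a narrow pencil-light response from the specular reflector, broadened by scattering in the paper); real measurements replace them.

```python
import numpy as np

# MTF(u) = |F{paper response}| / |F{reference (perfect specular) response}|,
# each normalized by its DC component. Profiles are synthetic stand-ins.
x = np.linspace(-5, 5, 1024)
ref = np.exp(-x**2 / (2 * 0.05**2))      # narrow pencil-light response
paper = np.exp(-x**2 / (2 * 0.40**2))    # broadened by light scattering

F_ref = np.abs(np.fft.rfft(ref))
F_paper = np.abs(np.fft.rfft(paper))
mtf = (F_paper / F_paper[0]) / (F_ref / F_ref[0])   # normalized ratio

assert abs(mtf[0] - 1.0) < 1e-12   # unity at DC
assert mtf[10] < 1.0               # scattering attenuates higher frequencies
```

Dividing by the reference spectrum is what removes the need for an ideal impulse: the finite width of the actual pencil beam cancels out of the ratio.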
Kei HASHIMOTO Hirofumi YAMAMOTO Hideo OKUMA Eiichiro SUMITA Keiichi TOKUDA
This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.
Amal PUNCHIHEWA Jonathan ARMSTRONG Seiichiro HANGAI Takayuki HAMAMOTO
This paper presents a novel approach to analysing colour bleeding caused by image compression. This is achieved by isolating two components of colour bleeding and evaluating them separately. Although these specific components of colour bleeding have not been studied in great detail in the past, with the use of a synthetic test pattern -- similar to the colour bars used to test analogue television transmissions -- we have successfully isolated and evaluated "colour blur" and "colour ringing" as two separate components of the colour bleeding artefact. We have also developed metrics for these artefacts and tested them in a series of trials assessing the colour reproduction performance of a JPEG codec and a JPEG2000 codec -- both as implemented in IrfanView. The algorithms developed to measure these artefact metrics proved to be effective tools for evaluating and benchmarking the performance of similar codecs, or of different implementations of the same codecs.
Kei OHNISHI Kaori YOSHIDA Yuji OIE
We present the concept of folksonomical peer-to-peer (P2P) file-sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web in that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by each participant to each of their files. A search query has the same form of search tags and indicates what participants want to feel about the files they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files whose search tags are similar to the query. The similarity between the search query and the search tags is measured by their dot product. Simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
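The dot-product-based forwarding step can be sketched as follows. The network, tag vectors, and the exact forwarding rule are illustrative assumptions; the abstract specifies only that forwarding is probabilistic and biased by query-tag dot products.

```python
# Sketch: forward a Kansei query preferentially to neighbors whose files carry
# tag vectors similar to the query, similarity measured by the dot product.

def forward_probs(query, neighbor_tags):
    """Forwarding probability proportional to the summed dot products between
    the query and each neighbor's file tags (non-negative toy vectors)."""
    scores = [sum(sum(q * t for q, t in zip(query, tag)) for tag in tags)
              for tags in neighbor_tags]
    total = sum(scores)
    return [s / total for s in scores]

query = (0.9, 0.1)                # what the searcher wants to feel
neighbors = [
    [(1.0, 0.0), (0.8, 0.2)],     # peer holding similar-feeling files
    [(0.1, 0.9)],                 # peer holding dissimilar files
]
p = forward_probs(query, neighbors)
assert p[0] > p[1]                       # similar peer is favored
assert abs(sum(p) - 1.0) < 1e-12         # valid probability distribution
```

Sampling the next hop from `p` instead of taking the argmax is what keeps the forwarding probabilistic, so dissimilar peers are still occasionally explored.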
Mai OHTA Takeo FUJII Kazushi MURAOKA Masayuki ARIYOSHI
In this paper, we propose a novel method for gathering sensing information using an orthogonal narrowband signal for cooperative sensing in cognitive radio. To improve spectrum sensing performance by countering the locality effect of the wireless channel, cooperative sensing, which combines sensing information from multiple surrounding sensing nodes, has attracted attention. Cooperative sensing requires that sensing information be gathered at a master node to determine the existence of a primary signal; if the gathering method is redundant, the total capacity of the secondary networks is not improved. We therefore propose a method that maps the sensing information onto orthogonal narrowband signals so that the master node can gather sensing information from many nodes simultaneously. Specifically, the sensing information is mapped onto orthogonal subcarriers of an orthogonal frequency division multiplexing (OFDM) structure, reducing the frequency resources required for information gathering, and the orthogonal signals are transmitted simultaneously from multiple sensing nodes. This paper evaluates the performance of the proposed information gathering method and confirms its effectiveness.
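Why orthogonal subcarriers allow simultaneous gathering can be sketched with an idealized baseband model (perfect synchronization, no channel or noise, one subcarrier per node, BPSK reports; all of these are simplifying assumptions, not the paper's evaluated conditions).

```python
import numpy as np

n_sub = 8
# Each sensing node modulates only its own subcarrier (BPSK: +/-1 decision).
reports = {0: +1, 3: -1, 5: +1}          # node -> local sensing decision
spectrum = np.zeros(n_sub, dtype=complex)
for k, bit in reports.items():
    spectrum[k] = bit

tx = np.fft.ifft(spectrum)               # superposed time-domain OFDM symbol
rx = np.fft.fft(tx)                      # master node separates nodes by FFT

for k, bit in reports.items():
    assert round(rx[k].real) == bit      # each report recovered exactly
assert abs(rx[1]) < 1e-9                 # unused subcarrier stays empty
```

Because the subcarriers are mutually orthogonal over the symbol, the simultaneous transmissions do not interfere, so the master node needs one OFDM symbol's worth of bandwidth-time rather than one narrowband slot per node.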
In this paper, we present a Double-Anchoring Based Tone Mapping (DABTM) algorithm for displaying high dynamic range (HDR) images. First, two anchoring values are obtained using the double-anchoring theory. Second, we use these two values to formulate the compression operator, which achieves tone mapping directly. A new method based on accelerated K-means is proposed for decomposing HDR images into groups (frameworks). Most importantly, a group of piecewise-overlapping linear functions is put forward to define the degree to which pixels belong to their enclosing frameworks. Experiments show that our algorithm achieves dynamic range compression while preserving fine details and avoiding common artifacts such as gradient reversals, halos, and loss of local contrast.
Katsuya NAKAHIRA Kiyoshi KOBAYASHI
This paper describes a novel channel allocation scheme that enables data to be collected from observation points throughout the ultra-wide area covered by a satellite communication system. Most of the earth stations in the system acquire pre-scheduled data, such as rainfall and temperature measurements, but a few acquire event-driven data, such as earthquake measurements. The main issue for such a scheme is how to effectively accommodate the channel demand of a huge number of earth stations within a limited satellite frequency bandwidth, regardless of the acquired data types. To tackle this issue, we propose a channel allocation scheme built on a pre-assigned scheme for gathering pre-scheduled data, with an additional procedure for gathering event-driven data reliably. Performance evaluations show that the proposed scheme achieves higher throughput and a lower packet loss rate than conventional schemes.
This paper focuses on fusion estimation algorithms weighted by matrices and by scalars, and considers the relationship between them. We present new algorithms that address the computation of the matrix weights arising in multidimensional estimation problems. The first algorithm is based on the Cholesky factorization of a cross-covariance block matrix; it is equivalent to the standard composite fusion estimation algorithm but has lower complexity. The second fusion algorithm is based on an approximation scheme that uses a special steady-state approximation for the local cross-covariances; this approximation is useful for computing the matrix weights in real time. An analysis of the proposed fusion algorithms is then presented, with examples demonstrating their low computational complexity.
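As background for what "matrix weights" means here, the standard textbook case of fusing two local estimates with uncorrelated errors (cross-covariance zero) can be sketched as follows. This is a reference point only, not the paper's Cholesky-based or steady-state algorithms, which handle nonzero cross-covariances.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Matrix-weighted fusion with uncorrelated local errors:
    P = (P1^-1 + P2^-1)^-1, weights W_i = P @ P_i^-1 (so W1 + W2 = I)."""
    P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    W1, W2 = P @ np.linalg.inv(P1), P @ np.linalg.inv(P2)
    return W1 @ x1 + W2 @ x2, P

# Two 2-D local estimates: sensor 1 is precise in component 0, sensor 2 in 1.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([0.0, 1.0]), np.diag([4.0, 1.0])
x, P = fuse(x1, P1, x2, P2)

assert np.allclose(x, [0.8, 0.8])             # each component leans toward the more certain sensor
assert np.all(np.linalg.eigvalsh(P1 - P) > 0)  # fused covariance is strictly smaller
```

Scalar weights would force both components of each estimate to share one weight; the matrix weights above let the fusion trust a different sensor per component, which is what the composite fusion algorithm generalizes.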
Satoru OCHIIWA Satoshi TAOKA Masahiro YAMAUCHI Toshimasa WATANABE
The minimum initial marking problem of Petri nets (MIM) is defined as follows: "Given a Petri net and a firing count vector X, find an initial marking M0, with the minimum total number of tokens, for which there is a sequence δ of transitions such that each transition t appears exactly X(t) times in δ, the first transition is enabled at M0, and the rest can be fired one by one subsequently." In a production system such as factory automation, the economical distribution of initial resources from which a schedule of job processings is executable can be formulated as MIM. AAD is known to produce the best solutions among existing algorithms; AMIM+ produces worse solutions than AAD but is very fast. This paper proposes new heuristic algorithms AADO and AMDLO, improved versions of AAD and AMIM+, respectively: AADO targets solution quality, while AMDLO targets short CPU time. Computational experiments show that the average total number of tokens in the initial markings produced by AADO is about 5.15% less than that of AAD, while the average CPU time of AADO is about 17.3% of that of AAD. AMDLO produces solutions that are slightly worse than those of AAD, but about 10.4% better than those of AMIM+. Although the CPU time of AMDLO is about 180 times that of AMIM+, it is still fast: the average CPU time of AMDLO is about 2.33% of that of AAD. In general, solutions get worse as the size of the input instances increases, as is the case with AAD and AMIM+; this undesirable tendency is greatly reduced in AADO and AMDLO.
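A useful observation behind such heuristics is that once a firing sequence is fixed, the minimum initial marking is easy: give each place exactly the largest token deficit the sequence ever creates. The hard part of MIM, choosing the sequence itself, is what AAD/AADO and AMIM+/AMDLO address heuristically. The toy net below is an illustrative assumption.

```python
def min_initial_marking(places, seq, pre, post):
    """For a fixed firing sequence `seq`, return the minimum initial marking.
    pre[t][p] / post[t][p]: tokens consumed / produced by transition t at p."""
    m0 = {p: 0 for p in places}
    bal = {p: 0 for p in places}          # tokens available assuming m0 = 0
    for t in seq:
        for p in places:
            need = pre[t].get(p, 0) - bal[p]
            if need > 0:                  # raise m0 just enough to enable t
                m0[p] += need
                bal[p] += need
            bal[p] += post[t].get(p, 0) - pre[t].get(p, 0)
    return m0

# Toy net: t1 moves a token p1 -> p2, t2 moves it back.
pre  = {'t1': {'p1': 1}, 't2': {'p2': 1}}
post = {'t1': {'p2': 1}, 't2': {'p1': 1}}
m0 = min_initial_marking(['p1', 'p2'], ['t1', 't2', 't1'], pre, post)
assert m0 == {'p1': 1, 'p2': 0}   # one token suffices for the whole sequence
```

Different orderings of the same firing count vector can demand very different token totals, which is why the choice of sequence dominates solution quality.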
Carrier aggregation is a potential technology for the LTE-Advanced system to support wider bandwidth than the LTE system. This paper analyzes the performance of carrier aggregation under elastic traffic and compares it with that of a simpler approach to the same end, referred to as the independent carrier approach. The queueing behaviors of the two approaches are formulated as one fast versus multiple slow state-dependent processor sharing servers, respectively. Both analytical and simulation results show that when there are L component carriers with uniform bandwidth in the system, the carrier aggregation approach performs L times better than the independent carrier approach in terms of average user delay and throughput under the same traffic load.
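The L-fold gain can be illustrated with the simplest processor-sharing case: with exponential flow sizes, the M/M/1-PS mean sojourn time is 1/(capacity - load). Pooling L carriers into one fast server then divides the mean delay by exactly L. This sketch assumes that simple model, not the paper's more general state-dependent PS formulation.

```python
# Carrier aggregation pools L carriers of capacity C (serving arrival rate
# L*lam) into one server of capacity L*C; the independent-carrier approach
# runs L separate servers, each with capacity C and arrival rate lam.

def mean_delay_ps(capacity, lam):
    """M/M/1-PS mean sojourn time (requires lam < capacity for stability)."""
    assert lam < capacity
    return 1.0 / (capacity - lam)

C, lam, L = 1.0, 0.6, 4
t_independent = mean_delay_ps(C, lam)            # one of the L slow servers
t_aggregated = mean_delay_ps(L * C, L * lam)     # the pooled fast server

assert abs(t_independent / t_aggregated - L) < 1e-12   # exactly L times faster
```

Intuitively, the pooled server gives every in-progress flow the full aggregated capacity whenever few flows are active, a statistical multiplexing gain the isolated carriers cannot exploit.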
Kiyoshi KOBAYASHI Fumihiro YAMASHITA Jun-ichi ABE Masazumi UEBA
This paper presents a prototype group modem for a hyper-multipoint data gathering satellite communication system. It can handle arbitrarily and dynamically assigned FDMA signals by employing a novel FFT-type block demultiplexer/multiplexer. We clarify its configuration and operational principle. Experiments show that the developed modem offers excellent performance.
Byung-Tae CHOI Hyung Dal PARK Heung-Sik TAE
To explain the variation of the address discharge during an address period, the wall voltage variation during the address period was investigated as a function of the address-on-time by using Vt closed curves. It was observed that the wall voltage between the scan and address electrodes decreased with an increase in the address-on-time, and that the wall voltage variation during an address period strongly depended on the voltage difference between the scan and address electrodes. Based on this result, a modified driving waveform that raises the level of Vscanw was proposed to minimize the voltage difference between the scan and address electrodes. However, the modified driving waveform increased the falling time of the scan pulse. Finally, an overlapped double scan waveform was proposed to reduce the falling time of the scan pulse while keeping the raised voltage level of Vscanw.
Marta R. COSTA-JUSSA Jose A. R. FONOLLOSA
This paper surveys several state-of-the-art reordering techniques employed in Statistical Machine Translation systems. Reordering is understood as the word-order redistribution of the translated words. In original SMT systems, this change of order is modeled only within the limits of translation units. Relying only on the reordering provided by translation units may not be sufficient for most language pairs, which may require longer-range reorderings. Therefore, additional techniques must be deployed to face the reordering challenge. The Statistical Machine Translation community has been very active recently in developing reordering techniques. This paper gives a brief survey and classification of several well-known reordering approaches.