In this paper, we investigate the discrepancy between the serial and parallel versions of zero-knowledge protocols, and clarify the information "leaked" by the parallel version, which, unlike the serial version, is not zero-knowledge. We consider two sides, one negative and the other positive, of the parallel version of zero-knowledge protocols, especially of the Fiat-Shamir scheme.
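For reference, the round structure at issue can be sketched as follows. In the serial version the rounds run one after another; in the parallel version all commitments are sent before any challenge arrives, which is precisely the structural change that affects the zero-knowledge property. This is a toy Python sketch: the tiny modulus and all parameter values are ours, chosen only so the sketch runs, not the paper's.

```python
import random

# Toy parameters (illustrative only; a real deployment uses a large
# RSA modulus n = p*q whose factorization is secret).
p, q = 1009, 1013
n = p * q
s = 123              # prover's secret
v = (s * s) % n      # public key v = s^2 mod n

def prover_commit():
    # Prover picks random r and sends the commitment x = r^2 mod n.
    r = random.randrange(1, n)
    return r, (r * r) % n

def prover_respond(r, e):
    # Response y = r * s^e mod n for challenge bit e.
    return (r * pow(s, e, n)) % n

def verify(x, e, y):
    # Verifier checks y^2 = x * v^e mod n.
    return (y * y) % n == (x * pow(v, e, n)) % n

def serial_protocol(t=20):
    # Serial version: each round completes before the next begins.
    for _ in range(t):
        r, x = prover_commit()
        e = random.randrange(2)          # verifier's challenge bit
        y = prover_respond(r, e)
        if not verify(x, e, y):
            return False
    return True

def parallel_protocol(t=20):
    # Parallel version: all t commitments first, then all t challenges.
    rs, xs = zip(*(prover_commit() for _ in range(t)))
    es = [random.randrange(2) for _ in range(t)]
    ys = [prover_respond(r, e) for r, e in zip(rs, es)]
    return all(verify(x, e, y) for x, e, y in zip(xs, es, ys))
```

Both versions accept an honest prover; the difference the paper analyzes is not in correctness but in what a cheating verifier can extract from seeing all commitments before choosing the challenges.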
Hiroaki TERADA Kenichi KAGOSHIMA
The term telecommunications is derived from "tele", meaning at a distance, and "communications", meaning exchanging of information. The history of electronic communications has thus far been one of exchanging spoken, visual, and/or textual information between pairs of people, pairs of machines, and people and machines. The role of telecommunications has been to provide a medium for the exchange of the information, with the burden placed on the communicating people or machines to initiate the communication and to interpret or process the information being exchanged. In this paper we attempt to predict some future trends in telecommunications, reaching into the next century. Such predictions are inevitably incomplete, inaccurate, or both. Nevertheless, it is a useful exercise to try to anticipate these trends, and more importantly the issues and problems that will arise in the future, as a way of focusing near-term research efforts and suggesting opportunities. One of our hypotheses about the future is that telecommunications networks will become much more active in initiating, controlling, and participating in the exchange of information. Our approach will be to first review some particularly important past developments, and then to try to predict the future in two ways: first, by extrapolating present trends and activities, and second, by criticizing current trends and anticipating problems looming on the horizon.
Sho-ichi MATSUNAGA Tomokazu YAMADA Kiyohiro SHIKANO
In speech recognition systems dealing with unlimited vocabulary and based on stochastic language models, when the target recognition task is changed, recognition performance decreases because the language model is no longer appropriate. This paper describes two approaches for adapting a specific/general syllable trigram model to a new task. One uses a small amount of text data similar to the target task, and the other uses supervised learning based on the most recent input phrases together with similar text. In this paper, these adaptation methods are called "preliminary learning" and "successive learning", respectively. These adaptation methods are evaluated using syllable perplexity and phrase recognition rates. The perplexity was reduced from 24.5 to 14.3 by preliminary learning using 1000 phrases of similar text, and to 12.1 by successive learning using 1000 phrases including the 100 most recent phrases. The recognition rates also improved from 42.3% to 51.3% and 52.9%, respectively. Text similarity for the two approaches is also studied in this paper.
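The mechanics of trigram adaptation and its perplexity evaluation can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions: additive smoothing and a simple weighted count merge stand in for the paper's actual smoothing and adaptation schemes, which may differ.

```python
import math
from collections import Counter

def trigram_counts(syllable_seqs):
    # Collect trigram and context-bigram counts over padded sequences.
    c3, c2 = Counter(), Counter()
    for seq in syllable_seqs:
        padded = ["<s>", "<s>"] + list(seq) + ["</s>"]
        for i in range(2, len(padded)):
            c3[tuple(padded[i - 2:i + 1])] += 1
            c2[tuple(padded[i - 2:i])] += 1
    return c3, c2

def perplexity(test_seqs, c3, c2, vocab_size, alpha=0.5):
    # Additive-smoothed trigram probabilities (an assumption for
    # this sketch), then 2^(-average log2 probability).
    logp, count = 0.0, 0
    for seq in test_seqs:
        padded = ["<s>", "<s>"] + list(seq) + ["</s>"]
        for i in range(2, len(padded)):
            tri, bi = tuple(padded[i - 2:i + 1]), tuple(padded[i - 2:i])
            prob = (c3[tri] + alpha) / (c2[bi] + alpha * vocab_size)
            logp += math.log2(prob)
            count += 1
    return 2 ** (-logp / count)

def adapt(base_counts, task_counts, w=3.0):
    # "Preliminary learning" in miniature: merge counts from a small
    # task-similar corpus into the base model, weighted more heavily.
    merged = Counter(base_counts)
    for key, value in task_counts.items():
        merged[key] += w * value
    return merged
```

On task-similar test data, the adapted counts assign higher probability to task trigrams, which is exactly the perplexity reduction the evaluation measures.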
Mikio YAMAMOTO Satoshi KOBAYASHI Yuji MORIYA Seiichi NAKAGAWA
We studied the manner of clarification and verification in real dialogs and developed a spoken dialog system that can resolve ambiguities in the meaning of user utterances. We analyzed the content, query types, and responses of human clarification queries. In human-human communication, ten percent of all sentences concern meaning clarification. Therefore, in human-machine communication, we believe it is important that the machine verify ambiguities arising during dialog processing. We propose an architecture for a dialog system with this capability. We have also investigated the sources of ambiguity in dialog processing and methods of dialog clarification for each part of the dialog system.
This paper discusses the problems facing spoken dialogue processing and the prospects for future improvements. Research on elemental topics such as speech recognition, speech synthesis, and language understanding has led to improvements in the accuracy and sophistication of each area of study. First, in order to handle spoken dialogue, we show the necessity of information exchange between the processing areas, as seen through an analysis of spoken dialogue characteristics. Second, we discuss how to integrate those processes and show that the memory-based approach to spontaneous speech interpretation offers a solution to the problem of process integration. The key to this is setting up a mental state affected by both speech and linguistic information. Finally, we discuss how those mental states are structured and a method for constructing them.
The future trends of optical technologies combined with LSI are reviewed. Present problems of LSI, and possible solutions to these problems through the merger of optical technology into LSI, are discussed. One present trend in interconnection between LSI components is the time-serial approach, originally developed for optical communications. This method is capable of high-speed data transfer. The other is the space-parallel approach, arising from the two-dimensional nature of light propagation. This approach has the capability of performing parallel processing. A hybrid OEIC, possibly on GaAs, is discussed as an example of a future photonic LSI. The lack of key devices is a fundamental barrier to the future improvement of photonic LSI. Direct interaction between photons and electrons is a promising approach. Some of the authors' ideas to promote the merger of photonics and LSI are proposed.
Masaaki NAGATA Tsuyoshi MORIMOTO
A unification-based Japanese parser has been implemented for an experimental Japanese-to-English spoken language translation system (SL-TRANS). The parser consists of a unification-based spoken-style Japanese grammar and an active chart parser. The grammar handles syntactic, semantic, and pragmatic constraints in an integrated fashion within an HPSG-based framework in order to cope with speech recognition errors. The parser takes multiple sentential candidates from the HMM-LR speech recognizer and produces a semantic representation associated with the best-scoring parse, based on acoustic and linguistic plausibility. The unification-based parser has been tested using 12 dialogues in the conference registration domain, comprising 261 sentences uttered by one male speaker. The sentence recognition accuracy of the underlying speech recognizer is 73.6% for the top candidate and 83.5% for the top three candidates, where the test-set perplexity of the CFG grammar is 65. By ruling out erroneous speech recognition results using various linguistic constraints, the parser improves the sentence recognition accuracy to 81.6% for the top candidate and 85.8% for the top three candidates. From the experimental results, we found that the combination of syntactic restrictions, selectional restrictions, and coordinate structure restrictions is sufficient to rule out recognition errors between case-marking particles with the same vowel, which are the most frequent type of error. However, we also found that pragmatic information, such as topic, presupposition, and discourse structure, is necessary to rule out recognition errors involving topicalizing particles and sentence-final particles.
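The rescoring idea, filtering recognizer candidates by linguistic constraints and then taking the best surviving acoustic score, can be illustrated with a toy Python sketch. The lexicon, case frame, candidate sentences, and scores below are invented for illustration; the actual system applies far richer HPSG-based unification, not this single selectional check.

```python
# Semantic categories of nouns (hypothetical mini-lexicon).
CATEGORY = {"watashi": "human", "kaigi": "event"}

# Case frame of the verb "sanka-suru" (to attend), assumed here:
# subject particle "ga" requires a human, target particle "ni" an event.
CASE_FRAME = {"ga": "human", "ni": "event"}

def satisfies_constraints(cases):
    # Selectional restriction: every (particle, noun) pair must be
    # licensed by the verb's case frame.
    return all(CATEGORY.get(noun) == CASE_FRAME.get(particle)
               for particle, noun in cases)

def rescore(nbest):
    # Drop candidates violating the linguistic constraints, then
    # return the survivor with the best acoustic score.
    valid = [c for c in nbest if satisfies_constraints(c["cases"])]
    return max(valid, key=lambda c: c["score"]) if valid else None

# The recognizer's acoustically best candidate has swapped the case
# particles; linguistic filtering recovers the correct second one.
nbest = [
    {"text": "watashi ni kaigi ga sanka-suru", "score": -10.0,
     "cases": [("ni", "watashi"), ("ga", "kaigi")]},
    {"text": "watashi ga kaigi ni sanka-suru", "score": -12.0,
     "cases": [("ga", "watashi"), ("ni", "kaigi")]},
]
best = rescore(nbest)
```

The point of the sketch is only the control flow: constraints prune, acoustics rank what remains.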
Yasuhisa SATO Rinshi SUGINO Masaki OKUNO Toshiro NAKANISHI Takashi ITO
Breakdown fields and the charges to breakdown (QBD) of oxides increased after UV/Cl2 pre-oxidation cleaning. This is due to a decrease in the residual metal contaminants on silicon surfaces at the bottom of the LOCOS region remaining after wet cleaning. Treatment in NH4OH, H2O2, and H2O prior to UV/Cl2 cleaning suppressed the increase in surface roughness and kept leakage currents through the oxides after UV/Cl2 cleaning as low as those after wet cleaning alone. The large junction leakage currents caused by metal contaminants introduced during dry etching decreased after UV/Cl2 cleaning, which removes the contaminated layer.
This paper, written for LSI engineers, demonstrates the effect of optical interconnections in LSIs on improving both the speed and power performance of 0.5- and 0.2-µm CMOS microprocessors. The feasibility of, and problems regarding, new micron-size optoelectronic devices as well as the associated electronics are discussed. Actual circuit structures, clocks, and bus lines used for optical interconnection are discussed. Newly designed optical interconnections and their speed and power performance are compared with those of the original electrical interconnection systems.
Recent progress in high-speed semiconductor devices and integrated circuits (ICs) has outpaced conventional measuring and testing instruments. With the advent of ultrashort-pulse laser technology, the electro-optic sampling (EOS) technique based on the Pockels effect has become the most promising way of overcoming the frequency limit, with a bandwidth approaching a terahertz. This paper reviews recent progress in research on EOS techniques for measuring ultrahigh-speed electronic devices and ICs. It describes both the principle of EOS and the key technologies used for noncontact probing of ICs. Internal-node measurements of state-of-the-art high-speed ICs are also presented.
Fumihiko UESUGI Iwao NISHIYAMA
A new direct projection patterning technique for aluminum using synchrotron radiation (SR) is proposed. It is based on the thermal-reaction control effect of SR excitation. On the Si surface, pure thermal growth is possible at 200°C; however, this growth is completely suppressed by SR irradiation. On the other hand, Al growth on the SiO2 surface is thermally impossible at the same temperature; however, SR initiates the thermal reaction. Both new effects of SR, suppression and initiation, are shown to be caused by atomically thin layers formed from the CVD gases on the surfaces by SR excitation. Using these effects, direct inverse and normal projection patterning of Al is successfully demonstrated.
Tsuneo TERASAWA Shinji OKAZAKI
Fabrication of 0.2- to 0.3-µm features is vital for future ultralarge scale integration devices. An area of particular concern is whether optical lithography can delineate such feature sizes, i.e., less than the exposure wavelength. The use of a phase-shift mask is one of the most effective means of improving resolution in optical lithography. This technology basically makes use of the interference between light transmitted through adjacent apertures of the mask. Various types of phase-shift masks and their imaging characteristics are discussed and compared with conventional transmission masks. To apply these masks effectively to practical patterns, a phase-shifter pattern design tool and a mask repair method must be established. Phase-shifting technology offers the potential to fabricate 0.3-µm features using the current i-line stepper, and 0.2-µm features using an excimer laser stepper.
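The interference principle can be illustrated with a toy one-dimensional model. The Gaussian aperture field below is an assumption made purely for this sketch, not a rigorous diffraction calculation: with a conventional mask the fields from two adjacent apertures add between them, washing out the gap, while a phase shifter reverses the sign of one field so the two fields cancel there, deepening the intensity dip that separates the features.

```python
import math

def aperture_field(x, center, width):
    # Gaussian approximation of the diffraction-blurred field
    # amplitude from a single mask aperture (illustrative only).
    return math.exp(-((x - center) / width) ** 2)

def intensity(x, phase_shift):
    # Two adjacent apertures at x = -1 and x = +1. With a phase
    # shifter, the second aperture's field has opposite sign, so
    # the amplitudes cancel between the apertures instead of adding.
    sign = -1.0 if phase_shift else 1.0
    amp = aperture_field(x, -1.0, 0.9) + sign * aperture_field(x, 1.0, 0.9)
    return amp * amp   # intensity is |amplitude|^2

# Intensity at the midpoint between the two apertures:
dip_conventional = intensity(0.0, phase_shift=False)
dip_shifted = intensity(0.0, phase_shift=True)
```

At the midpoint, the shifted configuration gives zero intensity by symmetry, whereas the conventional mask leaves a bright bridge between the two features; this contrast improvement is what the phase shifter buys.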
Kiyomichi ARAKI Masayuki TAKADA Masakatu MORII
In this paper, a recursive form of the Welch-Berlekamp (W-B) algorithm is presented, yielding a novel and fast decoding algorithm.
High-performance, 2-million-pixel solid-state image sensors for High-Definition Television (HDTV), applicable to the 1-inch optical format, have been realized. The key technical aspects of HDTV image sensors are suppression of the smear level while maintaining a large optical aperture, and a high readout signal rate achieved by introducing a dual-channel horizontal register. From this perspective, new HDTV image sensors such as the Stack CCD, the Frame-Interline-Transfer (FIT) CCD, and the Charge Modulation Device (CMD) have been developed.
Kohji HOHKAWA Shinji MATSUOKA Kazuo HAGIMOTO Kiyoshi NAKAGAWA
Optical fiber transmission systems have advanced rapidly with the advent of highly advanced electronic and optical devices. This paper introduces several IC technologies required for ultra-high-speed optical transmission and overviews current IC technologies used for the existing and developing optical fiber trunk transmission systems. Future trends in device technologies are also discussed.
In 1990, Menezes, Okamoto and Vanstone proposed a method that reduces EDLP (the elliptic curve discrete logarithm problem) to DLP, which had a significant impact on the security of cryptosystems based on EDLP. However, this reduction is valid only when the Weil pairing can be defined over the m-torsion group that includes the base point of the EDLP. If an elliptic curve is ordinary, there exist EDLP instances to which the reduction cannot be applied. In this paper, we investigate the conditions under which this reduction is invalid.
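The condition can be made concrete via the embedding degree: for a base point of prime order n on a curve over GF(q), the Menezes-Okamoto-Vanstone reduction maps the EDLP into the DLP in GF(q^k)*, where k is the smallest integer with n | q^k - 1. The reduction is only useful when k is small (as for supersingular curves); for ordinary curves k is typically far too large. A minimal sketch for computing k, i.e., the multiplicative order of q modulo n:

```python
def embedding_degree(n, q, k_max=10000):
    # Smallest k with n | q^k - 1, i.e., the order of q modulo n.
    # The MOV reduction targets the DLP in GF(q^k)*, so a large k
    # (common for ordinary curves) makes the reduction infeasible.
    acc = 1
    for k in range(1, k_max + 1):
        acc = (acc * q) % n
        if acc == 1:
            return k
    return None   # order exceeds k_max: reduction clearly impractical
```

For example, if n divides q + 1 (the supersingular case over a prime field), then q ≡ -1 (mod n) and the function returns k = 2, the setting in which the reduction is most effective.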
Reducing the illumination wavelength used for exposure leads to higher resolution while maintaining the depth of focus. Thus, KrF excimer laser lithography has been positioned as the next-generation lithography technology after g/i-line optical lithography, and many studies have been conducted. In its early days, excimer laser lithography had many inherent problems, such as inadequate reliability, difficult maintenance, high operating cost, and the low resolution and sensitivity of resist materials. However, the performance of the excimer laser stepper has improved and chemically amplified resists have been developed over the past decade. At present, KrF excimer lithography has reached the level of trial manufacturing of deep-submicron ULSI devices beyond 64-Mbit DRAMs.