
Keyword Search Result

[Keyword] ATI (18690 hits)

Results 17181-17200 of 18690

  • Signal Dependent Time-Frequency and Time-Scale Signal Representations Designed Using the Radon Transform

    Branko RISTIC  Boualem BOASHASH  

     
    PAPER
    Vol: E78-A No:9  Page(s): 1170-1177

    Time-frequency representations (TFRs) have been developed as tools for the analysis of non-stationary signals. Signal dependent TFRs are known to perform well for a much wider range of signals than any fixed (signal independent) TFR. This paper describes customised and sequential versions of the signal dependent TFR proposed in [1]. The method, which is based on the use of the Radon transform at distance zero in the ambiguity domain, is simple and effective in dealing with both simulated and real data. The use of the described method for time-scale analysis is also presented. In addition, the paper investigates a simple technique for the detection of noisy chirp signals using the Radon transform in the ambiguity domain.

  • Scattering of Electromagnetic Plane Waves by a Perfectly Conducting Wedge: The Case of E Polarization

    Michinari SHIMODA  Tokuya ITAKURA  Yuko YAMADA  

     
    PAPER-Electromagnetic Theory
    Vol: E78-C No:9  Page(s): 1298-1305

    The two-dimensional scattering problem of electromagnetic waves by a perfectly conducting wedge is analyzed by means of the Wiener-Hopf technique together with a formulation using the partition of scatterers. The Wiener-Hopf equations are derived on two complex planes. By investigating the mapping between these complex planes and introducing appropriate functions which satisfy the edge condition of the wedge, the solutions of these equations are obtained by a decomposition procedure. By deforming the integration path of the inverse Fourier transform, it is found that the representation of the scattered wave agrees with the integral representation using the Sommerfeld contours.

  • On p-Ary Bent Sequences

    Shinya MATSUFUJI  Kyoki IMAMURA  

     
    LETTER-Information Theory and Coding Theory
    Vol: E78-A No:9  Page(s): 1257-1260

    It is known that a family of p-ary bent sequences, whose elements take values in GF(p) with p a prime, possesses low periodic correlation properties and high linear span. First, such a family is shown to consist of balanced sequences, in the sense that over one period each nonzero element appears equally often and the zero element appears exactly once less. Second, the exact distribution of the periodic correlation values is given for the family.
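    A minimal sketch of the balance property stated above, assuming one period of a p-ary sequence is given as a Python list of integers in GF(p); the helper name and the toy GF(3) test sequence are illustrative, not taken from the letter.

        from collections import Counter

        def is_balanced_period(seq, p):
            """Check the balance property described in the abstract: over one
            period, every nonzero element of GF(p) appears equally often and
            the zero element appears exactly once less."""
            counts = Counter(seq)
            nonzero = [counts.get(a, 0) for a in range(1, p)]
            return len(set(nonzero)) == 1 and counts.get(0, 0) == nonzero[0] - 1

        # Toy period of length 3^2 - 1 = 8 over GF(3): 1 and 2 each appear 3 times,
        # 0 appears 2 times, so the property holds.
        print(is_balanced_period([1, 1, 1, 2, 2, 2, 0, 0], 3))   # True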

  • Optical Information Processing by Synthesis of the Coherence Function--Photonic/Video Hybrid System--

    Toru OKUGAWA  Kazuo HOTATE  

     
    PAPER-Opto-Electronics
    Vol: E78-C No:9  Page(s): 1286-1291

    A photonic/video hybrid system for optical information processing by synthesis of the coherence function is proposed. The optical coherence function can be synthesized to have a delta-function-like shape or a notch shape by applying direct frequency modulation to a laser diode with an appropriate waveform. Therefore, by choosing only the interference component in the interferometer, information processing functions can be obtained. The proposed photonic/video hybrid system provides a novel way to choose the interference component, which improves the spatial resolution compared with our previous system based on a holographic technique. Selective extraction of two-dimensional (2-D) information from a three-dimensional (3-D) object is successfully demonstrated in basic experiments.

  • Strategies of Channel Allocation in Developing Radio Networks with Intersite and Cosite Constraints

    Vladimir LYANDRES  

     
    LETTER-Mobile Communication
    Vol: E78-B No:9  Page(s): 1344-1347

    The influence of cochannel, adjacent channel and intermodulation constraints on the capacity of the frequency band in the dynamic channel allocation problem is estimated. Algorithms including a backtracking phase with partial reassignment of currently assigned requirements are proposed. Numerical examples show a strong possibility of a 20% capacity improvement compared to the conventional strategies.
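    As a rough illustration of constrained channel assignment, the sketch below applies plain depth-first backtracking with hypothetical cosite and intersite minimum channel separations; the backtracking-with-partial-reassignment strategy of the letter is different, and intermodulation constraints are omitted for brevity.

        def feasible(ch, site, assignment, cosite_sep, intersite_sep):
            # A channel is feasible if its distance to every already assigned
            # channel meets the required cosite or intersite separation.
            for s, c in assignment:
                sep = cosite_sep if s == site else intersite_sep
                if abs(ch - c) < sep:
                    return False
            return True

        def assign_channels(requests, n_channels, cosite_sep=3, intersite_sep=1):
            """requests: one site identifier per required channel."""
            assignment = []

            def solve(i):
                if i == len(requests):
                    return True
                for ch in range(n_channels):
                    if feasible(ch, requests[i], assignment, cosite_sep, intersite_sep):
                        assignment.append((requests[i], ch))
                        if solve(i + 1):
                            return True
                        assignment.pop()     # backtrack and try the next channel
                return False

            return assignment if solve(0) else None

        # Toy example: three channels needed at each of two sites, twelve channels available.
        print(assign_channels(["A", "A", "A", "B", "B", "B"], n_channels=12))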

  • Harmonics Estimation Based on Instantaneous Frequency and Its Application to Pitch Determination of Speech

    Toshihiko ABE  Takao KOBAYASHI  Satoshi IMAI  

     
    PAPER-Speech Processing and Acoustics
    Vol: E78-D No:9  Page(s): 1188-1194

    This paper proposes a technique for estimating the harmonic frequencies of speech signals based on instantaneous frequency (IF). The main problem is how to decompose the speech signal into its harmonic components. For this purpose, we use a set of band-pass filters, each of whose center frequencies changes with time in order to track the instantaneous frequency of its output. As a result, the outputs of the band-pass filters become the harmonic components, and the instantaneous frequencies of the harmonics are accurately estimated. To evaluate the effectiveness of the approach, we apply it to pitch determination of speech. Pitch determination is accomplished simply by selecting the correct fundamental frequency from the harmonic components. It is confirmed that pitch extraction using the proposed pitch determination algorithm (PDA) is stable and accurate. The most significant feature of the PDA is that the extracted pitch contour is smooth and requires no post-processing such as nonlinear filtering or other smoothing. Several examples are presented to demonstrate the capability of the harmonics estimation technique and the PDA.
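    A simplified sketch of instantaneous-frequency estimation for one band-pass-filtered component via the analytic signal; the adaptive tracking of each filter's center frequency and the harmonic selection logic of the paper are not reproduced, and the filter band below is an assumed toy value.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def instantaneous_frequency(x, fs, band):
            """Band-pass filter x around `band` (Hz) and return the instantaneous
            frequency of the filtered component from the analytic-signal phase."""
            sos = butter(4, band, btype="band", fs=fs, output="sos")
            y = sosfiltfilt(sos, x)
            phase = np.unwrap(np.angle(hilbert(y)))
            return np.diff(phase) * fs / (2.0 * np.pi)      # Hz, length len(x) - 1

        # Toy usage: a 120 Hz tone sampled at 8 kHz.
        fs = 8000
        t = np.arange(0, 0.5, 1 / fs)
        x = np.sin(2 * np.pi * 120 * t)
        f_inst = instantaneous_frequency(x, fs, band=(80, 160))
        print(f_inst[len(f_inst) // 2])                      # close to 120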

  • Phase Ambiguity Resolver for PCM Sound Broadcasting Satellite Service with Low Power Consumption Viterbi Decoder Employing SST Scheme

    Kazuhiko SEKI  Shuji KUBOTA  Shuzo KATO  

     
    PAPER-Communication Systems and Transmission Equipment
    Vol: E78-B No:9  Page(s): 1269-1277

    This paper proposes a novel phase ambiguity resolver combined with a very low power Viterbi decoder employing a scarce state transition (SST) scheme, to realize cost-effective receivers for the PCM sound broadcasting satellite service. Theoretical analyses of the phase decision performance show that the proposed resolver achieves symbol-by-symbol phase detection and correctly decides the phases of the demodulated data even at a bit error probability of 7×10^-2. The resolver also reduces the phase decision time to below 1/1000 of that of the conventional resolver. Furthermore, experimental results indicate that the prototype Viterbi decoder consumes only 60 mW at a data rate of 24.576 Mbit/s.

  • Concepts and Methodologies for Knowledge-Based Program Understanding--The ALPUS's Approach--

    Haruki UENO  

     
    PAPER-Methodologies
    Vol: E78-D No:9  Page(s): 1108-1117

    The background concepts and methodologies of the knowledge-based program understander ALPUS are discussed. ALPUS understands a user's buggy Pascal programs using four kinds of programming knowledge: knowledge of algorithms, programming techniques, the Pascal language, and logical bugs. The knowledge of algorithms, the key knowledge, is represented in the form of a hierarchical data structure called the Hierarchical Procedure Graph (HPG). In the HPG, each node represents a chunk of operations called a "process," which consists of sub-processes. The other knowledge is maintained in independent knowledge bases and linked to the associated processes of the HPG. The knowledge about bugs, acquired through cognitive experiments, is grouped into three categories: bugs concerning algorithms, programming techniques, and the Pascal language, and is connected to the associated elements of programming knowledge. ALPUS tries to understand the user's buggy programs, detects logical bugs, infers the user's intentions, and gives advice for fixing the bugs. Program understanding is achieved in three steps: normalization, variable identification, and process and technique identification. Normalization improves the flexibility of understanding. Variable, process and technique identification are achieved by knowledge-based pattern matching. Intentions are inferred by means of information attached to buggy patterns. The result of comprehension is reported to the user (i.e., the student). Experimental results using Quicksort programs written by students show that the HPG formalism is quite powerful for understanding algorithm-oriented programs. ALPUS's way of program comprehension is useful for programming education in an intermediate class of an engineering school. The ALPUS system is a subsystem of the intelligent programming environment INTELLITUTOR for learning programming, which was implemented in the frame-based knowledge engineering environment ZERO on a UNIX workstation.
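    A tiny data-structure sketch of the hierarchical "process" nodes described above; the field names and the Quicksort example content are assumptions made for illustration, not ALPUS's actual representation.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Process:
            """One HPG node: a named chunk of operations made of sub-processes,
            with links into the other knowledge bases (techniques, bug patterns)."""
            name: str
            sub_processes: List["Process"] = field(default_factory=list)
            techniques: List[str] = field(default_factory=list)
            bug_patterns: List[str] = field(default_factory=list)

        quicksort = Process("quicksort", sub_processes=[
            Process("partition", techniques=["element swap via temporary variable"],
                    bug_patterns=["off-by-one pivot index"]),
            Process("recursive calls on sub-ranges"),
        ])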

  • A Software Project Management System Using an Object Oriented Database--Integration of Process Management System and Quality Management System--

    Seiichi KOMIYA  Atsuo HAZEYAMA  

     
    PAPER-Support Systems
    Vol: E78-D No:9  Page(s): 1142-1149

    There are three viewpoints involved in software project management: process management, quality management and cost management. Software projects must be managed on the basis of all three viewpoints. In many cases, however, process management, quality management and cost management systems are built separately as individual systems, and software project management systems in which these three functions are integrated have rarely been constructed. Therefore, in order to construct a system integrating these functions, the authors first clarify the significance of integrating application systems, and then present the structure of a software project management system in which the process management, quality management and cost management systems are integrated by using an object-oriented database.

  • A Constructing Method of Functional Model by Integrated Learning from Examples of Software Modification

    Hiroyuki YAMADA  Tetsuo KOBASHI  Tsunehiro AIBARA  

     
    PAPER-Models
    Vol: E78-D No:9  Page(s): 1133-1141

    One approach to developing software efficiently is to reuse existing software by modifying a part of it. However, modifying software often introduces unexpected side effects into other parts of it, so modification takes much time and care. In order to modify software efficiently, we have therefore proposed a functional model that represents information about side effects caused by modification, together with a model-based support system for modifying software. So far, however, an expert software developer has had to describe the entire functional model of the target software through the analysis of practical modification processes. This places an unnecessary burden on the developer, and the larger the target software becomes, the harder the model construction becomes. An automatic method of constructing the functional model is therefore needed. This paper considers a method of acquiring useful interaction information by learning from training examples of modification. In our application domain, however, it appears impossible to build a complete domain theory and to prepare a large number of training examples in advance. Our learning method therefore integrates explanation-based learning (EBL) from positive examples of modification generated by the user with similarity-based learning (SBL) from positive or negative examples generated by the user and the learning system. As a result, our method can acquire valid knowledge about the interaction from relatively few examples under an incomplete theory. This paper then presents a method of constructing the functional model that incorporates the proposed learning method. Finally, the paper demonstrates construction of the functional model in the domain of an event-driven queueing simulation program using our learning method.

  • A Computer Supported System of Meetings Using a Model of Inter-Personal Communication

    Tomofumi UETAKE  Morio NAGATA  

     
    PAPER-Models
    Vol: E78-D No:9  Page(s): 1127-1132

    Information systems that support cooperative work among people should first of all be designed to help human communication. However, few systems are based on an analysis of human communication. Given this situation, we propose a meeting support system that helps the participants' understanding by presenting suitable information about the topic of the current scene. Our system provides only useful information by monitoring each statement, without complex methods. To present useful multimedia information to the members, we propose the following structure of a meeting, based on an analysis of communication. Each statement is classified into one of two levels: a statement about the progress of the meeting (a context-level utterance) or a statement about objects (a content-level utterance). Content-level utterances are further classified into two types, position utterances and argument utterances. Using this classification of statements, the proceedings of the meeting are represented as a tree model called a "context tree." If the structure of meetings is fixed in this way, it is possible to select, by analyzing each content-level utterance, only the information useful to the members from all shared information. The system introduced in this paper shows appropriate multimedia information about the topic of the scene by using the above model. We have implemented a prototype system based on these ideas and have conducted experiments to show its effectiveness. The results show that our method is effective in improving the productivity of meetings.
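    A small data-structure sketch of the two-level utterance classification and the context tree described above; the class and field names are assumptions introduced here for illustration only.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Utterance:
            speaker: str
            text: str
            level: str      # "context" (progress of the meeting) or "content"
            kind: str = ""  # for content-level utterances: "position" or "argument"

        @dataclass
        class TopicNode:
            """One node of the context tree: a topic opened by a context-level
            utterance, the content-level utterances made under it, and sub-topics."""
            opened_by: Utterance
            statements: List[Utterance] = field(default_factory=list)
            children: List["TopicNode"] = field(default_factory=list)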

  • Multisegment Multiple VQ Codebooks-Based Speaker Independent Isolated-Word Recognition Using Unbiased Mel Cepstrum

    Liang ZHOU  Satoshi IMAI  

     
    PAPER-Speech Processing and Acoustics
    Vol: E78-D No:9  Page(s): 1178-1187

    In this paper, we propose a new approach to speaker independent isolated-word speech recognition using multisegment multiple vector quantization (VQ) codebooks. In this approach, words are recognized by means of multisegment multiple VQ codebooks: a separate set of multisegment multiple VQ codebooks is designed for each word in the recognition vocabulary by dividing the word equally into multiple segments, the number of which correlates with the number of syllables or phonemes of the word, and by designing for each segment two individual VQ codebooks covering instantaneous and transitional speech features. Using this approach, the influence of within-word coarticulation can be minimized, the time-sequence information of speech can be used, and differences in word length across the vocabulary or variations in speaking rate are adapted to automatically. Moreover, mel-cepstral coefficients based on the unbiased estimation of log spectrum (UELS) are used, and a comparison experiment with LPC-derived mel-cepstral coefficients is made. Recognition experiments using test databases consisting of 100 Japanese words (Waseda database) and 216 phonetically balanced words (ATR database) confirmed the effectiveness of the new method and the new speech features. The approach is described, its computational complexity and memory requirements are analyzed, and the experimental results are presented.
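    A rough sketch of the segment-wise codebook idea under simplifying assumptions: each word's training frames are split into a fixed number of equal segments, one small VQ codebook is trained per segment (here with k-means and a single feature stream, whereas the paper uses separate instantaneous and transitional codebooks), and recognition picks the word whose codebooks give the lowest total quantization distortion.

        import numpy as np
        from scipy.cluster.vq import kmeans2, vq

        def split_segments(frames, n_seg):
            """Divide a (T, D) frame matrix into n_seg roughly equal segments."""
            return np.array_split(frames, n_seg)

        def train_word_codebooks(utterances, n_seg=4, cb_size=8):
            """One codebook per segment, trained on the pooled frames of that
            segment over all training utterances of the word."""
            books = []
            for s in range(n_seg):
                pooled = np.vstack([split_segments(u, n_seg)[s] for u in utterances])
                centroids, _ = kmeans2(pooled, cb_size, minit="++")
                books.append(centroids)
            return books

        def word_distortion(frames, books):
            """Total quantization distortion of an utterance against one word's codebooks."""
            return sum(vq(seg, cb)[1].sum()
                       for seg, cb in zip(split_segments(frames, len(books)), books))

        def recognize(frames, models):
            """models: dict mapping each word to its list of per-segment codebooks."""
            return min(models, key=lambda w: word_distortion(frames, models[w]))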

  • A Class of Error Locating Codes--SECSe/bEL Codes--

    Masato KITAKAMI  Eiji FUJIWARA  

     
    PAPER
    Vol: E78-A No:9  Page(s): 1086-1091

    This paper proposes a new class of error locating codes, called SECSe/bEL codes, which correct random single-bit errors and indicate the location of an erroneous b-bit byte containing e-bit errors, where 2 ≤ e ≤ b. This type of code is very suitable for application to memory systems constructed from byte-organized memory chips, because it corrects random single-bit errors induced by soft errors and also indicates the position of a faulty memory chip. The paper also gives a construction method for the proposed codes using the tensor product of two codes, i.e., single b-bit byte error correcting codes and single-bit error correcting and e-bit error detecting codes, and clarifies lower bounds and error control capabilities of the proposed codes.
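    As a generic illustration of a tensor-product check-matrix construction over GF(2) (the Kronecker product of two component parity-check matrices), with toy matrices that do not reproduce the SECSe/bEL parameters of the paper:

        import numpy as np

        # Toy component parity-check matrices over GF(2) (illustrative only).
        H1 = np.array([[1, 0, 1, 1],
                       [0, 1, 1, 0]])          # 2 x 4
        H2 = np.array([[1, 1, 0],
                       [0, 1, 1]])             # 2 x 3

        # Tensor (Kronecker) product check matrix over GF(2): 4 x 12.
        H = np.kron(H1, H2) % 2

        def syndrome(check, word):
            """Syndrome of a received word under the check matrix, over GF(2)."""
            return (check @ word) % 2

        received = np.zeros(H.shape[1], dtype=int)
        received[5] = 1                          # inject a single-bit error
        print(syndrome(H, received))             # nonzero syndrome: error detected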

  • An Improved Union Bound on Block Error Probability for Closest Coset Decoding

    Kenichi TOMITA  Toyoo TAKATA  Tadao KASAMI  Shu LIN  

     
    PAPER
    Vol: E78-A No:9  Page(s): 1077-1085

    This paper is concerned with the evaluation of the block error probability Pic of a block modulation code under closest coset decoding over an AWGN channel. In most cases, the conventional union bound on Pic for closest coset decoding is loose not only at low signal-to-noise ratios but also at relatively high signal-to-noise ratios. In this paper, we introduce a new upper bound on the probability of a union of events by using graph theory, and we derive an improved upper bound on Pic for some block modulation codes under closest coset decoding over an AWGN channel. We show that the new bound is better than the conventional union bound, especially at relatively high signal-to-noise ratios.
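    For orientation, a sketch of the kind of graph-based tightening of the plain union bound that the abstract refers to; the bound actually derived in the paper may differ, but the classical spanning-tree (Hunter-type) inequality has the form

        \[
          P\!\left(\bigcup_{i=1}^{N} A_i\right) \;\le\; \sum_{i=1}^{N} P(A_i)
          \;-\; \max_{T}\, \sum_{(i,j)\in T} P(A_i \cap A_j),
        \]

    where the maximum is taken over spanning trees T of the complete graph on the event indices. Subtracting the pairwise-intersection terms along a tree is what tightens the bound when the error events overlap substantially.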

  • Coding Theorems on Correlated General Sources

    Shigeki MIYAKE  Fumio KANAYA  

     
    PAPER
    Vol: E78-A No:9  Page(s): 1063-1070

    Slepian, Wolf and Wyner proved famous source coding theorems for correlated i.i.d. sources. More recently, Han and Verdú have shown source and channel coding theorems for general sources and channels whose statistics can be arbitrary, that is, with no assumption such as stationarity or ergodicity. We prove source coding theorems for correlated general sources by using the method which Han and Verdú developed to prove their theorems. Also, through an example, we show some new results which are essentially different from those already obtained for the i.i.d. source case.

  • Device Figure-of-Merits for High-Speed Digital ICs and Baseband Amplifiers

    Eiichi SANO  Yutaka MATSUOKA  Tadao ISHIBASHI  

     
    PAPER
    Vol: E78-C No:9  Page(s): 1182-1188

    Device figure-of-merits for digital ICs are derived from analytical delay expressions for emitter-coupled logic and source-coupled FET logic inverters and are compared with the operating speeds of D-F/Fs reported in previous studies. We show that the device figure-of-merits for baseband amplifiers are equivalent to those for digital ICs. The validity of the device figure-of-merits is confirmed by measuring the bandwidth of baseband amplifiers fabricated with AlGaAs/GaAs LBCTs.

  • Importance Sampling for TCM Scheme over Non-Gaussian Noise Channel

    Takakazu SAKAI  Haruo OGIWARA  

     
    PAPER
    Vol: E78-A No:9  Page(s): 1109-1116

    When the bit error probability of a trellis-coded modulation (TCM) scheme becomes very small, it is almost impossible to evaluate it by an ordinary Monte-Carlo simulation method. Importance sampling is a technique for reducing the number of simulation samples required; the reduction is attained by modifying the noise so that errors occur more often. A low error rate can thus be estimated effectively by applying importance sampling. Each simulation run simulates a single error event, and importance sampling is used to make the error events more frequent. The main problem is how to design the probability density function of the noise used in the simulation, and the previous design method is not suitable for a TCM scheme on an additive non-Gaussian noise channel. We propose a new design method for the simulation probability density function based on the Bhattacharyya bound. It reduces to the same simulation probability density function as the previous method when the noise is additive white Gaussian. Using the proposed method on an additive non-Gaussian noise channel, the simulation time is reduced to about 1/170 of that of the ordinary Monte-Carlo method at a bit error rate of 10^-6 if the overhead of calculating the Bhattacharyya bound is ignored. Under the same conditions, the simulation time with the proposed method is 1/65 of that of the ordinary Monte-Carlo method even when the overhead for importance sampling is taken into account.
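    A small generic sketch of importance sampling for a rare error probability, unrelated to the Bhattacharyya-bound-based density designed in the paper: the noise is drawn from a shifted (biased) density that produces errors more often, and each sample is weighted by the likelihood ratio so the estimate remains unbiased.

        import numpy as np

        def tail_probability_is(threshold=5.0, shift=5.0, n=200_000, seed=0):
            """Estimate P(N > threshold) for N ~ Normal(0, 1) by sampling from
            the biased density Normal(shift, 1) and weighting each sample by the
            likelihood ratio f(x)/g(x) = exp(-shift*x + shift**2 / 2)."""
            rng = np.random.default_rng(seed)
            x = rng.normal(loc=shift, scale=1.0, size=n)       # biased samples
            weights = np.exp(-shift * x + 0.5 * shift**2)      # f(x) / g(x)
            return np.mean((x > threshold) * weights)

        # The true value is Q(5), roughly 2.9e-7; a plain Monte-Carlo run of the
        # same size would typically observe no exceedances at all.
        print(tail_probability_is())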

  • A New Approach to Constructing a Provably Secure Variant of Schnorr's Identification Scheme

    Satoshi HADA  Hatsukazu TANAKA  

     
    PAPER
    Vol: E78-A No:9  Page(s): 1154-1159

    Schnorr's identification scheme is the most efficient and simplest scheme based on the discrete logarithm problem. Unfortunately, Schnorr's scheme is not provably secure, i.e., its security has not been proven to be reducible to well-defined intractable problems. Two works have already succeeded in constructing provably secure variants of Schnorr's scheme. They were constructed with a common approach, i.e., by modifying the formula used to compute the public key so that each public key has multiple secret keys. These multiple secret keys seem to be essential for their provable security, but they also impose a penalty on efficiency. In this paper, we describe a new approach to constructing a provably secure variant in which we never modify the formula, and we show that with our approach we can construct a new, efficient, provably secure scheme.
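    For context, a toy sketch of the original (unmodified) Schnorr identification protocol, with insecure demonstration-sized parameters; the provably secure variant proposed in the paper is not reproduced here.

        import secrets

        # Toy parameters (far too small for real use): q divides p - 1 and
        # g = 2 has order q in the multiplicative group mod p.
        p, q, g = 23, 11, 2

        # Prover's key pair: secret x, public y = g^(-x) mod p.
        x = secrets.randbelow(q)
        y = pow(g, -x, p)                        # modular inverse power (Python 3.8+)

        # 1. Commitment: prover picks random r and sends t = g^r mod p.
        r = secrets.randbelow(q)
        t = pow(g, r, p)

        # 2. Challenge: verifier sends a random c (drawn from a small range
        #    in Schnorr's original description).
        c = secrets.randbelow(q)

        # 3. Response: prover sends s = r + c*x mod q.
        s = (r + c * x) % q

        # 4. Verification: accept iff g^s * y^c == t (mod p).
        assert (pow(g, s, p) * pow(y, c, p)) % p == t
        print("accepted")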

  • A Proposal of Multiple Optical Wideband Frequency Modulation System and Its Phase Noise Insensitivity

    Toshiaki KURI  Katsutoshi TSUKAMOTO  Norihiko MORINAGA  

     
    PAPER
    Vol: E78-A No:9  Page(s): 1136-1141

    This paper proposes a multiple optical wideband frequency modulation system and clarifies its insensitivity to phase noise. In this system, an optical carrier is phase-modulated by a conventional FM signal to generate many sidebands in the optical frequency band. The n-th order sideband component is also an FM signal, with a frequency deviation n times that of the original FM signal. Therefore, by selecting a high-order optical sideband, a wideband optical FM signal can be obtained. Moreover, if several sidebands are simultaneously extracted and multiplied at the receiver, a wideband FM signal with an even larger frequency deviation and no laser phase noise can be obtained, and FM threshold extension can be realized.
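    A brief sketch of why the n-th sideband carries n times the frequency deviation, using the Jacobi-Anger expansion; the notation below (modulation index m, FM phase psi(t)) is introduced here and is not the paper's.

        Let the electrical FM signal be $\cos\psi(t)$ with instantaneous phase
        $\psi(t) = \omega_{\mathrm{sc}} t + \Delta\omega \int^{t} x(\tau)\,d\tau$.
        Phase modulation of the optical carrier $\omega_0$ by this signal with index $m$ gives
        \[
          E(t) = E_0 \exp\!\bigl(j[\omega_0 t + m\cos\psi(t)]\bigr)
               = E_0 \sum_{n=-\infty}^{\infty} j^{\,n} J_n(m)\,
                 \exp\!\bigl(j[\omega_0 t + n\,\psi(t)]\bigr),
        \]
        so the $n$-th optical sideband has instantaneous phase $\omega_0 t + n\,\psi(t)$,
        i.e. it is an FM signal whose frequency deviation is $n\,\Delta\omega$.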

  • Throughput Analysis of Slotted Non-persistent and One-persistent CSSS/OD (Carrier Sense Spread Spectrum with Overload Detection) Protocols

    Francis N. MUMBA  Shinji TSUZUKI  Yoshio YAMADA  Saburo TAZAKI  

     
    LETTER
    Vol: E78-A No:9  Page(s): 1220-1224

    The throughput performance of the non-persistent carrier sense spread spectrum with overload detection (NP-CSSS/OD) protocol is analysed and compared with that of the conventional non-persistent and one-persistent carrier sense multiple access with collision detection protocols (NP-CSMA/CD and 1P-CSMA/CD) and the one-persistent carrier sense spread spectrum with overload detection (1P-CSSS/OD) protocol. We also introduce utilization measures and make performance comparisons among these protocols. At high offered loads, the NP-CSSS/OD protocol is found to offer the best throughput and utilization performance among them.
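    For background, the classical throughput expression for plain slotted non-persistent CSMA (without collision or overload detection and without spread spectrum), which the protocols compared above build on; S is the throughput, G the offered load, and a the normalized propagation delay. This is the standard Kleinrock-Tobagi result, not a formula from the letter.

        \[
          S \;=\; \frac{a\,G\,e^{-aG}}{1 - e^{-aG} + a}.
        \]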
