
Keyword Search Result

[Keyword] Ti (30728 hits)

Results 29081-29100 of 30728 hits

  • A Method for Computing the Weight Distribution of a Block Code by Using Its Trellis Diagram

    Yoshihisa DESAKI  Toru FUJIWARA  Tadao KASAMI  

     
    PAPER  Vol: E77-A No:8  Page(s): 1230-1237

    A method is presented for computing the number of codewords of weight less than or equal to a given integer in a binary block code by using its trellis diagram. The time and space complexities are analyzed. It is also shown that this method is very efficient for codes with relatively simple trellis diagrams, such as some BCH codes. Using this method, the weight distribution of the (128,36) extended BCH code is computed efficiently.
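    As a point of orientation only (the paper's exact procedure and its complexity analysis are not reproduced here), counting codewords by weight on a code trellis can be sketched as a dynamic program that sweeps the trellis section by section, keeping for each state a histogram of accumulated path weights. The trellis representation below, a list of sections each given as (from_state, to_state, bit) edges, is an assumption made for illustration.

```python
# Hedged sketch: dynamic programming over a binary code trellis to count
# codewords by Hamming weight up to max_weight. The trellis format is an
# assumption for illustration, not the representation used in the paper.
from collections import defaultdict

def weight_counts(trellis, max_weight):
    counts = {0: {0: 1}}                      # paths reaching state 0 with weight 0
    for section in trellis:
        nxt = defaultdict(lambda: defaultdict(int))
        for frm, to, bit in section:
            for w, c in counts.get(frm, {}).items():
                if w + bit <= max_weight:
                    nxt[to][w + bit] += c
        counts = {s: dict(ws) for s, ws in nxt.items()}
    return counts.get(0, {})                  # assumes the trellis terminates in state 0

# Example: 3-section trellis of the (3,2) even-weight code {000, 011, 101, 110}
trellis = [
    [(0, 0, 0), (0, 1, 1)],
    [(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1)],
    [(0, 0, 0), (1, 0, 1)],
]
print(weight_counts(trellis, 3))              # {0: 1, 2: 3}
```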

  • Unidirectional Byte Error Locating Codes

    Shuxin JIANG  Eiji FUJIWARA  

     
    PAPER  Vol: E77-A No:8  Page(s): 1253-1260

    This paper proposes a new type of unidirectional error control code which indicates the location of unidirectional errors clustered within a b-bit length, i.e., a unidirectional byte error in b (b ≥ 2) bits. Single unidirectional b-bit byte error locating codes, called SUbEL codes, are first clarified using necessary and sufficient conditions, and then a code construction algorithm is demonstrated. A lower bound on the check bit length of the SUbEL codes is derived, and based on it the proposed codes are shown to be very efficient. Using the code design concept presented for the SUbEL codes, it is demonstrated that generalized unidirectional byte error locating codes are easily constructed.

  • PATDRAM: Pixel-Aligned Triple-Port DRAM

    Toshiki MORI  Tetsuyuki FUKUSHIMA  Akifumi KAWAHARA  Katsumi WADA  Akihiro MATSUMOTO  

     
    PAPER-DRAM  Vol: E77-C No:8  Page(s): 1316-1322

    This paper describes the architecture and new circuit technologies of a proposed Pixel (bit)-Aligned Triple-port DRAM (PATDRAM). The PATDRAM has a 270 Kword × 16 b Random Access Memory (RAM), a 512 word × 8 b Serial Access Memory-(a) (SAMa), and a 1024 word × 4 b Serial Access Memory-(b) (SAMb). The random port, serial-a port, and serial-b port can be operated by three independent synchronous clocks. In all three ports, word data can be aligned to an arbitrary bit position. Data transfer from SAMb to RAM can be individually masked by transfer mask data. The RAM operates with a 33 MHz synchronous clock and the two SAMs operate with 40 MHz clocks. The novel architecture of the PATDRAM accelerates graphics performance and simplifies multimedia systems which manage both realtime video and computer graphics data, and it also accelerates graphics performance in both two-dimensional (2D) and three-dimensional (3D) graphics systems. The PATDRAM was designed using a 0.6 µm double-metal, triple-poly, stacked-capacitor CMOS process technology in a 10.98 mm × 9.88 mm die area, integrating a 4.4 Mb RAM, an 8 Kb SAM, a 4 Kb transfer mask register, and 5 Kgates of logic.

  • Parallel Analog Image Coding and Decoding by Using Cellular Neural Networks

    Mamoru TANAKA  Kenneth R. CROUNSE  Tamás ROSKA  

     
    PAPER-Neural Networks  Vol: E77-A No:8  Page(s): 1387-1395

    This paper describes highly parallel analog image coding and decoding by cellular neural networks (CNNs). The communication system in which the coder (C-) and decoder (D-) CNNs are embedded consists of a differential transmitter with an internal receiver model in the feedback loop. The C-CNN encodes the image through two cascaded techniques: structural compression and halftoning. The D-CNN decodes the received data through a reconstruction process, which includes a dynamic current distribution, so that the original input to the C-CNN can be recognized. The halftoning serves as a dynamic quantization that converts each pixel to a binary value depending on the neighboring values. We approach halftoning by minimizing the error energy between the original gray image and the reconstructed halftone image, and structural compression from the viewpoints of topological and regularization theories. All dynamics are described by CNN state equations. Both the proposed coding and decoding algorithms use only local image information in a space-invariant manner; therefore errors are distributed evenly and do not introduce the blocking effects found in DCT-based coding methods. In the future, the use of parallel inputs from on-chip photodetectors would allow direct dynamic quantization and compression of image sequences without the use of multiple-bit analog-to-digital converters. To validate our theory, a simulation has been performed using the relaxation method on a 150-frame image sequence. Each input image was 256 × 256 pixels with 8 bits per pixel. The simulated fixed compression rate, not including the Huffman coding, was about 1/16 with a PSNR of 31 dB to 35 dB.
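    The halftoning step described above is carried out by CNN dynamics that minimize an error energy between the gray image and its binary reconstruction. As a rough, non-CNN point of comparison only, classical Floyd-Steinberg error diffusion likewise quantizes each pixel to a binary value while accounting for its neighborhood, by pushing the quantization error onto pixels not yet visited; the sketch below illustrates that idea and is not the coding scheme of the paper.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Classical error-diffusion halftoning (comparison only, not the paper's
    CNN formulation): threshold each pixel to 0/1 and distribute the
    quantization error to not-yet-visited neighbors."""
    img = np.asarray(gray, dtype=float) / 255.0
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Usage: binary = floyd_steinberg_halftone(np.random.randint(0, 256, (64, 64)))
```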

  • Variable Error Controlling Schemes for Intelligent Error Controlling Systems

    Taroh SASAKI  Ryuji KOHNO  Hideki IMAI  

     
    PAPER  Vol: E77-A No:8  Page(s): 1281-1288

    Recently, much research has been carried out on intelligent communication. If the final information sink is assumed to be a human being, a communication channel can be used more effectively when encoders/decoders work "intelligently", i.e., take into account the semantics of the information to be sent. We have been studying error-controlling systems based on the differing importance of segments of information. The system divides the information input into segments, each of which can be assigned its own importance. The segments are individually encoded by appropriate error-correcting codes (ECCs), chosen according to their importance from among codes with different error-correcting capabilities. For information whose importance is systematically aligned, conventional UEP (unequal error protection) codes can be applied, but we treat the case in which the importance of the information source is not systematically aligned. Since the system uses multiple ECCs with different (n,k,d) parameters, information about the length of the next codeword is required for decoding. We propose error controlling schemes using multiple ECCs; the first and second schemes use explicit codelength-identifying information. In the second scheme, information bits are sorted so that segments with the same importance can be encoded by an ECC with the same error-correcting capability. The third scheme is the main proposal of this paper and uses a Variable Capability Coding scheme (VCC), which uses several ECCs having different error-correcting capabilities and codelengths. A sequence encoded by the VCC is separable into appropriate segments without explicit codelength-identifying information when the channel error probability is low. We then evaluate these schemes by code rate when (1) the error-correcting capability and (2) the codelength-identifying capability are the same. One feature of the VCC is its capability of resuming from propagative errors, which arise when errors beyond the codelength-identifying capability occur and the proper beginning of the codeword is lost in the decoder. We also evaluate this capability as (3) the resynchronizing capability.

  • Establishment of Nonlinear ARMA Model for Non-Gaussian Stochastic Process and Its Application to Time Series Data of Road Traffic Noise

    Akira IKUTA  Mitsuo OHTA  

     
    PAPER  Vol: E77-A No:8  Page(s): 1345-1352

    In the actual acoustic environment, the stochastic process exhibits various non-Gaussian distribution forms, and there potentially exist various nonlinear correlations in addition to the linear correlation between time series. In this study, a nonlinear ARMA model is proposed based on Bayes' theorem, in which no artificially pre-established regression function model is assumed between time series, while all of this various correlation information is reflected hierarchically. The proposed method is applied to actual data of road traffic noise and its practical usefulness is verified.

  • Properties of Thin-Film Thermal Switches for High-Tc Superconductive Filter

    Yasuhiro NAGAI  Naobumi SUZUKI  Osamu MICHIKAMI  

     
    PAPER-HTS  Vol: E77-C No:8  Page(s): 1229-1233

    This paper reports on the properties of thin-film thermal switches that are monolithically fabricated on a high-Tc superconductive filter. Over a wide operating temperature range of 50-77 K, the switch was found to shift the center frequency by -10 MHz with an increase in insertion loss of less than 0.7 dB. In on-off switching of the filter characteristics using the thin-film switches, power consumption was approximately 20 mW at 77 K, and the signal decay time, taken as the switching speed, was 30 ms at 76 K with a switch current of 70 mA. The decay time decreased exponentially as the switch current or the temperature setting increased.

  • Piecewise Parametric Cubic Interpolation

    Caiming ZHANG  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Algorithm and Computational Complexity  Vol: E77-D No:8  Page(s): 869-876

    A method is described for constructing an interpolant to a set of arbitrary data points (xi, yi), i = 1, 2, ..., n. The constructed interpolant is a piecewise parametric cubic polynomial, satisfies C1 continuity, and reproduces all parametric polynomials of degree two or less exactly. Experiments comparing the new method with the Bessel method and the spline method are also shown.
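    The abstract does not give the paper's own tangent construction, so the sketch below is only a generic illustration of piecewise C1 cubic interpolation using Bessel (parabolic) tangent estimates, i.e., the comparison method named above. For the parametric setting of the paper, each coordinate would be interpolated separately against a chosen parameterization.

```python
import numpy as np

def bessel_tangents(t, y):
    """Bessel tangent estimates: slope at t[i] of the parabola through the
    three neighboring points; endpoints fall back to one-sided chord slopes
    (a simplification made for this sketch)."""
    s = np.diff(y) / np.diff(t)                              # chord slopes
    d = np.empty(len(t))
    d[0], d[-1] = s[0], s[-1]
    for i in range(1, len(t) - 1):
        d[i] = ((t[i+1] - t[i]) * s[i-1] + (t[i] - t[i-1]) * s[i]) / (t[i+1] - t[i-1])
    return d

def hermite_eval(t, y, d, tq):
    """Evaluate the C1 piecewise cubic Hermite interpolant at points tq."""
    out = np.empty(len(tq))
    for k, x in enumerate(tq):
        i = int(np.clip(np.searchsorted(t, x) - 1, 0, len(t) - 2))
        h = t[i+1] - t[i]
        u = (x - t[i]) / h
        out[k] = ((2*u**3 - 3*u**2 + 1) * y[i] + (u**3 - 2*u**2 + u) * h * d[i]
                  + (-2*u**3 + 3*u**2) * y[i+1] + (u**3 - u**2) * h * d[i+1])
    return out

t = np.array([0.0, 1.0, 2.0, 4.0]); y = t**2
print(hermite_eval(t, y, bessel_tangents(t, y), np.array([0.5, 3.0])))
```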

  • Moving Point Light Source Photometric Stereo

    Yuji IWAHORI  Robert J. WOODHAM  Hidekazu TANAKA  Naohiro ISHII  

     
    LETTER-Image Processing, Computer Graphics and Pattern Recognition  Vol: E77-D No:8  Page(s): 925-929

    This paper describes a new method to determine the 3-D position coordinates of a Lambertian surface from four shaded images acquired with an actively controlled, nearby moving point light source. The method treats both the case when the initial position of the light source is known and the case when it is unknown.
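    For context only: in the classical distant-source setting, Lambertian photometric stereo recovers surface orientation by a linear least-squares fit, as sketched below. The method of this letter treats the harder case of a nearby, actively moved point source and recovers 3-D position as well; the function below is a generic textbook sketch, not the authors' algorithm.

```python
import numpy as np

def lambertian_photometric_stereo(light_dirs, intensities):
    """Textbook distant-source photometric stereo for one pixel (context only).
    light_dirs: (m, 3) unit light directions; intensities: (m,) observed values."""
    # Least-squares solve of  I = L @ (albedo * normal)  for the scaled normal.
    g, *_ = np.linalg.lstsq(np.asarray(light_dirs, dtype=float),
                            np.asarray(intensities, dtype=float), rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo          # unit surface normal, albedo

# Usage with the four-image setup mentioned above (shapes are assumptions):
# normal, rho = lambertian_photometric_stereo(L, I)   # L: (4, 3), I: (4,)
```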

  • Automatic Seal Imprint Verification System with Imprint Quality Assessment Function and Its Performance Evaluation

    Katsuhiko UEDA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition  Vol: E77-D No:8  Page(s): 885-894

    An annoying problem encountered in automatic seal imprint verification is that seal imprints may show considerable variation, even if they are all produced from a single seal. This paper proposes a new automatic seal imprint verification system which adds an imprint quality assessment function to our previous system in order to solve this problem, and also examines the verification performance of the system experimentally. The system consists of an imprint quality assessment process and a verification process. In the imprint quality assessment process, an examined imprint is first divided into partial regions. Each partial region is classified into one of three quality classes (good quality region, poor quality region, and background) on the basis of the characteristics of its gray-level histogram. In the verification process, only the good quality partial regions of an examined imprint are verified against the registered imprint. Finally, the examined imprint is classified as one of two types: a genuine or a forgery. However, if the quality assessment classifies too many partial regions as poor quality, the examined imprint is classified as "ambiguous" without verification processing. A major advantage of this verification system is that it can verify seal imprints of various qualities efficiently and accurately. Computer experiments with real seal imprints were performed using this system, the previous system (without the image quality assessment function), and document examiners of a bank. The results of these experiments show that this system is superior in verification performance to our previous system and has a verification performance similar to that of the document examiners (i.e., the experimental results show the effectiveness of adding the image quality assessment function to a seal imprint verification system).
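    The abstract does not specify which gray-level histogram characteristics drive the good/poor/background decision, so the following is only a hypothetical sketch of the region-wise quality assessment step; the block size and ink-fraction thresholds are invented for illustration.

```python
import numpy as np

def classify_regions(img, block=32, ink_thresh=128, bg_max_ink=0.02, good_min_ink=0.15):
    """Illustrative sketch only: split a gray-level imprint image into blocks and
    label each block 'background', 'good', or 'poor' from a simple histogram
    statistic (fraction of dark 'ink' pixels). Thresholds are hypothetical."""
    h, w = img.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = img[y:y + block, x:x + block]
            ink_fraction = float(np.mean(region < ink_thresh))
            if ink_fraction < bg_max_ink:
                labels[(y, x)] = "background"
            elif ink_fraction >= good_min_ink:
                labels[(y, x)] = "good"
            else:
                labels[(y, x)] = "poor"
    return labels

# Verification would then compare only the "good" regions with the registered
# imprint, and report "ambiguous" when too many regions are labeled "poor".
```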

  • Design of Repairable Cellular Arrays on Multiple-Valued Logic

    Naotake KAMIURA  Yutaka HATA  Kazuharu YAMATO  

     
    PAPER-Fault Tolerant Computing  Vol: E77-D No:8  Page(s): 877-884

    This paper proposes a repairable and diagnosable k-valued cellular array. We assume a single fault, i.e., either a stuck-at-0 fault or a stuck-at-(k−1) fault of switches occurs in the array. By building in a duplicate column iteratively, when a stuck-at-(k−1) fault occurs in the array, the fault never influences the output of the array. That is, we can construct a fault-tolerant array for the stuck-at-(k−1) fault. Meanwhile, for the stuck-at-0 fault, the diagnosing method is simple and easy because we do not have to diagnose the stuck-at-(k−1) fault. Moreover, our array can be repaired easily for this fault. Comparison with other rectangular arrays shows that our array has advantages in the number of cells and the cost of fault diagnosis.

  • Ultrafast Single-Shot Water and Fat Separated Imaging with Magnetic Field Inhomogeneities

    Shoichi KANAYAMA  Shigehide KUHARA  Kozo SATOH  

     
    PAPER-Medical Electronics and Medical Information  Vol: E77-D No:8  Page(s): 918-924

    Ultrafast MR imaging (e.g., echo-planar imaging) acquires all the data within only several tens of milliseconds. This method, however, is affected by static magnetic field inhomogeneities and chemical shift; therefore, a high degree of field homogeneity and separation of the water and fat signals are required. However, it is practically impossible to obtain a homogeneous field within a subject even if in vivo shimming has been performed. In this paper, we describe a new ultrafast MR imaging method called Ultrafast Single-shot water and fat Separated Imaging (USSI) and a correction method for field inhomogeneities and chemical shift. The magnetic field distribution within the subject is measured before the scan and used to obtain images without field inhomogeneity distortions. Computer simulation results have shown that USSI with the correction method can obtain water and fat separated images as the real and imaginary parts, respectively, of a complex Fourier transform with a single-shot scan. Image quality is maintained in the presence of field inhomogeneities of several ppm, similar to those occurring under practical imaging conditions. Limitations of the correction method are also discussed.

  • An 8-Dimensional Trellis-Coded 8-PSK with Non-zero Crossing Constraint

    Tadashi WADAYAMA  Koichiro WAKASUGI  Masao KASAHARA  

     
    PAPER  Vol: E77-A No:8  Page(s): 1274-1280

    We present an 8-dimensional trellis-coded 8-PSK with a symbol transition constraint similar to that of π/4-shift quadrature phase shift keying (QPSK). This scheme can achieve a coding gain of 1.6 to 2.4 dB at the same rate as π/4-shift QPSK on a Gaussian channel, and it is also immune to phase ambiguities of integer multiples of 90°. In order to label the constellation of the proposed scheme, a constellation partitioning algorithm is presented. This algorithm, based on set partitioning, can be used to label signal constellations with no coset structure.

  • Necessary and Sufficient Conditions for Unidirectional Byte Error Locating Codes

    Shuxin JIANG  Eiji FUJIWARA  

     
    PAPER  Vol: E77-A No:8  Page(s): 1246-1252

    Byte error locating codes specify the byte location in which errors occur without indicating the precise positions of the erroneous bits. This type of code is considered to be useful for fault isolation and reconfiguration in fault-tolerant computer systems. In this paper, the difference between the code function of error location and that of error correction/error detection is clarified. Using the concepts of unidirectional byte distance, unordered byte number, and ordered byte number, the necessary and sufficient conditions for unidirectional byte error locating codes are demonstrated.

  • I-V Characteristic of YBCO Step-Edge Josephson Junction

    Keiichi YAMAGUCHI  Shuichi YOSHIKAWA  Tsuyoshi TAKENAKA  Syuichi FUJINO  Kunihiko HAYASHI  Tsutomu MITSUZUKA  Katsumi SUZUKI  Youichi ENOMOTO  

     
    PAPER-HTS  Vol: E77-C No:8  Page(s): 1218-1223

    Step-edge Josephson junctions (SEJJs) made from YBa2Cu3O7 (YBCO) thin films on MgO (100) substrates with gentle step angles (below 40 degrees) have been successfully fabricated. The step edges, with several different angles, were made on the MgO substrates using photolithography and Ar ion beam etching, and YBCO films were then deposited on the step edges by the pulsed laser deposition method. The relationships between the step angle and the I-V characteristics, microwave properties, and structure of the SEJJs were systematically investigated. Shapiro steps were clearly observed only in the step-angle range between 10 and 30 degrees. Intermittence and hysteresis in the I-V characteristics were observed above 30 mA, independent of the step angle.

  • Analysis of an Open-Ended Waveguide as a Probe for Near Field Antenna Measurements by Using TLM Method

    Yoshiyuki FUJINO  Cheuk-yu Edward TONG  

     
    PAPER-Antennas and Propagation  Vol: E77-B No:8  Page(s): 1048-1055

    To increase the accuracy of a near-field antenna measurement system, it is necessary to know the radiation characteristics of the probe used to detect the near-field data. The open-ended waveguide used as a near-field probe in our system was analyzed using the Transmission Line Matrix (TLM) method, a time-domain electromagnetic solver. The validity of this analysis has been confirmed by comparison with experimental data and an existing theoretical approximation. The frequency dependence of the complex reflection coefficient at the waveguide aperture has been derived and is shown to agree with measured values. The radiation pattern of the open-ended waveguide with its mounting structure is also calculated. Ripples in both the amplitude and phase patterns are correctly predicted by our simulation. This method can be applied to accurately model the effect of probe antennas and thereby enhance the accuracy of the near-field antenna range.

  • A Resistor Coupled Josephson Polarity-Convertible Driver

    Shuichi NAGASAWA  Shuichi TAHARA  Hideaki NUMATA  Yoshihito HASHIMOTO  Sanae TSUCHIDA  

     
    PAPER-LTS  Vol: E77-C No:8  Page(s): 1176-1180

    A polarity-convertible driver is necessary as a basic component of several Josephson random access memories. This driver must be able to inject a current of positive or negative polarity into a load transmission line such as a word or bit line of the RAM. In this paper, we propose a resistor coupled Josephson polarity-convertible driver which is highly sensitive to input signals and has a wide operating margin. The driver consists of several Josephson junctions and several resistors. The input signal is injected directly into the driver through the resistors. The circuit design is discussed in terms of the operating principle of the driver. The driver is fabricated by 1.5 µm Nb technology with Nb/AlOx/Nb Josephson junctions, two-layer Nb wiring, an Nb ground plane, Mo resistors, and SiO2 insulators. The Nb/AlOx/Nb Josephson junctions are fabricated using technology refined for sub-micron size junctions. The insulators between wirings are formed using a bias sputtering technique to obtain good step coverage. The driver circuit size is 53 µm × 34 µm. Measurements are carried out at 10 kHz to quasistatically test the polarity-convertible function and the operating margin of the driver. Proper polarity-convertible operation is confirmed over a large operating bias margin of 70% at a fairly small input current of 0.3 mA.

  • Innovation Models in a Stochastic System Represented by an Input-Output Model

    Kuniharu KISHIDA  

     
    PAPER  Vol: E77-A No:8  Page(s): 1337-1344

    A stochastic system represented by an input-output model can be described mainly by two different types of state space representation. Innovation models corresponding to these state space representations are examined, the relationship between the two representations is clarified systematically, and an easy transformation between them is presented. The zeros of the innovation models are the same as those of an ARMA model which is stochastically equivalent to the innovation models, and are related to the stable eigenvalues of the generalized eigenvalue problem of the matrix Riccati equation.

  • Highly Reliable Flash Memories Fabricated by in-situ Multiple Rapid Thermal Processing

    Takahisa HAYASHI  Yoshiyuki KAWAZU  Akira UCHIYAMA  Hisashi FUKUDA  

     
    PAPER-Non-volatile Memory  Vol: E77-C No:8  Page(s): 1270-1278

    We propose, for the first time, highly reliable flash-type EEPROM cell fabrication using in-situ multiple rapid thermal processing (RTP) technology. In this study, rapid thermal oxynitridation tunnel oxide (RTONO) film formation followed by in-situ arsenic (As)-doped floating-gate polysilicon growth by rapid thermal chemical vapor deposition (RTCVD) is fully utilized. The results show that after 5 × 10^4 program/erase (P/E) endurance cycles, the conventional cell shows 65% narrowing of the threshold voltage (Vt) window, whereas the RTONO cell shows narrowing of less than 20%. A large number of nitrogen atoms (10^20 atoms/cm^3), confirmed by secondary ion mass spectrometry (SIMS), pile up at the SiO2/Si interface and distribute into the bulk SiO2. It is considered that stable Si-N bonds are formed in the RTONO film which minimize electron trap generation as well as the neutral defect density, resulting in smaller Vt shifts under P/E stress. In addition, the RTONO film contains fewer hydrogen atoms because of the final N2O oxynitridation. The SIMS data show that by the in-situ RTCVD process As atoms (9 × 10^20 atoms/cm^3) are incorporated uniformly into the 1000-Å-thick film. Moreover, the RTCVD polysilicon film shows an extremely flat surface. The time-dependent dielectric breakdown (TDDB) characteristics of the interpoly oxide-nitride-oxide (ONO) film exhibited no defect-related breakdown and a breakdown time 5 times longer than that of phosphorus-doped polysilicon film. Therefore, the fabricated flash-EEPROM cell has good charge storing capability.

  • 3-D Object Recognition Using Hopfield-Style Neural Networks

    Tsuyoshi KAWAGUCHI  Tatsuya SETOGUCHI  

     
    PAPER-Bio-Cybernetics and Neurocomputing  Vol: E77-D No:8  Page(s): 904-917

    In this paper we propose a new algorithm for recognizing 3-D objects from 2-D images. The algorithm takes the multiple-view approach, in which each 3-D object is modeled by a collection of 2-D projections from various viewing angles, each 2-D projection being called an object model. To select the candidates for the object model that best matches the input image, the proposed algorithm computes the surface matching score between the input image and each object model using Hopfield nets. In addition, the algorithm obtains the final matching error between the input image and each candidate model from the error of the pose-transform matrix proposed by Hong et al. and selects the object model with the smallest matching error as the best matched model. The proposed algorithm can be viewed as a combination of the algorithm of Lin et al. and the algorithm of Hong et al. However, it is not a simple combination of these algorithms. While the algorithm of Lin et al. computes both the surface matching score and the vertex matching score between the input image and each object model to select the candidates for the best matched model, the proposed algorithm computes only the surface matching score. In addition, to enhance the accuracy of the surface matching score, the proposed algorithm uses two Hopfield nets. The first Hopfield net, which is the same as that used in the algorithm of Lin et al., performs a coarse matching between surfaces of an input image and surfaces of an object model. The second Hopfield net, newly proposed in this paper, establishes the surface correspondences using compatibility measures between adjacent surface pairs of the input image and the object model. The results of the experiments showed that the surface matching score obtained by the Hopfield net proposed in this paper is much more useful for the selection of candidates for the best matched model than both the surface matching score obtained by the first Hopfield net of Lin et al. and the vertex matching score obtained by the second Hopfield net of Lin et al.; as a result, the object recognition algorithm of this paper performs much more reliable object recognition than that obtained by simply combining the algorithm of Lin et al. and the algorithm of Hong et al.
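    As a generic illustration of Hopfield-style correspondence matching (not the two-network formulation of this paper), the sketch below relaxes binary match neurons v[i, k], meaning "image surface i corresponds to model surface k", under a hypothetical pairwise compatibility tensor and uniqueness penalties.

```python
import numpy as np

def hopfield_match(compat, iters=2000, penalty=2.0, seed=0):
    """Generic Hopfield-style matching sketch (assumptions, not the paper's
    exact network). compat[i, k, j, l] is a hypothetical compatibility between
    the candidate matches (i -> k) and (j -> l); row/column penalties
    discourage assigning a surface more than once."""
    rng = np.random.default_rng(seed)
    n_img, n_mod = compat.shape[0], compat.shape[1]
    v = rng.integers(0, 2, size=(n_img, n_mod)).astype(float)
    for _ in range(iters):                      # asynchronous binary updates
        i, k = rng.integers(n_img), rng.integers(n_mod)
        support = np.einsum('jl,jl->', compat[i, k], v) - compat[i, k, i, k] * v[i, k]
        conflict = v[i].sum() + v[:, k].sum() - 2 * v[i, k]
        v[i, k] = 1.0 if support - penalty * conflict > 0 else 0.0
    return v                                    # v[i, k] = 1 marks a correspondence

# compat (shape (n_img, n_mod, n_img, n_mod)) would be built from surface-pair
# relations such as adjacency and relative size between image and model surfaces.
```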
