
Keyword Search Result

[Keyword] Ti (30728 hits)

27581-27600 hits (30728 hits)

  • A Modeling and Protocol of the Out-Channel Interaction for PCS in Intelligent Network

    Takeshi SUGIYAMA  Tomoki OMIYA  Kazumasa TAKAMI  Shuji ESAKI  

     
    PAPER-Network architecture, signaling and protocols for PCS

      Vol:
    E79-B No:9
      Page(s):
    1388-1393

    We discuss the requirements, a model, and a protocol for the out-channel interaction for PCS in Intelligent Networks. Since PCS can utilize the DSSI functions (e.g., location update and authentication), it is reasonable to consider a model and protocol of the interaction for PCS as well as for DSSI. To obtain the model and protocol, two types of interaction, call-related and call-unrelated, are considered. It is necessary to enhance the Basic Call State Model (BCSM) for the former, and to introduce a state model similar to the BCSM for the latter, which represents association management and component exchange between a user and the network. The allocation of the authentication function for the dominant traffic, location update, is discussed based on the model and protocol, and it can co-exist with the proposed generic model and protocol.

  • An Efficient Wireless Voice/Data Integrated Access Algorithm in Noisy Channel Environments

    Byung Chul KIM  Chong Kwan UN  

     
    PAPER-Network architecture, signaling and protocols for PCS

      Vol:
    E79-B No:9
      Page(s):
    1394-1404

    In this paper, an efficient voice/data integrated access algorithm for future personal communication networks (PCNs) is proposed and analyzed based on an equilibrium point analysis (EPA) method. A practical wireless communication channel may be impaired by noise and multipath distortion, so corrupted real-time packets have to recompete immediately in order to be transmitted within the stringent delay constraint. Also, real-time traffic users have to transmit their packets irrespective of the amount of non-real-time data messages, so that heavy non-real-time traffic does not degrade the quality of real-time traffic. In the proposed algorithm, request subslots are distributed at the beginning of every slot to reduce the access delay of real-time traffic. Moreover, slots are assigned to real-time traffic first, and the remaining idle slots are then assigned to non-real-time traffic using a contention-separation scheme. We analyze the throughput and delay characteristics of this system based on an EPA method and validate the performance by simulations. This scheme can efficiently support the different qualities of service (QoS) imposed by different services and shows good quality for real-time traffic, especially voice packets, no matter how heavy the non-real-time traffic is.
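
    The contention-separation idea can be illustrated with a minimal sketch: pending real-time (voice) requests are granted slots first in each frame, and only the left-over idle slots are offered to non-real-time (data) users. The frame size, queue handling, and names below are illustrative assumptions, not the paper's protocol details.

      # Sketch of contention separation: real-time (voice) requests are granted
      # slots first; whatever is left over is offered to non-real-time (data)
      # users.  Frame size and round-robin policy are illustrative only.
      from collections import deque

      SLOTS_PER_FRAME = 20  # hypothetical frame size

      def assign_slots(voice_queue: deque, data_queue: deque):
          """Return (voice_grants, data_grants) for one frame."""
          voice_grants, data_grants = [], []
          remaining = SLOTS_PER_FRAME

          # Real-time traffic is served first, up to the whole frame.
          while voice_queue and remaining > 0:
              voice_grants.append(voice_queue.popleft())
              remaining -= 1

          # Only the left-over idle slots are given to non-real-time traffic,
          # so heavy data load cannot degrade voice quality.
          while data_queue and remaining > 0:
              data_grants.append(data_queue.popleft())
              remaining -= 1

          return voice_grants, data_grants

      # Example: 15 voice requests and 30 data requests competing for one frame.
      voice = deque(f"v{i}" for i in range(15))
      data = deque(f"d{i}" for i in range(30))
      print(assign_slots(voice, data))  # all 15 voice users served, 5 data users served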

  • Performance Comparison of Fixed and Dynamic Channel Assignments in Indoor Cellular System

    Hiroshi FURUKAWA  Mutsuhiko OISHI  Yoshihiko AKAIWA  

     
    PAPER-Advanced control techniques and channel assignments

      Vol:
    E79-B No:9
      Page(s):
    1295-1300

    This paper compares the performance of an indoor cellular system, in terms of capacity and channel assignment delay, for different Dynamic Channel Assignment (DCA) and Fixed Channel Assignment (FCA) schemes. We refer to a specific group of DCA schemes, namely Channel Segregation and Reuse Partitioning (RP). Our main concern is to show that these DCA schemes offer better performance than FCA. Since the structure and floor layout of a building have a major influence on propagation, and hence on the cell shape, a path loss simulator is developed for predicting the path loss, which is used in evolving base station layouts. Computer simulation, based on the Monte Carlo method, is carried out using the path loss values and the base station layouts. The results indicate that increased traffic capacity can be achieved with all DCA schemes in comparison with FCA. The highest capacity and a shorter channel assignment delay are delivered by the Self-Organized Reuse Partitioning DCA scheme.
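
    As a rough illustration of the Channel Segregation family of DCA referred to above, the sketch below keeps a learned per-channel priority at a base station, reinforced on successful use and reduced on failure, so that channels self-organize into a de-facto reuse pattern. The update rule, interference test, and parameters are placeholders, not the schemes evaluated in the paper.

      # Minimal sketch of the channel-segregation idea: each base station keeps a
      # per-channel priority that is reinforced on successful use and reduced on
      # failure.  The update rule and interference test are illustrative only.
      import random

      class SegregatingBS:
          def __init__(self, num_channels: int):
              self.priority = [0.5] * num_channels   # learned preference per channel
              self.usage = [0] * num_channels        # how often each was tried

          def pick_channel(self, channel_is_clean):
              # Try channels in decreasing learned priority.
              order = sorted(range(len(self.priority)),
                             key=lambda c: self.priority[c], reverse=True)
              for c in order:
                  n = self.usage[c] + 1
                  if channel_is_clean(c):            # e.g. a CIR / carrier-sense check
                      self.priority[c] = (self.priority[c] * self.usage[c] + 1) / n
                      self.usage[c] = n
                      return c
                  self.priority[c] = (self.priority[c] * self.usage[c]) / n
                  self.usage[c] = n
              return None                            # call blocked

      # Toy use: a random interference test stands in for a real CIR measurement.
      bs = SegregatingBS(num_channels=8)
      print(bs.pick_channel(lambda c: random.random() < 0.7))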

  • Serial and Parallel Search with Parallel I-Q Matched Filter for PN Acquisition in PCS

    Chun-Chieh FAN  Zsehong TSAI  

     
    PAPER-Advanced control techniques and channel assignments

      Vol:
    E79-B No:9
      Page(s):
    1278-1286

    For direct sequence spread spectrum systems, the performance of PN sequence acquisition can be significantly affected if data modulation is present. However, data modulation often exists during the reacquisition of a PCS radio channel. This study proposes and analyzes two schemes designed to improve the acquisition process for a PN sequence under data modulation. Both designs are based upon a PN acquisition receiver with parallel I-Q matched filters. The first scheme employs a serial search strategy with a verification mode. The second scheme, which is based upon the same parallel acquisition receiver, employs a parallel search strategy. We show that the second scheme is capable of providing faster acquisition under data modulation than the first, serial-search scheme using the same number of I-Q matched filters. We believe it is a very good alternative for the acquisition of data-modulated PN sequences in personal communications.
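
    A minimal sketch of what a serial search with a parallel (noncoherent) I-Q matched filter does: for each candidate code phase, the I and Q correlator outputs are combined as I² + Q² and compared with a threshold, which makes the decision insensitive to carrier phase and to the data sign. The sequence length, threshold, and search policy below are assumptions for illustration only.

      # Serial-search PN acquisition with a parallel I-Q matched filter:
      # the I and Q correlations are combined noncoherently as I^2 + Q^2.
      import numpy as np

      def acquire(rx_i, rx_q, pn, threshold):
          """Return the first code phase whose I-Q energy exceeds the threshold."""
          n = len(pn)
          for phase in range(n):                 # serial search over code phases
              local = np.roll(pn, phase)
              zi = np.dot(rx_i[:n], local)       # I-branch matched-filter output
              zq = np.dot(rx_q[:n], local)       # Q-branch matched-filter output
              if zi**2 + zq**2 > threshold:      # noncoherent combining
                  return phase                   # hand over to a verification mode
          return None

      # Toy example: a data bit (+/-1) modulating a length-63 random PN sequence.
      rng = np.random.default_rng(0)
      pn = rng.choice([-1.0, 1.0], size=63)
      true_phase, data_bit, carrier = 17, -1.0, np.exp(1j * 0.8)
      rx = data_bit * np.roll(pn, true_phase) * carrier + 0.3 * rng.standard_normal(63)
      print(acquire(rx.real, rx.imag, pn, threshold=0.5 * 63**2))   # -> 17 (ideally)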

  • Binary Counter with New Interface Circuits in the Extended Phase-Mode Logic Family

    Takeshi ONOMI  Yoshinao MIZUGAKI  Tsutomu YAMASHITA  Koji NAKAJIMA  

     
    PAPER-Superconductive digital integrated circuits

      Vol:
    E79-C No:9
      Page(s):
    1200-1205

    A binary counter circuit in the extended phase-mode logic (EPL) family is presented. The EPL family utilizes a single flux quantum as an information bit carrier. Numerical simulations show that a binary counter circuit with a Josephson critical current density of 1 kA/cm² can operate with input signals of up to 30 GHz. The circuit has been fabricated using Nb/AlOx/Nb Josephson junction technology, and new interface circuits are employed in the fabricated chip. A low-speed test result shows the correct operation of the binary counter.

  • Josephson Memory Technology

    Suichi TAHARA  Shuichi NAGASAWA  Hideaki NUMATA  Yoshihito HASHIMOTO  Shinichi YOROZU  

     
    INVITED PAPER-Superconductive digital integrated circuits

      Vol:
    E79-C No:9
      Page(s):
    1193-1199

    Superconductive LSIs with Josephson junctions offer features such as low power dissipation and high switching speed. In this paper, we review our 4-Kbit RAM with vortex transitional memory cells as an illustration of superconductive LSIs with Josephson junctions. We have developed a fabrication process technology for the 4-Kbit RAM, and a 380 ps access time and 9.8 mW power dissipation have been obtained experimentally. We have also estimated a suitable moat structure to reduce the influence of trapped magnetic flux. The 4-Kbit RAM has been successfully operated with a bit yield of 99.8%. Furthermore, we discuss GHz testing, which is one of the most significant issues concerning superconductive digital LSIs.

  • R-ALOHA Protocol for SS Inter-Vehicle Communication Network Using Head Spacing Information

    Young-an KIM  Masao NAKAGAWA  

     
    PAPER-CDMA and multiple access technologies

      Vol:
    E79-B No:9
      Page(s):
    1309-1315

    Recently, there have been intensive studies on protocols and applications for short-range inter-vehicle communication networks (SR-IVCN) and systems. The purpose of these studies is to improve the safety of road traffic systems and the smooth control of traffic flow by providing information to vehicles. Spread spectrum (SS) communication systems are able to communicate and measure the distance between terminals simultaneously, so it is advantageous to apply the spread spectrum technique to inter-vehicle communications. This paper assumes that vehicles which happen to be close to each other form and manage a locally autonomous, decentralized, dynamic network. An R-ALOHA (Reservation-ALOHA) protocol for the spread spectrum inter-vehicle communication network using head spacing information is proposed, which improves conventional slot reservation methods. Since the near-far problem in SS communication is one reason for the degradation of system performance, the proposed scheme is shown to improve the efficiency of communication. The performance of the proposed system, in an environment where vehicles are assumed to run freely on a highway, is verified by computer simulation. It is shown that inter-vehicle communication can be carried out smoothly between one vehicle and the surrounding vehicles using the proposed method.
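
    For orientation, the sketch below shows only the basic R-ALOHA cycle that the proposal builds on: a vehicle contends for a free slot, and a successfully used slot stays reserved for that vehicle in the following frames. The head-spacing-based slot selection that is the paper's contribution is not modeled; the frame size and retry policy are illustrative assumptions.

      # Basic R-ALOHA cycle: contend for a free slot; a successful slot stays
      # reserved in later frames.  Head-spacing-based selection is not modeled.
      import random

      FRAME_SLOTS = 8

      def r_aloha_frame(reservations, contenders):
          """reservations: slot -> vehicle or None; contenders: vehicles without a slot."""
          free = [s for s, owner in enumerate(reservations) if owner is None]
          attempts = {}
          for v in contenders:
              if free:
                  attempts.setdefault(random.choice(free), []).append(v)
          for slot, vs in attempts.items():
              if len(vs) == 1:            # success: the slot becomes reserved
                  reservations[slot] = vs[0]
              # collisions: all colliding vehicles retry in the next frame
          return reservations

      slots = [None] * FRAME_SLOTS
      for frame in range(5):
          waiting = [v for v in "ABCDE" if v not in slots]
          slots = r_aloha_frame(slots, waiting)
          print(frame, slots)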

  • Recognition of Handprinted Thai Characters Using Loop Structures

    Surapan AIRPHAIBOON  Shozo KONDO  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E79-D No:9
      Page(s):
    1296-1304

    A method for recognizing handprinted Thai characters input via an image scanner is presented. We use edge detection and boundary contour tracing algorithms to extract loop structures from the input characters. The number of loops and their locations are detected and used as information for rough classification. For fine classification, local feature analysis of Thai characters is used to discriminate an output character from a group of similar characters. In this paper, four parts of the recognition system are presented: preprocessing, single-character segmentation, loop structure extraction, and character identification. Preprocessing consists of pattern binarization, noise reduction, and slant normalization based on a geometrical transformation for forward (or backward) slanted words. The single-character segmentation method is applied during the recognition phase. Each character from an input word, including the character line-level information, is subjected to edge detection, contour tracing, and thinning to detect loop structures and to extract topological properties of strokes. Decision trees are constructed based on the obtained information about loops, end points of strokes, and some local characteristics of Thai characters. The proposed system is implemented on a personal computer, and a high recognition rate is obtained for 1000 samples of handprinted Thai words from 20 subjects.
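
    A minimal sketch of the loop-extraction step: in a binarized character image, background regions that are not connected to the image border are interior loops (holes), so a flood fill from the border followed by a count of the remaining background components yields the loop count. The connectivity choice and the tiny test pattern are illustrative assumptions, not the paper's exact procedure.

      # Count interior loops (holes) in a binary character image by flood-filling
      # the outer background and counting the background components left over.
      from collections import deque

      def count_loops(img):
          """img: 2-D list of 0 (background) and 1 (stroke).  Returns number of holes."""
          h, w = len(img), len(img[0])
          seen = [[False] * w for _ in range(h)]

          def flood(sr, sc):
              q = deque([(sr, sc)])
              seen[sr][sc] = True
              while q:
                  r, c = q.popleft()
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      nr, nc = r + dr, c + dc
                      if 0 <= nr < h and 0 <= nc < w and not seen[nr][nc] and img[nr][nc] == 0:
                          seen[nr][nc] = True
                          q.append((nr, nc))

          # 1) Remove the outer background by flooding from every border pixel.
          for r in range(h):
              for c in range(w):
                  if (r in (0, h - 1) or c in (0, w - 1)) and img[r][c] == 0 and not seen[r][c]:
                      flood(r, c)

          # 2) Every remaining unvisited background component is one loop.
          loops = 0
          for r in range(h):
              for c in range(w):
                  if img[r][c] == 0 and not seen[r][c]:
                      loops += 1
                      flood(r, c)
          return loops

      # An "o"-like stroke with a single interior loop.
      o_like = [[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 0, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]]
      print(count_loops(o_like))   # -> 1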

  • On a Class of Byte-Error-Correcting Codes from Algebraic Curves and Their Fast Decoding Algorithm

    Masazumi KURIHARA  Shojiro SAKATA  Kingo KOBAYASHI  

     
    PAPER-Coding Theory

      Vol:
    E79-A No:9
      Page(s):
    1298-1304

    In this paper we propose a class of byte-error-correcting codes derived from algebraic curves, which is a generalization of Reed-Solomon codes, and present a fast parallel decoding algorithm for them. Our algorithm can correct up to (m + b - θ)/(2b) byte errors for byte length b, where m + b - θ + 1 ≤ dG for the Goppa designed distance dG. This decoding algorithm can be parallelized. In this algorithm, for our code over the finite field GF(q), the total complexity for finding byte-error locations is O(bt(t + q - 1)), with time complexity O(t(t + q - 1)) and space complexity O(b), and the total complexity for finding error values is O(bt(b + q - 1)), with time complexity O(b(b + q - 1)) and space complexity O(t), where t ≤ (m + b - θ)/(2b). Our byte-error-correcting algorithm is superior to the conventional fast decoding algorithm for random errors with regard to the number of correctable byte errors in several cases.
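
    A small arithmetic illustration of the correction bound quoted above, assuming the reading t = ⌊(m + b - θ)/(2b)⌋ for the number of correctable byte errors; the parameter values below are arbitrary examples, not taken from the paper.

      # Number of correctable byte errors for byte length b and code parameters
      # m, theta, under the bound quoted in the abstract.  Example values only.
      def correctable_byte_errors(m: int, b: int, theta: int) -> int:
          return (m + b - theta) // (2 * b)

      for m, b, theta in [(60, 4, 8), (120, 8, 16), (255, 8, 32)]:
          t = correctable_byte_errors(m, b, theta)
          print(f"m={m:3d}, b={b}, theta={theta:2d}  ->  corrects up to t={t} byte errors")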

  • Quick Simulation Method for TCM Scheme Using Importance Sampling without Truncation Error

    Takakazu SAKAI  Haruo OGIWARA  

     
    PAPER-Coded Modulation

      Vol:
    E79-A No:9
      Page(s):
    1361-1370

    Evaluating the error probability of a trellis-coded modulation scheme by an ordinary Monte-Carlo simulation is almost impossible, since excessive simulation time is required. The number of simulation runs required can be reduced by importance sampling, one of the variance-reduction simulation methods; the reduction is attained by modifying the probability density function so that errors occur more frequently. The error event simulation method, which evaluates the error probability of a finite set of important error events, cannot avoid a truncation error. This is a fatal problem when evaluating the precision of the simulation result, and it stems from how the simulation probability density function is designed. We propose an evaluation method and design methods for the simulation conditional probability density function. The proposed method simulates any error event starting at a fixed time, and its estimator has no truncation error. The proposed design methods approximate the optimum simulation conditional probability density function. Using the proposed method for an additive non-Gaussian noise case, the simulation time of the most effective variant is less than 1/5600 of that of the ordinary Monte-Carlo method at a bit error rate of 10⁻⁶ under the same accuracy, if the overhead of selecting the error events is excluded. Even when the overhead of the importance sampling method is taken into account, the simulation time at the same bit error rate is about 1/96.
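
    The variance-reduction principle behind the paper can be illustrated on a much simpler problem than TCM: to estimate a rare error probability, samples are drawn from a biased density that makes errors frequent, and each error is weighted by the likelihood ratio of the true to the biased density so the estimator remains unbiased. The toy system below (a single Gaussian threshold crossing with mean-translation biasing) is an assumption for illustration, not the authors' error-event method.

      # Importance sampling for a rare Gaussian tail probability: bias the noise
      # mean onto the threshold, then reweight by the density ratio f/f*.
      from math import erfc, sqrt
      import numpy as np

      rng = np.random.default_rng(1)
      sigma, n = 1.0, 200_000
      threshold = 4.5 * sigma           # "error" = noise exceeding the signal margin

      # Ordinary Monte Carlo: with P(error) ~ 3.4e-6, 2e5 runs see almost no errors.
      noise = rng.normal(0.0, sigma, n)
      print("plain MC estimate :", np.mean(noise > threshold))

      # Importance sampling with mean translation and likelihood-ratio weights.
      shift = threshold
      biased = rng.normal(shift, sigma, n)
      log_w = ((biased - shift) ** 2 - biased ** 2) / (2 * sigma ** 2)   # log(f / f*)
      weights = np.exp(log_w)
      print("IS estimate       :", np.mean((biased > threshold) * weights))
      print("exact value       :", 0.5 * erfc(threshold / (sigma * sqrt(2.0))))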

  • A Direct Relation between Bezier and Polynomial Representation

    Mohamed IMINE  Hiroshi NAGAHASHI  Takeshi AGUI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E79-D No:9
      Page(s):
    1279-1285

    In this paper, a new explicit transformation method between Bezier and polynomial representations is proposed. An expression is given to approximate (n + 1) Bezier control points by another set of (m + 1) control points, performing a simple and sufficiently good approximation without any additional transformation such as Chebyshev polynomials. A reduction criterion is then deduced to determine whether the given number of control points of a Bezier curve can be reduced without error on the curve. An error estimation is also given purely in terms of the control points. This method, unlike previous works, is more transparent because it is given in the form of explicit expressions. Finally, we discuss some applications of this method to curve fitting and to decreasing or increasing the number of control points.
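
    For orientation, the sketch below performs the standard expansion from Bezier (Bernstein) form to monomial coefficients, a_j = C(n, j) Σ_{i≤j} (-1)^(j-i) C(j, i) P_i, which is the kind of direct relation the paper makes explicit; the degree-reduction and error-estimation results themselves are not reproduced here.

      # Standard Bernstein-to-monomial expansion of a Bezier curve's ordinates.
      from math import comb

      def bezier_to_poly(points):
          """points: control points P_0..P_n (scalars); returns [a_0, ..., a_n]
          with B(t) = sum_j a_j * t**j."""
          n = len(points) - 1
          coeffs = []
          for j in range(n + 1):
              s = sum((-1) ** (j - i) * comb(j, i) * points[i] for i in range(j + 1))
              coeffs.append(comb(n, j) * s)
          return coeffs

      def eval_poly(coeffs, t):
          return sum(a * t**j for j, a in enumerate(coeffs))

      # Quick self-check against direct Bernstein evaluation.
      P = [0.0, 2.0, 1.0, 3.0]                 # a cubic Bezier's control ordinates
      a = bezier_to_poly(P)
      t = 0.3
      bern = sum(P[i] * comb(3, i) * t**i * (1 - t)**(3 - i) for i in range(4))
      print(a, eval_poly(a, t), bern)          # the last two values agree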

  • C1 Class Smooth Fuzzy Interpolation

    Shin NAKAMURA  Eiji UCHINO  Takeshi YAMAKAWA  

     
    LETTER-Systems and Control

      Vol:
    E79-A No:9
      Page(s):
    1512-1514

    C1-class smooth interpolation by fuzzy reasoning for a small data set is proposed. The drafting technique of a human expert is implemented using a set of fuzzy rules. The effectiveness of the present method is verified by computer simulations and by application to a practical interpolation problem in a power system.

  • Parallel Encoder and Decoder Architecture for Cyclic Codes

    Tomoko K. MATSUSHIMA  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol:
    E79-A No:9
      Page(s):
    1313-1323

    Recently, the high-speed data transmission techniques developed for communication systems have in turn necessitated the implementation of high-speed error correction circuits. Parallel processing has been found to be an effective method of speeding up operations, since the maximum achievable clock frequency is generally bounded by the physical constraints of the circuit. This paper presents a parallel encoder and decoder architecture that can be applied to both binary and nonbinary cyclic codes. The architecture allows H symbols to be processed in parallel, where H is an arbitrary integer, although its hardware complexity is not proportional to the number of parallel symbols H. As an example, we investigate the hardware complexity for a Reed-Solomon code and a binary BCH code. It is shown that both the hardware complexity and the delay of the parallel circuit are much less than those obtained by operating H conventional circuits in parallel. Although the only drawback of this parallel architecture is that the encoder's critical path length increases with H, the proposed architecture is more efficient than a setup using H conventional circuits for high-data-rate applications. It is also suggested that a parallel Reed-Solomon encoder and decoder that can keep up with optical transmission rates, i.e., several gigabits per second, could be implemented on one LSI chip using current CMOS technology.
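
    As a baseline for what the architecture parallelizes, the sketch below is a plain serial systematic encoder for a binary cyclic code (one symbol per step, polynomial division by g(x)); the paper's contribution is processing H such symbols per clock. The (7,4) code and bit-mask representation are illustrative assumptions.

      # Serial systematic encoding of a binary cyclic code via GF(2) polynomial
      # division; polynomials are represented as integer bit masks.
      def gf2_mod(dividend: int, divisor: int) -> int:
          """Remainder of polynomial division over GF(2)."""
          dl = divisor.bit_length()
          while dividend.bit_length() >= dl:
              dividend ^= divisor << (dividend.bit_length() - dl)
          return dividend

      def cyclic_encode(msg: int, gen: int) -> int:
          """Shift the message up by deg g(x) and append the division remainder
          as the parity bits (systematic form)."""
          r = gen.bit_length() - 1                      # number of parity bits
          shifted = msg << r
          return shifted | gf2_mod(shifted, gen)

      # (7,4) cyclic Hamming code with g(x) = x^3 + x + 1  ->  0b1011.
      g = 0b1011
      for m in range(16):
          assert gf2_mod(cyclic_encode(m, g), g) == 0   # every codeword divisible by g(x)
      print(format(cyclic_encode(0b1101, g), "07b"))    # -> 1101001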

  • Analysis of Nonuniform and Nonlinear Transmission Lines via Frequency-Domain Technique

    Yuichi TANJI  Yoshifumi NISHIO  Akio USHIDA  

     
    PAPER-Nonlinear Problems

      Vol:
    E79-A No:9
      Page(s):
    1486-1494

    There are many kinds of transmission lines, such as uniform, nonuniform, and nonlinear lines terminated by linear and/or nonlinear subnetworks. Nonuniform transmission lines are crucial in integrated circuits and printed circuit boards, because these circuits have complex geometries and layouts between multiple layers, and most of the transmission lines possess nonuniform characteristics. On the other hand, nonlinear transmission lines have attracted attention in the fields of communication and instrumentation. Here, we present a new numerical method for analyzing nonuniform and nonlinear transmission lines with linear and/or nonlinear terminations. The waveforms at any point along the lines are described by Fourier expansions. The partial differential equations representing the circuit are transformed into a set of ordinary differential equations at each frequency component; for nonlinear transmission lines, the perturbation technique is applied. The method is efficiently applied to weakly nonlinear transmission lines. Nonuniform transmission lines terminated by a nonlinear subnetwork are analyzed by a hybrid frequency-domain method. The stability for stiff circuits is improved by introducing a compensation element. The efficiency of our method is illustrated by some examples.

  • Viterbi Decoding Considering Synchronization Errors

    Takuo MORI  Hideki IMAI  

     
    PAPER-Coding Theory

      Vol:
    E79-A No:9
      Page(s):
    1324-1329

    Viterbi decoding is known as a decoding scheme that can realize maximum likelihood decoding. However, it cannot be continued without re-synchronization if even a single insertion/deletion error occurs in the channel. In this paper, we show that, under some conditions, the Levenshtein distance is a suitable metric for Viterbi decoding in a channel where not only symbol errors but also insertion/deletion errors occur, and we propose a variant of Viterbi decoding that takes insertion/deletion errors into account.
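
    For reference, the sketch below is the textbook dynamic-programming form of the Levenshtein distance proposed above as the metric: it charges unit cost for insertions and deletions as well as substitutions, which is what lets a decoder stay aligned when symbols are inserted or dropped. Its integration into the Viterbi recursion is not shown here.

      # Textbook row-by-row Levenshtein (edit) distance.
      def levenshtein(a: str, b: str) -> int:
          prev = list(range(len(b) + 1))                  # distance from "" to b[:j]
          for i, ca in enumerate(a, 1):
              cur = [i]                                   # distance from a[:i] to ""
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1,             # deletion of ca
                                 cur[j - 1] + 1,          # insertion of cb
                                 prev[j - 1] + (ca != cb) # substitution / match
                                 ))
              prev = cur
          return prev[-1]

      # A received word with one deleted symbol costs a single edit.
      print(levenshtein("1011001", "111001"))   # -> 1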

  • Estimation of Two-Dimensional DOA under a Distributed Source Model and Some Simulation Results

    Seong Ro LEE  Iickho SONG  Yong Up LEE  Taejoo CHANG  Hyung-Myung KIM  

     
    PAPER-General Fundamentals and Boundaries

      Vol:
    E79-A No:9
      Page(s):
    1475-1485

    Most research on the estimation of direction of arrival (DOA) has been performed under the assumption that the signal sources are point sources. In some real environments, however, signal source localization can be accomplished more adequately with distributed source models. When the signal sources are distributed over an area, well-known DOA estimation methods cannot be used directly, because they are established based on the point source assumption. In this paper, we propose a 3-dimensional distributed signal source model in which a source is represented by two parameters, the center angle and the degree of dispersion. We then address the estimation of the elevation and azimuth angles of distributed sources based on this parametric distributed source model in 3-dimensional space.

  • A Simple Construction of Codes for Identification via Channels under Average Error Criterion

    Tomohiko UYEMATSU  Kennya NAGANO  Eiji OKAMOTO  

     
    LETTER-Coding Theory

      Vol:
    E79-A No:9
      Page(s):
    1440-1443

    In 1989, Ahlswede and Dueck introduced a new formulation of Shannon theory called identification via channels. This paper presents a simple construction of codes for identification via channels when the probability of false identification is measured by its average. The proposed code achieves the identification capacity, and its construction does not require any knowledge of coding theory.

  • A Highly Parallel Systolic Tridiagonal Solver

    Takashi NARITOMI  Hirotomo ASO  

     
    PAPER-Computer Systems

      Vol:
    E79-D No:9
      Page(s):
    1241-1247

    Many numerical simulations of natural phenomena are formulated as large tridiagonal and block-tridiagonal linear systems. In this paper, an efficient parallel algorithm to solve a tridiagonal linear system is proposed. The algorithm, named the bi-recurrence algorithm, has an inherent parallelism that is suitable for parallel processing. Its time complexity is 8N - 4 for a tridiagonal linear system of order N, which is only slightly more than that of the Gaussian elimination algorithm. For a parallel implementation with two processors, the time complexity is 4N - 1. Based on the bi-recurrence algorithm, a VLSI-oriented tridiagonal solver is designed, which has the architecture of a 1-D linear systolic array with three processing cells. The systolic tridiagonal solver finds the solution of a tridiagonal linear system in 3N + 6 units of time. A highly parallel systolic tridiagonal solver is also presented. This solver is characterized by highly parallel computability, which originates in the divide-and-conquer strategy, and high cost performance, which originates in the systolic architecture. It finds the solution in 10(N/p) + 6p + 23 time units, where p is the number of partitions of the system.
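
    For comparison, the sketch below is the standard sequential Thomas algorithm (two sweeps, O(N)) for a tridiagonal system; the paper's bi-recurrence algorithm reorganizes this kind of recurrence so that it can proceed in parallel and map onto a systolic array. This is a baseline only, not the authors' method.

      # Standard Thomas algorithm: forward elimination followed by back substitution.
      def thomas(a, b, c, d):
          """Solve T x = d, where T has sub-diagonal a[1:], diagonal b, super-diagonal c[:-1]."""
          n = len(b)
          cp, dp = [0.0] * n, [0.0] * n
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                      # forward elimination
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = [0.0] * n
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):             # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # -x_{i-1} + 4 x_i - x_{i+1} = 2  (a diagonally dominant toy system)
      n = 6
      a = [0.0] + [-1.0] * (n - 1)
      c = [-1.0] * (n - 1) + [0.0]
      b = [4.0] * n
      d = [2.0] * n
      print(thomas(a, b, c, d))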

  • Comparing Failure Times via Diffusion Models and Likelihood Ratio Ordering

    Antonio Di CRESCENZO  Luigi M. RICCIARDI  

     
    PAPER-Stochastic Process/Learning

      Vol:
    E79-A No:9
      Page(s):
    1429-1432

    For two devices whose quality is described by non-negative, one-dimensional, time-homogeneous diffusion processes of the Wiener and Ornstein-Uhlenbeck types, sufficient conditions are given such that their failure times, modeled as first-passage times through the zero state, are ordered according to the likelihood ratio ordering.
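
    For reference, the likelihood ratio ordering used here can be stated as follows (a standard definition, written with f_{T_i} for the failure-time densities); it implies in particular the usual stochastic ordering of the failure times.

      T_1 \preceq_{\mathrm{lr}} T_2
          \iff \ t \mapsto \frac{f_{T_2}(t)}{f_{T_1}(t)} \ \text{is non-decreasing on } [0,\infty),
      \qquad
      T_1 \preceq_{\mathrm{lr}} T_2 \ \implies \ \Pr(T_1 > t) \le \Pr(T_2 > t) \ \text{for all } t \ge 0.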

  • Compression Coding Using an Optical Model for a Pair of Range and Grey-Scale Images of 3D Objects

    Kefei WANG  Ryuji KOHNO  

     
    PAPER-Source Coding/Security

      Vol:
    E79-A No:9
      Page(s):
    1330-1337

    When an image of a 3D object is transmitted or recorded, its range image as well as its grey-scale image is required. In this paper, we propose a coding method for efficient compression of a pair of range and grey-scale images of 3D objects. We use the Lambertian reflection optical model to model the relationship between a 3D object's shape and its brightness. Good illuminant direction estimation leads to good grey-scale image generation and, in turn, affects the compression results. A method for estimating the illuminant direction and composite albedo from grey-scale image statistics is presented, and we propose an approach for estimating the slant angle of illumination, based on the optical model, from a pair of range and grey-scale images. Estimation results show that our approach performs better. Using the estimated illuminant direction and composite albedo, a synthetic grey-scale image is generated. For comparison, a reference coding method is used in which the range and grey-scale images are compressed separately. We propose an efficient compression coding method for a pair of range and grey-scale images that uses the correlation between the range and grey-scale images and compresses them together. We also evaluate the coding method on a workstation and show numerical results.
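
    A minimal sketch of the Lambertian link between a range image and its grey-scale image that the coding method exploits: surface normals taken from the range gradients are shaded with a single distant light source as I = albedo · max(0, n · s). The light direction, albedo, and gradient operator below are illustrative assumptions, not the estimated parameters of the paper.

      # Generate a synthetic grey-scale image from a range (depth) image with a
      # simple Lambertian model; normals come from the depth gradients.
      import numpy as np

      def lambertian_shading(z, light, albedo=1.0):
          """z: range (depth) image; light: unit illuminant direction (3-vector)."""
          p, q = np.gradient(z)                       # dz/dy, dz/dx
          normals = np.dstack((-q, -p, np.ones_like(z)))
          normals /= np.linalg.norm(normals, axis=2, keepdims=True)
          return albedo * np.clip(normals @ light, 0.0, None)

      # A hemispherical bump lit from the upper-left.
      y, x = np.mgrid[-1:1:64j, -1:1:64j]
      z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
      s = np.array([-0.4, -0.4, 0.8])
      s /= np.linalg.norm(s)
      synthetic = lambertian_shading(z, s)
      print(synthetic.shape, float(synthetic.max()))  # a synthetic grey-scale image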
