
Author Search Result

[Author] Hiroyoshi MORITA (23 hits)

Results 1-20 of 23

  • On Increasing the Number of Users in (t, n) Threshold Secret Sharing Schemes

    Todorka ALEXANDROVA, Hiroyoshi MORITA

    PAPER-Cryptography and Information Security
    Vol: E91-A No:8  Page(s): 2138-2150

    Constructing ideal (t,n) threshold secret sharing schemes imposes limitations on the maximum number of users who are able to join the scheme. We aim to remove these limitations by reducing the information rate of the constructed threshold secret sharing schemes. In this paper we propose recursive construction algorithms for (t,n) threshold secret sharing schemes based on the generalized vector space construction. Using these algorithms, we are able to construct a (t,n) threshold secret sharing scheme for any arbitrary n.
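
    As a point of reference, the sketch below implements a standard Shamir-style ideal (t, n) scheme over a prime field GF(p), in which at most p - 1 users can join; this is the kind of limitation on the maximum number of users that the recursive construction above aims to remove. It is a minimal illustration only, not the paper's generalized vector space construction, and the field size p = 2087 is an arbitrary assumption.

```python
import random

P = 2087  # arbitrary prime; in this toy scheme at most P - 1 users can join


def make_shares(secret, t, n, p=P):
    """Split `secret` into n shares so that any t of them recover it
    (standard Shamir construction over GF(p), for illustration only)."""
    assert 0 <= secret < p and 1 < t <= n <= p - 1
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]

    def evaluate(x):
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

    return [(x, evaluate(x)) for x in range(1, n + 1)]


def reconstruct(shares, p=P):
    """Recover the secret from any t shares by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret


shares = make_shares(1234, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 1234
```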

  • A Geographic Location-Based Distributed Routing System

    Kumiko KOBAYASHI, I Gusti Bagus Baskara NUGRAHA, Hiroyoshi MORITA

    PAPER-Network System
    Vol: E96-B No:1  Page(s): 88-98

    In this paper, we propose a geographic location-based distributed routing (GDR) system. The GDR system provides information lookup based on latitude and longitude coordinates. Each node of the GDR system utilizes the coordinates as an identifier (ID) and manages an overlay routing table. An ID is generated to reflect the geographical location without using a Space Filling Curve (SFC). The ID is in Cartesian format (x, y), which represents the longitude x and latitude y. In a system with N nodes, each node has a routing table of size log N and a search is possible in O(log N). We evaluate the routing performance of GDR and of other systems based on Chord, Kademlia, and CAN. We show that, both when the ID is in Cartesian format and when the ID is generated using an SFC, GDR, Chord, and Kademlia have the same mean and variance of the path length, while the mean and variance of the relay length of GDR are smaller than those of Chord and Kademlia. Furthermore, while GDR and CAN have the same mean and variance of the relay length, the mean and variance of the path length of GDR are smaller than those of CAN.
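
    As a rough illustration of coordinate-based lookup with Cartesian IDs, the sketch below picks the next hop from a node's overlay routing table as the entry whose (x, y) ID is closest to the target coordinates. The greedy rule and the sample coordinates are assumptions for illustration; the paper's routing table construction and forwarding rule are not reproduced here.

```python
import math


def next_hop(routing_table, target):
    """Greedy geographic forwarding over Cartesian (longitude, latitude) IDs:
    forward towards the routing-table entry closest to the target ID."""
    return min(routing_table, key=lambda node_id: math.dist(node_id, target))


# Hypothetical routing table of (x, y) = (longitude, latitude) node IDs.
table = [(139.69, 35.69), (2.35, 48.86), (-73.99, 40.73), (151.21, -33.87)]
print(next_hop(table, (135.50, 34.69)))  # forwards towards (139.69, 35.69)
```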

  • Integer Codes Correcting Single Errors of Specific Types (±e1,±e2,...,±es)

    Hristo KOSTADINOV, Hiroyoshi MORITA, Nikolai MANEV

    PAPER-Communication Theory and Signals
    Vol: E86-A No:7  Page(s): 1843-1849

    In this paper, we investigate the problem of how to construct integer codes capable of correcting any single error in the set {±1,±t,...,±t^(k-1)}, and generalize our results to obtain (±e1,±e2,...,±es) single error correctable codes, where the ei are distinct elements of A. Moreover, we give the exact form of the check matrix for most of the classes considered in this paper.
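
    A rough sketch of the underlying single-error correction mechanism follows: codewords satisfy a weighted checksum over Z_m, and an error of value e in position i shifts the checksum by e*a_i, so the error is identified whenever these products are nonzero and pairwise distinct. The check vector, modulus, and error set in the example are arbitrary assumptions and do not reproduce the check matrices constructed in the paper.

```python
def syndrome(word, a, m):
    """Weighted checksum over Z_m; codewords are exactly the words with syndrome 0."""
    return sum(ai * wi for ai, wi in zip(a, word)) % m


def correct_single(word, a, m, error_values):
    """Correct one error whose value lies in `error_values`, assuming the
    products e * a_i mod m are nonzero and pairwise distinct (which is what
    a good check vector arranges). Toy illustration of the principle only."""
    s = syndrome(word, a, m)
    if s == 0:
        return list(word)                        # no detectable error
    position, value = {(e * ai) % m: (i, e)
                       for i, ai in enumerate(a)
                       for e in error_values}[s]
    corrected = list(word)
    corrected[position] = (corrected[position] - value) % m
    return corrected


a, m, errors = (1, 2, 3, 4), 11, (1, -1)         # assumed toy parameters
received = [3, 5, 0, 0]                          # codeword (3, 4, 0, 0) hit by a +1 error
print(correct_single(received, a, m, errors))    # -> [3, 4, 0, 0]
```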

  • Transcoding-after-Smoothing System for VBR MPEG Video Streaming

    I Gusti Bagus Baskara NUGRAHA, Hiroyoshi MORITA

    PAPER-Image Processing and Video Processing
    Vol: E92-D No:2  Page(s): 298-309

    Delivering a video streaming service over the Internet faces several challenges. Two of them are the heterogeneity of network capacity and the variability of the video data rate. The capacity of network segments is constrained, while the rate of the video data to be transmitted is highly variable in order to maintain near-constant image quality. Therefore, to send variable bit rate (VBR) video data over a capacity-constrained network, its bit rate should be adjusted. In this paper, a system to adjust the transmission bit rate of VBR MPEG video data called Transcoding-after-Smoothing (TaS), a combination of bit rate transcoding and a bit rate smoothing algorithm, is proposed. The system smooths out the transmission rate of the video data while also transcoding some video frames when necessary in order to keep the transmission bit rate below a certain threshold value. Two kinds of TaS methods are proposed. One method has no transcoding preference, while the other uses a frame-type preference in which an intra-coded frame is the last to be transcoded. These methods are implemented in our video server, where VBR video data is accessed by a client. Our experimental results show that the first TaS method yields a significant reduction in the number of transcoded frames compared with the second TaS method and conventional frame-level transcoding, whereas the second method performs better in minimizing the quality distortion.
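
    The sketch below is only a schematic of the Transcoding-after-Smoothing idea: the transmission rate is smoothed over a sliding window of recent frames, and a frame is transcoded (its size scaled down) when the smoothed rate would still exceed the threshold. The window length, scaling ratio, and threshold are hypothetical parameters; the paper's smoothing schedule, transcoder, and frame-type preference are not reproduced.

```python
from collections import deque


def transcode_after_smoothing(frame_sizes, threshold, window=30, ratio=0.5):
    """Schematic TaS loop: smooth the per-frame transmission rate over a
    sliding window and transcode (scale down) a frame only when smoothing
    alone would leave the rate above `threshold`. Returns the adjusted
    frame sizes and the number of transcoded frames."""
    recent = deque(maxlen=window)
    adjusted, transcoded = [], 0
    for size in frame_sizes:
        smoothed = (sum(recent) + size) / (len(recent) + 1)
        if smoothed > threshold:          # smoothing alone is not enough
            size = int(size * ratio)      # transcode this frame
            transcoded += 1
        recent.append(size)
        adjusted.append(size)
    return adjusted, transcoded


# Hypothetical VBR trace (bytes per frame) and rate threshold.
sizes, count = transcode_after_smoothing([8000, 9000, 30000, 26000, 7000], 12000)
print(sizes, count)
```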

  • Backward-Compatible Forward Error Correction of Burst Errors and Erasures for 10BASE-T1S

    Gergely HUSZAK, Hiroyoshi MORITA, George ZIMMERMAN

    PAPER-Network
    Publicized: 2021/06/23  Vol: E104-B No:12  Page(s): 1524-1538

    IEEE P802.3cg established a new pair of Ethernet physical layer devices (PHYs), one of which, the short-reach 10BASE-T1S, uses 4B/5B mapping over Differential Manchester Encoding to maintain a data rate of 10 Mb/s at the MAC/PLS interface while providing in-band signaling between transmitter and receivers. However, 10BASE-T1S has no built-in error correcting capability. In response to emerging building, industrial, and transportation requirements, this paper outlines research that leads to the possibility of establishing low-complexity, backward-compatible Forward Error Correction with per-frame configurable, guaranteed burst error and erasure correcting capabilities over any 10BASE-T1S Ethernet network segment. The proposed technique combines a specialized systematic Reed-Solomon code with a novel three-tier technique to avoid the appearance of certain inadmissible codeword symbols at the output of the encoder. In this way, the proposed technique enables error and erasure correction while maintaining backward compatibility with the current version of the standard.

  • On (±1) Error Correctable Integer Codes

    Hristo KOSTADINOV, Hiroyoshi MORITA, Nikolai MANEV

    LETTER-Information Theory
    Vol: E93-A No:12  Page(s): 2758-2761

    Integer codes correct errors of a given type, which means that for a given communication channel and modulator we can choose the type of errors that are most common and then construct an integer code capable of correcting them. A new general construction of single (±1) error correctable integer codes is presented. A comparison between single and multiple (±1) error correctable integer codes over the AWGN channel using a QAM scheme is also presented.

  • On Multiple Smoothed Transmission of MPEG Video Streams

    Dongzhao SUN, Mikihiko NISHIARA, Hiroyoshi MORITA

    PAPER-Image Coding
    Vol: E88-A No:10  Page(s): 2844-2851

    A rate-splitting algorithm is presented for a multiple video transmission system that transfers the aggregation (or statistical multiplexing) of multiple video streams to multiple clients so that each client can receive the requested video stream with reliable fidelity. Computer simulations of the transmission of a set of 128 MPEG compressed video streams show that the proposed algorithm alleviates the variability of the aggregate video transmission compared with a scheme that smooths each video individually using the traditional online smoothing algorithm. Moreover, the proposed algorithm is twice as fast as the traditional one.

  • On the Construction of an Antidictionary with Linear Complexity Using the Suffix Tree

    Takahiro OTA, Hiroyoshi MORITA

    PAPER-Algorithms and Data Structures
    Vol: E90-A No:11  Page(s): 2533-2539

    The antidictionary of a string is the set of all words of minimal length that never appear in this string. Antidictionaries are particularly useful for source coding. We present a fast and memory-efficient algorithm to construct an antidictionary using a suffix tree. It is proved that the complexity of this algorithm is linear in space and time, and its effectiveness is demonstrated by simulation results.
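
    For a concrete picture of the object being constructed, the brute-force sketch below lists the minimal forbidden words (the antidictionary) of a short binary string: words that never occur in the string although both their longest proper prefix and longest proper suffix do. It is a naive quadratic illustration, not the paper's linear-time suffix-tree algorithm.

```python
def antidictionary(x, alphabet=None):
    """Brute-force antidictionary of x: every word w that is not a factor of x
    while both w[:-1] and w[1:] are factors (minimal forbidden words)."""
    alphabet = alphabet or sorted(set(x))
    # All factors (substrings) of x, including the empty word.
    factors = {x[i:j] for i in range(len(x)) for j in range(i, len(x) + 1)}
    mfws = []
    for length in range(1, len(x) + 2):
        for prefix in (f for f in factors if len(f) == length - 1):
            for c in alphabet:
                w = prefix + c
                if w not in factors and w[1:] in factors:
                    mfws.append(w)
    return sorted(mfws)


print(antidictionary("11010001"))
```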

  • On the Irreducibility of Certain Shifts of Finite Type

    Tetsuya KOBAYASHI, Akiko MANADA, Takahiro OTA, Hiroyoshi MORITA

    PAPER-Sequence
    Vol: E96-A No:12  Page(s): 2415-2421

    A shift of finite type (SFT) is the set of all bi-infinite sequences over some alphabet that is characterized by a finite set of forbidden words. It is a typical example of a sofic shift and has been used in the media storage area, such as CDs and DVDs. The study of sofic shifts is based on graph theory, and the irreducibility of shifts is an important property to consider in this study. In this paper, we provide some sufficient conditions for an SFT to be irreducible, from the perspective of the antidictionary of a word and the number of forbidden words. We also present a necessary and sufficient condition for an SFT to be irreducible when the number of forbidden words is one less than the alphabet size.

  • Arithmetic Coding for Countable Alphabet Sources with Finite Precision

    Mikihiko NISHIARA, Hiroyoshi MORITA

    PAPER-Information Theory
    Vol: E84-A No:10  Page(s): 2576-2582

    An improved arithmetic coding scheme that provides an encoder with finite calculation precision for source sequences over a countable alphabet is presented. Conventional arithmetic coding theoretically requires infinite precision for real variables. However, any algorithm implemented on a computer has finite precision. This implies that conventional arithmetic codes can only encode sequences over a finite alphabet. The improved arithmetic coding presented here has a computational complexity that is roughly proportional to the length of the source sequence for a given source.

  • On-Line Electrocardiogram Lossless Compression Using Antidictionary Codes for a Finite Alphabet

    Takahiro OTA, Hiroyoshi MORITA

    PAPER-Biological Engineering
    Vol: E93-D No:12  Page(s): 3384-3391

    An antidictionary is particularly useful for data compression, and on-line electrocardiogram (ECG) lossless compression algorithms using antidictionaries have been proposed. They work in real time with constant memory and give better compression ratios than traditional lossless data compression algorithms, but they only deal with ECG data over a binary alphabet. This paper proposes on-line ECG lossless compression for data over a finite alphabet. The proposed algorithm not only gives better compression ratios than those algorithms but also uses less computational space. Moreover, the proposed algorithm works in real time. Its effectiveness is demonstrated by simulation results.

  • Multicast Communication for Video Broadcasting Service over IPv4 Network Using IP Option

    I Gusti Bagus Baskara NUGRAHA, Sumiya MARUGAMI, Mikihiko NISHIARA, Hiroyoshi MORITA

    PAPER-Internet
    Vol: E89-B No:5  Page(s): 1570-1580

    In this paper, we propose a protocol for multicast communication called the Multicast Datagram Transfer Protocol (MDTP) to provide multicast for video broadcasting services on the Internet. MDTP is a one-to-many multicast communication protocol constructed on top of the IPv4 unicast protocol by utilizing the IP Router Alert Option, and it uses unicast addressing and a unicast routing protocol. A mechanism is presented to allow a router to remove identical video streams, to duplicate a video stream, and to forward each copy of the duplicated video stream to its destinations. Ordinary IP routers that do not support MDTP treat MDTP packets as normal unicast packets. Hence, gradual deployment is possible without tunneling techniques. With a delegation mechanism, an MDTP router is also able to handle requests from clients and serve the requested video stream. The simulation results show that the average bandwidth usage of MDTP is close to that of IP multicast. MDTP also has greater efficiency than XCAST, and its efficiency becomes significant for a large number of clients.

  • FOREWORD

    Hiroyoshi MORITA

    FOREWORD
    Vol: E89-A No:10  Page(s): 2457-2458

  • Real-Time and Memory-Efficient Arrhythmia Detection in ECG Monitors Using Antidictionary Coding

    Takahiro OTA, Hiroyoshi MORITA, Adriaan J. de Lind van WIJNGAARDEN

    PAPER-Source Coding
    Vol: E96-A No:12  Page(s): 2343-2350

    This paper presents a real-time, memory-efficient arrhythmia detection system with binary classification that uses antidictionary coding for the analysis and classification of electrocardiograms (ECGs). The measured ECG signals are encoded using a lossless antidictionary encoder, and the system subsequently uses the compression rate to distinguish between normal beats and arrhythmia. An automated training procedure is used to construct the automatons, which are probabilistic models used to compress the ECG signals, and to determine the threshold value for detecting arrhythmia. Real-time computer simulations with samples from the MIT-BIH arrhythmia database show that the average sensitivity and specificity of the proposed system are 97.8% and 96.4%, respectively, for premature ventricular contraction detection. The automatons are constructed from training data and occupy only 11 kilobytes on average. The low complexity and low memory requirements make the system particularly suitable for implementation in portable ECG monitors.
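
    The sketch below illustrates the detection principle only: compress each beat losslessly and flag it when its compression rate crosses a threshold learned from training data. Here zlib stands in for the paper's antidictionary coder, the assumption that poorly compressing beats are the ones flagged is a simplification, and the margin is an arbitrary parameter.

```python
import zlib


def compression_rate(beat):
    """Compressed size divided by original size for one quantised beat
    (a sequence of byte values); zlib stands in for antidictionary coding."""
    data = bytes(beat)
    return len(zlib.compress(data, 9)) / len(data)


def train_threshold(normal_beats, margin=0.05):
    """Threshold = mean compression rate of normal training beats plus a
    hypothetical safety margin."""
    rates = [compression_rate(b) for b in normal_beats]
    return sum(rates) / len(rates) + margin


def is_arrhythmia(beat, threshold):
    """Binary classification: beats that compress noticeably worse than the
    trained model of normal rhythm are flagged."""
    return compression_rate(beat) > threshold


# Hypothetical usage with synthetic "beats" (real quantised ECG samples would be used).
normal_beats = [[100 + (i % 8) for i in range(200)] for _ in range(10)]
threshold = train_threshold(normal_beats)
print(is_arrhythmia([100 + (i * 37) % 90 for i in range(200)], threshold))
```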

  • Data Compression of ECG Based on the Edit Distance Algorithms

    Hiroyoshi MORITA, Kingo KOBAYASHI

    PAPER
    Vol: E76-D No:12  Page(s): 1443-1453

    A method for the compression of ECG data is presented. The method is based on the edit distance algorithm developed for file comparison problems. The edit distance between two sequences of symbols is defined as the number of edit operations required to transform one sequence into the other. We adopt the edit distance algorithm to obtain a list of edit operations, called an edit script, which transforms a reference pulse into a pulse selected from the ECG data. If the decoder knows the same reference, it can reproduce the original pulse from the edit script alone. The edit script is expected to be smaller than the original pulse when the two pulses look alike, and thereby we can reduce the amount of space needed to store the data. Applying the proposed scheme to raw ECG data, we achieved a high compression ratio of about 14:1 without losing the significant features of the signals.
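
    The sketch below shows the encode/decode round trip of the edit-script idea, with Python's difflib standing in for the file-comparison edit distance algorithm: the encoder keeps only the opcodes that change the shared reference pulse, and the decoder replays them against the same reference to reproduce the pulse exactly. The sample values are made up.

```python
from difflib import SequenceMatcher


def edit_script(reference, pulse):
    """Edit script turning `reference` into `pulse`: only the opcodes that
    change something, together with the replacement samples."""
    ops = SequenceMatcher(None, reference, pulse, autojunk=False).get_opcodes()
    return [(tag, i1, i2, pulse[j1:j2])
            for tag, i1, i2, j1, j2 in ops if tag != 'equal']


def apply_script(reference, script):
    """Decoder: rebuild the pulse from the shared reference and the script."""
    out, cursor = [], 0
    for tag, i1, i2, payload in script:
        out.extend(reference[cursor:i1])   # copy the unchanged run
        out.extend(payload)                # inserted or replacing samples
        cursor = i2                        # skip deleted/replaced samples
    out.extend(reference[cursor:])
    return out


reference = [0, 1, 5, 30, 80, 30, 5, 1, 0]   # made-up reference pulse
pulse     = [0, 1, 6, 31, 80, 30, 5, 1, 0]   # made-up measured pulse
script = edit_script(reference, pulse)
assert apply_script(reference, script) == pulse
print(script)                                # e.g. a single small 'replace' opcode
```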

  • A Compact Tree Representation of an Antidictionary

    Takahiro OTA, Hiroyoshi MORITA

    PAPER-Information Theory
    Vol: E100-A No:9  Page(s): 1973-1984

    In both theoretical analysis and practical use of an antidictionary coding algorithm, an important problem is how to encode the antidictionary of an input source. This paper presents a compact tree representation of an antidictionary built from a circular string for an input source. We use a technique for encoding a tree in compression via substring enumeration to encode the tree representation of the antidictionary. Moreover, we propose a new two-pass universal antidictionary coding algorithm by means of the proposed tree representation. We prove that the proposed algorithm is asymptotically optimal for a stationary ergodic source.

  • Soft Decoding of Integer Codes and Their Application to Coded Modulation

    Hristo KOSTADINOV, Hiroyoshi MORITA, Noboru IIJIMA, A. J. HAN VINCK, Nikolai MANEV

    PAPER-Information Theory
    Vol: E93-A No:7  Page(s): 1363-1370

    Integer codes are very flexible and can be applied in different modulation schemes. A soft decoding algorithm for integer codes is introduced. A comparison of symbol error probability (SEP) versus signal-to-noise ratio (SNR) between soft and hard decoding using integer coded modulation shows that we can obtain a coding gain of at least 2 dB. We also compare our results with trellis coded modulation (TCM) because of their similar decoding schemes and complexity.

  • Codes over the Ring of Integers Modulo m

    A. J. Han VINCK, Hiroyoshi MORITA

    PAPER-Coding Theory
    Vol: E81-A No:10  Page(s): 2013-2018

    We discuss the concept of coding over the ring of integers modulo m. This method of coding finds its origin in the early work of Varshamov and Tenengolz. We first give a definition of the codes, followed by some general properties. We then derive specific code constructions and show computer-search results. We conclude with applications to 8-phase modulation and peak-shift correction in magnetic recording systems.
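
    As a toy version of the kind of construction and computer search mentioned above, the sketch below enumerates all length-n words over Z_q whose Varshamov-Tenengolts-style weighted checksum falls in a given residue class mod m. The parameters are arbitrary and the search is exhaustive; it does not reproduce the paper's specific constructions.

```python
from itertools import product


def weighted_checksum_code(n, q, m, a=0):
    """All length-n words x over Z_q with sum(i * x_i) congruent to a mod m,
    a Varshamov-Tenengolts-flavoured construction (toy brute-force search)."""
    return [x for x in product(range(q), repeat=n)
            if sum(i * xi for i, xi in enumerate(x, start=1)) % m == a]


# Tiny computer search: code size for each residue class a (n = 4, binary, m = 5).
for a in range(5):
    print(a, len(weighted_checksum_code(4, 2, 5, a)))
```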

  • On Analysis of Acceptable Region of a Statistical Multiplexer Based on Fractional Brownian Traffic Models

    Suhono HARSO SUPANGKAT, Shuji KAWASAKI, Hiroyoshi MORITA

    PAPER-Modeling and Simulation
    Vol: E81-A No:2  Page(s): 295-303

    We consider statistical multiplexing for various types of input data with different statistics in an integrated multimedia system such as an ATM network. The system is assumed to have a constant service rate and a finite buffer. The bit rate of each data input is variable and is modeled by a fractional Brownian motion process. Under a quality-of-service criterion, we obtain an acceptable region for statistical multiplexing. We introduce a new method of investigating the acceptable region of a statistical multiplexer. The results show that transmitting multitype input processes increases the multiplexing gain.

  • Compression by Substring Enumeration Using Sorted Contingency Tables

    Takahiro OTA, Hiroyoshi MORITA, Akiko MANADA

    PAPER-Information Theory
    Vol: E103-A No:6  Page(s): 829-835

    This paper proposes two variants of improved Compression by Substring Enumeration (CSE) with a finite alphabet. In previous studies on CSE, an encoder utilizes inequalities that bound the number of occurrences of a substring or a minimal forbidden word (MFW) to be encoded. The inequalities are derived from a contingency table containing the numbers of occurrences of substrings and MFWs. The codeword length of a substring or an MFW grows with the difference between the upper and lower bounds deduced from the inequalities; however, the lower bound is not tight. We therefore derive a new, tight lower bound based on the contingency table and propose a new CSE algorithm using the new inequality. We also propose a new encoding order for substrings and MFWs based on a sorted contingency table, in which both the row and column marginal totals are sorted in descending order instead of the lexicographical order used in previous studies, and we propose a new CSE algorithm that is the first to use this encoding order. Experimental results show that the compression ratios for all files of the Calgary corpus under the proposed algorithms are better than those of a previous study on CSE with a finite alphabet. Moreover, the compression ratios under the second proposed CSE algorithm are better than or equal to those of a well-known compressor for 11 of the 14 files in the corpus.
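
    For a concrete picture of the table the bounds come from, the sketch below counts, for a chosen substring w of a circular binary string, the occurrences of a w b for each pair of flanking symbols; the row and column marginals are then the counts of a w and w b. This naive counting is for illustration only and does not reproduce the paper's encoder or the sorted encoding order.

```python
def contingency_table(x, w, alphabet="01"):
    """Counts of a + w + b in the circular string x, for every pair of
    flanking symbols (a, b); the marginals are the counts of a + w and w + b."""
    doubled = x + x                              # makes circular spans easy to slice
    table = {(a, b): 0 for a in alphabet for b in alphabet}
    for i in range(len(x)):                      # each circular position exactly once
        window = doubled[i:i + len(w) + 2]
        if window[1:-1] == w:
            table[(window[0], window[-1])] += 1
    return table


print(contingency_table("11010001", "0"))
```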
