
Keyword Search Result

[Keyword] data processing (10 hits)

Hits 1-10
  • How to Decide Window-Sizes of Smoothing Methods: A Goodness of Fit Criterion for Smoothing Oscillation Data

    Kenichi SHIBATA  Takashi AMEMIYA  

     
    BRIEF PAPER

    Vol: E102-C No:2  Page(s): 143-146

    Organic electronic devices are applicable to implantable sensors. Noise in the acquired data can be removed by smoothing with sliding windows. We developed a new criterion for deciding the window size, based on smoothness and similarity (SSC); the resulting smoothed curve fits the raw data well and is sufficiently smooth.
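
    As a rough illustration, a criterion of this kind can be sketched as a weighted sum of a residual (similarity) term and a second-difference (smoothness) term; the weighting and the exact terms below are assumptions for illustration, not the authors' published SSC.

    ```python
    # Hypothetical sketch of a smoothness-and-similarity window-size criterion;
    # the alpha weighting and both terms are illustrative assumptions.
    import numpy as np

    def moving_average(y, w):
        """Smooth y with a centered sliding window of width w."""
        return np.convolve(y, np.ones(w) / w, mode="same")

    def ssc_score(y, w, alpha=0.5):
        """Lower is better: fit to the raw data plus curvature of the smooth."""
        s = moving_average(y, w)
        similarity = np.mean((y - s) ** 2)        # how well the curve fits
        smoothness = np.mean(np.diff(s, 2) ** 2)  # how smooth the curve is
        return alpha * similarity + (1 - alpha) * smoothness

    def best_window(y, candidates=range(3, 51, 2)):
        return min(candidates, key=lambda w: ssc_score(y, w))

    t = np.linspace(0, 4 * np.pi, 400)
    y = np.sin(t) + 0.3 * np.random.randn(t.size)  # noisy oscillation data
    print("selected window:", best_window(y))
    ```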

  • DiSC: A Distributed In-Storage Computing Platform Using Cost-Effective Hardware Devices

    Jaehwan LEE  Joohwan KIM  Ji Sun SHIN  

     
    LETTER-Computer System

    Publicized: 2017/08/23  Vol: E100-D No:12  Page(s): 3018-3021

    The ability to efficiently process exponentially increasing data remains a challenging issue for computer platforms. In legacy computing platforms, large amounts of data can cause performance bottlenecks at the I/O interfaces between CPUs and storage devices. To overcome this problem, the in-storage computing (ISC) technique has been introduced, which offloads some of the computation from the CPUs to the storage devices. In this paper, we propose DiSC, a distributed in-storage computing platform using cost-effective hardware. First, we designed a general-purpose ISC device, called a DiSC endpoint, by combining an inexpensive single-board computer (SBC) with a hard disk. Second, a Mesos-based resource manager is adapted to the DiSC platform to schedule DiSC endpoint tasks. To draw comparisons with a general CPU-based platform, a DiSC testbed was constructed and experiments were carried out using essential applications. The experimental results show that DiSC attains cost-efficient performance advantages over a desktop, particularly for searching and filtering workloads.
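
    The core ISC idea, shipping the filter to the data rather than the data to the CPU, can be sketched as below; the function names and the local-file stand-in for a remote endpoint task are illustrative assumptions, since the real platform schedules tasks on SBC endpoints via Mesos.

    ```python
    # Conceptual sketch of offloaded filtering in the spirit of DiSC; in the
    # real platform each endpoint is a remote SBC task, not a local file.
    def endpoint_filter(path, predicate):
        """Runs storage-side: stream records, return only the matches."""
        with open(path) as f:
            return [line for line in f if predicate(line)]

    def host_query(endpoint_paths, predicate):
        """Runs host-side: fan out the predicate, gather filtered results."""
        results = []
        for path in endpoint_paths:       # remote, Mesos-scheduled in DiSC
            results.extend(endpoint_filter(path, predicate))
        return results

    # e.g. host_query(["/mnt/disc0/records.log"], lambda r: "ERROR" in r)
    ```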

  • A New Efficient Resource Management Framework for Iterative MapReduce Processing in Large-Scale Data Analysis

    Seungtae HONG  Kyongseok PARK  Chae-Deok LIM  Jae-Woo CHANG  

    This paper was cancelled on September 5, 2019 due to violation of the duplicate submission policy of IEICE Transactions on Information and Systems.
     
    PAPER

    Publicized: 2017/01/17  Vol: E100-D No:4  Page(s): 704-717
    Errata [Uploaded on March 1, 2018]

    To analyze large-scale data efficiently, studies on Hadoop, one of the most popular MapReduce frameworks, have been actively conducted. Meanwhile, most large-scale data analysis applications, e.g., data clustering, need to execute the same map and reduce functions repeatedly. However, Hadoop cannot provide optimal performance for iterative MapReduce jobs because it derives a result by executing a single phase of map and reduce functions. To solve this problem, in this paper we propose a new, efficient resource management framework for iterative MapReduce processing in large-scale data analysis. First, we design an iterative job state machine for managing iterative MapReduce jobs. Second, we propose an invariant data caching mechanism that reduces the I/O costs of data accesses. Third, we propose an iterative resource management technique for efficiently managing the resources of a Hadoop cluster. Fourth, we devise a stop-condition check mechanism that prevents unnecessary computation. Finally, we show the performance superiority of the proposed framework by comparing it with existing frameworks.
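
    A minimal sketch of the loop these mechanisms serve, assuming pure-Python map/reduce callables: the invariant input is cached once, only the map and reduce phases rerun, and a stop-condition check ends the iteration early. The function shape is illustrative, not the framework's API.

    ```python
    # Sketch of iterative MapReduce with invariant-data caching and a
    # stop-condition check; all names are illustrative.
    def iterative_mapreduce(invariant_data, state, map_fn, reduce_fn,
                            converged, max_iters=100):
        cache = list(invariant_data)      # invariant-data caching: read once
        for _ in range(max_iters):
            pairs = [kv for rec in cache for kv in map_fn(rec, state)]
            groups = {}
            for k, v in pairs:            # shuffle: group values by key
                groups.setdefault(k, []).append(v)
            new_state = {k: reduce_fn(k, vs) for k, vs in groups.items()}
            if converged(state, new_state):   # stop-condition check
                return new_state
            state = new_state
        return state
    ```

    k-means is the usual fit for this shape: map_fn assigns each point to its nearest centroid, reduce_fn averages each cluster, and converged compares successive centroid sets.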

  • Towards Trusted Result Verification in Mass Data Processing Service

    Yan DING  Huaimin WANG  Peichang SHI  Hongyi FU  Xinhai XU  

     
    PAPER

    Vol: E97-B No:1  Page(s): 19-28

    Computation integrity is difficult to verify when mass data processing is outsourced. Current integrity protection mechanisms and policies verify results generated by participating nodes within the computing environment of the service provider (SP), which cannot prevent subjective cheating by SPs. This paper provides an analysis and model of computation integrity for mass data processing services. A third-party sampling-result verification method, named TS-TRV, is proposed to prevent lazy cheating by SPs. TS-TRV is a general solution for verifying the intermediate results of common MapReduce jobs; it utilizes the powerful computing capability of the SP to support verification computing, thus lessening the computing and transmission burdens on the verifier. Theoretical analysis indicates that TS-TRV is effective at detecting incorrect results, with no false positives and almost no false negatives, while ensuring the authenticity of sampling. Extensive experiments show that the cheating detection rate of TS-TRV exceeds 99% with only a few samples, that the computation overhead falls mainly on the SP, and that the network transmission overhead is only O(log N).
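
    The sampling idea can be illustrated as follows, with the verifier recomputing a random subset of the claimed intermediate results; the recompute-and-compare shape and the miss-probability arithmetic are standard sampling reasoning, not details taken from the paper.

    ```python
    # Toy third-party sampling verification: recompute a random sample of the
    # SP's claimed intermediate results; names and shapes are illustrative.
    import random

    def verify_by_sampling(inputs, claimed, compute_fn, n_samples=20):
        """Return False as soon as a sampled result disagrees with a recompute."""
        keys = random.sample(list(claimed), min(n_samples, len(claimed)))
        for k in keys:
            if claimed[k] != compute_fn(inputs[k]):
                return False              # cheating detected
        return True                       # consistent on every sampled key
    ```

    If a fraction f of the results is wrong, s independent samples miss the cheating with probability about (1 - f)^s, which is why a few samples already push the detection rate past 99% for non-trivial f.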

  • CCDM: Ladder-Logic Programming for Wireless Sensors and Actuators with Central Controller-Based Device Management

    Hideya OCHIAI  Hiroshi ESAKI  

     
    PAPER

    Vol: E94-B No:8  Page(s): 2208-2215

    This paper proposes a ladder-logic programming model for sensor actuator networks and demonstrates optimized operation with a central controller-based device management (CCDM) architecture. A wireless sensor actuator network consists of distributed wireless nodes, and implementing data streams and data processors on these nodes has been challenging: system programmers must describe their instructions in a programming language, and data processors must be placed so as to optimize, for example, total network traffic. The ladder-logic model enables such programming, and CCDM makes various optimizations feasible, including optimization of network traffic, delivery latency, load balancing, and fault tolerance, even when these algorithms are not lightweight. In this paper, we focus on the traffic-reduction case and propose two moderately complex algorithms. Experiments show that CCDM achieves optimization even with such moderately complex algorithms.
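
    For readers unfamiliar with the model, a ladder program is a list of rungs, each energizing a coil when its input condition holds; a scan cycle evaluates every rung against the current sensor readings. The rung encoding below is a generic PLC-style sketch, and CCDM would additionally decide on which node each data processor runs.

    ```python
    # Minimal ladder-logic scan cycle; rung encoding and signal names are
    # hypothetical, and placement decisions (CCDM's job) are omitted.
    def scan_cycle(rungs, inputs):
        """One scan: each rung is (condition, coil); returns the coil states."""
        return {coil: condition(inputs) for condition, coil in rungs}

    rungs = [
        (lambda i: i["temp_high"] and not i["fan_manual_off"], "fan"),
        (lambda i: i["door_open"], "alarm"),
    ]
    print(scan_cycle(rungs, {"temp_high": True, "fan_manual_off": False,
                             "door_open": False}))
    # -> {'fan': True, 'alarm': False}
    ```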

  • Fuzzy Logic-Based Quantized Event Filter for RFID Data Processing

    Sung Ho JANG  Hi Sung CHOUN  Heung Seok CHAE  Jong Sik LEE  

     
    PAPER

    Vol: E91-B No:11  Page(s): 3560-3568

    RFID event filtering is an important issue in RFID data management. Tag read events from readers suffer from problems such as unreliability, redundancy, and disordering of tag readings, and duplicated events degrade the performance of RFID systems with a flood of similar tag information. This paper therefore proposes a fuzzy logic-based quantized event filter. To reduce duplicated tag readings and resolve their disordering, the filter applies a fuzzy logic system that adjusts a filtering threshold as the circumstances of the readers change. Continuous tag readings are converted into discrete values by the filtering threshold, and the filter generates as many events as there are discrete values at each event-generation time. Experimental results comparing the proposed filter with existing RFID event filters, such as the primitive event filter and the smoothing event filter, verify the effectiveness and efficiency of the fuzzy logic-based quantized event filter.
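
    A sketch of the two moving parts, a fuzzy rule that steers the threshold and a quantizer that emits events only on level changes, is given below; the membership functions and the single rule are invented for illustration and are certainly simpler than the paper's controller.

    ```python
    # Illustrative fuzzy-steered quantized event filter; memberships and the
    # rule base are assumptions, not the paper's design.
    def fuzzy_threshold(read_rate, duplicate_ratio):
        """Map reader conditions to a filtering threshold in (0, 1]."""
        busy = min(read_rate / 100.0, 1.0)    # membership: reader is busy
        noisy = duplicate_ratio               # membership: stream is redundant
        return max(0.1, 0.5 * busy + 0.5 * noisy)

    def quantize_events(readings, threshold):
        """Emit one event per level change instead of one per raw tag read."""
        events, level = [], 0
        for t, value in enumerate(readings):
            q = int(value / threshold)        # quantize the reading
            if q != level:
                events.append((t, q))
                level = q
        return events
    ```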

  • Filtering for Simple Threshold Systems: Self-Tuning, Mutual Information and Applications

    Takahiro HADA  Toyonori MUNAKATA  

     
    PAPER-Signal Processing

    Vol: E89-A No:10  Page(s): 2566-2574

    In this paper we discuss an adaptive process based on the so-called self-tuning mechanism. We simplify this mechanism and apply it to a threshold system. From the viewpoints of information quantity and estimation accuracy, we show that the mechanism enhances information transmission through the threshold system. In addition, we extend our theory so that it can be applied to truncation coding.
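
    A simplified version of such a self-tuning threshold is sketched below: the threshold drifts toward the recent signal level so the binary output keeps carrying information. The first-order update rule is an assumption, not the paper's mechanism.

    ```python
    # Simplified self-tuning threshold element; the update rule is illustrative.
    def self_tuning_filter(samples, rate=0.05):
        theta, outputs = 0.0, []
        for x in samples:
            outputs.append(1 if x >= theta else 0)  # threshold element
            theta += rate * (x - theta)             # tune theta toward signal
        return outputs
    ```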

  • Concurrency Control with Permissible Serializability in Multi-Media Data Processings

    Yuichi SAKAUE  Jun'ichi MIYAO  

     
    PAPER-Computer Hardware and Design

    Vol: E78-D No:4  Page(s): 336-344

    Recent advances in the processing speed and window systems of computers, especially workstations, are accelerating multi-media data processing (MMDP), in which a variety of data such as numerics, characters, voice, video, and animation are processed concurrently on a workstation. In data processing, concurrent execution of transactions is key to improving throughput; however, concurrent execution without concurrency control may cause inconsistent results, so concurrency control must be introduced in such systems. In MMDP, though, it is ineffective to adopt the concurrency control methods previously developed for ordinary databases, since multi-media data are huge and have real-time properties. This paper discusses concurrency control for MMDP. We propose some new concepts for MMDP and define a new serializability class called Permissible Serializability, which provides higher concurrency in MMDP than the ordinary classes. We then propose a concurrency control algorithm, TYPE, for Permissible Serializability and show some simulation results.
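
    The abstract does not define Permissible Serializability, so no sketch of it is attempted here; for contrast, the conventional class it relaxes, conflict serializability, is decided by the standard precedence-graph test below.

    ```python
    # Standard precedence-graph test for conflict serializability, shown as
    # the conventional baseline the paper's class is compared against.
    from collections import defaultdict

    def conflict_serializable(schedule):
        """schedule: list of (txn, op, item), op in {'r','w'}; True if acyclic."""
        edges = defaultdict(set)
        for i, (t1, op1, x1) in enumerate(schedule):
            for t2, op2, x2 in schedule[i + 1:]:
                if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                    edges[t1].add(t2)         # t1's op must precede t2's
        seen, done = set(), set()
        def cyclic(t):
            seen.add(t)
            for u in edges[t]:
                if (u in seen and u not in done) or (u not in done and cyclic(u)):
                    return True
            done.add(t)
            return False
        return not any(cyclic(t) for t in list(edges) if t not in done)

    print(conflict_serializable([("T1", "r", "x"), ("T2", "w", "x"),
                                 ("T1", "w", "x")]))  # -> False (T1<->T2 cycle)
    ```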

  • Automatic Data Processing Procedure for Ground Probing Radar

    Toru SATO  Kenya TAKADA  Toshio WAKAYAMA  Iwane KIMURA  Tomoyuki ABE  Tetsuya SHINBO  

     
    PAPER-Electronic and Radio Applications

    Vol: E77-B No:6  Page(s): 831-837

    We developed an automatic data processing algorithm for a ground-probing radar, which is essential if non-experts are to analyze large amounts of data. Its aim is to obtain, without the assistance of an experienced operator, the optimum result that the conventional technique can give. The algorithm is general except that it postulates the existence of at least one isolated target in the radar image. The raw images of underground objects are compressed in the vertical and horizontal directions by a pulse-compression filter and the aperture synthesis technique, respectively. The test function needed to configure the compression filter is automatically selected from the given image, and the sensitivity of the filter is adjusted to minimize the magnitude of spurious responses. The propagation velocity needed to perform the aperture synthesis is determined by fitting a hyperbola to the selected echo trace. We verified the algorithm by applying it to data obtained at two test sites with different magnitudes of clutter echoes.
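
    The velocity-fitting step lends itself to a short sketch: for an isolated target at depth d below position x0, the two-way travel time along the scan line is t(x) = (2/v) * sqrt(d^2 + (x - x0)^2), and fitting this hyperbola to a picked echo trace yields v. The data below are synthetic, and the picking and pulse-compression stages are omitted.

    ```python
    # Hyperbola fit for propagation-velocity estimation over a buried point
    # target; synthetic picks stand in for a trace selected from real data.
    import numpy as np
    from scipy.optimize import curve_fit

    def travel_time(x, v, d, x0):
        return (2.0 / v) * np.sqrt(d**2 + (x - x0)**2)

    x = np.linspace(-2, 2, 41)             # antenna positions [m]
    t = travel_time(x, 0.1, 1.0, 0.0)      # v = 0.1 m/ns, target 1 m deep
    t += 0.2 * np.random.randn(t.size)     # picking noise [ns]

    (v, d, x0), _ = curve_fit(travel_time, x, t, p0=[0.08, 0.5, 0.1])
    print(f"estimated v = {v:.3f} m/ns, depth = {d:.2f} m")
    ```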

  • Efficient Application of Coding Technique for Data Compression of ECG

    Susumu TSUDA  Koichi SHIMIZU  Goro MATSUMOTO  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1425-1433

    A technique was developed to reduce ECG data efficiently within a controlled accuracy. The sampled and digitized waveform of an ECG is transformed in three major processes: the calculation of a beat-to-beat variation, a polygonal approximation, and the calculation of the differences between consecutive node points. An adaptive coding technique is then applied to minimize redundancy in the data. It was demonstrated that an ECG waveform sampled at 200 Hz, 10 bit/sample, and 5 µV/digit could be reduced with a bit reduction ratio of about 10% and a reconstruction error of about 2.5%. A polygonal approximation method, called MSAPA, was newly developed as a modification of the well-known method SAPA. When applied to the beat-to-beat variation waveform, MSAPA gave better reduction efficiency and a smaller reconstruction error than SAPA. The importance of low-pass filtering as preprocessing for the polygonal approximation was confirmed with concrete examples. The efficiency of the proposed technique was compared with the case in which the polygonal approximation was not used. Through these analyses, it was found that the redundancy elimination of the coding technique worked effectively in the proposed technique.
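
    The polygonal-approximation and node-difference stages can be illustrated with a generic corridor-style sketch; this is not MSAPA itself, whose modifications the abstract does not specify, but it shows the node-selection and delta-coding shape of the pipeline.

    ```python
    # Generic corridor-style polygonal approximation plus node-difference
    # coding; a SAPA-like illustration, not the authors' MSAPA.
    def polygonal_nodes(samples, eps):
        """Close a segment when the chord from the last node drifts beyond eps."""
        nodes = [0]
        for i in range(2, len(samples)):
            j = nodes[-1]
            for k in range(j + 1, i):      # test every point under chord j..i
                chord = samples[j] + (samples[i] - samples[j]) * (k - j) / (i - j)
                if abs(samples[k] - chord) > eps:
                    nodes.append(i - 1)    # previous point becomes a node
                    break
        nodes.append(len(samples) - 1)
        return nodes

    def delta_code(samples, nodes):
        """Encode nodes as (index step, amplitude step) pairs for coding."""
        pts = [(n, samples[n]) for n in nodes]
        return [(n2 - n1, v2 - v1) for (n1, v1), (n2, v2) in zip(pts, pts[1:])]

    ecg = [0, 1, 2, 10, 30, 12, 3, 2, 1, 1, 0]  # toy beat, arbitrary units
    print(delta_code(ecg, polygonal_nodes(ecg, eps=1.5)))
    ```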