
Keyword Search Result

[Keyword] data analysis (16 hits)

Results 1-16
  • A Network Design Scheme in Delay Sensitive Monitoring Services Open Access

    Akio KAWABATA  Takuya TOJO  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network Management/Operation

      Publicized:
    2023/04/19
      Vol:
    E106-B No:10
      Page(s):
    903-914

    Mission-critical monitoring services, such as finding criminals with monitoring cameras, require rapid detection of newly updated data, so suppressing delay is essential. Taking this direction, this paper proposes a network design scheme that minimizes this delay for monitoring services consisting of Internet-of-Things (IoT) devices located at terminal endpoints (TEs), databases (DB), and applications (APLs). The proposed scheme determines the allocation of the DB and APLs and the selection of the server to which each TE belongs. The DB and APLs are allocated to optimal servers among the multiple servers in the network. We formulate the proposed network design scheme as an integer linear programming problem. The delay reduction effect of the proposed scheme is evaluated on two network topologies and a monitoring camera system network. On the two network topologies, the delays of the proposed scheme are 78 and 80 percent of that of the conventional scheme. On the monitoring camera system network, the delay of the proposed scheme is 77 percent of that of the conventional scheme. These results indicate that the proposed scheme reduces the delay compared to the conventional scheme, in which APLs are located near TEs. The computation time of the proposed scheme is acceptable for the design phase before a service is launched. The proposed scheme can contribute to network designs that quickly detect newly added objects in monitoring services.
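
    The paper's exact integer linear programming formulation is not reproduced here; as a hedged sketch of what such a delay-minimizing placement problem looks like, the following Python/PuLP toy assumes a co-located DB/APL site, a made-up delay matrix, and a standard linearization of the assignment product, so it illustrates the problem class rather than the authors' model.

        import pulp

        TES, SERVERS = range(3), range(2)
        d_te = [[2, 5], [4, 1], [3, 3]]   # TE-to-server delays (made up)
        d_srv = [[0, 2], [2, 0]]          # server-to-server delays (made up)

        prob = pulp.LpProblem("placement", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", (TES, SERVERS), cat="Binary")  # TE t uses server s
        y = pulp.LpVariable.dicts("y", SERVERS, cat="Binary")         # DB/APL on server u
        z = pulp.LpVariable.dicts("z", (TES, SERVERS, SERVERS), cat="Binary")

        # Objective: total delay = TE-to-edge-server plus edge-server-to-DB/APL-site.
        prob += pulp.lpSum(z[t][s][u] * (d_te[t][s] + d_srv[s][u])
                           for t in TES for s in SERVERS for u in SERVERS)
        for t in TES:
            prob += pulp.lpSum(x[t][s] for s in SERVERS) == 1  # one server per TE
        prob += pulp.lpSum(y[u] for u in SERVERS) == 1         # one DB/APL site
        for t in TES:
            for s in SERVERS:
                for u in SERVERS:
                    prob += z[t][s][u] >= x[t][s] + y[u] - 1   # z = x AND y (linearized)

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print(pulp.LpStatus[prob.status], pulp.value(prob.objective))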

  • Privacy-Preserving Data Analysis: Providing Traceability without Big Brother

    Hiromi ARAI  Keita EMURA  Takuya HAYASHI  

     
    PAPER

      Vol:
    E104-A No:1
      Page(s):
    2-19

    Collecting and analyzing personal data is important in modern information applications. Though the privacy of data providers should be protected, the need to track certain data providers often arises, such as tracing specific patients or adversarial users. Thus, tracking only specific persons without revealing normal users' identities is quite important for operating information systems that use personal data. It is difficult to know in advance the rules that specify when tracking is necessary, since the rules are derived from the analysis of collected data. Thus, it would be useful to provide a general way to employ any data analysis method regardless of the type of data and the nature of the rules. In this paper, we propose a privacy-preserving data analysis construction that allows an authority to detect specific users while other honest users are kept anonymous. Using the cryptographic techniques of group signatures with message-dependent opening (GS-MDO) and public key encryption with non-interactive opening (PKENO), we provide a correspondence table that links a user and data in a secure way, and we can employ any anonymization technique and data analysis method. It is particularly worth noting that no “big brother” exists, meaning that no single entity can identify users who do not provide anomalous data, while bad behaviors are always traceable. We show the results of implementing our construction. Briefly, its overhead is on the order of 10 ms for a single thread. We also confirm its efficiency by using a real-world dataset.
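
    GS-MDO and PKENO have no off-the-shelf implementations, so the following Python toy only illustrates the “no big brother” property using ordinary symmetric encryption (the cryptography package's Fernet): an identity in the correspondence table can be recovered only when two independent authorities both cooperate, loosely mirroring the two-party opening in GS-MDO. All keys and names are hypothetical.

        # Toy illustration (not GS-MDO/PKENO): an identity record can be opened
        # only when two independent authorities cooperate, so no single entity
        # can de-anonymize users on its own.
        from cryptography.fernet import Fernet

        opener = Fernet(Fernet.generate_key())     # hypothetical authority 1
        admitter = Fernet(Fernet.generate_key())   # hypothetical authority 2

        # Correspondence-table entry: identity sealed under both authorities' keys.
        entry = opener.encrypt(admitter.encrypt(b"user-42"))

        # Tracing requires both parties; either one alone sees only ciphertext.
        identity = admitter.decrypt(opener.decrypt(entry))
        print(identity)  # b'user-42'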

  • Recent Advances in Practical Secure Multi-Party Computation Open Access

    Satsuya OHATA  

     
    INVITED PAPER-Cryptography

      Vol:
    E103-A No:10
      Page(s):
    1134-1141

    Secure multi-party computation (MPC) allows a set of parties to jointly compute a function while keeping their inputs private. MPC has been actively studied, and there are many research results in both theoretical and practical fields. In this paper, we introduce the basics of MPC and survey recent practical advances. We first explain the settings, security notions, and cryptographic building blocks of MPC. Then, we present and discuss the current state of higher-level secure protocols, privacy-preserving data analysis, and frameworks/compilers for implementing MPC applications at low cost.
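
    As a concrete taste of one standard building block that such surveys cover, here is a minimal additive secret sharing sketch in Python; the three-party setting and the Mersenne-prime modulus are assumptions for illustration.

        # 3-party additive secret sharing over Z_p: each party holds one share,
        # and shares of x and y can be added locally to get shares of x + y.
        import secrets

        P = 2**61 - 1  # prime modulus (assumed parameter)

        def share(x, n=3):
            shares = [secrets.randbelow(P) for _ in range(n - 1)]
            shares.append((x - sum(shares)) % P)  # last share completes the sum
            return shares

        def reconstruct(shares):
            return sum(shares) % P

        x_sh, y_sh = share(12), share(30)
        z_sh = [(a + b) % P for a, b in zip(x_sh, y_sh)]  # local, no communication
        print(reconstruct(z_sh))  # 42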

  • lcyanalysis: An R Package for Technical Analysis in Stock Markets

    Chun-Yu LIU  Shu-Nung YAO  Ying-Jen CHEN  

     
    PAPER-Office Information Systems, e-Business Modeling

      Publicized:
    2019/03/26
      Vol:
    E102-D No:7
      Page(s):
    1332-1341

    With advances in information technology and the development of big data, manual operation is unlikely to be a smart choice for stock market investing. Instead, computer-based investment models are expected to give investors more accurate strategic analysis and more effective investment decisions than humans can. This paper aims to improve investor profits by mining for critical information in stock data, thereby supporting big data analysis. We used the R language to compute technical indicators in the stock market and then applied those indicators to prediction. The proposed R package includes several analysis toolkits, such as trend line indicators, W-type reversal patterns, V-type reversal patterns, and bull or bear market detection. The simulation results suggest that the developed R package can accurately capture price tendencies and enhance the return on investment.
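
    The package itself is written in R; purely as an illustration of what a simple trend indicator computes, here is a hypothetical moving-average crossover signal in Python/pandas. The window lengths and price series are made up, and this is not one of the package's functions.

        import pandas as pd

        prices = pd.Series([10, 11, 12, 11, 13, 14, 13, 15, 16, 15, 17, 18])
        fast = prices.rolling(3).mean()   # short-window trend (assumed length)
        slow = prices.rolling(6).mean()   # long-window trend (assumed length)
        signal = (fast > slow).astype(int).diff()  # +1 bullish cross, -1 bearish
        print(signal.dropna().to_string())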

  • A New Efficient Resource Management Framework for Iterative MapReduce Processing in Large-Scale Data Analysis

    Seungtae HONG  Kyongseok PARK  Chae-Deok LIM  Jae-Woo CHANG  

    This paper has been cancelled due to violation of duplicate submission policy on IEICE Transactions on Information and Systems on September 5, 2019.
     
    PAPER

      Publicized:
    2017/01/17
      Vol:
    E100-D No:4
      Page(s):
    704-717
    • Errata [uploaded on March 1, 2018]

    To analyze large-scale data efficiently, studies on Hadoop, one of the most popular MapReduce frameworks, have been actively conducted. Meanwhile, most large-scale data analysis applications, e.g., data clustering, must execute the same map and reduce functions repeatedly. However, Hadoop cannot provide optimal performance for iterative MapReduce jobs because it derives a result from a single pass of map and reduce functions. To solve these problems, in this paper we propose a new efficient resource management framework for iterative MapReduce processing in large-scale data analysis. For this, we first design an iterative job state machine for managing iterative MapReduce jobs. Second, we propose an invariant data caching mechanism for reducing the I/O costs of data accesses. Third, we propose an iterative resource management technique for efficiently managing the resources of a Hadoop cluster. Fourth, we devise a stop-condition check mechanism for preventing unnecessary computation. Finally, we show the performance superiority of the proposed framework by comparing it with existing frameworks.
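
    To make the invariant-data-caching idea concrete, the plain-Python sketch below runs a k-means-style iterative job in which the input points (the invariant data) are loaded once and then reused in every iteration rather than re-read from storage; the actual MapReduce machinery is elided.

        import random

        def load_points():                  # expensive I/O in a real cluster
            random.seed(0)
            return [(random.random(), random.random()) for _ in range(1000)]

        points = load_points()              # invariant data: cached once, reused
        centers = [(0.2, 0.2), (0.8, 0.8)]
        for it in range(10):                # iterative "map" (assign) + "reduce" (average)
            buckets = {i: [] for i in range(len(centers))}
            for p in points:                # reuses cached points; no re-load per round
                i = min(range(len(centers)),
                        key=lambda c: (p[0] - centers[c][0])**2
                                      + (p[1] - centers[c][1])**2)
                buckets[i].append(p)
            centers = [(sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
                       for b in buckets.values() if b]
        print(centers)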

  • A Network-Type Brain Machine Interface to Support Activities of Daily Living Open Access

    Takayuki SUYAMA  

     
    INVITED PAPER

      Vol:
    E99-B No:9
      Page(s):
    1930-1937

    To help elderly and physically disabled people become self-reliant in daily life, whether at home or in a health clinic, we have developed a network-type brain machine interface (BMI) system called “network BMI” to control real-world actuators, such as wheelchairs, based on human intention measured by a portable brain measurement system. In this paper, we introduce the technologies for realizing the network BMI system to support activities of daily living.

  • NHPP-Based Software Reliability Model with Marshall-Olkin Failure Time Distribution

    Xiao XIAO  

     
    PAPER

      Vol:
    E98-A No:10
      Page(s):
    2060-2068

    A new approach to non-homogeneous Poisson process (NHPP)-based software reliability modeling is proposed to describe the stochastic behavior of software fault-detection processes whose failure rate is not monotonic. The fundamental idea is to apply the Marshall-Olkin distribution to the software fault-detection time distribution. The applicability of the Marshall-Olkin distribution in software reliability modeling is studied. The data-fitting abilities of the proposed NHPP-based software reliability model are compared with those of existing typical models through the analysis of real software project data.
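
    For readers unfamiliar with the family, the Marshall-Olkin extension of a baseline survival function is, in its standard form (the paper's exact parameterization may differ):

        \bar{F}(t) = \frac{\alpha \bar{F}_0(t)}{1 - (1 - \alpha)\bar{F}_0(t)}, \qquad \alpha > 0

    A plausible NHPP mean value function is then \Lambda(t) = \omega\,(1 - \bar{F}(t)), with \omega the expected total number of faults; values of \alpha away from 1 allow the failure rate to be non-monotonic. This reading is ours, not a formula quoted from the paper.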

  • Asymptotic Marginal Likelihood on Linear Dynamical Systems

    Takuto NAITO  Keisuke YAMAZAKI  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E97-D No:4
      Page(s):
    884-892

    Linear dynamical systems are basic state space models that describe underlying system dynamics through linear state space equations. When the model is employed for time-series data analysis, system identification, which detects the dimension of the hidden state variables, is one of the most important tasks. Recently, it has been found that the model has singularities in the parameter space, which implies that analysis of the adverse effects of the singularities is necessary for precise identification. However, the singularities in these models have not been thoroughly studied. A previous work dealt with the simplest case, in which the hidden state and the observation variables are both one-dimensional. The present paper extends the setting to general dimensions and more rigorously reveals the structure of the singularities. The results provide the asymptotic forms of the generalization error and the marginal likelihood, which are often used as criteria for system identification.
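
    In the standard form assumed here (notation ours), a linear dynamical system with hidden state x_t and observation y_t is

        x_{t+1} = A x_t + w_t, \quad w_t \sim \mathcal{N}(0, Q)
        y_t = C x_t + v_t, \quad v_t \sim \mathcal{N}(0, R)

    and system identification asks for the dimension of x_t. The singularities arise, roughly, because models with different state dimensions can realize the same observation distribution, so the parameters are not identifiable at those points.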

  • Risk Assessment of a Portfolio Selection Model Based on a Fuzzy Statistical Test

    Pei-Chun LIN  Junzo WATADA  Berlin WU  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E96-D No:3
      Page(s):
    579-588

    The objective of our research is to build a statistical test that can evaluate the different risks of a portfolio selection model with fuzzy data. The central points and radii of fuzzy numbers are used to determine the portfolio selection model, and we statistically evaluate the best return by a fuzzy statistical test. Empirical studies are presented to illustrate the risk evaluation of the portfolio selection model with interval values. We conclude that the fuzzy statistical test enables us to evaluate a stable expected return and low-risk investment with different choices of k, which indicates the risk level. The results of numerical examples show that our method is suitable for short-term investments.
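
    The sketch below in Python/NumPy is only a schematic of working with fuzzy (center, radius) returns; the weights, numbers, and the way k penalizes the radius are illustrative assumptions, not the paper's statistical test.

        import numpy as np

        centers = np.array([0.04, 0.07, 0.05])   # assumed fuzzy-return centers
        radii   = np.array([0.01, 0.03, 0.02])   # assumed fuzzy-return radii
        w       = np.array([0.5, 0.2, 0.3])      # portfolio weights (assumed)

        for k in (0.5, 1.0, 2.0):                # k indicates the risk level
            score = w @ centers - k * (w @ radii)  # penalize imprecision by k
            print(f"k={k}: risk-adjusted return {score:.4f}")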

  • Software Failure Time Data Analysis via Wavelet-Based Approach

    Xiao XIAO  Tadashi DOHI  

     
    PAPER

      Vol:
    E95-A No:9
      Page(s):
    1490-1497

    The non-homogeneous Poisson process (NHPP) has been applied successfully to model nonstationary counting phenomena in a large class of problems. In software reliability engineering, NHPP-based software reliability models (SRMs) form a very important class. Since an NHPP is characterized by its rate (intensity) function, which is known as the software failure rate of an NHPP-based SRM, it is of great interest to accurately estimate the rate function from observed software failure data. In previous work, the same authors introduced a Haar-wavelet-based technique for this problem and found that the Haar wavelet transform performed very well in estimating the software failure rate. In this paper, we consider the applicability of a Daubechies wavelet estimator for estimating the software failure rate from software failure time data. We give practical solutions by overcoming the technical difficulties of applying the Daubechies wavelet estimator to real software failure time data.
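
    A rough Python sketch of wavelet-based failure rate estimation in this spirit, using PyWavelets with a Daubechies basis; the binning, the universal-threshold rule, and the synthetic failure times are our assumptions, not the authors' procedure.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        times = np.sort(rng.uniform(0, 100, 200))     # synthetic failure times
        counts, _ = np.histogram(times, bins=64)      # raw per-bin rate estimate

        coeffs = pywt.wavedec(counts.astype(float), "db4", level=3)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise scale, finest level
        thr = sigma * np.sqrt(2 * np.log(len(counts)))  # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        rate = pywt.waverec(coeffs, "db4")            # smoothed failure-rate estimate
        print(rate[:5])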

  • NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Xiao XIAO  Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

      Vol:
    E95-A No:5
      Page(s):
    894-902

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases for estimating software reliability, the number of remaining faults in software, and the software release timing. In this paper, we propose a new modeling approach for NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, we conclude that the proposed NHPP-based SRMs outperform the existing ones on many data sets in terms of goodness-of-fit and prediction performance.
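
    For reference, the equilibrium distribution of a fault-detection time distribution F with finite mean \mu is

        F_e(t) = \frac{1}{\mu} \int_0^t \bigl(1 - F(x)\bigr)\,dx,
        \qquad \mu = \int_0^\infty \bigl(1 - F(x)\bigr)\,dx

    and the construction presumably uses F_e in place of F in the NHPP mean value function, e.g. \Lambda(t) = \omega F_e(t); this reading is ours, not a formula quoted from the paper.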

  • User Location in Picocells -- A Paging Algorithm Derived from Measured Data

    Stephan WANKE  Hiroshi SAITO  Yutaka ARAKAWA  Shinsuke SHIMOGAWA  

     
    PAPER-Network

      Vol:
    E93-B No:9
      Page(s):
    2291-2298

    We present a new paging algorithm for wireless networks with ultra-short-range radio access links (picocells). The ubiquitous office (u-office) network is a good example of such a network, and we present some u-office example applications. In addition, we show that conventional paging algorithms are not feasible in such networks. We therefore derived a new paging algorithm from the measurement results of an experimental sensor network with short-range wireless links deployed in our office. We equipped persons with sensors and deployed sensor readers at selected places in our office. The sensors transmit messages to the sensor readers at regular intervals; if no sensor reader is in range, the message is lost. Our main observation is that, if a picocell has an attraction property for a certain person, the residence time of an attached mobile terminal is not gamma distributed (as described in the literature), and long residences become much more likely once the residence time exceeds a certain threshold. Based on this observation, our proposed paging algorithm registers the location of the mobile terminal only when its residence time in the cell is longer than a predetermined constant. By setting this constant appropriately, we can significantly reduce the registration message frequency while keeping the probability that the network successfully connects to a mobile terminal high.
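
    A minimal Python sketch of the registration rule (the event format and threshold value are assumptions): the terminal registers its location only when its residence time in a cell exceeds the predetermined constant.

        THRESHOLD = 30.0  # seconds (assumed value of the predetermined constant)

        def registrations(visits, threshold=THRESHOLD):
            """visits: list of (cell_id, residence_seconds) in movement order."""
            registered = []
            for cell, residence in visits:
                if residence > threshold:    # long stay => likely an attraction cell
                    registered.append(cell)  # register location with the network
            return registered

        trace = [("desk", 1800.0), ("hall", 5.0), ("printer", 45.0), ("hall", 4.0)]
        print(registrations(trace))  # ['desk', 'printer']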

  • Analysis of Leakage-Inductance Effect on Characteristics of Flyback Converter without Right Half Plane Zero

    Hiroto TERASHI  Tamotsu NINOMIYA  

     
    PAPER-DC/DC Converters

      Vol:
    E87-B No:12
      Page(s):
    3539-3544

    In recent years, the transformers in DC-DC converters have become smaller and thinner for power-module-type applications. This increases the leakage inductances because the number of turns of the secondary winding decreases. This paper presents an analysis of the static and dynamic characteristics of the novel flyback converter proposed previously, and clarifies that the transformer's leakage inductances deteriorate the static load regulation but improve the dynamic stability by increasing the damping factor.

  • Single-Trial Magnetoencephalographic Data Decomposition and Localization Based on Independent Component Analysis Approach

    Jianting CAO  Noboru MURATA  Shun-ichi AMARI  Andrzej CICHOCKI  Tsunehiro TAKEDA  Hiroshi ENDO  Nobuyoshi HARADA  

     
    PAPER-Nonlinear Problems

      Vol:
    E83-A No:9
      Page(s):
    1757-1766

    Magnetoencephalography (MEG) is a powerful and non-invasive technique for measuring human brain activity with high temporal resolution. The motivation for studying MEG data analysis is to extract the essential features from measured data and relate them to the corresponding human brain functions. In this paper, a novel MEG data analysis method based on an independent component analysis (ICA) approach with multistage pre-processing and post-processing procedures is proposed. Moreover, several kinds of ICA algorithms are investigated for analyzing single-trial MEG data recorded in a phantom experiment. The results are presented to illustrate the effectiveness and high performance of both source decomposition by the ICA approaches and source localization by the equivalent current dipole fitting method.
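
    As an illustration of the decomposition step only, here is a FastICA sketch with scikit-learn on synthetic mixtures standing in for multichannel MEG recordings; FastICA is a standard ICA algorithm, not necessarily one the authors evaluated, and the dipole-fitting localization step is not shown.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 1000)
        sources = np.c_[np.sin(40 * t), np.sign(np.sin(7 * t))]  # toy sources
        mixing = rng.normal(size=(2, 2))
        sensors = sources @ mixing.T            # simulated sensor recordings

        ica = FastICA(n_components=2, random_state=0)
        estimated = ica.fit_transform(sensors)  # recovered independent components
        print(estimated.shape)  # (1000, 2)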

  • Data Analysis by Positive Decision Trees

    Kazuhisa MAKINO  Takashi SUDA  Hirotaka ONO  Toshihide IBARAKI  

     
    PAPER-Theoretical Aspects

      Vol:
    E82-D No:1
      Page(s):
    76-88

    Decision trees are used as a convenient means to explain given positive examples and negative examples, which is a form of data mining and knowledge discovery. Standard methods such as ID3 may produce non-monotonic decision trees, in the sense that data with larger values in all attributes are sometimes classified into a class with a smaller output value. (In the case of binary data, this is equivalent to saying that the discriminant Boolean function that the decision tree represents is not positive.) A motivation of this study comes from the observation that real-world data are often positive, and in such cases it is natural to build decision trees that represent positive (i.e., monotone) discriminant functions. To this end, we propose how to modify existing procedures such as ID3 so that the resulting decision tree represents a positive discriminant function. In this procedure, we add new data to recover the positivity of the data, which the original data had but lost in the process of decomposing data sets by methods such as ID3. To compare the performance of our method with existing methods, we test (1) positive data, randomly generated from a hidden positive Boolean function after adding dummy attributes, and (2) breast cancer data as an example of real-world data. The experimental results on (1) show that, although positive decision trees are somewhat larger than those built without the positivity assumption, they exhibit higher accuracy and tend to choose the correct attributes, on which the hidden positive Boolean function is defined. For the breast cancer data set, we observe a similar tendency; i.e., positive decision trees are larger but give higher accuracy.
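
    To make the positivity property concrete, the brute-force Python check below verifies monotonicity of a discriminant over binary attribute vectors: x <= y componentwise must imply f(x) <= f(y). The classifiers are stand-ins, not decision trees built by the modified ID3 procedure.

        from itertools import product

        def is_positive(f, n_attrs):
            """Check f(x) <= f(y) for all componentwise-comparable pairs x <= y."""
            points = list(product([0, 1], repeat=n_attrs))
            return all(f(x) <= f(y)
                       for x in points for y in points
                       if all(a <= b for a, b in zip(x, y)))

        monotone = lambda x: int(x[0] and x[1] or x[2])  # positive Boolean function
        non_monotone = lambda x: int(not x[0])           # violates positivity
        print(is_positive(monotone, 3), is_positive(non_monotone, 3))  # True False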

  • Surface Reconstruction Model for Realistic Visualization

    Hiromi T. TANAKA  Fumio KISHINO  

     
    PAPER

      Vol:
    E76-D No:4
      Page(s):
    494-500

    Surface reconstruction and visualization from sparse and incomplete surface data is a fundamental problem and has received growing attention in both computer vision and graphics. This paper presents a computational scheme for realistic visualization of free-formed surfaces from 3D range images. The novelty of this scheme is that by integrating computer vision and computer graphics techniques, we dynamically construct a mesh representation of the arbitrary view of the surfaces, from a view-invariant shape description obtained from 3D range images. We outline the principle of this scheme and describle the frame work of a graphical reconstruction model, we call arbitrarily oriented meshes', which is developed based on differential geometry. The experimental results on real range data of human faces are shown.