
Keyword Search Result

[Keyword] REM (1013 hits)

161-180 hits (1013 hits)

  • Cloud Provider Selection Models for Cloud Storage Services to Satisfy Availability Requirements

    Eiji OKI  Ryoma KANEKO  Nattapong KITSUWAN  Takashi KURIMOTO  Shigeo URUSHIDANI  

     
    PAPER-Network

      Publicized:
    2017/01/24
      Vol:
    E100-B No:8
      Page(s):
    1406-1418

    Cost-effective cloud storage services are attracting users with their convenience, but there is a trade-off between service availability and usage cost. We develop two cloud provider selection models for cloud storage services to minimize the total cost of usage. The models select multiple cloud providers to meet the user requirements while considering unavailability. The first model, called the user-copy (UC) model, allows the selection of multiple cloud providers, where the user copies its data to multiple providers. In addition to the user copy function of the UC model, the second model, called the user and cloud-provider copy (UCC) model, allows cloud providers to make copies of the data and deliver them to other cloud providers. The cloud service is available if at least one cloud provider is available. We formulate both models as integer linear programming (ILP) problems. Our performance evaluation shows that both models reduce the total cost of usage compared to the single cloud provider selection approach. As the cost of bandwidth usage between a user and a cloud provider increases, the UCC model becomes more beneficial than the UC model. We implement a prototype for cloud storage services and demonstrate our models via Science Information Network 5.
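
The UC model's selection problem can be illustrated with a tiny brute-force analogue (the paper formulates it as an ILP; the provider names, costs, and availabilities below are invented for illustration):

```python
from itertools import combinations

# Hypothetical providers: (name, monthly cost, availability)
providers = [
    ("A", 10.0, 0.99),
    ("B", 12.0, 0.995),
    ("C", 7.0, 0.97),
]

def select_providers(required_availability):
    """Brute-force analogue of the UC model: pick the cheapest subset of
    providers whose combined availability meets the requirement. The
    service is up if at least one provider is up; provider failures are
    assumed independent."""
    best = None
    for r in range(1, len(providers) + 1):
        for subset in combinations(providers, r):
            unavailability = 1.0
            for _, _, avail in subset:
                unavailability *= (1.0 - avail)
            if 1.0 - unavailability >= required_availability:
                cost = sum(c for _, c, _ in subset)
                if best is None or cost < best[0]:
                    best = (cost, [name for name, _, _ in subset])
    return best

print(select_providers(0.999))  # replicating data across A and C beats any single provider
```

The ILP formulation in the paper scales this idea to many providers and adds bandwidth costs; the brute force above only conveys the availability-versus-cost trade-off.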

  • Experimental Study of Mixed-Mode Oscillations in a Four-Segment Piecewise Linear Bonhoeffer-van der Pol Oscillator under Weak Periodic Perturbation -Successive and Nonsuccessive MMO-Incrementing Bifurcations-

    Tri Quoc TRUONG  Tadashi TSUBONE  Kuniyasu SHIMIZU  Naohiko INABA  

     
    PAPER-Nonlinear Problems

      Vol:
    E100-A No:7
      Page(s):
    1522-1531

    This report presents experimental measurements of mixed-mode oscillations (MMOs) generated by a weakly driven four-segment piecewise linear Bonhoeffer-van der Pol (BVP) oscillator. Such a roughly approximated simple piecewise linear circuit can generate MMOs and mixed-mode oscillation-incrementing bifurcations (MMOIBs). The laboratory experiments agree well with numerical results. We experimentally and numerically observe time series and Lorenz plots of MMOs generated by successive and nonsuccessive MMOIBs.

  • Note on Support Weight Distribution of Linear Codes over $\mathbb{F}_{p}+u\mathbb{F}_{p}+v\mathbb{F}_{p}+uv\mathbb{F}_{p}$

    Minjia SHI  Jie TANG  Maorong GE  

     
    LETTER-Coding Theory

      Vol:
    E100-A No:6
      Page(s):
    1346-1348

    Let $R=\mathbb{F}_{p}+u\mathbb{F}_{p}+v\mathbb{F}_{p}+uv\mathbb{F}_{p}$, where $u^{2}=u$, $v^{2}=v$, and $uv=vu$. A relation between the support weight distribution of a linear code $\mathscr{C}$ of type $p^{4k}$ over $R$ and its dual code $\mathscr{C}^{\perp}$ is established.

  • Physical-Weight-Based Measurement Methodology Suppressing Noise or Reducing Test Time for High-Resolution Low-Speed ADCs

    Mitsutoshi SUGAWARA  Zule XU  Akira MATSUZAWA  

     
    PAPER

      Vol:
    E100-C No:6
      Page(s):
    576-583

    We propose a statistical processing method to reduce the chip-test time of high-resolution, low-speed analog-to-digital converters (ADCs). For this kind of ADC, the influence of noise means that the conventional histogram or moment method takes a long time to collect the data required for averaging. The proposed method is based on the physical weights in the ADC/DAC under test. It can suppress white noise to 1/22 of the conventional method's level in the case of a 10-bit binary ADC, or it can reduce the test data to 1/8 or less, which directly reduces the measuring time to 1/8 or less. In addition, it yields complete integral non-linearity (INL) and differential non-linearity (DNL) results even when missing codes occur owing to the reduced number of data points. In this report, we theoretically describe how to guarantee results for missing codes at the omitted measurement points.

  • An Analytical Model of Charge Pump DC-DC Voltage Multiplier Using Diodes

    Toru TANZAWA  

     
    PAPER-Circuit Theory

      Vol:
    E100-A No:5
      Page(s):
    1137-1144

    An output voltage-current equation of a charge pump DC-DC voltage multiplier using diodes is provided that covers wide clock-frequency and output-current ranges, for designing energy harvesters operating at a near-threshold voltage or in the sub-threshold region. Equivalent circuits in the slow and fast switching limits are extracted. The effective threshold voltage of the diode in the slow switching limit is also derived as a function of the electrical characteristics of the diodes, such as the saturation current and voltage slope parameter, and of design parameters such as the number of stages, the capacitance per stage, the parasitic capacitance at the top plate of the main boosting capacitor, and the clock frequency. The model is verified against SPICE simulation.
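
As a rough orientation, the slow-switching-limit behavior of such a multiplier is often summarized by a Dickson-type output relation of the following generic textbook form (not the paper's exact model; here N is the number of stages, C the capacitance per stage, f the clock frequency, V_clk the clock amplitude, V_th the effective diode threshold, and I_out the output current):

```latex
% Generic Dickson-type charge pump output relation (slow switching limit)
V_{\mathrm{out}} = V_{\mathrm{in}} + N\,(V_{\mathrm{clk}} - V_{\mathrm{th}}) - \frac{N\,I_{\mathrm{out}}}{f\,C}
```

The paper's contribution is to extend this kind of relation across the slow and fast switching limits and to derive V_th itself from the diode characteristics.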

  • Set-Based Boosting for Instance-Level Transfer on Multi-Classification

    Haibo YIN  Jun-an YANG  Wei WANG  Hui LIU  

     
    PAPER-Pattern Recognition

      Publicized:
    2017/01/26
      Vol:
    E100-D No:5
      Page(s):
    1079-1086

    Transfer boosting, a branch of instance-based transfer learning, is a commonly adopted transfer learning method. However, currently popular transfer boosting methods focus on binary classification problems even though there are many multi-classification tasks in practice. In this paper, we develop a new algorithm called MultiTransferBoost on the basis of TransferBoost for multi-classification. MultiTransferBoost first separates the multi-classification problem into several orthogonal binary classification problems. During each iteration, MultiTransferBoost boosts weighted instances from different source domains, assigning and updating each instance's weight by evaluating the difficulty of classifying the instance correctly and the “transferability” of the instance's source domain to the target. The updating process repeats until the predefined training error or iteration number is reached. The weight update factors, which are analyzed and adjusted to minimize the Hamming loss of the output coding, strengthen the connections among the binary sub-problems during each iteration. Experimental results demonstrate that MultiTransferBoost has better classification performance and a smaller computational burden than existing instance-based algorithms using the One-Against-One (OAO) strategy.

  • TOA Based Recalibration Systems for Improving LOS/NLOS Identification

    Yu Min HWANG  Yuchan SONG  Kwang Yul KIM  Yong Sin KIM  Jae Seang LEE  Yoan SHIN  Jin Young KIM  

     
    LETTER-Communication Theory and Signals

      Vol:
    E100-A No:5
      Page(s):
    1267-1270

    In this paper, we propose a non-cooperative line-of-sight (LOS)/non-LOS channel identification algorithm based on time-of-arrival statistics from single-node channel measurements. In order to improve the accuracy of channel identification, we incorporate into the proposed algorithm a recalibration interval defined in terms of measured distance. Experimental results are presented in terms of identification probability and recalibration interval. The proposed algorithm involves a trade-off between channel identification quality and recalibration rate. However, depending on the recalibration interval, it is possible to greatly improve the sensitivity of the channel identification system.

  • Proposal of Dehazing Method and Quantitative Index for Evaluation of Haze Removal Quality

    Yi RU  Go TANAKA  

     
    PAPER-Image

      Vol:
    E100-A No:4
      Page(s):
    1045-1054

    When haze exists in an image of an outdoor scene, the visibility of objects in the image is deteriorated. In recent years, to improve the visibility of objects in such images, many dehazing methods have been investigated. Most of the methods are based on the atmospheric scattering model. In such methods, the transmittance and global atmospheric light are estimated from an input image and a dehazed image is obtained by substituting them into the model. To estimate the transmittance and global atmospheric light, the dark channel prior is a major and powerful concept that is employed in many dehazing methods. In this paper, we propose a new dehazing method in which the degree of haze removal can be adjusted by changing its parameters. Our method is also based on the atmospheric scattering model and employs the dark channel prior. In our method, the estimated transmittance is adjusted to a more suitable value by a transform function. By choosing appropriate parameter values for each input image, good haze removal results can be obtained by our method. In addition, a quantitative index for evaluating the quality of a dehazed image is proposed in this paper. It can be considered that haze removal is a type of saturation enhancement. On the other hand, an output image obtained using the atmospheric scattering model is generally darker than the input image. Therefore, we evaluate the quality of dehazed images by considering the balance between the brightness and saturation of the input and output images. The validity of the proposed index is examined using our dehazing method. Then a comparison between several dehazing methods is carried out using the index. Through these experiments, the effectiveness of our dehazing method and the quantitative index is confirmed.
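
The atmospheric-scattering/dark-channel pipeline that this method builds on can be sketched as follows (a minimal illustration with invented parameter values such as omega=0.95, not the authors' implementation, which additionally adjusts the estimated transmittance with a parameterized transform function):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a local minimum filter."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    dc = dark_channel(img)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    flat = dc.ravel()
    idx = flat.argsort()[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmittance estimate via the dark channel prior.
    t = 1.0 - omega * dark_channel(img / A)
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

The proposed method inserts a transform between the dark-channel transmittance estimate and the model inversion so that the degree of haze removal becomes adjustable.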

  • Measurement and Stochastic Modeling of Vertical Handover Interruption Time of Smartphone Real-Time Applications on LTE and Wi-Fi Networks

    Sungjin SHIN  Donghyuk HAN  Hyoungjun CHO  Jong-Moon CHUNG  

     
    PAPER-Network

      Publicized:
    2016/11/16
      Vol:
    E100-B No:4
      Page(s):
    548-556

    Due to the rapid growth of applications based on the Internet of Things (IoT) and real-time communications, mobile traffic is increasing exponentially. In highly populated areas, sudden concentration of traffic from numerous mobile users can cause radio resource shortage, where traffic offloading is essential in preventing overload problems. Vertical handover (VHO) technology, which supports seamless connectivity across heterogeneous wireless networks, is a core technology of traffic offloading. In VHO, minimizing service interruption is a key design factor, since service interruption deteriorates service performance and degrades user experience (UX). Although 3GPP standard VHO procedures are designed to prevent service interruption, severe quality of service (QoS) degradation and interruption can occur in real network environments due to unintended disconnections from the serving base station (BS) or access point (AP). In this article, the average minimum handover interruption time (HIT) (i.e., the guaranteed HIT influence) between LTE and Wi-Fi VHO is analyzed and measured based on 3GPP VHO access and decision procedures. In addition, the key parameters and procedures that affect HIT performance are analyzed, and a reference probability density function (PDF) for HIT prediction is derived using Kolmogorov-Smirnov test techniques.

  • Perceptual Distributed Compressive Video Sensing via Reweighted Sampling and Rate-Distortion Optimized Measurements Allocation

    Jin XU  Yan ZHANG  Zhizhong FU  Ning ZHOU  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2017/01/06
      Vol:
    E100-D No:4
      Page(s):
    918-922

    Distributed compressive video sensing (DCVS) is a new paradigm for low-complexity video compression. To achieve the highest possible perceptual coding performance under the measurements budget constraint, we propose a perceptually optimized DCVS codec that jointly exploits reweighted sampling and rate-distortion optimized measurements allocation. A visual saliency modulated just-noticeable distortion (VS-JND) profile is first developed based on the side information (SI) at the decoder side. Then the estimated correlation noise (CN) between each non-key frame and its SI is suppressed by the VS-JND. Subsequently, the suppressed CN is utilized to determine the weighting matrix for the reweighted sampling as well as to design a perceptual rate-distortion optimization model that calculates the optimal measurements allocation for each non-key frame. Experimental results indicate that the proposed DCVS codec outperforms other existing DCVS codecs in terms of both objective and subjective performance.

  • Analyzing Temporal Dynamics of Consumer's Behavior Based on Hierarchical Time-Rescaling

    Hideaki KIM  Noriko TAKAYA  Hiroshi SAWADA  

     
    PAPER

      Publicized:
    2016/10/13
      Vol:
    E100-D No:4
      Page(s):
    693-703

    Improvements in information technology have made it easier for industry to communicate with their customers, raising hopes for a scheme that can estimate when customers will want to make purchases. Although a number of models have been developed to estimate the time-varying purchase probability, they are based on very restrictive assumptions such as preceding purchase-event dependence and discrete-time effect of covariates. Our preliminary analysis of real-world data finds that these assumptions are invalid: self-exciting behavior, as well as marketing stimulus and preceding purchase dependence, should be examined as possible factors influencing purchase probability. In this paper, by employing the novel idea of hierarchical time rescaling, we propose a tractable but highly flexible model that can meld various types of intrinsic history dependency and marketing stimuli in a continuous-time setting. By employing the proposed model, which incorporates the three factors, we analyze actual data, and show that our model has the ability to precisely track the temporal dynamics of purchase probability at the level of individuals. It enables us to take effective marketing actions such as advertising and recommendations on timely and individual bases, leading to the construction of a profitable relationship with each customer.

  • Industry Application of Software Development Task Measurement System: TaskPit

    Pawin SUTHIPORNOPAS  Pattara LEELAPRUTE  Akito MONDEN  Hidetake UWANO  Yasutaka KAMEI  Naoyasu UBAYASHI  Kenji ARAKI  Kingo YAMADA  Ken-ichi MATSUMOTO  

     
    PAPER-Software Engineering

      Publicized:
    2016/12/20
      Vol:
    E100-D No:3
      Page(s):
    462-472

    To identify problems in a software development process, we have been developing an automated measurement tool called TaskPit, which monitors software development tasks such as programming, testing and documentation based on the execution history of software applications. This paper introduces the system requirements, design and implementation of TaskPit; it then presents two real-world case studies applying TaskPit to actual software development. In the first case study, we applied TaskPit to 12 software developers in a certain software development division. As a result, several concerns (to be improved) were revealed: (a) a project leader spent too much time on development tasks although he was supposed to be a manager rather than a developer, (b) several developers rarely used e-mail despite the company's instruction to use e-mail as much as possible to leave communication records during development, and (c) several developers wrote overly long e-mails to their customers. In the second case study, we recorded the planned, actual, and self-reported time of development tasks. As a result, we found that (d) there were unplanned tasks on more than half of the days, and (e) the declared time became closer day by day to the actual time measured by TaskPit. These findings suggest that TaskPit is useful not only for a project manager who is responsible for process monitoring and improvement but also for a developer who wants to improve by him/herself.

  • Controllability Analysis of Aggregate Demand Response System in Multiple Price-Change Situation

    Kazuhiro SATO  Shun-ichi AZUMA  

     
    PAPER

      Vol:
    E100-A No:2
      Page(s):
    376-384

    This paper studies the controllability of an aggregate demand response system, i.e., the amount of change in the total electric consumption in response to a change in the electric price, for real-time pricing (RTP). In order to quantify the controllability, this paper defines the controllability index as the lowest occurrence probability of the total electric consumption when the best possible electric price is chosen. The paper then formulates the problem of finding the consumer group that maximizes the controllability index. The controllability problem becomes hard to solve as the number of consumers increases. To give a solution of the controllability problem, the article approximates the controllability index by the generalized central limit theorem. Using the approximated controllability index, the controllability problem can be reduced to a problem of solving nonlinear equations. Since the number of variables of the equations is independent of the number of consumers, an approximate solution of the controllability problem is obtained by numerically solving the equations.

  • Analyzing Fine Motion Considering Individual Habit for Appearance-Based Proficiency Evaluation

    Yudai MIYASHITA  Hirokatsu KATAOKA  Akio NAKAMURA  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2016/10/18
      Vol:
    E100-D No:1
      Page(s):
    166-174

    We propose an appearance-based proficiency evaluation methodology based on fine-motion analysis. We consider the effects of individual habit in evaluating proficiency and analyze the fine motion of guitar picking. We first extract multiple features on a large number of dense trajectories of fine motion. To facilitate analysis, we then generate a histogram of motion features using a bag-of-words model and change the number of visual words as appropriate. To remove the effects of individual habit, we extract the principal histogram elements common to experts or to beginners according to their contribution rates to discrimination, computed using random forests. We finally calculate the similarity of the histograms to evaluate the proficiency of a guitar-picking motion. By optimizing the number of visual words for proficiency evaluation, we demonstrate that our method distinguishes experts from beginners with an accuracy of about 86%. Moreover, we verify experimentally that our proposed methodology can evaluate proficiency while removing the effects of individual habit.

  • A Free Space Permittivity Measurement at Microwave Frequencies for Solid Materials

    An Ngoc NGUYEN  Hiroshi SHIRAI  

     
    PAPER

      Vol:
    E100-C No:1
      Page(s):
    52-59

    A broadband approach to estimating the relative permittivity of dielectric cuboids has been proposed for materials with weak frequency-dispersive characteristics. Our method involves a numerical iterative scheme with appropriate initial values carefully selected to solve for the relative permittivity over a wide range of frequencies. Good agreement between our method and references has been observed for nylon and acrylic samples. An applicable-range relation between the minimal thickness, the frequency range and the dielectric property of the material has also been discussed.

  • Reciprocity Theorems and Their Application to Numerical Analysis in Grating Theory

    Junichi NAKAYAMA  Yasuhiko TAMURA  

     
    PAPER

      Vol:
    E100-C No:1
      Page(s):
    3-10

    This paper deals with the diffraction of a monochromatic plane wave by a periodic grating. We discuss the problem of how to obtain a numerical diffraction efficiency (NDE) satisfying the reciprocity theorem for diffraction efficiencies, because diffraction efficiencies are the subject of the diffraction theories. First, this paper introduces a new formula that decomposes an NDE into two components: an even component and an odd one. The former satisfies the reciprocity theorem for diffraction efficiencies, but the latter does not. Therefore, the even component of an NDE becomes an answer to our problem, while the odd component represents an unwanted error. Using this decomposition formula, we then obtain another new formula that decomposes the conventional energy error into two components. One is the energy error made by the even components of NDEs. The other is the energy error constructed from the unwanted odd ones, and it may be used as a reciprocity criterion for a numerical solution. This decomposition formula reveals a drawback of the conventional energy balance. The total energy error is newly introduced as a stricter condition for a desirable solution. We point out theoretically that the reciprocal wave solution, an approximate solution satisfying reciprocity for wave fields, gives another solution to our problem. Numerical examples are given for the diffraction of a TM plane wave by a very rough periodic surface with perfect conductivity. In the case of a numerical solution by the image integral equation of the second kind, we found that the energy error is much reduced by using the even component of an NDE as an approximate diffraction efficiency or by using a reciprocal wave solution.

  • A Fast Single Image Haze Removal Method Based on Human Retina Property

    Xin NING  Weijun LI  Wenjie LIU  

     
    LETTER-Pattern Recognition

      Publicized:
    2016/10/13
      Vol:
    E100-D No:1
      Page(s):
    211-214

    In this letter, a novel and highly efficient algorithm is proposed for haze removal from only a single input image. The proposed algorithm is built on the atmospheric scattering model. Firstly, the global atmospheric light is estimated and a coarse atmospheric veil is inferred based on statistics of the dark channel prior. Secondly, the coarse atmospheric veil is refined by using a fast Tri-Gaussian filter based on a human retina property. To avoid halo artefacts, we then redefine the scene albedo. Finally, the haze-free image is derived by inverting the atmospheric scattering model. Results on some challenging foggy images demonstrate that the proposed method can not only improve the contrast and visibility of the restored image but also expedite the process.

  • A Highly-Adaptable and Small-Sized In-Field Power Analyzer for Low-Power IoT Devices

    Ryosuke KITAYAMA  Takashi TAKENAKA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER

      Vol:
    E99-A No:12
      Page(s):
    2348-2362

    Power analysis of IoT devices is strongly required to protect them against attacks from malicious attackers. It is also very important to reduce the power consumption of IoT devices themselves. In this paper, we propose a highly adaptable and small-sized in-field power analyzer for low-power IoT devices. The proposed power analyzer has the following advantages: (A) it realizes signal-averaging noise reduction with synchronization signal lines and thus can reduce noise over a wide frequency range; (B) it partitions a long-term power analysis process into several analysis segments and measures the voltages and currents of each analysis segment using a small amount of data memory, and by combining these analysis segments we can obtain long-term analysis results; (C) it has two amplifiers that amplify current signals adaptively depending on their magnitude, so the maximum readable current can be increased while keeping the minimum readable current small enough. Since none of (A), (B) and (C) requires complicated mechanisms or circuits, the proposed power analyzer is implemented on just a 2.5cm×3.3cm board, the smallest among existing power analyzers for IoT devices. We have measured the power and energy consumption of the AES encryption process on an IoT device and demonstrated that the proposed power analyzer has measurement errors of only up to 1.17% compared to a high-precision oscilloscope.

  • Hardware-Efficient Local Extrema Detection for Scale-Space Extrema Detection in SIFT Algorithm

    Kazuhito ITO  Hiroki HAYASHI  

     
    LETTER

      Vol:
    E99-A No:12
      Page(s):
    2507-2510

    In this paper, a hardware-efficient local extrema detection (LED) method for scale-space extrema detection in the SIFT algorithm is proposed. By reformulating the reuse of intermediate results in taking the local maximum and minimum, the operations required in LED are reduced without degrading the detection accuracy. The proposed method requires 25% to 35% less logic resources than the conventional method when implemented in an FPGA, with a slight increase in latency.
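
For reference, the scale-space extrema test that LED implements compares each sample of the difference-of-Gaussians (DoG) pyramid with its 26 neighbors in a 3x3x3 cube across adjacent scales; a direct software version (illustrative only, not the paper's hardware-optimized formulation) is:

```python
import numpy as np

def is_local_extremum(dog, s, i, j):
    """Return True if dog[s, i, j] is a strict maximum or minimum of its
    3x3x3 neighborhood across scales s-1, s, s+1 (the SIFT extrema test).
    Assumes (s, i, j) is at least one sample away from every border."""
    center = dog[s, i, j]
    cube = dog[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]
    # Exclude the center (flat index 13 of the 27-element cube) and compare
    # against the max and min of the remaining 26 neighbors.
    neighbors = np.delete(cube.ravel(), 13)
    return bool(center > neighbors.max() or center < neighbors.min())
```

The paper's contribution is to share the intermediate max/min results between overlapping neighborhoods so that far fewer comparators are needed than this naive per-sample test implies.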

  • Time Delay Estimation via Co-Prime Aliased Sparse FFT

    Bei ZHAO  Chen CHENG  Zhenguo MA  Feng YU  

     
    LETTER-Digital Signal Processing

      Vol:
    E99-A No:12
      Page(s):
    2566-2570

    Cross correlation is a general way to estimate the time delay of arrival (TDOA), with a computational complexity of O(n log n) using the fast Fourier transform. However, since only one spike is required for time delay estimation, the complexity can be further reduced. Guided by the Chinese Remainder Theorem (CRT), this paper presents a new approach called Co-prime Aliased Sparse FFT (CASFFT) requiring O(n^(1-1/d) log n) multiplications and O(mn) additions, where m is the smooth factor and d is the stage number. By adjusting these parameters, it can achieve a balance between runtime and noise robustness. Furthermore, it has a clear advantage in parallelism and runtime for a large range of signal-to-noise ratio (SNR) conditions. The accuracy and feasibility of this algorithm are analyzed in theory and verified by experiment.
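
The O(n log n) cross-correlation baseline that CASFFT accelerates can be written in a few lines (a generic frequency-domain sketch, not the CASFFT algorithm itself):

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate the circular delay d such that y[t] ~ x[t - d] by locating
    the peak of the cross-correlation, computed via FFT in O(n log n)."""
    X = np.fft.fft(x)
    Y = np.fft.fft(y)
    corr = np.fft.ifft(X.conj() * Y).real  # circular cross-correlation
    return int(np.argmax(corr))
```

Per the abstract, CASFFT avoids computing the full spectra above, exploiting co-prime aliasing to recover only the single correlation spike at a lower multiplication cost.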
