
Keyword Search Result

[Keyword] quality(483hit)

141-160hit(483hit)

  • Local Information, Observable Parameters, and Global View Open Access

    Hiroshi SAITO  

     
    INVITED PAPER

      Vol:
    E96-B No:12
      Page(s):
    3017-3027

The “Blind Men and an Elephant” is an old Indian story about a group of blind men who encounter an elephant and do not know what it is. This story describes the difficulty of understanding a large concept, or global view, based only on local information. Modern technologies enable us to easily obtain and retain local information. However, simply collecting local information does not give us a global view, as is evident in this old story. This paper gives a concrete model of this story on the plane so that it can be discussed theoretically and mathematically, and analyzes what information we can obtain from the collected local information. For a convex target object modeling the elephant and a convex sensing area, it is proven that the size and perimeter length of the target object are the only parameters that can be observed by randomly deployed sensors modeling the blind men. To increase the number of observable parameters, this paper argues that non-convex sensing areas are important and introduces composite sensor nodes as an approach to implementing non-convex sensing areas. The paper also derives a model on the discrete space and analyzes it. The analysis results on the discrete space are applicable to some network-related issues, such as link-quality estimation in a part of a network based on end-to-end probing.

  • On Global Exponential Stabilization of a Class of Nonlinear Systems by Output Feedback via Matrix Inequality Approach

    Min-Sung KOO  Ho-Lim CHOI  

     
    LETTER-Systems and Control

      Vol:
    E96-A No:10
      Page(s):
    2034-2038

In this letter, we consider the global exponential stabilization problem by output feedback for a class of nonlinear systems. Along with a newly proposed matrix-inequality condition, the proposed control method offers more flexibility in dealing with nonlinearity than existing methods. Analysis and examples are given to illustrate the improved features of our control method.

  • Design Requirements for Improving QoE of Web Service Using Time-Fillers

    Sumaru NIIDA  Satoshi UEMURA  Etsuko T. HARADA  

     
    PAPER-Network

      Vol:
    E96-B No:8
      Page(s):
    2069-2075

As mobile multimedia services expand, user behavior will become more diverse, and controlling service quality from the user's perspective will become more important in service design. Network quality is one of the critical factors determining mobile service quality. However, it has mainly been evaluated in objective physical terms, such as delay reduction and bandwidth expansion; it is less common to take a human-centered design viewpoint when improving network performance. In this paper, we discuss ways to improve the quality of web services using time-fillers that actively address human factors to improve the subjective quality of a mobile network. A field experiment was conducted using a prototype. The results show that time-fillers can significantly decrease user dissatisfaction with waiting, but that this effect is strongly influenced by user preferences concerning content. Based on these results, we discuss the design requirements for effective use of time-fillers.

  • Deterministic Packet Buffer System with Multi FIFO Queues for the Advanced QoS

    Hisashi IWAMOTO  Yuji YANO  Yasuto KURODA  Koji YAMAMOTO  Shingo ATA  Kazunari INOUE  

     
    PAPER-Network System

      Vol:
    E96-B No:7
      Page(s):
    1819-1825

Network traffic keeps increasing due to the growing popularity of video streaming services. Routers and switches in wire-line networks require guaranteed line rates as high as 20 Gbit/s as well as advanced quality of service (QoS). A hybrid SRAM/DRAM architecture was previously presented with the benefits of high speed and high density, but it requires complex memory management. As a result, it has hardly supported large numbers of queues, which are an effective approach to satisfying QoS requirements. This paper proposes an intelligent memory management unit (MMU) based on the hybrid architecture, in which over 16 k queues are integrated. The performance examined on the system board is zero packet loss under seamless traffic with 64-Byte to 1.5-kByte packet lengths (a deterministic manner). A notable feature of this architecture is that it eliminates the need for premium memories; only low-cost commodity SRAMs and DRAMs are used. The intelligent MMU employs a head-buffer architecture, which is suitable for supporting a large number of FIFO queues. An experimental board based on this architecture is embedded into a router system to evaluate the performance. Using 16 k queues at 20 Gbit/s, zero packet loss is confirmed with 64-Byte to 1,500-Byte packet lengths.

  • In-Service Video Quality Verifying Using DCT Basis for DTV Broadcasting

    Byeong-No KIM  Chan-Ho HAN  Kyu-Ik SOHNG  

     
    BRIEF PAPER-Electronic Instrumentation and Control

      Vol:
    E96-C No:7
      Page(s):
    1028-1031

    We propose a composite DCT basis line test signal to evaluate the video quality of a DTV encoder. The proposed composite test signal contains a frame index, a calibration square wave, and 7-field basis signals. The results show that the proposed method may be useful for an in-service video quality verifier, using an ordinary oscilloscope instead of special equipment.

  • LDR Image to HDR Image Mapping with Overexposure Preprocessing

    Yongqing HUO  Fan YANG  Vincent BROST  Bo GU  

     
    PAPER

      Vol:
    E96-A No:6
      Page(s):
    1185-1194

Due to the growing popularity of High Dynamic Range (HDR) images and HDR displays, a large number of existing Low Dynamic Range (LDR) images need to be converted to HDR format to benefit from HDR's advantages, which has given rise to a number of LDR-to-HDR algorithms. Most of these algorithms apply special treatment to overexposed areas during expansion, which can make the image quality worse than before processing and can introduce artifacts. To avoid these problems, we present a new LDR-to-HDR approach that, unlike existing techniques, avoids sophisticated treatment of overexposed areas in the dynamic-range expansion step. Based on a separation principle, the overexposed areas are first classified, according to the familiar types of overexposure, into two categories, which are removed and corrected respectively by two kinds of techniques. Secondly, to maintain color consistency, color recovery is carried out on the preprocessed images. Finally, the LDR image is expanded to HDR. Experiments show that the proposed approach performs well and the produced images are more favorable and suitable for applications. An image quality metric also shows that our approach reveals more details without causing the artifacts introduced by other algorithms.

  • Perceptual Distortion Measure for Polygon-Based Shape Coding

    Zhongyuan LAI  Wenyu LIU  Fan ZHANG  Guang CHENG  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E96-D No:3
      Page(s):
    750-753

In this paper, we present a perceptual distortion measure (PDM) for polygon-based shape coding. We model the PDM as the salience of the relevance triangle and express it using three properties derived from the salience of visual parts. Performance analysis and experimental results show that our proposal can improve the quality of the shape reconstruction when the object contour has sharp protrusions.

  • A Reduced-Reference Video Quality Assessment Method Based on the Activity-Difference of DCT Coefficients

    Wyllian B. da SILVA  Keiko V. O. FONSECA  Alexandre de A. P. POHL  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E96-D No:3
      Page(s):
    708-718

A simple and efficient reduced-reference video quality assessment method based on the activity difference of DCT coefficients is proposed. The method provides better accuracy, monotonicity, and more consistent predictions than the full-reference PSNR metric, and results comparable to the full-reference SSIM. It also shows improved performance over a similar VQA technique based on pixel-luminance differences computed in the spatial domain.
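The abstract does not define the activity feature precisely; the sketch below is an illustrative interpretation, not the authors' exact definition. It computes a block-DCT "activity" as the mean absolute AC energy over 8×8 blocks and compares it between reference and degraded frames. Only one scalar per frame would need to be transmitted, which is what makes such a metric reduced-reference.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, in matrix form."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)          # DC row scaling for orthonormality
    return C @ block @ C.T

def activity(frame, bs=8):
    """Mean absolute AC energy of the block DCT (hypothetical feature)."""
    h, w = frame.shape
    acts = []
    for y in range(0, h - h % bs, bs):
        for x in range(0, w - w % bs, bs):
            c = dct2(frame[y:y + bs, x:x + bs].astype(np.float64))
            acts.append(np.abs(c).sum() - abs(c[0, 0]))  # drop the DC term
    return float(np.mean(acts))

def activity_difference(ref_frame, deg_frame):
    # The reference side is reduced to a single scalar feature.
    return abs(activity(ref_frame) - activity(deg_frame))
```

A flat block has zero AC energy, so a constant frame yields zero activity, while any structural distortion moves the feature away from the reference value.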

  • The Impact of Information Quality on Quality of Life: An Information Quality Oriented Framework Open Access

    Markus HELFERT  Ray WALSHE  Cathal GURRIN  

     
    INVITED PAPER

      Vol:
    E96-B No:2
      Page(s):
    404-409

Information affects almost all aspects of life, and thus Information Quality (IQ) plays a critical role in businesses and societies; it can have significant positive and negative impacts on the quality of life of citizens, employees, and organizations. Over many years, aspects and challenges of IQ have been studied within various contexts. As a result, the study of IQ has produced numerous management and measurement approaches, IQ frameworks, and lists of IQ criteria. As the volume of data and information increases, IQ problems become pervasive. Whereas earlier studies investigated specific aspects of IQ, the next phase of IQ research will need to examine IQ in a wider context, including its impact on the quality of life and on societies. In this paper we apply an IQ-oriented framework to two cases, cloud computing and lifelogging, illustrating the impact of IQ on the quality of life. The paper demonstrates the value of the framework and the impact IQ can have on the quality of life, and provides a foundation for further research.

  • Subjective Quality Metric for 3D Video Services

    Kazuhisa YAMAGISHI  Taichi KAWANO  Takanori HAYASHI  Jiro KATTO  

     
    PAPER

      Vol:
    E96-B No:2
      Page(s):
    410-418

    Three-dimensional (3D) video service is expected to be introduced as a next-generation television service. Stereoscopic video is composed of two 2D video signals for the left and right views, and these 2D video signals are encoded. Video quality between the left and right views is not always consistent because, for example, each view is encoded at a different bit rate. As a result, the video quality difference between the left and right views degrades the quality of stereoscopic video. However, these characteristics have not been thoroughly studied or modeled. Therefore, it is necessary to better understand how the video quality difference affects stereoscopic video quality and to model the video quality characteristics. To do that, we conducted subjective quality assessments to derive subjective video quality characteristics. The characteristics showed that 3D video quality was affected by the difference in video quality between the left and right views, and that when the difference was small, 3D video quality correlated with the highest 2D video quality of the two views. We modeled these characteristics as a subjective quality metric using a training data set. Finally, we verified the performance of our proposed model by applying it to unknown data sets.
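The reported characteristic — 3D quality tracks the better view when the left/right quality gap is small and degrades as the gap grows — can be illustrated with a toy function. The threshold and penalty slope below are hypothetical, not the paper's fitted parameters.

```python
def stereo_quality(q_left, q_right, small_gap=0.5, penalty=0.5):
    """Toy illustration of the reported 3D-quality characteristic.

    q_left, q_right: 2D quality scores of the two views (e.g. MOS on a
    1-5 scale). small_gap and penalty are hypothetical constants.
    """
    gap = abs(q_left - q_right)
    best = max(q_left, q_right)
    if gap <= small_gap:
        return best                      # small gap: follow the better view
    return best - penalty * (gap - small_gap)  # large gap: pull quality down
```

With a 0.2 gap the function returns the better view's score unchanged; with a 2.0 gap it subtracts a penalty proportional to the excess gap.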

  • An Incentive-Compatible Load Distribution Approach for Wireless Local Area Networks with Usage-Based Pricing

    Bo GU  Kyoko YAMORI  Sugang XU  Yoshiaki TANAKA  

     
    PAPER

      Vol:
    E96-B No:2
      Page(s):
    451-458

Recent studies have shown that traffic load is often distributed unevenly among access points. Such load imbalance results in ineffective bandwidth utilization, and both could be alleviated by intelligently selecting user-AP associations. In this paper, the diversity in users' utilities is fully taken into account, and a Stackelberg leader-follower game is formulated to obtain the optimal user-AP association. The effectiveness of the proposed algorithm in improving load balance is evaluated via simulations. Simulation results show that its performance is superior to, or at least comparable with, the best existing algorithms.

  • A Method for Improving TIE-Based VQ Encoding Introducing RI Rules

    Chi-Jung HUANG  Shaw-Hwa HWANG  Cheng-Yu YEH  

     
    LETTER-Pattern Recognition

      Vol:
    E96-D No:1
      Page(s):
    151-154

This study proposes an improvement to the Triangular Inequality Elimination (TIE) algorithm for vector quantization (VQ). The proposed approach uses recursive and intersection (RI) rules to compensate for and enhance the TIE algorithm. The recursive rule changes reference codewords dynamically and produces the smallest candidate group. The intersection rule removes redundant codewords from these candidate groups. The RI-TIE approach avoids over-reliance on the continuity of the input signal. This study tests the contribution of the RI rules using the VQ-based G.729 standard LSP encoder and some classic images. Results show that the RI rules perform excellently within the TIE algorithm.
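The core TIE pruning test can be sketched as follows. Given precomputed distances from a reference codeword, the triangle inequality gives the lower bound d(x, c_i) ≥ |d(x, c_ref) − d(c_ref, c_i)|, so any candidate whose bound already exceeds the current best distance is skipped without a full distance computation. This is a minimal sketch of plain TIE, not the paper's RI extension; the function name and interface are illustrative.

```python
import numpy as np

def tie_nearest(x, codebook, ref_idx=0):
    """Nearest-codeword search with Triangular Inequality Elimination.

    codebook: (N, d) array of codewords. The d(c_ref, c_i) table would be
    precomputed once per codebook in a real encoder.
    """
    ref = codebook[ref_idx]
    d_ref = np.linalg.norm(codebook - ref, axis=1)  # d(c_ref, c_i), precomputable
    d_x_ref = np.linalg.norm(x - ref)               # d(x, c_ref)
    best_idx, best_d = ref_idx, d_x_ref
    for i, c in enumerate(codebook):
        if i == ref_idx:
            continue
        if abs(d_x_ref - d_ref[i]) >= best_d:       # TIE pruning test
            continue                                # cannot beat current best
        d = np.linalg.norm(x - c)
        if d < best_d:
            best_idx, best_d = i, d
    return best_idx, best_d
```

The pruned search returns the same nearest codeword as an exhaustive search; only the number of full distance evaluations changes.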

  • Modeling and Algorithms for QoS-Aware Service Composition in Virtualization-Based Cloud Computing

    Jun HUANG  Yanbing LIU  Ruozhou YU  Qiang DUAN  Yoshiaki TANAKA  

     
    PAPER

      Vol:
    E96-B No:1
      Page(s):
    10-19

Cloud computing is an emerging computing paradigm that may have a significant impact on various aspects of the development of information infrastructure. In a Cloud environment, different types of network resources need to be virtualized as a series of service components by network virtualization, and these service components should be further composed into Cloud services provided to end users. Therefore, Quality of Service (QoS)-aware service composition plays a crucial role in Cloud service provisioning. This paper addresses the problem of how to compose a sequence of service components for QoS-guaranteed service provisioning in a virtualization-based Cloud computing environment. The contributions of this paper include a system model for Cloud service provisioning and two approximation algorithms for QoS-aware service composition. Specifically, a system model is first developed to characterize service provisioning behavior in virtualization-based Cloud computing; then a novel approximation algorithm and a variant of a well-known QoS routing procedure are presented to solve QoS-aware service composition. Theoretical analysis shows that the two algorithms have the same level of time complexity. A comparison study based on simulation experiments indicates that the proposed novel algorithm achieves better time efficiency and scalability without compromising solution quality. The modeling technique and algorithms developed in this paper are general and effective, and are thus applicable to practical Cloud computing systems.

  • Robustness of Image Quality Factors for Environment Illumination

    Shogo MORI  Gosuke OHASHI  Yoshifumi SHIMODAIRA  

     
    LETTER-Image

      Vol:
    E95-A No:12
      Page(s):
    2498-2501

    This study examines the robustness of image quality factors in various types of environment illumination using a parameter design in the field of quality engineering. Experimental results revealed that image quality factors are influenced by environment illuminations in the following order: minimum luminance, maximum luminance and gamma.

  • Theoretical Considerations for Maintaining the Performance of Composite Web Services

    Shinji KIKUCHI  Yoshihiro KANNA  Yohsuke ISOZAKI  

     
    PAPER-Data Engineering, Web Information Systems

      Vol:
    E95-D No:11
      Page(s):
    2634-2650

In recent years, there has been increasing demand for elemental services provided by independent firms for composing new services. Currently, however, whenever it is difficult to maintain the required level of quality of a new composite web service, assigning new computing resources as provisioning at the data center is not always effective for composite web service providers, especially with regard to performance. Thus, a new approach might be required. This paper presents a new control method that aims to maintain the performance requirements of composite web services. Three aspects of our method are applied: first, the theory of constraints (TOC) proposed by E.M. Goldratt; secondly, an evaluation process in a non-linear feed-forward control method; and finally, multiple trials in applying policies with verification. In particular, we discuss the architectural and theoretical aspects of the method in detail, and show, as a result of our evaluation, the insufficiency of combining the feedback control approach with TOC.

  • No-Reference Quality Estimation for Video-Streaming Services Based on Error-Concealment Effectiveness

    Toru YAMADA  Yoshihiro MIYAMOTO  Takao NISHITANI  

     
    PAPER-Multimedia Environment Technology

      Vol:
    E95-A No:11
      Page(s):
    2007-2014

This paper proposes a video-quality estimation method based on a no-reference model for real-time quality monitoring in video-streaming services. The proposed method analyzes both bitstream information and decoded pixel information to estimate video-quality degradation caused by transmission errors. Video quality, in terms of the mean squared error (MSE) between degraded and error-free video frames, is estimated from the number of impaired macroblocks in which the quality degradation could not be concealed. Error-concealment effectiveness is evaluated using motion information and luminance discontinuity at the boundaries of impaired regions. Simulation results show a high correlation (correlation coefficient of 0.93) between the actual MSE and the number of macroblocks in which error concealment was not effective. These results show that the proposed method works well for real-time quality monitoring of video-streaming services.
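The luminance-discontinuity idea can be sketched as follows; the block size, threshold, and function names are illustrative assumptions, not the paper's exact procedure. An impaired macroblock whose boundary shows a large luminance jump is counted as one where concealment failed, and the abstract reports that such a count correlates with the true MSE at r = 0.93.

```python
import numpy as np

def boundary_discontinuity(frame, y, x, bs=16):
    """Mean absolute luminance jump across the top edge of the macroblock
    at (y, x). A large jump suggests error concealment failed there."""
    if y == 0:
        return 0.0
    inside = frame[y, x:x + bs].astype(np.float64)    # first row inside the MB
    outside = frame[y - 1, x:x + bs].astype(np.float64)  # row just above it
    return float(np.mean(np.abs(inside - outside)))

def count_failed_mbs(frame, impaired, bs=16, thresh=20.0):
    """Count impaired macroblocks whose boundary discontinuity exceeds a
    (hypothetical) threshold; impaired is a list of (y, x) MB origins."""
    return sum(1 for (y, x) in impaired
               if boundary_discontinuity(frame, y, x, bs) > thresh)
```

A well-concealed macroblock blends into its neighborhood and scores near zero; a badly concealed one leaves a visible edge and exceeds the threshold.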

  • Flow Control Scheme Using Adaptive Receiving Opportunity Control for Wireless Multi-Hop Networks

    Atsushi TAKAHASHI  Nobuyoshi KOMURO  Shiro SAKATA  Shigeo SHIODA  Tutomu MURASE  

     
    PAPER

      Vol:
    E95-B No:9
      Page(s):
    2751-2758

In wireless single-hop networks, IEEE 802.11e Enhanced Distributed Channel Access (EDCA) is the standard for Quality of Service (QoS) control. However, QoS control requires modifying not only Access Points (APs) but also the currently deployed IEEE 802.11 Distributed Coordination Function (DCF)-compliant terminals. In addition, the IEEE 802.11e EDCA parameters must be modified when traffic is heavy. This paper proposes a novel scheme that guarantees the QoS of high-priority flows by applying adaptive flow control based on Receiving Opportunity Control in MAC Frame (ROC) in wireless multi-hop networks. In the proposed scheme, the edge APs, which are directly connected to user terminals, estimate the network capacity and calculate an appropriate ACK-prevention probability for low-priority flows according to the traffic load. Simulation results show that the proposed scheme guarantees QoS.

  • A Study of Stereoscopic Image Quality Assessment Model Corresponding to Disparate Quality of Left/Right Image for JPEG Coding

    Masaharu SATO  Yuukou HORITA  

     
    LETTER-Quality Metrics

      Vol:
    E95-A No:8
      Page(s):
    1264-1269

Our research focuses on a stereoscopic quality assessment model for stereoscopic images with disparate quality in the left and right images for glasses-free stereo vision. In this paper, we examine an objective assessment model for 3-D images that considers the difference in image quality between the viewpoints generated by disparity-compensated coding. The overall stereoscopic image quality can be estimated using only the predicted 2-D image qualities of the left and right views, based on MPEG-7 descriptor information, without using any disparity information. As a result, stereoscopic still-image quality is assessed with high prediction accuracy (correlation coefficient = 0.98, average error = 0.17).

  • Reduced-Reference Objective Quality Assessment Model of Coded Video Sequences Based on the MPEG-7 Descriptor

    Masaharu SATO  Yuukou HORITA  

     
    LETTER-Quality Metrics

      Vol:
    E95-A No:8
      Page(s):
    1259-1263

Our research focuses on a video quality assessment model based on the MPEG-7 descriptor. Video quality is estimated using several features derived from the predicted frame quality, such as the average, worst, and best values and the standard deviation, together with the predicted frame rate obtained from the descriptor information. As a result, video quality can be assessed with high prediction accuracy (correlation coefficient = 0.94, standard deviation of error = 0.24, maximum error = 0.68, outlier ratio = 0.23).

  • A No Reference Metric of Video Coding Quality Based on Parametric Analysis of Video Bitstream

    Osamu SUGIMOTO  Sei NAITO  Yoshinori HATORI  

     
    PAPER-Quality Metrics

      Vol:
    E95-A No:8
      Page(s):
    1247-1255

In this paper, we propose a novel method for measuring the perceived picture quality of H.264-coded video based on parametric analysis of the coded bitstream. Parametric analysis means that the proposed method uses only bitstream parameters to evaluate video quality, without any access to the baseband signal (pixel-level information) of the decoded video. The proposed method extracts the quantizer scale, macroblock type, and transform coefficients from each macroblock. These parameters are used to calculate spatiotemporal image features that reflect the perception of coding artifacts, which are strongly related to subjective quality. A computer simulation shows that the proposed method estimates subjective quality with a correlation coefficient of 0.923, whereas the PSNR metric, used as a benchmark, correlates with subjective quality at a coefficient of 0.793.
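The PSNR benchmark mentioned above is the standard full-reference computation; a minimal sketch (the function name and 8-bit peak default are the usual conventions, not taken from the paper):

```python
import numpy as np

def psnr(ref, deg, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    degraded frame, assuming 8-bit luminance by default."""
    mse = np.mean((ref.astype(np.float64) - deg.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")               # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

For a uniform error of 16 gray levels, MSE = 256 and PSNR = 10·log10(255²/256) ≈ 24.05 dB.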
