Tung-Sheng CHIANG Chian-Song CHIU Peter LIU
This paper proposes a robust fuzzy integral controller for output regulation of a class of affine nonlinear systems subject to a bias reference to the origin. First, a common biased fuzzy model is introduced for a class of continuous/discrete-time affine nonlinear systems, such as dc-dc converters and robotic systems. Then, combining an integrator and parallel distributed compensators, the fuzzy integral regulator achieves asymptotic regulation. Moreover, when disturbances or unstructured uncertainties are considered, a virtual reference model is presented that provides a robust gain design via LMI techniques. In this case, H∞ performance is guaranteed. Note that information regarding the operating point and bias terms is not required during controller implementation. Thus, the controller can be applied to multi-task regulation. Finally, three numerical simulations show the expected results.
ChoonKi AHN SooHee HAN WookHyun KWON
This letter presents robustness bounds (RBs) for receding horizon controls (RHCs) of uncertain systems. The proposed RBs are obtained easily by solving convex problems represented by linear matrix inequalities (LMIs). We show, by numerical examples, that the RHCs can guarantee robust stabilization for a larger class of uncertain systems than conventional linear quadratic regulators (LQRs).
Hiroo SEKIYA Yoji ARIFUKU Hiroyuki HASE Jianming LU Takashi YAHAGI
This paper investigates the design curves of the class E amplifier with nonlinear capacitance for any output Q and finite dc-feed inductance. The important results are: 1) the capacitance nonlinearity strongly affects the design parameters for low Q, 2) the value of the dc-feed inductance is hardly affected by the capacitance nonlinearity, and 3) the input voltage is an important parameter in designing the class E amplifier with nonlinear capacitance. PSpice simulations show that the simulated results agree with the desired ones quantitatively. The design curves in this paper are expected to be useful guidelines for the design of class E amplifiers with nonlinear capacitance.
Ken UENO Tetsuya HIROSE Tetsuya ASAI Yoshihito AMEMIYA
We developed a CMOS watchdog sensor that simulates the changes in quality of perishables such as farm and marine products. The sensor can imitate a chemical reaction that causes the changes in the quality of perishables, over a wide range of activation energy from 0.1 eV to 0.7 eV. Attached to perishable goods, the sensor simulates the deterioration of the goods caused by the surrounding temperature. By reading the output of the sensor, consumers can determine whether the goods are fresh. The sensor consists of subthreshold CMOS circuits with a power consumption of 5 µW or less.
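The chemistry such a sensor imitates follows the Arrhenius law: the reaction rate grows exponentially with temperature for a given activation energy. The abstract does not give the sensor's internal model, so the sketch below is only an illustrative Arrhenius integration over a temperature history; the prefactor and sampling interval are hypothetical.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def reaction_progress(temps_kelvin, dt_hours, e_a, prefactor=1.0):
    """Integrate an Arrhenius-law reaction rate over a temperature history.

    temps_kelvin: sampled surrounding temperatures [K]
    dt_hours: sampling interval in hours (hypothetical)
    e_a: activation energy in eV (the sensor spans 0.1-0.7 eV)
    """
    progress = 0.0
    for t in temps_kelvin:
        rate = prefactor * math.exp(-e_a / (K_B * t))
        progress += rate * dt_hours
    return progress

# A day at 25 C advances the reaction far more than a day at 5 C,
# and the ratio grows with the activation energy.
warm = reaction_progress([298.15] * 24, 1.0, e_a=0.5)
cold = reaction_progress([278.15] * 24, 1.0, e_a=0.5)
```

At 0.5 eV the warm history advances roughly four times faster than the cold one, which is the temperature sensitivity the watchdog sensor has to reproduce electronically.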
Michio TSUDA Sadahiro ISHIKAWA Osamu OHNO Akira HARADA Mayumi TAKAHASHI Shinji KUSUMOTO Katsuro INOUE
It is commonly thought that CASE tools reduce programming effort and increase development productivity. However, no paper has provided quantitative data supporting this. This paper discusses productivity improvement through the use of an integrated CASE tool system named EAGLE (Effective Approach to Achieving High Level Software Productivity), as shown by various data collected at Hitachi from the 1980s to the 2000s. We evaluated productivity using three metrics: 1) program generation rate using reusable program skeletons and components, 2) fault density at two test phases, and 3) the learning curve for the education of inexperienced programmers. We show that productivity has been improved by the various facilities of EAGLE.
Yasuo SATO Shuji HAMADA Toshiyuki MAEDA Atsuo TAKATORI Seiji KAJIHARA
In this paper, we introduce a statistical quality model for delay testing that reflects fabrication process quality, design delay margin, and test timing accuracy. The model provides a measure that predicts the chip defect level caused by delay failures, including marginal small delays. We can therefore use the model to make test vectors that are effective in terms of both testing cost and chip quality. The results of experiments using ISCAS89 benchmark data and some large industrial design data reflect various characteristics of our statistical delay quality model.
Kazuhisa YAMAGISHI Takanori HAYASHI
We propose the concept of an opinion model for interactive multimodal services and apply it to an audiovisual communication service. First, psychological factors of an audiovisual communication service were extracted by using the semantic differential (SD) technique and factor analysis. Forty subjects participated in subjective tests and performed point-to-point conversational tasks on a PC-based video phone that exhibited various network qualities. The subjects assessed those qualities on the basis of 25 pairs of adjectives. Two psychological factors, i.e., an aesthetic feeling and a feeling of activity, were extracted from the results. Then, quality impairment factors affecting these two psychological factors were analyzed. We found that the aesthetic feeling was affected by IP packet loss and video coding bit rate, and the feeling of activity depended on delay time, video packet loss, video coding bit rate, and video frame rate. Using this result, we formulated an opinion model derived from the relationships among quality impairment factors, psychological factors, and overall quality. The validation test results indicated that the estimation error of our model was almost equivalent to the statistical reliability of the subjective score.
Gooyoun HWANG Jitae SHIN JongWon KIM
This paper introduces a network-aware video delivery framework in which the quality-of-service (QoS) interaction between prioritized packet video and a relative differentiated services (DiffServ) network is taken into account. Within this framework, we propose a dynamic class mapping (DCM) scheme that allows video applications to cope with service degradation and class-based resource constraints in a time-varying network environment. In the proposed scheme, an explicit congestion notification (ECN)-based feedback mechanism is used to promptly notify the end-host applications of the status of network classes and the received service quality. Based on the feedback information, the DCM agent at the ingress point dynamically re-maps each packet onto a network class in order to satisfy the desired QoS requirement. Simulation results verify the enhanced QoS performance of the streaming video application by comparing static class mapping with class re-mapping based on loss-driven feedback.
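The re-mapping decision itself can be sketched simply: given per-class loss feedback and a loss target derived from the application's QoS requirement, move to a better class when the target is violated and fall back when a cheaper class suffices. The function below is a hypothetical illustration, not the paper's DCM agent; class 0 is assumed to be the best class.

```python
def select_class(loss_feedback, loss_target, current):
    """Pick a network class from per-class loss feedback.

    loss_feedback: measured loss ratio per class, index 0 = best class.
    Upgrade when the current class violates the loss target; downgrade
    when the next cheaper class comfortably meets it.
    """
    if loss_feedback[current] > loss_target and current > 0:
        return current - 1   # current class too lossy: re-map upward
    nxt = current + 1
    if nxt < len(loss_feedback) and loss_feedback[nxt] <= loss_target / 2:
        return nxt           # cheaper class has ample headroom
    return current

# Class 2 is losing 5% of packets against a 2% target, so re-map to class 1.
new_class = select_class([0.001, 0.01, 0.05], loss_target=0.02, current=2)
```

The hysteresis (downgrading only at half the target) is one simple way to avoid oscillating between classes as feedback fluctuates.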
Patrick LE CALLET Christian VIARD-GAUDIN Stephane PECHARD Emilie CAILLAULT
This paper describes an objective measurement method designed to assess the perceived quality of digital videos. The proposed approach can be used either in the context of reduced-reference quality assessment or in the more challenging situation where no reference is available. It can thus be deployed in a QoS monitoring strategy in order to control end-user perceived quality. The originality of the approach lies in the very limited computational resources involved; such a system could be integrated quite easily into a real-time application. It uses a convolutional neural network (CNN) that allows continuous-time scoring of the video. Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mbit/s, show the effectiveness of the proposed approach. More specifically, a linear correlation criterion between objective and subjective scores ranging from 0.90 up to 0.95 has been obtained on a set of typical TV videos in the case of reduced-reference assessment. Without any reference to the original video, the correlation criterion remains quite satisfactory, still lying between 0.85 and 0.90, which is quite high with respect to the difficulty of the task, and equivalent to, or in some cases better than, the traditional PSNR, which is a full-reference measurement.
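The "linear correlation criterion" quoted above is the standard Pearson coefficient between the model's objective scores and the subjective scores. As a reminder of what a value of 0.90-0.95 measures, here is a minimal computation on hypothetical score lists (the numbers are invented, not the paper's data).

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation between objective and subjective scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical objective scores vs. subjective MOS for six sequences.
objective = [3.1, 3.8, 2.2, 4.5, 2.9, 4.0]
subjective = [3.0, 4.0, 2.5, 4.4, 3.1, 3.9]
r = pearson(objective, subjective)
```

A coefficient near 1.0 means the objective metric ranks and spaces the sequences essentially as human viewers do.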
Masataka MASUDA Takanori HAYASHI
With the increasing demand for IP telephony services using Voice over IP (VoIP) technology, techniques for monitoring speech quality in actual networks are required to manage the quality of VoIP services constantly. Since the speech quality of VoIP is affected by IP network performance factors, non-intrusive methods of monitoring the quality of service (QoS) by passively measuring network performance are attracting keen interest. VQmon is one such non-intrusive quality monitoring technology. Although VQmon's monitoring functions for packet arrival behavior at VoIP gateways are effective, its estimation algorithm does not take differences in the implementations of VoIP gateway products into account. We therefore propose a non-intrusive QoS monitoring method that works in conjunction with ITU-T Recommendation P.862 "PESQ" and takes the characteristics of VoIP gateway products into consideration. We compared the performance of non-intrusive quality monitoring technology such as VQmon with that of the proposed method in terms of the estimation accuracy of speech quality and mouth-to-ear delay. The experimental results revealed that the proposed method outperforms the conventional one, achieving sufficient accuracy for quality monitoring of VoIP services.
Hideaki YAMADA Norihiro FUKUMOTO
We present a quantitative evaluation of speech quality under a multiplexing scheme for the efficient transmission of voice signals, which reduces the number of IP packets carrying voice signals (VoIP packets). The multiplexing scheme is applicable to a variety of media gateways that control bulk voice streams over IP-based networks using VoIP technology. We speculated that the multiplexing scheme would reduce the degradation of speech quality due to packet loss, since it has an effect similar to interleaving the voice signal streams. However, this interleaving effect on speech quality, characteristic of IP-based multiplexing, has not been quantified. Through end-to-end speech quality evaluations of the multiplexing scheme using dedicated hardware, we confirm its advantages: improved speech quality without increased processing delay under practical packet loss conditions in an IP-based network.
Takanori HAYASHI Ginga KAWAGUTI Jun OKAMOTO Akira TAKAHASHI
This paper proposes a model for estimating the subjective quality of video streaming services with dynamic bit-rate control. In a subjective quality assessment test, we clarified users' perceptions of delivered video signals whose quality is time-variant due to dynamic bit-rate control. Using these results, we constructed an estimation model based on the following three characteristics: 1) video sections with large quality degradation strongly affect the overall quality, 2) the impression of past quality weakens with the passage of time, and 3) the range of evaluation scores becomes wider as the duration of an evaluation grows. We found that the proposed model dramatically improves the accuracy of estimating overall subjective quality compared with a model that averages segmental quality. The estimation error of the proposed model is less than the statistical reliability of the subjective score even for verification data. We also show that our findings are applicable to QoS design/management issues for video streaming services with dynamic bit-rate control.
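The abstract states the three characteristics but not the model's functional form. The toy pooling below is therefore only a stand-in illustrating the first two characteristics: exponential forgetting of older segments plus a worst-segment penalty, with invented weights.

```python
def pool_overall_quality(segment_mos, forget=0.9, worst_weight=0.5):
    """Toy pooling of per-segment MOS into an overall score.

    - exponential forgetting: older segments matter less (characteristic 2)
    - a worst-segment penalty so large degradations dominate (characteristic 1)
    This is an illustrative stand-in, not the authors' published model.
    """
    n = len(segment_mos)
    # the most recent segment gets weight 1; older ones are decayed
    weights = [forget ** (n - 1 - i) for i in range(n)]
    recency_avg = sum(w * q for w, q in zip(weights, segment_mos)) / sum(weights)
    worst = min(segment_mos)
    return (1 - worst_weight) * recency_avg + worst_weight * worst

# One badly degraded segment in an otherwise good clip.
plain_avg = sum([4.0, 4.0, 1.5, 4.0]) / 4
pooled = pool_overall_quality([4.0, 4.0, 1.5, 4.0])
```

Unlike the plain segmental average, any pooling of this shape pulls the overall score toward the degraded section, which is the behavior the subjective tests revealed.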
Yeon-Mo YANG Ji-Myong NHO Nitaigour Premchand MAHALIK Kiseon KIM Byung-Ha AHN
As an alternative solution for providing quality of service (QoS) for broadband access over Ethernet Passive Optical Networks (EPONs), we present the use of MAC control messages for multiple class queues and a traffic-class burst-polling based delta dynamic bandwidth allocation (DBA) scheme, referred to as TCBP-DDBA. For better QoS support, TCBP-DDBA minimizes packet delay and delay variation for expedited-forwarding packets and maximizes throughput for assured-forwarding and best-effort packets. Network resources are efficiently utilized and adaptively allocated to the three traffic classes under unbalanced traffic conditions while guaranteeing the requested QoS. Simulation results using OPNET show that the TCBP-DDBA scheme performs well in comparison to the conventional unit-based allocation scheme in terms of measurement parameters such as packet delay, packet delay variation, and channel utilization.
Dai-boong LEE Hwangjun SONG Inkyu LEE
The differentiated-services model has prevailed as a scalable solution for providing quality of service over the Internet. Much research has focused on per-hop behavior or single-domain behavior to enhance quality of service. Thus, there are still difficulties in providing end-to-end guaranteed service when the path between sender and receiver spans multiple domains. Furthermore, the differentiated-services model mainly considers quality of service for traffic aggregates for the sake of scalability, and in the relative service model the quality-of-service state may vary over time with network conditions, which makes it more challenging to guarantee end-to-end quality of service. In this paper, we study class mapping mechanisms along the path that provide end-to-end guaranteed quality of service at the minimum networking price over multiple differentiated-services domains. The proposed mechanism includes an effective implementation of the relative differentiated-services model, a quality-of-service advertising mechanism, and class selection mechanisms. Finally, experimental results are provided to show the performance of the proposed algorithm.
Kazuhiro KONDO Kiyoshi NAKAGAWA
We propose and evaluate a speech packet loss concealment method that predicts lost segments from speech contained in packets either before, or both before and after, the lost packet. The lost segments are predicted recursively using linear prediction, both in the forward direction from the packet preceding the loss and in the backward direction from the packet succeeding it. Predicted samples from the two directions are averaged with linear weights to obtain the final interpolated signal. The adjacent segments are also smoothed extensively to significantly reduce the quality discontinuity between the interpolated signal and the received speech signal. Subjective quality comparisons between the proposed method and the packet loss concealment algorithm described in ITU-T standard G.711 Appendix I showed similar scores up to about 10% packet loss. However, the proposed method scored higher above this loss rate, with Mean Opinion Score ratings exceeding 2.4 even at an extremely high packet loss rate of 30%. Packet loss concealment of speech degraded by G.729 coding and of speech mixed with babble noise showed similar trends, with the proposed method giving higher quality at high loss rates. We plan to further improve performance by using an adaptive LPC prediction order depending on the estimated pitch and adaptive LPC bandwidth expansion depending on the number of consecutive repetitive predictions, among other improvements. We also plan to investigate complexity reduction using gradient LPC coefficient updates, and processing delay reduction using adaptive forward/bidirectional prediction modes depending on the measured packet loss ratio.
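The interpolation step described above (forward and backward prediction cross-faded with linear weights) can be sketched as follows. A one-tap recursion stands in for the paper's higher-order LPC predictor, and the sample values are invented; the linear cross-fade weighting is the part being illustrated.

```python
def conceal_gap(before, after, gap_len):
    """Fill a lost segment by cross-fading forward and backward predictions.

    A first-order recursion (x[n] = a * x[n-1]) stands in for LPC; the
    linear cross-fade weights follow the description in the abstract.
    """
    # crude one-tap predictor coefficients estimated from each side
    a_fwd = before[-1] / before[-2] if before[-2] else 1.0
    a_bwd = after[0] / after[1] if after[1] else 1.0

    fwd, bwd = [], []
    x = before[-1]
    for _ in range(gap_len):      # predict forward from the last good sample
        x = a_fwd * x
        fwd.append(x)
    x = after[0]
    for _ in range(gap_len):      # predict backward from the next good sample
        x = a_bwd * x
        bwd.append(x)
    bwd.reverse()

    # linear weights: trust the forward estimate early, the backward one late
    out = []
    for i in range(gap_len):
        w = (i + 1) / (gap_len + 1)
        out.append((1 - w) * fwd[i] + w * bwd[i])
    return out

filled = conceal_gap(before=[1.0, 0.9], after=[0.5, 0.45], gap_len=3)
```

The cross-fade guarantees the filled segment stays anchored to the good speech on both sides, which is why the bidirectional mode degrades more gracefully than forward-only repetition at high loss rates.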
This paper describes the author's perspective on multimedia quality prediction methodologies for multimedia communications in advanced mobile and internet protocol (IP)-based telephony, and reports related experiments and trials. First, the paper describes the need for perceptual QoS (Quality of Service) assessment in which various quality factors in multimedia communications for advanced mobile and IP-based telephony are analyzed. Then an objective quality prediction scheme is proposed from the viewpoints of quality measurement tools for each quality factor and an opinion model for compound quality factors in mobile and IP-based communications networks. Finally, the author's current trials of measurement tools and opinion models are described.
Matthew D. BROTHERTON Damien BAYART David S. HANDS
Next-generation codecs, benchmarked by the H.264/AVC standard, provide substantial compression efficiency for the coding and transmission of video. Coupled with technologies offering larger transmission bandwidths over DSL, wireless, and satellite networks, the capability of delivering high-quality video services to the home is now a reality. The perceptual quality of content delivered over communications networks will be crucial in ensuring a first-class customer experience. It is therefore important to assess the advantages and disadvantages of the optional features offered by next-generation codecs. This paper describes a subjective assessment carried out to investigate the perceptual effects of switching the in-loop de-blocking filter within the H.264/AVC codec on or off. Although the filter is believed to substantially improve the perceptual quality of video, it has been suggested that in some cases it can produce negative perceptual effects. The H.264/AVC architecture allows de-blocking to be switched off in cases where processing resources are limited or where it is considered that a negative perceptual effect may be introduced. This paper describes a study that examined the perceptual effects of de-blocking using a standardised subjective assessment methodology. The Absolute Category Rating (ACR) method was used to capture Difference Mean Opinion Scores (DMOS) for a range of video content, selected to span a wide and representative range of coding complexity. This content was then encoded at a variety of bit rates to represent high, medium, and low qualities. Results were used to examine end-user perception of video quality with the de-blocking filter switched on or off. The experimental design allowed both the overall effects of the de-blocking filter and the influence of content and quality level on filter performance to be examined.
The experiment found that the performance of the de-blocking filter was content-dependent. The results are used to discuss the advantages and disadvantages of in-loop de-blocking and to examine the content properties (e.g., spatial and temporal complexity) that influence de-blocking performance.
Ryoichi KAWADA Osamu SUGIMOTO Atsushi KOIKE
As digital television transmission becomes ubiquitous, a method that can remotely monitor the quality of the final and intermediate pictures is urgently needed. In particular, the case where standards conversion is included in the transmission chain is a serious issue, as the input and output cannot simply be compared. This letter proposes a novel method to solve this issue. The combination of field/pixel skipping and the previously proposed SSSWHT-RR method, using correlation coefficients and the variance of the picture, achieves accurate detection of picture failure.
Digital watermarks on pictures are more useful when they better survive image processing operations and cause less degradation of picture quality. Random geometric distortion is one of the most difficult kinds of image processing for watermarks to survive, because of the difficulty of synchronizing the expected watermark patterns to the watermarks embedded in pictures. This paper proposes three methods to improve a previous method that is not affected by this difficulty but is insufficient in maintaining picture quality and in surviving other image processing. The first method determines the watermark strength in L*u*v* space, where human-perceived degradation of picture quality can be measured as Euclidean distance, but embeds and detects watermarks in YUV space, where detection is more reliable. The second method, based on knowledge of image quantization, uses the messiness of color planes to hide watermarks. The third method reduces detection noise by preprocessing the watermarked image with orientation-sensitive image filtering, which is especially effective in picture portions where pixel values change drastically. Subjective evaluations showed that these methods improved the picture quality of the previous method by 0.5 point of the mean evaluation score in a representative example case. Alternatively, the watermark strength of the previous method could be increased by 30% to 60% while keeping the same picture quality. Robustness was evaluated against random geometric distortion, JPEG compression, Gaussian noise addition, and median filtering, and these methods reduced the detection error ratio to between 1/10 and 1/4. These methods can be applied not only to the previous method but also to other types of pixel-domain watermarking, such as the Patchwork method, and, with modification, to frequency-domain watermarking.
Wonjun LEE Eunkyo KIM Dongshin KIM Choonhwa LEE
Management of applications in the new world of pervasive computing requires new mechanisms for admission control, QoS negotiation, allocation, and scheduling. To solve such resource-allocation and QoS-provisioning problems within pervasive and ubiquitous computing environments, distribution and decomposition of the computation are important. In this paper, we present a QoS-based welfare-economic resource management model that models the actual price-formation process of an economy. We compare our economy-based approach with a mathematical approach we previously proposed. We use the constructs of application benefit functions and resource demand functions to represent the system configuration and to solve the resource allocation problems. Finally, empirical studies are conducted to evaluate the performance of our proposed pricing model and to compare it with other approaches such as a priority-based scheme and a greedy method.
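The "price-formation process of an economy" can be illustrated with a Walrasian tatonnement loop: the price of a resource rises in proportion to excess demand until the market clears. The demand function and constants below are hypothetical, standing in for the paper's application benefit and resource demand functions.

```python
def clearing_price(demand, supply, price=1.0, step=0.05, iters=2000):
    """Walrasian tatonnement: adjust the price in proportion to excess demand.

    `demand` maps price -> total requested resource; `supply` is capacity.
    This illustrates the price-formation idea, not the paper's exact model.
    """
    for _ in range(iters):
        excess = demand(price) - supply
        price = max(1e-6, price + step * excess)
    return price

# Hypothetical aggregate demand: each of 4 applications requests
# 10/price units of the resource, so total demand falls as price rises.
demand_fn = lambda p: 4 * (10.0 / p)
p_star = clearing_price(demand_fn, supply=20.0)
```

At the fixed point, aggregate demand equals the supply of 20 units, so the allocation each application receives is determined by how much benefit it can afford at the clearing price.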