
Keyword Search Result

[Keyword] quality (483 hits)

461-480 of 483 hits

  • Spectrum Broadening of Telephone Band Signals Using Multirate Processing for Speech Quality Enhancement

    Hiroshi YASUKAWA  

     
    LETTER

    Vol: E78-A No:8    Page(s): 996-998

    This paper describes a system that can mitigate the speech quality degradation caused by severe band limitation during speech transmission. We have already proposed a spectrum widening method that utilizes aliasing in sampling rate conversion and digital filtering for spectrum shaping. This paper proposes a new method that offers improved performance in terms of spectrum distortion characteristics. Implementation procedures are clarified, and the method's performance is discussed. The proposed method can effectively enhance speech quality.
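
    As a rough illustration of how aliasing in sampling-rate conversion can widen a telephone-band spectrum, the sketch below upsamples by zero insertion (which mirrors the 0.3-3.4 kHz spectrum into the 4-8 kHz band) and adds a shaped version of that image to the conventionally interpolated signal. The filter lengths, cutoffs, and image gain are illustrative assumptions, not the paper's design.

```python
# Hedged sketch: spectrum widening via the image created by zero-insertion
# upsampling; all parameters are placeholders, not the published method.
import numpy as np
from scipy import signal

def widen_spectrum(x8k, image_gain=0.3):
    """x8k: telephone-band speech at 8 kHz (NumPy array); returns a 16 kHz signal."""
    up = np.zeros(2 * len(x8k))
    up[::2] = x8k                                   # zero insertion: spectral image above 4 kHz

    lp = signal.firwin(101, 3400, fs=16000)         # baseband path: ordinary interpolation
    baseband = signal.lfilter(lp, 1.0, up)

    hp = signal.firwin(101, 4000, fs=16000, pass_zero=False)  # widening path: keep the aliased image
    image = signal.lfilter(hp, 1.0, up)

    return baseband + image_gain * image            # shaped image restores high-band energy
```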

  • A Practical Test System with a Fuzzy Logic Controller

    Takeshi KOYAMA  Ryuji OHMURA  

     
    PAPER

    Vol: E78-D No:7    Page(s): 868-873

    A test system with a fuzzy logic controller is proposed to assure stable outgoing quality as well as to raise throughput. The test system controls the number of items under test in accordance with fuzzy information as well as statistical information about incoming quality and outgoing quality. First, an algorithm, the minimum-minimum-center-of-gravity weighted-mean method, is studied together with the fuzzy reasoning rules and membership functions used for the control. Second, the characteristics of the test system are verified and examined with computer simulations so that the fuzzy logic control rules can be determined to realize sufficient sensitivity to process changes. Third, the control rules are implemented in the C programming language and installed in the test management processor, which commands the test equipment used for testing very-large-scale integrated circuits. The authors have obtained satisfactory results through a trial run using a series of lots of 16-bit microcontroller units in an IC manufacturing factory. Finally, they study the stability condition of the fuzzy test system.
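
    The abstract does not spell out the controller itself; the sketch below shows generic minimum-inference fuzzy reasoning with centre-of-gravity defuzzification, the kind of mechanism the named method builds on. The membership functions, rules, and defect-rate ranges are hypothetical placeholders.

```python
# Generic min-inference / centre-of-gravity sketch; rules and membership
# functions are illustrative, not the paper's.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at b (requires a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sampling_fraction(incoming_defect_rate, outgoing_defect_rate):
    """Infer the fraction of items to test from two example rules."""
    y = np.linspace(0.0, 1.0, 201)                  # candidate sampling fractions
    # Rule 1: IF incoming quality is poor AND outgoing quality is poor THEN test many.
    w1 = min(tri(incoming_defect_rate, 0.01, 0.03, 0.05),
             tri(outgoing_defect_rate, 0.005, 0.01, 0.02))
    # Rule 2: IF incoming quality is good AND outgoing quality is good THEN test few.
    w2 = min(tri(incoming_defect_rate, -0.01, 0.0, 0.02),
             tri(outgoing_defect_rate, -0.005, 0.0, 0.008))
    # Minimum inference: clip each consequent at its rule strength, aggregate by max.
    agg = np.maximum(np.minimum(w1, tri(y, 0.5, 1.0, 1.5)),
                     np.minimum(w2, tri(y, -0.5, 0.0, 0.5)))
    # Centre-of-gravity defuzzification.
    return float((y * agg).sum() / agg.sum()) if agg.sum() > 0 else 0.5
```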

  • An Objective Measure Based on an Auditory Model for Assessing Low-Rate Coded Speech

    Toshiro WATANABE  Shinji HAYASHI  

     
    PAPER

    Vol: E78-D No:6    Page(s): 751-757

    We propose an objective measure for assessing low-rate coded speech. The model for this objective measure, in which several known features of the perceptual processing of speech sounds by the human ear are emulated, is based on the Hertz-to-Bark transformation, critical-band filtering with preemphasis to boost higher frequencies, nonlinear conversion for subjective loudness, and temporal (forward) masking. The effectiveness of the measure, called the Bark spectral distortion rating (BSDR), was validated by second-order polynomial regression analysis between the computed BSDR values and subjective MOS ratings obtained for a large number of utterances coded by several versions of CELP coders and one VSELP coder under three degradation conditions: input speech levels, transmission error rates, and background noise levels. The BSDR values correspond to MOS ratings better than several commonly used measures do. Thus, BSDR can be used to accurately predict subjective scores.
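
    A simplified, illustrative reading of a Bark-spectral-distortion style measure follows. The Hertz-to-Bark mapping and the 0.23 power-law loudness compression are standard approximations; the band count and window are assumptions, and preemphasis and temporal masking are omitted, so this is not the authors' BSDR pipeline.

```python
# Simplified Bark-domain spectral distortion for one pair of aligned,
# equal-length frames of original and coded speech.
import numpy as np

def hz_to_bark(f):
    """Zwicker-style Hertz-to-Bark mapping."""
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def bark_loudness(frame, fs=8000, n_bands=18):
    """Per-critical-band loudness of one windowed speech frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    edges = np.linspace(0.0, hz_to_bark(fs / 2.0), n_bands + 1)
    band = np.digitize(hz_to_bark(freqs), edges)
    energy = np.array([spec[band == b].sum() for b in range(1, n_bands + 1)])
    return (energy + 1e-12) ** 0.23               # power-law loudness compression

def bark_spectral_distortion(orig_frame, coded_frame, fs=8000):
    """Mean squared difference of the Bark-domain loudness spectra."""
    lo, lc = bark_loudness(orig_frame, fs), bark_loudness(coded_frame, fs)
    return float(np.mean((lo - lc) ** 2))
```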

  • All-Optical Timing Clock Extraction Using Multiple Wavelength Pumped Brillouin Amplifier

    Hiroto KAWAKAMI  Yutaka MIYAMOTO  Tomoyoshi KATAOKA  Kazuo HAGIMOTO  

     
    PAPER

    Vol: E78-B No:5    Page(s): 694-701

    This paper discusses an all-optical tank circuit that uses the comb-shaped gain spectrum generated by a Brillouin amplifier. The theory of timing clock extraction is presented for two cases: with two gains and with three gains. In both cases, the waveform of the extracted timing clock is simulated. According to the simulation, unlike in an ordinary tank circuit, the amplitude of the extracted clock is not constant even when the quality factor (Q) is infinite. The extracted clock is clearly influenced by the pattern of the original data stream if the Brillouin gain is finite. The ratio of the maximum extracted clock amplitude to the minimum extracted amplitude is calculated as a function of Brillouin gain. The detuning of the pump light frequency is also discussed; it induces not only changes in the Brillouin gain but also a phase shift in the amplified light. The relation between the frequency drift of the pump lights and the jitter of the extracted timing clock is shown for both cases: when two pump lights are used and when three are used. It is numerically shown that when all the pump lights have the same frequency drift, i.e., their frequency separation is constant, the phase of the extracted clock is not influenced by the frequency drift of the pump lights. The operation principle is demonstrated at 5 Gbit/s, 2.5 Gbit/s, and 2 Gbit/s using two pumping techniques. The quality factor and the suppression ratio in the baseband domain are measured; Q and the suppression ratio are found to be 160 and 28 dB, respectively.

  • Phenomenological Description of Temperature and Frequency Dependence of Surface Resistance of High-Tc Superconductors by Improved Three-Fluid Model

    Tadashi IMAI  Yoshio KOBAYASHI  

     
    PAPER-Microwave devices

    Vol: E78-C No:5    Page(s): 498-502

    A calculation method based on an improved three-fluid model is presented to phenomenologically describe the temperature and frequency dependence of the surface resistance Rs of high-Tc superconductors. It is verified that this model is useful for describing the temperature dependence of Rs for high-Tc superconducting films such as Y-Ba-Cu-O (YBCO), Eu-Ba-Cu-O, and Tl-Ba-Ca-Cu-O films. Furthermore, for the frequency dependence of Rs of a YBCO bulk, the measured results, which do not follow an f^2 dependence in the frequency range 10-25 GHz, can be described successfully by this model. Finally, a figure of merit is proposed to evaluate material quality for high-Tc superconductors from the values of the electron densities and momentum relaxation time determined by the present model.

  • QOS Controls and Service Models in the Internet

    Takeshi NISHIDA  Kunihiro TANIGUCHI  

     
    INVITED PAPER

    Vol: E78-B No:4    Page(s): 447-457

    Over the last decade, the Internet has been extremely successful by separating overlying applications from underlying networking technologies. This approach allows rapid and independent improvement in both networking and application technologies. The internetworking layer that divides applications and the network enables the Internet to function as a general and evolving infrastructure for data communications. The current Internet architecture offers only best-effort data delivery. However, recently emerging computer and networking technologies demand guaranteed performance from the Internet. In particular, audio and video applications have more rigid delay requirements than the applications the current Internet supports. To offer guaranteed services in addition to best-effort services, both a new service model and a new architecture are necessary for the Internet. The paper surveys research and experiments conducted in the Internet community to accommodate a wide variety of qualities of service.

  • Performance Design and Control for B-ISDN

    Hideo MURAKAMI  Takeo ABE  Ken-ichi MASE  

     
    INVITED PAPER

    Vol: E78-B No:4    Page(s): 439-446

    This paper examines performance study items for ATM connections in B-ISDNs. We consider the characteristics of B-ISDN performance and describe the current status in ITU-T and the ATM Forum. On this basis, we propose a new performance framework and performance criteria. We also describe objectives for ATM cell transfer performance.

  • Performance Evaluation of Dynamic Resolution and QOS Control Schemes for Integrated VBR Video and Data Communications

    Yutaka ISHIBASHI  Shuji TASAKA  

     
    PAPER

    Vol: E78-B No:4    Page(s): 563-571

    This paper studies congestion control schemes for integrated variable bit-rate (VBR) video and data communications, where the quality of service (QOS) of each medium needs to be satisfied. In order to control congestion, we apply either dynamic resolution control or QOS control. The dynamic resolution control scheme in this paper dynamically changes the temporal or spatial resolution of video according to the network load. The QOS control scheme assigns a constant buffer capacity to each connection and determines the video resolution at connection establishment in order to guarantee the QOS of each medium. The performance of these schemes is evaluated through simulation in terms of throughput, video frame delay probability distribution, and video frame loss rate. We also examine the effects of priority scheduling and packet discarding on the performance. Numerical results indicate that both dynamic resolution control and QOS control attain low delay jitter as well as high video and data throughput. In particular, the QOS control is shown to be more suitable for integrated VBR video and data communications.

  • A New Concept of Network Dimensioning Based on Quality and Profit

    Kimihide MATSUMOTO  Satoshi NOJO  

     
    PAPER

    Vol: E78-B No:4    Page(s): 546-550

    We propose a new concept of network dimensioning that is based not only on the grade of service but also on profit. In traditional network dimensioning methodology, the number of circuits on links is designed under a cost-minimization concept with grade-of-service constraints. Recently, telecommunication markets have become very large and competitive; therefore, we believe that a profit viewpoint is now essential. However, it is difficult to calculate profit with almost all the dimensioning methods currently used, because they mainly employ peak-hour traffic data, while profit depends on all the hourly traffic data, which contain both peak and off-peak data. In this paper, we propose using all the hourly traffic data in network dimensioning. From these data and the telephone charges for each hour, revenues can be estimated. On the other hand, facility costs can be estimated from the number of circuits. Finally, profit can be estimated as the difference between revenues and facility costs. Focusing on both quality and profit in network dimensioning leads to more advanced quality management and quality control in telecommunications networks than traditional methodology allows. This paper outlines a dimensioning method based on profit, describes its properties and some applications, and summarizes topics for further study.
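
    The profit calculation itself reduces to simple arithmetic over the hourly traffic figures, as in the minimal sketch below; the traffic profile, tariff, circuit count, and per-circuit cost are hypothetical examples, not figures from the paper.

```python
# Minimal sketch of the profit estimate described above; all inputs are
# hypothetical examples.
def estimate_profit(hourly_traffic_erl, hourly_charge_per_erl,
                    num_circuits, cost_per_circuit):
    # Revenue follows from all 24 hourly traffic figures and the charge for each hour;
    # facility cost follows from the number of circuits.
    revenue = sum(t * c for t, c in zip(hourly_traffic_erl, hourly_charge_per_erl))
    return revenue - num_circuits * cost_per_circuit

# Example: cheaper night tariff for the first 8 hours of the day.
traffic = [120.0] * 8 + [300.0] * 16
charges = [0.5] * 8 + [1.0] * 16
profit = estimate_profit(traffic, charges, num_circuits=400, cost_per_circuit=2.0)
```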

  • A New Blazed Half-Transparent Mirror (BHM) for Eye Contact

    Makoto KURIKI  Kazutake UEHIRA  Hitoshi ARAI  Shigenobu SAKAI  

     
    PAPER-Communication Terminal and Equipment

    Vol: E78-B No:3    Page(s): 373-378

    We developed an eye-contact technique using a blazed half-transparent mirror (BHM), which is a micro-HM array arranged on the display surface, to make a compact eye-contact videophone. This paper describes a new BHM structure that eliminates ghosts and improves image quality. In the new BHM, the reflection and transmission areas are separated to exclude ghosts from appearing in the captured image. We evaluated the characteristics of the captured and displayed images. The results show that the contrast ratio of the captured image and the brightness of both captured and displayed images are much better than with the previous BHM.

  • Virtual Rate-Based Queueing: A Generalized Queueing Discipline for Switches in High-Speed Networks

    Yusheng JI  Shoichiro ASANO  

     
    PAPER-Switching and Communication Processing

    Vol: E77-B No:12    Page(s): 1537-1545

    A new rate-controlled queueing discipline, called virtual rate-based queueing (VRBQ), is proposed for packet-switching nodes in connection-oriented, high-speed, wide-area networks. The VRBQ discipline is based on a virtual rate whose value lies between the average and peak transmission rates. By choosing appropriate virtual rates, various requirements regarding performance and quality of service in integrated-services networks can be met. As a worst-case performance guarantee, we determine upper bounds on the queueing delay when VRBQ is combined with an admission control mechanism, i.e., Dynamic Time Windows or Leaky Bucket. Simulation results demonstrate the fairness of VRBQ in comparison with other queueing disciplines, and show the performance of sources controlled with different virtual rates.
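
    The abstract does not give the scheduling rule itself; the sketch below is a generic rate-controlled discipline in the same spirit, where packets are stamped with finishing times advanced at a per-connection virtual rate and served in stamp order. It is not the paper's exact VRBQ algorithm.

```python
# Generic virtual-rate scheduling sketch (illustrative, not the published VRBQ).
import heapq

class VirtualRateScheduler:
    def __init__(self):
        self.finish = {}        # last virtual finishing time per connection
        self.heap = []          # (stamp, seq, connection_id, packet_bits)
        self.seq = 0            # tie-breaker so heap comparisons never reach packets

    def enqueue(self, conn_id, packet_bits, virtual_rate_bps, now):
        start = max(now, self.finish.get(conn_id, 0.0))
        stamp = start + packet_bits / virtual_rate_bps   # advance at the virtual rate
        self.finish[conn_id] = stamp
        heapq.heappush(self.heap, (stamp, self.seq, conn_id, packet_bits))
        self.seq += 1

    def dequeue(self):
        """Serve the queued packet with the smallest virtual finishing time."""
        return heapq.heappop(self.heap) if self.heap else None
```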

  • A Study on Objective Picture Quality Scales for Pictures Digitally Encoded for Broadcast

    Hiroyuki HAMADA  Seiichi NAMBA  

     
    PAPER

    Vol: E77-B No:12    Page(s): 1480-1488

    Considering the trend towards adopting high-efficiency picture coding schemes in digital broadcasting services, we investigate objective picture quality scales for evaluating digitally encoded still and moving pictures. First, the study on the objective picture quality scale for high-definition still pictures coded by the JPEG scheme is summarized. This scale is derived from consideration of the following distortion factors: 1) noise weighted by the spatial frequency characteristics and masking effects of human vision, 2) block distortion, and 3) mosquito noise. Next, an objective picture quality scale for motion pictures of standard television coded by the hybrid DCT scheme is studied. In addition to the above distortion factors, the temporal frequency characteristics of vision are also considered. Furthermore, considering that all of these distortions vary over time in motion pictures, methods for determining a single objective picture quality value for this time-varying distortion are examined. As a result, a generally applicable objective picture quality scale is obtained that correlates extremely well with the subjective picture quality scale for both still and motion pictures, irrespective of the content of the pictures. Having an objective scale facilitates automated picture quality evaluation and control.
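
    A generic sketch of how a scale of this kind is typically assembled follows: measured distortion factors are fitted to subjective scores by least squares, and the fitted combination then serves as the objective scale. The factor values fed in would be the vision-weighted measurements described above; nothing here reproduces the paper's fitted model.

```python
# Generic least-squares fit of distortion factors to subjective quality scores.
import numpy as np

def fit_quality_scale(factors, subjective_scores):
    """factors: (pictures x 3) array of weighted noise, block distortion, mosquito noise."""
    X = np.hstack([factors, np.ones((factors.shape[0], 1))])   # append a bias column
    coeffs, *_ = np.linalg.lstsq(X, subjective_scores, rcond=None)
    return coeffs

def predict_quality(factor_vector, coeffs):
    """Objective quality value for one picture's measured distortion factors."""
    x = np.append(np.asarray(factor_vector, dtype=float), 1.0)
    return float(x @ coeffs)
```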

  • Interpolation Technique of Fingerprint Features for Personal Verification

    Kazuharu YAMATO  Toshihide ASADA  Yutaka HATA  

     
    LETTER

    Vol: E77-D No:11    Page(s): 1306-1309

    In this letter we propose an interpolation technique for low-quality fingerprint images to achieve highly reliable feature extraction. To improve the feature extraction rate, we extract fingerprint features by referring to both the interpolated image obtained by using a directional Laplacian filter and the high-contrast image obtained by using histogram equalization. Experimental results show the applicability of our method.
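
    A rough sketch of the two reference images mentioned above: the histogram-equalization step is standard, while the fixed-direction second-derivative kernel merely stands in for the letter's directional Laplacian filter, whose exact form the abstract does not give.

```python
# Illustrative construction of a high-contrast image and a directional
# second-derivative response; kernel and parameters are assumptions.
import numpy as np
from scipy import ndimage

def high_contrast(img):
    """Histogram equalization of an 8-bit grayscale fingerprint image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    return (255 * cdf[img]).astype(np.uint8)

def directional_laplacian(img, angle_deg=0.0):
    """Second derivative across one direction; rotating the kernel stands in
    for selecting the local ridge orientation."""
    kernel = np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], dtype=float)
    kernel = ndimage.rotate(kernel, angle_deg, reshape=False, order=1)
    return ndimage.convolve(img.astype(float), kernel)
```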

  • Automatic Seal Imprint Verification System with Imprint Quality Assessment Function and Its Performance Evaluation

    Katsuhiko UEDA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E77-D No:8    Page(s): 885-894

    An annoying problem encountered in automatic seal imprint verification is that seal imprints may vary considerably, even if they are all produced from a single seal. This paper proposes a new automatic seal imprint verification system which adds an imprint quality assessment function to our previous system in order to solve this problem, and also examines the verification performance of this system experimentally. The system consists of an imprint quality assessment process and a verification process. In the imprint quality assessment process, an examined imprint is first divided into partial regions. Each partial region is classified into one of three quality classes (good-quality region, poor-quality region, and background) on the basis of the characteristics of its gray-level histogram. In the verification process, only the good-quality partial regions of an examined imprint are verified against the registered one. Finally, the examined imprint is classified as one of two types: a genuine or a forgery. However, if the quality assessment classifies too many partial regions as poor quality, the examined imprint is classified as "ambiguous" without verification processing. A major advantage of this verification system is that it can verify seal imprints of various qualities efficiently and accurately. Computer experiments with real seal imprints were performed using this system, the previous system (without the image quality assessment function), and document examiners of a bank. The results of these experiments show that this system is superior in verification performance to our previous system and has verification performance similar to that of the document examiners (i.e., the experimental results show the effectiveness of adding the image quality assessment function to a seal imprint verification system).
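
    A much-simplified sketch of the block-wise quality assessment idea follows; the block size and the histogram-based decision thresholds are hypothetical, since the paper's actual classification criteria are not reproduced in the abstract.

```python
# Illustrative block labelling: background / good / poor from gray-level statistics.
import numpy as np

def classify_regions(img, block=32, ink_thresh=0.02, contrast_thresh=40.0):
    """Label each block of a grayscale imprint image from its gray-level statistics."""
    labels = {}
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            region = img[y:y + block, x:x + block].astype(float)
            ink = np.mean(region < 128)          # fraction of imprint (ink) pixels
            if ink < ink_thresh:
                labels[(y, x)] = "background"    # essentially no imprint in this block
            elif region.std() > contrast_thresh:
                labels[(y, x)] = "good"          # clear ink/paper separation in the histogram
            else:
                labels[(y, x)] = "poor"          # faint or smudged: low-contrast histogram
    return labels
```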

  • A Specific Design Approach for Automotive Microcomputers

    Nobusuke ABE  Shozo SHIROTA  

     
    PAPER

    Vol: E76-C No:12    Page(s): 1788-1793

    When used for automotive applications, microcomputers have to meet two requirements more demanding than those for general use. One of these requirements is to respond to external events within a time scale of microseconds; the other is the high quality and high reliability necessary for the severe environmental operating conditions and the ambitious market requirements inherent to automotive applications. These needs, especially the latter, have been met by further refinement of each basic technology involved in semiconductor manufacturing. At the same time, various logic parts have been built into the microcomputer. This paper deals with several design approaches to the high-quality and high-reliability objective. First, testability improvement by the logical separation method, focusing on the logic simulation model for generating test vectors, which enables us to cut the time required for test vector development in half. Next, noise suppression methods to achieve electromagnetic compatibility (EMC). Then, simplified memory transistor analysis to evaluate the V/I characteristics directly via external pins, without opening the mold seal, removing the passivation, or placing a probe needle on the chip. Finally, increased reliability of the on-chip EPROM using a special circuit that raises the threshold value by approximately 1 V compared to EPROMs without such a circuit.

  • An Analysis of Optimal Frame Rate in Low Bit Rate Video Coding

    Yasuhiro TAKISHIMA  Masahiro WADA  Hitomi MURAKAMI  

     
    PAPER-Communication Systems and Transmission Equipment

    Vol: E76-B No:11    Page(s): 1389-1397

    We analyze frame rates in low bit-rate video coding and show that an optimal frame rate can be derived theoretically. In low bit-rate video coding the frame rate is usually forced to decrease in order to reduce the total amount of coded information. The choice of frame rate, however, has a great effect on picture quality, through a trade-off between coded picture quality and motion smoothness. It is known from experience that, in order to achieve an optimum balance between these two factors, a frame rate has to be selected which is appropriate for the coding scheme, the properties of the video sequence, and the coding bit rate. A theoretical analysis of whether an optimal frame rate exists and how it would be expressed, however, has not been performed. In this paper, coding distortion measured by mean square error is analyzed by using video signal models such as a rate-distortion function for coded frames and inter-frame correlation coefficients for non-coded frames. Overall picture quality, taking account of coded picture quality and motion smoothness simultaneously, is expressed as a function of frame rate. This analysis shows that the optimum frame rate can be uniquely specified. The maximum frame rate is optimal when the coding bit rate is higher than a certain value for a given video scene, while a frame rate less than the maximum is optimal otherwise. The result of the theoretical analysis is compared with the results of computer simulation. In addition, the relation between this analysis and a subjective evaluation is described. Both comparisons justify the theoretical analysis as an effective scheme for indicating the optimal frame rate, and it shows the possibility of improving picture quality by selecting the frame rate adaptively.
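
    The analysis can be made concrete with a toy signal model, as in the sketch below; the rate-distortion function D(R) = sigma^2 * 2^(-2R), the inter-frame correlation value, and the QCIF picture size are illustrative assumptions standing in for the paper's measured models.

```python
# Toy numerical version of the frame-rate analysis above; all model constants
# are placeholders, not the paper's values.
def overall_distortion(frame_rate, bit_rate_bps, pels_per_frame=176 * 144,
                       sigma2=1.0, rho=0.98, f_max=30):
    n = max(1, round(f_max / frame_rate))           # displayed frames covered by one coded frame
    r = bit_rate_bps / frame_rate / pels_per_frame  # bits per pel spent on each coded frame
    d_coded = sigma2 * 2.0 ** (-2.0 * r)            # coding distortion of coded frames
    d_skipped = [2.0 * sigma2 * (1.0 - rho ** k)    # frame-repeat distortion, k frames later
                 for k in range(1, n)]
    return (d_coded + sum(d_skipped)) / n           # average over all displayed frames

# Pick the frame rate that minimizes overall distortion at 64 kbit/s.
candidates = [30, 15, 10, 7.5, 6, 5, 3]
best_rate = min(candidates, key=lambda f: overall_distortion(f, bit_rate_bps=64000))
```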

  • Significance of Suitability Assessment in Speech Synthesis Applications

    Hideki KASUYA  

     
    INVITED PAPER

    Vol: E76-A No:11    Page(s): 1893-1897

    The paper indicates the importance of suitability assessment in speech synthesis applications. Human factors involved in the use of synthetic speech are first discussed on the basis of an example from a newspaper company where synthetic speech is extensively used as an aid for proofreading manuscripts. Some findings obtained from perceptual experiments on subjects' preferences for the paralinguistic properties of synthetic speech are then described, focusing primarily on the suitability of pitch characteristics, speaker's gender, and speaking rate in a task where subjects are asked to proofread a printed text while listening to the speech. The paper finally claims the need for a flexible speech synthesis system which helps users create their own synthetic speech.

  • Adaptive Image Sharpening Method Using Edge Sharpness

    Akira INOUE  Johji TAJIMA  

     
    PAPER

    Vol: E76-D No:10    Page(s): 1174-1180

    This paper proposes a new method for automatically improving image quality by adjusting image sharpness. The method does not need prior knowledge about image blur. To improve image quality, the sharpness must be adjusted to an optimal value. This paper shows a new method to evaluate sharpness without the MTF. It is considered that the human visual system judges image sharpness mainly from edge-area features. Therefore, attention is paid to the high spatial frequency components in the edge area. The value is defined as the average intensity of the high spatial frequency components in the edge area and is called the "image edge sharpness" value. Using several images, edge sharpness values are compared with experimental results for subjective sharpness. According to the experiments, the calculated edge sharpness values show a good linear relation with subjective sharpness. Subjective image sharpness does not have a monotonic relation with subjective image quality: if the edge sharpness value is in a particular range, the image quality is judged to be good. From the subjective experiments, an optimal edge sharpness value for image quality was obtained. This paper also presents an algorithm to alter an image into one with a different edge sharpness value. By altering an image so that it achieves the optimal edge sharpness with this algorithm, image sharpness can be adjusted automatically and optimally. This new image improvement method was applied to several images obtained by scanning photographs, and the experimental results were quite good.
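
    A plausible, hedged reading of the edge sharpness computation follows; the Sobel-based edge-area detection, Gaussian high-pass, and threshold are assumptions for illustration, not the authors' exact definitions.

```python
# Illustrative "edge sharpness" value: average high-frequency intensity over
# pixels detected as edge areas.
import numpy as np
from scipy import ndimage

def edge_sharpness(img):
    img = img.astype(float)
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    edge_mask = grad > 0.5 * grad.max()                      # crude edge-area detection
    highpass = img - ndimage.gaussian_filter(img, sigma=2)   # high spatial frequency component
    return float(np.abs(highpass)[edge_mask].mean()) if edge_mask.any() else 0.0
```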

  • Wavelet Pyramid Image Coding with Predictable and Controllable Subjective Picture Quality

    Jie CHEN  Shuichi ITOH  Takeshi HASHIMOTO  

     
    PAPER

    Vol: E76-A No:9    Page(s): 1458-1468

    A new method is developed by which images are coded with predictable and controllable subjective picture quality at the minimum cost in bit rate. Using the wavelet transform, the original image is decomposed into a set of subimages with different frequency channels and resolutions. By exploiting human contrast sensitivity, each decomposed subimage is treated according to its contribution to the total visual quality and to the bit rate. A relationship is established between the physical errors (mainly quantization errors) incurred in the orthonormal wavelet image coding system and the subjective picture quality quantified as the mean opinion score (MOS). Instead of using the traditional optimum bit allocation scheme, which minimizes a distortion cost function under the constraint of a given bit rate, we develop an "optimum visually weighted noise power allocation" (OVNA) scheme which emphasizes satisfying a desired subjective picture quality at the minimum cost in bit rate. The proposed method enables us to predict and control the picture quality before reconstruction and to compress images with the desired subjective picture quality at the minimum bit rate.
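
    One way to read the allocation step is sketched below: a target visually weighted noise power (which the paper ties to the desired MOS; here it is simply an input) is spread over subbands in inverse proportion to their contrast-sensitivity weights and converted to uniform-quantizer step sizes via the step^2/12 noise model. The weights and the equal-contribution rule are assumptions, not the OVNA scheme itself.

```python
# Hedged sketch of a visually weighted noise-power allocation; CSF-style
# weights and the equal-share rule are illustrative assumptions.
import numpy as np

def allocate_step_sizes(target_weighted_noise, csf_weights):
    w = np.asarray(csf_weights, dtype=float)
    per_band = target_weighted_noise / len(w)   # equal visually weighted share per subband
    physical_noise = per_band / w               # allowed quantization noise power per subband
    return np.sqrt(12.0 * physical_noise)       # uniform-quantizer step via noise = step^2 / 12

steps = allocate_step_sizes(target_weighted_noise=4.0,
                            csf_weights=[1.0, 0.6, 0.6, 0.35])
```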

  • Conversion of Image Resolutions for High Quality Visual Communication

    Saprangsit MRUETUSATORN  Hirotsugu KINOSHITA  Yoshinori SAKAI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E76-D No:2    Page(s): 251-258

    This paper discusses the conversion of spatial resolution (pixel density) and amplitude resolution (levels of brightness) for multilevel images. A source image is sampled by an image scanner or a video camera, and the converted image is printed by a printer with higher spatial but lower amplitude resolution than the image input device. In the proposed method, the impulse response of the scanner sensor is modeled so that pixel values are obtained from the convolution of the impulse response and the image signal. Discontinuous areas (edges) of the original image are detected locally according to the impulse-response model and the neighbouring pixel values. The edge route is then estimated, which gives the pixel values at the output resolutions. Comparison of the proposed method with two conventional methods, reciprocal-distance-weighted interpolation and pixel replication, shows higher edge quality for the proposed method.
