Takao OGURA Junji SUZUKI Akira CHUGO Masafumi KATOH Tomonori AOYAMA
As use of the Internet continues to spread rapidly, Traffic Engineering (TE) is needed to optimize the utilization of IP network resources. In particular, load balancing with TE can prevent traffic from concentrating on a single path between ingress and egress routers. To apply TE, we constructed an MPLS (Multi-Protocol Label Switching) network with TE capability in the JGN (Japan Gigabit Network) and evaluated its dynamic load-balancing behavior from the viewpoint of control stability. We confirmed that, with appropriate control parameter values, this method distributes traffic equally over two or more routes in an actual large-scale network. In addition, we verified the method's effectiveness by using a digital cinema application as input traffic.
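For intuition only, the sketch below shows a generic feedback rule that shifts a traffic split ratio between two label-switched paths toward equal load; the rebalance function, its gain parameter, and the synthetic loads are illustrative assumptions rather than the TE control actually evaluated on the JGN, but they convey why the choice of control parameter decides whether the balancing loop settles or oscillates.

```python
# Illustrative sketch (not the method evaluated in the paper): a simple
# feedback controller that shifts the fraction of traffic sent to LSP A
# toward equal utilization of two paths. The gain plays the role of the
# "control parameter" whose value determines the stability of the loop.

def rebalance(split, load_a, load_b, gain=0.1):
    """Return an updated split ratio (fraction of traffic sent to LSP A)."""
    imbalance = load_a - load_b            # positive if LSP A is more loaded
    split = split - gain * imbalance       # move traffic away from the hotter path
    return min(max(split, 0.0), 1.0)       # keep the ratio in [0, 1]

# Example: with a moderate gain the split converges toward 0.5;
# an overly large gain makes the loop oscillate instead of settling.
split = 0.9
for _ in range(20):
    load_a, load_b = split * 1.0, (1.0 - split) * 1.0   # total offered load of 1.0
    split = rebalance(split, load_a, load_b, gain=0.4)
print(round(split, 3))   # approaches 0.5
```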
A specification for digital cinema systems, which handle movies digitally from production through delivery to projection on the screen, has been recommended by DCI (Digital Cinema Initiatives), and systems based on this specification have already been developed and installed in theaters. The system parameters that play an important role in determining image quality include image resolution, quantization bit depth, color space, gamma characteristics, and data compression method. This paper comparatively discusses the relation between the required bit depth and gamma quantization, using a human visual model for grayscale images and two color-difference models for color images. The required bit depth obtained from a contrast sensitivity function for grayscale images decreases monotonically as the gamma value increases, whereas the bit depth obtained from both the CIE 1976 L*a*b* and CIEDE2000 color-difference models has a minimum when the gamma is between 2.9 and 3.0. It is also shown that, at a gamma value of 2.6, the bit depth derived from the contrast sensitivity function is one bit greater than that derived from the color-difference models. Moreover, a comparison between the color differences computed with CIE 1976 L*a*b* and CIEDE2000 leads to the same result from the viewpoint of the required bit depth for digital cinema systems.
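As a rough aid to reading the result above, the following is a simplified first-order sketch, under assumptions of my own (a Weber-contrast threshold c_t, a signal range normalized to (0, 1], and gamma encoding V = X^{1/γ}), of why a contrast-sensitivity criterion yields a bit depth that falls as gamma grows; it is not the paper's derivation.

```latex
% Sketch under simplifying assumptions (not the paper's exact derivation).
% A linear signal X in (0,1] is gamma-encoded as V = X^{1/\gamma}, and V is
% uniformly quantized with n bits, so the step in V is \delta V = 2^{-n}.
\[
  \Delta X \;\approx\; \frac{dX}{dV}\,\delta V
          \;=\; \gamma\,X^{(\gamma-1)/\gamma}\,2^{-n},
  \qquad
  \frac{\Delta X}{X} \;=\; \gamma\,X^{-1/\gamma}\,2^{-n}.
\]
% Requiring the Weber contrast \Delta X / X to stay below a visibility
% threshold c_t down to the darkest level X_{\min} gives
\[
  n \;\ge\; \log_2\!\frac{\gamma}{c_t\,X_{\min}^{1/\gamma}}
    \;=\; \log_2\gamma \;-\; \log_2 c_t \;-\; \tfrac{1}{\gamma}\log_2 X_{\min}.
\]
% Since -(1/\gamma)\log_2 X_{\min} shrinks as \gamma grows (X_{\min} < 1),
% the bound decreases with increasing gamma, matching the monotone trend
% reported for the grayscale (contrast-sensitivity) case.
```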
Takayuki NAKACHI Tatsuya FUJII Junji SUZUKI
This paper describes a unified coding algorithm for lossless and near-lossless color image compression that exploits the correlations between RGB signals. For the lossless case, we propose a reversible color transform that removes the correlations between RGB signals while avoiding any finite-word-length limitation. The resulting algorithm gives higher performance than lossless JPEG without the color transform. Next, the lossless algorithm is extended to a unified lossless/near-lossless coding algorithm that can control the level of the reconstruction error on each RGB plane from 0 to p, where p is a small non-negative integer. The effectiveness of this algorithm was demonstrated experimentally.
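The sketch below is illustrative only: it uses the JPEG2000 reversible color transform (RCT) as a stand-in for an integer-to-integer, finite-word-length-safe RGB decorrelation, and a JPEG-LS-style residual quantizer to show how a reconstruction-error bound p can be enforced; neither is claimed to be the transform or quantizer proposed in the paper.

```python
# Illustrative sketch only: the paper's reversible transform is not
# reproduced here. The JPEG2000 RCT shows the idea of a lossless,
# integer-to-integer RGB decorrelation, and a JPEG-LS-style quantizer
# shows how a per-component error bound p can be guaranteed.

def rct_forward(r, g, b):
    """Integer reversible color transform (JPEG2000 RCT)."""
    y = (r + 2 * g + b) // 4
    u = b - g
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - (u + v) // 4
    return v + g, g, u + g          # (r, g, b)

def near_lossless(e, p):
    """Quantize a prediction residual so the reconstruction error is at most p."""
    step = 2 * p + 1
    k = (abs(e) + p) // step
    k = k if e >= 0 else -k
    return k * step                  # reconstructed residual, |e - result| <= p

assert rct_inverse(*rct_forward(200, 15, 90)) == (200, 15, 90)   # lossless
assert abs(near_lossless(7, p=2) - 7) <= 2                        # near-lossless
```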
This paper proposes a design method for displaying monochrome medical X-ray images on an electronic display. The required quantizing resolutions of the input density and the output voltage are theoretically clarified. The proposed method makes it easier to determine the required quantizing resolution, which is important in an X-ray diagnostic system.
Takayuki NAKACHI Tatsuya FUJII Junji SUZUKI
In this paper, we propose an adaptive predictive coding method based on image segmentation for lossless compression. MAR (Multiplicative Autoregressive) predictive coding is an efficient lossless compression scheme. Because the MAR model operates on local image regions, its predictors can adapt to changes in local image statistics. However, the performance of the MAR method degrades when the local statistics change within the blocks of a block-by-block subdivision of the image. Furthermore, side information such as prediction coefficients must be transmitted to the decoder for each block. To enhance the compression performance, we improve the MAR coding method by using image segmentation. The proposed MAR predictor adapts efficiently to the local statistics of the image at each pixel. Furthermore, less side information needs to be transmitted than with the conventional MAR method.
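To illustrate the idea of segment-adaptive prediction (not the MAR model itself), the following sketch fits a simple least-squares linear predictor from the left and upper neighbours separately for each segment; the function name, the neighbour set, and the synthetic example are assumptions made for the illustration.

```python
import numpy as np

# Simplified illustration (not the paper's MAR predictor): fit a linear
# predictor of each pixel from its left and upper neighbours separately
# for every segment, so the predictor follows local statistics instead of
# a fixed block grid. Segment labels would come from an image-segmentation step.

def segment_adaptive_residuals(img, labels):
    img = img.astype(np.float64)
    pred = np.zeros_like(img)
    left = np.roll(img, 1, axis=1); left[:, 0] = 0
    up = np.roll(img, 1, axis=0); up[0, :] = 0
    for s in np.unique(labels):
        m = labels == s
        A = np.column_stack([left[m], up[m], np.ones(m.sum())])
        coef, *_ = np.linalg.lstsq(A, img[m], rcond=None)   # per-segment coefficients
        pred[m] = A @ coef
    return img - np.rint(pred)        # residuals to be entropy-coded

# Example: two vertical segments with different statistics.
img = np.hstack([np.full((8, 8), 50), np.full((8, 8), 200)]).astype(np.uint8)
labels = np.hstack([np.zeros((8, 8), int), np.ones((8, 8), int)])
print(np.abs(segment_adaptive_residuals(img, labels)).mean())
```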
Junji SUZUKI Isao FURUKAWA Sadayasu ONO
Digital cinema will continue, for some time, to use image signals converted from the density values of film stock through some form of digitization. This paper investigates the required numbers of quantization bits for both intensity and density signals. Equations for the color differences created by quantization distortion are derived on the premise that the uniform color space L*a*b* can be used to evaluate color differences in digitized pictorial color images. The location of the quantized sample that yields the maximum color difference in the color gamut is analyzed theoretically under the condition that the color difference must remain below the perceptual limit of the human visual system. The result shows that the maximum color difference is located on a ridge line or a surface of the color gamut, which reduces the computational burden of determining the required precision of color quantization. Design examples of quantization resolution are also given by applying the proposed evaluation method to three actual color spaces: NTSC, HDTV, and ROMM.
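For reference, the CIE 1976 L*a*b* quantities and the color difference underlying such an evaluation are summarized below; the visibility threshold of about 1 is a common rule of thumb, not a figure taken from the paper.

```latex
% CIE 1976 L*a*b* quantities (for X/X_n, Y/Y_n, Z/Z_n above the low-light
% linear branch, where f(t) = t^{1/3}):
\[
  L^{*} = 116\,f\!\left(\tfrac{Y}{Y_n}\right) - 16,\quad
  a^{*} = 500\!\left[f\!\left(\tfrac{X}{X_n}\right) - f\!\left(\tfrac{Y}{Y_n}\right)\right],\quad
  b^{*} = 200\!\left[f\!\left(\tfrac{Y}{Y_n}\right) - f\!\left(\tfrac{Z}{Z_n}\right)\right],
\]
\[
  \Delta E^{*}_{ab} \;=\; \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}},
\]
% and a quantization step is judged imperceptible when the \Delta E^{*}_{ab}
% it produces stays below the visual threshold (commonly taken to be about 1).
```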
To keep pace with the rapid progress of high-quality imaging systems, the Digital Cinema Initiatives (DCI) consortium has been issuing digital cinema standards that cover all processes from production to distribution and display. Various evaluation measures are used in the assessment of image quality, and, of these, the required number of quantization bits is one of the most important factors in realizing the very high quality images needed for cinema. DCI defined a bit depth of 12 bits by applying Barten's model to the luminance signal alone; however, actual cinema applications use color signals, so this value lacks a sufficient theoretical basis. This paper first investigates the required number of quantization bits by computer simulation in a discrete 3-D space for color images defined by CIE XYZ signals. Next, the required number of quantization bits is formulated by applying a Taylor expansion in the continuous-value region. As a result, we show that 13.04 bits, 11.38 bits, and 10.16 bits are necessary for intensity, density, and gamma-corrected signal quantization, respectively, for digital cinema applications. Since these results coincide with those obtained by calculation in the discrete-value region, the proposed analysis method drastically reduces the computer simulation time needed to obtain the required number of quantization bits for color signals.
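The following is a schematic, first-order version of that kind of argument, written in notation of my own (encoded signal s, bit depth n, threshold ΔE_th); it is not the paper's formulation, but it shows how an encoding-dependent sensitivity turns into an encoding-dependent bit-depth requirement.

```latex
% Sketch of a first-order (Taylor) argument, in my own notation: let the
% encoded signal s span [s_{\min}, s_{\max}] and be uniformly quantized with
% n bits, so the step is \delta s = (s_{\max}-s_{\min})\,2^{-n}. To first
% order, the color difference caused by one step is
\[
  \Delta E \;\approx\; \left\|\frac{\partial E}{\partial s}\right\|\,\delta s ,
\]
% so keeping the worst case below a visibility threshold \Delta E_{\mathrm{th}}
% requires
\[
  n \;\ge\; \log_2\!\left[
      \frac{(s_{\max}-s_{\min})\,
            \max\limits_{s}\left\|\frac{\partial E}{\partial s}\right\|}
           {\Delta E_{\mathrm{th}}}\right].
\]
% Because the sensitivity \partial E/\partial s differs between intensity,
% density, and gamma-corrected encodings, the bound differs as well, which is
% how encoding-dependent figures such as 13.04, 11.38, and 10.16 bits arise.
```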
Takayuki NAKACHI Tomoko SAWABE Junji SUZUKI Tetsuro FUJII
JPEG2000, an international standard for still image compression, offers 1) high coding performance, 2) unified lossless/lossy compression, and 3) resolution and SNR scalability. Resolution scalability is an especially promising attribute given the popularity of Super High Definition (SHD) images such as digital cinema. Unfortunately, its current implementation of resolution scalability is restricted to scaling factors that are powers of two. In this paper, we introduce non-octave scalable coding (NSC) based on the use of filter banks. Two types of non-octave scalable coding are implemented: one is based on a DCT filter bank, and the other uses a wavelet transform; the latter is compatible with JPEG2000 Part 2. With the proposed algorithm, images with rational-scale resolutions can be decoded from a compressed bit stream. Experiments on digital cinema test material show the effectiveness of the proposed algorithm.