Yoshihiro KITAURA Mitsuji MUNEYASU Katsuaki NAKANISHI
The JPEG2000 still image coding standard has a feature called Region of Interest (ROI) coding. This feature encodes a restricted region of an image ahead of its background (BG) region. In low-bit-rate compression, the code of the ROI occupies most of the bit stream for the whole image, which causes serious deterioration of image quality in the BG region. This paper proposes a new method for controlling image quality between the ROI and the BG in a single encoding pass, enabling more detailed image quality control. The use of ROI masks in the encoder makes this possible. The encoded data produced by the proposed method can be decoded by a standard JPEG2000 Part 1 decoder.
Chih-Yang LIN Chin-Chen CHANG Yu-Zheng WANG
This paper presents a lossless steganography method based on the multiple-base notation approach for JPEG images. Embedding a large amount of secret data in a JPEG-compressed image is a challenge, since modifying the quantized DCT coefficients may cause serious image distortion. We propose two main strategies to deal with this problem: (1) we embed the secret values in the middle frequencies of the quantized DCT coefficients, and (2) we limit the number of nonzero quantized DCT coefficients that participate in the embedding process. We also investigate the effect of modifying the standard quantization table. The experimental results show that the proposed method can embed twice as much secret data as the irreversible embedding method of Iwata et al. under the same number of embedded sets. The results also demonstrate how three important factors, namely (1) the quantization table, (2) the number of selected nonzero quantized DCT coefficients, and (3) the number of selected sets, influence the image quality and embedding capacity.
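The multiple-base notation underlying the embedding can be sketched as follows: a secret value is decomposed into digits under a list of per-coefficient bases, so each selected coefficient only has to carry a small digit. This shows the notation only; the paper's reversible embedding of those digits into middle-frequency quantized DCT coefficients is not reproduced here, and the bases used below are illustrative.

```python
def to_multiple_base(value, bases):
    """Decompose `value` into mixed-radix digits, least significant
    first; digit i lies in range(bases[i])."""
    digits = []
    for b in bases:
        digits.append(value % b)
        value //= b
    return digits

def from_multiple_base(digits, bases):
    """Recombine mixed-radix digits back into the original integer."""
    value = 0
    for d, b in zip(reversed(digits), reversed(bases)):
        value = value * b + d
    return value
```

With bases (3, 5, 7), for example, three coefficients together can carry any secret value below 3*5*7 = 105.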
Due to the decoding procedure and the filtering required for edge detection, the feature extraction process of the MPEG-7 Edge Histogram Descriptor (EHD) is time-consuming and computationally expensive. We propose a fast EHD generation method that operates in the wavelet domain of JPEG2000 images. Experimental results demonstrate the advantage of this method over conventional EHD extraction.
Fitri ARNIA Ikue IIZUKA Masaaki FUJIYOSHI Hitoshi KIYA
We propose a method to retrieve similar and duplicate images from a JPEG (Joint Photographic Experts Group) image database. The similarity level is determined from the signs of the DCT (Discrete Cosine Transform) coefficients. The method is simple and fast because it uses the signs of the DCT coefficients as features, which can be obtained directly after partial decoding of the JPEG bitstream. The method is robust to JPEG compression, in that the similarity level of duplicate images, i.e., images compressed from the same original image at different compression ratios, is not obscured by JPEG compression. Simulation results show the superiority of the method over previous methods in terms of computational complexity and robustness to JPEG compression.
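As an illustration of the sign-based matching described above, the following sketch computes a naive 8×8 DCT, keeps only the signs of the AC coefficients as a binary feature, and scores two blocks by the fraction of matching signs. The block size, the exclusion of the DC term, and the similarity score are illustrative assumptions, not the paper's exact procedure.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (illustration only; a real
    system would reuse the coefficients already in the JPEG bitstream)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = s
    return out

def sign_feature(coeffs):
    """Binary feature: the sign of each AC coefficient (DC skipped)."""
    return [1 if coeffs[u][v] >= 0 else 0
            for u in range(8) for v in range(8) if (u, v) != (0, 0)]

def sign_similarity(f1, f2):
    """Fraction of matching coefficient signs between two features."""
    return sum(a == b for a, b in zip(f1, f2)) / len(f1)
```

Because requantizing a block roughly rescales its DCT coefficients without flipping most signs, such a feature tends to stay stable across compression ratios, which is the robustness property the method exploits.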
Chung-Hsien YANG Jia-Ching WANG Jhing-Fa WANG Chi-Wei CHANG
Two-dimensional discrete wavelet transform (DWT) architectures for image processing are conventionally line-based, which is simple and has low complexity. However, they suffer from two main shortcomings: the memory required for storing intermediate data and the long latency of computing wavelet coefficients. This work presents a new block-based architecture for computing lifting-based 2-D DWT coefficients. This architecture yields a significantly smaller buffer size. Additionally, the latency is reduced from N^2 to 3N compared with line-based architectures. The proposed architecture supports the JPEG2000 default filters and has been realized on an ARM-based ALTERA EPXA10 Development Board at a frequency of 44.33 MHz.
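For reference, the reversible 5/3 lifting steps behind the JPEG2000 default lossless filter can be sketched in one dimension as follows (the 2-D hardware architectures discussed above apply these steps along rows and columns); an even-length integer signal and symmetric boundary extension are assumed:

```python
def dwt53_forward(x):
    """One level of the reversible 5/3 lifting DWT on an even-length
    integer signal, with symmetric boundary extension."""
    n = len(x)
    m = n // 2
    # Predict step: high-pass (detail) coefficients from odd samples.
    d = [x[2 * i + 1] - (x[2 * i] + x[2 * i + 2 if 2 * i + 2 < n else n - 2]) // 2
         for i in range(m)]
    # Update step: low-pass (approximation) coefficients from even samples.
    s = [x[2 * i] + (d[i - 1 if i > 0 else 0] + d[i] + 2) // 4
         for i in range(m)]
    return s, d

def dwt53_inverse(s, d):
    """Undo the update then the predict step, interleaving the result."""
    m = len(s)
    even = [s[i] - (d[i - 1 if i > 0 else 0] + d[i] + 2) // 4
            for i in range(m)]
    x = []
    for i in range(m):
        x.append(even[i])
        x.append(d[i] + (even[i] + even[i + 1 if i + 1 < m else m - 1]) // 2)
    return x
```

Integer arithmetic with floor division makes the transform exactly invertible, which is what the lossless path of JPEG2000 relies on.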
Hideki NODA Yohsuke TSUKAMIZU Michiharu NIIMI
This paper presents two steganographic methods for JPEG2000 still images that approximately preserve the histograms of the discrete wavelet transform coefficients. Compared with a conventional JPEG2000 steganographic method, the two methods show better histogram preservation, making them promising candidates for JPEG2000 steganography that is secure against histogram-based attacks.
Yoshihide TONOMURA Takayuki NAKACHI Tetsuro FUJII
Distributed Video Coding (DVC), based on the Slepian-Wolf and Wyner-Ziv theorems, is attracting attention as a new paradigm for video compression. Some DVC systems use intra-frame compression based on the discrete cosine transform (DCT). Unfortunately, conventional DVC systems have low affinity with DCT. In this paper, we propose a wavelet-based DVC scheme that utilizes the current JPEG 2000 standard. Accordingly, the scheme offers scalability with regard to both resolution and quality. In addition, we propose two methods to increase the coding gain of the new DVC scheme: one is the introduction of a Gray code, and the other is optimum quantization. An interesting point is that although our proposed method uses a Gray code, it still achieves quality scalability. Tests confirmed that the two methods increase the PSNR by about 5 dB, and that the PSNR of the new scheme (with both methods) is about 1.5 to 3 dB higher than that of conventional JPEG 2000.
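The Gray-code idea mentioned above can be sketched as follows: with the standard reflected binary Gray code, quantizer indices that differ by one differ in only a single bit, which reduces bit-plane mismatch between the side information and the Wyner-Ziv frame (the exact mapping used in the paper may differ from this textbook version):

```python
def to_gray(n):
    """Reflected binary Gray code of a nonnegative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse mapping: recover the binary index from its Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, indices 3 (011) and 4 (100) differ in three bits in natural binary, but their Gray codes 010 and 110 differ in only one.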
Khairul MUNADI Masaaki FUJIYOSHI Kiyoshi NISHIKAWA Hitoshi KIYA
The JPEG2000 compression standard treats a block of wavelet coefficients, called a codeblock, as the smallest coding unit, and each codeblock is independently entropy-coded. In this paper, we propose a codeblock-based concealment technique for JPEG2000 images to mitigate missing codeblocks caused by packet loss during network transmission. The proposed method creates a single JPEG2000 codestream from an image composed of several subsampled versions of the original image and transmits the codestream over a single channel. The technique then substitutes an affected codeblock in one subsampled image with a copy of the corresponding codeblock obtained from another subsampled image. Thus, it does not require time-consuming iterative processing to construct an estimate of the lost data. Moreover, it is applicable to large codeblock sizes and can be implemented in either the wavelet domain or the codestream domain. Simulation results confirm the effectiveness of the proposed method.
Jun HOU Xiangzhong FANG Haibin YIN Yan CHENG
This paper proposes two efficient rate control algorithms for Motion JPEG2000. Both methods provide accurate visual quality control under buffer constraints. Frames of the same scene usually have similar rate-distortion (R-D) characteristics, so the proposed methods predict the R-D models of uncoded frames, either forwardly or bilaterally, from those of already coded frames. Experimental results demonstrate that the proposed algorithms offer visual quality improvements over similar competing methods while simultaneously saving a large amount of memory.
Jun UCHITA Shogo MURAMATSU Takuma ISHIDA Hisakazu KIKUCHI
In this paper, a method for embedding coefficient parameters into Motion-JPEG2000 (MJP2) is proposed for invertible deinterlacing with variable coefficients. Invertible deinterlacing, which the authors have developed previously, can be used as a preprocess for frame-based motion picture codecs, such as MJP2, applied to interlaced videos. When conventional field interleaving is used instead, comb-tooth artifacts appear around the edges of moving objects. The invertible deinterlacing technique, on the other hand, suppresses the comb-tooth artifacts and also guarantees recovery of the original pictures. In previous work, the authors developed a variable-coefficient scheme with a motion detector, which adapts to the local characteristics of the given pictures. However, when this deinterlacing technique is applied to a video codec, the coefficient parameters have to be sent to receivers for original picture recovery. This paper proposes a parameter-embedding technique for MJP2 and constructs a standard stream consisting of both picture data and the parameters. The parameters are embedded into the LH1 component of the wavelet transform domain through the ROI (region of interest) function of JPEG2000 without significant loss in comb-tooth suppression performance. Experimental results show the feasibility of the proposed scheme.
Ayman HAGGAG Mohamed GHONEIM Jianming LU Takashi YAHAGI
In this paper, we first briefly discuss the newly emerging Secured JPEG (JPSEC) standard for security services for JPEG 2000 compressed images. We then propose a novel approach for applying authentication to JPEG 2000 images in a scalable manner. Our authentication technique can be used for source authentication, nonrepudiation and integrity verification of received, possibly transcoded, JPEG 2000 images, in such a way that it is possible to authenticate different resolutions or different qualities extracted or received from a JPEG 2000 encoded image. Three implementation methods for our authentication technique are presented. Packet-Based Authentication uses the MD5 hashing algorithm to calculate a hash value for each individual packet in the JPEG 2000 codestream. The hash values are truncated to a specified length to reduce the storage overhead, concatenated into a single string, and then signed using the RSA algorithm and the author's private key for repudiation prevention. Resolution-Based Authentication and Quality-Based Authentication generate a single hash value from all contiguous packets of each entire resolution or each entire quality layer, respectively. Our algorithms maintain most of the inherent flexibility and scalability of JPEG 2000 compressed images. The resulting secured codestream is still JPEG 2000 compliant and compatible with JPEG 2000 compliant decoders. Our algorithms are also compatible with the Public Key Infrastructure (PKI) for preventing signing repudiation by the sender, and are implemented using the new JPSEC standard for security signaling.
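The packet-based variant described above can be sketched as follows: each packet is hashed with MD5, the digests are truncated and concatenated, and the result is signed with RSA. The truncation length, the tiny textbook RSA key (n = 61 * 53 = 3233, e = 17, d = 2753), and the hash-then-sign shortcut are illustrative assumptions only; a real implementation would use proper RSA padding and realistic key sizes.

```python
import hashlib

def packet_auth_string(packets, trunc_len=8):
    """MD5-hash each JPEG 2000 packet, truncate each digest to
    trunc_len bytes, and concatenate into the string to be signed."""
    return b"".join(hashlib.md5(p).digest()[:trunc_len] for p in packets)

# Textbook RSA on a toy key: for illustration only, never real security.
def toy_rsa_sign(msg, d=2753, n=3233):
    """Sign the MD5 digest of msg (reduced mod n) with the private key."""
    h = int.from_bytes(hashlib.md5(msg).digest(), "big") % n
    return pow(h, d, n)

def toy_rsa_verify(msg, sig, e=17, n=3233):
    """Check the signature against the recomputed digest with the public key."""
    h = int.from_bytes(hashlib.md5(msg).digest(), "big") % n
    return pow(sig, e, n) == h
```

Signing with the author's private key is what provides the nonrepudiation property mentioned above, while truncating the per-packet digests keeps the storage overhead small.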
Gu-Min JEONG Chunghoon KIM Hyun-Sik AHN Bong-Ju AHN
This paper proposes a new codec design method based on JPEG for face images and presents its application to face recognition. The quantization table is designed using R-D optimization for the Yale face database. For use in embedded systems, fast codec design is also considered. The proposed codec achieves better compression rates than the JPEG codec for face images. In face recognition experiments using linear discriminant analysis (LDA), the proposed codec also shows better performance than the JPEG codec.
Jun HOU Xiangzhong FANG Haibin YIN Jiliang LI
This paper proposes a constant bit rate (CBR) control algorithm for Motion JPEG2000 (MJ2). In MJ2 coding, every frame can be coded at a similar target bitrate thanks to accurate rate control. Moreover, frames of the same scene have similar rate-distortion (R-D) characteristics. The proposed method therefore estimates the initial cutoff threshold of the current frame from the previous frame's R-D information, which significantly reduces the computational cost of the iterative rate control. Unlike previous algorithms, it can be used at any compression ratio. Experiments show that its performance is comparable to that of normal JPEG2000 coding.
Takayuki NAKACHI Tomoko SAWABE Junji SUZUKI Tetsuro FUJII
JPEG2000, an international standard for still image compression, offers 1) high coding performance, 2) unified lossless/lossy compression, and 3) resolution and SNR scalability. Resolution scalability is an especially promising attribute given the popularity of Super High Definition (SHD) images such as digital cinema. Unfortunately, the current implementation of resolution scalability is restricted to scaling factors that are powers of two. In this paper, we introduce non-octave scalable coding (NSC) based on the use of filter banks. Two types of non-octave scalable coding are implemented: one is based on a DCT filter bank, and the other uses the wavelet transform; the latter is compatible with JPEG2000 Part 2. With the proposed algorithm, images with rational-scale resolutions can be decoded from a compressed bit stream. Experiments on digital cinema test material show the effectiveness of the proposed algorithm.
Kiyoshi NISHIKAWA Shinichi NAGAWARA Hitoshi KIYA
In this paper, we propose a novel QoS (Quality of Service) estimation scheme for JPEG 2000 coded images at the RTP (Real-time Transport Protocol) layer that does not require decoding the images. The QoS of streaming video can be estimated from several viewpoints, such as transmission delay or the quality of the received images; in this paper, we evaluate QoS in terms of the quality of the received images. Generally, RTP is carried on top of UDP, and hence the quality of transmitted images can be degraded by packet loss. To estimate the quality of a received JPEG 2000 coded image without decoding it, we use the RTP header extension to send additional information to the receiver. The effectiveness of the proposed method is confirmed by computer simulations.
Gab-Cheon JUNG Hyoung-Jin MOON Seong-Mo PARK
This paper describes an efficient PCRD (Post-Compression Rate-Distortion) scheme for rate control in JPEG2000. The proposed method determines the rate constant by exploiting the decreasing characteristic of the R-D slopes, and conducts rate allocation only for the coding passes excluded from the previous rate allocation. As a result, it considerably reduces the number of operations and the encoding time, with nearly the same PSNR performance as the conventional JPEG2000 rate control scheme.
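The pass-selection step that PCRD optimizes can be sketched as follows, assuming (as the decreasing-slope characteristic above suggests) that the R-D slopes of the passes within each codeblock have already been made monotonically decreasing by convex-hull pruning. The data layout and the greedy budget fill are illustrative simplifications of the full JPEG2000 procedure:

```python
def pcrd_truncate(blocks, budget):
    """Greedy PCRD-style allocation.  `blocks` is a list of codeblocks,
    each a list of (delta_rate, delta_distortion) coding passes with
    decreasing R-D slope.  Returns passes kept per block and bytes spent."""
    passes = []
    for b, plist in enumerate(blocks):
        for i, (dr, dd) in enumerate(plist):
            passes.append((dd / dr, dr, b, i))  # slope = distortion cut per byte
    passes.sort(reverse=True)                   # steepest slope first
    spent = 0
    kept = [0] * len(blocks)
    for slope, dr, b, i in passes:
        if spent + dr > budget:
            break                               # byte budget exhausted
        spent += dr
        kept[b] = i + 1  # passes arrive in order per block, since slopes decrease
    return kept, spent
```

Restricting a later allocation to only the passes excluded by a previous one, as the paper proposes, roughly amounts to rerunning this loop over the leftover tail of passes instead of all of them.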
This paper presents an efficient VLSI architecture for the biorthogonal (9,7)/(5,3) lifting-based discrete wavelet transform used for lossy and lossless compression in JPEG2000. To improve the hardware utilization of the RPA (Recursive Pyramid Algorithm) implementation, the filter responsible for the row operations of the first level also performs both the column and row operations of the second and subsequent levels. As a result, the architecture achieves 66.7-88.9% hardware utilization. It requires 9 multipliers, 12 adders, and 12N line memories for an N×N image, a lower hardware complexity than that of other architectures with comparable throughput.
Masayuki HASHIMOTO Kenji MATSUO Atsushi KOIKE
This paper proposes an effective JPEG 2000 encoding method for reducing tiling artifacts, one of the biggest problems in JPEG 2000 encoders. Symmetric pixel extension is generally thought to be the main cause of these artifacts; however, this paper shows that differences in quantization accuracy between tiles are a more significant cause of tiling artifacts at middle or low bit rates. This paper also proposes an algorithm that predicts, during the rate control process, whether tiling artifacts will occur at a tile boundary, and that locally improves quantization accuracy through the original post-quantization control. This paper further proposes a method for reducing processing time, yet another serious problem in JPEG 2000 encoders, by predicting truncation points from the entropy of the wavelet transform coefficients prior to arithmetic coding. These encoding methods require no additional processing in the decoder. Experiments confirmed that tiling artifacts were greatly reduced and that the coding process was considerably accelerated.
Amit Kumar GUPTA Saeid NOOSHABADI David TAUBMAN
The JPEG2000 image compression standard is designed to cater to the needs of a wide range of applications, including numerous consumer products. However, its use is restricted by the high hardware cost of its implementation. The Bit Plane Coder (BPC) is the main resource-intensive component of JPEG2000, and its throughput plays a key role in determining the overall throughput of a JPEG2000 encoder. In this paper we present an algorithm and a parallel pipelined VLSI architecture for a BPC that processes a complete stripe-column concurrently during every pass. The hardware requirements and the critical path delay of the proposed technique are compared with those of existing solutions. The experimental results show that the proposed architecture has 2.6 times the throughput of existing architectures, with a comparatively small increase in hardware cost.
Jeong-Sig KIM Ju-Do KIM Keun-Young LEE
Many image and video compression algorithms work by splitting the image into blocks and producing variable-length code bits for each block data. If variable-length code data are transmitted consecutively over error-prone channel without any error protection technique, the receiving decoder cannot decode the stream properly. So the standard image and video compression algorithms insert some redundant information into the stream to provide some protection against channel errors. One of such redundancy is resynchronization marker, which enables the decoder to restart the decoding process from a known state in the event of transmission errors, but its frequent use should be restricted not to consume bandwidth too much. The Error Resilient Entropy Code (EREC) is well known method which can regain synchronization without any redundant information. It can work with the overall prefix codes, which many image compression methods use. This paper proposes an improvement to FEREC (Fast Error-Resilient Entropy Coding). It first calculates initial searching position according to bit lengths of consecutive blocks. Second, initial offset is decided using statistical distribution of long and short blocks, and initial offset is adjusted to insure all possible offset value can be examined. The proposed algorithm can speed up the construction of EREC slots, and can preserve compressed image quality in the event of transmission errors. The simulation result shows that the quality of transmitted image is enhanced about 0.3-3.5 dB compared with the existing FEREC when random channel error happens.