
Keyword Search Result

[Keyword] JPEG (87 hits)

1-20 of 87 hits

  • JPEG Image Steganalysis Using Weight Allocation from Block Evaluation

    Weiwei LUO  Wenpeng ZHOU  Jinglong FANG  Lingyan FAN  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2021/10/18
      Vol:
    E105-D No:1
      Page(s):
    180-183

    Recently, channel-aware steganography has been presented to achieve high security. Corresponding selection-channel-aware (SCA) detection algorithms have also been proposed to improve detection performance. In this paper, we propose a novel detection algorithm for JPEG steganography, in which the embedding probability and a block evaluation are integrated into a new probability. This probability can embody the changes due to data embedding. We choose the same high-pass filters as maximum diversity cascade filter residual (MD-CFR) to obtain different image residuals, and a weighted histogram method is used to extract detection features. Experimental results on detecting two typical steganographic methods show that the proposed method improves performance compared with state-of-the-art methods.

  • Neural Watermarking Method Including an Attack Simulator against Rotation and Compression Attacks

    Ippei HAMAMOTO  Masaki KAWAMURA  

     
    PAPER

      Publicized:
    2019/10/23
      Vol:
    E103-D No:1
      Page(s):
    33-41

    We have developed a digital watermarking method that uses neural networks to learn embedding and extraction processes that are robust against rotation and JPEG compression. The proposed neural networks consist of a stego-image generator, a watermark extractor, a stego-image discriminator, and an attack simulator. The attack simulator consists of a rotation layer and an additive noise layer, which simulate the rotation attack and the JPEG compression attack, respectively. The stego-image generator can learn an embedding that is robust against these attacks, and the watermark extractor can extract watermarks without rotation synchronization. The quality of the stego-images can be improved by using the stego-image discriminator, which is a type of adversarial network. We evaluated the robustness of the watermarks and the image quality and found that, using the proposed method, high-quality stego-images could be generated and the neural networks could be trained to embed and extract watermarks that are robust against rotation and JPEG compression attacks. We also showed that the robustness and image quality can be adjusted by changing the noise strength in the noise layer.
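
    The attack-simulator idea in this abstract can be pictured with a small sketch. The Python below is only an illustration under stated assumptions: the input is a grayscale array, the rotation attack is modeled with scipy.ndimage.rotate, and JPEG compression is approximated by additive Gaussian noise; the angle range, noise strength, and the function name simulate_attacks are hypothetical, not the paper's settings.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def simulate_attacks(stego, max_angle=15.0, noise_sigma=2.0, rng=None):
        """Rotation layer followed by an additive-noise layer (JPEG proxy)."""
        rng = np.random.default_rng() if rng is None else rng
        angle = rng.uniform(-max_angle, max_angle)                  # rotation attack
        attacked = rotate(stego.astype(float), angle, reshape=False, mode="nearest")
        attacked += rng.normal(0.0, noise_sigma, attacked.shape)    # noise stands in for compression error
        return np.clip(attacked, 0.0, 255.0), angle
    ```

    In a training loop, the watermark extractor would be fed the attacked output so that robustness is learned end to end.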

  • Image Identification of Encrypted JPEG Images for Privacy-Preserving Photo Sharing Services

    Kenta IIDA  Hitoshi KIYA  

     
    PAPER

      Publicized:
    2019/10/25
      Vol:
    E103-D No:1
      Page(s):
    25-32

    We propose an image identification scheme for double-compressed encrypted JPEG images that aims to identify encrypted JPEG images generated from the same original JPEG image. To store images without any visually sensitive information on photo sharing services, encrypted JPEG images are generated by using a block-scrambling-based encryption method that has been proposed for Encryption-then-Compression systems with JPEG compression. In addition, feature vectors robust against JPEG compression are extracted from the encrypted JPEG images. The use of the image encryption and the feature vectors allows us to identify encrypted images recompressed multiple times. Moreover, the proposed scheme is designed to identify images re-encrypted with different keys. The results of a simulation show that the identification performance of the scheme is high even when images are recompressed and re-encrypted.

  • Two-Layer Near-Lossless HDR Coding Using Zero-Skip Quantization with Backward Compatibility to JPEG

    Hiroyuki KOBAYASHI  Osamu WATANABE  Hitoshi KIYA  

     
    PAPER-Image

      Vol:
    E102-A No:12
      Page(s):
    1842-1848

    We propose an efficient two-layer near-lossless coding method that uses an extended histogram packing technique and is backward compatible with the legacy JPEG standard. JPEG XT, the international standard for compressing HDR images, adopts a two-layer coding method for backward compatibility with the legacy JPEG standard. However, there are two problems with this two-layer coding method. One is that it does not exhibit better near-lossless performance than other single-layer methods for HDR image compression. The other is that appropriate coding parameter values may have to be determined for each input image to achieve good near-lossless compression performance with the JPEG XT two-layer coding method. To solve these problems, we focus on a histogram-packing technique that takes into account the histogram sparseness of HDR images. We used zero-skip quantization, an extension of the histogram-packing technique proposed for lossless coding, to implement the proposed near-lossless coding method. The experimental results indicate that the proposed method not only exhibits better near-lossless compression performance than the JPEG XT two-layer coding method but also avoids the issue of choosing combinations of parameter values, without losing backward compatibility with the JPEG standard.

  • Robust Image Identification with DC Coefficients for Double-Compressed JPEG Images

    Kenta IIDA  Hitoshi KIYA  

     
    PAPER

      Publicized:
    2018/10/19
      Vol:
    E102-D No:1
      Page(s):
    2-10

    When images are shared via social networking services (SNS) and cloud photo storage services (CPSS), it is known that the JPEG images uploaded to the services are mostly re-compressed by the providers. In light of this situation, a new image identification scheme for double-compressed JPEG images is proposed in this paper. The aim is to detect a single-compressed image that has the same original image as the double-compressed ones. In the proposed scheme, a feature extracted from only the DC coefficients among the DCT coefficients is used for the identification. The use of this feature allows us not only to robustly avoid errors caused by double compression but also to perform the identification for images of different sizes. The simulation results demonstrate the effectiveness of the proposed scheme in terms of query performance.
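
    As a rough picture of the kind of DC-only feature described here, the sketch below approximates per-block DC values from a decoded grayscale image. The 8×8 block size matches the JPEG DCT grid, but using block means as a DC proxy, the 8-bit range, and the histogram feature are illustrative assumptions rather than the paper's exact construction.

    ```python
    import numpy as np

    def dc_feature(gray, block=8, bins=64):
        """Histogram of per-block DC proxies (block means) for a grayscale image."""
        h, w = gray.shape
        h, w = h - h % block, w - w % block                   # crop to full blocks
        tiles = gray[:h, :w].reshape(h // block, block, w // block, block)
        dc = tiles.mean(axis=(1, 3))                          # stand-in for per-block DC coefficients
        hist, _ = np.histogram(dc, bins=bins, range=(0, 255), density=True)
        return hist                                           # size-independent feature vector
    ```

    A query image could then be matched by comparing such vectors, for example with an L1 distance, independently of the image size.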

  • Image Manipulation Specifications on Social Networking Services for Encryption-then-Compression Systems

    Tatsuya CHUMAN  Kenta IIDA  Warit SIRICHOTEDUMRONG  Hitoshi KIYA  

     
    PAPER

      Publicized:
    2018/10/19
      Vol:
    E102-D No:1
      Page(s):
    11-18

    Encryption-then-Compression (EtC) systems have been proposed to securely transmit images through an untrusted channel provider. In this study, EtC systems were applied to social media like Twitter that carry out image manipulations. The block scrambling-based encryption schemes used in EtC systems were evaluated in terms of their robustness against image manipulation on social media. The aim was to investigate how five social networking service (SNS) providers, Facebook, Twitter, Google+, Tumblr and Flickr, manipulate images and to determine whether the encrypted images uploaded to SNS providers can avoid being distorted by such manipulations. In an experiment, encrypted and non-encrypted JPEG images were uploaded to various SNS providers. The results show that EtC systems are applicable to the five SNS providers.

  • JPEG Steganalysis Based on Multi-Projection Ensemble Discriminant Clustering

    Yan SUN  Guorui FENG  Yanli REN  

     
    LETTER-Information Network

      Publicized:
    2018/10/15
      Vol:
    E102-D No:1
      Page(s):
    198-201

    In this paper, we propose a novel algorithm called multi-projection ensemble discriminant clustering (MPEDC) for JPEG steganalysis. The scheme makes use of the optimal projection of the linear discriminant analysis (LDA) algorithm and obtains additional projection vectors with a micro-rotation method. These vectors are similar to the optimal vector. MPEDC incorporates the unsupervised K-means algorithm to make a comprehensive classification decision adaptively. The power of the proposed method is demonstrated on three steganographic methods with three feature extraction methods. Experimental results show that the accuracy can be improved by using iterative discriminant classification.

  • Security Evaluation for Block Scrambling-Based Image Encryption Including JPEG Distortion against Jigsaw Puzzle Solver Attacks

    Tatsuya CHUMAN  Hitoshi KIYA  

     
    LETTER-Image

      Vol:
    E101-A No:12
      Page(s):
    2405-2408

    Encryption-then-Compression (EtC) systems have been considered for user-controllable privacy protection on social media like Twitter. The aim of this paper is to evaluate the security of block scrambling-based encryption schemes, which have been proposed to construct EtC systems. Even though these schemes have key spaces large enough to resist brute-force attacks, each block in an encrypted image retains almost the same correlation as in the original image. Therefore, the security must be considered from viewpoints different from those of number-theory-based encryption methods with provable security, such as RSA and AES. In this paper, we evaluate the security of encrypted images including JPEG distortion by using automatic jigsaw puzzle solvers.

  • Two-Layer Lossless HDR Coding Using Histogram Packing Technique with Backward Compatibility to JPEG

    Osamu WATANABE  Hiroyuki KOBAYASHI  Hitoshi KIYA  

     
    PAPER-Image, Multimedia Environment Tech

      Vol:
    E101-A No:11
      Page(s):
    1823-1831

    An efficient two-layer coding method using the histogram packing technique with backward compatibility to the legacy JPEG standard is proposed in this paper. JPEG XT, the international standard for compressing HDR images, adopts a two-layer coding scheme for backward compatibility with the legacy JPEG. However, this two-layer coding structure does not give better lossless performance than other existing single-layer methods for HDR image compression. Moreover, the lossless compression of JPEG XT has a problem in determining the coding parameters: the lossless performance is affected by the input images and/or the parameter values, so an appropriate combination of values must be found to achieve good lossless performance. It is first pointed out that a histogram packing technique that considers the histogram sparseness of HDR images is able to improve the performance of lossless compression. Then, a novel two-layer coding method with the histogram packing technique and an additional lossless encoder is proposed. The experimental results demonstrate that the proposed method not only has better lossless compression performance than JPEG XT, but also requires no image-dependent parameter values for good compression performance, without losing backward compatibility with the well-known legacy JPEG standard.
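
    The histogram-packing step itself can be illustrated briefly. The sketch below, under the assumption that packing simply remaps the sample values actually present in a sparse-histogram image to consecutive integers and keeps the inverse table, is a toy version of the idea, not the paper's coder.

    ```python
    import numpy as np

    def pack_histogram(img):
        """Map the sparse set of occurring values to dense consecutive indices."""
        values = np.unique(img)                  # sorted values that actually occur
        packed = np.searchsorted(values, img)    # each sample -> its dense index
        return packed, values                    # 'values' doubles as the inverse table

    def unpack_histogram(packed, values):
        """Invert the packing losslessly."""
        return values[packed]
    ```

    The packed image has a contiguous, smaller alphabet, which is what tends to help the subsequent lossless encoder.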

  • Cube-Based Encryption-then-Compression System for Video Sequences

    Kosuke SHIMIZU  Taizo SUZUKI  Keisuke KAMEYAMA  

     
    PAPER-Image

      Vol:
    E101-A No:11
      Page(s):
    1815-1822

    We propose cube-based perceptual encryption (C-PE), which consists of cube scrambling, cube rotation, cube negative/positive transformation, and cube color component shuffling, and describe its application to an encryption-then-compression (ETC) system for Motion JPEG (MJPEG). In particular, unlike conventional block-based perceptual encryption (B-PE), cube rotation replaces the blocks in the original frames not only with blocks from other frames but also with blocks from the depth-wise cube sides (spatiotemporal sides). Since this makes intra-block observation more difficult and prevents unauthorized decryption from a single frame alone, it is more robust than B-PE against attack methods that lack the decryption key. However, because encrypted frames that include blocks from the spatiotemporal sides slightly affect the MJPEG compression performance, we also devise a version of C-PE with no spatiotemporal sides (NSS-C-PE) that hardly affects compression performance. C-PE makes the encrypted video sequence robust against the single-frame-based algorithmic brute-force (ABF) attack with only 21 cubes. The experimental results show the compression efficiency and encryption robustness of the C-PE/NSS-C-PE-based ETC system. The C-PE-based ETC system shows mixed results depending on the video, whereas the NSS-C-PE-based ETC system keeps the BD-PSNR loss to about -0.03 dB regardless of the video.

  • On the Security of Block Scrambling-Based EtC Systems against Extended Jigsaw Puzzle Solver Attacks

    Tatsuya CHUMAN  Kenta KURIHARA  Hitoshi KIYA  

     
    PAPER

      Publicized:
    2017/10/16
      Vol:
    E101-D No:1
      Page(s):
    37-44

    The aim of this paper is to apply automatic jigsaw puzzle solvers, which are methods of assembling jigsaw puzzles, to the field of information security. Encryption-then-Compression (EtC) systems have been considered for user-controllable privacy protection of digital images in social network services. Block scrambling-based encryption schemes, which have been proposed to construct EtC systems, have key spaces large enough to protect against brute-force attacks. However, each block in an encrypted image retains almost the same correlation as in the original image. Therefore, the security must be considered from viewpoints different from those of number-theory-based encryption methods with provable security, such as RSA and AES. In this paper, existing jigsaw puzzle solvers, which aim to assemble puzzles consisting only of scrambled and rotated pieces, are first reviewed in terms of attack strategies against encrypted images. Then, an extended jigsaw puzzle solver for block scrambling-based encryption schemes is proposed to solve encrypted images that include inverted, negative-positive transformed, and color component shuffled blocks in addition to scrambled and rotated ones. In the experiments, the jigsaw puzzle solvers are applied to encrypted images to examine the security conditions of the encryption schemes.
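
    For orientation, the block operations named above can be sketched as a toy keyed encryptor. In the Python below, the 16×16 block size, the use of a seeded NumPy generator as the key schedule, and the function name encrypt_blocks are assumptions made for illustration only; the actual EtC schemes evaluated in these papers are defined in the cited works.

    ```python
    import numpy as np

    def encrypt_blocks(img, key, block=16):
        """Toy block scrambling / rotation / inversion / negative-positive / color shuffle."""
        rng = np.random.default_rng(key)                 # keyed pseudo-random schedule
        h, w, _ = img.shape
        h, w = h - h % block, w - w % block              # crop to full blocks
        blocks = [img[y:y+block, x:x+block].copy()
                  for y in range(0, h, block) for x in range(0, w, block)]
        order = rng.permutation(len(blocks))             # block scrambling
        blocks = [blocks[i] for i in order]
        out = np.empty((h, w, 3), dtype=img.dtype)
        i = 0
        for y in range(0, h, block):
            for x in range(0, w, block):
                b = np.rot90(blocks[i], k=int(rng.integers(4)))   # rotation
                if rng.integers(2):
                    b = b[::-1, ::-1]                             # inversion (flip)
                if rng.integers(2):
                    b = 255 - b                                   # negative-positive transform
                b = b[:, :, rng.permutation(3)]                   # color component shuffling
                out[y:y+block, x:x+block] = b
                i += 1
        return out
    ```

    Because every operation is drawn from the keyed generator, the key holder can replay the draws and invert each step, whereas a puzzle-solver attack tries to undo them from pixel correlations alone.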

  • Robust Image Identification without Visible Information for JPEG Images

    Kenta IIDA  Hitoshi KIYA  

     
    PAPER

      Publicized:
    2017/10/16
      Vol:
    E101-D No:1
      Page(s):
    13-19

    A robust identification scheme for JPEG images is proposed in this paper. The aim is to robustly identify JPEG images that are generated from the same original image under various compression conditions, such as differences in compression ratios and initial quantization matrices. The proposed scheme does not provide any false negative matches in principle. In addition, secure features, which do not carry any visual information, are used to achieve not only a robust identification scheme but also a secure one. Conventional schemes cannot avoid providing false negative matches under some compression conditions, and they require managing a secret key for secure identification. The proposed scheme is applicable to the uploading process of images on social networks like Twitter for image retrieval and forensics. A number of experiments are carried out to demonstrate the effectiveness of the proposed method. The proposed method outperforms conventional ones in terms of query performance, while keeping a reasonable security level.

  • JPEG Image Steganalysis from Imbalanced Data

    Jia FU  Guorui FENG  Yanli REN  

     
    LETTER-Information Theory

      Vol:
    E100-A No:11
      Page(s):
    2518-2521

    Image steganalysis determines whether an image contains secret messages. In practice, the number of cover images is far greater than the number of images carrying secret messages, so it is very important to solve the detection problem on imbalanced image sets. Currently, SMOTE, Borderline-SMOTE, and ADASYN are three important synthetic oversampling algorithms used to address the imbalance problem. In these methods, new sample points are synthesized from the minority-class samples, but such research is seldom seen in image steganalysis. In this paper, based on the distribution of image features in steganalysis, we find that the features of the majority-class samples are similar to those of the minority-class samples. Therefore, both the majority- and minority-class samples are used to synthesize the new sample points. In experiments, compared with SMOTE, Borderline-SMOTE, and ADASYN, this approach improves detection accuracy with the FLD ensemble classifier.
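
    The oversampling idea can be pictured with a short sketch. The Python below interpolates a new point between a minority-class feature vector and one of its nearest neighbors drawn from both classes; the neighbor count, the uniform mixing weight, and the function name oversample are illustrative assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def oversample(minority, majority, n_new, k=5, rng=None):
        """Synthesize n_new points, allowing interpolation partners from either class."""
        rng = np.random.default_rng() if rng is None else rng
        pool = np.vstack([minority, majority])           # candidate neighbors from both classes
        new_points = []
        for _ in range(n_new):
            x = minority[rng.integers(len(minority))]    # seed from the minority class
            dist = np.linalg.norm(pool - x, axis=1)
            nbr = pool[rng.choice(np.argsort(dist)[1:k + 1])]  # one of the k nearest neighbors
            lam = rng.uniform()                          # interpolation weight in [0, 1)
            new_points.append(x + lam * (nbr - x))
        return np.array(new_points)
    ```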

  • Image Restoration of JPEG Encoded Images via Block Matching and Wiener Filtering

    Yutaka TAKAGI  Takanori FUJISAWA  Masaaki IKEHARA  

     
    PAPER-Image

      Vol:
    E100-A No:9
      Page(s):
    1993-2000

    In this paper, we propose a method for removing the block noise that appears in JPEG (Joint Photographic Experts Group) encoded images. We iteratively perform 3D Wiener filtering and correction of the coefficients. In the Wiener filtering, block matching is performed for each patch to collect patches that are highly similar to the reference patch. After the Wiener filtering, the collected patches are returned to their original positions and aggregated. We compare the performance of the proposed method with several conventional methods and show that the proposed method performs excellently.
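
    To make the block-matching-plus-Wiener idea concrete, here is a deliberately simplified single pass in Python: patches similar to each reference patch are grouped, shrunk with an empirical Wiener factor in the 2D DCT domain, and written back. The patch size, search step, noise estimate, and the use of a 2D (rather than the paper's 3D) transform are all assumptions for illustration.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def wiener_pass(img, patch=8, stride=8, search=16, n_similar=8, sigma=5.0):
        """One simplified block-matching + empirical-Wiener filtering pass."""
        img = np.asarray(img, dtype=float)
        out = np.zeros_like(img)
        weight = np.zeros_like(img)
        H, W = img.shape
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                ref = img[y:y + patch, x:x + patch]
                # block matching: rank candidate patches in a local window by SSD
                cands = []
                for yy in range(max(0, y - search), min(H - patch, y + search) + 1, 4):
                    for xx in range(max(0, x - search), min(W - patch, x + search) + 1, 4):
                        p = img[yy:yy + patch, xx:xx + patch]
                        cands.append((np.sum((p - ref) ** 2), yy, xx))
                cands.sort(key=lambda t: t[0])
                group = np.stack([img[yy:yy + patch, xx:xx + patch]
                                  for _, yy, xx in cands[:n_similar]])
                # empirical Wiener shrinkage of the grouped patches in the DCT domain
                coef = dctn(group, axes=(1, 2), norm="ortho")
                filt = idctn(coef * coef**2 / (coef**2 + sigma**2), axes=(1, 2), norm="ortho")
                # return the filtered reference patch to its position and aggregate
                out[y:y + patch, x:x + patch] += filt[0]
                weight[y:y + patch, x:x + patch] += 1.0
        return out / np.maximum(weight, 1e-8)
    ```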

  • An Encryption-then-Compression System for Lossless Image Compression Standards

    Kenta KURIHARA  Shoko IMAIZUMI  Sayaka SHIOTA  Hitoshi KIYA  

     
    LETTER

      Publicized:
    2016/10/07
      Vol:
    E100-D No:1
      Page(s):
    52-56

    In many multimedia applications, image encryption has to be conducted prior to image compression. This letter proposes an Encryption-then-Compression system using a JPEG XR/JPEG-LS-friendly perceptual encryption method, which enables encryption to be conducted prior to compression with the JPEG XR/JPEG-LS international lossless compression standards. The proposed encryption scheme provides approximately the same compression performance as lossless compression without any encryption. It is also shown that the proposed system consists of four block-based encryption steps and provides a reasonably high level of security. Existing conventional encryption methods have not been designed for international lossless compression standards; this letter is the first to focus on applying these standards.

  • Fuzzy Commitment Scheme-Based Secure Identification for JPEG Images with Various Compression Ratios

    Kenta IIDA  Hitoshi KIYA  

     
    PAPER-Image

      Vol:
    E99-A No:11
      Page(s):
    1962-1970

    A secure identification scheme for JPEG images is proposed in this paper. The aim is to securely and robustly identify JPEG images that are generated from the same original image under various compression levels. A property of the positive and negative signs of DCT coefficients is employed to achieve a robust scheme. The proposed scheme is robust against differences in compression levels and does not produce false negative matches at any compression level. Conventional schemes that have this property are not secure. To construct a secure identification system, we combine a new error-correction technique using 1-bit parity with a fuzzy commitment scheme, a well-known biometric cryptosystem. In addition, a way to speed up the identification is also proposed. The experimental results show that the proposed scheme is effective for not only still images but also video sequences in terms of query metrics such as false positive, false negative, and true positive matches, while keeping a high level of security.

  • An Encryption-then-Compression System for JPEG/Motion JPEG Standard

    Kenta KURIHARA  Masanori KIKUCHI  Shoko IMAIZUMI  Sayaka SHIOTA  Hitoshi KIYA  

     
    PAPER

      Vol:
    E98-A No:11
      Page(s):
    2238-2245

    In many multimedia applications, image encryption has to be conducted prior to image compression. This paper proposes a JPEG-friendly perceptual encryption method, which enables encryption to be conducted prior to JPEG and Motion JPEG compression. The proposed encryption scheme provides approximately the same compression performance as JPEG compression without any encryption, for both grayscale and color images. It is also shown that the proposed scheme consists of four block-based encryption steps and provides a reasonably high level of security. Most conventional perceptual encryption schemes have not been designed for international compression standards, but this paper focuses on applying the JPEG and Motion JPEG standards, which are among the most widely used image compression standards. In addition, this paper considers an efficient key management scheme, which makes it easy to manage the keys when encrypting with multiple keys.

  • Biometric Identification Using JPEG2000 Compressed ECG Signals

    Hung-Tsai WU  Yi-Ting WU  Wen-Whei CHANG  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2015/06/24
      Vol:
    E98-D No:10
      Page(s):
    1829-1837

    In wireless telecardiology applications, electrocardiogram (ECG) signals are often represented in a compressed format for efficient transmission and storage. Incorporating compressed-ECG-based biometrics enables faster person identification because it bypasses full decompression. This study presents a new method to combine ECG biometrics with data compression within a common JPEG2000 framework. To this end, an ECG signal is treated as an image, and the JPEG2000 standard is applied for data compression. Features relating to ECG morphology and heartbeat intervals are computed directly from the compressed ECG. Different classification approaches are used for person identification. Experiments on standard ECG databases demonstrate the validity of the proposed system for biometric identification, with high accuracies on both healthy and diseased subjects.

  • Method of Spread Spectrum Watermarking Using Quantization Index Modulation for Cropped Images

    Takahiro YAMAMOTO  Masaki KAWAMURA  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2015/04/16
      Vol:
    E98-D No:7
      Page(s):
    1306-1315

    We propose a method of spread-spectrum digital watermarking with quantization index modulation (QIM) and evaluate the method on the basis of the IHC evaluation criteria. The spread-spectrum technique can make watermarks robust by using spread codes. Since watermarks can have redundancy, messages can be decoded from a degraded stego-image. Under the IHC evaluation criteria, the messages must be decoded without the original image. To do so, we propose a method in which watermarks are generated by using the spread-spectrum technique and are embedded by QIM. QIM is an embedding method that allows decoding without the original image. The IHC evaluation criteria include JPEG compression and cropping as attacks. JPEG compression is lossy, so errors occur in the watermarks. Since watermarks in stego-images fall out of synchronization due to cropping, the position of the embedded watermarks may be unclear, and this position must be detected during decoding. Therefore, both error correction and synchronization are required for digital watermarking methods. As a countermeasure against cropping, the original image is divided into segments for embedding the watermarks, and each segment is further divided into 8×8-pixel blocks. A watermark is embedded into a DCT coefficient of a block by QIM. To synchronize during decoding, the proposed method uses the correlation between watermarks and spread codes. After synchronization, the watermarks are extracted by QIM, and the messages are then estimated from the watermarks. The proposed method was evaluated on the basis of the IHC evaluation criteria, which require a PSNR higher than 30 dB. Ten 1920×1080 rectangular regions were cropped from each stego-image, and 200-bit messages were decoded from these regions; their BERs were calculated to assess the tolerance. As a result, the BERs were less than 1.0%, and the average PSNR was 46.70 dB, so our method achieved high image quality under the IHC evaluation criteria. In addition, the proposed method was evaluated by using StirMark 4.0, and we found that it is robust against not only JPEG compression and cropping but also additive noise and Gaussian filtering. Moreover, the method has the advantage that the detection time is short, since the synchronization is processed on 8×8-pixel blocks.
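
    The QIM step described above is standard enough to sketch. The Python below shows plain (non-dithered) QIM embedding and extraction for a single DCT coefficient; the step size delta is an illustrative value, and the paper's spread-spectrum generation and block synchronization are omitted.

    ```python
    import numpy as np

    def qim_embed(coeff, bit, delta=16.0):
        """Quantize the coefficient onto the lattice associated with the bit."""
        offset = bit * delta / 2.0
        return delta * np.round((coeff - offset) / delta) + offset

    def qim_extract(coeff, delta=16.0):
        """Decide which bit's lattice the received coefficient lies closest to."""
        d0 = abs(coeff - qim_embed(coeff, 0, delta))
        d1 = abs(coeff - qim_embed(coeff, 1, delta))
        return int(d1 < d0)
    ```

    Because extraction only compares distances to the two lattices, no original image is needed, which is the property the abstract relies on.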

  • An Efficient Wavelet-Based ROI Coding for Multiple Regions

    Kazuma SHINODA  Naoki KOBAYASHI  Ayako KATOH  Hideki KOMAGATA  Masahiro ISHIKAWA  Yuri MURAKAMI  Masahiro YAMAGUCHI  Tokiya ABE  Akinori HASHIGUCHI  Michiie SAKAMOTO  

     
    PAPER-Image

      Vol:
    E98-A No:4
      Page(s):
    1006-1020

    Region of interest (ROI) coding is a useful function for many applications. JPEG2000 supports ROI coding and can decode ROIs preferentially regardless of the shape and number of regions. However, if the number of regions is quite large, the ROI coding performance of JPEG2000 declines because the code-stream includes many useless non-ROI codes. This paper proposes a wavelet-based ROI coding method suited for multiple ROIs. The proposed wavelet transform does not access any non-ROI regions when transforming the ROIs. Additionally, the proposed method eliminates unnecessary coding of the bits in the higher bit planes of non-ROI regions by adding an ROI map to the code-stream. The experimental results show that the proposed method achieves a higher peak signal-to-noise ratio than the ROI coding of JPEG2000. The proposed method can be applied to both max-shift and scaling-based ROI coding.

1-20 of 87 hits