
Keyword Search Result

[Keyword] image processing (166 hits)

Showing 1-20 of 166 hits

  • Grid Sample Based Temporal Iteration for Fully Pipelined 1-ms SLIC Superpixel Segmentation System Open Access

    Yuan LI  Tingting HU  Ryuji FUCHIKAMI  Takeshi IKENAGA  

     
    PAPER-Computer System  |  Publicized: 2023/12/19  |  Vol: E107-D No:4  |  Page(s): 515-524

    A 1 millisecond (1-ms) vision system, which processes videos at 1000 frames per second (FPS) within a 1 ms/frame delay, plays an increasingly important role in fields such as robotics and factory automation. Superpixel segmentation, one of the most extensively employed image oversegmentation methods, is a crucial pre-processing step for reducing computations in various computer vision applications. Among the different superpixel methods, simple linear iterative clustering (SLIC) has gained widespread adoption due to its simplicity, effectiveness, and computational efficiency. However, the iterative assignment and update steps in SLIC make it challenging to achieve high processing speed. To address this limitation and develop a SLIC superpixel segmentation system with a 1 ms delay, this paper proposes grid sample based temporal iteration. By leveraging the high frame rate of the input video, the proposed method distributes the iterations over the temporal domain, ensuring that the system's delay stays within one frame. Additionally, grid sample information is added to the obtained superpixel centers as initialization information to enhance the stability of the superpixels. Furthermore, a selective label propagation based pipeline architecture is proposed for parallel computation of all possible label propagations. This eliminates data dependency between adjacent pixels and enables a fully pipelined system. The evaluation results demonstrate that the proposed superpixel segmentation system achieves boundary recall and under-segmentation error comparable to the original SLIC algorithm. When considering label consistency, the proposed system surpasses the performance of state-of-the-art superpixel segmentation methods. Moreover, in terms of hardware performance, the proposed system processes 1000 FPS video with a delay of 0.985 ms/frame.
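
    A minimal sketch of the temporal-iteration idea (not the authors' FPGA design): instead of running many SLIC iterations on one frame, each high-frame-rate frame receives a single assignment/update step seeded with the centers carried over from the previous frame, and the centers are pulled slightly back toward their grid anchors; the grid spacing, the pull factor `alpha`, and the `video_stream` helper are illustrative assumptions.

    ```python
    # Hedged sketch: one SLIC iteration per incoming frame, seeded from the
    # previous frame's centers, with a grid-anchor pull for temporal stability.
    import numpy as np

    def slic_step(img, centers, grid, step, m=10.0, alpha=0.1):
        """img: HxW gray frame in [0,1]; centers/grid: (K,3) float arrays of
        (y, x, intensity); step: grid spacing of the superpixel seeds."""
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        labels = np.full((h, w), -1, dtype=np.int32)
        best = np.full((h, w), np.inf)
        for k, (cy, cx, ci) in enumerate(centers):
            y0, y1 = int(max(cy - step, 0)), int(min(cy + step, h))
            x0, x1 = int(max(cx - step, 0)), int(min(cx + step, w))
            dc = (img[y0:y1, x0:x1] - ci) ** 2                     # color distance
            ds = ((ys[y0:y1, x0:x1] - cy) ** 2 +                   # spatial distance
                  (xs[y0:y1, x0:x1] - cx) ** 2) / (step ** 2)
            d = dc + m * ds
            upd = d < best[y0:y1, x0:x1]
            best[y0:y1, x0:x1][upd] = d[upd]
            labels[y0:y1, x0:x1][upd] = k
        new_centers = centers.copy()
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                new_centers[k] = [ys[mask].mean(), xs[mask].mean(), img[mask].mean()]
        # pull centers toward their grid anchors (assumed form of the grid sample info)
        new_centers[:, :2] = (1 - alpha) * new_centers[:, :2] + alpha * grid[:, :2]
        return labels, new_centers

    # usage over a high-frame-rate stream: one step per frame keeps the delay low
    # centers = grid.copy()
    # for frame in video_stream():
    #     labels, centers = slic_step(frame, centers, grid, step=16)
    ```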

  • ConvNeXt-Haze: A Fog Image Classification Algorithm for Small and Imbalanced Sample Dataset Based on Convolutional Neural Network

    Fuxiang LIU  Chen ZANG  Lei LI  Chunfeng XU  Jingmin LUO  

     
    PAPER  |  Publicized: 2022/11/22  |  Vol: E106-D No:4  |  Page(s): 488-494

    Because defogging algorithms perform differently at different fog concentrations, this paper proposes a fog image classification algorithm for a small and imbalanced sample dataset based on a convolutional neural network, which classifies fog images in advance so as to improve the effect and adaptive ability of image defogging algorithms in foggy and hazy weather. To address environmental interference, camera depth-of-field interference, and uneven feature distribution in fog images, the CutBlur-Gauss data augmentation method together with focal loss and label smoothing strategies is used to improve classification accuracy. The method is compared with the machine learning algorithm SVM and the classical convolutional neural network classifiers AlexNet, ResNet-34, ResNet-50, and ResNet-101. It achieves 94.5% classification accuracy on the dataset used in this paper, exceeding the other compared algorithms and achieving the best accuracy, which demonstrates that the improved algorithm has better classification accuracy.
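
    A hedged sketch of two of the training strategies named in the abstract, focal loss combined with label smoothing, in a generic PyTorch formulation; it is not the authors' code, and the gamma and epsilon values are illustrative assumptions.

    ```python
    # Hedged sketch: focal loss with label smoothing for imbalanced classification.
    import torch
    import torch.nn.functional as F

    def focal_loss_with_smoothing(logits, targets, gamma=2.0, eps=0.1):
        """logits: (N, C) raw scores; targets: (N,) integer class labels."""
        n, c = logits.shape
        log_probs = F.log_softmax(logits, dim=1)          # (N, C)
        probs = log_probs.exp()
        # smoothed one-hot targets: 1 - eps on the true class, eps/(C-1) elsewhere
        with torch.no_grad():
            smooth = torch.full_like(log_probs, eps / (c - 1))
            smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps)
        # focal weighting down-weights well-classified (high-probability) samples
        focal_weight = (1.0 - probs) ** gamma
        loss = -(smooth * focal_weight * log_probs).sum(dim=1)
        return loss.mean()

    # example: loss = focal_loss_with_smoothing(model(images), labels)
    ```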

  • A Bus Crowdedness Sensing System Using Deep-Learning Based Object Detection

    Wenhao HUANG  Akira TSUGE  Yin CHEN  Tadashi OKOSHI  Jin NAKAZAWA  

     
    PAPER  |  Publicized: 2022/06/23  |  Vol: E105-D No:10  |  Page(s): 1712-1720

    The crowdedness of buses plays an increasingly important role in the disease control of COVID-19, but the lack of a practical approach to sensing bus crowdedness is a major problem. This paper proposes a bus crowdedness sensing system that exploits deep learning-based object detection to count the numbers of passengers getting on and off a bus and thus estimate the crowdedness of buses in real time. In our prototype system, we combine the YOLOv5s object detection model with a Kalman filter object tracking algorithm to implement a sensing algorithm running on a Jetson Nano-based vehicular device mounted on a bus. Using driving recorder video data taken from a real bus, we experimentally evaluate the performance of the proposed sensing system and verify that it improves counting accuracy and achieves real-time processing on the Jetson Nano platform.
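
    A hedged sketch of the counting step only: given per-pedestrian tracks (for example, YOLOv5s detections associated over time by a Kalman-filter tracker, both abstracted away here), passengers are counted by the direction in which each track crosses a virtual door line. The track format and the door-line convention are assumptions for illustration.

    ```python
    # Hedged sketch: boarding/alighting counts from tracked centroids crossing a line.
    def count_boarding(tracks, door_y):
        """tracks: dict track_id -> list of (frame, cx, cy) centroids.
        Returns (num_on, num_off) from crossings of the horizontal line door_y."""
        num_on = num_off = 0
        for points in tracks.values():
            ys = [cy for _, _, cy in points]
            if not ys:
                continue
            started_outside = ys[0] < door_y
            ended_outside = ys[-1] < door_y
            if started_outside and not ended_outside:
                num_on += 1        # track entered the bus
            elif not started_outside and ended_outside:
                num_off += 1       # track left the bus
        return num_on, num_off

    # crowdedness estimate: passengers_on_board += num_on - num_off each interval
    ```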

  • Current Status and Issues of Traffic Light Recognition Technology in Autonomous Driving System Open Access

    Naoki SUGANUMA  Keisuke YONEDA  

     
    INVITED PAPER  |  Publicized: 2021/10/12  |  Vol: E105-A No:5  |  Page(s): 763-769

    Autonomous driving technology is currently attracting a lot of attention as a technology that will play a role in the next generation of mobility. For autonomous driving in urban areas, various kinds of information must be recognized; in particular, the recognition of traffic lights is important when crossing intersections. In this paper, traffic light recognition technology developed by the authors was evaluated using onboard sensor data recorded during autonomous driving in the Tokyo waterfront area as an example of traffic light recognition technology. The results show that traffic lights could be recognized with an accuracy of approximately 99%, sufficient for decision-making when approaching intersections. However, the evaluation also confirmed that traffic light recognition became difficult in situations involving occlusion by other objects, background assimilation, nighttime conditions, and backlighting by sunlight. It was also confirmed that these effects are mostly temporary and do not significantly affect the decision-making for entering intersections, because information from multiple traffic lights installed at an intersection can be utilized. On the other hand, recognition with current onboard cameras is expected to become technically difficult in situations where not all traffic lights are visually recognizable due to backlighting or front lighting by sunlight while the vehicle is stopped at the stop line of an intersection. This paper summarizes these results and presents the necessity of appropriate traffic light installation on the assumption of recognition by onboard cameras.

  • Experiment of Integrated Technologies in Robotics, Network, and Computing for Smart Agriculture Open Access

    Ryota ISHIBASHI  Takuma TSUBAKI  Shingo OKADA  Hiroshi YAMAMOTO  Takeshi KUWAHARA  Kenichi KAWAMURA  Keisuke WAKAO  Takatsune MORIYAMA  Ricardo OSPINA  Hiroshi OKAMOTO  Noboru NOGUCHI  

     
    INVITED PAPER  |  Publicized: 2021/11/05  |  Vol: E105-B No:4  |  Page(s): 364-378

    To sustain and expand the agricultural economy even as its workforce shrinks, the efficiency of farm operations must be improved. One key to efficiency improvement is completely unmanned driving of farm machines, which requires stable monitoring and control of machines from remote sites, a safety system to ensure safe autonomous driving even without manual operations, and precise positioning not only in small farm fields but also over wider areas. As possible solutions to these issues, we have developed technologies for wireless network quality prediction, an end-to-end overlay network, machine vision for safety and positioning, network-cooperated vehicle control, and autonomous tractor control, and have conducted experiments in actual field environments. Experimental results show that: 1) remote monitoring and control can be continued seamlessly even when the connection between the tractor and the remote site needs to be switched across different wireless networks during autonomous driving; 2) the safety of autonomous driving can be ensured automatically by detecting both the presence of people in front of the unmanned tractor and disturbances of network quality affecting the remote monitoring operation; and 3) the unmanned tractor can continue precise autonomous driving even when precise positioning by satellite systems cannot be performed.

  • Effects of Image Processing Operations on Adversarial Noise and Their Use in Detecting and Correcting Adversarial Images Open Access

    Huy H. NGUYEN  Minoru KURIBAYASHI  Junichi YAMAGISHI  Isao ECHIZEN  

     
    PAPER  |  Publicized: 2021/10/05  |  Vol: E105-D No:1  |  Page(s): 65-77

    Deep neural networks (DNNs) have achieved excellent performance on several tasks and have been widely applied in both academia and industry. However, DNNs are vulnerable to adversarial machine learning attacks in which noise is added to the input to change the network's output. Consequently, DNN-based mission-critical applications such as those used in self-driving vehicles have reduced reliability and could cause severe accidents and damage. Moreover, adversarial examples could be used to poison DNN training data, resulting in corruption of the trained models. Besides the need for detecting adversarial examples, correcting them is important for restoring data and system functionality to normal. We have developed methods for detecting and correcting adversarial images that use multiple image processing operations with multiple parameter values. For detection, we devised a statistics-based method that outperforms the feature squeezing method. For correction, we devised a method that, for the first time, uses two levels of correction. The first level is label correction, with the focus on restoring the adversarial images' original predicted labels (for use in the current task). The second level is image correction, with the focus on both the correctness and quality of the corrected images (for use in the current and other tasks). Our experiments demonstrated that the correction method could correct nearly 90% of the adversarial images created by classical adversarial attacks while affecting only about 2% of the normal images.
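
    A hedged sketch of the general detection idea, not the authors' exact statistical test: the classifier is run on the original input and on versions passed through several image-processing operations with several parameter values, and a large disagreement suggests an adversarial input. The `classify` callable, the chosen operations, and the threshold are illustrative assumptions.

    ```python
    # Hedged sketch: flag inputs whose predictions change a lot under mild processing.
    import numpy as np

    def bit_depth_reduce(img, bits):
        levels = 2 ** bits - 1
        return np.round(img * levels) / levels

    def box_blur(img, k):
        pad = k // 2
        padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def looks_adversarial(img, classify, threshold=0.5):
        """img: HxWx3 float image in [0,1]; classify returns a probability vector."""
        base = classify(img)
        variants = [bit_depth_reduce(img, b) for b in (3, 4, 5)]
        variants += [box_blur(img, k) for k in (3, 5)]
        # maximum L1 distance between the original and any processed prediction
        score = max(np.abs(base - classify(v)).sum() for v in variants)
        return score > threshold
    ```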

  • Movie Map for Virtual Exploration in a City

    Kiyoharu AIZAWA  

     
    INVITED PAPER  |  Publicized: 2021/10/12  |  Vol: E105-D No:1  |  Page(s): 38-45

    This paper introduces our work on a Movie Map, which enables users to explore a given city area using 360° videos. Visual exploration of a city is always in demand. Nowadays, we are familiar with Google Street View (GSV), an interactive visual map. Despite its wide use, GSV provides only sparse images of streets, which often confuses users and lowers user satisfaction. Forty years ago, a video-based interactive map, well known as the Aspen Movie Map, was created: it used videos instead of sparse images and seemed to improve the user experience dramatically. However, the Aspen Movie Map was based on analog technology, required a huge effort, and was never built again. Thus, we renovate the Movie Map using state-of-the-art technology. We build a new Movie Map system with an interface for exploring cities. The system consists of four stages: acquisition, analysis, management, and interaction. After acquiring 360° videos along streets in target areas, the analysis of the videos is almost automatic. Frames of the video are localized on the map, intersections are detected, and the videos are segmented. Turning views at intersections are synthesized. By connecting the video segments following the specified movement in an area, we can watch a walking view along a street. The interface allows for easy exploration of a target area, and it can also show virtual billboards in the view.

  • CLAHE Implementation and Evaluation on a Low-End FPGA Board by High-Level Synthesis

    Koki HONDA  Kaijie WEI  Masatoshi ARAI  Hideharu AMANO  

     
    PAPER  |  Publicized: 2021/07/12  |  Vol: E104-D No:12  |  Page(s): 2048-2056

    Automobile companies have been trying to replace the side mirrors of cars with small cameras to reduce air resistance. This also makes it possible to apply image processing to improve the quality of the image. Contrast Limited Adaptive Histogram Equalization (CLAHE) is one such technique for improving the image quality of the side mirror camera, and it requires large computational performance. Here, an implementation method for CLAHE on a low-end FPGA board using high-level synthesis is proposed. CLAHE has two main processing parts: cumulative distribution function (CDF) generation and bilinear interpolation. During CDF generation, the effect of an increasing loop initiation interval can be greatly reduced by placing multiple Processing Elements (PEs), and during interpolation, latency and BRAM usage are reduced by revising how the CDF is held and how it is calculated. Finally, by connecting each module with streaming interfaces, using dataflow pragmas, overlapping processing, and hiding data transfer, our HLS implementation achieved a result comparable to that of HDL. We parameterized the components of the algorithm so that the number of tiles and the size of the image can be easily changed. The source code for this research can be downloaded from https://github.com/kokihonda/fpga_clahe.
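
    A hedged sketch of the CLAHE algorithm itself (per-tile clipped CDF followed by bilinear interpolation), written in plain NumPy rather than the paper's HLS C; the tile count and clip limit are illustrative assumptions.

    ```python
    # Hedged sketch: CLAHE = clipped per-tile histograms -> CDF LUTs -> bilinear blend.
    import numpy as np

    def clahe(img, tiles=(8, 8), clip_limit=40, bins=256):
        """img: 2-D uint8 image.  Returns the contrast-limited equalized image."""
        h, w = img.shape
        th, tw = h // tiles[0], w // tiles[1]
        luts = np.zeros((tiles[0], tiles[1], bins), dtype=np.float64)

        # 1) per-tile clipped histogram -> CDF -> mapping LUT
        for ty in range(tiles[0]):
            for tx in range(tiles[1]):
                tile = img[ty * th:(ty + 1) * th, tx * tw:(tx + 1) * tw]
                hist = np.bincount(tile.ravel(), minlength=bins).astype(np.float64)
                excess = np.clip(hist - clip_limit, 0, None).sum()
                hist = np.minimum(hist, clip_limit) + excess / bins   # redistribute excess
                cdf = hist.cumsum()
                luts[ty, tx] = (bins - 1) * cdf / cdf[-1]

        # 2) bilinear interpolation between the four surrounding tile LUTs
        ys, xs = np.mgrid[0:h, 0:w]
        fy = np.clip((ys - th / 2) / th, 0, tiles[0] - 1)   # tile-space coordinates
        fx = np.clip((xs - tw / 2) / tw, 0, tiles[1] - 1)
        y0, x0 = np.floor(fy).astype(int), np.floor(fx).astype(int)
        y1, x1 = np.minimum(y0 + 1, tiles[0] - 1), np.minimum(x0 + 1, tiles[1] - 1)
        wy, wx = fy - y0, fx - x0

        v = img.astype(int)
        out = ((1 - wy) * (1 - wx) * luts[y0, x0, v] + (1 - wy) * wx * luts[y0, x1, v]
               + wy * (1 - wx) * luts[y1, x0, v] + wy * wx * luts[y1, x1, v])
        return out.astype(np.uint8)
    ```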

  • Virtual Address Remapping with Configurable Tiles in Image Processing Applications

    Jae Young HUR  

     
    PAPER-Computer System  |  Publicized: 2019/10/17  |  Vol: E103-D No:2  |  Page(s): 309-320

    Conventional linear or tiled address maps can degrade performance and memory utilization when traffic patterns do not match the underlying address map. The address map is usually fixed at design time; accordingly, it is difficult to adapt it to a given application. Modern embedded systems usually accommodate memory management units (MMUs), and as a result, depending on virtual address patterns, the system can suffer performance overheads due to page table walks. To alleviate this overhead, we propose to cluster and rearrange tiles to construct an MMU-aware configurable address map. To construct the clustered tiled map, a generic tile number remapping algorithm is presented. In the presented scheme, the address map is configured based on an adaptive dimensioning algorithm. Considering image processing applications, a design, analysis, implementation, and simulations are conducted. The results indicate that the proposed method can improve performance and memory utilization with moderate hardware costs.
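
    A hedged illustration of the two baseline address maps the paper starts from (not the proposed clustered, configurable map): a linear row-major map and a tiled map in which the pixels of one tile occupy consecutive addresses, so tile-shaped accesses touch fewer pages. The 64×64 tile size is an illustrative assumption.

    ```python
    # Hedged sketch: linear vs. tiled virtual address computation for a 2-D image.
    def linear_address(x, y, width):
        return y * width + x

    def tiled_address(x, y, width, tile_w=64, tile_h=64):
        tiles_per_row = width // tile_w
        tile_id = (y // tile_h) * tiles_per_row + (x // tile_w)   # which tile
        offset = (y % tile_h) * tile_w + (x % tile_w)             # position inside the tile
        return tile_id * (tile_w * tile_h) + offset

    # example: with 4-KiB pages a 64x64 tile of bytes fits exactly in one page,
    # so a tile-local access pattern triggers only one page-table walk per tile
    print(linear_address(70, 3, 1024), tiled_address(70, 3, 1024))
    ```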

  • Real-Time Image Processing Based on Service Function Chaining Using CPU-FPGA Architecture

    Yuta UKON  Koji YAMAZAKI  Koyo NITTA  

     
    PAPER-Network System  |  Publicized: 2019/08/05  |  Vol: E103-B No:1  |  Page(s): 11-19

    Advanced information-processing services based on cloud computing are in great demand; however, users want to be able to customize cloud services for their own purposes. To provide image-processing services that can be optimized for each user's purpose, we propose a technique for chaining image-processing functions in a CPU-field programmable gate array (FPGA) coupled server architecture. One of the most important requirements for combining multiple image-processing functions on a network is low latency in the server nodes. However, a large delay occurs in the conventional CPU-FPGA architecture due to the overheads of packet reordering, required to ensure the correctness of image processing, and of data transfer between the CPU and FPGA at the application level. This paper presents a CPU-FPGA server architecture with a real-time packet reordering circuit for low-latency image processing. To confirm the efficiency of our idea, we evaluated the latency of histogram of oriented gradients (HOG) feature calculation as an offloaded image-processing function. The results show that the latency is about 26 times lower than that of the conventional CPU-FPGA architecture. Moreover, the throughput decreased by less than 3.7% under the worst-case condition in which 90 percent of the packets are randomly swapped at a 40-Gbps input rate. Finally, we demonstrated that a real-time video monitoring service can be provided by combining image-processing functions using our architecture.

  • Weber Centralized Binary Fusion Descriptor for Fingerprint Liveness Detection

    Asera WAYNE ASERA  Masayoshi ARITSUGI  

     
    LETTER-Pattern Recognition  |  Publicized: 2019/04/17  |  Vol: E102-D No:7  |  Page(s): 1422-1425

    In this research, we propose a novel method to determine fingerprint liveness that improves the discriminative behavior and classification accuracy of combined features. The approach detects whether a fingerprint comes from a live or a fake source. Fingerprint images are analyzed in the differential excitation (DE) component and the centralized binary pattern (CBP) component, which yield the DE image and the CBP image, respectively. The images obtained are used to generate a two-dimensional histogram that is subsequently used as a feature vector. To decide whether a fingerprint image is from a live or fake source, the feature vector is processed by support vector machine (SVM) classifiers. To evaluate the performance of the proposed method and compare it with existing approaches, we conducted experiments using the datasets from the 2011 and 2015 Liveness Detection Competition (LivDet), collected from four sensors. The results show that the proposed method gives comparable or even better results, further proving that methods derived from a combination of features perform better than existing methods.
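
    A hedged sketch of the feature construction idea: a Weber-style differential excitation image and a binary-pattern image are computed per pixel and combined into a two-dimensional joint histogram used as the SVM feature vector. The binary pattern below is a simplified LBP-style stand-in, not necessarily the exact centralized binary pattern of the paper, and the bin counts are illustrative assumptions.

    ```python
    # Hedged sketch: differential excitation + binary pattern -> joint 2-D histogram.
    import numpy as np

    def de_and_pattern(img, eps=1e-6):
        """img: 2-D float array.  Returns (DE image, 8-bit pattern image) on the interior."""
        c = img[1:-1, 1:-1]
        neighbors = [img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
        diff_sum = sum((n - c) for n in neighbors) / (c + eps)
        de = np.arctan(diff_sum)                              # differential excitation
        pattern = np.zeros_like(c, dtype=np.uint8)
        for bit, n in enumerate(neighbors):
            pattern |= ((n >= c).astype(np.uint8) << bit)     # simplified binary pattern
        return de, pattern

    def joint_histogram(de, pattern, de_bins=8):
        de_idx = np.digitize(de, np.linspace(-np.pi / 2, np.pi / 2, de_bins + 1)[1:-1])
        hist, _, _ = np.histogram2d(de_idx.ravel(), pattern.ravel(),
                                    bins=(de_bins, 256), range=((0, de_bins), (0, 256)))
        return (hist / hist.sum()).ravel()                    # feature vector for the SVM
    ```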

  • Pixel Selection and Intensity Directed Symmetry for High Frame Rate and Ultra-Low Delay Matching System

    Tingting HU  Takeshi IKENAGA  

     
    PAPER-Machine Vision and its Applications  |  Publicized: 2018/02/16  |  Vol: E101-D No:5  |  Page(s): 1260-1269

    A high frame rate and ultra-low delay matching system plays an increasingly important role in human-machine interactive applications, which call for a higher frame rate and lower delay for a better experience. The large amount of data and the complex computation in a local feature based matching system make it difficult to achieve high processing speed and ultra-low delay with limited resources. Aiming at a matching system with a processing speed of more than 1000 fps and a delay of less than 1 ms/frame, this paper puts forward a local binary feature based matching system on a field-programmable gate array (FPGA). Pixel selection based 4-1-4 parallel matching and intensity directed symmetry are proposed for the implementation of this system. To design a basic framework with high processing speed and ultra-low delay using limited resources, pixel selection based 4-1-4 parallel matching is proposed, which achieves four-thread processing with only one thread's worth of resource consumption. Assuming that the keypoint orientation best bisects the patch and points toward the region of high intensity, intensity directed symmetry is proposed to calculate the keypoint orientation in a hardware-friendly way, which is an important part of a rotation-robust matching system. Software experiments show that the proposed keypoint orientation calculation method achieves almost the same performance as the state-of-the-art intensity centroid orientation calculation method in a matching system. Hardware experiments show that the designed image processing core can process VGA (640×480) video at 1306 fps with a delay of 0.8083 ms/frame.
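
    For reference, a hedged sketch of the intensity centroid baseline mentioned in the abstract (the ORB-style orientation), not the proposed intensity-directed-symmetry hardware method: the keypoint orientation points from the patch center toward the intensity centroid, i.e. toward the brighter side of the patch.

    ```python
    # Hedged sketch: intensity centroid keypoint orientation (the software baseline).
    import numpy as np

    def intensity_centroid_orientation(patch):
        """patch: square 2-D array centered on the keypoint; returns the angle in radians."""
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        ys = ys - (h - 1) / 2.0
        xs = xs - (w - 1) / 2.0
        m10 = (xs * patch).sum()   # first-order moment along x
        m01 = (ys * patch).sum()   # first-order moment along y
        return np.arctan2(m01, m10)
    ```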

  • A Color Restoration Method for Irreversible Thermal Paint Based on Atmospheric Scattering Model

    Zhan WANG  Ping-an DU  Jian LIU  

     
    LETTER-Image Processing and Video Processing  |  Publicized: 2017/12/08  |  Vol: E101-D No:3  |  Page(s): 826-829

    Irreversible thermal paints, or temperature-sensitive paints, are a special kind of temperature sensor that indicates the temperature grade through color change and is widely used for off-line temperature measurement during aero-engine tests. Unfortunately, the hot gas flow within the engine during measurement always degrades the paint color, causing a serious saturation reduction and contrast loss of the paint colors. This phenomenon makes it more difficult to interpret the thermal paint test results. Existing contrast enhancement algorithms can significantly increase image contrast but cannot effectively preserve the hue of the paint images, which causes color shift. In this paper, we propose a color restoration method for thermal paint images. The method utilizes the atmospheric scattering model to restore the lost contrast and saturation information, so that the hue is preserved and the temperature can be precisely interpreted from the image.
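
    For context, a minimal statement of the standard atmospheric scattering model that this line of work builds on (standard dehazing notation, not necessarily the paper's exact symbols): I is the observed degraded image, J the scene radiance to recover, A the global atmospheric (airlight) term, t the transmission, and t_0 a small lower bound that avoids division by zero.

    ```latex
    % Standard atmospheric scattering model and its inversion (assumed notation)
    I(x) = J(x)\, t(x) + A \bigl( 1 - t(x) \bigr)
    \qquad\Longrightarrow\qquad
    J(x) = \frac{I(x) - A}{\max\bigl( t(x),\, t_0 \bigr)} + A
    ```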

  • A Describing Method of an Image Processing Software in C for a High-Level Synthesis Considering a Function Chaining

    Akira YAMAWAKI  Seiichi SERIKAWA  

     
    PAPER-Design Methodology and Platform  |  Publicized: 2017/11/17  |  Vol: E101-D No:2  |  Page(s): 324-334

    This paper shows a describing method for image processing software in C for high-level synthesis (HLS) technology that considers function chaining in order to realize efficient hardware. Sophisticated image processing is typically built as a sequence of primitives represented as sub-functions, such as gray scaling, filtering, binarization, and thinning. Conventionally, generic describing methods have been shown for each sub-function so that HLS technology can generate an efficient hardware module. However, few studies have focused on a systematic describing method for the single top function consisting of the chained sub-functions. According to the proposed method, any number of sub-functions can be chained while maintaining the pipeline structure. Thus, the image processing can achieve near-ideal performance of 1 pixel per clock even when the processing chain is long. In addition, deadlock due to a mismatch between the numbers of pushes and pops on the FIFOs connecting the functions is implicitly eliminated, and interpolation of the border pixels is performed. A case study on Canny edge detection, which includes a chain of several sub-functions, demonstrates that our proposal easily realizes the expected hardware described above. Experimental results on a ZYNQ FPGA show that our proposal can be converted into pipelined hardware of moderate size and achieves a performance gain of more than 70 times compared with software execution. Moreover, the C software program restructured according to our proposed method shows a small performance degradation of 8% compared with the pure C software, in a comparative evaluation performed on the Cortex-A9 embedded processor in the ZYNQ FPGA. This indicates that a unified image processing library using HLS software, which can be executed on a CPU or as a hardware module for HW/SW co-design, can be established using our proposed describing method.
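
    A hedged conceptual analogue in Python of the chaining idea only (the paper's method targets C code for HLS with FIFO-connected sub-functions): each stage consumes a pixel stream and yields a pixel stream, so any number of stages can be chained while the whole pipeline still advances one pixel at a time. The stages and values here are toy examples, not the paper's sub-functions.

    ```python
    # Hedged sketch: streaming sub-functions chained like FIFO-connected pipeline stages.
    def gray_scale(pixels):
        for r, g, b in pixels:
            yield (r + g + b) // 3

    def binarize(pixels, threshold=128):
        for p in pixels:
            yield 255 if p >= threshold else 0

    def invert(pixels):
        for p in pixels:
            yield 255 - p

    def chain(source, *stages):
        stream = source
        for stage in stages:
            stream = stage(stream)        # FIFO-like hand-off between stages
        return stream

    # usage: a three-stage chain over a tiny 2x2 "image"
    rgb = [(10, 20, 30), (200, 210, 220), (0, 0, 0), (255, 255, 255)]
    print(list(chain(iter(rgb), gray_scale, binarize, invert)))
    ```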

  • Performance Optimization of Light-Field Applications on GPU

    Yuttakon YUTTAKONKIT  Shinya TAKAMAEDA-YAMAZAKI  Yasuhiko NAKASHIMA  

     
    PAPER-Computer System  |  Publicized: 2016/08/24  |  Vol: E99-D No:12  |  Page(s): 3072-3081

    Light-field image processing has been widely employed in many areas, from mobile devices to manufacturing applications. The fundamental process of extracting the usable information requires significant computation on high-resolution raw image data. A graphics processing unit (GPU) is used to exploit the data parallelism, as in general image processing applications. However, the sparse memory access pattern of these applications reduces the performance of GPU devices for both systematic and algorithmic reasons. We therefore propose an optimization technique that redesigns the memory access pattern of the applications to alleviate the memory bottleneck of the rendering application and to increase data reusability in the depth extraction application. We evaluated our optimized implementations against state-of-the-art algorithm implementations on several GPUs, where all implementations were optimally configured for each specific device. Our proposed optimization increased the performance of the rendering application on a GTX-780 GPU by 30% and of the depth extraction application on GTX-780 and GTX-980 GPUs by 82% and 18%, respectively, compared with the original implementations.

  • Inter-Person Occlusion Handling with Social Interaction for Online Multi-Pedestrian Tracking

    Yuke LI  Weiming SHEN  

     
    PAPER-Image Recognition, Computer Vision  |  Publicized: 2016/09/15  |  Vol: E99-D No:12  |  Page(s): 3165-3171

    Inter-person occlusion handling is a critical issue in the field of tracking, and it has been extensively researched. Several state-of-the-art methods have been proposed, such as those focusing on the appearance of the targets or utilizing knowledge of the scene. In contrast with the approaches proposed in the literature, we propose to address this issue using a social interaction model, which allows us to explore spatio-temporal information pertaining to the targets involved in the occlusion. Our experiments show promising results compared with those obtained using other methods.

  • Adaptive Local Thresholding for Co-Localization Detection in Multi-Channel Fluorescence Microscopic Images

    Eisuke ITO  Yusuke TOMARU  Akira IIZUKA  Hirokazu HIRAI  Tsuyoshi KATO  

     
    LETTER-Biological Engineering  |  Publicized: 2016/07/27  |  Vol: E99-D No:11  |  Page(s): 2851-2855

    Automatic detection of immunoreactive areas in fluorescence microscopic images is becoming a key technique in fields of biology including neuroscience, although it remains challenging for several reasons, such as a low signal-to-noise ratio and contrast variation within an image. In this study, we developed a new algorithm that exhaustively detects co-localized areas in multi-channel fluorescence images, where the shapes of target objects may differ among channels. Different adaptive binarization thresholds are introduced for different local regions in different channels, and the condition of each segment is assessed to recognize the target objects. The proposed method was applied to detect immunoreactive spots labeling membrane receptors on the dendritic spines of mouse cerebellar Purkinje cells. Our method achieved the best detection performance compared with five pre-existing methods.
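
    A hedged sketch of the overall idea only: each channel is binarized with its own block-local threshold, and the channel masks are intersected to keep co-localized areas. The block size and offset are illustrative assumptions, and the paper's per-segment condition assessment is omitted.

    ```python
    # Hedged sketch: per-channel adaptive local thresholds, then mask intersection.
    import numpy as np

    def local_threshold(channel, block=32, offset=0.0):
        """Binarize one channel with a separate mean-based threshold per block."""
        h, w = channel.shape
        mask = np.zeros((h, w), dtype=bool)
        for y in range(0, h, block):
            for x in range(0, w, block):
                region = channel[y:y + block, x:x + block]
                mask[y:y + block, x:x + block] = region > (region.mean() + offset)
        return mask

    def colocalized_mask(channels, block=32):
        """channels: list of 2-D arrays (one per fluorescence channel)."""
        masks = [local_threshold(c, block) for c in channels]
        coloc = masks[0]
        for m in masks[1:]:
            coloc = coloc & m          # keep pixels positive in every channel
        return coloc
    ```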

  • Image Modification Based on a Visual Saliency Map for Guiding Visual Attention

    Hironori TAKIMOTO  Tatsuhiko KOKUI  Hitoshi YAMAUCHI  Mitsuyoshi KISHIHARA  Kensuke OKUBO  

     
    PAPER-Image Recognition, Computer Vision  |  Publicized: 2015/08/13  |  Vol: E98-D No:11  |  Page(s): 1967-1975

    It is commonly believed that, to improve interaction between humans and electronic devices, it is effective to draw the viewer's attention to a particular object. Augmented reality (AR) applications can call attention to real objects by overlaying highlight effects or visual stimuli (such as arrows) on a physical scene. Sometimes, more subtle effects are desirable, in which case it is necessary to guide the user's gaze smoothly and naturally without external stimuli. Here, a novel image modification method is proposed for directing a viewer's gaze to specific regions of interest. The proposed method uses saliency analysis and color modulation to create modified images in which the region of interest is the most salient region in the entire image. The proposed saliency map model used during saliency analysis reduces computational costs and improves the naturalness of the image by using the LAB color space and simplified normalization. During color modulation, the modulation value of each LAB component is determined in consideration of the relationship between the LAB components and the saliency value. With the image obtained in this manner, the viewer's attention is smoothly and very naturally attracted to the specific region. Gaze measurements as well as subjective experiments were conducted to prove the effectiveness of the proposed method. The results show that a viewer's visual attention is indeed attracted toward the specified region without any sense of discomfort or disruption when the proposed method is used.
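
    A hedged, much-simplified sketch of the modulation loop: the Lab components inside the region of interest are pushed away from the global mean color, and those outside are pulled toward it, until the region dominates under a very crude saliency proxy. The proxy, step size, and iteration limit are illustrative assumptions and not the paper's saliency model.

    ```python
    # Hedged sketch: iterative Lab modulation until the ROI is the most salient region.
    import numpy as np

    def crude_saliency(lab):
        mean = lab.reshape(-1, 3).mean(axis=0)
        return np.linalg.norm(lab - mean, axis=2)     # distance from the global mean color

    def guide_attention(lab, roi_mask, step=0.05, max_iter=20):
        """lab: HxWx3 float image in a Lab-like space; roi_mask: HxW bool array."""
        lab = lab.copy()
        for _ in range(max_iter):
            sal = crude_saliency(lab)
            if sal[roi_mask].mean() > sal[~roi_mask].mean():
                break                                  # ROI is already dominant
            mean = lab.reshape(-1, 3).mean(axis=0)
            # push ROI colors away from the mean, pull the rest toward it
            lab[roi_mask] += step * (lab[roi_mask] - mean)
            lab[~roi_mask] -= step * (lab[~roi_mask] - mean)
        return lab
    ```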

  • A New Method of Storing Integral Image for Memory Efficiency Using Modified Block Structure

    Su-hyun LEE  Yong-jin JEONG  

     
    LETTER-Image Processing and Video Processing  |  Publicized: 2015/07/13  |  Vol: E98-D No:10  |  Page(s): 1888-1891

    An integral image stores, at each position, the sum of the input image pixel values above and to the left of that position. It is mainly used to speed up box filter operations, such as Haar-like features. However, the large memory capacity required for integral image data can be an obstacle in an embedded environment with limited hardware. In previous research, [5] reduced the size of the integral image memory using a 2×2 block structure at the cost of additional calculations. That scheme can easily be extended to an n×n block structure for further reduction, but it requires even more additional calculations. In this paper, we propose a new block structure for the integral image that modifies the location of the reference pixel in the block. It results in far fewer additional calculations by reducing the number of memory accesses, while keeping the same amount of memory as the original block structure.
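
    A hedged sketch of the baseline being optimized, not the block-based storage scheme itself: the integral image and the four-access box-filter sum that Haar-like features rely on. Every entry must hold the full image sum, which is why integral image memory grows quickly and motivates block-based storage.

    ```python
    # Hedged sketch: integral image and constant-time box-filter sum.
    import numpy as np

    def integral_image(img):
        """ii[y, x] = sum of img[0:y, 0:x]; zero-padded so indexing stays simple."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def box_sum(ii, y0, x0, y1, x1):
        """Sum of img[y0:y1, x0:x1] using only four memory accesses."""
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    # quick self-check on a 4x4 test image
    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    assert box_sum(integral_image(img), 1, 1, 3, 3) == img[1:3, 1:3].sum()
    ```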

  • Color Image Enhancement in HSI Color Space without Gamut Problem

    Akira TAGUCHI  Yoshikatsu HOSHI  

     
    LETTER-Image  |  Vol: E98-A No:2  |  Page(s): 792-795

    When emphasizing the intensity or saturation component to obtain high-quality color images, it is important to keep the hue component unchanged; thus, perceptual color models such as HSI and HSV have been used. Hue-Saturation-Intensity (HSI) is a common color model, and many color applications are based on it. However, the transformation from the HSI color space back to the RGB color space after processing intensity or saturation in the HSI color space usually causes the gamut problem. In this study, we completely clarify the relationship between the RGB gamut and the HSI gamut. Based on this result, we can check whether a processing result lies inside or outside the RGB gamut without transforming it to the RGB color space. If the processing result is judged to be outside the RGB gamut, we apply an effective hue-preserving correction algorithm, proposed in this study, to the saturation component. Experimental results demonstrate that the proposed algorithm can correct the color distortion caused by the enhancement without reducing the visual effect, and it is especially useful for images with rich colors and locally high component values.
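
    For contrast, a hedged sketch of the naive baseline check that the paper's gamut analysis avoids: convert the processed (H, S, I) triple back to RGB with the standard Gonzalez-Woods formulas (H in degrees, S and I in [0, 1]) and test whether every component stays within [0, 1].

    ```python
    # Hedged sketch: naive RGB-gamut check via HSI -> RGB conversion.
    import math

    def hsi_to_rgb(h, s, i):
        h = h % 360.0
        def sector(hh):
            x = i * (1.0 - s)
            y = i * (1.0 + s * math.cos(math.radians(hh)) / math.cos(math.radians(60.0 - hh)))
            z = 3.0 * i - (x + y)
            return x, y, z
        if h < 120.0:
            b, r, g = sector(h)
        elif h < 240.0:
            r, g, b = sector(h - 120.0)
        else:
            g, b, r = sector(h - 240.0)
        return r, g, b

    def inside_rgb_gamut(h, s, i, eps=1e-9):
        return all(-eps <= c <= 1.0 + eps for c in hsi_to_rgb(h, s, i))

    # example: strong saturation enhancement can push a color outside the gamut
    print(inside_rgb_gamut(30.0, 0.9, 0.8))   # False here, so correction is needed
    ```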

Showing 1-20 of 166 hits