
Open Access
Color Correction Method Considering Hue Information for Dichromats

Shi BAO, Xiaoyan SONG, Xufei ZHUANG, Min LU, Gao LE


Summary:

Images with rich color information are an important source of information that people obtain from the objective world. However, it can be difficult for people with red-green color vision deficiencies to obtain color information from color images. We propose a color correction method for dichromats that considers hue information and is based on the physiological characteristics of dichromats. First, the hue loss of color pairs under normal color vision is defined; an objective function is then constructed on this basis, and the resultant image is obtained by minimizing it. Finally, the effectiveness of the proposed method is verified through comparison tests. People with red-green color vision deficiency fail to distinguish certain red and green colors. When the line connecting a red-green color pair is parallel to the a* axis of CIE L*a*b*, such individuals cannot distinguish the pair, but they can distinguish color pairs whose connecting line is parallel to the b* axis. Therefore, color correction works best when the connecting line of a color pair is parallel to the a* axis. When color correction is performed, the hue loss between the two colors under normal color vision is supplemented in the b* direction so that red-green color vision deficient individuals can distinguish the color difference between the pair. The magnitude of the correction is greatest when the connecting line of the color pair is parallel to the a* axis, and no correction is applied when it is parallel to the b* axis. The objective evaluation results show that the method achieves a higher score, indicating that the proposed method can maintain the naturalness of the image while reducing confusing colors.

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E107-A No.9 pp.1496-1508
Publication Date
2024/09/01
Publicized
2024/04/22
Online ISSN
1745-1337
DOI
10.1587/transfun.2024EAP1026
Type of Manuscript
PAPER
Category
Image

1.  Introduction

With the continuous development of society and the wide popularity of smart devices, digital media technology has kept evolving. Multimedia content such as pictures, videos, and colorful charts is widely used to deliver information on the smartphones, computers, and other smart devices that fill our work and daily lives. Humans acquire information from the objective world through their sense organs (i.e., ears, eyes, mouth, nose, and hands) by listening, seeing, tasting, smelling, and touching. Of the information received from the outside world, that received through vision constitutes the vast majority and largely affects people's cognitive, emotional, and subconscious activities.

Images, with their rich color information, are an important source of information that people obtain from the objective world. The eyes perceive differences in light when they see different colors. The anterior segment of the eye contains the crystalline lens, which focuses images onto the retina; the retina is covered with specialized nerve cells, namely rod and cone cells. Rod cells handle dim light and dark, colorless vision, whereas cone cells handle bright light and color vision. Cone cells are thus responsible for color vision and contain three types of visual pigment. According to their absorption spectra, the stimuli of the three types of pigment cells, which are sensitive to red, green, and blue light, are mixed to form color vision. The three types of cone cells, the L-, M-, and S-cones, are sensitive to long-, medium-, and short-wavelength visible light, respectively. Rod cells contain only one visual pigment, which is extremely sensitive to blue-green light with a peak at 500 nm [1]. When cone and rod cells work together, the eye can recognize a wide range of colors.

The average person has three types of cone cells, each most sensitive to one color (green, red, or blue); hence the term "trichromats". Each type of cell undergoes a different chain reaction at different light wavelengths, resulting in vision. When activated, these three types of cells send information to the nerve center, and the brain combines the signals to produce color vision. The formation of the pigments in the cone cells depends on genes; when these genes are mutated, some or all color vision is lost or altered. When the pigment of one type of cone is absent, the eye cannot distinguish certain colors; this condition is known as color vision deficiency, or acritochromacy. There are three types of dichromacy: protanopia, deuteranopia, and tritanopia. A person with protanopia lacks sensitivity to long-wavelength light, and the affected cone cells make it difficult to distinguish the greens and reds of the spectrum. Deuteranopia affects the same spectral range as protanopia: those with deuteranopia lack cone cells sensitive to medium wavelengths, which makes the red-green confusion more pronounced. Protanopia and deuteranopia, referred to together as red-green color blindness, affect similar groups of people. Tritanopia is rare and indicates a lack of cone cells sensitive to short-wavelength light. In some people, the cone cells contain no pigment at all, and the eye cannot recognize any color; this condition is known as monochromatism, a very rare and severe form of color vision disorder.

A person with normal color vision can see over a million shades and tints, while a person with protanopia or deuteranopia can see only about 10% of them. For people with color vision deficiencies, colors can seem muted and confusing, and some colors can be difficult to distinguish. The detection rate of acritochromacy among Chinese people is about 3.14%, with rates of 4.71% \(\pm\) 0.074% for males and 0.67% \(\pm\) 0.036% for females; the frequency of color blindness gene carriers is 8.98%. The rate of acritochromacy in Japan is about 4%-5% for males and 0.5% for females; in Europe and the United States, it is about 8% for males and 0.4% for females. With the progress of society and the development of science and technology, the division of labor has become increasingly specialized, and the color discrimination requirements of some professions have increased. However, many people are currently unaware of this problem and cannot judge whether they have abnormal color vision. Therefore, color vision screening is medically essential. There are various screening tools for color vision, such as the Nagel color vision screening goggles, the FM-100 hue test, and the Ishihara pseudoisochromatic plates. In the Ishihara test, each plate is covered with dots of various colors and sizes, some of which form numbers in color combinations that are easily confused by people with color vision impairment. The dots that form the numbers or shapes are easily recognizable to people with normal color vision but are invisible or difficult to see for people with color vision disorders. Since its inception, the Ishihara test has been widely used for its applicability and high accuracy.

In recent years, many researchers have proposed different color vision models based on the three-channel vision model to simulate color perception in dichromats [2], [3]. Brettel [4] and Viénot [5] proposed simulation models of protanopia and deuteranopia that construct a color perception plane for dichromats in the LMS space and quantify the range of color information visible to them. Martin [6] proposed a model of the color-blind and normal visual systems, elaborating on the reasons for abnormal color perception by comparing the color perception abilities of normal trichromats and dichromats. Machado et al. [7] proposed a physiologically-based model that simulates color perception by shifting the spectral sensitivity curves of the L or M cone cells, where the degree of curve shift simulates color perception in people with different degrees of color vision abnormality. These models differ in approach but achieve similar results.

This study used the color vision deficiency simulation model proposed by Viénot [5] to simulate red-green color vision deficiency, and proposes a color correction method for dichromats that considers hue information. For protanopia and deuteranopia, the loss of color information arises mainly along the a* axis, while little information is lost along the b* axis. Based on these characteristics, when the line connecting a color pair is parallel to the a* axis of CIE L*a*b* [8], the two colors are color-corrected, and the hue loss between the two colors under normal color vision is supplemented in the b* direction. The correction is greatest when the connecting line of the color pair is parallel to the a* axis, whereas no color correction is applied when it is parallel to the b* axis. Color correction can thus strike a balance between contrast enhancement and naturalness maintenance, increasing the color discrimination of red-green color vision deficient individuals while keeping the image natural for trichromats. In this paper, when the color of the output image is closer to that of the input image (i.e., the average color difference is smaller), the naturalness of the output image is said to be better.

The remainder of the paper is organized as follows: Section 2 describes related work on color vision detection. Section 3 discusses the proposed method, while Sect. 4 verifies the validity of the proposed method. Finally, Sect. 5 concludes the paper.

2.  Related Work

Humans have the ability to perceive color: they can perceive the frequency of light reflected from the surface of an object. However, color vision deficiency is common, and although the problem is not usually fatal, it can cause inconvenience to a large portion of the population. Color vision defects have two main causes: natural genetic factors and nerve or brain damage. Since most cases are genetically determined, a cure would require gene therapy. Currently, no such gene therapy is available in China, and treatment can only alleviate the inconvenience of people with color vision defects from the perspective of color blindness correction. With the emergence of modern digital display devices, digital image processing techniques that recolor or color-correct images to compensate for color vision deficiencies have attracted much attention.

Several recoloring algorithms are used in web, software, and mobile applications [9], [10]. Most methods begin by changing the image hue to the correct color [11]-[15]. Huang et al. [16]-[18] proposed a method that transfers information from the defective axis to a normal one: it reduces the difference between the color-corrected and original images by introducing an error function, and transfers the information on the defective a* axis to the b* axis by a rotation operation, thereby reducing the loss of image content. In subsequent research, they proposed an improved recoloring algorithm that sets key colors of the image, measures the contrast between two key colors by calculating the Kullback-Leibler divergence, and interpolates colors according to the corresponding mapping to ensure the smoothness of local colors in the recolored image. Kuhn et al. [19] proposed a mass-spring system that enhances the color contrast of red-green defects by setting the mass of each particle on the spring according to the original color and the perceived difference, so as to preserve the naturalness of the original colors. However, this method cannot effectively enhance images that span almost the entire chromatic plane. In 2015, Milic et al. [20] proposed a color correction method based on confusion lines, which defines the remapping range of the center color and avoids creating new confusing colors for the center color; however, it cannot effectively avoid creating new confusing colors for colors other than the center color. Takimoto et al. [21] proposed a saturation-based color correction method, which converts a color image into a grayscale map, preprocesses the grayscale image, and finally color-modifies sets of pixels with similar colors to achieve color correction. In 2016, Tennenholtz et al. [22] proposed a similarity-based natural contrast enhancement technique, in which the similarity between different regions of the image is measured from the variance difference between pixels in the original and simulated images to identify confusing regions, and only some regions are changed to enhance color contrast. Hassan and Paramesran [23] proposed a correction method that maintains naturalness for red-green color vision deficient individuals. Its advantage is that the corrected confusing regions keep the same hue as the original colors; in addition, the recolored image has the same luminance as the original image. In 2019, Hassan [24] extended this work with a flexible color contrast enhancement method targeting red-green deficiencies that sets a dynamic threshold. A contrast parameter was also introduced to exaggerate the blue stimulus of the recolored image, but the method preserved naturalness less well than its predecessor. Zhu et al. [25] proposed a novel adaptive recoloring algorithm that recolors the image by minimizing an objective function constrained by contrast enhancement and naturalness preservation.

In addition to changing the hue of an image, its luminance can also be changed for color correction purposes. In 2010, Tanaka et al. [26] proposed an effective luminance modification method, formulated as an optimization problem defined by the color differences of the input image with respect to the luminance component. Later, Suetake et al. [27] performed luminance modification around the contours of objects by exploiting the Craik-O'Brien effect; however, the amount of luminance modification was insufficient for hard-to-distinguish parts, leaving room for improvement. In 2016, Bao et al. [28] improved this method by using the a*-component instead of the X-component and redefining the weight portion of the luminance alteration to achieve color blindness correction. In 2019, Meng et al. [29], [30] proposed a luminance correction method based on a minimization problem, which modifies the luminance values of the output image by considering the color differences of the input image, so as to preserve visual details and output images with natural colors.

3.  Proposed Method

In this study, we focused on the common types of color blindness: red and green. The method described by Viénot [5] was used to simulate color blindness. In the following, N and K denote normal trichromats and dichromats, respectively, and protanopia (red blindness) and deuteranopia (green blindness) are abbreviated as P and D, respectively.

The method used in this study transfers color information from the defective cones to the normally functioning ones, using the CIE L*a*b* color space as the working domain. For red-green color vision defects, a strong correlation exists between the original colors of the image and the lightness L* and b* axis information of the simulated colors perceived under the color vision defect, whereas the correlation with the a* axis information is weak. To reduce the information loss on the a* axis, the b* component of the image is moderately altered so that the a* information is reflected onto the b* axis of the CIE L*a*b* color space. The method modifies only the input image's b* values; the L* and a* values remain unchanged.
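To make this concrete, the minimal sketch below (our illustration, assuming NumPy and scikit-image are available; `optimize_b` is a hypothetical stand-in for the solver of Sect. 3.2) shows the pipeline this section implies: convert to CIE L*a*b*, modify only b*, and convert back.

```python
import numpy as np
from skimage import color

def correct_image(rgb, optimize_b):
    """Pipeline implied by Sect. 3: only b* is modified, L* and a* are kept.

    rgb        : float RGB image in [0, 1], shape (H, W, 3)
    optimize_b : hypothetical solver returning a new b* channel
                 (e.g., a minimizer of Eq. (1); see Sect. 3.2)
    """
    lab = color.rgb2lab(rgb)                  # CIE L*a*b* working domain
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    lab_out = np.stack([L, a, optimize_b(a, b)], axis=-1)
    return color.lab2rgb(lab_out)             # crude gamut handling by clipping
```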

3.1  Objective Function

This method uses the CIE L*a*b* color space as the working domain, and the objective function is defined as

\[\begin{equation*} E({\textit{f}}) = \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^n {{{\left[ {\left( {{f_i} - {f_j}} \right) - {\delta _{ij}}} \right]}^2}} }. \tag{1} \end{equation*}\]

Among them,

\[\begin{eqnarray*} &&\!\!\!\!\! {\delta _{ij}} = \Delta b_{ij}^* + \alpha \cos \left[ {\Phi ({{\vec C}_{ij}} \bullet {{\vec \nu }_\theta})} \right], \tag{2} \\ &&\!\!\!\!\! \vec{C_{ij}}=\left ( {\varDelta a}_{ij}^{*},{\varDelta b}_{ij}^{*}\right ), \tag{3} \\ &&\!\!\!\!\! {{\vec \nu }_\theta} = \left( {\cos{\theta} ,\sin{\theta} } \right), \tag{4} \\ &&\!\!\!\!\! \Delta a_{ij}^* = a_i^* - a_j^*, \tag{5} \\ &&\!\!\!\!\! \Delta b_{ij}^* = b_i^* - b_j^*, \tag{6} \end{eqnarray*}\]

where the vector \({\textit{f}} = \{ {f_1},{f_2}, \ldots ,{f_n}\}\) represents the output image, \(n\) is the number of pixels in the input image, and \({f_i}\) and \({f_j}\) are the \({b^{*}}\) component values of the \(i\)th and \(j\)th pixels of the output image. \({a^{*}}\) represents the component from green to red, and \({b^{*}}\) the component from blue to yellow; \({a}_{i}^{*}\) and \({b}_{i}^{*}\) respectively represent the a* and b* components of pixel \(i\) in the input image. \({{\vec C}_{ij}}\) is a vector in the \({a^{*}}{b^{*}}\) plane; \({{\vec \nu }_\theta}\) is a unit vector in the a*b* plane, whose angle \(\theta\) is set to 0\(^\circ\) in this study. \(\Phi \left ( {{\vec C}_{ij}}\cdot {{\vec \nu }_\theta}\right )\) is the angle between the vectors \({{\vec C}_{ij}}\) and \({{\vec \nu }_\theta}\) in the a*b* plane. \(\alpha\) is a real-valued parameter.

The main goal of the proposed method is to maintain color-pair differences while keeping the color-corrected resultant image as natural as possible. As shown in Fig. 1, the difference along the a* axis between pixels i and j of the input image is large, while the difference in their b* components is small. Protanopia and deuteranopia mainly lose information along the a* axis, which leaves red-green color-blind individuals unable to distinguish such color pairs. In this study, a panning (translation) operation on the a*b* chromatic plane converts these colors into new colors; \(i'\) and \(j'\) are the pixel points after the panning operation on \(i\) and \(j\), respectively. The distance of this color pair along the b* axis is thereby increased, improving the color discrimination ability of people with chromatic deficiencies. \(\alpha \cos \left [ \Phi \left ( {{\vec C}_{ij}}\cdot {{\vec \nu }_\theta}\right )\right ]\) is the correction amount, as shown in Fig. 2. Since people with red-green color vision defects can distinguish color differences along the b* axis, no color change is necessary when the line connecting a color pair is parallel to the b* axis; the color correction is greatest when it is parallel to the a* axis, i.e., when \(\Phi \left ( {{\vec C}_{ij}}\cdot {{\vec \nu }_\theta}\right )\) is close to zero. The unit vector \({{\vec \nu }_\theta}\) determines whether the corrected color is modified in the positive or negative direction of b*.

Fig. 1  Schematic diagram of color correction for pixels i and j on the a*b* plane.

Fig. 2  Cosine function graph.

The pixels i and j in Fig. 1 are corrected by \({\delta _{ij}}\) in Eq. (2). From the positions of pixels i and j, \(\Delta b_{ij}^ *\) in Eq. (2) is positive; at the same time, the angle between \(\vec{C_{ij}}\) and \({{\vec \nu }_\theta}\) is less than 90\(^\circ\), so, as seen from Fig. 2, \(\cos \left [ \Phi \left ( {{\vec C}_{ij}}\cdot {{\vec \nu }_\theta}\right )\right ]\) is also positive. Therefore, pixel i's \(b_i^ *\) is corrected in the \(+b^*\) direction, to position i', and pixel j's \(b_j^ *\) is corrected in the \(-b^*\) direction, to position j'. After this correction, the b* difference of the color pair \(\left ( i,j\right )\), which was difficult to distinguish because of the color vision deficiency, is enlarged, enabling people with color vision deficiency to distinguish the corrected color pair \(\left ( i',j'\right )\).
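As a small illustration, the following sketch (ours, transcribing Eqs. (2)-(6); not the authors' code) computes \(\delta_{ij}\) for a single pixel pair:

```python
import numpy as np

def delta_ij(a_i, b_i, a_j, b_j, alpha=40.0, theta_deg=0.0):
    """Target difference delta_ij of Eq. (2) for one pixel pair (i, j)."""
    d_a, d_b = a_i - a_j, b_i - b_j                 # Eqs. (5) and (6)
    c_ij = np.array([d_a, d_b])                     # Eq. (3): vector in the a*b* plane
    theta = np.deg2rad(theta_deg)
    v_theta = np.array([np.cos(theta), np.sin(theta)])  # Eq. (4): unit vector
    norm = np.linalg.norm(c_ij)
    if norm == 0.0:                                 # identical chromaticities: nothing to correct
        return 0.0
    cos_phi = np.dot(c_ij, v_theta) / norm          # cosine of the angle between them
    return d_b + alpha * cos_phi                    # Eq. (2)
```

A pair whose connecting line is parallel to the a* axis gives \(\cos\Phi \approx \pm 1\) and receives the full \(\pm\alpha\) correction on top of its existing b* difference, whereas a pair parallel to the b* axis gives \(\cos\Phi \approx 0\) and keeps \(\delta_{ij} = \Delta b^*_{ij}\), i.e., no extra correction, matching Fig. 2.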

3.2  Minimization of the Objective Function

The resulting image was obtained by optimizing the objective function as follows:

\[\begin{equation*} \tilde{{\textit{f}}} = \arg \mathop {\min }\limits_{{f_i} \in \mathbb{R} } E({\textit{f}}){\kern 1pt}, \tag{7} \end{equation*}\]

where \(\tilde{{\textit{f}}}\) is the output image and \(\mathbb{R}\) is the set of real numbers. Minimizing the objective function chooses the values that optimally change the b*-component of the color image and yields the optimal output image.

The minimization problem of Eqs. (1)-(6) is solved by the conjugate gradient method; solving it is equivalent to solving the following simultaneous equations:

\[\begin{equation*} {A}_{(n)}\boldsymbol{x}=\boldsymbol{b}. \tag{8} \end{equation*}\]

Among them,

\[\begin{eqnarray*} &&\!\!\!\!\! {A}_{(n)}=n{I}_{(n)}-{J}_{(n)}, \tag{9} \\ &&\!\!\!\!\! \boldsymbol{x}={\left ({f}_{1},{f}_{2},\ldots,{f}_{n} \right )}^{\mathrm{T}}, \tag{10} \\ &&\!\!\!\!\! \boldsymbol{b}={\left (\displaystyle\sum_{j=1}^{n}{\delta }_{1j},\displaystyle\sum_{j=1}^{n}{\delta }_{2j},\ldots,\displaystyle\sum_{j=1}^{n}{\delta }_{nj} \right )}^{\mathrm{T}}. \tag{11} \end{eqnarray*}\]

Among them, \(\textit{I}_{(n)}\) is the \(n \times n\) identity matrix and \(\textit{J}_{(n)}\) is the \(n \times n\) matrix whose elements are all 1. From Eq. (9) and Eq. (11), the relationship between \({A}_{(n)}\) and \(\boldsymbol{b}\) is obtained as follows:

\[\begin{eqnarray*} &&\!\!\!\!\! {A}_{\left ( n\right )}^{2}=n{A}_{\left(n\right)} \tag{12} \\ &&\!\!\!\!\! {A}_{(n)}\boldsymbol{b}=n\boldsymbol{b}. \tag{13} \end{eqnarray*}\]

Using the relationships in Eq. (12) and Eq. (13), the conjugate gradient method yields an analytical solution.
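In fact, a closed form can be read off directly: \(\delta_{ij}\) is antisymmetric, so the entries of \(\boldsymbol{b}\) sum to zero, \({J}_{(n)}\boldsymbol{b} = \boldsymbol{0}\), and \(\boldsymbol{x} = \boldsymbol{b}/n\) satisfies Eq. (8); since \({A}_{(n)}\) annihilates constant vectors, the solution is unique only up to an additive constant. The sketch below illustrates this reading (our illustration, not the authors' implementation; fixing the free constant by preserving the input's mean b* is an assumption, and the dense \(n \times n\) arrays are only practical for small images):

```python
import numpy as np

def solve_b_channel(a, b, alpha=40.0):
    """Closed-form solution of Eq. (8): f = (1/n) * sum_j delta_ij + const.

    a, b : flattened a* and b* channels of the input image, shape (n,)
    """
    n = a.size
    d_a = a[:, None] - a[None, :]                  # Delta a*_ij, Eq. (5)
    d_b = b[:, None] - b[None, :]                  # Delta b*_ij, Eq. (6)
    norm = np.hypot(d_a, d_b)
    cos_phi = np.divide(d_a, norm, out=np.zeros_like(d_a), where=norm > 0)
    delta = d_b + alpha * cos_phi                  # Eq. (2) with theta = 0
    f = delta.sum(axis=1) / n                      # x = b/n solves Eq. (8)
    return f + b.mean()                            # fix the additive constant
```

Written out, this gives \(f_i = b^*_i + (\alpha/n)\sum_j \cos[\Phi(\vec{C}_{ij} \cdot \vec{\nu}_\theta)]\): each pixel keeps its original b* plus its average cosine correction against all other pixels.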

Our workspace is the CIE L*a*b* color space; however, the display gamut within it is irregularly shaped, making it necessary to correct the gamut before outputting the resultant image so that the output colors lie within it. We used the method of Tanaka et al. [31], which re-defines the pixel values while considering the limits of the color range of the target color space and the differences among color models, to achieve gamut correction.
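The specifics of [31] are not reproduced here; as a crude stand-in, out-of-gamut pixels can at least be detected by a round trip through sRGB (a sketch, assuming scikit-image, whose conversion clips to the displayable range):

```python
import numpy as np
from skimage import color

def out_of_gamut_mask(lab):
    """Flag pixels whose L*a*b* values fall outside the sRGB gamut."""
    rgb = color.lab2rgb(lab)                  # conversion clips to [0, 1]
    lab_back = color.rgb2lab(rgb)             # round trip back to L*a*b*
    return np.linalg.norm(lab - lab_back, axis=-1) > 1e-3  # changed => was out of gamut
```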

4.  Experimental Methods

The experimental images used in this paper, six in total and comprising artificial and natural images, are shown in Fig. 3.

Fig. 3  Experimental images: (a) Chart 5, (b) Chart 6, (c) Chart 97, (d) flower, (e) line, and (f) Nanten.

4.1  Parameter Setting

The parameter \(\alpha\) in the proposed method controls the amount of modification applied to the b*-value of the color image. If \(\alpha\) is too small, the contrast enhancement of the recolored image is insufficient, and people with color vision deficiency remain unable to distinguish the confusing areas; if \(\alpha\) is too large, the color-corrected image is excessively altered, so trichromats cannot receive the original information of the image, and it loses naturalness and detail. The resultant images in Fig. 4 correspond to six different parameter values. Column (a) shows the original image and the corresponding simulated images; columns (b)-(g) are the resultant images for \(\alpha\) = 10, 20, 30, 40, 50, and 60, respectively. The first row shows the normal image, the second row the image as seen by protanopes, and the third row the image as seen by deuteranopes. As \(\alpha\) increases, the colors of the original image gradually change, and the numbers in the corresponding simulated images become clearer. For \(\alpha\) from 10 to 40, although the optimized image subjectively looks very similar in color to the original, the numbers in the simulated image are not very obvious; at \(\alpha\) = 50 and 60, the numbers in the simulated image are clearer than at \(\alpha\) = 40, but the colors of the corrected image shift from the original green toward blue. Therefore, to ensure the naturalness of the color-corrected image for trichromatic viewers, we chose \(\alpha\) = 40 as the experimental setting. At this value the colors of the corrected image change, but the blue stimulus is small and relatively easy to accept; in the simulated image, i.e., the image viewed by dichromatic viewers, the contrast is enhanced, the originally confusing figures become clearly visible, and a certain naturalness is maintained.

Fig. 4  Resultant images for different α values for Chart 97: (a) original images and corresponding simulated images, (b) α = 10, (c) α = 20, (d) α = 30, (e) α = 40, (f) α = 50, (g) α = 60.

4.2  Comparative Experiments

This study used comparative tests to verify the validity of the proposed method. The methods described by Milic et al. [20], Takimoto et al. [21], Tennenholtz et al. [22], Hassan and Paramesran [23], Hassan [24], and Meng et al. [29], [30] were used for comparison. The parameter settings for the different methods are listed in Table 1. The method of Hassan and Paramesran [23] has no parameters and is therefore not listed.

Table 1  Parameter setting of each method.

This study used the evaluation metrics proposed by Tanaka et al. [26]. The effectiveness of the resultant images was evaluated using four quantitative metrics: contrast \({V_\text{K}}\), average color difference \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\), average lightness difference \({{e}_{{L}^{*}}}\), and average saturation difference \({{e}_{{a}^{*}{b}^{*}}}\). The latter three are defined as follows:

\[\begin{eqnarray*} &&\!\!\!\!\! {{e}_{{L}^{*}{a}^{*}{b}^{*}}} = \frac{1}{n}\displaystyle\sum_{i=1}^{n}\varDelta {E}_{i}, \tag{14} \\ &&\!\!\!\!\!\hskip-3mm \varDelta {E}_{i}=\sqrt{{({L}_{i}^{*\text{out}}-{L}_{i}^{*\text{in}})}^{2}+{({a}_{i}^{*\text{out}}-{a}_{i}^{*\text{in}})}^{2}+{({b}_{i}^{*\text{out}}-{b}_{i}^{*\text{in}})}^{2}}, \tag{15} \\ &&\!\!\!\!\! {{e}_{{L}^{*}}} = \frac{1}{n}\displaystyle\sum_{i=1}^{n}\left | {L}_{i}^{*\text{out}}-{L}_{i}^{*\text{in}}\right |, \tag{16} \\ &&\!\!\!\!\! {{e}_{{a}^{*}{b}^{*}}} = \frac{1}{n}\displaystyle\sum_{i=1}^{n}\sqrt{{\left ( {a}_{i}^{*\text{out}}-{a}_{i}^{*\text{in}}\right )}^{2}+{\left ( {b}_{i}^{*\text{out}}-{b}_{i}^{*\text{in}}\right )}^{2}}. \tag{17} \end{eqnarray*}\]

In Eq. (14), \(\varDelta {E}_{i}\), given by Eq. (15), is the color difference between the input and output images at the ith pixel, where \({L}_{i}^{*\text{out}}\) and \({L}_{i}^{*\text{in}}\) represent the lightness components of the output and input images, respectively.
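A direct transcription of Eqs. (14)-(17) as printed (our sketch, assuming per-pixel L*a*b* arrays; note that Sect. 4.2 describes \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\) as using the CIEDE2000 formula, whereas Eq. (15) is the Euclidean color difference used below):

```python
import numpy as np

def dissimilarity_metrics(lab_in, lab_out):
    """Average color, lightness, and saturation differences, Eqs. (14)-(17).

    lab_in, lab_out : arrays of shape (n, 3) holding (L*, a*, b*) per pixel
    """
    d = lab_out - lab_in
    e_lab = np.sqrt((d ** 2).sum(axis=1)).mean()         # Eqs. (14)-(15)
    e_l = np.abs(d[:, 0]).mean()                         # Eq. (16)
    e_ab = np.sqrt(d[:, 1] ** 2 + d[:, 2] ** 2).mean()   # Eq. (17)
    return e_lab, e_l, e_ab
```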

\({V_\text{K}}\) is a quantitative contrast evaluation index based on the visual characteristics of the human eye; it measures the degree of improvement in the ability to discriminate confusing colors under K-type color vision, where K is either P- or D-type:

\[\begin{eqnarray*} &&\!\!\!\!\! {V}_\text{K}\left ( \lambda \right )=\frac{{S}_{\text{K}}^{\text{out}}\left ( \lambda\right )}{{S}_\text{K}^{\text{in}}\left ( \lambda\right )}, \tag{18} \\ &&\!\!\!\!\! {S}_\text{K}^{\text{out}}\left ( \lambda\right )=\frac{1}{{N}_{\text{K},\lambda}}\displaystyle\sum_{\left ( i,j\right )\in {\sigma }_{\text{K},\lambda }}^{}\left | \varDelta {E}_{\text{K},ij}^{\text{out}}-\varDelta{E}_{\text{K},ij}^{\text{in}}\right |, \tag{19} \\ &&\!\!\!\!\! {S}_\text{K}^{\text{in}}\left ( \lambda\right )=\frac{1}{{N}_{\text{K},\lambda}}\displaystyle\sum_{\left ( i,j\right )\in {\sigma }_{\text{K},\lambda }}^{}\left | \varDelta {E}_{\text{K},ij}^{\text{in}}-\varDelta {E}_{ij}^{\text{in}}\right |. \tag{20} \end{eqnarray*}\]

Among them, \({N}_{\text{K},\lambda}\) is the number of pixel pairs in the set \({\sigma }_{\text{K},\lambda }\), i.e., the pairs whose ratio of K-type to standard color vision contrast is less than or equal to \(\lambda\). \({S}_\text{K}\left ( \lambda\right )\) is the average difference between the contrasts of such color (pixel) pairs under K-type color vision and under standard color vision; thus, the contrast of colors for K-type color vision becomes similar to that for standard color vision when \({S}_\text{K}\) is close to 0.

When \({V_\text{K}}\) is close to 0, confusing colors are minimized, i.e., the color contrast of the resultant image under K-type vision is similar to that of the original image; when \({V_\text{K} =}\) 1, the color contrast of the resultant image is no better than that of the original image; when \({V_\text{K} >}\) 1, the color contrast of the resultant image is worse than that of the input image, indicating a possible loss of color information in the resultant image.
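For completeness, the sketch below shows one way \({V_\text{K}}(\lambda)\) could be estimated over sampled pixel pairs, faithful to Eqs. (18)-(20) as printed. The membership test for \({\sigma }_{\text{K},\lambda }\) (pairs whose K-vision contrast is at most \(\lambda\) times their normal-vision contrast) is our reading of "contrast ratio", and `simulate` is a hypothetical stand-in for the dichromacy model of [5]:

```python
import numpy as np

def v_k(lab_in, lab_out, simulate, lam, pairs):
    """Estimate V_K(lam) of Eq. (18) from a sample of pixel pairs.

    lab_in, lab_out : (n, 3) L*a*b* arrays of the input and resultant images
    simulate        : hypothetical K-type dichromacy simulation on L*a*b*
    pairs           : (m, 2) integer array of sampled pixel index pairs
    """
    k_in, k_out = simulate(lab_in), simulate(lab_out)
    i, j = pairs[:, 0], pairs[:, 1]
    de_in = np.linalg.norm(lab_in[i] - lab_in[j], axis=1)   # normal-vision input contrast
    de_k_in = np.linalg.norm(k_in[i] - k_in[j], axis=1)     # K-vision input contrast
    de_k_out = np.linalg.norm(k_out[i] - k_out[j], axis=1)  # K-vision output contrast
    sel = de_k_in <= lam * de_in                 # our reading of sigma_{K,lambda}
    s_out = np.abs(de_k_out[sel] - de_k_in[sel]).mean()     # Eq. (19) as printed
    s_in = np.abs(de_k_in[sel] - de_in[sel]).mean()         # Eq. (20)
    return s_out / s_in                                     # Eq. (18)
```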

The \({V_\text{K}}\) values for the different methods are shown in Figs. 5 and 6. \(\lambda\) is varied from 0.1 to 1 and weighs maintaining the color contrast of the original image against improving the K-type chromatic contrast. When \(\lambda\) is small, the color correction range considers only colors that are very similar; as \(\lambda\) increases, the range of considered colors expands to include some colors that K-type color vision can already distinguish; therefore, \({V_\text{K}}\) increases with \(\lambda\). The method proposed in this paper has the smallest average value for every \(\lambda\), which supports its validity. For the average \({V_\text{K}}\) under P-type color vision, our method gives the lowest value; for the average \({V_\text{K}}\) under D-type color vision, although our method does not tailor the correction to different color senses, its performance is also optimal. Thus, for both types of color deficiency, the proposed method better reduces confusing colors and increases discrimination.

Fig. 5  \({V_\text{K}}\) values of P-type color vision methods.

Fig. 6  \({V_\text{K}}\) values of D-type color vision methods.

The average color difference \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\) is the evaluation index of the degree of color change, computed using the CIEDE2000 color difference formula. In this paper, when the color of the output image is closer to that of the input image (i.e., the average color difference is smaller), the naturalness of the output image is said to be better. The average saturation difference \({{e}_{{a}^{*}{b}^{*}}}\) measures the degree of color change of the resultant image relative to the original image; the smaller the value, the less the method changes the color information, i.e., the better it maintains the characteristic information of the original image. The average lightness difference \({{e}_{{L}^{*}}}\) reflects the magnitude of the lightness change in the resultant image.

Tables 2 to 4 show the \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\), \({{e}_{{a}^{*}{b}^{*}}}\), and \({{e}_{{L}^{*}}}\) values, respectively. All three metrics are dissimilarity measures between the original and color-corrected resultant images. Tables 2 and 3 show that the \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\) and \({{e}_{{a}^{*}{b}^{*}}}\) values of the proposed method are relatively low. They may be inferior to those of Meng's method, but this is because Meng's method mainly changes the luminance component of the image, which yields a small change in the color components and hence a very small \({{e}_{{a}^{*}{b}^{*}}}\) value, while also producing a large \({{e}_{{L}^{*}}}\) value. Compared with the other hue-modifying methods, our method has the lowest \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\) value and the second lowest \({{e}_{{a}^{*}{b}^{*}}}\) value, so the output image color is relatively close to the original; this further indicates that the proposed method is effective in maintaining naturalness. Table 4 shows that the \({{e}_{{L}^{*}}}\) index of the proposed method is higher than that of the Hassan method but generally better than those of the other methods, especially Meng's. Taken together, the \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\) values of the proposed method are smaller than those of the other hue modification methods despite the modification of the b* value, indicating that the color change of the resultant image relative to the original is relatively small and does not hinder trichromatic viewers from acquiring the image information. From the perspective of the three dissimilarity metrics \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\), \({{e}_{{a}^{*}{b}^{*}}}\), and \({{e}_{{L}^{*}}}\), the proposed method performs well overall in the comparison experiments, which further illustrates its validity.

Table 2  \({{e}_{{L}^{*}{a}^{*}{b}^{*}}}\) values of various methods under K-type color vision.

Table 3  \({{e}_{{a}^{*}{b}^{*}}}\) values of various methods under K-type color vision.

Table 4  \({{e}_{{L}^{*}}}\) values of various methods under K-type color vision.

The results of the comparison tests, simulated for people with red and green color vision deficiency, are shown in Figs. 7 and 8. In Fig. 7, although the method of Tennenholtz et al. performs moderately well in reducing confusing colors for red color vision deficient individuals, the output image contains artifacts; moreover, because some pixel values fall outside the color gamut, black areas appear in the image, which is obvious in the natural image, so image information is lost. Hassan et al.'s method enlarges the image's blue stimulus, which reduces the naturalness of the image as viewed by people with color vision deficiency. The methods of Milic, Takimoto, and Meng et al. show insufficient contrast enhancement, and the restorative effect of the color correction is not obvious. For green blindness, Milic et al.'s method outputs a resultant image with higher lightness and a larger blue stimulus than its simulated result for red blindness, contributing to artifacts along image edges and an edge-blurring problem.

Fig. 7  Simulated images of P-type color perception methods: (a) Milic, (b) Takimoto, (c) Tennenholtz, (d) Hassan 2017, (e) Hassan 2019, (f) Meng, and (g) proposed.

Fig. 8  Simulated images of D-type color perception methods: (a) Milic, (b) Takimoto, (c) Tennenholtz, (d) Hassan 2017, (e) Hassan 2019, (f) Meng, and (g) proposed.

Figures 9 and 10 show the recolored images of each method under P-type and D-type color vision, respectively, including the results of correcting the natural image "Nanten" with the different methods. Milic's D-type image changes the color of the "branch" part drastically, which exaggerates the blue stimulus; the methods of Takimoto and Meng et al. brighten the "fruit" part of the image too much, which blurs its boundary. Hassan's method in [23] shows insufficient contrast enhancement and low color discrimination in the protanopia and deuteranopia simulations, and his method in [24], by changing the red parts to blue and purple, alters the original meaning of the image and significantly reduces its naturalness. In contrast, our proposed method achieves a saturation balance for both artificial and natural images and distinguishes the colors and areas that were originally prone to confusion without losing image details. The proposed method outperforms the other methods in terms of contrast enhancement and naturalness preservation, effectively reduces confusing colors, and improves the perceptual ability of people with color vision deficiencies.

Fig. 9  Recolored images of the P-type color perception methods: (a) original, (b) Milic, (c) Takimoto, (d) Tennenholtz, (e) Hassan 2017, (f) Hassan 2019, (g) Meng, and (h) proposed.

Fig. 10  Recolored images of the D-type color perception methods: (a) original, (b) Milic, (c) Takimoto, (d) Tennenholtz, (e) Hassan 2017, (f) Hassan 2019, (g) Meng, and (h) proposed.

5.  Conclusion

This paper proposes a hue-based color correction method for dichromats, mainly targeting red-green color vision deficiency. First, the hue loss of color pairs under normal color vision was defined, an objective function was constructed from it, and the resulting image was obtained by optimizing that function. The effectiveness of the proposed method was then illustrated through comparative tests and evaluation indices. The method achieved higher scores, maintained the naturalness of the image, and effectively reduced confusing colors, helping dichromatic viewers improve their color perception of images and reducing the loss of image information.

However, the current method has some limitations. In some situations, \(\delta _{ij}\) may reduce the difference along the b* axis; our next goal is to make the sign of the correction term in \(\delta _{ij}\) agree with that of \(\Delta b_{ij}^*\). Further improvements will focus on personalized color correction algorithms for the different physiological characteristics of protanopia and deuteranopia, as well as on speeding up the algorithm. In addition, generalized metrics capable of evaluating the quality of the resultant images in multiple ways will be considered.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (62066035), the Inner Mongolia Natural Science Foundation (2022LHMS06004, 2022MS06013, 2023MS06021), the Fundamental Research Fund Project for Universities Directly Under the Autonomous Region (JY20230110, JY20220089, JY20230065), the Support Program for Young Scientific and Technological Talents in Inner Mongolia Colleges and Universities (NJYT23059), the Inner Mongolia Science and Technology Program (2020GG0104, 2021GG0140), and the Research Program of Science and Technology at Universities of Inner Mongolia Autonomous Region (NJZZ22251).

References

[1] S.S. Deeb, “The molecular basis of variation in human color vision,” Clinical Genetics, vol.67, no.5, pp.369-377, 2010.
CrossRef

[2] S. Nakauchi and S. Usui, “Multilayered neural network models for color blindness,” Proc. 1991 IEEE International Joint Conference on Neural Networks, IEEE, pp.473-478, 1991.
CrossRef

[3] T. Wachtler, U. Dohrmann, and R. Hertel, “Modeling color percepts of dichromats,” Vision Research, vol.44, no.24, pp.2843-2855, 2004.
CrossRef

[4] H. Brettel, F. Viénot, and J.D. Mollon, “Computerized simulation of color appearance for dichromats,” J. Opt. Soc. Am. A, vol.14, no.10, pp.2647-2655, 1997.
CrossRef

[5] F. Viénot, H. Brettel, and J.D. Mollon, “Digital video colourmaps for checking the legibility of displays by dichromats,” Color Research & Application, vol.24, no.4, pp.243-252, 1999.
CrossRef

[6] C.E. Martin, J.O. Keller, S.K. Rogers, and M. Kabrisky, “Color blindness and a color human visual system model,” IEEE Trans. Syst. Man Cybern. A, Syst., Humans, vol.30, no.4, pp.494-500, 2000.
CrossRef

[7] G.M. Machado, M.M. Oliveira, and L.A. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Trans. Vis. Comput. Graphics, vol.15, no.6, pp.1291-1298, 2009.
CrossRef

[8] R.C. Gonzalez, R.E. Woods, and B.R. Masters, “Digital image processing, third edition,” J. Biomed. Opt., vol.14, no.2, p.029901, 2009.
CrossRef

[9] M. Ichikawa, K. Tanaka, S. Kondo, K. Hiroshima, K. Ichikawa, S. Tanabe, and K. Fukami, “Web-page color modification for barrier-free color vision with genetic algorithm,” Genetic and Evolutionary Computation Conference, Springer, pp.2134-2146, 2003.
CrossRef

[10] K. Wakita and K. Shimamura, “SmartColor: Disambiguation framework for the colorblind,” Proc. 7th International ACM SIGACCESS Conference on Computers and Accessibility, pp.158-165, 2005.
CrossRef

[11] K. Rasche, R. Geist, and J. Westall, “Re-coloring images for gamuts of lower dimension,” Computer Graphics Forum, vol.24, no.3, pp.423-432, 2005.
CrossRef

[12] L. Jefferson and R. Harvey, “Accommodating color blind computer users,” Proc. 8th International ACM SIGACCESS Conference on Computers and Accessibility, pp.40-47, 2006.
CrossRef

[13] J. Lee and W.P. Dos Santos, “An adaptive fuzzy-based system to simulate, quantify and compensate color blindness,” Integrated Computer-Aided Engineering, vol.18, no.1, pp.29-40, 2011.
CrossRef

[14] C. Lau, W. Heidrich, and R. Mantiuk, “Cluster-based color space optimizations,” 2011 International Conference on Computer Vision, IEEE, pp.1172-1179, 2011.
CrossRef

[15] G.E. Tsekouras, A. Rigos, S. Chatzistamatis, J. Tsimikas, K. Kotis, G. Caridakis, and C.-N. Anagnostopoulos, “A novel approach to image recoloring for color vision deficiency,” Sensors, vol.21, no.8, p.2740, 2021.
CrossRef

[16] J.-B. Huang, Y.-C. Tseng, S.-I. Wu, and S.-J. Wang, “Information preserving color transformation for protanopia and deuteranopia,” IEEE Signal Process. Lett., vol.14, no.10, pp.711-714, 2007.
CrossRef

[17] J.-B. Huang, C.-S. Chen, T.-C. Jen, and S.-J. Wang, “Image recolorization for the colorblind,” 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, pp.1161-1164, 2009.
CrossRef

[18] C.-R. Huang, K.-C. Chiu, and C.-S. Chen, “Key color priority based image recoloring for dichromats,” Advances in Multimedia Information Processing-PCM 2010: 11th Pacific Rim Conference on Multimedia, Shanghai, China, Sept. 2010, Proceedings, Part II 11, Springer, pp.637-647, 2010.
CrossRef

[19] G.R. Kuhn, M.M. Oliveira, and L.A. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Trans. Vis. Comput. Graphics, vol.14, no.6, pp.1747-1754, 2008.
CrossRef

[20] N. Milić, M. Hoffmann, T. Tómács, D. Novaković, and B. Milosavljević, “A content-dependent naturalness-preserving daltonization method for dichromatic and anomalous trichromatic color vision deficiencies,” J. Imag. Sci. Techn., vol.59, no.1, pp.010504-1-010504-10, 2015.
CrossRef

[21] H. Takimoto, H. Yamauchi, M. Jindai, and A. Kanagawa, “Modification of indistinguishable colors for people with color vision deficiency,” Journal of Signal Processing, vol.16, no.6, pp.587-592, 2012.
CrossRef

[22] G. Tennenholtz and I. Zachevsky, “Natural contrast enhancement for dichromats using similarity maps,” 2016 IEEE International Conference on the Science of Electrical Engineering (ICSEE), IEEE, pp.1-5, 2016.
CrossRef

[23] M.F. Hassan and R. Paramesran, “Naturalness preserving image recoloring method for people with red-green deficiency,” Signal Processing: Image Communication, vol.57, pp.126-133, 2017.
CrossRef

[24] M.F. Hassan, “Flexible color contrast enhancement method for red-green deficiency,” Multidim. Syst. Sign. Process., vol.30, no.4, pp.1975-1989, 2019.
CrossRef

[25] Z. Zhu, M. Toyoura, K. Go, K. Kashiwagi, I. Fujishiro, T.-T. Wong, and X. Mao, “Personalized image recoloring for color vision deficiency compensation,” IEEE Trans. Multimedia, vol.24, pp.1721-1734, 2021.
CrossRef

[26] G. Tanaka, N. Suetake, and E. Uchino, “Lightness modification of color image for protanopia and deuteranopia,” OPT. REV., vol.17, pp.14-23, 2010.
CrossRef

[27] N. Suetake, G. Tanaka, H. Hashii, and E. Uchino, “Simple lightness modification for color vision impaired based on Craik-O’Brien effect,” Journal of the Franklin Institute, vol.349, no.6, pp.2093-2107, 2012.
CrossRef

[28] S. Bao, G. Tanaka, H. Tamukoh, and N. Suetake, “Lightness modification method considering Craik-O’Brien effect for protanopia and deuteranopia,” IEICE Trans. Fundamentals., vol.E99-A, no.11, pp.2008-2011, Nov. 2016.
CrossRef

[29] M. Meng and G. Tanaka, “Proposal of minimization problem based lightness modification method considering visual characteristics of protanopia and deuteranopia,” 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2019.
CrossRef

[30] M. Meng and G. Tanaka, “Lightness modification method considering visual characteristics of protanopia and deuteranopia,” Opt. Rev., vol.27, no.6, pp.548-560, 2020.
CrossRef

[31] G. Tanaka, N. Suetake, and E. Uchino, “Gamut grasping for histogram equalization in CIELAB color space,” ITC-CSCC 2009, pp.239-242, 2009.

Authors

Shi BAO
  Inner Mongolia University of Technology

received the B.E. degree in science from Qinghai University in 2005, and the M.E. and Ph.D. degrees in science from Nagoya City University in 2015 and 2018, respectively. Since 2018, he has been an Associate Professor at Inner Mongolia University of Technology. His research direction is image processing.

Xiaoyan SONG
  Inner Mongolia University of Technology

received the B.E. degree in Computer Science and Technology from Inner Mongolia University of Technology, China, in 2021. She is currently working toward the M.S. degree in image processing at Inner Mongolia University of Technology, Hohhot, China.

Xufei ZHUANG
  Inner Mongolia University of Technology

received the B.E. and M.E. degrees in engineering from Inner Mongolia University of Technology, China, in 2002 and 2010, respectively. He is currently with the School of Information Engineering, Inner Mongolia University of Technology, China, where he is an assistant professor. He is interested in object detection and tracking.

Min LU
  Inner Mongolia University of Technology

received the B.E. degree in management and the Ph.D. degree in science from Inner Mongolia University, China, in 2014 and 2022, respectively. She is currently with the School of Information Engineering, Inner Mongolia University of Technology, China. She is interested in image processing.

Gao LE
  Inner Mongolia University of Technology

received the B.E. degree in network engineering from North University of China, Taiyuan, China, in 2020. He is currently working toward the M.S. degree in image processing at Inner Mongolia University of Technology, Hohhot, China. He is interested in deep learning and image processing.
