
Open Access
Skin Diagnostic Method Using Fontana-Masson Stained Images of Stratum Corneum Cells

Shuto HASEGAWA, Koichiro ENOMOTO, Taeko MIZUTANI, Yuri OKANO, Takenori TANAKA, Osamu SAKAI


Summary:

Melanin, which is responsible for the appearance of spots and freckles, is an important indicator in evaluating skin condition. To assess the efficacy of cosmetics, skin condition scoring is performed by analyzing the distribution and amount of melanin in microscopic images of stratum corneum cells. However, the current practice of diagnosing skin condition from stratum corneum cells images relies heavily on visual evaluation by experts. The goal of this study is to develop a quantitative evaluation system for skin condition based on melanin in unstained stratum corneum cells images. The proposed system uses principal component regression to perform five-level scoring, which is then compared with visual evaluation scores to assess the system’s usefulness. Additionally, we evaluated the impact of melanin-related indicators obtained from the images on the scores, and verified which indicators are effective for evaluation. In conclusion, we confirmed that scoring is possible with an accuracy of more than 60% using a combination of several indicators, which is comparable to the accuracy of visual assessment.

Publication
IEICE TRANSACTIONS on Information Vol.E107-D No.8 pp.1070-1078
Publication Date
2024/08/01
Publicized
2024/04/19
Online ISSN
1745-1361
DOI
10.1587/transinf.2023EDP7256
Type of Manuscript
PAPER
Category
Biological Engineering

1.  Introduction

Melanin, which causes spots and freckles, is an important indicator of skin condition. Experts measure the density and distribution of melanin in stratum corneum cells taken from the skin to evaluate the efficacy of cosmetics.

Melanin is produced by melanocytes in the basal layer of the epidermis [1], [2]. Pigmentation occurs when melanin remains in the stratum corneum instead of being expelled through turnover, leaving an uneven distribution of melanin within the stratum corneum. Therefore, if the distribution of melanin in stratum corneum cells is biased, the skin is judged to be in poor condition, showing signs of blemishes.

Many non-invasive methods have been proposed to measure melanin in the epidermis [3]-[13]. One method of melanin evaluation by experts is to collect stratum corneum cells by tape stripping and observe them under a microscope [14]. However, this method places a heavy burden on the evaluator, as an expert must evaluate each stratum corneum cells image. Moreover, the evaluation depends largely on the subjectivity of the expert, which may lead to bias between experts.

Therefore, a method to extract melanin from stratum corneum cells images by image processing has been proposed [15]. Hashimoto et al. [16] also proposed a method for automatic evaluation using features of melanin in images by multiple regression analysis. By using regression analysis, Hashimoto et al. achieved quantitative evaluation in terms of the relationship between the evaluation score and features of melanin; however, this method requires staining of cells and melanin.

The goal of this study is to develop an automated skin condition assessment system using stratum corneum cells images stained solely for melanin by Fontana-Masson stain. The system does not stain the cells, thereby reducing the man-hours required for staining and reducing the burden on specialists. In this paper, we analyze and score the melanin in stratum corneum cells images without cell staining, and compare it with the visual perception score to verify the effectiveness of the proposed method.

This section describes the structure of this paper. Section 2 summarizes related research, and Sect. 3 describes the proposed system: Sect. 3.1 describes cell region extraction, and Sect. 3.2 describes melanin region extraction. Cell region extraction uses deep learning, and melanin region extraction uses adaptive thresholding. Section 3.3 describes the melanin indicators used in this paper, and Sect. 3.4 describes the principal component regression used in scoring. Using regression analysis for quantitative scoring reveals the relationship between predicted evaluation values and the observed indicators, enabling objective skin diagnosis that does not depend on the subjectivity of the expert. Section 4 presents the results and discussion. Finally, Sect. 5 presents the conclusions.

2.  Related Research

2.1  Melanin Analysis Using Optical Methods

Measuring the amount of melanin in the skin is very useful for evaluating the effectiveness of treatments for pigmented diseases and the effects of cosmetics on the skin. Many non-invasive, optical methods for measuring melanin have been reported: melanin measurement by ESR spectroscopy [4], confocal scanning laser microscopy [5], multiphoton microscopy (MPM) combined with spatial frequency domain spectroscopy (SFDS) [6], methods using optical coherence tomography (OCT) to quantify melanin morphology [7], methods using high-speed near-infrared (NIR) Raman spectrometry [8], and characterization using diffuse reflectance spectroscopy [9]-[11]. Based on the optical properties of melanin, these methods have been effective in analyzing and quantifying melanin. Moreover, all of them are non-invasive, an advantage because of the low burden on the patient. With numerous analytical methods available, the analyst can choose the appropriate one depending on the situation.

2.2  Melanin Analysis Using Stratum Corneum Cells Images

In analyzing melanin, methods that evaluate skin condition by directly observing intracellular melanin are also effective and widely used. Examples include imaging of Fontana-Masson stained skin sections by reflectance confocal microscopy (RCM) [12], [13] and microscopic observation of stratum corneum cells collected by the tape stripping method [14]. In particular, observation of stratum corneum cells collected by tape stripping requires no special equipment and is a relatively simple measurement method. However, as mentioned above, direct observation places a heavy burden on the diagnostician, and because the observation is subjective, the diagnostic results depend on the experience of the diagnostician. Therefore, methods of analyzing melanin by image analysis have been studied [15], [16].

Hashimoto et al. [16] constructed an automated quantitative melanin evaluation system based on microscopic images of stratum corneum cells of the face, and showed that the evaluation accuracy is comparable to that of human visual perception. On the other hand, the method of Hashimoto et al. stains stratum corneum cells collected from human skin in addition to staining for melanin. Moreover, template matching is used to extract regions of melanin, and changes in image characteristics, such as changes in the magnification of the microscope image, may reduce the accuracy of the extraction.

In this study, we investigate an automatic skin condition evaluation system that uses only melanin-stained images without staining of cells. The details are described in the next section.

3.  Proposed Method

First, we explain the stratum corneum cells images used in the proposed method. Figure 1 shows an example of the stratum corneum cells images used in this paper. The stratum corneum cells images in Fig. 1 were taken at 1920 \(\times\) 1440 px and cropped to 840 \(\times\) 1440 px. In this paper, stratum corneum cells collected from human skin are stained for melanin (Fontana-Masson stain) and microscopic images are taken at 200x magnification. In the images, granular matter within the cell regions contains melanin. The granular matter outside the cell region is mainly caused by reagents and must be removed when extracting the melanin region. Granular matter generated by the reagent is also present within the cell regions but is difficult to distinguish from melanin and is not considered by experts for visual evaluation. Therefore, we do not distinguish between them in this paper.

Fig. 1  Examples of stratum corneum cells images used. (a) visual evaluation score 5 (b) visual evaluation score 3 (c) visual evaluation score 1

The stratum corneum images used in this paper were also scored on five levels by experts’ visual evaluation. In the scoring, experts make a sensory evaluation based on the density and distribution of melanin in stratum corneum cells; the more evenly distributed the melanin, the better the skin condition.

Secondly, we explain the proposed system. Figure 2 shows the system configuration. In the proposed system, stratum corneum cells collected from the skin are photographed with a microscope, and the skin condition is scored by image processing and fed back to the specialist. The image processing section is described below. First, a melanin-stained microscopic image is captured as an input image. Next, melanin regions and cell regions are extracted, from which several indicators of melanin are obtained as data. Finally, this data is used for scoring by principal component regression. The following sections describe each part of the process.

Fig. 2  System configuration diagram: Input images are collected by tape stripping method and Fontana-Masson stain is applied.

3.1  Cell Regions Extraction

We do not stain the cells, in order to reduce the man-hours required for staining when preparing samples for expert evaluation. As a result, there is no clear difference in color information, such as brightness and saturation, between the cell and background regions, making it difficult to extract cell regions using classical methods. On the other hand, recent studies have reported highly accurate cell segmentation using deep learning [17]-[19]. In this paper, we extract cell regions using Residual U-Net [20] (Res U-Net), a deep learning model suited to semantic segmentation. Res U-Net is a network architecture that combines U-Net [21], also a semantic segmentation model, with the residual blocks of Residual-Net [22] (Fig. 3, Fig. 4).

Fig. 3  Res U-Net architecture used to predict cell regions

Fig. 4  (a) Residual Block architecture (b) Conv Block architecture

3.2  Melanin Regions Extraction

The stratum corneum cells images used in this paper are darker at the edges than in the center due to illumination. Moreover, regions with localized darkness exist due to stacking of cells. Therefore, binarization by adaptive thresholding is effective in extracting melanin regions. Adaptive thresholding is a method of binarization in which the average pixel value of the surrounding area of the pixel of interest is used as the threshold value. Assuming that the pixel value of the input image is \(I(x,y)\), the threshold value \(Th(x,y)\) of the pixel of interest is calculated by Eq. (1).

\[\begin{equation*} Th(x,y)=\frac{1}{N} \sum_{x,y \in D} I(x,y) - c \tag{1} \end{equation*}\]

\(N\) is the number of pixels in region \(D\), and \(c\) is a constant. In this paper, the region size and the constant were determined empirically. Granular matter extracted within the cell regions is then treated as melanin.
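As a concrete illustration, the local-mean binarization of Eq. (1) can be sketched in a few lines of NumPy. The block size and constant below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def adaptive_threshold(img, block=15, c=10):
    """Local-mean binarization (Eq. (1)): a pixel is foreground when it is
    darker than the mean of its block x block neighbourhood minus c.
    block and c are illustrative values, not the paper's settings."""
    img = img.astype(np.float64)
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image gives each block sum in O(1).
    ii = np.pad(np.cumsum(np.cumsum(padded, axis=0), axis=1),
                ((1, 0), (1, 0)))
    h, w = img.shape
    block_sum = (ii[block:block + h, block:block + w]
                 - ii[:h, block:block + w]
                 - ii[block:block + h, :w]
                 + ii[:h, :w])
    th = block_sum / (block * block) - c  # Th(x, y) of Eq. (1)
    return img < th  # melanin granules are dark, so "below threshold"
```

Because the threshold follows the local mean, the uneven illumination described above (darker image edges, locally dark stacked-cell regions) does not require any global correction.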

3.3  Indicators for Melanin

Data on melanin is obtained from the extracted images of cell and melanin regions based on previous studies [16]. In this paper, the skin condition is comprehensively evaluated based on the density and distribution of melanin. Therefore, we mainly use the density of melanin in the cell regions, the luminance of melanin, and the feature values for the size of melanin granules as variables.

3.3.1  Indicators for Melanin Density

The stratum corneum cells image is divided into square regions of a certain size (32 px, 64 px, 128 px), and the percentage of melanin in the cell regions is calculated using the results of cell and melanin regions extraction in each divided region. Then, a histogram is created using the calculation results for each of the divided regions, and the statistical indices shown in Table 1 are obtained.

Table 1  List of statistical indicators for melanin density

In addition to the above, the number of regions with higher melanin density than the average value is counted from the density calculation results for each region as an indicator of the number of regions with high melanin density in the image. Since the score tends to be worse when regions with high melanin density are widely distributed, the mean value of the density of the surrounding regions with high melanin density was calculated and used as an index. We calculated 10 indicators for each divided region size and used a total of 30 indicators in this paper to represent melanin density.
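The per-tile density computation above can be sketched as follows. The statistics returned are illustrative stand-ins for the Table 1 indicators, and the tile size is one of the sizes mentioned (64 px).

```python
import numpy as np

def tile_density_stats(melanin_mask, cell_mask, tile=64):
    """Split the image into tile x tile squares and compute the melanin
    density (melanin pixels / cell pixels) in each; tiles with no cell
    area are skipped. The returned statistics are illustrative stand-ins
    for the Table 1 indicators."""
    h, w = cell_mask.shape
    densities = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            cells = cell_mask[y:y + tile, x:x + tile].sum()
            if cells == 0:
                continue
            mel = melanin_mask[y:y + tile, x:x + tile].sum()
            densities.append(mel / cells)
    d = np.asarray(densities)
    return {
        "mean": d.mean(),
        "std": d.std(),
        "max": d.max(),
        # tiles denser than average: one of the paper's count indicators
        "n_above_mean": int((d > d.mean()).sum()),
    }
```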

3.3.2  Indicators for Melanin Brightness

As in Sect. 3.3.1, the stratum corneum cells image is divided into square regions (32 px, 64 px, 128 px), and in each divided region, the average value of melanin brightness is calculated using the melanin regions extraction results in Sect. 3.2. Next, as in Sect. 3.3.1, a histogram is created using the calculation results for each region, and the same statistical indices as in Table 1 are obtained. We therefore calculated 8 statistical indicators for each divided region size and used a total of 24 indicators in this paper to represent melanin brightness.

3.3.3  Indicators for the Size of Melanin Granules

To measure the size of the granular matter, we apply an opening process to the melanin regions image extracted in Sect. 3.2 to remove small granules, and then count the remaining granular objects. In this paper, the opening process is performed with 2, 3, 4, and 5 px square kernels (Fig. 5). The number of granular objects without opening is also measured and used as an indicator. Because the number of melanin granules in an image is affected by the number of cells sampled from the skin, the measured counts are normalized by the cell region area [px], obtained from the cell regions extraction result in Sect. 3.1, and the normalized values are used as indicators of melanin granules. We therefore used a total of five indicators to represent melanin size in this paper.
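A minimal NumPy sketch of the opening step is shown below. It implements binary opening with a square kernel directly; a production pipeline would likely use a library routine, and the subsequent granule count would need connected-component labeling, which is omitted here.

```python
import numpy as np

def erode(mask, k):
    """Binary erosion with a k x k square kernel (top-left anchored)."""
    h, w = mask.shape
    out = np.ones((h - k + 1, w - k + 1), dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= mask[dy:dy + h - k + 1, dx:dx + w - k + 1]
    return np.pad(out, ((0, k - 1), (0, k - 1)))  # restore original shape

def dilate(mask, k):
    """Binary dilation with a k x k square kernel."""
    h, w = mask.shape
    padded = np.pad(mask, k - 1)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def opening(mask, k):
    """Erosion followed by dilation: granules smaller than k x k vanish."""
    return dilate(erode(mask, k), k)
```

Running `opening` with k = 2 through 5 reproduces the cascade of Fig. 5, in which progressively larger granules are removed at each kernel size.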

Fig. 5   Melanin regions after opening process at each kernel size. (a) Melanin regions original image (b) Kernel 2\(\times\)2 px opening process (c) Kernel 3\(\times\)3 px opening process (d) Kernel 4\(\times\)4 px opening process (e) Kernel 5\(\times\)5 px opening process

3.4  Scoring by Principal Component Regression

For scoring by principal component regression, principal component analysis [23], [24] was first conducted using the 59 indicators discussed in Sect. 3.3. The 59 principal components obtained from the results are used as explanatory variables, and multiple regression analysis is performed using the experts’ visual evaluation score as the objective variable. In the visual evaluation, experts scored each image based on the density and distribution of melanin in stratum corneum cells. The visual evaluation scores were classified into five levels from 1 to 5.

In principal component analysis, a new indicator \(Z_m (m=1,2,\cdots,p)\) is calculated as a weighted average of the \(p\) basic variables \(X_i (i = 1,2,\cdots,p)\) for each indicator (Eq. (2)).

\[\begin{equation*} \begin{cases} Z_1 = w_{11}X_1 + w_{12}X_2 + \dots + w_{1p}X_p \\ Z_2 = w_{21}X_1 + w_{22}X_2 + \dots + w_{2p}X_p \\ \hphantom{Z_2 =Z_2 =Z_2 =} \vdots \\ Z_p = w_{p1}X_1 + w_{p2}X_2 + \dots + w_{pp}X_p \\ \end{cases} \tag{2} \end{equation*}\]

Where \(w_{mi}\) represents the weights (eigenvectors) for the basic variables. Each principal component is obtained so that the variance \(\sigma_m^2\) of each of \(Z_m\) is maximized under the condition that they are uncorrelated. Equation (3) is obtained by sequentially constraining the second and subsequent principal components.

\[\begin{equation*} \sigma_1^2 > \sigma_2^2 > \dots > \sigma_p^2 \tag{3} \end{equation*}\]

The eigenvalues \(\lambda_m\) of the covariance matrix calculated from the basic variables are obtained from the constraints for each weight shown in Eq. (4). Note that the calculation of the covariance matrix and the eigenvalues are omitted.

\[\begin{equation*} \sum^{p}_{i=1} w_{mi}^2 = 1 \quad (m = 1, 2, \cdots, p) \tag{4} \end{equation*}\]

This allows the calculation of \(C_m\) (the contribution ratio that expresses how much information on the original underlying variable is retained by each principal component) from Eq. (5).

\[\begin{equation*} C_m = \frac{\lambda_m}{\sum^{p}_{i=1} \lambda_i} \quad (m = 1, 2, \cdots, p) \tag{5} \end{equation*}\]

The contribution ratio represents the proportion of the total variance explained by each principal component, and principal components with high contribution ratios (large variance) are generally considered important in principal component analysis. However, since principal components with small variance can be important factors in regression problems [25], multiple regression analysis was conducted using all \(p\) principal components in this paper.
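The contribution ratio of Eq. (5) can be computed directly from the covariance eigenvalues. The sketch below assumes the indicators are arranged as an \(n \times p\) matrix of samples by variables.

```python
import numpy as np

def contribution_ratios(X):
    """Contribution ratio C_m of Eq. (5): each eigenvalue of the covariance
    matrix of the indicators, divided by the eigenvalue sum."""
    cov = np.cov(X, rowvar=False)            # np.cov centers the columns itself
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # sort descending, as in Eq. (3)
    return eigvals / eigvals.sum()
```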

Next, multiple regression analysis is performed using the principal component scores as explanatory variables and the experts’ visual evaluation scores as the objective variable. In multiple regression analysis, a regression equation satisfying Eq. (6) is obtained using the explanatory variables \(Z_1, Z_2, \cdots, Z_p\) for the objective variable \(Y\), where \(Y\) is the expert’s visual evaluation score and \(a_1, a_2, \cdots, a_p\) are partial regression coefficients.

\[\begin{equation*} Y = a_1 Z_1 + a_2 Z_2 + \dots + a_p Z_p \tag{6} \end{equation*}\]

In the multiple regression analysis, variable selection was conducted using the stepwise backward elimination method, and significance tests were conducted for each variable. The predicted scores calculated from the regression equation were rounded and then discretized into five levels, with scores below 1 set to score 1 and scores above 5 set to score 5. The predicted scores are evaluated by measuring their error against the visual evaluation scores.
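The discretization rule described above (round, then clip to the 1-5 range) can be written as:

```python
import numpy as np

def discretize_scores(y_pred):
    """Round regression predictions half-up, then clip to the 1-5 scale
    (below 1 becomes score 1, above 5 becomes score 5)."""
    rounded = np.floor(np.asarray(y_pred) + 0.5)
    return np.clip(rounded, 1, 5).astype(int)
```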

This paper used 87 images of stratum corneum cells that had been evaluated by experts, which is a small amount of data to perform a statistical analysis. Therefore, a leave-one-out cross validation method was used.
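Leave-one-out cross validation can be sketched generically as below; `fit` and `predict` are placeholders for any regression procedure (the paper uses principal component regression).

```python
import numpy as np

def loocv_predictions(X, y, fit, predict):
    """Leave-one-out cross validation: each sample is predicted by a model
    fitted on the remaining n - 1 samples (n = 87 images in the paper).
    fit and predict are placeholders for any regression procedure."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i  # hold out sample i
        model = fit(X[keep], y[keep])
        preds[i] = predict(model, X[i:i + 1])[0]
    return preds
```

With only 87 images, this scheme lets every image serve as a test case while still training on nearly the full dataset.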

4.  Results

4.1  Cell Regions Extraction Results

This section describes the results of cell regions extraction. Figure 6 shows the results of cell regions extraction by Res U-Net, and Fig. 7 shows examples of regions where prediction accuracy decreased. The ground truth images in Fig. 6 were created under the supervision of an expert. Figures 7 (d) and 7 (h) overlay the ground truth and predicted images, with red regions indicating background misdetected as cells and blue regions indicating cells misdetected as background. The stratum corneum cells images in Figs. 6 and 7 are cropped to 448 px squares. The predicted images were created by dividing the stratum corneum cells images into 512 \(\times\) 512 px tiles and integrating them after prediction. The stratum corneum cells images used were 1920 \(\times\) 1440 px, and the Intersection over Union (IoU) of the three predicted images, sample1 to sample3, was calculated against the ground truth. IoU is calculated by Eq. (7) using True Positive (TP), False Positive (FP), and False Negative (FN).

\[\begin{equation*} IoU = \frac{TP}{TP + FP + FN} \tag{7} \end{equation*}\]
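Equation (7) translates directly into mask arithmetic; the sketch below assumes the predicted and ground-truth segmentations are boolean arrays of the same shape.

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union of two boolean masks, Eq. (7)."""
    tp = np.logical_and(pred, truth).sum()   # True Positive
    fp = np.logical_and(pred, ~truth).sum()  # False Positive
    fn = np.logical_and(~pred, truth).sum()  # False Negative
    return tp / (tp + fp + fn)
```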

Fig. 6  Result of cell regions extraction.

Fig. 7   Examples of regions with decreased prediction accuracy. (a) Example of a portion of sample1. (b) Ground truth image of (a). (c) Predicted image of (a). (d) Overlay image of (b) and (c). The red regions indicate misdetection of background as cells. (e) Example of a portion of sample3. (f) Ground truth image of (e). (g) Predicted image of (e). (h) Overlay image of (f) and (g). The blue regions indicate misdetection of cells as background.

The results were 0.93, 0.96, and 0.96, respectively, confirming that the prediction accuracy was above 0.9 for all images. However, prediction accuracy decreased when the melanin in the cell regions was significantly less than in other cells, making the cell regions difficult to recognize visually, as in Figs. 7 (a) and 7 (d), or when much granular matter caused by reagents remained in the background regions, as in Figs. 7 (e) and 7 (h). This result suggests that the model relies partly on the granular matter to distinguish cells. The prediction accuracy could therefore be improved by adding images such as those in Fig. 7 to the training data.

On the other hand, it should be noted that the current prediction accuracy does not hinder the construction of the system, since the purpose of this study is not to improve the accuracy of cell region extraction.

4.2  Melanin Regions Extraction Results

This section describes the results of melanin regions extraction. Figure 8 shows the results of melanin regions extraction using the adaptive thresholding of the proposed method and the template matching of a previous study [16]. The stratum corneum cells images in Fig. 8 are cropped to 228 px squares. The images used are the same as in Sect. 4.1. Melanin appears as small granular matter a few pixels in size in the image. The color density of each granule differs, and it is extremely difficult even for experts to accurately identify melanin granules. For this reason, we do not compare the accuracy of the melanin regions extraction results against a ground truth.

Fig. 8  Result of melanin regions extraction.

In template matching, the granular matter to be detected can be adjusted dynamically by changing the templates and threshold values. For example, to detect only large melanin granules, template matching is more effective than adaptive thresholding. On the other hand, adaptive thresholding has the advantages of a simple algorithm and easy tuning due to its small number of parameters. In this paper, we use adaptive thresholding, which is easy to apply even when the magnifications of the microscope images differ.

4.3  Scoring Results

We applied principal component regression to 87 stratum corneum cells images using the 59 indicators in Sect. 3.3 to validate the scoring results by cross-validation. Table 2 shows a comparison of the expert’s visual evaluation scores and the predicted scores from the regression analysis. Table 2 shows that the number of images with no error from the visual evaluation on a 5-point scale was 54 out of 87, with a prediction accuracy of 62.1%.

Table 2  Results of comparison of visual evaluation scores and predicted scores

4.3.1  Validation Results for the Best Model

The percentage of correct predicted scores is expected to vary depending on the combination of indicators. In particular, in this validation we did not know which features in the image contributed to the score, so there was concern that some of the indicators used in the validation were unnecessary for the analysis. In this section, we identified the best model for this paper by predicting scores for different combinations of the indicators in Sect. 3.3.

The indicators used in this paper are broadly classified into melanin density, brightness value, and granular size. Therefore, we made a model-by-model comparison with a reduced number of indicators by changing these combinations. Table 3 shows the combinations of melanin indicators that were validated. In Table 3, the data for melanin density and brightness values in No. 8 to No. 13 are based only on the data of 64 px size of the divided square regions in Sect. 3.3.1 and Sect. 3.3.2.

Table 3  Combinations of melanin indicators tested in the verification

Table 4 shows the scoring results for each combination. In Table 4, accuracy is calculated using the results with zero deviation from the visual evaluation score. Table 4 shows that the best accuracy, 64.4%, was obtained with the combinations No.7 and No.8. In addition, the accuracy tended to be lower when only one type of indicator was used, as in No.2-4, No.9, and No.10. On the other hand, although there was no clear trend between the total number of indicators and accuracy, the selection of appropriate indicators is considered important because the interpretation of principal components can become more difficult as the number of indicators increases.

Table 4  Number of images at each deviation and percentage of correct answers for each combination of melanin indicators

Regarding the accuracy of this predicted score, Hashimoto et al. [16] pointed out the ambiguity of human visual evaluation and verified the validity of the predicted results. In this paper, experts also reevaluated the stratum corneum cells images used in this paper to validate the accuracy of the predictions.

4.3.2  Expert Reevaluation and Validation of Prediction Accuracy

In this section, to evaluate the ambiguity of human visual evaluation, we compared the visual evaluations of Evaluator A and Evaluator B on 234 stratum corneum cells images, which differ from the 87 images used in Sect. 4.3.1. The stratum corneum cells images used in this section are of the same type as those described in Sect. 3. Evaluator A and Evaluator B are experts trained to avoid evaluation differences in the diagnosis of stratum corneum images. Table 5 shows the comparison of the visual evaluations of Evaluator A and Evaluator B: the two evaluators’ scores matched for 120 images, or 51.3%. In addition, Table 4 shows that the accuracy for the combination of indicators No.7 and No.8 was 64.4%, indicating that the evaluation accuracy of this system is sufficiently high compared to visual evaluation.

Table 5  Results of comparison of visual evaluation scores of Evaluator A and Evaluator B

These results indicate that it is possible to construct an automatic skin condition evaluation system using the proposed method.

5.  Conclusion

In this paper, the scoring method based on principal component regression was validated with visual evaluation scores using features obtained from stratum corneum cells images. As a result, we confirmed that the scoring method can predict the score of stratum corneum cells images with accuracy equal to or better than that of visual evaluation. This indicates that the proposed method is feasible for an automatic skin condition evaluation system using unstained stratum corneum cells images.

On the other hand, this paper did not consider differences in importance among the features obtained from the images. Therefore, it is possible that some of the indicators used might include indicators of relatively low importance. Further selection of the features to be used, based on a dermatological perspective, would make the system more appropriate.

Acknowledgements

This work was partially supported by the Regional ICT Research Center of Human, Industry and Future at The University of Shiga Prefecture, and by the Cabinet Office, Government of Japan.

References

[1] K. Jimbow, W.C. Quevedo Jr., T.B. Fitzpatrick, and G. Szabo, “Some Aspects Of Melanin Biology: 1950-1975,” The Journal of Investigative Dermatology, vol.67, no.1, pp.72-89, July 1976.
CrossRef

[2] P.M. Plonka, T. Passeron, M. Brenner, D.J. Tobin, S. Shibahara, A. Thomas, A. Slominski, A.L. Kadekaro, D. Hershkovitz, E. Peters, J.J. Nordlund, Z. Abdel-Malek, K. Takeda, R. Paus, J.P. Ortonne, V.J. Hearing, and K.U. Schallreuter, “What are melanocytes really doing all day long…?” Experimental Dermatology, vol.18, no.9, pp.799-819, Aug. 2009. DOI: 10.1111/j.1600-0625.2009.00912.x
CrossRef

[3] N. Tsumura, N. Ojima, K. Sato, M. Shiraishi, H. Shimizu, H. Nabeshima, S. Akazaki, K. Hori, and Y. Miyake, “Image-based skin color and texture analysis/synthesis by extracting hemoglobin and melanin information in the skin,” SIGGRAPH ’03: ACM SIGGRAPH 2003 Papers, pp.770-779, July 2003. DOI: 10.1145/1201775.882344
CrossRef

[4] T. Sarna, J.M. Burke, W. Korytowski, M. Różanowska, C.M.B. Skumatz, A. Zarȩba, and M. Zarȩba, “Loss of melanin from human RPE with aging: possible role of melanin photooxidation,” Experimental Eye Research, vol.76, no.1, pp.89-98, Jan. 2003. DOI: 10.1016/S0014-4835(02)00247-6
CrossRef

[5] M. Rajadhyaksha, M. Grossman, D. Esterowitz, R.H. Webb, and R.R. Anderson, “In Vivo Confocal Scanning Laser Microscopy of Human Skin: Melanin Provides Strong Contrast,” Journal of Investigative Dermatology, vol.104, no.6, pp.946-952, June 1995. DOI: 10.1111/1523-1747.ep12606215
CrossRef

[6] R.B. Saager, M. Balu, V. Crosignani, A. Sharif, A.J. Durkin, K.M. Kelly, and B.J. Tromberg, “In vivo measurements of cutaneous melanin across spatial scales: using multiphoton microscopy and spatial frequency domain spectroscopy,” Journal of Biomedical Optics, vol.20, no.6, pp.066005-1-066005-10, June 2015. DOI: 10.1117/1.JBO.20.6.066005
CrossRef

[7] I.-L. Chen, Y.-J. Wang, C.-C. Chang, Y.-H. Wu, C.-W. Lu, J.-W. Shen, L. Huang, B.-S. Lin, and H.-M. Chiang, “Computer-Aided Detection (CADe) System with Optical Coherent Tomography for Melanin Morphology Quantification in Melasma Patients,” Diagnostics, vol.11, no.8, pp.1-13, Aug. 2021. DOI: 10.3390/diagnostics11081498
CrossRef

[8] Z. Huang, H. Lui, X.K. Chen, A. Alajlan, D.I. McLean, and H. Zeng, “Raman spectroscopy of in vivo cutaneous melanin,” Journal of Biomedical Optics, vol.9, no.6, pp.1198-1205, Nov. 2004. DOI: 10.1117/1.1805553
CrossRef

[9] G. Zonios and A. Dimou, “Melanin optical properties provide evidence for chemical and structural disorder in vivo,” Optics Express, vol.16, no.11, pp.8263-8268, 2008. DOI: 10.1364/OE.16.008263
CrossRef

[10] D. Yudovsky and L. Pilon, “Rapid and accurate estimation of blood saturation, melanin content, and epidermis thickness from spectral diffuse reflectance,” Applied Optics, vol.49, no.10, pp.1707-1719, 2010. DOI: 10.1364/AO.49.001707
CrossRef

[11] G. Zonios, J. Bykowski, and N. Kollias, “Skin Melanin, Hemoglobin, and Light Scattering Properties can be Quantitatively Assessed In Vivo Using Diffuse Reflectance Spectroscopy,” Journal of Investigative Dermatology, vol.117, no.6, pp.1452-1457, Dec. 2001. DOI: 10.1046/j.0022-202x.2001.01577.x
CrossRef

[12] T. Yamashita, T. Kuwahara, S. González, and M. Takahashi, “Non-Invasive Visualization of Melanin and Melanocytes by Reflectance-Mode Confocal Microscopy,” Journal of Investigative Dermatology, vol.124, no.1, pp.235-240, Jan. 2005. DOI: 10.1111/j.0022-202X.2004.23562.x
CrossRef

[13] H.Y. Kang, P. Bahadoran, I. Suzuki, D. Zugaj, A. Khemis, T. Passeron, P. Andres, and J.-P. Ortonne, “In vivo reflectance confocal microscopy detects pigmentary changes in melasma at a cellular level resolution,” Experimental Dermatology, vol.19, no.8, pp.e228-e233, July 2010. DOI: 10.1111/j.1600-0625.2009.01057.x
CrossRef

[14] J. Lademann, U. Jacobi, C. Surber, H.-J. Weigmann, and J.W. Fluhr, “The tape stripping procedure - evaluation of some critical parameters,” European Journal of Pharmaceutics and Biopharmaceutics, vol.72, no.2, pp.317-323, June 2009. DOI: 10.1016/j.ejpb.2008.08.008
CrossRef

[15] H.A. Miot, G. Brianezi, A. de A. Tamega, and L.D.B. Miot, “Techniques of digital image analysis for histological quantification of melanin,” An. Bras. Dermatol., vol.87, no.4, pp.608-611, Aug. 2012. DOI: 10.1590/S0365-05962012000400014

[16] T. Hashimoto, K. Yamashita, K. Yamazaki, K. Hiroyama, J. Yabuzaki, and H. Kobayashi, “Study of Analysis and Quantitative Estimation of Melanin in Face Epidermal Corneocyte,” Transactions of the JSME (in Japanese), vol.78, no.786, pp.160-174, Feb. 2012.

[17] S.U. Akram, J. Kannala, L. Eklund, and J. Heikkila, “Cell Segmentation Proposal Network for Microscopy Image Analysis,” DLMIA, pp.21-29, Sept. 2016.

[18] C. Stringer, T. Wang, M. Michaelos, and M. Pachitariu, “Cellpose: a generalist algorithm for cellular segmentation,” Nature Methods, vol.18, pp.100-106, Jan. 2021. DOI: 10.1038/s41592-020-01018-x

[19] Y. Al-Kofahi, A. Zaltsman, R. Graves, W. Marshall, and M. Rusu, “A deep learning-based algorithm for 2-D cell segmentation in microscopy images,” BMC bioinformatics, vol.19, pp.1-11, Oct. 2018. DOI: 10.1186/s12859-018-2375-z

[20] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” MICCAI 2015, vol.9351, pp.234-241, Nov. 2015.

[21] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.770-778, 2016.

[22] Z. Zhang, Q. Liu, and Y. Wang, “Road Extraction by Deep Residual U-Net,” IEEE Geoscience and Remote Sensing, vol.15, no.5, pp.749-753, May 2018. DOI: 10.1109/LGRS.2018.2802944

[23] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and Intelligent Laboratory Systems, vol.2, no.1-3, pp.37-52, Aug. 1987. DOI: 10.1016/0169-7439(87)80084-9

[24] K. Pearson, “On lines and planes of closest fit to systems of points in space,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol.2, no.11, pp.559-572, 1901. DOI: 10.1080/14786440109462720

[25] I.T. Jolliffe, “A Note on the Use of Principal Components in Regression,” Journal of the Royal Statistical Society Series C: Applied Statistics, vol.31, no.3, pp.300-303, 1982. DOI: 10.2307/2348005

Authors

Shuto HASEGAWA
  University of Shiga Prefecture

received his B.S. from the University of Shiga Prefecture, Japan in 2022. Since then, he has been a graduate student at the university. His main research interest is in image processing technology.

Koichiro ENOMOTO
  Regional ICT Research Center for Human, Industry and Future, University of Shiga Prefecture

received his B.S. from Future University Hakodate, Japan in 2009 and M.S. and Ph.D. in system information science from Future University Hakodate, Japan in 2011 and 2014. He had also worked under the JSPS doctor fellowship (DC2) from 2012 to 2014, and as a project researcher at Future University Hakodate, Japan in 2014. From 2014 to 2018, he was an assistant professor in the Graduate School of Science and Technology, Niigata University, Japan. From 2018 to 2019, he was an assistant professor in the School of Engineering, University of Shiga Prefecture. Since 2019, he has been a lecturer in the School of Engineering, University of Shiga Prefecture. His main research interests are in image processing technology and human interface. He is a member of The Institute of Electronics, Information and Communication Engineers (IEICE).

Taeko MIZUTANI
  CIEL Co., Ltd.

graduated from Tokyo University of Science in 2001 and completed a Master’s course majoring in Biotechnology at the graduate school of Tokyo University of Science in 2003. She received her doctorate in Engineering from Tokyo University of Technology in 2017. She has worked in the cosmetics industry since 2003, and since 2014 she has worked for CIEL Co., Ltd. Her main research interests are the evaluation of the efficacy and safety of cosmetic ingredients. She received the Best Paper award of the Foundation of Oil & Fat Industry Kaikan in 2017.

Yuri OKANO
  CIEL Co., Ltd.

graduated from the Faculty of Science, Okayama University. She received her master’s degree from Okayama University and her doctorate (Pharmaceutics) from Kyoto Pharmaceutical University in 2003. She worked for a Japanese cosmetics manufacturer (Noevir Co., Ltd.) as a researcher from 1983 to 2001, and for an ingredient manufacturer (NIKKO Chemicals Co., Ltd.) as the manager of its evaluation division from 2001 to 2011. She was a general manager at Rohto Pharmaceutical Co., Ltd. from 2011 to 2013. Since 2013, she has been the director of CIEL Co., Ltd. (her own company). Her main field is the evaluation of the efficacy and safety of ingredients and finished products for topical application. She received the best paper award at the International Federation of Societies of Cosmetic Chemists in 2009.

Takenori TANAKA
  Niigata SL Co., Ltd.

graduated from the Department of Mechanical Engineering, Niitsu Technical High School in Niigata Prefecture, in 1980 and joined Toshiba Elevator and Escalator Service Co. He joined Japan SE Corporation in 1986 and FIT Corporation in 1990, and established Niigata S-Labo, Inc. in 2007. He has been engaged in software development since 1986. Since 2013, he has been engaged in research and development of stratum corneum measurement systems, IoT systems, and systems using machine learning.

Osamu SAKAI
  Regional ICT Research Center for Human, Industry and Future, University of Shiga Prefecture

received his B.S., M.S., and Ph.D. from Kyoto University, Kyoto, Japan, in 1990, 1992, and 1996, respectively. He was a research engineer (1995-2002) with the Sharp Corporation, Nara, Japan, and an assistant professor and then an associate professor in the Department of Electronic Science and Engineering, Kyoto University (2003-2014). He is currently a professor in the School of Engineering and the chair of the Regional ICT Research Center for Human, Industry and Future, the University of Shiga Prefecture. His current research interests include network science, monitoring technology, plasma science and engineering, and metamaterial science. He is a member of the Institute of Electrical and Electronics Engineers (IEEE), the American Physical Society (APS), the Japan Society of Applied Physics (JSAP), and The Institute of Electronics, Information and Communication Engineers (IEICE).
