
Keyword Search Result

[Keyword] SSM (127 hits)

Results 21-40 of 127

  • Validity of Kit-Build Method for Assessment of Learner-Build Map by Comparing with Manual Methods

    Warunya WUNNASRI  Jaruwat PAILAI  Yusuke HAYASHI  Tsukasa HIRASHIMA  

     
    PAPER-Educational Technology

      Publicized:
    2018/01/11
      Vol:
    E101-D No:4
      Page(s):
    1141-1150

    This paper describes an investigation into the validity of an automatic assessment method for learner-built concept maps by comparing it with two well-known manual methods. We have previously proposed the Kit-Build (KB) concept map framework, in which a learner builds a concept map using only a provided set of components known as a “kit”. In this framework, instant and automatic assessment of a learner-built concept map has been realized. We call this assessment method the “Kit-Build method” (KB method). The framework and assessment method have already been used practically in classrooms at various schools. To investigate the validity of this method, we conducted a case-study experiment comparing its assessment results with those of two manual assessment methods. In this experiment, 22 university students participated as subjects and four as raters. We found that the scores of the KB method correlated very strongly with the scores of the manual methods. These results provide evidence that the automatic assessment of Kit-Build concept maps can attain almost the same level of validity as well-known manual assessment methods.
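The validity check above boils down to correlating automatic KB-method scores with manual rater scores. A minimal sketch with invented score lists (the actual experiment used 22 subjects and four raters; these numbers are purely illustrative):

```python
import numpy as np

def pearson_correlation(x, y):
    """Pearson correlation coefficient between two score lists."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical scores: automatic KB-method scores vs. one manual rater
kb_scores     = [8, 6, 9, 4, 7, 5, 10, 3]
manual_scores = [7, 6, 9, 5, 8, 4, 10, 3]

r = pearson_correlation(kb_scores, manual_scores)
print(f"Pearson r = {r:.3f}")  # a value near 1 indicates strong agreement
```

A "very strong correlation" in the abstract's sense would correspond to r close to 1 across all rater pairings.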

  • Collaborative Ontology Development Approach for Multidisciplinary Knowledge: A Scenario-Based Knowledge Construction System in Life Cycle Assessment

    Akkharawoot TAKHOM  Sasiporn USANAVASIN  Thepchai SUPNITHI  Mitsuru IKEDA  

     
    PAPER-Knowledge Representation

      Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    892-900

    Creating an ontology from multidisciplinary knowledge is challenging because it requires various domain experts to collaborate in knowledge construction and to verify the semantic meanings of cross-domain concepts. Confusion and misinterpretation of concepts during knowledge creation are usually caused by the differing perspectives and business goals of the domain experts. In this paper, we propose a community-driven ontology-based application management (CD-OAM) framework that provides a collaborative environment with supporting features for collaborative knowledge creation. It can also reduce confusion and misinterpretation among domain stakeholders during the knowledge construction process. We selected Life Cycle Assessment (LCA), a multidisciplinary domain, for our scenario-based knowledge construction. Constructing LCA knowledge requires concepts from various fields, including environmental protection, economic development, and social development. The output of this collaborative knowledge construction is called the MLCA (multidisciplinary LCA) ontology. Our scenario-based experiment shows that the CD-OAM framework can support the collaborative activities of MLCA knowledge construction and also reduce the confusion and misinterpretation of cross-domain concepts that usually arise in general approaches.

  • Performance Comparison of Subjective Quality Assessment Methods for 4k Video

    Kimiko KAWASHIMA  Kazuhisa YAMAGISHI  Takanori HAYASHI  

     
    PAPER-Multimedia Systems for Communications

      Publicized:
    2017/08/29
      Vol:
    E101-B No:3
      Page(s):
    933-945

    Many subjective quality assessment methods have been standardized, and experimenters can select among them according to the aim of the planned subjective assessment experiment. It is often argued that the results of subjective quality assessment are affected by range effects, which are caused by the quality distribution of the assessment videos. However, no studies on the double-stimulus continuous quality-scale (DSCQS) and absolute category rating with hidden reference (ACR-HR) methods have investigated range effects in the high-quality range. We therefore conducted experiments using high-quality assessment videos (high-quality experiment) and low-to-high-quality assessment videos (low-to-high-quality experiment), and compared the DSCQS and ACR-HR methods in terms of accuracy, stability, and discrimination ability. First, regarding accuracy, the mean opinion scores of the DSCQS and ACR-HR methods were only marginally affected by range effects, and almost all common processed video sequences showed no significant difference between the high- and low-to-high-quality experiments. Second, the two methods were equally stable in the low-to-high-quality experiment, whereas the DSCQS method was more stable than the ACR-HR method in the high-quality experiment. Finally, the DSCQS method had higher discrimination ability than the ACR-HR method in the low-to-high-quality experiment, whereas both methods had almost the same discrimination ability in the high-quality experiment. We thus conclude that the DSCQS method is better at minimizing range effects than the ACR-HR method in the high-quality range.
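The two methods compared above score videos differently: DSCQS rates the reference and the processed sequence on a continuous 0-100 scale and uses the per-subject difference, while ACR-HR collects absolute ratings and recovers a differential score against the hidden reference. A minimal sketch of both scorings with invented ratings, following the usual ITU definitions as we read them:

```python
import numpy as np

def dscqs_score(ref_ratings, pvs_ratings):
    """DSCQS: mean of per-subject differences (reference minus processed)
    on the 0-100 continuous scale; larger = worse perceived quality."""
    return float(np.mean(np.asarray(ref_ratings) - np.asarray(pvs_ratings)))

def acr_hr_dmos(ref_ratings, pvs_ratings, scale_max=5):
    """ACR-HR differential MOS: DV = V(PVS) - V(REF) + scale_max, so a
    sequence rated equal to its hidden reference scores scale_max."""
    dv = np.asarray(pvs_ratings) - np.asarray(ref_ratings) + scale_max
    return float(np.mean(dv))

ref = [90, 85, 95, 88]   # hypothetical DSCQS ratings of the reference
pvs = [70, 72, 80, 75]   # hypothetical DSCQS ratings of the processed video
print(dscqs_score(ref, pvs))       # 15.25

acr_ref = [5, 5, 4, 5]   # hypothetical ACR ratings of the hidden reference
acr_pvs = [4, 4, 4, 3]   # hypothetical ACR ratings of the processed video
print(acr_hr_dmos(acr_ref, acr_pvs))   # 4.0
```

Range effects show up in how these per-sequence scores shift when the surrounding stimuli are all high quality versus spread from low to high.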

  • Web-Browsing QoE Estimation Model

    Toshiko TOMINAGA  Kanako SATO  Noriko YOSHIMURA  Masataka MASUDA  Hitoshi AOKI  Takanori HAYASHI  

     
    PAPER-Network

      Publicized:
    2017/03/29
      Vol:
    E100-B No:10
      Page(s):
    1837-1845

    Web browsing services are expanding as smartphones become increasingly popular worldwide. To provide customers with appropriate web-browsing quality, quality design and in-service quality management based on quality of experience (QoE) are important. We propose a web-browsing QoE estimation model. The most important QoE factor in web browsing is the waiting time for a web page to load; the variation in communication quality over a mobile network must also be considered. We conducted a subjective quality assessment test to clarify QoE characteristics in terms of waiting time using 20 different types of web pages, and constructed a web-page QoE estimation model. We then conducted a subjective quality assessment test of web browsing to clarify the relationship between web-page QoE and web-browsing QoE for three web sites. We obtained the following two QoE characteristics. First, the main factor influencing web-browsing QoE is the average web-page QoE. Second, when web-page QoE varies, a decrease in web-page QoE with a large amplitude causes the web-browsing QoE to decrease. We used these characteristics to construct our web-browsing QoE estimation model. Verification tests using non-training data confirm the accuracy of the model. We also show that our findings are applicable to web-browsing quality design and to QoE-based quality management.
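The two findings above suggest a simple pooling shape: browsing QoE tracks the average page QoE, with an extra penalty when some page's QoE drops far below that average. The following toy function is purely our illustration of that shape; the threshold and penalty values are invented, not the paper's fitted model:

```python
def web_browsing_qoe(page_qoes, drop_threshold=1.0, penalty=0.5):
    """Hypothetical pooling of per-page QoE (e.g. 1-5 MOS) into a
    browsing-session QoE: average page QoE, minus a penalty for each
    page whose QoE falls far below the average (a large-amplitude drop).
    All parameter values are invented for illustration."""
    avg = sum(page_qoes) / len(page_qoes)
    big_drops = sum(1 for q in page_qoes if avg - q > drop_threshold)
    return max(1.0, avg - penalty * big_drops)

print(web_browsing_qoe([4.0, 4.2, 3.9]))   # stable pages: near the average
print(web_browsing_qoe([4.0, 4.2, 1.5]))   # one big drop lowers session QoE
```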

  • Image Quality Assessment Based on Multi-Order Local Features Description, Modeling and Quantification

    Yong DING  Xinyu ZHAO  Zhi ZHANG  Hang DAI  

     
    PAPER-Pattern Recognition

      Publicized:
    2017/03/16
      Vol:
    E100-D No:6
      Page(s):
    1303-1315

    Image quality assessment (IQA) plays an important role in quality monitoring, evaluation and optimization for image processing systems. However, current quality-aware feature extraction methods for IQA can hardly balance accuracy and complexity. This paper introduces multi-order local description into IQA for feature extraction. The first-order structure derivative and higher-order discriminative information are integrated into a local pattern representation to serve as the quality-aware features. Joint distributions of the local pattern representation are then modeled by a spatially enhanced histogram. Finally, image quality degradation is estimated by quantifying the divergence between such distributions of the reference image and those of the distorted image. Experimental results demonstrate that the proposed method outperforms other state-of-the-art approaches in terms of not only accuracy, i.e., consistency with human subjective evaluation, but also robustness and stability across different distortion types and various public databases. It provides a promising choice for IQA development.
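The pipeline above (local pattern, histogram, divergence between reference and distorted distributions) can be sketched with a deliberately simplified stand-in: a first-order gradient-orientation histogram as the local pattern and a chi-square divergence as the degradation measure. Neither choice is the paper's actual descriptor; they only illustrate the pipeline shape:

```python
import numpy as np

def gradient_pattern_histogram(img, bins=16):
    """Histogram of local gradient orientations, a toy stand-in for the
    paper's multi-order local pattern representation."""
    gy, gx = np.gradient(img.astype(float))
    angles = np.arctan2(gy, gx)                      # first-order structure
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def chi_square_divergence(p, q, eps=1e-12):
    """Divergence between reference and distorted pattern distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum((p - q) ** 2 / (p + q)))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
dist = ref + 0.5 * rng.random((64, 64))              # simulated distortion

score = chi_square_divergence(gradient_pattern_histogram(ref),
                              gradient_pattern_histogram(dist))
print(f"degradation score: {score:.4f}")             # 0 means identical
```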

  • Development and Evaluation of Online Infrastructure to Aid Teaching and Learning of Japanese Prosody Open Access

    Nobuaki MINEMATSU  Ibuki NAKAMURA  Masayuki SUZUKI  Hiroko HIRANO  Chieko NAKAGAWA  Noriko NAKAMURA  Yukinori TAGAWA  Keikichi HIROSE  Hiroya HASHIMOTO  

     
    INVITED PAPER

      Publicized:
    2016/12/22
      Vol:
    E100-D No:4
      Page(s):
    662-669

    This paper develops an online and freely available framework to aid the teaching and learning of the prosodic control of Tokyo Japanese: how to generate adequate word accent and phrase intonation. This framework is called OJAD (Online Japanese Accent Dictionary) [1] and it provides three features. 1) Visual, auditory, systematic, and comprehensive illustration of the patterns of accent change (accent sandhi) of verbs and adjectives; here, only the changes caused by twelve fundamental conjugations are covered. 2) Visual illustration of the accent pattern of a given verbal expression, which is a combination of a verb and its postpositional auxiliary words. 3) Visual illustration of the pitch pattern of any given sentence and the expected positions of accent nuclei in the sentence. The third feature is implemented using an accent change prediction module that we developed for Japanese Text-To-Speech (TTS) synthesis [2],[3]. Experiments show that accent nucleus assignment to given texts by the proposed framework is much more accurate than that by native speakers. Subjective and objective assessments by teachers and learners show the very high pedagogical effectiveness of the developed framework.

  • Naturalization of Screen Content Images for Enhanced Quality Evaluation

    Xingge GUO  Liping HUANG  Ke GU  Leida LI  Zhili ZHOU  Lu TANG  

     
    LETTER-Information Network

      Publicized:
    2016/11/24
      Vol:
    E100-D No:3
      Page(s):
    574-577

    The quality assessment of screen content images (SCIs) has attracted much attention recently. Unlike natural images, an SCI is usually a mixture of pictures and text. Traditional quality metrics are mainly designed for natural images and do not fit SCIs well. Motivated by this, this letter presents a simple and effective method to naturalize SCIs so that traditional quality models can be applied to SCI quality prediction. Specifically, bicubic interpolation-based up-sampling is proposed to achieve this goal. Extensive experiments and comparisons demonstrate the effectiveness of the proposed method.
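The naturalization step is essentially a cubic-interpolation up-sampling that softens the hard text edges of an SCI toward natural-image statistics. A minimal sketch using scipy's cubic spline zoom as a stand-in for bicubic interpolation (the zoom factor is an illustrative choice, not the paper's setting):

```python
import numpy as np
from scipy.ndimage import zoom

def naturalize_sci(img, factor=2):
    """Up-sample a screen content image with cubic interpolation so that
    sharp text-like edges are smoothed toward natural-image statistics.
    scipy's spline order=3 is used here as a stand-in for bicubic."""
    return zoom(img.astype(float), factor, order=3)

sci = np.zeros((8, 8))
sci[:, 4:] = 255.0            # a hard, text-like vertical edge
nat = naturalize_sci(sci, factor=2)
print(nat.shape)              # (16, 16)
```

A traditional metric such as SSIM could then be computed on the naturalized image rather than the raw SCI.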

  • Driver Behavior Assessment in Case of Critical Driving Situations

    Oussama DERBEL  René LANDRY, Jr.  

     
    PAPER

      Vol:
    E100-A No:2
      Page(s):
    491-498

    Driver behavior assessment is a hard task since it involves distinct interconnected factors of different types. Especially in insurance applications, the trade-off between application cost and data accuracy remains a challenge. Data uncertainty and noise make smartphone and low-cost sensor platforms unreliable. To deal with these problems, this paper combines Belief theory and Fuzzy theory in a two-level fusion-based architecture. It enables the propagation of information errors from the lower to the higher fusion level using the belief and/or plausibility functions at the decision step. The newly developed risk models of the Driver and the Environment are based on accident statistics for each significant driving risk parameter. The developed Vehicle risk models are based on the longitudinal and lateral accelerations (the G-G diagram) and the velocity, and qualify the driving behavior in critical events (e.g., a zig-zag scenario). In over-speed and/or accident scenarios, the risk is evaluated using our new Fuzzy Inference System model based on the Equivalent Energy Speed (EES). The proposed approach and risk models are illustrated with two driving scenarios using the CarSim vehicle simulator. The results show the validity of the developed risk models and their coherence with the a-priori risk assessment.
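The G-G diagram mentioned above plots longitudinal against lateral acceleration; a critical event is flagged when the combined acceleration leaves a friction circle. A minimal sketch of that check, with an invented 0.4 g bound (the paper's models are far richer, adding belief/fuzzy fusion on top):

```python
import math

def gg_risk(ax, ay, g_limit=0.4):
    """Flag a critical driving event when the combined longitudinal (ax)
    and lateral (ay) acceleration, in g, leaves the friction circle of
    the G-G diagram. g_limit is a hypothetical comfort/safety bound."""
    magnitude = math.hypot(ax, ay)
    return magnitude > g_limit, magnitude

# Gentle cruising vs. an aggressive zig-zag manoeuvre
print(gg_risk(0.1, 0.1))   # inside the circle: not critical
print(gg_risk(0.3, 0.5))   # harsh lateral input: outside the circle
```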

  • Revisiting the Regression between Raw Outputs of Image Quality Metrics and Ground Truth Measurements

    Chanho JUNG  Sanghyun JOO  Do-Won NAM  Wonjun KIM  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2016/08/08
      Vol:
    E99-D No:11
      Page(s):
    2778-2787

    In this paper, we investigate the potential usefulness of machine learning in image quality assessment (IQA). Most previous studies have focused on designing effective image quality metrics (IQMs), and significant advances have been made in the development of IQMs over the last decade. Here, our goal is to improve the prediction outcomes of “any” given image quality metric. We call this the “IQM's Outcome Improvement” problem, to distinguish the proposed approach from existing IQA approaches. We propose a method that takes an underlying IQM and improves its prediction results using machine learning techniques. Extensive experiments have been conducted on three different publicly available image databases. In particular, through both 1) in-database and 2) cross-database validations, the generality and technological feasibility (in real-world applications) of our machine-learning-based algorithm have been evaluated. Our results demonstrate that the proposed framework improves the prediction outcomes of various commonly used IQMs (e.g., MSE, PSNR, and SSIM-based IQMs) in terms of not only prediction accuracy but also prediction monotonicity.
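The core idea, learning a regression from raw metric outputs to ground-truth subjective scores, can be sketched with the simplest possible learner. Here a cubic least-squares fit maps hypothetical PSNR values to MOS; the paper uses richer machine-learning models, and all data below are invented:

```python
import numpy as np

# Hypothetical raw metric outputs (PSNR in dB) and ground-truth MOS values
psnr = np.array([25, 28, 30, 33, 36, 38, 40, 42], float)
mos  = np.array([1.5, 2.0, 2.6, 3.3, 3.9, 4.3, 4.6, 4.8])

# Minimal learned mapping: cubic regression from the raw metric output to
# subjective scores (a stand-in for the paper's ML techniques)
coeffs = np.polyfit(psnr, mos, deg=3)
predict = np.poly1d(coeffs)

print(np.round(predict(psnr), 2))   # fitted scores, close to mos
```

Prediction monotonicity, one of the paper's criteria, corresponds here to the fitted mapping preserving the rank order of the raw metric.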

  • Discriminative Metric Learning on Extended Grassmann Manifold for Classification of Brain Signals

    Yoshikazu WASHIZAWA  

     
    LETTER-Neural Networks and Bioengineering

      Vol:
    E99-A No:4
      Page(s):
    880-883

    Electroencephalography (EEG) and magnetoencephalography (MEG) measure brain signals from spatially distributed electrodes. To detect event-related synchronization and desynchronization (ERS/ERD), which are utilized for brain-computer/machine interfaces (BCI/BMI), spatial filtering techniques are often used. Common spatial pattern (CSP) filtering and its extensions are spatial filtering methods that have been widely used for BCIs. CSP transforms brain signals, which have spatial and temporal indices, into vectors via a covariance representation. However, the variance-covariance structure is essentially different from a vector space, and not all of its information can be carried over into a vector representation. Grassmannian embedding methods have therefore been proposed to utilize the variance-covariance structure of variational patterns. In this paper, we propose a metric learning method to classify brain signals utilizing the covariance structure. We embed the brain signal in the extended Grassmann manifold and classify it on the manifold using the proposed metric. Due to this embedding, the pattern structure is fully utilized for classification. We conducted an experiment using an open benchmark dataset and found that the proposed method exhibits better performance than CSP and its extensions.
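The covariance representation at the heart of the method can be illustrated with a deliberately crude stand-in: compute each trial's spatial covariance and classify by minimum Frobenius distance to per-class mean covariances. This replaces the paper's learned metric on the extended Grassmann manifold with the simplest possible surrogate; the simulated ERS/ERD-like data below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

def trial_covariance(trial):
    """Spatial covariance of one trial (channels x samples)."""
    x = trial - trial.mean(axis=1, keepdims=True)
    return x @ x.T / x.shape[1]

def classify_by_covariance(trial, class_means):
    """Minimum-distance-to-mean on covariance matrices (Frobenius norm),
    a crude stand-in for the paper's learned manifold metric."""
    c = trial_covariance(trial)
    return int(np.argmin([np.linalg.norm(c - m, 'fro') for m in class_means]))

def make_trial(cls):
    """Simulated 2-channel trial: per-class channel power difference,
    mimicking an ERS/ERD contrast."""
    scales = np.array([2.0, 0.5]) if cls == 0 else np.array([0.5, 2.0])
    return scales[:, None] * rng.normal(size=(2, 200))

class_means = [np.mean([trial_covariance(make_trial(c)) for _ in range(20)], axis=0)
               for c in (0, 1)]
print(classify_by_covariance(make_trial(0), class_means))  # class-0 trials land near the class-0 mean
```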

  • Color-Enriched Gradient Similarity for Retouched Image Quality Evaluation

    Leida LI  Yu ZHOU  Jinjian WU  Jiansheng QIAN  Beijing CHEN  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2015/12/09
      Vol:
    E99-D No:3
      Page(s):
    773-776

    Image retouching is fundamental in photography and is widely used to improve the perceptual quality of low-quality images. Traditional image quality metrics are designed for degraded images, so they are of limited use in evaluating the quality of retouched images. This letter presents a RETouched Image QUality Evaluation (RETIQUE) algorithm that measures structure and color changes between the original and retouched images. Structure changes are measured by gradient similarity; colorfulness and saturation are utilized to measure color changes. The overall quality score of a retouched image is computed as a linear combination of gradient similarity and color similarity. The performance of RETIQUE is evaluated on the public Digitally Retouched Image Quality (DRIQ) database. Experimental results demonstrate that the proposed metric outperforms state-of-the-art methods.
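The structure term and the final linear combination can be sketched as follows. The gradient-similarity form and the Hasler-Suesstrunk-style colorfulness are standard formulations, but the constants and the 0.7/0.3 weighting are illustrative, not the paper's fitted values:

```python
import numpy as np

def gradient_similarity(g1, g2, c=1e-4):
    """Pointwise gradient similarity between two gradient-magnitude maps."""
    return (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)

def colorfulness(img_rgb):
    """Hasler-Suesstrunk style colorfulness (one common formulation)."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return float(np.hypot(rg.std(), yb.std())
                 + 0.3 * np.hypot(rg.mean(), yb.mean()))

def retique_like_score(gsim_map, color_sim, alpha=0.7):
    """Linear combination of gradient and color similarity; the weight
    alpha is illustrative, not the paper's value."""
    return alpha * float(gsim_map.mean()) + (1 - alpha) * color_sim

rng = np.random.default_rng(1)
g_orig = rng.random((32, 32))
score = retique_like_score(gradient_similarity(g_orig, g_orig), 1.0)
print(score)   # identical inputs give a similarity near 1
```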

  • An Image Quality Assessment Using Mean-Centered Weber Ratio and Saliency Map

    Soyoung CHUNG  Min Gyo CHUNG  

     
    LETTER

      Publicized:
    2015/10/21
      Vol:
    E99-D No:1
      Page(s):
    138-140

    Chen proposed an image quality assessment method that evaluates image quality based on the ratio of noise in an image. However, Chen's method has some drawbacks: unnoticeable noise is reflected in the evaluation, and noise positions are not accurately detected. Therefore, in this paper, we propose a new image quality measurement scheme using the mean-centered WLNI (Weber's Law Noise Identifier) and a saliency map. The experimental results show that the proposed method outperforms Chen's and agrees more consistently with human visual judgment.

  • Quantitative Assessment of Facial Paralysis Based on Spatiotemporal Features

    Truc Hung NGO  Yen-Wei CHEN  Naoki MATSUSHIRO  Masataka SEO  

     
    PAPER-Pattern Recognition

      Publicized:
    2015/10/01
      Vol:
    E99-D No:1
      Page(s):
    187-196

    Facial paralysis is a common clinical condition, occurring in 30 to 40 patients per 100,000 people per year. A quantitative tool to support medical diagnostics is therefore necessary. This paper proposes a simple, visual, and robust method that can objectively measure the degree of facial paralysis using spatiotemporal features. The main contribution of this paper is an effective spatiotemporal feature extraction method based on landmark tracking. Our method overcomes the drawbacks of other techniques, such as the influence of irrelevant regions, noise, illumination changes, and time-consuming processing. In addition, the method is simple and visual: its simplicity reduces processing time, and the movements of the landmarks, which relate to muscle movement ability, can be visualized. This visualization helps reveal regions of serious facial paralysis. In terms of recognition rate, experimental results show that our proposed method outperformed the other techniques tested on a dynamic facial expression image database.
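One natural spatiotemporal feature from tracked landmarks is the asymmetry of movement between the two sides of the face: a paralyzed side moves less. The following toy score is our own illustration of that idea, not the paper's actual feature set, and the trajectories are synthetic:

```python
import numpy as np

def paralysis_score(left_tracks, right_tracks):
    """Asymmetry between left- and right-side landmark trajectories
    (arrays of shape landmarks x frames x 2) as a hypothetical severity
    score: 0 = symmetric movement, values near 1 = one side barely moves."""
    left_mov = np.sum(np.linalg.norm(np.diff(left_tracks, axis=1), axis=2))
    right_mov = np.sum(np.linalg.norm(np.diff(right_tracks, axis=1), axis=2))
    lo, hi = sorted((left_mov, right_mov))
    return 1.0 - lo / max(hi, 1e-9)

t = np.linspace(0, 1, 10)
healthy = np.stack([np.zeros_like(t), t], axis=-1)[None]   # one moving landmark
paralyzed = 0.2 * healthy                                  # weakened side
print(paralysis_score(healthy, healthy))                   # 0.0
print(round(paralysis_score(healthy, paralyzed), 3))       # 0.8
```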

  • Reduced-Reference Image Quality Assessment Based on Discrete Cosine Transform Entropy

    Yazhong ZHANG  Jinjian WU  Guangming SHI  Xuemei XIE  Yi NIU  Chunxiao FAN  

     
    PAPER-Digital Signal Processing

      Vol:
    E98-A No:12
      Page(s):
    2642-2649

    Reduced-reference (RR) image quality assessment (IQA) algorithms aim to automatically evaluate distorted image quality with partial reference data. The goal of an RR IQA metric is to achieve higher quality-prediction accuracy using less reference information. In this paper, we introduce a new RR IQA metric that quantifies the difference in discrete cosine transform (DCT) entropy features between the reference and distorted images. Neurophysiological evidence indicates that the human visual system presents different sensitivities to different frequency bands; moreover, distortions on different bands result in distinct quality degradations. Therefore, we calculate the information degradation on each band separately for quality assessment. The information degradations are first measured by the entropy difference of reorganized DCT coefficients. Then, the entropy differences on all bands are pooled to obtain the quality score. Experimental results on the LIVE, CSIQ, TID2008, Toyama and IVC databases show that the proposed method is highly consistent with human perception while using limited reference data (8 values).
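The band-wise entropy pipeline can be sketched as: take the 2D DCT, split coefficients into frequency bands, compute a histogram entropy per band, and pool the per-band entropy differences between reference and distorted images. The band partition, bin count, and unweighted pooling below are simplifications of the paper's scheme:

```python
import numpy as np
from scipy.fft import dctn

def band_entropies(img, n_bands=4, bins=32):
    """Shannon entropy of DCT coefficient magnitudes, one value per
    frequency band (a simplified stand-in for the paper's bands)."""
    coeffs = dctn(img.astype(float), norm='ortho')
    h, w = coeffs.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = (yy / h + xx / w) / 2.0            # 0 = DC, toward 1 = highest band
    entropies = []
    for b in range(n_bands):
        mask = (radius >= b / n_bands) & (radius < (b + 1) / n_bands)
        hist, _ = np.histogram(np.abs(coeffs[mask]), bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        entropies.append(float(-(p * np.log2(p)).sum()))
    return np.array(entropies)

def rr_quality_score(ref_img, dist_img):
    """Pool per-band entropy differences into a single distortion score
    (unweighted sum here; the paper pools per-band degradations)."""
    return float(np.abs(band_entropies(ref_img) - band_entropies(dist_img)).sum())

rng = np.random.default_rng(2)
ref = rng.random((32, 32))
print(rr_quality_score(ref, ref))               # 0.0 for identical images
```

Only the per-band entropies of the reference need to be transmitted, which is how the metric stays reduced-reference.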

  • Software Reliability Assessment with Multiple Changes of Testing-Environment

    Shinji INOUE  Shigeru YAMADA  

     
    PAPER

      Vol:
    E98-A No:10
      Page(s):
    2031-2041

    We discuss software reliability assessment considering multiple changes in the software fault-detection phenomenon. The testing time at which the characteristics of the software failure-occurrence or fault-detection phenomenon change notably during the testing phase of software development is called a change-point. It is known that the occurrence of a change-point influences the accuracy of software reliability assessment based on software reliability growth models, which are mainly divided into software failure-occurrence time models and fault-counting models. This paper discusses software reliability growth modeling frameworks that consider the effect of multiple change-point occurrences on the software reliability growth process in both types of modeling. We also show numerical illustrations of software reliability analyses based on our models using actual data.
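A change-point in a fault-counting model can be illustrated with the classic exponential software reliability growth model, letting the fault-detection rate switch at the change-point. The sketch below handles a single change-point with invented parameters; the paper's frameworks generalize to multiple change-points:

```python
import math

def mean_faults_detected(t, a=100.0, b1=0.1, b2=0.05, tau=10.0):
    """Exponential SRGM mean value function with one change-point tau:
    the fault-detection rate changes from b1 to b2 at tau, and a is the
    expected total number of faults. All parameter values are illustrative."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    m_tau = a * (1.0 - math.exp(-b1 * tau))          # faults found by tau
    return m_tau + (a - m_tau) * (1.0 - math.exp(-b2 * (t - tau)))

for t in (5, 10, 20, 50):
    print(t, round(mean_faults_detected(t), 2))
```

The function is continuous at the change-point and monotonically approaches the total fault content a; fitting b1, b2, and tau to actual test data is what the assessment frameworks formalize.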

  • Saliency Guided Gradient Similarity for Fast Perceptual Blur Assessment

    Peipei ZHAO  Leida LI  Hao CAI  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2015/05/18
      Vol:
    E98-D No:8
      Page(s):
    1613-1616

    Blur is one of the most common distortion types and greatly impacts image quality. Most existing no-reference (NR) image blur metrics produce scores without a fixed range, so it is hard to judge the extent of blur directly. This letter presents an NR perceptual blur metric using Saliency-Guided Gradient Similarity (SGGS), which produces blur scores within the fixed range (0,1). A blurred image is first reblurred using a Gaussian low-pass filter, producing a heavily blurred image. With this reblurred image as the reference, a local blur map is generated by computing the gradient similarity. Finally, visual saliency is employed in the pooling to adapt to the characteristics of the human visual system (HVS). The proposed metric features a fixed range, fast computation, and better consistency with the HVS. Experiments demonstrate its advantages.
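The reblur trick works because an already-blurry image changes little under further Gaussian blurring, so its gradient map stays similar to the reblurred version. A minimal sketch with uniform pooling in place of the paper's saliency weighting (sigma and the stability constant are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def grad_mag(img):
    """Gradient magnitude via Sobel filtering."""
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def sggs_like_blur_score(img, sigma=3.0, c=1e-3):
    """No-reference blur score in (0, 1]: reblur the image, then compare
    gradient maps via gradient similarity. A heavily blurred input changes
    little when reblurred, so its score approaches 1 (more blur). The
    paper's saliency weighting is replaced by uniform pooling here."""
    img = img.astype(float)
    reblurred = gaussian_filter(img, sigma)
    g1, g2 = grad_mag(img), grad_mag(reblurred)
    sim = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return float(sim.mean())

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))
blurry = gaussian_filter(sharp, 4.0)
print(sggs_like_blur_score(sharp), sggs_like_blur_score(blurry))
```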

  • Objective Estimation Methods for the Quality of HDR Images and Their Evaluation with Subjective Assessment

    Hirofumi TAKANO  Naoyuki AWANO  Kenji SUGIYAMA  

     
    PAPER

      Vol:
    E98-A No:8
      Page(s):
    1689-1695

    High dynamic range (HDR) images, which include large differences in brightness levels, are studied here to address the lack of knowledge about quality estimation methods for real HDR images. We earlier proposed a new metric, the independent signal-to-noise ratio (ISNR), which uses the independent pixel value as the signal instead of the peak value used in the PSNR. We then proposed the local peak signal-to-noise ratio (LPSNR), which uses the maximum value of neighboring pixels, as an improved version. However, these methods did not sufficiently consider human perception. To address this issue, we propose here an objective estimation method that considers spatial frequency characteristics based on the actual brightness. In this method, an approximated function for human visual characteristics is calculated and used as a 2D filter on an FFT for spatial frequency weighting. To confirm the usefulness of this objective estimation method, we compared its results with a subjective assessment, using an organic EL display with an essentially perfect contrast ratio. The experimental results showed that perceptual weighting improves the correlation between the SNR and the MOS of the subjective assessment, and that the weighted LPSNR gives the best correlation.
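The LPSNR idea, replacing the single global peak of PSNR with a per-pixel local peak, can be sketched as below. This is our reading of the construction, not the paper's exact formula, and the 3x3 window is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def lpsnr(ref, dist, size=3):
    """Local-peak SNR: PSNR-style score where the global peak value is
    replaced by the maximum over each pixel's neighborhood (our reading
    of the LPSNR idea; normalization details are assumptions)."""
    ref, dist = ref.astype(float), dist.astype(float)
    local_peak = np.maximum(maximum_filter(ref, size=size), 1.0)
    normalized_mse = np.mean(((ref - dist) / local_peak) ** 2)
    return float(-10.0 * np.log10(max(normalized_mse, 1e-12)))

rng = np.random.default_rng(4)
ref = rng.random((32, 32)) * 255.0
noisy = ref + rng.normal(0.0, 5.0, ref.shape)
print(round(lpsnr(ref, noisy), 1))   # roughly 30-35 dB for this noise level
```

Dark regions get a small local peak, so the same absolute error costs more there, which is closer to how noise visibility behaves on an HDR display.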

  • Selective Attention Mechanisms for Visual Quality Assessment

    Ulrich ENGELKE  

     
    INVITED PAPER

      Vol:
    E98-A No:8
      Page(s):
    1681-1688

    Selective visual attention is an integral mechanism of the human visual system that is often neglected when designing perceptually relevant image and video quality metrics. Disregarding attention mechanisms assumes that all distortions in the visual content impact equally on the overall quality perception, which is typically not the case. Over the past years we have performed several experiments to study the effect of visual attention on quality perception. In addition to gaining a deeper scientific understanding of this matter, we were also able to use this knowledge to further improve various quality prediction models. In this article, I review our work with the aim to increase awareness on the importance of visual attention mechanisms for the effective design of quality prediction models.

  • Perceptually Optimized Missing Texture Reconstruction via Neighboring Embedding

    Takahiro OGAWA  Miki HASEYAMA  

     
    PAPER

      Vol:
    E98-A No:8
      Page(s):
    1709-1717

    Perceptually optimized missing texture reconstruction via neighboring embedding (NE) is presented in this paper. The proposed method adopts the structural similarity (SSIM) index as a measure of texture reconstruction performance for missing areas. This solves the problem that previously reported methods could not perform perceptually optimized reconstruction. Furthermore, the proposed method introduces a new scheme for selecting the known nearest-neighbor patches used to reconstruct target patches that include missing areas. Specifically, by monitoring the SSIM index observed by the proposed NE-based reconstruction algorithm, the selection of known patches optimal for the reconstruction becomes feasible even if target patches include missing pixels. These novel approaches enable successful reconstruction of missing areas. Experimental results show the improvement of the proposed method over previously reported methods.
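The SSIM-monitored patch selection can be sketched with a single-window SSIM and an argmax over candidate patches. The masking of missing pixels is omitted here, so this only illustrates the selection criterion, not the full NE reconstruction:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Global (single-window) SSIM index between two equal-size patches
    with dynamic range L; the usual c1/c2 stability constants are used."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def select_best_patch(target, candidates):
    """Pick the candidate patch most similar to the target in the SSIM
    sense, a sketch of the paper's SSIM-monitored nearest-neighbor
    selection (handling of missing pixels is simplified away)."""
    scores = [ssim_global(target, cand) for cand in candidates]
    return int(np.argmax(scores))

rng = np.random.default_rng(7)
target = rng.random((8, 8))
candidates = [rng.random((8, 8)), target + 0.01 * rng.random((8, 8))]
print(select_best_patch(target, candidates))   # index 1: the near-copy wins
```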

  • Objective No-Reference Video Quality Assessment Method Based on Spatio-Temporal Pixel Analysis

    Wyllian B. da SILVA  Keiko V. O. FONSECA  Alexandre de A. P. POHL  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2015/04/03
      Vol:
    E98-D No:7
      Page(s):
    1325-1332

    Digital video signals are subject to several distortions due to compression, transmission over noisy channels, and video processing. Video quality evaluation has therefore become a necessity for broadcasters and content providers interested in offering high video quality to their customers. We propose an objective no-reference video quality assessment metric based on a sigmoid model using spatio-temporal features, with parameters obtained by solving a nonlinear least-squares problem with the Levenberg-Marquardt algorithm. Experimental results show that, when applied to MPEG-2 streams, our method presents better linearity than full-reference metrics, and for H.264 streams its performance is close to that achieved by full-reference metrics.
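Fitting a sigmoid to quality data via Levenberg-Marquardt nonlinear least squares can be sketched with scipy, whose `curve_fit` uses the LM method for unconstrained problems. The feature values, scores, and sigmoid parameterization below are invented for illustration, not the paper's spatio-temporal features:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, c):
    """Sigmoid mapping from a (hypothetical) spatio-temporal feature
    to a quality score: amplitude a, slope b, midpoint c."""
    return a / (1.0 + np.exp(-b * (x - c)))

# Synthetic feature values and noisy subjective scores for illustration
features = np.linspace(0, 10, 50)
true_scores = sigmoid(features, 4.0, 1.2, 5.0)
rng = np.random.default_rng(5)
scores = true_scores + rng.normal(0, 0.05, features.shape)

# curve_fit applies Levenberg-Marquardt when no bounds are given
params, _ = curve_fit(sigmoid, features, scores, p0=[3.0, 1.0, 4.0])
print(np.round(params, 2))   # recovered parameters, near [4.0, 1.2, 5.0]
```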
