
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E94-D No.4 (Publication Date: 2011/04/01)

    Special Section on Advanced Technologies in Knowledge Media and Intelligent Learning Environment
  • FOREWORD Open Access

    Setsuo YOKOYAMA  Toyohide WATANABE  

     
    FOREWORD

      Page(s):
    741-742
  • Latent Conditional Independence Test Using Bayesian Network Item Response Theory

    Takamitsu HASHIMOTO  Maomi UENO  

     
    PAPER

      Page(s):
    743-753

    Item response theory (IRT) is widely used for test analyses. Most models of IRT assume that a subject's responses to different items in a test are statistically independent. However, actual situations often violate this assumption. Thus, conditional independence (CI) tests among items given a latent ability variable are needed, but traditional CI tests suffer from biases. This study investigated a latent conditional independence (LCI) test given a latent variable. Results show that the LCI test can detect CI given a latent variable correctly, whereas traditional CI tests often fail to detect CI. Application of the LCI test to mathematics test data revealed that items that share common alternatives might be conditionally dependent.
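
    As a brief illustration of the local independence assumption that the LCI test examines (a minimal sketch under the two-parameter logistic model, not the authors' Bayesian network IRT formulation), the joint probability of a response pattern factorizes over items given the latent ability:

        import numpy as np

        def irt_2pl(theta, a, b):
            # 2PL item response function: P(correct | theta) for each item.
            return 1.0 / (1.0 + np.exp(-np.asarray(a) * (theta - np.asarray(b))))

        def joint_prob_under_local_independence(responses, theta, a, b):
            # Joint probability of a binary response pattern, assuming the item
            # responses are conditionally independent given theta -- the very
            # assumption that a conditional independence test would check.
            p = irt_2pl(theta, a, b)
            r = np.asarray(responses)
            return float(np.prod(np.where(r == 1, p, 1.0 - p)))

        # Illustrative values: three items, ability theta = 0.5
        print(joint_prob_under_local_independence([1, 0, 1], 0.5,
                                                  a=[1.2, 0.8, 1.0], b=[0.0, -0.5, 1.0]))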

  • Exploring the Teaching Efficiency of Integrating an Animated Agent into Web-Based Multimedia Learning System

    Kai-Yi CHIN  Yen-Lin CHEN  Jong-Shin CHEN  Zeng-Wei HONG  Jim-Min LIN  

     
    PAPER

      Page(s):
    754-762

    In our previous project, an XML-based authoring tool was provided for teachers to script multimedia teaching material with animated agents, and a stand-alone learning system was designed for students to display the material and interact with the animated agents. We also provided evidence that the authoring tool and learning system successfully enhance learning performance in computer-assisted learning. The aim of this study is to continue the previous project by developing a Web-based multimedia learning system that presents materials and an animated agent in a Web browser. The Web-based multimedia learning system gives students an opportunity to engage in independent learning or to review school course work. To demonstrate the efficiency of this learning system, it was applied in one elementary school. An experimental material, `Road Traffic Safety', was presented in two learning systems: a Web-based PowerPoint learning system and a Web-based multimedia learning system. The experiment was carried out in two classes with a total of thirty-one 3rd-grade students. The results suggest that using our authoring tool in a Web-based learning system can improve learning and, in particular, enhance learners' problem-solving ability. Students with higher achievement on the post-test showed better comprehension of problem-solving questions. Furthermore, feedback from the questionnaire surveys shows that students' learning interest can be fostered when an animated agent is integrated into multimedia teaching materials, and that students prefer to adopt the Web-based multimedia learning system for independent learning after school.

  • Real-World Oriented Mobile Constellation Learning Environment Using Gaze Pointing

    Masato SOGA  Masahito OHAMA  Yosikazu EHARA  Masafumi MIWA  

     
    PAPER

      Page(s):
    763-771

    We developed a real-world oriented mobile constellation learning environment. Learners point at a target constellation by gazing through a cylinder with a gyro-sensor under the real starry sky, and the system displays information related to that constellation. The system has original exercise functions that are not supported by existing systems or products from other research groups or companies. Through experimentation, we evaluated the learning environment to assess its learning effects.

  • Regular Section
  • Parameter Estimation for Non-convex Target Object Using Networked Binary Sensors

    Hiroshi SAITO  Sadaharu TANAKA  Shigeo SHIODA  

     
    PAPER-Fundamentals of Information Systems

      Page(s):
    772-785

    We describe a parameter estimation method for a target object in an area monitored by sensors. The parameters to be estimated are the perimeter length, the size, and a parameter determined by the interior angles of the target object. The estimation method does not use sensor location information, only the binary information on whether each sensor detects the target object. First, the sensing area of each sensor is assumed to be line-segment-shaped, which models an infrared distance measurement sensor. Second, based on the analytical results for line-segment-shaped sensing areas, we develop a unified equation that works with general sensing areas and general target-object shapes to estimate the parameters of the target objects. Numerical examples using computer simulation show that our method yields accurate results.

  • AMT-PSO: An Adaptive Magnification Transformation Based Particle Swarm Optimizer

    Junqi ZHANG  Lina NI  Chen XIE  Ying TAN  Zheng TANG  

     
    PAPER-Fundamentals of Information Systems

      Page(s):
    786-797

    This paper presents an adaptive magnification transformation based particle swarm optimizer (AMT-PSO) that provides an adaptive search strategy for each particle throughout the search process. Magnification transformation is a simple but powerful mechanism inspired by using a convex lens to see things more clearly. The essence of this transformation is to set a magnifier around an area of interest so that the area can be inspected more carefully and precisely. An evolutionary factor, which utilizes information about the population distribution in the particle swarm, is used as an index to adaptively tune the magnification scale factor for each particle in each dimension. Furthermore, a perturbation-based elitist learning strategy is utilized to help the swarm's best particle escape the local optimum and explore potentially better regions of the search space. The AMT-PSO is evaluated on 15 unimodal and multimodal benchmark functions. The effects of the adaptive magnification transformation mechanism and the elitist learning strategy in AMT-PSO are studied. Results show that the adaptive magnification transformation mechanism provides the main contribution to the proposed AMT-PSO in terms of convergence speed and solution accuracy on four categories of benchmark test functions.
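
    For readers unfamiliar with the baseline being modified, the sketch below shows a standard PSO velocity/position update with a per-particle, per-dimension scale factor m inserted where a magnification-like transformation could act; m is a hypothetical placeholder for illustration, not the paper's actual adaptive magnification transformation.

        import numpy as np

        def pso_step(x, v, pbest, gbest, m, w=0.729, c1=1.49, c2=1.49, rng=None):
            # One canonical PSO update. x, v, pbest, m: (particles x dims) arrays;
            # gbest: (dims,) array. m scales the velocity per particle and dimension
            # as an illustrative stand-in for an adaptive magnification factor.
            rng = rng if rng is not None else np.random.default_rng()
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = m * v  # placeholder for the magnification transformation
            return x + v, v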

  • An H.264/AVC Decoder with Reduced External Memory Access for Motion Compensation

    Jaesun KIM  Younghoon KIM  Hyuk-Jae LEE  

     
    PAPER-Computer System

      Page(s):
    798-808

    The excessive memory access required to perform motion compensation when decoding compressed video is one of the main limitations on improving the performance of an H.264/AVC decoder. This paper proposes an H.264/AVC decoder that employs three techniques to reduce external memory access: efficient distribution of reference frame data, an on-chip cache memory, and frame memory recompression. The distribution of reference frame data is optimized to reduce the number of row activations during SDRAM access. A novel cache organization is proposed to simplify tag comparisons and ease access to consecutive 4×4 blocks. A recompression algorithm is modified to improve compression efficiency by using unused storage space in neighboring blocks as well as the correlation with the neighboring pixels stored in the cache. Experimental results show that the three techniques together reduce external memory access time by an average of 90%, which is 16% better than the improvements achieved by previous work. The efficiency of the frame memory recompression algorithm is improved with a 32×32 cache, resulting in a PSNR improvement of 0.371 dB. The H.264/AVC decoder with the three techniques is fabricated as an ASIC using 0.18 µm technology.

  • A New Multiple-Round Dimension-Order Routing for Networks-on-Chip

    Binzhang FU  Yinhe HAN  Huawei LI  Xiaowei LI  

     
    PAPER-Computer System

      Page(s):
    809-821

    Network-on-Chip (NoC) designs are constrained by reliability concerns, which motivates fault-tolerant routing. Generally, there are two main design objectives: tolerating more faults and achieving high network performance. To this end, we propose a new multiple-round dimension-order routing (NMR-DOR). Unlike existing solutions, which use only intermediate nodes between virtual channels (VCs), NMR-DOR also utilizes turn-legal intermediate nodes inside each VC. Hence, more faults are tolerated through these newly introduced intermediate nodes without adding extra VCs. Furthermore, unlike previous solutions in which some VCs are prioritized, NMR-DOR distributes packets among the different VCs in a more flexible and even manner. With extensive simulations, we show that NMR-DOR recovers more than 90% of the node pairs left unreachable by faults under previous solutions and significantly reduces packet latency compared with existing solutions.

  • Energy-Aware Task Scheduling for Real-Time Systems with Discrete Frequencies

    Dejun QIAN  Zhe ZHANG  Chen HU  Xincun JI  

     
    PAPER-Software System

      Page(s):
    822-832

    Power-aware scheduling of periodic tasks in real-time systems has been extensively studied as a means of saving energy while still meeting performance requirements. Many previous studies use the probability distribution of tasks' execution cycles to assist the scheduling. However, most of these approaches adopt heuristic algorithms to cope with realistic CPU models with discrete frequencies and cannot achieve the globally optimal solution; sometimes they even perform worse than non-stochastic DVS schemes. This paper presents an optimal DVS scheme for frame-based real-time systems under realistic power models, in which the processor provides only a limited number of speeds and no assumption is made on the power/frequency relation. A suboptimal DVS scheme is also presented that finds a solution close to the optimal one at only polynomial time expense. Experimental results show that the proposed algorithm can save up to 40% more energy than previous ones.
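
    As background for the discrete-frequency setting (a minimal sketch, not the paper's optimal or suboptimal DVS schemes), a scheduler restricted to a small set of available speeds might simply pick the lowest frequency that still completes the worst-case workload before the deadline:

        def lowest_feasible_frequency(wcec, deadline, freqs):
            # wcec: worst-case execution cycles; deadline: available time in seconds;
            # freqs: available discrete frequencies in Hz (any order).
            for f in sorted(freqs):
                if wcec / f <= deadline:
                    return f
            return max(freqs)  # no speed meets the deadline; run as fast as possible

        # Illustrative call: 40 Mcycles within a 25 ms deadline
        print(lowest_feasible_frequency(40e6, 0.025, [400e6, 800e6, 1.2e9, 1.6e9]))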

  • A GA-Based X-Filling for Reducing Launch Switching Activity toward Specific Objectives in At-Speed Scan Testing

    Yuta YAMATO  Xiaoqing WEN  Kohei MIYASE  Hiroshi FURUKAWA  Seiji KAJIHARA  

     
    PAPER-Dependable Computing

      Page(s):
    833-840

    Power-aware X-filling is a preferable approach to avoiding IR-drop-induced yield loss in at-speed scan testing. However, the ability of previous X-filling methods to reduce launch switching activity may be unsatisfactory, due to limited effectiveness (insufficient, global-only reduction) and/or poor scalability (long CPU time). This paper addresses this reduction-quality problem with a novel genetic-algorithm-based X-filling method, called GA-fill. Its goals are (1) to achieve effectiveness and scalability in a more balanced manner and (2) to concentrate the reduction of launch switching activity on critical areas that have a higher impact on IR-drop-induced yield loss. Evaluation experiments were conducted on both benchmark and industrial circuits, and the results demonstrate the usefulness of GA-fill.
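
    To make the optimization target concrete, the following simplified sketch scores one candidate assignment of the don't-care (X) bits by the Hamming distance between two launch-cycle vectors; this is an assumed stand-in for launch switching activity, not the authors' weighted metric or their GA itself.

        def launch_switching_activity(v1, v2, x_positions, fill_bits):
            # v1, v2: lists of 0/1 launch-cycle values; x_positions: indices of X
            # bits in v2; fill_bits: candidate values for those positions.
            v2 = list(v2)
            for pos, bit in zip(x_positions, fill_bits):
                v2[pos] = bit
            return sum(a != b for a, b in zip(v1, v2))

        # A GA would evolve fill_bits to minimize this count (illustrative call):
        print(launch_switching_activity([1, 0, 1, 1], [1, 1, None, None],
                                        x_positions=[2, 3], fill_bits=[1, 1]))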

  • Extraction of Informative Genes from Multiple Microarray Data Integrated by Rank-Based Approach

    Dongwan HONG  Jeehee YOON  Jongkeun LEE  Sanghyun PARK  Jongil KIM  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    841-854

    By converting the expression values of each sample into the corresponding rank values, the rank-based approach enables the direct integration of multiple microarray data produced by different laboratories and/or different techniques. In this study, we verify through statistical and experimental methods that informative genes can be extracted from multiple microarray data integrated by the rank-based approach (briefly, integrated rank-based microarray data). First, after showing that a nonparametric technique can be used effectively as a scoring metric for rank-based microarray data, we prove that the scoring results from integrated rank-based microarray data are statistically significant. Next, through experimental comparisons, we show that the informative genes from integrated rank-based microarray data are statistically more significant than those of single-microarray data. In addition, by comparing the lists of informative genes extracted from experimental data, we show that the rank-based data integration method extracts more significant genes than the z-score-based normalization technique or the rank products technique. Public cancer microarray data were used for our experiments and the marker genes list from the CGAP database was used to compare the extracted genes. The GO database and the GSEA method were also used to analyze the functionalities of the extracted genes.
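
    The preprocessing step described here, converting each sample's expression values into ranks so that data from different platforms can be pooled, can be sketched as follows (a minimal NumPy version with ordinal ranks; the paper's exact tie handling is not reproduced):

        import numpy as np

        def to_ranks(expression):
            # expression: (genes x samples) array. Each column (sample) is replaced
            # by the ranks of its values (1 = lowest), making samples from different
            # platforms directly comparable.
            order = np.argsort(expression, axis=0)
            ranks = np.empty_like(order)
            np.put_along_axis(ranks, order,
                              np.arange(1, expression.shape[0] + 1)[:, None], axis=0)
            return ranks

        data = np.array([[5.1, 0.2], [2.3, 7.7], [9.0, 3.3]])
        print(to_ranks(data))  # column-wise ranks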

  • Improved Gini-Index Algorithm to Correct Feature-Selection Bias in Text Classification

    Heum PARK  Hyuk-Chul KWON  

     
    PAPER-Pattern Recognition

      Page(s):
    855-865

    This paper presents an improved Gini-Index algorithm to correct feature-selection bias in text classification. The Gini-Index has been used as a split measure for choosing the most appropriate splitting attribute in decision trees. Recently, an improved Gini-Index algorithm for feature selection, designed for text categorization and based on Gini-Index theory, was introduced and proved to be better than other methods. However, we found that the Gini-Index still shows a feature-selection bias in text classification, specifically for unbalanced datasets with a huge number of features. This bias appears in three ways: 1) the Gini values of low-frequency features are low overall (as a purity measure), irrespective of the distribution of features among classes; 2) for high-frequency features, the Gini values are always relatively high; and 3) for specific features belonging to large classes, the Gini values are relatively lower than for those belonging to small classes. Therefore, to correct this bias and improve feature selection in text classification using the Gini-Index, we propose an improved Gini-Index (I-GI) algorithm with three reformulated Gini-Index expressions. In the present study, we used global dimensionality reduction (DR) and local DR to measure the goodness of features in feature selection. In experiments with the I-GI algorithm, we obtained unbiased feature values and eliminated many irrelevant general features while retaining many specific features. Furthermore, we improved the overall classification performance when using the local DR method. The total averages of classification performance increased by 19.4%, 15.9%, 3.3%, 2.8% and 2.9% (kNN) in Micro-F1; 14%, 9.8%, 9.2%, 3.5% and 4.3% (SVM) in Micro-F1; 20%, 16.9%, 2.8%, 3.6% and 3.1% (kNN) in Macro-F1; and 16.3%, 14%, 7.1%, 4.4% and 6.3% (SVM) in Macro-F1, compared with tf*idf, χ2, Information Gain, Odds Ratio and the existing Gini-Index methods, respectively, for each classifier.
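
    For reference, the classic purity-style Gini measure that this discussion starts from (not the paper's reformulated I-GI expressions) can be computed per feature from class-conditional document counts; a hedged sketch:

        def gini_purity(docs_with_feature_per_class):
            # Classic Gini impurity of a feature: 1 - sum_i P(class_i | feature)^2,
            # where the counts are numbers of documents per class containing the feature.
            total = sum(docs_with_feature_per_class)
            if total == 0:
                return 0.0
            return 1.0 - sum((n / total) ** 2 for n in docs_with_feature_per_class)

        # A feature occurring almost exclusively in one class is purer (lower value):
        print(gini_purity([95, 3, 2]))    # ~0.10
        print(gini_purity([30, 35, 35]))  # ~0.67 for three balanced classes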

  • Adaptive Script-Independent Text Line Extraction

    Majid ZIARATBAN  Karim FAEZ  

     
    PAPER-Pattern Recognition

      Page(s):
    866-877

    In this paper, an adaptive block-based text line extraction algorithm is proposed. Three global and two local parameters are defined to adapt the method to various handwritings in different languages. A document image is segmented into several overlapping blocks. The skew of each block is estimated, and the block is de-skewed using the estimated skew angle. Text regions are then detected in the de-skewed block, and a number of data points are extracted from the detected text regions in each block. These data points are used to estimate the paths of text lines. By thinning the background of the image including the text line paths, text line boundaries, or separators, are estimated. Furthermore, an algorithm is proposed to assign connected components that intersect the estimated separators to the appropriate extracted text lines. Extensive experiments on different standard datasets in various languages demonstrate that the proposed algorithm outperforms previous methods.

  • A Practical CFA Interpolation Using Local Map

    Yuji ITOH  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    878-885

    This paper introduces a practical color filter array (CFA) interpolation technique. Among the many technologies proposed in this field, the inter-color methods that exploit correlation between color planes generally outperform the intra-color approaches. We have found that the filtering direction, e.g., horizontal or vertical, is among the most decisive factors for the performance of the CFA interpolation. However, most of the state-of-the-art technologies are not flexible enough in determining the filtering direction. For example, filtering only in the upper direction is not usually supported. In this context, we propose an inter-color CFA interpolation using a local map called unified geometry map (UGM). In this method, the filtering direction is determined based on the similarity of the local map data. Thus, it provides more choices of the filtering directions, which enhances the probability of finding the most appropriate direction. It is confirmed through simulations that the proposal outperforms the state-of-the-art algorithms in terms of objective quality measures. In addition, the proposed scheme is as inexpensive as the conventional methods with regard to resource consumption.
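
    As a point of reference for direction-adaptive interpolation (a generic sketch of the standard horizontal-versus-vertical decision, not the paper's UGM-based rule), the missing green value at a red or blue Bayer site can be taken from the direction with the smaller gradient:

        import numpy as np

        def interpolate_green_at_rb(bayer, y, x):
            # bayer: 2-D array of raw CFA samples; (y, x): a red or blue site whose
            # four direct neighbours are green. Pick the smoother direction.
            p = np.asarray(bayer, dtype=float)
            grad_h = abs(p[y, x - 1] - p[y, x + 1])
            grad_v = abs(p[y - 1, x] - p[y + 1, x])
            if grad_h < grad_v:
                return (p[y, x - 1] + p[y, x + 1]) / 2.0
            if grad_v < grad_h:
                return (p[y - 1, x] + p[y + 1, x]) / 2.0
            return (p[y, x - 1] + p[y, x + 1] + p[y - 1, x] + p[y + 1, x]) / 4.0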

  • Geometry Coding for Triangular Mesh Model with Structuring Surrounding Vertices and Connectivity-Oriented Multiresolution Decomposition

    Shuji WATANABE  Akira KAWANAKA  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    886-894

    In this paper, we propose a novel coding scheme for the geometry of triangular mesh models. Geometry coding schemes can be classified into two groups: schemes with a perfect reconstruction property, which maintain the original connectivity, and schemes without it, in which a remeshing procedure converts the mesh into a semi-regular or regular mesh. The former schemes give good coding performance at higher coding rates, while the latter give excellent coding performance at lower coding rates. We propose a geometry coding scheme that maintains the connectivity and has a perfect reconstruction property. We apply a method that successively structures, on a 2-D plane, the surrounding vertices obtained by expanding the vertex sequences neighboring the previous layer. A non-separable component decomposition is applied, in which the 2-D structured data are decomposed into four components depending on whether their locations are even or odd along the horizontal and vertical axes of the 2-D plane, and prediction and update steps are performed on the decomposed components. In the prediction process, the predicted value is obtained from the not-yet-processed vertices neighboring the target vertex in 3-D space. Zero-tree coding is then introduced in order to remove the redundancy between coefficients at similar positions in different resolution levels. SFQ (Space-Frequency Quantization) is applied, which gives the optimal combination of coefficient pruning for the descendant coefficients of each tree element and uniform quantization for each coefficient. Experiments applying the proposed method to several polygon meshes of different resolutions show that the proposed method gives better coding performance at lower bit rates compared to conventional schemes.

  • A Novel Low-Cost High-Throughput CAVLC Decoder for H.264/AVC

    Kyu-Yeul WANG  Byung-Soo KIM  Sang-Seol LEE  Dong-Sun KIM  Duck-Jin CHUNG  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    895-904

    This paper presents a novel low-cost, high-throughput CAVLC decoder for H.264/AVC. The proposed CAVLC decoder generates the lengths of coeff_token and total_zeros symbols with simple arithmetic operations, so it can be implemented with a reduced look-up table. We also propose a multi-symbol run_before decoder with enhanced throughput, which can decode more than 2.5 symbols per cycle when run_before symbols remain to be decoded. The hardware cost is about 12 K gates when synthesized at 125 MHz.

  • Efficient Human Body Tracking by Quick Shift Belief Propagation

    Kittiya KHONGKRAPHAN  Pakorn KAEWTRAKULPONG  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    905-912

    We propose a novel and efficient approach for tracking 2D articulated human body parts. In our approach, the human body is modeled by a graphical model in which each part is represented by a node and the relationship between a pair of adjacent parts is indicated by an edge. Various approaches have been proposed to solve such problems, but efficiency remains a vital issue. We present a new Quick Shift Belief Propagation (QSBP) based approach which benefits from Quick Shift, a simple and efficient mode-seeking method, in a part-based belief propagation model. The unique aspect of this model is its ability to efficiently discover modes of the underlying marginal probability distribution while preserving accuracy. This gives QSBP a significant advantage over approaches such as Belief Propagation (BP) and Mean Shift Belief Propagation (MSBP). Moreover, we demonstrate the use of QSBP with an action-based model, which provides the additional advantages of handling self-occlusion and further reducing the search space. We present qualitative and quantitative analyses of the proposed approach with encouraging results.

  • An Association Rule Based Grid Resource Discovery Method

    Yuan LIN  Siwei LUO  Guohao LU  Zhe WANG  

     
    LETTER-Computer System

      Page(s):
    913-916

    A service-oriented grid environment contains a great number of resources described in many different ways, and traditional grid resource discovery methods cannot cope with more complex future grid systems. Therefore, this paper proposes a novel grid resource discovery method based on an association-rule hypergraph partitioning algorithm, which analyzes user behavior in historical transaction records to provide personalized service for users. This resource discovery method also offers a new way to improve resource retrieval and management in grid research.

  • Linear Detrending Subsequence Matching in Time-Series Databases

    Myeong-Seon GIL  Yang-Sae MOON  Bum-Soo KIM  

     
    LETTER-Artificial Intelligence, Data Mining

      Page(s):
    917-920

    Every time-series has its own linear trend, i.e., the overall directionality of the series, and removing this linear trend is crucial to obtaining more intuitive matching results. Supporting linear detrending in subsequence matching is a challenging problem due to the huge number of possible subsequences. In this paper we define this problem as linear detrending subsequence matching and propose an efficient index-based solution. To this end, we first present the notion of LD-windows (LD stands for linear detrending). Using the LD-windows, we then present a lower-bounding theorem for the index-based matching solution and show its correctness. We next propose the index building and subsequence matching algorithms, and finally show the superiority of the index-based solution.
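
    The basic operation in question, removing the best-fit line from a sequence before comparing it with a query, can be sketched as follows (a minimal illustration of linear detrending itself, not the LD-window index or the lower-bounding machinery):

        import numpy as np

        def linear_detrend(seq):
            # Fit a least-squares line to the sequence and subtract it,
            # removing the linear trend (the series' overall directionality).
            s = np.asarray(seq, dtype=float)
            t = np.arange(len(s))
            slope, intercept = np.polyfit(t, s, 1)
            return s - (slope * t + intercept)

        def detrended_distance(query, subseq):
            # Euclidean distance after both sequences are individually detrended.
            return float(np.linalg.norm(linear_detrend(query) - linear_detrend(subseq)))

        print(detrended_distance([1, 2, 3, 4], [10, 20, 30, 40]))  # pure trends -> ~0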

  • Improving Hessian Matrix Detector for SURF

    Yitao CHI  Zhang XIONG  Qing CHANG  Chao LI  Hao SHENG  

     
    LETTER-Pattern Recognition

      Page(s):
    921-925

    An advanced interest point detector is proposed to improve the Hessian-matrix-based detector of the SURF algorithm. Round-like shapes are utilized as the filter shapes for calculating the Hessian determinant: Dxy is acquired from approximately round areas, while the regions for computing Dyy and Dxx are designed with consideration of symmetry and a balanced number of pixels. Experimental results indicate that the proposed method has higher repeatability than the detector used in SURF, especially under rotation and viewpoint change, owing to the centrosymmetry of the proposed filter shapes. Image matching results also show that higher precision can be gained with the proposed detector.
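
    For context, the standard SURF formulation that the letter modifies approximates the Hessian determinant from box-filter responses computed on an integral image; a brief sketch of that baseline (the letter's round-like filter regions are not reproduced here):

        import numpy as np

        def integral_image(img):
            # Summed-area table with a zero first row/column for simple box sums.
            ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
            ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
            return ii

        def box_sum(ii, y0, x0, y1, x1):
            # Sum of img[y0:y1, x0:x1] in O(1) using the integral image.
            return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

        def hessian_response(dxx, dyy, dxy, w=0.9):
            # Standard SURF approximation: det(H) ~= Dxx*Dyy - (w*Dxy)^2,
            # where w compensates for the box-filter approximation.
            return dxx * dyy - (w * dxy) ** 2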

  • Non-iterative Symmetric Two-Dimensional Linear Discriminant Analysis

    Kohei INOUE  Kenji HARA  Kiichi URAHAMA  

     
    LETTER-Pattern Recognition

      Page(s):
    926-929

    Linear discriminant analysis (LDA) is one of the well-known schemes for feature extraction and dimensionality reduction of labeled data. Recently, two-dimensional LDA (2DLDA) for matrices such as images has been reformulated into symmetric 2DLDA (S2DLDA), which is solved by an iterative algorithm. In this paper, we propose a non-iterative S2DLDA and experimentally show that the proposed method achieves classification accuracy comparable to that of the conventional S2DLDA while being computationally more efficient.
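
    As background (a plain one-sided 2DLDA operating directly on image matrices, under the usual scatter definitions; not the symmetric or non-iterative variants discussed in the letter), the column-direction projection can be obtained from image-level scatter matrices:

        import numpy as np

        def one_sided_2dlda(images, labels, k):
            # images: list of (h x w) arrays; labels: class label per image;
            # k: number of projection vectors. Returns a (w x k) matrix projecting
            # the column (right-side) direction only.
            images = [np.asarray(im, dtype=float) for im in images]
            labels = np.asarray(labels)
            overall_mean = np.mean(images, axis=0)
            w = overall_mean.shape[1]
            Sb, Sw = np.zeros((w, w)), np.zeros((w, w))
            for c in np.unique(labels):
                cls = [im for im, lb in zip(images, labels) if lb == c]
                class_mean = np.mean(cls, axis=0)
                d = class_mean - overall_mean
                Sb += len(cls) * d.T @ d
                for im in cls:
                    e = im - class_mean
                    Sw += e.T @ e
            # Leading eigenvectors of Sw^{-1} Sb (assumes Sw is nonsingular).
            eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
            order = np.argsort(eigvals.real)[::-1]
            return eigvecs[:, order[:k]].real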

  • DSP-Based Parallel Implementation of Speeded-Up Robust Features

    Chao LIAO  Guijin WANG  Quan MIAO  Zhiguo WANG  Chenbo SHI  Xinggang LIN  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    930-933

    Robust local image features have become crucial components of many state-of-the-art computer vision algorithms. Due to limited hardware resources, computing local features on an embedded system is not an easy task. In this paper, we propose an efficient parallel computing framework for speeded-up robust features (SURF) oriented toward multi-DSP-based embedded systems. We optimize modules in SURF to better utilize the capability of DSP chips, and we design a compact data layout to fit the limited memory resources and increase data access bandwidth. A data-driven barrier and workload-balancing schemes are presented to synchronize the parallel working chips and reduce overall cost. Experiments show that our implementation achieves competitive time efficiency compared with related work.

  • Non-rigid Object Tracking as Salient Region Segmentation and Association

    Xiaolin ZHAO  Xin YU  Liguo SUN  Kangqiao HU  Guijin WANG  Li ZHANG  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    934-937

    Tracking a non-rigid object in a video in the presence of background clutter and partial occlusion is challenging. We propose a non-rigid object-tracking paradigm that repeatedly detects and associates salient regions. Salient-region segmentation is performed in each frame; the segmentation results provide rich spatial support for tracking and make reliable tracking of non-rigid objects without drifting possible. The precise object region is obtained simultaneously by associating the salient regions using two independent observers. Our formulation is quite general, and other salient-region segmentation algorithms can also be used. Experimental results show that this paradigm can effectively handle the tracking of objects undergoing rapid movement, rotation, and partial occlusion.

  • Automatic Adjustment of the Distance Ratio Threshold in Nearest Neighbor Distance Ratio Matching for Robust Camera Tracking

    Hanhoon PARK  Hideki MITSUMINE  Mahito FUJII  

     
    LETTER-Multimedia Pattern Processing

      Page(s):
    938-940

    In nearest neighbor distance ratio (NNDR) matching the fixed distance ratio threshold sometimes results in an insufficient number of inliers or a huge number of outliers, which is not good for robust tracking. In this letter, we propose adjusting the distance ratio threshold based on maximizing the number of inliers while maintaining the ratio of the number of outliers to that of inliers. By applying the proposed method to a model-based camera tracking system, its effectiveness is verified.