IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4

Advance publication (published online immediately after acceptance)

Volume E87-D No.1  (Publication Date:2004/01/01)

    Special Section on the 2002 IEICE Excellent Paper Award
  • FOREWORD

    Koji NAKANO  

     
    FOREWORD

      Page(s):
    1-2
  • Requirement Specification and Derivation of ECA Rules for Integrating Multiple Dissemination-Based Information Sources

    Tomoyuki KAJINO  Hiroyuki KITAGAWA  Yoshiharu ISHIKAWA  

     
    PAPER

      Page(s):
    3-14

    The recent development of network technology has enabled us to access various information sources easily, and their integration has been studied intensively by the data engineering research community. Although technological advancement has made it possible to integrate existing heterogeneous information sources, we still have to deal with information sources of a new kind--dissemination-based information sources. They actively and autonomously deliver information from server sites to users. Integration of dissemination-based information sources is a popular research topic. We have been developing an information integration system in which we employ ECA rules to enable users to define new information delivery services integrating multiple existing dissemination-based information sources. However, it is not easy for users to directly specify ECA rules and to verify them. In this paper, we propose a scheme to specify new dissemination-based information delivery services using the framework of relational algebra. We discuss some important properties of the specification, and show how we can derive ECA rules to implement the services.

  • Speech Summarization: An Approach through Word Extraction and a Method for Evaluation

    Chiori HORI  Sadaoki FURUI  

     
    PAPER

      Page(s):
    15-25

    In this paper, we propose a new method of automatic speech summarization for each utterance, where a set of words that maximizes a summarization score is extracted from automatic speech transcriptions. The summarization score indicates the appropriateness of summarized sentences. This extraction is achieved by using a dynamic programming technique according to a target summarization ratio. This ratio is the number of characters/words in the summarized sentence divided by the number of characters/words in the original sentence. The extracted set of words is then connected to build a summarized sentence. The summarization score consists of a word significance measure, linguistic likelihood, and a confidence measure. This paper also proposes a new method of measuring summarization accuracy based on a word network expressing manual summarization results. The summarization accuracy of each automatic summarization is calculated by comparing it with the most similar word string in the network. Japanese broadcast-news speech, transcribed using a large-vocabulary continuous-speech recognition (LVCSR) system, is summarized and evaluated using our proposed method with 20, 40, 60, 70 and 80% summarization ratios. Experimental results reveal that the proposed method can effectively extract relatively important information by removing redundant or irrelevant information.
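The dynamic-programming extraction described above can be sketched as follows. This is a minimal, hypothetical illustration: `sig` (word significance) and `bigram` (linguistic likelihood between adjacent selected words) stand in for the paper's trained scores, and the confidence measure is omitted.

```python
# Sketch of order-preserving word extraction: pick exactly m words, in their
# original order, maximizing per-word significance plus bigram bonuses.
# Assumes 1 <= m <= len(words). Scores are illustrative, not the paper's.
def summarize(words, sig, bigram, m):
    n = len(words)
    NEG = float("-inf")
    # best[k][j]: best score of a k-word summary ending with word j
    best = [[NEG] * n for _ in range(m + 1)]
    back = [[None] * n for _ in range(m + 1)]
    for j in range(n):
        best[1][j] = sig[j]
    for k in range(2, m + 1):
        for j in range(n):
            for i in range(j):
                if best[k - 1][i] == NEG:
                    continue
                s = best[k - 1][i] + sig[j] + bigram.get((words[i], words[j]), 0.0)
                if s > best[k][j]:
                    best[k][j] = s
                    back[k][j] = i
    # trace back from the best final word
    j = max(range(n), key=lambda j: best[m][j])
    out, k = [], m
    while j is not None:
        out.append(words[j])
        j = back[k][j]
        k -= 1
    return out[::-1]
```

The target summarization ratio fixes `m`, e.g. `m = round(ratio * len(words))`.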

  • Special Section on Machine Vision Applications
  • FOREWORD

    Hiroyasu KOSHIMIZU  

     
    FOREWORD

      Page(s):
    26-26
  • Reconstruction of Outdoor Sculptures from Silhouettes under Approximate Circular Motion of an Uncalibrated Hand-Held Camera

    Kwan-Yee Kenneth WONG  Roberto CIPOLLA  

     
    PAPER-Reconstruction

      Page(s):
    27-33

    This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it using a hand-held camera. The technique introduced here uses only the silhouettes of the sculpture for both motion estimation and model reconstruction, and no corner detection or matching is necessary. This is very important as most sculptures are composed of smooth textureless surfaces, and hence their silhouettes are very often the only information available from their images. Moreover, unlike previous work, the proposed technique does not require the camera motion to be perfectly circular (e.g., a turntable sequence). It employs an image rectification step before the motion estimation step to obtain a rough estimate of the camera motion, which is only approximately circular. A refinement process is then applied to obtain the true general motion of the camera. This allows the technique to handle large outdoor sculptures which cannot be rotated on a turntable, making it much more practical and flexible.

  • Robust Projection onto Normalized Eigenspace Using Relative Residual Analysis and Optimal Partial Projection

    Fumihiko SAKAUE  Takeshi SHAKUNAGA  

     
    PAPER-Reconstruction

      Page(s):
    34-41

    The present paper reports a robust projection onto eigenspace that is based on iterative projection. The fundamental method, proposed by Shakunaga and Sakaue, involves iterative analysis of the relative residual and projection. The present paper refines the projection method by solving linear equations while taking the noise ratio into account. The refinement improves both the efficiency and robustness of the projection. Experimental results indicate that the proposed method works well for various kinds of noise, including shadows, reflections and occlusions. The proposed method can be applied to a wide variety of computer vision problems, including object/face recognition and image-based rendering.

  • Calibration of Real Scenes for the Reconstruction of Dynamic Light Fields

    Ingo SCHOLZ  Joachim DENZLER  Heinrich NIEMANN  

     
    PAPER-Background Estimation

      Page(s):
    42-49

    The classic light field and lumigraph are two well-known approaches to image-based rendering, and subsequently many new rendering techniques and representations have been proposed based on them. Nevertheless the main limitation remains that in almost all of them only static scenes are considered. In this contribution we describe a method for calibrating a scene which includes moving or deforming objects from multiple image sequences taken with a hand-held camera. For each image sequence the scene is assumed to be static, which allows the reconstruction of a conventional static light field. The dynamic light field is thus composed of multiple static light fields, each of which describes the state of the scene at a certain point in time. This allows not only the modeling of rigid moving objects, but any kind of motion including deformations. In order to facilitate the automatic calibration, some assumptions are made for the scene and input data, such as that the image sequences for each respective time step share one common camera pose and that only the minor part of the scene is actually in motion.

  • Adaptive Background Estimation: Computing a Pixel-Wise Learning Rate from Local Confidence and Global Correlation Values

    Mickael PIC  Luc BERTHOUZE  Takio KURITA  

     
    PAPER-Background Estimation

      Page(s):
    50-57

    Adaptive background techniques are useful for a wide spectrum of applications, ranging from security surveillance and traffic monitoring to medical and space imaging. With a properly estimated background, moving or new objects can be easily detected and tracked. Existing techniques are not suitable for real-world implementation, either because they are slow or because they do not perform well in the presence of frequent outliers or camera motion. We address the issue by computing a learning rate for each pixel, as a function of a local confidence value that estimates whether a pixel is an outlier or not, and a global correlation value that detects camera motion. After discussing the role of each parameter, we report experimental results, showing that our technique is fast yet effective, even in a real-world situation. Furthermore, we show that the same method applies equally well to a 3-camera stereoscopic system for depth perception.
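A minimal sketch of the pixel-wise update, assuming a running-average background model; the confidence and correlation functions below are illustrative stand-ins for the paper's definitions, not its exact formulas.

```python
import numpy as np

# Per-pixel learning rate for background maintenance: pixels that match the
# background (high local confidence) adapt quickly; outliers barely adapt;
# a low global frame/background correlation (camera motion) damps everything.
def update_background(bg, frame, alpha_max=0.5, sigma=20.0):
    diff = np.abs(frame - bg)
    confidence = np.exp(-(diff / sigma) ** 2)   # near 1 where pixel matches bg
    corr = np.corrcoef(frame.ravel(), bg.ravel())[0, 1]
    alpha = alpha_max * confidence * max(corr, 0.0)
    return (1.0 - alpha) * bg + alpha * frame
```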

  • Precise and Reliable Image Shift Detection by a New Phase-Difference Spectrum Analysis (PSA) Method

    Isamu KOUZUKI  Tomonori KANEKO  Minoru ITO  

     
    PAPER-Methodologies

      Page(s):
    58-65

    An analysis of the phase difference spectrum between two images allows precise image shift detection. Image shifts are directly evaluated from the phase difference spectrum without Fourier inversion. In the calculation, a weight function containing the frequency and the cross spectrum is used, and a phase-unwrapping procedure is carried out. In an experiment using synthetic and real images of typical image patterns, accuracy as high as 0.01-0.02 pixel was achieved stably and reliably for most of the image patterns.
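The core idea, reading the shift off the slope of the cross-spectrum phase without inverting the Fourier transform, can be illustrated in 1-D; the weight function and unwrapping below are simplified stand-ins for the paper's method.

```python
import numpy as np

# For a shift d, the cross spectrum X2 * conj(X1) has phase -2*pi*f*d,
# so d can be read off as an energy-weighted slope over the low,
# unwrapped frequencies (no inverse FFT needed).
def estimate_shift(x1, x2, n_low=6):
    X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
    cross = X2 * np.conj(X1)
    f = np.fft.fftfreq(len(x1))
    k = slice(1, n_low)                      # low positive frequencies only
    phase = np.unwrap(np.angle(cross[k]))
    w = np.abs(cross[k])                     # weight by spectral energy
    return -np.sum(w * phase / (2 * np.pi * f[k])) / np.sum(w)
```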

  • Region Extraction with Chromakey Using Stripe Backgrounds

    Atsushi YAMASHITA  Toru KANEKO  Shinya MATSUSHITA  Kenjiro T. MIURA  

     
    PAPER-Methodologies

      Page(s):
    66-73

    In this paper, we propose a new region extraction method with a chromakey technique using a two-tone striped background. Chromakey compositing is a technique for separating actors or actresses from a background and then compositing them onto a different background. The conventional chromakey technique usually uses a unicolored blue or green background, and has the problem that clothes are regarded as background if their colors are the same as the background's color. Therefore, we use a two-tone striped background and utilize the adjacency condition between the two-tone striped areas to extract foreground regions whose colors are the same as the background. The procedure of the proposed method consists of four steps: 1) background color extraction, 2) striped region extraction, 3) foreground extraction, and 4) image composition. For background color extraction, a color space approach is used. For striped region extraction, it is difficult to extract the striped region by a color space approach, because the color of this region may be a composite of the two background colors and differ from both; the striped region is therefore extracted from adjacency conditions between the two background colors. For foreground extraction, the boundary between the foreground and the background is detected to recheck foreground regions whose color is the same as the background, and background regions whose color is the same as the foreground; to detect such regions, adjacency conditions with the striped region are utilized. For image composition, the color of the foreground's boundary is smoothed against the new background to create natural images. The validity of the proposed method is shown through experiments with foreground objects whose color is the same as the background color.

  • Compression Performances of Computer Vision Based Coding

    Franck GALPIN  Luce MORIN  Koichiro DEGUCHI  

     
    PAPER-Methodologies

      Page(s):
    74-79

    This paper presents new results in the field of very low bitrate coding and compression using 3D information. Contrary to prior art in model-based coding, where 3D models have to be known, the 3D models here are automatically computed from the original video sequence. The camera parameters and the scene content are assumed to be unknown, and the video sequence is processed on the fly. A stream of 3D models is then extracted and compressed, using adapted compression techniques. We finally show the results of the proposed compression scheme, and the efficiency of this approach.

  • Robust and Fast Stereovision Based Obstacles Detection for Driving Safety Assistance

    Raphael LABAYRADE  Didier AUBERT  

     
    PAPER-ITS

      Page(s):
    80-88

    This paper deals with a first evaluation of the efficiency and robustness of the real-time "v-disparity" stereovision algorithm for generic road obstacle detection, covering various types of obstacles (vehicle, pedestrian, motorbike, cyclist, boxes) and adverse conditions (day, night, rain, glowing effects, noise and false matches in the disparity map). The theoretically good properties of the "v-disparity" algorithm--accuracy, robustness, computational speed--are experimentally confirmed. The good results obtained allow us to use this stereo algorithm as the onboard perception process for Driving Safety Assistance: driver warning and longitudinal control of a low-speed automated vehicle (using a second-order sliding mode control) in difficult and original situations, at frame rate and using no special hardware. Results of experiments--vehicle following at low speed, Stop'n'Go, Stop on Obstacle (pedestrian, fallen motorbike, load-dropping obstacle)--are presented.
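The "v-disparity" image itself is straightforward to construct; a minimal sketch, assuming an integer-valued disparity map:

```python
import numpy as np

# Build a "v-disparity" image: for each image row v, histogram the
# disparities occurring in that row. The road surface projects to a
# slanted line in this image, while obstacles project to near-vertical
# segments, which makes both easy to detect with line extraction.
def v_disparity(disp, d_max):
    h = disp.shape[0]
    out = np.zeros((h, d_max + 1), dtype=int)
    for v in range(h):
        row = disp[v]
        vals, counts = np.unique(row[(row >= 0) & (row <= d_max)],
                                 return_counts=True)
        out[v, vals.astype(int)] = counts
    return out
```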

  • Detecting Method Applicable to Individual Features for Drivers' Drowsiness

    Takahiro HAMADA  Kazumasa ADACHI  Tomoaki NAKANO  Shin YAMAMOTO  

     
    PAPER-ITS

      Page(s):
    89-96

    Driver-assist and warning systems must take the driver's state of consciousness into account, and drowsiness is one of the important factors in estimating it. A method to detect the driver's initial stage of drowsiness was developed by means of eyelid opening, adapted to the individual characteristics of each driver, using motion picture processing in an actual driving environment. The result was that an increase in long eyelid closure time was the key factor in estimating the initial stage of drivers' drowsiness while driving, and the state of drowsiness could be estimated by checking the frequency of long eyelid closures per unit period.
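The closure-frequency measure can be sketched as follows; the 0.5 s threshold is our illustrative choice, not the paper's calibrated value.

```python
# Count "long" eyelid closures, i.e. runs of closed-eye frames lasting
# longer than a threshold, normalized per unit of driving time.
def long_closure_rate(closed, fps, min_dur=0.5):
    min_frames = int(min_dur * fps)
    count, run = 0, 0
    for c in closed:
        run = run + 1 if c else 0
        if run == min_frames:       # count each long closure exactly once
            count += 1
    seconds = len(closed) / fps
    return count / seconds          # long closures per second
```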

  • Robust Vehicle Detection under Poor Environmental Conditions for Rear and Side Surveillance

    Osafumi NAKAYAMA  Morito SHIOHARA  Shigeru SASAKI  Tomonobu TAKASHIMA  Daisuke UENO  

     
    PAPER-ITS

      Page(s):
    97-104

    During the period from dusk to dark, when it is difficult for drivers to see other vehicles, or when visibility is poor due to rain, snow, etc., the contrast between nearby vehicles and the background is lower. Under such conditions, conventional surveillance systems have difficulty detecting the outline of nearby vehicles and may thus fail to recognize them. To solve this problem, we have developed a rear and side surveillance system for vehicles that uses image processing. The system uses two stereo cameras to monitor the areas to the rear and sides of a vehicle, i.e., a driver's blind spots, and to detect the positions and relative speeds of other vehicles. The proposed system can estimate the shape of a vehicle from a partial outline of it, thus identifying the vehicle by filling in the missing parts of the vehicle outline. Testing of the system under various environmental conditions showed that the rate of errors (false and missed detection) in detecting approaching vehicles was reduced to less than 10%, even under conditions that are problematic for conventional processing.

  • Precise Pupil Contour Detection Based on Minimizing the Energy of Pattern and Edge

    Mayumi YUASA  Osamu YAMAGUCHI  Kazuhiro FUKUI  

     
    PAPER-Face

      Page(s):
    105-112

    We propose a new method to precisely detect pupil contours in face images. Pupil contour detection is necessary for various applications using face images. It is, however, difficult to detect pupils precisely because of their weak edges or lack of edges. The proposed method is based on minimizing an energy that consists of a pattern term and an edge term. An efficient search method is also introduced to overcome the underlying efficiency problem of energy minimization methods; "guide patterns" are introduced for this purpose. Moreover, to detect pupils more precisely, we use an ellipse model for the pupil shape in this paper. Experimental results show the effectiveness of the proposed method.

  • Real-Time Human Motion Analysis by Image Skeletonization

    Hironobu FUJIYOSHI  Alan J. LIPTON  Takeo KANADE  

     
    PAPER-Face

      Page(s):
    113-120

    In this paper, a process is described for analysing the motion of a human target in a video stream. Moving targets are detected and their boundaries extracted. From these, a "star" skeleton is produced. Two motion cues are determined from this skeletonization: body posture, and cyclic motion of skeleton segments. These cues are used to determine human activities such as walking or running, and even potentially, the target's gait. Unlike other methods, this does not require an a priori human model, or a large number of "pixels on target". Furthermore, it is computationally inexpensive, and thus ideal for real-world video applications such as outdoor video surveillance.
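A minimal sketch of the "star" skeleton construction (without the boundary smoothing the full method would apply): distances from the boundary to the target centroid form a 1-D periodic signal whose local maxima mark the extremities the skeleton segments point to.

```python
import numpy as np

# Find "star" skeleton extremities: local maxima of the distance from the
# boundary to the centroid, taken circularly along the boundary.
def star_extremities(boundary):
    centroid = boundary.mean(axis=0)
    d = np.linalg.norm(boundary - centroid, axis=1)
    prev, nxt = np.roll(d, 1), np.roll(d, -1)
    return np.where((d > prev) & (d > nxt))[0]   # indices of extremities

# Example boundary: a 4-pointed star r(theta) = 1 + 0.5*cos(4*theta),
# which should yield exactly four extremities.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 1 + 0.5 * np.cos(4 * theta)
pts = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
```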

  • Sequential Fusion of Output Coding Methods and Its Application to Face Recognition

    Jaepil KO  Hyeran BYUN  

     
    PAPER-Face

      Page(s):
    121-128

    In face recognition, simple classifiers are frequently used. For a robust system, it is common to construct a multi-class classifier by combining the outputs of several binary classifiers; this is called the output coding method. The two basic output coding methods for this purpose are known as OnePerClass (OPC) and PairWise Coupling (PWC). The performance of output coding methods depends on the accuracy of the base dichotomizers, and the Support Vector Machine (SVM) is suitable for this purpose. In this paper, we review output coding methods and introduce a new sequential fusion method using SVM as a base classifier, built on OPC and PWC according to their properties. In the experiments, we compare our proposed method with others. The experimental results show that our proposed method can improve performance significantly on a real dataset.
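The two base schemes can be sketched as simple decision rules over the dichotomizer outputs; dummy values stand in for the trained SVMs, and the paper's sequential fusion of the two is not shown.

```python
# OnePerClass (OPC): one "class i vs. rest" machine per class; predict the
# class whose machine gives the largest real-valued output.
def one_per_class(scores):
    return max(range(len(scores)), key=lambda i: scores[i])

# PairWise Coupling (PWC): one "class i vs. class j" machine per pair;
# predict the class winning the most pairwise votes.
def pairwise_coupling(n_classes, pair_votes):
    # pair_votes[(i, j)] = winner (i or j) of the i-vs-j dichotomizer
    wins = [0] * n_classes
    for winner in pair_votes.values():
        wins[winner] += 1
    return max(range(n_classes), key=lambda i: wins[i])
```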

  • Facial Parts Recognition by Hierarchical Tracking from Motion Image and Its Application

    Takuma FUNAHASHI  Tsuyoshi YAMAGUCHI  Masafumi TOMINAGA  Hiroyasu KOSHIMIZU  

     
    PAPER-Face

      Page(s):
    129-135

    Faces of a person performing freely in front of the camera can be captured at a resolution sufficient for facial parts recognition by the proposed camera system, which is enhanced with a special PTZ camera. The head region, facial parts regions such as the eyes and mouth, and the borders of facial parts are extracted hierarchically, guided by the irises and nostrils preliminarily extracted from the PTZ camera images. To show the effectiveness of this system, we demonstrate the possibility of generating the borders of facial parts for facial caricaturing, and introduce eye-contacting facial images that enable bilateral eye contact in a TV conference environment.

  • A 51.2 GOPS Programmable Video Recognition Processor for Vision-Based Intelligent Cruise Control Applications

    Shorin KYO  Takuya KOGA  Shin'ichiro OKAZAKI  Ichiro KURODA  

     
    PAPER-Processor

      Page(s):
    136-145

    This paper describes a 51.2 GOPS video recognition processor that provides a cost-effective device solution for vision-based intelligent cruise control (ICC) applications. By integrating 128 4-way VLIW (Very Long Instruction Word) processing elements and operating at 100 MHz, the processor provides enough computation power for a weather-robust lane mark and vehicle detection function, written in a high-level programming language, to run at video rate, while at the same time satisfying the power efficiency requirements of an in-vehicle LSI. Based on four basic parallel methods and a software environment including an optimizing compiler for an extended C language and video-based GUI tools, efficient development of real-time video recognition applications that effectively utilize the 128 processing elements is facilitated. Benchmark results show that this processor provides four times the performance of a 2.4 GHz general-purpose microprocessor.

  • Human Spine Posture Estimation from 2D Frontal and Lateral Views Using 3D Physically Accurate Spine Model

    Daisuke FURUKAWA  Kensaku MORI  Takayuki KITASAKA  Yasuhito SUENAGA  Kenji MASE  Tomoichi TAKAHASHI  

     
    PAPER-ME and Human Body

      Page(s):
    146-154

    This paper proposes the design of a physically accurate spine model and its application to estimating three-dimensional spine posture from the frontal and lateral views of a human body taken by two conventional video cameras. The accurate spine model proposed here is composed of rigid body parts approximating the vertebral bodies and elastic body parts representing the intervertebral disks. In the estimation process, we obtain neck and waist positions by fitting the Connected Vertebra Spheres Model to frontal and lateral silhouette images. Then the virtual forces acting on the top and bottom vertebrae of the accurate spine model are computed based on the obtained neck and waist positions. The accurate model is deformed by the virtual forces, the gravitational force, and the forces of repulsion. The model thus deformed is regarded as the current posture. In preliminary experiments based on a real MR image data set of a single subject, we confirmed that the proposed deformation method estimates the positions of the vertebrae within positional shifts of 3.2-6.8 mm. The 3D posture of the spine could be estimated reasonably by applying the estimation method to actual human images taken by video cameras.

  • Accurate Retinal Blood Vessel Segmentation by Using Multi-Resolution Matched Filtering and Directional Region Growing

    Mitsutoshi HIMAGA  David USHER  James F. BOYCE  

     
    PAPER-ME and Human Body

      Page(s):
    155-163

    A new method to extract retinal blood vessels from a colour fundus image is described. Digital colour fundus images are contrast-enhanced in order to obtain sharp edges. The green bands are selected and transformed to correlation coefficient images by using two sets of Gaussian kernel patches of distinct scales of resolution. Blood vessels are then extracted by means of a new algorithm, directional recursive region growing segmentation (D-RRGS). The segmentation results have been compared with clinically-generated ground truth and evaluated in terms of sensitivity and specificity. The results are encouraging and will be used for further applications such as blood vessel diameter measurement.

  • Regular Section
  • Determining Consistent Global Checkpoints of a Distributed Computation

    Dakshnamoorthy MANIVANNAN  

     
    PAPER-Computer Systems

      Page(s):
    164-174

    Determining consistent global checkpoints of a distributed computation has applications in areas such as rollback recovery, distributed debugging, output commit and others. Netzer and Xu introduced the notion of zigzag paths and presented necessary and sufficient conditions for a set of checkpoints to be part of a consistent global checkpoint. This result also reveals that determining the existence of zigzag paths between checkpoints is crucial for determining consistent global checkpoints. Recent research also reveals that determining zigzag paths on-line is not possible. In this paper, we present an off-line method for determining the existence of zigzag paths between checkpoints.
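The baseline consistency test (no orphan messages) can be sketched as follows; the zigzag-path analysis generalizes this to deciding whether a *partial* set of checkpoints can be extended to a consistent global one, which this sketch does not capture.

```python
# A global checkpoint is inconsistent if some message is recorded as
# received before the receiver's checkpoint but sent after the sender's
# checkpoint (an "orphan" message).
def is_consistent(ckpt, messages):
    # ckpt[p] = index of the last event of process p covered by its checkpoint
    # messages = [(sender, send_event, receiver, recv_event), ...]
    for s, send_ev, r, recv_ev in messages:
        if send_ev > ckpt[s] and recv_ev <= ckpt[r]:
            return False
    return True
```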

  • Retrieving Correlated Software Products for Reuse

    Shih-Chien CHOU  

     
    PAPER-Software Systems

      Page(s):
    175-182

    Software reuse has been recognized as important. According to our research, when a software product is reused, products correlated to the reused one may also be reusable. This paper proposes a model for software products and a technique to retrieve correlated products. The paper also presents equations to evaluate correlation values, which serve as guidance for selecting reusable correlated products. Since correlated products can be identified by tracing product relationships, the proposed model manages both products and relationships.

  • Decomposition Approach of Banker's Algorithm: Design and Concurrency Analysis

    Hoon OH  

     
    PAPER-Software Systems

      Page(s):
    183-195

    Concurrency in computing systems using a deadlock avoidance strategy varies greatly according to how the resource usage plan of each process is used in testing for the possibility of deadlock. If a detailed resource usage plan of each process is to be taken into account, the deadlock avoidance problem is known to be NP-complete. A decomposition model to manage resources is proposed, in which a process is logically partitioned into a number of segments, each of which uses at least one resource. It is shown that one of our deadlock avoidance algorithms achieves the best concurrency among the polynomial-bounded algorithms. We also present a heuristic algorithm that achieves concurrency close to the optimal one. Finally, we analyze the concurrency of various algorithms.
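For reference, the classic Banker's safety test that the decomposition refines; this is the textbook algorithm operating on whole processes, not the paper's segment-based variant.

```python
# Banker's algorithm safety check: a state is safe if there is some order
# in which every process can acquire its remaining need and finish,
# releasing its allocation back to the pool.
def is_safe(available, max_need, alloc):
    n = len(alloc)
    need = [[m - a for m, a in zip(max_need[i], alloc[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # process i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                progress = True
    return all(finished)
```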

  • A New Approach for Distributed Main Memory Database Systems: A Causal Commit Protocol

    Inseon LEE  Heon Y. YEOM  Taesoon PARK  

     
    PAPER-Databases

      Page(s):
    196-204

    Distributed database systems require a commit process to preserve the ACID properties of transactions executed on a number of system sites. With the appearance of main memory database systems, database processing time has been reduced by an order of magnitude, since database access does not incur any disk access at all. However, when it comes to distributed main memory database systems, the distributed commit process is still very slow, since disk logging at several sites has to precede the transaction commit. In this paper, we re-evaluate various distributed commit protocols and come up with a causal commit protocol suitable for distributed main memory database systems. To evaluate the performance of the proposed commit protocol, an extensive simulation study has been performed. The simulation results confirm that the new protocol greatly reduces the time to commit distributed transactions without any consistency problem.

  • Fundamental Frequency Estimation for Noisy Speech Using Entropy-Weighted Periodic and Harmonic Features

    Yuichi ISHIMOTO  Kentaro ISHIZUKA  Kiyoaki AIKAWA  Masato AKAGI  

     
    PAPER-Speech and Hearing

      Page(s):
    205-214

    This paper proposes a robust method for estimating the fundamental frequency (F0) in real environments. It is assumed that the spectral structure of real environmental noise varies momentarily and that its energy is not distributed evenly in the time-frequency domain. Therefore, segmenting a spectrogram of speech mixed with environmental noise into narrow time-frequency regions will produce low-noise regions in which the signal-to-noise ratio is high. The proposed method estimates F0 from the periodic and harmonic features that are clearly observed in the low-noise regions. It first uses two kinds of spectrogram, one with high frequency resolution and another with high temporal resolution, to represent the periodic and harmonic features corresponding to F0. Next, the method segments these two kinds of feature plane into narrow time-frequency regions, and calculates the probability function of F0 for each region. It then utilizes the entropy of the probability function as a weight to emphasize the probability functions in low-noise regions and to enhance noise robustness. Finally, the probability functions are grouped at each time, and F0 is obtained as the frequency with the highest probability. The experimental results showed that, in comparison with other approaches such as the cepstrum method and the autocorrelation method, the developed method can more robustly estimate F0 from speech in the presence of band-limited noise and car noise.
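The entropy-weighting step can be sketched in isolation with toy probability functions; the paper's per-region probability model and grouping are more involved than this stand-in.

```python
import numpy as np

# Entropy-weighted combination of per-region F0 probability functions:
# a flat function (noisy region) has high entropy and gets a small weight,
# so the combined estimate is dominated by the low-noise regions.
def combine_f0_probabilities(prob_funcs):
    p = np.asarray(prob_funcs, dtype=float)
    p = p / p.sum(axis=1, keepdims=True)
    h = -(p * np.log(p + 1e-12)).sum(axis=1)   # entropy per region
    w = np.log(p.shape[1]) - h                 # low entropy -> large weight
    combined = (w[:, None] * p).sum(axis=0)
    return int(np.argmax(combined))            # index of the most probable F0
```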

  • Quadratic Surface Reconstruction from Multiple Views Using SQP

    Rubin GONG  Gang XU  

     
    PAPER-Image Processing, Image Pattern Recognition

      Page(s):
    215-223

    We propose using SQP (Sequential Quadratic Programming) to directly recover 3D quadratic surface parameters from multiple views. A surface equation is used as a constraint. In addition to the sum of squared reprojection errors defined in traditional bundle adjustment, a Lagrangian term is added to force recovered points to satisfy the constraint. The minimization is realized by SQP. Our algorithm has three advantages. First, given corresponding features in multiple views, the SQP implementation can directly recover the quadratic surface parameters optimally, instead of a collection of isolated 3D point coordinates. Second, the specified constraints are strictly satisfied, and the camera parameters and 3D coordinates of points can be determined more accurately than by unconstrained methods. Third, the recovered quadratic surface model can be represented by a much smaller number of parameters instead of point clouds and triangular patches. Experiments with both synthetic and real images show the power of this approach.

  • A Method for Watermarking to Bezier Polynomial Surface Models

    Hiroshi NAGAHASHI  Rikima MITSUHASHI  Ken'ichi MOROOKA  

     
    PAPER-Computer Graphics

      Page(s):
    224-232

    This paper presents a new method for embedding digital watermarks into Bezier polynomial patches. An object surface is supposed to be represented by multiple piecewise Bezier polynomial patches. A Bezier patch passes through its four corner control points, which are called data points, and does not pass through the other control points. To embed a watermark, a Bezier patch is divided into two patches. Since each subdivided patch shares two data points of the original patch, the subdivision apparently generates two additional data points on the boundaries of the original patch. We can generate the new data points at any position on the boundaries by changing the subdivision parameters. The additional data points cannot be removed without knowing the parameters for subdividing and deforming the patch; hence the patch subdivision enables us to embed a watermark into the surface.
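The underlying subdivision step is de Casteljau's algorithm; a 1-D cubic-curve sketch is shown below for brevity (the paper applies the same idea patch-wise, and the watermark lives in the choice of the subdivision parameter).

```python
# De Casteljau subdivision of a cubic Bezier curve at parameter t: the two
# halves together trace exactly the same curve, and the split point becomes
# a new data point on the boundary.
def lerp(a, b, t):
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def subdivide(p0, p1, p2, p3, t):
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    f = lerp(d, e, t)               # new data point: the curve point at t
    return (p0, a, d, f), (f, e, c, p3)
```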

  • 3D Human Whole Body Construction by Contour Triangulation

    Bon-Ki KOO  Young-Kyu CHOI  Sung-Il CHIEN  

     
    PAPER-Computer Graphics

      Page(s):
    233-243

    In the past decade, significant effort has been made toward increasing the accuracy and robustness of three-dimensional scanning methods. In this paper, we present a new prototype vision system named 3D Model Studio, which has been built to reconstruct a complete 3D model in as little as a few minutes. New schemes for probe calibration and 3D data merging (axis consolidation) are employed. We also propose a new semi-automatic contour registration method to generate an accurate contour model from 3D data points, along with contour-triangulation-based surface reconstruction. Experimental results show that our system works well for reconstructing a complete 3D surface model of a human body.

  • Software Implementation of a Secure Socket Layer (SSL) Accelerator Based on Kernel Thread

    Euiseok NAHM  Byungjo MIN  Jinbae PARK  Hagbae KIM  

     
    LETTER-Software Engineering

      Page(s):
    244-245

    We implement an efficient Secure Socket Layer (SSL) accelerator, which is embedded at the kernel level and utilizes as many kernel threads as there are CPUs. Comparisons of conventional Apache with and without our SSL accelerator show that the accelerator significantly improves web-server performance, by up to 200%.

  • A Spatial Weighted Color Histogram for Image Retrieval

    Jian CHENG  Yen-Wei CHEN  Hanqing LU  Xiang-Yan ZENG  

     
    LETTER-Pattern Recognition

      Page(s):
    246-249

    Color histograms are considered effective for color image indexing and retrieval. However, a histogram represents only global statistical color information. We propose a new method, the Spatial Weighted Color Histogram (SWCH), for image retrieval. The color space of an image is partitioned into several color subsets according to hue, saturation and value in the HSV color space. Then, the spatial central moment of each subset is calculated and used as the weight of the corresponding subset. Experiments show that our method is more effective for color image indexing and is insensitive to intensity variations.
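One plausible reading of the weighting step, sketched below: pixels are binned by hue, and each bin is weighted by the spatial spread (second central moment) of its pixel positions, so spatially compact colors count for more. The exact moment and weighting formula here are our assumptions, not necessarily the paper's.

```python
def spatial_weighted_histogram(hues, positions, n_bins=8):
    """Bin pixels by hue (values in [0, 1)), then weight each bin by
    the spatial second central moment of its pixels' (x, y) positions:
    compact regions get a higher weight (an illustrative choice)."""
    bins = {}
    for h, (x, y) in zip(hues, positions):
        b = min(int(h * n_bins), n_bins - 1)
        bins.setdefault(b, []).append((x, y))
    hist = [0.0] * n_bins
    total = len(hues)
    for b, pts in bins.items():
        cx = sum(x for x, _ in pts) / len(pts)
        cy = sum(y for _, y in pts) / len(pts)
        moment = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in pts) / len(pts)
        hist[b] = (1.0 / (1.0 + moment)) * len(pts) / total
    return hist
```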

  • Depth from Defocus Using Wavelet Transform

    Muhammad ASIF  Tae-Sun CHOI  

     
    LETTER-Image Processing, Image Pattern Recognition

      Page(s):
    250-253

    We propose a new method for Depth from Defocus (DFD) using the wavelet transform. Most existing DFD methods use inverse filtering in a transform domain to determine the measure of defocus. These methods suffer from inaccuracies in the frequency-domain representation due to windowing and border effects. The proposed method uses wavelets, which allow both local analysis and windowing with variable-sized regions for images with varying textural properties. Experimental results show that the proposed method produces more accurate depth maps than previous methods.
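As a minimal illustration of why wavelet detail energy can serve as a local defocus measure (our toy stand-in, not the paper's variable-window scheme): a one-level Haar transform of a sharp signal retains far more detail energy than that of a defocused (smoothed) one.

```python
def haar_detail_energy(signal):
    """Energy of the one-level Haar detail (high-pass) coefficients."""
    details = [(signal[i] - signal[i + 1]) / 2
               for i in range(0, len(signal) - 1, 2)]
    return sum(d * d for d in details)

def box_blur(signal):
    """Crude defocus model: 3-tap moving average with edge replication."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(signal))]
```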

  • List Based Zerotree Wavelet Image Coding with Two Symbols

    Tanzeem MUZAFFAR  Tae-Sun CHOI  

     
    LETTER-Image Processing, Image Pattern Recognition

      Page(s):
    254-257

    This paper presents a novel wavelet compression technique that increases image compression. Based on the zerotree entropy coding method, the technique initially uses only two symbols (significant and zerotree) to compress the image data at each level. Additionally, a sign bit is used for newly significant coefficients to indicate whether they are positive or negative. In contrast to the isolated-zero symbols used in conventional zerotree algorithms, the proposed algorithm changes isolated zeros to significant coefficients and saves their locations; they are then treated just like other significant coefficients. This decreases the number of symbols and hence the number of bits needed to represent them. Finally, the algorithm records the isolated-zero coordinates, which are used to restore the original values during reconstruction. A noticeably high compression ratio is achieved for most images without degrading image quality.
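A schematic significance pass over a coefficient tree, using only the two symbols plus a sign bit: an "isolated zero" (an insignificant node with a significant descendant) is promoted to significant and its location recorded, as the abstract describes. The tree representation and promotion details below are illustrative assumptions.

```python
def is_zerotree(coeffs, children, node, T):
    """True if node and all its descendants are insignificant w.r.t. T."""
    if abs(coeffs[node]) >= T:
        return False
    return all(is_zerotree(coeffs, children, c, T)
               for c in children.get(node, []))

def significance_pass(coeffs, children, roots, T):
    """Emit ('S', sign) / ('Z', None) symbols in breadth-first order.

    Isolated zeros are promoted to 'S' and their locations recorded so
    the decoder can restore the original values.
    """
    symbols, isolated = [], []
    queue = list(roots)
    while queue:
        n = queue.pop(0)
        if abs(coeffs[n]) >= T:
            symbols.append(('S', '+' if coeffs[n] >= 0 else '-'))
            queue.extend(children.get(n, []))
        elif is_zerotree(coeffs, children, n, T):
            symbols.append(('Z', None))
        else:                      # isolated zero: promote and record
            isolated.append(n)
            symbols.append(('S', '+'))
            queue.extend(children.get(n, []))
    return symbols, isolated
```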

  • Boundedness of Input Space and Effective Dimension of Feature Space in Kernel Methods

    Kazushi IKEDA  

     
    LETTER-Biocybernetics, Neurocomputing

      Page(s):
    258-260

    Kernel methods such as support vector machines map input vectors into a high-dimensional feature space and separate them linearly there. The dimensionality of the feature space depends on the kernel function and is sometimes infinite; the Gauss kernel is one such example. We discuss the effective dimension of the feature space induced by the Gauss kernel and, by considering the Taylor expansion of the kernel Gram matrix, show that the kernel can be approximated by a sum of polynomial kernels and that its effective dimensionality is determined by the boundedness of the input space.
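The approximation can be checked numerically: factor the Gauss kernel as exp(-||x||²/2σ²)·exp(-||y||²/2σ²)·exp(⟨x,y⟩/σ²) and Taylor-truncate the last factor, which is exactly a weighted sum of polynomial kernels ⟨x,y⟩^k/k!. On a bounded input domain the truncation error shrinks rapidly with the degree. A sketch (function names ours):

```python
import math

def gauss_kernel(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def truncated_kernel(x, y, degree, sigma=1.0):
    """Gaussian envelope times a degree-limited Taylor series of
    exp(<x, y> / sigma^2), i.e. a weighted sum of polynomial kernels."""
    ip = sum(a * b for a, b in zip(x, y)) / sigma ** 2
    series = sum(ip ** k / math.factorial(k) for k in range(degree + 1))
    env = (math.exp(-sum(a * a for a in x) / (2 * sigma ** 2))
           * math.exp(-sum(b * b for b in y) / (2 * sigma ** 2)))
    return env * series
```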

  • Adaptive Filtering for Baseline Wander Noise of ECG Using Neural Networks

    Juwon LEE  Weonrae JO  Gunki LEE  

     
    LETTER-Medical Engineering

      Page(s):
    261-266

    This study proposes a new method to remove baseline wander noise from the ECG while minimizing distortion of the ST segment. In general, standard and adaptive filters are used to remove ECG baseline wander. The standard filter, however, is limited because the frequency of the baseline signal varies and the baseline wander's spectrum overlaps with that of the ST segment; for the adaptive filter, it is difficult to select a reference signal. This study proposes a new adaptive filter structure that removes the noise without a reference signal by using neural networks. To confirm its performance, we used ECG data from the MIT-BIH database and obtained significant results in our tests.
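A much-simplified linear stand-in for the reference-free idea (the paper uses a neural network): a single-weight LMS filter driven by a constant input tracks the slowly varying baseline, and the error output is the baseline-corrected ECG. The step size and filter structure here are illustrative assumptions.

```python
def lms_baseline_removal(ecg, mu=0.05):
    """Single-weight LMS with a constant (DC) reference input.

    The weight tracks the slowly varying baseline; the error signal
    (input minus baseline estimate) is the cleaned ECG.
    """
    w, cleaned = 0.0, []
    for x in ecg:
        e = x - w          # output = input minus baseline estimate
        cleaned.append(e)
        w += mu * e        # adapt weight toward the baseline
    return cleaned
```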