
Author Search Result

[Author] Hiroshi NAGAHASHI (27 hits)

1-20 of 27 hits

  • Parametric Piecewise Modeling of Bezier and Polynomial Surfaces

    Mohamed IMINE  Hiroshi NAGAHASHI  

     
    PAPER-Image Processing,Computer Graphics and Pattern Recognition

      Vol:
    E81-D No:1
      Page(s):
    94-104

    Finding or constructing a model for a portion of a given polynomial or Bezier parametric surface from the whole original one is a commonly encountered problem in surface modeling. A new method is described for constructing a polynomial or Bezier piecewise model from an original one. It is based on the "Parametric Piecewise Model" (PPM) representation of curves. The PPM representation is given by explicit expressions in terms of only control points or polynomial coefficients. Over the region considered, the generated piecewise model behaves exactly like a normal polynomial or Bezier model, in the same way as the original one. It also retains all the characteristics of the original, i.e., the same order and number of control points, and satisfies continuity of all orders at the boundaries. The PPM representation permits normalization, piecewise modeling, PPM reduction and systematic processing.
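The reparameterization at the heart of piecewise modeling can be sketched in the power basis: restricting a polynomial curve p(t) to a sub-interval [a, b] and re-expressing it as a polynomial of the same degree in a new parameter u in [0, 1]. This is a generic illustration of the idea, not the paper's PPM formulas; the function names are ours.

```python
from math import comb

def restrict_poly(coeffs, a, b):
    """Given power-basis coefficients c[k] of p(t) = sum c[k] t^k,
    return the coefficients of q(u) = p(a + (b-a)*u), the same curve
    reparameterized so that u in [0, 1] covers t in [a, b]."""
    n = len(coeffs)
    out = [0.0] * n
    s = b - a
    for k, c in enumerate(coeffs):
        # expand c * (a + s*u)^k with the binomial theorem
        for j in range(k + 1):
            out[j] += c * comb(k, j) * (a ** (k - j)) * (s ** j)
    return out

def eval_poly(coeffs, t):
    return sum(c * t ** k for k, c in enumerate(coeffs))
```

The sub-curve has the same degree and the same number of coefficients as the original, mirroring the "all characteristics of the original" property of the PPM.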

  • Visual Correspondence Grouping via Local Consistent Neighborhood

    Kota AOKI  Hiroshi NAGAHASHI  

     
    PAPER-Pattern Recognition

      Vol:
    E96-D No:6
      Page(s):
    1351-1358

    In this paper we aim to group visual correspondences in order to detect objects, or parts of objects, commonly appearing in a pair of images. We first extract visual keypoints from the images and establish initial point correspondences between the two images by comparing their descriptors. Our method is based on two types of graphs, named relational graphs and correspondence graphs. A relational graph of a point is constructed by thresholding geometric and topological distances between the point and its neighboring points. The threshold value of the geometric distance is determined according to the scale of each keypoint, and the topological distance is defined as the shortest path on a Delaunay triangulation built from the keypoints. We also construct a correspondence graph, whose nodes represent pairs of matched points, or correspondences, and whose edges connect consistent correspondences. Two correspondences are consistent with each other if they meet the local consistency induced by their relational graphs. The consistent neighborhoods should represent an object or a part of an object contained in the pair of images. Enumerating the maximal cliques of the correspondence graph yields groups of keypoint pairs, which therefore involve common objects or parts of objects. We apply our method to common visual pattern detection, object detection, and object recognition. Quantitative experimental results demonstrate that our method is comparable to or better than other methods.
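The clique-enumeration step can be sketched with a plain Bron-Kerbosch recursion over an adjacency-set graph. This is the textbook algorithm (without pivoting), not the paper's implementation, and the toy graph in the usage note below is ours.

```python
def maximal_cliques(adj):
    """Enumerate the maximal cliques of an undirected graph given as an
    adjacency dict {node: set_of_neighbors} (Bron-Kerbosch, no pivoting)."""
    cliques = []

    def expand(r, p, x):
        # r: current clique, p: candidates, x: already-explored vertices
        if not p and not x:
            cliques.append(r)
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    expand(set(), set(adj), set())
    return cliques
```

For a triangle {a, b, c} with an extra edge c-d, the maximal cliques are {a, b, c} and {c, d}; in the paper's setting each node would be a keypoint correspondence rather than a letter.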

  • Motion Segmentation in RGB Image Sequence Based on Stochastic Modeling

    Adam KURIASKI  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Image Processing,Computer Graphics and Pattern Recognition

      Vol:
    E79-D No:12
      Page(s):
    1708-1715

    A method of motion segmentation in RGB image sequences is presented in detail. The method is based on modeling a moving object by a six-variate Gaussian distribution within a hidden Markov random field (MRF) framework. It is an extended and improved version of our previous work: the MRF energy expression is modified on mathematical grounds, and an initialization procedure for the first frame of the sequence is introduced. Both modifications result in new, useful features. The first is a rather simple parameter estimation that has to be performed before the method is used; the values of the Maximum Likelihood (ML) estimators of the parameters can now be used without any modification by the user. The second removes the need to find the localization mask of the moving object in the first frame manually. Experimental results showing the usefulness of the method are also included.

  • Penalized AdaBoost: Improving the Generalization Error of Gentle AdaBoost through a Margin Distribution

    Shuqiong WU  Hiroshi NAGAHASHI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2015/08/13
      Vol:
    E98-D No:11
      Page(s):
    1906-1915

    Gentle AdaBoost is widely used in object detection and pattern recognition due to its efficiency and stability. To focus on instances with small margins, Gentle AdaBoost assigns larger weights to these instances during training. However, misclassification of small-margin instances can still occur, causing the weights of these instances to grow larger and larger. Eventually, a few large-weight instances might dominate the whole data distribution, encouraging Gentle AdaBoost to choose weak hypotheses that fit only these instances in the late training phase. This phenomenon, known as “classifier distortion”, worsens the generalization error and can easily lead to overfitting, since the deviation of all selected weak hypotheses is increased by the late-selected ones. To solve this problem, we propose a new variant which we call “Penalized AdaBoost”. In each iteration, our approach not only penalizes the misclassification of instances with small margins but also restrains the weight increase for instances with minimal margins. Our method performs better than Gentle AdaBoost because it effectively avoids “classifier distortion”. Experiments show that our method achieves far lower generalization errors and a similar training speed compared with Gentle AdaBoost.
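The core idea of restraining weight growth can be sketched as an exponential, margin-driven weight update whose multiplier is capped. The cap value and the exact form of the penalty are illustrative assumptions, not the update rule from the paper.

```python
import math

def update_weights(weights, margins, cap=5.0):
    """AdaBoost-style weight update with a restraint on minimal-margin
    instances: each weight is multiplied by exp(-margin), but the
    multiplier is capped so that persistently misclassified (very
    negative margin) instances cannot dominate the distribution.
    The cap and penalty form are illustrative, not the paper's formula."""
    new = [w * min(math.exp(-m), cap) for w, m in zip(weights, margins)]
    z = sum(new)                      # renormalize to a distribution
    return [w / z for w in new]
```

Without the cap, an instance with margin -10 would receive a multiplier of about e^10 and swallow nearly the whole distribution; with it, the instance stays heaviest but bounded.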

  • Image Categorization Using Scene-Context Scale Based on Random Forests

    Yousun KANG  Hiroshi NAGAHASHI  Akihiro SUGIMOTO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E94-D No:9
      Page(s):
    1809-1816

    Scene context plays an important role in scene analysis and object recognition. Among various sources of scene context, we focus on the scene-context scale, i.e., the effective scale of the local context used to classify an image pixel in a scene. This paper presents image categorization based on random forests and the scene-context scale. Random forests are ensembles of randomized decision trees; since they are extremely fast in both training and testing, classification, clustering and regression can be performed in real time. We train multi-scale texton forests, which efficiently provide both a hierarchical clustering into semantic textons and local classification at various scale levels. The scene-context scale can be estimated from the entropy of the leaf nodes in the multi-scale texton forests. For image categorization, we combine the classified category distributions at each scale with the estimated scene-context scale. We evaluate our method on the MSRC21 segmentation dataset and find that the use of the scene-context scale improves image categorization performance. Our results outperform the state of the art in image categorization accuracy.
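The leaf-entropy cue can be sketched directly: the Shannon entropy of a leaf's class histogram measures how ambiguous the texton response is at a given scale. The function name and the histogram encoding are ours, not the paper's.

```python
import math

def leaf_entropy(class_counts):
    """Shannon entropy (in bits) of the class histogram stored at a
    decision-tree leaf. A high-entropy leaf means the texton at this
    scale is ambiguous; a low-entropy leaf means it is discriminative."""
    total = sum(class_counts)
    h = 0.0
    for c in class_counts:
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h
```

A pure leaf gives 0 bits, a uniform leaf over k classes gives log2(k) bits, so comparing entropies across scale levels indicates at which scale the local context becomes informative.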

  • A New Method for Smooth Interpolation without Twist Constraints

    Caiming ZHANG  Takeshi AGUI  Hiroshi NAGAHASHI  Tomoharu NAGAO  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:2
      Page(s):
    243-250

    A new method for interpolating the boundary function values and first derivatives of a triangle is presented. The method has a relatively simple construction and involves no compatibility constraints. The polynomial precision set of the constructed interpolation function includes all polynomials of degree three or less. Test results show that the surfaces produced by the proposed method are better than those produced by weighted combination schemes in both fairness and precision.

  • Interpolation of CT Slices for Laser Stereolithography

    Takanori NAGAE  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:8
      Page(s):
    905-911

    An algorithm for interpolating parallel cross-sections between CT slices is described. Contours of equiscalar (constant-density) surfaces on the cross-sections are obtained directly as non-intersecting loops from grayscale slice images. The algorithm is based on a general algorithm that the authors have proposed earlier for constructing triangulated orientable closed surfaces from grayscale volumes, and it is particularly suited to a new technique, called laser stereolithography, which creates real 3D plastic objects by using a UV laser to scan and harden liquid polymer. Laser stereolithography proceeds slice by slice, so the technique requires interpolation of intermediate cross-sections between slices. For visualization, surfaces only need to be shaded almost continuously; local defects are invisible and harmless when the picture resolution is rather low. In contrast, topological faults are fatal when constructing solid models by laser stereolithography: every contour line on a cross-section must be closed, with no intersections, and not a single break of a contour line can be tolerated. Many algorithms are already available for equiscalar surface construction, and it seems that contour lines could be obtained by cutting the surfaces they produce. However, few of them are directly applicable to solid modeling; the marching cubes algorithm, for example, does not ensure consistent surface topology. Our algorithm guarantees an adequate topology of the contour lines.

  • A Query Translation System Based on Map Expressions

    Hiroshi NAGAHASHI  Adel SHEFFI  Mikio NAKATSUYAMA  

     
    PAPER-Databases

      Vol:
    E73-E No:1
      Page(s):
    155-164

    In this paper, we develop a system that translates database expressions, which we define for describing user queries, into iterative programs. The translation system uses both functional composition and program transformation techniques. Database expressions are first translated into map-expressions, which use a function map*, an extension of the Lisp map function, to express structural loop control. After being transformed into normal forms by a term rewriting system, the map-expressions are translated into lambda expressions to perform functional combination. After some simplifications by another term rewriting system, iterative programs are generated. Our transformation algorithm has the advantage of systematically generating structured iterative programs without user intervention.
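One classic normalization rule behind this kind of rewriting is map fusion: a composition of two maps collapses into a single map of the composed function, turning two loops into one. The tuple-based expression encoding below is an illustrative assumption, not the paper's map* syntax.

```python
def fuse_maps(expr):
    """Apply the rewrite rule  map f . map g  ->  map (f . g)
    bottom-up on a tiny expression language of nested tuples:
    ('map', fn) and ('compose', outer, inner)."""
    if expr[0] == 'compose':
        _, f, g = expr
        f, g = fuse_maps(f), fuse_maps(g)
        if f[0] == 'map' and g[0] == 'map':
            ff, gg = f[1], g[1]
            return ('map', lambda x: ff(gg(x)))   # one fused loop body
        return ('compose', f, g)
    return expr

def run(expr, xs):
    """Interpret an expression over a list."""
    if expr[0] == 'map':
        return [expr[1](x) for x in xs]
    if expr[0] == 'compose':
        return run(expr[1], run(expr[2], xs))
    raise ValueError(expr)
```

The fused form produces the same results as the original composition while traversing the input only once, which is exactly the point of normalizing map-expressions before generating iterative code.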

  • A Paint System of Monochromatic Moving Images

    Hiroshi NAGAHASHI  Takeshi AGUI  Tatsushi ISHIGURO  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E78-D No:4
      Page(s):
    476-483

    A method for painting a sequence of monochromatic images is proposed. In this method, a color model whose base components are hue, saturation and intensity is used to keep the lightness of the images unchanged before and after painting. Two successive frames of the monochromatic image sequence, together with an interactively painted color image of the first frame, are analyzed in order to paint the next monochromatic frame. The painting process is composed of two phases: an automatic coloring phase and an interactive retouching phase. In the automatic coloring phase, hierarchical image segmentation and region matching procedures are performed, and the hue and saturation attributes are mapped from the painted image of the first frame to the next image. In the retouching phase, using an interactive paint system based on the color model, users can modify the chromatic components of pixels whose colors were not mapped correctly. Several experiments show that our method is very effective in reducing tedious painting work.
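The intensity-preserving coloring step can be sketched with the standard library's colorsys module, using HSV as a stand-in for the paper's hue-saturation-intensity model: the lightness channel keeps the monochrome value, and only hue and saturation are assigned.

```python
import colorsys

def recolor(gray, hue, sat):
    """Assign hue and saturation to a monochrome pixel while keeping
    its original lightness (here the HSV value channel) unchanged.
    gray, hue and sat are floats in [0, 1]; returns an (r, g, b) tuple."""
    return colorsys.hsv_to_rgb(hue, sat, gray)
```

With zero saturation the pixel stays gray at its original level, so unpainted regions are untouched; a fully saturated hue changes only chromaticity, never the stored lightness.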

  • Analysis of Noteworthy Issues in Illumination Processing for Face Recognition

    Min YAO  Hiroshi NAGAHASHI  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E98-D No:3
      Page(s):
    681-691

    Face recognition under variable illumination conditions is a challenging task, and a number of approaches have been developed to address the illumination problem. In this paper, we summarize and analyze some noteworthy issues in illumination processing for face recognition by reviewing various representative approaches. These issues include a principle that associates the various approaches with a commonly used reflectance model, and shared considerations such as the contribution of basic processing methods, the processing domain, the feature scale, and a common problem. We also address a more essential question: what should actually be normalized? Through the discussion of these issues, we provide suggestions on potential directions for future research. In addition, we conduct evaluation experiments on 1) the contribution of fundamental illumination correction to illumination-insensitive face recognition and 2) the comparative performance of various approaches. Experimental results show that approaches incorporating fundamental illumination correction methods are less sensitive to extreme illumination than those without them. Tan and Triggs' method (TT) using the L1 norm achieves the best results among the nine tested approaches.

  • A Method for Watermarking to Bezier Polynomial Surface Models

    Hiroshi NAGAHASHI  Rikima MITSUHASHI  Ken'ichi MOROOKA  

     
    PAPER-Computer Graphics

      Vol:
    E87-D No:1
      Page(s):
    224-232

    This paper presents a new method for embedding digital watermarks into Bezier polynomial patches. An object surface is assumed to be represented by multiple piecewise Bezier polynomial patches. A Bezier patch passes through its four corner control points, which are called data points, and does not pass through the other control points. To embed a watermark, a Bezier patch is divided into two patches. Since each subdivided patch shares two data points of the original patch, the subdivision apparently generates two additional data points on the boundaries of the original patch. These new data points can be placed at any position on the boundaries by changing the subdivision parameters. The additional data points cannot be removed without knowing the parameters used for subdividing and deforming the patch; hence the patch subdivision enables us to embed a watermark into the surface.
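The subdivision step can be illustrated for a Bezier curve (one boundary of a patch): de Casteljau's algorithm splits the curve at a parameter t, and the shared endpoint of the two sub-curves is the new data point whose position the parameter controls. This is the standard construction sketched on a curve, not the paper's patch-level watermark embedding code.

```python
def decasteljau_split(ctrl, t):
    """Split a Bezier curve at parameter t by de Casteljau's algorithm.
    ctrl is a list of control points (tuples); returns the control
    polygons of the left and right sub-curves. The shared endpoint
    left[-1] == right[0] lies on the curve and is the new 'data point'
    whose position is governed by the subdivision parameter t."""
    def lerp(p, q):
        return tuple(a + t * (b - a) for a, b in zip(p, q))

    left, right = [ctrl[0]], [ctrl[-1]]
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [lerp(p, q) for p, q in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    return left, right[::-1]
```

Both sub-curves reproduce the original curve exactly over their halves, so the split is invisible in the rendered surface; only someone who knows t can undo it.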

  • Structural Evolution of Neural Networks Having Arbitrary Connections by a Genetic Method

    Tomoharu NAGAO  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Bio-Cybernetics

      Vol:
    E76-D No:6
      Page(s):
    689-697

    A genetic method for generating a neural network whose structure and connection weights are both adequate for a given task is proposed. A neural network having arbitrary connections is regarded as a virtual living thing whose genes represent the connections among its neural units. The effectiveness of a network is estimated from its time-sequential input and output signals. Excellent individuals, namely appropriate neural networks, are generated through iterated generations. The basic principle of the method and its applications are described. As an example of evolution from randomly generated networks to feedforward networks, an XOR problem is dealt with, and an action control problem is used for generating networks containing feedback and mutual connections. The proposed method is applicable to designing a neural network whose adequate structure is unknown.

  • BPL: A Language for Parallel Algorithms on the Butterfly Network

    Fattaneh TAGHIYAREH  Hiroshi NAGAHASHI  

     
    PAPER-Algorithms

      Vol:
    E83-D No:7
      Page(s):
    1488-1496

    A number of parallel algorithms have been developed to solve large-scale real-world problems. Although there has been much work on the design of parallel algorithms, there has been little on the design of languages for expressing them. This paper describes BPL, a new parallel language designed for butterfly networks. The purpose of the language is to help designers hide the complexity of an algorithm, leaving the details of the mapping between data and processors to a lower level. BPL provides a simpler virtual machine for the designer, removing the need to think about the control of processors and data. From another point of view, BPL helps the designer logically check an algorithm and correct any possible errors in it. The paper gives some examples implemented in the language. In addition, we have implemented a software tool that simulates the running of an algorithm on the network. The results lead us to believe that this language would be useful for representing all kinds of algorithms on this network, including normal algorithms and others.
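For reference, the wiring that a butterfly-network language must abstract away can be sketched directly: nodes are (rank, column) pairs, and each node has a straight edge and a cross edge to the next rank, the cross edge flipping one column bit. Which bit is flipped at which rank varies by convention; the choice below (bit r at rank r) is one common one.

```python
def butterfly_edges(n):
    """Edges of an n-dimensional butterfly network. Nodes are
    (rank, column) pairs with 0 <= rank <= n and 0 <= column < 2**n.
    Each node (r, c) with r < n has a 'straight' edge to (r+1, c) and
    a 'cross' edge to (r+1, c ^ (1 << r))."""
    edges = []
    for r in range(n):
        for c in range(2 ** n):
            edges.append(((r, c), (r + 1, c)))              # straight
            edges.append(((r, c), (r + 1, c ^ (1 << r))))   # cross
    return edges
```

With n ranks of 2^n nodes each contributing two outgoing edges, the graph has n * 2^(n+1) edges; this per-processor bookkeeping is precisely what BPL's virtual machine hides from the algorithm designer.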

  • Texture Classification Using Hierarchical Linear Discriminant Space

    Yousun KANG  Ken'ichi MOROOKA  Hiroshi NAGAHASHI  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E88-D No:10
      Page(s):
    2380-2388

    As a representative linear discriminant analysis technique, the Fisher method is widely used in practice and is very effective for two-class classification. However, when it is extended to multi-class classification problems, its discrimination precision may deteriorate. A main reason is the occurrence of overlapping class distributions in the discriminant space built by the Fisher criterion. In order to take such overlaps among classes into consideration, our approach builds a new discriminant space by hierarchically classifying the overlapped classes. In this paper, we propose a new hierarchical discriminant analysis for texture classification, dividing the discriminant space into subspaces by recursively grouping the overlapped classes. In experiments, texture images from many classes are classified by the proposed method, which clearly outperforms the conventional Fisher method.

  • 3D Triangular Mesh Parameterization with Semantic Features Based on Competitive Learning Methods

    Shun MATSUI  Kota AOKI  Hiroshi NAGAHASHI  

     
    PAPER-Computer Graphics

      Vol:
    E91-D No:11
      Page(s):
    2718-2726

    In 3D computer graphics, mesh parameterization is a key technique for digital geometry processing tasks such as morphing, shape blending, texture mapping and re-meshing. Most previous approaches made use of an identical primitive domain to parameterize a mesh model. Recent work on mesh parameterization has reported more flexible and attractive methods that can create direct mappings between two meshes. These mappings, called "cross-parameterizations", typically preserve semantic feature correspondences between the target meshes. This paper proposes a novel approach for parameterizing a mesh directly into another one. The main idea of our method is to combine competitive learning and least-squares mesh techniques. It suffices to specify a few semantic feature correspondences between the target meshes, even if they differ in shape or pose.

  • Representation of Surfaces on 5 and 6 Sided Regions

    Caiming ZHANG  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E77-D No:3
      Page(s):
    326-334

    A C1 interpolation scheme for constructing a surface patch on an n-sided region (n = 5, 6) is presented. The constructed surface patch matches the given boundary curves and cross-boundary slopes on the sides of the n-sided region. The scheme has a relatively simple construction and offers one degree of freedom for adjusting the interior shape of the constructed interpolation surface. The polynomial precision set of the scheme includes all polynomials of degree three or less. Experiments comparing the proposed scheme with the schemes proposed by Gregory and Varady are also shown.

  • Depth Perception from a 2D Natural Scene Using Scale Variation of Texture Patterns

    Yousun KANG  Hiroshi NAGAHASHI  

     
    LETTER-Pattern Recognition

      Vol:
    E89-D No:3
      Page(s):
    1294-1298

    In this paper, we introduce a new method for depth perception from a 2D natural scene using scale variation of texture patterns. As a surface in a 2D scene gets farther away from us, its texture appears finer and smoother. The texture gradient is one of the monocular depth cues and can be represented by gradual scale variations of textured patterns. To extract feature vectors from the textured patterns, higher-order local autocorrelation functions are utilized at each scale step. Hierarchical linear discriminant analysis is employed to classify the scale rate of a feature vector; the discriminant space is divided into subspaces by recursively grouping the overlapped classes. In experiments, relative depth perception of 2D natural scenes is performed by the proposed method, which is expected to play an important role in natural scene analysis.

  • Piecewise Parametric Cubic Interpolation

    Caiming ZHANG  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E77-D No:8
      Page(s):
    869-876

    A method is described for constructing an interpolant to a set of arbitrary data points (xi, yi), i = 1, 2, ..., n. The constructed interpolant is a piecewise parametric cubic polynomial, satisfies C1 continuity, and reproduces all parametric polynomials of degree two or less exactly. Experiments comparing the new method with the Bessel and spline methods are also shown.
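For comparison, one standard C1 piecewise parametric cubic interpolant (a well-known baseline, not the paper's construction) is the uniform Catmull-Rom spline, which passes through its interior data points:

```python
def catmull_rom(points, i, u):
    """Evaluate a uniform Catmull-Rom spline, a classic C1 piecewise
    parametric cubic interpolant, on segment i (between points[i] and
    points[i+1]) at local parameter u in [0, 1]. Needs one extra point
    on each side of the segment."""
    p0, p1, p2, p3 = points[i - 1], points[i], points[i + 1], points[i + 2]

    def blend(a, b, c, d):
        # standard Catmull-Rom basis in power form
        return 0.5 * ((2 * b) + (-a + c) * u
                      + (2 * a - 5 * b + 4 * c - d) * u * u
                      + (-a + 3 * b - 3 * c + d) * u ** 3)

    return tuple(blend(*q) for q in zip(p0, p1, p2, p3))
```

At u = 0 and u = 1 the segment reproduces its two data points exactly, and adjacent segments share tangents, giving the C1 continuity the abstract refers to.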

  • Illumination Normalization-Based Face Detection under Varying Illumination

    Min YAO  Hiroshi NAGAHASHI  Kota AOKI  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:6
      Page(s):
    1590-1598

    A number of well-known learning-based face detectors achieve extraordinary performance in controlled environments, but face detection under varying illumination is still challenging. Possible solutions to the illumination problem are to create illumination-invariant features or to utilize skin color information; however, neither features nor skin colors are sufficiently reliable under difficult lighting conditions. Another possible solution is to perform illumination normalization (e.g., Histogram Equalization (HE)) before running a face detector, but applications of normalization to face detection have not been widely studied in the literature. This paper applies and evaluates various existing normalization methods under a framework combining illumination normalization with two learning-based face detectors (a Haar-like face detector and an LBP face detector). These methods were initially proposed for different purposes (face recognition or image quality enhancement), but some of them significantly improve the original face detectors and lead to better performance than HE according to the results of comparative experiments on two databases. We also propose a new normalization method, called segmentation-based half histogram stretching and truncation (SH), for face detection under varying illumination. It first employs Otsu's method to segment the histogram (intensities) of the input image into several spans and then redistributes the segmented spans. In this way, non-uniform illumination can be efficiently compensated and local facial structures can be appropriately enhanced. Our method obtains good performance in the experiments.
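The Otsu segmentation step referenced above can be sketched in its basic two-class form: pick the threshold that maximizes the between-class variance of the intensity histogram. The SH method's multi-span extension is not reproduced here.

```python
def otsu_threshold(hist):
    """Otsu's method: return the threshold t that maximizes the
    between-class variance, where hist[i] is the number of pixels
    with intensity i. Pixels <= t form one class, pixels > t the other."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (grand_sum - sum0) / w1   # class means
        var = w0 * w1 * (m0 - m1) ** 2                # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal histogram the returned threshold falls between the two clusters, which is what lets SH cut the intensity range into spans before stretching them.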

  • Parallel Computation of Parametric Piecewise Modeling Method

    Hiroshi NAGAHASHI  Mohamed IMINE  

     
    PAPER-Computer Graphics

      Vol:
    E85-D No:2
      Page(s):
    411-417

    This paper develops a simple algorithm for calculating a polynomial curve or surface in a parallel way. The number of arithmetic operations and the time needed for the calculation are evaluated in terms of the polynomial degree, the resolution of the curve, and the number of processors used. We compare our method with a conventional method for generating polynomial curves and surfaces, especially in terms of computation time and the approximation error due to reduction of the polynomial degree. It is shown that our method can perform fast calculation within tolerable error.
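One generic way to expose parallelism in polynomial evaluation (a sketch of the general idea, not the paper's algorithm) is even/odd splitting: p(t) = E(t^2) + t * O(t^2), where the two half-degree Horner evaluations are independent and could run on separate processors.

```python
def horner(coeffs, t):
    """Sequential Horner evaluation of p(t) = sum coeffs[k] * t**k."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * t + c
    return acc

def even_odd_eval(coeffs, t):
    """Even/odd splitting: p(t) = E(t^2) + t * O(t^2), where E and O
    hold the even- and odd-indexed coefficients. The two half-degree
    Horner evaluations are independent, so they can be computed
    concurrently; here they run sequentially for illustration."""
    even = coeffs[0::2]
    odd = coeffs[1::2]
    t2 = t * t
    return horner(even, t2) + t * horner(odd, t2)
```

The split halves the length of the dependent multiply-add chain per processor, which is the kind of operation-count trade-off the paper analyzes against degree and resolution.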
