Hiromasa NAKATANI Tadahiro KITAHASHI
From the location of a vanishing point of an object in a picture, we can determine the spatial relationship between the object and the observer. In this paper we present a technique for locating vanishing points by the Hough transformation. As an application, we present a method for calculating the panned angle of the observer.
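A vanishing point is the common intersection of the images of parallel scene lines. The paper locates it via a Hough-style voting scheme; the sketch below is a simplified stand-in that intersects the supporting lines of detected segments directly in homogeneous coordinates and averages the intersections. All names and the input format are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def line_through(p1, p2):
    # Homogeneous line through two image points: l = p1 x p2.
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(segments):
    # segments: list of ((x1, y1), (x2, y2)) assumed to converge
    # toward a single vanishing point.
    lines = [line_through(p, q) for p, q in segments]
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            h = np.cross(lines[i], lines[j])  # intersection of two lines
            if abs(h[2]) > 1e-9:              # skip parallel pairs
                pts.append(h[:2] / h[2])
    return np.mean(pts, axis=0)
```

In the paper's formulation the segments instead cast votes into a Hough accumulator and the peak cell gives the vanishing point, which is more robust to outlier segments than this direct averaging.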
Keiji GYOHTEN Noboru BABAGUCHI Tadahiro KITAHASHI
In this paper, we present a method for extracting printed Japanese characters from unformatted document images. This research exploits multiple general features specific to printed Japanese characters. In our method, these features are treated as constraints on the regions to be extracted, within a constraint satisfaction framework. This is achieved by minimizing a constraint function that estimates how well the features are quantitatively satisfied. Our method is applicable to all kinds of Japanese documents because it requires no a priori knowledge of the document layout. Experimental results confirm the effectiveness of this method.
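The constraint function can be pictured as a weighted sum of penalties, one per general feature, that is zero when a candidate region fully satisfies the features of printed Japanese text. The concrete features and weights below are illustrative assumptions only; the paper's actual measures are not reproduced here.

```python
def constraint_cost(region, features):
    # features: list of (penalty_fn, weight); each penalty is 0 when the
    # corresponding general feature is fully satisfied by the region.
    return sum(w * fn(region) for fn, w in features)

# Hypothetical features (not the paper's): printed Japanese glyphs tend to
# occupy near-square bounding boxes with a characteristic ink density.
features = [
    (lambda r: abs(r["aspect"] - 1.0), 2.0),
    (lambda r: abs(r["density"] - 0.3), 1.0),
]

candidates = [
    {"aspect": 1.05, "density": 0.28},  # character-like region
    {"aspect": 3.20, "density": 0.05},  # ruled line or noise
]
best = min(candidates, key=lambda r: constraint_cost(r, features))
```

Extraction then amounts to keeping the regions whose minimized cost falls below a threshold, with no layout model required.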
Wei MING Noboru BABAGUCHI Tadahiro KITAHASHI
In this paper, a novel approach is proposed for identifying the detailed typeface of Gothic characters in document images. The identification is performed by evaluating two types of typeface models, named the Gs-pattern and the Gd-pattern, according to the MDL (Minimum Description Length) principle. The typeface models are generated from the observed character image by means of morphological operations and are viewed as approximating expressions of the observed character. Consequently, this method is unique in that it requires neither character recognition nor dictionary lookup.
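Under MDL, the preferred model is the one minimizing the total cost of encoding the model plus the residual where the model and the observed character disagree. The sketch below shows that selection step for two candidate binary patterns; the coding costs are illustrative assumptions, and the morphological generation of the Gs-/Gd-patterns themselves is not reproduced.

```python
import numpy as np

def description_length(model, observed, model_bits_per_pixel=1.0,
                       residual_bits_per_pixel=2.0):
    # Total DL = bits to encode the model pattern itself
    #          + bits to encode the pixels where model and observation differ.
    model_bits = model_bits_per_pixel * model.sum()
    residual_bits = residual_bits_per_pixel * np.logical_xor(model, observed).sum()
    return model_bits + residual_bits

def select_typeface(models, observed):
    # models: dict name -> binary pattern (e.g. "Gs", "Gd");
    # pick the one with minimal description length.
    return min(models, key=lambda name: description_length(models[name], observed))
```

The key property this captures is that no recognition or dictionary is consulted: the decision depends only on how compactly each typeface model accounts for the observed character image.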
Shoujie HE Norihiro ABE Tadahiro KITAHASHI
This paper presents an approach to generating an assembly plan from an assembly illustration. We previously proposed an approach for acquiring assembly-plan-related information from an assembly illustration, taking auxiliary lines as clues. However, some ambiguity remains in dynamic information such as assembly operations and their execution order. We have verified through experiments that this ambiguity can be resolved by referring to feedback information from the assemblage completed after the assembly operations shown in the current illustration. In fact, an assembly illustration contains not only the figures of mechanical parts and the auxiliary lines visualizing their assembly relations, but explanatory words and explanatory lines as well. Explanatory words can be classified into two categories: instructions on assembly operations and the names of mechanical parts. The former explicitly describe dynamic information such as the details of assembly operations; the latter also imply dynamic information such as the function of a mechanical part. Explanatory lines are usually drawn to clarify these explanatory relations. We therefore consider that integrating the information from explanatory words with that already obtained through the extraction of auxiliary lines will enable us to generate an unambiguous assembly plan from the illustration currently under observation.
Keiji GYOHTEN Tomoko SUMIYA Noboru BABAGUCHI Koh KAKUSHO Tadahiro KITAHASHI
This paper describes COCE (COordinative Character Extractor), a method for extracting printed Japanese characters and their character strings from all sorts of document images. COCE is based on a multi-agent system in which each agent tries to find a character string and extract the characters in it. For adaptability, the agents are allowed to examine arbitrary parts of a document and extract characters using only layout-independent knowledge. Moreover, the agents check and correct their results, at times with the help of other agents. Experimental results have verified the effectiveness of our approach.
Seiichiro DAN Toshiyasu NAKAO Tadahiro KITAHASHI
We can understand and recover a scene even from a single picture or line drawing. A number of methods have been developed for this problem, but although they can recognize the three-dimensional shape of each object, they have scarcely addressed scenes containing multiple objects. In this paper we describe a method for determining the configurations of multiple objects. The method employs the assumption of coplanarity and the constraint of occlusion: the coplanarity assumption generates candidate configurations of the multiple objects, and the occlusion constraint prunes impossible ones. By combining this method with a method of shape recovery for individual objects, we have implemented a system that acquires three-dimensional information about scenes containing multiple objects from a monocular image.
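The generate-and-prune step can be illustrated with depth orderings: candidate configurations are enumerated (here, simply all depth orders of the objects, standing in for the candidates generated by the coplanarity assumption), and any candidate in which a hypothesized farther object would have to occlude an object drawn in front of it is pruned. The interface and object names are illustrative assumptions, not the paper's representation.

```python
from itertools import permutations

def consistent_orders(objects, front_of):
    # front_of: dict mapping an overlapping pair (a, b) to whichever of
    # a, b appears in front in the drawing (the occlusion constraint).
    survivors = []
    for order in permutations(objects):      # candidate configurations
        depth = {o: i for i, o in enumerate(order)}  # 0 = nearest
        ok = all(
            depth[f] < depth[b if f == a else a]
            for (a, b), f in front_of.items()
        )
        if ok:
            survivors.append(order)          # passes the occlusion check
    return survivors
```

In the paper the candidates carry full three-dimensional placements derived from the coplanarity assumption rather than bare orderings, but the pruning logic has the same shape: discard every configuration that contradicts an observed occlusion.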