Jun MIURA Tsuyoshi KANDA Shusaku NAKATANI Yoshiaki SHIRAI
This paper presents an active vision system for on-line traffic sign recognition. The system is composed of two cameras, one equipped with a wide-angle lens and the other with a telephoto lens, and a PC with an image processing board. The system first detects candidates for traffic signs in the wide-angle image using color, intensity, and shape information. For each candidate, the telephoto camera is directed to the candidate's predicted position to capture it at a larger size in the image. The recognition algorithm is designed to make intensive use of the built-in functions of an off-the-shelf image processing board, realizing both easy implementation and fast recognition. The results of on-road experiments show the feasibility of the system.
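The candidate-detection step described above can be sketched as a simple color test over the wide-angle image. The dominance threshold, minimum area, and the red-dominance criterion below are illustrative assumptions, not the paper's actual tests:

```python
import numpy as np

def detect_sign_candidates(rgb, red_thresh=1.5, min_area=20):
    """Toy stand-in for the color/intensity/shape candidate test.

    Flags pixels whose red channel dominates the others (hypothetical
    criterion) and returns the bounding box (x0, y0, x1, y1) of the
    flagged region, i.e. the window toward which the telephoto camera
    would be directed. Returns None if too few pixels qualify.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mask = (r > red_thresh * (g + 1)) & (r > red_thresh * (b + 1))
    ys, xs = np.nonzero(mask)
    if xs.size < min_area:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())
```

In the real system this window's predicted position (compensating for vehicle motion) would drive the telephoto camera's pan/tilt.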
Hidetoshi MIIKE Sosuke TSUKAMOTO Keishi NISHIHARA Takashi KURODA
This paper proposes a precise method for the simultaneous measurement of microscopic defects and the macroscopic three-dimensional shape of planar objects with specularly reflecting surfaces. The direction vector field of the surface tilt is evaluated directly by introducing a moving slit-light technique based on computer-graphics animation. The reflected image created by the moving slit-light is captured by a video camera, and the image sequence of the slit-light deformation is analyzed. The obtained direction vector field of the surface tilt recovers the surface shape by means of integration. Two sample objects, a concave mirror and a flat plastic injection molding, are tested to evaluate the performance of the proposed method. Surface anomalies such as dents and warpage are detected quantitatively at high resolution (about 0.2 µm) and high accuracy (about 95%) over a wide area (about 15 cm) of the test object.
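The final integration step, recovering a height map from the tilt field, can be illustrated with simple path integration by cumulative sums. This is a minimal sketch, not the paper's integrator; measured tilt data would normally need a least-squares (e.g. Frankot-Chellappa-style) integrator to suppress noise:

```python
import numpy as np

def integrate_tilt(p, q, dx=1.0):
    """Recover a height map z from tilt fields p = dz/dx and q = dz/dy
    by path integration: first down the leftmost column (using q), then
    along each row (using p). z is anchored so that z[0, 0] = 0.
    """
    z = np.cumsum(q[:, 0])[:, None] * dx   # integrate along y in column 0
    z = z + np.cumsum(p, axis=1) * dx      # integrate along x in each row
    z -= z[0, 0]                           # fix the constant of integration
    return z
```

For a tilted plane z = 2x + 3y the constant fields p = 2, q = 3 are recovered exactly; on noisy data the single integration path would accumulate error, which is why least-squares integration is preferred in practice.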
The selective attention mechanism, which plays an important role in human visual perception, can be investigated by developing an approach to perceiving the multiple meaningful dotted patterns in a color blindness plate (CBP). In this Letter, a perception model driven by a simple active vision mechanism is presented for the image segmentation and understanding of a CBP. Experiments show that, to understand one meaningful pattern in an image containing multiple meaningful patterns, active visual search (i.e., pattern attention) is a very useful function.
Sang-Woo BAN Jun-Ki CHO Soon-Ki JUNG Minho LEE
We propose a new active vision system that mimics the saccadic movements of the human eye. It is implemented based on a new computational model using neural networks. In this model, the visual pathway for saccadic eye movement is divided into three parts, each of which is modeled individually with a different neural network to reflect the principal functionality of the corresponding brain structure. First, the visual cortex stage of saccadic eye movement is modeled using a self-organizing feature map; a modified learning vector quantization network is then applied to imitate the activity of the superior colliculus in response to a visual stimulus. In addition, a multilayer recurrent neural network, trained by an evolutionary computation algorithm, models the pathway from the superior colliculus to the oculomotor neurons. Results from a computer simulation show that the proposed computational model is effective in mimicking human eye movements during a saccade. Based on the proposed model, an active vision system using a CCD camera and a motor system was developed and demonstrated with experimental results.
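The self-organizing feature map used for the first (visual cortex) stage is built from a standard winner-take-all update with a neighborhood function. The sketch below shows one such update step in one dimension; the learning rate, neighborhood width, and map topology are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def som_step(weights, x, lr=0.2, sigma=1.0):
    """One update of a 1-D self-organizing map.

    weights: (n_units, dim) array of unit weight vectors.
    x: (dim,) input vector.
    The best-matching unit is found by Euclidean distance, and every
    unit is pulled toward x with a Gaussian neighborhood factor
    centered on the winner.
    """
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    idx = np.arange(len(weights))
    h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)
```

Repeating this step over a stream of stimulus positions organizes the map topographically, which is the property that makes a SOM a plausible model for the retinotopic cortical stage.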
Terence Chek Hion HENG Yoshinori KUNO Yoshiaki SHIRAI
Mobile robots are presently navigated by a number of methods, using sensing systems such as sonar or vision. Each of these systems has its strengths and weaknesses. For example, although a visual system provides rich data about the surrounding environment, allowing an accurate perception of the area, processing the images invariably takes time. The sonar system, on the other hand, though quicker in response, is limited in the quality, accuracy, and range of its data. Therefore, any navigation method that relies on only one system as its primary source will leave the robot unable to navigate efficiently in an unfamiliar, slightly more complicated environment. This is not acceptable if robots are to work harmoniously with humans in a normal office or laboratory environment. Thus, to fully utilise the strengths of both the sonar and visual sensing systems, this paper proposes a fusion of navigation methods involving both systems as primary sources to produce a fast, efficient, and reliable obstacle-avoiding and navigating system. Furthermore, to enhance the robot's perception of its surroundings and improve its navigation capabilities, active sensing modules are also included. The result is an active sensor fusion system for the collision-avoiding behaviour of mobile robots. This behaviour can then be incorporated into other purposive behaviours (e.g., goal seeking, path finding). The validity of the system is also shown in real robot experiments.
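The fusion idea above can be sketched with a deliberately conservative rule: per angular sector, trust the smaller clearance reported by the two sensors, then steer toward the most open sector. This toy policy and its safety threshold are illustrative assumptions, not the paper's actual fusion scheme:

```python
def fuse_and_steer(sonar, vision, safe_dist=0.5):
    """Fuse per-sector clearance estimates (meters) from sonar and
    vision by taking the minimum of each pair (conservative fusion),
    then return the index of the most open sector as the heading.
    Returns None if every fused sector is closer than safe_dist,
    i.e. the robot should stop.
    """
    fused = [min(s, v) for s, v in zip(sonar, vision)]
    if max(fused) < safe_dist:
        return None
    return max(range(len(fused)), key=fused.__getitem__)
```

Taking the minimum means a sector is only considered free when both sensors agree it is free, which trades some efficiency for the reliability the abstract emphasizes.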