
Author Search Result

[Author] Karim FAEZ (10 hits)

Results 1-10
  • A Dynamic Model for the Seismic Signals Processing and Application in Seismic Prediction and Discrimination

    Payam NASSERY  Karim FAEZ  

     
    PAPER-Pattern Recognition

      Vol:
    E83-D No:12
      Page(s):
    2098-2106

    In this paper we present a new method for seismic signal analysis, based on ARMA modeling and a fuzzy LVQ clustering method. The objective of this work is to sense changes, natural or artificial, in the seismogram signal and to identify the sources that caused them (seismic classification). During the study we also found that the model is sometimes able to signal an upcoming seismic event shortly before its onset (seismic prediction). The application of the proposed method to both seismic classification and seismic prediction is therefore studied through the experimental results. The study is based on the background noise of short-period teleseismic recordings. The ARMA model coefficients are derived for consecutive overlapped windows. A base model is then generated by clustering the calculated model parameters, using the fuzzy LVQ method proposed by Nassery & Faez in [19]. The time windows that do not take part in the base model generation process are called the test windows. The model coefficients of the test windows are then compared to the base model coefficients through some pre-defined composition rules. The result of this comparison is a normalized value serving as a measure of similarity. The set of consecutive similarity measures generated above produces a curve over the time-window indices, called the characteristic curve. The numerical results show that the characteristic curves often contain vital seismological information and can be used for source classification and prediction purposes.
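    The window-wise modelling step behind the characteristic curve can be sketched as follows. This is a minimal stand-in, not the paper's method: least-squares AR fitting replaces full ARMA estimation, and a plain mean of the training-window coefficients replaces the fuzzy-LVQ base model; the window sizes and the similarity form are illustrative assumptions.

```python
import numpy as np

def ar_coeffs(x, order=4):
    """Fit AR(order) coefficients to a 1-D window by least squares."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def characteristic_curve(signal, win=128, hop=64, order=4, n_base=8):
    """Similarity of each test window's AR model to a base model built
    from the first n_base windows (a crude stand-in for the paper's
    fuzzy-LVQ clustering step)."""
    windows = [signal[s : s + win] for s in range(0, len(signal) - win + 1, hop)]
    coeffs = np.array([ar_coeffs(w, order) for w in windows])
    base = coeffs[:n_base].mean(axis=0)      # base model = mean coefficients
    d = np.linalg.norm(coeffs[n_base:] - base, axis=1)
    return 1.0 / (1.0 + d)                   # normalized similarity in (0, 1]
```

    Dips in the returned curve flag windows whose model drifts away from the base, which is the cue exploited here for classification and prediction.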

  • Signature Pattern Recognition Using Moments Invariant and a New Fuzzy LVQ Model

    Payam NASSERY  Karim FAEZ  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E81-D No:12
      Page(s):
    1483-1493

    In this paper we introduce a new method for signature pattern recognition that combines image moment transformations with a fuzzy logic approach. First, we model the noise inherently embedded in signature patterns and separate it from environmental effects. Based on this first step, we map each pattern into the unit circle using the least mean square (LMS) error criterion, to get rid of variations caused by shifting or scaling. We then derive some orientation-invariant moments introduced in earlier reports and study their statistical properties in our particular input space. Next, we define a fuzzy complex space together with a fuzzy complex similarity measure on it, and construct a new training algorithm based on the fuzzy learning vector quantization (FLVQ) method. A comparison method is also proposed so that any input pattern can be compared to the learned prototypes through the pre-defined fuzzy similarity measure. Each set of the above image moments is fed to the fuzzy classifier separately, and the misclassifications are counted as a measure of error magnitude. The efficiency of the proposed FLVQ model is shown numerically in comparison with the conventional FLVQs reported so far. Finally, some satisfactory results are derived and a comparison is made between the image transformations considered above.
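    The kind of orientation-invariant moments used as features here can be sketched with the classical Hu invariants, a standard choice; the paper's exact moment set may differ.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a grayscale image
    (invariant to translation, rotation and scale)."""
    y, x = np.mgrid[: img.shape[0], : img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):   # central moment (translation-invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

    Because the invariants are unchanged under translation (and, up to discretisation, rotation and scale), shifted or rescaled signatures map to the same feature values.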

  • Precise Vehicle Speed Measurement Based on a Hierarchical Homographic Transform Estimation for Law Enforcement Applications

    Hamed ESLAMI  Abolghasem A. RAIE  Karim FAEZ  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2016/03/11
      Vol:
    E99-D No:6
      Page(s):
    1635-1644

    Today, computer vision is used in various intelligent transportation system applications such as traffic surveillance, driver assistance, and law enforcement. Among these, we concentrate on speed measurement for law enforcement. In law enforcement applications, the presence of the license plate in the scene is a presupposition, and metric parameters such as the vehicle's speed must be estimated with a high degree of precision. The novelty of this paper is a new precise, practical and fast procedure, with a hierarchical architecture, that estimates the homographic transform of the license plate and uses this transform to estimate the vehicle's speed. The proposed method uses the RANSAC algorithm to improve the robustness of the estimation. Hence, it is possible either to replace peripheral equipment with vision-based systems or, in conjunction with these peripherals, to improve the accuracy and reliability of the system. Experiments on different datasets with different specifications show that the proposed method can be used in law enforcement applications to measure a vehicle's speed.
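    The homography-then-speed pipeline can be sketched as below. A plain direct linear transform (DLT) least-squares fit stands in for the paper's hierarchical estimator, the RANSAC outlier loop is omitted, and the plate corner coordinates and metric scale in the test are illustrative.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # null-space vector = homography entries
    return H / H[2, 2]

def plate_world_point(H_img_to_world, px):
    """Map an image point (pixels) to road-plane coordinates (metres)."""
    p = H_img_to_world @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]

def speed_kmh(H, px_t0, px_t1, dt_s):
    """Speed from the plate's position in two frames taken dt_s apart."""
    d = np.linalg.norm(plate_world_point(H, px_t1) - plate_world_point(H, px_t0))
    return d / dt_s * 3.6
```

    Once the plate corners are localized in two frames, the same transform converts pixel displacement into metric displacement, from which speed follows directly.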

  • A New Efficient Stereo Line Segment Matching Algorithm Based on More Effective Usage of the Photometric, Geometric and Structural Information

    Ghader KARIMIAN  Abolghasem A. RAIE  Karim FAEZ  

     
    PAPER-Stereo and Multiple View Analysis

      Vol:
    E89-D No:7
      Page(s):
    2012-2020

    In this paper, a new stereo line segment matching algorithm is presented. Its main purpose is to increase efficiency, i.e., to increase the number of correctly matched lines while avoiding additional mismatches. To this end, the reasons why some existing algorithms eliminate correct matches or retain erroneous ones have been investigated. An attempt was also made to use the photometric, geometric and structural information efficiently through the introduction of new constraints, criteria and procedures. Hence, in the candidate determination stage of the designed algorithm, two new constraints were employed in addition to the reliable epipolar, maximum and minimum disparity, and orientation similarity constraints. The process of disambiguation and final match selection, the main difficulty of the matching problem, is a completely new development with respect to the employed constraints, the criterion function and its optimization. The algorithm was applied to images of several indoor scenes, and its high efficiency is illustrated by the correct matching of 96% of the line segments with no mismatches.
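    The candidate determination stage can be sketched as a constraint filter over line segments. The thresholds and the rectified-image assumption (horizontal epipolar lines) are illustrative, and the paper's two new constraints are not reproduced here.

```python
def candidate_matches(left_lines, right_lines, d_min=0, d_max=60, max_dtheta=10):
    """For each left-image segment, keep the right-image segments that
    satisfy the epipolar, disparity-range and orientation-similarity
    constraints. Each line is (x_mid, y_mid, theta_deg)."""
    cands = {}
    for i, (xl, yl, tl) in enumerate(left_lines):
        cands[i] = [
            j for j, (xr, yr, tr) in enumerate(right_lines)
            if abs(yl - yr) <= 2                  # epipolar band (rectified)
            and d_min <= xl - xr <= d_max         # disparity limits
            and abs(tl - tr) <= max_dtheta        # orientation similarity
        ]
    return cands
```

    The surviving candidate sets are what a disambiguation stage would then prune to one match per segment.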

  • Design of RBF Neural Network Using An Efficient Hybrid Learning Algorithm with Application in Human Face Recognition with Pseudo Zernike Moment

    Javad HADDADNIA  Karim FAEZ  Majid AHMADI  Payman MOALLEM  

     
    PAPER-Biocybernetics, Neurocomputing

      Vol:
    E86-D No:2
      Page(s):
    316-325

    This paper presents an efficient Hybrid Learning Algorithm (HLA) for the Radial Basis Function Neural Network (RBFNN). The HLA combines the gradient method and the linear least squares method for adjusting the RBF parameters and connection weights. The number of hidden neurons and their characteristics are determined using an unsupervised clustering procedure, and are used as input parameters to the learning algorithm. We demonstrate that the HLA, while providing faster convergence in the training phase, is also less sensitive to the training and testing patterns. The proposed HLA in conjunction with the RBFNN is used as a classifier in a face recognition system to show the usefulness of the learning algorithm. The inputs to the RBFNN are feature vectors obtained by combining shape information and Pseudo Zernike Moments (PZM). Simulation results on the Olivetti Research Laboratory (ORL) database and comparison with other algorithms indicate that the HLA yields an excellent recognition rate with fewer hidden neurons in human face recognition.
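    The hybrid idea, unsupervised clustering for the hidden layer followed by a linear least-squares solve for the output weights, can be sketched as follows. The gradient refinement step and the PZM features are omitted, and the tiny k-means below is a stand-in for the paper's clustering procedure; k and sigma are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means to place the RBF centres."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return C

def train_rbf(X, Y, k=8, sigma=1.0):
    """Hidden centres from clustering, output weights by least squares."""
    C = kmeans(X, k)
    Phi = np.exp(-((X[:, None] - C) ** 2).sum(-1) / (2 * sigma ** 2))
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return C, W

def rbf_predict(X, C, W, sigma=1.0):
    Phi = np.exp(-((X[:, None] - C) ** 2).sum(-1) / (2 * sigma ** 2))
    return Phi @ W
```

    Solving the output layer linearly is what makes the scheme fast: only the centres need any iterative fitting.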

  • GA-Based Affine PPM Using Matrix Polar Decomposition

    Mehdi EZOJI  Karim FAEZ  Hamidreza RASHIDY KANAN  Saeed MOZAFFARI  

     
    PAPER-Pattern Discrimination and Classification

      Vol:
    E89-D No:7
      Page(s):
    2053-2060

    Point pattern matching (PPM) arises in areas such as pattern recognition, digital video processing and computer vision. In this study, a novel genetic algorithm (GA) based method for matching affine-related point sets is described. The most common techniques for solving the PPM problem consist of determining the correspondence between points localized spatially within two sets and then finding the proper transformation parameters using a set of equations. In this paper, we use the fact that the correspondence and transformation matrices are the two unitary polar factors of Grammian matrices. We estimate one of these factors from the GA's population and then evaluate this estimate by computing an error function using the other factor. The approach is easy to implement and, thanks to the GA, its computational complexity is lower than that of other known methods. Simulation results on synthetic and real point patterns with varying amounts of noise confirm that the algorithm is very effective.
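    The polar factors the method relies on can be computed from the SVD. Below is a minimal sketch of the decomposition itself; the GA search over correspondences is not reproduced.

```python
import numpy as np

def polar(A):
    """Polar decomposition A = U @ P, with U unitary (orthogonal for
    real A) and P symmetric positive semi-definite, via the SVD."""
    Us, s, Vt = np.linalg.svd(A)
    U = Us @ Vt                       # unitary polar factor
    P = Vt.T @ np.diag(s) @ Vt        # PSD polar factor
    return U, P
```

    The unitary factor is the closest orthogonal matrix to A in the Frobenius norm, which is why it is a natural estimate of a rotation or correspondence component.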

  • Illumination-Robust Face Recognition from a Single Image per Person Using Matrix Polar Decomposition

    Mehdi EZOJI  Karim FAEZ  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E92-D No:8
      Page(s):
    1561-1569

    In this paper, a novel illumination-invariant face recognition algorithm is proposed. The algorithm is composed of two phases. In the first phase, we reduce the effect of illumination changes using a nonlinear mapping of image intensities, and then modify the distribution of the wavelet transform coefficients in certain sub-bands; in this step, recognition performance matters more than image quality. In the second phase, we use the unitary factor of the polar decomposition of the enhanced image as a feature vector. In the recognition phase, the correlation-based nearest neighbor rule is applied for matching. We performed experiments on several databases and evaluated the proposed method from different aspects. Experimental recognition results show that this approach provides a suitable representation for overcoming illumination effects.
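    The matching stage can be sketched with a correlation-based nearest neighbour over normalized images. The gamma mapping below is only a stand-in for the paper's nonlinear intensity mapping, and the wavelet and polar-decomposition steps are omitted.

```python
import numpy as np

def illum_normalize(img, gamma=0.2):
    """Nonlinear intensity mapping (gamma compression here) to reduce
    illumination variation before feature extraction."""
    x = img.astype(float) / 255.0
    return x ** gamma

def corr_nn(probe, gallery):
    """Correlation-based nearest neighbour: index of the gallery image
    with the highest normalized correlation to the probe."""
    p = probe.ravel() - probe.mean()
    scores = []
    for g in gallery:
        q = g.ravel() - g.mean()
        scores.append((p @ q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))
    return int(np.argmax(scores))
```

    Normalized correlation is itself invariant to affine brightness and contrast changes, which complements the illumination normalization.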

  • Seismic Events Discrimination Using a New FLVQ Clustering Model

    Payam NASSERY  Karim FAEZ  

     
    PAPER-Pattern Recognition

      Vol:
    E83-D No:7
      Page(s):
    1533-1539

    In this paper, the LVQ (learning vector quantization) model and its variants are used as clustering tools to discriminate natural seismic events (earthquakes) from artificial ones (nuclear explosions). The study is based on six spectral features of the P-wave spectra computed from short-period teleseismic recordings. The conventional LVQ proposed by Kohonen and the fuzzy LVQ (FLVQ) models proposed by Sakuraba and by Bezdek are all tested on a set of 26 earthquakes and 24 nuclear explosions using the leave-one-out testing strategy. The primary experimental results showed that the shapes, the number and the overlaps of the clusters play an important role in seismic classification. The results also showed how an improper partitioning of the feature space strongly weakens both the clustering and recognition phases. To improve the numerical results, a new combined FLVQ algorithm is employed in this paper. The algorithm is composed of two nested sub-algorithms. The inner sub-algorithm tries to generate a well-defined fuzzy partitioning of the feature space with fuzzy reference vectors. To achieve this goal, a cost function is defined in terms of the number, the shapes and the overlaps of the fuzzy reference vectors, and the update rule minimizes this cost function in a stepwise learning algorithm. The outer sub-algorithm, in turn, tries to find an optimum value for the number of clusters at each step. For this optimization in the outer loop, we have used two different criteria: in the first, the newly defined "fuzzy entropy" is used, while in the second, a performance index is employed by generalizing the Huntsberger formula for the learning rate using the concept of fuzzy distance. The experimental results of the new model show a promising improvement in the error rate, an acceptable convergence time, and more flexible decision boundaries.
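    The crisp LVQ1 baseline that the fuzzy variants build on can be sketched as follows. The prototype counts, learning-rate schedule and two-class test data are illustrative; the fuzzy reference vectors and the nested optimization of the paper are not reproduced.

```python
import numpy as np

def lvq1_train(X, y, n_proto_per_class=2, lr=0.1, epochs=30, seed=0):
    """Kohonen's LVQ1: move the winning prototype toward a sample of
    the same class, and away from it otherwise."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_proto_per_class, replace=False)
        protos.extend(X[idx]); labels.extend([c] * n_proto_per_class)
    P, L = np.array(protos, float), np.array(labels)
    for e in range(epochs):
        a = lr * (1 - e / epochs)                 # decaying learning rate
        for i in rng.permutation(len(X)):
            w = np.argmin(((P - X[i]) ** 2).sum(1))
            step = a * (X[i] - P[w])
            P[w] += step if L[w] == y[i] else -step
    return P, L

def lvq_predict(X, P, L):
    return L[np.argmin(((X[:, None] - P) ** 2).sum(-1), axis=1)]
```

    The fuzzy variants replace the single winner-take-all update with membership-weighted updates of several reference vectors.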

  • Adaptive Script-Independent Text Line Extraction

    Majid ZIARATBAN  Karim FAEZ  

     
    PAPER-Pattern Recognition

      Vol:
    E94-D No:4
      Page(s):
    866-877

    In this paper, an adaptive block-based text line extraction algorithm is proposed. Three global and two local parameters are defined to adapt the method to various handwritings in different languages. A document image is segmented into several overlapping blocks, and the skew of each block is estimated. Each text block is de-skewed using the estimated skew angle, and text regions are detected in the de-skewed block. A number of data points are extracted from the detected text regions in each block and used to estimate the paths of the text lines. By thinning the background of the image that includes the text line paths, the text line boundaries, or separators, are estimated. Furthermore, an algorithm is proposed to assign to the extracted text lines those connected components that intersect the estimated separators. Extensive experiments on standard datasets in various languages demonstrate that the proposed algorithm outperforms previous methods.
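    The per-block skew estimation from text-region data points can be sketched as a least-squares line fit through component centroids. This is a simplification of the paper's procedure, and the centroid input is an assumption.

```python
import numpy as np

def block_skew(centroids):
    """Estimate a block's skew angle (degrees) by least-squares fitting
    a line through the centroids of its text components."""
    x, y = centroids[:, 0], centroids[:, 1]
    slope, _ = np.polyfit(x, y, 1)
    return np.degrees(np.arctan(slope))

def deskew_points(centroids, angle_deg):
    """Rotate the points by -angle so the fitted text line is horizontal."""
    t = np.radians(-angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return centroids @ R.T
```

    After de-skewing, text regions in the block line up horizontally, which is what makes the subsequent path estimation reliable.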

  • Fast Edge-Based Stereo Matching Algorithms through Search Space Reduction

    Payman MOALLEM  Karim FAEZ  Javad HADDADNIA  

     
    PAPER-Image Processing, Image Pattern Recognition

      Vol:
    E85-D No:11
      Page(s):
    1859-1871

    Finding corresponding edges is considered the most difficult part of edge-based stereo matching algorithms. Usually, the correspondence for a feature point in the first image is obtained by searching a predefined region of the second image, defined by the epipolar line and the maximum disparity. Reducing the search region can improve the matching process in both execution time and accuracy. Traditionally, hierarchical multiresolution techniques, as the fastest methods, are used to decrease the search space and thereby increase the processing speed. Considering the maximum directional derivative of disparity in real scenes, we formulate relations between the maximum search space in the second image and the relative displacement of connected edges (the feature points) in successive scan lines of the first image. We then propose a new matching strategy that reduces the search space for edge-based stereo matching algorithms, and develop several fast stereo matching algorithms based on the proposed strategy and the hierarchical multiresolution techniques. The proposed algorithms have two stages: feature extraction and feature matching. We applied the new algorithms to several stereo images and compared their results with those of some hierarchical multiresolution ones. The execution times of our proposed methods are decreased by 30% to 55% in the feature matching stage, and the execution time of the overall algorithms (feature extraction plus feature matching) is decreased by 15% to 40% in real scenes; in some cases the accuracy is increased as well. Theoretical investigation and experimental results show that our algorithms perform very well on real, complex scenes, making them well suited to fast edge-based stereo applications such as robotics.
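    The search-space reduction can be sketched as a disparity interval derived from the match found on the previous scan line. The bound below (gradient limit times horizontal displacement, plus one for the row step) is an illustrative reading of the idea, not the paper's exact formula.

```python
import numpy as np

def reduced_search_range(prev_disparity, dx, max_disp_grad=1.0,
                         d_min=0, d_max=63):
    """Disparity search interval for an edge connected to one matched in
    the previous scan line, given the maximum directional derivative of
    disparity and the edge's horizontal displacement dx between rows."""
    spread = max_disp_grad * (abs(dx) + 1)    # +1 accounts for the row step
    lo = max(d_min, int(np.floor(prev_disparity - spread)))
    hi = min(d_max, int(np.ceil(prev_disparity + spread)))
    return lo, hi
```

    A connected edge that moved two pixels between rows is then searched over seven disparities instead of the full 64-value range, which is where the speed-up comes from.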