
Author Search Result

[Author] Boualem BOASHASH (5 hits)

1-5 of 5 hits
  • A Probabilistic Approach for Automatic Parameters Selection for the Hybrid Edge Detector

    Mohammed BENNAMOUN  Boualem BOASHASH  

     
    PAPER

    Vol: E80-A No:8, Page(s): 1423-1429

    We previously proposed a robust hybrid edge detector which relaxes the trade-off between robustness against noise and accurate localization of the edges. This hybrid detector separates the tasks of localization and noise suppression between two sub-detectors. In this paper, we present an extension to this hybrid detector to determine its optimal parameters, independently of the scene. This extension defines a probabilistic cost function using as criteria the probability of missing an edge buried in noise and the probability of detecting false edges. The optimization of this cost function allows the automatic selection of the parameters of the hybrid edge detector, given the height of the minimum edge to be detected and the variance of the noise, σ_n^2. The results were applied to the 2D case and the performance of the adaptive hybrid detector was compared to that of other detectors.
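    The following is a minimal illustrative sketch of this kind of probabilistic parameter selection, not the authors' cost function: it assumes a single-threshold detector applied to an edge response corrupted by zero-mean Gaussian noise, computes the miss and false-alarm probabilities from the normal CDF given the minimum edge height h and the noise standard deviation σ_n, and grid-searches the threshold that minimises their weighted sum. All names, the single-threshold model and the weighting are assumptions made for this example.

```python
# Illustrative sketch only: probabilistic threshold selection in the spirit of
# the abstract, NOT the authors' actual cost function.  We assume the detector
# thresholds a noisy edge response, with the minimum edge height h and the
# noise standard deviation sigma_n treated as known.
import math

def gaussian_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cost(threshold, h, sigma_n, w_miss=1.0, w_false=1.0):
    # P(miss): response h + noise falls below the threshold.
    p_miss = gaussian_cdf((threshold - h) / sigma_n)
    # P(false alarm): pure noise exceeds the threshold.
    p_false = 1.0 - gaussian_cdf(threshold / sigma_n)
    return w_miss * p_miss + w_false * p_false

def select_threshold(h, sigma_n, steps=1000):
    """Grid search for the threshold minimising the probabilistic cost."""
    candidates = [i * h / steps for i in range(steps + 1)]
    return min(candidates, key=lambda t: cost(t, h, sigma_n))

if __name__ == "__main__":
    # Example: minimum edge height of 10 grey levels, noise std of 2.
    print(select_threshold(h=10.0, sigma_n=2.0))
```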

  • Identification of a Class of Time-Varying Nonlinear System Based on the Wiener Model with Application to Automotive Engineering

    Jonathon C. RALSTON  Abdelhak M. ZOUBIR  Boualem BOASHASH  

     
    PAPER

    Vol: E78-A No:9, Page(s): 1192-1200

    We consider the identification of a class of systems which are both time-varying and nonlinear. Time-varying nonlinear systems are often encountered in practice, but tend to be avoided due to the difficulties that arise in modelling and estimation. We study a particular time-varying polynomial model, which is a member of the class of time-varying Wiener models. The model can characterise both time-variation and nonlinearity in a straightforward manner, without requiring an excessively large number of coefficients. We formulate a procedure to find least-squares estimates of the model coefficients. An advantage of the approach is that systems with rapidly changing dynamics can be characterised. In addition, we do not require that the input is stationary or Gaussian. The approach is validated with an application to an automobile modelling problem, where a time-varying nonlinear model is seen to more accurately characterise the system than a time-invariant nonlinear one.
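    A minimal sketch of the general idea, fitting a time-varying polynomial model by ordinary least squares: the time-varying coefficients are expanded on a small polynomial basis in time, so the problem stays linear in the unknowns and the coefficient count stays modest. The basis expansion, the model orders and all function names below are assumptions for this illustration, not the parameterisation used in the paper.

```python
# Illustrative sketch: fitting a time-varying polynomial (Wiener-type) model by
# linear least squares.  The polynomial-in-time basis expansion of the
# coefficients is an assumption made for this example.
import numpy as np

def design_matrix(x, poly_order=2, time_basis=3):
    """Columns are t^m * x(t)^p for p = 1..poly_order, m = 0..time_basis-1."""
    n = len(x)
    t = np.linspace(0.0, 1.0, n)              # normalised time
    cols = []
    for p in range(1, poly_order + 1):
        for m in range(time_basis):
            cols.append((t ** m) * (x ** p))
    return np.column_stack(cols)

def fit_tv_polynomial(x, y, poly_order=2, time_basis=3):
    """Least-squares estimate of the time-varying polynomial coefficients."""
    A = design_matrix(x, poly_order, time_basis)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)              # input need not be stationary or Gaussian
    t = np.linspace(0.0, 1.0, 500)
    # Synthetic time-varying nonlinear system: y = (1+t)x + 0.5 t x^2 + noise.
    y = (1.0 + t) * x + 0.5 * t * x**2 + 0.05 * rng.standard_normal(500)
    print(fit_tv_polynomial(x, y))
```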

  • Signal Dependent Time-Frequency and Time-Scale Signal Representations Designed Using the Radon Transform

    Branko RISTIC  Boualem BOASHASH  

     
    PAPER

    Vol: E78-A No:9, Page(s): 1170-1177

    Time-frequency representations (TFRs) have been developed as tools for the analysis of non-stationary signals. Signal-dependent TFRs are known to perform well for a much wider range of signals than any fixed (signal-independent) TFR. This paper describes customised and sequential versions of the signal-dependent TFR proposed in [1]. The method, which is based on the use of the Radon transform at distance zero in the ambiguity domain, is simple and effective in dealing with both simulated and real data. The use of the described method for time-scale analysis is also presented. In addition, the paper investigates a simple technique for the detection of noisy chirp signals using the Radon transform in the ambiguity domain.
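    As a rough illustration of the underlying idea (not the method of [1]), the sketch below forms a discrete ambiguity function and integrates its magnitude along lines through the origin, i.e. the Radon transform restricted to distance zero; for a linear FM (chirp) signal the auto-terms concentrate along one such line, so the angular profile peaks at the chirp's angle. The circular-shift ambiguity estimate and nearest-neighbour line sampling are simplifying assumptions.

```python
# Illustrative sketch: "Radon transform at distance zero" of the ambiguity
# function, used here as a crude noisy-chirp detector.  Not the algorithm of [1].
import numpy as np

def ambiguity_function(s):
    """Discrete symmetric-lag ambiguity function: FFT over time of the
    instantaneous autocorrelation s(t+tau) s*(t-tau) (circular shifts used
    for brevity)."""
    n = len(s)
    lags = np.arange(-(n // 2), n // 2)
    K = np.zeros((len(lags), n), dtype=complex)
    for i, tau in enumerate(lags):
        K[i] = np.roll(s, -tau) * np.conj(np.roll(s, tau))
    # FFT over the time axis gives the Doppler axis; shift the origin to the centre.
    return np.fft.fftshift(np.fft.fft(K, axis=1), axes=1)

def radon_through_origin(af, n_angles=180):
    """Integrate |AF| along lines through the origin (distance zero),
    one line integral per angle, with nearest-neighbour sampling."""
    rows, cols = af.shape
    cy, cx = rows // 2, cols // 2
    radii = np.arange(-min(cy, cx) + 1, min(cy, cx))
    profile = np.zeros(n_angles)
    for k, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        r = (cy + radii * np.sin(theta)).astype(int)
        c = (cx + radii * np.cos(theta)).astype(int)
        profile[k] = np.abs(af[r, c]).sum()
    return profile

if __name__ == "__main__":
    n = 256
    t = np.arange(n)
    chirp = np.exp(1j * 2 * np.pi * (0.05 * t + 0.4 * t**2 / (2 * n)))
    noisy = chirp + 0.5 * (np.random.randn(n) + 1j * np.random.randn(n))
    profile = radon_through_origin(ambiguity_function(noisy))
    print("dominant angle index:", int(profile.argmax()))
```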

  • Fingerprint Compression Using Wavelet Packet Transform and Pyramid Lattice Vector Quantization

    Shohreh KASAEI  Mohamed DERICHE  Boualem BOASHASH  

     
    PAPER

    Vol: E80-A No:8, Page(s): 1446-1452

    A new compression algorithm for fingerprint images is introduced. A modified wavelet packet scheme, which uses a fixed decomposition structure matched to the statistics of fingerprint images, is used. Based on statistical studies of the subbands, different compression techniques are chosen for different subbands. The decision is based on the effect of each subband on the reconstructed image, taking into account the characteristics of the Human Visual System (HVS). A noise-shaping bit allocation procedure which considers the HVS is then used to assign the bit rate among subbands. Using Lattice Vector Quantization (LVQ), a new technique for determining the largest radius of the lattice and its scaling factor is presented. The design is based on obtaining the smallest possible Expected Total Distortion (ETD) measure for the given bit budget. At low bit rates, for the coefficients with high-frequency content, we propose the Positive-Negative Mean (PNM) algorithm to improve the resolution of the reconstructed image. Furthermore, for the coefficients with low-frequency content, a lossless predictive compression scheme is developed. The proposed algorithm results in a high compression ratio and high reconstructed image quality with a low computational load compared to other available algorithms.
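    The sketch below illustrates only the lattice-vector-quantization ingredient, assuming the D4 lattice and a fixed scaling factor: 4-D blocks of coefficients are mapped to the nearest lattice point with the standard Conway-Sloane rounding rule. The subband decomposition, HVS-based bit allocation, radius selection and PNM refinement described in the abstract are not reproduced, and all names here are illustrative.

```python
# Illustrative sketch of lattice vector quantisation on the D4 lattice with a
# fixed scaling factor; not the paper's full coder.
import numpy as np

def nearest_d4(v):
    """Nearest point of the D4 lattice (integer vectors with even coordinate
    sum) to v, using the Conway-Sloane rounding rule."""
    r = np.rint(v)
    if int(r.sum()) % 2 != 0:
        # Re-round the coordinate with the largest rounding error the other
        # way so that the coordinate sum becomes even.
        i = int(np.argmax(np.abs(v - r)))
        r[i] += 1.0 if v[i] > r[i] else -1.0
    return r

def lvq_encode(coeffs, scale):
    """Quantise 4-D blocks of subband coefficients to scaled D4 lattice points."""
    blocks = coeffs.reshape(-1, 4)
    return np.array([nearest_d4(b / scale) for b in blocks])

def lvq_decode(points, scale):
    """Map lattice points back to coefficient values."""
    return (points * scale).reshape(-1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    coeffs = rng.normal(scale=4.0, size=64)   # stand-in subband coefficients
    points = lvq_encode(coeffs, scale=2.0)
    reconstructed = lvq_decode(points, scale=2.0)
    print("mean squared error:", float(np.mean((coeffs - reconstructed) ** 2)))
```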

  • A Contour-Based Part Segmentation Algorithm

    Mohammed BENNAMOUN  Boualem BOASHASH  

     
    PAPER-Image Theory

    Vol: E80-A No:8, Page(s): 1516-1521

    Within the framework of a previously proposed vision system, a new part-segmentation algorithm, which breaks an object defined by its contour into its constituent parts, is presented. The contour is assumed to be obtained using an edge detector. This decomposition is achieved in two stages. The first stage is a preprocessing step which consists of extracting the convex dominant points (CDPs) of the contour. To this end, we present a new technique which relaxes the compromise that exists in most classical methods for the selection of the width of the Gaussian filter. In the subsequent stage, the extracted CDPs are used to break the object into convex parts. This is performed as follows: among all the points of the contour, only the CDPs are moved along their normals until they touch another moving CDP or a point on the contour. The results show that this part-segmentation algorithm is invariant to transformations such as rotation, scaling and shift in position of the object, which is very important for object recognition. The algorithm has been tested on many object contours, with and without noise, and the advantages of the algorithm are listed in this paper. Our results are visually similar to a human intuitive decomposition of objects into their parts.
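    A simplified sketch of the preprocessing stage only: convex dominant points are taken here as positive-curvature maxima of a contour smoothed with a fixed-width Gaussian (the very compromise the paper's technique is designed to relax). The second stage, which moves the CDPs along their normals, is not reproduced; the smoothing width and function names are assumptions.

```python
# Illustrative sketch: extracting convex dominant points (CDPs) of a closed
# contour as positive-curvature maxima after fixed-width Gaussian smoothing.
# This is a simplification, not the paper's CDP-extraction technique.
import numpy as np

def gaussian_smooth_closed(values, sigma):
    """Circular Gaussian smoothing of a 1-D signal sampled on a closed contour."""
    n = len(values)
    k = np.arange(n) - n // 2
    g = np.exp(-0.5 * (k / sigma) ** 2)
    g /= g.sum()
    # Circular convolution via the FFT; ifftshift places the kernel centre at index 0.
    return np.real(np.fft.ifft(np.fft.fft(values) * np.fft.fft(np.fft.ifftshift(g))))

def convex_dominant_points(x, y, sigma=3.0):
    """Indices of curvature maxima with positive (convex) curvature for a
    counter-clockwise closed contour (x, y)."""
    xs, ys = gaussian_smooth_closed(x, sigma), gaussian_smooth_closed(y, sigma)
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return [i for i in range(len(kappa))
            if kappa[i] > 0
            and kappa[i] >= kappa[i - 1]
            and kappa[i] >= kappa[(i + 1) % len(kappa)]]

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    # A peanut-shaped contour: two convex lobes joined by concave necks.
    r = 1.0 + 0.4 * np.cos(2 * t)
    print(convex_dominant_points(r * np.cos(t), r * np.sin(t)))
```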