About the series
Signal compression and statistical classification share many goals and properties, both in theory and in practice. Moreover, both theory and practice suggest that the design of good systems for signal processing and coding is intimately connected to the design of good probabilistic models for the sources producing real signals such as speech and images. The goal of this talk is to describe common aspects of the title areas, to sketch relevant classic and recent theoretical results, especially from information theory, quantization theory, and statistical clustering, and to consider applications to image coding, segmentation, modeling, and database browsing. A common thread in both theory and algorithms is the use of relative entropy (or Kullback-Leibler divergence) as a measure of the distance between probability distributions, both in deriving performance bounds and in clustering algorithms for compression, classification, and modeling. Content-addressable browsing of image databases incorporating Gauss mixture models will be considered as an illustration of the general ideas.
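As a concrete illustration of the common thread mentioned above (a minimal sketch, not drawn from the talk itself), the relative entropy D(p || q) between two discrete probability distributions can be computed directly from its definition:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) = sum_i p_i * log(p_i / q_i), in nats.

    Note that it is asymmetric (D(p || q) != D(q || p) in general),
    so it is a "distance" only in a loose sense; terms with p_i = 0
    contribute zero by the convention 0 * log 0 = 0.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two example distributions over a three-symbol alphabet.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print(kl_divergence(p, q))  # a small positive value; zero only when p == q
```

Relative entropy is always nonnegative and vanishes exactly when the two distributions coincide, which is what makes it usable as a distortion measure in the clustering algorithms the talk refers to.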