An Image Generalization Technique – The Key to Finding Similar Images
Wojciech Tarnawski, Roman Pawlikowski, Krzysztof Ociepa
(Wroclaw University of Technology, Poland)
Abstract:
We present a complete image retrieval system that includes a novel
segmentation technique tailored to acquiring a very generalized view of
the analyzed natural images and to assigning more reliable annotations
to them. The
philosophy behind our approach to finding similar images is introduced at
the beginning of the presentation. The segmentation method follows the
principle of clustering the regions visible in the image to obtain the
most meaningful image regions. The concept is based on a multiscale
approach (anisotropic diffusion) followed by a mean-shift segmentation
procedure. The information gained from the diffusion and the subsequent
mean-shift segmentation results is accumulated, and a new image
visualizing the generalized effect is produced. Next, to
attain the desired level of generalization, smaller regions in the image
are merged into larger ones, taking into account their areas and their
co-occurrence in the planar space of the image. This unifying process
produces a very high-level view of the image, with only a small number of
regions to consider during feature extraction. The extracted features,
which include color, texture, and shape, are a subset
of the MPEG-7 visual descriptors. The actual task of finding similar
images consists of comparing the regions of a query image with the
regions of images held in the database and determining their level of
similarity. Two proposed similarity metrics between images are
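The abstract does not spell out the two metrics, but a common baseline for region-based comparison is to match each query region to its nearest counterpart in the database image and average the distances. The sketch below illustrates that idea only; the function names, the Euclidean distance, and the averaging scheme are illustrative assumptions, not the authors' actual metrics.

```python
import math

def region_distance(f1, f2):
    """Euclidean distance between two region feature vectors
    (e.g., concatenated MPEG-7 color/texture/shape descriptors)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def image_similarity(query_regions, db_regions):
    """One plausible region-based dissimilarity: match every query
    region to its closest region in the database image and average
    the distances. Lower values mean more similar images."""
    if not query_regions or not db_regions:
        return float("inf")
    total = sum(min(region_distance(q, d) for d in db_regions)
                for q in query_regions)
    return total / len(query_regions)
```

With identical region sets the score is 0, and it grows as the best-matching regions drift apart in feature space.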
presented. Many examples illustrating the intermediate results obtained
while performing the above-mentioned steps are also included.
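The abstract names anisotropic diffusion as the multiscale smoothing step but gives no formulas. A minimal sketch of the classic Perona-Malik scheme, one standard form of anisotropic diffusion, is shown below; the parameter values, the exponential edge-stopping function, and the wrap-around boundary handling are my assumptions, not details taken from the paper.

```python
import numpy as np

def perona_malik(img, iterations=10, kappa=0.1, step=0.2):
    """Perona-Malik anisotropic diffusion: smooths flat areas while
    preserving strong edges, producing progressively generalized views
    suitable for subsequent segmentation. `img` is a 2-D float array;
    boundaries wrap for brevity."""
    u = img.astype(float).copy()
    for _ in range(iterations):
        # Finite differences toward the four neighbours.
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g(|grad u|) = exp(-(|grad u|/kappa)^2):
        # near 1 in flat regions (strong smoothing), near 0 at edges.
        cn = np.exp(-(n / kappa) ** 2)
        cs = np.exp(-(s / kappa) ** 2)
        ce = np.exp(-(e / kappa) ** 2)
        cw = np.exp(-(w / kappa) ** 2)
        u += step * (cn * n + cs * s + ce * e + cw * w)
    return u
```

Running the function repeatedly with a growing iteration count yields the coarse-to-fine image stack whose mean-shift segmentations the method accumulates.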