08-11-2012, 01:21 PM
A seminar report on CONTENT-BASED IMAGE RETRIEVAL
INTRODUCTION:
Content-based image retrieval is a branch of image processing that also draws on artificial intelligence. Interest in digital images is growing day by day: users in many professional fields are exploiting the opportunities offered by the ability to access and manipulate remotely stored images in all kinds of new and exciting ways. The problems of image retrieval are becoming widely recognized, and the search for solutions is an increasingly active area of research and development. Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision to the image retrieval problem, that is, the problem of searching for digital images in large databases.
BACKGROUND:
The use of images and pictures in communication is not new: our cave-dwelling ancestors painted pictures on the walls of their caves, and the use of maps and building plans to convey information almost certainly dates back to pre-Roman times. But the twentieth century has witnessed unparalleled growth in the number, availability and importance of images in daily life. They now play an important role in fields as diverse as medicine, journalism, advertising, design, education and entertainment. The term CBIR seems to have originated in 1992, when it was used by T. Kato to describe experiments in automatic retrieval of images from a database based on the colors and shapes present. Since then, the term has been used to describe the process of retrieving desired images from a large collection on the basis of syntactical image features. The techniques, tools and algorithms used originate from fields such as statistics, pattern recognition, signal processing, and computer vision.
NEED FOR CBIR:
In today’s world, more and more multimedia information, such as images, audio and video, is stored in databases. An image can be more effective than a substantial amount of text; as the saying goes, a picture is worth a thousand words. This also aptly characterizes the goal of visualization, where large amounts of data must be absorbed quickly. If we use text to retrieve or search for an image, two problems arise: we cannot write a complete description of every image, and when an image contains geographical data (as in Google Earth, for example), manual keyword description is not practical. Keyword descriptions are also culture- and language-dependent, and keyword matching will not always return the most relevant images.
TECHNIQUES USED IN CBIR:
Fig. 1: CBIR techniques
CBIR operates on a quite different principle, retrieving stored images from a collection by comparing features automatically extracted from the images themselves. The commonest features used are mathematical measures of color, texture or shape. A typical CBIR system allows users to formulate queries by submitting an example of the type of image being sought, though some systems offer alternatives such as selecting colors or textures from a palette, or sketch input. The system then identifies those stored images whose feature values most closely match those of the query and displays thumbnails of these images on the screen, as in Fig. 1.
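The query loop described above can be sketched in a few lines. This is a toy illustration, not a real system: images are plain 2-D lists of (r, g, b) tuples, and mean_color is a hypothetical stand-in for a genuine feature extractor such as a histogram or texture measure.

```python
def mean_color(image):
    """Average (r, g, b) over all pixels: a toy 3-D feature vector."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    return tuple(sum(px[c] for px in pixels) / n for c in range(3))

def distance(f1, f2):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5

def retrieve(query_image, database, top_k=3):
    """Rank stored images by feature distance to the query."""
    q = mean_color(query_image)
    ranked = sorted(database, key=lambda name: distance(q, database[name]))
    return ranked[:top_k]

# Index time: features are extracted once and stored with each image.
red   = [[(255, 0, 0)] * 4] * 4
green = [[(0, 255, 0)] * 4] * 4
blue  = [[(0, 0, 255)] * 4] * 4
db = {"red.png": mean_color(red),
      "green.png": mean_color(green),
      "blue.png": mean_color(blue)}

# Query time: a reddish example image should rank red.png first.
query = [[(250, 10, 10)] * 4] * 4
print(retrieve(query, db, top_k=2))
```

The key point is the split between index time (features extracted once and stored) and query time (only feature vectors are compared, never the raw images).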
Colour retrieval
Each image added to the collection is analyzed to compute a color histogram, a representation of the distribution of colors in the image. For digital images, this is essentially the number of pixels with colors in each of a fixed list of color ranges that span the image's color space (the set of all possible colors); it shows the proportion of pixels of each color within the image. The color histogram for each image is then stored in the database. At search time, the user can either specify the desired proportion of each color (75% olive green and 25% red, for example), if the system offers that option, or submit an example image from which a color histogram is calculated. The matching process then retrieves those images whose color histograms most closely match that of the query. The results from some of these systems can look quite impressive.
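A minimal sketch of the histogram step above: quantize each pixel's (r, g, b) into a fixed list of ranges, count pixels per bin, normalize to proportions, and compare histograms. The bin count and the intersection similarity measure are illustrative choices, not the report's prescribed ones.

```python
def color_histogram(image, bins_per_channel=4):
    """Normalized histogram over bins_per_channel**3 color ranges."""
    step = 256 // bins_per_channel
    counts = [0] * (bins_per_channel ** 3)
    pixels = [px for row in image for px in row]
    for r, g, b in pixels:
        # Map the pixel to one bin index spanning the RGB color space.
        idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
        counts[idx] += 1
    total = len(pixels)
    return [c / total for c in counts]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

img_a = [[(200, 30, 30)] * 8] * 8   # mostly red
img_b = [[(210, 40, 20)] * 8] * 8   # similar reds, falls in the same bins
img_c = [[(20, 200, 30)] * 8] * 8   # green
ha, hb, hc = (color_histogram(i) for i in (img_a, img_b, img_c))
print(intersection(ha, hb) > intersection(ha, hc))  # True
```

Because the histogram entries are proportions, the comparison is independent of image size, which is why a small query sketch can be matched against full-size stored images.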
Texture retrieval
What if the images are of the same color? This is where texture comes in. The ability to retrieve images on the basis of texture similarity may not seem very useful on its own, but matching on texture can often help to distinguish between areas of images with similar color (such as sky and sea, or leaves and grass). A variety of techniques has been used for measuring texture similarity; the best-established rely on comparing values of what are known as second-order statistics calculated from query and stored images. Essentially, these compare the relative brightness of selected pairs of pixels from each image. From these it is possible to calculate measures of image texture such as the degree of contrast, coarseness, homogeneity and regularity [Tamura et al, 1978], or periodicity, correlation and entropy [Liu and Picard, 1996]. Alternative methods of texture analysis for retrieval include the use of Gabor filters, now a widely used technique. Texture queries can be formulated in a similar manner to color queries, by selecting examples of desired textures from a palette or by supplying an example query image. The system then retrieves images with similar texture measures. A more recent technique is the texture thesaurus developed by Ma and Manjunath [1998], which retrieves textured regions in images on the basis of similarity to automatically derived codewords representing important classes of texture within the collection.
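The second-order statistics mentioned above can be sketched with a grey-level co-occurrence matrix over pairs of adjacent pixels. This toy version uses only horizontally adjacent pairs and a handful of grey levels; real systems combine several pixel offsets and many more levels.

```python
def glcm(image, levels=4):
    """Co-occurrence probabilities for horizontally adjacent grey levels."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m) or 1
    return [[c / total for c in r] for r in m]

def contrast(p):
    """High when adjacent pixels differ strongly in brightness."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def homogeneity(p):
    """High for smooth regions where adjacent pixels are similar."""
    n = len(p)
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))

smooth = [[1, 1, 1, 1]] * 4   # uniform region, e.g. calm sea
stripy = [[0, 3, 0, 3]] * 4   # alternating bright/dark, e.g. foliage
print(contrast(glcm(stripy)) > contrast(glcm(smooth)))        # True
print(homogeneity(glcm(smooth)) > homogeneity(glcm(stripy)))  # True
```

The two statistics separate images that a color histogram alone would confuse, which is exactly the sky-versus-sea case discussed above.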