21-01-2013, 03:52 PM
GOOGOL
To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites.
When a spider is building its lists, the process is called Web crawling.
The spider will begin with a popular site, indexing the words on its pages and following every link found within the site.
In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
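The crawl described above can be sketched as a simple breadth-first traversal. This is only an illustrative toy, not Google's actual crawler: the "Web" here is a hypothetical in-memory dictionary of pages, and the names (`TOY_WEB`, `crawl`) are made up for the example.

```python
from collections import deque

# A toy in-memory "Web": page name -> (words on the page, links out).
# These page names are hypothetical stand-ins for real URLs.
TOY_WEB = {
    "popular-site": (["welcome", "news"], ["page-a", "page-b"]),
    "page-a": (["sports", "news"], ["page-c"]),
    "page-b": (["weather"], ["page-a"]),
    "page-c": (["archive"], []),
}

def crawl(start):
    """Breadth-first crawl: index each page's words, follow every link once."""
    index = {}                      # word -> set of pages containing it
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        words, links = TOY_WEB.get(page, ([], []))
        for word in words:
            index.setdefault(word, set()).add(page)
        for link in links:
            if link not in seen:    # never revisit a page
                seen.add(link)
                queue.append(link)
    return index

index = crawl("popular-site")
print(sorted(index["news"]))        # -> ['page-a', 'popular-site']
```

Starting from one "popular site", the queue spreads outward link by link, which is exactly the crawling behavior the paragraph describes.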
Words occurring in the title, subtitles, meta tags and other positions of relative importance are given greater weight in the index.
The Google spider was built to index every significant word on a page, leaving out the articles "a," "an" and "the."
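A minimal sketch of this indexing step, assuming a simple weighting scheme: the stop-word list ("a," "an," "the") comes from the text above, but the specific position weights and function names (`POSITION_WEIGHT`, `index_page`) are hypothetical choices for illustration, not Google's actual values.

```python
STOP_WORDS = {"a", "an", "the"}     # the articles the spider leaves out

# Hypothetical weights: words in prominent positions count for more.
POSITION_WEIGHT = {"title": 3, "meta": 2, "body": 1}

def index_page(sections):
    """sections maps a position name to its text.
    Returns a dict of word -> weighted occurrence score."""
    scores = {}
    for position, text in sections.items():
        weight = POSITION_WEIGHT.get(position, 1)
        for word in text.lower().split():
            if word in STOP_WORDS:
                continue            # skip "a", "an", "the"
            scores[word] = scores.get(word, 0) + weight
    return scores

page = {"title": "the daily news", "body": "news about a storm"}
print(index_page(page))
# -> {'daily': 3, 'news': 4, 'about': 1, 'storm': 1}
```

Note how "news" scores highest because it appears both in the title (weight 3) and the body (weight 1), while the articles never enter the index at all.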