Ever wondered how these search engines work? Crawlers, also called spiders, are the programs most search engines use to catalogue the web. Some engines rely on human-submitted information about pages; others use a combination of crawling and human input. Crawlers visit web sites to read their content and meta tags, and they follow the links each site points to. This information is sent back to a database, where it is indexed. The process runs continuously, so the indexed information stays up to date.
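To make the idea concrete, here is a minimal crawler sketch in Python using only the standard library. It fetches a page, stores its text in a dictionary standing in for the index database, and follows the links it finds. The seed URL, the page limit, and the helper names are illustrative assumptions, not part of any real search engine's code.

```python
# A minimal crawler sketch using only the Python standard library.
# The seed URL and page limit below are arbitrary placeholders.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects link targets and visible text from one HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text_parts.append(data.strip())


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: fetch a page, store its text, queue its links."""
    index = {}                      # url -> page text (a toy "database")
    queue = deque([seed_url])
    seen = {seed_url}

    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except OSError:
            continue                # skip pages that fail to load

        parser = PageParser()
        parser.feed(html)
        index[url] = " ".join(parser.text_parts)

        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

    return index


if __name__ == "__main__":
    pages = crawl("https://example.com")
    print(f"Indexed {len(pages)} pages")
```

A real crawler would also respect robots.txt, handle duplicates, and store the data in a proper search index rather than an in-memory dictionary, but the fetch-parse-follow loop is the core of it.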
When you search for a set of keywords, the search engine looks them up in its index to see whether they are present. If they are, the stored metadata is used to rank the relevance of each document that contains them. Different search engines produce different results because their ranking algorithms differ. Google, for example, uses an algorithm called PageRank. PageRank counts how many pages link to a given page: the more incoming links a page has, the more important it is considered, and the more likely it is to show up near the top of the search results. On top of that, the more important the pages linking to it are, the more important the page itself becomes, and so on.
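Here is a toy sketch of that idea in Python. It repeatedly passes each page's score along its outgoing links, so a page linked from important pages ends up important itself. The damping factor of 0.85, the iteration count, and the tiny hand-made link graph are illustrative assumptions, not Google's actual parameters or data.

```python
# A back-of-the-envelope PageRank sketch on a tiny hand-made link graph.
# The damping factor and iteration count are illustrative values only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}   # start with equal importance

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A page with no outgoing links spreads its rank evenly.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
            else:
                # Each linked-to page inherits a share of this page's rank,
                # so a link from an important page is worth more.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank

    return rank


if __name__ == "__main__":
    web = {
        "home": ["about", "blog"],
        "about": ["home"],
        "blog": ["home", "about"],
    }
    for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

Running it on the three-page graph above shows "home" scoring highest, since both other pages link to it, which is exactly the behaviour the paragraph describes.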