Sitemaps XML format
Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is relative to other URLs in the site) so that search engines can more intelligently crawl the site.

Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but it provides hints that help web crawlers do a better job of crawling your site. Sitemap 0.90 has wide adoption, including support from Google, Yahoo!, and Microsoft.

Sitemaps are particularly helpful if:

- Your site has dynamic content.
- Your site has pages that aren't easily discovered by crawlers during the crawl process, for example, pages featuring rich AJAX or images.
- Your site is new and has few links to it (crawlers crawl the web by following links from one page to another, so if your site isn't well linked, it may be hard for them to discover it).
- Your site has a large archive of content pages that are not well linked to each other, or are not linked at all.

This document describes the XML schema for the Sitemap protocol.
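To make the format concrete, the following is a minimal sketch of a Sitemap file containing a single URL entry with the optional metadata described above (last update, change frequency, and relative priority). The URL `http://www.example.com/` and the metadata values are placeholders for illustration, not part of the protocol:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url>
      <!-- Required: the URL of the page -->
      <loc>http://www.example.com/</loc>
      <!-- Optional: date of last modification (W3C Datetime format) -->
      <lastmod>2005-01-01</lastmod>
      <!-- Optional: how frequently the page is likely to change -->
      <changefreq>monthly</changefreq>
      <!-- Optional: priority relative to other URLs on this site (0.0 to 1.0) -->
      <priority>0.8</priority>
   </url>
</urlset>
```

A Sitemap consists of one `<urlset>` root element declaring the protocol namespace, with one `<url>` entry per page; only `<loc>` is required, and the remaining child elements are optional hints for crawlers.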