There are direct and indirect elements to search engine discovery and reconnaissance. Direct methods relate to searching the indexes and the associated content from caches. Indirect methods relate to gleaning sensitive design and configuration information by searching forums, newsgroups, and tendering websites.
Once a search engine robot has completed crawling, it commences indexing the web page based on tags and associated attributes, such as <TITLE>, in order to return the relevant search results. If the robots.txt file is not updated during the lifetime of the web site, and inline HTML meta tags that instruct robots not to index content have not been used, then it is possible for indexes to contain web content not intended to be included by the owners. Website owners may use the previously mentioned robots.txt, HTML meta tags, authentication, and tools provided by search engines to remove such content.
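For illustration, a minimal robots.txt entry that instructs all crawlers not to crawl a directory looks like this (the /admin/ path is a placeholder):

```
User-agent: *
Disallow: /admin/
```

The inline HTML meta tag mentioned above achieves a similar effect per page, instructing robots not to index it:

```html
<meta name="robots" content="noindex">
```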
To understand what sensitive design and configuration information of the application, system, or organization is exposed both directly (on the organization's website) and indirectly (on third-party websites).
Use a search engine to search for:
Using the advanced "site:" search operator, it is possible to restrict search results to a specific domain. Do not limit testing to just one search engine provider, as they may generate different results depending on when they crawled content and their own algorithms. Consider using the following search engines:
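The same site-restricted query can be issued to several providers for comparison. A minimal Python sketch, assuming the query-URL formats below are still current for each engine, that builds one query URL per provider:

```python
from urllib.parse import quote_plus

# Assumed query-URL templates; verify against each provider before use.
ENGINES = {
    "google":     "https://www.google.com/search?q={q}",
    "bing":       "https://www.bing.com/search?q={q}",
    "duckduckgo": "https://duckduckgo.com/?q={q}",
}

def site_queries(domain, terms=""):
    """Return one query URL per engine, restricted with the site: operator."""
    query = quote_plus(f"site:{domain} {terms}".strip())
    return {name: url.format(q=query) for name, url in ENGINES.items()}

# Example: build site-restricted queries for owasp.org.
for engine, url in site_queries("owasp.org", "password").items():
    print(engine, url)
```

Opening the resulting URLs in a browser (or feeding them to a tool) lets the tester compare what each provider has indexed for the same domain.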
Duck Duck Go and ixquick/Startpage provide reduced information leakage about the tester.
Google provides the advanced "cache:" search operator, but this is equivalent to clicking the "Cached" link next to each Google search result. Hence, using the advanced "site:" search operator and then clicking the "Cached" link is preferred.
The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP messages to assist with retrieving cached pages. An implementation of this is under development by the OWASP "Google Hacking" Project.
PunkSpider is a web application vulnerability search engine. It is of little use to a penetration tester doing manual work. However, it can be useful as a demonstration of how easily script kiddies can find vulnerabilities.
To find the web content of owasp.org indexed by a typical search engine, the syntax required is:
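Using the "site:" operator described above, the query would be:

```
site:owasp.org
```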
To display the index.html of owasp.org as cached, the syntax is:
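With Google's "cache:" operator, for example:

```
cache:owasp.org
```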
The Google Hacking Database is a list of useful search queries for Google. Queries are organized into several categories:
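A few representative dork patterns of the kind catalogued there (illustrative examples, not quoted verbatim from the database):

```
intitle:"index of" "parent directory"   exposed directory listings
filetype:sql "INSERT INTO"              leaked database dump files
inurl:admin intitle:login               administrative login pages
```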
 "Google Basics: Learn how Google Discovers, Crawls, and Serves Web Pages" - https://support.google.com/webmasters/answer/70897
 "Operators and More Search Help" - https://support.google.com/websearch/answer/136861?hl=en
 "Google Hacking Database" - http://www.exploit-db.com/google-dorks/
Carefully consider the sensitivity of design and configuration information before it is posted online.
Periodically review the sensitivity of existing design and configuration information that is posted online.