Instead of wasting hours on manual research through standard search engines, savvy users automate this grunt work with a web spider, a feature provided in most web-grabbing software. A spider is a program that takes a set of search criteria as input, crawls the web, and finds pages whose content matches those criteria. The spider saves copies of the matching pages, and the user can then extract the relevant information from the saved HTML and store it in a database or spreadsheet, where it is far easier to read and work with than the raw HTML of the original page.
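The extraction step described above (pulling fields out of a saved page copy and writing them in a spreadsheet-friendly format) can be sketched with Python's standard library. The page markup, class names, and fields here are purely hypothetical stand-ins for whatever a real spider would save:

```python
import csv
import io
from html.parser import HTMLParser

# Hypothetical copy of a page a spider saved; a real crawl would
# gather many such pages matching the user's search criteria.
SAVED_PAGE = """
<html><body>
  <h2 class="title">Widget A</h2><span class="price">$9.99</span>
  <h2 class="title">Widget B</h2><span class="price">$14.50</span>
</body></html>
"""

class ProductExtractor(HTMLParser):
    """Pull (title, price) pairs out of the saved HTML."""
    def __init__(self):
        super().__init__()
        self._field = None   # which field the next text chunk belongs to
        self.rows = []       # extracted [title, price] rows

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if tag == "h2" and cls == "title":
            self._field = "title"
        elif tag == "span" and cls == "price":
            self._field = "price"

    def handle_data(self, data):
        text = data.strip()
        if not text or self._field is None:
            return
        if self._field == "title":
            self.rows.append([text, None])
        else:  # price
            self.rows[-1][1] = text
        self._field = None

parser = ProductExtractor()
parser.feed(SAVED_PAGE)

# Write the structured rows as CSV, ready for a spreadsheet or database import.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["title", "price"])
writer.writerows(parser.rows)
csv_text = buf.getvalue()
```

The point of the sketch is the shape of the workflow: saved HTML goes in, structured rows come out, and the CSV output drops straight into a spreadsheet, which is far more usable than the page's raw markup.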