Please use this identifier to cite or link to this item:
http://ir.juit.ac.in:8080/jspui/jspui/handle/123456789/8034
Title: Web Crawler for eBook Library
Authors: Kumar, Abhishek; Saha, Suman [Guided by]
Keywords: Web crawler; eBook library
Issue Date: 2014
Publisher: Jaypee University of Information Technology, Solan, H.P.
Abstract: A crawler is a program that retrieves and stores pages from the Web, commonly for a Web search engine. A crawler often has to download hundreds of millions of pages in a short period of time and must constantly monitor and refresh the downloaded pages. A focused crawler is a Web crawler that aims to search and retrieve Web pages from the World Wide Web which are related to a domain-specific topic. Rather than downloading all accessible Web pages, a focused crawler analyzes the frontier of the crawled region to visit only the portion of the Web that contains relevant pages, and at the same time tries to skip irrelevant regions. A Web crawler for an eBook library is a crawler that responds to eBook-related queries. Such a crawler must crawl through the eBook-specific Web pages in the World Wide Web (WWW). Downloading only the eBook-specific Web pages is not an easy task for a general-purpose crawler; a focused crawling mechanism can play a vital role in this context. In our approach, we crawl the Web using a focused crawling mechanism and store the eBooks related to the eBook domain in a file system.
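Note: the report itself is not reproduced in this record, but the focused crawling idea summarized in the abstract can be illustrated with a minimal sketch. The seed URL, topic keywords, output directory, and helper names below are hypothetical placeholders, not the project's actual code; the sketch assumes the widely used requests and BeautifulSoup libraries.

```python
# Minimal sketch of a focused crawler for eBook pages (illustrative only).
from collections import deque
from pathlib import Path
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

SEED_URLS = ["https://example.org/ebooks/"]            # hypothetical seed page
TOPIC_KEYWORDS = {"ebook", "e-book", "epub", "pdf", "book", "library"}
OUTPUT_DIR = Path("ebook_store")                       # local file-system store
MAX_PAGES = 100

def is_relevant(text: str) -> bool:
    """Crude relevance test: does the page mention any topic keyword?"""
    text = text.lower()
    return any(kw in text for kw in TOPIC_KEYWORDS)

def crawl():
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    frontier = deque(SEED_URLS)
    seen = set(SEED_URLS)
    fetched = 0

    while frontier and fetched < MAX_PAGES:
        url = frontier.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        fetched += 1

        content_type = resp.headers.get("Content-Type", "")
        # If the URL points at an eBook file, save it to the file system.
        if "application/pdf" in content_type or "epub" in content_type:
            name = Path(urlparse(url).path).name or "download.bin"
            (OUTPUT_DIR / name).write_bytes(resp.content)
            continue
        if "text/html" not in content_type:
            continue

        soup = BeautifulSoup(resp.text, "html.parser")
        # Focused step: expand the frontier only from pages judged relevant,
        # so irrelevant regions of the Web are skipped.
        if not is_relevant(soup.get_text(" ")):
            continue
        for link in soup.find_all("a", href=True):
            next_url = urljoin(url, link["href"])
            if next_url not in seen and next_url.startswith("http"):
                seen.add(next_url)
                frontier.append(next_url)

if __name__ == "__main__":
    crawl()
```

A real focused crawler would typically replace the keyword test with a trained relevance classifier and respect robots.txt and crawl-delay policies; the sketch only shows the frontier-pruning idea.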
URI: http://ir.juit.ac.in:8080/jspui/jspui/handle/123456789/8034
Appears in Collections: B.Tech. Project Reports
Files in This Item:
File | Description | Size | Format
---|---|---|---
Web Crawler for eBook Library.pdf | | 996.11 kB | Adobe PDF