The word "literature" has different meanings depending on who is using it. It could be applied broadly to mean any symbolic record, encompassing everything from images and sculptures to letters.
jueves, 6 de noviembre de 2008
FIX University Welcomes CAMACOL Biocasa 2008
Need to crawl the web?
package com.zapcaster.crawl;

import java.net.MalformedURLException;
import java.net.URL;

public class GetContent {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.out.println("Usage: java GetContent <start-url>");
            System.exit(-1);
        }
        LinkSpider spider = null;
        try {
            spider = new LinkSpider(args[0]);
        } catch (MalformedURLException e) {
            System.out.println(e);
            System.out.println("Invalid URL: " + args[0]);
            System.exit(-1);
        }
        System.out.println("Get Content:");
        long start = System.currentTimeMillis(); // performance tracking
        spider.traverse();
        System.out.println("Time elapsed = " + (System.currentTimeMillis() - start));
        System.out.println("Finished");
        try {
            // Fetch the raw bytes of the start page and echo them as text
            byte[] buff = spider.getContent(new URL(args[0]));
            StringBuffer sb = new StringBuffer();
            for (byte aBuff : buff) {
                sb.append((char) aBuff);
            }
            System.out.println("[[--[" + sb.toString() + "]--]]");
        } catch (Exception e) {
            System.out.println("URL error");
        }
    }
}
package com.zapcaster.crawl;

import bplatt.spider.Arachnid;
import bplatt.spider.PageInfo;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashSet;

public class LinkSpider extends Arachnid {
    private HashSet<URL> images;   // declared in the original but not used here
    private HashSet<URL> links;

    public LinkSpider(String base) throws MalformedURLException {
        super(base);
        super.setDelay(5); // throttle: pause between page fetches
        links = new HashSet<URL>();
        // no output directory needed; pages are not written to disk
    }

    protected void handleBadLink(URL url, URL parent, PageInfo p) { }

    protected void handleBadIO(URL url, URL parent) { }

    // Called for each parsed HTML page: record every link not seen before
    protected void handleLink(PageInfo p) {
        URL[] list = p.getLinks();
        int x = 0;
        if (list != null) {
            for (URL aList : list) {
                if (!links.contains(aList)) {
                    links.add(aList);
                    System.out.println("#" + (++x) + " Link SAVED: " + aList.toString());
                }
            }
        }
    }

    protected void handleNonHTMLlink(URL url, URL parent, PageInfo p) { }

    protected void handleExternalLink(URL url, URL parent) { }

    private void printURLs() {
        int x = 0;
        for (URL link : links) {
            System.out.println("Link [" + (x++) + "] " + link.toString());
        }
    }
}
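To try the spider (the original post does not spell this out, so the details here are an assumption): compile both classes with the Arachnid library (the bplatt.spider package imported above) on the classpath and pass the start URL as the single command-line argument, for example java com.zapcaster.crawl.GetContent http://www.example.com/ where the URL is only a placeholder. GetContent then crawls from that page, prints each newly discovered link, and finally dumps the raw content of the start page.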
I found several Open Source web crawlers; I am especially interested in JoBo and Grunk, both of which are included in the list below:
- GRUNK - Grunk (for GRammar UNderstanding Kernel) is a library for parsing and extracting structured metadata from semi-structured text formats. It is based on a very flexible parsing engine capable of detecting a wide variety of patterns in text formats and extracting information from them. Formats are described in a simple and powerful XML configuration from which Grunk builds a parser at runtime, so adapting Grunk to a new format does not require a coding or compilation step. Not really a crawler, but something that may prove extremely useful in crawling.
- Heritrix - Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project. Heritrix is designed to respect the robots.txt exclusion directives and META robots tags (a minimal robots.txt check is sketched after this list).
- WebSPHINX - WebSPHINX (Website-Specific Processors for HTML INformation eXtraction) is a Java class library and interactive development environment for web crawlers. A web crawler (also called a robot or spider) is a program that browses and processes Web pages automatically. WebSPHINX consists of two parts: the Crawler Workbench and the WebSPHINX class library.
- Nutch - Nutch provides a transparent alternative to commercial web search engines. As of June 2003, the project had successfully built a 100 million page demo system. It uses Lucene for its indexing but provides its own crawler implementation.
- WebLech - WebLech is a fully featured web site download/mirror tool in Java, which supports many features required to download websites and emulate standard web-browser behaviour as much as possible. WebLech is multithreaded and will feature a GUI console.
- Arale - While many bots around are focused on page indexing, Arale is primarily designed for personal use. It fits the needs of advanced web surfers and web developers.
- J-Spider - Based on the book "Programming Spiders, Bots and Aggregators in Java". The book begins by showing how to create simple bots that retrieve information from a single website, then develops a spider that can move from site to site as it crawls across the Web, and finally builds aggregators that take data from many sites and present a consolidated view.
- HyperSpider - HyperSpider (Java app) collects the link structure of a website. Data import/export from/to database and CSV-files. Export to Graphviz DOT, Resource Description Framework (RDF/DC), XML Topic Maps (XTM), Prolog, HTML. Visualization as hierarchy and map.
- Arachnid - Arachnid is a Java-based web spider framework. It includes a simple HTML parser object that parses an input stream containing HTML content. Simple web spiders can be created by subclassing Arachnid and adding a few lines of code that are called after each page of a web site is parsed (the LinkSpider class above does exactly this).
- Spindle - Spindle is a web indexing/search tool built on top of the Lucene toolkit. It includes an HTTP spider that is used to build the index and a search class that is used to search the index. In addition, support is provided for the Bitmechanic listlib JSP TagLib, so that search can be added to a JSP-based site without writing any Java classes.
- Spider - Spider is a complete standalone Java application designed to easily integrate varied data sources. It is an XML-driven framework for data retrieval from network-accessible sources, with scheduled pulling, high extensibility, hooks for custom post-processing and configuration, and an implementation as an Avalon/Keel framework datafeed service.
- JoBo - JoBo is a simple program to download complete websites to your local computer. Internally it is basically a web spider. Its main advantage over other download tools is that it can automatically fill out forms (e.g. for automated login) and use cookies for session handling; few download tools can log in to a server that uses web forms and cookies for sessions and still download the protected content (a small form-login sketch follows this list). Compared to other products the GUI seems very simple, but it is the internal features that matter. It also offers very flexible rules to limit downloads by URL, size and/or MIME type.
- LARM - LARM is a 100% Java search solution for end users of the Jakarta Lucene search engine framework. It is intended to contain methods for indexing files and database tables, plus a crawler for indexing web sites, though at the moment only specifications exist and contributors are needed to turn them into a working program. Its predecessor was an experimental crawler called larm-webcrawler, available from the Jakarta project.
- Metis - Metis is a tool to collect information from the content of web sites. It was written for the Ideahamster Group to find the competitive intelligence weight of a web server and assists in satisfying the CI Scouting portion of the Open Source Security Testing Methodology Manual (OSSTMM).
- SimpleSpider - The SimpleSpider is a real application that provides the search capability for DevelopMentor's web site. It is also an example application for classroom use when learning about open source programming with Java.
- CAPEK - CAPEK is an Open Source robot written entirely in Java. It gathers web pages for EGOTHOR in a sophisticated way: pages are ordered by their PageRank, the stability of the connection between CAPEK and the respective web site, and many other factors (a toy PageRank iteration is sketched after this list).
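Heritrix's respect for robots.txt, mentioned above, comes down to fetching /robots.txt from the target host and skipping any path covered by a matching Disallow rule. The sketch below is not Heritrix code; it is a minimal, hypothetical illustration using only java.net, with the host and path invented for the example.

package com.zapcaster.crawl;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: fetch robots.txt, collect Disallow prefixes that apply
// to all user agents ("*"), and test whether a given path may be crawled.
public class RobotsCheck {
    public static void main(String[] args) throws Exception {
        String host = "http://www.example.com";   // placeholder host
        String path = "/private/page.html";       // path we would like to crawl
        List<String> disallowed = new ArrayList<String>();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(host + "/robots.txt").openStream()));
        String line;
        boolean applies = false;
        while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.toLowerCase().startsWith("user-agent:")) {
                // Simplification: only honour rules addressed to every crawler ("*")
                applies = line.substring(11).trim().equals("*");
            } else if (applies && line.toLowerCase().startsWith("disallow:")) {
                String rule = line.substring(9).trim();
                if (rule.length() > 0) disallowed.add(rule);
            }
        }
        in.close();
        boolean allowed = true;
        for (String rule : disallowed) {
            if (path.startsWith(rule)) allowed = false;
        }
        System.out.println(path + (allowed ? " may be fetched" : " is disallowed by robots.txt"));
    }
}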
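The form login and cookie handling that JoBo advertises can be illustrated with nothing but HttpURLConnection: POST the login form, keep whatever Set-Cookie value comes back, and replay it on later requests. This is a rough sketch of the idea, not JoBo's API; the URL, form field names, and credentials are placeholders.

package com.zapcaster.crawl;

import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Hypothetical sketch of form-based login with cookie reuse (not JoBo code).
public class FormLoginSketch {
    public static void main(String[] args) throws Exception {
        // Step 1: POST the login form (placeholder URL, fields and credentials)
        URL loginUrl = new URL("http://www.example.com/login");
        HttpURLConnection post = (HttpURLConnection) loginUrl.openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        String form = "user=" + URLEncoder.encode("alice", "UTF-8")
                    + "&pass=" + URLEncoder.encode("secret", "UTF-8");
        OutputStreamWriter out = new OutputStreamWriter(post.getOutputStream());
        out.write(form);
        out.close();

        // Step 2: grab the session cookie (if any) from the login response
        String cookie = post.getHeaderField("Set-Cookie");
        post.getInputStream().close();

        // Step 3: replay the cookie when downloading a protected page
        URL pageUrl = new URL("http://www.example.com/members/index.html");
        HttpURLConnection get = (HttpURLConnection) pageUrl.openConnection();
        if (cookie != null) {
            get.setRequestProperty("Cookie", cookie.split(";", 2)[0]);
        }
        System.out.println("Response code: " + get.getResponseCode());
    }
}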
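CAPEK orders pages by PageRank, and the classic iteration behind that ranking is easy to sketch: each page repeatedly shares its rank equally among the pages it links to, with a damping factor. The four-page link graph and damping value below are made up for the example and say nothing about CAPEK's internals.

package com.zapcaster.crawl;

// Toy PageRank iteration over a tiny, invented link graph.
public class PageRankSketch {
    public static void main(String[] args) {
        // adj[i] lists the pages that page i links to
        int[][] adj = { {1, 2}, {2}, {0}, {0, 2} };
        int n = adj.length;
        double damping = 0.85;
        double[] rank = new double[n];
        for (int i = 0; i < n; i++) rank[i] = 1.0 / n;

        for (int iter = 0; iter < 20; iter++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) next[i] = (1 - damping) / n;
            for (int i = 0; i < n; i++) {
                for (int j : adj[i]) {
                    next[j] += damping * rank[i] / adj[i].length;
                }
            }
            rank = next;
        }
        for (int i = 0; i < n; i++) {
            System.out.println("page " + i + " rank = " + rank[i]);
        }
    }
}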
tags: spindle jobo websphinx larm metis hyperspider open source grunk arachnid web crawler spider linkspider weblech heritrix simplespider comzapcastercrawl jspider