
Web Crawler, HTML Parser, and Data Visualization


Note: make sure you have installed libxslt1 and libxml2 so that lxml works properly on Linux systems (Ubuntu, at least).
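
If you are unsure whether lxml is set up correctly, a quick check like the one below can confirm it. This is a minimal sketch and not part of webby itself:

    # Minimal check (not part of webby) that lxml and its libxml2 backing are installed
    try:
        from lxml import etree
        print("lxml OK, libxml2 version: %s" % (etree.LIBXML_VERSION,))
    except ImportError:
        print("lxml is missing; install the libxml2/libxslt1 system packages, then lxml")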


Documentation is under construction right now!

If you have any questions, you can email me at gitwebby@gmail.com.

Now has a pylint rating of 9.62/10.


Webby quickly brings web crawling and XML/HTML parsing to your fingertips.


Creating crawlers has never been this easy: connect to any website and use XPath expressions to harvest valuable web data.

Example Setup:

import webby

# Crawl the page and grab its source
spider = webby.Crawler('http://example.com')

# Parse the source and scrape with an XPath expression
parse = webby.Parser(spider.source)
parse.scrape("//p")  # Returns all <p> elements from the HTML on example.com

# Print out what you just scraped
for value in parse.data.itervalues():
    print value
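
The same pattern should work with other XPath expressions. The sketch below (assuming the same Crawler/Parser API and Python 2 style shown above) harvests link targets instead of paragraphs:

import webby

# Hypothetical follow-up: harvest the href attribute of every link on the page
spider = webby.Crawler('http://example.com')
parse = webby.Parser(spider.source)
parse.scrape("//a/@href")  # XPath selecting link targets rather than <p> elements

for value in parse.data.itervalues():
    print value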


