Trawl web pages for files to download

Project description

Given the url of an html web page, this Python package asynchronously downloads all files linked to from that web page. Optionally, all web pages linked to from the original web page can be trawled for files as well.

Installation

$ pip install web_trawler

Alternatively, the source code is available on gitlab.com.

The package depends on lxml and cssselect. For testing, pytest is required.

On Windows, lxml may have to be installed manually, with pip install lxml.

The Anaconda distribution of Python includes lxml by default. If you use Anaconda and it still doesn’t work, try conda install lxml.

Usage

Command line

Once installed with pip, web_trawler can be used like this:

$ web_trawler google.com

Run this command to see how web_trawler finds links and inspects their http headers for more information. There are ordinarily no files linked to from google.com, but if there are, they will be downloaded to the directory download/ relative to where you ran the command.

The url argument must be provided. The following optional arguments are supported:

--target TARGET

Give a path for where you would like the files to be downloaded. The default path is “download”.

--include_links_from_linked_pages

Set web_trawler to find download links on all web pages linked to from the original web page as well (it only goes one step, and only follows links within the domain of the original web page).

--quiet

Suppresses output information about which links are being processed and which files are being downloaded.

--processes PROCESSES

Manually set how many processes will be spawned. The default is to spawn one less than the number of processors detected (so as not to stall the system). For each process, up to 10 threads are spawned.

--whitelist WHITELIST

Space-separated file endings to whitelist. Allows use of wildcards, e.g. “xls*” to capture all the variants (xlsx, xlsb, xlsm, xls). Blacklist takes precedence.

--blacklist BLACKLIST

Space-separated file endings to blacklist. Like whitelist.

--no_of_files_limit LIMIT

Set a maximum number of files you are willing to download, in case web_trawler finds more than expected.

--mb_per_file_limit LIMIT

Set a maximum file size you are willing to download. Warnings are printed to stdout for each file excluded.

Each argument has a shorthand consisting of its first letter, e.g. -t, -i, -q, etc.

A realistic example of use

If we’d like to download, say, all zip and Excel files up to 100 MB from a web page on the World Input-Output Database site, into a local directory called “data”, we’d need to use the arguments -t (for target), -w (for whitelist) and -m (for mb_per_file_limit):

$ web_trawler http://www.wiod.org/database/wiots16 -t "data" -w "zip xls*" -m 100

Notice the use of a wildcard in the whitelist. The web page specified links to two different Excel associated file endings. The wildcard ensures that both are captured.
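
To get a feel for what the wildcard captures, here is an illustrative snippet using Python's shell-style pattern matching (that web_trawler matches file endings this way internally is an assumption):

from fnmatch import fnmatch

# Illustration only: shell-style wildcard matching applied to file endings.
extensions = ["xlsx", "xlsb", "xlsm", "xls", "csv", "zip"]
print([ext for ext in extensions if fnmatch(ext, "xls*")])
# prints ['xlsx', 'xlsb', 'xlsm', 'xls']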

If you test this command, downloads of a bunch of large files will start. Press ctrl-c to interrupt the process, or ctrl-z to suspend it.

Make sure to clean up any downloaded files you don’t want. They should be in a folder relative to where you ran the command. If you didn’t specify a target, they are downloaded to a directory called “download”.

Use within Python

The following code does the same thing as the command-line example above:

import web_trawler

web_trawler.trawl("http://www.wiod.org/database/wiots16", target="data", whitelist="zip xls*", mb_per_file_limit=100)

The function trawl does the same thing as web_trawler run from the command line, but with the arguments passed to it directly in Python.
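
The other options can presumably be passed the same way. Here is a minimal sketch, assuming the keyword arguments mirror the command-line flag names:

import web_trawler

# Sketch only: keyword names assumed to mirror the command-line flags.
web_trawler.trawl(
    "http://www.wiod.org/database/wiots16",
    include_links_from_linked_pages=True,  # also trawl pages linked to from the original page
    quiet=True,                            # suppress progress output
    no_of_files_limit=50,                  # don't download more than 50 files
)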

Several of the intermediary functions used in web_trawler can also be accessed from Python, e.g. to get a list with information about all links on a web page, or just the links to files, filtered with a blacklist or whitelist. Here’s a brief description of each of them, with a short example after the field list below:

get_links:

Takes only one argument, a url, and returns a list of Link namedtuples, described below. This list is unfiltered. All http links that return an http response are included.

get_file_links:

Runs get_links and returns a filtered list of Link namedtuples for files only, with whitelist and/or blacklist applied if specified. Arguments have self-explanatory names. The whitelist and blacklist can be provided as a space-separated string or as a list.

Both get_links and get_file_links return lists of namedtuples with the following fields:

href:

the link url

title:

the content of the <a> tag containing the link

mb:

calculated from the http header content-length

type:

the http header content-type, unmodified
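
For instance, the file links on the earlier example page can be inspected before downloading anything. This is a minimal sketch, assuming the url is the first positional argument of get_file_links:

import web_trawler

# Unfiltered: every http link found on the page.
links = web_trawler.get_links("http://www.wiod.org/database/wiots16")

# Filtered: only links to zip and Excel files (url assumed to be the first argument).
file_links = web_trawler.get_file_links("http://www.wiod.org/database/wiots16", whitelist="zip xls*")

for link in file_links:
    print(link.href, link.title, link.mb, link.type)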

Use in Matlab

In Matlab, functions from pip-installed Python packages can be called using the py prefix, with optional arguments specified using the pyargs function:

>> py.web_trawler.get_file_links('http://www.wiod.org/database/wiots16', pyargs('whitelist', 'xls*'))

Stdout isn’t displayed in Matlab, which is why get_file_links was chosen for this example: it returns a value. To use the full functionality of web_trawler, you could run the function trawl instead. As long as there are no errors, nothing will show up in the Command Window, but files will still be downloaded, relative to your Current Folder in Matlab.
