# Bioscraping

Web scrapers for interacting with remote biological databases programmatically from Python, keeping a local cache of web data in sqlite3 to prevent excessive web traffic.
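Under the hood this is just a fetch-through cache. The sketch below shows the general pattern; the `cached_fetch` helper, the table schema, and the plain `urllib` call are illustrative assumptions written for Python 3, not bioscraping's actual internals:

```python
import sqlite3
import urllib.request

def cached_fetch(db_path, url):
    """Return the body for url, hitting the web at most once per URL."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS cache (url TEXT PRIMARY KEY, body TEXT)"
    )
    row = conn.execute("SELECT body FROM cache WHERE url = ?", (url,)).fetchone()
    if row is not None:
        conn.close()
        return row[0]  # cache hit: no web request is made
    body = urllib.request.urlopen(url).read().decode("utf-8")
    conn.execute("INSERT INTO cache (url, body) VALUES (?, ?)", (url, body))
    conn.commit()
    conn.close()
    return body  # cache miss: fetch once, store for next time
```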

So far, implemented:

- [Uniprot](http://uniprot.org), by UniProt protein ID (e.g. `Q8BP71`)
- [PubMed](https://www.ncbi.nlm.nih.gov/pubmed/), by PMID (e.g. `24213538`)

# Install

## Python 2.7.x and 3.x

```
pip install bioscraping
```

# Test

Real unit tests are absent, but you can test basic functionality with `python test/not_a_real_test.py`.

# Usage

## PubMed

```python
from bioscraping import PubMedClient

pubmed = PubMedClient()
```

This defaults to writing a cache file called `.bioscraping.pubmed.sqlite.db`. Use `PubMedClient(":memory:")` for in-memory data storage.

```python
pubmed.fetch(<PMID>)
```

Returns text with the author and abstract for the given PMID.
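For example, a complete session with the example PMID from above (assuming `fetch` takes the PMID as a string, as the examples here suggest):

```python
from bioscraping import PubMedClient

# Use an in-memory cache so no .bioscraping.pubmed.sqlite.db file is written.
pubmed = PubMedClient(":memory:")

# Fetch the author and abstract text for the example PMID.
record = pubmed.fetch("24213538")
print(record)
```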

## Uniprot

```python
from bioscraping import UniprotClient

uniprot = UniprotClient()
```

This defaults to writing a cache file called `.bioscraping.uniprot.sqlite.db`. Use `UniprotClient(":memory:")` for in-memory data storage.

```python
uniprot.fetch(<Uniprot ID>)
```

Returns a dictionary of data parsed from the XML record.
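And analogously for UniProt; the exact dictionary keys depend on what the XML parser extracts, so the loop below just prints whatever is there:

```python
from bioscraping import UniprotClient

# Use an in-memory cache so no .bioscraping.uniprot.sqlite.db file is written.
uniprot = UniprotClient(":memory:")

# Fetch the parsed-XML dictionary for the example UniProt ID.
data = uniprot.fetch("Q8BP71")
for key, value in data.items():
    print(key, value)
```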

# Buyer beware

UniprotClient has a potential race condition; tempfile-based writes need to be implemented before it is safe for concurrent processes (see TODO).
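For reference, one common shape for that fix is write-to-tempfile-then-rename. This is not yet in bioscraping; `atomic_write` is a hypothetical helper, and `os.replace` assumes Python 3.3+:

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temporary file in the same directory as the target, then
    # rename it into place; the rename is atomic, so concurrent readers
    # never observe a partially written file.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(data)
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except Exception:
        os.remove(tmp_path)
        raise
```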
