
# scrapy_model 0.1.2

Scrapy helper to create scrapers from models


Create scrapers using Scrapy Selectors.

## What is Scrapy?

Scrapy is a fast, high-level screen scraping and web crawling framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
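For example, Scrapy's Selectors, which scrapy_model builds on, can be used directly to pull structured data out of an HTML string. Here is a minimal sketch; the fragment and URL are purely illustrative:

```python
from scrapy.selector import Selector

# a small HTML fragment, just for illustration
html = '<span id="person">Bruno Rocha <a href="http://example.com">website</a></span>'
sel = Selector(text=html)

# the same data can be pulled out by CSS or by XPath
name = sel.css('span#person::text').extract()                  # list of text nodes
website = sel.xpath('//span[@id="person"]/a/@href').extract()  # list of href values
print(name, website)
```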

## What is scrapy_model?

It is just a helper for creating scrapers with Scrapy Selectors. It lets you select elements by CSS or by XPath and structure your scraper through Models (much like an ORM model), which can then be plugged into an ORM model via the ``populate`` method.

Import ``BaseFetcherModel`` and ``CSSField`` or ``XPathField`` (you can use both):

```python
from scrapy_model import BaseFetcherModel, CSSField
```

Go to the web page you want to scrape and use Chrome DevTools or Firebug to figure out the CSS paths. Then suppose you want to get the following fragment from some page:

```html
<span id="person">Bruno Rocha <a href="">website</a></span>
```

```python
class MyFetcher(BaseFetcherModel):
    name = CSSField('span#person')
    website = CSSField('span#person a')
    # XPathField('//xpath_selector_here')
```

Every method named ``parse_<field>`` will run after all the fields are fetched; each one receives the selector for its own field.

```python
    def parse_name(self, selector):
        # here selector is the scrapy selector for 'span#person'
        name = selector.css('::text').extract()
        return name

    def parse_website(self, selector):
        # here selector is the scrapy selector for 'span#person a'
        website_url = selector.css('::attr(href)').extract()
        return website_url
```


After the fetcher is defined, you need to run the scraper:


```python
fetcher = MyFetcher(url='http://.....')  # optionally you can use cached_fetch=True to cache requests on redis
fetcher.parse()
```

Now you can iterate over ``_data``, ``_raw_data`` and the attributes of the fetcher:

```python
>>> fetcher.name
<CSSField - name - Bruno Rocha>
>>> fetcher.name.value
Bruno Rocha
>>> fetcher._data
{"name": "Bruno Rocha", "website": ""}
```
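As the session above suggests, ``_data`` maps each field name to its scraped value, so you can walk through it like a regular dict. A small sketch, assuming the fetcher has already been parsed and ``_data`` exposes the usual dict interface:

```python
# assumes `fetcher` has already been parsed, so `_data` is populated
for field_name, value in fetcher._data.items():
    print("{}: {}".format(field_name, value))
```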

You can also populate an object:

```python
>>> obj = MyObject()
>>> fetcher.populate(obj)  # fields optional
>>> obj.name
Bruno Rocha
```
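Because ``populate`` sets the scraped fields as attributes on whatever object you pass in (as ``obj.name`` shows above), it can feed an ORM model directly. Below is only a sketch: ``PersonModel`` and its ``save`` method are hypothetical stand-ins for your real ORM class:

```python
class PersonModel(object):
    """Hypothetical ORM-style model; swap in your real ORM class."""
    name = None
    website = None

    def save(self):
        # persist to your database here (left as a stub in this sketch)
        print("saving {} ({})".format(self.name, self.website))


person = PersonModel()
fetcher.populate(person, fields=["name", "website"])  # `fields` is optional
person.save()
```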

If you do not want to define each field explicitly in the class, you can use a JSON file to automate the process:

```python
class MyFetcher(BaseFetcherModel):
    """ will load from json """

fetcher = MyFetcher(url='http://.....')
fetcher.load_mappings_from_file('file.json')
```

In that case, ``file.json`` should be:

```json
{
    "name": {"css": "span#person"},
    "website": {"css": "span#person a"}
}
```

You can use ``{"xpath": "..."}`` in case you prefer to select by XPath.
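For example, the same two fields could be mapped with XPath instead of CSS. A sketch using the ``mappings`` dict (also used in the full example further below); the XPath expressions are illustrative:

```python
fetcher = MyFetcher(url='http://.....')

# equivalent mappings using XPath instead of CSS (expressions are illustrative)
fetcher.mappings['name'] = {"xpath": "//span[@id='person']/text()"}
fetcher.mappings['website'] = {"xpath": "//span[@id='person']/a/@href"}
```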

### Installation

It is easy to install.

If you are running Ubuntu, you may need to install these system packages first:

```bash
sudo apt-get install python-scrapy
sudo apt-get install libffi-dev
sudo apt-get install python-dev
```

Then install with pip:

```bash
pip install scrapy_model
```

Or install from source:

```bash
git clone
cd scrapy_model
pip install -r requirements.txt
python setup.py install
```
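A quick way to confirm the install worked is to try the same import used throughout this document:

```python
# the import should succeed without an ImportError if installation worked
from scrapy_model import BaseFetcherModel, CSSField, XPathField
print(BaseFetcherModel)
```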

Example code to fetch the URL:

```python
#coding: utf-8

from scrapy_model import BaseFetcherModel, CSSField, XPathField


class TestFetcher(BaseFetcherModel):
    photo_url = XPathField('//*[@id="content"]/div[1]/table/tr[2]/td/a')

    nationality = CSSField(
        '#content > div:nth-child(1) > table > tr:nth-child(4) > td > a',
    )

    links = CSSField(
        '#content > div:nth-child(11) > ul > li > a.external::attr(href)',
    )

    def parse_photo_url(self, selector):
        # the href attribute of the selected <a> element
        return "{}".format(
            selector.xpath("@href").extract()[0]
        )

    def parse_nationality(self, selector):
        return selector.css("::text").extract()[0]

    def parse_name(self, selector):
        return selector.extract()[0]

    def post_parse(self):
        # executed after all parsers
        # you can load any data on to self._data
        # access self._data and self._fields for current data
        # self.selector contains original page
        # self.fetch() returns original html
        self._data.url = self.url


class DummyModel(object):
    """ For tests only, it can be a model in your database ORM """


if __name__ == "__main__":
    from pprint import pprint

    fetcher = TestFetcher(cache_fetch=True)
    fetcher.url = ""

    # Mappings can be loaded from a json file
    # fetcher.load_mappings_from_file('path/to/file')
    fetcher.mappings['name'] = {
        "css": ("#section_0::text")
    }

    fetcher.parse()

    print "Fetcher holds the data"
    print fetcher._data.name
    pprint(fetcher._data)

    # How to populate an object
    print "Populating an object"
    dummy = DummyModel()

    fetcher.populate(dummy, fields=["name", "nationality"])
    # fields attr is optional
    print dummy.nationality
    pprint(dummy.__dict__)
```


The output:

```
Fetcher holds the data
Guido van Rossum
{'links': [u''],
 'name': u'Guido van Rossum',
 'nationality': u'Dutch',
 'photo_url': '',
 'url': ''}
Populating an object
{'name': u'Guido van Rossum', 'nationality': u'Dutch'}
```
| File | Type | Py Version | Uploaded on | Size |
| --- | --- | --- | --- | --- |
| scrapy_model-0.1.2-py2.py3-none-any.whl (md5) | Python Wheel | 2.7 | 2014-05-18 | 8KB |
| scrapy_model-0.1.2.tar.gz (md5) | Source | | 2014-05-18 | 7KB |