
Optical Flow counter-measures for the REPLAY-ATTACK database

Project description

This package contains our published Optical Flow algorithm for face recognition anti-spoofing. This document explains how to install it and use it to reproduce the results in our paper.

If you use this package and/or its results, please cite the following publications:

  1. The original paper, with the counter-measure explained in detail (to appear):

    @article{Anjos_IETBMT_2013,
      author = {Anjos, Andr{\'{e}} and Chakka, Murali Mohan and Marcel, S{\'{e}}bastien},
      keywords = {Attack, Counter-Measures, Counter-Spoofing, Disguise, Dishonest Acts, Face Recognition, Face Verification, Forgery, Liveness Detection, Replay, Spoofing, Trick, Optical Flow},
      month = apr,
      title = {Motion-Based Counter-Measures to Photo Attacks in Face Recognition},
      journal = {Institution of Engineering and Technology - Biometrics},
      year = {2013},
    }
  2. Bob as the core framework used to run the experiments:

    @inproceedings{Anjos_ACMMM_2012,
      author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel},
      title = {Bob: a free signal processing and machine learning toolbox for researchers},
      year = {2012},
      month = oct,
      booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
      publisher = {ACM Press},
      url = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf},
    }

If you wish to report problems or suggest improvements concerning this code, please contact the authors of the above-mentioned papers.

Raw data

The data used in the paper is publicly available and should be downloaded and installed prior to trying the programs described in this package. Visit the PHOTO-ATTACK database portal for more information.

Installation

There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install), or manually download and unpack it, then use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

$ pip install antispoofing.optflow

You can also do the same with easy_install:

$ easy_install antispoofing.optflow

This will download and install this package plus any other required dependencies. It will also verify that the version of Bob you have installed is compatible.

This scheme works well inside a virtual environment created with virtualenv, or if you have root access to your machine. Otherwise, we recommend you use the next option.
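
For example, a minimal virtualenv-based recipe (the environment name venv is just an illustration):

$ virtualenv venv
$ source venv/bin/activate
$ pip install antispoofing.optflow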

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don’t need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.

User Guide

It is assumed you have followed the installation instructions above, have this package installed, and have the PHOTO-ATTACK database downloaded and uncompressed into a directory. All required utilities should sit inside a binary directory whose location depends on your installation strategy (they will be inside bin if you used the buildout option). We expect the video files of the PHOTO-ATTACK database to be installed in a sub-directory called database at the root of the package. If you don't want the database installed at the root of this package, you can use a link to the location of the database files:

$ ln -s /path/where/you/installed/the/photo-attack-database database

If you don’t want to create a link, use the --input-dir flag to specify the root directory containing the database files. That would be the directory that contains the sub-directories train, test, devel and face-locations.

Paper Layout: How to Reproduce our Results

The paper studies 4 algorithms (the first 3 are published elsewhere and are not a contribution to this paper):

Algorithm 1 - Kollreider’s Optical Flow anti-spoofing:

@article{Kollreider_2009,
  author={K. Kollreider AND H. Fronthaler AND J. Bigun},
  title={Non-intrusive liveness detection by face images},
  volume={27},
  number={3},
  journal={Image and Vision Computing},
  publisher={Elsevier B.V.},
  year={2009},
  pages={233--244},
}

Algorithm 2 - Bao’s Optical Flow anti-spoofing:

@inproceedings{Bao_2009,
  author={Wei Bao AND H. Li AND Nan Li AND Wei Jiang},
  title={A liveness detection method for face recognition based on optical flow field},
  booktitle={2009 International Conference on Image Analysis and Signal Processing},
  publisher={IEEE},
  year={2009},
  pages={233--236},
}

Algorithm 3 - Our own frame-difference based anti-spoofing:

@inproceedings{Anjos_IJCB_2011,
  author = {Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien},
  keywords = {Attack, Counter-Measures, Counter-Spoofing, Disguise, Dishonest Acts, Face Recognition, Face Verification, Forgery, Liveness Detection, Replay, Spoofing, Trick},
  month = oct,
  title = {Counter-Measures to Photo Attacks in Face Recognition: a public database and a baseline},
  booktitle = {International Joint Conference on Biometrics 2011},
  year = {2011},
  url = {http://publications.idiap.ch/downloads/papers/2011/Anjos_IJCB_2011.pdf}
}

The final Algorithm 4 represents our contribution in this paper.

To reproduce the results for Algorithm 3, you can follow the instructions in its own satellite package for Bob. The scripts for that package are auto-generated and made available under your bin directory as well (this package depends on that one).

In this manual, we address how to extract results for Algorithms 1, 2 and 4, which operate on top of a previously estimated Optical Flow (OF) field. OF is, therefore, the first topic in this manual.

Extract the Optical Flow Features

We ship this package with a preset to use Ce Liu's OF framework. This is of course not required, but it is the framework we have tested our method with, and therefore the one we recommend you start with. This framework estimates the dense OF field between any two successive frames. It is quite slow, so be warned: it may take quite some time to get through all the videos. To run the extraction sequentially, for all videos, use the following command:

$ ./bin/optflow_estimate.py --verbose /root/of/database results/flows replay --protocol=photo
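
For illustration only, the snippet below shows what a dense OF field between two successive frames looks like, using OpenCV's Farneback estimator as a conceptual stand-in (this is not Ce Liu's framework, and the video path is hypothetical):

import cv2  # a stand-in estimator, NOT the Ce Liu framework shipped with this package

# Hypothetical input: any video file from the database.
capture = cv2.VideoCapture("database/train/real/some_video.mov")

_, frame1 = capture.read()
_, frame2 = capture.read()
previous = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
current = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# Dense optical flow: one (dx, dy) displacement per pixel, shape (H, W, 2).
flow = cv2.calcOpticalFlowFarneback(previous, current, None, 0.5, 3, 15, 3, 5, 1.2, 0)
print(flow.shape)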

Once you are in possession of the flow fields, you can start calculating the scores required by each of the methods reviewed in the paper. It can help, in terms of processing speed, to have the features located on a local hard-drive, as the HDF5 files tend to be huge.
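
If you want to inspect one of the flow files, something like the sketch below may work; the dataset name "array" follows Bob's convention for single-array HDF5 files, and the file path is hypothetical, so treat both as assumptions:

import h5py

# Assumption: each file stores one array under the dataset name "array",
# as Bob does when saving a single array to HDF5.
with h5py.File("results/flows/train/real/some_video.hdf5", "r") as f:
    flows = f["array"][...]

print(flows.shape)  # e.g. (number-of-frames, height, width, 2)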

Reference System 1: Scores from Kollreider’s

To calculate scores using Kollreider’s method, use the script optflow_kollreider.py in the following way:

$ ./bin/optflow_kollreider.py --verbose /root/of/database results/flows results/kollreider replay --protocol=photo

You can modify the \(\tau\) parameter required by the method with the program option --tau=<float-value>. By default, this parameter is set to 1.0. Refer to the original paper by Kollreider to understand its meaning and how to tune it. If you tune the parameter and execute the error analysis as explained below, you will get to the results shown in Table 1 of our paper.
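
For example, a sweep over a few values of \(\tau\) could look like the following (the output directory naming is only an illustration):

$ for tau in 0.5 1.0 1.5 2.0; do
    ./bin/optflow_kollreider.py --verbose --tau=$tau \
      /root/of/database results/flows results/kollreider-tau-$tau \
      replay --protocol=photo
  done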

Besides generating output for the tests in the paper, you can also generate an annotated video showing how our extrapolation of the face bounding boxes works for finding the regions of interest on which Kollreider's method is applied. To do this, use the script optflow_kollreider_annotate.py. It works in a similar way to the above script and will process the whole database unless told otherwise. This can take a while as well, but you can parallelize it on a grid if you wish, or use the database filtering options to limit the number of videos analysed. For example:

$ bin/optflow_kollreider_annotate.py -v /idiap/group/replay/database/protocols/replayattack-database tmp replay --protocol=photo --client=101 --light=adverse

Reference System 2: Scores from Bao’s

To calculate scores for Bao’s method, use the script optflow_bao.py in the following way:

$ ./bin/optflow_bao.py --verbose /root/of/database results/flows results/bao replay --protocol=photo

You can modify the border parameter required by the method with the program option --border=<integer-value>. By default, this parameter is set to 5 (pixels). The original paper by Bao and others does not suggest such a parameter or mention how the face bounding boxes are set. We assume a default margin, in pixels, around the detected face. In the paper, we scan this value from 0 (zero) up to a number of pixels to test the method. If you tune the parameter and execute the error analysis as explained below, you will get to the results shown in Table 2 of our paper.
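
For example, to scan the border parameter as done in the paper (the output directory naming is only an illustration):

$ for border in 0 5 10 15; do
    ./bin/optflow_bao.py --verbose --border=$border \
      /root/of/database results/flows results/bao-border-$border \
      replay --protocol=photo
  done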

Reference System 3: Frame-differences

As mentioned before, you should follow the instructions in its own satellite package for Bob. The scripts for that package are auto-generated and made available under your bin directory as well (this package depends on that one).

Optical Flow Correlation (OFC)

To reproduce the results in our paper, you will first need to generate the scores for the \(\chi^2\) comparison for every frame in the sequence. Frames with no detected face generate a score valued numpy.NaN, as in other counter-measures implemented by our group. To generate the per-frame scores, use the application optflow_histocomp.py:

$ ./bin/optflow_histocomp.py --verbose /root/of/database results/flows results/histocomp replay --protocol=photo

You can generate the results in Figures 5 and 6 of our paper by setting 2 parameters on the above script:

--number-of-bins

This changes the parameter \(Q\), explained in the paper, which controls the quantization of the angle space (see results in Figure 5).

--offset

This changes the offset for the quantization. Its effect is studied in Figure 6, for --number-of-bins=2, as explained in the paper.

By modifying the above parameters and executing an error analysis as explained below, with --window-size=220, you will get to the plotted results.
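
To make the two parameters concrete, here is a minimal numpy sketch of the angle quantization and \(\chi^2\) comparison; it is a conceptual illustration under our own assumptions (e.g. that the offset is given in degrees), not the package's implementation:

import numpy

def angle_histogram(flow, number_of_bins, offset=0.0):
    # Quantizes the flow angle space into Q = number_of_bins bins,
    # optionally rotated by an offset (assumed here to be in degrees).
    angles = numpy.arctan2(flow[..., 1], flow[..., 0])
    shifted = (angles - numpy.radians(offset)) % (2 * numpy.pi)
    edges = numpy.linspace(0.0, 2 * numpy.pi, number_of_bins + 1)
    hist, _ = numpy.histogram(shifted, bins=edges)
    return hist / max(hist.sum(), 1)  # normalized histogram

def chi_square(h1, h2):
    # Chi-square distance between two normalized histograms.
    den = h1 + h2
    valid = den > 0
    return (((h1 - h2) ** 2)[valid] / den[valid]).sum()

# Compare the angle histograms of two synthetic flow fields (Q = 2).
rng = numpy.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 240, 320, 2))
print(chi_square(angle_histogram(f1, 2), angle_histogram(f2, 2)))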

Error Analysis

Once the scores you want to analyze have been produced by one of the methods above, you can calculate the error rates on the database using the application score_analysis.py. This program receives one directory (containing the scores output by a given method) and produces a console analysis of that method, as used in the paper:

$ ./bin/score_analysis.py results/histocomp replay --protocol=photo

That command will calculate a development set threshold at the Equal Error Rate (EER) and will apply it to the test set, reporting errors on both sets. A typical output would be like this:

Input data: /idiap/temp/aanjos/spoofing/scores/optflow_histocomp
Thres. at EER of development set: 6.9459e-02
[EER @devel] FAR: 37.04% (15601 / 42120) | FRR: 37.04% (8312 / 22440) | HTER: 37.04%
[HTER @test] FAR: 37.11% (20843 / 56160) | FRR: 35.75% (10696 / 29920) | HTER: 36.43%

The error analysis program considers, by default, every frame analyzed as an individual, independent observation and calculates the error rates based on the overall set of frames found in the whole development and test sets. The numbers printed inside the parentheses indicate how many frames were evaluated in each set (denominator) and how many of those contributed to the percentage displayed (numerator). The Half-Total Error Rate (HTER) is evaluated for both the development and test sets. The HTER for the development set is equal to the EER on the same set, naturally.
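
For reference, the sketch below reproduces this kind of analysis on synthetic scores; the convention that scores at or above the threshold are accepted as real accesses is our assumption, not necessarily the script's:

import numpy

def far_frr(attacks, real, threshold):
    # FAR: fraction of attacks accepted; FRR: fraction of real accesses rejected.
    far = (attacks >= threshold).mean()
    frr = (real < threshold).mean()
    return far, frr

def eer_threshold(attacks, real):
    # Picks the candidate threshold where FAR and FRR are closest.
    candidates = numpy.sort(numpy.concatenate((attacks, real)))
    gaps = [abs(far - frr) for far, frr in
            (far_frr(attacks, real, t) for t in candidates)]
    return candidates[int(numpy.argmin(gaps))]

# Synthetic development-set scores, for illustration only.
rng = numpy.random.default_rng(0)
attacks = rng.normal(0.0, 1.0, size=1000)
real = rng.normal(2.0, 1.0, size=1000)

t = eer_threshold(attacks, real)
far, frr = far_frr(attacks, real, t)
print("Thres. at EER: %.4e | FAR: %.2f%% | FRR: %.2f%% | HTER: %.2f%%"
      % (t, 100 * far, 100 * frr, 100 * (far + frr) / 2))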

The score_analysis.py script accepts 2 parameters that can be used to fine-tune its behaviour, namely:

--window-size=<integer>

Defines a window size over which the scores are averaged, within the same score sequence. For example, if one of the files produced by optflow_histocomp.py contains the score sequence [1.0, 2.0, 1.5, 3.5, 0.5] and the window-size parameter is set to 2, then the scores evaluated by this procedure are [1.5, 1.75, 2.5, 2.0], which are the averages of [1.0, 2.0], [2.0, 1.5], [1.5, 3.5] and [3.5, 0.5].

--overlap=<integer>

Controls the amount of overlap between the windows. If not set, the default overlap is window-size - 1. You can modify this behaviour by setting this parameter to a different value. Taking the example above, if you set the window-size to 2 and the overlap to zero, then the score set produced by this analysis would be [1.5, 2.5]. Notice that the frame score 0.5 (the last in the sequence) is ignored. A short sketch of this windowing follows.
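
The sketch below reproduces both worked examples above; it describes the windowing behaviour, and is not the script's actual code:

import numpy

def windowed_average(scores, window_size, overlap=None):
    # Averages scores over sliding windows; by default, windows overlap
    # by window_size - 1 frames (the window slides one frame at a time).
    if overlap is None:
        overlap = window_size - 1
    step = window_size - overlap
    scores = numpy.asarray(scores)
    starts = range(0, len(scores) - window_size + 1, step)
    return [float(scores[s:s + window_size].mean()) for s in starts]

print(windowed_average([1.0, 2.0, 1.5, 3.5, 0.5], 2))     # [1.5, 1.75, 2.5, 2.0]
print(windowed_average([1.0, 2.0, 1.5, 3.5, 0.5], 2, 0))  # [1.5, 2.5] -- 0.5 ignored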

You can observe the effect of the window-size setting on the score analysis by looking at the number of averaged frames analyzed:

$ ./bin/score_analysis.py --window-size=220 --overlap=80 results/histocomp replay --protocol=photo

And the output:

Input data: /idiap/temp/aanjos/spoofing/scores/optflow_histocomp
Window size: 220 (overlap = 80)
Thres. at EER of development set: 1.4863e-01
[EER @devel] FAR: 2.78% (5 / 180) | FRR: 2.50% (3 / 120) | HTER: 2.64%
[HTER @test] FAR: 2.92% (7 / 240) | FRR: 1.88% (3 / 160) | HTER: 2.40%

You can generate the results in Figure 7 and Table III of the paper just by manipulating this program.

Our paper also shows a break-down analysis (by attack device and support) in Figure 8 (last figure). To generate such a figure, one must produce the break-down analysis per device (Figure 8.a) and per attack support (Figure 8.b). To do this, pass the --breakdown option to the score_analysis.py script:

$ ./bin/score_analysis.py --window-size=220 --overlap=80 --breakdown results/histocomp replay --protocol=photo

Problems

In case of problems, please contact any of the authors of the paper.
