
Source code to reproduce the paper "Impact of Eye Detection Error on Face Recognition Performance"

Project description

This package provides the source code to run the experiments published in the paper Impact of Eye Detection Error on Face Recognition Performance. It relies on the FaceRecLib to execute the face recognition experiments, which in turn uses the face recognition algorithms and the database interface of Bob.

When you use this source code in a scientific publication, we would be happy if you would cite:

@article{Dutta2015,
  author = {Abhishek Dutta and Manuel G\"unther and Laurent El Shafey and S\'ebastien Marcel},
  title = {Impact of Eye Detection Error on Face Recognition Performance},
  journal = {IET Biometrics},
  year = {2015},
  issn = {2047-4938},
  url = {http://digital-library.theiet.org/content/journals/10.1049/iet-bmt.2014.0037},
  pdf = {http://publications.idiap.ch/downloads/papers/2015/Dutta_IETBIOMETRICS_2014.pdf}
}

Installation

This package uses several Bob libraries, which will be automatically installed locally using the command lines listed below. However, in order for the Bob packages to compile, certain dependencies need to be installed.

This package

The installation of this package relies on the Buildout system. By default, the command line sequence:

$ python bootstrap-buildout.py
$ ./bin/buildout

should download and install all required Bob packages in the versions that we used to produce the results. Other versions of the packages might generate slightly different results. To use the latest versions of all Bob packages, please remove the strict version numbers given in the buildout.cfg file in the main directory of this package.

Image Database

The experiments are run on an external image database. We do not provide the images themselves; please contact the database owners to obtain a copy. The Multi-PIE database used in our experiments can be obtained from its distributors.

Important!

After downloading the database, you will need to tell our software where to find it by changing the configuration file. In particular, please update the MULTIPIE_IMAGE_DIRECTORY in xfacereclib/paper/IET2015/configuration/database.py.
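
For illustration, the relevant line of that configuration file might look as follows (a minimal sketch; the path is a placeholder, and the actual file may define further variables that should be left untouched):

MULTIPIE_IMAGE_DIRECTORY = "/path/to/your/copy/of/Multi-PIE/images"  # adapt to your local installation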

Unpacking the Annotations

After the database is set up correctly, you’ll need to unpack the eye annotations that are used in the experiments. Please run the script:

$ ./bin/unpack_annotations

to extract the annotations into the expected directory structure. You can also unpack the annotations into a different directory (see ./bin/unpack_annotations.py --help), but note that all other scripts and configurations have their defaults set according to the default directory.

Testing your Installation

After you have set up the database, you should be able to run our test suite:

$ ./bin/nosetests

Please make sure that all tests pass.

TODO:

Implement tests.

Getting help

In case anything goes wrong, please feel free to open a new ticket on our GitHub page, start a new discussion on our mailing list, or send an email to manuel.guenther@idiap.ch.

Recreating the Results of the Paper

After successfully setting up the database, you are able to run the face recognition experiments as explained in the paper. In particular, you will be able to reproduce Figures 4, 7 and 13. Be aware that we ran more than 1000 individual face recognition experiments, each of which used a slightly different configuration.

The Experiment Configuration

The face recognition experiments are run using the FaceRecLib. In total, we test five different face recognition algorithms, each of which uses its default configuration from the FaceRecLib:

  • eigenfaces: a PCA is trained on pixel gray values, and the projected features are compared with Euclidean distance.

  • fisherfaces: a combined PCA + LDA matrix is trained on pixel gray values, and the projected features are compared with Euclidean distance.

  • gabor-jet: Gabor jets are extracted at grid locations in the image and compared with a Gabor-phase-based similarity function.

  • lgbphs: extended local Gabor binary pattern histogram sequences are extracted from image blocks, and the histograms are compared with histogram intersection.

  • isv: DCT features are extracted from image blocks and modeled with a Gaussian mixture model and an additional inter-session variability model, and the score is computed as a likelihood ratio.

As input, all these algorithms expect images in which the face has been extracted and aligned so that the eye centers always lie at the same locations in the image. For this alignment, labeled eye locations must be available. The main focus of this paper is not the face recognition algorithms themselves, but how they perform when the eye locations are slightly misplaced, as can happen in both manual and automatic annotation.
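
To illustrate the role of the eye annotations in this alignment, here is a small sketch (our own illustration in plain numpy, not the actual FaceRecLib preprocessor) that computes the scaling and rotation mapping two annotated eye centers onto fixed target positions:

import numpy

def alignment_parameters(right_eye, left_eye, target_right=(16., 15.), target_left=(16., 48.)):
    """Eye coordinates are (y, x) pairs; the target positions are the fixed
    locations of the eyes in the cropped image (example values only, not the
    configuration used in the paper)."""
    src = numpy.subtract(left_eye, right_eye)        # vector between annotated eye centers
    dst = numpy.subtract(target_left, target_right)  # vector between target eye positions
    scale = numpy.linalg.norm(dst) / numpy.linalg.norm(src)
    angle = numpy.arctan2(dst[0], dst[1]) - numpy.arctan2(src[0], src[1])
    return scale, angle

# A small shift of one annotated eye center changes both scale and angle, and
# thereby the whole cropped face; this is exactly the error source studied here.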

Running the Experiments

For convenience, we have generated a wrapper script that allows running a set of face recognition experiments in sequence, or even in parallel (see below). This wrapper script abuses one functionality of the FaceRecLib, namely the parameter testing, which provides an easy way to perform a grid search over a set of parameters. For our purposes, these parameters are:

  • Figure 4: the eye position shifts in horizontal and vertical direction, as well as the rotation angle.

  • Figure 7: the standard deviations of the normally distributed shifts of the eye positions in horizontal and vertical direction, as well as a random seed.

The corresponding configurations are given in fixed_perturbation.py (Figure 4) and random_perturbation.py (Figure 7). There you can find the setup as it was used to generate the corresponding plots; in case you want to run only a subset of the experiments, you can reduce the parameters in each list.
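
For illustration only, the two perturbation types roughly correspond to the following sketch; the function names and the (y, x) convention are our own assumptions, and the actual perturbations are applied by the scripts driven by the two configuration files:

import numpy

def fixed_shift(eye, shift_x, shift_y):
    # Figure 4: shift an annotated eye position (y, x) by a fixed number of
    # pixels (the additional rotation angle of Figure 4 is omitted here).
    return (eye[0] + shift_y, eye[1] + shift_x)

def random_shift(eye, sigma_x, sigma_y, seed):
    # Figure 7: draw a normally distributed shift with the given standard
    # deviations; the random seed makes the perturbation reproducible.
    rng = numpy.random.RandomState(seed)
    return (eye[0] + rng.normal(0., sigma_y), eye[1] + rng.normal(0., sigma_x))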

The experiments can be run using the ./bin/parameter_test.py script. This script has several options, the most important of which are:

  • --configuration-file: the configuration file that contains the parameters that we want to test. For our experiments, these are the two files fixed_perturbation.py (Figure 4) and random_perturbation.py (Figure 7).

  • --database: the database that should be used in the experiments, which will be multipie-m in all cases.

  • --executable: the (pythonic) name of the face verification function that will be executed. Since we had to modify the default script a bit, our script needs to be specified (see below).

  • --sub-directory: the name of a directory (created if needed) in which all experiments for the given configuration file are stored.

  • --grid: the name of a grid configuration used to run the algorithms in parallel (see below).

  • --verbose: Print additional information or debug information during the execution of the experiments. The --verbose option can be given several times, increasing the level to Warning (1), Info (2) and Debug (3). By default, only Error (0) messages are printed. The Info level (i.e., -vv) is recommended.

  • --dry-run: Use this option to print the calls to the FaceRecLib without executing them. It is recommended to use this flag once to check that everything is correct before actually running the experiments.

Additionally, parameters can be passed directly to the ./bin/faceverify.py script from the FaceRecLib. Please use a -- to separate parameters for ./bin/faceverify.py from parameters for ./bin/parameter_test.py. Useful parameters might be the --result-directory and the --temp-directory options. For a complete list of options, please check ./bin/faceverify.py --help.

Finally, to run the experiments for Figures 4 and 7, call:

$ ./bin/parameter_test.py --configuration-file fixed_perturbation.py --database multipie-m --sub-directory fixed --executable xfacereclib.paper.IET2015.script.faceverify -- --temp-directory [YOUR_TEMP_DIRECTORY] --result-directory [YOUR_RESULT_DIRECTORY]

$ ./bin/parameter_test.py --configuration-file random_perturbation.py --database multipie-m --sub-directory random --executable xfacereclib.paper.IET2015.script.faceverify -- --temp-directory [YOUR_TEMP_DIRECTORY] --result-directory [YOUR_RESULT_DIRECTORY]

The last set of experiments, i.e., those needed to regenerate Figure 13, can be run using the ./bin/annotation_types script. Again, this script has a set of options, most of which have proper default values:

  • --image-directory: the base directory of the Multi-PIE database; needs to be specified.

  • --annotation-directory: the base directory, where the annotations have been extracted to.

  • --algorithms: a list of algorithms that should be tested; by default all five algorithms are run.

  • --world-types: a list of annotation types that are used to train the algorithms and to enroll the models.

  • --probe-types: a list of annotation types that are probed against the enrolled models.

Again, the same --verbose option exists, and options can be passed on to the ./bin/faceverify.py script after a --. Hence, the last set of experiments can be started with:

$ ./bin/annotation_types --image-directory [MULTIPIE_IMAGE_DIRECTORY] -vv -- --temp-directory [YOUR_TEMP_DIRECTORY] --result-directory [YOUR_RESULT_DIRECTORY]

Parallel Execution

Since the two command lines above execute more than 1000 individual face recognition experiments, you might want to run them in parallel. For this purpose, you can use the --grid option of the ./bin/parameter_test.py script. This will trigger the usage of GridTK, a tool originally developed to submit and monitor jobs in an SGE processing farm. If you have access to such a farm, you can use the --grid sge option to submit the experiments to the SGE grid (you might need to adapt the SGE setup in the grid configuration file xfacereclib/paper/IET2015/configuration/grid.py, in facereclib/utils/grid.py of the FaceRecLib, or in GridTK itself).

On the other hand, if you have a powerful machine with many processing units, you can use the --grid local option. This will submit jobs to the “local” queue, which you then have to start manually by:

$ ./bin/jman --local --database [DIR]/submitted.sql3 -vv run-scheduler --parallel [NUMBER_OF_SLOTS] --die-when-finished

Please refer to the GridTK manual for more details.

Evaluating the Experiments

After all experiments have finished successfully, the resulting score files can be evaluated. The figures in the paper were generated using a mix of Python and R scripts to make them look nicer; for this package, the figures are plotted solely using matplotlib. The ./bin/plot_results script can be used to create plots similar to the ones in Figures 4, 7 and 13. Additionally, it writes .csv files containing the exact numbers on which the figures in the paper rely.

As usual, the ./bin/plot_results script has a list of command line options, most of which have proper default values:

  • --scores-directory: the base directory, where the score files have been produced.

  • --experiments: a list of experiments to evaluate. By default, all three experiments are evaluated.

  • --algorithms: a list of algorithms to evaluate. By default, all five algorithms are evaluated.

Some more options are available; see ./bin/plot_results --help for a complete list. To produce the plots for Figures 4, 7 and 13, simply call:

$ ./bin/plot_results -vv --scores-directory [YOUR_RESULT_DIRECTORY]

Afterward, the plots can be found in the plots directory. For Figure 4, they are called HTER_fixed.pdf and AUC_fixed.pdf, while for Figure 7 they are HTER_random.pdf and AUC_random.pdf. The HTER plots should be identical to the ones found in the Paper. The AUC plots have a different color coding than in the Paper, but the contents are identical. Finally, the file plots/ROCs.pdf contains the ROC curves of Figure 13, except that the FAR range is slightly higher.
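
If you want to post-process the numbers yourself, the .csv files written by ./bin/plot_results can be read with standard tools. The following is a hypothetical sketch; the file name and column names are assumptions, so please check the headers of the generated .csv files:

import csv
import matplotlib.pyplot as plt

# Read one of the generated .csv files (assumed name and columns) and re-plot it.
shifts, hters = [], []
with open("plots/HTER_fixed.csv") as csv_file:
    for row in csv.DictReader(csv_file):
        shifts.append(float(row["shift"]))   # assumed column name
        hters.append(float(row["HTER"]))     # assumed column name

plt.plot(shifts, hters, marker="o")
plt.xlabel("eye position shift")
plt.ylabel("HTER (%)")
plt.savefig("HTER_fixed_replot.pdf")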
