{ "info": { "author": "Laurent El Shafey", "author_email": "Laurent.El-Shafey@idiap.ch", "bugtrack_url": null, "classifiers": [ "Development Status :: 5 - Production/Stable", "Environment :: Console", "Intended Audience :: Education", "Intended Audience :: Science/Research", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Natural Language :: English", "Programming Language :: Python", "Programming Language :: Python :: 3", "Topic :: Scientific/Engineering :: Artificial Intelligence" ], "description": "======================================================\nProbabilistic Linear Discriminant Analysis Experiments\n======================================================\n\nThis package contains scripts that show how to use the implementation\nof the scalable formulation of Probabilistic Linear Discriminant Analysis \n(PLDA), integrated into `Bob `_, as \nwell as how to reproduce experiments of the article mentioned below. \nIt is implemented and maintained via `github \n`_.\n\n.. contents::\n\nReferences\n----------\n\nIf you use this package and/or its results, please cite the following\npublications:\n\n1. The original paper with the scalable formulation of PLDA explained \n in details::\n\n @article{ElShafey_TPAMI_2013,\n author = {El Shafey, Laurent and McCool, Chris and Wallace, Roy and Marcel, S{\\'{e}}bastien},\n title = {A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition},\n year = {2013},\n month = jul,\n journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},\n volume = {35},\n number = {7},\n pages = {1788-1794},\n doi = {10.1109/TPAMI.2013.38},\n pdf = {http://publications.idiap.ch/downloads/papers/2013/ElShafey_TPAMI_2013.pdf}\n }\n\n2. Bob as the core framework used to run the experiments::\n\n @inproceedings{Anjos_ACMMM_2012,\n author = {A. Anjos and L. El Shafey and R. Wallace and M. G\\\"unther and C. McCool and S. 
Marcel},\n      title = {Bob: a free signal processing and machine learning toolbox for researchers},\n      year = {2012},\n      month = oct,\n      booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},\n      publisher = {ACM Press},\n      url = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf},\n    }\n\n3. If you decide to use the Multi-PIE database, you should mention the\n   following paper, where it is introduced::\n\n    @article{Gross_IVC_2010,\n      author = {Gross, Ralph and Matthews, Iain and Cohn, Jeffrey and Kanade, Takeo and Baker, Simon},\n      title = {Multi-PIE},\n      journal = {Image and Vision Computing},\n      year = {2010},\n      month = may,\n      volume = {28},\n      number = {5},\n      issn = {0262-8856},\n      pages = {807--813},\n      numpages = {7},\n      doi = {10.1016/j.imavis.2009.08.002},\n      url = {http://dx.doi.org/10.1016/j.imavis.2009.08.002},\n      acmid = {1747071},\n    }\n\n4. If you only use the Multi-PIE annotations, you should cite the following paper,\n   since the annotations were made for the experiments of this work::\n\n    @article{ElShafey_TPAMI_2013,\n      author = {El Shafey, Laurent and McCool, Chris and Wallace, Roy and Marcel, S{\\'{e}}bastien},\n      title = {A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition},\n      year = {2013},\n      month = jul,\n      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},\n      volume = {35},\n      number = {7},\n      pages = {1788-1794},\n      doi = {10.1109/TPAMI.2013.38},\n      pdf = {http://publications.idiap.ch/downloads/papers/2013/ElShafey_TPAMI_2013.pdf}\n    }\n\n5. If you decide to use the Labeled Faces in the Wild (LFW) database, you should\n   mention the following paper, where it is introduced::\n\n    @TechReport{LFWTech,\n      author = {Gary B. 
Huang and Manu Ramesh and Tamara Berg and Erik Learned-Miller},\n      title = {Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments},\n      institution = {University of Massachusetts, Amherst},\n      year = {2007},\n      number = {07-49},\n      month = oct,\n    }\n\n\nInstallation\n------------\n\n.. note::\n\n  If you are reading this page through our GitHub portal and not through PyPI,\n  note **the development tip of the package may not be stable** or become\n  unstable in a matter of moments.\n\n  Go to `http://pypi.python.org/pypi/xbob.paper.tpami2013\n  `_ to download the latest\n  stable version of this package.\n\nThere are two options you can follow to get this package installed and\noperational on your computer: you can use automatic installers like `pip\n`_ (or `easy_install\n`_) or manually download, unpack and\nuse `zc.buildout `_ to create a\nvirtual work environment just for this package. In both cases, you must\nfirst install `Bob`_ (>= 1.2.0), whose installation process is described\nin the `user guide \n`_.\n\n\nUsing an automatic installer\n============================\n\nUsing ``pip`` is the easiest (shell commands are marked with a ``$`` sign)::\n\n  $ pip install xbob.paper.tpami2013\n\nYou can also do the same with ``easy_install``::\n\n  $ easy_install xbob.paper.tpami2013\n\nThis will download and install this package plus any other required\ndependencies. It will also verify that the version of Bob you have installed\nis compatible.\n\nThis scheme works well with virtual environments created by `virtualenv\n`_ or if you have root access to your\nmachine. 
Otherwise, we recommend you use the next option.\n\n\nUsing ``zc.buildout``\n=====================\n\nDownload the latest version of this package from `PyPI\n`_ and unpack it in your\nworking area::\n\n  $ wget http://pypi.python.org/packages/source/x/xbob.paper.tpami2013/xbob.paper.tpami2013-1.0.0.zip\n  $ unzip xbob.paper.tpami2013-1.0.0.zip\n  $ cd xbob.paper.tpami2013-1.0.0\n\nThe installation of the toolkit itself uses `buildout \n`_. You don't need to understand its inner workings\nto use this package. Here is a recipe to get you started::\n\n  $ python bootstrap.py\n  $ ./bin/buildout\n\nThese two commands should download and install all missing dependencies and\nget you a fully operational test and development environment.\n\n.. note::\n\n  The python shell used in the first line of the previous command set\n  determines the python interpreter that will be used for all scripts developed\n  inside this package. Because this package makes use of `Bob`,\n  you must make sure that the ``bootstrap.py``\n  script is called with the **same** interpreter used to build Bob, or\n  unexpected problems might occur.\n\n  If Bob is installed by the administrator of your system, it is safe to\n  assume it uses the default python interpreter. In this case, the above\n  command lines should work as expected. If you have Bob installed somewhere\n  else, in a private directory, edit the file ``buildout.cfg`` **before**\n  running ``./bin/buildout``. Find the section named ``buildout`` and edit or\n  add the line ``prefixes`` to point to the directory where Bob is installed or\n  built. For example::\n\n    [buildout]\n    ...\n    prefixes=/home/laurent/work/bob/build\n\n\nPLDA tutorial\n-------------\n\nThe following example consists of a simple script that makes use of\nProbabilistic Linear Discriminant Analysis (PLDA) modeling on\nFisher's iris dataset. It performs the following tasks:\n\n  1. Train a PLDA model using the first two classes of the dataset\n  2. 
Enroll a class-specific PLDA model for the third class of the dataset\n  3. Compute (verification) scores for both positive and negative samples\n  4. Plot the distribution of the scores and save it into a file\n\nTo run this simple example, you just need to execute the following command::\n\n  $ ./bin/plda_example_iris.py --output-img plda_example_iris.png\n\n\nReproducing experiments\n-----------------------\n\nIt is currently possible to reproduce all the experiments of the article\non both Labeled Faces in the Wild and Multi-PIE using the PLDA algorithm.\nIn particular, the accuracy reported in Table 2, Figure 2 and the HTER\nreported in Table 3 can easily be reproduced by following the steps\ndescribed below.\n\nBe aware that all the scripts provide several optional arguments that\nare very useful if you wish to use your own features or your own\nparameters.\n\nKeep in mind that the results published in the paper were obtained with\na pre-release of Bob (older than 1.0.0). You might hence observe slight\ndifferences when running the scripts with Bob 1.2.0.\n\n\nNote for Grid Users\n===================\n\nAt Idiap, we use the powerful Sun Grid Engine (SGE) to parallelize our\njob submissions as much as we can. In the Biometrics group, we have developed\na little toolbox `gridtk `_ that can\nsubmit and manage jobs at the Idiap computing grid through SGE.\n\nThe following sections will explain how to reproduce the paper results in\nsingle (non-gridified) jobs. If you are at Idiap, you can run the\nfollowing commands on the SGE infrastructure by adding the ``--grid``\nflag to any command. This may also work at other locations with an SGE\ninfrastructure, but will likely require some configuration changes in the\ngridtk utility.\n\n\nLabeled Faces in the Wild dataset\n=================================\n\nThe experiments of this section are performed on the LFW (Labeled Faces\nin the Wild) protocol. 
The features are publicly available and will be\nautomatically downloaded from `this webpage \n`_ if you follow the\ninstructions below. They were extracted from the LFW images aligned with the\nfunneling algorithm.\n\n\nGetting the features and converting them into HDF5 format\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe following command will download a tarball with the SIFT features,\nextract the content of the archive and convert the features into a\nsuitable HDF5 format for Bob::\n\n  $ ./bin/lfw_features.py --output-dir /PATH/TO/LFW/DATABASE/\n\n\nPCA+PLDA toolchain on LFW\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOnce the features have been extracted, the dimensionality is reduced\nusing Principal Component Analysis (PCA), before applying PLDA modeling.\nThese steps are combined in the following script, which will run the\nPCA+PLDA toolchain on the specified protocol::\n\n  $ ./bin/toolchain_pcaplda.py --features-dir /PATH/TO/LFW/DATABASE/lfw_funneled --protocol view1 --output-dir /PATH/TO/LFW/OUTPUT_DIR/\n\nTo report the final performance on LFW, you need to run\n10 experiments on view 2 in a leave-one-out cross-validation scheme.\nWe provide the following script for this purpose::\n\n  $ ./bin/experiment_pcaplda_lfw.py --features-dir /PATH/TO/LFW/DATABASE/lfw_funneled --output-dir /PATH/TO/LFW/OUTPUT_DIR/\n\n.. note::\n\n  The previous script is monothreaded and will run the 10 independent\n  view 2 experiments in sequence. 
If you have a multi-core CPU, you\n  can split this script into several shorter jobs by running the\n  loop below, which is equivalent to the previous\n  command::\n\n    $ for k in `seq 1 10`; do \\\n      ./bin/toolchain_pcaplda.py --features-dir /PATH/TO/LFW/DATABASE/lfw_funneled --protocol view2-fold${k} --output-dir /PATH/TO/LFW/OUTPUT_DIR/ ; \\\n      done\n\n\nSummarizing the results as in Table 2\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOnce the previous experiments have successfully completed, you can use\nthe following script to plot Table 2, which will estimate the mean\naccuracy as well as the standard error of the mean over the 10 experiments\nof LFW view 2::\n\n  $ ./bin/plot_table2.py --output-dir /PATH/TO/LFW/OUTPUT_DIR/\n\n.. note::\n\n  Compared to the results published in the article, there are slight\n  differences caused by both the order of the training files when applying\n  PCA, and the lists used to split the LFW `training` set into a `training`\n  set and a `validation` set (the validation set is used to select the\n  verification threshold applied to the test set).\n\n\nMulti-PIE dataset\n=================\n\nThe experiments of this section are performed on the U protocol of the\nMulti-PIE dataset. The filelists associated with this protocol can be found\non `this website `_.\n\n\nGetting the data\n~~~~~~~~~~~~~~~~\n\nYou first need to buy and download the Multi-PIE database:\n  http://multipie.org/\nand download the annotations available here:\n  http://www.idiap.ch/resource/biometric/\n\n\nFeature extraction\n~~~~~~~~~~~~~~~~~~\n\nThe following command will extract Local Binary Pattern (LBP) histogram\nfeatures. You should set the paths to the data according to your own\nenvironment::\n\n  $ ./bin/lbph_features.py --image-dir /PATH/TO/MULTIPIE/IMAGES --annotation-dir /PATH/TO/MULTIPIE/ANNOTATIONS --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n.. 
note::\n\n  The output directory /PATH/TO/MULTIPIE/OUTPUT_DIR/ is a base directory\n  for the output of all experiments on Multi-PIE. Make sure to use the\n  same directory for all the experiments below, otherwise the following\n  commands might not work as expected. You can look at the options\n  of the scripts if you need more flexibility or want to use alternative\n  feature vectors, etc.\n\n\nDimensionality reduction\n~~~~~~~~~~~~~~~~~~~~~~~~\n\nOnce the features have been extracted, they are projected into a\nlower-dimensional subspace using Principal Component Analysis (PCA)::\n\n  $ ./bin/pca_features.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n.. note::\n\n  Equivalently, this can also be achieved by running the following\n  individual commands::\n\n    $ ./bin/pca_train.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph --pca-dir features --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/linear_project.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph --algorithm-dir features --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n\nProposed system: PLDA modeling and scoring\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPLDA is then applied to the dimensionality-reduced features.\n\nThis involves three different steps:\n  1. Training\n  2. Model enrollment\n  3. Scoring\n\nThe following command will perform all these steps::\n\n  $ ./bin/toolchain_plda.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n.. 
note::\n\n  Equivalently, this can also be achieved by running the following\n  individual commands::\n\n    $ ./bin/plda_train.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/plda_enroll.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/plda_scores.py --group dev --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/plda_scores.py --group eval --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\nThen, the HTER on the evaluation set can be obtained using the\nevaluation script from the bob library as follows::\n\n  $ ./bin/bob_compute_perf.py -d /PATH/TO/MULTIPIE/OUTPUT_DIR/U/plda/scores/scores-dev -t /PATH/TO/MULTIPIE/OUTPUT_DIR/U/plda/scores/scores-eval -x\n\nThe HTER on the evaluation set, when using the EER on the development\nset as the criterion for the threshold, corresponds to the PLDA value reported\nin Table 3 of the article mentioned above.\n\nIf you want to reproduce Figure 2 of the article, you can run the\nfollowing commands (instead of the previous one)::\n\n  $ ./bin/experiment_plda_subworld.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n  $ ./bin/plot_figure2.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n.. note::\n\n  Equivalently, this can also be achieved by running the following\n  individual commands. The commands within the loop are independent\n  and monothreaded, so you can break up the loop and run several of\n  these commands at the same time if your CPU has several cores::\n\n    $ for k in 2 4 6 8 10 14 19 29 38 48 57 67 76; do \\\n      ./bin/toolchain_plda.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/ --world-nshots $k --plda-dir plda_subworld_${k}; \\\n      done\n    $ ./bin/plot_figure2.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\nThe previous commands will run the PLDA toolchain several times for a varying\nnumber of training samples. 
Please note that this will take a lot of time\nto complete (a bit less than two days on a recent workstation, such as one with an\nIntel Core i7 CPU).\n\nThe HTER value in Table 3 of the article (for the PLDA system)\ncorresponds to the run where the full training set is used, and can\nbe obtained as follows::\n\n  $ ./bin/bob_compute_perf.py -d /PATH/TO/MULTIPIE/OUTPUT_DIR/U/plda_subworld_76/scores/scores-dev -t /PATH/TO/MULTIPIE/OUTPUT_DIR/U/plda_subworld_76/scores/scores-eval -x\n\n.. note::\n\n  If you compare the figure you obtain with Figure 2 of the published article,\n  you will observe slight differences. These do not affect the global\n  trends and conclusions shown in the article, and are caused by two different\n  aspects:\n\n  1. The features for the paper were generated using an unofficial version of\n  Bob (older than the first official release), whereas the features are now\n  generated with Bob 1.2.0. Many improvements were made to the\n  implementations of the preprocessing techniques (face cropping and the\n  Tan-Triggs algorithm) as well as to the LBP implementation.\n\n  2. The order of the files obtained (and now sorted) from the database API.\n  For instance, when applying PCA, the input matrix will be different depending\n  on the order of the files used to build this matrix.\n\n\nBaseline 1: PCA on the LBP histograms\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe LBP histogram features were used in combination with the PCA\nclassification technique (commonly called Eigenfaces in the face\nrecognition literature).\n\nThis involves three different steps:\n  1. PCA subspace training\n  2. Model enrollment\n  3. Scoring (with a Euclidean distance)\n\nThe following command will perform all these steps::\n\n  $ ./bin/toolchain_pca.py --n-outputs 2048 --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n.. 
note::\n\n  Equivalently, this can also be achieved by running the following\n  individual commands::\n\n    $ ./bin/pca_train.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph --n-outputs 2048 --pca-dir pca_euclidean --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/linear_project.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph --algorithm-dir pca_euclidean --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/meanmodel_enroll.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/pca_euclidean/lbph_projected --algorithm-dir pca_euclidean --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/distance_scores.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/pca_euclidean/lbph_projected --algorithm-dir pca_euclidean --distance euclidean --group dev --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/distance_scores.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/pca_euclidean/lbph_projected --algorithm-dir pca_euclidean --distance euclidean --group eval --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\nThen, the HTER on the evaluation set can be obtained using the\nevaluation script from the bob library as follows::\n\n  $ ./bin/bob_compute_perf.py -d /PATH/TO/MULTIPIE/OUTPUT_DIR/U/pca_euclidean/scores/scores-dev -t /PATH/TO/MULTIPIE/OUTPUT_DIR/U/pca_euclidean/scores/scores-eval -x\n\nThis value corresponds to the PCA baseline reported in\nTable 3 of the article (once more, be aware of differences due\nto implementation changes in the feature extraction process\nand algorithm parameters that were not kept). These results\nare obtained for a PCA subspace of rank 2048, which was\nfound to be the optimal PCA subspace size when we tuned this\nparameter using the LBPH features.\n\n.. 
note::\n\n  In contrast to what one sentence of the article suggests, we did not\n  apply the PCA baseline to the dimensionality-reduced PCA features.\n  That would mean applying the same PCA dimensionality reduction\n  technique twice in a row, which does not make much sense. Instead,\n  we apply the PCA technique directly to the LBPH features, tuning the\n  PCA subspace size.\n\n\nBaseline 2: LDA on the PCA-projected LBP histograms\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe PCA-projected LBP histogram features considered for the PLDA system\nwere also used in combination with Fisher's Linear Discriminant\nAnalysis (LDA) classification technique (commonly called Fisherfaces\nin the face recognition literature).\n\nThis involves three different steps:\n  1. LDA subspace training\n  2. Model enrollment\n  3. Scoring (with a Euclidean distance)\n\nThe following command will perform all these steps::\n\n  $ ./bin/toolchain_lda.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n.. 
note::\n\n  Equivalently, this can also be achieved by running the following\n  individual commands::\n\n    $ ./bin/lda_train.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph_projected --lda-dir lda_euclidean --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/linear_project.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph_projected --algorithm-dir lda_euclidean --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/meanmodel_enroll.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/lda_euclidean/lbph_projected --algorithm-dir lda_euclidean --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/distance_scores.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/lda_euclidean/lbph_projected --algorithm-dir lda_euclidean --distance euclidean --group dev --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/distance_scores.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/lda_euclidean/lbph_projected --algorithm-dir lda_euclidean --distance euclidean --group eval --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\nThen, the HTER on the evaluation set can be obtained using the\nevaluation script from the bob library as follows::\n\n  $ ./bin/bob_compute_perf.py -d /PATH/TO/MULTIPIE/OUTPUT_DIR/U/lda_euclidean/scores/scores-dev -t /PATH/TO/MULTIPIE/OUTPUT_DIR/U/lda_euclidean/scores/scores-eval -x\n\nThis value corresponds to the LDA baseline reported in\nTable 3 of the PLDA article (once more, be aware of slight\ndifferences due to implementation changes in the feature\nextraction process). 
These results are obtained for an LDA subspace\nof rank 64, which was found to be the optimal LDA subspace size\nwhen we tuned this parameter using the initial LBPH features.\n\n\nBaseline 3: LBP histogram classification with Chi-square scoring\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe LBP histogram features can be used in combination with a distance,\nsuch as the Chi-square distance, to obtain a face recognition system.\n\nThis involves two different steps:\n  1. Model enrollment\n  2. Scoring (with a Chi-square distance)\n\nThe following command will perform all these steps::\n\n  $ ./bin/toolchain_lbph.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n.. note::\n\n  Equivalently, this can also be achieved by running the following\n  individual commands::\n\n    $ ./bin/meanmodel_enroll.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph --algorithm-dir lbph_chisquare --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/distance_scores.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph --algorithm-dir lbph_chisquare --distance chi_square --group dev --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n    $ ./bin/distance_scores.py --features-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/U/features/lbph --algorithm-dir lbph_chisquare --distance chi_square --group eval --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\nThen, the HTER on the evaluation set can be obtained using the\nevaluation script from the bob library as follows::\n\n  $ ./bin/bob_compute_perf.py -d /PATH/TO/MULTIPIE/OUTPUT_DIR/U/lbph_chisquare/scores/scores-dev -t /PATH/TO/MULTIPIE/OUTPUT_DIR/U/lbph_chisquare/scores/scores-eval -x\n\nThis value corresponds to the LBP histogram (Chi-square)\nbaseline reported in Table 3 of the article (once more, be aware of\nslight differences due to implementation changes in the feature\nextraction process).\n\n\nSummarizing the results as in Table 3\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf you successfully run all the 
previous experiments, you can\nget a summary of the performance, as in Table 3, by running the\nfollowing command::\n\n  $ ./bin/plot_table3.py --output-dir /PATH/TO/MULTIPIE/OUTPUT_DIR/\n\n\nReporting bugs\n--------------\n\nThe package is open source and maintained via `github \n`_.\n\nIf you are facing technical issues running the scripts\nof this package, please send a message to `Bob's mailing list\n`_.\n\nIf you find a problem with this satellite package, you can file\na ticket on its `github issue tracker\n`_.\n\nIf you find a problem with the PLDA implementation, you can file\na ticket on `Bob's issue tracker `_.\n\nPlease follow `these guidelines \n`_\nwhen (or even better, before) reporting any bug.", "description_content_type": null, "docs_url": null, "download_url": "UNKNOWN", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "http://pypi.python.org/pypi/xbob.paper.tpami2013", "keywords": "plda,probalistic linear discriminant analysis,machine learning,face recognition,bob,xbob", "license": "GPLv3", "maintainer": null, "maintainer_email": null, "name": "xbob.paper.tpami2013", "package_url": "https://pypi.org/project/xbob.paper.tpami2013/", "platform": "UNKNOWN", "project_url": "https://pypi.org/project/xbob.paper.tpami2013/", "project_urls": { "Download": "UNKNOWN", "Homepage": "http://pypi.python.org/pypi/xbob.paper.tpami2013" }, "release_url": "https://pypi.org/project/xbob.paper.tpami2013/1.0.0/", "requires_dist": null, "requires_python": null, "summary": "Example on how to use the scalable implementation of PLDA and how to reproduce experiments of the article", "version": "1.0.0" }, "last_serial": 853631, "releases": { "0.0.1a0": [ { "comment_text": "", "digests": { "md5": "d80a740da9bfff053eb6a88b7820257b", "sha256": "63d49ca3dfff0f247f2441d74d9a7bc010edd5843f0157f02dfaa40a67fb0880" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.0.1a0.zip", "has_sig": false, 
"md5_digest": "d80a740da9bfff053eb6a88b7820257b", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 37677, "upload_time": "2013-08-21T11:22:58", "url": "https://files.pythonhosted.org/packages/d3/89/38958dca8957d66213e274258d51a81579b42ea57f95bc908c82d65dece0/xbob.paper.tpami2013-0.0.1a0.zip" } ], "0.0.1a1": [ { "comment_text": "", "digests": { "md5": "651c719b7f14a1f0045f4075a1645d1d", "sha256": "9888c3a8c51c5c95ba077b575cd62bfd266d3249ea4f707d5bc0bb0170fd4b1d" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.0.1a1.zip", "has_sig": false, "md5_digest": "651c719b7f14a1f0045f4075a1645d1d", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 37749, "upload_time": "2013-08-21T13:06:32", "url": "https://files.pythonhosted.org/packages/73/94/633b02d3e201cc3ba53f92509d1d00ee091708ac96f990ef0b3c3b70b94e/xbob.paper.tpami2013-0.0.1a1.zip" } ], "0.0.1a2": [ { "comment_text": "", "digests": { "md5": "990e0b9b1deec65022a357d31c483271", "sha256": "cad23cad9cb88f09f04f077cbd4a7861920635f061b1f6d5cc1a2740197c7a52" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.0.1a2.zip", "has_sig": false, "md5_digest": "990e0b9b1deec65022a357d31c483271", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 39552, "upload_time": "2013-08-21T17:08:09", "url": "https://files.pythonhosted.org/packages/09/2f/bf754bae522f1f71fdee5569d5c3a3b0dadba4765976c2169103e5697078/xbob.paper.tpami2013-0.0.1a2.zip" } ], "0.0.1a3": [ { "comment_text": "", "digests": { "md5": "ed08af93260ddfff581f82b355e2015a", "sha256": "812c907a1df466f510b23bfa85ba21459036b693f3a06c7323cab182b3457f69" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.0.1a3.zip", "has_sig": false, "md5_digest": "ed08af93260ddfff581f82b355e2015a", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 39552, "upload_time": "2013-08-22T08:22:55", "url": 
"https://files.pythonhosted.org/packages/b0/a9/721f2f47402e58d65224818fb724218def2ebb070c371e82592af0005d90/xbob.paper.tpami2013-0.0.1a3.zip" } ], "0.1.0a0": [ { "comment_text": "", "digests": { "md5": "cea155af539123044e3bedac855949b3", "sha256": "28dda8c27998c85dacb724869e994bffc2c09541fadb6912c07fdc3b6edf89e5" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.1.0a0.zip", "has_sig": false, "md5_digest": "cea155af539123044e3bedac855949b3", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 51935, "upload_time": "2013-08-22T12:19:57", "url": "https://files.pythonhosted.org/packages/bd/a8/1a02956502ac52ca960b24f500718b3d7f0a4c63f6096ab878a612472d22/xbob.paper.tpami2013-0.1.0a0.zip" } ], "0.1.0a1": [ { "comment_text": "", "digests": { "md5": "82e8f1e67d9b51ee9e67997249868895", "sha256": "84c82490ecc99af4baf1b372c3449191a7c6fe82d651700ad4c1260a3eb178cf" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.1.0a1.zip", "has_sig": false, "md5_digest": "82e8f1e67d9b51ee9e67997249868895", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 52985, "upload_time": "2013-08-22T15:40:35", "url": "https://files.pythonhosted.org/packages/49/47/2024d526ddcb0efea5a187650a5ae3c0385a98815c4a13d319996334c602/xbob.paper.tpami2013-0.1.0a1.zip" } ], "0.2.0a0": [ { "comment_text": "", "digests": { "md5": "6c1f299f977f8cd3b874e65872a5f883", "sha256": "64c33fb4ddc1fc8fc2d56ac7a95fc96eb818379bc2fa1143f978ac8c57a3a39b" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.2.0a0.zip", "has_sig": false, "md5_digest": "6c1f299f977f8cd3b874e65872a5f883", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 59131, "upload_time": "2013-08-23T21:12:21", "url": "https://files.pythonhosted.org/packages/5c/29/b76b5ef5827266c657d4a96e5ffe70cab962f65b2985209d954dfbe14445/xbob.paper.tpami2013-0.2.0a0.zip" } ], "0.2.0a1": [ { "comment_text": "", "digests": { "md5": 
"31cace0d1e76db376d58250c1097caf5", "sha256": "ebaf3d0482aa219ef9b343b25c53b95c40e7696877e61ad6021087a959fb22ba" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.2.0a1.zip", "has_sig": false, "md5_digest": "31cace0d1e76db376d58250c1097caf5", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 74388, "upload_time": "2013-08-24T11:19:13", "url": "https://files.pythonhosted.org/packages/a8/8d/fc42f04f264984ee2b02dd21c488c5a7dd72a1edc387555053f27b56a17f/xbob.paper.tpami2013-0.2.0a1.zip" } ], "0.2.0a2": [ { "comment_text": "", "digests": { "md5": "6da00e6d33a5347b7131481a81ecb26b", "sha256": "abca2b31442e41ae5d9559d339573e394eb4edc7290b2b56b8385466196f70c6" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.2.0a2.zip", "has_sig": false, "md5_digest": "6da00e6d33a5347b7131481a81ecb26b", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 76026, "upload_time": "2013-08-24T12:36:06", "url": "https://files.pythonhosted.org/packages/e4/46/89cd0e17fb969c7fbbf8c1d02895002d054c17fce51af9594e15c31a8222/xbob.paper.tpami2013-0.2.0a2.zip" } ], "0.2.0a3": [ { "comment_text": "", "digests": { "md5": "ad5b2a8177ba46db8725e7f868f8e3a1", "sha256": "7a25c2588576a7c346ef0c3f37569394d08fe0ce06048de08d68c371c99aab31" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.2.0a3.zip", "has_sig": false, "md5_digest": "ad5b2a8177ba46db8725e7f868f8e3a1", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 78748, "upload_time": "2013-08-24T18:36:20", "url": "https://files.pythonhosted.org/packages/e2/c3/9d9ce001ca80075dfb152283363d10e288ee229df051ddbdccec78e50688/xbob.paper.tpami2013-0.2.0a3.zip" } ], "0.2.0a4": [ { "comment_text": "", "digests": { "md5": "8a2057caeca0a1e8356aee4ae59a6e81", "sha256": "aef400ea2b5a12a380c347f47bb065e7963bb8d3c38c3c5d2b9999991092295d" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.2.0a4.zip", "has_sig": false, "md5_digest": 
"8a2057caeca0a1e8356aee4ae59a6e81", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 80156, "upload_time": "2013-08-25T13:54:36", "url": "https://files.pythonhosted.org/packages/87/0f/0c55a28dc6d7657edaf934f457905b4ed6580d886842de2f9638caa647cc/xbob.paper.tpami2013-0.2.0a4.zip" } ], "0.3.0a0": [ { "comment_text": "", "digests": { "md5": "a11db77b58996e60bea4a7e8c24c7a85", "sha256": "5455ba922d70cce22e34b55d3ab0806b0a8c61700d2baba6b780ebf8bc0c7fbe" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.3.0a0.zip", "has_sig": false, "md5_digest": "a11db77b58996e60bea4a7e8c24c7a85", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 93043, "upload_time": "2013-08-26T19:04:08", "url": "https://files.pythonhosted.org/packages/13/09/fc5f100a41eda02c3a13325bc124f245ac93e88dd414511023a9b745791f/xbob.paper.tpami2013-0.3.0a0.zip" } ], "0.3.0a1": [ { "comment_text": "", "digests": { "md5": "00b893ace4add8b7ae8d0212f8bab881", "sha256": "80d2131183c74a5e0decdc2b984eafcd42ec5d302be819039f83fd253714e523" }, "downloads": -1, "filename": "xbob.paper.tpami2013-0.3.0a1.zip", "has_sig": false, "md5_digest": "00b893ace4add8b7ae8d0212f8bab881", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 876830, "upload_time": "2013-08-26T20:21:07", "url": "https://files.pythonhosted.org/packages/57/4c/90beca6aed46ac9d648176ef623d64c67cf6b27f97803628af59f834300f/xbob.paper.tpami2013-0.3.0a1.zip" } ], "1.0.0": [ { "comment_text": "", "digests": { "md5": "b1f7f6705f2adae6e368424a8019ce6a", "sha256": "c5b5356f3a1225a4056dd58b9de886f1482eb2969c8fa9e1dafcc49408a84a5c" }, "downloads": -1, "filename": "xbob.paper.tpami2013-1.0.0.zip", "has_sig": false, "md5_digest": "b1f7f6705f2adae6e368424a8019ce6a", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 878084, "upload_time": "2013-08-31T11:22:07", "url": 
"https://files.pythonhosted.org/packages/cf/8f/d1a993f2b19572d4f98b62a45d41fe8b71918e48924b09146c120e7dc9fa/xbob.paper.tpami2013-1.0.0.zip" } ], "1.0.0a0": [ { "comment_text": "", "digests": { "md5": "8fb0dc9d1a89f21aa6c4441b86bec2f2", "sha256": "11b37dba875362ff87d36ab9888829ea1022aee2e5e5636e7223383bb7e98e86" }, "downloads": -1, "filename": "xbob.paper.tpami2013-1.0.0a0.zip", "has_sig": false, "md5_digest": "8fb0dc9d1a89f21aa6c4441b86bec2f2", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 878519, "upload_time": "2013-08-31T11:14:52", "url": "https://files.pythonhosted.org/packages/df/f6/9b05c6ca52fd133627320c4ce4cc0f10eb89df25d0e772777bcdc6387c24/xbob.paper.tpami2013-1.0.0a0.zip" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "b1f7f6705f2adae6e368424a8019ce6a", "sha256": "c5b5356f3a1225a4056dd58b9de886f1482eb2969c8fa9e1dafcc49408a84a5c" }, "downloads": -1, "filename": "xbob.paper.tpami2013-1.0.0.zip", "has_sig": false, "md5_digest": "b1f7f6705f2adae6e368424a8019ce6a", "packagetype": "sdist", "python_version": "source", "requires_python": null, "size": 878084, "upload_time": "2013-08-31T11:22:07", "url": "https://files.pythonhosted.org/packages/cf/8f/d1a993f2b19572d4f98b62a45d41fe8b71918e48924b09146c120e7dc9fa/xbob.paper.tpami2013-1.0.0.zip" } ] }