{ "info": { "author": "Grigorios Kalliatakis", "author_email": "gkallia@essex.ac.uk", "bugtrack_url": null, "classifiers": [ "Intended Audience :: Developers", "Intended Audience :: Education", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.6" ], "description": "
\n
\n\n[MIT License](https://github.com/GKalliatakis/DisplaceNet/blob/master/LICENSE)\n\n\n[Tweet](https://twitter.com/intent/tweet?text=DisplaceNet:%20Recognising%20Displaced%20People%20from%20Images%20by%20Exploiting%20Dominance%20Level&url=https://github.com/GKalliatakis/DisplaceNet&hashtags=ML,DeepLearning,CNNs,HumanRights,HumanRightsViolations,ComputerVisionForHumanRights)\n
To reduce the amount of manual labour required for human-rights-related image analysis, \nwe introduce DisplaceNet, a novel model that infers potentially displaced people from images \nby integrating the dominance level of the depicted situation and a CNN classifier into one framework.
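The integration described above can be pictured with a minimal sketch (the fusion rule, function names, and weight below are hypothetical illustrations, not the paper's actual implementation): a low estimated dominance level for the scene nudges the CNN classifier's "displaced people" probability upward.

```python
def fuse(cnn_prob, dominance, weight=0.3):
    """Hypothetical fusion of the two cues used by DisplaceNet-style models.

    cnn_prob  -- classifier probability that the image shows displaced people
    dominance -- estimated dominance level of the situation in [-1, 1];
                 low dominance suggests the people depicted are in distress
    weight    -- how strongly low dominance boosts the probability (assumed)
    """
    # Boost the probability only when dominance is negative (illustrative rule).
    adjusted = cnn_prob + weight * max(0.0, -dominance)
    # Clamp to a valid probability.
    return min(1.0, adjusted)
```

For example, a borderline classifier score combined with a very low dominance estimate crosses a 0.5 decision threshold, while the same score in a high-dominance scene does not.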
\n\n\n
\n
\n Grigorios Kalliatakis \n Shoaib Ehsan \n Maria Fasli \n Klaus McDonald-Maier \n
\n\n\nTo appear in the 1st CVPR Workshop on
Computer Vision for Global Challenges (CV4GC) \n\n
\n[arXiv preprint]\n \n[poster coming soon...]\n
The performance of displaced-people recognition using DisplaceNet is listed below. \nFor comparison, we also list the performance of vanilla CNNs trained with various network backbones \nfor recognising displaced people. We report comparisons in both accuracy and coverage (the proportion of a dataset for which a classifier is able to produce a prediction).
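The coverage metric mentioned above can be computed with a short sketch (an illustrative protocol, assuming a confidence threshold decides when a prediction is produced; not necessarily the paper's exact evaluation code):

```python
import numpy as np

def accuracy_and_coverage(probs, labels, threshold=0.8):
    """Accuracy on the covered subset, plus coverage of the whole dataset.

    probs     -- per-image probabilities for the positive class
    labels    -- ground-truth binary labels (0 or 1)
    threshold -- minimum confidence for a prediction to count (assumed)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    # A prediction is "produced" only when the classifier is confident enough.
    confident = np.maximum(probs, 1.0 - probs) >= threshold
    # Coverage: fraction of the dataset that receives a prediction.
    coverage = confident.mean()
    if coverage == 0:
        return 0.0, 0.0
    # Accuracy is measured on the covered subset only.
    preds = (probs[confident] >= 0.5).astype(int)
    accuracy = (preds == labels[confident]).mean()
    return accuracy, coverage
```

Raising the threshold typically trades coverage for accuracy, which is why both metrics are reported together.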
\n\n\n
\n
\n :octocat:
\n The repo will be updated with more details soon!
\n Make sure you have starred and forked this repository before moving on!\n\n