{ "info": { "author": "Cl\u00e9ment Cazorla", "author_email": "clement.cazorla@univ-reims.fr", "bugtrack_url": null, "classifiers": [], "description": "# SegSRGAN\n\nThis algorithm is based on the [method](https://hal.archives-ouvertes.fr/hal-01895163) proposed by Chi-Hieu Pham in 2019. More information about the SegSRGAN algorithm can be found in the associated [article](https://hal.archives-ouvertes.fr/hal-02189136/document).\n\n## Installation\n\n### User (recommended)\n\nThe library can be installed from PyPI:\n\n```\npip install SegSRGAN\n```\n\nNOTE: We recommend using `virtualenv`.\n\nOnce the package is installed, one can locate all the .py files presented hereafter using the `importlib` Python package as follows:\n\n```\nimport importlib.util\nimportlib.util.find_spec(\"SegSRGAN\").submodule_search_locations[0]\n```\n\n### Developer\n\nFirst, clone the repository. Then use `make` to run the test suite\nor to build the PyPI package:\n\n```\ngit clone git@github.com:koopa31/SegSRGAN.git\n\nmake test\nmake pkg\n```\n\n## Perform a training\n\n### Example:\n\n```\npython SegSRGAN_training.py \\\n--new_low_res 0.5 0.5 3 \\\n--csv /home/user/data.csv \\\n--snapshot_folder /home/user/training_weights \\\n--dice_file /home/user/dice.csv \\\n--mse_file /home/user/mse_example_for_article.csv \\\n--folder_training_data /home/user/temporary_file_for_training \\\n--interp 'type_of_interpolation'\n```\n\n### Options:\n#### General options:\n\n> * **csv** (string): CSV file that contains the paths to the files used for the training. These files are divided into two categories: train and test. 
Consequently, it must contain 3 columns, called HR_image, Label_image and Base (which is equal to either Train or Test), respectively.\n> * **dice_file** (string): CSV file in which the Dice score is stored at each epoch\n> * **mse\\_file** (string): CSV file in which the MSE is stored at each epoch\n> * **epoch** (integer): number of training epochs\n> * **batch_size** (integer): number of patches per mini-batch\n> * **number\\_of\\_disciminator\\_iteration** (integer): how many times the discriminator is trained before each generator update\n> * **new_low_res** (tuple): resolution of the LR image generated during the training. One value is given per dimension for a fixed resolution (e.g. \u201c--new_low_res 0.5 0.5 3\u201d). Two values are given per dimension if the resolutions have to be drawn between bounds (e.g. \u201c--new_low_res 0.5 0.5 4 --new_low_res 1 1 2\u201d means that, for each image at each epoch, the x and y resolutions are uniformly drawn between 0.5 and 1, whereas the z resolution is uniformly drawn between 2 and 4).\n> * **snapshot_folder** (string): path of the folder in which the weights are regularly saved after a given number of epochs (this number is set by the **snapshot** (integer) argument). It is also possible to resume a training from saved weights (detailed below).\n> * **folder_training_data** (string): folder where temporary files are written during the training (created at the beginning of each epoch and deleted at the end of it)\n> * **interp** (string): interpolation type used for the reconstruction of the high-resolution image before\n>applying the neural network. Can be either 'scipy' or 'sitk' ('scipy' by default). The downsampling method associated with each\n>interpolation method is different: with 'scipy', the downsampling is performed by a Scipy method, whereas for 'sitk' a classical\n>manual downsampling is performed. 
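As an illustration of the 3-column layout described for the **csv** option, the following sketch writes a minimal example file; the subject paths and the Train/Test split shown here are hypothetical placeholders, not files shipped with SegSRGAN:

```python
import csv
import io

# Hypothetical rows: one subject per row, with the HR image path, the
# segmentation label path, and the Base column set to Train or Test.
rows = [
    ("/home/user/data/subject1_hr.nii.gz", "/home/user/data/subject1_label.nii.gz", "Train"),
    ("/home/user/data/subject2_hr.nii.gz", "/home/user/data/subject2_label.nii.gz", "Test"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["HR_image", "Label_image", "Base"])  # required column names
writer.writerows(rows)
print(buf.getvalue())
```

Writing to a real path (e.g. the file passed via `--csv`) only requires replacing the `io.StringIO` buffer with `open("/home/user/data.csv", "w", newline="")`.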
\n>\n#### Network architecture options:\n\n> * **kernel_gen** (integer): number of output channels of the first convolutional layer of the generator. The number of\n>channels in each layer of the network is a multiple of kernel_gen, as shown in the schemes below.\n> * **kernel_dis** (integer): number of output channels of the first convolutional layer of the discriminator. The number of\n>channels in each layer of the network is a multiple of kernel_dis, as shown in the schemes below.\n> * **is_conditional** (Boolean): enables training a conditional network, with a condition on the input resolution (both discriminator and generator are conditional).\n> * **u_net** (Boolean): enables training a U-Net network (see the difference between the U-Net and non-U-Net networks in the images below).\n> * **is_residual** (Boolean): determines whether the structure of the network is residual or not. This option only impacts the activation function of the generator (see image below for more details).\n\n\n\n
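To make the two-bounds form of `--new_low_res` described above concrete, here is a small sketch of per-axis uniform drawing; this is a hypothetical re-implementation for illustration, not SegSRGAN's actual code:

```python
import random

def draw_resolution(bounds):
    """Draw one resolution per axis, uniformly between (low, high) bounds.

    bounds: one (low, high) pair per dimension; low == high gives a
    fixed resolution for that axis.
    """
    return tuple(random.uniform(low, high) for low, high in bounds)

# Equivalent of "--new_low_res 0.5 0.5 4 --new_low_res 1 1 2":
# x and y are drawn in [0.5, 1], z is drawn in [2, 4].
res = draw_resolution([(0.5, 1.0), (0.5, 1.0), (2.0, 4.0)])
print(res)
```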
\n[Figure] Residual vs non-residual generator networks\n
\n[Figure] Resblock\n
\n[Figure] Discriminator architecture\n