{ "info": { "author": "Guild AI", "author_email": "packages@guild.ai", "bugtrack_url": null, "classifiers": [], "description": "gpkg.slim.models\n################\n\n*TF-Slim models (Guild AI)*\n\nModels\n######\n\nimages\n======\n\n*Generic images dataset*\n\nOperations\n^^^^^^^^^^\n\nprepare\n-------\n\n*Prepare images for training*\n\nFlags\n`````\n\n**images**\n *Directory containing images to prepare (required)*\n\n**random-seed**\n *Seed used for train/validation split (randomly generated)*\n\n**val-split**\n *Percentage of images reserved for validation (30)*\n\n\ninception\n=========\n\n*TF-Slim Inception v1 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple 
GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n 
*\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 
if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n 
*Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\ninception-resnet-v2\n===================\n\n*TF-Slim Inception ResNet v2 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by 
multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the 
model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in 
steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which 
learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\ninception-v2\n============\n\n*TF-Slim Inception v2 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest 
checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to 
optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays 
(2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n 
*Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\ninception-v3\n============\n\n*TF-Slim Inception v3 
classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps 
(100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only 
systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer
learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the
system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\ninception-v4\n============\n\n*TF-Slim Inception v4 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays
(2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from 
scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n
*\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds
(600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nmobilenet\n=========\n\n*TF-Slim Mobilenet v1 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number
of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify
(required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train
(rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used
to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nmobilenet-v2-1.4\n================\n\n*TF-Slim Mobilenet v2 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n 
Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set
to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n
Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if
`auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nnasnet-large\n============\n\n*TF-Slim NASNet large classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all
available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n
*Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When
`auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - 
clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the
system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nnasnet-mobile\n=============\n\n*TF-Slim NASNet mobile classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single-GPU or CPU-only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if\n `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning
 rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*

label
-----

*Classify an image using a trained model*

Flags
`````

**image**
 *Path to image to classify (required)*

tflite
------

*Generate a TFLite file from a frozen graph*

Flags
`````

**output-format**
 *TF Lite output format (tflite)

 Choices:
 tflite
 graphviz_dot

 *

**quantized**
 *Whether or not output arrays are quantized (no)

 Choices:
 yes
 no

 *

**quantized-inputs**
 *Whether or not input arrays are quantized (no)

 Choices:
 yes
 no

 *

train
-----

*Train model from scratch*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*

transfer-learn
--------------

*Train model using transfer learning*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to
 logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*


pnasnet-large
=============

*TF-Slim PNASNet classifier*

Operations
^^^^^^^^^^

evaluate
--------

*Evaluate a trained model*

Flags
`````

**batch-size**
 *Number of examples in each evaluated batch (100)*

**eval-batches**
 *Number of batches to evaluate (all available)*

**step**
 *Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze
-----------------

*Export an inference graph with checkpoint weights*

Flags
`````

**step**
 *Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune
--------

*Finetune a trained model*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.0001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*

label
-----

*Classify an image using a trained model*

Flags
`````

**image**
 *Path to image to classify (required)*

tflite
------

*Generate a TFLite file from a frozen graph*

Flags
`````

**output-format**
 *TF Lite output format (tflite)

 Choices:
 tflite
 graphviz_dot

 *

**quantized**
 *Whether or not output arrays are quantized (no)

 Choices:
 yes
 no

 *

**quantized-inputs**
 *Whether or not input arrays are quantized (no)

 Choices:
 yes
 no

 *

train
-----

*Train model from scratch*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is
 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*

transfer-learn
--------------

*Train model using transfer learning*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*


pnasnet-mobile
==============

*TF-Slim PNASNet mobile classifier*

Operations
^^^^^^^^^^

evaluate
--------

*Evaluate a trained model*

Flags
`````

**batch-size**
 *Number of examples in each evaluated batch (100)*

**eval-batches**
 *Number of batches to evaluate (all available)*

**step**
 *Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze
-----------------

*Export an inference graph with checkpoint weights*

Flags
`````

**step**
 *Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune
--------

*Finetune a trained model*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.0001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights
 (4e-05)*

label
-----

*Classify an image using a trained model*

Flags
`````

**image**
 *Path to image to classify (required)*

tflite
------

*Generate a TFLite file from a frozen graph*

Flags
`````

**output-format**
 *TF Lite output format (tflite)

 Choices:
 tflite
 graphviz_dot

 *

**quantized**
 *Whether or not output arrays are quantized (no)

 Choices:
 yes
 no

 *

**quantized-inputs**
 *Whether or not input arrays are quantized (no)

 Choices:
 yes
 no

 *

train
-----

*Train model from scratch*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*

transfer-learn
--------------

*Train model using transfer learning*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*


resnet-101
==========

*TF-Slim ResNet v1 101 layer classifier*

Operations
^^^^^^^^^^

evaluate
--------

*Evaluate a trained model*

Flags
`````

**batch-size**
 *Number of examples in each evaluated batch (100)*

**eval-batches**
 *Number of batches to evaluate (all available)*

**step**
 *Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze
-----------------

*Export an inference graph with checkpoint weights*

Flags
`````

**step**
 *Checkpoint step to use for the frozen graph (latest
 checkpoint)*

finetune
--------

*Finetune a trained model*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.0001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*

label
-----

*Classify an image using a trained model*

Flags
`````

**image**
 *Path to image to classify (required)*

tflite
------

*Generate a TFLite file from a frozen graph*

Flags
`````

**output-format**
 *TF Lite output format (tflite)

 Choices:
 tflite
 graphviz_dot

 *

**quantized**
 *Whether or not output arrays are quantized (no)

 Choices:
 yes
 no

 *

**quantized-inputs**
 *Whether or not input arrays are quantized (no)

 Choices:
 yes
 no

 *

train
-----

*Train model from scratch*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
 *Number of examples in each training batch (32)*

**clones**
 *Number of model clones (calculated)

 This value is automatically set to the number of available GPUs if
 `auto-scale` is 'yes'.

 When `auto-scale` is 'no' this value can be increased from 1 to train the
 model in parallel on multiple GPUs.

 *

**learning-rate**
 *Initial learning rate (0.001)*

**learning-rate-decay-epochs**
 *Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**
 *Learning rate decay factor (0.94)*

**learning-rate-decay-type**
 *Method used to decay the learning rate (exponential)

 Choices:
 exponential
 fixed
 polynomial

 *

**learning-rate-end**
 *Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**
 *Frequency of log summary saves in seconds (60)*

**log-steps**
 *Frequency of summary logs in steps (100)*

**model-save-seconds**
 *Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**
 *Optimizer used to train (rmsprop)

 Choices:
 adadelta
 adagrad
 adam
 ftrl
 momentum
 rmsprop
 sgd

 *

**preprocessing**
 *Preprocessing to use (default for model)*

**preprocessors**
 *Number of preprocessing threads (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize the
 preprocessor thread count for the system.

 *

**readers**
 *Number of parallel data readers (calculated)

 This value is automatically set to logical CPU count / 2 if `auto-scale`
 is 'yes'.

 When `auto-scale` is 'no' this value can be set to optimize data reader
 performance for the system.

 *

**train-steps**
 *Number of steps to train (train indefinitely)*

**weight-decay**
 *Decay on the model weights (4e-05)*

transfer-learn
--------------

*Train model using transfer learning*

Flags
`````

**auto-scale**
 *Adjust applicable flags for multi-GPU systems (yes)

 Set to 'no' to disable any flag value adjustments.

 When this value is 'yes' (the default) the following flags are adjusted on
 multi-GPU systems:

 - clones

 - learning-rate

 `clones` is set to the number of available GPUs.

 `learning-rate` is adjusted by multiplying its specified value by the
 number of GPUs.

 Flags are not adjusted on single GPU or CPU only systems.

 *

**batch-size**
*Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nresnet-152\n==========\n\n*TF-Slim ResNet v1 152 layer 
classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps 
(100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only 
systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer 
learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the 
system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nresnet-50\n=========\n\n*TF-Slim ResNet v1 50 layer classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays 
(2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from 
scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n 
*\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds 
(600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nresnet-v2-101\n=============\n\n*TF-Slim ResNet v2 101 layer classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n 
*\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to 
image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n 
*Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor 
(0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nresnet-v2-152\n=============\n\n*TF-Slim ResNet v2 152 layer classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n 
*Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data 
readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n 
*Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This 
value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nresnet-v2-50\n============\n\n*TF-Slim ResNet v2 50 layer classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch 
(100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n 
ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of 
available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following 
flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to 
optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nvgg-16\n======\n\n*TF-Slim VGG 16 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n 
*Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - 
learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n 
*\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This 
value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\nvgg-19\n======\n\n*TF-Slim VGG 19 classifier*\n\nOperations\n^^^^^^^^^^\n\nevaluate\n--------\n\n*Evaluate a trained model*\n\nFlags\n`````\n\n**batch-size**\n *Number of examples in each evaluated batch (100)*\n\n**eval-batches**\n *Number of batches to evaluate (all available)*\n\n**step**\n *Checkpoint step to evaluate (latest checkpoint)*\n\nexport-and-freeze\n-----------------\n\n*Export an inference graph with checkpoint weights*\n\nFlags\n`````\n\n**step**\n *Checkpoint step to use for the frozen graph (latest checkpoint)*\n\nfinetune\n--------\n\n*Finetune a trained model*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on 
multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.0001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\nlabel\n-----\n\n*Classify an image using a trained model*\n\nFlags\n`````\n\n**image**\n *Path to image to classify (required)*\n\ntflite\n------\n\n*Generate a TFLite file from a frozen graph*\n\nFlags\n`````\n\n**output-format**\n *TF Lite output format (tflite)\n\n Choices:\n tflite\n graphviz_dot\n\n *\n\n**quantized**\n *Whether or not output arrays are quantized (no)\n\n Choices:\n yes\n no\n\n 
*\n\n**quantized-inputs**\n *Whether or not input arrays are quantized (no)\n\n Choices:\n yes\n no\n\n *\n\ntrain\n-----\n\n*Train model from scratch*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n *Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 
if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\ntransfer-learn\n--------------\n\n*Train model using transfer learning*\n\nFlags\n`````\n\n**auto-scale**\n *Adjust applicable flags for multi-GPU systems (yes)\n\n Set to 'no' to disable any flag value adjustments.\n\n When this value is 'yes' (the default) the following flags are adjusted on\n multi-GPU systems:\n\n - clones\n\n - learning-rate\n\n `clones` is set to the number of available GPUs.\n\n `learning-rate` is adjusted by multiplying its specified value by the\n number of GPUs.\n\n Flags are not adjusted on single GPU or CPU only systems.\n\n *\n\n**batch-size**\n *Number of examples in each training batch (32)*\n\n**clones**\n *Number of model clones (calculated)\n\n This value is automatically set to the number of available GPUs if `auto-\n scale` is 'yes'.\n\n When `auto-scale` is 'no' this value can be increased from 1 to train the\n model in parallel on multiple GPUs.\n\n *\n\n**learning-rate**\n *Initial learning rate (0.001)*\n\n**learning-rate-decay-epochs**\n *Number of epochs after which learning rate decays (2.0)*\n\n**learning-rate-decay-factor**\n *Learning rate decay factor (0.94)*\n\n**learning-rate-decay-type**\n *Method used to decay the learning rate (exponential)\n\n Choices:\n exponential\n fixed\n polynomial\n\n *\n\n**learning-rate-end**\n *Minimal learning rate used by polynomial learning rate decay (0.0001)*\n\n**log-save-seconds**\n *Frequency of log summary saves in seconds (60)*\n\n**log-steps**\n 
*Frequency of summary logs in steps (100)*\n\n**model-save-seconds**\n *Frequency of model saves (checkpoints) in seconds (600)*\n\n**optimizer**\n *Optimizer used to train (rmsprop)\n\n Choices:\n adadelta\n adagrad\n adam\n ftrl\n momentum\n rmsprop\n sgd\n\n *\n\n**preprocessing**\n *Preprocessing to use (default for model)*\n\n**preprocessors**\n *Number of preprocessing threads (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize the\n preprocessor thread count for the system.\n\n *\n\n**readers**\n *Number of parallel data readers (calculated)\n\n This value is automatically set to logical CPU count / 2 if `auto-scale`\n is 'yes'.\n\n When `auto-scale` is 'no' this value can be set to optimize data reader\n performance for the system.\n\n *\n\n**train-steps**\n *Number of steps to train (train indefinitely)*\n\n**weight-decay**\n *Decay on the model weights (4e-05)*\n\n\n\n", "description_content_type": "", "docs_url": null, "download_url": "", "downloads": { "last_day": -1, "last_month": -1, "last_week": -1 }, "home_page": "https://github.com/guildai/packages/tree/master/gpkg/slim/models", "keywords": "gpkg", "license": "Apache 2.0", "maintainer": "", "maintainer_email": "", "name": "gpkg.slim.models", "package_url": "https://pypi.org/project/gpkg.slim.models/", "platform": "", "project_url": "https://pypi.org/project/gpkg.slim.models/", "project_urls": { "Homepage": "https://github.com/guildai/packages/tree/master/gpkg/slim/models" }, "release_url": "https://pypi.org/project/gpkg.slim.models/0.5.1/", "requires_dist": [ "gpkg.slim", "gpkg.tflite" ], "requires_python": "", "summary": "TF-Slim models (Guild AI)", "version": "0.5.1" }, "last_serial": 4453881, "releases": { "0.5.0": [ { "comment_text": "", "digests": { "md5": "958ad8258b62eb51aa9663983cb23e2c", "sha256": "bc34744927940d2a88998dedc70c3084e674ac7144e1faee7d0ce77f79788304" }, 
"downloads": -1, "filename": "gpkg.slim.models-0.5.0-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "958ad8258b62eb51aa9663983cb23e2c", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 7494, "upload_time": "2018-10-26T15:02:30", "url": "https://files.pythonhosted.org/packages/4b/5f/25f5498d94e3697e200776f3e8a30b45486c3ec867beedfbc0c27ab0db21/gpkg.slim.models-0.5.0-py2.py3-none-any.whl" } ], "0.5.0.dev5": [ { "comment_text": "", "digests": { "md5": "d80b4db038610a37e83d72c2b4fc0702", "sha256": "292e5583098f4fc430bed1f0ee1c668f308c0fc74f57a51e9627bce87b08b6a9" }, "downloads": -1, "filename": "gpkg.slim.models-0.5.0.dev5-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "d80b4db038610a37e83d72c2b4fc0702", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 7474, "upload_time": "2018-10-10T14:16:55", "url": "https://files.pythonhosted.org/packages/14/00/9040820e34dde079baaa3071342e6f424b7ab57c92daf2c6c62cef9b6ac0/gpkg.slim.models-0.5.0.dev5-py2.py3-none-any.whl" } ], "0.5.0.dev6": [ { "comment_text": "", "digests": { "md5": "3590a2b4fb50b9fba36b4f0b17f41363", "sha256": "99fa732cb7229f976a9cb149d6dd662aa4a798c81718a0d61ffe1148b789010f" }, "downloads": -1, "filename": "gpkg.slim.models-0.5.0.dev6-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "3590a2b4fb50b9fba36b4f0b17f41363", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 7594, "upload_time": "2018-10-25T15:54:56", "url": "https://files.pythonhosted.org/packages/84/18/e6d6bb2bdcb3fa68d40397b97660ece255b950ecdc5a60b9819acff7dfff/gpkg.slim.models-0.5.0.dev6-py2.py3-none-any.whl" } ], "0.5.1": [ { "comment_text": "", "digests": { "md5": "0480cf258d67c3ab3cc38e53a6bcb16b", "sha256": "63c29c501fa0f6220175599a79df3bd06ff21bc27aba6c096cac49459afc6b3e" }, "downloads": -1, "filename": "gpkg.slim.models-0.5.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": 
"0480cf258d67c3ab3cc38e53a6bcb16b", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 7625, "upload_time": "2018-11-05T17:28:28", "url": "https://files.pythonhosted.org/packages/6e/17/917a705fd314d6ab01c23ae4d3204ecb41583a72866f641073490a37eb68/gpkg.slim.models-0.5.1-py2.py3-none-any.whl" } ], "0.5.1.dev2": [ { "comment_text": "", "digests": { "md5": "0ff1fc496fe827500426edf8c674d1bc", "sha256": "3bdb93fbf9611f12e462f1acb5594d237965cac66abdf6e80b458b6d4cc3400b" }, "downloads": -1, "filename": "gpkg.slim.models-0.5.1.dev2-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "0ff1fc496fe827500426edf8c674d1bc", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 7719, "upload_time": "2018-11-05T14:50:52", "url": "https://files.pythonhosted.org/packages/78/07/af23f2fe15f47b24045d481736484e2f441d11cde2262acbcb9628652499/gpkg.slim.models-0.5.1.dev2-py2.py3-none-any.whl" } ] }, "urls": [ { "comment_text": "", "digests": { "md5": "0480cf258d67c3ab3cc38e53a6bcb16b", "sha256": "63c29c501fa0f6220175599a79df3bd06ff21bc27aba6c096cac49459afc6b3e" }, "downloads": -1, "filename": "gpkg.slim.models-0.5.1-py2.py3-none-any.whl", "has_sig": false, "md5_digest": "0480cf258d67c3ab3cc38e53a6bcb16b", "packagetype": "bdist_wheel", "python_version": "py2.py3", "requires_python": null, "size": 7625, "upload_time": "2018-11-05T17:28:28", "url": "https://files.pythonhosted.org/packages/6e/17/917a705fd314d6ab01c23ae4d3204ecb41583a72866f641073490a37eb68/gpkg.slim.models-0.5.1-py2.py3-none-any.whl" } ] }