{ "info": { "author": "Philippe Remy", "author_email": "", "bugtrack_url": null, "classifiers": [], "description": "# Keras TCN\n\n\n[](https://pepy.tech/project/keras-tcn)\n```bash\npip install keras-tcn\n```\n\n*Keras Temporal Convolutional Network*\n\n * [Keras TCN](#keras-tcn)\n * [Why Temporal Convolutional Network?](#why-temporal-convolutional-network)\n * [API](#api)\n * [Arguments](#arguments)\n * [Input shape](#input-shape)\n * [Output shape](#output-shape)\n * [Supported task types](#supported-task-types)\n * [Receptive field](#receptive-field)\n * [Non-causal TCN](#non-causal-tcn)\n * [Installation (Python 3)](#installation-python-3)\n * [Run](#run)\n * [Tasks](#tasks)\n * [Adding Task](#adding-task)\n * [Explanation](#explanation)\n * [Implementation results](#implementation-results)\n * [Copy Memory Task](#copy-memory-task)\n * [Explanation](#explanation-1)\n * [Implementation results (first epochs)](#implementation-results-first-epochs)\n * [Sequential MNIST](#sequential-mnist)\n * [Explanation](#explanation-2)\n * [Implementation results](#implementation-results-1)\n * [References](#references)\n\n## Why Temporal Convolutional Network?\n\n- TCNs exhibit longer memory than recurrent architectures with the same capacity.\n- TCNs consistently outperform LSTM/GRU architectures on a wide range of tasks (Seq. MNIST, Adding Problem, Copy Memory, Word-level PTB...).\n- Parallelism, flexible receptive field size, stable gradients, low memory requirements for training, variable-length inputs...\n\n
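The dilated causal convolution that TCNs stack layer by layer can be sketched in plain NumPy. This is a minimal illustration only, not the library's implementation (keras-tcn builds on Keras `Conv1D` layers with causal padding); the function name and explicit zero-padding are ours:

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution: the output at time t sees only x[<= t].

    Minimal sketch of the operation a TCN stacks with growing dilations;
    the function name and padding scheme are illustrative assumptions.
    """
    k = len(w)
    pad = dilation * (k - 1)  # left-pad so no future values leak in
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[t + dilation * j] for j in range(k))
        for t in range(len(x))
    ])

# Each layer with dilation d widens the receptive field by (k - 1) * d.
y = dilated_causal_conv1d([1.0, 2.0, 3.0, 4.0], w=[1.0, 1.0], dilation=2)
print(y)  # [1. 2. 4. 6.]
```

Stacking such layers with dilations 1, 2, 4, 8 is what the figures below visualize: the receptive field doubles at each level while the layer count grows only logarithmically in the sequence length.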
*Visualization of a stack of dilated causal convolutional layers (WaveNet, 2016)*

*ks = 2, dilations = [1, 2, 4, 8], 1 block*

*ks = 2, dilations = [1, 2, 4, 8], 2 blocks*

*ks = 2, dilations = [1, 2, 4, 8], 3 blocks*

*Non-Causal TCN - ks = 3, dilations = [1, 2, 4, 8], 1 block*

*Adding Problem Task*

*Copy Memory Task*

*Sequential MNIST*
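The `ks`, `dilations`, and block counts in the captions above fix the network's receptive field. A small helper, assuming the formula commonly quoted for keras-tcn in which each residual block applies two dilated causal convolutions (the helper name and the exact block layout are assumptions, not taken from the library):

```python
def tcn_receptive_field(kernel_size, dilations, nb_stacks=1):
    # Factor of 2: each residual block is assumed to contain two dilated
    # causal convolutions, so each block contributes twice.
    return 1 + 2 * (kernel_size - 1) * nb_stacks * sum(dilations)

# ks = 2, dilations = [1, 2, 4, 8], as in the captions above:
print(tcn_receptive_field(2, [1, 2, 4, 8], nb_stacks=1))  # 31
print(tcn_receptive_field(2, [1, 2, 4, 8], nb_stacks=2))  # 61
```

Doubling the dilations list (e.g. extending it to [1, 2, ..., 16]) roughly doubles the receptive field at the cost of a single extra block, which is the scaling advantage the figures illustrate.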