{ "info": { "author": "Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai and Yoav Artzi", "author_email": "tao@asapp.com", "bugtrack_url": null, "classifiers": [], "description": "\n## News\nSRU++, a new SRU variant, is released. [[tech report](https://arxiv.org/pdf/2102.12459.pdf)] [[blog](https://www.asapp.com/blog/reducing-the-high-cost-of-training-nlp-models-with-sru/)]\n\nThe experimental code and SRU++ implementation are available on [the dev branch](https://github.com/asappresearch/sru/tree/3.0.0-dev/experiments/srupp_experiments), which will be merged into master later.\n\n## About\n\n**SRU** is a recurrent unit that can run over 10 times faster than cuDNN LSTM, with no loss of accuracy across the many tasks it has been tested on. \n
\n
\n*Average processing time of LSTM, conv2d and SRU, tested on a GTX 1070.*
\n
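The speedup comes from SRU's elementwise recurrence: the expensive matrix multiplications depend only on the inputs, so they can be batched across all time steps, leaving only cheap pointwise operations in the sequential loop. A minimal pure-Python sketch of the per-step recurrence from the SRU paper (scalar hidden state for clarity; the parameter names are illustrative and not the package API):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sru_step(x_t, c_prev, w, w_f, v_f, b_f, w_r, v_r, b_r):
    """One SRU step for a scalar hidden state (illustrative names).

    The projections w * x_t, w_f * x_t, and w_r * x_t depend only on the
    input, so the real implementation computes them for all time steps in
    one batched matrix multiplication; only the elementwise updates below
    remain sequential.
    """
    f_t = sigmoid(w_f * x_t + v_f * c_prev + b_f)   # forget gate
    c_t = f_t * c_prev + (1.0 - f_t) * (w * x_t)    # light recurrence
    r_t = sigmoid(w_r * x_t + v_r * c_prev + b_r)   # reset (highway) gate
    h_t = r_t * c_t + (1.0 - r_t) * x_t             # highway connection
    return h_t, c_t

# Run a toy sequence through the cell with arbitrary weights.
c = 0.0
for x in [0.5, -1.0, 2.0]:
    h, c = sru_step(x, c, w=1.0, w_f=0.5, v_f=0.1, b_f=0.0,
                    w_r=0.5, v_r=0.1, b_r=0.0)
```

Because the state update uses only elementwise products, each hidden dimension evolves independently, which is what makes the recurrence easy to parallelize on a GPU.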