{ "info": { "author": "Nikesh Bajaj", "author_email": "bajaj.nikey@gmail.com", "bugtrack_url": null, "classifiers": [ "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3" ], "description": "# Signal Processing toolkit\n\n### Links: **[Github](https://github.com/Nikeshbajaj/spkit)** | **[PyPi - project](https://pypi.org/project/spkit/)**\n\n\n-----\n## Table of contents\n- [**Installation**](#installation)\n- [**Signal Processing & ML function list**](#functions-list)\n- [**Examples**](#examples)\n - [**Information Theory**](#information-theory)\n - [**Machine Learning**](#machine-learning)\n - [**ICA**](#ica)\n - [**LFSR**](#lfsr)\n-----\n\n\n## Installation\n\n**Requirements**: numpy, matplotlib, scipy, scikit-learn\n\n### With pip\n\n```\npip install spkit\n```\n\n### Build from the source\nDownload the repository or clone it with git, then `cd` into the directory and build it from source with\n\n```\npython setup.py install\n```\n\n## Functions list\n#### Signal Processing Techniques\n**Information Theory functions** for real-valued signals\n* Entropy: Shannon entropy, R\u00e9nyi entropy of order \u03b1, Collision entropy\n* Joint entropy\n* Conditional entropy\n* Mutual Information\n* Cross entropy\n* Kullback\u2013Leibler divergence\n* Computation of optimal histogram bin size using the Freedman\u2013Diaconis rule\n* Plot histogram with optimal bin size\n\n**Matrix Decomposition**\n* SVD\n* ICA using InfoMax, Extended-InfoMax, FastICA & **Picard**\n\n**Linear Feedback Shift Register**\n* pylfsr\n\n**Continuous Wavelet Transform** and other functions coming soon...\n\n#### Machine Learning models - with visualizations\n* Logistic Regression\n* Naive Bayes\n* Decision Trees\n* DeepNet (to be updated)\n\n\n# Examples\n## Information Theory\n### [View in notebook](https://nbviewer.jupyter.org/github/Nikeshbajaj/spkit/blob/master/notebooks/1.1_Entropy_Example.ipynb)\n\n```\nimport numpy as np\nimport matplotlib.pyplot as 
plt\nimport spkit as sp\n\n# uniform and Gaussian random signals\nx = np.random.rand(10000)\ny = np.random.randn(10000)\n\n# Shannon entropy\nH_x = sp.entropy(x,alpha=1)\nH_y = sp.entropy(y,alpha=1)\n\n# R\u00e9nyi entropy (of order 2)\nHr_x = sp.entropy(x,alpha=2)\nHr_y = sp.entropy(y,alpha=2)\n\n# Joint entropy\nH_xy = sp.entropy_joint(x,y)\n\n# Conditional entropies\nH_x1y = sp.entropy_cond(x,y)\nH_y1x = sp.entropy_cond(y,x)\n\n# Mutual information\nI_xy = sp.mutual_Info(x,y)\n\n# Cross entropy\nH_xy_cross = sp.entropy_cross(x,y)\n\n# Kullback\u2013Leibler divergence\nD_xy = sp.entropy_kld(x,y)\n\n\nprint('Shannon entropy')\nprint('Entropy of x: H(x) = ',H_x)\nprint('Entropy of y: H(y) = ',H_y)\nprint('-')\nprint('R\u00e9nyi entropy')\nprint('Entropy of x: H(x) = ',Hr_x)\nprint('Entropy of y: H(y) = ',Hr_y)\nprint('-')\nprint('Mutual Information I(x,y) = ',I_xy)\nprint('Joint Entropy H(x,y) = ',H_xy)\nprint('Conditional Entropy H(x|y) = ',H_x1y)\nprint('Conditional Entropy H(y|x) = ',H_y1x)\nprint('-')\nprint('Cross Entropy H(x,y) = ',H_xy_cross)\nprint('Kullback\u2013Leibler divergence Dkl(x,y) = ',D_xy)\n\n\n# histograms with optimal bin size\nplt.figure(figsize=(12,5))\nplt.subplot(121)\nsp.HistPlot(x,show=False)\n\nplt.subplot(122)\nsp.HistPlot(y,show=False)\nplt.show()\n```\n\n## ICA\n### [View in notebook](https://nbviewer.jupyter.org/github/Nikeshbajaj/spkit/blob/master/notebooks/1.2_ICA_Example.ipynb)\n```\nimport numpy as np\nfrom spkit import ICA\nfrom spkit.data import load_data\n\nX,ch_names = load_data.eegSample()\n\n# a two-second segment of the EEG sample (sampling rate 128 Hz)\nx = X[128*10:128*12,:]\nt = np.arange(x.shape[0])/128.0\n\n# separate sources with each of the four ICA methods\nica = ICA(n_components=14,method='fastica')\nica.fit(x.T)\ns1 = ica.transform(x.T)\n\nica = ICA(n_components=14,method='infomax')\nica.fit(x.T)\ns2 = ica.transform(x.T)\n\nica = ICA(n_components=14,method='picard')\nica.fit(x.T)\ns3 = ica.transform(x.T)\n\nica = ICA(n_components=14,method='extended-infomax')\nica.fit(x.T)\ns4 = ica.transform(x.T)\n```\n\n## Machine Learning\n### [Logistic Regression](https://nbviewer.jupyter.org/github/Nikeshbajaj/spkit/blob/master/notebooks/2.1_LogisticRegression_examples.ipynb) - *View in notebook*\n


