SpikeForest is an open-source website for benchmarking spike sorting algorithms. The front end is under development here.
Overview of the system -- in progress
- Overview of SpikeInterface, SpikeWidgets, and SpikeToolkit -- live notebook
- Running MountainSort directly from python -- live notebook
- Overview of KBucket and SpikeForest -- live notebook
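The notebooks above build on the extractor concept from SpikeInterface: a recording extractor exposes raw traces through a uniform interface, so sorters and widgets do not need to know the underlying file format. A minimal illustrative sketch of that pattern follows; the class and method names mirror the SpikeInterface concepts but are not the library's actual API.

```python
class InMemoryRecordingExtractor:
    """Illustrative extractor wrapping an in-memory channels x samples
    list of traces. Real SpikeInterface extractors wrap files on disk."""

    def __init__(self, traces, sampling_frequency):
        self._traces = traces  # list of per-channel sample lists
        self._fs = sampling_frequency

    def get_num_channels(self):
        return len(self._traces)

    def get_num_frames(self):
        return len(self._traces[0])

    def get_sampling_frequency(self):
        return self._fs

    def get_traces(self, channel_ids=None, start_frame=0, end_frame=None):
        # Return the requested slice of traces, one list per channel
        if channel_ids is None:
            channel_ids = range(len(self._traces))
        if end_frame is None:
            end_frame = self.get_num_frames()
        return [self._traces[ch][start_frame:end_frame] for ch in channel_ids]


# A sorter or widget only needs this interface, not the storage details:
recording = InMemoryRecordingExtractor([[0.0] * 30000, [0.0] * 30000], 30000.0)
print(recording.get_num_channels())        # 2
print(recording.get_sampling_frequency())  # 30000.0
```

Because every extractor exposes the same methods, the same sorting or widget code runs unchanged on any supported recording format.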
Assembling the recordings and studies
The following notebook is used to assemble the recordings and studies that populate the website and to provide the input to the batch processing.
Processing is organized into batches that are stored on KBucket. An online notebook is used to assemble the batches, and the batch processing can then be launched on any computer, for example a compute cluster. The scripts are written so that parallelization is achieved by running the same script simultaneously on many different cores or compute nodes. The pairio database is used to coordinate the jobs so that each running script performs a different subset of the sorting jobs.
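The coordination pattern described above can be sketched as follows: many identical workers iterate over the same job list, but a shared key-value store (pairio in SpikeForest; a plain dict stands in here) records which worker claimed each job, so every job is sorted exactly once. All names below are illustrative, not the actual SpikeForest or pairio API.

```python
claims = {}  # stand-in for the shared pairio database

def try_claim(job_id, worker_id):
    # dict.setdefault is an atomic claim within one process; pairio
    # provides the equivalent check-and-set across many machines
    return claims.setdefault(job_id, worker_id) == worker_id

def run_worker(worker_id, jobs):
    # Every worker runs this same script over the full job list,
    # but only processes the jobs it successfully claimed
    done = []
    for job in jobs:
        if try_claim(job, worker_id):
            done.append(job)  # ... run the sorting job here ...
    return done

jobs = ["sort-rec1-ms4", "sort-rec2-ms4", "sort-rec3-ms4"]
a = run_worker("worker-a", jobs)
b = run_worker("worker-b", jobs)
print(sorted(a + b) == sorted(jobs))  # True: each job done exactly once
```

The key design point is that workers never communicate with each other directly; adding more compute nodes just means launching the same script more times.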
Exploring studies and processing results
The studies and sorting results of SpikeForest can be browsed and explored from any Python notebook using the SpikeForest Python API. An example of this is found in the notebook below.
The source code for this API is here (TODO: document this API).
Here's a notebook for plotting accuracy vs SNR for various algorithms:
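The shape of such a plot can be sketched with made-up result records; the actual records come from the SpikeForest API, and the field names used here are assumptions for illustration only.

```python
# Hypothetical per-unit results: one record per (sorter, ground-truth unit)
results = [
    {"sorter": "MountainSort4", "snr": 3.1, "accuracy": 0.62},
    {"sorter": "MountainSort4", "snr": 8.4, "accuracy": 0.91},
    {"sorter": "MountainSort4", "snr": 15.0, "accuracy": 0.98},
    {"sorter": "SpyKING CIRCUS", "snr": 3.1, "accuracy": 0.55},
    {"sorter": "SpyKING CIRCUS", "snr": 8.4, "accuracy": 0.88},
    {"sorter": "SpyKING CIRCUS", "snr": 15.0, "accuracy": 0.97},
]

# Group (snr, accuracy) points per sorter, sorted by SNR for plotting
by_sorter = {}
for r in results:
    by_sorter.setdefault(r["sorter"], []).append((r["snr"], r["accuracy"]))
for pts in by_sorter.values():
    pts.sort()

# Plotting is then one curve per algorithm, e.g. with matplotlib:
#   import matplotlib.pyplot as plt
#   for sorter, pts in by_sorter.items():
#       plt.plot([p[0] for p in pts], [p[1] for p in pts], "o-", label=sorter)
#   plt.xlabel("SNR"); plt.ylabel("Accuracy"); plt.legend(); plt.show()

print(sorted(by_sorter))  # ['MountainSort4', 'SpyKING CIRCUS']
```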
TODO: this section needs to be expanded to describe how to load spike sorting results
More technical info
SpikeForest processing notebooks
Note: this section needs to be revised -- the pipeline has been overhauled, and updated docs are being assembled above.
- Step 1: Assemble the studies and datasets -- live notebook
- Step 2: Process the datasets -- live notebook -- updated 3 Nov 2018 -- see the notes in the .ipynb discussing parallelization
- Step 3: Sort the datasets -- live notebook -- added 6 Nov 2018
- Step 4: Process sorting results
- Step 5: Compare with ground truth
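The five steps above form one linear pipeline, which can be sketched as below. The function bodies are placeholders standing in for the corresponding notebooks, not actual SpikeForest functions, and the record fields and tolerance are assumptions.

```python
def assemble_studies():
    # Step 1: gather recordings with known (simulated) ground truth
    return [{"study": "synth_A", "recording": "rec1",
             "ground_truth": [100, 500, 900]}]  # spike times in samples

def process_dataset(ds):
    # Step 2: e.g. filtering and format conversion
    ds["prepared"] = True
    return ds

def sort_dataset(ds):
    # Step 3: stand-in for running a sorter such as MountainSort
    ds["sorted_times"] = [101, 499, 902]
    return ds

def process_sorting_results(ds):
    # Step 4: summarize the sorter output
    ds["n_events"] = len(ds["sorted_times"])
    return ds

def compare_with_ground_truth(ds, tol=5):
    # Step 5: count ground-truth spikes matched within a tolerance
    gt, found = ds["ground_truth"], ds["sorted_times"]
    matched = sum(any(abs(g - f) <= tol for f in found) for g in gt)
    ds["accuracy"] = matched / len(gt)
    return ds

results = [compare_with_ground_truth(
               process_sorting_results(
                   sort_dataset(process_dataset(ds))))
           for ds in assemble_studies()]
print(results[0]["accuracy"])  # 1.0
```

In the real pipeline, steps 2-5 are the batch jobs distributed across compute nodes, while step 1 runs once in the assembly notebook.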