
Notebook: Example NAPS usage

In this notebook we’ll install NAPS, pull the example data from the GitHub repository, and run naps-track against it.

This notebook is particularly useful in combination with SLEAP’s example notebook on remote training and inference, available in the SLEAP documentation.

Install NAPS

!pip install git+https://github.com/kocherlab/naps
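
If the install succeeded, the naps-track command-line tool should now be available; printing its help text is a quick check (assuming the CLI exposes the standard --help flag).

!naps-track --help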

NOTE

Some problems may occur when installing opencv because of a conflict between SLEAP and NAPS. To fix this, run the cell below.
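
The cell below is one typical way to resolve such a conflict, not an official NAPS recipe: remove whichever OpenCV wheels are installed and reinstall the contrib build, which ships the ArUco module NAPS needs. Adjust the package list to match what pip list reports.

# One possible fix (assumption): clear out conflicting OpenCV wheels and
# reinstall the contrib build, which includes cv2.aruco.
!pip uninstall -y opencv-python opencv-python-headless opencv-contrib-python
!pip install opencv-contrib-python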


# If you have a model and want to do the inference on Colab, this can be done quite directly! Just upload your model and run inference as below.
# You can also take advantage of the GPU accessibility of Colab to train as well. Look to the SLEAP tutorials for more info.
# The models used for these videos can be found at https://doi.org/10.34770/6t6b-9545

# !sleap-track example.mp4 -o example-testing.slp -m naps_data/sleap-models/centroid -m naps_data/sleap-models/centered_instance --verbosity json --batch_size 1 --tracking.tracker simple --tracking.similarity iou --tracking.post_connect_single_breaks 1 --tracking.pre_cull_to_target 50 --tracking.target_instance_count 50 --tracking.clean_instance_count 50 --gpu 0

Download sample training data into Colab

Let’s download a sample dataset from the NAPS repository.

!wget https://github.com/kocherlab/naps/raw/main/docs/notebooks/example_data/example.slp
!wget https://github.com/kocherlab/naps/raw/main/docs/notebooks/example_data/example.analysis.h5
!wget https://github.com/kocherlab/naps/raw/main/docs/notebooks/example_data/example.mp4
!ls -lht

NAPS tracking

Now let’s run naps-track on these files. We’ve adjusted a few parameters here (the ArUco detector settings and the rolling-window size) to produce cleaner tracks.

!naps-track --slp-path example.slp --video-path example.mp4 --tag-node-name tag --start-frame 0 --end-frame 1200 --aruco-marker-set DICT_5X5_50 --aruco-crop-size 50 --output-path example-naps.slp --aruco-error-correction-rate 0.6 --aruco-adaptive-thresh-constant 7 --aruco-adaptive-thresh-win-size-max 23 --aruco-adaptive-thresh-win-size-step 10 --aruco-adaptive-thresh-win-size-min 3 --half-rolling-window-size 20
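
The --aruco-* flags above map onto OpenCV’s ArUco detector parameters. As an optional sanity check, the sketch below (assuming opencv-contrib-python < 4.7; newer OpenCV versions replace these calls with the cv2.aruco.ArucoDetector API) detects tags in the first frame of the video using the same dictionary and threshold settings:

import cv2

# Grab the first frame of the example video.
cap = cv2.VideoCapture("example.mp4")
ok, frame = cap.read()
cap.release()
assert ok, "Could not read a frame from example.mp4"

# Configure the detector with the same values passed to naps-track.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_50)
params = cv2.aruco.DetectorParameters_create()
params.errorCorrectionRate = 0.6
params.adaptiveThreshConstant = 7
params.adaptiveThreshWinSizeMin = 3
params.adaptiveThreshWinSizeMax = 23
params.adaptiveThreshWinSizeStep = 10

corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
print(f"Detected {0 if ids is None else len(ids)} ArUco tags in the first frame")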

Download

Now we can just download the output! This pulls the video, the output file, and the original project.

# Zip the video and output
!zip -0 -r naps_output.zip example.mp4 example.slp example-naps.slp

# Download
from google.colab import files
files.download("/content/naps_output.zip")

If you are not using Chrome, you may get an error here. If that happens, you can still download the files from the “Files” tab in the left side panel.

After NAPS

SLEAP GUI

To view the tracks, open SLEAP (sleap-label) and load the resulting SLEAP file directly. The track names will correspond to the ArUco tag IDs. If you want to remove individuals without identified tags, simply delete all instances that are not assigned to a track (using the custom instance delete option).
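
If you would rather do that filtering programmatically, the sketch below uses the SLEAP Python API (an assumption for this environment: sleap is importable here, and instances without an identified tag carry no track assignment) to keep only tracked instances; the output filename is just an example. Afterwards, we convert the NAPS output to an analysis HDF5 file and take a look at its contents.

import sleap

# Keep only instances that NAPS assigned to a track (i.e. to an ArUco identity);
# everything else is dropped.
labels = sleap.load_file("example-naps.slp")
for lf in labels.labeled_frames:
    lf.instances = [inst for inst in lf.instances if inst.track is not None]
labels.save("example-naps-tracked-only.slp")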

!sleap-convert example-naps.slp -o example-naps.analysis.h5 --format analysis
import h5py
import numpy as np

filename = "example-naps.analysis.h5"

with h5py.File(filename, "r") as f:
    dset_names = list(f.keys())
    locations = f["tracks"][:].T
    track_names = [n.decode() for n in f["track_names"][:]]
    node_names = [n.decode() for n in f["node_names"][:]]

print("===filename===")
print(filename)
print()

print("===HDF5 datasets===")
print(dset_names)
print()

print("===locations data shape===")
print(locations.shape)
print()

print("===first 5 track names===")
for i, name in enumerate(track_names[0:5]):
    print(f"{i}: {name}")
print()

print("===nodes===")
for i, name in enumerate(node_names):
    print(f"{i}: {name}")
print()
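
With the data loaded, one quick numerical check is to pull out a single node’s trajectory for one track. The minimal sketch below assumes the transposed locations array has shape (frames, nodes, 2, tracks), as printed above, and that the skeleton includes the tag node passed to naps-track.

import matplotlib.pyplot as plt

# x/y positions of the "tag" node for the first track; NaNs mark frames with no detection.
tag_idx = node_names.index("tag")
tag_xy = locations[:, tag_idx, :, 0]  # shape: (frames, 2)

plt.plot(tag_xy[:, 0], label="x")
plt.plot(tag_xy[:, 1], label="y")
plt.xlabel("frame")
plt.ylabel("position (px)")
plt.legend()
plt.title(f"Track {track_names[0]}: tag node")
plt.show()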

Rendering with sleap-render

Now we can render the result as a video using sleap-render. We’ll use --frames to subset the frames to render, --crop to crop the output, and -o to specify the output file name.

!sleap-render example-naps.slp --frames 550-650 --crop 3664,1024 -o example-naps-tracks.mp4
from IPython.display import HTML
from base64 import b64encode

def show_video(video_path, video_width=600):
    # Embed the rendered mp4 in the notebook as a base64-encoded data URL.
    video_file = open(video_path, "rb").read()
    video_url = f"data:video/mp4;base64,{b64encode(video_file).decode()}"
    return HTML(f"""<video width={video_width} controls><source src="{video_url}"></video>""")

show_video("example-naps-tracks.mp4")