From workflow to widget: customizing napari#
The napari application uses a backend called Qt for its graphical user interface (GUI). A key feature of this framework is the use of widgets, which are composable, basic UI elements. napari not only uses these for its own UI, but also enables you to add your own as dockable
elements. In fact, the layer controls, layer list, and napari console are all such dockable containers of Qt widgets.
There are a number of ways to create your own widgets; you can see an in-depth overview in the napari documentation. By far the simplest is to rely on the fact that napari supports magicgui
, a Python library for quick and easy building of GUIs. A key feature of magicgui
is autogeneration of GUIs from functions and dataclasses, by mapping Python type hints to widgets.
In this module, we will implement elements of the exploratory spot detection workflow as functions and then use the magicgui.magicgui
decorator on those functions to get compound widgets that make exploring the parameters in the GUI easier. For a nice overview of the magicgui
decorators, see the official documentation.
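To make the core idea concrete before we start, here is a GUI-free sketch of the trick magicgui relies on: Python exposes a function's type hints at runtime, so a library can walk them and pick a widget class for each parameter. The mapping and helper below (TYPE_TO_WIDGET, widgets_for) are purely illustrative, not magicgui's real registry:

```python
from typing import get_type_hints

# Illustrative mapping only -- magicgui's actual registry is far richer.
TYPE_TO_WIDGET = {int: "SpinBox", float: "FloatSpinBox", str: "LineEdit", bool: "CheckBox"}

def widgets_for(func):
    """Return an {argument: widget-name} mapping for a function's parameters."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # the return hint does not get an input widget
    return {name: TYPE_TO_WIDGET.get(tp, "Unknown") for name, tp in hints.items()}

def example(path: str, sigma: float = 2, invert: bool = False) -> None:
    pass

print(widgets_for(example))
# {'path': 'LineEdit', 'sigma': 'FloatSpinBox', 'invert': 'CheckBox'}
```

magicgui does exactly this kind of introspection, but instead of strings it instantiates real Qt widgets and wires them together into a compound widget.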
binder
setup#
# this cell is required to run these notebooks on Binder. Make sure that you also have a desktop tab open.
import os
if 'BINDER_SERVICE_HOST' in os.environ:
    os.environ['DISPLAY'] = ':1.0'
Loading data#
Let’s get everything set up. For more information about this analysis, please see the exploratory spot detection notebook.
from skimage import io
nuclei_url = 'https://raw.githubusercontent.com/kevinyamauchi/napari-spot-detection-tutorial/main/data/nuclei_cropped.tif'
nuclei = io.imread(nuclei_url)
spots_url = 'https://raw.githubusercontent.com/kevinyamauchi/napari-spot-detection-tutorial/main/data/spots_cropped.tif'
spots = io.imread(spots_url)
import napari
from napari.utils import nbscreenshot
# create the napari viewer
viewer = napari.Viewer()

# add the nuclei image to the viewer
viewer.add_image(nuclei, colormap='I Forest', blending='minimum')

# add the spots image to the viewer
viewer.add_image(spots, colormap='I Orange', blending='minimum')
<Image layer 'spots' at 0x7efbf6bfbb50>
A basic filtering function#
Now let’s write a function that takes an array and a sigma
value and performs the
high-pass operation. For more information, please see the exploratory spot detection notebook.
import numpy as np
from scipy import ndimage as ndi
def gaussian_high_pass(image, sigma):
    low_pass = ndi.gaussian_filter(image, sigma)
    high_passed_im = (image - low_pass).clip(0)
    return high_passed_im
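As a quick, self-contained sanity check (a sketch using synthetic data, not the tutorial's images): a spatially flat image has no high-frequency content, so the filter should return (numerically) zero everywhere, while a single bright pixel should survive the filter:

```python
import numpy as np
from scipy import ndimage as ndi

def gaussian_high_pass(image, sigma):
    # subtract the gaussian-blurred (low-pass) image, clipping negatives to 0
    low_pass = ndi.gaussian_filter(image, sigma)
    return (image - low_pass).clip(0)

# a constant image is pure low-frequency content: the result is ~0 everywhere
flat = np.full((16, 16), 7.0)
print(gaussian_high_pass(flat, 2).max() < 1e-6)  # True

# a single bright pixel is high-frequency content: it survives the filter
spiked = flat.copy()
spiked[8, 8] += 10.0
print(gaussian_high_pass(spiked, 2)[8, 8] > 1.0)  # True
```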
We can test our function in a similar fashion as before:
high_passed_spots = gaussian_high_pass(spots, 2)
viewer.add_image(high_passed_spots, colormap="I Blue", blending="minimum")
<Image layer 'high_passed_spots' at 0x7efbf1706a50>
nbscreenshot(viewer)
Obtaining a basic widget using the @magicgui
decorator#
Now let’s modify the function slightly, by providing type annotations and a docstring, to
leverage the napari magicgui
integration.
Tip
A brief note about type hints:
Type hints are not enforced at runtime, but they can still raise NameError
exceptions if not defined or imported.
To avoid that, we are putting the napari types in quotes to make them “forward references”, because we have
not imported them. Alternatively, we could have imported them. A third option is to import:
from __future__ import annotations
This would permit us to drop the quotes from the type hints.
For more information, see the official Python documentation for:
type hints in Python, forward references, and annotations.
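A minimal, napari-free illustration of why this works: with from __future__ import annotations (or quoted hints), annotations are stored as strings and never evaluated at definition time, so referencing a not-yet-imported name raises no NameError. The name SomeUnimportedType below is deliberately undefined:

```python
from __future__ import annotations

# "SomeUnimportedType" is never imported or defined anywhere, yet defining
# the function raises no NameError: the annotation is stored as a plain
# string, not evaluated.
def process(data: SomeUnimportedType) -> SomeUnimportedType:
    return data

print(process.__annotations__)
# {'data': 'SomeUnimportedType', 'return': 'SomeUnimportedType'}
print(process(42))  # 42 -- the function itself is unaffected
```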
from magicgui import magicgui

@magicgui
def gaussian_high_pass(
    image: "napari.types.ImageData", sigma: float = 2
) -> "napari.types.ImageData":
    """Apply a gaussian high pass filter to an image.

    Parameters
    ----------
    image : np.ndarray
        The image to be filtered.
    sigma : float
        The sigma (width) of the gaussian filter to be applied.
        The default value is 2.

    Returns
    -------
    high_passed_im : np.ndarray
        The image with the high pass filter applied
    """
    low_pass = ndi.gaussian_filter(image, sigma)
    high_passed_im = (image - low_pass).clip(0)
    return high_passed_im
We now have our magicgui-decorated function, annotated with the napari types.
The object gaussian_high_pass
is both a (compound) widget and a callable function.
Let’s add it to the viewer.
viewer.window.add_dock_widget(gaussian_high_pass)
<napari._qt.widgets.qt_viewer_dock_widget.QtViewerDockWidget at 0x7efc4cdb3a30>
nbscreenshot(viewer)
Notice that because we told magicgui
that our function takes not just any numpy array, but
specifically ImageData
—the data of an Image layer—and that it also returns ImageData, magicgui
generated a UI widget for selecting an Image layer. If you add another layer type, it won’t show
up in the dropdown!
Press the Run
button and you will see that a new Image layer is added with the results of our
function—again thanks to autogeneration from magicgui
.
# we'll call the widget to simulate clicking `Run`
gaussian_high_pass(viewer.layers['spots'].data)
Show code cell output
array([[0.00084478, 0.00347908, 0.00447733, ..., 0.00014444, 0. ,
0. ],
[0.00049922, 0.00303904, 0.00163708, ..., 0.00112132, 0. ,
0. ],
[0. , 0.00024094, 0. , ..., 0.00195232, 0.00035587,
0. ],
...,
[0.0015178 , 0.0008351 , 0.00010436, ..., 0. , 0.00015427,
0.00047611],
[0.00136203, 0.00138009, 0.00107976, ..., 0. , 0. ,
0.00208043],
[0.00215613, 0.00369759, 0.0032546 , ..., 0. , 0.0006618 ,
0.00307453]], dtype=float32)
Note that we are just returning ImageData
, so there is no information passed about colormaps, blending, etc. If we want to specify that, we would need to annotate as LayerDataTuple
. (We will do this in the next example.)
For now you will need to manually or programmatically set any colormap/blending settings. (Let’s also hide the previous filtering output.)
viewer.layers[-1].blending = "minimum"
viewer.layers[-1].colormap = "I Blue"
viewer.layers['high_passed_spots'].visible = False
nbscreenshot(viewer)
However, if you press Run
again, the data for that layer will be updated in place, so
you can change the sigma
value and see the updated result.
Tip
Hover over the labels image
and sigma
– the names of the parameters we passed to the function.
You should see tooltips with the docstring information! How cool is that?
Our gaussian_high_pass
object is the widget, so we can easily get the value of the current setting:
gaussian_high_pass.sigma.value
2.0
At the same time, gaussian_high_pass
remains a callable function. Let’s call it normally, to check
that the function is still working as expected. Remember, type hints are not enforced by Python at runtime,
so nothing should have changed.
test_output = gaussian_high_pass(spots, 2)
test_output.shape
(492, 494)
This means that if you have a script or module, you can import the function and use it as normal, or use it as a widget in napari.
Let’s make the widget more dynamic and user-friendly by giving magicgui
some extra information.
Let’s ask for a slider for the sigma
parameter, and have the function be auto-called
when the slider is changed.
But first, let’s remove the previous widget.
viewer.window.remove_dock_widget("all")
@magicgui(
    auto_call=True,
    sigma={"widget_type": "FloatSlider", "min": 0, "max": 20}
)
def gaussian_high_pass(
    image: "napari.types.ImageData", sigma: float = 2
) -> "napari.types.ImageData":
    """Apply a gaussian high pass filter to an image.

    Parameters
    ----------
    image : np.ndarray
        The image to be filtered.
    sigma : float
        The sigma (width) of the gaussian filter to be applied.
        The default value is 2.

    Returns
    -------
    high_passed_im : np.ndarray
        The image with the high pass filter applied
    """
    low_pass = ndi.gaussian_filter(image, sigma)
    high_passed_im = (image - low_pass).clip(0)
    return high_passed_im
viewer.window.add_dock_widget(gaussian_high_pass)
<napari._qt.widgets.qt_viewer_dock_widget.QtViewerDockWidget at 0x7efbf0159d80>
nbscreenshot(viewer)
Now you can play with the slider until you get the effect you want in the GUI and then return the value:
gaussian_high_pass.sigma.value
2.0
Or you can set the value:
gaussian_high_pass.sigma.value = 3
nbscreenshot(viewer)
A more complex example#
Finally, let’s make a widget for the full spot detection workflow. First, we need the spot detection function; here it already has basic type hinting and a docstring to get us started:
Note
If you’re curious about the blob_log
function, please refer to the
scikit-image documentation.
from skimage.feature import blob_log
def detect_spots(
    image: np.ndarray,
    high_pass_sigma: float = 2,
    spot_threshold: float = 0.2,
    blob_sigma: float = 2
) -> tuple[np.ndarray, np.ndarray]:
    """Apply a gaussian high pass filter and detect spots in an image.

    Parameters
    ----------
    image : np.ndarray
        The image in which to detect the spots.
    high_pass_sigma : float
        The sigma (width) of the gaussian filter to be applied.
        The default value is 2.
    spot_threshold : float
        The relative threshold to be passed to the blob detector.
        The default value is 0.2.
    blob_sigma : float
        The expected sigma (width) of the spots. This parameter
        is passed to the "max_sigma" parameter of the blob
        detector.

    Returns
    -------
    tuple[np.ndarray, np.ndarray]
        A tuple containing:
        - points_coords: An NxD array with the coordinate for each
          detected spot. N is the number of spots and D is the number
          of dimensions.
        - sizes: An array of size N, where N is the number of detected
          spots, with the diameter of each spot.
    """
    # filter the image layer data
    filtered_spots = gaussian_high_pass(image, high_pass_sigma)

    # detect the spots on the filtered image
    blobs_log = blob_log(
        filtered_spots,
        max_sigma=blob_sigma,
        threshold=None,
        threshold_rel=spot_threshold
    )

    # convert the output of the blob detector to the
    # desired points_coords and sizes arrays
    # (see the docstring for details)
    points_coords = blobs_log[:, :2]
    sizes = 2 * np.sqrt(2) * blobs_log[:, 2]

    return points_coords, sizes
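A quick numpy-only check of the conversion in the last two lines: blob_log returns one row per blob of the form (row, column, sigma), and for a 2D Laplacian-of-Gaussian detector a blob's radius is approximately sqrt(2) * sigma, hence the diameter 2 * sqrt(2) * sigma that we hand to the Points layer. With fake detector output:

```python
import numpy as np

# fake blob_log output: one row per blob, columns are (row, column, sigma)
blobs_log = np.array([
    [10.0, 20.0, 2.0],
    [30.0, 40.0, 3.0],
])

points_coords = blobs_log[:, :2]           # drop the sigma column
sizes = 2 * np.sqrt(2) * blobs_log[:, 2]   # diameter = 2 * radius = 2*sqrt(2)*sigma

print(points_coords.shape)   # (2, 2)
print(np.round(sizes, 2))    # [5.66 8.49]
```

So with the default blob_sigma of 2, each detected spot is drawn with a diameter of about 5.66 pixels.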
We will need to properly annotate this function so that magicgui
can generate the widgets. This time we are again starting from an image layer (its data), but we want
to get back a Points layer with the detected points. We could again return just the layer data using napari.types.PointsData
,
but let’s get a nicer Points layer instead; we will change the return annotation to LayerDataTuple
.
If detect_spots()
returns a LayerDataTuple
, napari will add a new layer to
the viewer using the data in the LayerDataTuple
. Briefly:
The layer data tuple should be:

(layer_data, layer_metadata, layer_type)

- layer_data: the data to be displayed in the new layer (i.e., the points coordinates)
- layer_metadata: the display options for the layer, stored as a dictionary. Some options to consider: symbol, size, face_color
- layer_type: the name of the layer type as a string—in this case 'Points'
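Assembling such a tuple needs nothing from napari itself; it is plain Python. A minimal sketch with made-up coordinates (the metadata keys shown are just the ones we use below; Points layers accept many more):

```python
import numpy as np

# fake coordinates for three detected spots, as (row, column)
points_coords = np.array([[10.0, 12.0], [40.0, 7.0], [25.0, 30.0]])
sizes = np.full(len(points_coords), 5.0)

layer_data_tuple = (
    points_coords,                         # layer_data
    {"size": sizes, "face_color": "red"},  # layer_metadata
    "Points",                              # layer_type
)

# napari would unpack this roughly as:
# viewer.add_points(points_coords, size=sizes, face_color="red")
print(layer_data_tuple[2])  # Points
```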
Tip
For more information on using the LayerDataTuple
type, please see the documentation.
Also, let’s change the image
argument type hint to napari.layers.Image
, so that we can access more
layer properties if we’d like, or more easily set the value programmatically.
# again let's remove the previous widget
viewer.window.remove_dock_widget("all")

from skimage.feature import blob_log

@magicgui
def detect_spots_widget(
    image: "napari.layers.Image",
    high_pass_sigma: float = 2,
    spot_threshold: float = 0.2,
    blob_sigma: float = 2
) -> "napari.types.LayerDataTuple":
    """Apply a gaussian high pass filter and detect spots in an image.

    Parameters
    ----------
    image : napari.layers.Image
        The Image layer in which to detect the spots.
    high_pass_sigma : float
        The sigma (width) of the gaussian filter to be applied.
        The default value is 2.
    spot_threshold : float
        The relative threshold to be passed to the blob detector.
        The default value is 0.2.
    blob_sigma : float
        The expected sigma (width) of the spots. This parameter
        is passed to the "max_sigma" parameter of the blob
        detector.

    Returns
    -------
    napari.types.LayerDataTuple
        A Layer Data Tuple for a Points layer containing
        - the coordinates of the detected points (data)
        - a dictionary of layer metadata (sizes and face_color)
        - the layer type, "Points"
    """
    # filter the image layer data
    filtered_spots = gaussian_high_pass(image.data, high_pass_sigma)

    # detect the spots on the filtered image
    blobs_log = blob_log(
        filtered_spots,
        max_sigma=blob_sigma,
        threshold=None,
        threshold_rel=spot_threshold
    )

    # convert the output of the blob detector to the
    # desired points_coords and sizes arrays
    # (see the docstring for details)
    points_coords = blobs_log[:, :2]
    sizes = 2 * np.sqrt(2) * blobs_log[:, 2]

    return (points_coords, {"size": sizes, "face_color": "red"}, "Points")
viewer.window.add_dock_widget(detect_spots_widget)
<napari._qt.widgets.qt_viewer_dock_widget.QtViewerDockWidget at 0x7efbe9f37eb0>
# let's call the widget/function to simulate pressing run
detect_spots_widget(viewer.layers['spots'])
Show code cell output
(array([[252., 329.],
[220., 275.],
[255., 172.],
[195., 253.],
[204., 278.],
[454., 314.],
[263., 157.],
[458., 30.],
[250., 355.],
[ 10., 268.],
[241., 152.],
[244., 148.],
[252., 151.],
[471., 56.],
[290., 178.],
[433., 315.],
[217., 260.],
[253., 335.],
[202., 258.],
[266., 348.],
[455., 308.],
[449., 318.],
[278., 159.],
[259., 166.],
[272., 327.],
[258., 177.],
[464., 287.],
[218., 57.],
[267., 162.],
[188., 252.],
[433., 306.],
[446., 301.],
[268., 165.],
[201., 287.],
[414., 298.],
[203., 254.],
[239., 162.],
[250., 178.],
[204., 272.],
[210., 268.],
[483., 30.],
[198., 249.],
[475., 43.],
[481., 51.],
[234., 60.],
[ 30., 329.],
[228., 36.],
[234., 78.],
[444., 321.],
[235., 329.],
[478., 369.],
[286., 177.],
[248., 340.],
[216., 282.],
[261., 343.],
[460., 281.],
[400., 299.],
[187., 0.],
[477., 258.],
[484., 397.],
[415., 445.],
[392., 291.],
[237., 335.],
[ 8., 256.],
[208., 265.],
[265., 321.],
[277., 179.],
[438., 317.],
[232., 338.],
[245., 162.],
[476., 269.],
[393., 286.],
[462., 41.],
[451., 305.],
[474., 32.],
[484., 62.],
[242., 163.],
[431., 318.],
[459., 38.],
[479., 260.],
[487., 29.],
[454., 232.],
[205., 0.],
[207., 267.],
[212., 280.],
[454., 296.],
[283., 162.],
[211., 255.],
[470., 30.],
[254., 323.],
[473., 367.],
[213., 262.],
[449., 321.],
[212., 265.],
[484., 46.],
[188., 247.],
[243., 333.],
[468., 274.],
[450., 307.],
[455., 287.],
[466., 261.],
[435., 299.],
[435., 288.],
[211., 272.],
[464., 36.],
[477., 263.],
[438., 286.],
[229., 341.],
[205., 269.],
[482., 38.],
[441., 291.],
[249., 175.],
[443., 314.],
[194., 256.],
[235., 346.],
[218., 277.],
[206., 432.],
[284., 180.],
[484., 53.],
[485., 25.],
[399., 313.],
[436., 326.],
[478., 28.],
[195., 245.],
[ 50., 257.],
[ 15., 272.],
[477., 58.],
[478., 47.],
[397., 303.],
[434., 297.],
[451., 287.],
[210., 282.],
[200., 256.],
[ 16., 257.],
[463., 283.]], dtype=float32),
{'size': array([2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 3.45696645, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 4.3997756 ,
2.82842712, 2.82842712, 2.82842712, 2.82842712, 2.82842712]),
'face_color': 'red'},
'Points')
# let's set the dropdown value for the screenshot
detect_spots_widget.image.value = viewer.layers['spots']

# and let's zoom in a bit
viewer.camera.center = (200, 270)
viewer.camera.zoom = 8
nbscreenshot(viewer)
Tip
In this notebook we used the @magicgui
decorator, which turned our function into both a function
and a widget. Another, similar option is the @magic_factory
decorator. This one does not return a widget instance immediately; instead, it turns our function into a “widget factory function” that can be called to create a widget instance. This can be more convenient in many cases, for example if you are writing a library or package where someone else will be instantiating your widget.
One additional important—and useful!—distinction is that @magic_factory
gains the widget_init
keyword argument, which will be called with the new widget each time the factory function is called.
For more details on the two magicgui
decorators, see the official documentation.
Custom keybindings#
napari has extensive keyboard shortcuts that can be customized in the Preferences/Settings GUI.
However, it also enables you to bind key-press events to custom callback functions. Again, the
napari implementation (bind_key
) is smart: arguments such as the viewer that received the key press
or the currently selected layer will be passed to your function.
Let’s try a simple example: showing the number of Points returned by our detector when we press
a key binding. For this, we want the bind_key
decorator to pass a selected Points layer
as an argument to our function, which will report the number of detected spots. We only want this
to work for Points layers—but any Points layer will do—so we will use the @Points.bind_key
decorator.
from napari.layers import Points
from napari.utils.notifications import show_info
@Points.bind_key("Shift-D")
def print_number_of_points(points_layer: "napari.layers.Points"):
    show_info(f"Detected points: {len(points_layer.data)}")
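Under the hood, a bind_key-style decorator is essentially registration: it stores the callback in a mapping keyed by the shortcut, to be looked up on each key event. Here is a napari-free, purely illustrative toy version (the real implementation additionally handles overwriting rules, generators for press/release behavior, and more):

```python
class Layer:
    """Toy stand-in for a napari layer class with bind_key-style registration."""

    keymap = {}

    @classmethod
    def bind_key(cls, key):
        def decorator(func):
            cls.keymap[key] = func  # register the callback under the shortcut
            return func             # the function itself is returned unchanged
        return decorator

    def on_key_press(self, key):
        # look up and invoke the registered callback, passing the layer
        callback = self.keymap.get(key)
        if callback is not None:
            return callback(self)

@Layer.bind_key("Shift-D")
def say_hello(layer):
    return f"key handled by {type(layer).__name__}"

print(Layer().on_key_press("Shift-D"))  # key handled by Layer
```

Note that the decorated function is returned unchanged, which is why print_number_of_points above remains directly callable.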
Tip
Instead of using print
, we used napari.utils.notifications.show_info
,
which puts the output both in the notebook (or the terminal, REPL, etc.) and in the viewer itself, as a notification
in the bottom right corner. However, be aware that this won’t work when napari is launched from a Jupyter notebook for napari
versions older than 0.5.0.
Let’s call the function to trigger it for the notebook, so we see the output (unfortunately the notification will not be captured by nbscreenshot
):
print_number_of_points(viewer.layers['Points'])
INFO: Detected points: 135
Important
It is also possible to bind keybindings at the viewer level, using @viewer.bind_key
. However, at the moment, bind_key
shortcuts using @viewer
cannot overwrite napari layer shortcuts, even with overwrite=True
. Worse yet, this will
silently fail, because the layer keybinding will trigger. Hence, it is best to use layer-specific keybindings.
There are actually a number of other events that you can connect callbacks to, other than just key presses. For more information, see the napari events documentation.
Conclusions#
We’ve now seen how to extend the viewer with custom GUI functionality: widgets and keybindings.
Using these concepts, you can make analyses even more interactive, particularly for exploratory/human-in-the-loop
analysis. Additionally, the approach described here, using magicgui
, can be used directly to create
a plugin to share with the world.