AxonEM Challenge: Large-scale 3D Axon Instance Segmentation¶
More information about the AxonEM dataset can be found in our MICCAI 2021 paper.
[Important Note!] We will use "AxonEM-v2", which has improved annotations. For reference, please cite the numbers on the leaderboard instead of those in the original paper. We exclude the FFN baseline results obtained with our own implementation, as it may not fully represent the capabilities of FFN.
Explore it in your browser! [AxonEM-H], [AxonEM-M]¶
(navigation tips: [manual], [YouTube video]; visualizing 1% of the ground truth)
Task¶
The task is 3D axon instance segmentation on two 30x30x30 um volumes, {750,1000}x4096x4096 in voxels at {40,30}x8x8 nm resolution, acquired from mouse (AxonEM-M) and human (AxonEM-H) tissue, respectively. The task is challenging because (a) axons can be falsely merged with abutting dendrites where cell boundaries are unclear, and (b) axons often form tight bundles.
Dataset¶
- Images [AxonEM-H-im-pad], [AxonEM-M-im-pad]: H has 1,000 consecutive slices and M has 750. Both volumes are padded by [20,512,512] voxels on each side along the (z,y,x) axes (see the loading sketch below).
- Training data [AxonEM-H-train], [AxonEM-M-train]: ground truth dense instance segmentation for 9 volumes per dataset.
All axon instances in the ground-truth annotation are at least 5 um long. The annotation is not perfect: if you find an erroneous segment, please email its (x,y,z) location to donglai.wei@bc.edu with the subject line "[AxonEM Error]" so that we can refine the annotation together.
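For convenience, here is a minimal loading sketch that crops the padded image volumes back to the valid region. It assumes each HDF5 file stores the volume under a single dataset key, here "main"; inspect the file with h5py if your copy differs.

```python
import h5py

# Pad width on each side along (z, y, x), as stated above.
PAD = (20, 512, 512)

def load_valid_region(path, key="main"):
    """Load a padded AxonEM image volume and crop away the padding.

    The dataset key "main" is an assumption; check list(f.keys())
    if the file uses a different one.
    """
    with h5py.File(path, "r") as f:
        vol = f[key][:]  # e.g. (1040, 5120, 5120) for the padded human volume
    z, y, x = PAD
    return vol[z:-z, y:-y, x:-x]

# Example: the cropped human volume should be (1000, 4096, 4096).
# im_h = load_valid_region("AxonEM-H-im-pad.h5")
```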
Evaluation Metric [code]¶
- Metric: Given the submitted dense instance segmentation, we use the expected run length (ERL) metric introduced in the flood-filling network [paper], evaluated on the axon segments only. As described in the AxonEM paper, we extend the implementation from Funke's lab to alleviate the excessive penalty caused by outliers [code]. Note that the evaluation script selects the axon segments automatically, so participants do not need to handle compartment classification (e.g., axon vs. non-axon). A simplified ERL sketch follows this list.
- Leaderboard Score: The ranking score for the challenge is the average of the scores on the two volumes (Total_Accuracy in the leaderboard). Each user can also see the evaluation results for each volume.
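For intuition only, below is a simplified sketch of the ERL idea: split each ground-truth skeleton into maximal runs of a constant predicted id, then take the length-weighted expectation of run length over all skeletons. The official script [code] is authoritative; it additionally zeroes out runs hit by merge errors and trims outliers, which this sketch omits, and the input format here (ordered skeleton nodes with unit edge length) is our assumption.

```python
import numpy as np

def expected_run_length(skeletons, pred):
    """Simplified ERL sketch (no merge-error or outlier handling).

    skeletons: list of (N_i, 3) integer arrays, each the ordered (z, y, x)
        nodes of one ground-truth axon skeleton (unit edge length assumed).
    pred: 3D array of predicted segment ids.
    """
    run_sq_sum, total_len = 0.0, 0.0
    for nodes in skeletons:
        ids = pred[nodes[:, 0], nodes[:, 1], nodes[:, 2]]
        # Positions where the predicted id changes along the skeleton.
        change = np.flatnonzero(ids[1:] != ids[:-1]) + 1
        runs = np.diff(np.concatenate(([0], change, [len(ids)])))
        run_sq_sum += np.sum(runs.astype(float) ** 2)
        total_len += len(ids)
    # Expected run length for a point sampled uniformly along all skeletons.
    return run_sq_sum / total_len
```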
Submission Format¶
The challenge accepts HDF5 files for submission. A valid submission is a single zip file containing two HDF5 files with exactly the following names (see the packaging sketch after this list):
- 0_human_instance_seg_pred.h5: HDF5 file containing the instance segmentation results on the human volume (1000x4096x4096, the valid region)
- 1_mouse_instance_seg_pred.h5: HDF5 file containing the instance segmentation results on the mouse volume (750x4096x4096, the valid region)
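A minimal packaging sketch, assuming your predictions are integer instance-id arrays of the shapes listed above. The dataset key "main" inside each file is our assumption; check the submission page if it specifies a different key.

```python
import zipfile
import h5py

def write_submission(human_seg, mouse_seg, out_zip="submission.zip"):
    """Write the two HDF5 files and bundle them into one zip.

    human_seg: (1000, 4096, 4096) integer array of instance ids.
    mouse_seg: (750, 4096, 4096) integer array of instance ids.
    """
    names = {
        "0_human_instance_seg_pred.h5": human_seg,
        "1_mouse_instance_seg_pred.h5": mouse_seg,
    }
    for fname, seg in names.items():
        with h5py.File(fname, "w") as f:
            # "main" is an assumed dataset key, not confirmed by the challenge.
            f.create_dataset("main", data=seg, compression="gzip")
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for fname in names:
            zf.write(fname)
```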