Volume 12 - Year 2025 - Pages 121-126
DOI: 10.11159/jbeb.2025.015

Tissue-Specific Graph Models Enhance GCN Performance for Breast Cancer Hormonal Status Classification in TMAs


Atefeh Azin Kousha1

1Swinburne University of Technology, School of Science, Computing and Engineering Technologies,
John St, Hawthorn, VIC 3122, Australia
aazinkousha@swin.edu.au

Abstract - Graph Convolutional Networks (GCNs) perform best on homophilic graphs, where connected nodes share similar features. However, histopathology images present a challenge due to heterogeneous tissue patterns and diverse morphological structures. In such cases, ensuring homophily in the graph model is crucial, since in non-homophilic graphs message passing can lead to feature over-smoothing, where features from dissimilar tissue regions become mixed and reduce the classifier’s ability to distinguish distinct patterns. To address this, we introduce Boosted Adaptive Radius Graph (BARG), a novel graph modelling strategy tailored for Hematoxylin and Eosin (H&E)-stained histology images. BARG builds upon a Fixed Radius Graph (FRG) dataset in which a global distance threshold for nodes’ connectivity is optimized. Through statistical analysis of nodes’ Local Density Features (LDFs) in each FRG model, a tissue-specific parameter is computed and refined via a globally optimized scaling factor. Then, a two-hop-to-one-hop edge-promotion mechanism enhances graph connectivity without introducing notable heterophily. BARG is evaluated on a balanced training dataset of 1000 images and a test set of 554 TMA images (287 positive and 267 negative), with each patient contributing one image. Graph node features are extracted via a pre-trained VGG16 Convolutional Neural Network (CNN) by processing small image patches centered at nuclei detection peaks. Compared to the FRG approach, BARG yields notable performance gains, achieving 78% accuracy, 75% sensitivity, and 81% specificity, marking a 4% improvement in accuracy and an 8% increase in sensitivity. BARG also reaches an AUC-ROC of 0.85, a 3% enhancement over FRG, while preserving structural and contextual tissue relevance. These results position BARG as a robust, scalable solution for graph modelling in histopathology image analysis, suitable for broader applications in computational pathology.

Keywords: Breast Cancer Hormonal Status, Estrogen Receptor Status, Deep Learning, Graph Convolutional Networks, Digital Pathology, Machine Learning, Features Over-smoothing, Breast Cancer Biomarkers

© Copyright 2025 Authors. This is an Open Access article published under the Creative Commons Attribution License terms. Unrestricted use, distribution, and reproduction in any medium are permitted, provided the original work is properly cited.

Date Received: 2025-05-13
Date Revised: 2025-09-01
Date Accepted: 2025-10-21
Date Published: 2025-12-18

1. Introduction

Breast cancer (BC) remains the most common cancer among women worldwide, with over 2.3 million new cases in 2020 [1]. A critical factor in the prognosis and treatment of invasive BC is the assessment of Estrogen Receptor Status (ERS), a predictive biomarker guiding hormone therapy decisions [2]. Clinical guidelines, including those from the American Society of Clinical Oncology, recommend testing for both Estrogen Receptors (ER) and Progesterone Receptors (PR) in all invasive BC patients. Immunohistochemistry (IHC) is the standard method for evaluating ERS and PR status (PRS), with positivity defined as at least 1% of carcinoma cells showing staining of any intensity [3]. About 70% of BC patients are ER-positive, making ERS the most influential marker for hormone therapy eligibility [4]. Although ER and PR are often co-expressed, discordant cases are rare, and ERS is typically the main factor determining patients' eligibility for hormone therapy.


Figure 1. BARG strategy for graph modelling of H&E-stained histology images.

Despite its widespread use, IHC-based ERS assessment has several notable limitations. It requires the use of costly special stains, skilled personnel, and substantial processing time. More critically, IHC interpretation is subjective, depending on pathologists’ visual evaluation of stained slides to estimate the proportion of positively stained nuclei to all nuclei within the tissue, thereby introducing diagnostic variability and potential human error [5]. Technical factors such as tissue handling, fixation, antibody selection, and threshold settings also contribute to inconsistencies [2]. Studies report up to 20% inaccuracy in IHC-based ERS results, causing false positives or negatives [3]. These challenges highlight the need for more objective, reproducible, and scalable ERS assessment methods. H&E staining remains the primary histopathological tool for BC diagnosis.

Pathologists evaluate H&E-stained slides for the morphology of cells and their spatial arrangement within the tissue structure [6]. Recent AI advances have shown that computational features from H&E images can correlate with ERS, even when not visually apparent [5, 7, 8, 9, 10]. For this purpose, most AI methods use CNNs directly [5, 7, 8, 9] or feed nuclei morphometric features into CNNs [10]. However, CNNs are limited by local receptive fields, making them less effective at capturing the long-range spatial relationships crucial to tissue structure. Additionally, CNN features are often abstract and biologically uninterpretable. On the other hand, GCNs [16], a modern class of artificial neural networks, have successfully classified BC H&E slides into diagnostic categories such as normal, benign, in-situ, and invasive carcinoma [15]. GCNs excel at capturing spatial relationships between graph nodes but are highly dependent on effective graph structures and informative node features.

Graph modelling of H&E slides remains a challenging research area for achieving optimal performance with GCNs. There are two major strategies for defining a graph model from these images: cell-graphs and patch-graphs. In the cell-graph strategy, nuclei serve as graph nodes, with edges encoding the intercellular spatial relationships [6, 11, 12, 13, 14]. Cell-graphs can be constructed by connecting each node to its k nearest neighbors (kNNs) [17], to all neighbors within a fixed radial distance R [10], to kNNs located within R [13], or to neighbors within R while constraining the maximum node degree to k [14]. In the patch-graph strategy, non-overlapping image patches serve as graph nodes, with their feature vectors extracted using a CNN module. Based on the (x, y) coordinates of each patch center, edges are established either by connecting kNNs according to Euclidean distance [18] or by thresholding the Euclidean distance between patch feature vectors using a fixed value δ [19, 20].
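To make the cell-graph variants above concrete, the sketch below connects each nucleus to at most its k nearest neighbours that lie within a radial distance R, which reduces to the pure kNN rule for large R and to the fixed-radius rule for large k. This is an illustrative NumPy sketch, not the implementation used in the cited works.

```python
import numpy as np

def knn_within_radius(coords, k, R):
    """Cell-graph edges: connect each nucleus to at most its k nearest
    neighbours lying within radial distance R (in the spirit of [13], [14])."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))       # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)            # forbid self-loops
    A = np.zeros_like(dist, dtype=int)
    for i, row in enumerate(dist):
        nbrs = np.flatnonzero(row <= R)       # candidates within radius R
        nbrs = nbrs[np.argsort(row[nbrs])][:k]  # keep the k closest of them
        A[i, nbrs] = 1
    return np.maximum(A, A.T)                 # symmetrise: undirected graph
```

Varying k and R recovers the different edge rules discussed above from a single routine.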

In the adaptive survival-aware patch-graph construction strategy proposed in [21], patch features are first projected through a learnable matrix T, after which edges are formed only when the cosine similarity between node features exceeds a global threshold ε. In addition, a global ℓ1 sparsity penalty is applied uniformly across all graphs and edge weights. Without spatial proximity constraints, visually similar but non-local image patches—such as separate glands or stromal regions—may be connected, resulting in biologically implausible “teleporting” edges. Increasing the ε value exacerbates under-connectivity in heterogeneous regions, where adjacent, biologically related patches fail to meet the cutoff, disrupting microarchitectural continuity and limiting message passing, while homogeneous tissue images may become overconnected. This uneven and biologically inaccurate connectivity across the dataset can compromise GCN performance.

We argue that approaches relying on static hyperparameters or globally learned projection matrices may be suboptimal, as tissue samples exhibit distinct topological and structural characteristics. These characteristics are best captured by adapting to a tissue-specific parameter that reflects the spatial cellular organization of each individual tissue image, rather than imposing uniform rules across the entire dataset.

In contrast to previous approaches that rely on fixed hyperparameters, such as R [10, 13, 14], δ [19, 20], ε [21], k [13, 14, 17, 18], or a globally learned projection matrix T [21], the BARG framework is both data-driven and tissue-specific. It starts from an FRG-based graph model and derives an adaptive tissue-specific parameter through statistical analysis of the LDF values computed for graph nodes. This parameter is refined by a global scaling constant, optimized jointly with a two-hop-to-one-hop edge-promotion mechanism that enhances graph connectivity while avoiding the heterophilous connections that degrade GCN performance. High-percentile selection on the CDF curve and the optimized scaling constant keep one-hop connectivity under controlled constraints, ensuring that promoted two-hop neighbors remain generally homophilous and that feature over-smoothing is avoided. By incorporating tissue-specific morphological and structural variation into edge formation, BARG models cellular interactions in accordance with tissue architecture. The resulting graph representations are biologically grounded, sample-adaptive, and data-driven, addressing the limitations of the global hyperparameters, sparsity penalties, and projection matrices used in previous methods [10, 13, 14, 17-21].

2. Materials and Methods

We hypothesize that the ERS of invasive breast cancer can be predicted from H&E slides more accurately using a GCN classifier that combines deep CNN-based features with graph structures generated by the proposed BARG strategy than with graph structures generated by the FRG strategy. Figure 1 provides an overview of our proposed graph construction and classification method, which consists of the following steps:

2.1. Data Acquisition

We use the publicly available Genetic Pathology Evaluation Centre (GPEC) databank [22-24], comprising 17 H&E-stained TMA slides from patients with invasive breast cancer, each annotated with pathologist-assessed ERS. After quality control, we curated a cohort of 1554 patients, each with a single TMA image containing at least 600 segmented nuclei. The dataset was split into a balanced training set (1000 images: 500 ER-positive, 500 ER-negative) and an independent test set (554 images: 287 ER-positive, 267 ER-negative). Each TMA image measures 650 × 800 pixels, corresponding to a 0.6 mm diameter tissue core.

2.2. Nuclei segmentation

Nuclei segmentation is performed using a multi-step image processing pipeline: (1) image upscaling, (2) background removal, (3) conversion to HSV and extraction of the brightness channel, (4) Otsu thresholding, (5) filling holes in detected nuclei, and (6) watershed segmentation to separate overlapping nuclei [25]. This approach, adapted from [10], ensures accurate identification of nuclei for subsequent graph construction.
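Steps (4)-(6) of the pipeline can be sketched with scikit-image and SciPy. This illustrative version assumes bright nuclei on a dark background; since hematoxylin-stained nuclei are dark, the brightness channel would in practice be inverted before thresholding (an assumption about the adapted pipeline, not the authors' exact code).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(gray):
    """Otsu thresholding -> hole filling -> distance-transform watershed
    to separate overlapping nuclei. Returns a label image and peak coords."""
    mask = gray > threshold_otsu(gray)            # (4) Otsu thresholding
    mask = ndi.binary_fill_holes(mask)            # (5) fill holes in nuclei
    distance = ndi.distance_transform_edt(mask)   # (6) distance map for watershed
    peaks = peak_local_max(distance, min_distance=5, labels=mask)
    marker_mask = np.zeros(gray.shape, dtype=bool)
    marker_mask[tuple(peaks.T)] = True
    markers, _ = ndi.label(marker_mask)           # one marker per detected peak
    labels = watershed(-distance, markers, mask=mask)
    return labels, peaks
```

The returned peak coordinates double as the nucleus detection peaks used later for patch extraction and as graph node positions.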

2.3. FRG Strategy

We treat the nuclei detected in Section 2.2 as graph nodes and use them to construct a graph model defined as G = {V, E, A} for each H&E-stained TMA image. Here, V denotes the finite set of nodes, each corresponding to a detected nucleus that has at least one connection in the graph; E represents the set of edges; and A is the unweighted adjacency matrix. In the FRG strategy, the adjacency matrix is constructed using an optimized Fixed Radial Proximity (FRP) threshold, denoted by dmax. An edge is established between two nodes if their Euclidean distance is less than or equal to dmax. The adjacency matrix for the qth graph under the FRG strategy, denoted AFRG,q, is computed as

AFRG,q(i, j) = 1 if 0 < ||ci − cj||2 ≤ dmax, and 0 otherwise,   (1)

for i, j ∈ {1, …, N}, where N = |V| and ci denotes the spatial coordinates of node i. dmax is optimized through a grid search to maximize the GCN accuracy for BC ERS classification on the FRG-based dataset, ensuring optimal graph connectivity for effective message passing.
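A direct implementation of the FRP rule (an edge wherever the pairwise Euclidean distance between distinct nuclei is at most dmax) can be sketched as:

```python
import numpy as np

def frg_adjacency(coords, d_max):
    """Eq. 1: unweighted adjacency with an edge between every pair of
    distinct nuclei whose Euclidean distance is at most d_max."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))           # pairwise distances
    return ((dist <= d_max) & (dist > 0)).astype(int)  # exclude self-loops
```

In the paper the single scalar d_max is shared across all images, which is exactly the global-hyperparameter behaviour that BARG later replaces with a per-tissue threshold.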

2.4. BARG Strategy

For the qth FRG model in our dataset, denoted as GFRG,q, we compute LDF (i, q) for its ith node as a normalized score representing its spatial proximity to its three closest neighbours, if they exist, as follows:


(2)

where rij is the distance between node i and its jth closest neighbour in the graph, and d corresponds to deg(i, q), i.e., the degree of node i in GFRG,q. To determine the tissue-specific distance for GFRG,q, denoted DTS(Gq), we analyse the distribution of LDFs in this graph model by plotting the CDF curve of LDFs across all nodes. The 80th percentile on the CDF curve of the qth graph, denoted as T80(Gq), is selected as our fixed reference point. This percentile is converted to a distance value as shown in Eq. 3.

(3)

After computing the tissue-specific parameter for each FRG model in our dataset as {DTS(Gq) | q = 1, …, Q}, where Q is the number of graphs, we scale these tissue-specific parameters by a global constant m, a tunable hyperparameter, to define the Adaptive Radial Proximity (ARP) threshold for the qth graph, denoted as dARP(Gq), as

dARP(Gq) = m · DTS(Gq).   (4)

This parameter m is later optimized via grid search over the entire BARG-based graph dataset (constructed using Eq. 6) in conjunction with our GCN classifier (Section 2.6) to maximize classification accuracy.
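The LDF-based threshold computation can be sketched as follows. Because the exact normalisation of Eq. 2 is not reproduced in this extract, the sketch assumes LDF(i) is the mean distance from node i to up to three of its closest graph neighbours; the percentile-to-distance step and the scaling of Eq. 4 follow the text.

```python
import numpy as np

def ldf(coords, A, n_closest=3):
    """Per-node local density feature: mean distance to the node's closest
    graph neighbours (one plausible reading of Eq. 2; normalisation assumed)."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    scores = []
    for i in range(len(coords)):
        nbr = np.flatnonzero(A[i])                  # one-hop neighbours of node i
        d = np.sort(dist[i, nbr])[:min(n_closest, len(nbr))]
        scores.append(d.mean() if len(d) else np.nan)  # isolated node: undefined
    return np.array(scores)

def adaptive_radius(coords, A, m=3.0, percentile=80):
    """D_TS = 80th percentile of the LDF distribution (Eq. 3);
    d_ARP = m * D_TS (Eq. 4)."""
    d_ts = np.nanpercentile(ldf(coords, A), percentile)
    return m * d_ts
```

Here the 80th percentile is read directly off the empirical LDF distribution, which is equivalent to picking the fixed reference point T80(Gq) on the CDF curve.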

The adjacency matrix for the qth Adaptive Radial Proximity Graph (ARG) model, denoted AARG,q, is defined as

AARG,q(i, j) = 1 if 0 < ||ci − cj||2 ≤ dARP(Gq), and 0 otherwise,   (5)

where ci and cj are the spatial coordinates of nodes i and j.

We enhance connectivity in the ARG model through a targeted rewiring strategy that converts two-hop neighbors of each node into direct one-hop neighbors, thereby defining the final adjacency matrix of the BARG strategy for the qth graph as

ABARG,q(i, j) = 1 if i ≠ j and (AARG,q + AARG,q²)(i, j) > 0, and 0 otherwise.   (6)

Converting two-hop edges into one-hop edges increases intra-cluster edge density and enhances message-passing efficiency. Using the ARG-based adjacency matrices from Eq. 5, we construct the BARG-based adjacency matrices according to Eq. 6 for different values of m in Eq. 4; m is then optimized via grid search to maximize the GCN classification accuracy on the BARG-based graph dataset (see Section 2.6 for the GCN). Figure 1 shows a graphical abstract of the BARG and FRG graph-construction strategies.
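One plausible matrix form of the two-hop-to-one-hop promotion (assuming an edge is created wherever the ARG adjacency matrix or its square has a nonzero off-diagonal entry) is:

```python
import numpy as np

def promote_two_hop(A):
    """Rewiring step of Eq. 6: every two-hop neighbour becomes a direct
    one-hop neighbour; self-loops introduced by A @ A are removed."""
    A2 = (A @ A > 0).astype(int)       # pairs reachable in exactly two hops
    B = ((A + A2) > 0).astype(int)     # union of one-hop and two-hop edges
    np.fill_diagonal(B, 0)             # no self-loops in the final graph
    return B
```

On a sparse ARG this densifies local clusters without connecting distant, unrelated tissue regions, since promoted pairs must already share a common neighbour.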

2.5. Graph Feature Matrix

The graph feature matrix for a graph G is defined as X ∈ R^(N×f), where N is the number of nodes and f is the dimensionality of the feature vectors. These feature vectors are extracted using a VGG16 CNN model pre-trained on ImageNet [26], which is employed as the feature extraction module with its final classification layer removed. For each nucleus (represented as a graph node), a feature vector is obtained by inputting an image patch of size p × p, centered at the nucleus detection peak, into the feature extraction module.

2.6. GCN Classifier

Following [16], we build a GCN by implementing the layer-wise propagation rule for spectral graph convolution operations. For the lth layer, the propagation rule is defined as

H^(l+1) = σ( D̃^(−1/2) Ã D̃^(−1/2) H^(l) W^(l) ),   (7)

where Ã = A + I is the adjacency matrix with added self-loops, D̃ is the degree matrix of Ã, H^(l) is the input feature matrix of the lth graph convolution layer, W^(l) is the trainable weight matrix of that layer, and σ denotes the activation function. H^(0) is the graph feature matrix X. This GCN model consists of two graph convolutional layers, with the number of filters in each layer denoted by the hyperparameters k1 and k2, which are optimised during experimentation.
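Eq. 7 corresponds to the propagation rule of [16] and can be sketched in NumPy as a single layer:

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One spectral graph-convolution layer (Kipf & Welling):
    H' = sigma(D~^{-1/2} (A + I) D~^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalisation
    return activation(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

Stacking two such layers, with weight matrices of widths k1 and k2, gives the classifier architecture described above (the activation choice here is illustrative).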

3. Results and Discussion

The FRG dataset is constructed according to Eq. 1 using an FRP threshold of dmax = 40, optimized via grid search to maximize the GCN classification accuracy. LDFs are then computed for all nodes of each FRG model as defined in Eq. 2, and the tissue-specific distance DTS(Gq) of each graph is calculated using Eq. 3. The multiplier m = 3 is determined via grid search by computing dARP(Gq) values with Eq. 4, constructing the BARG-based dataset through Eq. 5 and Eq. 6, and evaluating the resulting classification performance with our GCN classifier.

To demonstrate the superiority of the proposed BARG strategy over the conventional FRG approach, we train and validate the GCN classifier defined in Eq. 7 on the FRG-based dataset, referred to as GCN(FRG), and an identical GCN architecture on the BARG-based dataset, denoted GCN(BARG). The numbers of filters in the two graph convolutional layers, k1 and k2, were optimized and set to k1 = k2 = 512. Both models were trained with the binary cross-entropy loss and the Adam optimizer with a learning rate of 0.0001; these hyperparameters were identical for GCN(FRG) and GCN(BARG). The graph feature matrix X was constructed from feature vectors extracted as described in Section 2.5: with p = 33, image patches of size 33 × 33 centered at the nuclei detection peaks yield feature vectors of dimension f = 512. We used AFRG,q from Eq. 1 and ABARG,q from Eq. 6 as the adjacency matrix A in Eq. 7 to train GCN(FRG) and GCN(BARG), respectively.
Table 1 presents a comparison of performance metrics between these two models, abbreviating accuracy as acc., AUC-ROC as AUC, sensitivity as Sens., specificity as Spec., Positive Predictive Value as PPV, Negative Predictive Value as NPV, Area Under the Precision-Recall curve as AUPR, and F1-score as F1.
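The scalar metrics reported below follow the standard confusion-matrix definitions; a minimal sketch (the counts tp, fp, tn, fn are hypothetical placeholders):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, NPV and F1
    from binary-classification confusion counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)          # recall on ER-positive cases
    spec = tn / (tn + fp)          # recall on ER-negative cases
    ppv = tp / (tp + fp)           # precision
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)
    return dict(acc=acc, sens=sens, spec=spec, ppv=ppv, npv=npv, f1=f1)
```

AUC and AUPR are threshold-free and are instead computed from the full score distributions, as in Figure 2.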

Table 1. Performance metrics of GCN(BARG) and GCN(FRG).

Classifier    acc.    AUC     Sens.   Spec.   PPV     NPV     AUPR    F1
GCN(BARG)     0.778   0.847   0.746   0.813   0.817   0.741   0.851   0.780
GCN(FRG)      0.741   0.816   0.672   0.817   0.804   0.691   0.833   0.732

Figure 2. Performance Comparison of GCN on Graphs Constructed Using FRG and BARG strategies. Left: ROC Curves Comparison, right: Precision-Recall Curves Comparison.

Compared to the FRG-based dataset, BARG delivers notable performance improvements, achieving 78% accuracy, 75% sensitivity, and 81% specificity, corresponding to a 4% increase in accuracy and an 8% improvement in sensitivity while maintaining high specificity. Furthermore, GCN(BARG) attains an AUC-ROC of 0.85, representing a 3% increase over GCN(FRG), while effectively preserving structural integrity and tissue context.

Figure 2 presents the ROC curves of GCN(FRG) and GCN(BARG) on the left and their corresponding Precision-Recall curves on the right. In terms of both AUC-ROC and AUPR, GCN(BARG) slightly outperforms GCN(FRG). These findings establish BARG as a reliable and scalable approach for graph modelling in histopathology image analysis, with strong potential for broader use in computational pathology.

4. Conclusions

In this study, we introduced BARG, an adaptive graph modeling framework for histology images. Unlike conventional graph construction approaches that rely solely on global hyperparameters for edge establishment, BARG dynamically adjusts graph connectivity based on tissue-specific characteristics derived from the statistical distribution of local cell densities across the tissue architecture. This adaptive strategy yields biologically interpretable graph models that can be directly mapped onto the underlying tissue images. BARG models are both tissue-specific and data-driven, enabling robustness to variations in staining quality and imaging conditions across laboratories, as well as to inherent structural heterogeneity among tissue samples. Beyond improving ERS prediction from H&E-stained slides compared to the FRG method, BARG provides a foundation for tissue-specific and data-driven graph modeling in histopathology and is applicable to a wide range of classification tasks. While this study is limited by the relatively small dataset size and the absence of whole-slide images, future evaluation on larger datasets and integration with more advanced graph learning models may enable performance comparable to IHC-based diagnoses. This would allow direct ERS prediction from H&E slides, potentially eliminating the need for IHC staining and improving efficiency, consistency, and objectivity in pathology workflows.


References

[1] H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, “Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, vol. 71, no. 3, pp. 209–249, 2021.

[2] A. M. Gown, “Current issues in ER and HER2 testing by IHC in breast cancer,” Modern Pathology, vol. 21, no. 2, pp. S8–S15, 2008.

[3] M. E. H. Hammond, D. F. Hayes, M. Dowsett, D. C. Allred, K. L. Hagerty, S. Badve, P. L. Fitzgibbons, G. Francis, N. S. Goldstein, M. Hayes, and D. G. Hicks, “American Society of Clinical Oncology/College of American Pathologists guideline recommendations for immunohistochemical testing of estrogen and progesterone receptors in breast cancer,” Journal of Clinical Oncology, vol. 28, no. 16, pp. 2784–2795, 2010.

[4] F. Lumachi, A. Brunello, M. Maruzzo, U. Basso, and S. M. M. Basso, “Treatment of estrogen receptor-positive breast cancer,” Current Medicinal Chemistry, vol. 20, no. 5, pp. 596–604, 2013.

[5] G. Shamai, Y. Binenbaum, R. Slossberg, I. Duek, Z. Gil, and R. Kimmel, “Artificial intelligence algorithms to assess hormonal status from tissue microarrays in patients with breast cancer,” JAMA Network Open, vol. 2, no. 7, e197700, 2019.

[6] D. Ahmedt-Aristizabal, M. A. Armin, S. Denman, C. Fookes, and L. Petersson, “A survey on graph-based deep learning for computational histopathology,” Computerized Medical Imaging and Graphics, vol. 95, p. 102027, 2022.

[7] H. D. Couture, L. A. Williams, J. Geradts, S. J. Nyante, E. N. Butler, J. S. Marron, C. M. Perou, M. A. Troester, and M. Niethammer, “Image analysis with deep learning to predict breast cancer grade, ER status, histologic subtype, and intrinsic subtype,” NPJ Breast Cancer, vol. 4, no. 1, p. 30, 2018.

[8] N. Naik, A. Madani, A. Esteva, N. S. Keskar, M. F. Press, D. Ruderman, D. B. Agus, and R. Socher, “Deep learning-enabled breast cancer hormonal receptor status determination from base-level H&E stains,” Nature Communications, vol. 11, no. 1, p. 5727, 2020.

[9] P. Gamble, R. Jaroensri, H. Wang, F. Tan, M. Moran, T. Brown, I. Flament-Auvigne, E. A. Rakha, M. Toss, D. J. Dabbs, and P. Regitnig, “Determining breast cancer biomarker status and associated morphological features using deep learning,” Communications Medicine, vol. 1, no. 1, p. 14, 2021.

[10] R. R. Rawat, D. Ruderman, P. Macklin, D. L. Rimm, and D. B. Agus, “Correlating nuclear morphometric patterns with estrogen receptor status in breast cancer pathologic specimens,” NPJ Breast Cancer, vol. 4, no. 1, p. 32, 2018.

[11] H. Sharma, N. Zerbe, S. Lohmann, K. Kayser, O. Hellwich, and P. Hufnagl, “A review of graph-based methods for image analysis in digital histopathology,” Diagnostic Pathology, vol. 1, no. 1, 2015.

[12] D. Anand, S. Gadiya, and A. Sethi, “Histographs: Graphs in histopathology,” in Medical Imaging 2020: Digital Pathology, vol. 11320, pp. 150–155, 2020.

[13] J. Wang, R. J. Chen, M. Y. Lu, A. Baras, and F. Mahmood, “Weakly supervised prostate TMA classification via graph convolutional networks,” in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pp. 239–243, 2020.

[14] Y. Zhou, S. Graham, N. Alemi Koohbanani, M. Shaban, P. A. Heng, and N. Rajpoot, “CGC-Net: Cell graph convolutional network for grading of colorectal cancer histology images,” in IEEE/CVF International Conference on Computer Vision Workshops, 2019.

[15] Z. Gao, Z. Lu, J. Wang, S. Ying, and J. Shi, “A convolutional neural network and graph convolutional network based framework for classification of breast histopathological images,” IEEE Journal of Biomedical and Health Informatics, vol. 26, no. 7, pp. 3163–3173, 2022.

[16] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in International Conference on Learning Representations (ICLR), 2017.

[17] R. J. Chen, M. Y. Lu, J. Wang, D. F. Williamson, S. J. Rodig, N. I. Lindeman, and F. Mahmood, “Pathomic fusion: An integrated framework for fusing histopathology and genomic features for cancer diagnosis and prognosis,” IEEE Transactions on Medical Imaging, vol. 41, no. 4, pp. 757–770, 2020.

[18] R. J. Chen, M. Y. Lu, M. Shaban, C. Chen, T. Y. Chen, D. F. Williamson, and F. Mahmood, “Whole slide images are 2D point clouds: Context-aware survival prediction using patch-based graph convolutional networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 339–349, 2021.

[19] M. Tu, J. Huang, X. He, and B. Zhou, “Multiple instance learning with graph neural networks,” arXiv preprint arXiv:1906.04881, 2019.

[20] R. Li, J. Yao, X. Zhu, Y. Li, and J. Huang, “Graph CNN for survival analysis on whole slide pathological images,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 174–182, 2018.

[21] P. Liu, L. Ji, F. Ye, and B. Fu, “GraphLSurv: A scalable survival prediction network with adaptive and sparse structure learning for histopathological whole-slide images,” Computer Methods and Programs in Biomedicine, vol. 231, p. 107433, 2023.

[22] M. C. Cheang, D. O. Treaba, C. H. Speers, I. A. Olivotto, C. D. Bajdik, S. K. Chia, L. C. Goldstein, K. A. Gelmon, D. Huntsman, C. B. Gilks, and T. O. Nielsen, “Immunohistochemical detection using the new rabbit monoclonal antibody SP1 of estrogen receptor in breast cancer is superior to mouse monoclonal antibody 1D5 in predicting survival,” Journal of Clinical Oncology, vol. 24, no. 36, pp. 5637–5644, 2006.

[23] Welcome to GPEC Bliss Server Images, viewed 3 May 2023, www.gpecdata.med.ubc.ca/images/bliss/.

[24] Genetic Pathology Evaluation Centre, viewed 2 Jul 2022.

[25] J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, and J. Y. Tinevez, “Fiji: An open-source platform for biological-image analysis,” Nature Methods, vol. 9, no. 7, pp. 676–682, 2012.

[26] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009.