Volume 601, Issue 15 p. 3011-3024
Topical Review
Open Access

From single-neuron dynamics to higher-order circuit motifs in control and pathological brain networks

Darian Hadjiabadi

Department of Neurosurgery, Stanford University, Stanford, California, USA

Department of Bioengineering, Stanford University, Stanford, California, USA

Corresponding author D. Hadjiabadi: Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA. Email: [email protected]

Ivan Soltesz

Department of Neurosurgery, Stanford University, Stanford, California, USA
First published: 11 July 2022

Handling Editors: Katalin Toth & Michael Okun

The peer review history is available in the Supporting information section of this article (https://doi.org/10.1113/JP282749#support-information-section).

Abstract

The convergence of advanced single-cell in vivo functional imaging techniques, computational modelling tools and graph-based network analytics has heralded new opportunities to study single-cell dynamics across large-scale networks, providing novel insights into principles of brain communication and pointing towards potential new strategies for treating neurological disorders. A major recent finding has been the identification of unusually richly connected hub cells that have the capacity to synchronize networks and may also be critical in network dysfunction. While hub neurons are traditionally defined by measures that consider solely the number and strength of connections, novel higher-order graph analytics now enables the mining of massive networks for repeating subgraph patterns called motifs. As an illustration of the power offered by higher-order analysis of neuronal networks, we highlight how recent methodological advances uncovered a new functional cell type, the superhub, that is predicted to play a major role in regulating network dynamics. Finally, we discuss open questions that will be critical for assessing the importance of higher-order cellular-scale network analytics in understanding brain function in health and disease.

Introduction

Though the term ‘computational neuroscience’ was not introduced in text until 1990 (Schwartz, 1990), Hodgkin and Huxley's seminal work to model action potential initiation and propagation (Hodgkin & Huxley, 1952) remains the defining touchstone for the field. Despite massive and rapid technological advances since their time, the fundamental premise of modelling remains the same – to simulate complex neural systems and generate accurate predictions that have the potential to guide the next generation of experiments. Building and analysing computational models of neural circuits has the potential to unify large bodies of biological and quantitative knowledge (Wang et al., 2020), enabling researchers to study a multitude of spatial scales, from macroscopic brain regions, to circuits, individual cells, and synapses (Koch et al., 1998; Milstein et al., 2021; Suárez et al., 2020; Traub et al., 1991).

A major topic leveraging computational modelling has been the investigation of the cellular-scale communication connectome by uncovering the functional role of individual cells in driving network activity. Not only do such endeavours propel our understanding of brain communication at the cellular level (Sadeh & Clopath, 2020), but they also provide new insights into neurological disorders that may ultimately be useful for devising novel intervention strategies (Hadjiabadi et al., 2021; Krook-Magnuson & Soltesz, 2015). Significant efforts to date have discovered sparse but highly connected so-called hub neurons across a range of organisms and brain regions from Caenorhabditis elegans (Towlson et al., 2013) to rodents (Bonifazi et al., 2009) and primates (Dann et al., 2016). These hub neurons play a central role in the overall organization of a network, driving network synchronization (Bonifazi et al., 2009), and have furthermore been implicated in the ability of circuits to generate pathological seizure activity (Morgan & Soltesz, 2008). The major computational prediction that only a few cells could exert such powerful influence over network-wide activity (Morgan & Soltesz, 2008) has propelled significant research to utilize hub cells as targets for seizure control (Krook-Magnuson et al., 2015).

Identifying the hub neuron and its connection to normal and abnormal network dynamics required the convergence of single-cell imaging, quantitative modelling and graph-based network theory, all of which have undergone significant transformations over the last decade. These methodological advances include the ability to image activity from thousands of individual neurons in vivo (Lovett-Barron et al., 2017) with advanced 2-photon microscopes, measure field potentials from multiple brain regions with Neuropixels probes (Vesuna et al., 2020), model the rich communication structure with machine learning-based inference algorithms (Sompolinsky et al., 1988; Sussillo & Abbott, 2009), and mine the inferred connectivity networks with higher-order network analytics (Benson et al., 2016; Yin et al., 2017) (Fig. 1). Computational pipelines that synthesize one or more of these advances have uncovered new fundamental insights into network topology, one example being the identification of ‘superhub’ cells that are predicted to be a maximally selective single-cell target for seizure control (Hadjiabadi et al., 2021). More broadly, these new methodologies provide opportunities to mine massive cellular-scale connectomes for connectivity patterns called higher-order motifs (Benson et al., 2016), which have been implicated in affecting network function across a range of fields (Milo et al., 2002) and may therefore be critical to understanding how biological neural networks are organized in health and disease.

Figure 1. A computational pipeline to record, model, and mine large-scale cellular-resolution networks
State-of-the-art imaging technology and quantitative analysis methods are enabling researchers to uncover network dynamics across massive neuronal networks at single-cell resolution. In step 1 of this example pipeline, single-cell recordings from the larval zebrafish are performed using 2-photon Ca2+ imaging, capturing activity from thousands of neurons across the entire brain in both control and pathological (preseizure) conditions. In step 2, time-series neural activity from these brain states undergoes computational modelling to infer the underlying cell–cell communication network using machine learning-based optimization (see Fig. 2). The inferred directed graph can then be mined in step 3 using local clustering techniques that can extract important higher-order connectivity structures (see Fig. 3). Reproduced in a modified form with permission from Hadjiabadi et al. (2021).

In this Review, we highlight how data-driven computational modelling and analysis can be applied to understand higher-order neural communication at the cellular level in large-scale networks, with specific examples from control and epileptic brains. Novel machine-learning and higher-order graph analytics are introduced in Section 1. These methods will be critical for investigating the topological organization of cellular networks (Section 2), where we highlight the prediction that superhub neurons play a major role in driving downstream excitation in the preseizure brain state. Lastly, we discuss particularly promising avenues for future research into higher-order network analysis and superhubs (Section 3).

Section 1: Advanced methods to model and mine time-series neural data

The recording, modelling, and mining pipeline (Fig. 1) is a fundamental series of steps that allows high-dimensional time-series neural data to be transformed into accessible insights and actionable predictions. Numerous approaches exist to extract meaningful measures of neural activity, including functional magnetic resonance imaging (fMRI), EEG, microelectrode arrays and calcium imaging (Chen et al., 2017). Modelling methods can then be utilized to convert recorded activity into a graph, which is defined as a collection of nodes (objects) and edges (connections between objects). With macroscale recordings such as fMRI and EEG, nodes are usually brain regions, whereas with single-cell imaging, each node in the network can represent a neuron. These networks can be mined using a host of powerful graph theoretical algorithms whose origins date back to the 18th century (Berge, 1982).
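As a minimal sketch of this node-and-edge representation, the output of the modelling step can be stored as a weighted, directed graph; the weight matrix, threshold and 4-node network below are hypothetical placeholders, not data from any of the cited studies:

```python
import numpy as np

def build_graph(weight_matrix, threshold=0.0):
    """Represent a network as nodes and signed, weighted, directed edges.

    weight_matrix[i, j] is the inferred influence of node i on node j;
    entries with |w| <= threshold are treated as absent edges.
    """
    n = weight_matrix.shape[0]
    nodes = list(range(n))
    edges = {(i, j): weight_matrix[i, j]
             for i in range(n) for j in range(n)
             if i != j and abs(weight_matrix[i, j]) > threshold}
    return nodes, edges

# Hypothetical 4-node network: node 0 excites nodes 1 and 2, node 3 inhibits node 0.
W = np.array([[0.0, 0.8, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [-0.6, 0.0, 0.0, 0.0]])
nodes, edges = build_graph(W, threshold=0.1)
# Lower-order statistics, e.g. out-degree, follow directly from the edge list.
out_degree = {i: sum(1 for (src, _) in edges if src == i) for i in nodes}
```

Depending on the recording modality, each node here would stand for a brain region (fMRI, EEG) or a single neuron (calcium imaging).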

While graph analysis of macroscale data has been informative for advancing our understanding of cognition, behaviour and perception (Bassett et al., 2018), it has been shown that single-cell dynamics can often diverge significantly from what population recordings would predict (Muldoon et al., 2013; Truccolo et al., 2011); a situation referred to as the micro–macro disconnect (Farrell et al., 2019). Fortunately, the conjunction of highly efficient algorithms for modelling and mining massive datasets (Sussillo & Abbott, 2009; Yin et al., 2017) can be, and has been, leveraged to study cellular-resolution neural circuits (Hadjiabadi et al., 2021). Here, we briefly discuss some of the latest methods in connectivity modelling and graph-based network analysis that can give rise to novel insights on brain function.

FORCE learning

The underlying cell–cell communication network can be inferred from large-scale, cellular-resolution recordings using functional connectivity and effective connectivity analysis (Friston, 2011). Functional connectivity refers to the temporal correlations of the signal of interest, whereas effective connectivity determines how much influence one node exerts on its neighbours (Friston, 2011). Numerous methods exist within the functional and effective connectivity domains (Farahani et al., 2019). For example, functional measurements include cross-correlation (Biswal et al., 1995), spectral coherence (Sun et al., 2004), statistical parametric mapping (Friston et al., 1991), decomposition-based analysis (Baumgartner et al., 2000), clustering (Chuang et al., 1999) and mutual information (Tsai et al., 1999). Effective measurements include Granger causality (Seth, 2010), dynamic causal modelling (Friston, 2009), transfer entropy (Shimono & Beggs, 2015) and FORCE learning on chaotic recurrent neural networks (RNNs) (Sompolinsky et al., 1988; Sussillo & Abbott, 2009). Altogether, the overarching goal is to model the pairwise interactions between recorded entities – whether macroscale brain regions or single cells.
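To make the functional connectivity concept concrete, a simple estimate can be computed from pairwise correlation of activity traces; the traces, threshold and random seed below are invented for illustration and are not drawn from the cited work:

```python
import numpy as np

def functional_connectivity(traces, threshold=0.5):
    """Estimate a functional connectivity graph from activity traces.

    traces: (n_cells, n_timepoints) array, e.g. dF/F calcium signals.
    Returns a symmetric boolean adjacency matrix linking cell pairs whose
    zero-lag Pearson correlation exceeds `threshold`.
    """
    corr = np.corrcoef(traces)     # pairwise Pearson correlations
    np.fill_diagonal(corr, 0.0)    # ignore self-correlation
    return corr > threshold

# Hypothetical traces: cells 0 and 1 co-fluctuate; cell 2 is independent.
rng = np.random.default_rng(0)
shared = rng.standard_normal(500)
traces = np.vstack([shared + 0.1 * rng.standard_normal(500),
                    shared + 0.1 * rng.standard_normal(500),
                    rng.standard_normal(500)])
adj = functional_connectivity(traces, threshold=0.5)
```

Effective connectivity methods such as FORCE learning instead fit directed, signed influences, as described next.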

FORCE learning is an effective connectivity approach that optimizes for the effective weights between pairs of nodes to match a target output (Fig. 2) (Sussillo & Abbott, 2009). To date, this technique has been shown to be effective for modelling neural dynamics related to hippocampal sequence generation (Rajan et al., 2016), motor planning (Li et al., 2016), coping (Andalman et al., 2019) and epilepsy (Hadjiabadi et al., 2021). Importantly, advances in 2-photon single-cell imaging have enabled researchers to capture and model thousands of neurons across the whole larval zebrafish brain (Andalman et al., 2019), opening new pathways to mine the massive cellular-scale connectome. FORCE learning requires two key ingredients: a network with governing dynamics and a learning rule.

Figure 2. Chaotic recurrent neural networks and FORCE learning
A, the network architecture of the chaotic recurrent neural network (RNN) is a graph consisting of a collection of nodes and tunable edges (Sompolinsky et al., 1988). Depending on the recording device used, the nodes can range from brain regions to individual neurons. The edges Jij represent the influence between pairs of nodes and are signed (positive/negative) and weighted. Currently, networks with thousands of neurons can be modelled from whole-brain single-cell recordings of larval zebrafish (Andalman et al., 2019; Betzel, 2019; Hadjiabadi et al., 2021; Lovett-Barron et al., 2017). The governing equations of the RNN allow for chaos to emerge. Here, τ is the time constant, xi is the inferred current within node i, g is the gain parameter (chaotic dynamics emerge for g > 1), φ(.) is the tanh function, hi is a noise term for node i, and zi is the model-generated unit activity at node i. The dynamics can be solved using numerical estimation methods. B, the learning rule for FORCE optimization ensures that the effective edges Jij are updated at each training epoch. In the original formulation, the weight update is proportional to the error between the unit activity zi and the target-derived activity fi (Sussillo & Abbott, 2009). In recent work (Hadjiabadi et al., 2021), the error is scaled by the structural connectivity weight Sij (see panel C). With single-cell imaging, the target activity is the experimental GCaMP recordings. Note that zi initially starts off chaotic but eventually matches the target output, leading to an overall decrease in mean square error. C, modifications to the original weight update step include incorporating structural data from the zebrafish neuroanatomical connectome as a biological constraint to guide training (Hadjiabadi et al., 2021).
The aim is to modulate the distribution of effective weights such that pairs of neurons living in poorly connected brain regions exhibit low influence while pairs of neurons in highly connected brain regions have increased influence. The neuroanatomical connectome was built from quantitative analysis of thousands of neuron tracings and is represented by a weight matrix (Kunst et al., 2019).

The network architecture is represented by a chaotic RNN (Fig. 2A), a type of non-linear dynamic system that spontaneously exhibits chaotic behaviours (Sompolinsky et al., 1988). Chaotic systems are highly sensitive to initial conditions (Lorenz, 1963) and can therefore generate a wide range of complex dynamics, making them optimal for training. The RNN architecture is intuitively very similar to a graph, consisting of nodes and directed edges, and is recurrently connected in the sense that each node is connected to every other node. Sparsity constraints have been included to improve generalizability and to more closely represent the underlying structural anatomy (Song et al., 2005). Each node in the network generates activity from a system of governing dynamic equations that considers the weighted connectivity pattern of its inputs. Importantly, chaotic RNNs also fall under the category of generative models as they can be used to generate synthetic datasets (e.g. of single-cell synthetic calcium traces, as illustrated in Step 2 in Fig. 1) from the dynamic equations, a critical asset for performing perturbation simulations (Betzel & Bassett, 2017).
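A minimal sketch of such a chaotic RNN, assuming the rate equations summarized in Fig. 2A (Euler integration, Gaussian random effective weights scaled by 1/√N); all parameter values here are illustrative rather than taken from the cited models:

```python
import numpy as np

def simulate_chaotic_rnn(n=200, g=1.5, tau=1.0, dt=0.1, steps=2000, seed=1):
    """Euler-integrate the rate dynamics described in Fig. 2A:

        tau * dx_i/dt = -x_i + g * sum_j J_ij * tanh(x_j) + h_i

    J has i.i.d. Gaussian entries with variance 1/n, so gain g > 1 yields
    spontaneous chaotic activity (Sompolinsky et al., 1988).
    """
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) / np.sqrt(n)   # effective weights J_ij
    x = rng.standard_normal(n)                     # initial currents x_i
    history = np.empty((steps, n))
    for t in range(steps):
        noise = 0.01 * rng.standard_normal(n)      # small noise term h_i
        x = x + (dt / tau) * (-x + g * J @ np.tanh(x) + noise)
        history[t] = np.tanh(x)                    # unit activity z_i
    return history

activity = simulate_chaotic_rnn()
```

Because the model generates activity from its own dynamics, the same machinery yields synthetic traces for perturbation simulations, as noted above.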

The learning rule enables the chaotic RNN to fine-tune the weights of the directed edges through a least-squares optimization approach (Sussillo & Abbott, 2009) (Fig. 2B). The expectation is that each iteration of learning controls the chaos of the RNN, bringing the dynamics closer to the experimentally derived time-series neural data. The final edge weights represent the influence from one node to another and can be positive (excitatory) or negative (inhibitory).
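The learning rule can be sketched with the recursive least-squares (RLS) form of FORCE applied to a feedback-driven readout, following the general scheme of Sussillo & Abbott (2009); the network size, gain and sine-wave target below are illustrative choices, not the configuration used in the studies discussed:

```python
import numpy as np

def force_train(target, n=300, g=1.5, alpha=1.0, dt=0.1, seed=2):
    """Sketch of FORCE learning: an RLS update drives a linear readout of
    a chaotic reservoir toward a target signal, with the readout fed back
    into the network so that learning tames the chaos."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((n, n)) / np.sqrt(n)  # fixed chaotic weights
    w = np.zeros(n)                                   # trained readout weights
    u = rng.uniform(-1, 1, n)                         # feedback weights
    P = np.eye(n) / alpha                             # running inverse correlation
    x = 0.5 * rng.standard_normal(n)
    r = np.tanh(x)
    z = 0.0
    errors = []
    for f in target:
        x = x + dt * (-x + J @ r + u * z)  # reservoir dynamics with feedback
        r = np.tanh(x)
        z = w @ r                          # model output
        e = z - f                          # error against target activity f
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)            # RLS gain
        P -= np.outer(k, Pr)
        w -= e * k                         # weight update proportional to error
        errors.append(e * e)
    return w, np.array(errors)

# Hypothetical target: a slow sine wave standing in for a recorded trace.
t = np.arange(0, 60, 0.1)
target = np.sin(2 * np.pi * t / 10)
w, errors = force_train(target)
```

The studies discussed above apply this idea to the full recurrent weight matrix, one target trace per recorded neuron, rather than to a single readout as in this sketch.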

There are, however, several limitations of this approach. First, FORCE learning is a purely data-driven approach and does not include knowledge of the underlying system being modelled. To mitigate this, recent modelling of single-cell data from the larval zebrafish included the macroscale neuroanatomical connectome as a biological constraint (Hadjiabadi et al., 2021) (Fig. 2C). This modification helps reduce the chances of false-positive effective connections between pairs of neurons residing in brain regions that are not highly connected, and more broadly creates opportunities for additional biologically realistic constraints on abstract machine learning-based modelling efforts. Second, first-generation RNNs assumed that every neuron is connected to every other neuron, which does not hold in the nervous system. Sparsity constraints have been included (Rajan et al., 2016) to address this issue while also serving the dual purpose of improving generalizability of the network. Third, while RNNs can take significant time to train, access to massive supercomputers enables modellers to generate well-fitted models in a reasonable time (Andalman et al., 2019).

Higher-order network analysis

Networks are a fundamental data structure for modelling and understanding complex systems. Traditionally, algorithms characterizing local and global network topology have considered only lower-order properties based on nodes and edges (Berge, 1982). However, there has been significant interest in higher-order motif structures (Fig. 3A), repeating subgraphs that have been implicated in playing important roles in network function (Bojanek et al., 2020; Milo et al., 2002; Sporns & Kötter, 2004). Recent algorithms to mine large-scale networks for motif patterns (Step 3 in Fig. 1) have been used to gain new insights into cellular-level neural communication (Benson et al., 2016).

Figure 3. Higher-order motifs and local higher-order clustering
A, motifs are subgraphs representing higher-order connectivity patterns of networks. These repeating subgraphs may be critical for understanding network function in massively complex systems (Milo et al., 2002). For example, feedforward motifs and cycles have been implicated in sustaining cortical-like asynchronous spiking patterns in computational models (Bojanek et al., 2020), and the overrepresentation of feedforward motifs is furthermore predicted to emerge in the pathological brain, generating unrestrained excitation (Hadjiabadi et al., 2021). Moreover, bifan motifs, in which two neurons each project to the same two target neurons, are critical for filtering and synchronization in signalling networks (Lipshtat et al., 2008), and have been linked to the circuit pathways underlying nictation in C. elegans (Benson et al., 2016). B–D, local higher-order clustering can identify clusters dense in a specific motif of interest. The MAPPR algorithm (Yin et al., 2017) will identify clusters with the lowest conductance value, which can be thought of as islands isolated from the rest of the graph. As an illustration of how the motif conductance is quantified, we use a toy undirected network structure and local cluster X. B, an example seed node (asterisk) and its neighbours are shown. This seed node is a part of two triangle motifs (labels 1,2). C, zooming out, the seed node is part of a larger network, and resides inside a local cluster X containing six nodes. Each node in the cluster gets assigned a ‘motif degree’, which is the number of motifs each node is a part of (see B). D, the motif conductance is calculated by considering how many motifs reside between the local cluster X and the rest of the network divided by the sum of the ‘motif degree’ for nodes inside the cluster X. In this case, there is one motif that cuts the cluster (grey), and the sum of ‘motif degree’ for nodes in X is 11. Altogether, this gives a motif conductance of 1/11 = 0.091.

One such algorithm has been the motif-based spectral clustering method (Benson et al., 2016). The key idea is to identify a cluster of nodes such that the cluster participates in many instances of the user-specified motif while minimizing the number of ‘cut’ motifs (i.e. motifs that lie between the cluster and the rest of the network) (Fig. 3B, C). The algorithm finds a higher-order cluster by minimizing a metric termed ‘motif conductance’ (Fig. 3D), which is the number of cut motifs divided by the sum of the ‘motif degrees’ of the nodes inside the cluster. Using this framework, Benson et al. (2016) investigated the higher-order organization of the C. elegans neuronal network for bifan motifs (Fig. 3A), which are multi-output, multi-input channels that can serve as signal filters and synchronizers (Lipshtat et al., 2008). Specifically, Benson et al. (2016) showed that bifan motifs form key pathways in the motor-sensory circuit responsible for nictation (Benson et al., 2016), providing correlative evidence linking the higher-order organization of neuronal networks with behavioural output.
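A toy implementation of motif conductance for triangle motifs, using the simplified normalization described in the Fig. 3 caption (cut motifs divided by the summed motif degree of the cluster; Benson et al. (2016) normalize by the smaller of the two motif volumes); the 6-node graph below is hypothetical, not the network drawn in the figure:

```python
from itertools import combinations

def triangles(adj):
    """Enumerate all triangle motifs in an undirected graph.
    adj: dict mapping each node to the set of its neighbours."""
    return [tri for tri in combinations(sorted(adj), 3)
            if tri[1] in adj[tri[0]] and tri[2] in adj[tri[0]]
            and tri[2] in adj[tri[1]]]

def motif_conductance(adj, cluster):
    """Triangle-motif conductance in the spirit of Fig. 3B-D:
    (# triangles cut by the cluster boundary) divided by the summed
    'motif degree' of the nodes inside the cluster."""
    tris = triangles(adj)
    cluster = set(cluster)
    # A motif is 'cut' if its nodes straddle the cluster boundary.
    cut = sum(1 for tri in tris if 0 < len(cluster & set(tri)) < 3)
    # Motif degree of a node = number of motifs it participates in.
    vol = sum(sum(1 for tri in tris if node in tri) for node in cluster)
    return cut / vol

# Hypothetical toy graph: triangle cluster {0, 1, 2} weakly tied to {3, 4, 5}.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4, 5},
       4: {3, 5}, 5: {3, 4}}
phi = motif_conductance(adj, {0, 1, 2})  # one cut triangle, motif volume 5
```

Low conductance marks a cluster that is dense in the motif internally yet well insulated from the rest of the graph, exactly the ‘island’ intuition of the figure.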

Though the method described above (Benson et al., 2016) has mathematical guarantees to find a near-optimal cluster, it is a global clustering method with run-time O(m^1.5), where m is the number of edges. However, additional advances have improved the run-time of higher-order clustering algorithms. Local higher-order clustering methods have recently been developed (Yin et al., 2017), enabling targeted clustering around a given seed node; they are therefore faster than traditional global clustering methods. The Motif-based Approximate Personalized PageRank (MAPPR) algorithm developed by Yin et al. (2017) finds higher-order clusters by minimizing the motif conductance, similar to Benson et al. (2016), and also has mathematical guarantees of optimality (Fig. 3B–D). Moreover, clusters can be identified in O(m) time, enabling massive networks containing billions of edges to be mined for their higher-order structure much more rapidly, though the method requires unweighted networks. While these methods critically open new opportunities to investigate the cellular-scale communication connectome, recent research has begun to challenge the notion that complex interactions can be captured by standard network models in which the influence between components is represented by pairwise link-statistics (Lambiotte et al., 2019). Instead, new analysis tools have begun to build networks based on higher-order path-statistics, providing a wealth of insights across a broad range of fields (Lambiotte et al., 2019). Studying the organization of higher-order motifs within these networks may unlock new avenues to further understand the organizing structure of neural communication.

Section 2: Topological organization of cellular networks

The topology of a network determines the ability of an individual node to influence its surroundings (Bassett et al., 2018). One important concept across the study of networks – whether focusing on electrical grids, the internet or the brain – has been the hub node (Heuvel & Sporns, 2013). In neuroscience applications, hub nodes are neurons with unusually high connectivity; they emerge in networks with non-random connectivity patterns (Barabási & Albert, 1999) and have been found across a range of organisms and brain regions (Bonifazi et al., 2009; Dann et al., 2016; Nigam et al., 2016; Shimono & Beggs, 2015). Interestingly, hub-containing networks can be constructed through simple ‘rich-get-richer’ connectivity schemes in which newly added nodes preferentially attach to nodes that already have higher than average connectivity, with implications for developing biological networks (see below) (Barabási & Albert, 1999; Bonifazi et al., 2009; Morgan & Soltesz, 2008). While several metrics exist to quantify hubs (Sporns et al., 2007), the most intuitive has been degree centrality, which simply counts the number and strength of connections a node has (Fig. 4A). This application of graph theory has been critical for determining the role of highly connected neurons in influencing circuit dynamics (Bonifazi et al., 2009).
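The ‘rich-get-richer’ growth scheme and the degree-centrality definition of a hub can be sketched as follows (a minimal preferential-attachment model in the spirit of Barabási & Albert (1999); the network size, seed and top-five hub cutoff are arbitrary illustrative choices):

```python
import random

def preferential_attachment(n_nodes, m=2, seed=3):
    """'Rich-get-richer' growth: each new node attaches m edges to
    existing nodes with probability proportional to their current
    degree, producing a heavy-tailed degree distribution."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Each node appears in this pool once per unit of degree, so sampling
    # uniformly from the pool implements degree-proportional attachment.
    attachment_pool = [node for edge in edges for node in edge]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(attachment_pool))
        for t in targets:
            edges.append((new, t))
            attachment_pool.extend([new, t])
    return edges

edges = preferential_attachment(500)
degree = {}
for i, j in edges:
    degree[i] = degree.get(i, 0) + 1
    degree[j] = degree.get(j, 0) + 1
# Degree-centrality hubs: the most highly connected nodes in the network.
hubs = sorted(degree, key=degree.get, reverse=True)[:5]
```

Most nodes end up with degree near the minimum while a handful accumulate many connections, mirroring the heavy-tailed distributions reported for biological neural networks.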

Figure 4. Cellular superhubs identified through higher-order network analysis
A, traditional hub neurons are identified by simple edge count and have an unusually high degree of connectivity. An example outgoing hub neuron with numerous outgoing projections is shown. B, an example superhub rich in feedforward motifs, identified using the MAPPR algorithm (Yin et al., 2017) for local higher-order clustering. In the zebrafish experiments illustrated in Fig. 1, in the pathological preseizure brain state, the feedforward motif conductance of outgoing hub neurons was significantly elevated compared with the baseline state, indicating a propensity to project excitatory activity downstream (Hadjiabadi et al., 2021). C, disconnecting superhubs from the epileptic brain stabilizes circuits. Principal component analysis of simulated network dynamics after a single outgoing hub was perturbed in a modelled preseizure network before (top) and after (bottom) superhubs were disconnected. Reproduced, in modified form, with permission from Hadjiabadi et al. (2021).

Functional roles of highly connected hub neurons in neural circuits

A key function of hub neurons was first discovered by Bonifazi et al. (2009) in the rodent hippocampus and has since been studied more broadly in other cortical brain regions of both mice and primates (Dann et al., 2016; Shimono & Beggs, 2015). Bonifazi et al. used single-cell calcium imaging to capture the activity of neurons in the developing hippocampus. Recordings from acute hippocampal slice preparations were then analysed ‘online’ (i.e. while the imaging was performed from the living slice) using functional connectivity correlation analysis, where it was observed that the node degree distribution was heavy-tailed, with most nodes having few connections and only a few nodes having large numbers of connections (Bonifazi et al., 2009). Astonishingly, perturbation of a single predicted hub neuron indeed significantly influenced network dynamics and established synchronization (Bonifazi et al., 2009). Further analysis showed that hub neurons were primarily early-born GABAergic interneurons (Bonifazi et al., 2009), and this hub status continued into adulthood (Bocchio et al., 2020).

Computational simulations have also provided unique insights into the role of hub neurons in functional network dynamics. In a biophysically detailed model of a control neocortical microcircuit containing 31,000 neurons, simulations revealed that a targeted attack on hub neurons significantly affected macroscale network properties, including the average firing rate and spike synchronization (Gal et al., 2021). It was suspected that these functional changes arising from hub attacks may be due to significant alterations in the ‘small-world’ topology of the circuit, a structure that has been found to be evolutionarily ubiquitous (Bassett & Bullmore, 2017) and potentially critical for optimal information processing (Watts & Strogatz, 1998).

Computational modelling predicted a major role for hubs in seizures

Computational modelling has also been critical for improving our understanding of how changes to the cellular-scale network topology can give rise to pathological conditions. One such example has been in the study of temporal lobe epilepsy (TLE), the most common epilepsy in the adult population, affecting millions of individuals worldwide (Blair, 2012). Here, Morgan and Soltesz (2008) used a large-scale biophysically realistic computational model of the rodent dentate gyrus (DG) to determine how changes to the underlying structural connectome, often observed in human TLE patients, affect network dynamics (Morgan & Soltesz, 2008). The computational model consisted of over 50,000 detailed single-cell models of various cell types, each constructed based on anatomical, cellular and electrophysiological data (Morgan & Soltesz, 2008). Specific modifications to the structural connectome included loss of cells in the hilar subregion of the DG (Kobayashi & Buckmaster, 2003) and sprouting of excitatory granule cell (GC) mossy fibre axons (Sutula et al., 1989) observed in both experimental animals and resected tissue from epilepsy patients. This sprouting of mossy fibres leads to recurrent connections between the GCs, which are typically very sparsely connected (Rolls & Treves, 1997). Seizure-like dynamics in the computational network model were only observed when GCs were wired in a manner that led to the emergence of highly connected hub neurons, whereas randomly wiring GCs had a lesser effect on network dynamics in the model (Morgan & Soltesz, 2008).

The prediction that hub neurons could foster transition of a circuit from normal to a seizure-like state prompted researchers to experimentally test whether suppressing GCs could reduce seizure severity (Krook-Magnuson et al., 2015). Using a closed-loop optogenetic technique, shutting off GCs during a detected seizure overall reduced seizure duration (Krook-Magnuson et al., 2015). While these experiments indicated that GCs indeed can gate hippocampal seizures, the opsin expression in these experiments broadly targeted the GC population and did not discriminate between hubs and non-hubs. However, the gene expression patterns of potential candidate hub neurons in the DG have been studied (Arnatkevičiūtė et al., 2019) (see also below), setting the stage for future experiments that seek to control epileptic circuits in the hippocampus by targeting hub neurons only.

Higher-order analysis reveals superhubs in the epileptic brain

While hub neurons make up a small fraction of a neural circuit and may be important targets for seizure control (Morgan & Soltesz, 2008), the fundamental identity of a hub relies on lower-order statistics including counting edges (i.e. synapses targeting other neurons) (Fig. 4A). However, recent studies investigating higher-order structure in cortical microcircuits have reported that motifs are overrepresented even in control brains (Shimono & Beggs, 2015; Song et al., 2005), emphasizing their potential importance in driving network dynamics. As algorithms for mining the higher-order properties of large-scale networks have been introduced (Benson et al., 2016; Yin et al., 2017), the role of motifs in driving control and pathological dynamics has begun to be explored (Hadjiabadi et al., 2021).

In recent work, whole-brain single-cell imaging of the larval zebrafish model of acute seizures and single-cell imaging of granule cells in the DG of mice with chronic TLE were performed (Hadjiabadi et al., 2021). Computational modelling using FORCE learning (Sussillo & Abbott, 2009) (see above) was deployed with additional modifications to incorporate the zebrafish neuroanatomical connectome (Kunst et al., 2019) as a biological constraint (Fig. 2C). This ensured that the training captured biologically relevant details of the structural architecture, which was confirmed by community detection (Javed et al., 2018), revealing relevant front and mid/hind brain separation (Betzel, 2019).

After FORCE learning, biologically constrained models of larval zebrafish in control and preseizure states were able to generate synthetic calcium traces (Step 2 in Fig. 1), with each node in the model accurately reproducing the activity of a single neuron imaged experimentally (Hadjiabadi et al., 2021). The models additionally revealed heavy-tailed networks, and nodes with a high outgoing degree (outgoing hubs) and incoming degree (incoming hubs) were identified. Interestingly, in the preseizure state, a subset of outgoing hubs was richly connected with feedforward motifs (Fig. 3A). Specifically, network analysis using the local higher-order clustering MAPPR algorithm (Yin et al., 2017) (see above) revealed that these particular outgoing hubs had significantly elevated motif conductance. As motif conductance is analogous to flow of current in an electrical circuit, higher conductance implies a higher capacity to drive downstream excitation and therefore destabilize networks.
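As a small illustration of the kind of structure being mined, feedforward motifs can be enumerated directly in a toy directed graph; the network below is invented and simply contrasts an outgoing hub embedded in feedforward motifs with one that is not:

```python
from itertools import permutations

def feedforward_motifs(adj):
    """Enumerate feedforward motifs (a -> b, a -> c, b -> c) in a
    directed graph. adj maps every node to the set of its targets
    (all nodes must appear as keys)."""
    return [(a, b, c) for a, b, c in permutations(adj, 3)
            if b in adj[a] and c in adj[a] and c in adj[b]]

# Hypothetical network: node 0 is an outgoing hub embedded in feedforward
# motifs; node 5 has the same out-degree but participates in none.
adj = {0: {1, 2, 3}, 1: {2}, 2: {3}, 3: set(),
       5: {6, 7, 8}, 6: set(), 7: set(), 8: set()}
motifs = feedforward_motifs(adj)
# Motif participation distinguishes the two hubs despite equal edge counts.
participation = {n: sum(1 for m in motifs if n in m) for n in adj}
```

Only the subset of outgoing hubs dense in such motifs would qualify as candidate superhubs under the analysis described above; lower-order degree alone cannot separate the two example hubs.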

These single-cell ‘superhubs’ (Fig. 4B) represent an even smaller fraction of nodes in the network than traditional hub neurons. The ability to model effective connectivity networks using generative techniques such as FORCE enabled interrogation of the functional role of superhubs in driving pathological dynamics. Here, perturbation simulations predicted that disconnecting superhubs from the network more effectively stabilized epileptic circuits (Fig. 4C) compared with disconnecting hub neurons determined only through simple edge count (Hadjiabadi et al., 2021). Therefore, the imaging, modelling and mining pipeline enabled the identification of a maximally selective single-cell target for circuit control in epilepsy, with implications for other disorders with altered neural network ensemble dynamics (see below).

Superhubs are predominantly adult-born granule cells in chronically epileptic mice

It has been suggested that epilepsy-related adult-born granule cells (abGCs) (Danzer, 2018, 2019) play a critical role in driving pathological dynamics (Sparks et al., 2020). Specifically, abGCs were found to form functional clusters during epileptiform discharges in the DG of chronically epileptic mice (Sparks et al., 2020). However, it was unclear whether such functional clusters were actively driving, or were instead being driven by, other clusters. To address this, the functional activity of individual abGCs and non-abGCs in mice with chronic TLE was modelled with FORCE learning and analysed with higher-order MAPPR analysis, as for the zebrafish imaging data (Hadjiabadi et al., 2021). The 1:1 mapping between experimentally recorded neurons and nodes in the RNN optimized through FORCE learning allowed for the major prediction that abGCs were predominantly superhubs (Hadjiabadi et al., 2021), linking the computational pipeline to a concrete biological entity. By predicting that abGCs actively drive pathological events (Hadjiabadi et al., 2021), this work gives researchers a network-level, experimentally testable framework suggesting that targeting abGCs may be optimal for seizure control.

Section 3: The future of superhubs

Validating superhubs as a therapeutic target

As described above, powerful and optimized computational modelling techniques combined with higher-order network analytics enabled the discovery of a new functional cell type – the superhub – that predominantly emerges in the epileptic brain and may be a new target for seizure control. One major experiment required to validate the functional impact of superhubs is to determine whether suppressing superhubs prevents seizures in the epileptic brain. As mentioned above, in the mouse model of chronic TLE, superhubs were predicted to be abGCs (Hadjiabadi et al., 2021). Interestingly, recent research has shown that ablating abGCs reduces seizure frequency (Varma et al., 2019), albeit only transiently. Therefore, it remains an open but experimentally testable question whether briefly suppressing abGCs via controlled optogenetic or chemogenetic intervention may be sufficient to reduce or prevent seizures.

In the experiments involving the larval zebrafish model of acute seizures (Baraban et al., 2005), single-cell calcium data were captured under pan-neuronal expression of a GCaMP calcium indicator (Hadjiabadi et al., 2021). The molecular identity of superhubs in the zebrafish brain therefore remains uncertain; establishing it is nonetheless critical for precisely controlling superhub activity through cell type-specific genetic drivers. Research to uncover the specific cell type of the superhub will likely yield additional clues about the mechanisms driving pathological dynamics. For example, determining whether superhubs are excitatory or inhibitory may point to dysfunction in disinhibitory, feedback or feedforward inhibitory mechanisms. Because of the substantial genetic overlap between zebrafish and humans (Lovett-Barron et al., 2017), future experiments with targeted opsin expression in particular cell types may also be performed in genetic models of epilepsy with generalized seizures (Griffin et al., 2021).

It remains unknown whether superhubs identified at one time point remain superhubs over time, especially during the prolonged evolution of the disease state, as in epileptogenesis after insults. There is growing evidence that consecutive preseizure epochs may represent unique neuronal ensemble states at the microscale, as macroscopically recurrent interictal and ictal discharges are in fact heterogeneous when examined at the single-cell level (Muldoon et al., 2013; Truccolo et al., 2011). Plasticity over the course of these discharges may rewire the overall structure of the network, in turn changing its functional dynamics. Characterizing the temporal dynamics of superhubs over longer time scales will therefore be critical when considering these neurons as an optimal target for seizure control.

Broader insights on the higher-order organization of cell–cell communication networks

The primary motivation for suppressing superhubs during the preseizure state is to minimize side-effects by focusing on as few intervention targets as possible. Highly connected hubs are known to control network synchronicity (Bonifazi et al., 2009), and it remains an open question whether superhubs exert a similar influence. Suppressing superhubs, ideally perhaps even a single superhub exerting a major influence on network dynamics, may help prevent overexcitation of the network, but it is unclear whether ongoing normal neuronal computations such as memory encoding and consolidation (Swanson et al., 2020) would be affected. Dedicated future behavioural–cognitive experiments will be needed to test whether interventions targeting superhubs can prevent the projection of unrestrained excitation downstream in the epileptic brain without significantly affecting ‘normal’ information-processing in the network.

One common feature of the larval zebrafish model of acute seizures and the mouse model of chronic TLE is inhibitory dysfunction. More broadly, inhibitory dysfunction has been observed in many neurological conditions, including autism (Uzunova et al., 2016) and Alzheimer's disease (Xu et al., 2020), and affected individuals are more likely to exhibit unprovoked, ongoing, albeit often subclinical, seizures (Lam et al., 2017; Stafstrom & Benke, 2015). It remains an open question whether superhubs emerge in these conditions, causing unconstrained downstream excitation and disordered neuronal computations. If superhubs are present, it would be interesting to evaluate whether targeting them not only mitigates comorbidities such as seizures, but also rescues healthy information-processing by the neuronal network.

Identifying the specific mechanisms that enable superhubs to emerge in epilepsy, and potentially in other pathological brain states, would benefit greatly from increased use of data-driven computational modelling. Inhibitory dysregulation could affect both feedforward and feedback inhibitory processes, but how each contributes individually to the emergence of feedforward motifs is unknown. Recent work in the DG of mice with chronic TLE reported reduced feedforward inhibition onto GCs resulting from mossy cell loss, which may be connected to seizure generation and propagation (Bui et al., 2018). As abGCs have been reported to be superhubs in the DG of mice with TLE (Hadjiabadi et al., 2021), biophysically realistic computational models of the DG (Raikov et al., 2021) can be leveraged to predict whether disruption of feedforward inhibition is sufficient for superhubs to emerge. As hub neurons exist in other cortical brain regions (Shimono & Beggs, 2015), cortical microcircuit models (Markram et al., 2015) could furthermore be used to determine whether the mechanisms of superhub generation generalize across brain regions.

While superhubs have been discussed here in the context of pathological brain states, mostly epilepsy (because most of our knowledge about them comes from that disorder), it remains unknown whether superhub-like neurons exist in the healthy brain. In vitro studies in mice have found that higher-order motifs are often overrepresented in cortical microcircuits (Shimono & Beggs, 2015; Song et al., 2005), and indeed Hadjiabadi et al. reported frequent occurrences of motifs in the control zebrafish brain (Hadjiabadi et al., 2021). However, previous investigations did not analyse detailed statistics on higher-order motifs centred on hub neurons, a question that has only recently been considered (Hadjiabadi et al., 2021). Rigorous analysis will be needed to definitively establish whether the occurrence of motifs surrounding hubs in control brains exceeds chance levels.
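An enrichment test of this kind is straightforward to prototype: count feedforward motifs in the observed graph and compare the count against a null ensemble of degree-preserving edge swaps. The matrix-product counting trick and the swap-based null model are standard graph-analysis techniques; the random graph below is purely illustrative.

```python
import numpy as np

def count_ffw(A):
    """Feedforward-triangle count: (A @ A)[i, j] is the number of
    2-paths i -> k -> j, and multiplying by A[i, j] keeps only those
    closed by a direct edge i -> j."""
    return int(((A @ A) * A).sum())

def degree_preserving_shuffle(A, n_swaps, rng):
    """Randomize a directed graph while preserving every node's in- and
    out-degree via repeated edge swaps (a->b, c->d) => (a->d, c->b)."""
    A = A.copy()
    edges = list(zip(*np.nonzero(A)))
    for _ in range(n_swaps):
        i, j = rng.choice(len(edges), size=2, replace=False)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4 or A[a, d] or A[c, b]:
            continue                 # would create a self- or multi-edge
        A[a, b] = A[c, d] = 0
        A[a, d] = A[c, b] = 1
        edges[i], edges[j] = (a, d), (c, b)
    return A

rng = np.random.default_rng(1)
A = (rng.random((30, 30)) < 0.15).astype(int)
np.fill_diagonal(A, 0)
observed = count_ffw(A)
null = [count_ffw(degree_preserving_shuffle(A, 500, rng)) for _ in range(50)]
z = (observed - np.mean(null)) / np.std(null)   # enrichment z-score
```

A large positive z-score would indicate that feedforward motifs occur more often than expected given the degree sequence alone, which is the comparison needed to establish above-chance motif occurrence around hubs.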

Functionally, our current understanding is that hub neurons have the capacity to synchronize neuronal networks (Bonifazi et al., 2009). However, the analysis performed in these experiments did not consider higher-order structure. If superhub neurons exist in the healthy brain (e.g. during development), would perturbation simulations and experiments reveal that they affect network function differently from their hub counterparts? During natural behaviour, could superhubs play a critical role in initiating highly synchronous population events that span multiple brain regions? One recent study suggests that hub neurons in the mammalian hippocampus are inhibitory, and that 40% of them may be long-range GABAergic (LRG) projection neurons (Bocchio et al., 2020). LRGs primarily target downstream interneurons (Melzer et al., 2012), including disinhibitory cells (Malik et al., 2022). We therefore speculate that superhubs may be LRG projection neurons with the potential to regulate local circuits across multiple brain regions. Collectively, addressing these fundamental questions would open new doors to understanding how a small fraction of the neural network may control normal brain function.

Conclusions

As highlighted in this review, methodological advances spanning imaging, modelling and network analysis have created new opportunities to build increasingly specialized data-driven models that can be simulated with single-cell precision. While contemporaries of Hodgkin and Huxley probably could not have imagined the breadth of technologies researchers have at their disposal today, the essence of using mathematical modelling to drive biologically meaningful predictions remains at the forefront of computational modelling efforts. Understanding how the brain communicates at the cellular level will be integral not only for scientific advancement but also for potentially providing new avenues to treat individuals across a range of debilitating neurological diseases and disorders (Farrell et al., 2019). The superhub is one example of how computational neuroscience can be leveraged to address this goal while also offering new considerations for how we treat individuals with challenging, complex disorders such as epilepsy. Moreover, the prediction of superhubs is one of the first examples of knowledge gained by shifting our perspective from the lower-order (simple connection-based) to the higher-order (motif-based) structure of single-cell networks. We therefore propose that further research into modelling and mining the higher-order organization of neural circuits will be a valuable step towards the overall goal of understanding brain communication in health and disease.

Biographies

Ivan Soltesz is the James R. Doty Professor of Neurosurgery and Neurosciences at the Stanford University School of Medicine. He established his laboratory at UC Irvine in 1995, where he served as department Chair from 2006 until his return to Stanford in 2015. He is interested in the nature of inhibition in the central nervous system, and his lab employs a combination of closely integrated experimental and theoretical techniques to explore the synaptic and cellular organization of GABAergic microcircuits in the hippocampus under normal and pathological conditions.

Darian Hadjiabadi is a graduate student in Stanford University's Department of Bioengineering and is training under Professor Soltesz. He is interested in applying computational modelling and quantitative network analysis to study neural communication in health and disease. His goal is to leverage these tools to develop new strategies for treating neurological conditions.

Additional information

Competing interests

The authors declare no competing interests.

Author contributions

D.H. and I.S. conceived, wrote and edited the review. Both authors approved the final version of the manuscript, agree that all persons designated as authors qualify for authorship, and confirm that all those who qualify for authorship are listed.

Funding

Darian Hadjiabadi and Ivan Soltesz were funded by National Institutes of Health (NIH) Project #U19NS104590. DH was additionally funded by an American Epilepsy Society Predoctoral Research Fellowship (826671).

Acknowledgements

The authors would like to thank members of the Soltesz Lab for their valuable input.