square_clustering

square_clustering(G, nodes=None)

Compute the square clustering coefficient for nodes.

For each node, return the fraction of possible squares that exist at the node [1].

\[C_4(v) = \frac{ \sum_{u=1}^{k_v} \sum_{w=u+1}^{k_v} q_v(u,w) }{ \sum_{u=1}^{k_v} \sum_{w=u+1}^{k_v} [a_v(u,w) + q_v(u,w)]},\]

where \(q_v(u,w)\) is the number of common neighbors of \(u\) and \(w\) other than \(v\) (i.e., squares), and \(a_v(u,w) = (k_u - (1+q_v(u,w)+\theta_{uw})) + (k_w - (1+q_v(u,w)+\theta_{uw}))\), where \(\theta_{uw} = 1\) if \(u\) and \(w\) are connected and 0 otherwise. [2]
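
The ratio can be evaluated directly from these definitions. Below is a minimal sketch for a single node, not the library's implementation, assuming an undirected simple graph; the helper name square_clustering_by_definition is chosen here for illustration only.

from itertools import combinations

import networkx as nx

def square_clustering_by_definition(G, v):
    # C_4(v) computed term by term from the formula above.
    numerator = 0    # running sum of q_v(u, w) over neighbor pairs of v
    denominator = 0  # running sum of a_v(u, w) + q_v(u, w)
    for u, w in combinations(G[v], 2):
        # q_v(u, w): common neighbors of u and w other than v; each closes a square.
        q = len((set(G[u]) & set(G[w])) - {v})
        # theta_uw: 1 if u and w are connected, 0 otherwise.
        theta = 1 if G.has_edge(u, w) else 0
        # a_v(u, w): remaining edges of u and w that could still close squares.
        a = (G.degree(u) - (1 + q + theta)) + (G.degree(w) - (1 + q + theta))
        numerator += q
        denominator += a + q
    return numerator / denominator if denominator else 0.0

G = nx.complete_graph(5)
assert square_clustering_by_definition(G, 0) == nx.square_clustering(G, 0)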

Parameters:
G : graph
nodes : container of nodes, optional (default=all nodes in G)

Compute clustering for nodes in this container.

Returns:
c4 : dictionary

A dictionary keyed by node with the square clustering coefficient value.

Notes

While \(C_3(v)\) (triangle clustering) gives the probability that two neighbors of node v are connected with each other, \(C_4(v)\) is the probability that two neighbors of node v share a common neighbor different from v. This algorithm can be applied to both bipartite and unipartite networks.
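
As a small illustration of the bipartite case, using only the public NetworkX API: the 4-cycle is bipartite and triangle-free, so triangle-based clustering is uninformative, while every node attains the maximum square clustering.

>>> C4 = nx.cycle_graph(4)
>>> nx.transitivity(C4)
0
>>> nx.square_clustering(C4)
{0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}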

References

[1]

Pedro G. Lind, Marta C. González, and Hans J. Herrmann. 2005. Cycles and clustering in bipartite networks. Physical Review E 72, 056127.

[2]

Zhang, Peng et al. Clustering Coefficient and Community Structure of Bipartite Networks. Physica A: Statistical Mechanics and its Applications 387.27 (2008): 6869–6875. https://arxiv.org/abs/0710.0117v1

Examples

>>> G = nx.complete_graph(5)
>>> print(nx.square_clustering(G, 0))
1.0
>>> print(nx.square_clustering(G))
{0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}

Additional backends implement this function

graphblas : OpenMP-enabled sparse linear algebra backend.
Additional parameters:
chunksize : int or str, optional

Split the computation into chunks; the chunk size may be given as a string (e.g. “256 MiB”) or as a number of rows. Default: “256 MiB”.
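
For example, assuming the graphblas-algorithms backend is installed and dispatch via the backend keyword is available, the chunk size could be lowered to reduce peak memory:

>>> c4 = nx.square_clustering(G, backend="graphblas", chunksize="64 MiB")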

parallel : Parallel backend for NetworkX algorithms

The nodes are chunked into node_chunks, and the square clustering coefficients for all node_chunks are computed in parallel over all available CPU cores.

Additional parameters:
get_chunks : str, function (default = “chunks”)

A function that takes the list of all nodes (or an nbunch) as input and returns an iterable of node_chunks. The default chunking slices the nodes into n chunks, where n is the number of CPU cores.
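
As a rough sketch, assuming the nx-parallel backend is installed (the chunking function below is purely illustrative), a custom get_chunks could split the node list into a fixed number of chunks:

>>> def eight_chunks(nodes):
...     nodes = list(nodes)
...     return [nodes[i::8] for i in range(8)]
>>> c4 = nx.square_clustering(G, backend="parallel", get_chunks=eight_chunks)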
