In this post we'll identify and visualise different clusters of cancer types by analysing the disease ontology as a knowledge graph. Specifically, we'll set up Neo4j in a Docker container, import the ontology, and generate graph clusters and embeddings, before using dimension reduction to plot these clusters and derive some insights. Although we're using `disease_ontology` as an example, the same steps can be used to explore any ontology or graph database.
In a graph database, rather than storing data as rows (like a spreadsheet or relational database), data is stored as nodes and relationships between nodes. For example, in the figure below we see that melanoma and carcinoma are SubCategories Of the cell type cancer tumour (shown by the SCO relationship). With this kind of data we can clearly see that melanoma and carcinoma are related, even though this isn't explicitly stated in the data.
Ontologies are a formalised set of concepts and the relationships between those concepts. They're much easier for computers to parse than free text, and it is therefore easier to extract meaning from them. Ontologies are widely used in the biological sciences, and you may find an ontology you're interested in at https://obofoundry.org/. Here we're focusing on the disease ontology, which shows how different types of disease relate to each other.
Neo4j is a tool for managing, querying and analysing graph databases. To make it easier to set up, we'll use a Docker container.
docker run \
    -it --rm \
    --publish=7474:7474 --publish=7687:7687 \
    --env NEO4J_AUTH=neo4j/123456789 \
    --env NEO4J_PLUGINS='["graph-data-science","apoc","n10s"]' \
    neo4j:5.17.0
In the above command the `--publish` flags set ports to let Python query the database directly and to let us access it through a browser. The `NEO4J_PLUGINS` argument specifies which plugins to install. Unfortunately, the Windows Docker image doesn't seem to be able to handle the installation, so to follow along you'll need to install Neo4j Desktop manually. Don't worry though, the other steps should all still work for you.
While Neo4j is running you can access your database by going to http://localhost:7474/ in your browser, or you can use the Python driver to connect as below. Note that we're using the port we published with our Docker command above, and we're authenticating with the username and password we also defined above.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"
AUTH = ("neo4j", "123456789")

driver = GraphDatabase.driver(URI, auth=AUTH)
driver.verify_connectivity()
Once you have your Neo4j database set up, it's time to get some data. The Neo4j plugin n10s is built to import and handle ontologies; you can use it to embed your data into an existing ontology or to explore the ontology itself. With the Cypher commands below, we first set some configs to make the results cleaner, then we set up a uniqueness constraint, and finally we actually import the disease ontology.
CALL n10s.graphconfig.init({ handleVocabUris: "IGNORE" });
CREATE CONSTRAINT n10s_unique_uri FOR (r:Resource) REQUIRE r.uri IS UNIQUE;
CALL n10s.onto.import.fetch("http://purl.obolibrary.org/obo/doid.owl", "RDF/XML");
To see how this can be done with the Python driver, check out the full code here: https://github.com/DAWells/do_onto/blob/main/import_ontology.py
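As a sketch of what that driver code might look like, the three Cypher statements above can be wrapped in a helper that runs them one at a time (the function name is my own; see the repository for the actual implementation):

```python
def import_disease_ontology(driver):
    """Run the three n10s statements above, one per query
    (the Neo4j driver rejects multi-statement queries)."""
    statements = [
        'CALL n10s.graphconfig.init({handleVocabUris: "IGNORE"})',
        "CREATE CONSTRAINT n10s_unique_uri "
        "FOR (r:Resource) REQUIRE r.uri IS UNIQUE",
        'CALL n10s.onto.import.fetch('
        '"http://purl.obolibrary.org/obo/doid.owl", "RDF/XML")',
    ]
    for statement in statements:
        driver.execute_query(statement)
```

This would be called as `import_disease_ontology(driver)` with the driver we created earlier.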
Now that we've imported the ontology, you can explore it by opening http://localhost:7474/ in your web browser. This lets you explore a section of your ontology manually, but we're interested in the bigger picture, so let's do some analysis. Specifically, we'll do Louvain clustering and generate Fast Random Projection embeddings.
Louvain clustering is a clustering algorithm for networks like this one. In brief, it identifies sets of nodes that are more connected to each other than they are to the wider set of nodes; such a set is then defined as a cluster. When applied to an ontology it's a fast way to identify sets of related concepts. Fast Random Projection, on the other hand, produces an embedding for each node, i.e. a numeric vector where more similar nodes have more similar vectors. With these tools we can identify which diseases are similar and quantify that similarity.
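To make the idea concrete, here's a toy illustration of Louvain clustering using `networkx` rather than Neo4j (the graph is made up): two tightly connected triangles joined by a single bridging edge should come out as two clusters.

```python
import networkx as nx

# Two triangles joined by a single bridge edge (2)-(3).
G = nx.Graph()
G.add_edges_from([
    (0, 1), (1, 2), (0, 2),  # first triangle
    (3, 4), (4, 5), (3, 5),  # second triangle
    (2, 3),                  # bridge
])

communities = nx.community.louvain_communities(G, seed=42)
print(communities)  # two communities, one per triangle
```

Each triangle is more connected internally than it is to the rest of the graph, so Louvain separates them despite the bridging edge.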
To generate embeddings and clusters we have to "project" the parts of our graph that we're interested in. Because ontologies are generally very large, this subsetting is a simple way to speed up computation and avoid memory errors. In this example we're only interested in cancers and not any other type of disease. We do this with the Cypher query below: we match the node with the label "cancer" and any node related to it by one or more SCO or SCO_RESTRICTION relationships. Because we want to include the relationships between cancer types, we have a second MATCH clause that returns the connected cancer nodes and their relationships.
MATCH (cancer:Class {label: "cancer"})<-[:SCO|SCO_RESTRICTION *1..]-(n:Class)
WITH n
MATCH (n)-[:SCO|SCO_RESTRICTION]->(m:Class)
WITH gds.graph.project(
    "proj", n, m, {}, {undirectedRelationshipTypes: ['*']}
) AS g
RETURN g.graphName AS graph, g.nodeCount AS nodes, g.relationshipCount AS rels
Once we have the projection (which we have called "proj") we can calculate the clusters and embeddings and write them back to the original graph. Finally, by querying the graph we can get the new embeddings and clusters for each cancer type, which we can export to a CSV file.
CALL gds.fastRP.write(
    "proj",
    {embeddingDimension: 128, randomSeed: 42, writeProperty: "embedding"}
) YIELD nodePropertiesWritten;

CALL gds.louvain.write(
    "proj",
    {writeProperty: "louvain"}
) YIELD communityCount;
MATCH (cancer:Class {label: "cancer"})<-[:SCO|SCO_RESTRICTION *0..]-(n)
RETURN DISTINCT
    n.label AS label,
    n.embedding AS embedding,
    n.louvain AS louvain
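A sketch of running that export with the Python driver and pandas (the function name is illustrative; `export.csv` matches the filename read back in later):

```python
import pandas as pd

EXPORT_QUERY = """
MATCH (cancer:Class {label: "cancer"})<-[:SCO|SCO_RESTRICTION *0..]-(n)
RETURN DISTINCT
    n.label AS label,
    n.embedding AS embedding,
    n.louvain AS louvain
"""

def export_nodes(driver, path="export.csv"):
    """Run the export query and write label/embedding/louvain to CSV."""
    records, _, _ = driver.execute_query(EXPORT_QUERY)
    df = pd.DataFrame([dict(record) for record in records])
    df.to_csv(path, index=False)
    return df
```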
Let's look at some of these clusters to see which types of cancer are grouped together. Once we've loaded the exported data into a pandas dataframe in Python, we can inspect individual clusters.
Cluster 2168 is a set of pancreatic cancers.
nodes[nodes.louvain == 2168]["label"].values
#array(['"islet cell tumor"',
# '"non-functioning pancreatic endocrine tumor"',
# '"pancreatic ACTH hormone producing tumor"',
# '"pancreatic somatostatinoma"',
# '"pancreatic vasoactive intestinal peptide producing tumor"',
# '"pancreatic gastrinoma"', '"pancreatic delta cell neoplasm"',
# '"pancreatic endocrine carcinoma"',
# '"pancreatic non-functioning delta cell tumor"'], dtype=object)
Cluster 174 is a larger group of cancers, but mostly carcinomas.
nodes[nodes.louvain == 174]["label"].values
#array(['"head and neck cancer"', '"glottis carcinoma"',
# '"head and neck carcinoma"', '"squamous cell carcinoma"',
#...
# '"pancreatic squamous cell carcinoma"',
# '"pancreatic adenosquamous carcinoma"',
#...
# '"mixed epithelial/mesenchymal metaplastic breast carcinoma"',
# '"breast mucoepidermoid carcinoma"'], dtype=object)
These are sensible groupings, based on either organ or cancer type, and will be useful for visualisation. The embeddings, on the other hand, are still too high dimensional to be visualised meaningfully. Fortunately, TSNE is a very useful method for dimension reduction. Here, we use TSNE to reduce the embedding from 128 dimensions down to 2, while still keeping closely related nodes close together. We can verify that this has worked by plotting these two dimensions as a scatter plot and colouring by the Louvain clusters. If these two methods agree we should see nodes clustering by colour.
import ast

import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import Normalize
from sklearn.manifold import TSNE

nodes = pd.read_csv("export.csv")
nodes['louvain'] = pd.Categorical(nodes.louvain)
# Embeddings are stored as strings in the CSV; parse them back into lists.
embedding = nodes.embedding.apply(ast.literal_eval)
embedding = pd.DataFrame(embedding.tolist())

tsne = TSNE()
X = tsne.fit_transform(embedding)

fig, axes = plt.subplots()
axes.scatter(
    X[:, 0],
    X[:, 1],
    c=cm.tab20(Normalize()(nodes['louvain'].cat.codes)),
)
plt.show()
Which is exactly what we see: similar types of cancer are grouped together and visible as clusters of a single colour. Note that some nodes of a single colour are very far apart; this is because we have to reuse some colours, as there are 29 clusters and only 20 colours. This gives us a great overview of the structure of our knowledge graph, but we can also add our own data.
Below we plot the frequency of each cancer type as node size and the mortality rate as the opacity (Bray et al. 2024). I only had access to this data for a few of the cancer types, so I've only plotted those nodes. Below we can see that liver cancer doesn't have an especially high incidence overall. However, incidence rates of liver cancer are much higher than for the other cancers within its cluster (shown in red), like oropharynx, larynx, and nasopharynx cancer.
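A sketch of how that plot can be built on top of the TSNE scatter, assuming hypothetical `incidence` and `mortality` arrays aligned with the nodes (these columns are not in the export; they come from merging in the GLOBOCAN figures separately, with NaN where data is missing):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import Normalize

def plot_with_rates(X, louvain_codes, incidence, mortality):
    """Scatter the 2D TSNE coordinates for nodes that have data,
    with marker size from incidence and opacity from mortality."""
    keep = ~(np.isnan(incidence) | np.isnan(mortality))
    colours = cm.tab20(Normalize()(louvain_codes[keep]))
    # Use the RGBA alpha channel for mortality, kept above 0.2 so
    # low-mortality nodes remain visible.
    colours[:, 3] = 0.2 + 0.8 * Normalize()(mortality[keep])
    fig, ax = plt.subplots()
    ax.scatter(
        X[keep, 0], X[keep, 1],
        s=50 + 250 * Normalize()(incidence[keep]),  # size from incidence
        c=colours,
    )
    return fig, ax
```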
Here we have used the disease ontology to group different cancers into clusters, which gives us the context to compare these diseases. Hopefully this little project has shown you how to visually explore an ontology and add that information to your own data.
You can check out the full code for this project at https://github.com/DAWells/do_onto.
Bray, F., Laversanne, M., Sung, H., Ferlay, J., Siegel, R. L., Soerjomataram, I., & Jemal, A. (2024). Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 74(3), 229–263.