Co-created by the Catalyzer Think Tank's divergent thinking process and the Gemini Deep Research tool.
1. Introduction
1.1. Context: The Rise of Graph Representation Learning
Graphs provide a powerful and flexible framework for representing entities and their intricate relationships across diverse domains. From modeling interactions in biological networks 1 and social systems to structuring vast knowledge bases 3, graphs capture complex relational data effectively. The inherent structure of graph data, however, presents challenges for traditional machine learning algorithms designed for grid-like or sequential data. This has spurred the development of Graph Representation Learning (GRL), also known as graph embedding.1
GRL aims to transform high-dimensional, complex graph structures into lower-dimensional, dense vector representations (embeddings) while preserving the intrinsic properties of the graph, such as node proximity, structural roles, and connectivity patterns.1 These learned embeddings serve as feature inputs for various downstream machine learning tasks, including node classification (predicting node labels), link prediction (inferring missing connections), and graph classification (categorizing entire graphs).1 This paradigm shift moves away from manual feature engineering towards automatically learning informative representations directly from the graph structure and associated data.2
1.2. Knowledge Graph Embeddings (KGE)
A prominent application area for GRL is Knowledge Graphs (KGs). KGs are large-scale, structured knowledge bases that represent factual information about entities (e.g., people, places, concepts) and the relationships between them in a graph format.3 Each fact is typically stored as a triplet (h,r,t), where h is the head entity, t is the tail entity, and r is the relation connecting them.9 KGs have become crucial components in various artificial intelligence applications, including question answering, retrieval-augmented generation (RAG), recommendation systems, and data integration.10
Despite their utility, KGs are notoriously incomplete, often missing a vast number of true facts.9 Knowledge Graph Completion (KGC), the task of inferring these missing links, is therefore a critical research problem. Knowledge Graph Embeddings (KGEs) have emerged as a dominant approach for KGC.1 KGE methods learn low-dimensional vector representations for entities and relations within a KG. These embeddings aim to capture the semantic and structural patterns present in the graph. A scoring function is typically defined based on these embeddings to estimate the plausibility of a given triplet (h,r,t).15 Common KGE model families include translational distance models (e.g., TransE, which models relations as translations, aiming for h+r≈t) and semantic matching models using multiplicative interactions (e.g., DistMult, ComplEx, HolE).1
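To make the scoring-function idea concrete, the sketch below implements the TransE and DistMult scoring functions in Python with NumPy. The random vectors are illustrative placeholders rather than a trained model, and the embedding dimension is arbitrary.

```python
# Minimal sketch of two common KGE scoring functions (illustrative only).
import numpy as np

def transe_score(h, r, t, p=1):
    """TransE plausibility: relations act as translations, aiming for
    h + r ≈ t, so a smaller distance means a more plausible triple."""
    return -np.linalg.norm(h + r - t, ord=p)

def distmult_score(h, r, t):
    """DistMult plausibility: a trilinear (multiplicative) interaction."""
    return float(np.sum(h * r * t))

rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=50) for _ in range(3))  # placeholder embeddings
print(transe_score(h, r, t), distmult_score(h, r, t))
```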
1.3. Persistent Homology (PH) in Data Analysis
Parallel to the advancements in GRL, Topological Data Analysis (TDA) has emerged as a powerful framework for analyzing the underlying shape and structure of complex datasets.8 TDA utilizes concepts from algebraic topology to extract qualitative features that are robust to noise and continuous deformations.15
The cornerstone of TDA is Persistent Homology (PH).15 PH provides a multi-scale representation of topological features within data, such as connected components (0-dimensional features, β0), cycles or loops (1-dimensional features, β1), voids (2-dimensional features), and their higher-dimensional analogues. It achieves this by analyzing how these features appear (“birth”) and disappear (“death”) across a sequence of nested topological spaces (a filtration) constructed from the data, often based on proximity or function values.15 The lifespan or “persistence” of these features indicates their significance. The output of PH is typically summarized in a persistence diagram (PD), a multiset of points in a 2D plane where coordinates represent birth and death times (or scales).15 Features corresponding to points far from the diagonal are considered topologically robust or persistent.
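As a small concrete example, the following snippet (using the gudhi library; the filtration values are invented for illustration) builds a filtration on a triangle and prints the birth and death of each topological feature:

```python
# Toy filtration on a triangle, illustrating births and deaths of features.
import gudhi

st = gudhi.SimplexTree()
st.insert([0], filtration=0.0)        # two components born at 0.0...
st.insert([1], filtration=0.0)
st.insert([2], filtration=0.5)        # ...and a third at 0.5
st.insert([0, 1], filtration=1.0)     # one component dies (merge)
st.insert([1, 2], filtration=1.5)     # another merge
st.insert([0, 2], filtration=2.0)     # a 1-dimensional cycle is born
st.insert([0, 1, 2], filtration=3.0)  # filling the triangle kills the cycle

for dim, (birth, death) in st.persistence():
    print(f"H{dim}: born at {birth}, dies at {death}")
```

One H0 bar persists to infinity (the final connected component), while the short-lived features correspond to points near the diagonal of the persistence diagram.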
Recognizing limitations in standard PH (e.g., features with infinite persistence, potential loss of information), Extended Persistent Homology (EPH) has been developed. EPH provides outputs with finite values and can capture strictly more topological information than standard PH.19
1.4. Statement of Purpose
This report investigates the specific intersection of persistent homology (and related TDA concepts rooted in algebraic topology) with knowledge graph embeddings within the domain of machine learning. The primary objective is to identify and document the researchers, methodologies, key publications, and available software or code repositories emerging from this confluence. It is crucial to clarify that the term “homology” used throughout this report refers specifically to algebraic homology as employed in TDA, distinguishing it from the concept of sequence homology used in computational biology.1 This investigation aims to provide a clear and evidence-based answer to whether and how these two advanced techniques are being combined in contemporary ML research.
2. Persistent Homology for Efficient Knowledge Graph Embedding Evaluation: The Knowledge Persistence (KP) Method
A significant challenge in the development and deployment of KGE models lies in their evaluation. Standard evaluation protocols, while informative, face considerable computational hurdles, prompting research into more efficient alternatives, notably those leveraging persistent homology.
2.1. The Challenge: Evaluating KGE Models
The standard procedure for evaluating KGE models, particularly for the task of link prediction or KG completion, relies on ranking-based metrics.10 For a given test triplet (h,r,t), the model is typically asked to rank the true tail entity t against all other possible entities in the KG, given the head h and relation r. Similarly, the true head h is ranked against all alternatives given r and t. Performance is then aggregated over a test set using metrics like Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits@N (the proportion of true entities ranked within the top N candidates).15
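For concreteness, the sketch below outlines this ranking protocol and the aggregate metrics. Here `score` and `entities` are placeholders for a KGE model's scoring function and the entity vocabulary, and the "filtered" variants used in practice (which exclude other known true triples from the candidate set) are omitted.

```python
# Minimal sketch of ranking-based KGE evaluation (raw setting, tails only).
import numpy as np

def tail_rank(score, h, r, t_true, entities):
    """Rank of the true tail among all candidates (1 = best)."""
    true_score = score(h, r, t_true)
    # Count candidates the model scores strictly higher than the true tail.
    better = sum(score(h, r, t) > true_score for t in entities if t != t_true)
    return better + 1

def aggregate(ranks, n=10):
    """MR, MRR, and Hits@N over a collection of test-triple ranks."""
    ranks = np.asarray(ranks, dtype=float)
    return {
        "MR": ranks.mean(),                # Mean Rank
        "MRR": (1.0 / ranks).mean(),       # Mean Reciprocal Rank
        f"Hits@{n}": (ranks <= n).mean(),  # fraction ranked in the top n
    }
```

Each test triple requires scoring against every entity in the vocabulary, which is the source of the quadratic overall cost discussed next.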
While these metrics provide valuable insights into a model’s predictive capabilities, their computation poses a major bottleneck. Calculating the score for all possible corrupted triplets requires a number of operations that scales quadratically with the number of entities |E| in the KG, i.e., O(|E|²).15 For large-scale KGs containing millions of entities, this quadratic complexity translates into prohibitive evaluation times, potentially taking hours or even days for a single evaluation run.15 This high computational cost not only slows down the research and development cycle, hindering rapid prototyping and hyperparameter tuning, but also carries a significant carbon footprint and limits the participation of researchers with constrained computational resources.15 Previous attempts to mitigate this cost, such as using random sampling of negative candidates, can reduce computation but may fail to accurately capture the true distribution of entities and thus provide less reliable estimates of model performance.10
2.2. The Solution: Knowledge Persistence (KP)
To address the critical need for efficient KGE evaluation, Bastos, Singh, Nadgeri, Hoffart, Suzumura, and Singh proposed a novel method called Knowledge Persistence (KP).15 The core innovation of KP lies in shifting the evaluation perspective from exhaustive ranking to analyzing the topological structure induced by the KGE model’s scoring function. Specifically, KP leverages persistent homology to quantify the difference in the topological signatures of how a model scores known true (positive) triples versus known false or assumed false (negative) triples.15 The underlying hypothesis is that better KGE models will exhibit a more distinct topological separation between the scores assigned to positive and negative examples.
2.3. KP Methodology Deep Dive
The KP methodology involves several key steps:
- Sampled Graph Construction: Instead of considering all possible triples, KP begins by sampling a subset of positive triples present in the KG and generating a corresponding set of negative triples (e.g., by corrupting positive triples). Crucially, the number of sampled triples is designed to be linear in the number of entities, O(|E|), drastically reducing the input size compared to the O(|E|²) complexity of full ranking.15 Two weighted directed graphs are constructed from these samples: G+ using the positive triples and G− using the negative triples. In both graphs, nodes represent entities, and edges represent relations. The weight assigned to each edge (h,r,t) is the plausibility score calculated by the KGE model under evaluation.15
- Filtration and Persistence Diagrams: Persistent homology is then applied to these weighted graphs. A filtration process is defined based on the edge weights (scores). Conceptually, this involves building the graph incrementally by adding edges according to their scores (e.g., from lowest score to highest, or vice-versa). During this process, PH tracks the birth and death scales (score thresholds) of topological features. KP focuses on 0-dimensional features (connected components).15 This process yields two persistence diagrams (PDs): D+ summarizing the topological evolution of the positive triple graph G+, and D− for the negative triple graph G−.15 The method utilizes both sub-level set (adding edges with scores ≤ threshold) and super-level set (adding edges with scores ≥ threshold) filtrations to capture different aspects of the score distribution, combining the resulting PDs.15
- Distance Computation: The final step involves quantifying the difference between the topological summaries D+ and D−. KP employs the Sliced Wasserstein (SW) distance, an efficient projection-based approximation of the Wasserstein distance, to measure the dissimilarity between the two persistence diagrams.15 This distance value constitutes the final KP score.
Interpretation: A larger KP score indicates a greater topological separation between the score distributions of positive and negative triples, as captured by their persistence diagrams. This is interpreted as evidence that the KGE model is better at distinguishing true facts from false ones, and thus, a higher KP score is expected to correlate positively with better performance on traditional ranking metrics.15
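To make these steps concrete, the condensed sketch below implements the two topological ingredients under simplifying assumptions; it is not the authors' implementation. It computes the 0-dimensional persistence of a weighted graph with a Kruskal-style union-find over a sub-level edge filtration (essential, infinite-persistence classes are simply dropped), and compares two diagrams with a Monte-Carlo Sliced Wasserstein distance. The edge lists stand in for KGE-scored positive and negative triples.

```python
# Simplified KP-style pipeline: 0-dim PH of weighted graphs + SW distance.
import numpy as np

def zero_dim_diagram(n_nodes, weighted_edges):
    """(birth, death) pairs for connected components: all vertices are
    born at 0; a component dies when an edge merges it into another."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    diagram = []
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                   # merging two components kills one
            parent[ru] = rv
            diagram.append((0.0, w))
    return np.array(diagram)           # essential classes omitted for brevity

def sliced_wasserstein(d1, d2, n_slices=50, seed=0):
    """Monte-Carlo Sliced Wasserstein distance between two diagrams,
    padding each diagram with the other's projections onto the diagonal."""
    diag = lambda d: np.column_stack(((d[:, 0] + d[:, 1]) / 2,) * 2)
    a, b = np.vstack([d1, diag(d2)]), np.vstack([d2, diag(d1)])
    rng = np.random.default_rng(seed)
    total = 0.0
    for theta in rng.uniform(0.0, np.pi, n_slices):
        direction = np.array([np.cos(theta), np.sin(theta)])
        total += np.abs(np.sort(a @ direction) - np.sort(b @ direction)).sum()
    return total / n_slices

# KP-style score: topological separation of positive vs. negative graphs,
# where pos_edges/neg_edges are (head, tail, model_score) triples.
# kp = sliced_wasserstein(zero_dim_diagram(n, pos_edges),
#                         zero_dim_diagram(n, neg_edges))
```

The union-find pass is essentially Kruskal's minimum-spanning-tree algorithm, which is why 0-dimensional persistence of a graph filtration can be computed in near-linear time in the number of sampled edges.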
2.4. Performance and Impact
The reported results for KP demonstrate its effectiveness as an efficient evaluation proxy. Experiments showed that KP scores exhibit high correlation with standard metrics like Hits@N, MR, and MRR across various datasets and KGE models.15 Most strikingly, KP achieves dramatic reductions in evaluation time. In some reported cases, the time dropped from 18 hours (using Hits@10) to just 27 seconds (using KP), with an average reduction across methods and datasets cited as approximately 99.96%.15
The implications of such efficiency gains are substantial. KP enables significantly faster prototyping, hyperparameter optimization, and model development cycles for KGE research.15 Furthermore, by drastically reducing the computational resources required for evaluation, it lowers the associated energy consumption and carbon footprint, while also making state-of-the-art KGE research more accessible to institutions and individuals with limited computational budgets.15
This work exemplifies a growing trend where sophisticated mathematical tools like persistent homology are applied not only to enhance model representations but also to optimize the machine learning workflow itself. By tackling the computational bottleneck of evaluation, KP demonstrates the utility of TDA for improving the efficiency and sustainability of the ML development lifecycle. The success of KP also underscores the power of persistent homology in extracting meaningful structural information from relatively small, linear-sized samples (O(|E|)) of a candidate space that grows quadratically with the number of entities (O(|E|²)), highlighting its potential for efficient data summarization in large-scale scenarios.
2.5. Key Researchers and Resources
The research introducing the Knowledge Persistence method was conducted by a collaborative team:
- Anson Bastos (IIT Hyderabad, India) 15
- Kuldeep Singh (Zerotha Research / Cerence GmbH, Germany) 15
- Abhishek Nadgeri (Zerotha Research / RWTH Aachen, Germany) 15
- Johannes Hoffart (SAP, Germany) 15
- Toyotaro Suzumura (The University of Tokyo, Japan) 15
- Manish Singh (IIT Hyderabad, India) 15
Primary Publication:
- Bastos, A., Singh, K., Nadgeri, A., Hoffart, J., Suzumura, T., & Singh, M. (2023). Can Persistent Homology Provide an Efficient Alternative for Evaluation of Knowledge Graph Completion Methods?. In Proceedings of the ACM Web Conference 2023 (WWW ’23) (pp. 2455–2466). 15 (Preprint: arXiv:2301.12929)
Code Repository:
- The implementation of the KP method is available on GitHub: https://github.com/ansonb/Knowledge_Persistence 36
3. Topological Approaches for Knowledge Graph Representation and Reasoning
Beyond evaluation, topological data analysis and persistent homology are also being explored to directly enhance knowledge graph representation learning and reasoning processes, aiming to capture richer structural information than standard KGE models or graph algorithms alone.
3.1. Persistent Homology for Rule Learning in KGs
Knowledge graph completion can benefit from identifying logical rules, often represented as sequences of relations, that imply new facts (inductive relation prediction).9 For example, a rule might state that if person X was born in city Y, and city Y is located in country Z, then person X has nationality Z. Discovering such rules automatically is a challenging task.
Yan et al. (2021) proposed a novel approach that leverages algebraic topology to facilitate rule learning in KGs.9 Their key insight is to view rules as cycles within the knowledge graph structure.9 They utilize concepts from algebraic topology, specifically focusing on the structure of the cycle space (related to 1-dimensional homology, H1), to improve the efficiency of searching for relevant rules.9
Their methodology involves the following steps (a toy illustration of the first step appears after the list):
- Identifying cycle bases that span the space of cycles within the KG.
- Building a Graph Neural Network (GNN) framework specifically designed to operate on these collected cycles.
- Learning representations of the cycles themselves using this GNN.
- Using the learned cycle representations to predict the existence or non-existence of a direct relation between two entities, effectively performing inductive relation prediction based on learned rule patterns.9
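As a toy illustration of the first step (not the authors' implementation), the snippet below treats a miniature KG as an undirected graph and extracts a cycle basis with networkx; the three invented triples encode the nationality rule from above, which closes a 3-cycle:

```python
# Toy example: a rule in a KG appears as a cycle in the underlying graph.
import networkx as nx

triples = [
    ("x", "born_in", "y"),
    ("y", "located_in", "z"),
    ("x", "nationality", "z"),  # the inferred fact closes the cycle
]

G = nx.Graph()
for h, r, t in triples:
    G.add_edge(h, t, relation=r)

# A cycle basis spans the graph's cycle space (1-dimensional homology, H1).
for cycle in nx.cycle_basis(G):
    print(cycle)  # e.g. ['x', 'y', 'z']
```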
This work connects rule learning to the broader trend of employing more sophisticated graph representations, such as simplicial complexes, which can capture higher-order relationships beyond simple pairwise edges.9 By applying topological concepts to discover inherent structural patterns (rules represented as cycles) within the KG, this approach demonstrates how TDA can contribute directly to the reasoning capabilities applied to knowledge graphs, moving beyond simply analyzing embeddings or evaluating models.
Primary Publication:
- Yan, C., Wang, Y., Li, C., Wang, B., & Wang, W. Y. (2021). A Topological View of Rule Learning in Knowledge Graphs. In Proceedings of the 38th International Conference on Machine Learning (ICML ’21), PMLR 139, pp. 11498-11509. 9 (Preprint: arXiv:2103.03212 28)
- (Note: Code link not found in provided materials.)
3.2. Persistent Homology for Analyzing KGE Model Representations
Understanding the properties of the latent spaces learned by different KGE models is crucial for model selection, interpretation, and improvement. Yavorskyi and Kussul (2023) employed persistent homology as an analytical tool to investigate the topological characteristics of vector representations generated by various KGE models for multipartite graphs.41
Their approach involves the following steps (a minimal library-based sketch follows the list):
- Generating entity and relation embeddings using different KGE models (specifically TuckER, MurE, and PairRE in their study).41
- Treating these sets of embedding vectors as point clouds in a vector space.
- Calculating the persistent homology of these point clouds, focusing on features up to homology dimension 2 (capturing connected components, loops, and voids in the embedding distribution).41
- Representing the results as persistence diagrams and performing statistical analysis (calculating metrics like kurtosis, skewness, mean, deviation) on the distribution of points within these diagrams.41
- Comparing the statistical summaries of the persistence diagrams generated from different KGE models.41
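A hedged sketch of this analysis pipeline, assuming the gudhi and SciPy libraries and substituting a random matrix for real TuckER/MurE/PairRE embeddings, might look as follows:

```python
# Sketch: PH of an embedding point cloud, summarized by simple statistics.
import numpy as np
import gudhi
from scipy.stats import kurtosis, skew

embeddings = np.random.default_rng(0).normal(size=(100, 8))  # placeholder

# Vietoris-Rips filtration on the point cloud; dim-3 simplices allow H2.
rips = gudhi.RipsComplex(points=embeddings, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=3)
diagram = st.persistence()            # list of (dim, (birth, death))

# Statistics of finite feature lifetimes in homology dimensions 0-2.
for dim in range(3):
    life = np.array([d - b for k, (b, d) in diagram
                     if k == dim and np.isfinite(d)])
    if life.size:
        print(dim, life.mean(), life.std(), skew(life), kurtosis(life))
```

Comparing such lifetime statistics across models is one simple way to operationalize the paper's diagram-level comparison.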
Their findings indicate that distinct KGE models produce embedding spaces with significantly different topological characteristics.41 This implies that the choice of KGE architecture fundamentally influences the geometric and topological structure of the learned latent space. Such analysis provides a novel way to compare and understand KGE models beyond standard performance metrics, offering insights into the intrinsic properties of the representations they learn. This use of PH contributes to model interpretability by providing a topological lens through which to examine why different models might exhibit varying behaviors or performance levels.
Primary Publication:
- Yavorskyi, O., & Kussul, N. (2023). Representation of Multipartite Graphs Using Persistent Homologies. Journal of Automation and Information Sciences, 55(11), 57-71. 41
- (Note: Code link not found in provided materials.)
3.3. Topology-Infused Embeddings for Explainability (Potential KG Application)
The increasing complexity of deep learning models, including those used for or with KGs (like large language models – LLMs), raises challenges in explainability and interpretability. Liubov Tupikina (2023) proposed a conceptual framework aimed at constructing more explainable data embeddings by integrating techniques from higher-order geometry, TDA, and natural language processing.21
The proposed “topology-infused embeddings” method involves:
- Preprocessing high-dimensional data (e.g., text corpora).
- Applying embedding models (like BERT or autoencoders) to generate low-dimensional latent representations.21
- Analyzing the geometry and topology of this latent space using TDA tools, such as manifold learning or hypergraph analysis, to identify meaningful patterns and structures (e.g., clusters, holes).21
- Interpreting these topological features by linking them back to the semantics of the original data, aiming to understand how concepts and relationships are encoded in the latent space.21
While presented as a general framework, the work explicitly mentions potential applications in generating and navigating knowledge graph embeddings, particularly in scenarios involving the interaction between KGs and LLMs, such as inducing knowledge from KGs into LLMs or explaining LLM behavior.21 This line of research points towards a future where TDA could serve as a crucial bridge between the explicit, structured knowledge representation of KGs and the high-dimensional, often implicit, representations learned by LLMs. By providing tools to analyze the “shape” of latent knowledge representations, TDA might enhance the transparency, interpretability, and potentially the robustness of systems that combine structured knowledge with deep learning models.
Primary Publication:
- Tupikina, L. (2023). Topology infused embeddings for large language models. ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models. 21
- (Note: Code link not found in provided materials.)
4. Key Researchers and Resources Summary
The research landscape at the specific intersection of persistent homology (TDA) and knowledge graph embeddings (KGE) in machine learning, while nascent, features distinct contributions from several researchers and groups. Table 1 provides a consolidated overview of the key methods, researchers, publications, and available resources identified in this report.
Table 1: Key Researchers and Methods at the Intersection of Persistent Homology and Knowledge Graph Embeddings
| Researcher(s)/Group | Key Method/Concept | Primary Publication(s) | Code Link | Brief Description |
| --- | --- | --- | --- | --- |
| Bastos, K. Singh, Nadgeri, Hoffart, Suzumura, M. Singh | Knowledge Persistence (KP) | WWW ’23 / arXiv:2301.12929 15 | GitHub 36 | Using PH on sampled KGE scores for efficient, scalable evaluation of KG completion models. |
| Yan et al. | Topological Rule Learning | ICML ’21 (PMLR) / arXiv:2103.03212 9 | Not found | Using cycle homology/topology to discover rules (as cycles) in KGs for inductive relation prediction. |
| Yavorskyi & Kussul | Topological Analysis of KGE Models | JAIS ’23 41 | Not found | Applying PH to analyze and compare the topological structure of embedding spaces generated by different KGE models. |
| Tupikina, L. | Topology-Infused Explainable Embeddings | ICLR ’23 Workshop / OpenReview 21 | Not found | Framework using TDA/hypergraphs to create interpretable embeddings, potentially for KGs interacting with LLMs. |
This table serves as a direct reference to the specific individuals and works at the heart of this report's inquiry, providing pointers to the primary research papers and, where available, the associated code implementations.
5. Related Work: Topological Graph Neural Networks (Context)
The specific applications of persistent homology to KGE detailed above exist within a broader and rapidly growing field focused on integrating topological concepts with Graph Neural Networks (GNNs), often referred to as Topological Deep Learning (TDL).22 Understanding this context helps situate the KGE-specific work and highlights potential future avenues.
5.1. Bridging TDA and GNNs
Standard GNNs primarily rely on message-passing mechanisms, where nodes iteratively aggregate information from their immediate neighbors.20 While powerful, this local aggregation process can struggle to capture global graph properties or higher-order structures like cycles and cavities effectively.32 TDA, and particularly PH, offers tools to explicitly compute these global and multi-scale topological features.20
Consequently, a significant research effort aims to combine the strengths of GNNs (learning node representations) with TDA/PH (capturing global topology).8 The goal is to create GNN architectures that are “topology-aware,” enhancing their expressive power and enabling them to distinguish graphs or learn representations based on features that standard message-passing might miss.20 This integration is seen by many as a promising new frontier for relational and graph learning.42 Topological features derived from PH can be used to augment GNN inputs or incorporated directly into the network architecture.5
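Before turning to specific frameworks, a minimal illustration of the input-augmentation pattern may help (this toy sketch is deliberately simpler than TOGL or TREPH, which are described next): compute a graph's 0-dimensional persistence diagram under a node-degree filtration and turn it into a fixed-length vector that could be concatenated with GNN inputs or readouts.

```python
# Toy topological feature augmentation via a node-degree filtration.
import numpy as np
import networkx as nx

def degree_filtration_ph0(G):
    """Finite (birth, death) pairs of connected components: a node enters
    at its degree, an edge at the larger of its endpoints' degrees."""
    f = {v: float(G.degree(v)) for v in G}
    parent, birth, pairs = {v: v for v in G}, dict(f), []
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in sorted(G.edges, key=lambda e: max(f[e[0]], f[e[1]])):
        ru, rv = find(u), find(v)
        if ru != rv:
            # Elder rule: the younger component (later birth) dies here.
            young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
            pairs.append((birth[young], max(f[u], f[v])))
            parent[young] = old
    return np.array(pairs)

G = nx.karate_club_graph()
d = degree_filtration_ph0(G)
life = d[:, 1] - d[:, 0]
topo_feats = np.array([len(d), life.mean(), life.max(), life.sum()])
# topo_feats could now be concatenated with a GNN readout vector.
```

Frameworks like TOGL replace the fixed degree filtration with learnable filtration functions and keep the whole computation differentiable, which is what makes end-to-end training possible.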
5.2. Key Frameworks (Examples)
Several specific frameworks and layers have been proposed to facilitate this integration:
- TOGL (Topological Graph Layer): Developed by Horn, Rieck, Borgwardt, and collaborators, TOGL is designed as a differentiable layer that can be inserted into existing GNN architectures.32 It utilizes learnable filtration functions applied to the graph’s nodes or features. Persistent homology (typically focusing on 0-dimensional components and 1-dimensional cycles) is computed based on these filtrations. The resulting persistence diagrams are then embedded into vector representations associated with nodes (or edges), making the topological information accessible to subsequent GNN layers.32 TOGL has been theoretically shown to be strictly more expressive than standard message-passing GNNs according to the Weisfeiler-Lehman test hierarchy.32
- TREPH (Topological Representation with Extended Persistent Homology): Proposed by Ye, Sun, and Xiang, TREPH is another plug-in topological layer for GNNs.19 Its key distinction is the use of Extended Persistent Homology (EPH) instead of standard PH.19 This allows TREPH to potentially capture more topological detail and handle persistence values more uniformly. The layer includes a novel aggregation mechanism to map EPH features back to graph nodes and is designed to be differentiable, enabling end-to-end training.19 It also claims higher expressivity than PH-based methods and standard GNNs.19
- Other Related Concepts: Research in TDL also explores representations beyond standard graphs, such as simplicial complexes 9 and cell complexes 58, which inherently encode higher-order relationships (e.g., triangles, tetrahedra). Neural networks designed for these structures (simplicial/cellular neural networks) represent another facet of topology-informed graph ML.57 Additionally, due to the computational cost of PH, especially in higher dimensions 28, methods like PDGNN have been developed to use GNNs to approximate persistence diagrams efficiently.50 CliquePH uses low-dimensional PH on clique graphs to capture higher-order information efficiently.59
5.3. Relevance to KGE
While the primary applications demonstrated for frameworks like TOGL and TREPH often involve graph classification or node classification on general graph datasets (e.g., molecules, social networks) 33, the underlying principles are potentially applicable to KGE. Many modern KGE models, such as CompGCN 62 and RAGAT 62, are themselves based on GNN architectures.14 Therefore, it is conceivable that topological layers like TOGL or TREPH could be incorporated into these GNN-based KGE models. Such integration could potentially enhance the ability of KGE models to capture complex relational patterns or long-range dependencies within the knowledge graph structure, although specific published examples of this direct combination were not prominent in the reviewed materials.
The development of dedicated, differentiable topological layers (TOGL, TREPH), efficient approximation techniques (PDGNN), and accompanying theoretical analyses (e.g., expressivity comparisons 33) signifies that the integration of TDA/PH with GNNs is maturing from initial exploratory studies into a more established subfield. This maturation provides standardized tools and a stronger theoretical basis for future work. This progress creates a clear technical pathway for future research: incorporating these established topological layers directly into GNN-based KGE architectures to explore potential benefits for knowledge graph completion and reasoning tasks.
5.4. Table 2: Overview of Related Topological GNN Frameworks
To provide context and differentiate the broader TDA-GNN landscape from the specific KGE applications discussed earlier, Table 2 summarizes some key related frameworks.
Table 2: Overview of Related Topological GNN Frameworks
| Framework | Key Researchers | Primary Publication(s) | Code Link | Core Idea |
| --- | --- | --- | --- | --- |
| TOGL | Horn, Rieck, De Brouwer, Moor, Borgwardt et al. | ICLR ’22 53 | GitHub 53 | Integrable GNN layer using learnable filtrations and PH (0D/1D) features. |
| TREPH | Ye, Sun, Xiang | Entropy ’23 20 / arXiv 19 | Not found (PyTorch mentioned 19) | Integrable GNN layer using Extended Persistent Homology (EPH) for richer topology. |
| PDGNN | Yan et al. | NeurIPS ’22 50 / arXiv 60 | Not found | GNN to approximate Extended Persistence Diagrams (EPDs) efficiently. |
| CliquePH | Buffelli, Soleymani, Rieck | LoG ’24 61 / arXiv 59 | Not found | Uses low-dim PH on higher-order clique graphs to augment GNNs efficiently. |
5.5. Insights and Implications
While the preceding discussion already noted this maturation, its implication for KGE bears emphasis: although direct applications to KGE models are sparse in the reviewed materials, the existence of differentiable GNN layers incorporating topology (TOGL, TREPH) suggests a clear pathway for developing novel KGE models, especially those based on GNN architectures (e.g., CompGCN 62, RAGAT 62), by incorporating these topological layers.
6. Conclusion
6.1. Summary of Findings
This report confirms the existence of active research at the intersection of persistent homology (from Topological Data Analysis) and knowledge graph embeddings in the field of machine learning. While still a relatively niche area compared to the broader fields of KGE or Topological Deep Learning, specific applications have emerged:
- Efficient KGE Model Evaluation: The Knowledge Persistence (KP) method utilizes PH to drastically reduce the computational cost and time required for evaluating KGE models, offering a scalable alternative to traditional ranking metrics.15
- Topological Rule Learning: Algebraic topology, specifically focusing on cycle homology, is being used to identify and learn representations of rule-like structures within KGs for improved inductive reasoning.9
- Analysis of KGE Latent Spaces: PH serves as an analytical tool to probe and compare the intrinsic topological characteristics of the embedding spaces generated by different KGE algorithms, contributing to model understanding.41
- Explainable Embeddings: Conceptual frameworks are being proposed that leverage TDA to create more interpretable embeddings, with potential applications in mediating the interaction between structured KGs and opaque models like LLMs.21
6.2. Key Contributions and Resources
Specific researchers and groups are driving these developments. Notably, the work by Bastos, K. Singh, Nadgeri, Hoffart, Suzumura, M. Singh on Knowledge Persistence provides a concrete method with significant practical benefits and publicly available code.15 Yan et al. have pioneered the topological view of rule learning 9, while Yavorskyi & Kussul applied PH for comparative KGE model analysis.41 Tupikina explores the potential of topology-infused embeddings for explainability.21 Table 1 provides a quick reference to these key contributors and their work.
6.3. Broader Context and Future Directions
These specific applications are part of the larger trend of Topological Deep Learning (TDL), which seeks to integrate topological awareness into deep learning models, particularly GNNs.22 The development of dedicated topological layers like TOGL 33 and TREPH 20 within the broader TDL field suggests promising avenues for future KGE research.
A clear next step would be the direct integration of these topological layers into GNN-based KGE architectures (e.g., CompGCN 62) to investigate whether capturing explicit topological features can improve link prediction performance.20 Further exploration of TDA for enhancing the explainability and robustness of systems combining KGs and LLMs also appears fruitful.21 However, challenges remain, particularly concerning the computational scalability of persistent homology for higher-dimensional topological features, necessitating further research into efficient computation or approximation techniques.28 Overall, the synergy between the rigorous structural analysis offered by TDA/PH and the complex relational modeling challenges posed by knowledge graphs presents a compelling area for continued machine learning research.
7. References
63 ResearchGate. (n.d.). Persistent Homology-induced Graph Ensembles for Time Series Regressions. Retrieved Date N/A, from https://www.researchgate.net/publication/389947716_Persistent_Homology-induced_Graph_Ensembles_for_Time_Series_Regressions
15 Bastos, A., Singh, K., Nadgeri, A., Hoffart, J., Suzumura, T., & Singh, M. (2023). Can Persistent Homology provide an efficient alternative for Evaluation of Knowledge Graph Completion Methods? arXiv:2301.12929 [cs.LG]. https://arxiv.org/pdf/2301.12929
9 Yan, C., Wang, Y., Li, C., Wang, B., & Wang, W. Y. (2021). A Topological View of Rule Learning in Knowledge Graphs. Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 11498-11509. https://proceedings.mlr.press/v162/yan22a/yan22a.pdf
1 Nguyen, H. T., Vu, T. H., Nguyen, T. D., & Jung, S. (2024). Graph embedding for protein function prediction: Concepts, methods, and applications. PeerJ, 12, e18509. https://peerj.com/articles/18509/
10 Bastos, A., Nadgeri, A., Singh, K., & Suzumura, T. (2024). Faster and Accurate KGC Evaluation with Relation Recommenders. arXiv:2402.00053 [cs.LG]. https://arxiv.org/html/2402.00053v1
19 Ye, X., Sun, F., & Xiang, S. (2023). TREPH: A Plug-In Topological Layer for Graph Neural Networks. Entropy, 25(2), 331. https://pmc.ncbi.nlm.nih.gov/articles/PMC9954936/
5 Papers With Code. (n.d.). Graph Representation Learning. Retrieved Date N/A, from https://paperswithcode.com/task/graph-representation-learning?page=6&q=
49 Wasi, A. (n.d.). Awesome-Graph-Research-ICML2024. GitHub. Retrieved Date N/A, from https://github.com/azminewasi/Awesome-Graph-Research-ICML2024
20 Ye, X., Sun, F., & Xiang, S. (2023). TREPH: A Plug-In Topological Layer for Graph Neural Networks. Entropy, 25(2), 331. https://www.mdpi.com/1099-4300/25/2/331
2 Papalia, F., Coronnello, C., & Militello, C. (2023). Graph embedding: a comprehensive survey. Frontiers in Artificial Intelligence, 6, 1256352. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1256352/full
41 Yavorskyi, O., & Kussul, N. (2023). Representation of Multipartite Graphs Using Persistent Homologies. Journal of Automation and Information Sciences, 55(11), 57-71. https://jais.net.ua/index.php/files/article/view/202
4 Gosztolai, A., Arnaudon, A., & Barahona, M. (2023). Unveiling the structure of complex networks through topological embedding. arXiv:2305.03474 [physics.soc-ph]. https://arxiv.org/pdf/2305.03474
21 Tupikina, L. (2023). Topology infused embeddings for large language models. ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models. https://openreview.net/pdf?id=B7U7zaQGNR
27 Muscoloni, A., & Cannistraci, C. V. (2021). Topological Data Analysis Approach for Weighted Networks Embedding. ResearchGate. https://www.researchgate.net/publication/349446460_Topological_Data_Analysis_Approach_for_Weighted_Networks_Embedding
64 Papers With Code. (n.d.). Arxiv HEP-TH (citation graph). Retrieved Date N/A, from https://paperswithcode.com/datasets?task=document-summarization&mod=graphs
28 ResearchGate. (n.d.). A Topological View of Rule Learning in Knowledge Graphs. Retrieved Date N/A, from https://www.researchgate.net/publication/355111678_A_Topological_View_of_Rule_Learning_in_Knowledge_Graphs
65 Zhao, S., Schneider, L., & Liakata, M. (2024). OLLM: Learning to Construct Ontologies from Text. arXiv preprint arXiv:2410.23584. (Note: URL provided was invalid, citation based on content).
66 Papers With Code. (n.d.). Datasets for Protein Language Model. Retrieved Date N/A, from https://paperswithcode.com/datasets?q=PH2&task=protein-language-model&page=1
67 Strodthoff, N., et al. (2024). Transformer models in biomedicine: a systematic survey. Briefings in Bioinformatics, 25(4), bbae303. https://pmc.ncbi.nlm.nih.gov/articles/PMC11287876/
6 Papers With Code. (n.d.). Node Classification. Retrieved Date N/A, from https://paperswithcode.com/task/node-classification/latest?page=6&q=
68 Papers With Code. (n.d.). Toyotaro Suzumura. Retrieved Date N/A, from https://paperswithcode.com/author/toyotaro-suzumura
29 Zheng, Z., Lyu, C., Chen, C., & Wang, L. (2024). Constraining Latent Space Topology for Vision-Language Models. arXiv:2402.16078v1 [cs.CV]. https://openreview.net/pdf/5cab7ed9375bde4b2a85485640533b73a623b658.pdf
69 Papers With Code. (n.d.). Biology Datasets. Retrieved Date N/A, from https://paperswithcode.com/datasets?mod=biology&page=1
70 Papers With Code. (n.d.). Newest Biology Datasets. Retrieved Date N/A, from https://paperswithcode.com/datasets?q=&v=lst&o=newest&mod=biology&page=1
71 Li, M., Zhang, H., Wang, Y., & Zhou, A. (2025). Topology-Driven Attribute Recovery for Attributed Missing Graphs. arXiv:2501.10151 [cs.LG]. https://arxiv.org/html/2501.10151v1
62 Singh, K., Bastos, A., & Nadgeri, A. (2024). Study of Topology Bias in GNN-based Knowledge Graphs Algorithms. ResearchGate. https://www.researchgate.net/publication/377661475_Study_of_Topology_Bias_in_GNN-based_Knowledge_Graphs_Algorithms
14 Nadgeri, A., Singh, K., Shekarpour, S., Mahdisoltani, F., & Vyas, Y. (2022). Scaling Up GNN-based Knowledge Graph Embedding Models. arXiv:2201.02791 [cs.LG]. https://arxiv.org/pdf/2201.02791
7 Özgür, A., & Can, T. (2023). Topological feature generation for link prediction in protein–protein interaction networks. PeerJ Computer Science, 9, e1332. https://pmc.ncbi.nlm.nih.gov/articles/PMC10178302/
72 Pelrine, K., Danilevsky, M., & Yan, C. (2020). Evaluating Link Prediction Methods for Argument Mining. Proceedings of the First Workshop on Insights from Negative Results in NLP, 90-96. https://aclanthology.org/2020.insights-1.11.pdf
73 Albreiki, B., Zaki, N., & Alashwal, H. (2023). Extracting topological features to identify at-risk students using machine learning and graph convolutional network models. International Journal of Educational Technology in Higher Education, 20(1), 23. https://www.researchgate.net/publication/369919374_Extracting_topological_features_to_identify_at-risk_students_using_machine_learning_and_graph_convolutional_network_models
74 Nadgeri, A., Singh, K., Shekarpour, S., Mahdisoltani, F., & Vyas, Y. (2022). AGIL: Learning Relational Semantics in Knowledge Graphs. International Conference on Computational Science (ICCS 2022), LNCS 13352, 705-712. https://www.iccs-meeting.org/archive/iccs2022/papers/133520705.pdf
75 Zhang, R., Zhao, W. X., Chen, L., Wang, S., & Wen, J. R. (n.d.). FTL-LM: A Comprehensive Framework for Fusing Topology and Logical Rules into Language Models for Knowledge Graph Completion. Retrieved Date N/A, from http://ww.sentic.net/knowledge-graph-completion.pdf
22 Nguyen, D. P., et al. (2025). Topological Data Analysis in Graph Neural Networks: Surveys and Perspectives. IEEE Transactions on Neural Networks and Learning Systems. https://www.researchgate.net/publication/387777906_Topological_Data_Analysis_in_Graph_Neural_Networks_Surveys_and_Perspectives
42 Reddit /r/MachineLearning. (2024). [D] Topological Deep Learning: Promising or Hype? Retrieved Date N/A, from https://www.reddit.com/r/MachineLearning/comments/1ji6xlv/d_topological_deep_learning_promising_or_hype/
76 Chen, C. (n.d.). Research. Retrieved Date N/A, from https://chaochen.github.io/research.html
30 YouTube / Karsten Borgwardt. (2022). Topological Graph Neural Networks. https://www.youtube.com/watch?v=L26FVejm_zw
8 Zitnik, M., et al. (2023). Current and future directions in network biology. Nature Communications, 14(1), 7913. https://pmc.ncbi.nlm.nih.gov/articles/PMC10699434/
77 University of Bergen. (n.d.). Knowledge Graphs: Research Directions. Retrieved Date N/A, from https://www.uib.no/en/rg/ml/139211/knowledge-graphs-research-directions
23 Ajayi, A. S., et al. (2021). On the Application of Topological Data Analysis and Machine Learning to Flood Incidents and Decision Making. ResearchGate. https://www.researchgate.net/publication/357021154_On_the_Application_of_Topological_Data_Analysis_and_Machine_Learning_to_Flood_Incidents_and_Decision_Making
24 Reddit /r/MachineLearning. (2019). [Discussion] Merging Machine Learning with Topology. Retrieved Date N/A, from https://www.reddit.com/r/MachineLearning/comments/ea2gca/discussion_merging_machine_learning_with/
36 Bastos, A. (n.d.). Knowledge_Persistence. GitHub. Retrieved Date N/A, from https://github.com/ansonb/Knowledge_Persistence
10 Bastos, A., Nadgeri, A., Singh, K., & Suzumura, T. (2024). Faster and Accurate KGC Evaluation with Relation Recommenders. arXiv:2402.00053 [cs.LG]. https://arxiv.org/html/2402.00053v1
15 Bastos, A., Singh, K., Nadgeri, A., Hoffart, J., Suzumura, T., & Singh, M. (2023). Can Persistent Homology provide an efficient alternative for Evaluation of Knowledge Graph Completion Methods? arXiv:2301.12929 [cs.LG]. https://arxiv.org/pdf/2301.12929
16 Chandrahas, Singh, K., Lukovnikov, D., & Fischer, A. (2019). Towards Understanding the Geometry of Knowledge Graph Embeddings. ResearchGate. https://www.researchgate.net/publication/334117893_Towards_Understanding_the_Geometry_of_Knowledge_Graph_Embeddings
17 Bastos, A., Singh, K., Nadgeri, A., Shekarpour, S., Mulang, I. O., & Hoffart, J. (2021). HopfE: Knowledge Graph Representation Learning using Inverse Hopf Fibrations. ResearchGate. https://www.researchgate.net/publication/355785393_HopfE_Knowledge_Graph_Representation_Learning_using_Inverse_Hopf_Fibrations
78 SIGWEB. (2023). WWW ’23: Proceedings of the ACM Web Conference 2023. Retrieved Date N/A, from https://www.sigweb.org/toc/www23.html
79 Colab.ws. (n.d.). Article referencing Bastos et al. (2023). Retrieved Date N/A, from https://colab.ws/articles/10.18653%2Fv1%2F2021.naacl-main.187 (Note: URL potentially unstable/paywalled, content inferred)
80 CatalyzeX. (n.d.). Manish Singh. Retrieved Date N/A, from https://www.catalyzex.com/author/Manish%20Singh (Note: URL potentially unstable/paywalled, content inferred)
81 Colab.ws. (n.d.). Article referencing Bastos et al. (2023). Retrieved Date N/A, from https://colab.ws/articles/10.18653%2Fv1%2FW17-2609 (Note: URL potentially unstable/paywalled, content inferred)
82 Wang, Y., et al. (2024). GTAT: empowering graph neural networks with cross attention. ResearchGate. https://www.researchgate.net/publication/388823279_GTAT_empowering_graph_neural_networks_with_cross_attention
20 Ye, X., Sun, F., & Xiang, S. (2023). TREPH: A Plug-In Topological Layer for Graph Neural Networks. Entropy, 25(2), 331. https://www.mdpi.com/1099-4300/25/2/331
35 Ye, X., Sun, F., & Xiang, S. (2023). TREPH: A Plug-In Topological Layer for Graph Neural Networks. ResearchGate. https://www.researchgate.net/publication/368490644_TREPH_A_Plug-In_Topological_Layer_for_Graph_Neural_Networks
60 Ye, X., Sun, F., & Xiang, S. (2024). Dynamic Neural Dowker Network: Approximating Persistent Homology in Dynamic Directed Graphs. arXiv:2408.09123 [cs.LG]. https://arxiv.org/html/2408.09123v1
39 DBLP. (n.d.). Toyotaro Suzumura. Retrieved Date N/A, from https://dblp.org/pid/99/844
83 ResearchGate. (n.d.). Toyotaro Suzumura. Retrieved Date N/A, from https://www.researchgate.net/profile/Toyotaro-Suzumura-3
84 RIKEN R-CCS. (2023). R-CCS International Symposium 2023 Speakers. Retrieved Date N/A, from https://www.r-ccs.riken.jp/R-CCS-Symposium/2023/speakers/
40 OpenReview. (n.d.). Toyotaro Suzumura. Retrieved Date N/A, from https://openreview.net/profile?id=~Toyotaro_Suzumura1
85 researchmap. (n.d.). Toyotaro Suzumura. Retrieved Date N/A, from https://researchmap.jp/toyotaro?lang=en
86 ResearchGate. (n.d.). Toyotaro SUZUMURA. Retrieved Date N/A, from https://www.researchgate.net/profile/Toyotaro-Suzumura
87 Google Scholar. (n.d.). Toyo (Toyotaro) Suzumura. Retrieved Date N/A, from https://scholar.google.com/citations?user=tY3BWm0AAAAJ&hl=en
88 Google Scholar. (n.d.). Toyo (Toyotaro) Suzumura (Thai View). Retrieved Date N/A, from https://scholar.google.com.sg/citations?user=tY3BWm0AAAAJ&hl=th
89 Suzumura, T., et al. (2019). Towards Federated Graph Learning for Collaborative Financial Crimes Detection. arXiv:1909.12946 [cs.LG]. https://ar5iv.labs.arxiv.org/html/1909.12946
90 SciSpace. (n.d.). Toyotaro Suzumura. Retrieved Date N/A, from https://scispace.com/authors/toyotaro-suzumura-6vzx8myf6y?papers_page=18
22 Nguyen, D. P., et al. (2025). Topological Data Analysis in Graph Neural Networks: Surveys and Perspectives. IEEE Transactions on Neural Networks and Learning Systems. https://www.researchgate.net/publication/387777906_Topological_Data_Analysis_in_Graph_Neural_Networks_Surveys_and_Perspectives
30 YouTube / Karsten Borgwardt. (2022). Topological Graph Neural Networks. https://www.youtube.com/watch?v=L26FVejm_zw
61 Rieck, B. (n.d.). Research. Retrieved Date N/A, from https://bastian.rieck.me/research/
25 Rieck, B., et al. (2021). Topological Machine Learning. Frontiers in Artificial Intelligence, 4, 681108. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2021.681108/full
91 Pioneer Campus Helmholtz Munich. (n.d.). Bastian Rieck – All Publications. Retrieved Date N/A, from https://www.pioneercampus.org/themenmenue-links/about-us0/principal-investigators/janna-nawroth/bastian-rieck/all-publications/index.html
51 Horn, M., et al. (2022). Topological Graph Neural Networks. arXiv:2102.07835 [cs.LG]. https://arxiv.org/abs/2102.07835
92 Zhao, Y., & Wang, Y. (2022). Topological Relational Inference for Graph Neural Networks. arXiv:2206.08283 [cs.LG]. https://openreview.net/pdf?id=YOc9i6-NrQk
43 Hajij, M., et al. (2024). Topological Deep Learning: Going Beyond Graph Data. arXiv:2304.10031 [cs.LG]. https://arxiv.org/html/2402.08871v3
93 Rieck, B., et al. (2019). Topological Machine Learning with Persistence Indicator Functions. In: Topological Methods in Data Analysis and Visualization V. Springer. https://www.researchgate.net/publication/331311502_Topological_Machine_Learning_with_Persistence_Indicator_Functions
31 Rieck, B. (2019). Perspectives in Persistent Homology. https://bastian.rieck.me/talks/Perspectives_in_Persistent_Homology.pdf
57 Hensel, F., et al. (2024). Sheaves As A Framework For Understanding And Interpreting Deep Learning. arXiv:2502.15476 [cs.LG]. https://arxiv.org/html/2502.15476v1
3 Jia, W., et al. (2022). Networked Knowledge: A Complex Network Perspective. IEEE/CAA Journal of Automatica Sinica, 9(6), 955-972. https://www.ieee-jas.net/article/doi/10.1109/JAS.2022.105737
11 Jia, W., et al. (2022). Networked Knowledge: A Complex Network Perspective. IEEE/CAA Journal of Automatica Sinica, 9(6), 955-972. https://www.sciengine.com/doi/pdfView/5FBDD70639DF4A1A8DBE5D5D772631FC
18 Bastos, A., Singh, K., Nadgeri, A., Hoffart, J., Suzumura, T., & Singh, M. (2023). Can Persistent Homology provide an efficient alternative for Evaluation of Knowledge Graph Completion Methods? ResearchGate. https://www.researchgate.net/publication/367557959_Can_Persistent_Homology_provide_an_efficient_alternative_for_Evaluation_of_Knowledge_Graph_Completion_Methods
12 Coronnello, C., et al. (2023). Knowledge Graphs and Their Embeddings in Biomedicine. Computer Science and Information Systems, 20(1), 1-28. http://www.comsis.org/pdf.php?id=16053
4 Gosztolai, A., Arnaudon, A., & Barahona, M. (2023). Unveiling the structure of complex networks through topological embedding. arXiv:2305.03474 [physics.soc-ph]. https://arxiv.org/pdf/2305.03474
58 Hajij, M., et al. (2023). Cell Complex Neural Networks. Transactions on Machine Learning Research. https://openreview.net/pdf?id=6Tq18ySFpGU
48 Wen, Q., et al. (2024). Tensor-view Topological Graph Neural Network. Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024), PMLR 238. https://arxiv.org/html/2401.12007v3
44 Immonen, E., et al. (2024). RePHINE: Regularized Persistent Homology Informed Graph Neural Network. Advances in Neural Information Processing Systems 37 (NeurIPS 2024). https://proceedings.neurips.cc/paper_files/paper/2024/file/21f76686538a5f06dc431efea5f475f5-Paper-Conference.pdf
32 Horn, M., et al. (2021). Topological Graph Neural Networks. 33rd Annual Fall Workshop on Computational Geometry (FWCG 2021). https://comptag.github.io/fwcg21/assets/papers/FWCG2021_paper_12.pdf
52 Wen, Q., et al. (2024). Tensor-view Topological Graph Neural Network. Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024), PMLR 238. https://proceedings.mlr.press/v238/wen24a/wen24a.pdf
33 Horn, M., et al. (2022). Topological Graph Neural Networks. International Conference on Learning Representations (ICLR 2022). https://openreview.net/pdf?id=oxxUMeFwEHd
94 de Souza, C. H. C., et al. (2024). Topological Positional Encoding For Graph Neural Networks. International Conference on Learning Representations (ICLR 2024). https://openreview.net/forum?id=sKv4bbbqUa
26 Buffelli, D., Soleymani, F., & Rieck, B. (2024). CliquePH: Higher-Order Information for Graph Neural Networks Through Persistent Homology on Clique Graphs. arXiv:2409.08217 [cs.LG]. https://arxiv.org/html/2409.08217v2
59 Buffelli, D., Soleymani, F., & Rieck, B. (2024). CliquePH: Higher-Order Information for Graph Neural Networks Through Persistent Homology on Clique Graphs. arXiv:2409.08217v2 [cs.LG]. https://openreview.net/pdf/6d0f86fe236575bdb16c2c436452182e1d323f5d.pdf
34 Rieck, B. (2023). On the Expressivity of Persistent Homology in Graph Learning. ResearchGate. https://www.researchgate.net/publication/368665551_On_the_Expressivity_of_Persistent_Homology_in_Graph_Learning
60 Ye, X., Sun, F., & Xiang, S. (2024). Dynamic Neural Dowker Network: Approximating Persistent Homology in Dynamic Directed Graphs. arXiv:2408.09123 [cs.LG]. https://arxiv.org/html/2408.09123v1
95 Donut Topology Rocks. (n.d.). Publications. Retrieved Date N/A, from https://donut.topology.rocks/papers
96 Donut Topology Rocks. (n.d.). Tag: persistent homology. Retrieved Date N/A, from https://donut.topology.rocks/?q=tag%3A%22persistent+homology%22
55 MDPI. (2023). Entropy, Volume 25, Issue 2. Retrieved Date N/A, from https://www.mdpi.com/1099-4300/25/2
53 BorgwardtLab. (n.d.). TOGL. GitHub. Retrieved Date N/A, from https://github.com/BorgwardtLab/TOGL
97 Rieck, B. (n.d.). Topological Representation Learning: A Differentiable Perspective. https://bastian.rieck.me/talks/Topological_Representation_Learning_A_Differentiable_Perspective.pdf
54 OpenReview. (2022). Topological Graph Neural Networks (Review). Retrieved Date N/A, from https://openreview.net/forum?id=oxxUMeFwEHd
46 Horn, M., et al. (2022). Topological Graph Neural Networks. arXiv:2102.07835v4 [cs.LG]. https://arxiv.org/pdf/2102.07835
45 Rieck, B. (n.d.). Topology-Based Graph Learning. https://bastian.rieck.me/talks/Topology-Based_Graph_Learning_Dagstuhl.pdf
47 Horn, M., et al. (2021). Topological Graph Neural Networks. ResearchGate. https://www.researchgate.net/publication/349363618_Topological_Graph_Neural_Networks
56 Ye, X., Sun, F., & Xiang, S. (2024). Dynamic Neural Dowker Network: Approximating Persistent Homology in Dynamic Directed Graphs. ResearchGate. https://www.researchgate.net/publication/383236634_Dynamic_Neural_Dowker_Network_Approximating_Persistent_Homology_in_Dynamic_Directed_Graphs
37 Papers With Code. (n.d.). Anson Bastos. Retrieved Date N/A, from https://paperswithcode.com/author/anson-bastos
98 Bastos, A., & Kaul, M. (2022). Pretrained Knowledge Base Embeddings for improved Sentential Relation Extraction. Proceedings of the Widening NLP Workshop (WiNLP) @ EMNLP 2022. https://www.utupub.fi/bitstream/handle/10024/168290/Pretrained%20Knowledge%20Base%20Embeddings%20for%20improved%20Sentential%20Relation%20Extraction.pdf?sequence=1&isAllowed=y
99 Bastos, A., et al. (2021). RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network. Proceedings of the Web Conference 2021 (WWW ’21), 1673-1685. https://www.researchgate.net/publication/344325484_RECON_Relation_Extraction_using_Knowledge_Graph_Context_in_a_Graph_Neural_Network
15 Bastos, A., Singh, K., Nadgeri, A., Hoffart, J., Suzumura, T., & Singh, M. (2023). Can Persistent Homology provide an efficient alternative for Evaluation of Knowledge Graph Completion Methods? arXiv:2301.12929 [cs.LG]. https://arxiv.org/pdf/2301.12929
38 Google Scholar. (n.d.). Anson Bastos. Retrieved Date N/A, from https://scholar.google.com/citations?user=is7rRuAAAAAJ&hl=en
13 Merono Penuela, A. (2023). Towards Explainable Automatic Knowledge Graph Construction with Human in the loop. Proceedings of the Workshop on Human-Centered Explainable AI (HCXAI) @ CHI 2023. https://www.albertmeronyo.org/wp-content/uploads/2023/05/Towards_Explainable_Automatic_Knowledge_Graph_Construction_with_Human_in_the_loop.pdf
100 Ahrabian, K., et al. (2021). Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion. Findings of the Association for Computational Linguistics: EMNLP 2021, 3199-3210. https://aclanthology.org/2021.findings-emnlp.263.pdf
22 Nguyen, D. P., et al. (2025). Topological Data Analysis in Graph Neural Networks: Surveys and Perspectives. IEEE Transactions on Neural Networks and Learning Systems. https://www.researchgate.net/publication/387777906_Topological_Data_Analysis_in_Graph_Neural_Networks_Surveys_and_Perspectives
50 Yan, C., et al. (2022). Neural Approximation of Graph Topological Features. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 21094-21107. https://proceedings.neurips.cc/paper_files/paper/2022/file/d7ce06e9293c3d8e6cb3f80b4157f875-Paper-Conference.pdf
92 Zhao, Y., & Wang, Y. (2022). Topological Relational Inference for Graph Neural Networks. arXiv:2206.08283 [cs.LG]. https://openreview.net/pdf?id=YOc9i6-NrQk
101 Levy, J., et al. (2021). Topological Data Analysis and Graph Neural Networks for Provenance Modeling in Digital Pathology. Pacific Symposium on Biocomputing 26, 241-252. https://psb.stanford.edu/psb-online/proceedings/psb21/levy_j.pdf
102 Hajij, M., et al. (2024). Topological Deep Learning: Going Beyond Graph Data. arXiv:2304.10031v3 [cs.LG]. https://arxiv.org/pdf/2304.10031
103 Levy, J., et al. (2020). Topological Data Analysis and Graph Neural Networks for Provenance Modeling in Digital Pathology. bioRxiv. https://www.biorxiv.org/content/10.1101/2020.08.01.231639v2.full-text
Works cited (inline citation numbers refer to the numbered entries below)
1. An experimental analysis of graph representation learning for Gene Ontology based protein function prediction – PeerJ, accessed May 5, 2025, https://peerj.com/articles/18509/
2. Graph embedding and geometric deep learning relevance to network biology and structural chemistry – Frontiers, accessed May 5, 2025, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1256352/full
3. Networked Knowledge and Complex Networks: An Engineering View, accessed May 5, 2025, https://www.ieee-jas.net/article/doi/10.1109/JAS.2022.105737
4. Zoo Guide to Network Embedding – arXiv, accessed May 5, 2025, https://arxiv.org/pdf/2305.03474
5. Graph Representation Learning | Papers With Code, accessed May 5, 2025, https://paperswithcode.com/task/graph-representation-learning?page=6&q=
6. Node Classification | Papers With Code, accessed May 5, 2025, https://paperswithcode.com/task/node-classification/latest?page=6&q=
7. Topological feature generation for link prediction in biological networks – PMC, accessed May 5, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10178302/
8. Graph Representation Learning in Biomedicine and Healthcare – PMC – PubMed Central, accessed May 5, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10699434/
9. Cycle Representation Learning for Inductive Relation Prediction, accessed May 5, 2025, https://proceedings.mlr.press/v162/yan22a/yan22a.pdf
10. Are We Wasting Time? A Fast, Accurate Performance Evaluation Framework for Knowledge Graph Link Predictors – arXiv, accessed May 5, 2025, https://arxiv.org/html/2402.00053v1
11. Networked Knowledge and Complex Networks: An Engineering View – SciEngine, accessed May 5, 2025, https://www.sciengine.com/doi/pdfView/5FBDD70639DF4A1A8DBE5D5D772631FC
12. A Comprehensive Review of the Data and Knowledge Graphs Approaches in Bioinformatics⋆ – Computer Science and Information Systems, accessed May 5, 2025, http://www.comsis.org/pdf.php?id=16053
13. Towards Explainable Automatic Knowledge Graph Construction with Human-in-the-loop, accessed May 5, 2025, https://www.albertmeronyo.org/wp-content/uploads/2023/05/Towards_Explainable_Automatic_Knowledge_Graph_Construction_with_Human_in_the_loop.pdf
14. Scaling Knowledge Graph Embedding Models – arXiv, accessed May 5, 2025, https://arxiv.org/pdf/2201.02791
15. Can Persistent Homology provide an efficient alternative for Evaluation of Knowledge Graph Completion Methods? – arXiv, accessed May 5, 2025, https://arxiv.org/pdf/2301.12929
16. Towards Understanding the Geometry of Knowledge Graph Embeddings – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/334117893_Towards_Understanding_the_Geometry_of_Knowledge_Graph_Embeddings
17. HopfE: Knowledge Graph Representation Learning using Inverse Hopf Fibrations | Request PDF – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/355785393_HopfE_Knowledge_Graph_Representation_Learning_using_Inverse_Hopf_Fibrations
18. (PDF) Can Persistent Homology provide an efficient alternative for Evaluation of Knowledge Graph Completion Methods? – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/367557959_Can_Persistent_Homology_provide_an_efficient_alternative_for_Evaluation_of_Knowledge_Graph_Completion_Methods
19. TREPH: A Plug-In Topological Layer for Graph Neural Networks – PMC, accessed May 5, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC9954936/
20. TREPH: A Plug-In Topological Layer for Graph Neural Networks – MDPI, accessed May 5, 2025, https://www.mdpi.com/1099-4300/25/2/331
21. openreview.net, accessed May 5, 2025, https://openreview.net/pdf?id=B7U7zaQGNR
22. Topological Data Analysis in Graph Neural Networks: Surveys and Perspectives | Request PDF – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/387777906_Topological_Data_Analysis_in_Graph_Neural_Networks_Surveys_and_Perspectives
23. (PDF) On the Application of Topological Data Analysis and Machine Learning to Flood Incidents, and Decision Making – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/357021154_On_the_Application_of_Topological_Data_Analysis_and_Machine_Learning_to_Flood_Incidents_and_Decision_Making
24. [Discussion] Merging machine learning with topological data analysis. : r/MachineLearning, accessed May 5, 2025, https://www.reddit.com/r/MachineLearning/comments/ea2gca/discussion_merging_machine_learning_with/
25. A Survey of Topological Machine Learning Methods – Frontiers, accessed May 5, 2025, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2021.681108/full
26. CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs – arXiv, accessed May 5, 2025, https://arxiv.org/html/2409.08217v2
27. Topological Data Analysis Approach for Weighted Networks Embedding – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/349446460_Topological_Data_Analysis_Approach_for_Weighted_Networks_Embedding
28. A Topological View of Rule Learning in Knowledge Graphs – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/355111678_A_Topological_View_of_Rule_Learning_in_Knowledge_Graphs
29. Homology Consistency Constrained Efficient Tuning for Vision-Language Models – OpenReview, accessed May 5, 2025, https://openreview.net/pdf/5cab7ed9375bde4b2a85485640533b73a623b658.pdf
30. Bastian Rieck (11/17/2021): Topological Graph Neural Networks – YouTube, accessed May 5, 2025, https://www.youtube.com/watch?v=L26FVejm_zw
31. Perspectives in Persistent Homology – Bastian Rieck, accessed May 5, 2025, https://bastian.rieck.me/talks/Perspectives_in_Persistent_Homology.pdf
32. Topological Graph Neural Networks – GitHub Pages, accessed May 5, 2025, https://comptag.github.io/fwcg21/assets/papers/FWCG2021_paper_12.pdf
33. TOPOLOGICAL GRAPH NEURAL NETWORKS – OpenReview, accessed May 5, 2025, https://openreview.net/pdf?id=oxxUMeFwEHd
34. On the Expressivity of Persistent Homology in Graph Learning – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/368665551_On_the_Expressivity_of_Persistent_Homology_in_Graph_Learning
35. TREPH: A Plug-In Topological Layer for Graph Neural Networks – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/368490644_TREPH_A_Plug-In_Topological_Layer_for_Graph_Neural_Networks
36. ansonb/Knowledge_Persistence: Code of the Knowledge Persistence (KP) method proposed in WWW’23 paper – GitHub, accessed May 5, 2025, https://github.com/ansonb/Knowledge_Persistence
37. Anson Bastos – Papers With Code, accessed May 5, 2025, https://paperswithcode.com/author/anson-bastos
38. Anson Bastos – Google Scholar, accessed May 5, 2025, https://scholar.google.com/citations?user=is7rRuAAAAAJ&hl=en
39. Toyotaro Suzumura – DBLP, accessed May 5, 2025, https://dblp.org/pid/99/844
- Toyotaro Suzumura | OpenReview, accessed May 5, 2025, https://openreview.net/profile?id=~Toyotaro_Suzumura1
- Studying the multipartite graph representations with topological data …, accessed May 5, 2025, https://jais.net.ua/index.php/files/article/view/202
- [D] “Topological” Deep Learning – Promising or Hype? : r/MachineLearning – Reddit, accessed May 5, 2025, https://www.reddit.com/r/MachineLearning/comments/1ji6xlv/d_topological_deep_learning_promising_or_hype/
- Position: Topological Deep Learning is the New Frontier for Relational Learning – arXiv, accessed May 5, 2025, https://arxiv.org/html/2402.08871v3
- Boosting Graph Pooling with Persistent Homology – NIPS papers, accessed May 5, 2025, https://proceedings.neurips.cc/paper_files/paper/2024/file/21f76686538a5f06dc431efea5f475f5-Paper-Conference.pdf
- Topology-Based Graph Learning – Graph Embeddings: Theory meets Practice – Bastian Rieck, accessed May 5, 2025, https://bastian.rieck.me/talks/Topology-Based_Graph_Learning_Dagstuhl.pdf
- topological graph neural networks – arXiv, accessed May 5, 2025, https://arxiv.org/pdf/2102.07835
- Topological Graph Neural Networks | Request PDF – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/349363618_Topological_Graph_Neural_Networks
- 1 Introduction – arXiv, accessed May 5, 2025, https://arxiv.org/html/2401.12007v3
- azminewasi/Awesome-Graph-Research-ICML2024: All graph/GNN papers accepted at the International Conference on Machine Learning (ICML) 2024. – GitHub, accessed May 5, 2025, https://github.com/azminewasi/Awesome-Graph-Research-ICML2024
- Neural Approximation of Graph Topological Features, accessed May 5, 2025, https://proceedings.neurips.cc/paper_files/paper/2022/file/d7ce06e9293c3d8e6cb3f80b4157f875-Paper-Conference.pdf
- [2102.07835] Topological Graph Neural Networks – arXiv, accessed May 5, 2025, https://arxiv.org/abs/2102.07835
- Tensor-view Topological Graph Neural Network – Proceedings of Machine Learning Research, accessed May 5, 2025, https://proceedings.mlr.press/v238/wen24a/wen24a.pdf
- BorgwardtLab/TOGL: Topological Graph Neural Networks (ICLR 2022) – GitHub, accessed May 5, 2025, https://github.com/BorgwardtLab/TOGL
- Topological Graph Neural Networks – OpenReview, accessed May 5, 2025, https://openreview.net/forum?id=oxxUMeFwEHd
- Entropy, Volume 25, Issue 2 (February 2023) – 211 articles – MDPI, accessed May 5, 2025, https://www.mdpi.com/1099-4300/25/2
- (PDF) Dynamic Neural Dowker Network: Approximating Persistent Homology in Dynamic Directed Graphs – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/383236634_Dynamic_Neural_Dowker_Network_Approximating_Persistent_Homology_in_Dynamic_Directed_Graphs
- Sheaf theory: from deep geometry to deep learning – arXiv, accessed May 5, 2025, https://arxiv.org/html/2502.15476v1
- Cell Complex Neural Networks – OpenReview, accessed May 5, 2025, https://openreview.net/pdf?id=6Tq18ySFpGU
- CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs – OpenReview, accessed May 5, 2025, https://openreview.net/pdf/6d0f86fe236575bdb16c2c436452182e1d323f5d.pdf
- Dynamic Neural Dowker Network: Approximating Persistent Homology in Dynamic Directed Graphs – arXiv, accessed May 5, 2025, https://arxiv.org/html/2408.09123v1
- Research – Bastian Rieck, accessed May 5, 2025, https://bastian.rieck.me/research/
- Study of Topology Bias in GNN-based Knowledge Graphs Algorithms – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/377661475_Study_of_Topology_Bias_in_GNN-based_Knowledge_Graphs_Algorithms
- Persistent Homology-induced Graph Ensembles for Time Series Regressions | Request PDF – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/389947716_Persistent_Homology-induced_Graph_Ensembles_for_Time_Series_Regressions
- Machine Learning Datasets – Papers With Code, accessed May 5, 2025, https://paperswithcode.com/datasets?task=document-summarization&mod=graphs
- End-to-End Ontology Learning with Large Language Models – arXiv, accessed May 5, 2025, https://arxiv.org/pdf/2410.23584?
- Machine Learning Datasets – Papers With Code, accessed May 5, 2025, https://paperswithcode.com/datasets?q=PH2&task=protein-language-model&page=1
- Transformer models in biomedicine – PMC – PubMed Central, accessed May 5, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11287876/
- Toyotaro Suzumura | Papers With Code, accessed May 5, 2025, https://paperswithcode.com/author/toyotaro-suzumura
- Machine Learning Datasets – Papers With Code, accessed May 5, 2025, https://paperswithcode.com/datasets?mod=biology&page=1
- Machine Learning Datasets | Papers With Code, accessed May 5, 2025, https://paperswithcode.com/datasets?q=&v=lst&o=newest&mod=biology&page=1
- Topology-Driven Attribute Recovery for Attribute Missing Graph Learning in Social Internet of Things – arXiv, accessed May 5, 2025, https://arxiv.org/html/2501.10151v1
- Can Knowledge Graph Embeddings Tell Us What Fact-checked Claims Are About? – ACL Anthology, accessed May 5, 2025, https://aclanthology.org/2020.insights-1.11.pdf
- Extracting topological features to identify at-risk students using machine learning and graph convolutional network models – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/369919374_Extracting_topological_features_to_identify_at-risk_students_using_machine_learning_and_graph_convolutional_network_models
- Augmenting Graph Inductive Learning Model With Topographical Features – The International Conference on Computational Science, accessed May 5, 2025, https://www.iccs-meeting.org/archive/iccs2022/papers/133520705.pdf
- Fusing topology contexts and logical rules in language models for knowledge graph completion – SenticNet, accessed May 5, 2025, http://ww.sentic.net/knowledge-graph-completion.pdf
- Robust and Trustworthy Machine Learning – Chao Chen, accessed May 5, 2025, https://chaochen.github.io/research.html
- Knowledge Graphs: Research Directions | Machine Learning – Universitetet i Bergen, accessed May 5, 2025, https://www.uib.no/en/rg/ml/139211/knowledge-graphs-research-directions
- WWW ’23: Proceedings of the ACM Web Conference 2023 – SIGWEB, accessed May 5, 2025, https://www.sigweb.org/toc/www23.html
- Highly Efficient Knowledge Graph Embedding Learning with, accessed May 5, 2025, https://colab.ws/articles/10.18653%2Fv1%2F2021.naacl-main.187
- Manish Singh – CatalyzeX, accessed May 5, 2025, https://www.catalyzex.com/author/Manish%20Singh
- Knowledge Base Completion: Baselines Strike Back | CoLab, accessed May 5, 2025, https://colab.ws/articles/10.18653%2Fv1%2FW17-2609
- (PDF) GTAT: empowering graph neural networks with cross attention – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/388823279_GTAT_empowering_graph_neural_networks_with_cross_attention
- Toyotaro SUZUMURA | Research Staff Member and Manager | PhD | IBM, Armonk | Thomas J. Watson Research Center – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/profile/Toyotaro-Suzumura-3
- Invited Speakers | The 5th R-CCS International Symposium, accessed May 5, 2025, https://www.r-ccs.riken.jp/R-CCS-Symposium/2023/speakers/
- Toyotaro Suzumura – My portal – researchmap, accessed May 5, 2025, https://researchmap.jp/toyotaro?lang=en
- Toyotaro SUZUMURA | IBM, Armonk | Centre for Advanced Studies, Japan | Research profile – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/profile/Toyotaro-Suzumura
- Toyo (Toyotaro) Suzumura – Google Scholar, accessed May 5, 2025, https://scholar.google.com/citations?user=tY3BWm0AAAAJ&hl=en
- Toyo (Toyotaro) Suzumura – Google Scholar, accessed May 5, 2025, https://scholar.google.com.sg/citations?user=tY3BWm0AAAAJ&hl=th
- [1909.12946] Towards Federated Graph Learning for Collaborative Financial Crimes Detection – ar5iv, accessed May 5, 2025, https://ar5iv.labs.arxiv.org/html/1909.12946
- Toyotaro Suzumura | IBM | 180 Publications | 1974 Citations, accessed May 5, 2025, https://scispace.com/authors/toyotaro-suzumura-6vzx8myf6y?papers_page=18
- All publications – the Helmholtz Pioneer Campus, accessed May 5, 2025, https://www.pioneercampus.org/themenmenue-links/about-us0/principal-investigators/janna-nawroth/bastian-rieck/all-publications/index.html
- Topological Relational Learning on Graphs – OpenReview, accessed May 5, 2025, https://openreview.net/pdf?id=YOc9i6-NrQk
- Topological Machine Learning with Persistence Indicator Functions – ResearchGate, accessed May 5, 2025, https://www.researchgate.net/publication/331311502_Topological_Machine_Learning_with_Persistence_Indicator_Functions
- Topological Positional Encoding – OpenReview, accessed May 5, 2025, https://openreview.net/forum?id=sKv4bbbqUa
- Database of Original & Non-Theoretical Uses of Topology – DONUT, accessed May 5, 2025, https://donut.topology.rocks/papers
- Database of Original & Non-Theoretical Uses of Topology – DONUT, accessed May 5, 2025, https://donut.topology.rocks/?q=tag%3A%22persistent+homology%22
- Topological Representation Learning – A Differentiable Perspective – Bastian Rieck, accessed May 5, 2025, https://bastian.rieck.me/talks/Topological_Representation_Learning_A_Differentiable_Perspective.pdf
- Pretrained Knowledge Base Embeddings for improved Sentential Relation Extraction.pdf – UTUPub, accessed May 5, 2025, https://www.utupub.fi/bitstream/handle/10024/168290/Pretrained%20Knowledge%20Base%20Embeddings%20for%20improved%20Sentential%20Relation%20Extraction.pdf?sequence=1&isAllowed=y
- RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network, accessed May 5, 2025, https://www.researchgate.net/publication/344325484_RECON_Relation_Extraction_using_Knowledge_Graph_Context_in_a_Graph_Neural_Network
- Knowledge Representation Learning with Contrastive Completion Coding – ACL Anthology, accessed May 5, 2025, https://aclanthology.org/2021.findings-emnlp.263.pdf
- Topological Feature Extraction and Visualization of Whole Slide Images using Graph Neural Networks – Pacific Symposium on Biocomputing, accessed May 5, 2025, https://psb.stanford.edu/psb-online/proceedings/psb21/levy_j.pdf
- Architectures of Topological Deep Learning: A Survey of Message-Passing Topological Neural Networks – arXiv, accessed May 5, 2025, https://arxiv.org/pdf/2304.10031
- Topological Feature Extraction and Visualization of Whole Slide Images using Graph Neural Networks | bioRxiv, accessed May 5, 2025, https://www.biorxiv.org/content/10.1101/2020.08.01.231639v2.full-text