Distinguishing Activation from Inhibition with Relation-Aware Graph Neural Networks
In my last post, I discussed self-supervised edge prediction as a way of embedding genes using a gene-regulatory network.
This approach allows genes, metabolites, drugs, and other vertices to be connected based on shared network topology. However, to date I’ve only discussed edge prediction using a dot-product head, where a vertex pair’s edge support is a direct readout of their similarity in embedding space (𝐚 · 𝐛). While surprisingly powerful, this head has limitations when vertices are heterogeneous or interact in qualitatively different ways — particularly when we want to distinguish between activation and inhibition.
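To make the limitation concrete, here is a minimal sketch contrasting a dot-product head with one relation-aware alternative. The `RelationAwareHead` below uses a DistMult-style diagonal weight per relation, which is one common choice but an assumption on my part — the post's actual heads may differ. The dot product is symmetric in its inputs and produces a single score per pair, so it cannot assign different scores to "A activates B" versus "A inhibits B"; a per-relation head can.

```python
import torch
import torch.nn as nn


class DotProductHead(nn.Module):
    """Scores an edge as the similarity of its endpoint embeddings (a · b)."""

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return (a * b).sum(dim=-1)


class RelationAwareHead(nn.Module):
    """DistMult-style head: a learned diagonal weight vector per relation type,
    so activation and inhibition can score the same vertex pair differently.
    (Illustrative choice, not necessarily the head used in the post.)"""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        self.rel = nn.Parameter(torch.randn(num_relations, dim))

    def forward(self, a: torch.Tensor, b: torch.Tensor,
                rel_idx: torch.Tensor) -> torch.Tensor:
        # Each edge is scored under its own relation's weight vector.
        return (a * self.rel[rel_idx] * b).sum(dim=-1)


# A batch of 4 candidate edges with 16-dimensional vertex embeddings.
a = torch.randn(4, 16)
b = torch.randn(4, 16)
rel = torch.tensor([0, 1, 0, 1])  # e.g. 0 = "activates", 1 = "inhibits"

dot_scores = DotProductHead()(a, b)                             # shape (4,)
rel_scores = RelationAwareHead(num_relations=2, dim=16)(a, b, rel)  # shape (4,)
```

Note that the dot-product head gives each vertex pair exactly one score regardless of edge type, while the relation-aware head can rank the same pair high under "activates" and low under "inhibits" (or vice versa).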
Here, I explore more expressive approaches for learning mappings from A to B by evaluating both general edge prediction heads (like MLPs) and “relation-aware” heads that can learn distinct mappings for different edge types. The post will cover:
- Data model and training changes enabling relation-specific predictions
- Geometric analysis revealing how relation-aware heads encode regulatory semantics
- PerturbSeq validation demonstrating successful prediction of signed regulatory interactions
- Pre-trained models available on HuggingFace