Models
The models module contains model architectures, training code, and data modules. Models are written in PyTorch, with training and data loading/processing typically handled by PyTorch Lightning. The directory structure is shown below.
models
├── embedding_models
├── end_to_end
├── HOList
├── INT
├── premise_selection
└── TacticZero
Embedding Models
Contains the embedding architectures currently used for the TacticZero, HOList and Premise Selection experiments.
GNN
Includes the message passing GNN architecture from FormulaNet and HOList, as well as other architectures, including GCNs.
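To make the message passing idea concrete, here is a minimal sketch of a single message passing layer. Class and parameter names are illustrative only and are not the repository's actual API; aggregation here is a simple mean over neighbours via a dense adjacency matrix.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Illustrative message passing layer (not the repo's implementation).

    Each node transforms its features into a message, receives the mean of
    its neighbours' messages, and updates its state from [own state, message].
    """

    def __init__(self, d: int):
        super().__init__()
        self.msg = nn.Linear(d, d)
        self.update = nn.Linear(2 * d, d)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, d) node features
        # adj: (num_nodes, num_nodes), adj[i, j] = 1 if there is an edge j -> i
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
        m = (adj @ self.msg(x)) / deg                     # mean neighbour message
        return torch.relu(self.update(torch.cat([x, m], dim=-1)))

# toy usage: 5 nodes with 8-dimensional features and a random adjacency
layer = MessagePassingLayer(8)
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
h = layer(x, adj)  # updated node features, shape (5, 8)
```

A full GNN stacks several such layers and then pools node features into a single formula embedding.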
Transformer
Standard implementation of the Transformer Encoder.
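A standard Transformer encoder can be assembled directly from PyTorch's built-in modules; the dimensions below are illustrative, and mean pooling is just one of several ways to reduce token states to a sequence embedding.

```python
import torch
import torch.nn as nn

# a small 2-layer Transformer encoder over 64-dimensional token embeddings
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

tokens = torch.randn(8, 10, 64)   # (batch, sequence length, d_model)
out = encoder(tokens)             # contextualised token states, (8, 10, 64)
emb = out.mean(dim=1)             # pooled sequence embedding, (8, 64)
```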
SAT
Structure Aware Transformer (SAT), including the Directed variant.
Ensemble
Ensemble models, which feed the embeddings from a GNN and Transformer model through a final MLP to obtain an aggregated final embedding.
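The ensemble scheme described above can be sketched as follows; class, argument, and dimension names are hypothetical, and simple linear layers stand in for the actual GNN and Transformer embedding models.

```python
import torch
import torch.nn as nn

class EnsembleEmbedding(nn.Module):
    """Illustrative ensemble: concatenate two embeddings, fuse with an MLP."""

    def __init__(self, gnn: nn.Module, transformer: nn.Module,
                 d: int, d_out: int):
        super().__init__()
        self.gnn = gnn
        self.transformer = transformer
        # final MLP maps the concatenated embeddings to the aggregated one
        self.mlp = nn.Sequential(
            nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d_out)
        )

    def forward(self, graph_batch, token_batch) -> torch.Tensor:
        e_gnn = self.gnn(graph_batch)              # (batch, d)
        e_tf = self.transformer(token_batch)       # (batch, d)
        return self.mlp(torch.cat([e_gnn, e_tf], dim=-1))  # (batch, d_out)

# toy usage with linear stand-ins for the two embedding models
gnn = nn.Linear(16, 32)
transformer = nn.Linear(16, 32)
model = EnsembleEmbedding(gnn, transformer, d=32, d_out=64)
out = model(torch.randn(4, 16), torch.randn(4, 16))  # (4, 64)
```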
End-to-End
Models used for End-to-End experiments are found here.
Currently under development.
HOList
Agent
The code for the live agent used in the HOList Evaluation experiment.
Supervised
This includes the model and training code for the HOList Supervised experiment, with the GNN architecture used in the original paper.
INT
The models used for the INT experiments. Includes GNN and Transformer encoders, as defined in models.INT.
Premise Selection
Contains the Premise Selection model and training code used for Premise Selection experiments.
The forward pass takes a goal and a premise as input, embeds them, and concatenates their embeddings before passing them through an MLP for classification.
The model is initialised with embedding architectures for the goal and premise, as well as an architecture for the classifier.
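The forward pass described above can be sketched as follows. The class name, constructor arguments, and dimensions are illustrative rather than the repository's actual interface; any embedding architecture producing fixed-size vectors could be plugged in.

```python
import torch
import torch.nn as nn

class PremiseSelectionModel(nn.Module):
    """Illustrative premise selection model (not the repo's exact API).

    Embeds the goal and the premise separately, concatenates the two
    embeddings, and classifies the pair with an MLP.
    """

    def __init__(self, goal_encoder: nn.Module,
                 premise_encoder: nn.Module,
                 classifier: nn.Module):
        super().__init__()
        self.goal_encoder = goal_encoder
        self.premise_encoder = premise_encoder
        self.classifier = classifier

    def forward(self, goal, premise) -> torch.Tensor:
        g = self.goal_encoder(goal)        # (batch, d)
        p = self.premise_encoder(premise)  # (batch, d)
        return self.classifier(torch.cat([g, p], dim=-1))  # (batch, 1) logits

# toy usage: linear stand-ins for the goal/premise encoders and classifier
d = 32
model = PremiseSelectionModel(
    goal_encoder=nn.Linear(16, d),
    premise_encoder=nn.Linear(16, d),
    classifier=nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1)),
)
logits = model(torch.randn(4, 16), torch.randn(4, 16))  # (4, 1)
```

The logits would typically be trained with a binary cross-entropy loss against labels indicating whether the premise is useful for the goal.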
TacticZero
Includes the architecture for the policy models used in TacticZero experiments, as well as the original seq2seq-based autoencoder.