DGL-KE is a DGL-based package for computing node embeddings and relation embeddings of
knowledge graphs efficiently. DGL-KE is fast and scalable: on a single machine, it takes
only a few minutes to train medium-size knowledge graphs, such as FB15k and wn18, and a
couple of hours to train Freebase, which has hundreds of millions of edges.

DGL-KE includes the following knowledge graph embedding models:
- TransE
- DistMult
- ComplEx
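
For reference, these models score a triple $(h, r, t)$ with the standard formulations from their original papers, where $\mathbf{h}$, $\mathbf{r}$, $\mathbf{t}$ denote the head, relation, and tail embeddings:

```latex
% TransE: translation in embedding space, scored by negative L1/L2 distance
f(h, r, t) = -\lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert

% DistMult: bilinear score with a diagonal relation matrix
f(h, r, t) = \mathbf{h}^{\top} \operatorname{diag}(\mathbf{r}) \, \mathbf{t}

% ComplEx: DistMult over complex-valued embeddings (\bar{\mathbf{t}} is the conjugate of \mathbf{t})
f(h, r, t) = \operatorname{Re}\!\left( \mathbf{h}^{\top} \operatorname{diag}(\mathbf{r}) \, \bar{\mathbf{t}} \right)
```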
More models will be supported in the near future.
DGL-KE supports multiple training modes:
- CPU & GPU training
- CPU training
- Mixed CPU & GPU training: node embeddings are stored on CPU and mini-batches are trained on GPU. This is designed for training KGE models on large knowledge graphs.
- GPU training
- Multiprocessing training on CPUs: this is designed to train KGE models on large knowledge graphs with many CPU cores.
- Joint CPU & GPU training
- Multiprocessing training on CPUs
For joint CPU & GPU training, node embeddings are stored on CPU and mini-batches are trained on GPU. This is designed for training KGE models on large knowledge graphs
For multiprocessing training, each process train mini-batches independently and use shared memory for communication between processes. This is designed to train KGE models on large knowledge graphs with many CPU cores.
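
To make the mixed mode concrete, here is a minimal PyTorch sketch of the pattern (not DGL-KE's actual API; `score_and_loss` is a hypothetical stand-in for a KGE scoring function): the full embedding table lives in CPU memory, and only the rows a mini-batch touches are copied to the GPU.

```python
import torch

NUM_ENTITIES, DIM = 1_000_000, 400
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The full entity embedding table stays in CPU memory (often too big for a GPU).
entity_emb = torch.randn(NUM_ENTITIES, DIM)

def train_step(batch_ids, score_and_loss, lr=0.01):
    # Gather only the rows this mini-batch uses and copy them to the GPU.
    rows = entity_emb[batch_ids].to(device).requires_grad_()
    loss = score_and_loss(rows)   # forward pass runs on the GPU
    loss.backward()               # gradients for just these rows
    # Sparse SGD update applied back to the CPU-resident table
    # (duplicate ids are handled naively here; a real trainer deduplicates).
    entity_emb[batch_ids] -= lr * rows.grad.cpu()
    return loss.item()
```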
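And a sketch of the multiprocessing mode, under the same caveat that this is illustrative rather than DGL-KE's implementation: the table is placed in shared memory with `share_memory_()`, and each worker applies lock-free, Hogwild-style updates. The gradient here is a random placeholder for a real mini-batch gradient.

```python
import torch
import torch.multiprocessing as mp

def worker(emb, rank, steps=100, lr=0.01):
    torch.manual_seed(rank)
    for _ in range(steps):
        ids = torch.randint(0, emb.size(0), (1024,))
        # Placeholder for the mini-batch gradient this process would compute.
        grad = torch.randn(len(ids), emb.size(1))
        emb[ids] -= lr * grad     # lock-free update to the shared table

if __name__ == "__main__":
    emb = torch.randn(1_000_000, 400)
    emb.share_memory_()           # every worker reads and writes the same table
    procs = [mp.Process(target=worker, args=(emb, r)) for r in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```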
We will support multi-GPU training and distributed training in the near future.
## Requirements
The package can run with both PyTorch and MXNet. For PyTorch, it works with PyTorch v1.2 or newer.
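
A quick way to check that your environment meets the PyTorch version requirement (a convenience snippet, not part of DGL-KE):

```python
import torch

# torch.__version__ looks like "1.13.1" or "1.13.1+cu117".
major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (1, 2), "DGL-KE's PyTorch backend requires PyTorch v1.2 or newer"
```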