# keras-nlp

## Layers

Layers are the fundamental building blocks for NLP models. They can be used to
assemble new layers, networks, or models.

*   [TransformerEncoderBlock](layers/transformer_encoder_block.py) implements
    an optionally masked transformer as described in
    ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762).

*   [OnDeviceEmbedding](layers/on_device_embedding.py) implements efficient
    embedding lookups designed for TPU-based models.

*   [PositionalEmbedding](layers/position_embedding.py) creates a positional
    embedding as described in ["BERT: Pre-training of Deep Bidirectional
    Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805).

*   [SelfAttentionMask](layers/self_attention_mask.py) creates a 3D attention
    mask from a 2D tensor mask.

*   [MaskedLM](layers/masked_lm.py) implements a masked language model. It
    assumes the embedding table variable is passed to it.
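
The sketch below shows one way the layers above could be composed into a small encoder stack. It is a minimal sketch, not a definitive example: the import path (`keras_nlp`) and the constructor and call arguments shown (`vocab_size`, `embedding_width`, `num_attention_heads`, `inner_dim`, `inner_activation`) are assumptions for illustration; the authoritative signatures live in the linked source files.

```python
import tensorflow as tf

# Assumed import path for this package; adjust to match your checkout.
from keras_nlp import layers

seq_length = 128

# Token ids and a 0/1 padding mask for a batch of sequences.
word_ids = tf.keras.Input(shape=(seq_length,), dtype=tf.int32)
padding_mask = tf.keras.Input(shape=(seq_length,), dtype=tf.int32)

# Efficient embedding lookup (argument names are assumptions).
embeddings = layers.OnDeviceEmbedding(
    vocab_size=30522, embedding_width=256)(word_ids)

# Expand the 2D padding mask into the 3D attention mask the transformer
# expects (the call convention here is an assumption).
attention_mask = layers.SelfAttentionMask()(embeddings, padding_mask)

# One optionally masked transformer block from the list above.
encoded = layers.TransformerEncoderBlock(
    num_attention_heads=4, inner_dim=1024, inner_activation="gelu")(
        [embeddings, attention_mask])

model = tf.keras.Model(inputs=[word_ids, padding_mask], outputs=encoded)
```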


## Encoders

Encoders are combinations of layers (and possibly other encoders). They are
sub-units of models that would not be trained alone. Each encoder encapsulates
common network structures, such as a classification head or a transformer
encoder, into an easily handled object with a standardized configuration.

*   [BertEncoder](encoders/bert_encoder.py) implements a bi-directional
    Transformer-based encoder as described in
    ["BERT: Pre-training of Deep Bidirectional Transformers for Language
    Understanding"](https://arxiv.org/abs/1810.04805). It includes the embedding
    lookups, transformer layers and pooling layer.
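
As an illustration, an encoder such as `BertEncoder` can be instantiated directly and used as the backbone of a task model. This is a minimal sketch under assumptions: the import path (`keras_nlp`), the constructor argument names (`vocab_size`, `hidden_size`, `num_layers`, `num_attention_heads`, `max_sequence_length`), and the output structure are illustrative; the standardized configuration is defined in the linked source.

```python
import tensorflow as tf

# Assumed import path for this package; adjust to match your checkout.
from keras_nlp import encoders

# Constructor argument names are assumptions; see encoders/bert_encoder.py
# for the standardized configuration this README describes.
encoder = encoders.BertEncoder(
    vocab_size=30522,
    hidden_size=256,
    num_layers=4,
    num_attention_heads=4,
    max_sequence_length=128,
)

# BERT-style inputs: token ids, padding mask, and segment (type) ids.
word_ids = tf.keras.Input(shape=(128,), dtype=tf.int32)
padding_mask = tf.keras.Input(shape=(128,), dtype=tf.int32)
type_ids = tf.keras.Input(shape=(128,), dtype=tf.int32)

# The encoder produces per-token sequence outputs and a pooled output
# (the exact output structure is an assumption; check the source).
outputs = encoder([word_ids, padding_mask, type_ids])
```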