Commit daa1408c authored by Jing Li, committed by A. Unique TensorFlower

Update NLP README to introduce major components.

PiperOrigin-RevId: 314526747
parent d90bed2e
# TensorFlow NLP Modelling Toolkit

This codebase provides a Natural Language Processing modeling toolkit written in
[TF2](https://www.tensorflow.org/guide/effective_tf2). It allows researchers and
developers to reproduce state-of-the-art model results and train custom models
to experiment with new research ideas.

## Features

* Reusable and modularized modeling building blocks
* Reproducible state-of-the-art results
* Easy to customize and extend
* End-to-end training for custom models, including both pre-training and fine-tuning
* Distributed training on both GPUs and TPUs (see the sketch after this list)
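As a rough illustration of the last two bullets, the sketch below uses only core
TF2 APIs: a `tf.distribute.MirroredStrategy` is created and the model is built
and compiled inside its scope, so `model.fit` runs data-parallel across the
available GPUs (a `TPUStrategy` would be used for TPUs). The toy model and the
random data are placeholders, not part of this toolkit.

```python
import tensorflow as tf

# Pick a distribution strategy: MirroredStrategy for one or more local GPUs.
# For Cloud TPUs you would use tf.distribute.TPUStrategy instead.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
  # Any Keras model works here; this toy classifier stands in for an NLP model.
  model = tf.keras.Sequential([
      tf.keras.layers.Embedding(input_dim=30522, output_dim=128),
      tf.keras.layers.GlobalAveragePooling1D(),
      tf.keras.layers.Dense(2, activation="softmax"),
  ])
  model.compile(
      optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
      loss="sparse_categorical_crossentropy",
      metrics=["accuracy"])

# Placeholder data: batches of token ids and labels.
token_ids = tf.random.uniform((32, 16), maxval=30522, dtype=tf.int32)
labels = tf.random.uniform((32,), maxval=2, dtype=tf.int32)

# model.fit distributes the training step automatically under the strategy.
model.fit(token_ids, labels, epochs=1, batch_size=8)
```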
## Major components

### Libraries

We provide a modeling library that allows users to train custom models for new
research ideas. Detailed instructions can be found in the READMEs in each folder.

* [modeling/](modeling): modeling library that provides building blocks (e.g., Layers, Networks, and Models) that can be assembled into transformer-based architectures (a short sketch follows this list).
* [data/](data): binaries and utils for input preprocessing, tokenization, etc. (a tokenization sketch also follows).
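To give a feel for how the modeling building blocks compose, here is a minimal
sketch that assembles a small encoder network and wraps it in a classification
model. It assumes the `networks.BertEncoder` and `models.BertClassifier` classes
exported by `official.nlp.modeling`; class names and constructor arguments can
vary between releases, so treat it as illustrative rather than canonical.

```python
import tensorflow as tf
from official.nlp.modeling import models, networks

# A small transformer encoder assembled from the library's building blocks.
# The hyperparameters here are illustrative, not a recommended configuration.
encoder = networks.BertEncoder(
    vocab_size=30522,
    hidden_size=256,
    num_layers=4,
    num_attention_heads=4)

# Wrap the encoder in a classification head to get a trainable model.
classifier = models.BertClassifier(
    network=encoder,
    num_classes=2)

# The result is a regular tf.keras.Model, so the usual compile/fit/predict
# workflow (and tf.distribute strategies) apply.
classifier.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```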
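Similarly, on the input-preprocessing side, the WordPiece tokenizer used by the
BERT-family models can be driven directly from Python. The sketch below assumes
the `FullTokenizer` class under `official.nlp.bert.tokenization` (its location
has moved between releases) and a locally downloaded `vocab.txt`; both the
import path and the file path are assumptions to adapt.

```python
from official.nlp.bert import tokenization

# Path to the WordPiece vocabulary shipped with a pre-trained BERT checkpoint.
# This is a placeholder; point it at your downloaded vocab.txt.
vocab_file = "/path/to/uncased_L-12_H-768_A-12/vocab.txt"

tokenizer = tokenization.FullTokenizer(
    vocab_file=vocab_file, do_lower_case=True)

# Split raw text into WordPiece tokens and map them to vocabulary ids.
tokens = tokenizer.tokenize("TensorFlow NLP makes modeling easier.")
token_ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens, token_ids)
```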
### State-of-the-Art models and examples

We provide SoTA model implementations, pre-trained models, training and
evaluation examples, and command lines. Detailed instructions can be found in
the READMEs for the specific papers.
1. [BERT](bert): [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Devlin et al., 2018
2. [ALBERT](albert): [A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) by Lan et al., 2019
3. [XLNet](xlnet): [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al., 2019
4. [Transformer for translation](transformer): [Attention Is All You Need](https://arxiv.org/abs/1706.03762) by Vaswani et al., 2017
5. [NHNet](nhnet): [Generating Representative Headlines for News Stories](https://arxiv.org/abs/2001.09386) by Gu et al., 2020