The TensorFlow official models are a collection of models
that use TensorFlow’s high-level APIs.
They are intended to be well-maintained, tested, and kept up to date
with the latest TensorFlow API.
They should also be reasonably optimized for fast performance while still
being easy to read.
These models are used as end-to-end tests, ensuring that the models run
with the same or improved speed and performance with each new TensorFlow build.

## More models to come!

The team is actively developing new models.
In the near future, we will add:

* State-of-the-art language understanding models:
  more members of the Transformer family.
* State-of-the-art image classification models:
  EfficientNet, MnasNet, and variants.
* A set of excellent object detection models.

## Table of Contents

- [Models and Implementations](#models-and-implementations)
  * [Natural Language Processing](#natural-language-processing)
  * [Computer Vision](#computer-vision)
    + [Image Classification](#image-classification)
    + [Object Detection and Segmentation](#object-detection-and-segmentation)
  * [Recommendation](#recommendation)
- [How to get started with the official models](#how-to-get-started-with-the-official-models)

## Models and Implementations

### Natural Language Processing

| Model | Description | Reference |
| ----- | ----------- | --------- |
| [ALBERT](nlp/albert) | A Lite BERT for Self-supervised Learning of Language Representations | [arXiv:1909.11942](https://arxiv.org/abs/1909.11942) |
| [BERT](nlp/bert) | A powerful pre-trained language representation model: BERT (Bidirectional Encoder Representations from Transformers) | [arXiv:1810.04805](https://arxiv.org/abs/1810.04805) |
| [NHNet](nlp/nhnet) | A transformer-based multi-sequence-to-sequence model: Generating Representative Headlines for News Stories | [arXiv:2001.09386](https://arxiv.org/abs/2001.09386) |
| [Transformer](nlp/transformer) | A Transformer model for the WMT English-to-German translation task | [arXiv:1706.03762](https://arxiv.org/abs/1706.03762) |
| [XLNet](nlp/xlnet) | XLNet: Generalized Autoregressive Pretraining for Language Understanding | [arXiv:1906.08237](https://arxiv.org/abs/1906.08237) |
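
Every model in this family builds on the attention mechanism introduced in the
Transformer paper cited above. As a rough, self-contained sketch (this is not
the implementation under `nlp/`, and the function name is ours), scaled
dot-product attention can be written in a few lines of TensorFlow:

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """Computes softmax(QK^T / sqrt(d_k))V as in arXiv:1706.03762, Eq. 1."""
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    # Score every query against every key, scaled so softmax gradients
    # stay stable as the key dimension d_k grows.
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(d_k)
    if mask is not None:
        # Push masked positions toward -inf so softmax gives them ~0 weight.
        scores += (1.0 - mask) * -1e9
    weights = tf.nn.softmax(scores, axis=-1)  # attention distribution over keys
    return tf.matmul(weights, v)              # weighted sum of the values

# Toy self-attention: batch=2, sequence length=5, depth=8.
q = tf.random.normal([2, 5, 8])
out = scaled_dot_product_attention(q, q, q)
print(out.shape)  # (2, 5, 8)
```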

### Computer Vision

#### Image Classification

| Model | Description | Reference |
| ----- | ----------- | --------- |
| [MNIST](vision/image_classification) | A basic model to classify digits from the MNIST dataset | [Link](http://yann.lecun.com/exdb/mnist/) |
| [ResNet](vision/image_classification) | A deep residual network for image recognition | [arXiv:1512.03385](https://arxiv.org/abs/1512.03385) |
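
To give a sense of the scale of the MNIST entry, here is a minimal Keras
classifier in the same spirit; the architecture and hyperparameters are
illustrative, not the ones used in `vision/image_classification`:

```python
import tensorflow as tf

# Load MNIST via the Keras datasets API and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier over the flattened 28x28 images.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),  # logits for the ten digit classes
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```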
#### Object Detection and Segmentation

| Model | Description | Reference |
| ----- | ----------- | --------- |
| [RetinaNet](vision/detection) | A fast and powerful object detector | [arXiv:1708.02002](https://arxiv.org/abs/1708.02002) |
| [Mask R-CNN](vision/detection) | An object detection and instance segmentation model | [arXiv:1703.06870](https://arxiv.org/abs/1703.06870) |
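
RetinaNet's central idea is the focal loss, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t),
which down-weights easy background examples so dense detector training is not
swamped by them. A minimal sketch of the binary form (alpha = 0.25 and
gamma = 2 follow the paper's defaults; this is not the `vision/detection`
implementation):

```python
import tensorflow as tf

def focal_loss(labels, logits, alpha=0.25, gamma=2.0):
    """Binary focal loss from arXiv:1708.02002."""
    probs = tf.sigmoid(logits)
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
    # p_t is the probability the model assigns to the true class.
    p_t = labels * probs + (1.0 - labels) * (1.0 - probs)
    alpha_t = labels * alpha + (1.0 - labels) * (1.0 - alpha)
    # (1 - p_t)^gamma shrinks the loss of easy, well-classified examples,
    # focusing training on the hard ones.
    return tf.reduce_sum(alpha_t * tf.pow(1.0 - p_t, gamma) * ce)

labels = tf.constant([[1.0, 0.0, 0.0]])
logits = tf.constant([[2.0, -1.0, -3.0]])
print(focal_loss(labels, logits).numpy())
```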

### Recommendation

| Model | Description | Reference |
| ----- | ----------- | --------- |
| [NCF](recommendation) | Neural Collaborative Filtering model for recommendation tasks | [arXiv:1708.05031](https://arxiv.org/abs/1708.05031) |
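
The NCF paper's NeuMF variant fuses a generalized matrix factorization (GMF)
branch with an MLP over user and item embeddings. A minimal Keras sketch of
that structure (table sizes and tower widths are illustrative, not the
configuration used in `recommendation`):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sizes are illustrative; real values depend on the dataset (e.g. MovieLens).
NUM_USERS, NUM_ITEMS, DIM = 1000, 1700, 16

user_in = layers.Input(shape=(), dtype=tf.int32, name="user_id")
item_in = layers.Input(shape=(), dtype=tf.int32, name="item_id")

# Separate embedding tables for the GMF and MLP branches, as in the paper.
u_gmf = layers.Embedding(NUM_USERS, DIM)(user_in)
i_gmf = layers.Embedding(NUM_ITEMS, DIM)(item_in)
u_mlp = layers.Embedding(NUM_USERS, DIM)(user_in)
i_mlp = layers.Embedding(NUM_ITEMS, DIM)(item_in)

gmf = u_gmf * i_gmf                    # element-wise product: the GMF branch
mlp = layers.Concatenate()([u_mlp, i_mlp])
for units in (32, 16):                 # MLP tower; widths are illustrative
    mlp = layers.Dense(units, activation="relu")(mlp)

# Fuse both branches and predict the interaction probability.
score = layers.Dense(1, activation="sigmoid")(layers.Concatenate()([gmf, mlp]))
model = tf.keras.Model([user_in, item_in], score)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```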