".github/vscode:/vscode.git/clone" did not exist on "65e8328573f8aade265a3f2458c64fc698adde55"
Commit 91717b1d authored by A. Unique TensorFlower's avatar A. Unique TensorFlower
Browse files

Internal change

PiperOrigin-RevId: 303854407
parent 1aaab0b7
# TensorFlow Official Models
![Logo](https://storage.googleapis.com/model_garden_artifacts/TF_Model_Garden.png)
The TensorFlow official models are a collection of models
that use TensorFlow’s high-level APIs.
They are intended to be well-maintained, tested, and kept up to date
with the latest TensorFlow API.
They should also be reasonably optimized for fast performance while still
being easy to read.
These models are used as end-to-end tests, ensuring that the models run
with the same or improved speed and performance with each new TensorFlow build.
**NOTE: For officially supported TPU models, please check [README-TPU](README-TPU.md).**

## Model Implementations
### Natural Language Processing
| Model | Description | Reference |
| ----- | ----------- | --------- |
| [ALBERT](nlp/albert) | A Lite BERT for Self-supervised Learning of Language Representations | [arXiv:1909.11942](https://arxiv.org/abs/1909.11942) |
| [BERT](nlp/bert) | A powerful pre-trained language representation model: BERT (Bidirectional Encoder Representations from Transformers) | [arXiv:1810.04805](https://arxiv.org/abs/1810.04805) |
| [Transformer](nlp/transformer) | A transformer model to translate the WMT English to German dataset | [arXiv:1706.03762](https://arxiv.org/abs/1706.03762) |
| [XLNet](nlp/xlnet) | XLNet: Generalized Autoregressive Pretraining for Language Understanding | [arXiv:1906.08237](https://arxiv.org/abs/1906.08237) |
### Computer Vision
| Model | Description | Reference |
| ----- | ----------- | --------- |
| [MNIST](vision/image_classification) | A basic model to classify digits from the MNIST dataset | [Link](http://yann.lecun.com/exdb/mnist/) |
| [ResNet](vision/image_classification) | A deep residual network for image recognition | [arXiv:1512.03385](https://arxiv.org/abs/1512.03385) |
| [RetinaNet](vision/detection) | A fast and powerful object detector | [arXiv:1708.02002](https://arxiv.org/abs/1708.02002) |
### Other models
| Model | Description | Reference |
| ----- | ----------- | --------- |
| [NCF](recommendation) | Neural Collaborative Filtering model for recommendation tasks | [arXiv:1708.05031](https://arxiv.org/abs/1708.05031) |
Models that will not be updated to TensorFlow 2.x stay in the r1 directory:

* [boosted_trees](r1/boosted_trees): A Gradient Boosted Trees model to
classify the Higgs boson process from the HIGGS Data Set.
* [wide_deep](r1/wide_deep): A model that combines a wide model and deep
network to classify census income data.
---
## How to get started with the Model Garden official models

* The models in the master branch are developed using TensorFlow 2,
and they target the TensorFlow [nightly binaries](https://github.com/tensorflow/tensorflow#installation)
built from the
[master branch of TensorFlow](https://github.com/tensorflow/tensorflow/tree/master).
* The stable versions targeting releases of TensorFlow are available
as tagged branches or [downloadable releases](https://github.com/tensorflow/models/releases).
* Model repository version numbers match the target TensorFlow release,
such that
[release v2.1.0](https://github.com/tensorflow/models/releases/tag/v2.1.0)
is compatible with
[TensorFlow v2.1.0](https://github.com/tensorflow/tensorflow/releases/tag/v2.1.0).

If you would like to clone this repository but do not need the change history,
consider a shallow clone:

```shell
export repo_version="master"
git clone -b ${repo_version} https://github.com/tensorflow/models.git --depth=1
```
Please follow the steps below before running models in this repository.
### Requirements
* The latest TensorFlow Model Garden release and TensorFlow 2
* If you are on a version of TensorFlow earlier than 2.1, please
upgrade your TensorFlow to [the latest TensorFlow 2](https://www.tensorflow.org/install/).
```shell
pip3 install tf-nightly
```
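Once the install finishes, you can confirm which TensorFlow build is active. A
minimal check, assuming `python3` points at the environment you just installed
into:

```shell
python3 -c "import tensorflow as tf; print(tf.__version__)"
```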
### Installation
#### Method 1: Install the TensorFlow Model Garden pip package
**tf-models-nightly** is the nightly Model Garden package, built automatically
every day. pip will install all models and dependencies automatically.
```shell
pip install tf-models-nightly
```
Please check out our [example](colab/bert.ipynb)
to learn how to use the pip package.
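To confirm the package is installed and importable, a quick check such as the
following should work (assuming `pip3` and `python3` refer to the same
environment, and that the package exposes the top-level `official` namespace):

```shell
pip3 show tf-models-nightly
python3 -c "import official"
```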
#### Method 2: Clone the source
**NOTE:** Please make sure to follow the steps in the
[Requirements](#requirements) section.
1. Clone the GitHub repository:
```shell
git clone https://github.com/tensorflow/models.git
```
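If you would rather work against a TensorFlow release than the nightly build,
you can clone one of the tagged release branches mentioned above instead; for
example, a sketch for the v2.1.0 release:

```shell
git clone -b v2.1.0 --depth=1 https://github.com/tensorflow/models.git
```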
2. Add the top-level ***/models*** folder to the Python path.
```shell
export PYTHONPATH=$PYTHONPATH:/path/to/models
```
If you are using a Colab notebook, please set the Python path with os.environ.
```python
import os
os.environ['PYTHONPATH'] += ":/path/to/models"
```
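Either way, you can verify that the repository is visible on the Python path.
A minimal check, assuming the top-level `official` package resolves to your
clone:

```shell
python3 -c "import official; print(official.__file__)"
```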
3. Install other dependencies:
```shell
pip3 install --user -r official/requirements.txt
```
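With the dependencies installed, a quick smoke test is to print the flags of
one of the model entry points from the root of the clone. The script path
below is an assumption based on the table above and may differ between
versions:

```shell
python3 official/vision/image_classification/mnist_main.py --help
```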
---
## More models to come!
The team is actively developing new models.
In the near future, we will add:
- State-of-the-art language understanding models:
more members of the Transformer family.
- State-of-the-art image classification models:
EfficientNet, MnasNet, and variants.
- A set of excellent object detection models.
If you would like to make any fixes or improvements to the models, please
[submit a pull request](https://github.com/tensorflow/models/compare).
---
## Contributions
Every model should follow our guidelines to uphold our objectives of readable,
usable, and maintainable code.
### General Guidelines
- Code should be well documented and tested.
- Runnable from a blank environment with ease.
- Trainable on: single GPU/CPU (baseline), multiple GPUs & TPUs
- Compatible with Python 3 (using [six](https://pythonhosted.org/six/)
when compatibility with Python 2 is necessary)
- Conform to the
[Google Python Style Guide](https://github.com/google/styleguide/blob/gh-pages/pyguide.md)
### Implementation Guidelines
These guidelines are to ensure consistent model implementations for
better readability and maintainability.
- Use [common utility functions](utils).
- Export a SavedModel at the end of training.
- Use consistent flags and the flag-parsing library
([read more here](utils/flags/guidelines.md)).
- Produce benchmarks and logs ([read more here](utils/logs/guidelines.md)).