Commit 4a705e08 authored by Neal Wu, committed by GitHub

Fix broken links in the models repo (#2445)

parent 0e09477a
@@ -83,11 +83,11 @@ def create_readable_names_for_imagenet_labels():
  (since 0 is reserved for the background class).
  Code is based on
-  https://github.com/tensorflow/models/blob/master/inception/inception/data/build_imagenet_data.py#L463
+  https://github.com/tensorflow/models/blob/master/research/inception/inception/data/build_imagenet_data.py
  """
  # pylint: disable=g-line-too-long
-  base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/inception/inception/data/'
+  base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/research/inception/inception/data/'
  synset_url = '{}/imagenet_lsvrc_2015_synsets.txt'.format(base_url)
  synset_to_human_url = '{}/imagenet_metadata.txt'.format(base_url)
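For context, the function these URLs feed builds an id-to-name map from the two files: synset ids come from the first file, human-readable names from the second, and index 0 is reserved for the background class. A minimal sketch of that logic, with parsing details inferred from the docstring rather than copied from the file:

```python
# Hypothetical sketch of turning the two synset files above into readable
# names; the helper name and parsing details are illustrative assumptions.
from six.moves import urllib

def load_readable_names(synset_url, synset_to_human_url):
  synsets = urllib.request.urlopen(synset_url).read().decode().splitlines()
  synset_to_human = {}
  for line in urllib.request.urlopen(synset_to_human_url).read().decode().splitlines():
    synset, human = line.split('\t', 1)  # lines look like 'n00004475<tab>organism, being'
    synset_to_human[synset] = human
  labels_to_names = {0: 'background'}  # 0 is reserved for the background class
  for i, synset in enumerate(synsets):
    labels_to_names[i + 1] = synset_to_human[synset]
  return labels_to_names
```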
...
@@ -17,7 +17,7 @@ Ensure that you have installed TensorFlow 1.1 or greater
You also need a copy of the ImageNet dataset if you want to run the provided example.
Follow the
-[Preparing the dataset](https://github.com/tensorflow/models/tree/master/slim#Data)
+[Preparing the dataset](https://github.com/tensorflow/models/tree/master/research/slim#Data)
instructions in the TF-Slim library to get and preprocess the ImageNet data.

## Available models

@@ -32,7 +32,7 @@ Inception v3 | Step L.L. on ensemble of 4 models| [ens4_adv_inception_v3_2017_08
Inception ResNet v2 | Step L.L. on ensemble of 3 models | [ens_adv_inception_resnet_v2_2017_08_18.tar.gz](http://download.tensorflow.org/models/ens_adv_inception_resnet_v2_2017_08_18.tar.gz)

All checkpoints are compatible with the
-[TF-Slim](https://github.com/tensorflow/models/tree/master/slim)
+[TF-Slim](https://github.com/tensorflow/models/tree/master/research/slim)
implementation of Inception v3 and Inception ResNet v2.
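For example, fetching and unpacking the last checkpoint in the table (its URL is listed above) could look like:

```shell
# Download and unpack the ensemble-adversarially-trained Inception ResNet v2
# checkpoint from the table above.
curl -O http://download.tensorflow.org/models/ens_adv_inception_resnet_v2_2017_08_18.tar.gz
tar -xvzf ens_adv_inception_resnet_v2_2017_08_18.tar.gz
```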
## How to evaluate models on ImageNet test data
...
@@ -135,20 +135,20 @@ adversarial training losses). The training loop itself is defined in

### Command-Line Flags

Flags related to distributed training and the training loop itself are defined
-in [`train_utils.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/train_utils.py).
+in [`train_utils.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/train_utils.py).
-Flags related to model hyperparameters are defined in [`graphs.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/graphs.py).
+Flags related to model hyperparameters are defined in [`graphs.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/graphs.py).
-Flags related to adversarial training are defined in [`adversarial_losses.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/adversarial_losses.py).
+Flags related to adversarial training are defined in [`adversarial_losses.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/adversarial_losses.py).
Flags particular to each job are defined in the main binary files.

### Data Generation

-* Vocabulary generation: [`gen_vocab.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/data/gen_vocab.py)
+* Vocabulary generation: [`gen_vocab.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/gen_vocab.py)
-* Data generation: [`gen_data.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/data/gen_data.py)
+* Data generation: [`gen_data.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/gen_data.py)

-Command-line flags defined in [`document_generators.py`](https://github.com/tensorflow/models/tree/master/adversarial_text/data/document_generators.py)
+Command-line flags defined in [`document_generators.py`](https://github.com/tensorflow/models/tree/master/research/adversarial_text/data/document_generators.py)
control which dataset is processed and how.
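As an illustration, generating an IMDB vocabulary with these binaries is typically invoked along the following lines; the flag names are assumptions based on the description above, so check the flags actually defined in `gen_vocab.py` and `document_generators.py`:

```shell
# Illustrative invocation only; flag names are assumptions, verify them
# against the flags defined in the data generation binaries.
IMDB_DATA_DIR=/tmp/imdb
bazel run data:gen_vocab -- \
  --output_dir="$IMDB_DATA_DIR" \
  --dataset=imdb \
  --imdb_input_dir="$IMDB_DATA_DIR/aclImdb"
```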
## Contact for Issues
...
@@ -43,7 +43,7 @@ cd ..
4. `train.py` works with both CPU and GPU, though using a GPU is preferable. It has been tested with a Titan X and with a GTX 980.

[TF]: https://www.tensorflow.org/install/
-[FSNS]: https://github.com/tensorflow/models/tree/master/street
+[FSNS]: https://github.com/tensorflow/models/tree/master/research/street

## How to use this code

@@ -81,7 +81,7 @@ python train.py --checkpoint=model.ckpt-399731
You need to define a new dataset. There are two options:

1. Store data in the same format as the FSNS dataset and just reuse the
-[python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py)
+[python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/datasets/fsns.py)
module. E.g., create a file datasets/newtextdataset.py:

```
import fsns
```

@@ -151,8 +151,8 @@ To learn how to store data in the FSNS
- labels: ground truth label ids, shape=[batch_size x seq_length];
- labels_one_hot: labels in one-hot encoding, shape [batch_size x seq_length x num_char_classes];

-Refer to [python/data_provider.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/data_provider.py#L33)
+Refer to [python/data_provider.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/data_provider.py#L33)
-for more details. You can use [python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/attention_ocr/python/datasets/fsns.py)
+for more details. You can use [python/datasets/fsns.py](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/datasets/fsns.py)
as an example.

## How to use a pre-trained model
@@ -164,11 +164,11 @@ The recommended way is to use the [Serving infrastructure][serving].
Alternatively you can:

1. define a placeholder for images (or directly use a numpy array)
-2. [create a graph](https://github.com/tensorflow/models/blob/master/attention_ocr/python/eval.py#L60)
+2. [create a graph](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/eval.py#L60)
```
endpoints = model.create_base(images_placeholder, labels_one_hot=None)
```
-3. [load a pretrained model](https://github.com/tensorflow/models/blob/master/attention_ocr/python/model.py#L494)
+3. [load a pretrained model](https://github.com/tensorflow/models/blob/master/research/attention_ocr/python/model.py#L494)
4. run computations through the graph:
```
predictions = sess.run(endpoints.predicted_chars,
```
...
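Putting steps 1-4 together, a minimal end-to-end sketch might look like the following. The image shape (FSNS images are 600x150), the checkpoint name, and the use of a plain `tf.train.Saver` for step 3 are assumptions; `model` is the repo's attention_ocr model object, whose construction is not shown in this diff:

```python
# Minimal sketch combining steps 1-4 above; shapes, paths, and the restore
# mechanism are assumptions, and the construction of `model` is elided.
import numpy as np
import tensorflow as tf

images_placeholder = tf.placeholder(tf.float32, shape=[1, 150, 600, 3])
endpoints = model.create_base(images_placeholder, labels_one_hot=None)

saver = tf.train.Saver()
with tf.Session() as sess:
  saver.restore(sess, 'model.ckpt-399731')  # checkpoint name used earlier in this README
  images_actual_data = np.zeros([1, 150, 600, 3], dtype=np.float32)  # stand-in input
  predictions = sess.run(endpoints.predicted_chars,
                         feed_dict={images_placeholder: images_actual_data})
```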
@@ -27,7 +27,7 @@ hyperparameter values (in vggish_params.py) that were used to train this model
internally.

For comparison, here is TF-Slim's VGG definition:
-https://github.com/tensorflow/models/blob/master/slim/nets/vgg.py
+https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py
"""

import tensorflow as tf
...
@@ -168,10 +168,10 @@ The *Show and Tell* model requires a pretrained *Inception v3* checkpoint file
to initialize the parameters of its image encoder submodel.

This checkpoint file is provided by the
-[TensorFlow-Slim image classification library](https://github.com/tensorflow/models/tree/master/slim#tensorflow-slim-image-classification-library)
+[TensorFlow-Slim image classification library](https://github.com/tensorflow/models/tree/master/research/slim#tensorflow-slim-image-classification-library)
which provides a suite of pre-trained image classification models. You can read
more about the models provided by the library
-[here](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models).
+[here](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models).

Run the following commands to download the *Inception v3* checkpoint.
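The commands themselves are elided by the diff view; a plausible sketch, assuming the standard TF-Slim Inception v3 checkpoint URL and an arbitrary target directory:

```shell
# Sketch of the elided download step; the target directory is an assumption.
INCEPTION_DIR="${HOME}/im2txt/data"
mkdir -p "${INCEPTION_DIR}"
wget "http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz"
tar -xvf inception_v3_2016_08_28.tar.gz -C "${INCEPTION_DIR}"
rm inception_v3_2016_08_28.tar.gz
```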
...
-**NOTE**: For the most part, you will find a newer version of this code at [models/slim](https://github.com/tensorflow/models/tree/master/slim). In particular:
+**NOTE**: For the most part, you will find a newer version of this code at [models/research/slim](https://github.com/tensorflow/models/tree/master/research/slim). In particular:

* `inception_train.py` and `imagenet_train.py` should no longer be used. The slim editions for running on multiple GPUs are the current best examples.
* `inception_distributed_train.py` and `imagenet_distributed_train.py` are still valid examples of distributed training.
...
@@ -5,7 +5,7 @@
"metadata": {},
"source": [
"# Object Detection Demo\n",
-"Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/installation.md) before you start."
+"Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start."
]
},
{
@@ -96,7 +96,7 @@
"\n",
"Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. \n",
"\n",
-"By default we use an \"SSD with Mobilenet\" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies."
+"By default we use an \"SSD with Mobilenet\" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies."
]
},
{
...
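For reference, the load-a-frozen-graph pattern that `PATH_TO_CKPT` feeds into usually looks like the following in TF 1.x; the notebook cell itself is not shown in this diff, so treat this as a sketch with a placeholder path:

```python
# Sketch of loading a frozen detection graph in TF 1.x; the path is a
# placeholder for whatever PATH_TO_CKPT points to in the notebook.
import tensorflow as tf

PATH_TO_CKPT = 'ssd_mobilenet_v1_coco/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
    od_graph_def.ParseFromString(fid.read())
    tf.import_graph_def(od_graph_def, name='')
```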
@@ -26,7 +26,7 @@ https://papers.nips.cc/paper/6206-perspective-transformer-nets-learning-single-v
(2) Official implementation in Torch: https://github.com/xcyan/ptnbhwd

(3) 2D Transformer implementation in TF:
-github.com/tensorflow/models/tree/master/transformer
+github.com/tensorflow/models/tree/master/research/transformer
"""
...
@@ -67,7 +67,7 @@ git clone https://github.com/tensorflow/models/
This will put the TF-Slim image models library in `$HOME/workspace/models/research/slim`.
(It will also create a directory called
-[models/inception](https://github.com/tensorflow/models/tree/master/inception),
+[models/inception](https://github.com/tensorflow/models/tree/master/research/inception),
which contains an older version of slim; you can safely ignore this.)

To verify that this has worked, execute the following commands; it should run

@@ -127,7 +127,7 @@ from integer labels to class names.
You can use the same script to create the mnist and cifar10 datasets.
However, for ImageNet, you have to follow the instructions
-[here](https://github.com/tensorflow/models/blob/master/inception/README.md#getting-started).
+[here](https://github.com/tensorflow/models/blob/master/research/inception/README.md#getting-started).
Note that you first have to sign up for an account at image-net.org.
Also, the download can take several hours, and could use up to 500GB.
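As an aside, the dataset-creation script referred to above is presumably TF-Slim's `download_and_convert_data.py`; if so, creating the mnist and cifar10 datasets looks roughly like this (script name and flags are assumptions, so verify them against your checkout):

```shell
# Assumed invocation of TF-Slim's dataset conversion script.
cd "$HOME/workspace/models/research/slim"
python download_and_convert_data.py --dataset_name=mnist --dataset_dir=/tmp/mnist
python download_and_convert_data.py --dataset_name=cifar10 --dataset_dir=/tmp/cifar10
```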
@@ -464,17 +464,17 @@ bazel-bin/tensorflow/examples/label_image/label_image \

#### The model runs out of CPU memory.

See
-[Model Runs out of CPU memory](https://github.com/tensorflow/models/tree/master/inception#the-model-runs-out-of-cpu-memory).
+[Model Runs out of CPU memory](https://github.com/tensorflow/models/tree/master/research/inception#the-model-runs-out-of-cpu-memory).

#### The model runs out of GPU memory.

See
-[Adjusting Memory Demands](https://github.com/tensorflow/models/tree/master/inception#adjusting-memory-demands).
+[Adjusting Memory Demands](https://github.com/tensorflow/models/tree/master/research/inception#adjusting-memory-demands).

#### The model training results in NaN's.

See
-[Model Resulting in NaNs](https://github.com/tensorflow/models/tree/master/inception#the-model-training-results-in-nans).
+[Model Resulting in NaNs](https://github.com/tensorflow/models/tree/master/research/inception#the-model-training-results-in-nans).

#### The ResNet and VGG Models have 1000 classes but the ImageNet dataset has 1001

@@ -509,4 +509,4 @@ image_preprocessing_fn = preprocessing_factory.get_preprocessing(

#### What hardware specification are these hyper-parameters targeted for?

See
-[Hardware Specifications](https://github.com/tensorflow/models/tree/master/inception#what-hardware-specification-are-these-hyper-parameters-targeted-for).
+[Hardware Specifications](https://github.com/tensorflow/models/tree/master/research/inception#what-hardware-specification-are-these-hyper-parameters-targeted-for).
@@ -36,7 +36,7 @@
"python -c \"import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once\"\n",
"```\n",
"\n",
-"Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from [here](https://github.com/tensorflow/models/tree/master/slim). Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim **before** running this notebook, so that these files are in your python path.\n",
+"Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from [here](https://github.com/tensorflow/models/tree/master/research/slim). Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/research/slim **before** running this notebook, so that these files are in your python path.\n",
"\n",
"To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory.\n"
]
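An alternative to changing directories before launching the notebook is to extend `sys.path` in the first cell; this is a suggestion rather than part of the notebook:

```python
# Alternative to cd-ing into TF_MODELS/research/slim first: put the slim
# checkout on sys.path. The path below is a placeholder for your checkout.
import os
import sys

sys.path.append(os.path.expanduser('~/TF_MODELS/research/slim'))
```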
@@ -757,7 +757,7 @@
"<a id='Pretrained'></a>\n",
"\n",
"Neural nets work best when they have many parameters, making them very flexible function approximators.\n",
-"However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list [here](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models).\n",
+"However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list [here](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models).\n",
"\n",
"\n",
"You can either use these models as-is, or you can perform \"surgery\" on them, to modify them for some other task. For example, it is common to \"chop off\" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes.\n",
...
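The "surgery" described in that cell is commonly implemented with TF-Slim's variable-restoring helpers; a rough sketch, in which the variable scope to exclude and the checkpoint path are assumptions for Inception V1:

```python
# Rough sketch of checkpoint "surgery": restore every variable except the
# final logits layer, which keeps its fresh initialization for the new
# label set. Scope name and checkpoint path are assumptions.
import tensorflow as tf

slim = tf.contrib.slim

variables_to_restore = slim.get_variables_to_restore(
    exclude=['InceptionV1/Logits'])
init_fn = slim.assign_from_checkpoint_fn(
    'inception_v1.ckpt', variables_to_restore)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())  # initializes the new logits
  init_fn(sess)  # overwrites all restored variables from the checkpoint
```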
@@ -19,7 +19,7 @@ languages.
This repository is largely divided into two sub-packages:

1. **DRAGNN:
-[code](https://github.com/tensorflow/models/tree/master/syntaxnet/dragnn),
+[code](https://github.com/tensorflow/models/tree/master/research/syntaxnet/dragnn),
[documentation](g3doc/DRAGNN.md),
[paper](https://arxiv.org/pdf/1703.04474.pdf)** implements Dynamic Recurrent
Acyclic Graphical Neural Networks (DRAGNN), a framework for building

@@ -31,7 +31,7 @@ This repository is largely divided into two sub-packages:
easier to use than the original SyntaxNet implementation.*

1. **SyntaxNet:
-[code](https://github.com/tensorflow/models/tree/master/syntaxnet/syntaxnet),
+[code](https://github.com/tensorflow/models/tree/master/research/syntaxnet/syntaxnet),
[documentation](g3doc/syntaxnet-tutorial.md)** is a transition-based
framework for natural language processing, with core functionality for
feature extraction, representing annotated data, and evaluation. As of the
@@ -95,7 +95,7 @@ following commands:

```shell
git clone --recursive https://github.com/tensorflow/models.git
-cd models/syntaxnet/tensorflow
+cd models/research/syntaxnet/tensorflow
./configure
cd ..
bazel test ...
```
...
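Once the build and tests pass, a quick smoke test is to pipe a sentence through the parser's demo script; the script path here is quoted from memory of the SyntaxNet README, so verify it in your checkout:

```shell
# Parse one sentence with the pre-trained model; the demo script path is
# an assumption, check your checkout.
echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh
```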
@@ -56,7 +56,7 @@ setuptools.setup(
    version='0.2',
    description='SyntaxNet: Neural Models of Syntax',
    long_description='',
-    url='https://github.com/tensorflow/models/tree/master/syntaxnet',
+    url='https://github.com/tensorflow/models/tree/master/research/syntaxnet',
    author='Google Inc.',
    author_email='opensource@google.com',
...
@@ -3,7 +3,7 @@
### Module `dragnn_ops`

Defined in
-[`tensorflow/dragnn/python/dragnn_ops.py`](https://github.com/tensorflow/models/blob/master/syntaxnet/dragnn/python/dragnn_ops.py).
+[`tensorflow/dragnn/python/dragnn_ops.py`](https://github.com/tensorflow/models/blob/master/research/syntaxnet/dragnn/python/dragnn_ops.py).

Groups the DRAGNN TensorFlow ops in one module.
...
@@ -15,7 +15,7 @@
# Models can be downloaded from
# http://download.tensorflow.org/models/parsey_universal/<language>.zip
# for the languages listed at
-# https://github.com/tensorflow/models/blob/master/syntaxnet/universal.md
+# https://github.com/tensorflow/models/blob/master/research/syntaxnet/universal.md
#
PARSER_EVAL=bazel-bin/syntaxnet/parser_eval
...
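To illustrate the download step those comments describe, with "English" standing in for the `<language>` placeholder:

```shell
# Fetch and unpack one of the Parsey Universal models referenced above;
# "English" is an example value for the <language> placeholder.
LANGUAGE=English
wget "http://download.tensorflow.org/models/parsey_universal/${LANGUAGE}.zip"
unzip "${LANGUAGE}.zip" -d "${LANGUAGE}"
```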
@@ -9,7 +9,7 @@
# Models can be downloaded from
# http://download.tensorflow.org/models/parsey_universal/<language>.zip
# for the languages listed at
-# https://github.com/tensorflow/models/blob/master/syntaxnet/universal.md
+# https://github.com/tensorflow/models/blob/master/research/syntaxnet/universal.md
#
PARSER_EVAL=bazel-bin/syntaxnet/parser_eval
...