Commit 1630da34 authored by David Andersen, committed by Neal Wu

Update links in slim to the new tensorflow models organization (#2439)

parent 4f32535f
@@ -13,7 +13,7 @@ converting them
to TensorFlow's native TFRecord format and reading them in using TF-Slim's
data reading and queueing utilities. You can easily train any model on any of
these datasets, as we demonstrate below. We've also included a
[jupyter notebook](https://github.com/tensorflow/models/blob/master/research/slim/slim_walkthrough.ipynb),
which provides working examples of how to use TF-Slim for image classification.
For developing or modifying your own models, see also the [main TF-Slim page](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim).
@@ -55,7 +55,7 @@ python -c "import tensorflow.contrib.slim as slim; eval = slim.evaluation.evalua
## Installing the TF-Slim image models library
To use TF-Slim for image classification, you also have to install
the [TF-Slim image models library](https://github.com/tensorflow/models/tree/master/research/slim),
which is not part of the core TF library.
To do this, check out the
[tensorflow/models](https://github.com/tensorflow/models/) repository as follows:
@@ -65,7 +65,7 @@ cd $HOME/workspace
git clone https://github.com/tensorflow/models/
```
This will put the TF-Slim image models library in `$HOME/workspace/models/research/slim`.
(It will also create a directory called
[models/inception](https://github.com/tensorflow/models/tree/master/inception),
which contains an older version of slim; you can safely ignore this.)
@@ -74,7 +74,7 @@ To verify that this has worked, execute the following commands; it should run
without raising any errors.
```
cd $HOME/workspace/models/research/slim
python -c "from nets import cifarnet; mynet = cifarnet.cifarnet"
```
@@ -140,11 +140,11 @@ which stores pointers to the data file, as well as various other pieces of
metadata, such as the class labels, the train/test split, and how to parse the
TFExample protos. We have included the TF-Slim Dataset descriptors
for
[Cifar10](https://github.com/tensorflow/models/blob/master/research/slim/datasets/cifar10.py),
[ImageNet](https://github.com/tensorflow/models/blob/master/research/slim/datasets/imagenet.py),
[Flowers](https://github.com/tensorflow/models/blob/master/research/slim/datasets/flowers.py),
and
[MNIST](https://github.com/tensorflow/models/blob/master/research/slim/datasets/mnist.py).
An example of how to load data with a TF-Slim dataset descriptor and a TF-Slim
[DatasetDataProvider](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/data/dataset_data_provider.py)
is shown below.
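The following sketch assumes the Flowers dataset has already been downloaded and converted to TFRecords under `/tmp/flowers` (the path and split name are illustrative):

```python
import tensorflow as tf
from datasets import flowers

slim = tf.contrib.slim

# Select the 'train' split of the Flowers dataset via its descriptor.
dataset = flowers.get_split('train', '/tmp/flowers')

# A DatasetDataProvider reads the dataset's TFRecord files and
# queues decoded examples for training.
provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
[image, label] = provider.get(['image', 'label'])
```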
@@ -242,30 +242,30 @@ crops at multiple scales.
Model | TF-Slim File | Checkpoint | Top-1 Accuracy | Top-5 Accuracy |
:----:|:------------:|:----------:|:-------:|:--------:|
[Inception V1](http://arxiv.org/abs/1409.4842v1)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v1.py)|[inception_v1_2016_08_28.tar.gz](http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz)|69.8|89.6|
[Inception V2](http://arxiv.org/abs/1502.03167)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v2.py)|[inception_v2_2016_08_28.tar.gz](http://download.tensorflow.org/models/inception_v2_2016_08_28.tar.gz)|73.9|91.8|
[Inception V3](http://arxiv.org/abs/1512.00567)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v3.py)|[inception_v3_2016_08_28.tar.gz](http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz)|78.0|93.9|
[Inception V4](http://arxiv.org/abs/1602.07261)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py)|[inception_v4_2016_09_09.tar.gz](http://download.tensorflow.org/models/inception_v4_2016_09_09.tar.gz)|80.2|95.2|
[Inception-ResNet-v2](http://arxiv.org/abs/1602.07261)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_resnet_v2.py)|[inception_resnet_v2_2016_08_30.tar.gz](http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz)|80.4|95.3|
[ResNet V1 50](https://arxiv.org/abs/1512.03385)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v1.py)|[resnet_v1_50_2016_08_28.tar.gz](http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz)|75.2|92.2|
[ResNet V1 101](https://arxiv.org/abs/1512.03385)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v1.py)|[resnet_v1_101_2016_08_28.tar.gz](http://download.tensorflow.org/models/resnet_v1_101_2016_08_28.tar.gz)|76.4|92.9|
[ResNet V1 152](https://arxiv.org/abs/1512.03385)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v1.py)|[resnet_v1_152_2016_08_28.tar.gz](http://download.tensorflow.org/models/resnet_v1_152_2016_08_28.tar.gz)|76.8|93.2|
[ResNet V2 50](https://arxiv.org/abs/1603.05027)^|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py)|[resnet_v2_50_2017_04_14.tar.gz](http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz)|75.6|92.8|
[ResNet V2 101](https://arxiv.org/abs/1603.05027)^|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py)|[resnet_v2_101_2017_04_14.tar.gz](http://download.tensorflow.org/models/resnet_v2_101_2017_04_14.tar.gz)|77.0|93.7|
[ResNet V2 152](https://arxiv.org/abs/1603.05027)^|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py)|[resnet_v2_152_2017_04_14.tar.gz](http://download.tensorflow.org/models/resnet_v2_152_2017_04_14.tar.gz)|77.8|94.1|
[ResNet V2 200](https://arxiv.org/abs/1603.05027)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py)|[TBA]()|79.9\*|95.2\*|
[VGG 16](https://arxiv.org/abs/1409.1556)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py)|[vgg_16_2016_08_28.tar.gz](http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz)|71.5|89.8|
[VGG 19](https://arxiv.org/abs/1409.1556)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py)|[vgg_19_2016_08_28.tar.gz](http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz)|71.1|89.8|
[MobileNet_v1_1.0_224](https://arxiv.org/pdf/1704.04861.pdf)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.py)|[mobilenet_v1_1.0_224_2017_06_14.tar.gz](http://download.tensorflow.org/models/mobilenet_v1_1.0_224_2017_06_14.tar.gz)|70.7|89.5|
[MobileNet_v1_0.50_160](https://arxiv.org/pdf/1704.04861.pdf)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.py)|[mobilenet_v1_0.50_160_2017_06_14.tar.gz](http://download.tensorflow.org/models/mobilenet_v1_0.50_160_2017_06_14.tar.gz)|59.9|82.5|
[MobileNet_v1_0.25_128](https://arxiv.org/pdf/1704.04861.pdf)|[Code](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.py)|[mobilenet_v1_0.25_128_2017_06_14.tar.gz](http://download.tensorflow.org/models/mobilenet_v1_0.25_128_2017_06_14.tar.gz)|41.3|66.2|
^ ResNet V2 models use Inception pre-processing and an input image size of 299 (use
`--preprocessing_name inception --eval_image_size 299` when using
`eval_image_classifier.py`). Performance numbers for ResNet V2 models are
reported on the ImageNet validation set.
All 16 MobileNet models reported in the [MobileNet Paper](https://arxiv.org/abs/1704.04861) can be found [here](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet_v1.md).
(\*): Results quoted from the [paper](https://arxiv.org/abs/1603.05027).
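As a rough sketch of how these checkpoints are consumed in code (assuming the Inception V3 tarball above has been downloaded and extracted to `/tmp/checkpoints`; the path is illustrative):

```python
import tensorflow as tf
from nets import inception

slim = tf.contrib.slim

# Build Inception V3 in inference mode. The 1001 classes are the
# 1000 ImageNet labels plus a background class at index 0.
images = tf.placeholder(tf.float32, [None, 299, 299, 3])
with slim.arg_scope(inception.inception_v3_arg_scope()):
  logits, _ = inception.inception_v3(images, num_classes=1001,
                                     is_training=False)

# Returns a function that restores the pre-trained weights into a session.
init_fn = slim.assign_from_checkpoint_fn(
    '/tmp/checkpoints/inception_v3.ckpt',
    slim.get_model_variables('InceptionV3'))
```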
@@ -303,7 +303,7 @@ python train_image_classifier.py \
This process may take several days, depending on your hardware setup.
For convenience, we provide a way to train a model on multiple GPUs,
and/or multiple CPUs, either synchronously or asynchronously.
See [model_deploy](https://github.com/tensorflow/models/blob/master/research/slim/deployment/model_deploy.py)
for details.
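For orientation, here is a minimal sketch of the `model_deploy` pattern, adapted from the docstring in `model_deploy.py` (the input tensors and model below are placeholders, not a real pipeline):

```python
import tensorflow as tf
from deployment import model_deploy

slim = tf.contrib.slim

# Replicate the model across two GPUs on this machine.
config = model_deploy.DeploymentConfig(num_clones=2)

# Create the global step on the device that stores the variables.
with tf.device(config.variables_device()):
  global_step = slim.create_global_step()

def model_fn(images, labels):
  # Placeholder model: any slim network and loss would go here.
  predictions = slim.fully_connected(slim.flatten(images), 5)
  slim.losses.softmax_cross_entropy(predictions, labels)

# Placeholder inputs: a real pipeline would use a DatasetDataProvider.
images = tf.zeros([32, 224, 224, 3])
labels = tf.one_hot(tf.zeros([32], dtype=tf.int32), 5)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
deployed = model_deploy.deploy(config, model_fn, [images, labels],
                               optimizer=optimizer)
```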
### TensorBoard
@@ -350,7 +350,7 @@ one only wants to train a subset of layers, so the flag `--trainable_scopes` allows
you to specify which subsets of layers should be trained; the rest remain frozen.
Below we give an example of
[fine-tuning inception-v3 on flowers](https://github.com/tensorflow/models/blob/master/research/slim/scripts/finetune_inception_v3_on_flowers.sh).
The inception_v3 model was trained on ImageNet with 1000 class labels, but the flowers
dataset only has 5 classes. Since the dataset is quite small we will only train
the new layers.
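For intuition, a sketch of what `--trainable_scopes` does under the hood; this mirrors (as an assumption, not verbatim) the variable-selection logic in `train_image_classifier.py`:

```python
import tensorflow as tf

def get_variables_to_train(trainable_scopes):
  """Returns the variables to optimize, restricted to the given scopes."""
  if trainable_scopes is None:
    # No restriction: fine-tune every trainable variable.
    return tf.trainable_variables()
  scopes = [scope.strip() for scope in trainable_scopes.split(',')]
  variables_to_train = []
  for scope in scopes:
    variables_to_train.extend(
        tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope))
  return variables_to_train

# Train only the freshly initialized layers of Inception V3:
# get_variables_to_train('InceptionV3/Logits,InceptionV3/AuxLogits')
```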
@@ -15,7 +15,7 @@
"""Provides data for the Cifar10 dataset.
The dataset scripts used to create the dataset can be found at:
tensorflow/models/research/slim/datasets/download_and_convert_cifar10.py
"""
from __future__ import absolute_import
@@ -58,7 +58,7 @@ DATA_DIR="${1%/}"
SCRATCH_DIR="${DATA_DIR}/raw-data/"
mkdir -p "${DATA_DIR}"
mkdir -p "${SCRATCH_DIR}"
WORK_DIR="$0.runfiles/third_party/tensorflow_models/research/slim"
# Download the ImageNet data.
LABELS_FILE="${WORK_DIR}/datasets/imagenet_lsvrc_2015_synsets.txt"
@@ -15,7 +15,7 @@
"""Provides data for the flowers dataset.
The dataset scripts used to create the dataset can be found at:
tensorflow/models/research/slim/datasets/download_and_convert_flowers.py
"""
from __future__ import absolute_import
@@ -79,11 +79,11 @@ def create_readable_names_for_imagenet_labels():
(since 0 is reserved for the background class).
Code is based on
https://github.com/tensorflow/models/blob/master/research/inception/inception/data/build_imagenet_data.py#L463
"""
# pylint: disable=g-line-too-long
base_url = 'https://raw.githubusercontent.com/tensorflow/models/master/research/inception/inception/data/'
synset_url = '{}/imagenet_lsvrc_2015_synsets.txt'.format(base_url)
synset_to_human_url = '{}/imagenet_metadata.txt'.format(base_url)
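A quick usage sketch of this helper (assuming the slim `datasets` package is importable; the function fetches the two synset files above at call time):

```python
from datasets import imagenet

# Index 0 is the background class; real ImageNet classes start at 1.
names = imagenet.create_readable_names_for_imagenet_labels()
print(names[1])  # e.g. 'tench, Tinca tinca'
```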
@@ -15,7 +15,7 @@
"""Provides data for the MNIST dataset.
The dataset scripts used to create the dataset can be found at:
tensorflow/models/research/slim/datasets/download_and_convert_mnist.py
"""
from __future__ import absolute_import
@@ -16,8 +16,8 @@ r"""Saves out a GraphDef containing the architecture of the model.
To use it, run something like this, with a model name defined by slim:
bazel build tensorflow_models/research/slim:export_inference_graph
bazel-bin/tensorflow_models/research/slim/export_inference_graph \
--model_name=inception_v3 --output_file=/tmp/inception_v3_inf_graph.pb
If you then want to use the resulting model with your own or pretrained
@@ -42,6 +42,6 @@ $ tar -xvf mobilenet_v1_1.0_224_2017_06_14.tar.gz
$ mv mobilenet_v1_1.0_224.ckpt.* ${CHECKPOINT_DIR}
$ rm mobilenet_v1_1.0_224_2017_06_14.tar.gz
```
More information on integrating MobileNets into your project can be found at the [TF-Slim Image Classification Library](https://github.com/tensorflow/models/blob/master/research/slim/README.md).
To get started running models on-device, go to [TensorFlow Mobile](https://www.tensorflow.org/mobile/).