# Installation

Transformers is tested on Python 3.5+ and PyTorch 1.1.0.
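
You can check the versions available in your environment with, for example:

```bash
python --version
python -c "import torch; print(torch.__version__)"
```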

## With pip

Transformers can be installed with pip as follows:

```bash
pip install transformers
```
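
To quickly check that the package was installed correctly, you can run, for instance:

```bash
python -c "import transformers; print(transformers.__version__)"
```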

## From source

To install from source, clone the repository and install with:

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install .
```

Add the `--editable` (or `-e`) flag to the `pip install` command for an editable install, so that local changes to the source take effect without reinstalling.

## Tests

An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the [tests folder](https://github.com/huggingface/transformers/tree/master/tests) and examples tests in the [examples folder](https://github.com/huggingface/transformers/tree/master/examples).

Tests can be run using `unittest` or `pytest` (install pytest if needed with `pip install pytest`).

Run all the tests from the root of the cloned repository with the commands:

```bash
python -m unittest discover -s tests -t . -v
python -m unittest discover -s examples -t examples -v
```

or

```bash
python -m pytest -sv ./tests/
python -m pytest -sv ./examples/
```

By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to `yes` to run them.
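
For example, to include the slow tests in a pytest run:

```bash
RUN_SLOW=yes python -m pytest -sv ./tests/
```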

## OpenAI GPT original tokenization workflow

If you want to reproduce the original tokenization process of the `OpenAI GPT` paper, you will need to install `ftfy` and `spaCy`:

```bash
pip install spacy ftfy==4.4.3
python -m spacy download en
```

If you don't install `ftfy` and `spaCy`, the `OpenAI GPT` tokenizer falls back to tokenizing with BERT's `BasicTokenizer` followed by Byte-Pair Encoding, which should be fine for most use cases.

## Note on model downloads (Continuous Integration or large-scale deployments)

If you expect to download large volumes of models (more than 1,000) from our hosted bucket (for instance through your CI setup, or a large-scale production deployment), please cache the model files on your end. It will be way faster and cheaper. Feel free to contact us privately if you need any help.
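
As a minimal sketch, one way to do this is to download models once into a shared local directory and point every run at it via the `TRANSFORMERS_CACHE` environment variable (read by recent library versions; verify the variable name against the version you deploy). The path and the job script below are hypothetical:

```bash
# Point the library at a shared local cache (hypothetical path);
# models downloaded once are reused by subsequent runs instead of
# being fetched from the hosted bucket each time.
export TRANSFORMERS_CACHE=/shared/transformers-cache
python your_job.py  # placeholder for your CI or production entry point
```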

## Do you want to run a Transformer model on a mobile device?

You should check out our [swift-coreml-transformers](https://github.com/huggingface/swift-coreml-transformers) repo.

It contains a set of tools to convert PyTorch or TensorFlow 2.0 trained Transformer models (currently contains `GPT-2`, `DistilGPT-2`, `BERT`, and `DistilBERT`) to CoreML models that run on iOS devices.

At some point in the future, you'll be able to seamlessly move from pre-training or fine-tuning models in PyTorch to productizing them in CoreML, or prototype a model or an app in CoreML and then research its hyperparameters or architecture from PyTorch. Super exciting!