Commit 799a38c5 authored by chenzk

v1.0

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xl="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" version="1.1" viewBox="383 238 488 319" width="488" height="319">
<defs>
<font-face font-family="Futura" font-size="239" panose-1="2 11 6 2 2 2 4 9 3 3" units-per-em="1000" underline-position="-97.65625" underline-thickness="78.125" slope="-29.288703" x-height="467.77344" cap-height="761.71875" ascent="1038.0859" descent="-263.1836" font-style="italic" font-weight="500">
<font-face-src>
<font-face-name name="Futura-MediumItalic"/>
</font-face-src>
</font-face>
</defs>
<metadata> Produced by OmniGraffle 7.18.6 2022-03-02 07:53:29 +0000</metadata>
<g id="Canvas_1" stroke-dasharray="none" stroke="none" stroke-opacity="1" fill="none" fill-opacity="1">
<title>Canvas 1</title>
<g id="Canvas_1_Layer_1">
<title>Layer 1</title>
<g id="Graphic_2">
<text transform="translate(383 238.49902)" fill="#3661af">
<tspan font-family="Futura" font-size="239" font-style="italic" font-weight="500" fill="#3661af" x=".21533203" y="248">OFA</tspan>
</text>
</g>
</g>
</g>
</svg>
<?xml version="1.0" encoding="UTF-8"?>
<svg width="455.01" height="181.93" version="1.1" viewBox="383 238 455.01 181.93" xmlns="http://www.w3.org/2000/svg">
<metadata>Produced by OmniGraffle 7.18.6 2022-03-02 07:53:29 +0000</metadata>
<g transform="translate(-11.652 -70.883)" fill="none">
<title>Canvas 1</title>
<g>
<title>Layer 1</title>
<g transform="translate(383 238.5)" fill="#3661af" aria-label="OFA">
<path d="m165.34 156.97q0-26.958-16.455-43.412-16.338-16.571-43.295-16.571-13.304 0-25.44 5.0181-12.02 5.0181-20.889 14.354-10.27 10.503-15.638 23.923t-5.3682 28.125q0 26.724 16.104 43.412 16.221 16.571 42.128 16.571 29.292 0 49.014-20.422 19.839-20.539 19.839-50.998zm-58.35-83.907q38.978 0 62.084 22.756 23.106 22.64 23.106 60.684 0 20.422-7.5854 38.511-7.4688 17.972-21.356 31.275-13.42 12.604-31.042 19.372-17.622 6.6518-37.577 6.6518-37.227 0-60.1-22.873-22.873-22.873-22.873-60.1 0-20.772 7.3521-38.861 7.3521-18.205 21.123-31.392 13.07-12.487 30.458-19.255 17.388-6.7686 36.41-6.7686z"/>
<path d="m211.79 248 22.406-170.85h95.81l-3.0342 24.157h-69.319l-5.4849 42.945h69.086l-3.1509 23.923h-69.319l-10.27 79.822z"/>
<path d="m305.27 248 98.961-177.62 62.434 177.62h-27.774l-13.887-40.378h-68.036l-22.523 40.378zm62.434-63.134h49.947l-13.187-42.245q-0.9336-3.0342-1.9839-8.0522t-2.5674-13.187q-2.5674 6.0684-5.1348 11.553-2.4507 5.3682-4.9014 10.153z"/>
</g>
</g>
</g>
</svg>
## 👉 [Please follow one of these issue templates](https://github.com/pytorch/fairseq/issues/new/choose) 👈
Note: to keep the backlog clean and actionable, issues may be immediately closed if they do not follow one of the above issue templates.
---
name: 🐛 Bug Report
about: Submit a bug report to help us improve
labels: 'bug, needs triage'
---
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
Steps to reproduce the behavior (**always include the command you ran**):
1. Run cmd '....'
2. See error
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the described issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Environment
- fairseq Version (e.g., 1.0 or main):
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
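If it helps, a small script like the following can collect most of the environment details listed above. This is only a sketch, assuming PyTorch and fairseq are already installed; adapt it to your setup.

```python
# Sketch: print the environment details requested above.
# Assumes PyTorch and fairseq are installed; CUDA/cuDNN fields are None on CPU-only setups.
import platform
import sys

import torch
import fairseq

print("fairseq Version:", getattr(fairseq, "__version__", "unknown"))
print("PyTorch Version:", torch.__version__)
print("OS:", platform.platform())
print("Python version:", sys.version.split()[0])
print("CUDA version:", torch.version.cuda)
print("cuDNN version:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("GPU models:", [torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())])
```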
### Additional context
<!-- Add any other context about the problem here. -->
---
name: 📚 Documentation/Typos
about: Report an issue related to documentation or a typo
labels: 'documentation, needs triage'
---
## 📚 Documentation
For typos and doc fixes, please go ahead and:
1. Create an issue.
2. Fix the typo.
3. Submit a PR.
Thanks!
---
name: 🚀 Feature Request
about: Submit a proposal/request for a new feature
labels: 'enhancement, help wanted, needs triage'
---
## 🚀 Feature Request
<!-- A clear and concise description of the feature proposal -->
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
### Pitch
<!-- A clear and concise description of what you want to happen. -->
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
---
name: ❓ Questions/Help
about: If you have questions, please first search existing issues and docs
labels: 'question, needs triage'
---
## ❓ Questions and Help
### Before asking:
1. Search the issues.
2. Search the docs.
<!-- If you still can't find what you need: -->
#### What is your question?
#### Code
<!-- Please paste a code snippet if your question requires it! -->
#### What have you tried?
#### What's your environment?
- fairseq Version (e.g., 1.0 or main):
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
# Before submitting
- [ ] Was this discussed/approved via a GitHub issue? (not needed for typos or doc improvements)
- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/main/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
## What does this PR do?
Fixes # (issue).
## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in a GitHub issue, there's a high chance it will not be merged.
## Did you have fun?
Make sure you had fun coding 🙃
# Configuration for probot-stale - https://github.com/probot/stale
# Mostly copied from github.com/facebook/react/blob/master/.github/stale.yml
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 90
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- bug
# Label to use when marking an issue as stale
staleLabel: stale
issues:
  # Comment to post when marking an issue as stale.
  markComment: >
    This issue has been automatically marked as stale.
    **If this issue is still affecting you, please leave any comment** (for example, "bump"), and we'll keep it open.
    We are sorry that we haven't been able to prioritize it yet. If you have any new additional information, please include it with your comment!
  # Comment to post when closing a stale issue.
  closeComment: >
    Closing this issue after a prolonged period of inactivity. If this issue is still present in the latest release, please create a new issue with up-to-date information. Thank you!
pulls:
  # Comment to post when marking a pull request as stale.
  markComment: >
    This pull request has been automatically marked as stale.
    **If this pull request is still relevant, please leave any comment** (for example, "bump"), and we'll keep it open.
    We are sorry that we haven't been able to prioritize reviewing it yet. Your contribution is very much appreciated.
  # Comment to post when closing a stale pull request.
  closeComment: >
    Closing this pull request after a prolonged period of inactivity. If this issue is still present in the latest release, please ask for this pull request to be reopened. Thank you!
name: build

on:
  # Trigger the workflow on push to main or any pull request
  push:
    branches:
      - main
  pull_request:

jobs:
  build:
    strategy:
      max-parallel: 4
      matrix:
        platform: [ubuntu-latest, macos-latest]
        python-version: [3.6, 3.7]
    runs-on: ${{ matrix.platform }}
    steps:
      - uses: actions/checkout@v2

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      - name: Conditionally install pytorch
        if: matrix.platform == 'windows-latest'
        run: pip3 install torch -f https://download.pytorch.org/whl/torch_stable.html

      - name: Install locally
        run: |
          python -m pip install --upgrade pip
          git submodule update --init --recursive
          python setup.py build_ext --inplace
          python -m pip install --editable .

      - name: Install optional test requirements
        run: |
          python -m pip install iopath transformers pyarrow
          python -m pip install git+https://github.com/facebookresearch/fairscale.git@main

      - name: Lint with flake8
        run: |
          pip install flake8
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics --extend-exclude fairseq/model_parallel/megatron
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics --extend-exclude fairseq/model_parallel/megatron

      - name: Run tests
        run: |
          python setup.py test
name: build_wheels

on:
  push:
    branches:
      - v[0-9]+.[0-9]+.[x0-9]+
    tags:
      - v*

jobs:
  build_wheels:
    name: Build wheels on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
    steps:
      - uses: actions/checkout@v2

      - name: Install Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.7'

      - name: Install cibuildwheel
        run: |
          python -m pip install cibuildwheel

      - name: Build wheels for CPython
        run: |
          python -m cibuildwheel --output-dir dist
        env:
          CIBW_BUILD: "cp36-*64 cp37-*64 cp38-*64"
          CIBW_MANYLINUX_X86_64_IMAGE: manylinux1
          CIBW_BEFORE_BUILD: git submodule update --init --recursive && pip install .

      - uses: actions/upload-artifact@v2
        with:
          name: wheels
          path: ./dist/*.whl
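As a quick local sanity check (not part of the workflow above), one could install the wheel from ./dist that matches the current interpreter and confirm it imports. A minimal sketch, assuming the built wheel provides the fairseq package:

```python
# Sketch: install the freshly built wheel that matches this interpreter and
# verify that the package imports. Hypothetical helper, not part of CI.
import glob
import subprocess
import sys

# Only the wheel built for the current CPython version/platform will install.
tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
wheels = [w for w in sorted(glob.glob("dist/*.whl")) if tag in w]
if not wheels:
    sys.exit(f"No wheel for {tag} found in ./dist")

subprocess.check_call([sys.executable, "-m", "pip", "install", "--force-reinstall", wheels[0]])
subprocess.check_call([sys.executable, "-c", "import fairseq; print(fairseq.__version__)"])
```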
# JetBrains PyCharm IDE
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# macOS dir files
.DS_Store
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Checkpoints
checkpoints
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# dotenv
.env
# virtualenv
.venv
venv/
ENV/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
# Generated files
/fairseq/temporal_convolution_tbc
/fairseq/modules/*_layer/*_forward.cu
/fairseq/modules/*_layer/*_backward.cu
/fairseq/version.py
# data
data-bin/
# reranking
/examples/reranking/rerank_data
# Cython-generated C++ source files
/fairseq/data/data_utils_fast.cpp
/fairseq/data/token_block_utils_fast.cpp
# VSCODE
.vscode/ftp-sync.json
.vscode/settings.json
# Experimental Folder
experimental/*
# Weights and Biases logs
wandb/
[submodule "fairseq/model_parallel/megatron"]
path = fairseq/model_parallel/megatron
url = https://github.com/ngoyal2707/Megatron-LM
branch = fairseq
# Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to make participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies within all project spaces, and it also applies when
an individual is representing the project or its community in public spaces.
Examples of representing a project or community include using an official
project e-mail address, posting via an official social media account, or acting
as an appointed representative at an online or offline event. Representation of
a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at <conduct@pytorch.org>. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq