Commit 277b0be2 authored by Przemek Tredak

Publish nightly version of the documentation


Signed-off-by: Przemek Tredak <ptredak@nvidia.com>
parent 2c996359
@@ -7,6 +7,7 @@ name: 'Build documentation'
 on:
   pull_request:
   workflow_dispatch:
+  workflow_call:
 jobs:
   build_docs:
     runs-on: ubuntu-latest
...
# Copyright (c) 2022-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# See LICENSE for license information.

# A workflow to deploy the nightly version of TE documentation to GitHub Pages
name: Deploy nightly docs

on:
  push:
    branches: [ "main" ]

jobs:
  build:
    uses: ./.github/workflows/build_docs.yml
  prepare:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Download artifact
        uses: actions/download-artifact@v3
        with:
          name: "te_docs"
          path: "html"
      - name: Prepare for pages
        uses: actions/upload-pages-artifact@v1.0.7
        with:
          path: "html"
  deploy:
    needs: prepare
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    permissions:
      pages: write
      id-token: write
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        # The step id is required so that steps.deployment.outputs.page_url
        # in the environment url above resolves.
        id: deployment
        uses: actions/deploy-pages@v2.0.0
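The download step in the `prepare` job looks for an artifact named `te_docs`, so the reusable `build_docs.yml` workflow it calls must publish one under the same name. A minimal sketch of the matching upload step, assuming a hypothetical step name and build output directory (neither is taken from the actual workflow):

```yaml
# Hypothetical upload step inside build_docs.yml; the artifact name
# must match the "te_docs" used by the download step in the deploy
# workflow above.
- name: Upload docs artifact
  uses: actions/upload-artifact@v3
  with:
    name: "te_docs"
    path: "docs/_build/html"   # assumed Sphinx build output directory
```

Because the artifact name is the only contract between the two workflows, renaming it in one file without the other silently breaks the nightly deploy.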
@@ -14,7 +14,7 @@ Transformer Engine (TE) is a library for accelerating Transformer models on NVID
 using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower
 memory utilization in both training and inference. TE provides a collection of highly optimized
 building blocks for popular Transformer architectures and an automatic mixed precision-like API that
 can be used seamlessly with your own framework-specific code. TE also includes a framework agnostic
 C++ API that can be integrated with other deep learning libraries to enable FP8 support for Transformers.
 As the number of parameters in Transformer models continues to grow, training and inference for
...
@@ -64,6 +64,7 @@ extensions = [
     'sphinx.ext.autodoc',
     'sphinx.ext.mathjax',
     'sphinx.ext.napoleon',
+    'sphinx.ext.ifconfig',
     'nbsphinx',
     'breathe',
     'autoapi.extension']
...
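The newly enabled `sphinx.ext.ifconfig` extension evaluates a Python expression against Sphinx configuration values at build time, which is what lets the `index.rst` change below show its warning only for nightly builds. A minimal sketch of the condition being evaluated, assuming the nightly build sets a dev-style `release` string in `conf.py` (the version strings here are hypothetical examples, not the project's actual values):

```python
# Sketch of the check behind the `.. ifconfig:: "dev" in release`
# directive: the directive body is rendered only when the expression
# evaluates to True against the Sphinx config namespace.
nightly_release = "1.1.0.dev20230101"  # hypothetical nightly version
stable_release = "1.0.0"               # hypothetical stable version

print("dev" in nightly_release)  # True  -> warning rendered
print("dev" in stable_release)   # False -> warning omitted
```

This keeps a single documentation source: the stable pipeline simply builds with a `release` value that does not contain `"dev"`.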
@@ -6,6 +6,15 @@
 Transformer Engine documentation
 ==============================================
+.. ifconfig:: "dev" in release
+
+   .. warning::
+
+      You are currently viewing an unstable developer preview of the documentation.
+      To see the documentation for the latest stable release, refer to:
+
+      * `Release Notes <https://docs.nvidia.com/deeplearning/transformer-engine/release-notes/index.html>`_
+      * `Developer Guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html>`_ (stable version of this page)
 .. include:: ../README.rst
    :start-after: overview-begin-marker-do-not-remove
    :end-before: overview-end-marker-do-not-remove
...