Metadata-Version: 2.1
Name: torch
Version: 2.5.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Download-URL: https://github.com/pytorch/pytorch/tags
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3-Clause
Keywords: pytorch,machine learning
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Programming Language :: C++
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
Requires-Dist: filelock
Requires-Dist: typing-extensions>=4.8.0
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: nvidia-cuda-nvrtc-cu12==12.4.127; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-runtime-cu12==12.4.127; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-cupti-cu12==12.4.127; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cudnn-cu12==9.1.0.70; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cublas-cu12==12.4.5.8; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cufft-cu12==11.2.1.3; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-curand-cu12==10.3.5.147; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusolver-cu12==11.6.1.9; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusparse-cu12==12.3.1.170; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nccl-cu12==2.21.5; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nvtx-cu12==12.4.127; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nvjitlink-cu12==12.4.127; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: triton==3.1.0; platform_system == "Linux" and platform_machine == "x86_64" and python_version < "3.13"
Requires-Dist: sympy==1.12.1; python_version == "3.8"
Requires-Dist: setuptools; python_version >= "3.12"
Requires-Dist: sympy==1.13.1; python_version >= "3.9"
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum>=3.3; extra == "opt-einsum"
Provides-Extra: optree
Requires-Dist: optree>=0.12.0; extra == "optree"



--------------------------------------------------------------------------------

PyTorch is a Python package that provides two high-level features:

- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.

Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.org](https://hud.pytorch.org/ci/pytorch/pytorch/main).

<!-- toc -->

- [More About PyTorch](#more-about-pytorch)
  - [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library)
  - [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd)
  - [Python First](#python-first)
  - [Imperative Experiences](#imperative-experiences)
  - [Fast and Lean](#fast-and-lean)
  - [Extensions Without Pain](#extensions-without-pain)
- [Installation](#installation)
  - [Binaries](#binaries)
    - [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
  - [From Source](#from-source)
    - [Prerequisites](#prerequisites)
      - [NVIDIA CUDA Support](#nvidia-cuda-support)
      - [AMD ROCm Support](#amd-rocm-support)
      - [Intel GPU Support](#intel-gpu-support)
    - [Get the PyTorch Source](#get-the-pytorch-source)
    - [Install Dependencies](#install-dependencies)
    - [Install PyTorch](#install-pytorch)
      - [Adjust Build Options (Optional)](#adjust-build-options-optional)
  - [Docker Image](#docker-image)
    - [Using pre-built images](#using-pre-built-images)
    - [Building the image yourself](#building-the-image-yourself)
  - [Building the Documentation](#building-the-documentation)
  - [Previous Versions](#previous-versions)
- [Getting Started](#getting-started)
- [Resources](#resources)
- [Communication](#communication)
- [Releases and Contributing](#releases-and-contributing)
- [The Team](#the-team)
- [License](#license)

<!-- tocstop -->

## More About PyTorch

[Learn the basics of PyTorch](https://pytorch.org/tutorials/beginner/basics/intro.html)

At a granular level, PyTorch is a library that consists of the following components:

| Component | Description |
| ---- | --- |
| [**torch**](https://pytorch.org/docs/stable/torch.html) | A Tensor library like NumPy, with strong GPU support |
| [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
| [**torch.jit**](https://pytorch.org/docs/stable/jit.html) | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code |
| [**torch.nn**](https://pytorch.org/docs/stable/nn.html) | A neural networks library deeply integrated with autograd designed for maximum flexibility |
| [**torch.multiprocessing**](https://pytorch.org/docs/stable/multiprocessing.html) | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
| [**torch.utils**](https://pytorch.org/docs/stable/data.html) | DataLoader and other utility functions for convenience |
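
For example, `torch.utils.data` provides `Dataset` and `DataLoader` for batched, shuffled iteration over your data. A minimal sketch with synthetic tensors:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

features = torch.randn(100, 8)        # 100 samples, 8 features each
labels = torch.randint(0, 2, (100,))  # binary labels

dataset = TensorDataset(features, labels)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for batch_features, batch_labels in loader:
    # Each iteration yields one shuffled mini-batch.
    print(batch_features.shape, batch_labels.shape)
    break
```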

Usually, PyTorch is used either as:

- A replacement for NumPy to use the power of GPUs.
- A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

### A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).



PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the
computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs
such as slicing, indexing, mathematical operations, linear algebra, and reductions.
And they are fast!
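
As a rough sketch of what that looks like in practice (the sizes here are arbitrary):

```python
import torch

x = torch.randn(1000, 1000)
y = torch.randn(1000, 1000)

z = x @ y              # matrix multiplication
s = z[::2, :10].sum()  # slicing, indexing, and a reduction

# The same code runs on the GPU if one is available.
if torch.cuda.is_available():
    x, y = x.cuda(), y.cuda()
    z = x @ y          # identical API, now GPU-accelerated
```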

### Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world.
One has to build a neural network and reuse the same structure again and again.
Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to
change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes
from several research papers on this topic, as well as current and past work such as
[torch-autograd](https://github.com/twitter/torch-autograd),
[autograd](https://github.com/HIPS/autograd),
[Chainer](https://chainer.org), etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
You get the best of speed and flexibility for your crazy research.
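
To make the idea concrete, here is a minimal sketch of a network whose depth changes on every forward pass; the graph is rebuilt (and the tape replayed) for each run, so the plain Python control flow below needs no special graph-building API:

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 10)

    def forward(self, x):
        # A run-dependent number of layer applications: ordinary Python
        # control flow decides the structure of this run's graph.
        for _ in range(torch.randint(1, 4, (1,)).item()):
            x = torch.relu(self.layer(x))
        return x

net = DynamicNet()
out = net(torch.randn(2, 10))
out.sum().backward()  # autograd differentiates this particular run
```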



### Python First

PyTorch is not a Python binding into a monolithic C++ framework.
It is built to be deeply integrated into Python.
You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc.
You can write your new neural network layers in Python itself, using your favorite libraries,
and use packages such as [Cython](https://cython.org/) and [Numba](http://numba.pydata.org/).
Our goal is to not reinvent the wheel where appropriate.
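
For instance, a new layer is just a Python class; the names below are illustrative, not part of the library:

```python
import torch
import torch.nn as nn

class ScaledResidual(nn.Module):
    """y = x + scale * linear(x), with a learnable scalar `scale`."""

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x + self.scale * self.linear(x)

layer = ScaledResidual(16)
y = layer(torch.randn(4, 16))
y.sum().backward()  # gradients flow through the custom layer automatically
```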

### Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use.
When you execute a line of code, it gets executed. There isn't an asynchronous view of the world.
When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward.
The stack trace points to exactly where your code was defined.
We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
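
A small illustration of this eager model: every intermediate value is a real tensor you can print or inspect at the moment it is created.

```python
import torch

x = torch.arange(6.0).reshape(2, 3)
print(x.mean())  # runs immediately; no session or deferred graph
y = x * 2
print(y)         # the result is already materialized here
```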

### Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries
such as [Intel MKL](https://software.intel.com/mkl) and NVIDIA ([cuDNN](https://developer.nvidia.com/cudnn), [NCCL](https://developer.nvidia.com/nccl)) to maximize speed.
At the core, its CPU and GPU Tensor and neural network backends
are mature and have been tested for years.

Hence, PyTorch is quite fast, whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives.
We've written custom memory allocators for the GPU to make sure that
your deep learning models are maximally memory efficient.
This enables you to train bigger deep learning models than before.

### Extensions Without Pain

Writing new neural network modules, or interfacing with PyTorch's Tensor API, is designed to be straightforward,
with minimal abstractions.

You can write new neural network layers in Python using the torch API
[or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and has minimal boilerplate.
No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).
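
As a sketch of the NumPy route, a custom `torch.autograd.Function` can compute both its forward and backward passes with NumPy (this mirrors the NumPy-extensions tutorial linked above; the `NumpyExp` name is just for illustration):

```python
import numpy as np
import torch

class NumpyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        result = torch.from_numpy(np.exp(x.detach().numpy()))
        ctx.save_for_backward(result)  # needed for the gradient
        return result

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        return grad_output * result    # d/dx exp(x) = exp(x)

x = torch.randn(3, requires_grad=True)
y = NumpyExp.apply(x)
y.sum().backward()  # the backward() above is invoked by autograd
```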

## Installation

### Binaries
Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)

#### NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch).

They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.

### From Source

#### Prerequisites
If you are installing from source, you will need:
- Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
- A compiler that fully supports C++17, such as clang or gcc (on Linux, gcc 9.4.0 or newer is required)
- Visual Studio or Visual Studio Build Tools on Windows

\* PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise,
Professional, or Community Editions. You can also install the build tools from
https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not*
come with Visual Studio Code by default.

\* We highly recommend installing an [Anaconda](https://www.anaconda.com/download) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.

An example of environment setup is shown below:

* Linux:

```bash
source <CONDA_INSTALL_DIR>/bin/activate
conda create -y -n <CONDA_NAME>
conda activate <CONDA_NAME>
```

* Windows:

```bash
source <CONDA_INSTALL_DIR>\Scripts\activate.bat
conda create -y -n <CONDA_NAME>
conda activate <CONDA_NAME>
call "C:\Program Files\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
```

##### NVIDIA CUDA Support
If you want to compile with CUDA support, [select a supported version of CUDA from our support matrix](https://pytorch.org/get-started/locally/), then install the following:
- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
- [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v8.5 or above
- [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA

Note: You can refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/reference/support-matrix.html) for the cuDNN versions supported by the various CUDA versions, CUDA drivers, and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
Other potentially useful environment variables may be found in `setup.py`.
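
For example, a CPU-only source build looks like this (a sketch; `python setup.py develop` is covered under [Install PyTorch](#install-pytorch) below):

```bash
export USE_CUDA=0
python setup.py develop
```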

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).

##### AMD ROCm Support
If you want to compile with ROCm support, install
- [AMD ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) 4.0 or above
- ROCm is currently supported only for Linux systems.

If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
Other potentially useful environment variables may be found in `setup.py`.

##### Intel GPU Support
If you want to compile with Intel GPU support:
- Follow the [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html) instructions.
- Intel GPU is supported for Linux and Windows.

If you want to disable Intel GPU support, export the environment variable `USE_XPU=0`.
Other potentially useful environment variables may be found in `setup.py`.

#### Get the PyTorch Source
```bash
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```

#### Install Dependencies

**Common**

```bash
conda install cmake ninja
# Run this command on native Windows
conda install rust
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section above
pip install -r requirements.txt
```

**On Linux**

```bash
pip install mkl-static mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda121  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo

# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
# For Intel GPU support, please explicitly `export USE_XPU=1` before running the command.
make triton
```

**On macOS**

```bash
# Add this package on Intel x86 processor machines only
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv
```

**On Windows**

```bash
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39
```

#### Install PyTorch

**On Linux**

If you would like to compile PyTorch with [new C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) enabled, then first run this command:
```bash
export _GLIBCXX_USE_CXX11_ABI=1
```

Please **note** that starting from PyTorch 2.5, the PyTorch build with XPU supports both new and old C++ ABIs. Previously, XPU only supported the new C++ ABI. If you want to compile with Intel GPU support, please follow [Intel GPU Support](#intel-gpu-support).

If you're compiling for AMD ROCm, then first run this command:
```bash
# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py
```

Install PyTorch
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop
```

> _Aside:_ If you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
>
> ```plaintext
> build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
> collect2: error: ld returned 1 exit status
> error: command 'g++' failed with exit status 1
> ```
>
> This is caused by `ld` from the Conda environment shadowing the system `ld`. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.
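
Once the build finishes, a quick sanity check (a minimal sketch) confirms that the freshly built package is the one being imported:

```python
import torch

print(torch.__version__)          # should report the locally built version
print(torch.cuda.is_available())  # True only for a CUDA build with a visible GPU
```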

**On macOS**

```bash
python3 setup.py develop
```

**On Windows**

If you want to build legacy Python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-on-legacy-code-and-cuda).

**CPU-only builds**

In this mode PyTorch computations will run on your CPU, not your GPU.

```cmd
python setup.py develop
```

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the build environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instructions [here](https://github.com/pytorch/pytorch/blob/main/docs/source/notes/windows.rst#building-from-source) are an example of setting up both MKL and Intel OpenMP. Without these configurations for CMake, the Microsoft Visual C OpenMP runtime (vcomp) will be used.

**CUDA based build**

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.

[NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
NVTX is a part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an already installed CUDA toolkit, run the CUDA installation once again and check the corresponding checkbox.
Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017 / 2019 and Ninja are supported as the generators of CMake. If `ninja.exe` is detected in `PATH`, then Ninja will be used as the default generator; otherwise, it will use VS 2017 / 2019.
If Ninja is selected as the generator, the latest MSVC will get selected as the underlying toolchain.

Additional libraries such as
[Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a. MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/main/.ci/pytorch/win-test-helpers/installation-helpers) to install them.

You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for some other environment variable configurations.

```cmd
cmd

:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py develop
```

##### Adjust Build Options (Optional)

You can adjust the configuration of cmake variables optionally (without building first) by doing
the following. For example, adjusting the pre-detected directories for CuDNN or BLAS can be done
with such a step.

On Linux
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build
```

On macOS
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build
```

### Docker Image

#### Using pre-built images

You can also pull a pre-built docker image from Docker Hub and run with docker v19.03+:

```bash
docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest
```

Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
for multithreaded data loaders) the default shared memory segment size that the container runs with is not enough, and you
should increase the shared memory size with either the `--ipc=host` or the `--shm-size` command line option to `docker run`.
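
For example, to raise the shared-memory limit instead of sharing the host IPC namespace (the 8g value below is an arbitrary example; size it to your data loaders):

```bash
docker run --gpus all --rm -ti --shm-size=8g pytorch/pytorch:latest
```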

#### Building the image yourself

**NOTE:** Must be built with a docker version > 18.06

The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8.
You can pass the `PYTHON_VERSION=x.y` make variable to specify which Python version is to be used by Miniconda, or leave it
unset to use the default.

```bash
make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch
```

You can also pass the `CMAKE_VARS="..."` environment variable to specify additional CMake variables to be passed to CMake during the build.
See [setup.py](./setup.py) for the list of available variables.

```bash
make -f docker.Makefile
```

### Building the Documentation

To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org) and the
readthedocs theme.

```bash
cd docs/
pip install -r requirements.txt
```
You can then build the documentation by running `make <format>` from the
`docs/` folder. Run `make` to get a list of all available output formats.
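
For example, to build the HTML docs (in a standard Sphinx layout, output lands under `docs/build/html`):

```bash
cd docs/
make html
```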

If you get a katex error run `npm install katex`. If it persists, try
`npm install -g katex`.

> Note: if you installed `nodejs` with a different package manager (e.g.,
> `conda`) then `npm` will probably install a version of `katex` that is not
> compatible with your version of `nodejs` and doc builds will fail.
> A combination of versions that is known to work is `node@6.13.1` and
> `katex@0.13.18`. To install the latter with `npm` you can run
> ```npm install -g katex@0.13.18```

### Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found
on [our website](https://pytorch.org/previous-versions).

## Getting Started

Three-pointers to get you started:
- [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
- [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples)
- [The API Reference](https://pytorch.org/docs/)
- [Glossary](https://github.com/pytorch/pytorch/blob/main/GLOSSARY.md)

## Resources

* [PyTorch.org](https://pytorch.org/)
* [PyTorch Tutorials](https://pytorch.org/tutorials/)
* [PyTorch Examples](https://github.com/pytorch/examples)
* [PyTorch Models](https://pytorch.org/hub/)
* [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188)
* [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229)
* [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch)
* [PyTorch Twitter](https://twitter.com/PyTorch)
* [PyTorch Blog](https://pytorch.org/blog/)
* [PyTorch YouTube](https://www.youtube.com/channel/UCWXI5YeOsh03QvJ59PMaXFw)

## Communication
* Forums: Discuss implementations, research, etc. https://discuss.pytorch.org
* GitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc.
* Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is [PyTorch Forums](https://discuss.pytorch.org). If you need a Slack invite, please fill out this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
* Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign up here: https://eepurl.com/cbG0rv
* Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch
* For brand guidelines, please visit our website at [pytorch.org](https://pytorch.org/)

## Releases and Contributing

Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us.
Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.

To learn more about making a contribution to PyTorch, please see our [Contribution page](CONTRIBUTING.md). For more information about PyTorch releases, see the [Release page](RELEASE.md).

## The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by [Soumith Chintala](http://soumith.ch), [Gregory Chanan](https://github.com/gchanan), [Dmytro Dzhulgakov](https://github.com/dzhulgakov), [Edward Yang](https://github.com/ezyang), and [Nikita Shulga](https://github.com/malfet) with major contributions coming from hundreds of talented individuals in various forms and means.
A non-exhaustive but growing list needs to mention: [Trevor Killeen](https://github.com/killeent), [Sasank Chilamkurthy](https://github.com/chsasank), [Sergey Zagoruyko](https://github.com/szagoruyko), [Adam Lerer](https://github.com/adamlerer), [Francisco Massa](https://github.com/fmassa), [Alykhan Tejani](https://github.com/alykhantejani), [Luca Antiga](https://github.com/lantiga), [Alban Desmaison](https://github.com/albanD), [Andreas Koepf](https://github.com/andreaskoepf), [James Bradbury](https://github.com/jamesb93), [Zeming Lin](https://github.com/ebetica), [Yuandong Tian](https://github.com/yuandong-tian), [Guillaume Lample](https://github.com/glample), [Marat Dukhan](https://github.com/Maratyszcza), [Natalia Gimelshein](https://github.com/ngimel), [Christian Sarofeen](https://github.com/csarofeen), [Martin Raison](https://github.com/martinraison), [Edward Yang](https://github.com/ezyang), [Zachary Devito](https://github.com/zdevito).

Note: This project is unrelated to [hughperkins/pytorch](https://github.com/hughperkins/pytorch) with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.

## License

PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.