

What are the steps to build PyTorch and PyTorch/XLA from source


Building PyTorch and PyTorch/XLA from source involves several steps. Here's a detailed guide on how to accomplish this:

Building PyTorch from Source

1. Clone the PyTorch Repository:
Start by cloning the PyTorch source code from the official GitHub repository. Use the following command:

```bash
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
git checkout main  # or check out a specific release version >= v2.4
git submodule sync
git submodule update --init --recursive
```

2. Install Required Packages:
Install the necessary packages using Conda and Pip. For GPU support, make sure a compatible CUDA or ROCm toolkit is installed first.

```bash
conda install cmake ninja
pip install -r requirements.txt
```

For NVIDIA GPUs, install CUDA. For AMD GPUs, install ROCm.
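Before starting a long build, it helps to confirm which toolchain is actually visible on your `PATH`. A quick check, assuming the standard `nvcc` and `rocminfo` binaries that the CUDA and ROCm toolkits install:

```bash
# Report which GPU toolchain, if any, is visible on PATH.
if command -v nvcc >/dev/null 2>&1; then
    echo "CUDA toolkit found: $(nvcc --version | tail -n 1)"
elif command -v rocminfo >/dev/null 2>&1; then
    echo "ROCm found"
else
    echo "no CUDA or ROCm toolchain detected on PATH"
fi
```

If neither is detected, the build will still succeed but produce a CPU-only PyTorch.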

3. Set Up Environment Variables:
If you want to compile PyTorch with the new C++ ABI enabled, run:

```bash
export _GLIBCXX_USE_CXX11_ABI=1
```
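To confirm which ABI an existing PyTorch build uses (useful when linking C++ extensions against it), you can query the flag that official builds expose from Python:

```bash
# Prints True if this PyTorch build uses the C++11 ABI, False otherwise.
python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"
```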

4. Build PyTorch:
Set the `CMAKE_PREFIX_PATH` and run the setup script to build PyTorch:

```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop
cd ..
```
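The `${CONDA_PREFIX:-…}` expansion in the export above uses `CONDA_PREFIX` when it is set and otherwise derives the prefix from the location of the `conda` binary. You can preview what it will resolve to before building:

```bash
# Preview the value CMAKE_PREFIX_PATH will take on this machine; falls
# back to the parent of conda's bin/ directory when CONDA_PREFIX is unset.
prefix=${CONDA_PREFIX:-"$(dirname "$(command -v conda)")/../"}
echo "CMAKE_PREFIX_PATH=$prefix"
```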

Building PyTorch/XLA from Source

1. Create a GPU Instance:
Ensure you have a GPU-enabled environment. This can be a local machine with a GPU or a cloud-based GPU VM.
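To verify the GPU is actually reachable before kicking off the build, check for the NVIDIA driver (`nvidia-smi` ships with the driver; the query flags below are standard):

```bash
# Confirm an NVIDIA driver and GPU are visible before starting the build.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
    echo "nvidia-smi not found: no NVIDIA driver on PATH"
fi
```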

2. Clone and Build the Repositories:
Clone both PyTorch and PyTorch/XLA, then build each with CUDA support enabled:

```bash
git clone https://github.com/pytorch/pytorch.git
cd pytorch
# build and install PyTorch with CUDA enabled
USE_CUDA=1 python setup.py install
# optionally also produce a reusable wheel
USE_CUDA=1 python setup.py bdist_wheel

# PyTorch/XLA is cloned and built inside the pytorch directory
git clone https://github.com/pytorch/xla.git
cd xla
XLA_CUDA=1 python setup.py install
```

3. Set Environment Variables:
Ensure that your `PATH` and `LD_LIBRARY_PATH` environment variables include CUDA paths.
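For example, with a toolkit installed under the conventional `/usr/local/cuda` symlink (adjust the path to your actual version, e.g. `/usr/local/cuda-12.1`):

```bash
# Prepend the CUDA toolkit to the search paths; /usr/local/cuda is the
# conventional symlink, but your install location may differ.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```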

4. Build and Install PyTorch/XLA:
The `setup.py install` commands in step 2 already perform the build and installation; no separate step is needed. If the build fails, check that the CUDA toolkit is installed and the environment variables from step 3 are set correctly.
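Once both installs finish, a quick smoke test confirms the build worked. Run it from outside the source trees so Python imports the installed packages rather than the unbuilt checkouts; the `__version__` attributes below are present in official builds:

```bash
cd "$HOME"  # leave the source tree so the installed packages are imported
python -c "import torch; print('torch', torch.__version__)"
python -c "import torch_xla; print('torch_xla', torch_xla.__version__)"
```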

Additional Notes

- Building libtorch (C++ version of PyTorch):
If you need to build libtorch, you can use CMake to configure and build it. This involves cloning the PyTorch repository, setting up a build directory, configuring with CMake, and then building and installing it[1].
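A minimal sketch of such a build, assuming an out-of-tree `build/` directory and the `BUILD_PYTHON` option that PyTorch's CMake build exposes; the install prefix here is an illustrative example, not a required path:

```bash
# Configure and build libtorch only (no Python bindings); the install
# prefix is an arbitrary example path.
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
mkdir -p build && cd build
cmake -DBUILD_PYTHON=OFF -DCMAKE_INSTALL_PREFIX=../libtorch-install ..
cmake --build . --target install -j "$(nproc)"
```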

- Using Docker:
For a safer and more controlled environment, consider using Docker to build PyTorch and PyTorch/XLA[5].
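For example, an interactive build container based on NVIDIA's CUDA development image, with the current directory mounted as the workspace (the image tag is illustrative; pick one matching your target CUDA version):

```bash
# Start a throwaway container with GPU access and the current directory
# mounted; the image tag is an example, not a requirement.
docker run --gpus all -it --rm \
    -v "$PWD":/workspace -w /workspace \
    nvidia/cuda:12.1.1-devel-ubuntu22.04 bash
```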

- Troubleshooting:
Ensure all dependencies are correctly installed and environment variables are set. Common issues include a missing or incompatible CUDA or ROCm installation[5].

Citations:
[1] https://www.restack.io/p/pytorch-answer-source-code-build
[2] https://pytorch.org/xla/master/gpu.html
[3] https://pytorch.org/get-started/locally/
[4] https://pytorch.org/xla/release/1.7/index.html
[5] https://stackoverflow.com/questions/71075872/how-to-build-pytorch-source
[6] https://pytorch.org/xla/master/learn/xla-overview.html
[7] https://discuss.pytorch.org/t/build-and-compile-pytorch/115197
[8] https://stackoverflow.com/questions/76199099/how-to-build-pytorch-xla-from-source-on-windows-11-wsl