Yes, it is possible to train a model using PyTorch Lightning on a single TPU core. PyTorch Lightning supports training on either a single TPU core or 8 TPU cores. To train on a single TPU core, you specify the TPU core ID in a list when setting up the `Trainer`. For example, to train on TPU core ID 5, you would use something like this:
```python
import pytorch_lightning as pl

my_model = MyLightningModule()  # your LightningModule subclass

# Train on the single TPU core with ID 5 by passing the ID in a list
trainer = pl.Trainer(tpu_cores=[5])
trainer.fit(my_model)
```
The documentation does not show a worked example of passing a single core ID in a list like this, although it does mention that a core ID can be passed for single-core training. The more common, well-documented approach is to set `tpu_cores=1`, which tells PyTorch Lightning to train on one TPU core[1][2][5]:
```python
trainer = pl.Trainer(tpu_cores=1)  # Train on a single TPU core
```
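
For completeness, below is a minimal end-to-end sketch of single-core training. It assumes a TPU runtime with `torch_xla` installed and a Lightning version that still accepts the `tpu_cores` argument (later releases replace it with `accelerator="tpu"` and `devices`). The tiny module and random dataset are purely illustrative, not part of the documentation.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class MyLightningModule(pl.LightningModule):
    """A toy LightningModule used only to demonstrate single-core TPU training."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


# Random data standing in for a real dataset
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))),
    batch_size=16,
)

# Single-core TPU training: tpu_cores=1 (or tpu_cores=[core_id] for a specific core)
trainer = pl.Trainer(tpu_cores=1, max_epochs=1)
trainer.fit(MyLightningModule(), train_loader)
```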
Citations:
[1] https://pytorch-lightning.readthedocs.io/en/1.0.8/tpu.html
[2] https://lightning.ai/docs/pytorch/1.5.9/advanced/tpu.html
[3] https://cloud.google.com/blog/products/ai-machine-learning/train-ml-models-with-pytorch-lightning-on-tpus
[4] https://pytorch-lightning.readthedocs.io/en/1.0.8/trainer.html
[5] https://pytorch-lightning.readthedocs.io/en/1.2.10/advanced/tpu.html
[6] https://cloud.google.com/blog/topics/developers-practitioners/scaling-deep-learning-workloads-pytorch-xla-and-cloud-tpu-vm
[7] https://lightning.ai/docs/pytorch/stable/notebooks/lightning_examples/mnist-tpu-training.html
[8] https://pytorch.org/xla/release/2.2/index.html
[9] https://www.youtube.com/watch?v=eBZciVDr21o