René's URL Explorer Experiment


Title: Writing Custom Datasets, DataLoaders and Transforms — PyTorch Tutorials 2.9.0+cu128 documentation

direct link

Domain: docs.PyTorch.org


Hey, it has JSON-LD scripts:
    {
       "@context": "https://schema.org",
       "@type": "Article",
       "name": "Writing Custom Datasets, DataLoaders and Transforms",
       "headline": "Writing Custom Datasets, DataLoaders and Transforms",
       "description": "PyTorch Documentation. Explore PyTorch, an open-source machine learning library that accelerates the path from research prototyping to production deployment. Discover tutorials, API references, and guides to help you build and deploy deep learning models efficiently.",
       "url": "/beginner/data_loading_tutorial.html",
       "articleBody": "Note Go to the end to download the full example code. Writing Custom Datasets, DataLoaders and Transforms# Author: Sasank Chilamkurthy A lot of effort in solving any machine learning problem goes into preparing the data. PyTorch provides many tools to make data loading easy and hopefully, to make your code more readable. In this tutorial, we will see how to load and preprocess/augment data from a non trivial dataset. To run this tutorial, please make sure the following packages are installed: scikit-image: For image io and transforms pandas: For easier csv parsing import os import torch import pandas as pd from skimage import io, transform import numpy as np import matplotlib.pyplot as plt from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils # Ignore warnings import warnings warnings.filterwarnings(\"ignore\") plt.ion() # interactive mode \u003ccontextlib.ExitStack object at 0x7fba75108f10\u003e The dataset we are going to deal with is that of facial pose. This means that a face is annotated like this: Over all, 68 different landmark points are annotated for each face. Note Download the dataset from here so that the images are in a directory named \u2018data/faces/\u2019. This dataset was actually generated by applying excellent dlib\u2019s pose estimation on a few images from imagenet tagged as \u2018face\u2019. Dataset comes with a .csv file with annotations which looks like this: image_name,part_0_x,part_0_y,part_1_x,part_1_y,part_2_x, ... ,part_67_x,part_67_y 0805personali01.jpg,27,83,27,98, ... 84,134 1084239450_e76e00b7e7.jpg,70,236,71,257, ... ,128,312 Let\u2019s take a single image name and its annotations from the CSV, in this case row index number 65 for person-7.jpg just as an example. Read it, store the image name in img_name and store its annotations in an (L, 2) array landmarks where L is the number of landmarks in that row. landmarks_frame = pd.read_csv(\u0027data/faces/face_landmarks.csv\u0027) n = 65 img_name = landmarks_frame.iloc[n, 0] landmarks = landmarks_frame.iloc[n, 1:] landmarks = np.asarray(landmarks, dtype=float).reshape(-1, 2) print(\u0027Image name: {}\u0027.format(img_name)) print(\u0027Landmarks shape: {}\u0027.format(landmarks.shape)) print(\u0027First 4 Landmarks: {}\u0027.format(landmarks[:4])) Image name: person-7.jpg Landmarks shape: (68, 2) First 4 Landmarks: [[32. 65.] [33. 76.] [34. 86.] [34. 97.]] Let\u2019s write a simple helper function to show an image and its landmarks and use it to show a sample. def show_landmarks(image, landmarks): \"\"\"Show image with landmarks\"\"\" plt.imshow(image) plt.scatter(landmarks[:, 0], landmarks[:, 1], s=10, marker=\u0027.\u0027, c=\u0027r\u0027) plt.pause(0.001) # pause a bit so that plots are updated plt.figure() show_landmarks(io.imread(os.path.join(\u0027data/faces/\u0027, img_name)), landmarks) plt.show() Dataset class# torch.utils.data.Dataset is an abstract class representing a dataset. Your custom dataset should inherit Dataset and override the following methods: __len__ so that len(dataset) returns the size of the dataset. __getitem__ to support the indexing such that dataset[i] can be used to get \\(i\\)th sample. Let\u2019s create a dataset class for our face landmarks dataset. We will read the csv in __init__ but leave the reading of images to __getitem__. This is memory efficient because all the images are not stored in the memory at once but read as required. 
Sample of our dataset will be a dict {\u0027image\u0027: image, \u0027landmarks\u0027: landmarks}. Our dataset will take an optional argument transform so that any required processing can be applied on the sample. We will see the usefulness of transform in the next section. class FaceLandmarksDataset(Dataset): \"\"\"Face Landmarks dataset.\"\"\" def __init__(self, csv_file, root_dir, transform=None): \"\"\" Arguments: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. \"\"\" self.landmarks_frame = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.landmarks_frame) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() img_name = os.path.join(self.root_dir, self.landmarks_frame.iloc[idx, 0]) image = io.imread(img_name) landmarks = self.landmarks_frame.iloc[idx, 1:] landmarks = np.array([landmarks], dtype=float).reshape(-1, 2) sample = {\u0027image\u0027: image, \u0027landmarks\u0027: landmarks} if self.transform: sample = self.transform(sample) return sample Let\u2019s instantiate this class and iterate through the data samples. We will print the sizes of first 4 samples and show their landmarks. face_dataset = FaceLandmarksDataset(csv_file=\u0027data/faces/face_landmarks.csv\u0027, root_dir=\u0027data/faces/\u0027) fig = plt.figure() for i, sample in enumerate(face_dataset): print(i, sample[\u0027image\u0027].shape, sample[\u0027landmarks\u0027].shape) ax = plt.subplot(1, 4, i + 1) plt.tight_layout() ax.set_title(\u0027Sample #{}\u0027.format(i)) ax.axis(\u0027off\u0027) show_landmarks(**sample) if i == 3: plt.show() break 0 (324, 215, 3) (68, 2) 1 (500, 333, 3) (68, 2) 2 (250, 258, 3) (68, 2) 3 (434, 290, 3) (68, 2) Transforms# One issue we can see from the above is that the samples are not of the same size. Most neural networks expect the images of a fixed size. Therefore, we will need to write some preprocessing code. Let\u2019s create three transforms: Rescale: to scale the image RandomCrop: to crop from image randomly. This is data augmentation. ToTensor: to convert the numpy images to torch images (we need to swap axes). We will write them as callable classes instead of simple functions so that parameters of the transform need not be passed every time it\u2019s called. For this, we just need to implement __call__ method and if required, __init__ method. We can then use a transform like this: tsfm = Transform(params) transformed_sample = tsfm(sample) Observe below how these transforms had to be applied both on the image and landmarks. class Rescale(object): \"\"\"Rescale the image in a sample to a given size. Args: output_size (tuple or int): Desired output size. If tuple, output is matched to output_size. If int, smaller of image edges is matched to output_size keeping aspect ratio the same. 
\"\"\" def __init__(self, output_size): assert isinstance(output_size, (int, tuple)) self.output_size = output_size def __call__(self, sample): image, landmarks = sample[\u0027image\u0027], sample[\u0027landmarks\u0027] h, w = image.shape[:2] if isinstance(self.output_size, int): if h \u003e w: new_h, new_w = self.output_size * h / w, self.output_size else: new_h, new_w = self.output_size, self.output_size * w / h else: new_h, new_w = self.output_size new_h, new_w = int(new_h), int(new_w) img = transform.resize(image, (new_h, new_w)) # h and w are swapped for landmarks because for images, # x and y axes are axis 1 and 0 respectively landmarks = landmarks * [new_w / w, new_h / h] return {\u0027image\u0027: img, \u0027landmarks\u0027: landmarks} class RandomCrop(object): \"\"\"Crop randomly the image in a sample. Args: output_size (tuple or int): Desired output size. If int, square crop is made. \"\"\" def __init__(self, output_size): assert isinstance(output_size, (int, tuple)) if isinstance(output_size, int): self.output_size = (output_size, output_size) else: assert len(output_size) == 2 self.output_size = output_size def __call__(self, sample): image, landmarks = sample[\u0027image\u0027], sample[\u0027landmarks\u0027] h, w = image.shape[:2] new_h, new_w = self.output_size top = np.random.randint(0, h - new_h + 1) left = np.random.randint(0, w - new_w + 1) image = image[top: top + new_h, left: left + new_w] landmarks = landmarks - [left, top] return {\u0027image\u0027: image, \u0027landmarks\u0027: landmarks} class ToTensor(object): \"\"\"Convert ndarrays in sample to Tensors.\"\"\" def __call__(self, sample): image, landmarks = sample[\u0027image\u0027], sample[\u0027landmarks\u0027] # swap color axis because # numpy image: H x W x C # torch image: C x H x W image = image.transpose((2, 0, 1)) return {\u0027image\u0027: torch.from_numpy(image), \u0027landmarks\u0027: torch.from_numpy(landmarks)} Note In the example above, RandomCrop uses an external library\u2019s random number generator (in this case, Numpy\u2019s np.random.int). This can result in unexpected behavior with DataLoader (see here). In practice, it is safer to stick to PyTorch\u2019s random number generator, e.g. by using torch.randint instead. Compose transforms# Now, we apply the transforms on a sample. Let\u2019s say we want to rescale the shorter side of the image to 256 and then randomly crop a square of size 224 from it. i.e, we want to compose Rescale and RandomCrop transforms. torchvision.transforms.Compose is a simple callable class which allows us to do this. scale = Rescale(256) crop = RandomCrop(128) composed = transforms.Compose([Rescale(256), RandomCrop(224)]) # Apply each of the above transforms on sample. fig = plt.figure() sample = face_dataset[65] for i, tsfrm in enumerate([scale, crop, composed]): transformed_sample = tsfrm(sample) ax = plt.subplot(1, 3, i + 1) plt.tight_layout() ax.set_title(type(tsfrm).__name__) show_landmarks(**transformed_sample) plt.show() Iterating through the dataset# Let\u2019s put this all together to create a dataset with composed transforms. To summarize, every time this dataset is sampled: An image is read from the file on the fly Transforms are applied on the read image Since one of the transforms is random, data is augmented on sampling We can iterate over the created dataset with a for i in range loop as before. 
transformed_dataset = FaceLandmarksDataset(csv_file=\u0027data/faces/face_landmarks.csv\u0027, root_dir=\u0027data/faces/\u0027, transform=transforms.Compose([ Rescale(256), RandomCrop(224), ToTensor() ])) for i, sample in enumerate(transformed_dataset): print(i, sample[\u0027image\u0027].size(), sample[\u0027landmarks\u0027].size()) if i == 3: break 0 torch.Size([3, 224, 224]) torch.Size([68, 2]) 1 torch.Size([3, 224, 224]) torch.Size([68, 2]) 2 torch.Size([3, 224, 224]) torch.Size([68, 2]) 3 torch.Size([3, 224, 224]) torch.Size([68, 2]) However, we are losing a lot of features by using a simple for loop to iterate over the data. In particular, we are missing out on: Batching the data Shuffling the data Load the data in parallel using multiprocessing workers. torch.utils.data.DataLoader is an iterator which provides all these features. Parameters used below should be clear. One parameter of interest is collate_fn. You can specify how exactly the samples need to be batched using collate_fn. However, default collate should work fine for most use cases. dataloader = DataLoader(transformed_dataset, batch_size=4, shuffle=True, num_workers=0) # Helper function to show a batch def show_landmarks_batch(sample_batched): \"\"\"Show image with landmarks for a batch of samples.\"\"\" images_batch, landmarks_batch = \\ sample_batched[\u0027image\u0027], sample_batched[\u0027landmarks\u0027] batch_size = len(images_batch) im_size = images_batch.size(2) grid_border_size = 2 grid = utils.make_grid(images_batch) plt.imshow(grid.numpy().transpose((1, 2, 0))) for i in range(batch_size): plt.scatter(landmarks_batch[i, :, 0].numpy() + i * im_size + (i + 1) * grid_border_size, landmarks_batch[i, :, 1].numpy() + grid_border_size, s=10, marker=\u0027.\u0027, c=\u0027r\u0027) plt.title(\u0027Batch from dataloader\u0027) # if you are using Windows, uncomment the next line and indent the for loop. # you might need to go back and change ``num_workers`` to 0. # if __name__ == \u0027__main__\u0027: for i_batch, sample_batched in enumerate(dataloader): print(i_batch, sample_batched[\u0027image\u0027].size(), sample_batched[\u0027landmarks\u0027].size()) # observe 4th batch and stop. if i_batch == 3: plt.figure() show_landmarks_batch(sample_batched) plt.axis(\u0027off\u0027) plt.ioff() plt.show() break 0 torch.Size([4, 3, 224, 224]) torch.Size([4, 68, 2]) 1 torch.Size([4, 3, 224, 224]) torch.Size([4, 68, 2]) 2 torch.Size([4, 3, 224, 224]) torch.Size([4, 68, 2]) 3 torch.Size([4, 3, 224, 224]) torch.Size([4, 68, 2]) Afterword: torchvision# In this tutorial, we have seen how to write and use datasets, transforms and dataloader. torchvision package provides some common datasets and transforms. You might not even have to write custom classes. One of the more generic datasets available in torchvision is ImageFolder. It assumes that images are organized in the following way: root/ants/xxx.png root/ants/xxy.jpeg root/ants/xxz.png . . . root/bees/123.jpg root/bees/nsdf3.png root/bees/asd932_.png where \u2018ants\u2019, \u2018bees\u2019 etc. are class labels. Similarly generic transforms which operate on PIL.Image like RandomHorizontalFlip, Scale, are also available. 
You can use these to write a dataloader like this: import torch from torchvision import transforms, datasets data_transform = transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) hymenoptera_dataset = datasets.ImageFolder(root=\u0027hymenoptera_data/train\u0027, transform=data_transform) dataset_loader = torch.utils.data.DataLoader(hymenoptera_dataset, batch_size=4, shuffle=True, num_workers=4) For an example with training code, please see Transfer Learning for Computer Vision Tutorial. Total running time of the script: (0 minutes 1.982 seconds) Download Jupyter notebook: data_loading_tutorial.ipynb Download Python source code: data_loading_tutorial.py Download zipped: data_loading_tutorial.zip",
       "author": {
         "@type": "Organization",
         "name": "PyTorch Contributors",
         "url": "https://pytorch.org"
       },
       "image": "https://pytorch.org/docs/stable/_static/img/pytorch_seo.png",
       "mainEntityOfPage": {
         "@type": "WebPage",
         "@id": "/beginner/data_loading_tutorial.html"
       },
       "datePublished": "2023-01-01T00:00:00Z",
       "dateModified": "2023-01-01T00:00:00Z"
     }
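
Since the interesting part of this capture is the JSON-LD block itself, here is a minimal sketch of how such a block could be pulled out of a page with nothing but the Python standard library. The URL is the tutorial page captured above; the JsonLdCollector helper name is mine, not part of any real crawler API.

    import json
    import urllib.request
    from html.parser import HTMLParser

    URL = "https://docs.pytorch.org/tutorials/beginner/data_loading_tutorial.html"

    class JsonLdCollector(HTMLParser):
        """Collects the text of every <script type="application/ld+json"> block."""
        def __init__(self):
            super().__init__()
            self._inside = False
            self._buf = []
            self.blocks = []

        def handle_starttag(self, tag, attrs):
            if tag == "script" and dict(attrs).get("type") == "application/ld+json":
                self._inside = True
                self._buf = []

        def handle_data(self, data):
            if self._inside:
                self._buf.append(data)

        def handle_endtag(self, tag):
            if tag == "script" and self._inside:
                self._inside = False
                self.blocks.append("".join(self._buf))

    html = urllib.request.urlopen(URL).read().decode("utf-8")
    parser = JsonLdCollector()
    parser.feed(html)
    for block in parser.blocks:
        data = json.loads(block)           # \u0027-style escapes decode here
        print(data.get("@type"), "-", data.get("name"))

json.loads also decodes the \u0027-style escapes that appear in raw serializer output, which is why the articleBody reads cleanly once parsed.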
 

article:modified_time: 2022-07-20T23:02:43+00:00
docsearch:language: en
docbuild:last-update: Jul 20, 2022
og:image: https://docs.pytorch.org/docs/stable/_static/img/pytorch_seo.png
pytorch_project: tutorials
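
These key/value pairs come straight from the page's <meta> elements. A small sketch in the same spirit, collecting name/property to content pairs; note the guard for meta tags that carry no name at all, which is the likely source of stray keyless entries in raw dumps like this one. MetaCollector is again a made-up name.

    import urllib.request
    from html.parser import HTMLParser

    class MetaCollector(HTMLParser):
        """Collects name/property -> content pairs from <meta> tags."""
        def __init__(self):
            super().__init__()
            self.meta = {}

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                a = dict(attrs)
                key = a.get("name") or a.get("property")
                if key and "content" in a:        # skip nameless meta tags
                    self.meta[key] = a["content"]

    html = urllib.request.urlopen(
        "https://docs.pytorch.org/tutorials/beginner/data_loading_tutorial.html"
    ).read().decode("utf-8")
    collector = MetaCollector()
    collector.feed(html)
    print(collector.meta.get("article:modified_time"))  # e.g. 2022-07-20T23:02:43+00:00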

Links:

https://pytorch.org/
Get Started https://pytorch.org/get-started/locally
Tutorials https://docs.pytorch.org/tutorials
Learn the Basics https://pytorch.org/tutorials/beginner/basics/intro.html
PyTorch Recipes https://pytorch.org/tutorials/recipes/recipes_index.html
Intro to PyTorch - YouTube Series https://pytorch.org/tutorials/beginner/introyt.html
Webinars https://pytorch.org/webinars/
Landscape https://landscape.pytorch.org/
Join the Ecosystem https://pytorch.org/join-ecosystem
Community Hub https://pytorch.org/community-hub/
Forums https://discuss.pytorch.org/
Developer Resources https://pytorch.org/resources
Contributor Awards https://pytorch.org/contributor-awards/
Community Events https://pytorch.org/community-events/
PyTorch Ambassadors https://pytorch.org/programs/ambassadors/
PyTorch https://pytorch.org/projects/pytorch/
vLLM https://pytorch.org/projects/vllm/
DeepSpeed https://pytorch.org/projects/deepspeed/
Host Your Project https://pytorch.org/projects/host-your-project/
RAY https://pytorch.org/projects/ray/
PyTorch https://docs.pytorch.org/docs/stable/index.html
Domains https://pytorch.org/domains
Blog https://pytorch.org/blog/
Announcements https://pytorch.org/announcements
Case Studies https://pytorch.org/case-studies/
Events https://pytorch.org/events
Newsletter https://pytorch.org/newsletter
PyTorch Foundation https://pytorch.org/foundation
Members https://pytorch.org/members
Governing Board https://pytorch.org/governing-board
Technical Advisory Council https://pytorch.org/tac
Cloud Credit Program https://pytorch.org/credits
Staff https://pytorch.org/staff
Contact https://pytorch.org/contact
Brand Guidelines https://pytorch.org/wp-content/uploads/2025/09/pytorch_brand_guide_091925a.pdf
JOIN https://pytorch.org/join
https://docs.PyTorch.org/tutorials/beginner/data_loading_tutorial.html
v2.9.0+cu128 https://docs.PyTorch.org/tutorials/index.html
Intro https://docs.PyTorch.org/tutorials/intro.html
Compilers https://docs.PyTorch.org/tutorials/compilers_index.html
Domains https://docs.PyTorch.org/tutorials/domains.html
Distributed https://docs.PyTorch.org/tutorials/distributed.html
Deep Dive https://docs.PyTorch.org/tutorials/deep-dive.html
Extension https://docs.PyTorch.org/tutorials/extension.html
Ecosystem https://docs.PyTorch.org/tutorials/ecosystem.html
Recipes https://docs.PyTorch.org/tutorials/recipes_index.html
X https://x.com/PyTorch
GitHub https://github.com/pytorch/tutorials
Discourse https://dev-discuss.pytorch.org/
PyPi https://pypi.org/project/torch/
Go to the end https://docs.PyTorch.org/tutorials/beginner/data_loading_tutorial.html#sphx-glr-download-beginner-data-loading-tutorial-py
Sasank Chilamkurthy https://chsasank.github.io
Dataset https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.Dataset
DataLoader https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
https://docs.PyTorch.org/tutorials/_images/landmarked_face2.png
here https://download.pytorch.org/tutorial/faces.zip
dlib's pose estimation https://blog.dlib.net/2014/08/real-time-face-pose-estimation.html
torch.is_tensor https://docs.pytorch.org/docs/stable/generated/torch.is_tensor.html#torch.is_tensor
FaceLandmarksDataset https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.Dataset
torch.from_numpy https://docs.pytorch.org/docs/stable/generated/torch.from_numpy.html#torch.from_numpy
here https://pytorch.org/docs/stable/notes/faq.html#my-data-loader-workers-return-identical-random-numbers
composed https://docs.pytorch.org/vision/stable/generated/torchvision.transforms.Compose.html#torchvision.transforms.Compose
transforms.Compose https://docs.pytorch.org/vision/stable/generated/torchvision.transforms.Compose.html#torchvision.transforms.Compose
tsfrm https://docs.pytorch.org/vision/stable/generated/torchvision.transforms.Compose.html#torchvision.transforms.Compose
dataloader https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
utils.make_grid https://docs.pytorch.org/vision/stable/generated/torchvision.utils.make_grid.html#torchvision.utils.make_grid
Transfer Learning for Computer Vision Tutorial https://docs.PyTorch.org/tutorials/beginner/transfer_learning_tutorial.html
Download Jupyter notebook: data_loading_tutorial.ipynb https://docs.PyTorch.org/tutorials/_downloads/f498e3bcd9b6159ecfb1a07d6551287d/data_loading_tutorial.ipynb
Download Python source code: data_loading_tutorial.py https://docs.PyTorch.org/tutorials/_downloads/6042bacf7948939030769777afe22e55/data_loading_tutorial.py
Download zipped: data_loading_tutorial.zip https://docs.PyTorch.org/tutorials/_downloads/87fbf07e3a7a367017f554174f91759e/data_loading_tutorial.zip
Dataset class https://docs.PyTorch.org/tutorials/beginner/data_loading_tutorial.html#dataset-class
Transforms https://docs.PyTorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
Compose transforms https://docs.PyTorch.org/tutorials/beginner/data_loading_tutorial.html#compose-transforms
Iterating through the dataset https://docs.PyTorch.org/tutorials/beginner/data_loading_tutorial.html#iterating-through-the-dataset
Afterword: torchvision https://docs.PyTorch.org/tutorials/beginner/data_loading_tutorial.html#afterword-torchvision
torchao https://docs.pytorch.org/ao
torchrec https://docs.pytorch.org/torchrec
torchft https://docs.pytorch.org/torchft
TorchCodec https://docs.pytorch.org/torchcodec
torchvision https://docs.pytorch.org/vision
ExecuTorch https://docs.pytorch.org/executorch
PyTorch on XLA Devices https://docs.pytorch.org/xla
View Docs https://docs.pytorch.org/docs/stable/index.html
View Tutorials https://docs.pytorch.org/tutorials
View Resources https://pytorch.org/resources
Privacy Policy https://www.linuxfoundation.org/privacy/
https://www.facebook.com/pytorch
https://twitter.com/pytorch
https://www.youtube.com/pytorch
https://www.linkedin.com/company/pytorch
https://pytorch.slack.com
https://pytorch.org/wechat
Policies https://www.linuxfoundation.org/legal/policies
Trademark Usage https://www.linuxfoundation.org/trademark-usage
Cookies Policy https://www.facebook.com/policies/cookies/
Sphinx https://www.sphinx-doc.org/
PyData Sphinx Theme https://pydata-sphinx-theme.readthedocs.io/en/stable/index.html
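
Link dumps like this tend to repeat entries, because the page renders the same navigation menu twice (desktop and mobile) plus repeated in-text anchors. Here is a stdlib-only sketch that collects (anchor text, href) pairs and drops exact duplicates; LinkCollector is a hypothetical name, not part of any crawler API.

    import urllib.request
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collects (anchor_text, href) pairs, skipping exact duplicates."""
        def __init__(self):
            super().__init__()
            self._href = None
            self._text = []
            self._seen = set()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                text = " ".join("".join(self._text).split())
                pair = (text, self._href)
                if pair not in self._seen:      # exact label+URL dedup
                    self._seen.add(pair)
                    self.links.append(pair)
                self._href = None

    html = urllib.request.urlopen(
        "https://docs.pytorch.org/tutorials/beginner/data_loading_tutorial.html"
    ).read().decode("utf-8")
    collector = LinkCollector()
    collector.feed(html)
    for text, href in collector.links:
        print(text or "(no text)", href)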



URLs of crawlers that visited me.