Paddle Question: how/if does TensorFlow's Tensor allocate GPU memory?

Paddle PaddlePaddle-Gardener, 2022-03-21 21:28

From GitHub user wangkuiyi: I reviewed the details of how tensorflow::Tensor allocates memory in my note https://github.com/PaddlePaddle/Paddle/wiki/tensorflow::Tensor. However, it seems that it allocates only CPU memory.

@helinwang had thought that tensorflow::Tensor's constructor might accept a raw pointer to externally allocated GPU memory. But I just checked: the constructor requires an Allocator object rather than a raw pointer.

Answers
2 answers

From GitHub user reyoung:

1. Allocator is an interface class for memory allocation on all devices. A tensor takes Tensor::Tensor(Allocator* a, DataType type, const TensorShape& shape) as its constructor arguments. VisitableAllocator is a subclass of Allocator, and the GPU allocators, including GPUDebugAllocator, PoolAllocator, and GPUBFCAllocator, are subclasses of VisitableAllocator. They are defined in the /tensorflow/core/common_runtime/gpu/*_allocator* files.
2. In an OpKernel, TF first runs forward_input_or_allocate_output and allocates the op's output there. In allocate_output, TF constructs a new Tensor and then uses the allocator to allocate memory. The allocator is basically obtained from OpKernel->device_.
