thrust

◆ device

static const detail::device_t thrust::device

thrust::device is the default parallel execution policy associated with Thrust's device backend system, which is configured by the THRUST_DEVICE_SYSTEM macro.

Instead of relying on implicit algorithm dispatch through iterator system tags, users may directly target algorithm dispatch at Thrust's device system by providing thrust::device as an algorithm parameter.

Explicit dispatch can be useful for avoiding the introduction of data copies into containers such as thrust::device_vector, or for avoiding the need to wrap raw pointers (e.g. those allocated by the CUDA API) with types such as thrust::device_ptr.
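
For instance, the following minimal sketch (assuming the CUDA device backend, and using thrust::fill purely for illustration) passes a raw pointer returned by cudaMalloc directly to an algorithm dispatched with thrust::device, with no thrust::device_ptr wrapper:

#include <thrust/fill.h>
#include <thrust/execution_policy.h>
#include <cuda_runtime.h>

int main()
{
  int* raw_ptr;
  cudaMalloc(&raw_ptr, 3 * sizeof(int));

  // thrust::device tells the algorithm to treat raw_ptr as device memory,
  // so no thrust::device_ptr wrapper (and no device_vector copy) is needed.
  thrust::fill(thrust::device, raw_ptr, raw_ptr + 3, 7);

  cudaFree(raw_ptr);
  return 0;
}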

It is the user's responsibility to ensure that the iterators provided to an algorithm are compatible with the device backend system. For example, raw pointers allocated by std::malloc typically cannot be dereferenced by a GPU, so raw pointers allocated by host APIs should not be passed to an algorithm invoked with thrust::device when the device backend is CUDA.
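
As a sketch of this distinction (again assuming the CUDA device backend; thrust::reduce is used only as an example), host-allocated memory can be dispatched to the host backend with thrust::host, the host-side counterpart of thrust::device, while dispatching it with thrust::device would be an error:

#include <thrust/reduce.h>
#include <thrust/execution_policy.h>
#include <cstdlib>

int main()
{
  int* h_ptr = static_cast<int*>(std::malloc(3 * sizeof(int)));
  h_ptr[0] = 1; h_ptr[1] = 2; h_ptr[2] = 3;

  // OK: host pointers are compatible with the host backend system.
  int sum = thrust::reduce(thrust::host, h_ptr, h_ptr + 3);

  // Not OK when the device backend is CUDA: the GPU cannot dereference h_ptr.
  // int bad = thrust::reduce(thrust::device, h_ptr, h_ptr + 3);

  std::free(h_ptr);
  return sum == 6 ? 0 : 1;
}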

The type of thrust::device is implementation-defined.

The following code snippet demonstrates how to use thrust::device to explicitly dispatch an invocation of thrust::for_each to the device backend system:

#include <thrust/for_each.h>
#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>
#include <cstdio>

struct printf_functor
{
  __host__ __device__
  void operator()(int x)
  {
    printf("%d\n", x);
  }
};
...
thrust::device_vector<int> vec(3);
vec[0] = 0; vec[1] = 1; vec[2] = 2;
thrust::for_each(thrust::device, vec.begin(), vec.end(), printf_functor());
// 0 1 2 is printed to standard output in some unspecified order
See also
host_execution_policy
thrust::device