
◆ for_each_n() [1/2]

template<typename DerivedPolicy, typename InputIterator, typename Size, typename UnaryFunction>
__host__ __device__ InputIterator thrust::for_each_n(const thrust::detail::execution_policy_base<DerivedPolicy>& exec,
                                                     InputIterator first,
                                                     Size n,
                                                     UnaryFunction f)

for_each_n applies the function object f to each element in the range [first, first + n); f's return value, if any, is ignored. Unlike the C++ Standard Template Library function std::for_each, this version offers no guarantee on order of execution.

The algorithm's execution is parallelized as determined by exec.
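For example, the same call can be dispatched to the host or to the device simply by changing exec. The following is a minimal sketch, not part of this reference entry; it assumes the usual Thrust headers and compilation with nvcc.

#include <thrust/for_each.h>
#include <thrust/execution_policy.h>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>

struct increment
{
  __host__ __device__
  void operator()(int &x) { ++x; }
};

int main()
{
  thrust::host_vector<int>   h_vec(4, 1);
  thrust::device_vector<int> d_vec(4, 1);

  // Sequential execution on the host.
  thrust::for_each_n(thrust::host, h_vec.begin(), h_vec.size(), increment());

  // Parallel execution on the device.
  thrust::for_each_n(thrust::device, d_vec.begin(), d_vec.size(), increment());

  return 0;
}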

Parameters
    exec    The execution policy to use for parallelization.
    first   The beginning of the sequence.
    n       The size of the input sequence.
    f       The function object to apply to the range [first, first + n).

Returns
    first + n

Template Parameters
    DerivedPolicy   The name of the derived execution policy.
    InputIterator   is a model of Input Iterator, and InputIterator's value_type is convertible to UnaryFunction's argument_type.
    Size            is an integral type.
    UnaryFunction   is a model of Unary Function, and UnaryFunction does not apply any non-constant operation through its argument.

The following code snippet demonstrates how to use for_each_n to print the elements of a device_vector using the thrust::device parallelization policy.

#include <thrust/for_each.h>
#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>
#include <cstdio>

struct printf_functor
{
  __host__ __device__
  void operator()(int x)
  {
    // note that using printf in a __device__ function requires
    // code compiled for a GPU with compute capability 2.0 or
    // higher (nvcc --arch=sm_20)
    printf("%d\n", x);
  }
};
...
thrust::device_vector<int> d_vec(3);
d_vec[0] = 0; d_vec[1] = 1; d_vec[2] = 2;

thrust::for_each_n(thrust::device, d_vec.begin(), d_vec.size(), printf_functor());

// 0 1 2 is printed to standard output in some unspecified order
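
Because for_each_n returns first + n, the returned iterator can be used directly as the start of a follow-up range. The lines below are an illustrative sketch continuing the snippet above; the variable mid is not part of the original example.

// Process the first half, then resume from the returned iterator.
thrust::device_vector<int>::iterator mid =
    thrust::for_each_n(thrust::device, d_vec.begin(), d_vec.size() / 2, printf_functor());
thrust::for_each_n(thrust::device, mid, d_vec.size() - d_vec.size() / 2, printf_functor());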
See also
for_each
http://www.sgi.com/tech/stl/for_each.html