GPU implementation
This function is called on the CPU for each integration point during each time step of the simulation. It defines the behavior of the material, such as its constitutive response, failure criteria, and post-failure behavior.
CUDA kernels (GPU code) cannot be implemented as member functions of classes. The solution here is to implement them in separate files and invoke them from the class function. To protect against naming conflicts, we place the class and the kernels that belong to it in the same namespace.
runMatGPU()
void runMatGPU(UserMatHost host, UserMatDevice device, cudaStream_t stream) const
host
- Type: UserMatHost
- Description: User material host data structure. See UserMatHost for more information on the data structure.
device
- Type: UserMatDevice
- Description: User material device data structure. See UserMatDevice for more information on the data structure.
stream
- Type: cudaStream_t
- Description: CUDA stream used by the material to perform its calculations.
Example
The following example calls the function mat_user(), which is part of the same
namespace. We do not need to qualify the call with the namespace name, since the
class itself already belongs to that namespace. In our provided sample, the CUDA implementation is in a file
called kernel_mat_rubber.cu.
void MatRubber::runMatGPU(UserMatHost host, UserMatDevice device, cudaStream_t stream) const
{
mat_user(host, device, stream);
}
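To illustrate how the pieces fit together, the following is a hypothetical sketch of what kernel_mat_rubber.cu might contain. The namespace name MatRubberNS, the kernel name mat_rubber_kernel, and the field nIntPoints on UserMatHost are assumptions for illustration only; they are not part of the documented interface. Note that mat_user() is a plain free function in the shared namespace, so it can launch the __global__ kernel, which a class member function cannot be.

```cuda
// kernel_mat_rubber.cu -- hypothetical sketch, not the shipped implementation.
#include <cuda_runtime.h>

namespace MatRubberNS {  // assumed: same namespace as the MatRubber class

// Kernel: one thread per integration point (illustrative decomposition).
__global__ void mat_rubber_kernel(UserMatDevice device, int nIntPoints)
{
    int ip = blockIdx.x * blockDim.x + threadIdx.x;
    if (ip >= nIntPoints) return;
    // ... evaluate the constitutive response for integration point ip,
    //     reading/writing the arrays held in `device` ...
}

// Free function invoked from MatRubber::runMatGPU(). It launches the kernel
// asynchronously on the stream supplied by the solver, so the material
// calculations overlap with other work queued on that stream.
void mat_user(UserMatHost host, UserMatDevice device, cudaStream_t stream)
{
    const int nIntPoints = host.nIntPoints;  // assumed field name
    const int block = 256;
    const int grid  = (nIntPoints + block - 1) / block;
    mat_rubber_kernel<<<grid, block, 0, stream>>>(device, nIntPoints);
}

} // namespace MatRubberNS
```

Launching on the caller-provided stream (rather than the default stream) is what allows the solver to order the material update relative to its own host-to-device transfers and other kernels without a global synchronization.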