When running GPGPU-sim with NVIDIA driver, such an error occurs. #274

Open
zzczzc20 opened this issue Jan 4, 2023 · 7 comments

Comments

@zzczzc20

zzczzc20 commented Jan 4, 2023

Hi, everyone. I use GPGPU-Sim without any real GPU. The program I simulate needs libcuda, so I installed the NVIDIA driver. When I run the program, the following error occurs: "GPGPU-Sim PTX: Execution error: CUDA API function "cudaError_t cudaSetDeviceFlags(int)()" has not been implemented yet.
[$GPGPUSIM_ROOT/libcuda/cuda_runtime_api.cc around line 3771]"

@masa-laboratory

Perhaps you are using the latest version of the NVCC compiler, and GPGPU-Sim has not fully implemented all of the CUDA APIs; or, even if your NVCC version is not the latest, your CUDA executable may use an API that GPGPU-Sim has not yet implemented.

Describe your experimental environment in as much detail as possible, especially the .cu source code and the compiler version.

@nothingface0

See also:

Our implementation of libcudart is not compatible with all versions of cuda because of different interfaces that NVIDIA uses between different versions.

From here.

@masa-laboratory

@nothingface0

Hi!

May I ask you a question about GPGPU-Sim?

Among the memory space types in the src/abstract_hardware_model.h file:

```cpp
enum _memory_space_t {
  undefined_space = 0,
  reg_space,
  local_space,
  shared_space,
  sstarr_space,  // <========= HERE
  param_space_unclassified,
  param_space_kernel, /* global to all threads in a kernel : read-only */
  param_space_local,  /* local to a thread : read-writable */
  const_space,
  tex_space,
  surf_space,
  global_space,
  generic_space,
  instruction_space
};
```

there is a type called "sstarr_space", but I can't find it in the GPGPU-Sim manual or other materials.
Is it an abbreviation for some memory space?

Thanks in advance for your reply.

@nothingface0

@masa-laboratory
I'm not an expert, just a user of GPGPU-Sim, and I have not really delved into the memory part yet.

I can only assume that, since it is used in the PTX part of the code (cuda-sim), it's PTX-specific. I found a list of PTX directives here. From the name of the variable, I would guess it refers to some array, possibly a "shared static array"? I have no idea why it would differ from normal shared memory, though.

@masa-laboratory

@nothingface0
Yes.

When generating the memory accesses of a warp, sstarr_space behaves the same as shared_space, so I will treat it as shared memory for the time being.

Thank you!

@nothingface0

@masa-laboratory sstarr seems to be related to Tensor Cores, from what I saw here. They mention that it works similarly to shared memory.

@masa-laboratory

@nothingface0 Thanks! Have you been reading the source code of GPGPU-Sim?
