
support driver 570.86.15 #30

Open
2 tasks done
wanlill opened this issue Feb 5, 2025 · 4 comments
Labels
bug Something isn't working

Comments


wanlill commented Feb 5, 2025

NVIDIA Open GPU Kernel Modules Version

570.86.15

Please confirm this issue does not happen with the proprietary driver (of the same version). This issue tracker is only for bugs specific to the open kernel driver.

  • I confirm that this does not happen with the proprietary driver package.

Operating System and Version

Ubuntu 24.04.1 LTS

Kernel Release

Linux rtx4090 6.8.0-52-generic #53-Ubuntu SMP PREEMPT_DYNAMIC

Please confirm you are running a stable release kernel (e.g. not a -rc). We do not accept bug reports for unreleased kernels.

  • I am running on a stable kernel release.

Hardware: GPU

NVIDIA GeForce RTX 3090 Ti

Describe the bug

The current version of this repo doesn't work with the newer NVIDIA open kernel module release.

To Reproduce

Build the new NVIDIA open kernel module.

Bug Incidence

Once

nvidia-bug-report.log.gz

I have a commit ready: wanlill@8c45577#diff-23ed7c330fc6e677510252fa0a241cf164408e40b7153e2fde9e360f7afcf30bR194, but I'm not sure how to update the base version of this repo.

More Info

For folks hitting NaN verification errors in simpleP2P:

Verification error @ element 0: val = nan, ref = 0.000000
Verification error @ element 1: val = nan, ref = 4.000000
Verification error @ element 2: val = nan, ref = 8.000000
Verification error @ element 3: val = nan, ref = 12.000000
Verification error @ element 4: val = nan, ref = 16.000000
Verification error @ element 5: val = nan, ref = 20.000000
Verification error @ element 6: val = nan, ref = 24.000000
Verification error @ element 7: val = nan, ref = 28.000000
Verification error @ element 8: val = nan, ref = 32.000000
Verification error @ element 9: val = nan, ref = 36.000000
Verification error @ element 10: val = nan, ref = 40.000000
Verification error @ element 11: val = nan, ref = 44.000000

It might be that your CPU/motherboard/chipset doesn't handle P2P reads well. You can change simpleP2P so that the kernel does a P2P write instead; see my commit message above for details (a sketch of the idea follows below). People online report similar issues on similar Intel platforms: https://community.intel.com/t5/Processors/P2p-capabilities-of-the-Alder-Lake-Z690-platform/td-p/1395965
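
For reference, here is a minimal sketch of the read-vs-write idea (the exact change lives in the commit linked above; the kernel and variable names below are illustrative, not copied from it). The stock simpleP2P kernel runs on one GPU and reads the other GPU's buffer, so the traffic crossing the link is a P2P read; the workaround launches on the GPU that owns the source data and writes into the peer's buffer instead:

```cuda
// Illustrative sketch only; names are hypothetical, not from the linked commit.

// Stock simpleP2P style: the kernel runs on the destination GPU and *reads*
// the peer GPU's buffer, so the traffic crossing the link is a P2P read.
__global__ void p2pReadKernel(const float *peerSrc, float *localDst, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) localDst[i] = peerSrc[i] * 2.0f;   // P2P read
}

// Workaround style: the kernel runs on the GPU that owns the source data and
// *writes* into the peer GPU's buffer, so the traffic crossing the link is a
// P2P write, which some chipsets handle much better than a P2P read.
__global__ void p2pWriteKernel(const float *localSrc, float *peerDst, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) peerDst[i] = localSrc[i] * 2.0f;   // P2P write
}

// Host side (peer access already enabled via cudaDeviceEnablePeerAccess):
//   cudaSetDevice(1); p2pReadKernel <<<blocks, threads>>>(buf0, buf1, n);  // original
//   cudaSetDevice(0); p2pWriteKernel<<<blocks, threads>>>(buf0, buf1, n);  // workaround
```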

@wanlill wanlill added the bug Something isn't working label Feb 5, 2025

katkase commented Feb 24, 2025

Hi, I tried your modified kernel modules with my system:

Ubuntu 22.04.5 with 6.8.0-52-generic kernel
Asus Z790, i9 13th gen, 64 GB RAM and 2x 3090 24 GB

To start, I'll note that this was the only version of open-gpu-kernel-modules that gave me an empty BAR1 "Used" memory when the GPUs were not in use; with all the others, it always started with 24 GB occupied.

with the other versions:

[screenshot]

with yours:

[screenshot]

I enabled Resizable BAR and disabled IOMMU in the BIOS, and I also added intel_iommu=off iommu=off to the kernel command line in GRUB2.

[screenshot]

Below are the results of some tests I performed:

[screenshots]

```
franci@ubuntu-22:~/Code3/cuda-samples$ ./build/Samples/1_Utilities/deviceQuery/deviceQuery
./build/Samples/1_Utilities/deviceQuery/deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

Device 0: "NVIDIA GeForce RTX 3090"
CUDA Driver Version / Runtime Version 12.8 / 12.8
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 24113 MBytes (25284050944 bytes)
(082) Multiprocessors, (128) CUDA Cores/MP: 10496 CUDA Cores
GPU Max Clock rate: 1860 MHz (1.86 GHz)
Memory Clock rate: 9751 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 6291456 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 102400 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "NVIDIA GeForce RTX 3090"
CUDA Driver Version / Runtime Version 12.8 / 12.8
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 24135 MBytes (25307578368 bytes)
(082) Multiprocessors, (128) CUDA Cores/MP: 10496 CUDA Cores
GPU Max Clock rate: 1860 MHz (1.86 GHz)
Memory Clock rate: 9751 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 6291456 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 102400 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 2 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Peer access from NVIDIA GeForce RTX 3090 (GPU0) -> NVIDIA GeForce RTX 3090 (GPU1) : Yes
Peer access from NVIDIA GeForce RTX 3090 (GPU1) -> NVIDIA GeForce RTX 3090 (GPU0) : Yes

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.8, CUDA Runtime Version = 12.8, NumDevs = 2
Result = PASS
```

Anyway, even after patching simpleP2P as you did, the test doesn't end well:

[screenshot]

Previously, I was getting all NaN for val.

If you have any advice, that would be great. Thanks.


katkase commented Feb 25, 2025

Never mind, this was solved by disabling VT-d in the BIOS.

```
franci@ubuntu-22:~/Code3/cuda-samples$ ./build/Samples/0_Introduction/simpleP2P/simpleP2P
[./build/Samples/0_Introduction/simpleP2P/simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2

Checking GPU(s) for support of peer to peer memory access...

Peer access from NVIDIA GeForce RTX 3090 (GPU0) -> NVIDIA GeForce RTX 3090 (GPU1) : Yes
Peer access from NVIDIA GeForce RTX 3090 (GPU1) -> NVIDIA GeForce RTX 3090 (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 6.26GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Disabling peer access...
Shutting down...
Test passed
```

The test passes both with intel_iommu=off iommu=off and with intel_iommu=on iommu=pt on the GRUB kernel command line.
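
Since the failures in this thread mostly trace back to platform settings (VT-d/IOMMU, Resizable BAR) rather than the patched modules themselves, a tiny standalone check can help confirm whether basic peer copies work at all before running the full simpleP2P sample. This is a sketch using only standard CUDA runtime calls, not a tool from cuda-samples:

```cuda
// p2p_check.cu -- minimal two-GPU peer-copy sanity check (sketch; assumes
// exactly two CUDA devices are visible as device 0 and device 1).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  // Ask the runtime whether each GPU can map the other's memory.
  int can01 = 0, can10 = 0;
  cudaDeviceCanAccessPeer(&can01, 0, 1);
  cudaDeviceCanAccessPeer(&can10, 1, 0);
  std::printf("GPU0->GPU1: %d, GPU1->GPU0: %d\n", can01, can10);
  if (!can01 || !can10) return 1;

  const size_t bytes = 64u << 20;  // 64 MB, the same buffer size simpleP2P uses
  float *d0 = nullptr, *d1 = nullptr;

  cudaSetDevice(0);
  cudaDeviceEnablePeerAccess(1, 0);  // let GPU0 access GPU1's memory
  cudaMalloc(&d0, bytes);

  cudaSetDevice(1);
  cudaDeviceEnablePeerAccess(0, 0);  // let GPU1 access GPU0's memory
  cudaMalloc(&d1, bytes);

  // One peer copy GPU0 -> GPU1. If this fails or hangs, the problem is
  // usually BIOS/kernel configuration (VT-d, IOMMU, Resizable BAR), not CUDA.
  cudaError_t err = cudaMemcpyPeer(d1, 1, d0, 0, bytes);
  std::printf("cudaMemcpyPeer: %s\n", cudaGetErrorString(err));

  cudaFree(d1);
  cudaSetDevice(0);
  cudaFree(d0);
  return err == cudaSuccess ? 0 : 1;
}
```

Built with nvcc (for example `nvcc p2p_check.cu -o p2p_check`), it should report `cudaMemcpyPeer: no error` on a working setup.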


LuosjDD commented Mar 9, 2025

Thank you very much. I tried to apply your commit to 570.124.04, but the test does not pass on the RTX 5070 Ti.
Is more modification needed to support the GeForce Blackwell architecture?

```
[./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2
Checking GPU(s) for support of peer to peer memory access...

Peer access from NVIDIA GeForce RTX 5070 Ti (GPU0) -> NVIDIA GeForce RTX 5070 Ti (GPU1) : Yes
Peer access from NVIDIA GeForce RTX 5070 Ti (GPU1) -> NVIDIA GeForce RTX 5070 Ti (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
CUDA error at /.../cuda-samples/Samples/0_Introduction/simpleP2P/simpleP2P.cu:170 code=719(cudaErrorLaunchFailure) "cudaEventSynchronize(stop_event)"
```


fulloo5 commented Mar 10, 2025

> Thank you very much. I tried to apply your commit to 570.124.04, but the test does not pass on the RTX 5070 Ti. Is more modification needed to support the GeForce Blackwell architecture?
>
> ```
> [./simpleP2P] - Starting...
> Checking for multiple GPUs...
> CUDA-capable device count: 2
> Checking GPU(s) for support of peer to peer memory access...
>
> Peer access from NVIDIA GeForce RTX 5070 Ti (GPU0) -> NVIDIA GeForce RTX 5070 Ti (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 5070 Ti (GPU1) -> NVIDIA GeForce RTX 5070 Ti (GPU0) : Yes
> Enabling peer access between GPU0 and GPU1...
> Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
> Creating event handles...
> CUDA error at /.../cuda-samples/Samples/0_Introduction/simpleP2P/simpleP2P.cu:170 code=719(cudaErrorLaunchFailure) "cudaEventSynchronize(stop_event)"
> ```

I ran into the same problem on the 5090 graphics card.
