Add AnyCuDeviceArray variations and CuScalar #2849
Conversation
Your PR requires formatting changes to meet the project's style guidelines. Suggested changes:

```diff
diff --git a/src/array.jl b/src/array.jl
index 03078e002..19a9e7b7c 100644
--- a/src/array.jl
+++ b/src/array.jl
@@ -123,7 +123,7 @@ end
## convenience constructors
-const CuScalar{T} = CuArray{T,0}
+const CuScalar{T} = CuArray{T, 0}
const CuVector{T} = CuArray{T,1}
const CuMatrix{T} = CuArray{T,2}
const CuVecOrMat{T} = Union{CuVector{T},CuMatrix{T}}
@@ -372,7 +372,7 @@ is_host(a::CuArray) = memory_type(a) == HostMemory
export DenseCuArray, DenseCuVector, DenseCuMatrix, DenseCuVecOrMat,
StridedCuArray, StridedCuVector, StridedCuMatrix, StridedCuVecOrMat,
- AnyCuArray, AnyCuScalar, AnyCuVector, AnyCuMatrix, AnyCuVecOrMat
+ AnyCuArray, AnyCuScalar, AnyCuVector, AnyCuMatrix, AnyCuVecOrMat
# dense arrays: stored contiguously in memory
#
@@ -427,7 +427,7 @@ end
# anything that's (secretly) backed by a CuArray
const AnyCuArray{T,N} = Union{CuArray{T,N}, WrappedArray{T,N,CuArray,CuArray{T,N}}}
-const AnyCuScalar{T} = AnyCuArray{T,0}
+const AnyCuScalar{T} = AnyCuArray{T, 0}
const AnyCuVector{T} = AnyCuArray{T,1}
const AnyCuMatrix{T} = AnyCuArray{T,2}
const AnyCuVecOrMat{T} = Union{AnyCuVector{T}, AnyCuMatrix{T}}
diff --git a/src/device/array.jl b/src/device/array.jl
index 2d3f4e350..82c123a52 100644
--- a/src/device/array.jl
+++ b/src/device/array.jl
@@ -33,18 +33,18 @@ struct CuDeviceArray{T,N,A} <: DenseArray{T,N}
new(ptr, maxsize, dims, prod(dims))
end
-const CuDeviceScalar{T} = CuDeviceArray{T,0,A} where {A}
-const CuDeviceVector{T} = CuDeviceArray{T,1,A} where {A}
-const CuDeviceMatrix{T} = CuDeviceArray{T,2,A} where {A}
+const CuDeviceScalar{T} = CuDeviceArray{T, 0, A} where {A}
+const CuDeviceVector{T} = CuDeviceArray{T, 1, A} where {A}
+const CuDeviceMatrix{T} = CuDeviceArray{T, 2, A} where {A}
# anything that's (secretly) backed by a CuDeviceArray
export AnyCuDeviceArray, AnyCuDeviceScalar, AnyCuDeviceVector, AnyCuDeviceMatrix, AnyCuDeviceVecOrMat
-const AnyCuDeviceArray{T,N} = Union{CuDeviceArray{T,N},WrappedArray{T,N,CuDeviceArray,CuDeviceArray{T,N,A}}} where {A}
-const AnyCuDeviceScalar{T} = AnyCuDeviceArray{T,0}
-const AnyCuDeviceVector{T} = AnyCuDeviceArray{T,1}
-const AnyCuDeviceMatrix{T} = AnyCuDeviceArray{T,2}
-const AnyCuDeviceVecOrMat{T} = Union{AnyCuDeviceVector{T},AnyCuDeviceMatrix{T}}
+const AnyCuDeviceArray{T, N} = Union{CuDeviceArray{T, N}, WrappedArray{T, N, CuDeviceArray, CuDeviceArray{T, N, A}}} where {A}
+const AnyCuDeviceScalar{T} = AnyCuDeviceArray{T, 0}
+const AnyCuDeviceVector{T} = AnyCuDeviceArray{T, 1}
+const AnyCuDeviceMatrix{T} = AnyCuDeviceArray{T, 2}
+const AnyCuDeviceVecOrMat{T} = Union{AnyCuDeviceVector{T}, AnyCuDeviceMatrix{T}}
## array interface
```
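For reference, a minimal host-side sketch of what the new alias names (hedged: `CuScalar` itself only exists with this PR; the underlying zero-dimensional `CuArray` already works today):

```julia
using CUDA

s = cu(fill(1.0f0))        # CuArray{Float32, 0}, i.e. CuScalar{Float32} with this PR
CUDA.@allowscalar s[]      # read the single element on the host
CUDA.@allowscalar s[] = 2  # or write it
```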
Codecov Report
✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##           master    #2849      +/-   ##
==========================================
+ Coverage   89.64%   89.82%   +0.18%
==========================================
  Files         150      150
  Lines       13229    13232       +3
==========================================
+ Hits        11859    11886      +27
+ Misses       1370     1346      -24
```
Is there precedent for this in the Julia ecosystem? AFAIK we normally use `CuRef`.

I guess that's fine, but beware that the …
There is ZeroDimensionalArrays.
I tried to get this working but don't know how:

```julia
using CUDA

function kernel(x)
    x[] += 1
    return
end

arr = fill(0) |> cu
ref = CuRef{Int64}(0)

@cuda kernel(arr) # works
@cuda kernel(ref)
```

```
Argument 2 to your kernel function is of type CUDA.CuRefValue{Int64}, which is not a bitstype:
  .buf is of type CUDA.Managed{CUDA.DeviceMemory} which is not isbits.
  .stream is of type CuStream which is not isbits.
  .ctx is of type Union{Nothing, CuContext} which is not isbits.
```
When adding the adaptations:

```julia
using Adapt
Adapt.@adapt_structure CUDA.CuRefValue

@cuda kernel(ref)
```

```
ERROR: LoadError: CuRef only supports element types that are allocated inline.
CUDA.Managed{CUDA.DeviceMemory} is a mutable type
```
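For context, a hedged sketch of why the zero-dimensional `CuArray` route works as a kernel argument while `CuRef` does not: at launch, arguments are converted with `cudaconvert`, and a `CuArray` becomes an isbits `CuDeviceArray`, whereas `CuRefValue` keeps its non-isbits `Managed{DeviceMemory}` field:

```julia
using CUDA

arr = cu(fill(0))          # CuArray{Int64, 0}, i.e. CuScalar{Int64} with this PR
dev = cudaconvert(arr)     # the device-side counterpart, a CuDeviceArray{Int64, 0}
isbits(dev)                # true, so it can be passed to a kernel
```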
I am doing Simulated Annealing and Parallel Tempering with multiple replicas. That's also where I use the new types:

```julia
function sweep!(spins::AnyCuDeviceMatrix, energies::AnyCuDeviceVector, ...)
    replica = blockIdx().x
    x = @view spins[:, replica]
    e = @view energies[replica]
    sweep!(x, e, ...)
end

function sweep!(spins::AnyCuDeviceVector, energy::AnyCuDeviceScalar, ...)
    ...
end
```
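A self-contained sketch of the pattern above, with a hypothetical toy kernel and update rule rather than the author's actual code; it assumes this PR's `AnyCuDevice*` aliases are available and exported:

```julia
using CUDA

# Kernel entry point: one block per replica, dispatching to the method below
# via the (hypothetical) AnyCuDevice* aliases from this PR.
function sweep!(spins::AnyCuDeviceMatrix, energies::AnyCuDeviceVector)
    replica = blockIdx().x
    sweep!(@view(spins[:, replica]), @view(energies[replica]))
    return
end

# Per-replica method: takes a 1-dimensional view and a 0-dimensional view.
function sweep!(spins::AnyCuDeviceVector, energy::AnyCuDeviceScalar)
    spins[1] = -spins[1]      # toy update, a stand-in for a real sweep
    energy[] += spins[1]
    return
end

spins    = CUDA.ones(Float32, 8, 4)    # 8 spins × 4 replicas
energies = CUDA.zeros(Float32, 4)
@cuda blocks=4 sweep!(spins, energies)
Array(energies)                        # copy back (synchronizes)
```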
This PR adds `CuScalar{T} = CuArray{T,0}` for convenience, and `AnyCuDevice{Array,Scalar,Vector,VecOrMat}` so they match on `SubArray`s as well.

This is useful when working with zero-dimensional CUDA arrays and makes type matching with `AnyCuDevice*` more ergonomic. I haven't added tests yet because I'm not sure if this direction is welcome.
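A rough sketch of the kind of tests that could cover the new aliases (names such as `CuScalar` and `AnyCuScalar` assume this PR's definitions; `AnyCuArray` is already exported today):

```julia
using CUDA, Test

@testset "zero-dimensional aliases" begin
    s = cu(fill(0.0f0))
    @test s isa CuArray{Float32,0}               # i.e. CuScalar{Float32}
    v = CUDA.zeros(Float32, 3)
    @test (@view v[1]) isa AnyCuArray{Float32,0} # i.e. AnyCuScalar{Float32}
end
```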