Project 2: Siyu Zheng #8

138 changes: 132 additions & 6 deletions README.md
@@ -3,12 +3,138 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* Siyu Zheng
* Tested on: Windows 10, i7-8750 @ 2.20GHz, 16GB, GTX 1060 6GB, Visual Studio 2015, CUDA 8.0 (personal laptop)

## Description

### Stream Compaction
![](img/streamCompaction.png)

Given an array of elements, stream compaction produces a new array containing only the elements that meet a certain criterion (e.g. non-null), while preserving their order. It is used in path tracing, collision detection, sparse matrix compression, etc. The algorithm has three steps:

* Step 1: Map the input to a temporary boolean array (1 if the element should be kept, 0 otherwise)
* Step 2: Run an exclusive scan on the temporary array to compute each kept element's output index
* Step 3: Scatter the kept elements to their scanned indices (a minimal CPU sketch of the three steps follows below)
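
A minimal single-threaded sketch of the three steps, assuming a "keep non-zero elements" predicate (the function name is illustrative, not the project's actual API):

```
#include <vector>

// Compacts the non-zero elements of idata into odata, preserving order.
// Returns the number of elements written.
int compactSketch(int n, int *odata, const int *idata) {
    std::vector<int> bools(n), indices(n);
    for (int i = 0; i < n; ++i) {                    // Step 1: map to booleans
        bools[i] = (idata[i] != 0) ? 1 : 0;
    }
    if (n > 0) indices[0] = 0;                       // Step 2: exclusive scan
    for (int i = 1; i < n; ++i) {
        indices[i] = indices[i - 1] + bools[i - 1];
    }
    int count = 0;
    for (int i = 0; i < n; ++i) {                    // Step 3: scatter
        if (bools[i]) {
            odata[indices[i]] = idata[i];
            ++count;
        }
    }
    return count;
}
```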

### CPU Scan
Use a simple for loop to compute the exclusive prefix sum.
![](img/cpu.png)

Number of adds: O(n)

### Naive GPU Scan

![](img/naive.png)

Use two buffers and ping-pong between them at each level (double buffering), so that each pass reads from one array and writes to the other. The naive algorithm produces an inclusive scan; shifting the result right by one element and inserting the identity (0) converts it into the required exclusive scan. A sketch of one level's kernel is shown below.

Number of adds: O(n log2 n)
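
A sketch of one pass of the naive scan; the kernel and buffer names are illustrative, not necessarily those used in this repository:

```
// One level of the naive scan: each element adds the element `offset`
// positions to its left. The buffers are ping-ponged between levels.
__global__ void kernNaiveScanStep(int n, int offset, int *odata, const int *idata) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= n) return;
    if (k >= offset) {
        odata[k] = idata[k - offset] + idata[k];
    } else {
        odata[k] = idata[k];
    }
}

// Host-side loop (sketch): one launch per level, swapping the buffers.
// for (int offset = 1; offset < n; offset *= 2) {
//     kernNaiveScanStep<<<blocks, blockSize>>>(n, offset, dev_out, dev_in);
//     std::swap(dev_in, dev_out);
// }
// The inclusive result is then shifted right by one, with a 0 inserted at
// index 0, to obtain the exclusive scan.
```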

### Work-Efficient GPU Scan
Up-Sweep (Reduce) Phase:

![](img/upsweep.png)

In the reduce phase, we traverse the tree from the leaves to the root, computing partial sums at the internal nodes (one level's kernel is sketched below).
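
A sketch of one up-sweep level, assuming one thread per active node (kernel and variable names are illustrative):

```
// Up-sweep, one level. With stride = 2^(d+1), thread t handles node
// k = t * stride and adds the left subtree's partial sum into the right.
__global__ void kernUpSweep(int n, int stride, int *data) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    int k = t * stride;
    if (k + stride - 1 >= n) return;
    data[k + stride - 1] += data[k + stride / 2 - 1];
}
```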

Down-Sweep Phase:

![](img/downsweep.png)

In the down-sweep phase, we traverse back down the tree from the root, using the partial sums from the reduce phase to build the scan in place on the array. We start by inserting zero at the root of the tree; then, at each step, each node at the current level passes its own value to its left child, and the sum of its value and the former value of its left child to its right child (sketched below).
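
A matching sketch of one down-sweep level, under the same assumptions:

```
// Down-sweep, one level. Before the first level the host sets data[n - 1] = 0;
// stride then halves from n down to 2.
__global__ void kernDownSweep(int n, int stride, int *data) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    int k = t * stride;
    if (k + stride - 1 >= n) return;
    int leftOld = data[k + stride / 2 - 1];            // save old left child
    data[k + stride / 2 - 1] = data[k + stride - 1];   // left child gets parent value
    data[k + stride - 1] += leftOld;                   // right child gets parent + old left
}
```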

### Thrust's Implementation

The Thrust version simply wraps a call to the library function thrust::exclusive_scan(first, last, result).
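
A minimal host-side sketch of such a wrapper (the actual function name and buffer handling in the repository may differ):

```
#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <thrust/copy.h>

// Copies the input to the device, runs Thrust's exclusive scan, and copies
// the result back to the host output array.
void thrustScanSketch(int n, int *odata, const int *idata) {
    thrust::device_vector<int> dv_in(idata, idata + n);
    thrust::device_vector<int> dv_out(n);
    thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
    thrust::copy(dv_out.begin(), dv_out.end(), odata);
}
```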

## Performance Analysis

* Roughly optimize the block sizes of each of your implementations for minimal run time on your GPU.

Array size: 1 << 15


| Block Size | 128 | 256 | 512 |1024 |
| ------------- |-------------|-------| -----|----- |
| naive | 0.16784 | 0.132096 | 0.157504|0.155584 |
| coherent | 1639.7 | 1534.2 | 0.094048| 0.096736 |

In my experiment, the performance across the different block sizes was quite close, so I chose 1024 for the further tests.


* Compare all of these GPU Scan implementations (Naive, Work-Efficient, and Thrust) to the serial CPU version of Scan. Plot a graph of the comparison (with array size on the independent axis).
![](img/pot1.png)
![](img/pot2.png)
![](img/npot1.png)
![](img/npot2.png)

* Write a brief explanation of the phenomena you see here.

At first, my non-optimized work-efficient GPU scan was slower than the CPU approach. I then optimized it with a resizable blocksPerGrid, so that at each depth of the scan only the active threads are launched and the idle ones are eliminated; the array indexing in the up-sweep and down-sweep kernels is modified accordingly to maintain correctness (a sketch of the per-level launch configuration follows). As a result, for array sizes larger than 2^16 the work-efficient GPU approach outperforms the CPU approach.
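
A sketch of that per-level launch configuration for the up-sweep, reusing the kernUpSweep sketch above (blockSize and buffer names are illustrative):

```
// At depth d only n / 2^(d+1) nodes are active, so the grid shrinks each
// level instead of launching n threads and letting most of them return
// immediately.
int blockSize = 1024;
for (int stride = 2; stride <= n; stride *= 2) {
    int activeThreads = n / stride;
    dim3 blocksPerGrid((activeThreads + blockSize - 1) / blockSize);
    kernUpSweep<<<blocksPerGrid, blockSize>>>(n, stride, dev_data);
}
```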

Comparing the four implementations, we can see that when the array size is small, the CPU approach performs best, and the work-efficient GPU approach beats the naive one. Once the array size exceeds 2^16, the GPU implementations outperform the CPU. The Thrust implementation has the best performance for large arrays, and its running time grows only slowly with size, so it is quite stable.

I checked the timeline for an array size of 1 << 15. The thrust::exclusive_scan kernel takes roughly half as long as each of my sweep kernels, so most of the Thrust implementation's cost lies in memory allocation and copying. My guess is that Thrust's base cost for memory operations is fairly large, but that as the array size grows, optimizations such as contiguous (coalesced) memory access keep that cost from increasing much. As a result, the Thrust implementation has the best performance on larger arrays.

The performance bottleneck of the naive approach is mainly the algorithm itself. For the non-optimized work-efficient scan, the bottleneck is the large number of idle threads. For the optimized work-efficient GPU approach, the bottleneck is mainly global-memory I/O; switching to shared memory should improve performance considerably (a sketch follows).
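
A sketch of the shared-memory idea, adapted from the standard single-block work-efficient scan; this is an assumption about how the optimization could look, not code from this repository. It handles a single block (n = 2 * blockDim.x, n a power of two); a full version would also need to scan the per-block sums.

```
// Launch (sketch): kernSharedMemScan<<<1, n / 2, n * sizeof(int)>>>(n, dev_out, dev_in);
__global__ void kernSharedMemScan(int n, int *odata, const int *idata) {
    extern __shared__ int temp[];
    int tid = threadIdx.x;
    // Each thread loads two elements into shared memory.
    temp[2 * tid]     = idata[2 * tid];
    temp[2 * tid + 1] = idata[2 * tid + 1];

    // Up-sweep entirely in shared memory.
    int offset = 1;
    for (int d = n >> 1; d > 0; d >>= 1) {
        __syncthreads();
        if (tid < d) {
            int ai = offset * (2 * tid + 1) - 1;
            int bi = offset * (2 * tid + 2) - 1;
            temp[bi] += temp[ai];
        }
        offset <<= 1;
    }

    // Clear the root, then down-sweep.
    if (tid == 0) temp[n - 1] = 0;
    for (int d = 1; d < n; d <<= 1) {
        offset >>= 1;
        __syncthreads();
        if (tid < d) {
            int ai = offset * (2 * tid + 1) - 1;
            int bi = offset * (2 * tid + 2) - 1;
            int t = temp[ai];
            temp[ai] = temp[bi];
            temp[bi] += t;
        }
    }
    __syncthreads();
    // Write the exclusive scan back to global memory.
    odata[2 * tid]     = temp[2 * tid];
    odata[2 * tid + 1] = temp[2 * tid + 1];
}
```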

## Result

Array size = 1<<15

```

****************
** SCAN TESTS **
****************
[ 10 39 41 0 14 37 18 40 1 42 27 21 10 ... 14 0 ]
==== cpu scan, power-of-two ====
elapsed time: 0.10945ms (std::chrono Measured)
[ 0 10 49 90 90 104 141 159 199 200 242 269 290 ... 803563 803577 ]
==== cpu scan, non-power-of-two ====
elapsed time: 0.116406ms (std::chrono Measured)
[ 0 10 49 90 90 104 141 159 199 200 242 269 290 ... 803493 803514 ]
passed
==== naive scan, power-of-two ====
elapsed time: 0.235072ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 0.197024ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 0.147424ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 0.119808ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 0.299008ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.253952ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 0 3 0 2 1 3 2 3 3 1 3 0 0 ... 1 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 0.121971ms (std::chrono Measured)
[ 3 2 1 3 2 3 3 1 3 2 1 2 1 ... 2 1 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 0.139594ms (std::chrono Measured)
[ 3 2 1 3 2 3 3 1 3 2 1 2 1 ... 1 1 ]
passed
==== cpu compact with scan ====
elapsed time: 0.552812ms (std::chrono Measured)
[ 3 2 1 3 2 3 3 1 3 2 1 2 1 ... 2 1 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 0.141568ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 0.243712ms (CUDA Measured)
passed

```
Binary files added: img/cpu.png, img/downsweep.png, img/naive.png, img/npot1.png, img/npot2.png, img/pot1.png, img/pot2.png, img/streamCompaction.png, img/upsweep.png

16 changes: 15 additions & 1 deletion src/main.cpp
@@ -13,7 +13,7 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 15; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
@@ -81,6 +81,20 @@ int main(int argc, char* argv[]) {
//printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);

//zeroArray(SIZE, c);
//printDesc("work-efficient scan, shared memory, power-of-two");
//StreamCompaction::Efficient::sharedMemoryScan(SIZE, c, a);
//printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
////printArray(SIZE, c, true);
//printCmpResult(SIZE, b, c);

//zeroArray(SIZE, c);
//printDesc("work-efficient scan, shared memory, non-power-of-two");
//StreamCompaction::Efficient::sharedMemoryScan(NPOT, c, a);
//printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
////printArray(NPOT, c, true);
//printCmpResult(NPOT, b, c);

zeroArray(SIZE, c);
printDesc("thrust scan, power-of-two");
StreamCompaction::Thrust::scan(SIZE, c, a);
2 changes: 1 addition & 1 deletion stream_compaction/CMakeLists.txt
@@ -13,5 +13,5 @@ set(SOURCE_FILES

cuda_add_library(stream_compaction
${SOURCE_FILES}
OPTIONS -arch=sm_20
OPTIONS -arch=sm_30
)
12 changes: 12 additions & 0 deletions stream_compaction/common.cu
@@ -24,6 +24,11 @@ namespace StreamCompaction {
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int index = blockDim.x * blockIdx.x + threadIdx.x;
if (index >= n) {
return;
}
bools[index] = idata[index] == 0 ? 0 : 1;
}

/**
@@ -33,6 +38,13 @@
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int index = blockDim.x * blockIdx.x + threadIdx.x;
if (index >= n) {
return;
}
if (bools[index] == 1) {
odata[indices[index]] = idata[index];
}
}

}
53 changes: 42 additions & 11 deletions stream_compaction/cpu.cu
@@ -1,15 +1,16 @@
#include <cstdio>
#include "cpu.h"

#include "common.h"


namespace StreamCompaction {
namespace CPU {
using StreamCompaction::Common::PerformanceTimer;
PerformanceTimer& timer()
{
static PerformanceTimer timer;
return timer;
}

/**
@@ -18,9 +19,13 @@ namespace StreamCompaction {
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
*/
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
//timer().startCpuTimer();
// TODO
timer().endCpuTimer();
odata[0] = 0;
for (int i = 1; i < n; i++) {
odata[i] = odata[i - 1] + idata[i - 1];
}
//timer().endCpuTimer();
}

/**
@@ -31,20 +36,46 @@
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int count = 0;
for (int i = 0; i < n; i++) {
if (idata[i] == 0) {
continue;
}
odata[count++] = idata[i];
}
timer().endCpuTimer();
return -1;
return count;
}

/**
/**
* CPU stream compaction using scan and scatter, like the parallel version.
*
* @returns the number of elements remaining after compaction.
*/
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int *mdata = new int[n];
int *sdata = new int[n];
int count = 0;
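// Step 1: map the input to booleans (1 = keep, 0 = discard)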
for (int i = 0; i < n; i++) {
if (idata[i] == 0) {
mdata[i] = 0;
}
else {
mdata[i] = 1;
}
}
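// Step 2: exclusive scan of the booleans gives each kept element its output index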
scan(n, sdata, mdata);
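// Step 3: scatter the kept elements to their scanned indices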
for (int i = 0; i < n; i++) {
if (mdata[i] != 0) {
odata[sdata[i]] = idata[i];
count++;
}
}
delete[] mdata;
delete[] sdata;
timer().endCpuTimer();
return -1;
return count;
}
}
}