Project 2: Edward Atter #5

Open · wants to merge 17 commits into master
1 change: 1 addition & 0 deletions CMakeLists.txt
@@ -3,6 +3,7 @@ cmake_minimum_required(VERSION 3.1)
project(cis565_stream_compaction_test)

set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

# Enable C++11 for host code
set(CMAKE_CXX_STANDARD 11)
92 changes: 86 additions & 6 deletions README.md
@@ -3,12 +3,92 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
* Edward Atter
* [LinkedIn](https://www.linkedin.com/in/atter/)
* Tested on: Linux Mint 18.3 Sylvia (4.13.0-41-generic), Ryzen 7 2700x @ 3.7 GHz (base clock), 16GB RAM, GTX 1070 Ti 8GB GDDR5 (Personal)
* CUDA 9

### (TODO: Your README)
## Overview

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)
This project implements array scanning (prefix sum) and stream compaction. The diagram below, from GPU Gems 3, illustrates the scan algorithm.

![Scanning implementation](img/figure-39-2.jpg)

Compaction is useful for shrinking an array when only the non-zero ("true") values matter; think of sparse arrays, or considering only the objects that actually collide.

## Performance

#### Methodology

Memory operations such as `malloc` or `memset` are excluded from the results. In the case of compaction, only the final step (scatter) is timed; preliminary data formatting (for example, mapping to the boolean array, or scanning to compute the output indices) is not included. Unless otherwise stated, a block size of 1024 was used throughout the analysis.

The timing data displayed below is an average across two runs. Ideally there would be many more trials, though in practice the timings varied little between runs.

#### Analysis

Large block sizes perform best, likely because each thread does very little work. A block size of 1024 was chosen for the remaining tests. See the graphs below for a comparison.

![Time vs Array Size (BS = 256)](img/time-vs-array-bs-256.png) ![Time vs Array Size (BS = 256, zoomed)](img/time-vs-array-bs-256-zoom.png)

![Time vs Array Size (BS = 1024)](img/time-vs-array-bs-1024.png) ![Time vs Array Size (BS = 1024, zoomed)](img/time-vs-array-bs-1024-zoom.png)

With the exception of thrust, both GPU implementations improve relative to the CPU as the array size increases. The naive solution is never worth using, as it is always the slowest. When the array size grows extremely large, the efficient implementation beats even the standard thrust library. The main performance bottleneck is memory I/O; it could be reduced significantly by using shared memory instead of global memory within each block.

It should be noted that thrust is actually at an unfair disadvantage in these tests. Memory allocations are excluded from the timings of the custom-built solutions, while the thrust timing includes all of its internal memory allocations.

Based on these observations, along with watching `top` and `nvidia-smi` during the runs, I believe thrust uses the CPU for small arrays and switches to the GPU once the array reaches a sufficient length.

## Program Output

```
****************
** SCAN TESTS **
****************
[ 40 41 8 46 3 10 25 38 24 42 12 31 35 ... 16 0 ]
==== cpu scan, power-of-two ====
elapsed time: 0.14779ms (std::chrono Measured)
[ 0 40 81 89 135 138 148 173 211 235 277 289 320 ... 6428057 6428073 ]
==== cpu scan, non-power-of-two ====
elapsed time: 0.14218ms (std::chrono Measured)
[ 0 40 81 89 135 138 148 173 211 235 277 289 320 ... 6427957 6427981 ]
passed
==== naive scan, power-of-two ====
elapsed time: 0.163936ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 0.109696ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 0.131392ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 0.119008ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 0.221088ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.22096ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 2 1 0 2 1 2 1 2 2 2 0 1 1 ... 0 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 1.05048ms (std::chrono Measured)
[ 2 1 2 1 2 1 2 2 2 1 1 3 3 ... 3 3 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 1.05846ms (std::chrono Measured)
[ 2 1 2 1 2 1 2 2 2 1 1 3 3 ... 2 3 ]
passed
==== cpu compact with scan ====
elapsed time: 1.09453ms (std::chrono Measured)
[ 2 1 2 1 2 1 2 2 2 1 1 3 3 ... 3 3 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 0.022336ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 0.024768ms (CUDA Measured)
passed
```
Binary file added img/time-vs-array-bs-1024-zoom.png
Binary file added img/time-vs-array-bs-1024.png
Binary file added img/time-vs-array-bs-256-zoom.png
Binary file added img/time-vs-array-bs-256.png
4 changes: 2 additions & 2 deletions src/main.cpp
@@ -13,7 +13,7 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 18; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
@@ -147,7 +147,7 @@ int main(int argc, char* argv[]) {
//printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);

system("pause"); // stop Win32 console from closing on exit
//system("pause"); // stop Win32 console from closing on exit
delete[] a;
delete[] b;
delete[] c;
2 changes: 1 addition & 1 deletion stream_compaction/CMakeLists.txt
@@ -13,5 +13,5 @@ set(SOURCE_FILES

cuda_add_library(stream_compaction
${SOURCE_FILES}
OPTIONS -arch=sm_20
OPTIONS -arch=sm_30
)
29 changes: 27 additions & 2 deletions stream_compaction/common.cu
@@ -23,7 +23,14 @@ namespace StreamCompaction {
* which map to 0 will be removed, and elements which map to 1 will be kept.
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
    // TODO
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i >= n) { return; }

    if (idata[i] != 0) {
        bools[i] = 1;
    } else {
        bools[i] = 0;
    }
}

/**
@@ -32,7 +39,25 @@
*/
__global__ void kernScatter(int n, int *odata,
        const int *idata, const int *bools, const int *indices) {
    // TODO
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i >= n) { return; }  // guard the partially full last block

    if (bools[i] == 1) {
        odata[indices[i]] = idata[i];
    }
}

__global__ void kernConvertScanToExclusive(int n, int exclusiveScan[], const int inclusiveScan[]) {
    int idx = threadIdx.x + (blockIdx.x * blockDim.x);
    if (idx >= n) {
        return;
    } else if (idx >= 1) {
        exclusiveScan[idx] = inclusiveScan[idx - 1];
        return;
    }

    exclusiveScan[0] = 0;
}

}
5 changes: 5 additions & 0 deletions stream_compaction/common.h
@@ -12,6 +12,7 @@

#define FILENAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__)
#define checkCUDAError(msg) checkCUDAErrorFn(msg, FILENAME, __LINE__)
#define BLOCK_SIZE 1024

/**
* Check for CUDA errors; print and exit if there was a problem.
@@ -37,6 +38,8 @@ namespace StreamCompaction {
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices);

__global__ void kernConvertScanToExclusive(int n, int exclusiveScan[], const int inclusiveScan[]);

/**
* This class is used for timing the performance
* Uncopyable and unmovable
@@ -114,6 +117,8 @@ namespace StreamCompaction {
PerformanceTimer& operator=(const PerformanceTimer&) = delete;
PerformanceTimer& operator=(PerformanceTimer&&) = delete;



private:
cudaEvent_t event_start = nullptr;
cudaEvent_t event_end = nullptr;
45 changes: 39 additions & 6 deletions stream_compaction/cpu.cu
@@ -19,20 +19,36 @@
*/
void scan(int n, int *odata, const int *idata) {
    timer().startCpuTimer();
    // TODO
    scanNoTimer(n, odata, idata);
    timer().endCpuTimer();
}

void scanNoTimer(int n, int *odata, const int *idata) {
    odata[0] = 0; // Add identity
    int sum = 0;
    for (int i = 0; i < n - 1; i++) {
        sum += idata[i];
        odata[i + 1] = sum;
    }
}

/**
* CPU stream compaction without using the scan function.
*
* @returns the number of elements remaining after compaction.
*/
int compactWithoutScan(int n, int *odata, const int *idata) {
    timer().startCpuTimer();
    // TODO
    int index = 0;

    for (int i = 0; i < n; i++) {
        if (idata[i] != 0) {
            odata[index] = idata[i];
            index++;
        }
    }
    timer().endCpuTimer();
    return -1;
    return index;
}

/**
@@ -41,10 +57,27 @@
* @returns the number of elements remaining after compaction.
*/
int compactWithScan(int n, int *odata, const int *idata) {
    timer().startCpuTimer();
    // TODO
    int count = 0;
    // Heap allocations instead of VLAs: `int booleanArray[n]` is a GCC
    // extension, not standard C++, and overflows the stack for large n.
    int *booleanArray = new int[n];
    int *indexArray = new int[n];
    timer().startCpuTimer();
    for (int i = 0; i < n; i++) {
        if (idata[i] != 0) {
            booleanArray[i] = 1;
        } else {
            booleanArray[i] = 0;
        }
    }
    scanNoTimer(n, indexArray, booleanArray);
    // Finally, scatter
    for (int i = 0; i < n; i++) {
        if (idata[i] != 0) {
            odata[indexArray[i]] = idata[i];
            count++;
        }
    }
    timer().endCpuTimer();
    delete[] booleanArray;
    delete[] indexArray;
    return -1;
    return count;
}
}
}
2 changes: 2 additions & 0 deletions stream_compaction/cpu.h
@@ -8,6 +8,8 @@ namespace StreamCompaction {

void scan(int n, int *odata, const int *idata);

void scanNoTimer(int n, int *odata, const int *idata);

int compactWithoutScan(int n, int *odata, const int *idata);

int compactWithScan(int n, int *odata, const int *idata);