The first thing you’ll need to understand is the thread hierarchy. CPUs are designed to run just a few threads, very quickly. GPUs, on the other hand, are designed to process thousands of threads simultaneously, with great efficiency. So, to take full advantage of your graphics card, you’ll need to break your problem down into hundreds or thousands of threads.
Half-Warp – A half-warp is a group of 16 consecutive threads. Threads in a half-warp are generally executed together. Half-warps are aligned: threads 0->15 form one half-warp, threads 16->31 the next, and so on.
Warp – A warp is a group of 32 consecutive threads. On future computing devices from nVidia, it may be that all threads in the same warp are executed together in parallel. Therefore, it is a good idea to write your programs as if all threads within the same warp will execute together in parallel. Threads 0->31 form one warp, threads 32->63 the next, and so on.
Block – A block is a collection of threads. For technical reasons, a block should have at least 192 threads to obtain maximum efficiency and full latency hiding. Typically, blocks contain 256, 512, or even 768 threads. Here’s the important thing you need to know: threads within the same block can synchronize with each other and communicate with each other quickly.
Grid – A grid is a collection of blocks. Blocks cannot synchronize with each other, and therefore threads within one block cannot synchronize with threads in another block.
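To make the hierarchy concrete, here is a minimal sketch of a kernel (the kernel and parameter names here are illustrative, not from any particular library). Each thread combines its block and thread coordinates into a unique global index:

// Each thread computes one element of c = a + b.
// blockIdx, blockDim, and threadIdx are built-in CUDA variables that
// locate a thread within the grid/block hierarchy described above.
__global__ void addKernel(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique global index
    if (i < n)                                      // guard against extra threads
        c[i] = a[i] + b[i];
}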
Step 3: Understand the memory hierarchy
Global Memory – Global memory can be thought of as the physical memory on your graphics card. If you have an integrated nVidia chipset like the ION, global memory is the portion of system memory allotted to the graphics device. All threads can read and write to global memory. You can even read and write global memory from a thread on the CPU.
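For example, here is a sketch of the CPU side (h_a, d_a, and n are assumed to be defined elsewhere; error checking is omitted). The host allocates global memory, copies data in, and copies results back:

// Allocate global memory on the device and move data to and from it.
float *d_a;
size_t bytes = n * sizeof(float);
cudaMalloc((void**)&d_a, bytes);                     // allocate global memory
cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice); // CPU -> global memory
// ... launch a kernel that reads and writes d_a ...
cudaMemcpy(h_a, d_a, bytes, cudaMemcpyDeviceToHost); // global memory -> CPU
cudaFree(d_a);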
Shared Memory – A GPU consists of many processors, or multiprocessors. Each multiprocessor has a small amount of shared memory, on the order of 16KB. Shared memory is generally used as a very fast working space for the threads within a block, and it is allocated on a block-by-block basis. For example, if three blocks are running concurrently on the same multiprocessor, the maximum amount of shared memory each block can reserve is 16KB / 3. Threads within the same block can quickly and easily communicate with each other by writing to and reading from shared memory. It’s worth mentioning that shared memory is at least 100 times faster than global memory, so it’s very advantageous if you can use it correctly.
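In code, shared memory is declared with the __shared__ qualifier inside a kernel; the names below are illustrative. The array size can be fixed at compile time, or supplied at launch time with an extern declaration:

__global__ void staticSharedKernel(float *buf)
{
    __shared__ float tile[256];           // 1KB carved from this block's shared memory
    tile[threadIdx.x] = buf[threadIdx.x]; // assumes blockDim.x <= 256
    // ... work on tile ...
}

__global__ void dynamicSharedKernel(float *buf)
{
    extern __shared__ float dynTile[];    // sized at launch time
    dynTile[threadIdx.x] = buf[threadIdx.x];
    // ... work on dynTile ...
}

// The dynamic size is passed as the optional third launch argument:
// dynamicSharedKernel<<<blocks, 256, 256 * sizeof(float)>>>(d_buf);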
Texture Memory – A GPU also has texture units and texture memory, which can be taken advantage of in some circumstances. Unlike global memory, texture memory is cached, and it is generally read-only. If you expect threads to access memory addresses with some coherence, you might want to consider using texture memory to speed up those memory accesses.
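As a sketch of the legacy texture-reference API from this era (the names are illustrative, and devPtr is assumed to be an existing global-memory buffer), a 1D buffer can be bound to a texture and read through the cache with tex1Dfetch:

texture<float, 1, cudaReadModeElementType> texRef; // file-scope texture reference

__global__ void average3(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)                   // neighboring reads hit the cache
        out[i] = (tex1Dfetch(texRef, i - 1) +
                  tex1Dfetch(texRef, i) +
                  tex1Dfetch(texRef, i + 1)) / 3.0f;
}

// Host side: bind the texture to the buffer before launching.
// cudaBindTexture(0, texRef, devPtr, n * sizeof(float));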
Constant Memory – If every thread in a half-warp requests data from the same address in constant memory, your GPU will generate only a single read request and subsequently broadcast the data to every thread.
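A sketch of that broadcast pattern (the names here are illustrative): every thread reads the same constant address, so the half-warp is serviced by one read:

__constant__ float coeffs[16];     // lives in cached constant memory

__global__ void scaleKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= coeffs[0];      // same address for all threads: one read
                                   // request, broadcast to the half-warp
}

// Host side: constant memory is filled with cudaMemcpyToSymbol.
// cudaMemcpyToSymbol(coeffs, hostCoeffs, sizeof(coeffs));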
To achieve high memory bandwidth, shared memory is divided into equally sized memory modules, called banks, which can be accessed simultaneously. The banks are organized such that successive 32-bit words are assigned to successive banks, and each bank has a bandwidth of 32 bits per two clock cycles.
So, any memory read or write request made of n addresses that fall in n distinct memory banks can be serviced simultaneously, yielding an effective bandwidth that is n times as high as the bandwidth of a single module. However, if two addresses of a memory request fall in the same memory bank, there is a bank conflict and the access has to be serialized. The hardware splits a memory request with bank conflicts into as many separate conflict-free requests as necessary, decreasing the effective bandwidth by a factor equal to the number of separate memory requests. If the number of separate memory requests is n, the initial memory request is said to cause n-way bank conflicts.
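The two access patterns below illustrate this inside a kernel (a sketch; the array name is arbitrary). With 16 banks and successive words in successive banks, a stride-1 access is conflict-free, while a stride-2 access causes 2-way conflicts:

__global__ void strideExample(void)
{
    __shared__ float s[512];
    float a = s[threadIdx.x];      // stride 1: the 16 threads of a half-warp
                                   // hit 16 distinct banks -> no conflict
    float b = s[threadIdx.x * 2];  // stride 2: threads t and t+8 hit the
                                   // same bank -> 2-way conflict, serialized
    // a and b are unused; this kernel only illustrates the addressing.
}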
FunctionName<<<m, n>>>(parameters);
m: the number of blocks in the grid (at most 65535 per grid dimension).
n: the number of threads in each block (at most maxThreadsPerBlock, which is 512 on compute capability 1.x devices). See the launch sketch below.
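Putting it together, here is a hypothetical launch of the addKernel sketch from earlier (d_a, d_b, and d_c are assumed to be device buffers), covering n elements with 256 threads per block:

int threadsPerBlock = 256;                                 // <= maxThreadsPerBlock
int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // round up to cover n
addKernel<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);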
On page 89 of the CUDA Programming Guide 2.2, there are two examples of bank conflicts:
__shared__ struct type shared[32];
struct type data = shared[BaseIndex + tid];
It says that if the data is accessed based on tid and the type is declared as follows, there will be NO bank conflict:
struct type { float x, y, z; };
And if the type is declared like this, it WILL have a bank conflict:
struct type { float x, y; };
In the first case, the compiler reads the data as:
x.x = ((float*) shared)[tid * 3 + 0];
x.y = ((float*) shared)[tid * 3 + 1];
x.z = ((float*) shared)[tid * 3 + 2];
Imagine the threads of the 1st half-warp, where tid ranges from 0 to 15. They address the following banks: [((0..15)*3 + 0) % 16] = [0,3,6,9,12,15,2,5,8,11,14,1,4,7,10,13]. As you can see, the threads address different banks when reading 'x', and the same goes for 'y' and 'z'.
32 structs * 3 floats = 96 words. Each bank holds one 32-bit word per row; with 16 banks, that is 96 / 16 = 6 rows:
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
x | y | z | x | y | z | x | y | z | x | y | z | x | y | z | x |
y | z | x | y | z | x | y | z | x | y | z | x | y | z | x | y |
z | x | y | z | x | y | z | x | y | z | x | y | z | x | y | z |
x | y | z | x | y | z | x | y | z | x | y | z | x | y | z | x |
y | z | x | y | z | x | y | z | x | y | z | x | y | z | x | y |
z | x | y | z | x | y | z | x | y | z | x | y | z | x | y | z |
In a half-warp, there are 16 threads, so shared[0..15] is accessed by the first half-warp simultaneously, and shared[16..31] by the second half-warp. In the table above, the first 3 rows are accessed simultaneously by the 1st half-warp, and since each x, y, or z falls in a different bank, there is no conflict. Similarly, the last 3 rows are accessed simultaneously by the 2nd half-warp, and again each x, y, or z falls in a different bank, so there is no conflict.
Most important fact: each thread accesses only one 32-bit word at a time, so the three-word struct read is issued as three separate single-word accesses.
Now the second case:
x.x = ((float*) shared)[tid * 2 + 0];
x.y = ((float*) shared)[tid * 2 + 1];
Reading 'x' addresses the following banks: [((0..15)*2 + 0) % 16] = [0,2,4,6,8,10,12,14,0,2,4,6,8,10,12,14]. As you can see, every other bank is referenced twice, giving you a 2-way bank conflict. The same happens when reading 'y'.
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
x | y | x | y | x | y | x | y | x | y | x | y | x | y | x | y |
x | y | x | y | x | y | x | y | x | y | x | y | x | y | x | y |
x | y | x | y | x | y | x | y | x | y | x | y | x | y | x | y |
x | y | x | y | x | y | x | y | x | y | x | y | x | y | x | y |
In a half-warp, there are 16 threads, so shared[0..15] is accessed by the first half-warp simultaneously, and shared[16..31] by the second. In the table above, the first 2 rows are accessed simultaneously by the 1st half-warp, and since threads t and t+8 read their x (or y) from the same bank, there is a 2-way bank conflict.
"When multiple threads in the same warp access the same bank, a bank conflict occurs unless all threads of the warp access the same address within the same 32-bit word." First, there are 16 memory banks, each 4 bytes wide. So essentially, if two threads in a half-warp access different 32-bit words that fall in the same bank, you're going to have a bank conflict and serialization; the exception is when every thread reads the same word, which is serviced as a single broadcast.
Conclusion: for the above example, the following should be taken into consideration:
1: How many banks are there? Some GPUs have 16 (compute capability 1.x), while others have 32. In the example above, it is 16.
2: Whether shared memory is accessed per warp or per half-warp. In the example above, it is per half-warp.
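One possible remedy for the two-float case, added here as a sketch rather than something from the guide: pad the struct to an odd number of 32-bit words. A stride of 3 words (like the x, y, z case above) has gcd(3, 16) = 1, so consecutive threads fall in distinct banks:

struct type
{
    float x, y;
    float pad;  // unused padding word: turns the stride-2 access into
                // stride 3, which is conflict-free across 16 banks
};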
The CUDA C compiler treats variables in shared memory differently than typical variables. It creates a copy of the variable for each block that you launch on the GPU. Every thread in that block shares the memory, but threads cannot see or modify the copy of this variable that is seen within other blocks. This provides an excellent means by which threads within a block can communicate and collaborate on computations. Furthermore, shared memory buffers reside physically on the GPU as opposed to residing in off-chip DRAM. Because of this, the latency to access shared memory tends to be far lower than for typical buffers, making shared memory effective as a per-block, software-managed cache or scratchpad.
__syncthreads() guarantees that every thread in the block has completed all instructions prior to the __syncthreads() before the hardware will execute the next instruction on any thread.
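The classic pattern that depends on this guarantee is a per-block sum reduction in shared memory. Here is a sketch (the kernel name is illustrative, and it is assumed to be launched with 256 threads per block, a power of two):

__global__ void blockSum(const float *in, float *out)
{
    __shared__ float sdata[256];                 // one slot per thread
    int tid = threadIdx.x;
    sdata[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                             // all loads finish before any reads

    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];        // fold the top half onto the bottom
        __syncthreads();                         // each round must complete fully
    }
    if (tid == 0)
        out[blockIdx.x] = sdata[0];              // thread 0 writes the block's sum
}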
Detecting bank conflicts:
You can use the bank checker macro (part of cutil) if you compile and run in emulation mode. It will tell you where in your code you're getting bank conflicts.
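As an illustration (the macro comes from the old CUDA SDK's cutil; the exact name and behavior may differ between SDK versions, so treat this as a sketch), shared-memory accesses are routed through the macro so the emulator can log conflicts:

#include <cutil.h>

__global__ void checkedKernel(float *out)
{
    __shared__ float sdata[256];
    // In device-emulation builds, CUT_BANK_CHECKER records which bank each
    // thread touches and reports conflicts; in normal builds it compiles
    // down to a plain sdata[index] access.
    CUT_BANK_CHECKER(sdata, threadIdx.x) = out[threadIdx.x];
    __syncthreads();
    out[threadIdx.x] = CUT_BANK_CHECKER(sdata, threadIdx.x);
}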