How to understand "All threads in a warp execute the same instruction at the same time." in GPU?

I am reading Professional CUDA C Programming, and in the GPU Architecture Overview section:

CUDA employs a Single Instruction Multiple Thread (SIMT) architecture to manage and execute threads in groups of 32 called warps. All threads in a warp execute the same instruction at the same time. Each thread has its own instruction address counter and register state, and carries out the current instruction on its own data. Each SM partitions the thread blocks assigned to it into 32-thread warps that it then schedules for execution on available hardware resources.

The SIMT architecture is similar to the SIMD (Single Instruction, Multiple Data) architecture. Both SIMD and SIMT implement parallelism by broadcasting the same instruction to multiple execution units. A key difference is that SIMD requires that all vector elements in a vector execute together in a unified synchronous group, whereas SIMT allows multiple threads in the same warp to execute independently. Even though all threads in a warp start together at the same program address, it is possible for individual threads to have different behavior. SIMT enables you to write thread-level parallel code for independent, scalar threads, as well as data-parallel code for coordinated threads. The SIMT model includes three key features that SIMD does not: ➤ Each thread has its own instruction address counter. ➤ Each thread has its own register state. ➤ Each thread can have an independent execution path.

The first paragraph says "All threads in a warp execute the same instruction at the same time.", while the second paragraph says "Even though all threads in a warp start together at the same program address, it is possible for individual threads to have different behavior." This confuses me, because the two statements seem contradictory. Could anyone explain it?
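For concreteness, this is the kind of code the second statement seems to describe (a sketch of my own, not an example from the book): every thread of a warp reaches the `if` executing the same instruction, yet the lanes end up doing different work:

```cuda
__global__ void branchy(int *out)
{
    // All 32 threads of a warp arrive here executing the same instruction...
    if (threadIdx.x % 2 == 0)
        out[threadIdx.x] = 1;   // ...but even lanes take this path...
    else
        out[threadIdx.x] = 2;   // ...while odd lanes take this one.
}
```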

Accepted answer

There is no contradiction. All threads in a warp execute the same instruction in lock-step at all times. To support conditional execution and branching, CUDA introduces two concepts in the SIMT model:

- Predicated execution (see here)
- Instruction replay/serialisation (see here)

Predicated execution means that the result of a conditional instruction can be used to mask off threads from executing a subsequent instruction, without a branch. Instruction replay is how a classic conditional branch is dealt with: all threads execute every branch of the conditionally executed code by replaying the instructions. Threads that do not follow a particular execution path are masked off and execute the equivalent of a NOP. This is the source of the so-called branch-divergence penalty in CUDA, which can have a significant impact on performance.
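As a rough illustration of predicated execution (a sketch under my own assumptions, not code from this answer): for a short guarded body like the one below, the compiler will often emit a predicated instruction instead of an actual branch, so the whole warp issues the store and the predicate simply suppresses its effect on the lanes where the condition is false.

```cuda
__global__ void scale(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // A small guarded body like this is typically compiled to a predicated
    // store (a "@%p st.global.f32 ..." instruction in PTX) rather than a
    // real branch: every lane issues the instruction, and the predicate
    // masks its effect on lanes where i >= n.
    if (i < n)
        x[i] = 2.0f * x[i];
}
```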

This is how lock-step execution can support branching.
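The masking is also observable in practice. The sketch below (my own illustration, assuming CUDA 9 or later for `__activemask()`) launches a single warp that diverges down two paths; each path reports a different active mask, showing that while one path executes, the other path's lanes are masked off. (On Volta and later, independent thread scheduling means the exact interleaving is not guaranteed, but divergent paths still execute with inactive lanes masked.)

```cuda
#include <cstdio>

// Deliberately divergent kernel: lanes of the same warp take different
// branches, so the warp executes both paths with inactive lanes masked off.
__global__ void divergent(void)
{
    unsigned lane = threadIdx.x % 32;
    if (lane < 16)
        printf("lane %2u: if-branch,   active mask 0x%08x\n", lane, __activemask());
    else
        printf("lane %2u: else-branch, active mask 0x%08x\n", lane, __activemask());
}

int main(void)
{
    divergent<<<1, 32>>>();      // exactly one warp
    cudaDeviceSynchronize();     // flush device-side printf
    return 0;
}
```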
