Multiple CPUs and GPUs are integrated on the same chip and share memory, so access requests from different cores interfere with one another. Memory requests from the GPU seriously degrade CPU memory-access performance, and requests from multiple CPUs are also intertwined when accessing memory, which greatly hurts their performance. In addition, differences in access latency between GPU cores increase the average memory-access latency. To address these problems in the shared memory of heterogeneous multi-core systems, we propose a step-by-step memory scheduling strategy that improves system performance. When the memory controller receives a memory request, the strategy first creates a new memory request queue based on the request source and isolates CPU requests from GPU requests, thereby preventing GPU requests from interfering with CPU requests. Then, for the CPU request queue, a dynamic bank partitioning strategy maps applications to different bank sets according to their memory-access characteristics, eliminating memory-request interference among multiple CPU applications without sacrificing bank-level parallelism.
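The two steps above can be sketched in a toy simulation: one queue per request source, plus a heuristic that carves the banks into disjoint sets per CPU application. The class name, the intensity-proportional partitioning heuristic, and the CPU-first scheduling policy are all illustrative assumptions, not the paper's exact mechanism.

```python
from collections import deque

class SteppedMemoryController:
    """Sketch of a step-by-step scheduler (hypothetical): step 1 isolates
    CPU requests from GPU requests in separate queues; step 2 maps CPU
    applications onto disjoint bank sets (dynamic bank partitioning)."""

    def __init__(self, num_banks=8):
        self.num_banks = num_banks
        self.cpu_queue = deque()   # latency-sensitive CPU requests
        self.gpu_queue = deque()   # GPU requests, served when CPU queue is empty
        self.bank_map = {}         # app_id -> list of banks assigned to it

    def partition_banks(self, app_intensity):
        """Assign bank sets proportionally to each app's memory intensity
        (a simple stand-in for the paper's dynamic partitioning)."""
        total = sum(app_intensity.values())
        banks = list(range(self.num_banks))
        self.bank_map.clear()
        start = 0
        apps = sorted(app_intensity)
        for i, app in enumerate(apps):
            share = max(1, round(self.num_banks * app_intensity[app] / total))
            end = self.num_banks if i == len(apps) - 1 else min(start + share, self.num_banks)
            self.bank_map[app] = banks[start:end] or [banks[-1]]
            start = end

    def enqueue(self, source, app_id, addr):
        """Step 1: route a request to the CPU or GPU queue by its source."""
        if source == "cpu":
            # Step 2: remap the address into the app's private bank set,
            # so CPU applications never contend for the same bank.
            bank_set = self.bank_map[app_id]
            self.cpu_queue.append((app_id, addr, bank_set[addr % len(bank_set)]))
        else:
            self.gpu_queue.append((app_id, addr, addr % self.num_banks))

    def schedule(self):
        """Serve CPU requests first so GPU traffic cannot delay them."""
        if self.cpu_queue:
            return self.cpu_queue.popleft()
        if self.gpu_queue:
            return self.gpu_queue.popleft()
        return None
```

Because each CPU application only ever touches its own bank set, apps keep bank-level parallelism inside their set while never interfering with each other's row buffers.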