List: llvm-bugs
Subject: [llvm-bugs] [Bug 65134] MLIR: gpu dialect, runtime error
From: LLVM Bugs via llvm-bugs <llvm-bugs () lists ! llvm ! org>
Date: 2023-08-31 8:20:30
Message-ID: 20230831082030.6f9e651e981ebfd8 () email ! llvm ! org
<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/65134">65134</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
MLIR: gpu dialect, runtime error
</td>
</tr>
<tr>
<th>Labels</th>
<td>
new issue
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
lyc200150
</td>
</tr>
</table>
<pre>
Hi, I am running a program using the gpu dialect, and it hits a segmentation fault when the iteration count is increased. The problem also happens when running a vector-add example:
```
func.func @main() {
%cn = arith.constant 32768 : index
%d_a = gpu.alloc (%cn) : memref<?xf32>
%d_b = gpu.alloc (%cn) : memref<?xf32>
%d_c = gpu.alloc (%cn) : memref<?xf32>
%h_a = memref.alloc(%cn) : memref<?xf32>
%h_b = memref.alloc(%cn) : memref<?xf32>
%h_c = memref.alloc(%cn) : memref<?xf32>
%unrank_ha = memref.cast %h_a : memref<?xf32> to memref<*xf32>
%unrank_hb = memref.cast %h_b : memref<?xf32> to memref<*xf32>
%unrank_hc = memref.cast %h_c : memref<?xf32> to memref<*xf32>
func.call @test_init_f32(%unrank_ha, %cn) : (memref<*xf32>, index) -> ()
func.call @test_init_f32(%unrank_hb, %cn) : (memref<*xf32>, index) -> ()
func.call @test_init_f32(%unrank_hc, %cn) : (memref<*xf32>, index) -> ()
gpu.memcpy %d_a, %h_a : memref<?xf32>, memref<?xf32>
gpu.memcpy %d_b, %h_b : memref<?xf32>, memref<?xf32>
gpu.memcpy %d_c, %h_c : memref<?xf32>, memref<?xf32>
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%iter = arith.constant 80000 : index
%dim_sz = arith.constant 128 : index
%grid_sz = arith.constant 256 : index
scf.for %i = %c0 to %iter step %c1 {
// print iter
%i_iter = arith.index_cast %i : index to i32
func.call @debug(%i_iter) : (i32) -> ()
gpu.launch blocks(%arg1, %arg2, %arg3) in (%sz_x = %grid_sz, %sz_y = %c1, %sz_z = %c1)
           threads(%arg4, %arg5, %arg6) in (%tx = %dim_sz, %ty = %c1, %tz = %c1) {
%threadidx = gpu.thread_id x
%blockid = gpu.block_id x
%block_offset = arith.muli %blockid, %dim_sz : index
%thread_offset = arith.addi %threadidx, %block_offset : index
// threadidx + blockdim * blockIdx
%mem_a = memref.load %d_a[%thread_offset] : memref<?xf32>
%mem_b = memref.load %d_b[%thread_offset] : memref<?xf32>
%mem_c = memref.load %d_c[%thread_offset] : memref<?xf32>
// c[i] = a[i] + b[i] + c[i]
%add_ab = arith.addf %mem_a, %mem_b : f32
%add_abc = arith.addf %add_ab, %mem_c : f32
memref.store %add_abc, %d_c[%thread_offset] : memref<?xf32>
gpu.terminator
}
}
gpu.dealloc %d_a : memref<?xf32>
gpu.dealloc %d_b : memref<?xf32>
gpu.dealloc %d_c : memref<?xf32>
return
}
// Declarations for the external helpers called above; their signatures follow
// from the call sites, the definitions are not part of this report.
func.func private @test_init_f32(memref<*xf32>, index)
func.func private @debug(i32)
```
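For reference, the launch configuration above (256 blocks of 128 threads) covers exactly the 32768 elements allocated as %cn, so the per-thread offset `threadidx + blockid * dim_sz` should always stay in bounds. A small Python sketch of that index arithmetic (the constants mirror the MLIR values above; everything else is illustrative only):

```python
# Mirror of the kernel's index computation:
# thread_offset = threadIdx.x + blockIdx.x * blockDim.x
GRID_SZ = 256   # %grid_sz: number of blocks
DIM_SZ = 128    # %dim_sz: threads per block
CN = 32768      # %cn: vector length

offsets = [block * DIM_SZ + thread
           for block in range(GRID_SZ)
           for thread in range(DIM_SZ)]

# Every element is touched exactly once and no offset goes out of bounds.
assert len(set(offsets)) == CN
assert max(offsets) == CN - 1
```

Since the indexing stays in bounds regardless of the scf.for trip count, the failure threshold near 2^16 iterations may hint at a per-launch resource issue rather than the kernel body itself, though that is only an observation, not a diagnosis.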
The program hits the segmentation fault at iteration 65372, and that number does not change when the vector size is changed to 16384 or 65536. The problem can be reproduced on different A100 cards. I am running at llvm@79786c4d23f1fd7af438e4fd4e33ec109626bee4 and use the following pipeline:
```
lower-test:
@${BUDDY_OPT} ${INPUT} \
--gpu-kernel-outlining \
-gpu-async-region \
-buffer-deallocation \
-memref-expand \
-convert-scf-to-cf \
--convert-gpu-to-nvvm --gpu-to-cubin \
-convert-index-to-llvm -finalize-memref-to-llvm -convert-arith-to-llvm \
-convert-cf-to-llvm -convert-func-to-llvm --gpu-to-llvm \
-reconcile-unrealized-casts -o ./test.mlir
```
thanks for your help!
</pre>
_______________________________________________
llvm-bugs mailing list
llvm-bugs@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-bugs