
List:       xen-users
Subject:    [Xen-users] cannot hotplug lvm volume to domU
From:       Tomas Mozes <hydrapolic () gmail ! com>
Date:       2016-10-14 7:10:22
Message-ID: CAG6MAzT5ON_Zq6M+BRBqE_Qux3=i4Nq=uP5ug8_Ng1z-=O0b9w () mail ! gmail ! com


After trying to add an lvm volume to domU (PV):

dom0# lvcreate -L 50G -n test vg
dom0# xl block-attach 6 '/dev/vg/test,raw,xvda77,rw'

The disk is then unavailable in the domU. Some of the errors observed
(reproduced on 3 separate servers):
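For reference, the state of the attached vbd can be inspected from dom0 with
xl block-list and xenstore-ls. A minimal sketch, assuming domain id 6 as in
the block-attach call above; the commands are only printed here so the
sequence can be reviewed before running it on a real dom0:

```shell
# Dry-run sketch: print the dom0 inspection commands for domain 6.
# On a real dom0, run the printed commands to see the backend/frontend
# state of each virtual block device and its xenstore frontend nodes.
DOMID=6
echo "xl block-list ${DOMID}"                          # vdev and state per disk
echo "xenstore-ls /local/domain/${DOMID}/device/vbd"   # frontend xenstore tree
```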

1)
[1534117.448961] vbd vbd-268435533: 28 granting access to ring page
[1534117.449143] vbd vbd-268435533: failed to write error node for
device/vbd/268435533 (28 granting access to ring page)

2)
[1536276.636688] blkfront: xvda77: barrier or flush: disabled; persistent
grants: enabled; indirect descriptors: enabled;
[1536336.701456] udevd[24087]: worker [30951]
/devices/vbd-268435533/block/xvda77 is taking a long time
[1536456.827097] udevd[24087]: worker [30951]
/devices/vbd-268435533/block/xvda77 timeout; kill it
[1536456.827120] udevd[24087]: seq 1265
'/devices/vbd-268435533/block/xvda77' killed

3)
[1534860.936190] BUG: unable to handle kernel NULL pointer dereference
at           (null)
[1534860.936208] IP: [<ffffffff81787442>] _raw_spin_lock_irq+0x12/0x30
[1534860.936226] PGD 4bfc31067 PUD 4bec59067 PMD 0
[1534860.936236] Oops: 0002 [#1] SMP
[1534860.936246] CPU: 4 PID: 105 Comm: xenwatch Not tainted 4.4.21-gentoo #1
[1534860.936253] task: ffff8804c5721980 ti: ffff8804c5744000 task.ti:
ffff8804c5744000
[1534860.936261] RIP: e030:[<ffffffff81787442>]  [<ffffffff81787442>]
_raw_spin_lock_irq+0x12/0x30
[1534860.936276] RSP: e02b:ffff8804c5747cb8  EFLAGS: 00010046
[1534860.936282] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
0000000180400002
[1534860.936288] RDX: 0000000000000001 RSI: 0000000000000000 RDI:
0000000000000000
[1534860.936295] RBP: ffff8804c5747cb8 R08: 00000000863e8401 R09:
0000000180400002
[1534860.936303] R10: ffffea000e18fa00 R11: ffff8803863e8480 R12:
ffff8804b15bc400
[1534860.936310] R13: ffffffff81ac2340 R14: 0000000000000000 R15:
ffff8804b15bc400
[1534860.936328] FS:  0000000000000000(0000) GS:ffff8804c9280000(0000)
knlGS:ffff8804c9280000
[1534860.936344] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
[1534860.936354] CR2: 0000000000000000 CR3: 00000004c1b72000 CR4:
0000000000042660
[1534860.936365] Stack:
[1534860.936372]  ffff8804c5747cf8 ffffffff815a57ab ffffffff8157f0ee
0000000000000000
[1534860.936387]  ffff8804b15bc400 ffffffff81ac2340 0000000000000008
ffff8804b15bc400
[1534860.936403]  ffff8804c5747d30 ffffffff815a6a65 ffff8804b15bc440
ffff8804b15bc400
[1534860.936420] Call Trace:
[1534860.936434]  [<ffffffff815a57ab>] blkif_free+0x1b/0x5c0
[1534860.936446]  [<ffffffff8157f0ee>] ? unregister_xenbus_watch+0x15e/0x1f0
[1534860.936457]  [<ffffffff815a6a65>] blkfront_remove+0x25/0x100
[1534860.936474]  [<ffffffff8157fb97>] xenbus_dev_remove+0x47/0x70
[1534860.936487]  [<ffffffff8158f6bd>] __device_release_driver+0x8d/0x120
[1534860.936498]  [<ffffffff8158ffae>] device_release_driver+0x1e/0x30
[1534860.936508]  [<ffffffff8158e853>] bus_remove_device+0xf3/0x140
[1534860.936519]  [<ffffffff8158c595>] device_del+0x125/0x240
[1534860.936528]  [<ffffffff8158c6bd>] device_unregister+0xd/0x20
[1534860.936538]  [<ffffffff8157ff97>] xenbus_dev_changed+0x97/0x1c0
[1534860.936548]  [<ffffffff8157e0b0>] ? split+0x100/0x100
[1534860.936559]  [<ffffffff815812e6>] frontend_changed+0x16/0x20
[1534860.936568]  [<ffffffff8157e148>] xenwatch_thread+0x98/0x130
[1534860.936583]  [<ffffffff8108f6c0>] ? __wake_up_common+0x80/0x80
[1534860.936595]  [<ffffffff81073c84>] kthread+0xc4/0xe0
[1534860.936603]  [<ffffffff81073bc0>] ? __kthread_parkme+0x70/0x70
[1534860.936616]  [<ffffffff81787c4f>] ret_from_fork+0x3f/0x70
[1534860.936628]  [<ffffffff81073bc0>] ? __kthread_parkme+0x70/0x70
[1534860.936636] Code: d8 5b 5d c3 89 c6 e8 2e ac 90 ff 66 90 48 89 d8 5b
5d c3 66 0f 1f 44 00 00 55 48 89 e5 ff 14 25 d0 91 a2 81 31 c0 ba 01 00 00
00 <f0> 0f b1 17 85 c0 75 02 5d c3 89 c6 e8 fd ab 90 ff 66 90 5d c3
[1534860.936734] RIP  [<ffffffff81787442>] _raw_spin_lock_irq+0x12/0x30
[1534860.936746]  RSP <ffff8804c5747cb8>
[1534860.936755] CR2: 0000000000000000
[1534860.936765] ---[ end trace a11a8bd0c27e9415 ]---

After shutting down the domU, adding the disk to the configuration and
starting the VM again, the disk is attached properly - it seems that only
hotplug fails. Likewise, after a fresh reboot of dom0, hotplug works properly.
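For concreteness, the cold-attach workaround amounts to shutting the guest
down, extending the disk list in the configuration, and recreating the domU.
With the configuration below, the edited list would look like this (a sketch
reusing the device paths from this post):

```
disk = [
'/dev/vg_data/test-root,raw,xvda1,rw',
'/dev/vg_data/test-opt,raw,xvda3,rw',
'/dev/vg_data/test-home,raw,xvda2,rw',
'/dev/vg_data/test-tmp,raw,xvda4,rw',
'/dev/vg_data/test-var,raw,xvda5,rw',
'/dev/vg/test,raw,xvda77,rw'
]
```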

The dom0 is running Xen 4.6.3 on Linux Kernel 4.4.21. The server uptime was
30 days when trying to add the disks.

The domU configuration is as follows:

kernel = "kernel-4.4.21-gentoo-xen"
extra = "root=/dev/xvda1 net.ifnames=0"
memory = 20000
vcpus = 24
vif = [ '' ]
disk = [
'/dev/vg_data/test-root,raw,xvda1,rw',
'/dev/vg_data/test-opt,raw,xvda3,rw',
'/dev/vg_data/test-home,raw,xvda2,rw',
'/dev/vg_data/test-tmp,raw,xvda4,rw',
'/dev/vg_data/test-var,raw,xvda5,rw'
]

On a server running kernel 4.1.20 and Xen 4.6.1, hotplug works properly (that
server had an uptime of 130 days).

Thanks,
Tomas Mozes




_______________________________________________
Xen-users mailing list
Xen-users@lists.xen.org
https://lists.xen.org/xen-users
