List: qemu-devel
Subject: Re: [PATCH 1/2] hw/nvme: Implement shadow doorbell buffer support
From: John Levon <levon () movementarian ! org>
Date: 2022-06-15 11:45:56
Message-ID: YqnGdDaQsLMp2lnA () movementarian ! org
On Wed, Jun 15, 2022 at 07:22:22PM +0800, Jinhao Fan wrote:
> > > > Isn't this racy against the driver? Compare
> > > > https://github.com/spdk/spdk/blob/master/lib/nvmf/vfio_user.c#L1317
> > >
> > > QEMU has full memory barriers on dma read/write, so I believe this is
> > > safe?
> >
> > But don't you need to re-read the tail still, for example:
>
> I think we also have a check for concurrent update on the tail. After writing
> eventidx, we read the tail again. It is here:
> @@ -5854,6 +5943,11 @@ static void nvme_process_sq(void *opaque)
> req->status = status;
> nvme_enqueue_req_completion(cq, req);
> }
> +
> + if (n->dbbuf_enabled) {
> + nvme_update_sq_eventidx(sq);
> + nvme_update_sq_tail(sq);
> + }
Ah, and we go around the loop another time in this case.
> > driver device
> >
> > eventidx is 3
> >
> > write 4 to tail
> > read tail of 4
> > write 5 to tail
> > read eventidx of 3
> > nvme_dbbuf_need_event (1)
> >
> > set eventidx to 4
>
> Therefore, at this point, we read the tail of 5.
The driver could still update the tail after the nvme_update_sq_tail() above.
However, the driver ordering (write tail, then read eventidx) does mean that it
would then do an mmio write, so yes, this looks safe, thank you.
regards
john