From kfm-devel Sat Mar 30 21:56:54 2002
From: Dawit
Date: Sat, 30 Mar 2002 21:56:54 +0000
To: kfm-devel
Subject: Re: kio_http fix
X-MARC-Message: https://marc.info/?l=kfm-devel&m=101752578906666

On Saturday 30 March 2002 12:45, Matthias Welwarsky wrote:
> Dawit wrote:
> > On Saturday 30 March 2002 10:11, Matthias Welwarsky wrote:

[snipped]

> > That is not the issue. Look at the Content-Length and you can clearly
> > see the problem is that the POST data is not cached and hence never sent
> > on the re-POST attempt when the connection is abruptly closed by the
> > remote machine. Anyways, this has already been fixed and I will
> > backport it to the 3.0 tree.
>
> Well, no. It should actually never get to re-POST the request, because
> httpOpen() should handle this case by just reopening the connection and
> re-sending only the header. The code is there:

[snipped code]

> The problem actually is that sendOK is "true" even if the connection is
> dead already. This seems to happen only with SSL, where write() is
> actually SSL_write(). The normal write(2) will return -1 in that case.
>
> I can even prove my theory: look at the output of netstat -t. You will
> see an ESTABLISHED https connection to the server for a while, and then
> see it change to CLOSE_WAIT or FIN_WAIT2. If you submit the form while
> the connection is ESTABLISHED, all is OK; if you wait until it's in
> CLOSE_WAIT or FIN_WAIT2, it will fail.

Okay, after further testing you are indeed right. It seems that SSL_write
does not do the right thing if the remote machine has closed the connection
and the connection state has gone to CLOSE_WAIT. I tested this with both a
GET and a POST, so there is indeed a bug somewhere in SSL_write, and we
need to go back to the way things were with respect to persistent
connections over SSL, i.e. disable them. I guess that is why we had it like
this to begin with. We really need to report this issue to the OpenSSL
people.

> So, what happens is that we _think_ we sent the request and actually run
> into a timeout while waiting for the response and _then_ repost the
> request, and I'm really not sure we should do that at all!

No, we do not run into a timeout. Otherwise we would get a timed-out error.
What actually happens is that the gets() call fails since SSL_write returns
an empty buffer, and m_bEOF gets set, which in turn causes ::readHeader to
correctly think that the connection has been prematurely closed and needs
to be re-established. The bug in there was what I fixed: on a re-connect,
the data was not being POSTed at all, due to unnecessary checks I put there
myself. If that bug had not been there, you would never have found the
problem with SSL_write, since everything would seem to be working
correctly :)

> if the server in fact got the request and is just slow on responding, we
> might end up sending the request twice. Waldo pointed this out in his
> mail, I think.

No, we do not. Look carefully at the code again :) You are incorrectly
assuming that we are timing out when we are not. Instead we get nothing
back from SSL_write, which is what causes this condition.

> So please don't fix it by caching the request and reposting the data. I'm
> rather sure it's a problem in openssl and the only way to work around
> this and not risk side effects is to disable keepalives when working over
> SSL.

See above.
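To spell the fix out: the only thing that changed is that the POST data is
now kept around for the lifetime of the request, so that when ::readHeader
detects the premature close and we re-connect, the body can be sent again
along with the header. Very roughly, and with invented names rather than
the actual kio_http code, the idea is:

    #include <string>
    #include <unistd.h>

    struct Request {
        std::string header;
        std::string postData;   // cached for the whole life of the request
    };

    // Write the whole buffer; false on any error (dead connection etc.).
    static bool sendAll(int fd, const std::string &buf)
    {
        size_t off = 0;
        while (off < buf.size()) {
            ssize_t n = write(fd, buf.data() + off, buf.size() - off);
            if (n <= 0)
                return false;
            off += static_cast<size_t>(n);
        }
        return true;
    }

    // Placeholders standing in for the real connection handling.
    bool openConnection(int *fd);
    void closeConnection(int fd);

    bool submit(const Request &req, int fd)
    {
        if (!sendAll(fd, req.header))
            return false;
        // This is the part that was skipped before the fix: on a re-POST
        // the cached body has to go out again, otherwise the server sits
        // waiting for Content-Length bytes that never arrive.
        return sendAll(fd, req.postData);
    }

    bool retryAfterPrematureClose(const Request &req, int *fd)
    {
        closeConnection(*fd);
        if (!openConnection(fd))
            return false;
        return submit(req, *fd);    // header *and* cached body again
    }

The re-POST only ever happens after the premature close has been detected,
never on a timeout, which is the point above.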
It is a bug in kio_http that led you to the bug in SSL_write, whether you
knew it or not :) If sendBody worked correctly, you would never have seen
this at all. This is why we probably did not allow persistent connections
over SSL to begin with. Anyways, it is documented now, so that in the
future we can tell why this was needed in the first place.

Regards,
Dawit A.
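P.S. For the record, the "go back to the way things were" part is nothing
more than refusing to re-use https connections until the SSL_write
behaviour is sorted out. Again just a sketch, with invented names rather
than the actual code:

    // Sketch only, not the real kio_http logic: since SSL_write cannot
    // tell us that the server has already closed the connection
    // (CLOSE_WAIT), the workaround is to never treat an https connection
    // as reusable.
    bool mayKeepAlive(bool usingSSL, bool serverAllowsKeepAlive)
    {
        if (usingSSL)
            return false;              // work around the SSL_write problem
        return serverAllowsKeepAlive;  // plain http may keep the socket open
    }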