04:45:53 <stepkut> what are some non-contrived examples of when an HTTP Response body would need to stream values from the HTTP Request?
04:46:16 <stepkut> I guess.. basic form processing would count
04:46:41 <stepkut> well.. maybe not
04:47:24 <stepkut> that has values from the Request in the Response... but generally the Request is entirely read before the Response is sent in that case
18:43:40 <levi> Is it allowed to start sending a response before the request is complete?
18:45:18 <stepkut> i wonder
18:45:49 <stepkut> this is something I am not entirely clear on
18:46:04 <stepkut> 'tis important to figure out now though ;)
18:46:48 <stepkut> http://stackoverflow.com/questions/14250991/is-it-acceptable-for-a-server-to-send-a-http-response-before-the-entire-request
18:48:02 <stepkut> some people say it is only valid to send a response before the request has been entirely received if the response is an error condition
18:49:38 <stepkut> that would make sense.. once you send a response, you can't change your mind and send a different one
18:50:01 <stepkut> so, you couldn't start sending a successful response, and then change your mind if an error pops up later and send an error response instead
18:50:29 <levi> Yeah.
18:50:48 <stepkut> however, once you have encountered an error.. you aren't going to do anything more
18:51:00 <levi> You could possibly start *building* a response in anticipation of sending it, but you might have to throw it away.
18:51:16 <stepkut> and that allows you to handle things like exceeding upload quotas in a sensible manner
18:57:46 <levi> Sending anything before you've got as much of the request as you're going to get seems to screw with the typical semantics of 'request/response' protocols.
19:02:17 <donri> i'm not sure the Request -> Response ideal really works out anyways, especially these days given websockets, spdy etc. even basic streaming gets weird with that.
19:07:34 <donri> hm, purity might make it work though, but neither wai nor current happstack is pure in the relevant places
19:15:12 <levi> Well, spdy is morphing into http2, and websockets upgrade to a completely different protocol.
19:18:10 <stepkut> yeah
19:18:22 <stepkut> still we need to study both of those too and see how they interface with hyperdrive
19:34:10 <levi> Actually, http1.1 seems pretty unambiguous about not sending responses before messages are complete.
19:34:17 <levi> "After receiving and interpreting a request message, a server responds with an HTTP response message."
19:34:50 <levi> And the definition of the request message includes the entire message.
19:36:08 <stepkut> hmmm
19:37:39 <levi> If you haven't got the entire message yet, you have not received a message.
19:39:39 <stepkut> I don't disagree with that interpretation -- however, that doesn't mean I am convinced the author knew that is what they were implying
19:41:09 <stepkut> I am also curious about how pipelining plays into this
19:41:47 <stepkut> with pipelined requests, the client can, I believe, send all the requests before attempting to read any responses
19:42:11 <levi> Yes, the client can do that. But the server must respond to them in the order they were received.
19:42:22 <stepkut> right
19:42:36 <levi> So the server reads one message, sends one response, etc.
19:42:55 <stepkut> but that also means you don't want the situation where the client is trying to send requests while the server blocks, waiting for the client to read a response
19:43:40 <stepkut> not sure how to avoid deadlocks there. but if the server could stream the message body from the request into the response, it seems like deadlocking would be even worse
19:43:55 <levi> I think that is generally why no one implements http pipelining.
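As a toy model of that ordering constraint (all names here are invented for illustration, not hyperdrive's API): with a lazy list of requests, serving pipelined requests in order already lets the first response go out before later requests have even arrived.

```haskell
-- Toy model of HTTP/1.1 pipelining (invented names, not hyperdrive's
-- API): the client may queue many requests, and the server must answer
-- them strictly in order.  With a lazy list of requests, `map handler`
-- has the right shape: the first response can be produced and sent
-- before later requests have been received.
newtype Request  = Request  String deriving (Eq, Show)
newtype Response = Response String deriving (Eq, Show)

servePipelined :: (Request -> Response) -> [Request] -> [Response]
servePipelined handler = map handler  -- in-order, one response per request

echoHandler :: Request -> Response
echoHandler (Request p) = Response ("served " ++ p)
```

Because the output list is lazy, forcing only the first response never touches the tail of the request list, which is the property that keeps the server from blocking on requests the client has not sent yet.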
19:44:24 <stepkut> why no clients do?
19:47:03 <levi> They don't do it out of concern over servers/proxies that do it wrong, as far as I understand it.
19:49:23 <stepkut> right... I believe servers are required to 'support' http pipelining -- though that generally doesn't involve doing anything special
19:53:19 <levi> Clients aren't supposed to attempt to pipeline any non-idempotent requests either, so they're probably not going to have large message bodies to stream from anyway.
19:53:22 <stepkut> let's say that the server is supposed to read the entire request before sending a response... that means that if the user-supplied handler fails to consume the request body, hyperdrive should read the unconsumed data *before* sending the response, not after
19:53:31 <stepkut> right
19:54:12 <stepkut> though perhaps PUT?
19:55:40 <levi> Well, the reasoning is that you don't know whether the pipelined requests are going to go through or get cut off arbitrarily. I guess in the case of PUT you ought to be able to try it again without fear of changing things even if it did actually go through.
19:56:55 <levi> I don't think hyperdrive needs to actually look at the entire request, but should discard any unused bytes and ensure all expected bytes have arrived before responding.
19:59:45 <stepkut> right
20:00:03 <stepkut> we have to read all the bytes that are sent no matter what, even if we just ignore them
20:00:24 <stepkut> ... so we can get to the next request
20:00:34 <stepkut> unless it is not an http persistent connection
20:02:11 <stepkut> so in theory we could skip reading and discarding data for a non-persistent connection, but not if the spec requires that we read the entire request before sending a successful response
20:02:34 <stepkut> though, it doesn't sound like a very useful optimization in practice
20:02:44 <stepkut> clients are not usually uploading tons of data that the server is ignoring
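The drain-before-respond behavior being discussed could look something like this in IO (a minimal sketch; `drainBody`, the chunk size, and the Content-Length bookkeeping are assumptions, not hyperdrive's actual API):

```haskell
import qualified Data.ByteString as BS
import System.IO

-- Sketch (assumed names, not hyperdrive's API): before responding on a
-- persistent connection, read and discard whatever request-body bytes
-- the handler left unconsumed, so the parser is positioned at the start
-- of the next pipelined request.  'remaining' is the count of body
-- bytes still owed per the request's Content-Length.
drainBody :: Handle -> Int -> IO ()
drainBody h remaining
  | remaining <= 0 = return ()
  | otherwise = do
      -- read up to 32 KiB at a time; hGetSome blocks until at least
      -- one byte is available or EOF is reached
      chunk <- BS.hGetSome h (min 32768 remaining)
      if BS.null chunk
        then return ()  -- connection closed before the body completed
        else drainBody h (remaining - BS.length chunk)
```

A chunked body would need a different loop (reading until the zero-length chunk), but the shape is the same: discard until the message boundary, then respond.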
20:03:03 <levi> Huh, you're only allowed one automatic retry if your pipelined requests fail.
20:03:15 <levi> Fail due to dropped connection, I mean.
20:03:18 <stepkut> yeah
20:07:35 <levi> The 100 Continue status is interesting.
20:10:13 <levi> I would consider its existence to be evidence that servers are intended to read the entire request before sending a response.
20:13:06 <levi> Otherwise, a client could just pause a moment after indicating its intent to send an expensive request via its headers, to see if the server immediately responds with a negative response.
20:15:19 <stepkut> how does that mesh with this,
20:15:22 <stepkut> "An HTTP/1.1 (or later) client sending a message-body SHOULD monitor the network connection for an error status while it is transmitting the request. If the client sees an error status, it SHOULD immediately cease transmitting the body. If the body is being sent using a "chunked" encoding (section 3.6), a zero length chunk and empty trailer MAY be used to prematurely mark the end of the message. If the body was preceded by a
20:15:22 <stepkut> Content-Length header, the client MUST close the connection."
20:15:54 <stepkut> not clear what 'network connection' and 'error status' mean
20:17:16 <stepkut> and also,"If a client will wait for a 100 (Continue) response before
20:17:17 <stepkut>         sending the request body, it MUST send an Expect request-header
20:17:17 <stepkut>         field (section 14.20) with the "100-continue" expectation."
20:18:42 <stepkut> http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html, 8.2.3
20:19:04 <stepkut> seems to indicate that clients definitely can send the request headers, and wait to see if the server responds before sending the message body
20:19:23 <levi> Right, but it explicitly breaks it into two separate request/response pairs.
20:19:38 <stepkut> ah
20:20:02 <levi> That's what I was referring to earlier.
20:20:44 <levi> The network error status thing is interesting. It doesn't mention the server at all, so it could be some sort of transient network issue, but it's not really clear.
20:23:09 <stepkut> handling this 100-continue thing is an interesting problem in itself
20:24:23 <stepkut> sounds like the server only receives a single Request, but it can issue multiple Responses
20:25:11 <stepkut> specifically 1 or more 1xx Responses before a final Response
20:26:36 <stepkut> in terms of parsing, that wouldn't affect much
20:27:20 <stepkut> we currently (in theory) parse just the headers, and then leave a producer in the Request body that allows the user to get the rest of the message body
20:27:36 <stepkut> but, we would need to add the ability to send 1xx responses not just a single response
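Concretely, that change might amount to something like this (a sketch with invented names, not hyperdrive's real types): a handler's result carries any number of interim 1xx responses plus exactly one final response.

```haskell
-- Sketch (invented names, not hyperdrive's types): a single Request can
-- give rise to zero or more interim 1xx responses (100 Continue,
-- 101 Switching Protocols, ...) followed by exactly one final response.
data Interim = Continue100 | SwitchingProtocols101
  deriving (Eq, Show)

newtype FinalStatus = FinalStatus Int
  deriving (Eq, Show)

data ResponseSeq = ResponseSeq
  { interims :: [Interim]     -- each sent as soon as it is produced
  , final    :: FinalStatus   -- terminates the exchange for this request
  } deriving (Eq, Show)

-- e.g. accepting an upload announced with "Expect: 100-continue"
acceptUpload :: ResponseSeq
acceptUpload = ResponseSeq [Continue100] (FinalStatus 200)
```

The parsing side is unchanged, as noted above; only the sending side grows the ability to emit the interim responses before the final one.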
20:29:45 <levi> Hmm, yes, there's a section about the 100-continue stuff that explicitly mentions a server sending a response before the request body is complete as well.
20:30:17 <levi> But it's supposed to wait to close the connection until the client has finished sending.
20:30:42 <stepkut> for web-sockets we need to support 101
20:31:14 <stepkut> is that for sending an error response only? or any response?
20:32:31 <levi> It just says 'a final status code'
20:33:10 <levi> Presumably it could indicate success or failure, then.
20:34:48 <levi> IETF standards should come with PICS like IEEE ones do. :P
20:36:50 <stepkut> ;)
21:00:08 <donri> stepkut: i know i know python is uninteresting but still i think there are insights in this post relevant to hyperdrive design http://lucumr.pocoo.org/2011/7/27/the-pluggable-pipedream/
21:00:56 <donri> wsgi is after all the original thing that eventually inspired things like wai
21:01:35 <donri> i think we can do much better with purity, pipes and cheap threads etc, but there are traps to look out for :)
21:02:55 <stepkut> noted.
21:04:48 <donri> for example one problem with middlewares in wsgi is that the request body is basically a Handle, so it doesn't "compose" for the same reasons io-streams doesn't compose
21:05:06 <donri> and i think both wai and current happstack have that same problem because we use MVars and IO for these things
21:05:52 <stepkut> yeah
21:10:33 <donri> but if say, the request body is a pure pipes Producer, we can have composable "middlewares" easily with pipes idioms
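To make that concrete without pulling in the pipes package, here is a toy pure stream (a stand-in for pipes' Producer; real code would use pipes itself): middlewares on the body become plain functions and compose with ordinary (.).

```haskell
import Data.Char (toUpper)

-- Toy stand-in for a pipes-style Producer (real code would use the
-- pipes package).  The point is only that a *pure* body stream
-- composes: a body middleware is just a function, unlike reads from a
-- shared Handle/MVar, which are stateful and order-dependent.
data Producer a = Yield a (Producer a) | Done

fromList :: [a] -> Producer a
fromList = foldr Yield Done

toList :: Producer a -> [a]
toList (Yield x p) = x : toList p
toList Done        = []

-- A "middleware" on the body is a pure stream transformer.
type BodyMiddleware = Producer Char -> Producer Char

upcase :: BodyMiddleware
upcase (Yield c p) = Yield (toUpper c) (upcase p)
upcase Done        = Done

takeBody :: Int -> BodyMiddleware
takeBody n (Yield c p) | n > 0 = Yield c (takeBody (n - 1) p)
takeBody _ _                   = Done
```

Two such middlewares compose as `takeBody 4 . upcase`, with no hidden mutable state to coordinate — which is exactly what a Handle-based body cannot offer.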