--- Log opened Sat Feb 28 00:00:27 2009
02:58 < mae> Saizan: yay for haddocks
12:47 < mae> hallo
13:06 < mae> stepcut: re urlt, looks neat! yeah being able to have an url with multiple request methods which do different things is very important IMO.
13:08 < mae> stepcut: <shameless marketing plug>You should really put that up on patch-tag :) We are adding very nice browsing features just around the corner (currently in testing)</shameless marketing plug>
13:17 < mae> hmm, do all the examples build?
13:17 < mae> http://code.google.com/p/happstack/issues/detail?id=7&q=0.2
13:45 < stepcut> mae: well, you want to be able to distinguish between and handle POST (Image 1) and GET (Image 1), but you want to avoid allowing the developers to construct nonsense like a request that is both a POST and GET at the same time, e.g., POST (GET Image 1), because a request can only be one or the other
13:47 < stepcut> mae:  allowing a request to match one of several methods would be ok, e.g., Methods [POST, GET] (Image 1)
14:32 < mae> stepcut: why can't we handle this with pattern matching?
14:32 < mae> ie
14:32 < mae> a tuple
14:32 < mae> (Method, Site)
14:33 < mae> if you want to match any method you can use (_, Site)
14:33 < mae> if they want to match POST and GET, well then they have to define two patterns
14:34 < mae> dispatch (POST, Site) = myPost
14:34 < stepcut> mae: let's say you make a component, like an image gallery. That component needs to specify whether things are going to be POST, GET etc.
14:34 < mae> dispatch (GET, Site) = myGet
14:34 < mae> I want to force people to differentiate
14:34 < mae> most of the time things should only have one method really.
14:35 < stepcut> but, when you use that component in a larger site, then what happens to the method part of the tuple?
14:35 < stepcut> so, let's say you have:
14:35 < mae> ? what do you mean
14:35 < stepcut> data Gallery = ShowImage Int | UploadImage
14:35 < mae> method should probably be checked only at the topmost uri component
14:36 < stepcut> and you want, (GET, ShowImage n) and (POST, UploadImage)
14:36 < stepcut> that is all stuff that is defined in your Gallery library
14:36 < stepcut> now, you create a site
14:37 < stepcut> data MySite = MyGallery Gallery | MyBlog | Homepage
14:37 < stepcut> and you want, (GET, Homepage)
14:37 < stepcut> but, not (GET, MyGallery (POST, UploadImage))
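A minimal sketch of the problem stepcut is describing (type names follow the conversation; the `Method` wrapper itself is a hypothetical illustration). The naive wrapper accepts a method annotation at every level, so the nonsense value typechecks:

```haskell
-- Hypothetical sketch: a naive method wrapper can be nested at every
-- level, so nothing stops a request from carrying two conflicting
-- methods.
data Method a = GET a | POST a deriving Show

data Gallery = ShowImage Int | UploadImage deriving Show
data MySite  = MyGallery (Method Gallery) | MyBlog | Homepage deriving Show

-- Fine: a GET for an image page.
ok :: Method MySite
ok = GET (MyGallery (GET (ShowImage 1)))

-- Also typechecks, but is nonsense: a GET wrapping a POST.
nonsense :: Method MySite
nonsense = GET (MyGallery (POST UploadImage))
```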
14:38 < mae> I would venture to say that we should not allow GET / POST to be defined anywhere but at the topmost uri component
14:38 < mae> i.e. defined inside mygallery, but not outside of it
14:38 < stepcut> but, the topmost uri does not know what the right request methods are, it is just using a library that someone else provides
14:39 < mae> Well, by topmost I don't mean /
14:39 < stepcut> that is the whole point of the system in the first place
14:39 < mae> i mean the most specific
14:39 < mae> as in
14:39 < mae> --> /very/very/specific/last/index
14:39 < mae> where index is the only place we allow them to define Method
14:40 < mae> so if the uris for gallery ended up being
14:40 < mae> --> /mygallery/image001
14:40 < mae> for instance
14:40 < mae> assuming ShowImage uses this url format (image001)
14:41 < mae> ShowImage is where GET should be defined (talking at a high level right now, not specific to your code)
14:41 < stepcut> the default would be, /MyGallery/ShowImage/1 right now, btw.
14:41 < mae> ok.
14:41 < stepcut> right. ShowImage is where GET has to be defined. I just haven't figured out how to actually do that.
14:41 < mae> ok well
14:42 < mae> since web parts (not talking in happs, talking in generalities) are defined in types
14:42 < mae> then obviously the types themselves would need to define a method
14:42 < stepcut> one obvious approach is, data Method a = POST a | GET a, site = MyGallery (GET (ShowImage 1)), but that does not enforce that the method appears exactly once and in the right position
14:42 < mae> perhaps this is a good place for a typeclass
14:44 < mae> or not.
14:44 < stepcut> maybe phantom types
14:44 < mae> um
14:45 < mae> i see your issue
14:46 < mae> i was thinking
14:46 < mae> class HasMethod t where
14:46 < mae>   method :: Method
14:46 < mae> instance MyGallery ShowImage where
14:47 < mae> err
14:47 < mae> sorry
14:47 < mae> instance HasMethod Gallery where
14:48 < mae>   method (ShowImage _) = GET
14:48 < mae>   method UploadImage = POST
14:48 < mae> this ensures that method is defined for all your types
14:49 < mae> (compiler can check for pattern completeness)
14:49 < mae> err all instances rather (not types)
14:49 < mae> what do you think?
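Cleaned up, the sketch above might read as follows (names from the conversation, not real happstack API; note that `method` needs the value as an argument, since a bare `method :: Method` class member would be ambiguous):

```haskell
-- Hypothetical sketch of the HasMethod idea: every routable type
-- must say which method each constructor answers to, and
-- -Wincomplete-patterns checks that no constructor is forgotten.
data Method = GET | POST deriving (Eq, Show)

data Gallery = ShowImage Int | UploadImage

class HasMethod t where
  method :: t -> Method

instance HasMethod Gallery where
  method (ShowImage _) = GET
  method UploadImage   = POST
```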
14:50 < stepcut> yeah, but it does not stop you from doing, instance HasMethod MySite as well, resulting in the methods being defined twice
14:51 < mae> can't we use something like SYB to only call method on the leafs of your site tree?
14:51 < mae> i mean they can define it until they are blue in the face
14:51 < mae> but we only pay attention to the leafs
14:52 < stepcut> I'm not sure if the leaf is the right place either. for, data Gallery = ShowImage Int, the leaf is actually the Int part.
14:53 < mae> we could define an arbitrary type which your leafs have to use
14:53 < mae> like
14:53 < mae> you suggested
14:53 < mae> Method = GET | POST
14:53 < mae> this way we can identify them
14:54 < mae> hmm
14:54 < mae> see herein lies the problem, we won't be able to check everything statically
14:54 < mae> because we are wanting embeddable sites
14:54 < mae> so you gotta pick your battles
14:54 < mae> you want encapsulation? well you can't force them to think of everything in terms of absolute uris
14:55 < mae> hmm
14:56 < mae> on the other hand, maybe it isn't a real big deal if they define it at multiple levels
14:57 < mae> we can use the basic mutable concept of "last update wins"
14:57 < mae> in this case, specificity wins over generality
14:58 < mae> so even if we have (GET, MyGallery (POST, UploadImage)), POST wins
14:58 < mae> thoughts?
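mae's "specificity wins" rule can be sketched as a walk over nested annotations where the innermost one overrides the outer ones (all names hypothetical):

```haskell
-- Hypothetical sketch of "specificity wins": descend through nested
-- method annotations and let the innermost annotation override.
data Method = GET | POST deriving (Eq, Show)

data Annotated a
  = With Method (Annotated a)  -- a method annotation at this level
  | Leaf a                     -- the actual resource

resolve :: Method -> Annotated a -> (Method, a)
resolve _ (With m rest) = resolve m rest  -- inner annotation wins
resolve m (Leaf x)      = (m, x)
```

So `(GET, MyGallery (POST, UploadImage))` resolves to POST, as mae suggests.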
14:59 < stepcut> one moment, eating eggs
14:59 < mae> k
14:59 < stepcut> anyway, I have some partially formed ideas I am going to think about later
14:59 < stepcut> is there anything else I need to do for 0.2?
14:59 < stepcut> I closed all the open bugs
14:59 < stepcut> in my name
15:00 < stepcut> for 0.2
15:04 < mae> yeah
15:04 < dino-> Hey guys, I'm having the Could not find module `Data.Generics' thing with happs-tutorial 0.7.1 and ghc 6.10.1
15:04 < mae> mmm
15:04 < dino-> I went over the .cabal file, it's got base>=3.0.3.0
15:04 < dino-> And ghc has 3.0.3.0 and 4.0.0.0 here
15:04 < dino-> IDGI
15:04 < mae> stepcut: do you not like the idea of simply having specificity override generality? or are you really set on having the compiler warn about this as well :)
15:05 < dino-> The whole shooting match was installed with cabal install happs-tutorial
15:05 < dino-> So it chased down everything successfully (even Crypto!)
15:05 < mae> ahh
15:05 < dino-> Any ideas?
15:05 < mae> you need syb
15:05 < dcoutts> Data.Generics is in base 3 or syb
15:05 < mae> do you have syb?
15:06 < dcoutts> it comes with ghc
15:06 < dino-> $ ghc-pkg list | grep syb rts-1.0, safe-0.2, stm-2.1.1.2, strict-0.3.2, syb-0.1.0.0, syb-with-class-0.5.1, template-haskell-2.3.0.0, terminfo-0.2.2.1,
15:06 < mae> can you paste me the full error output
15:06 < mae> in a pastebin
15:06 < dcoutts> (it got split out from base 3, so it's a separate package now with base 4)
15:07 < dino-> mae: Sure can, just a sec
15:07 < mae> dino-: what version of happs did that pull in?
15:07 < mae> (or happstack_)
15:07 < dino-> dcoutts: Hm, maybe base>=4.0.0.0? I can try that quick
15:07 < dino-> $ ghc-pkg list | grep happs happstack-data-0.1, happstack-helpers-0.11, happstack-ixset-0.1, happstack-server-0.1, happstack-state-0.1, happstack-util-0.1,
15:08 < dino-> (this thing is not showing my line breaks in those greps)
15:08 < stepcut> mae: the whole point of the URLT stuff is to have the compiler find mistakes for you, so, the more errors it can catch, the better ;)
15:09 < dino-> Ah, when I changed it to base==3.0.3.0 it works
15:09 < dino-> I see what dcoutts was saying now
15:09 < dino-> I used to be on 3.0.3.0 (well, still am)
15:09 < dino-> But if the system is defaulting to shiny new 4.0.0.0, BZZT
15:10 < dcoutts> so if you use base4 then you must use syb
15:10 < dino-> ok, thank you
15:11 < dcoutts> use the same solution as for the packages that got split out from base3
15:11 < dino-> mae: Do you still want to see failure output?
15:11 < mae> dino-: nah, is this a problem with using the cabal file to compile tutorial?
15:11 < dino-> mae: yes
15:12 < dino-> So, this is probably borking new people trying to learn it.
15:12 < mae> dino-: ah ok, so it probably just doesn't have syb in the list for >= base 4
15:15 < dino-> I'm shaky on those expressions, it could have something like ' base >= 4.0.0.0 && syb ', you mean?
15:16 < mae> dino-: nah, i think its a bug in happs tutorial
15:16 < mae> not your fault
15:16 < dino-> mae, dcoutts: Thank you very much for helping. Much appreciated.
15:16 < mae> they need to define a build-depend if base 4 is used
15:16 < mae> i have already submitted a bug
15:17 < dino-> terrific, thanks again
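The usual fix for the base-3/base-4 split is a conditional build-depends in the .cabal file; a sketch of the standard pattern (not the actual happs-tutorial file) might look like:

```cabal
-- sketch of the usual base-3/base-4 split handling
flag base4
  description: Build against base-4 plus the split-out syb package

library
  if flag(base4)
    build-depends: base >= 4, syb
  else
    build-depends: base >= 3 && < 4
```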
15:22 < mae> stepcut: ok, well talk to you later, keep me updated!
15:24  * oshyshko is working on http://sites.google.com/site/oshyshko/happs-json-flex (early DRAFT)
15:28 < oshyshko> I still have several TODOs in the list, like exposing functions that take no arguments or exception/error handling, so your suggestions and comments are welcome
17:07 < mae> oshyshko: neat.
17:12 < mae> oshyshko: can't wait to see more ;)
17:35 < oshyshko> I need to grasp MACID basics asap. Where should I start?
17:36 < oshyshko> Just to have enough knowledge to write simple list/save/delete address book.
20:25 < stepcut> mae: actually, I am not sure that, data Method a = GET a | POST a really helps anything. The point of urlt is to be able to ensure at compile time that every link which can be generated has a corresponding handler. So, we really want to be able to ensure that if a Request requires POST, then anything generating a link to it uses POST not GET. So, something more like, data POST a = POST a, data GET a = GET a, would be needed to encode that
20:25 < stepcut> into the type system
20:37 < mae> stepcut: so this would then require links to also be defined in terms of post / get
20:38 < stepcut> mae: they already are...
20:38 < mae> stepcut: right
20:39 < stepcut> the POST/GET stuff would not appear in the URL itself though
20:39 < mae> yep i get it
20:39 < mae> so then, this would allow you to create a type sig
20:40 < mae> linkTo :: POST a -> String
20:40 < mae> for example
20:40 < mae> err
20:40 < mae> linkTo :: GET a -> String
20:40 < mae> which is a complete match
20:40 < stepcut> something like that.. not really sure yet.
20:40 < mae> what if you need to define a method which can work with get or post types
20:40 < mae> i.e.
20:41 < mae> submitTo :: POST a -> String
20:41 < mae> well, a normal html form can support post or get
20:41 < mae> so then you lose flexibility for this method
20:41 < mae> whereas if it were the previous definition
20:41 < mae> submitTo :: Method a -> String
20:41 < mae> and then the impl:
20:41 < stepcut> mae: submitTo :: (Method m) => m a -> String, ?
20:42 < mae> submitTo POST thing = ...
20:42 < mae> submitTo GET thing = ...
20:42 < mae> but then i guess this also kind of sucks
20:42 < mae> because the compiler says incomplete pattern match for linkTo
20:42 < mae> since it is only GET
20:43 < stepcut> mae: yeah, I have not really thought too much about the implementation, I am still working on the requirements. Though, even then, I have only thought about it for a few minutes since we last spoke.
20:43 < mae> stepcut: so a typeclass for Method? yeah I think i like that
20:43 < mae> because then you can extend it in the future
20:43 < mae> for nonstandard methods
20:43 < rovar> to allow for PUT, DELETE in the future
20:43 < stepcut> mae:  the type, GET a -> String, does not make a whole lot of sense anyway, since the GET/POST part gets lost?
20:43 < mae> class Method m where
20:44 < mae>   methToString :: String
20:44 < mae> instance Method GET where
20:44 < mae>   methToString = "GET"
20:44 < rovar> or you could generalize that with TH
20:45 < mae> stepcut: its a simplistic example, I am just using the example where you generate an anchor with linkTo or a submit button with submitTo
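Put together, the per-verb wrappers plus a Method class being discussed might be sketched like this (hypothetical names; note `methToString` takes the value as an argument, since a bare `String` class member would be ambiguous):

```haskell
-- Hypothetical sketch: one wrapper type per verb, a class over the
-- wrappers, so links can demand GET while forms accept either verb.
newtype GET a  = GET a
newtype POST a = POST a

class Method m where
  methToString :: m a -> String

instance Method GET  where methToString _ = "GET"
instance Method POST where methToString _ = "POST"

data Gallery = ShowImage Int | UploadImage

-- Plain links only make sense for GET routes...
linkTo :: GET Gallery -> String
linkTo (GET (ShowImage n)) = "/mygallery/image" ++ show n
linkTo (GET UploadImage)   = "/mygallery/upload"

-- ...while a form can target either verb.
formMethod :: Method m => m Gallery -> String
formMethod = methToString
```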
20:46 < rovar> mae, is that any advantage over, type Method = String;  ?
20:46 < rovar> or is that Method going to be used to define controller functions as well?
20:47 < mae> stepcut: I think I prefer the simplicity of data Method a = GET a | POST a
20:47 < rovar> not that particular instance, but the type Method
20:47 < mae> but it sucks because if you have incomplete patterns
20:47 < mae> that will get dealt with at runtime
20:48 < stepcut> mae: yeah
20:48 < stepcut> runtime sucks
20:48 < stepcut> :)
20:48 < mae> the typeclass is sounding more and more like the right way to go
20:48  * stepcut works on happstack-agda
20:48 < mae> agda?
20:49 < stepcut> http://www.cs.chalmers.se/~ulfn/darcs/AFP08/LectureNotes/AgdaIntro.pdf
20:49 < stepcut> dependently typed language
20:51 < mae> stepcut: ok so we create a typeclass
20:51 < mae> um
20:51 < mae> but if we are going that far
20:51  * stepcut is not yet sold on the type class
20:52 < stepcut> anyway, if we do create a typeclass, then what?
20:52 < mae> why not cut out the middle man, and simply have the url types themselves become instances of HasMethod
20:52 < mae> I know its not the prettiest solution, but only so much can be inferred statically :)
20:53 < stepcut> they are instances of AsURL...
20:53 < mae> ok
20:53 < mae> so can we extend AsUrl
20:53 < stepcut> I am not clear that the Method type class really solves anything yet
20:53 < mae> acceptsMethod?
20:53 < mae> returns a string of Method(s)
20:53 < mae> [PUT, GET] etc
20:53 < stepcut> I only slept 4-5 hours last night, so I can't really think about this until tomorrow ;)
20:53 < mae> ok.
20:54 < mae> 10-4, I myself also have plans today
20:54 < stepcut> :)
21:02 < jsn> what about GADTs?
21:03 < jsn> you don't need it to be open to new members
21:24 < stepcut> jsn: GADTs are open to new members?
21:25 < jsn> stepcut: no
21:25 < stepcut> oh, I see
21:25 < stepcut> I misread that -- need sleep
21:25 < jsn> stepcut: GADTs give you a kind of sub-typing that would probably fit here
21:25 < Saizan_> you made that sound like an exclusive club
21:26 < jsn> 'twas not my intention
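jsn's GADT suggestion might look like this (a hypothetical sketch, needing GHC's GADTs and EmptyDataDecls extensions): each route constructor fixes its verb in the type index, so a route carries exactly one method by construction.

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

data GET   -- phantom index types: one per verb
data POST

data Gallery m where
  ShowImage   :: Int -> Gallery GET
  UploadImage :: Gallery POST

-- Only GET routes can become links; passing UploadImage here is a
-- compile-time type error, and the match below is still complete,
-- because UploadImage :: Gallery POST is excluded by the type.
linkTo :: Gallery GET -> String
linkTo (ShowImage n) = "/mygallery/image" ++ show n
```

This avoids the runtime incomplete-pattern problem mae raised earlier: the compiler rules out the POST constructors instead of failing at runtime.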
21:28 < stepcut> jsn: also, I only skimmed the conversation you and mae had, but I thought I would point out that happstack multimaster currently does not really ship the bytes that are going to be stored around to all the instances, depending on your point of view...
21:28 < stepcut> jsn: are you aware of that ?
21:28 < jsn> stepcut: i am not
21:28 < jsn> stepcut: please explain
21:28 < jsn> stepcut: how is it multi-master if it does not ship the bytes?
21:29 < stepcut> jsn: in happstack, the state is updated by emitting an update event. An update event is an algebraic data type like, data IncHitType = IncHitType
21:30 < jsn> stepcut: ah, so you ship the update to all the instances
21:30 < stepcut> there is a function associated with each algebraic type, so, for example, there is an incHitType function
21:30 < jsn> right.
21:30 < stepcut> which can do all sorts of complicated stuff -- the key is that the update functions are always pure
21:31 < jsn> they are pure?
21:31 < rovar> are there docs on this?
21:31 < rovar> or samples?
21:31 < stepcut> so, we just ship the event, and each server replays the event locally. Obviously, the efficiency of that versus shipping the bytes depends on the specific application.
21:31 < Saizan_> rovar: this is what mkMethods does
21:32 < Saizan_> rovar: it creates the ADTs from the functions
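The replay model stepcut describes can be sketched independently of happstack (hypothetical types; happstack generates the event ADTs with mkMethods rather than by hand):

```haskell
-- Hypothetical sketch of the replay model: update events are a plain
-- ADT, the apply function is pure, and every node folds the same
-- event log in the same order to reach the same state.
data AppState = AppState { hitCount :: Int } deriving (Eq, Show)

data Event = IncHitCount | SetHitCount Int deriving Show

apply :: Event -> AppState -> AppState
apply IncHitCount     s = s { hitCount = hitCount s + 1 }
apply (SetHitCount n) s = s { hitCount = n }

-- Replaying the journal is just a left fold over the events.
replay :: AppState -> [Event] -> AppState
replay = foldl (flip apply)
```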
21:32 < stepcut> jsn: it may be the case that for each method, we need a way to indicate whether it is better to send the event or send the results of the event...
21:32 < rovar> so sharding then would involve intelligently filtering these events?
21:33 < rovar> (oversimply)
21:34 < stepcut> rovar: yes. The simplest form of sharding is to split things at the component level. A more complex form of sharding would allow you to split a single component across multiple servers.
21:34 < jsn> stepcut: there is no actual relationship to shipping the bytes in what you are talking about
21:34 < jsn> stepcut: i mean, whether you ship bytes that are to be interpreted as data or as a diff/event
21:34 < jsn> stepcut: you still have to ship bytes
21:35 < stepcut> jsn: right, that is why I said it depends on your point of view.
21:35 < jsn> stepcut: the system i described to mae actually is all about managing events and their relationships to maintain consistency
21:35 < rovar> so if a user uploads a large document, those bytes are still distributed, however, they affect the state of each mirror in a deterministic fashion
21:36 < jsn> the system you describe sounds like:
21:36 < stepcut> jsn: right, i think it still applies -- I just wondered if you knew how the current multimaster worked in terms of what was shipped, because I saw you mentioning diffs I think
21:36 < jsn> update comes in and is broadcast to all nodes
21:36 < jsn> no confirmation is required to commit
21:37 < jsn> therefore nodes that do not get the update will be inconsistent
21:37 < stepcut> rovar: depends, when uploading a document, you may not store it in happstack-state at all -- you might store it directly on S3, and only record the S3 URL in happstack-state
21:37 < jsn> they will not know they didn't get it, other nodes won't know they did
21:37 < stepcut> jsn: not quite.
21:37 < jsn> please explain
21:38 < jsn> i would really like to know more
21:38 < stepcut> jsn: the update comes in to happstack. happstack asks the local spread daemon to broadcast the event to all nodes.
21:38 < jsn> okay
21:38 < stepcut> jsn: spread ensures that all the nodes get the event
21:38 < jsn> so the update comes in to a happstack node
21:38 < rovar> stepcut: certainly. although at some point would you expect hs-state to function as a DHT that would be competitive with S3?
21:38 < jsn> happstack delegates to spread
21:38 < stepcut> jsn: the local spread daemon on each node then delivers the update event to each local happstack server
21:39 < jsn> stepcut: okay
21:39 < stepcut> rovar: DHT?
21:39 < jsn> stepcut: so, it is not true that spread guarantees the other nodes get the message
21:39 < Saizan_> Distributed Hash Table
21:39 < jsn> stepcut: anymore than TCP guarantees delivery
21:40 < jsn> stepcut: network partitions are an issue with the system you describe
21:40 < Saizan_> jsn: are you sure? i assumed spread would confirm the send (sending the message back to you) only when the other nodes acknowledged it
21:40 < stepcut> jsn: spread has several modes of delivery. We use safe which ensures that (1) all messages from all senders are sent to all receivers in the exact same order (2) a message is not delivered to any happstack server in the group unless all members of the group have received the message
21:40 < rovar> jsn: spread can guaranteed it.
21:42 < jsn> stepcut: so a request hits a happstack node. the message is formed into an update and delegated to spread. only if there is confirmation will our local node receive and apply the update?
21:42 < stepcut> rovar: I would say no. S3 and happstack have different, but complimentary goals.
21:42 < jsn> rovar: well, no, it can not
21:42 < jsn> rovar: you can not guarantee delivery on the internet
21:42 < jsn> stepcut: so, if one node fails, the whole system is dead?
21:42 < stepcut> jsn: right. The local node gets updated the same way as all other nodes, it actually receives the update message that it sends to the network.
21:42 < rovar> jsn, but you can be assured that if delivery fails to one node, then it fails for all, in which case the mirrors are still synced
21:43 < stepcut> jsn: I doubt it. I expect spread has some method of removing dead nodes from the group. I do not yet know what that is.
21:43 < jsn> stepcut: it sounds like you guys have safety sorted, using spread for atomic broadcast
21:43 < jsn> so, the system is down until spread removes the node
21:43 < rovar> stepcut: is the goal of state simply for sharing session state, so that a request can be handled by any node in the cluster?
21:43 < stepcut> jsn: dunno, that is part of what I am figuring out and documenting.
21:44 < jsn> aye
21:44 < jsn> rovar: yes, this system ensures the delivery is identical
21:44 < rovar> managing node state is not trivial
21:44 < stepcut> rovar: it is for sharing all of the state so that a request can be handled by any node in the cluster.
21:45 < rovar> i mean status
21:45 < rovar> up/down/busy
21:45 < jsn> rovar: have you read the FLP paper?
21:46 < stepcut> rovar: 'session' state does not get any special treatment at that level, it's just more state. Managing the cookies happens at a higher level.
21:46 < jsn> you can't really detect node failure as such; you can just set a timeout and work with that.
21:47 < stepcut> jsn: keep in mind that the current multimaster support is the very first attempt at adding multimaster to happstack. So, it is very likely to change :)
21:47 < jsn> rovar: http://groups.csail.mit.edu/tds/papers/Lynch/jacm85.pdf
21:47 < jsn> stepcut: well, i think it is a mistake to approach this as an engineering problem
21:48 < stepcut> jsn: oh ?
21:48 < jsn> basically, if you guys are claiming consistent state across all nodes
21:48  * stepcut hopes it is not a marketing problem
21:48 < rovar> jsn, no I haven't read the flp paper, but yea, what I've found is even if you proactively discover bad nodes, there isn't much you can do for the pending transactions in a reliable manner
21:48 < jsn> rovar: you should read it, it is very important
21:48 < jsn> stepcut: basically, you guys need a picture and a proof
21:49 < jsn> stepcut: you can't just trust that some thing you got off the internet solves atomic broadcast
21:49 < jsn> i have talked to three different happstack partisans about this
21:49 < jsn> and none of them can actually say how many failures you can handle or how much clock error the system can tolerate
21:49 < gwern> we have 3 partisans? woot, the team is growing!
21:50 < rovar> heh
21:51 < stepcut> jsn: yep. That's why no one is claiming that multimaster support in happstack is done.
21:51 < jsn> rovar: one result of the paper is that there is no way to distinguish nodes that are slow from nodes that are dead, or nodes that have lost connection
21:52 < jsn> stepcut: well, to be more to the point -- this is totally antithetical to the haskell way
21:52 < rovar> jsn, if you perform a two stage transaction across all nodes, the 1st one delivers a message and asks for a response; once a response has been received from all recipients, it sends a new message telling all recipients to process the message.
21:52 < stepcut> jsn: what would be the haskell way?
21:52 < jsn> stepcut: where is the paper "consensus in happstack" ?
21:53 < jsn> rovar: okay, so the first node fails as soon as it sends the message to three quarters of the others. then what?
21:53 < gwern> jsn: proceedings of planet haskell (2009)
21:53 < jsn> link?
21:53 < rovar> the 1st call/response can be thought of as part of a network layer transaction, the 2nd can be thought of as transport layer
21:53  * gwern was being facetious -_-
21:53 < jsn> gwern: right
21:54 < rovar> jsn, then all nodes will behave as if they hadn't received that message
21:54  * stepcut goes to bed
21:54 < rovar> 'nite
21:54 < jsn> g'nite
21:54 < jsn> rovar: so now the 1st node sends the message
21:54 < jsn> it receives the confirmations
21:54 < jsn> sends them out
21:54 < jsn> only some of them make it
21:54 < rovar> jsn, the 3/4ths of the nodes would have a timeout for that transaction before they give up and destroy the session
21:54 < jsn> then what?
21:55 < jsn> rovar: right, timeout is important -- it times out whether the first has failed or not, though
21:55 < rovar> jsn, then the remaining are screwed.  :)
21:56 < jsn> rovar: bad answer
21:56 < jsn> you need to be sure they kill themselves or some such thing
21:56 < jsn> you don't want to rely on the network
21:57  * stepcut thinks the current happstack multimaster is very much the haskell way. It looks at the idealized properties (i.e. total ordering and delivery to all nodes), and attempts to ignore the realities of real world IO :)
21:57 < jsn> two phase commit with a leader exposes you to all kinds of problems like that
21:57 < jsn> stepcut: but that is not the haskell way
21:57 < rovar> so what is the answer, paxos?
21:57 < stepcut> jsn: it is when it comes to lazy IO
21:57 < jsn> rovar: no, strangely
21:57 < rovar> do we need to achieve consensus for every message?
21:57 < jsn> rovar: to have consistency, you must have consensus
21:58 < jsn> you can do it without a leader
21:58 < jsn> just send the proposal to every node and then have every node broadcast its vote to every other node
21:58 < rovar> does each node need to broadcast its acceptance of the final message for that transaction to all parties?
21:58 < jsn> yes
21:59 < jsn> then any node that does not receive a majority "in time" knows it is out of date (missed the round) and goes to sleep
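The leaderless vote jsn sketches boils down to a simple majority check at each node (a hypothetical sketch that ignores rounds and timeouts):

```haskell
-- Hypothetical sketch: a node commits only on a strict majority of
-- Accepts; with fewer votes it must assume it missed the round.
data Vote = Accept | Reject deriving (Eq, Show)

decide :: Int -> [Vote] -> Bool
decide clusterSize votes =
  2 * length (filter (== Accept) votes) > clusterSize
```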
21:59 < rovar> so it's basically paxos, but without the leader election.  but in the case of detrimental state, you need a unanimous decision and all dissenters or absentees are shunned :)
21:59 < jsn> rovar: how is it basically paxos?
22:00 < jsn> paxos commits every read as well as every write, actually
22:00 < jsn> this system does not require that
22:01 < jsn> stepcut: the haskell way is to elegantly explain real world networking problems, describing them in a way that clarifies our intuitions; then we write a system that corresponds to these intuitions. the haskell way is not to bury our heads in the sand!
22:01 < jsn> also, IO is not impure
22:01 < rovar> paxos is just a consensus protocol, it doesn't indicate that reads as well as writes must be agreed upon
22:01 < rovar> you can choose to apply it wherever.
22:02 < jsn> rovar: you have read that paper?
22:02 < jsn> let me go find the citation, just a minute
22:02 < rovar> but in some cases, in order to accept a message, you must validate state.
22:03 < rovar> which paper? I've read several on paxos and consensus in distributed systems. This is my job :)
22:03 < jsn> i mean, "Part Time Parliament"
22:03 < stepcut> jsn: but, lazy IO is very bad about controlling resources or time guarantees
22:03 < jsn> stepcut: right. it is contrary to the haskell way.
22:03 < stepcut> jsn: providing any guarantees that your file handles will ever be closed, etc
22:04 < jsn> lazy IO is the Perl way
22:04 < rovar> jsn, yes, i've read that.
22:04 < jsn> regardless, lazy IO does not pretend to offer any guarantees it does not explain or demonstrate
22:05 < jsn> what i find so weird about this is that a bunch of haskell programmers are talking about cluster-wide acidity but are fuzzy on the details :)
22:05 < jsn> rovar: well, let's try a different tack.
22:05 < jsn> rovar: how is the broadcast voting thing "paxos without leader election" ?
22:06 < jsn> the paxos paper _is_ pretty opaque and i could be wrong
22:07 < jsn> it is possible that "commit on read" is just a variation
22:07 < rovar> jsn, in basic paxos, you have a leader propose to a quorum of acceptors. However, in practical cases, the leader is elected by this same quorum.
22:07 < stepcut> jsn: that is because the original authors have disappeared and the new authors have not gotten up to speed
22:07 < jsn> stepcut: really?
22:07 < rovar> further, it is usually members of this quorum that initiate proposals for the leader to propose.
22:07 < stepcut> jsn: yes, that is why the name changed from HAppS to happstack
22:08 < jsn> did Alex Jacobson bail on the thing, then?
22:08 < rovar> so you can bypass this step, and have the acceptor propose the message in the same fashion, thus becoming an ad hoc leader
22:08 < rovar> this is acceptable if you trust none in your quorum to be malicious
22:08 < rovar> that is to say, it can't solve the byzantine generals problem.. but it's good in practice on a closed LAN
22:08 < stepcut> jsn: he currently lacks time to work on it
22:08 < jsn> stepcut: ah
22:09 < stepcut> jsn: though he is still interested in seeing the project continue with out him at the lead
22:09 < jsn> stepcut: yeah, i remember speaking with him once; he was talking about paying people to do it or something
22:09 < jsn> stepcut: i saw him talk at BayFP; that was the last I'd heard about HAppS until like Tuesday
22:10 < jsn> rovar: compared to Paxos, this system is pretty stripped down
22:10 < rovar> yea.. it's often referred to as "fast paxos" but again, its only downside is that it doesn't protect against malicious members; if you're willing to make that assumption, you can save a lot of message exchanges
22:11 < stepcut> jsn: yes, there was a 6+ month lull for a while, until the happstack project got started.
22:12 < jsn> rovar: yeah, it is fast paxos
22:12 < stepcut> jsn: the initial phases of happstack are focused on (1) making the project easy to install (2) filling in the giant holes in the documentation (adding haddock docs, tutorials, etc.) (3) building up the test suite to ensure that what is there actually works (4) performance testing to see how well it currently works (5) new developer education (6) other stuff
22:13 < rovar> jsn, but still paxos, at least in terminology ;P
22:13 < jsn> rovar: well, i don't know about that
22:14 < jsn> there seems to be a conflict resolution protocol even in fast paxos
22:14 < stepcut> due to 'the real world' being involved, any sort of consensus algorithm is subject to empirical testing results, regardless of what the theory says
22:14 < jsn> stepcut: well, no, actually
22:14 < jsn> stepcut: a consensus algorithm can be proven to handle a certain number of failures and a certain amount of clock error
22:15 < jsn> you either get that right in the drawing or it's just all luck
22:15 < jsn> but there's no magic or guessing
22:15 < stepcut> jsn: the same could be said of filesystem design, but practice shows otherwise
22:16 < stepcut> the worst case results can be calculated, but the real-world results that matter are not so easy
22:16 < jsn> stepcut: consistency guarantees are all about worst case results
22:17 < stepcut> jsn: perhaps, but a higher total throughput seems more valuable than worst case in many situations
22:18 < jsn> stepcut: well, hold on
22:18 < jsn> let's go back to what you originally said
22:18 < jsn> "due to 'the real world' being involved, any sort of consensus algorithm is subject to empirical testing results, regardless of what the theory says"
22:18 < jsn> neither the safety nor liveness properties of the algorithm are subject to empirical testing
22:18 < jsn> that is all i meant to say
22:19 < jsn> certain number of failures and a certain amount of clock error
22:19 < stepcut> right, but when it comes to comparing real-world performance, I think empirical testing is your main tool
22:19 < jsn> yes, when comparing performance
22:20 < stepcut> sorry, I am very tired, so I am not being very clear
22:20 < jsn> this has been a topic of great interest to me for some time
22:21 < jsn> last year, my boss was convinced we could do financial processing on top of SimpleDB; he figured transactions across a distributed web service would be a snap.
22:22 < jsn> after a little reading became a lot of reading, i've come to feel very strongly that the cheapest way to find the truth of these things is with paper and pen
22:22 < jsn> there are just too many experiments to run :)
22:23 < jsn> rovar: so, with merged acceptor and learner, yeah, it's fast paxos, looks like
22:28 < jsn> actually...
22:28 < jsn> rovar: to be honest, it is not clear to me that fast paxos forces voting at set times
22:31 < jsn> rovar: or to put it another way, the ascription "fast paxos" is not enough to specify an algorithm with any particular requirement to handle or overlook conflicts, advance in rounds of predetermined length, or otherwise
22:42 < jsn> anyways, i do not mean to troll your channel
22:52 < rovar> jsn, sorry, had to take care of an angry baby
22:53 < jsn> ah
22:53 < rovar> jsn, it's not as if this channel is too noisy to support it
22:53 < jsn> heh :)
22:54 < jsn> well, i am in slight danger of promoting my own work; i have posted one of my own papers here and have argued for broadcast voting
22:55 < rovar> we could always make a module for happstack that is Quorum instead of MultiMaster :)
22:55 < rovar> which paper?
22:56 < jsn> http://github.com/jsnx/members-only/raw/master/notes/Consistent%20Logging%20Algorithm
22:57 < rovar> data distribution and scalability are my primary concerns, but I come at it from a very practical background. I design data analysis systems for a news company.
22:57 < jsn> ah
22:57 < jsn> rovar: my personal opinion is that consensus is very expensive
22:57 < rovar> agreed
22:57 < jsn> rovar: not a good fit for most things
22:58 < jsn> in the paper, i advocate assigning a revision number to each changeset
22:58 < jsn> store the changesets immediately
22:58 < jsn> then sort out the revision numbers later
22:58 < jsn> you can get your storage now and have your confirmation/transaction in two rounds
22:58 < jsn> which is basically the best you can do
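[ed: the store-now, number-later scheme jsn outlines above can be sketched in a few lines. This is a single-process illustration only; all names are invented, and the real system would run this across nodes.]

```python
# Sketch of "store the changesets immediately, sort out the revision
# numbers later". Storage is acknowledged at once; ordering (and hence
# confirmation) arrives in a later round.

class ChangesetStore:
    def __init__(self):
        self.pending = []   # stored durably, but not yet numbered
        self.log = []       # ordered (revision, changeset) pairs

    def store(self, changeset):
        # Round one: the changeset is stored right away.
        self.pending.append(changeset)

    def commit_round(self):
        # Round two: hand out revision numbers in an agreed order.
        assigned = []
        for cs in self.pending:
            rev = len(self.log)
            self.log.append((rev, cs))
            assigned.append(rev)
        self.pending = []
        return assigned
```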
22:59 < jsn> rovar: for what you are doing, i imagine the data doesn't change very much
22:59 < rovar> hs-state seems to lean towards an object journaling database
22:59 < rovar> this concept would fit well.
22:59 < rovar> i hadn't until tonight considered making an object journaling database distributed
23:00 < rovar> I would imagine that data would change quite frequently. imagine a series of happstack servers behind a round-robin load balancer.
23:01 < rovar> at any point in a users session, any one of the masters could handle the request.
23:01 < jsn> yes
23:01 < jsn> so the round time is basically controlled by your clock error
23:01 < jsn> within a datacenter you could get really small rounds
23:02 < jsn> ten milliseconds for most cases
23:02 < rovar> i'd have to think about whether or not this would work with a comet based interactive system, or if that matters..
23:02 < jsn> comet?
23:02 < rovar> server -> client   ajax
23:03 < jsn> ah
23:03 < rovar> comet is the general idea of completely removing the difference between current web transactions and client/server architectures
23:04 < rovar> the primary concern is that connections are long duration, so you are forced to handle less concurrent users.
23:04 < jsn> well, the system is built on these regular rounds, right?
23:04 < jsn> the system i describe, i mean
23:04 < rovar> yea
23:04 < jsn> so things do not change except at the round times
23:05 < jsn> in other words, you can only change things at that frame rate
23:05 < rovar> so during each round, the consensus would cover all transactions that would have taken place in the interim?
23:05 < jsn> right
23:05 < jsn> so for clients, they get a broadcast of the agreed upon changesets
23:05 < jsn> they don't need a connection
23:05 < rovar> i think I would like to implement this scheme, if for no other reason than to see how it works :)
23:06 < jsn> aye
23:06 < jsn> if we did it for bytes first, it would of course work well for a lot of people
23:06 < rovar> my concern is that I'm not proficient enough in haskell to do it justice
23:07 < jsn> well, i can help with that :)
23:07 < rovar> haskell has recently become a hobby,  I'm a c/c++ programmer by trade. My job takes up quite a lot of time as well.
23:07 < jsn> aye
23:07 < rovar> i would like to do some math and see what this scheme would do to bandwidth on our current system. :)
23:08 < jsn> this was basically shelved when it was clear to my boss how involved distributed transactions actually are
23:08 < jsn> i'm glad people in here are interested in it
23:08 < jsn> rovar: so the bandwidth is very bad
23:08 < jsn> because you have to broadcast the messages
23:09 < jsn> if you shard, you can avoid that -- mae has asked for that paper and i will start soon
23:09 < rovar> but we do support multicast  (PGM) on our networks, so this could be implemented efficiently
23:09 < rovar> at work, that is
23:09 < jsn> well, yeah, i think for web clients and such, diffs are a big win
23:10 < jsn> a comet client is kind of serving two roles -- it proposes ideas and it is also a slave
23:10 < rovar> jsn, wrt bandwidth, my take is that if you're mirroring, you're going to have to send the data one way or another.
23:11 < rovar> jsn, also, have you considered SCTP for such an implementation?  ... hrrm.. is there even SCTP support in haskell?
23:11 < rovar> and if we're the only two talking, why do i keep prefixing with jsn, ?
23:11 < jsn> i need to look at that stuff, yeah
23:11 < jsn> rovar: it is not a bad habit :)
23:11 < jsn> makes it stand out in my chat logs
23:12 < jsn> so yeah, the next thing in line for me is to write the paper for sharding and membership
23:12 < rovar> hrrm.. googling haskell and sctp doesn't return promising results.
23:13 < rovar> jsn, sharding is less of a concern of mine until I run into a wall with naive broadcast
23:13 < jsn> i need to demonstrate how it resolves failures and how it gets over the O(n^2) message cost
23:13 < rovar> so are you a student?
23:13 < jsn> no
23:13 < jsn> i am unemployed
23:13 < jsn> my firm folded in december
23:13 < rovar> ah.. I was thinking of a GSoC project :)
23:13 < rovar> sorry to hear that.
23:14 < jsn> it's nice to have people to discuss this with
23:14 < jsn> i am learning ruby on rails and will re-enter the job market with that
23:14 < jsn> i was using haskell at my old job
23:15 < rovar> that must be nice
23:15 < jsn> yes and no
23:15 < jsn> people blew really hot and cold over it
23:15 < rovar> how so?
23:15 < jsn> i had excellent approval when things were going smoothly
23:16 < rovar> ah
23:16 < jsn> but whenever i hit a slow patch -- for example, an external interfacing requirement that required me to write a parser -- there was a tendency to assume that all "type inference" was slowing things down
23:16 < rovar> heh
23:17 < jsn> intellectual honesty is tough. people will waver and then anything unusual is up for getting fingered.
23:17 < rovar> its lack of general acceptance is certainly the cause of its lack of general acceptance
23:17 < rovar> and yea, i've found programming shops to be just as superstitious as the rest
23:17 < jsn> right
23:18 < jsn> a project with unexpectedly time consuming requirements is likely to make people angry, basically
23:18 < rovar> i've often thought it would be fun to start a chop shop using functional langs, but in reality that seems unlikely.
23:18 < rovar> a deathmarch is a deathmarch in any language :)
23:18 < jsn> a chop shop?
23:18 < jsn> indeed.
23:18 < rovar> contracting shop
23:19 < jsn> but nobody ever got fired for buying IBM, right?
23:19 < rovar> it would be great for the cost/production ratio.  However, owners would be invariably perplexed at the lack of people to support their code besides us
23:19 < jsn> if i had used C or Perl -- both languages everyone was familiar with -- they would not have blamed regular expressions or pointers
23:19 < jsn> aye
23:20 < rovar> which may not be a bad thing ;)
23:20 < jsn> aye
23:21 < jsn> replacing your programming contractors is always going to be a little like replacing actors in the middle of a movie
23:21 < rovar> so distributed prevayler... now I'm not going to get any sleep tonight
23:21 < jsn> heheh
23:21 < jsn> it's really distributed, lock-step string storage
23:21 < jsn> you can store any "diff language" in the system i describe
23:22 < rovar> right.. prevayler is nice because it stores actions, as described by hs-state
23:22 < jsn> oh
23:22 < rovar> approximately.. i'm over generalizing, and have learned very little about it so far.
23:22 < rovar> but the important part is achieving consensus
23:22 < rovar> it's really my own mind that is moving in that direction
23:22 < jsn> aye
23:23 < jsn> structured data is totally a different issue
23:23 < rovar> because the on-disk storage can be quite compact, storing CRUD actions for records in an ordered, replayable journal...
23:23 < rovar> also insanely fast for data retrieval.
23:24 < jsn> so, one approach to this is to keep the actions compiler separate
23:24 < rovar> but it may be a very bad idea, i've not thought it through
23:25 < jsn> the system i describe does not have any "compiling" -- it just stores the diffs and allows one to determine a distinguished head revision
23:25 < jsn> you would need something to actually run the diffs and such so that you had relevant objects in the cache
23:25 < rovar> does it calculate the diff? or is it fed the diff by the proposer?
23:25 < jsn> it is fed the diff
23:26 < rovar> so what does this database look like and how is it diffed?
23:26 < jsn> so, when a request enters, the first node to get it gives it a unique number
23:26 < jsn> then it sends it out
23:27 < jsn> oh, and...
23:27 < jsn> the client that sent the request specifies which revision the diff follows
23:27 < jsn> so a client could work in a branch and not worry about consensus if it so chose
23:27 < rovar> to establish causality
23:27 < jsn> yes
23:28 < jsn> also, to know where the conflicts are
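[ed: the conflict check implied here can be sketched directly: each proposal names the revision it follows, and it conflicts whenever some key it touches has been written past that base revision. The function and its arguments are invented for illustration.]

```python
# Hypothetical conflict check for base-revision ("rev-following") proposals.

def conflicts(proposal_base, touched_keys, last_write):
    """last_write maps key -> revision of the last accepted change to it.
    Returns the keys whose head has moved past the proposal's base."""
    return [k for k in touched_keys if last_write.get(k, -1) > proposal_base]
```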
23:28 < rovar> it's beginning to sound like oracle's optimistic transactions :)
23:28 < jsn> it is MVCC, yes
23:28 < jsn> it is optimistic transactions
23:28 < jsn> but that is much older than oracle, it goes back to, uhm...
23:28 < rovar> sure
23:29 < rovar> that's just the model I'm familiar with.. so how is this data stored? in a hash table? so a change is simply a new value for a key?
23:29 < jsn> really old paper, i can't remember
23:29 < jsn> hmm, well
23:29 < jsn> okay, so we have a change, right?
23:29 < jsn> <rev-following> <contents>
23:29 < rovar> can we make it more concrete?
23:30 < jsn> right, working on it
23:30 < rovar> :)
23:30 < rovar> i'm tired
23:30 < jsn> then we add a revision that it gets
23:30 < jsn> <rev-following> <rev> <contents>
23:30 < jsn> but that's not quite what we want
23:30 < jsn> since it conflicts with every other transaction
23:31 < jsn> so we have targets
23:31 < jsn> which are, yeah, keys or strings, really
23:32 < jsn> so the client sends     <rev-following> <key0> <change0> <key1> <change1> ...
23:32 < jsn> to specify which resources are in the transaction
23:32 < jsn> an update is a changeset
23:32 < rovar> makes sense.
23:32 < jsn> what is a changeset?
23:32 < jsn> the system does not actually care
23:32 < jsn> its job is to order the changesets, according to MVCC
23:33 < jsn> the changesets are assumed to be in your programming language of choice
23:33 < jsn> they could be diffs
23:33 < jsn> they could be SQL logging statements
23:33 < jsn> doesn't make any difference
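[ed: a toy encoding of the request format jsn sketches above, `<rev-following> <key0> <change0> <key1> <change1> ...`. The payloads are treated as opaque strings, exactly as described; for simplicity this sketch assumes keys and payloads contain no whitespace.]

```python
# Encode/decode for: <rev-following> <key0> <change0> <key1> <change1> ...
# The system never inspects the change payloads; it only orders them.

def encode_request(rev_following, changes):
    """changes: list of (key, payload) pairs."""
    parts = [str(rev_following)]
    for key, payload in changes:
        parts += [key, payload]
    return " ".join(parts)

def decode_request(msg):
    tokens = msg.split()
    rev_following = int(tokens[0])
    pairs = list(zip(tokens[1::2], tokens[2::2]))
    return rev_following, pairs
```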
23:33 < rovar> sure
23:33 < jsn> that's why i mention the "compiler" or whatever
23:34 < jsn> this is an algorithm for presenting and maintaining a consistent log to whoever asks
23:34 < jsn> so you might actually put a database next to the logger and stream changes off of it
23:35 < jsn> the update does not necessarily represent an overwrite to the key
23:35 < jsn> it's just something to add to the key's log
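[ed: "something to add to the key's log" suggests per-key append-only logs rather than a mutable key/value map. A minimal sketch, with invented names:]

```python
# Per-key append-only logs: an update never overwrites a key, it is
# appended to that key's history along with its agreed revision number.

from collections import defaultdict

class KeyLog:
    def __init__(self):
        self.logs = defaultdict(list)

    def append(self, key, rev, change):
        self.logs[key].append((rev, change))

    def history(self, key):
        # Readers replay or interpret the log however they like.
        return list(self.logs[key])
```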
23:37 < jsn> so let's say we have this string logging network service
23:37 < jsn> how do we make a sweet distributed database with it?
23:37 < jsn> well, we need to do two things:
23:38 < jsn> it's easy to hook MySQL up to the log and just feed it the updates
23:38 < rovar> http://www.cnds.jhu.edu/pub/papers/spread.pdf
23:39 < rovar> you might want to read that, I haven't yet, but am interested to see how they solve the problem.
23:39 < rovar> http://spread.org/SpreadResearch.html
23:39 < rovar> it would appear that this Yair Amir fellow has given the topic quite a bit of thought :)
23:40 < jsn> i have read it
23:40 < jsn> their approach is to focus on atomic broadcast
23:40 < jsn> my goal is storing revisions
23:41 < rovar> jsn, so really the core concepts here are  identifying a revision proposal, basing some action within a change set, agreeing on the change set, then applying each action locally...
23:41 < jsn> hmm, well
23:41 < rovar> if there may be a conflict, i.e. another proposer has based a proposal on a different change set..  what would happen then?
23:42 < jsn> well, in each round, there is only one quorum
23:42 < jsn> the quorum wins
23:42 < jsn> if there is no quorum, everyone goes to sleep
23:42 < jsn> so, why is this not a problem?
23:43 < jsn> they are all mirrors, working with the same set of revisions on the same revision graph
23:43 < jsn> if they don't all vote the same way, then maybe one missed a proposal
23:43 < jsn> due to network failures
23:43 < jsn> that's the only way
23:44 < rovar> but they have different stimuli acting on them, correct?
23:44 < jsn> no
23:44 < rovar> or are we talking about a single node disseminating changes?
23:44 < jsn> no
23:44 < jsn> voting takes place in rounds, right?
23:45 < rovar> sure.
23:45 < jsn> so we get the changes that apply in this round
23:45 < jsn> and then we vote on them
23:45 < jsn> each node does that
23:45 < rovar> i guess that is my question, in each round, each node submits its changesets?
23:45 < jsn> each node broadcasts the changeset requests it gets from clients
23:45 < jsn> each changeset must "take effect" later
23:46 < rovar> right
23:46 < jsn> basically, the timestamp on a changeset lets us know when we have to vote on it
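[ed: the timestamp-to-round mapping is simple arithmetic. The 10 ms figure is jsn's datacenter ballpark from earlier in the log; treating "takes effect no earlier than the next round" as the rule is my reading of the discussion.]

```python
ROUND_MS = 10  # ballpark round length within a datacenter (from the log)

def vote_round(timestamp_ms):
    """The round in which a changeset comes up for a vote: a changeset
    submitted during round r is voted on no earlier than round r + 1."""
    return timestamp_ms // ROUND_MS + 1
```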
23:46 < rovar> so there is immediate consistency
23:46 < jsn> well, hmm
23:46 < jsn> no, i wouldn't say there is immediate consistency
23:46 < jsn> you might get the changesets all out of order
23:46 < rovar> from a local standpoint, a client transaction doesn't complete until there is quorum
23:46 < jsn> right
23:47 < jsn> so a client sends a request
23:47 < rovar> so it's not eventual consistency... it's immediate :)
23:47 < jsn> well, bound
23:47 < jsn> yeah
23:47 < jsn> i have never seen that term "immediate consistency" before, but yeah
23:47 < jsn> so, let's think about some evil stuff
23:47 < jsn> a node gets a request
23:48 < jsn> it broadcasts it
23:48 < rovar> we've discussed it in Scalaris forums.
23:48 < rovar> but anyways.. continue
23:48 < jsn> a terrorist attacks one of the messages going to one of the nodes
23:48 < jsn> it doesn't get there
23:49 < jsn> so in that round, all the nodes vote for the change, except for the node that didn't get the message -- its vote is "no change"
23:49 < jsn> it receives the quorum of votes from everyone else
23:49 < jsn> and submits to majority opinion
23:50 < jsn> it may actually not have the data; but it has the right revision number, which it can use to fix the data store
23:50 < jsn> or forward the request
23:50 < jsn> but there is no ambiguity about whether a revision has been approved
23:51 < jsn> how many messages must the terrorist attack to prevent a quorum?
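[ed: under simple majority quorums (an assumption; the paper may use a different quorum system), the question above has a closed-form answer: with n nodes a quorum is floor(n/2) + 1 "yes" votes, a node that misses the proposal votes "no change", so the attacker must make at least ceil(n/2) nodes miss the message.]

```python
# Majority-quorum arithmetic for the "terrorist" scenario.

def quorum(n):
    # votes needed to approve a change among n nodes
    return n // 2 + 1

def messages_to_attack(n):
    # smallest k such that the remaining (n - k) "yes" votes fall
    # short of a quorum, i.e. k = ceil(n / 2)
    return n - quorum(n) + 1
```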
23:51 < rovar> but if a local user makes a request on any data affected by that branch, it has to block or fail until it gets synced, ya?
23:51 < jsn> well, let's handle two cases separately
23:52 < jsn> say this is a cluster of nodes serving data from behind a load balancer -- then it makes no difference
23:52 < rovar> why?
23:52 < jsn> the user just makes some changes to the branch, puts them up to be stored, the system stores them and then two rounds later says, by the way, we stored your stuff but you aren't on the trunk anymore
23:53 < jsn> in short we _store_ their transaction -- their _branch_ -- but do not _confirm_ it
23:54 < jsn> so private branches work fine
23:54 < jsn> now say the local user is a database that is hooked up to a logger
23:54 < jsn> the logger does not get the message
23:55 < jsn> then it gets the quorum and it is fixed
23:55 < jsn> say it does not get the quorum
23:55 < jsn> then the next time the database talks to the log, the logger is like, "hey, we're dead, bang, go to sleep"
23:55 < jsn> end of that node
23:56 < rovar> so a node goes down because it missed 1 message?
23:56 < jsn> because it missed a quorum, it can only serve the past
23:56 < jsn> it can not make claims as to the present
23:57 < jsn> it has to catch up
23:57 < jsn> it can still serve out old revisions correctly, though
23:57 < jsn> and of course, no system can do consensus if nodes miss a heartbeat
23:58 < rovar> i'd be tempted to implement this system with reliable UDP.
23:58 < jsn> or unreliable UDP
23:58 < jsn> terrorists can attack any message
--- Log closed Sun Mar 01 00:00:28 2009