00:05:34 <levi> stepkut: My point is that doing that provides one parser that you trust.  Working out a way to test HTTP parsers provides many more working parsers.
00:27:07 <levi> The other sad fact is that 'correctness' of protocol parsing doesn't actually buy you anything, because the *actual* requirement for a practical implementation is that it functions as the major browsers expect it to.
02:28:49 <rogovski> hi everybody. im trying to install hsp-0.7.1 on my ubuntu machine under ghc 7.6.2. im pretty sure i'm running into trouble because my version of base doesn't contain the Control.OldException module. im trying to install clckwrks and the reason i want to install hsp-0.7.1 is because it is listed as a dependency in the .cabal file of one of the clckwrks packages. should i be using a different version of ghc? haha im starting to think i need
02:28:49 <rogovski> to have a better understanding of the ghc package system before even attempting this installation.
03:02:06 <levi> I think the problem is that hsp has an incorrect version upper bound for base.
03:03:28 <stepcut> yeah
03:03:45 <levi> It should be fixed to use the new exceptions.  You might be able to fix it yourself, it's not too hard.
03:03:51 <stepcut> well.. 7.6 will work with the new base, but you need to add an extra dependency
03:04:04 <stepcut> I just need to make hsp 0.8 the norm :)
03:04:46 <levi> Hmm, cabal-dev 0.9.2 released.
03:05:03 <stepcut> does that have sandboxes?
03:06:05 <levi> Hasn't cabal-dev always done sandboxes, or do you mean something different?
03:06:41 <levi> The big seller for this version is, apparently, that it builds with recent GHCs.
03:07:53 <levi> I wonder if it would fix the cabal-dev ghci problems I'm having if I installed it.
03:08:23 <stepcut> oh, oops
03:08:31 <stepcut> I thought you were talking about a new cabal release
03:08:39 <stepcut> which.. would be a version much higher than 0.9 too :)
03:08:46 <stepcut> clearly time to go to bed
03:10:04 <levi> Heh.  G'night!
09:15:04 <ibotty> hi
09:15:09 <ibotty> anyone from clkworks here?
09:15:18 <ibotty> clkwrks, that is..
16:43:37 <levi> No one that has been active recently, but that's not necessarily an absolute indication of absence.
16:43:58 <stepkut> ACTION looks arounds
16:44:12 <levi> And hey, there he is.
16:44:21 <stepkut> clckwrks question?
16:54:45 <stepkut> ACTION dances around
16:56:30 <levi> stepkut: So, I am curious now, how are you going about constructing your 'more obviously correct' HTTP parser?  Are you making a new kind of parser, or just using something like Attoparsec in a particular way?
16:56:55 <stepkut> levi: a new kind of parser
16:58:51 <levi> What's the idea behind the new parser, then?
16:58:58 <stepkut> let me look :)
17:02:13 <stepkut> well, the really basic idea is that you copy the ABNF spec into your source and parse it using a quasi-quoter (that part is done). Then you construct a parser by hand, but it also refers to aspects of the ABNF. This gets inspected at compile time (via QQ/TH) and the actual parser is generated, verifying that it conforms to the ABNF
17:04:19 <stepkut> so if you have the very abbreviated rules list:
17:04:23 <stepkut> http = [abnf|ALPHA =  %x41-5A / %x61-7A   ; A-Z / a-z |]
17:04:36 <stepkut> then you might be able to define a parser like:
17:05:09 <stepkut> alpha :: GenParser Char ; alpha = genChar "ALPHA"
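The ALPHA idea above can be sketched in plain Haskell without TH or quasi-quotation; `CharRanges`, `matchesRule`, and `isAlphaHand` are invented names for illustration, not the actual hyperdrive API:

```haskell
import Data.Char (ord)

-- Machine representation of:  ALPHA = %x41-5A / %x61-7A  ; A-Z / a-z
newtype CharRanges = CharRanges [(Int, Int)]

alphaRule :: CharRanges
alphaRule = CharRanges [(0x41, 0x5A), (0x61, 0x7A)]

-- Does a character fall in any of the rule's ranges?
matchesRule :: CharRanges -> Char -> Bool
matchesRule (CharRanges rs) c = any (\(lo, hi) -> lo <= ord c && ord c <= hi) rs

-- A hand-written predicate we would like to validate against the rule:
isAlphaHand :: Char -> Bool
isAlphaHand c = ('A' <= c && c <= 'Z') || ('a' <= c && c <= 'z')

-- In the real design this check would happen at compile time; here it
-- is just a runtime sweep over the ASCII range.
conforms :: Bool
conforms = all (\c -> isAlphaHand c == matchesRule alphaRule c) ['\0' .. '\127']
```

The compile-time version would reject `isAlphaHand` the moment it drifted from the quoted ABNF, which is the whole point of keeping the grammar as data.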
17:07:59 <stepkut> hold on, having another conversation
17:09:03 <levi> So is the QQ defining parts of your parser that you then glue together, or is it somehow checking your hand-generated parser against the autogenerated one?
17:26:40 <stepkut> there is not an autogenerated parser
17:26:50 <stepkut> this part, http = [abnf|ALPHA =  %x41-5A / %x61-7A   ; A-Z / a-z |]
17:27:14 <stepkut> parses the ABNF spec and returns a RuleList -- which is a list of ABNF rules
17:27:28 <stepkut> but it is just a machine representation of the grammar, it is not a parser itself
17:28:50 <stepkut> there is a generated parser though
17:29:20 <stepkut> with something like parsec/attoparsec, you directly construct a parser via combinators like: many anyChar
17:29:30 <stepkut> and that is actually what gets run when you parse code
17:29:49 <levi> Right.
17:29:56 <stepkut> here we have similar combinators, but they instead specify a parser and how it relates to the ABNF rules
17:30:21 <stepkut> then some template haskell code runs that checks that what you wrote really does match the rules
17:30:25 <stepkut> and it generates a parser from that
17:30:42 <stepkut> so, more like how Happy works
17:31:47 <levi> So the combinators create a data structure that is later converted to a parser rather than actually creating a parsing function directly.
17:32:15 <stepkut> so if you do, string :: GenParser String ; string = genString "ALPHA", it should probably fail at compile time, because when it looks at the ABNF spec, it will see that ALPHA should be a Char not a String
17:32:21 <stepkut> yes
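A toy version of that idea, assuming nothing beyond base: the combinators build a *description* (here a made-up `Spec` type) that a later step interprets into a runnable `ReadP` parser, rather than being a parsing function directly:

```haskell
import Text.ParserCombinators.ReadP

-- A deep embedding: these constructors describe a parser without
-- running anything.  (Illustrative only; hyperdrive's real combinators
-- also carry the link back to the ABNF rules.)
data Spec
  = Lit String    -- match a literal string
  | OneOf [Spec]  -- alternation
  | Seq [Spec]    -- sequencing

-- The "generation" step: interpret the description into a real parser.
toParser :: Spec -> ReadP String
toParser (Lit s)    = string s
toParser (OneOf ss) = choice (map toParser ss)
toParser (Seq ss)   = concat <$> mapM toParser ss

-- Run a description against an input, requiring full consumption.
runSpec :: Spec -> String -> Maybe String
runSpec spec input =
  case [ r | (r, "") <- readP_to_S (toParser spec <* eof) input ] of
    (r:_) -> Just r
    _     -> Nothing
```

Because `Spec` is plain data, a checker (or Template Haskell) can inspect it before the interpretation step, which is what makes the Happy-like workflow possible.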
17:32:45 <stepkut> so, for some of the parsers, we can partially generate them from the spec
17:32:49 <stepkut> like with alpha
17:32:55 <stepkut> or with these tags:
17:32:59 <stepkut> language-tag  = primary-tag *( "-" subtag )
17:33:02 <stepkut> primary-tag   = 1*8ALPHA
17:33:06 <stepkut> subtag        = 1*8ALPHA
17:33:09 <stepkut>  
17:33:39 <stepkut> we should be able to make combinators with types like, primaryTag :: Parser Text, subtag :: Parser Text, where the implementation is derived from the spec
17:34:06 <stepkut> maybe something like this:
17:34:09 <stepkut> primary-tag-parser = primitive Text "primary-tag"
17:34:55 <stepkut> there we just want the whole match back as a single Text -- not extra structure required
17:35:02 <stepkut> but for language-tag, we actually want some structure
17:35:06 <stepkut> language-tag  = primary-tag *( "-" subtag )
17:35:24 <stepkut> we want to be able to inspect the primary-tag and subtag separately
17:35:30 <stepkut> subtags
17:35:36 <stepkut> so we probably want to create a type like this
17:35:38 <stepkut> data LanguageTag = LanguageTag
17:35:41 <stepkut>     { _primaryTag :: Text
17:35:44 <stepkut>     , _subTags    :: [Text]
17:35:48 <stepkut>     }
17:35:50 <stepkut>  
17:36:06 <stepkut> and then define the parser via something like this:
17:36:14 <stepkut> language-tag-parser :: GenParser LanguageTag
17:36:21 <stepkut> language-tag-parser = constructor LanguageTag
17:37:01 <stepkut> probably needs a tad more information because of the "-"
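For concreteness, here is what the language-tag rules look like as a hand-rolled parser over base's `Text.ParserCombinators.ReadP` (using `String` instead of `Text` to stay dependency-free); in the real design this would be derived from the ABNF rather than trusted as written:

```haskell
import Data.Char (isAsciiLower, isAsciiUpper)
import Text.ParserCombinators.ReadP

-- language-tag  = primary-tag *( "-" subtag )
-- primary-tag   = 1*8ALPHA
-- subtag        = 1*8ALPHA
data LanguageTag = LanguageTag
  { _primaryTag :: String
  , _subTags    :: [String]
  } deriving (Eq, Show)

-- 1*8ALPHA: one to eight ASCII letters
tag1to8 :: ReadP String
tag1to8 = do
  s <- many1 (satisfy (\c -> isAsciiUpper c || isAsciiLower c))
  if length s <= 8 then return s else pfail

-- The "-" separators are consumed but not stored in the structure.
languageTag :: ReadP LanguageTag
languageTag = LanguageTag <$> tag1to8 <*> many (char '-' *> tag1to8)

parseLanguageTag :: String -> Maybe LanguageTag
parseLanguageTag input =
  case [ r | (r, "") <- readP_to_S (languageTag <* eof) input ] of
    (r:_) -> Just r
    _     -> Nothing
```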
17:37:37 <stepkut> the request-line could be handled in a similar manner
17:37:41 <stepkut> Request-Line   = Method SP Request-URI SP HTTP-Version CRLF
17:38:17 <stepkut> it is ok to throw away the white space when parsing
17:38:24 <stepkut> so we might declare
17:38:26 <stepkut> insignificant SP WSP CRLF
17:38:58 <stepkut> which says that those fields can be treated as separators which can be safely discarded during parsing
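A sketch of that Request-Line rule with the separators parsed but discarded, again over base's ReadP (all names here are illustrative):

```haskell
import Text.ParserCombinators.ReadP

-- Request-Line = Method SP Request-URI SP HTTP-Version CRLF
data RequestLine = RequestLine
  { rlMethod  :: String
  , rlUri     :: String
  , rlVersion :: String
  } deriving (Eq, Show)

-- A run of characters up to the next separator (SP or CRLF).
token :: ReadP String
token = munch1 (\c -> c /= ' ' && c /= '\r' && c /= '\n')

-- SP and CRLF are consumed via (<*) but never stored: every parsed
-- token either lands in the structure or is explicitly discarded.
requestLine :: ReadP RequestLine
requestLine =
  RequestLine <$> token <* char ' '       -- Method SP
              <*> token <* char ' '       -- Request-URI SP
              <*> token <* string "\r\n"  -- HTTP-Version CRLF

parseRequestLine :: String -> Maybe RequestLine
parseRequestLine s =
  case [ r | (r, "") <- readP_to_S (requestLine <* eof) s ] of
    (r:_) -> Just r
    _     -> Nothing
```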
17:39:15 <stepkut> another thing to deal with is things like Method
17:39:20 <stepkut> Method         = "OPTIONS"                ; Section 9.2
17:39:20 <stepkut>                | "GET"                    ; Section 9.3
17:39:20 <stepkut>                | "HEAD"                   ; Section 9.4
17:39:24 <stepkut>  
17:39:29 <levi> Well, it's not *completely* insignificant in HTTP; you still have to start a folded line with whitespace.
17:39:45 <stepkut> right
17:40:11 <stepkut> insignificant is probably not the right term here
17:40:56 <levi> The tricky thing with HTTP is that a lot of the rules are in prose.
17:41:03 <stepkut> the default is that you must prove to the system that every token that is parsed ended up somewhere in the data structure, or was explicitly ignored
17:41:12 <stepkut> there are a lot that are not :)
17:42:10 <stepkut> it is not possible to prove 100% that an implementation is correct.. but it is possible to provide far more evidence than is currently being done
17:42:22 <stepkut> so, for Method, you would clearly want a haskell type like
17:42:29 <stepkut> data Method
17:42:30 <stepkut>     = OPTIONS
17:42:32 <stepkut>     | GET
17:42:36 <stepkut>     | HEAD
17:42:39 <stepkut>     | ...
17:43:01 <stepkut> and then you would provide some mapping table, that would be checked to ensure there was a 1-to-1 mapping between the two
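The Method mapping table and its 1-to-1 check might look like this (a runtime sketch with an abbreviated constructor list; the real check would run at compile time via TH):

```haskell
import Data.List (nub)

data Method = OPTIONS | GET | HEAD | POST
  deriving (Eq, Show, Enum, Bounded)

-- The mapping table between ABNF literals and constructors.
methodTable :: [(Method, String)]
methodTable =
  [ (OPTIONS, "OPTIONS"), (GET, "GET"), (HEAD, "HEAD"), (POST, "POST") ]

-- 1-to-1 check: no duplicate constructors, no duplicate strings, and
-- every constructor of the type is covered.
bijective :: Bool
bijective =
  let (ms, ss) = unzip methodTable
  in nub ms == ms && nub ss == ss
     && length ms == length [minBound .. maxBound :: Method]

-- The parsing direction falls out of the table for free.
parseMethod :: String -> Maybe Method
parseMethod s = lookup s [ (str, m) | (m, str) <- methodTable ]
```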
17:44:57 <levi> Sure, it sounds like a valuable tool for when you need something with the flexibility of a hand-written parser but with a bit more correctness-assurance than you usually get.
17:45:10 <stepkut> yes
17:45:22 <stepkut> obviously, there are levels of verification required
17:46:10 <stepkut> which require unit tests, etc
17:46:51 <stepkut> like checking that pipelining is working and stuff like that
17:47:37 <stepkut> in hyperdrive, it is really easy to run the server against alternative input/output sources
17:48:04 <stepkut> so, that will make testing a lot easier, because you can easily simulate requests and look at the responses
17:48:26 <levi> If you do have an automatic mapping between the data structure and the grammar bits, you could also use this for the 'randomized valid language sentence generation' feature I was suggesting, where you could generate both the text and corresponding data structures together and ensure that passing the text through the parser gave the same data structure.
17:48:57 <levi> Via a quickcheck-style tester.
17:49:17 <stepkut> it is easy to generate random sentences from the grammar, I already have that feature
17:49:59 <stepkut> also the ABNF spec has the ABNF grammar as an ABNF grammar, so I can generate random ABNF grammars, and then generate random sentences from the random grammar
17:50:09 <stepkut> those are some weird grammars :)
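A dependency-free illustration of generating random sentences from a grammar: a hand-rolled linear congruential generator (so only base is needed) drives the choices in a language-tag-shaped grammar, and a validity predicate confirms every generated sentence is grammatical. All names are invented for the sketch:

```haskell
import Data.Char (chr)

-- Minimal deterministic LCG; System.Random would do the same job.
next :: Int -> Int
next s = (1103515245 * s + 12345) `mod` 2147483648

-- One lowercase ALPHA character.
genAlpha :: Int -> (Char, Int)
genAlpha s = let s' = next s in (chr (0x61 + s' `mod` 26), s')

-- 1*8ALPHA: pick a length 1..8, then that many characters.
genTag :: Int -> (String, Int)
genTag s0 =
  let s1 = next s0
      n  = 1 + s1 `mod` 8
      go 0 s = ("", s)
      go k s = let (c, s')   = genAlpha s
                   (cs, s'') = go (k - 1) s'
               in (c : cs, s'')
  in go n s1

-- language-tag = primary-tag *( "-" subtag ), here with 0..2 subtags.
genSentence :: Int -> String
genSentence seed =
  let (p, s1) = genTag seed
      k       = next s1 `mod` 3
      subs 0 _ = ""
      subs j s = let (t, s') = genTag s in '-' : t ++ subs (j - 1) s'
  in p ++ subs k (next s1)

-- Validity predicate for the same grammar, to check the generator.
validTag :: String -> Bool
validTag t = not (null t) && length t <= 8 && all (\c -> c >= 'a' && c <= 'z') t

splitDash :: String -> [String]
splitDash s = case break (== '-') s of
  (a, [])       -> [a]
  (a, _ : rest) -> a : splitDash rest

validSentence :: String -> Bool
validSentence = all validTag . splitDash
```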
17:50:18 <levi> Heh, I'll bet.
17:51:29 <levi> Randomly-generated texts from grammars usually look profoundly weird unless you've trained an HMM on a 'realistic' sample set or something.
17:52:35 <stepkut> I am not clear how you would randomly generate a sentence from the ABNF spec and also generate the *same* values in a data structure
17:53:21 <stepkut> if you had two parsers then you could generate a random sentence and then check that they both produce the same result
17:53:50 <stepkut> so, you could then, for example, have a slow, but known to be correct parser, and then a fast, hand-optimized version
17:53:56 <stepkut> and check that your hand optimizations are correct easily
18:00:31 <levi> Well, maybe I'm not understanding the level at which your grammar data and AST data correspond to one another and to the hand-generated parser.
18:02:49 <levi> I mean, ideally you have a grammar and AST that are isomorphic, so a random valid AST corresponds to a textual representation that can be generated from the grammar, and the parser should reverse that correctly.
18:04:52 <stepkut> are you randomly generating the AST, and then converting that to a string, and then parsing that to get the AST again?
18:07:30 <levi> That's one way to do it.
18:09:02 <stepkut> so you would need an arbitrary instance for the AST that only generates valid ASTs, and then a correct pretty printer
18:09:28 <stepkut> though, that is not likely to catch tricky things like handling of folding white space, unless your printer randomly inserts it ?
18:10:43 <levi> Well, that would be the advantage of driving it from the grammar definition rather than the AST itself.
18:11:09 <stepkut> so.. you randomly generate a sentence from the grammar.. then what?
18:12:04 <levi> A sentence from the grammar *and* the corresponding AST, assuming the mapping between them is capable of doing such.
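The round-trip check being described can be prototyped deterministically: enumerate small `LanguageTag` values, pretty-print each, re-parse it, and require the original value back. QuickCheck would randomize this; a fixed enumeration keeps the sketch dependency-free (and a printer that randomly inserts folding whitespace would strengthen it further):

```haskell
import Data.Char (isAsciiLower)
import Text.ParserCombinators.ReadP

data LanguageTag = LanguageTag String [String] deriving (Eq, Show)

-- AST -> text
printTag :: LanguageTag -> String
printTag (LanguageTag p subs) = p ++ concatMap ('-' :) subs

-- text -> AST
tagP :: ReadP LanguageTag
tagP = LanguageTag <$> seg <*> many (char '-' *> seg)
  where seg = munch1 isAsciiLower

parseTag :: String -> Maybe LanguageTag
parseTag s = case [ r | (r, "") <- readP_to_S (tagP <* eof) s ] of
  (r:_) -> Just r
  _     -> Nothing

-- parse . print should be the identity over a small enumeration of ASTs.
roundTrips :: Bool
roundTrips = and
  [ parseTag (printTag t) == Just t
  | p    <- ["en", "zh", "x"]
  , subs <- [[], ["us"], ["hans", "cn"]]
  , let t = LanguageTag p subs ]
```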
18:12:31 <stepkut> so, lets consider the Method rule
18:12:54 <stepkut> in the grammar it is a String like "GET" and in the data type it is a ADT data Method = GET | ...
18:13:12 <stepkut> the thing that says how to get from "GET" -> GET *is* the parser, isn't it ?
18:13:53 <levi> Well, it is *a* parser.
18:14:09 <stepkut> so, you are suggesting that we write two parsers and check that they get the same answer?
18:16:09 <levi> I suppose so, but one of them is likely to be considerably simpler than the other.
18:16:22 <stepkut> why is that?
18:17:07 <stepkut> also, if the same person writes both, aren't they likely to make the same misinterpretation twice?
18:17:25 <levi> Well, the point is that the person writes one, the other is auto-generated.
18:17:47 <stepkut> I don't see how to autogenerate one
18:18:28 <levi> The auto-generated one only has to consider correct sentences, doesn't have to deal with error reporting or on-line parsing, or any of the numerous other things that might lead you to write a hand-written parser.
18:19:47 <stepkut> well.. that is basically what I am doing.. writing a parser by detailing how the ABNF maps to the Haskell data-type
18:21:29 <levi> OK, so, I thought there was another step where you'd also write a more sophisticated parser after you did that.
18:22:57 <levi> But if there's not, then you could still use this tool to generate randomized test cases for someone else's http parser so long as you could write a mapping between your AST and their AST.
18:23:52 <stepkut> bah, they should just use mine ;)
18:24:05 <levi> "Not only do we have a highly-trustable HTTP parser, we can help you fix your HTTP parser too!"
18:24:35 <stepkut> but, to do that.. mapping between ASTs is not the right thing
18:25:05 <stepkut> I should submit a Request to their server, which generates a Response, and i check that I get the right Response back
18:25:09 <stepkut> then it is not language specific
18:25:30 <stepkut> to test a new webserver you implement some special http-test application
18:25:31 <levi> Well, there's a lot of value in code that's been running for a while in production, even if it's got some edge case bugs that haven't been discovered yet.
18:26:09 <stepkut> right
18:27:26 <stepkut> testing by sending/receiving the byte strings seems the best
18:27:52 <stepkut> checks more of the stack, and is cross platform/language/etc
18:28:17 <levi> And thanks to NIH, the chances of getting people to adopt your parser are slim if your only story is 'we know it parses correctly'.  It's not *that* hard to write one that only has errors in fairly obscure cases.
18:28:43 <stepkut> :)
18:28:58 <stepkut> it is very easy to write an incorrect parser
18:29:47 <stepkut> I want hyperdrive to have an abundance of evidence that it is correct
18:29:52 <stepkut> and fast
18:29:55 <levi> Sure, but I think you're overestimating the value of 'correctness' to people with demonstrably working code.
18:30:07 <stepkut> well, it will be fast too :)
18:30:19 <levi> It's a good sell for people who don't yet have working code, though.
18:30:25 <stepkut> indeed
18:30:34 <stepkut> it almost never makes sense to rewrite an existing app
18:32:31 <levi> It almost never makes sense to implement another HTTP parser when there are several available for production use already! There is an 'almost' there, but... :)
18:34:06 <levi> I'm not trying to talk you out of it, it's just not exactly something that would make a lot of 'business sense'.
18:34:12 <stepkut> well, when the creator of the parser is shown a bug and responds 'well it works for me in practice, so I don't care'.. it gets dubious
18:34:39 <stepkut> also, being jerked around by someone else's schedule and not being able to fix things is a big drawback
18:34:46 <stepkut> especially for code that is less than 1000 lines
18:35:39 <levi> It can take a long time to get the right 1000 lines.
18:35:51 <stepkut> yeah, and they still haven't ;)
18:35:52 <levi> Those are definitely valid concerns, though.
18:36:54 <stepkut> also, the parser is the most boring part of hyperdrive anyway
18:37:03 <stepkut> there are other things we want to do which no one is doing
18:37:36 <stepkut> but, no sense writing another incorrect parser in the process
18:37:39 <levi> I am just playing Devil's Advocate here... in reality, I have a lot of the same impulses and concerns that you're describing, and I am always interested in cool new approaches to things.
18:38:18 <stepkut> one reason why happstack still has the same old HTTP backend it has always had is (1) it works pretty darn good (2) kept hoping something better would come along
18:38:27 <stepkut> originally we wanted to switch to hyena, but tibbe never finished it
18:38:57 <stepkut> then tried to switch to the snap backend.. but they didn't really want to split it out in reusable way the way warp did
18:40:06 <sm> why not use wai/warp, again ?
18:40:15 <levi> I take it warp is the one where you've got issues with the parser?
18:40:30 <levi> Or is it with Conduit?
18:40:33 <stepkut> considered it for a long time, but my other experiences working with stuff from the yesod camp turned me off from it
18:41:01 <sm> at least wai, and make the server pluggable ?
18:41:13 <stepkut> would rather use pipes
18:41:39 <levi> WAI depends on Conduit, though, which is kind of unfortunate with so many options in play.
18:41:43 <stepkut> yup
18:41:54 <stepkut> or, I might use io-streams
18:42:33 <levi> What we really need is a cost-free abstraction over the underlying data-piping automata.
18:42:40 <stepkut> :)
18:43:02 <levi> Something like List-Like
18:43:06 <stepkut> acme-http just uses plain old strict IO :)
18:47:42 <levi> I really want to create some machinery around websockets and a sockjs implementation, but they have to interact with the server at the IO level, there's no common way of doing that, and everyone but Warp seems to be planning to switch their IO mechanism sometime soon.
18:48:01 <stepkut> :)
18:48:29 <stepkut> I would like to hear more about your requirements before we get too far into hyperdrive
18:48:40 <stepkut> so I can add support ahead of time
18:48:49 <levi> There's a pretty nice websocket library, but it's still written in terms of enumerator/iteratee pairs.
18:49:00 <stepkut> :)
18:49:23 <levi> There are three incomplete versions of SockJS!
18:49:51 <stepkut> \o/
18:50:35 <levi> Haskell has an embarrassment of riches as far as abstraction facilities go, but that's a two-edged sword. :)
18:51:03 <levi> My day job is coding in C, which has an entirely different set of problems.
18:51:10 <stepkut> :)
18:52:57 <levi> Glomming C code together is pretty easy, because there aren't any facilities to make incompatible abstractions out of, and even if you manage to, there's nothing keeping you from peering inside and taking what you want. And that is also its main problem.
18:53:10 <stepkut> yeah
18:53:19 <stepkut> that is part of why Haskell works well too I think
18:53:39 <stepkut> for the most part we just have simple functions and data-types.. so gluing things together is usually pretty easy
18:53:47 <stepkut> but these new IO libraries are screwing things up
18:54:47 <levi> In C, you walk into your workshop and you have a saw, a hammer, and some nails.  In Haskell, you walk in and there are a myriad of hand tools, power tools, stationary tools, mobile tools, and programmable recombinable tools.  Your first task is to figure out which subset of the tools you should apply to your problem.
18:55:02 <stepkut> :)
18:56:42 <levi> I am unfortunately easily stymied by excessive options, because I tend to want to understand them all and how they interact so I can make the "right" decision.
18:57:37 <Jeanne-Kamikaze> happens to me too
18:58:15 <Jeanne-Kamikaze> when that happens, to need to remember this image: http://absolutelytrue.com/wp-content/uploads/2010/05/give-a-fuck-o-meter.gif
18:58:27 <Jeanne-Kamikaze> works for me at least, and it can be generalised to other aspects of life
19:00:04 <Jeanne-Kamikaze> *you
19:00:21 <levi> Well, I apply that liberally, but it doesn't tend to lead to productive decisions in my case. :)
19:03:18 <Jeanne-Kamikaze> the key is that "productive" is a subjective term that is a function of how much of a fuck you give
19:05:32 <Jeanne-Kamikaze> productivity is just based on expectations; not giving a fuck lowers those expectations, and in turn makes you more productive
19:07:14 <levi> Well, I don't want to lower my expectations of myself.
19:07:59 <stepkut> levi: hyperdrive will have a WAI-like layer.. so, it will at least be possible to implement an adapter so that people can use warp if they really want to :)
19:10:38 <levi> I think that a modern application server should not view itself as primarily an HTTP server, but as a node in a compute graph communicating over arbitrary protocols.
19:11:31 <stepkut> yes
19:11:50 <stepkut> happstack uses both HTTP and IRC for example :)
19:21:43 <Entroacceptor> happstack does irc?
19:23:11 <stepkut> sorry, happstack.com uses both
19:23:21 <stepkut> there is a plugin for clckwrks that runs the bot
19:23:32 <stepkut> synthea: ?dice 3d5+6
19:23:36 <stepkut> synthea: dice 3d5+6
19:23:46 <stepkut> ACTION forgets how it works ;)
19:26:02 <stepkut> dice 3d5+3
19:26:12 <stepkut> hmm
19:27:11 <Entroacceptor> ?dice 3w6+3
19:27:11 <lambdabot> unexpected 'w': expecting digit, "d", "+" or end
19:27:18 <stepkut> wrong bot :)
19:27:35 <Entroacceptor> !dice 3w6+3
19:27:39 <Entroacceptor> ?dice 3w6+3
19:27:39 <lambdabot> unexpected 'w': expecting digit, "d", "+" or end
19:27:47 <Entroacceptor> /dice 3w6+3
19:30:58 <levi> stepkut: I think Erlang/OTP and Akka would be interesting sources for ideas for a new Haskell application platform.
19:31:08 <stepkut> #dice 3d21+4
19:31:08 <synthea> You rolled 3 21-sided dice with a +4 modifier: [12,5,5] => 26
19:31:35 <stepkut> so, that bot is the same process that serves happstack.com
19:31:57 <Entroacceptor> nice, I thought about doing that, too
19:32:16 <Entroacceptor> is the bot engine usable for something else?
19:32:18 <stepkut> alas, there is very little integration between the two at the moment
19:32:41 <stepkut> it basically just logs the channel and displays the channel logs
19:32:42 <stepkut> yes
19:32:42 <stepkut> the bot engine is separate
19:32:53 <stepkut> http://hub.darcs.net/stepcut/ircbot
19:33:03 <stepkut> but.. I really need to reimplement it using netwire I think
19:33:22 <stepkut> but I think I am waiting for mm_freak to rewrite the underlying irc client library or something
19:33:40 <levi> Which library does it use?
19:33:51 <levi> (I guess I could just look...)
19:33:52 <stepkut> irc
19:34:10 <stepkut> i was going to switch to fastirc, but then the author was like, "but wait!!"
19:34:12 <stepkut> gotta run, bbly
19:34:27 <levi> The app I'm working on is a multi-protocol chat system.
19:34:29 <stepkut> (oops, I guess that is, be back later, y'all)
19:34:33 <levi> Seeya.
23:03:21 <stepkut> got hsx2hs and fay working together :)
23:03:43 <stepkut> though, not integrated into the  build process yet
23:04:00 <donri> cool! is your patched hsx2hs out yet?
23:04:10 <stepkut> yup
23:04:14 <stepkut> it's on hackage now
23:04:32 <stepkut> though, the unpatched version would actually work slightly better ;)
23:05:02 <donri> type Text = String
23:05:05 <donri> there I fixed it
23:05:50 <stepkut> yeah, that's what I did ;)
23:06:04 <donri> \o/
23:06:27 <stepkut> really.. it might be best to patch hsx2hs so that you add a flag to override the default type for string literals
23:06:37 <stepkut> hsx2hs --string-lit-type=String
23:07:10 <donri> or make it read some ANN module maybe?
23:07:19 <stepkut> dunno
23:08:01 <donri> or ANN ident and make it per-ident ;)
23:08:38 <stepkut> i should call my other library, fayjax :)
23:09:42 <donri> synthea49: hi! how's life as a clone treating ya'?
23:13:08 <stepkut> :)
23:13:15 <stepkut> I should shutdown happstack.com on the old vps
23:13:29 <stepkut> but.. we are supposed to just reformat the whole thing, so I haven't been too inspired
23:13:38 <donri> :)
23:14:02 <donri> needs scoutess! ohwait... needs keter!?
23:14:31 <stepkut> ACTION wonders when the fancy netwire based irc library will be out
23:17:54 <donri> ACTION too
23:18:39 <donri> he said a few weeks after netwire 4, which was released oct 24 ;)
23:18:50 <donri> ok not sure he said that
23:23:05 <stepkut> let's beat him up
23:23:08 <levi> Hmm, why would you use netwire for an IRC library?
23:23:31 <stepkut> levi: maybe not so much for the low-level parsing, but it would be great for higher level libraries
23:23:46 <stepkut> levi: like registering with nickserv and stuff
23:23:58 <levi> Ahh, I see.
23:24:18 <levi> I was thinking more along the server side of IRC.
23:24:28 <stepkut> i only care about the client
23:24:41 <levi> Yeah, you and most people.
23:24:59 <stepkut> there is a fair bit of annoying state and asynchronous behavior to irc
23:25:29 <levi> Well, I want to make something that is not really IRC, but can be connected to by IRC clients.
23:25:35 <stepkut> like, you connect, and then you are supposed to receive three or four messages before you can do certain other things, but other stuff can also happen in between
23:26:14 <stepkut> or even simple things
23:26:42 <stepkut> anyway, ircbot is a bit of a mess because of that
23:26:47 <levi> Yeah, it is kind of a lousy protocol.
23:27:06 <levi> But hey, people use it.
23:27:06 <stepkut> so, my theory, anyway, is that FRP could be pretty good for an ircbot
23:27:23 <stepkut> because you can have it reactive to events, and deal with all the different state transitions
23:28:00 <stepkut> also, if you have a bot made up of parts.. some messages should go to multiple parts.. which is also something FRP is good for
23:28:45 <levi> Parts of FRP, anyway.  The whole continuous time thing is a distraction to a lot of things.
23:29:42 <levi> edwardk's Machines look like a pretty nice way to program state machine behavior.
23:29:50 <stepkut> yeah, machines looks interesting as well
23:30:07 <stepkut> not sure that time is unuseful for irc though
23:30:23 <stepkut> you need to do things like implement watchdogs to check if you got disconnected from the server and stuff
23:30:41 <stepkut> like.. "has it been more than X minutes since I got a PONG"
23:30:50 <levi> Yes, I certainly agree with that.
23:30:50 <stepkut> i mean PING
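That watchdog logic is easy to keep pure and testable; a minimal sketch, with the timeout value, type names, and seconds-since-epoch representation invented for illustration:

```haskell
-- Decide, from the time of the last PING seen from the server and the
-- current time, whether the connection should be considered dead.
data ConnState = Healthy | Stale deriving (Eq, Show)

-- "more than X minutes since I got a PING": here X = 5.
pingTimeout :: Integer
pingTimeout = 5 * 60  -- seconds

checkWatchdog :: Integer  -- ^ time of last PING (seconds since epoch)
              -> Integer  -- ^ current time
              -> ConnState
checkWatchdog lastPing now
  | now - lastPing > pingTimeout = Stale
  | otherwise                    = Healthy
```

A real bot would sample the current time in IO (or receive it as an FRP time signal) and reconnect on `Stale`; keeping the decision pure makes it trivial to unit-test.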
23:31:03 <levi> SockJS has a lot of stateful bits like that.
23:31:06 <stepkut> or for implementing irc games
23:32:44 <levi> The thing is, FRP libraries force you to express everything in terms of a function of time, but really you want a regular machine that can schedule a cancelable time-triggered event or two.
23:33:01 <levi> Maybe I have just not worked with them enough.
23:36:01 <donri> Wire wraps a monad so you can kinda easily do whatever
23:36:03 <bergmark> stepkut: snaplet-fay already has a fayax function!
23:37:02 <levi> Are there any examples of netwire programs?
23:37:12 <levi> I've gotta drive home.  Idle for a while...
23:37:26 <stepkut> heh
23:37:43 <donri> levi: http://danbst.wordpress.com/2013/01/23/novice-netwire-user/
23:37:49 <donri> levi: http://jshaskell.blogspot.se/2012/11/breakout-improved-and-with-netwire.html
23:37:55 <stepkut> there should really be a common fay-ajax library
23:38:02 <stepkut> as there is nothing really framework specific about the code
23:57:00 <donri> i really need to try netwire for real sometime... i wonder if frp is at all suitable for cellular automata