00:00:38 <luite> donri: uh you could be working on different things maybe, for example if you store some sorting preferences of tables in one browser, it could be annoying if another one also updates those automatically
00:01:04 <luite> and there might be sessions required for users that don't have a login account
00:01:09 <donri> things like that could be stored in normal cookies or javascript storages?
00:01:17 <donri> don't really need to be signed or encrypted
00:14:45 <luite> right, good point
01:01:01 <donri> stepcut: oh new pipes-core packages
01:01:07 <donri> releases*
01:05:27 <donri> and data-lens with partial lenses in the core package
01:10:14 <stepkut> donri: yeah saw that, includes a proof that it really is a category (though I have not verified it)
01:19:40 <luite> donri: but are there never cases of data that should persist between requests, but you don't want the client to tamper with? for example some internal id's associated with something a user is working on, names of temp files or storage locations?
01:22:32 <donri> perhaps
01:22:52 <luite> sorry for being really slow :) I'm trying to do too much at the same time
01:22:57 <donri> hm i wonder if maybeZero was meant to be public in new data-lens
01:23:09 <donri> stepkut: isn't that what you were looking for some time ago?
01:23:34 <donri> maybeZero Nothing = mzero
01:23:35 <donri> maybeZero (Just a) = return a
01:24:15 <donri> ah yes you have the exact same function in ircbot
01:24:28 <donri> also public
01:24:33 <donri> totally should be moved to base or something ^_^
01:24:55 <stepkut> donri: :)
01:25:05 <stepkut> so, the new data-lens includes data-lens-partial.. looks nice
01:25:15 <stepkut> I look forward to a new data-lens-ixset :p
01:26:38 <donri> roconnor says ixLens is broken and it's a little above my level i think
01:27:24 <donri> also still no monadstate api for partiallens...
01:27:28 <stepkut> time to level up!
01:27:33 <donri> :D
01:28:37 <donri> i think ixLens implementation-wise does exactly the same as mapLens, and that the problem might be with ixset... not sure
01:29:27 <donri> ACTION looks for roconnor's comment on reddit
01:29:31 <stepkut> I think the problem is that with mapLens you supply the key and value separately, but with ixset, the key is embedded in the value
01:29:54 <stepkut> and you get weird behaviour if you specify the key, but the value contains a different key
01:30:15 <donri> http://www.reddit.com/r/haskell/comments/p75ko/introduction_to_using_acidstate_ixset_and/c3n6y1p
01:31:35 <donri> "In particular, fromList should be definable using ixLens."
01:31:44 <donri> don't fully get what he's on about
01:33:10 <donri> stepkut: any idea how to make a correct ixLens then?
01:33:53 <stepkut> donri: I would have to think, give me a minute to finish this post
01:34:10 <donri> sure
01:46:47 <stepkut> ok, I have thought about it. I decided it would take a lot more thinking to come up with a solution. Does that help?
01:47:24 <stepkut> :p
01:50:53 <donri> that's what i concluded as well!
01:51:10 <stepkut> now, we just need to find someone to do all that thinking
01:53:06 <stepkut> too late to propose it as a GSoC project :p
01:53:24 <stepkut> too late for Google Spring Break of Code even :)
01:54:06 <stepkut> what rss library should I use? There are too many choices on hackage.. and I even wrote one of them
01:54:21 <stepkut> do I care about atom? or only rss?
02:04:18 <donri> why do you care about rss but not atom? :P
02:04:21 <donri> wrong way around!
02:04:59 <stepkut> I don't even know what to care about
02:05:30 <stepkut> does anyone even use rss/atom anymore? Or do they just wait until it shows up on reddit/hacker news/social reader on facebook
02:05:52 <stepkut> maybe instead of an rss feed, I should just have my code automatically post to reddit :p
02:06:25 <stepkut> (actually.. I do plan to do that sometime.. and have it automatically add a link to the post for people who want to leave comments).
02:07:00 <stepkut> assuming reddit.com is still around when I get that far down on my TODO list :p
02:07:59 <stepkut> do I need both RSS and Atom? Or does everything I care about support both?
02:09:08 <donri> what is this for?
02:09:23 <donri> i think everything that matters supports atom, and rss is broken
02:12:50 <stepkut> yeah.. I know RSS is broken.. just didn't know if atom-only support is viable
02:13:08 <donri> why not use hsx to generate the feeds :)
02:13:29 <stepkut> I considered it
02:13:41 <stepkut> but then I have to figure out what the required fields are and stuff
02:14:00 <stepkut> seems like populating an Atom type and letting it do the dirty work is easier
02:14:51 <donri> i do that manual labor every time i write any atom feed :P mostly just copy-paste from wikipedia ;)
02:15:31 <stepkut> I guess my choices are hsx or the feed library
02:15:45 <stepkut> I have used the feed library and i didn't feel like my life was getting easier as a result
02:16:10 <stepkut> avoiding extra dependencies is nice
02:17:00 <stepkut> I'll try the atom approach and see how it goes
02:17:06 <stepkut> the atom+hsx approach
02:17:32 <stepkut> can I embed html into an atom feed? or only xhtml?
02:17:52 <stepkut> also, I think sessions are more important than fixing data-lens-ixset :p
02:18:53 <donri> you can embed html if you encode it, and xhtml if you namespace it
02:20:11 <donri> (which you can do on the container, no need to write namespace prefixes for every element)
02:21:09 <stepkut> html would be easier in this case
02:21:21 <stepkut> because I am using the markdown script, which only produces html
02:21:28 <donri> then it should be escaped
02:21:55 <donri> i.e. don't use cdata
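For reference, a purely illustrative sketch of the two options donri describes, written as plain Haskell string constants (the element contents are made up):

    -- escaped html: the markup is entity-encoded text inside <content type="html">
    escapedHtmlContent :: String
    escapedHtmlContent =
        "<content type=\"html\">&lt;p&gt;rendered &lt;em&gt;markdown&lt;/em&gt;&lt;/p&gt;</content>"

    -- namespaced xhtml: real elements, with the namespace declared once on the container
    namespacedXhtmlContent :: String
    namespacedXhtmlContent =
        "<content type=\"xhtml\">\
        \<div xmlns=\"http://www.w3.org/1999/xhtml\"><p>rendered <em>markdown</em></p></div>\
        \</content>"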
02:22:10 <stepkut> ACTION is trying to figure out if it wants to read, "The Atom Syndication Format" or "The Atom Publishing Protocol"
02:23:17 <stepkut> I want the former
02:27:29 <donri> yep, the second is for writing blog posts and pushing them to a blog engine etc
02:28:23 <donri> re sessions and more secure randomization of IDs; should we get entropy for each new ID or just use it to initialize a RNG?
02:29:10 <stepkut> donri: the second one sounds useful.. but not what I need today
02:30:57 <stepkut> perhaps we should be using Crypto.Random from crypto-api?
02:31:05 <stepkut> http://tommd.wordpress.com/2010/09/02/a-better-foundation-for-random-values-in-haskell/
02:32:09 <stepkut> I guess that doesn't answer the question
02:32:09 <luite> donri: that's very slow, yesod uses an AES RNG that it reseeds after a certain amount of data has been generated
02:33:28 <stepkut> luite: but, it is used only to generate new sessions? How many sessions per second do you need to generate?
02:33:47 <luite> stepkut: a new IV is generated for every new cookie
02:35:06 <stepkut> it looks like Crypto.Random can be seeded, and then will throw an exception when it needs more entropy to reseed ?
02:36:53 <luite> hm, does that provide an implementation?
02:37:57 <stepkut> instance CryptoRandomGen SystemRandom ?
02:38:00 <stepkut> not really sure what you are asking
02:40:23 <luite> hm, yeah I see, that doesn't look like a terribly useful one though?
02:40:53 <stepkut> oh ?
02:41:27 <stepkut> ah
02:42:44 <stepkut> I am unclear if it is useful or not
02:42:53 <stepkut> we should email tom and ask :)
02:46:15 <donri> looks like it does exactly that: initializes an RNG using the entropy package
02:47:16 <stepkut> donri: it also detects when it needs to be reseeded to maintain integrity though ?
02:47:55 <donri> duno, it says reseed doesn't work?
02:48:39 <stepkut> donri: for that particular instance, yes.. but if you look at the code, it does not support reseed because it requires an initial seed that is infinite in length...
02:48:45 <donri> "This generator can not be instantiated or reseeded with a finite seed"
02:48:51 <stepkut> right
02:49:07 <donri> ok i see
02:49:19 <donri> is that thread safe and everything?
02:49:28 <stepkut> no idea
02:50:07 <stepkut> I would assume so
02:50:46 <donri> http://hackage.factisresearch.com/packages/find?name=random there's a shitton of randomization packages ^_^
02:50:49 <luite> donri: yes but it uses lots of entropy
02:51:22 <luite> basically just a lazy ByteString of /dev/random or something like that
02:51:50 <luite> which one was the nonblocking version?
02:54:22 <stepkut> looks like, instance CryptoRandomGen SystemRandom, ultimately opens /dev/urandom
02:55:48 <stepkut> "While /dev/urandom is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys."
02:58:30 <stepkut> http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
02:59:17 <donri> bos has a mwc-random package too, or is that not suitable for security?
03:01:01 <stepkut> so, I am not sure if the normal System.Random is suitable for cryptography, but the crypto-api thing looks reasonable unless someone tells us otherwise
03:01:15 <stepkut> ... which we will solicit comments on
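A minimal sketch of what that might look like, assuming crypto-api's Crypto.Random interface (newGenIO/genBytes) plus base64-bytestring for a printable encoding; the key length and error handling are placeholders, not a vetted design:

    import Crypto.Random (SystemRandom, newGenIO, genBytes)
    import qualified Data.ByteString as B
    import qualified Data.ByteString.Base64 as B64  -- assumed extra dependency

    newSessionId :: IO B.ByteString
    newSessionId = do
        g <- newGenIO :: IO SystemRandom      -- ultimately backed by the entropy package
        case genBytes 16 g of
          Left err       -> error (show err)  -- a real handler would retry/reseed
          Right (bs, _g') -> return (B64.encode bs)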
03:02:52 <stepkut> the crypto-api includes a hmac module
03:03:25 <stepkut> i wonder if there is some advantage to doing plain-text+hmac in addition to an encrypted version
03:03:56 <stepkut> ACTION joins #ask-a-cryptographer
03:05:09 <donri> DRBG builds on crypto-api and seems to provide more instances
03:05:16 <stepkut> oo
03:05:43 <donri> same author
03:06:14 <donri> the irony of all these random random packages ^_^
03:06:16 <stepkut> yup
03:06:25 <stepkut> so.. DRBG looks promising
03:06:37 <donri> did you look at mwc-random?
03:06:52 <stepkut> I wonder how hard it would be to make the RNG configurable
03:06:56 <donri> "The generated numbers are suitable for use in statistical applications."  does that include security or not?
03:08:02 <stepkut> dunno.. snap uses mwc-random for something
03:08:47 <donri> and yesod uses crypto-api
03:09:05 <stepkut> maybe we can make a CryptoRandomGen instance for mwc-random :p
03:09:37 <stepkut> mwc-random has fewer dependencies, which is nice. crypto-api gives me more assurances that it is intended for crypto..
03:09:46 <donri> yea
03:11:18 <stepkut> System.Random.MWC.withSystemRandom "Note: on Windows, this code does not yet use the native Cryptographic API as a source of random numbers (it uses the system clock instead). As a result, the sequences it generates may not be highly independent."
03:11:32 <donri> aha
03:12:34 <donri> yep entropy does use CryptAPI on windows
03:13:14 <stepkut> I feel like mwc-random was intended to be use for experimental statistical models more than cryptography
03:13:44 <stepkut> like Monte Carlo simulation or and stuff
03:13:50 <stepkut> s/or//
03:13:57 <donri> no idea what that means but yes :)
03:14:18 <donri> i just thought appeal to bos=authority
03:14:38 <donri> but that doesn't mean anything if it isn't even meant for the relevant problems
03:14:46 <stepkut> indeed :)
03:15:58 <donri> cprng-aes?
03:16:56 <stepkut> dunno
03:17:09 <stepkut> clientsession already depends on crypto-api and cprng-aes
03:17:55 <donri> but we're talking about ID based sessions right?
03:18:01 <stepkut> so, for clientsessions, that does not add any new dependencies
03:18:11 <stepkut> well. depends how we want to package things
03:18:41 <donri> i only put them in the same package for convenience. not sure about final products :)
03:18:51 <donri> depends how much you care about dependencies i guess
03:19:02 <stepkut> yeah, they do not seem to depend on each other, so happstack-clientsession and happstack-serversession might make more sense
03:20:30 <stepkut> I say we use DRBG for the ID based sessions.. it seems sensibly designed, and should be plenty fast since we are only using it to create sessionids
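A rough sketch of how that might look: keep one DRBG generator in an IORef and step it each time a session id is minted. The Crypto.Random.DRBG module and HashDRBG type are from memory of the DRBG package and worth double-checking, as is the reseeding story:

    import Crypto.Random (genBytes, newGenIO)
    import Crypto.Random.DRBG (HashDRBG)            -- assumed module/type name
    import Data.IORef (IORef, atomicModifyIORef, newIORef)
    import qualified Data.ByteString as B

    newIdSource :: IO (IORef HashDRBG)
    newIdSource = newGenIO >>= newIORef

    nextSessionId :: IORef HashDRBG -> IO B.ByteString
    nextSessionId ref = atomicModifyIORef ref $ \g ->
        case genBytes 16 g of
          Right (bs, g') -> (g', bs)                -- keep the stepped generator
          Left err       -> error (show err)        -- a real version would reseed here

Using atomicModifyIORef also answers the earlier thread-safety question, at the cost of serialising id generation.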
03:20:30 <donri> the next question is, do we build it around acid-state directly or merely provide some means for the user to clean up session data?
03:21:11 <stepkut> personally, I would build it around acid-state using good design principles, and then decide after if it can be refactored
03:22:11 <donri> also maybe we should have a monadstate-like transformer around serverparts for clientsession...?
03:22:14 <stepkut> designing something that does one thing well is hard enough. trying to do it even more abstractly only makes your problems worse
03:23:06 <donri> i mean my current clientsession code is encrypting and decrypting on every call, and if you change the session and then "get" it again you get the unaltered one ...
03:23:23 <stepkut> ah
03:23:49 <stepkut> so you want to decrypt it once, stick it in state monad, and then encrypt it at the end before sending the Response?
03:24:00 <donri> something like that
03:24:38 <donri> except maybe not decrypt until first "get", and maybe not encrypt if it wasn't modified, and maybe not exactly StateT but something similar (to make custom use of StateT easier)
03:24:52 <stepkut> sure
03:25:29 <donri> even if it would be kinda cute to use the lens state apis inside serverparts :D
03:25:50 <stepkut> :)
03:26:03 <donri> (but probably shouldn't have anything complex enough in a client session to warrant lenses)
03:27:06 <stepkut> newtype ClientSessionT sessionData m a = ClientSessionT { unClientSessionT :: StateT (Modified, sessionData) m a }
03:27:33 <stepkut> I think the user would use lenses to work with their sessionData
03:28:21 <stepkut> or perhaps, newtype ClientSessionT sessionData m a = ClientSessionT { unClientSessionT :: StateT (Modified, Maybe sessionData) m a }
03:28:40 <donri> so you can delete it?
03:29:12 <stepkut> that too.. I was thinking so that you could avoid decrypting it until the first get ?
03:29:34 <donri> aha
03:29:44 <donri> but then that deletes it if it's not gotten!
03:30:07 <stepkut> true
03:30:20 <donri> btw should i create the two separate packages in the happstack repo so we can collaborate on this?
03:30:42 <donri> (or you, but i'm offering to do it :p)
03:30:52 <stepkut> sure.. or we could stop putting everything in one giant happstack repo
03:31:10 <donri> but perhaps that's an issue for another day...? :)
03:31:12 <stepkut> that wasn't my idea.. in fact, I argued against it
03:31:29 <stepkut> but that is a fine solution
03:31:32 <donri> i thought it was mad until i used it
03:31:41 <stepkut> there are certain aspects I like
03:31:44 <donri> it's kinda neat to have it all local
03:32:02 <stepkut> yeah, and it makes it easy to give commit access since there is only one repo to worry about
03:32:21 <stepkut> so, let's create happstack-clientsession happstack-serversession ?
03:32:24 <donri> yep
03:32:47 <stepkut> those make sense to go in the master repo since they are happstack specific. Where-as, web-routes is its own thing that happstack uses
03:34:16 <stepkut> data SessionState a = NotYetDecoded | Decoded a | Modified a | Deleted
03:34:16 <stepkut> newtype ClientSessionT sessionData m a = ClientSessionT { unClientSessionT :: StateT (SessionState sessionData) m a }
03:34:16 <stepkut>  
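Expanding that into a small self-contained sketch of the "decrypt on first get, only re-encrypt if modified" behaviour; decodeSession stands in for the real cookie decryption and every name here is hypothetical:

    import Control.Monad.State (StateT, get, put)
    import Control.Monad.Trans (lift)

    data SessionState a = NotYetDecoded | Decoded a | Modified a | Deleted

    newtype ClientSessionT sd m a
        = ClientSessionT { unClientSessionT :: StateT (SessionState sd) m a }

    getSession :: Monad m => m (Maybe sd) -> ClientSessionT sd m (Maybe sd)
    getSession decodeSession = ClientSessionT $ do
        st <- get
        case st of
          NotYetDecoded -> do msd <- lift decodeSession        -- decrypt only on first use
                              put (maybe Deleted Decoded msd)  -- Nothing-vs-Deleted is still an open question
                              return msd
          Decoded sd    -> return (Just sd)
          Modified sd   -> return (Just sd)
          Deleted       -> return Nothing

    putSession :: Monad m => sd -> ClientSessionT sd m ()
    putSession sd = ClientSessionT (put (Modified sd))         -- only Modified sessions get re-encrypted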
03:37:06 <donri> also might want to encode expiration times in the data but hide that from the api user
03:37:38 <donri> but maybe that doesn't change that type
03:39:00 <donri> we should also take plugins into account... like web-routes... we want pluggable components that can use the session without interfering with the rest of the app
03:39:07 <donri> oh dear so much to consider
03:39:43 <stepkut> :)
03:39:52 <stepkut> incremental improvements :)
03:39:57 <donri> true
03:40:20 <stepkut> if you put everything in the first release, then you only get to make one press release
03:40:29 <donri> haha
03:40:32 <stepkut> but if you add something new every week, then you get to make lots of press releases
03:40:54 <donri> it's the yesod way!
03:40:58 <donri> oh wait--
03:42:30 <donri> so i guess first we need to produce something that "just works" for basic use cases, in a secure manner
03:42:45 <donri> do we maybe focus on just one of client vs server first?
03:44:10 <stepkut> let's do client first, because it is easier (aka, almost done)
03:44:21 <donri> yea
03:44:32 <donri> i think server might be better for plugins though (later concerns!)
03:44:47 <donri> e.g. less worry about too much data
03:44:57 <stepkut> indeed
03:45:09 <donri> if you have 20 plugins storing small bits of data, it adds up :)
03:45:19 <stepkut> yup
03:45:39 <stepkut> but, since the plugin architecture is not released yet, we can not support it :)
03:46:02 <donri> truth
04:09:55 <hpaste> stepcut pasted ClientSessionT at http://hpaste.org/66751
04:10:10 <stepkut> donri: not perfect, but it's an idea
04:44:03 <donri> sleepcut?
04:44:16 <donri> i'm bedtime too
04:53:04 <donri> tempire: did you get lambdabot's messages?
04:56:09 <tempire> no
04:56:09 <lambdabot> tempire: You have 1 new message. '/msg lambdabot @messages' to read it.
04:56:19 <donri> heh
04:56:44 <tempire> rock on
04:57:41 <donri> :)
17:18:58 <st3pcut> alpounet: so I have been thinking about the source service
17:19:03 <st3pcut> alpounet: I have some ideas
17:19:16 <st3pcut> donri: did you see that ClientSessionT? Is that similar to what you imagined ?
17:19:53 <st3pcut> donri: would be nice if you could use the real get/put for the session data.. but that will take a bit more thought to solve
17:23:05 <donri> yea... having access to the lens state api makes it easier to write code that doesn't overwrite the whole session
17:23:30 <st3pcut> yeah
17:23:42 <st3pcut> I ran out of time, so I posted what I had
17:23:59 <donri> but then you can't use e.g. nextInteger' for hsx-jmacro
17:24:21 <donri> (right?)
17:25:24 <donri> so maybe a state monad can be something you "enter"
17:25:29 <st3pcut> uh
17:26:14 <st3pcut> that would be unfortunate.. I wonder how best to deal with that
17:26:18 <donri> withSession $ currentUser ~= UserId x
17:28:21 <donri> translates to... sd <- getSD; putSD $ sd { _currentUser = UserId x }
17:28:59 <donri> (but actually something like sd <- getSD; putSD $ runState ... sd)
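That translation, as a minimal standalone sketch: getSD/putSD are modelled here with mtl's get/put, whereas the real thing would presumably use ClientSessionT primitives that also flip the Modified flag:

    import Control.Monad.State (MonadState, State, get, put, runState)

    withSession :: MonadState sd m => State sd a -> m a
    withSession act = do
        sd <- get                       -- getSD: fetch the (already decoded) session data
        let (a, sd') = runState act sd  -- run the pure lens/State modification
        put sd'                         -- putSD: write it back
        return a

So something like withSession (currentUser ~= UserId x) from the line above would read, modify, and write back the whole session in one go.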
17:29:45 <st3pcut> that seems ok..
17:30:04 <st3pcut> though I wonder if there is a fundamentally better way to do it.. even if we had to change ServerPartT
17:31:06 <donri> i think if we want a MonadState instance and still not interfere with any other StateT in the transformer stack... i don't see how?
17:31:39 <st3pcut> donri: not sure..
17:31:42 <donri> or can you actually have mtl figure out all the lifts even for the same monads for different types?
17:31:50 <donri> based on type inference
17:32:04 <st3pcut> donri: I think you can not, because then you get conflicting functional dependencies
17:32:12 <donri> ok
17:32:25 <donri> it'd also be rather confusing to the programmer perhaps
17:32:43 <st3pcut> donri: the only thing I can think of offhand would be to put a StateT inside the ServerPartT and use lenses to add extra state components
17:32:54 <donri> also even if that did work, it would mean you couldn't have e.g. an Integer as the session and also for hsx-jmacro IntegerSupply ... :P
17:32:56 <st3pcut> but.. that may be a pore idea
17:33:12 <st3pcut> poor
17:33:15 <st3pcut> yikes
17:33:28 <st3pcut> .. and I haven't even started drinking yet
17:33:30 <donri> you mean like snaplets?
17:33:38 <st3pcut> perhaps
17:33:43 <donri> that's similar to my "withSession" but instead you have "with session" where session is a lens
17:33:47 <st3pcut> yeah
17:34:16 <st3pcut> so, we could also have a WithSession class, with getSessionData / putSessionData functions
17:34:54 <st3pcut> so that you know what get/put are getting and putting.. though withSession solves that as well
17:40:30 <donri> Patch-Tag is down. Bah!
17:40:48 <st3pcut> it'll be back
17:41:32 <donri> oh you already have a withSession... but not sure what it does?
17:41:41 <stepcut> that withSession is different
17:41:58 <stepcut> it takes care of modifying the Response header at the very end to make sure the cookie gets updated
17:42:14 <alpounet> stepcut, great! what are your ideas?
17:42:18 <stepcut> donri: patch-tag is back
17:42:52 <stepcut> alpounet: ok, so I think every time you run, updateSources, you should get back an UpdateId
17:43:05 <stepcut> if nothing changed since the last run, then you get back the same UpdateId as last time
17:43:15 <stepcut> but if something changed, then you get a new, higher UpdateId
17:44:03 <stepcut> using the UpdateId you should then be able to query the database and find out the current versions, etc, of all the sources, or get a list of all the packages that changed, or even get diffs of the sources changes
17:44:52 <stepcut> So the UpdateId would change if any of the packages changed, or if you changed the list of SourceLocations you send in
17:45:09 <stepcut> but, if everything is exactly the same, then you get the same UpdateId
17:45:45 <stepcut> then services like LocalHackage, etc, can store the last UpdateId they worked on, and only do work if the Id has changed ?
17:46:15 <stepcut> does that seem sensible?
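A hypothetical data sketch of that idea (not the real scoutess types, just an illustration of the "same inputs, same UpdateId" rule):

    newtype UpdateId = UpdateId Integer
        deriving (Eq, Ord, Show)

    data SourceInfo = SourceInfo
        { siLocation :: String  -- darcs/git URL, tarball location, ...
        , siVersion  :: String  -- whatever "version" means for that backend
        }
        deriving (Eq, Show)

    -- nothing changed since last run => the same UpdateId comes back;
    -- any change to packages or to the list of sources => a new, higher one
    nextUpdateId :: UpdateId -> [SourceInfo] -> [SourceInfo] -> UpdateId
    nextUpdateId prev@(UpdateId n) old new
        | old == new = prev
        | otherwise  = UpdateId (n + 1)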
17:46:58 <alpounet> stepcut, so basically, an UpdateId represents a unique set of package+versions (for whatever sense of version we'll give to repos, tar.gz archives, etc :p)
17:47:06 <stepcut> yes
17:47:06 <alpounet> packages*
17:48:24 <alpounet> stepcut, well technically LocalHackage should support several UpdateIds, so to speak
17:48:30 <stepcut> oh ?
17:49:00 <alpounet> did you have in mind a 1-1 correspondence of LocalHackage and packages-set?
17:49:33 <stepcut> not sure
17:49:50 <stepcut> probably ? but not for any specific reason
17:49:53 <alpounet> LocalHackage supports various versions of a single package
17:50:07 <alpounet> just like the real hackage
17:50:23 <alpounet> we can just not use that feature, but as a matter of fact, it does :)
17:51:06 <stepcut> k
17:51:46 <alpounet> i thought the LocalHackage would just be a way to avoid fetching stuff again
17:51:53 <alpounet> when we do a sandboxed build
17:52:11 <stepcut> no..
17:52:21 <stepcut> yes and no
17:52:27 <alpounet> well
17:52:35 <alpounet> tell me what you have in mind exactly for local hackage
17:52:44 <alpounet> it most likely already supports it, but tell me anyway
17:53:03 <stepcut> the primary reason we need it is because we want to use cabal install to do the sandboxed builds, but the packages we want to 'cabal install' are not yet on hackage
17:53:40 <stepcut> the secondary feature is that we can also copy dependencies from hackage to LocalHackage to avoid redownloading them
17:55:11 <alpounet> ok so in your mind a local hackage instance will just last as long as a "sandboxed building shot", meaning a group of sandboxed builds, with all the various versions we want to try to build the software with
17:56:03 <stepcut> I think so..
17:56:14 <stepcut> though, we can be clever about it
17:56:32 <stepcut> we could have all the package files be in a big directory
17:56:51 <stepcut> and then just create a trimmed down package instance that is appropriate for each build ?
17:57:28 <alpounet> i'm not sure I understand what you mean here
17:57:35 <alpounet> could you elaborate a bit?
17:58:03 <stepcut> Well, let's say you want to build against the packages referenced by some UpdateId
17:58:29 <stepcut> using that UpdateId you can get a list of all the packages+versions that should be relevant
17:58:48 <stepcut> so, you can create an index.tar.gz that is specific to that UpdateId
17:59:21 <stepcut> the index.tar.gz will reference a bunch of package.tar.gz files on the disk in the local hackage repo directory
18:00:19 <stepcut> we could have 10 different index.tar.gz that all referenced subsets of the package .tar.gz files in the local hackage directory -- since anything not listed in the index.tar.gz will be ignored by cabal-install
18:00:32 <alpounet> hmm
18:00:44 <alpounet> i would have to check if it's possible to handle multiple index files
18:00:46 <stepcut> so, we can still avoid redownloading for each build
18:01:07 <alpounet> i don't think so
18:01:11 <stepcut> it is certainly possible.. the server can fake it
18:01:37 <stepcut> when you give cabal the url to the LocalHackage repo, it will include the information needed to get the correct repo
18:01:41 <alpounet> well, for local repos, you just give the hackage directory
18:02:02 <alpounet> the same goes for distant repos
18:03:00 <stepcut> So, we could have,  http://localhost:8000/localhackage/<updateid>, or something?
18:03:52 <alpounet> if localhackage/<updateid>/ has a "hackage" structure
18:03:58 <stepcut> exactly
18:04:18 <alpounet> that is an index, and then pkg/version/pkg.cabal and pkg/version/pkg-version.tar.gz
18:04:31 <stepcut> so, each updateid would return a different index.tar.gz, but the files that index.tar.gz references would all be in a shared directory
18:06:11 <stepcut> does that make sense?
18:06:19 <alpounet> hmm
18:06:46 <alpounet> I think you HAVE TO have hackage/<updateid>/pkg/version/(cabal file and archive)
18:06:48 <stepcut> so, each localhackage/<updateid> would look like a complete, self-contained, independent hackage repo, but behind the scenes we could share files
18:06:49 <alpounet> as a structure
18:07:00 <stepcut> why is that a problem ?
18:07:04 <alpounet> i'm not sure about the possibility of having a shared dir for all updateids
18:07:51 <alpounet> i think that when you give cabal-install a repo url/dir, it just says hi to the index at url/00-index.tar.gz
18:07:59 <stepcut> if it is going through the http server, then we can make things look however we want regardless of the disk layout
18:08:13 <alpounet> and when we say "cabal install foo-1.0", it just takes url/foo/1.0/foo-1.0.tar.gz
18:08:33 <alpounet> stepcut, yeah but then it wouldn't be a local repo anymore?
18:08:43 <stepcut> alpounet: local enough
18:09:10 <stepcut> alpounet: LoopbackHackage?
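A rough happstack-flavoured sketch of that loopback idea: one UpdateId-specific index per URL, all pointing into a shared package directory. Paths, layout, and content types here are guesses, not a working repo implementation:

    import Control.Monad (msum)
    import Happstack.Server

    localHackage :: FilePath -> ServerPart Response
    localHackage packageDir =
        dir "localhackage" $ path $ \updateId ->
            msum [ dir "00-index.tar.gz" $
                       -- each UpdateId sees its own trimmed-down index ...
                       serveFile (asContentType "application/x-gzip")
                                 (packageDir ++ "/index-" ++ updateId ++ ".tar.gz")
                 , -- ... while the package tarballs themselves live in one shared pool
                   serveDirectory DisableBrowsing [] packageDir
                 ]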
18:10:28 <alpounet> stepcut, so we enforce a "dependency" on a web server?
18:10:29 <stepcut> for building on multiple OSes it might make sense to have a central server where you check out all the source, and set up LocalHackage, and the remote bots pull the packages from there
18:11:03 <stepcut> sure..
18:11:20 <stepcut> If we want to stick with pure disk, there are still possibilities
18:11:28 <stepcut> though it makes portability harder
18:12:41 <stepcut> for direct disk based access.. we could use hardlinks/symlinks, but that is not very windows friendly
18:13:24 <dcoutts> ACTION notes this will be easier with the new index format
18:13:45 <stepcut> also depends if we need multiple LocalHackage instances active at the same time
18:14:11 <stepcut> we could always have index-<updateid>.tar.gz files saved in the directory, and then just copy over the index.tar.gz as needed
18:14:15 <alpounet> stepcut, what i had in mind originally was just one LocalHackage instance
18:14:23 <alpounet> that would last...
18:14:25 <alpounet> FOREVER!
18:14:50 <stepcut> alpounet: but some builds may not want to see the latest versions of everything
18:15:13 <alpounet> stepcut, we don't care
18:15:18 <stepcut> sure we do
18:15:21 <alpounet> we would really just do "cabal install foo-0.1"
18:15:31 <stepcut> if we want to test build the stable branch vs the unstable branch..
18:17:00 <alpounet> stepcut, what would not be handlable by the "global" localhackage?
18:17:21 <stepcut> because there would be newer packages in there that should not be seen yet
18:17:23 <alpounet> dcoutts, yeah, the one that'll break all my LocalHackage service? :P
18:17:36 <dcoutts> alpounet: no it'll be fully backwards compatible
18:18:12 <alpounet> stepcut, that would be a problem if we weren't specifying the version when calling cabal-install. but i don't see why it is a problem if we do specify it...?
18:18:54 <stepcut> alpounet: because top-level package may pull in dependencies that we are not explicitly specifying versions for
18:18:57 <stepcut> alpounet: so, imagine this case
18:19:18 <alpounet> dcoutts, yes, but I will need to adapt my code anyway 'cause we would really benefit from the additional flexibility, i was just kidding :)
18:19:32 <dcoutts> alpounet: yeah, you'd want to change things
18:19:46 <dcoutts> alpounet: and I might break your setup anyway ;-)
18:20:02 <dcoutts> since I'll only test that it doesn't break for the main hackage server
18:20:09 <dcoutts> and released cabal-install versions
18:20:55 <stepcut> alpounet: we are doing a build to test the darcs version of foo builds against Hackage. And foo depends on bar. Furthermore, foo fails to build against bar-1.0.0 which is on hackage, but successfully builds against bar-1.0.1 which is in LocalHackage.
18:21:08 <stepcut> alpounet: so, now we think that foo builds fine against hackage, when in fact it does not.
18:21:37 <alpounet> dcoutts, I think i'll try to get a set-up-your-own-hackage package out of scoutess soon, and might just update it to the new index format when it's done
18:21:50 <alpounet> but you might actually be working on such a thing so maybe i'll not have to :P
18:22:17 <dcoutts> alpounet: not on the server side aspect
18:22:37 <dcoutts> except for the full-on hackage-server of course
18:22:52 <dcoutts> but there's certainly a use case for a simple package-only server with no smarts
18:23:33 <alpounet> dcoutts, well it's just nice to be able to do that programmatically instead of writing ugly shell scripts that would move a bunch of files and folders around and then tar them up, and then call these from Haskell code...
18:24:12 <alpounet> stepcut, alright, it indeed makes us forget about where things come from
18:24:39 <dcoutts> alpounet: oh you mean code just for manipulating local passive hackage archives?
18:24:40 <alpounet> and thus keep us from having clear "modes" for scoutess, like the "try to build against hackage" one, or the "unstable" one
18:24:51 <alpounet> dcoutts, yes
18:24:54 <dcoutts> alpounet: that you could serve via a file server or dumb http server
18:25:00 <alpounet> yup
18:25:05 <dcoutts> alpounet: yes, I'd plan to do that along with a CLI in cabal-install
18:25:21 <dcoutts> for manipulating indexes and archives
18:25:24 <alpounet> i'd love to have it as a lib too
18:25:34 <dcoutts> yes, much of cabal-install needs to be a lib
18:25:34 <alpounet> just sayin', you know.
18:25:37 <dcoutts> I know :-)
18:26:02 <dcoutts> creating a palatable API takes a bit of time
18:26:07 <alpounet> dcoutts, note that i can help if needed, for that part
18:26:15 <dcoutts> thanks
18:26:23 <alpounet> since that's something i have to do anyway, may it be in scoutess or externally, in a separate lib
18:28:29 <stepcut> alpounet: I think the only tricky part here is simply how to trick cabal when it is looking directly at the file system ?
18:29:01 <alpounet> yeah
18:29:17 <alpounet> depends on if you enforce the dependency on a webserver
18:29:23 <alpounet> if you do, we don't have to trick cabal
18:29:54 <stepcut> once the new happstack backend is complete.. that is all we would need
18:29:56 <alpounet> otherwise... yeah, there's the "cp" trick
18:30:32 <stepcut> we could use hardlinks on filesystems that support it and a full-out cp on ones that don't as well
18:30:33 <alpounet> stepcut, yeah, better be lightweight though
18:30:38 <alpounet> otherwise people will complain
18:30:48 <stepcut> it will be
18:30:50 <alpounet> "yeah i already have apache/whatever installed, why can't I use it uh-oh"
18:31:07 <stepcut> we could almost just use acme-http
18:31:13 <alpounet> hahaha
18:31:25 <alpounet> "scoutess comes with the fast haskell web server ever."
18:31:28 <stepcut> :)
18:31:46 <stepcut> the only thing we need to do is map the incoming url to a file on disk and send it with sendfile
18:32:27 <stepcut> acme-http could do that with a few more lines of code
18:32:42 <stepcut> and it's only 100 lines of code at most
18:32:46 <alpounet> how could it guess what version it must send just knowing the updateid?
18:32:49 <alpounet> it would use our code
18:33:30 <stepcut> yes.. it would be integrated into the scoutess app itself
18:33:40 <stepcut> not a separate daemon that runs
18:34:34 <stepcut> scoutess is already running, the overhead of opening an extra socket is not really a big deal?
18:35:43 <alpounet> nah, it isn't
18:35:51 <stepcut> that's part of what makes it lightweight.. you don't have to run anything extra or configure anything extra
18:35:53 <alpounet> i mean, it's a do-all build bot
18:36:12 <stepcut> yeah
18:36:19 <stepcut> I expect the bot will have two modes
18:36:20 <alpounet> people must expect scoutess to actually interact with the world, open sockets, ...
18:36:26 <stepcut> singleshot and daemon mode
18:36:57 <stepcut> if you have a small project, you can just run it by hand when you want to know stuff / do stuff
18:37:22 <stepcut> or, for something like happstack, you would leave it running all the time in daemon mode and it would automatically watch resources and perform updates
18:37:49 <alpounet> yeah
18:38:08 <alpounet> and the daemon mode would have the nightly builds / on-commit builds options
18:38:20 <stepcut> exactly
18:50:45 <stepcut> hmm, I need to add UUIDs to this data-type during migration.. but that requires IO
18:51:41 <stepcut> I think I will fake it
18:51:57 <stepcut> I think I can use this, http://hackage.haskell.org/packages/archive/uuid/1.2.3/doc/html/Data-UUID-V5.html
18:52:06 <alpounet> migration of?
18:52:22 <stepcut> alpounet: the data type that holds the pages in happstack.com
18:52:42 <stepcut> I want to create an atom feed, and that requires some sort of unique id that is associated with each page
18:52:55 <stepcut> uuid works well for that.. but the old pages do not have a uuid
18:54:07 <stepcut> but I think, generateNamed namespaceOID (toWord8List page), should do it?
18:54:45 <stepcut> there are less than two dozen pages in the world that need to be migrated, so it just has to be good enough
18:55:02 <alpounet> yeah
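A small sketch of that migration step: derive a stable, deterministic V5 UUID from the old page's content, so re-running the migration gives the same id. The serialisation of the page to bytes is hand-waved here:

    import Data.UUID (UUID)
    import Data.UUID.V5 (generateNamed, namespaceOID)
    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as C

    -- e.g. feed in the old page's title and body as one String
    pageUUID :: String -> UUID
    pageUUID page = generateNamed namespaceOID (B.unpack (C.pack page))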
19:11:05 <alpounet> http://hackage.haskell.org/package/board-games-0.1.0.1
19:11:06 <alpounet> cool
19:14:46 <stepcut> :)
19:19:52 <alpounet> stepcut, you've seen http://www.yesodweb.com/blog/2012/04/working-together i guess
19:20:42 <stepcut> yup
19:22:39 <stepcut> so.. I think we should focus on the Source Service for now ?
19:23:45 <alpounet> yeah
19:23:50 <alpounet> there's obviously quite much to do
19:23:59 <alpounet> with Source / Hackage
19:24:02 <stepcut> yeah
19:24:11 <stepcut> and it is useful all by itself
19:24:37 <stepcut> if nothing else, it can provide diffs about what changed when a new package is uploaded to hackage (there is a service like that already, but.)
19:26:49 <alpounet> can it? hah
19:27:15 <stepcut> well.. that is something I would like to see the Source service do eventually :)
19:27:32 <stepcut> right now, simply reporting that the source has changed at all would be a good start
19:27:59 <stepcut> I guess we need to email the darcs people with a clear question
19:28:10 <alpounet> and we still have to choose how to generate a version id from a darcs get/pull, git clone, etc
19:28:18 <stepcut> right
19:28:23 <stepcut> hence the email to darcs
19:28:26 <alpounet> yeah
19:28:30 <alpounet> they are going to hate us
19:28:33 <stepcut> nah
19:29:17 <stepcut> so, we want to do a pull, record some information, then do another pull and be able to tell if there were changes in a specific sub-directory since the last pull
19:29:42 <alpounet> yeah
19:29:54 <alpounet> i can take care of writing this
19:29:58 <stepcut> so, darcs changes --last=10 somedirectory, is kind of close
19:30:05 <stepcut> writing the email?
19:30:07 <alpounet> and asking for the version-id thing to
19:30:19 <alpounet> too
19:30:22 <alpounet> yeah
19:30:23 <alpounet> but hmm
19:30:28 <stepcut> well, I am not sure if darcs even needs to add anything.. maybe we can already calculate all we need to know
19:30:38 <alpounet> can't we do that somehow from darcs changes --last=n somedir ?
19:31:35 <stepcut> perhaps
19:31:42 <stepcut> I am not clear on exactly what 'last' means
19:32:13 <stepcut> last by date the patch was recorded, or by the order it was added to repo?
19:32:37 <stepcut> since you can cherry pick patches that makes 'last' a bit ambiguous
19:32:52 <alpounet> it says
19:33:02 <alpounet> "select the last NUMBER patches"
19:33:13 <alpounet> there's also --from-patch
19:33:42 <stepcut> yeah
19:34:08 <stepcut> --from-patch would be awesome .. assuming the ordering is based on the order the patches were added to the repo, not the date the patches were recorded
19:34:17 <stepcut> but that is something only darcs folks can answer
19:34:25 <alpounet> yeah
19:34:31 <donri> PHP: "preg_replace with the /e (eval) flag will do a string replace of the matches into the replacement string, then eval it."
19:34:34 <donri> metaprogramming made easy!
19:34:36 <alpounet> gonna ask about it on #darcs
19:34:43 <stepcut> sounds good
19:34:50 <stepcut> donri: heh
19:38:21 <donri> php has a global core function for getting a line from a gz-file pointer, stripping html tags
19:38:29 <donri> we must implement this for happstack *right now*
19:38:52 <stepcut> donri: after sessions
19:38:56 <donri> true
19:39:09 <donri> but after sessions it's on the top of the TODO
19:39:13 <donri> mission critical
19:39:47 <stepcut> yup
19:39:59 <stepcut> that and porting board-games to happstack
19:44:25 <stepcut> mwuhahahah, I'm wasting uuid's .. there won't be any left for the rest of you!!
19:44:41 <alpounet> :(
19:44:51 <stepcut> alpounet: except you.. I'll save you a few
19:45:31 <alpounet> hoorayyy
19:45:40 <alpounet> stepcut, seen #darcs?
19:48:25 <stepcut> anyway, it looks like this could be easy then
19:48:32 <stepcut> we just need to record the name of the most recent patch
19:48:42 <stepcut> and then we can use that to check for changes
19:49:10 <stepcut> alas, we have to call darcs and parse the output probably, but still not that bad
19:49:25 <alpounet> yeah
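A crude sketch of that: shell out to darcs, ask for the latest patch in XML form, and remember its hash for next time. The exact flags (and the hash attribute in the XML output) are worth double-checking against the darcs version in use:

    import Data.List (isPrefixOf, tails)
    import System.Process (readProcess)

    latestPatchHash :: FilePath -> IO (Maybe String)
    latestPatchHash repoDir = do
        out <- readProcess "darcs"
                 ["changes", "--last=1", "--xml-output", "--repodir=" ++ repoDir] ""
        return (extractHash out)
      where
        -- grab the first hash='...' attribute; a real XML parser would be nicer
        extractHash s =
            case [ drop (length "hash='") t
                 | t <- tails s, "hash='" `isPrefixOf` t ] of
              (h:_) -> Just (takeWhile (/= '\'') h)
              _     -> Nothing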
21:03:33 <stepcut> I think using the darcs library is probably fine
21:03:51 <stepcut> I mean, the hackage source fetcher uses libraries, so it is not without precedent
21:04:33 <stepcut> there may be a libgit2 that is usable..
21:04:55 <stepcut> linus, in his infinite wisdom, did not make git a proper library like he should have from the start..
21:06:35 <donri> in other news, i was trying to grok the state monad without considering its monad instance
21:06:55 <stepcut> donri: oh ?
21:07:02 <alpounet> stepcut, there are a few packages for working with git
21:07:06 <alpounet> we should check them out...
21:07:08 <stepcut> alpounet: yeah
21:07:14 <donri> yes, hilarity ensued. "how the hell does 'get' even *work*!!1"
21:07:26 <stepcut> donri: :-/
21:07:35 <donri> later: "i'd have to be in control of >>=! oh wait--"
21:07:41 <stepcut> :)
21:07:54 <donri> ACTION facepalm
21:08:30 <stepcut> minix is great, BSD licensed *and* a microkernel.. got some catching up to do though
21:08:55 <donri> hurd? :D
21:09:04 <stepcut> no..
21:09:25 <stepcut> I was on the hurd mailing list.. they will never release anything.. mostly because of the GPL
21:09:48 <donri> i want nixos running on plan9 with gobo-style file hierarchy... :D
21:10:33 <stepcut> the people on the hurd mailing list are such hardcore, fundamentalist GPLists, that they are not willing to implement any OS that could possibly be used in any way to implement anything that might possibly go against some perceived interpretation of the GPL
21:10:50 <donri> haha
21:11:02 <donri> what do you expect, it's the core tech of the gnu os
21:11:43 <stepcut> for a while they were considering basing it on CoyotosOS (as the microkernel)
21:12:01 <stepcut> that was pretty interesting -- because Jonathon Shapiro is very outspoken and verbose as well
21:12:18 <stepcut> so.. lots of very long discussions..
21:12:31 <stepcut> and no flaming .. just very long discussions that didn't get very far often
21:12:31 <alpounet> stepcut, hm, and if people really don't want (nor need) the darcs source fetcher/updater/etc, we can still provide an -fno-darcs flag or smth
21:12:50 <alpounet> that'd be a pretty dumb move but possible
21:12:53 <stepcut> my favorite was when some 'city boy' gave a hypothetical example about
21:13:09 <alpounet> not sure i'm willing to provide that though hah
21:13:33 <stepcut> shooting a horse.. and then Shapiro replied, including the fact that he was a rifle marksman champion, had actually shot a horse, and did not currently own any guns
21:14:47 <stepcut> alpounet: -fno-darcs flags are tricky.. because there is no way to depend on a version of the library that was built with the darcs enabled
21:15:26 <stepcut> alpounet: this gets down to the fundamental problem of Haskell plugins pretty quickly
21:15:58 <stepcut> alpounet: you would like to be able to depend on scoutess, scoutess-darcs, scoutess-git, etc, to explicitly define what you care about
21:16:08 <stepcut> but there are some type-checker related problems to overcome
21:16:22 <stepcut> that is a problem we are looking at in Happstack 8
21:16:35 <stepcut> though it clearly extends beyond web dev
21:17:22 <alpounet> yeah it comes down to having to reference things that are in a different package but without depending on that package
21:19:13 <stepcut> for now, we can just require darcs
21:20:46 <stepcut> since the scoutess config will (eventually) come from a .hs file.. that gives a bit of flexibility to make things optional later
21:21:12 <stepcut> but our problem right now is having too few source backends, not too many :)
21:23:44 <alpounet> well the TarGz (or Archive or URL or smth) one i can implement easily
21:23:55 <alpounet> and with git and darcs
21:24:01 <alpounet> we should be pretty fine for now
21:24:05 <stepcut> yeah
21:24:11 <alpounet> (and the already existing hackage one ofc)
21:24:15 <stepcut> yeah
21:34:12 <donri> ACTION installing MINIX in a VM
21:37:03 <stepcut> in minix, you should be able to do a kill -9 on your filesystem or hard disk driver and it should come back up automatically :p
21:39:07 <mekeor> what's the usual http-port?
21:39:33 <mekeor> i want to be able to write "localhost" rather than "localhost:8000".
21:39:41 <mekeor> how can i do that?
21:40:43 <stepcut> nullConf { port = 80 }
21:40:58 <stepcut> main = simpleHTTP (nullConf { port = 80 }) handle
21:41:01 <mekeor> instead of Nothing? i'm using happstack-lite.
21:41:05 <stepcut> ah
21:41:07 <stepcut> one moment
21:41:35 <mekeor> main = serve Nothing $ msum [ .. ] -- this is my current main-function
21:42:05 <stepcut> main = serve (Just $ defaultServerConfig { port = 80 }) $ msum [ .. ]
21:42:13 <mekeor> awesome
21:42:23 <mekeor> great, thank you very much!
21:42:59 <stepcut> my pleasure!
22:09:46 <mekeor> i can't compile happstack-server on ARM (debian testing). http://hpaste.org/66780 -- why?
22:10:27 <donri> what compiler and version?
22:10:40 <mekeor> The Glorious Glasgow Haskell Compilation System, version 7.0.4
22:11:18 <stepcut> seems like ARM does not support template haskell ?
22:11:26 <mekeor> oO
22:11:53 <mekeor> how can i test that?
22:11:59 <stepcut> I think that is the only template haskell in happstack-server. You could easily replace it with a constant
22:12:12 <stepcut> one moment
22:13:20 <stepcut> in Happstack.Server.Internal.SocketTH, change it to, supportsIPv6 = False
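Concretely, the suggested workaround amounts to replacing the Template Haskell probe with a constant (a sketch of the local edit, not the eventual upstream fix):

    -- in Happstack.Server.Internal.SocketTH
    supportsIPv6 :: Bool
    supportsIPv6 = False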
22:13:41 <hpaste> donri pasted actually... at http://hpaste.org/66781
22:13:56 <mekeor> donri: what does that mean?
22:14:08 <mekeor> stepcut: i install via cabal, so how can i change the code?
22:14:13 <stepcut> donri: are any of those modules actually used already
22:14:16 <mekeor> do i have to compile it myself?
22:14:24 <stepcut> donri: s/already/anymore
22:14:35 <donri> duno?
22:14:42 <stepcut> mekeor: yeah, do, cabal unpack happstack-server ; cd happstack-server ; <edit the file> ; cabal install
22:14:54 <mekeor> cool
22:15:46 <stepcut> donri: I think the pragma in TimeoutTable is wrong.. and the other modules are not listed in happstack-server.cabal anymore
22:15:52 <donri> yea
22:16:03 <donri> clean up time? :)
22:16:09 <stepcut> actually, timeout table is not listed either
22:16:22 <stepcut> most of those modules come from the snap timeout handling code, which we do not use anymore
22:17:13 <mekeor> ACTION is doing what he's said to do.
22:17:30 <mekeor> wow! that was a good english sentence, wasn't it?
22:17:48 <stepcut> :)
22:17:50 <mekeor> huh? was it? say yes, please.
22:18:22 <mekeor> ACTION is excited whether the trick will work.. he waits...
22:19:17 <donri> it's almost a tautology
22:19:30 <mekeor> heh.
22:19:36 <donri> except what you're said to do could be a lie or misunderstanding
22:19:44 <mekeor> hmm.
22:19:51 <mekeor> where are you from, donri, stepcut?
22:19:57 <donri> sweden
22:20:08 <stepcut> chicago, il
22:20:22 <donri> hey i've been to chicago
22:20:36 <stepcut> :)
22:20:56 <mekeor> stepcut: it doesn't work.
22:21:05 <donri> i was 12 and touring with a violin group ^_^
22:21:08 <mekeor> stepcut: do i have to remove the pragma?
22:21:17 <mekeor> the TH-pragma, i mean
22:21:25 <donri> try, can't hurt
22:21:25 <mekeor> ACTION is from germany, btw.
22:21:29 <donri> maybe remove the dependency too
22:21:36 <stepcut> mekeor: what donri said
22:21:53 <mekeor> ok
22:22:02 <mekeor> donri: it's quite late over here, btw.
22:22:24 <mekeor> i think, we're in the same time zone...
22:22:35 <mekeor> ACTION talkes too much off-topic. -.-
22:23:07 <donri> yesterday i went to bed at around 7 am
22:23:16 <mekeor> donri: O_O
22:23:23 <donri> hey gotta sync up with stepcut!
22:23:30 <mekeor> hehe
22:23:37 <stepcut> donri: good plan!
22:23:40 <mekeor> :D
22:23:46 <mekeor> still. error.
22:23:58 <stepcut> same error ?
22:24:00 <mekeor> but in src/Happstack/Server/Internal/Socket.hs:59:5:
22:24:01 <mekeor> no
22:24:15 <mekeor> paste? paste! paste...
22:24:20 <stepcut> oh right
22:24:21 <stepcut> one moment
22:25:03 <stepcut> change the body to:
22:25:11 <stepcut> case addr of
22:25:11 <stepcut>                       (S.SockAddrInet _ ha)      -> showHostAddress ha
22:25:16 <mekeor> http://hpaste.org/66783
22:25:27 <stepcut> I wonder if we can remove this template haskell code entirely yet
22:25:30 <mekeor> stepcut: body? which body?
22:25:47 <mekeor> stepcut: that'd be great
22:25:59 <stepcut> oh, sockAddrToHostName in Socket.hs
22:27:10 <mekeor> ACTION nods.
22:28:28 <mekeor> so, what can i do?
22:29:26 <stepcut> mekeor: is it still not working?
22:30:17 <mekeor> i didn't really understand what i have to change... is it line 72 &c ?
22:30:57 <stepcut> one moment
22:31:38 <mekeor> ACTION is looking forward to having happs installed.
22:32:38 <hpaste> stepcut pasted removing template haskell from Happstack.Server.Internal.Socket at http://hpaste.org/66784
22:33:40 <mekeor> thanks
22:33:58 <mekeor> not only happs itself, but in particular #happs is awesome.
22:34:20 <stepcut> thanks :)
22:34:20 <mekeor> @remember mekeor not only happs itself, but in particular #happs is awesome.
22:34:20 <lambdabot> Done.
22:36:13 <mekeor> does happs compile with -Wall ?
22:36:56 <stepcut> no
22:37:02 <mekeor> ok
22:37:21 <stepcut> yes actually
22:37:37 <stepcut> -Wall -fno-warn-unused-do-bind
22:37:50 <stepcut> and, last I checked, it does not generate any warnings
22:38:20 <stepcut> I knew it compiled with out warnings, but I did not realize we had even enabled -Wall
22:38:28 <stepcut> we are more awesome than I even thought!
22:38:39 <stepcut> saying "we" is pretty hard.. because I think Happstack is pretty awesome ;)
22:38:42 <mekeor> cool
22:39:40 <mekeor> ACTION likes -fno-warn-unused-do-bind
22:39:49 <stepcut> :)
22:40:56 <mekeor> since the ARM-processor is very slow, we have to wait a bit. thank you so far, happs, #happs, haskell, stepcut, donri, hpaste and lambdabot.  thank you!    you're awesome!
22:41:31 <stepcut> what sort of ARM machine are you using?
22:44:17 <mekeor> qnap fileserver
22:44:48 <stepcut> neat
22:45:20 <stepcut> if it compiles in the end, I can see about adding some magic so that it works out of the box next time
22:45:33 <mekeor> neat
22:51:31 <mekeor> yeeeeeeeeeeeeeaaah -- yippee yay. hurray. it compiled.
22:51:52 <mekeor> stepcut: should happstack-lite compile easier/just_fine ?
22:52:26 <stepcut> mekeor: it should if you have happstack-server installed. happstack-lite just re-exports functions from happstack-server
22:53:15 <mekeor> ah, nice.
22:54:16 <mekeor> that's gonna be awesome.  (i love that sentence (because of barney stinson. (btw, do you also watch how-i-met-your-mother? it's legendary!)))
22:55:08 <mekeor> compiled fine.
22:55:10 <mekeor> perfect.
22:55:46 <stepcut> nice
22:56:34 <mekeor> ACTION is lucky.
22:57:14 <stepcut> http://code.google.com/p/happstack/issues/detail?id=190
22:57:47 <mekeor> stepcut: http://uxl.dyndns.org/
22:57:51 <donri> stepcut: am i horrible if i don't follow the style in happstack's code?
22:58:07 <stepcut> mekeor: nice!
22:58:30 <stepcut> donri: the coding style in happstack varies, though I am slowly conforming it to a standard of sorts
22:58:40 <stepcut> donri: so, if you fail to, I will change it :)
22:58:48 <donri> heh
22:58:51 <stepcut> donri: though you are welcome to  help write the style guide
22:59:11 <donri> i think my style is close to https://github.com/tibbe/haskell-style-guide/blob/master/haskell-style.md
23:00:14 <stepcut> that looks like what I do as well
23:00:45 <stepcut> for Data declarations, etc, I used the second format with the '=' on the second line
23:00:58 <stepcut> I tend to favor things that keep code as far left as possible
23:01:07 <stepcut> because haskell does have a tendency to creep right.. plus I think it looks better
23:01:22 <stepcut> I also tend to align = and <- when it looks pretty
23:01:25 <donri> one thing in the happstack source i'm not too fond of is the two meter long deriving lists
23:01:30 <stepcut> yeah
23:01:53 <stepcut> $(deriveHappstack ''MyType)
23:01:53 <stepcut> :p
23:02:00 <stepcut> I am not sure how to avoid it
23:04:02 <donri> well i split it up on multiple lines
23:04:33 <donri> the need for those derives in the first place is another issue, yea, but i'm talking about the line-length itself :)
23:04:56 <stepcut> oh
23:05:00 <stepcut> yeah, folding them is a good idea
23:05:03 <stepcut> I have started doing that
23:05:17 <hpaste> donri pasted for example at http://hpaste.org/66785
23:05:41 <stepcut> hmm. that seems a bit extreme in the other direction
23:05:45 <donri> :)
23:05:49 <donri> easier to read IMHO
23:06:36 <hpaste> stepcut annotated for example with for example (annotation) at http://hpaste.org/66785#a66786
23:06:41 <stepcut> crap
23:06:47 <stepcut> stupid proportional width fonts
23:06:56 <donri> sure, but that is not as easy to "scan"
23:07:08 <stepcut> the thing I find the hardest to deal with is imports
23:07:14 <stepcut> there is no way to make those look pretty
23:07:24 <donri> yea, i've stopped wrapping import lines because i like to sort them :)
23:07:51 <donri> could probably script it to support that but hey
23:09:02 <stepcut> I might pay $100 for someone to fix haskell-mode so that it automatically manages import lines
23:09:20 <stepcut> I think leksah does that.. but I am not ready to switch
23:10:15 <stepcut> I do find your method easy to read.. just not sure how I feel about the amount of vertical space it uses
23:10:27 <stepcut> not that there is a world-wide shortage of vertical space or anything
23:10:37 <stepcut> readability is certainly the most important thing
23:13:03 <donri> well i have vim2hs fold newtypes into just that before the =
23:18:17 <mekeor> i have to go now. good night. keep it happsy and pure.
23:19:18 <stepcut> mekeor: best of luck!
23:19:26 <stepcut> mekeor: would love to hear any feedback you have
23:19:39 <stepcut> mekeor: easier to make things better when people complain about what is wrong :)
23:19:39 <tazjin> stepcut: Can I query you? :]
23:19:45 <stepcut> tazjin: sure!
23:19:51 <mekeor> stepcut: of course.
23:20:03 <stepcut> tazjin: but only if you buy me a nice dinner first ;)
23:20:18 <tazjin> Sure, Italian food sounds good?
23:21:01 <stepcut> ugh. Sorry to be a pain, but I have issues with milk and flour. Thai maybe