--- Log opened Fri Feb 06 00:00:03 2009
01:11  * mae_ watches
01:12 < mae_> looks like wchogg might do a lot of tutorial / example documentation for multimaster
01:12 < mae_> Saizan: I noticed spread released a new version 4 recently, it's supposed to have easier configuration and better visibility into what's going on
01:23 < Saizan> that reminds me that hspread doesn't seem to work on my machine
01:23 < Saizan> stepcut: have you tried it on 64-bit?
05:11 < mae_> uploaded skeleton for "meta" package
05:11 < mae_> so we can start putting glue code there (so we can decouple other packages where it makes sense, i.e. server not depending on state)
05:11 < mae_> bbl
05:29 < h_buildbot> Build for ghc-6.8.3 OK. Test suite ran. 16/75 test cases failed. Check http://buildbot.happstack.com/ for details.
05:42 < h_buildbot> Build for ghc-6.10.1 OK. Test suite ran. 16/75 test cases failed. Check http://buildbot.happstack.com/ for details.
09:54 < stepcut> Saizan_: I have not tried it on 64-bit, but what happens when you run it ?
13:06 < webframp> can anyone else darcs get/pull from http://patch-tag.com/publicrepos/happs-tutorial ?
13:10 < gwern> I can't
13:10 < gwern> darcs failed:  Not a repository: http://patch-tag.com/publicrepos/happs-tutorial (Failed to download URL http://patch-tag.com/publicrepos/happs-tutorial/_darcs/inventory: HTTP response code said error)
13:12 < webframp> same err i get here
13:40 < koeien> hi. i'd like your feedback on a tutorial on using multiple Happstacks on one server and/or Happstack+SSL. See http://huygens.functor.nl/blog/?p=21
13:44 < stepcut> koeien: sweet!
13:44 < jfoutz> looks ok to me. i like this kind of walkthrough hiding the zillion apache config options in favor of getting something working quickly and correctly. you *might* point out that you don't open up 1729 in the firewall rules, but anybody proxying will get that anyway.
13:45 < koeien> jfoutz: thanks, i'll add it just to be sure
13:59 < jfoutz> koeien: it does look great. i'm going to need this soon. thanks for putting it together.
13:59 < koeien> jfoutz: be sure to leave a comment or ask here if there is an error or omission :)
14:25 < jrx> Are there FlashNotices (or something like that) implemented in happstack? I remember something like that in happs
14:25 < jrx> or could you give some clues, how to implement such mechanism myself?
14:28 < jfoutz> i've never used FlashNotice myself, so i may be off base. It looks like you conditionally include some text in an Ajax-y way?
14:30 < jrx> jfoutz: well, the mechanism I'm looking for is something like storing temporary information for a client between two subsequent requests
14:31 < jrx> for example, there is a register form
14:31 < jrx> user fills it in and clicks "submit"
14:31 < jrx> and then my app looks it up in the database and if a user with that name exists
14:32 < jrx> it returns seeOther "register" ...
14:32 < jfoutz> ah, i see. you've got sessions. so at the very least you could put info in there. the Working With State section of the tutorial is a little thin right now...
14:32 < jrx> and I'd like to set some "notice" in the handler for the first register to be displayed after redirect
14:33 < jrx> well but the sessions I have currently work only for logged-in users
14:34 < jrx> and the case I showcased happens for non-logged-in users
14:34 < jfoutz> you don't set a cookie when they touch the server?
14:35 < jfoutz> you might want to do sessions as a unique visitor, and under that have a logged in user
14:35 < jrx> oh well, I must try that
14:35 < jfoutz> data Session = Visitor Map Stuff | Member Map MoreStuff ?
14:36 < jrx> sounds pretty good
14:36 < jfoutz> <- hasn't actually built an app :) so take that for what it's worth.
14:37 < jrx> I'm building my first, so I must begin with something ;)
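jfoutz's `data Session = Visitor ... | Member ...` sketch could be fleshed out roughly like this (an illustrative sketch only; `Notice`, `setNotice` and `takeNotice` are invented names, not happstack API):

```haskell
-- Hypothetical session type: every client gets a Visitor session on
-- first contact, upgraded to Member after login.  Either kind can
-- carry a one-shot "flash" notice that survives a seeOther redirect.
type Notice = String

data Session
  = Visitor { notice :: Maybe Notice }
  | Member  { notice :: Maybe Notice, userName :: String }
  deriving (Show)

-- Set a notice in the handler that issues the redirect ...
setNotice :: Notice -> Session -> Session
setNotice n s = s { notice = Just n }

-- ... and consume it (at most once) when rendering the next page.
takeNotice :: Session -> (Maybe Notice, Session)
takeNotice s = (notice s, s { notice = Nothing })

main :: IO ()
main = do
  let s0      = Visitor Nothing
      s1      = setNotice "user already exists" s0
      (n, s2) = takeNotice s1
  print n                       -- Just "user already exists"
  print (fst (takeNotice s2))   -- Nothing: the notice fires only once
```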
15:23 < koeien> Saizan_: ping
15:24 < koeien> Saizan_: we want to have results of testcases in build reports and stuff :) is this already implemented in the cabal reports-client?
15:50 < Saizan_> koeien: no, there's nothing for testcases, but it should be easy to add if we propose an interface
15:52 < koeien> if we want to have all functionality of the buildbot we should add it :)
15:53 < Saizan_> so we need a way to collect at least simple stats about how many tests failed and the log?
15:54 < koeien> yes. i'm not sure how cabal currently works
15:54 < koeien> ideally it should be integrated with cabal test
15:54 < Saizan_> yeah
15:54 < koeien> however happstack does not use cabal test afaik
15:55 < koeien> i.e.   ( cabal install && cabal test ) ; cabal send-report
15:56 < Saizan_> there's already an optional field for the outcome of trying to build the documentation in the build-report datatype
15:57 < Saizan_> so we can do something similar for test
15:57 < koeien> also haddock-builds
15:57 < koeien> yes
15:57 < koeien> test is a bit more interesting since we can report # failures / # errors / # success. i don't know if cabal test supports that however
15:57 < Saizan_> though i think you need to pass --enable-documentation to cabal install to get that field populated in the report
15:58 < koeien> yeah fine
15:58 < koeien> we also need to run cabal test to populate the hypothetical test field :)
15:59 < Saizan_> cabal test currently just executes the test UserHook, without expecting any feedback from it
15:59 < koeien> then parsing of results is in general not feasible
15:59 < koeien> which is unfortunate
16:00 < Saizan_> yeah, we've to design a more sensible UI
16:01 < Saizan_> well API, rather
16:01 < koeien> if people use HUnit we could parse that
16:01 < koeien> i.e. specify that we parse any lines that look like HUnit results
16:01 < koeien> if people use $CUSTOM_SYSTEM then they can mimic that output
16:03 < koeien> i don't know if that is optimal though
16:04 < Saizan_> rather than parsing output i think we should change the type of the Hook to return the stats in an ADT
16:04 < koeien> the Hook is not a String to execute a file?
16:04 < Saizan_> no
16:05 < koeien> ok. we could make an IO TestResult, this would be far better of course
16:06 < koeien> i'm both a cabal and happstack noob, so i apologize for my ignorance
16:06 < Saizan_> http://moonpatio.com/fastcgi/hpaste.fcgi/view?id=1220#a1220
16:06 < koeien> yeah just change the unit type to something more sensible would be fine
16:07 < koeien> would break anything that is dependent on it, but the API is unstable anyway afaik
16:07 < Saizan> yeah
16:07 < koeien> also providing convenience functions for HUnit / QuickCheck would be nice
16:07 < koeien> don't know if they exist
16:08 < Saizan> not in Cabal at least
16:08 < jfoutz> koeien: your tutorial worked perfectly. ( i only did the first part, but plan on doing more over the weekend)
16:08 < koeien> jfoutz: nice. good to hear!
16:08 < Saizan> and that's a bit of a problem since Setup.hs can't declare dependencies..
16:09 < koeien> Saizan: yeah that's what happstack has encountered :(
16:09 < Saizan> i'd be inclined to run the happstack-$foo-tests binary built in dist/build/* from runTests
16:10 < koeien> there's no other way, i think, at least in the current framework. is there a better way?
16:11 < Saizan> i don't think so, no
16:11 < koeien> then we need to parse the output anyhow
16:12 < Saizan> yeah, but at least the runTests can be expected to know what output the executable it's calling will use
16:12 < koeien> yep
16:12 < koeien> mae_: ping
16:13 < Saizan> though i feel bad about adding another flag to cabal install to --enable-tests ..
16:14 < koeien> 'cabal test' could do this automagically?
16:15 < koeien> in happstack the default is to build the testsuite
16:16 < Saizan> yeah, but i was thinking about constructing the report
16:18 < Saizan> if we don't make "install" run "test" then it means that test has to patch the already built report
16:18 < koeien> yes
16:18 < koeien> if you run 'install' twice, do you get two reports?
16:20 < Saizan> oh, there's already a field for tests!
16:20 < koeien> woot
16:22 < Saizan> just data Outcome = NotTried | Failed | Ok
16:22 < koeien> too restricted, but at least there is some support
16:23 < koeien> that is also the result of a build?
16:24 < Saizan> yeah, that's the main thing
16:27 < koeien> mae_, stepcut: i just confirmed that the testcases were already failing at the time igloo@ committed them
16:27 < Saizan> ok, currently the test-outcome field is always NotTried
16:28 < stepcut> koeien: the ones in HAppS-Data ?
16:28 < koeien> stepcut: yes
16:28 < stepcut> koeien: :(
16:28 < koeien> stepcut: http://code.google.com/p/happstack/issues/detail?id=51
16:28 < koeien> Saizan: we need to extend that field a little bit
16:29 < koeien> Saizan: or create a new field, test-output or test-results or so
16:29 < stepcut> koeien: maybe we can ask him why he submitted test cases that were failing :)
16:29 < Saizan> yeah, and add something for darcs --context
16:29 < koeien> stepcut: yes
16:29 < koeien> Saizan: if it was a checkout from darcs, yes, otherwise a version number would suffice
16:30 < Saizan> the version number is already included
16:30 < koeien> Saizan: do you have a proposal for the version number? like 0.2-20090206 ?
16:30 < koeien> wouldn't this totally screw up the ordering?
16:31 < Saizan> well, if we start using versions like that, then adding a fifth field won't interfere with the ordering
16:31 < koeien> why 4 version numbers? what is wrong with using 3 ?
16:32 < jfoutz> this one goes to eleven.
16:32 < koeien> Also, how would a current nightly tarball be versioned?
16:34 < Saizan> yeah, 3 is fine
16:34 < koeien> i know cabal has 4, but i don't know why
16:35 < Saizan> we're still at 0.1.9, so
16:35 < koeien> ok, that's okay
16:35 < koeien> hoping that there will be no more than 9 point releases :)
16:35 < Saizan> why?
16:36 < Saizan> the ordering is lexicographical
16:36 < koeien> otherwise we will get 0.1.1, 0.1.2, ..., 0.1.8 and what is next?
16:36 < Saizan> 0.1.10?
16:36 < koeien> that is not completely correct,  since 0.1.9 is "newer" in some sense
16:36 < Saizan> 0.1.10 > 0.1.9
16:37 < Saizan> well not in the sense that cabal considers version numbers
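In the sense Cabal uses (Data.Version compares the numeric components, not the strings), 0.1.10 does sort after 0.1.9; a quick check:

```haskell
import Data.Version (Version(..), showVersion)

main :: IO ()
main = do
  let v1 = Version [0,1,9]  []
      v2 = Version [0,1,10] []
  -- Ord on Version is lexicographic on the branch lists, comparing
  -- each component as an Int, so 10 > 9 here.
  print (v2 > v1)            -- True
  putStrLn (showVersion v2)  -- 0.1.10
```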
16:37 < koeien> i know :) but if 0.1.9 is "0.2.0 beta"
16:37 < Saizan> ah, you mean which version number we should use for RCs?
16:38 < koeien> RCs and nightly builds
16:38 < koeien> i would prefer something like 0.2.0~20090206 but this is impossible
16:38 < Saizan> for nightly builds i'd just use the $current_number.date
16:39 < Saizan> deciding first how many digits we want to use in the version number
16:39 < koeien> i think 3 should be fine
16:39 < koeien> doesn't really matter though
16:40 < Saizan> ok
16:40 < koeien> but 0.2.0~20090206 or so would be nice.
16:40 < koeien> where the ordering is x~y < x~z if y ~ z, and   x~y < x  for all y
16:41 < Saizan> ah, weird
16:41 < koeien> eh, that should be "if y < z"
16:42 < Saizan> yeah, i imagined
16:42 < koeien> that makes clearer what the "target" for the build is
16:43 < koeien> anyway, i don't think it's feasible in cabal
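The ~ ordering koeien describes (x~y sorts before the plain x, Debian-style) is easy to model, even though Cabal has no such thing; `TildeVersion` here is a made-up type for illustration:

```haskell
import Data.Version (Version(..))

-- A version with an optional pre-release suffix, e.g. "0.2.0~20090206".
-- Ordering rule from the discussion: x~y < x~z if y < z, and x~y < x
-- for every y, i.e. a suffixed version precedes the plain release.
data TildeVersion = TildeVersion Version (Maybe String)
  deriving (Eq, Show)

instance Ord TildeVersion where
  compare (TildeVersion v1 t1) (TildeVersion v2 t2) =
    case compare v1 v2 of
      EQ -> cmpTag t1 t2
      o  -> o
    where
      -- Nothing (a real release) sorts after Just (a pre-release).
      cmpTag Nothing  Nothing  = EQ
      cmpTag Nothing  (Just _) = GT
      cmpTag (Just _) Nothing  = LT
      cmpTag (Just a) (Just b) = compare a b

main :: IO ()
main = do
  let nightly = TildeVersion (Version [0,2,0] []) (Just "20090206")
      release = TildeVersion (Version [0,2,0] []) Nothing
  print (nightly < release)   -- True: 0.2.0~20090206 < 0.2.0
```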
16:43 < Saizan> i think Cabal just uses for the first RC, and bumps the last number for the ones after
16:44 < koeien> so 1.6.1 ?
16:44 < koeien> or ?
16:44 < Saizan> the latter
16:44 < Saizan> in fact ghc-6.10.1 ships with
16:44 < koeien> i don't know what GHC uses, but i know there is no ghc-6.10.0 :P at least not for end users
16:45 < Saizan> heh :)
16:45 < Saizan> because you branch the $major.0 before actually making the release, maybe add some patches, and then bump just before releasing
16:46 < koeien> like Cabal does?
16:46 < koeien> and KDE4, apparently :P
16:46 < Saizan> yeah
16:46 < koeien> odd/even like ghc/linux is also a solution for this
16:47 < Saizan> both ghc and cabal also use odd/even
16:48 < koeien> i think we should reconsider using this as well, at least when we reach say 0.4 or so
16:49 < koeien> anyway i like the .0 idea
16:52 < Saizan> right, odd/even means we don't have to stick to an API for the whole odd series, and we can bless what we have at the end of it as the next-even API
16:53 < koeien> yes. also it avoids somewhat awkward 0.1.99-version numbers
16:53 < koeien> that can get inconvenient
16:56 < koeien> anyway, i'll start porting the build script
16:57 < koeien> but if cabal is going to keep track of everything, that is very short
16:59 < Saizan> we need some very basic code to parse HUnit results at least
16:59 < koeien> yeah i have that, a few lines of code
16:59 < Saizan> something that won't depend on non-core libraries
16:59 < koeien> i used regex
17:00 < koeien> regex-posix to be precise
17:01 < Saizan>  it'd be safer and more portable to use http://haskell.org/ghc/docs/latest/html/libraries/base/Text-ParserCombinators-ReadP.html
17:02 < koeien> is this in ghc-6.8.3?
17:02 < Saizan> yeah
17:02 < koeien> ok fine
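The "few lines of code" for HUnit output could indeed stay core-only with ReadP; a sketch that parses the summary line HUnit prints (the `HUnitCounts` type is invented here):

```haskell
import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

-- HUnit ends a run with a summary line like:
--   "Cases: 75  Tried: 75  Errors: 0  Failures: 16"
data HUnitCounts = HUnitCounts
  { cases, tried, errors, failures :: Int }
  deriving (Show, Eq)

counts :: ReadP HUnitCounts
counts = HUnitCounts <$> field "Cases"  <*> field "Tried"
                     <*> field "Errors" <*> field "Failures"
  where
    field name = do
      skipSpaces
      _ <- string (name ++ ":")
      skipSpaces
      read <$> munch1 isDigit

parseCounts :: String -> Maybe HUnitCounts
parseCounts s = case readP_to_S (counts <* skipSpaces <* eof) s of
  [(c, "")] -> Just c
  _         -> Nothing

main :: IO ()
main = print (parseCounts "Cases: 75  Tried: 75  Errors: 0  Failures: 16")
```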
17:04 < koeien> anyway, that part is very easy ;) the hard part is figuring out how to get the test results into cabal
17:05 < Saizan> yeah, what's a good representation for test results?
17:06 < koeien> we need the complete log. i don't think it is worthwhile to parse exactly which test cases failed
17:06 < koeien> besides that, we have # Cases, # Tried, # Failures, # Errors
17:06 < koeien> i'm not sure what the difference between the first two is
17:08 < stepcut> koeien: during an interactive run # Tried changes over time
17:09 < koeien> so for batch runs they're always the same? Then we only need to include #Tried, #Failures, #Errors
17:09 < Saizan> it'd be nice to associate the failures to the names of the tests, i think
17:09 < koeien> perhaps
17:09 < Saizan> what's the difference between Failures and Errors?
17:10 < koeien> all that is doable, but doesn't HUnit provide an API for that?
17:10 < koeien> Saizan: a Failure is expected (i.e. trying to assert 3 == 2)
17:10 < koeien> and an Error is unexpected (i.e. trying to assert undefined == 4)
17:10 < Saizan> ah, so error is when the test crashes, while failure is when it completes but gives a negative result?
17:10 < koeien> yes
17:24 < koeien> i think it is feasible to call HUnit in such a way that we get a report
17:24 < koeien> that is more easily parseable (e.g. read/show)
17:29 < koeien> so we need something like    data TestRunResult = TRR { cases :: [(Identifier, TestResult, String)] }     and data TestResult = Ok | Fail | Error
17:30 < Saizan> String being?
17:30 < koeien> the error message
17:31 < koeien> or whatever HUnit tries to print
17:31 < Saizan> ok, so Log
17:32 < koeien> yes
17:32 < koeien> type Log = String
17:32 < koeien> (or ByteString)
17:33 < koeien> we can then let the Haskell-program output this using Read/Show if called with `cabal test', and in a human-friendly way if called interactively
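koeien's proposed type round-trips for free with derived Read/Show; a minimal sketch (`Identifier` is just a String here):

```haskell
type Identifier = String
type Log = String

data TestResult = Ok | Fail | Error
  deriving (Show, Read, Eq)

data TestRunResult = TRR { cases :: [(Identifier, TestResult, Log)] }
  deriving (Show, Read, Eq)

main :: IO ()
main = do
  let r = TRR [ ("prop_reverse", Ok,   "")
              , ("parseConfig",  Fail, "expected 3 but got 2") ]
      -- machine mode: dump with show, recover with read --
      -- a trivially parseable format until a real one is agreed on
      wire = show r
  print (read wire == r)   -- True: the output round-trips
```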
17:36 < Saizan> yeah, or a format like .cabal files like cabal does for build-reports and other files so that the external format is not so coupled with the internal type
17:36 < koeien> let's use XML ! :)
17:36 < stepcut> yay!
17:36 < Saizan> gah
17:38 < Saizan> uhm, so we could leave runTests like that and require a conforming output
17:38 < Saizan> we can't call runTests directly from cabal-install anyway
17:38 < koeien> yes
17:39 < koeien> that's why we need such machinery for generating / parsing output
17:39 < Saizan> ok, that can go in Cabal i think
17:40 < koeien> yeah, we just need to decide on the precise format
17:41 < Saizan> i wonder if someone will say that TestResult = Ok | Fail | Error is too limiting
17:42 < Saizan> but there's Log anyway..
17:42 < koeien> that's all we get from HUnit
17:43 < Saizan> yeah, but if this goes in Cabal it has to be general enough
17:44 < koeien> true
17:44 < koeien> i don't know what dcoutts thinks
17:44 < dcoutts> Saizan: we've got a design for tests in Cabal
17:45 < Saizan> dcoutts: oh, good, where?
17:45 < dcoutts> http://hackage.haskell.org/trac/hackage/ticket/215
17:47 < koeien> dcoutts: this doesn't say anything about the output format of the test program that is run, afaics
17:49 < koeien> just yes/no or # test cases failed is too limiting, we can have more information about which test cases failed
17:49 < dcoutts> koeien: the idea is to specify a number of protocols
17:49 < dcoutts> the most trivial protocol we'll start with is exit code
17:49 < koeien> hmm
17:49 < koeien> yeah that makes it easy-to-use and powerful at the same time :)
17:50 < dcoutts> then we can add more sophisticated protocols
17:50 < dcoutts> they do not have to be command line protocols
17:50 < dcoutts> they could be exposing modules with exported functions
17:50 < dcoutts> and a detailed api for listing and running tests
17:51 < koeien> so Cabal won't run a binary then, but import the modules and run the tests?
17:51 < dcoutts> then we'd need to have code to provide the glue between hunit and qc testsuites
17:51 < Saizan_> how are you going to run them though? compiling a small module that imports them?
17:51 < dcoutts> koeien: it'd probably be implemented by compiling a standalone program that imports the modules
17:52 < dcoutts> but the form of that stub is flexible and can depend on the context
17:52 < dcoutts> the point is the api could be specified as exported haskell functions
17:53 < dcoutts> api/test protocol
17:53 < koeien> yeah nicer than dumping test files and parsing them
17:53 < dcoutts> each test section in a .cabal file must specify the protocol that it uses
17:54 < dcoutts> that gives us forward compat and lets us add improved protocols later
17:55 < koeien> sounds good
17:55 < koeien> and "protocol version 0" is pointing to an executable and using exit code
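On the driver side, "protocol version 0" amounts to little more than this sketch (reusing the `Outcome` type quoted earlier; `true` stands in for the package's test executable, e.g. something under dist/build/, so the example runs anywhere):

```haskell
import System.Process (rawSystem)
import System.Exit (ExitCode(..))

-- The exit-code protocol: run the package's test executable and
-- map its exit code onto the build-report Outcome field.
data Outcome = NotTried | Failed | Ok
  deriving (Show, Eq)

runTestExe :: FilePath -> IO Outcome
runTestExe exe = do
  code <- rawSystem exe []
  return $ case code of
    ExitSuccess   -> Ok
    ExitFailure _ -> Failed

main :: IO ()
main = runTestExe "true" >>= print   -- exit code 0, so: Ok
```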
18:00 < Saizan_> dcoutts: btw, dependencies in Executable stanzas get added to the installed-pkg-config of the library, right?
18:01 < dcoutts> Saizan: yes, that's an infelicity.
18:01 < dcoutts> Saizan: it incorrectly unions all deps from all sections and uses that when building each section
18:01 < dcoutts> and registering libs
18:16 < Saizan_> to support HPC we must build the library with such a flag, right?
18:17 < stepcut> dcoutts: i approve of the exported library API method. In fact, that is how happstack does it now. Everything is library based, and there is a tiny generic wrapper that runs the suites
18:18 < dcoutts> stepcut: right, should be easy to use the new system when it arrives
18:18 < stepcut> this problem of not being able to depend on 'flags' is troublesome. If you try to build with profiling enabled, you run into that problem again of needing to depend on profiling enabled libraries
18:18 < dcoutts> Saizan_: yes
18:19 < dcoutts> stepcut: yes, the fact that profiling libs are not tracked is the big problem there
18:19 < dcoutts> it looks like we'll have an overhaul in time for ghc 6.12
18:19 < stepcut> dcoutts: right. And that goes for enabling hpc, extra test libs, etc
18:19 < dcoutts> where each instance of a package will be tracked separately
18:20 < Saizan_> also that we can't depend on e.g. happstack-server-0.1+tests=True
18:20 < Saizan_> we've a flag tests in each library that makes it export the tests
18:20 < dcoutts> Saizan_: that's a feature, not a bug :-)
18:20 < stepcut> Saizan_: right, that is what I mean, though I did not say it so clearly :)
18:20 < dcoutts> Saizan_: ah, but that's not a problem, because the test can import the modules directly
18:21 < dcoutts> so it can pass whatever cpp flags you need to export more internals
18:21 < stepcut> dcoutts: not always
18:21 < Saizan_> dcoutts: it's inconvenient though, since we have a separate happstack-tests that imports and runs all the tests from all the libraries
18:21 < dcoutts> stepcut: cross-package?
18:21 < stepcut> dcoutts: right
18:21 < dcoutts> right, ok so my suggestion only works when the tests are in the same package as the code they're testing
18:22 < dcoutts> but I'd guess that's usually ok for "white box" testing
18:22 < stepcut> dcoutts: in happstack, each package provides its own tests, since many packages may be used individually. We also have a top-level happstack-tests package which imports all the tests, and runs them
18:22 < dcoutts> and external tests will only need the external api
18:23 < dcoutts> stepcut: so the problem is aggregating
18:23 < stepcut> dcoutts: yes
18:24 < stepcut> dcoutts: we would like 'cabal install happstack-tests' to *just work*. But the only way to do that right now is to always compile all the packages with tests enabled.
18:25 < dcoutts> stepcut: I think the better approach beyond the short term is only per-package tests and aggregate results externally
18:25 < dcoutts> eg via build reports
18:25 < dcoutts> and aggregate the data, rather than constructing a single test prog
18:25 < stepcut> dcoutts: that seems messy compared to just 'import Some.Tests'
18:25 < koeien> dcoutts: and how to do integration tests, then?
18:26 < koeien> you may want to call some existing test cases as well
18:26 < dcoutts> koeien: ah but those are tests using external apis right?
18:26 < koeien> dcoutts: normally, yes
18:26 < dcoutts> so not the same problem about having to export all innards that you have with per-package tests
18:27 < dcoutts> stepcut: with generic tools to aggregate tests from multiple packages it should be ok I hope
18:27 < stepcut> dcoutts: here is a use case:
18:27 < dcoutts> stepcut: depending on package + flags is just out of the question anyway, so we need some other solution
18:27 < stepcut> I create a module that has a new class which is supposed to follow some laws (similar to Monad, Monoid, Arrow, etc)
18:28 < dcoutts> ah the instance / package problem? :-)
18:28 < stepcut> being the nice guy that I am, I write some quickchecks that test the laws and put them in my testsuite
18:28 < stepcut> when someone else creates instances, they want to use my quickchecks in their testsuite to make sure their instance follows the laws
18:28 < stepcut> when they create instances in *their* package
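stepcut's use case in miniature, with Monoid standing in for the hypothetical lawful class (assumes QuickCheck is available; the property names are invented):

```haskell
import Test.QuickCheck

-- Exported once by the class author's test module: laws phrased
-- polymorphically, so any downstream instance can be checked.
prop_leftIdentity :: (Eq a, Monoid a) => a -> Bool
prop_leftIdentity x = mempty <> x == x

prop_assoc :: (Eq a, Monoid a) => a -> a -> a -> Bool
prop_assoc x y z = (x <> y) <> z == x <> (y <> z)

-- A downstream package reuses the properties against its own instance:
main :: IO ()
main = do
  quickCheck (prop_leftIdentity :: [Int] -> Bool)
  quickCheck (prop_assoc :: [Int] -> [Int] -> [Int] -> Bool)
```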
18:29 < dcoutts> right, so does your package end up depending on QC or not
18:29 < stepcut> that depends on how the tests are shipped
18:29 < dcoutts> yes, if the tests are separate then your package itself doesn't need to depend on QC
18:30 < dcoutts> but then your tests had better not need internal access
18:30 < stepcut> right
18:30 < Saizan_> yeah, but where do you package them?
18:31 < stepcut> dcoutts: at one point I thought what i wanted was to be able to add a second Library paragraph in the .cabal file that I could enable via a flag
18:31 < stepcut> dcoutts: so that my .cabal would produce happstack-util and optionally happstack-util-tests
18:31 < dcoutts> additional libs in a package are ok, but they cannot be exposed, they could only be used internally
18:31 < stepcut> but, that does not scale if you start caring about combinations of flags, (prof, hpc, tests, etc)
18:32 < dcoutts> stepcut: but look at it with your debian hat on, you cannot build a binary package that provides the +tests thing
18:32 < dcoutts> sure, building from source you might be able to make it work, but it does not translate into binary distro packages
18:32 < dcoutts> and that's an important design goal
18:33 < dcoutts> hence the api cannot change depending on flags
18:33 < stepcut> dcoutts: well, debian sucks :)
18:33 < dcoutts> heh
18:33 < koeien> :(
18:33 < dcoutts> all binary packages have the same problem
18:34 < stepcut> yeah
18:34 < dcoutts> even if they could do it they'd have to provide 2^n variants
18:34 < dcoutts> for all the different flag combos
18:35 < dcoutts> stepcut: for the example of your class it's not so hard though
18:36 < dcoutts> you can make a separate package from your class that tests the laws
18:36 < dcoutts> since it only depends on the class members then it's ok
18:36 < dcoutts> it should not need access to internals in the same package where the class is defined
18:37 < stepcut> dcoutts: yeah, I thought about that. We already do have tests in happstack which depend on internal modules.
18:38 < dcoutts> stepcut: right and the design I'm advocating is to put those in the package they test
18:38 < stepcut> dcoutts: but, also, for no particular reason, it would be nice if running, cabal sdist, for happstack-util, produced a tarball with the source *and* the tests, even if tests are not always enabled by default
18:38 < Saizan_> stepcut: right, but are they supposed to be imported from other libs?
18:38 < dcoutts> and we aggregate the data externally
18:38 < dcoutts> stepcut: that's just a bug
18:39 < stepcut> Saizan_: sure, why not?
18:39 < dcoutts> stepcut: though one that requires an API break to solve sadly
18:39 < Saizan_> stepcut: i don't see why i'd want to check that -server works correctly from another library..
18:40 < dcoutts> stepcut: right, I'd argue for always exposing those tests in the lib api or never doing so
18:40 < stepcut> Saizan_: well, that other library would be the application that I am about to deploy, and if happs-server fails, I will care greatly, yes?
18:40 < Saizan_> stepcut: yes, but you can check happs-server once for all of them
18:41 < dcoutts> stepcut: if that's the position, that built-in self checks are essential then they should always be exported
18:41 < dcoutts> that especially makes sense for tests that check interactions with the operating environment
18:42 < dcoutts> less so for pure checks that either work or don't
18:42 < stepcut> dcoutts: right. happstack has some of each already
18:42 < stepcut> dcoutts: and, if the pure checks fail only on your operating environment, then you really know something is wrong ;)
18:43 < dcoutts> heh, you certainly would
18:43 < dcoutts> though as Saizan_ says, that'd be possible to check at the time the lib package is built/installed
18:44 < dcoutts> unless we're really paranoid and don't trust that the pure code tests working on the build host means they'll work on the deployment server
18:44 < stepcut> but, the lib package is not installed in the operating environment, it is installed in the dev environment
18:45 < stepcut> happstack has non-pure tests as well though. Plus, memory constraints could cause 'pure' tests to fail
18:45 < dcoutts> stepcut: so that's all fine, I'm not saying don't export the tests from the library if it's important for the application.
18:45 < dcoutts> I'm arguing either export them or don't, but don't make it conditional
18:45 < stepcut> dcoutts: well, right now it is conditional, but on by default
18:46 < dcoutts> why is it conditional?
18:46 < stepcut> well, originally they were off by default
18:47 < dcoutts> so you didn't think that it's important for end-user apps? :-)
18:47 < stepcut> but, mostly it is conditional so that if people complain about having to compile a bunch of tests they don't want to use, they have an easy option to disable it
18:47 < dcoutts> heh :-)
18:47 < dcoutts> resist!
18:48 < stepcut> dcoutts: I just wasn't aware of the difficulties that would be encountered by end-user apps that did want them when they were disabled by default ;)
18:48 < dcoutts> and happstack depends on enough things anyway adding a dep on QC is no big deal
18:49 < dcoutts> the thing to think about is which way you expect distros to configure it
18:49 < dcoutts> because they only get to pick one
18:50 < stepcut> the current test aggregation method has a nice simplicity and familiarity which is nice
18:50 < stepcut> it just uses 'import' and lists
18:50 < dcoutts> sure, it's reasonable in the short term given the absence of any other support or tools
18:50 < stepcut> and HUnit
18:50 < stepcut> :)
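That aggregation style, plain imports, list concatenation and HUnit, looks roughly like this in miniature (assumes HUnit; the per-package test lists would really be imported, but are inlined here):

```haskell
import Test.HUnit

-- In the real setup these lists would come from modules like
-- Happstack.Util.Tests, Happstack.Server.Tests, ... (made-up names)
utilTests :: [Test]
utilTests = [ "rev"  ~: reverse [1,2,3::Int] ~?= [3,2,1] ]

serverTests :: [Test]
serverTests = [ "plus" ~: 1 + 1 ~?= (2 :: Int) ]

-- the aggregating Main is little more than concatenation:
allTests :: Test
allTests = TestList (utilTests ++ serverTests)

main :: IO ()
main = runTestTT allTests >>= print
```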
18:55 < Saizan_> so, if we add "test" stanzas to .cabal files, they aren't going to produce something that gets installed, right?
18:55 < dcoutts> Saizan_: right
18:56 < dcoutts> but they could be run at install time
18:56 < dcoutts> and results collected
18:56 < Saizan_> and added to the report.
18:56 < dcoutts> or whatever, yes
18:57 < dcoutts> detailed reports are not very well specified yet
18:57 < koeien> dcoutts: i was thinking about having data TestResultReport = TRR { cases :: [(Identifier, Result, Log)] } where data Result = Ok | Fail | Error
18:59 < dcoutts> koeien: is that the protocol? a function that returns that type?
18:59 < koeien> dcoutts: an example, yes :)
19:00 < dcoutts> if it's sufficiently lazy that might be ok
19:00 < dcoutts> so we can select which tests to evaluate
19:00 < dcoutts> by name, before actually evaluating them
19:00 < koeien> might be hard in general because of I/O
19:01 < dcoutts> that looks pure to me
19:01 < koeien> yes, the result
19:01 < koeien> but tests can do I/O in general, methinks
19:01 < koeien> you could pass a filter function, but I think this would be dependent on the test framework used
19:03 < dcoutts> so I'd like something where we get a list of tests and we can run all/any of them
19:03 < dcoutts> and a test could either be pure or IO
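That shape, named tests that are either pure or IO and can be selected by name before anything runs, might look like this (a made-up API for illustration, not the actual design from ticket 215):

```haskell
-- Listing names is cheap and pure; test bodies are only evaluated
-- for the tests a driver actually selects.
data Test
  = PureTest String Bool
  | IOTest   String (IO Bool)

name :: Test -> String
name (PureTest n _) = n
name (IOTest   n _) = n

run :: Test -> IO (String, Bool)
run (PureTest n b)  = return (n, b)
run (IOTest   n io) = do b <- io; return (n, b)

suite :: [Test]
suite =
  [ PureTest "prop_reverse" (reverse (reverse [1,2,3::Int]) == [1,2,3])
  , IOTest   "io_smoke"     (return True)
  ]

main :: IO ()
main = do
  print (map name suite)   -- select by name without running anything
  results <- mapM run (filter ((== "prop_reverse") . name) suite)
  print results            -- only the selected test was evaluated
```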
19:07 < Saizan_> so that you can say "cabal test prop1" and get only that result?
19:20 < dcoutts> Saizan_: I'm not sure about the command line UI, but yes
19:21 < dcoutts> you might have some package test driver eg a remote test bot and have it invoke a specific set of tests
19:22 < koeien> but then the test bot needs to have the latest source code
19:22 < koeien> so what makes it different from just executing the test on the bot
19:35 < dcoutts> koeien: yes, I was assuming the test bot had the full source but eg imagine a central web ui where you can tell bots to update and try a certain subset of tests
19:36 < dcoutts> just slightly more control than run everything and collect the results
19:40 < koeien> ok. do you really want to build this into Cabal? i would view this as a separate package
19:41 < dcoutts> koeien: Cabal would only define the interface/protocol
19:41 < dcoutts> so different apps on the collecting side (eg perhaps simple one in cabal-install) and different package test suites on the producing side
19:43  * dcoutts -> sleep
19:44 < koeien> good night ;)
--- Log closed Sat Feb 07 00:00:04 2009