19:08:36 <stepkut> hmm.  I think we need yet another timeout manager
19:11:57 <alpounet> uh?
19:13:50 <stepkut> the one from warp appears to hold on to cancelled timeouts for (by default) 30 seconds. If your kill action allocates a fair bit of RAM (which it does with TLS support, because it needs the SSL type), then you can easily drive your RAM usage into the gigabytes under high load
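(For context, a rough sketch of the reaper-style design being described — illustrative names only, not warp's actual code. The point is that cancelling only flips a flag; the entry, and whatever its kill action closes over, stays reachable until the reaper's next pass.)

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (filterM)
import Data.IORef

data State = Active | Canceled

-- each entry pairs a cancel flag with a kill action; the kill action is a
-- closure, so whatever it captures (e.g. an SSL context) stays reachable
-- for as long as the entry does
type Entry = (IORef State, IO ())

isActive :: State -> Bool
isActive Active   = True
isActive Canceled = False

-- register a kill action; the returned cancel action merely flips the flag,
-- so the entry itself lingers until the next reaper pass
register :: IORef [Entry] -> IO () -> IO (IO ())
register entries kill = do
    st <- newIORef Active
    atomicModifyIORef entries (\es -> ((st, kill) : es, ()))
    return (writeIORef st Canceled)

-- a reaper thread wakes up every reapIntervalSecs (30s in the default case
-- described above), swaps the list out, drops the cancelled entries and
-- merges the rest back; cancelled entries are only released here
newManager :: Int -> IO (IORef [Entry])
newManager reapIntervalSecs = do
    entries <- newIORef []
    _ <- forkIO (reaper entries)
    return entries
  where
    reaper entries = do
        threadDelay (reapIntervalSecs * 1000000)
        old  <- atomicModifyIORef entries (\es -> ([], es))
        live <- filterM (\(st, _) -> fmap isActive (readIORef st)) old
        atomicModifyIORef entries (\new -> (new ++ live, ()))
        reaper entries
```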
19:13:58 <stepkut> also.. it appears buggy
19:14:35 <stepkut> https://github.com/yesodweb/wai/blob/master/warp/Timeout.hs
19:15:04 <alpounet> wow
19:15:11 <stepkut> in line 30 they read the list of timeouts, in line 31 they expire some, and then in line 32 they put back the modified list
19:15:54 <stepkut> oh..
19:15:59 <stepkut> I think I see how that part works
19:16:01 <stepkut> nm.
19:21:11 <stepkut> in other news.. the log bot has not gone missing ever since I fixed that issue with nick collisions :)
19:22:20 <stepkut> I pushed a possible fix for the SSL bug though
19:22:38 <stepkut> but the timeout issue definitely needs to be fixed as well
19:25:18 <alpounet> good luck hah
19:26:00 <stepkut> this will be the 4th timeout manager that Happstack has had
19:26:14 <stepkut> hopefully.. 4th and final :p
19:27:21 <alpounet> yeah, how come you had to change it that many times?
19:29:17 <stepkut> originally we just forked off a separate watchdog thread for each incoming request that would kill it after inactivity. But it turns out that calling 'threadDelay' from thousands of threads does not scale well at all. Uses up a bunch of RAM and performance sucks. (At least it did at the time)
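(Roughly what that first approach looks like — a hypothetical sketch, not Happstack's original code, and without the inactivity-reset the real version needed:)

```haskell
import Control.Concurrent (forkIO, killThread, myThreadId, threadDelay)

-- one watchdog thread per request: sleep for the timeout, then kill the
-- handler. With thousands of concurrent connections this means thousands
-- of threads parked in threadDelay, which is where the RAM use and
-- scaling problems came from.
withWatchdog :: Int -> IO () -> IO ()
withWatchdog timeoutSecs handler = do
    handlerTid <- myThreadId
    watchdog   <- forkIO $ do
        threadDelay (timeoutSecs * 1000000)
        killThread handlerTid        -- only matters if the handler is still running
    handler
    killThread watchdog              -- handler finished in time; stop the watchdog
```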
19:29:48 <stepkut> then we used snap's timeout manager for a while. I don't remember why we stopped
19:29:54 <stepkut> then we used warp's
19:31:20 <stepkut> hmm. SSL is still broken as well... but less broken :)
19:47:24 <donri> st3pcut: unimportant low-priority ideas for new website + logbot: have a pretty interface for reading logs, not just serve .txt, and maybe show on the front page something like "3 people are talking right now in #happs, click here to join"
20:19:02 <donri> something like http://irclog.perlgeek.de/
20:32:41 <donri> ohai one of all the stepkut incarnations
20:32:51 <stepkut> :)
20:33:10 <donri> did you totally miss what i just said because of your dissociative identity disorder?
20:33:51 <stepkut> oo
20:33:55 <stepkut> pretty logs are a possibility
20:34:26 <donri> and perhaps searchable
20:35:16 <stepkut> http://code.google.com/p/happstack/issues/detail?id=174
20:35:23 <donri> full-text search should be easy with ixset + stemmer + tokenize
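(Roughly the shape of that, assuming the classic ixset API — a minimal sketch with a naive word-splitter standing in for a real tokenize + stemming pass; all the names here are illustrative, not from the logbot:)

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Char (isAlphaNum, toLower)
import Data.IxSet (Indexable(..), IxSet, ixFun, ixSet, (@=))
import qualified Data.IxSet as IxSet
import Data.Typeable (Typeable)

newtype Token = Token String deriving (Eq, Ord, Typeable)

data LogLine = LogLine
    { lineId   :: Int
    , lineText :: String
    } deriving (Eq, Ord, Typeable, Show)

-- crude tokenizer: lowercase, split on non-alphanumerics
tokens :: String -> [Token]
tokens = map (Token . map toLower) . words . map (\c -> if isAlphaNum c then c else ' ')

instance Indexable LogLine where
    empty = ixSet
        [ ixFun (\l -> [lineId l])      -- primary key index
        , ixFun (tokens . lineText)     -- full-text index over tokens
        ]

-- all log lines containing the given word
search :: String -> IxSet LogLine -> [LogLine]
search w ix = IxSet.toList (ix @= Token (map toLower w))
```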
20:35:27 <stepkut> updated
20:35:53 <donri> although perhaps you don't want to keep all the logs in memory
20:36:06 <stepkut> yeah, I just keep the logs on disk
20:36:20 <stepkut> since they are append-only and there is only one writer
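(i.e. nothing fancier than something like this — a hypothetical sketch of a single-writer, append-only log file:)

```haskell
import System.IO

-- the bot is the only writer, so plain append-mode writes are enough;
-- readers can re-read the file at any time without any coordination
appendLogLine :: FilePath -> String -> IO ()
appendLogLine path line =
    withFile path AppendMode $ \h -> do
        hSetBuffering h LineBuffering
        hPutStrLn h line
```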
20:36:24 <donri> how many MBs total?
20:38:45 <stepkut> around 9MB I believe
20:38:53 <stepkut> 12MB
20:39:26 <stepkut> the easiest way to get started would be to just include a google search box on the page
20:39:44 <donri> obviously ixset adds some overhead, but that doesn't sound like a lot to keep in memory
20:39:55 <stepkut> I do need an acid-state, ixset, full-text search engine though for many reasons
20:40:00 <donri> true. but i don't like google custom search much for some reason :)
20:40:07 <stepkut> me neither
20:40:32 <stepkut> mostly because it does not know how to exploit the structure of the data as well as a custom search could
20:40:33 <donri> it's nice that it exists for quick hacks, but certainly not comparable to a custom-rolled solution
20:40:41 <stepkut> right
20:41:04 <stepkut> hence 'easiest way to get *started*' :p
20:41:09 <donri> indeed
20:41:22 <donri> it might certainly be a good start / better than nothing
20:41:51 <stepkut> yeah
21:01:30 <stepkut> ok.. I think I have a hackaround for the timeout manager RAM issue
21:02:05 <stepkut> now there is one last oddity left..
21:02:56 <stepkut> If I use httperf to hammer on the SSL server.. it does fine for the first 5000-8000 requests.. and then performance goes way down. But then if I run it again, I see the same thing..
21:03:05 <stepkut> somehow giving it a little break seems to help things :-/
21:08:09 <donri> sounds like some race condition / "clogging"
21:10:49 <stepkut> yeah.. not sure why
21:10:57 <stepkut> ... yet :)
21:13:53 <stepkut> last bug blocking Happstack 7 though I think
21:14:33 <donri> is it consistently 5k+ requests or within a short period?
21:14:41 <donri> doesn't sound like a big issue if the latter
21:15:39 <donri> i mean if you do 4k requests, wait a minute and then another 4k? does it happen then?
21:16:01 <stepkut> if you do 4k requests, wait less than a second and do it again, it is fine..
21:16:24 <stepkut> for all I know.. the problem is with httperf itself.. though that seems less likely
21:16:30 <donri> is anyone actually having >5k rqs with ssl using happstack?
21:17:03 <stepkut> well.. I don't want to release buggy code just because no one I know of is currently affected
21:17:17 <donri> true, but if it's blocking and very rare... :P
21:17:26 <donri> and you're sure it's only SSL?
21:17:37 <stepkut> most likely
21:17:58 <stepkut> I need to retest the non-SSL code. It certainly did not have this problem in the past
21:18:00 <donri> because i don't easily get much more than 8k rqs without ssl anyway, on my workstation
21:18:22 <donri> but i might be misunderstanding what you mean by "performance way down"
21:18:46 <stepkut> with ssl it drops from 900 rqs/second to 20 or 0
21:18:53 <stepkut> on this particular test
21:19:22 <donri> and it's specifically about happstack? have you tested other ssl servers?
21:19:39 <donri> i'd expect ssl to be more cpu intensive either way, for starters
21:19:57 <donri> i'm probably just saying things that are obvious to you though :D