11:20:25 <gienah> Does this seem like a reasonable fix for happstack-state-6.1.0 failing to build with ghc 6.12.3 and mtl-1: http://hpaste.org/49303
16:54:44 <lpeterse> is the author of ixset available?
17:07:14 <Lemmih> The maintainer of ixset is available.
17:10:26 <stepkut> lpeterse: what's the question?
17:12:14 <lpeterse> i'm developing a happstack application with acid and ixset and it is leaking space. from what i profiled i suspect it originates from ixset, but i'm not sure. help is appreciated
17:13:00 <stepkut> ah
17:13:15 <stepkut> what makes you think it is leaking space?
17:13:51 <lpeterse> https://gist.github.com/1097658
17:14:21 <lpeterse> i run the application and with every request it allocates more memory without releasing it
17:14:42 <lpeterse> quite linearly
17:15:38 <Lemmih> You should do a heap profile.
17:16:16 <lpeterse> you mean what i get from +RTS -s? sorry, i'm not that familiar with that kind of problem yet
17:17:10 <Lemmih> http://www.haskell.org/ghc/docs/7.0.2/html/users_guide/prof-heap.html
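For reference, the heap-profile breakdowns from the linked user's guide are selected with RTS flags (the binary has to be compiled with -prof for any of them to work):

    +RTS -hc   break the heap down by cost centre
    +RTS -hm   break the heap down by module
    +RTS -hd   break the heap down by closure description
    +RTS -hy   break the heap down by type
    +RTS -hr   break the heap down by retainer set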
17:17:58 <stepkut> lpeterse: did you compile with -auto-all ?
17:18:17 <lpeterse> no, just with -prof
17:19:04 <stepkut> you should also compile with -auto-all -auto-caf, or you won't really get any information about what is happening due to the code you wrote
17:19:23 <lpeterse> but i will. give me a few minutes
17:19:50 <stepkut> you should probably do a -fforce-recomp when you do that
17:20:02 <stepkut> to ensure all your modules are recompiled with those flags enabled
17:22:00 <lpeterse> that would have been my next question. i couldn't figure out how to enable these flags in cabal. doing it according to the documentation gives a warning that these flags are outdated. currently i recompiled the executable separately with the given flags
17:24:41 <Lemmih> Cabal has ghc-prof-options which are enabled when you compile with --enable-executable-profiling.
17:28:16 <lpeterse> auto-caf doesn't seem to be a valid option. is it caf-all?
17:28:27 <stepkut> lpeterse: sorry, it is caf-all
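A minimal sketch of the profiling setup being discussed, assuming a hypothetical executable stanza in the project's .cabal file (these are the GHC 6.12/7.0-era flag spellings; later GHCs renamed them to -fprof-auto and -fprof-cafs):

    # in the executable section of the .cabal file:
    #   ghc-prof-options: -auto-all -caf-all
    cabal configure --enable-executable-profiling
    cabal build

    # or, compiling by hand with a forced recompile (Main.hs is a placeholder):
    ghc --make -prof -auto-all -caf-all -fforce-recomp Main.hs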
17:28:36 <lpeterse> thank you
17:38:28 <lpeterse> will you still be around later? I'll try to do some more profiling with the new options you suggested and then some of you can hopefully help me read it
17:41:24 <Lemmih> I'm still here.
17:41:42 <lpeterse> okay, see you then. thank you so far
18:58:06 <balor_> #documentcloud
18:58:10 <balor_> shit
18:58:40 <stepkut> :p
21:18:45 <lpeterse> @Lemmih: So, I'm back and did my homework, but there are several things I don't understand. First of all, here is my collected data: https://gist.github.com/1098226
21:18:45 <lambdabot> Unknown command, try @list
21:21:05 <stepcut> lpeterse: that data shows where memory was allocated.. but not whether it was freed or not.. do you have heap profiles as well?
21:21:24 <lpeterse> I started the application, and right after acid replayed the state, top told me it had acquired 40Mb of memory. After 10000 wget requests memory consumption was 100Mb (no data was added, just serving). How do I interpret the 16Mb the RTS is reporting?
21:22:24 <stepcut> the garbage collector will often allocate more RAM than is actually being used
21:22:33 <stepcut> gotta run, bbl.
21:24:34 <stepcut> under heavy load, happstack-server will (according to top, etc) consume 100MB of RAM. Which is a bit high. Hopefully that will go down when we switch to warp. But it doesn't grow unbounded.. it does level out. By tuning the garbage collector flags you can bring that number down a fair bit (by trading off performance)
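A sketch of the kind of GC tuning stepcut mentions; the flags are real RTS options, but the sizes are only illustrative and ./homment merely stands in for the server binary (on GHC 7 the binary also needs to be linked with -rtsopts to accept them):

    # smaller nursery, compacting collection of the oldest generation,
    # and a hard cap on the total heap size:
    ./homment +RTS -A256k -c -M256m -RTS

A smaller allocation area means more frequent minor collections, which tends to lower the resident size at some cost in throughput.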
21:25:45 <stepcut> i think the next version of GHC might also help.. i think it allocates less RAM per thread
21:28:01 <lpeterse> the 100Mb is still quite okay. a few days ago I ran the application in a production-like environment and memory consumption was about 1Gb after one day
21:29:28 <lpeterse> do you think I could successfully tackle the problem with gc flags, and that there isn't necessarily a problem in the program itself?
21:32:14 <lpeterse> here is the heap-profile: http://dl.dropbox.com/u/1021691/homment-heap-profile.pdf
21:32:31 <Lemmih> That looks good.
21:34:04 <lpeterse> Could you explain that a little bit to me? What do you read from it?
21:36:52 <Lemmih> That it doesn't leak memory. The space taken by heap objects is constant.
21:37:56 <Lemmih> What does +RTS -s say?
21:38:49 <Lemmih> The key things to look for are "maximum residency" and "total memory in use".
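For reference, those two lines look roughly like this in +RTS -s output (the numbers here are made up, not taken from lpeterse's run):

      16,435,672 bytes maximum residency (8 sample(s))
              46 MB total memory in use (0 MB lost due to fragmentation)

"maximum residency" is the peak amount of live data, while "total memory in use" is what the RTS has actually requested from the OS, including GC overhead.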
21:40:15 <lpeterse> Sounds positive. But what I still don't understand is why the process allocates more and more memory from the os. Why doesn't it reuse the memory that has been freed? The stored data (comments) only takes about 500kb on disk, but the running program holds 1gb according to top after 24 hours.
21:41:02 <lpeterse> the +RTS -s output is pasted in the gist above
21:43:06 <Lemmih> You need to stress test your program consistently. Otherwise it's impossible to correlate the data.
21:43:33 <Lemmih> The data from "+RTS -s" looks very different from "+RTS -hc".
21:44:53 <Lemmih> Also, "+RTS -s -p" will generate different results than "+RTS -s".
21:46:05 <lpeterse> you're right. it wasn't the same run, but the test scenario was the same. The state was the same in both cases and the only thing I did was request a page 10000 times with wget. But I can repeat it, of course
21:46:42 <lpeterse> so, what profiling options do you suggest?
21:47:02 <Lemmih> +RTS -hc -s
21:47:14 <lpeterse> okay. give me another 5 minutes
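The usual way to get from that command to a PDF like the ones linked here, as a sketch (homment again stands in for the actual executable name):

    ./homment +RTS -hc -s -RTS     # writes homment.hp while running, prints GC stats on exit
    hp2ps -c homment.hp            # render the heap profile to homment.ps (-c = colour)
    ps2pdf homment.ps              # optional: convert the PostScript to PDF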
21:58:45 <lpeterse> heap-profile: http://dl.dropbox.com/u/1021691/profiling-with-RTS-hc-s.pdf statistics: https://gist.github.com/1098321
22:04:36 <lpeterse> apart from the profiling options, the conditions were the same. nonetheless it seemed like there was a plateau at 70Mb and memory consumption did not crack 100Mb this time. Obviously I'm tracking a heisenbug here
22:09:38 <Lemmih> I think you're reading the wrong value from top.
22:10:25 <Lemmih> GHC reports that it used 11 megs, including all GC overhead.
22:13:04 <lpeterse> Sounds rational. I'll check this, just to be sure.
22:35:40 <lpeterse> argggh, according to the _real_ top everything is fine and consistent with the profiling. It's htop that is reporting weird values.
22:42:47 <lpeterse> So, for the moment my problem seems solved. Thanks Lemmih and stepcut. I owe you a beer at the next hackathon :-)
22:53:41 <stepkut> lpeterse: sweet!