performance wrote:
Cache is meant to reduce the number of disk writes, but neither of the two did a good job at that; it was more a delay of disk writes than anything else.

Yep, I guess we all know that. But I was wondering about one thing: the libtorrent documentation says that using compact allocation is better for the cache. As a matter of fact, compact allocation causes more disk access than full allocation, because compactly allocated pieces need to be moved to their correct place, in some cases several times. Moving pieces to their correct position causes a rolling effect: when you relocate a piece, you first need to relocate the piece occupying the relocated piece's slot, which can set off a recursive chain of moves before it's done.
I do understand that if the disk cache is a really lousy one, compact allocation might be a good idea, because until you're roughly halfway done you can add most pieces directly to the end of the file. But past that point you'll need to relocate, and relocate to relocate, and so on, which really doesn't make any sense. I'm quite sure that full allocation is much better than compact allocation in terms of disk access.
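To make the rolling effect concrete, here is a toy Python model. It is not libtorrent's actual algorithm, just my reading of compact allocation: a piece whose final slot is past the current end of the file is appended; a piece whose final slot is occupied evicts the occupant to the end; and at the end, the misplaced pieces are cycled into place. The write counts are approximate (one piece-sized write per move, assuming moves go through an in-memory buffer).

import random

# Toy model of compact allocation's relocation cascade -- NOT libtorrent's
# real code, just an illustration. slots[k] holds the index of the piece
# currently stored at file slot k.

def compact_writes(arrival_order):
    n = len(arrival_order)
    slots = []                    # current on-disk layout
    writes = 0
    for piece in arrival_order:
        if piece < len(slots):
            # Final slot exists but is occupied: move the occupant to the
            # end of the file, then write the new piece into its slot.
            slots.append(slots[piece])
            slots[piece] = piece
            writes += 2
        else:
            # Final slot is past the current end: just append for now.
            slots.append(piece)
            writes += 1
    # Cycle the remaining misplaced pieces into their final slots; each
    # swap displaces another piece, which is the "rolling effect".
    for k in range(n):
        while slots[k] != k:
            j = slots[k]
            slots[k], slots[j] = slots[j], slots[k]
            writes += 1           # one piece rewritten into place
    return writes

random.seed(1)
order = random.sample(range(1000), 1000)  # pieces complete in random order
print("compact allocation:", compact_writes(order), "piece writes")
print("full allocation   :", 1000, "piece writes")  # one write per piece

The printout shows compact allocation needing considerably more piece writes than full allocation, which only ever writes each piece once into its final position.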
A lousy disk cache might not allow seeking within a file without flushing, even if it supports "append only" buffered writes. Does Windows stage random writes in its cache? I really don't know; maybe not. That would be a clear reason why there is so much disk access when making "random" writes to fully allocated files.
How large are the chunks of data that libtorrent or Deluge writes to disk? 64 KiB? Smaller, larger? There might be room for a kind of cache here: I mean a simple data buffer which would hold data before it's written out, for example writing out only complete pieces. In some cases that might take up some memory. Say I'm downloading 50 pieces of 2 MiB in parallel: that would take up 100 MiB of memory, and in case of a crash I would lose on average half of that.
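Something along these lines, for example. This is only a sketch of the idea, not Deluge's or libtorrent's actual API (the class and its methods are made up): stage the incoming 16 KiB blocks in RAM and flush each piece with a single seek and write once it is complete, assuming the file was pre-allocated (full allocation).

BLOCK_SIZE = 16 * 1024            # typical BitTorrent block/request size

class PieceWriteBuffer:
    """Stage incoming blocks in RAM; flush a piece to disk only when it
    is complete. Assumes a pre-allocated file and a piece size that is a
    multiple of BLOCK_SIZE (the short last piece is ignored here)."""

    def __init__(self, path, piece_size):
        self.path = path
        self.piece_size = piece_size
        self.blocks_per_piece = piece_size // BLOCK_SIZE
        self.pending = {}          # piece index -> {block index: data}

    def add_block(self, piece, block, data):
        blocks = self.pending.setdefault(piece, {})
        blocks[block] = data
        if len(blocks) == self.blocks_per_piece:
            self._flush(piece)     # piece complete: one seek, one write

    def _flush(self, piece):
        blocks = self.pending.pop(piece)
        payload = b"".join(blocks[i] for i in range(self.blocks_per_piece))
        with open(self.path, "r+b") as f:
            f.seek(piece * self.piece_size)
            f.write(payload)

    def ram_usage(self):
        # e.g. 50 in-flight 2 MiB pieces can stage up to 100 MiB here
        return sum(len(d) for p in self.pending.values() for d in p.values())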
Writing out only complete pieces would be one solution for those who wish to reduce the number of disk writes and have plenty of disk space, and most probably a very fast net connection. So what if I lose 50 megabytes of data? Downloading it again only takes about five seconds on a 100 Mbit/s connection. And I have 4 GiB of memory anyway, which is mostly "unused", or used for the system disk cache. Also, whenever a piece is needed for uploading, it would be read completely into memory.
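The upload side could work the same way, with a small whole-piece read cache. Again just a sketch under the same assumptions (made-up class name, arbitrary 50-piece cap): each requested piece is read from disk sequentially in one go, then served from RAM until it is evicted.

from collections import OrderedDict

class PieceReadCache:
    """Serve upload requests from whole pieces held in RAM (LRU),
    so a hot piece is read from disk at most once while cached."""

    def __init__(self, path, piece_size, max_pieces=50):
        self.path = path
        self.piece_size = piece_size
        self.max_pieces = max_pieces
        self.cache = OrderedDict()               # piece index -> piece bytes

    def read_block(self, piece, offset, length):
        if piece not in self.cache:
            if len(self.cache) >= self.max_pieces:
                self.cache.popitem(last=False)   # drop least recently used
            with open(self.path, "rb") as f:
                f.seek(piece * self.piece_size)
                self.cache[piece] = f.read(self.piece_size)  # one big read
        self.cache.move_to_end(piece)            # mark as recently used
        return self.cache[piece][offset:offset + length]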