[feature request] Skip hash check

Suggestions and discussion of future versions
方向音痴
New User
Posts: 9
Joined: Tue Mar 02, 2010 10:13 pm

[feature request] Skip hash check

Post by 方向音痴 »

It would be awesome if Deluge, like uTorrent, had an option to skip the hash check when adding torrents. I went from rTorrent -> uTorrent (under Wine) -> Deluge. The reason I switched to uTorrent via Wine (yuck) was the read caching that uTorrent supported, which greatly reduced I/O on the machine and allowed for higher seed speeds (my box is pretty much limited only by I/O or peers, not by the internet connection).

Anyway, when switching from uTorrent to Deluge I had to rehash about 2 TB worth of data, which took about two hours to complete. Had there been an option to skip the hash check, that would have saved me a bunch of time (time during which I couldn't seed as well). The only reason it took just two hours is that the machine has a fast CPU and a fast RAID array: it was CPU-limited (Deluge using 100% CPU) and hashing at around 260 megabytes/sec, and it still took about two hours even at that rate =(
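
For what it's worth, this is essentially what libtorrent calls "seed mode": the torrent is assumed complete up front and pieces are only verified lazily, with a failed hash kicking the torrent out of seed mode into a normal recheck. A minimal sketch of adding a torrent that way, assuming a libtorrent build whose Python bindings expose torrent_flags.seed_mode (the paths below are placeholders):

Code:

import libtorrent as lt

# Assumes libtorrent >= 1.2 Python bindings; paths are placeholders.
ses = lt.session()
params = lt.add_torrent_params()
params.ti = lt.torrent_info('/path/to/file.torrent')
params.save_path = '/path/to/completed/data'
# seed_mode: trust the data on disk and skip the upfront hash check;
# pieces are hashed lazily as peers request them.
params.flags |= lt.torrent_flags.seed_mode
handle = ses.add_torrent(params)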
loki
Moderator
Posts: 787
Joined: Tue Dec 04, 2007 3:27 pm
Location: MI, USA

Re: [feature request] Skip hash check

Post by loki »

The problem with this specific example is that Deluge has no idea whether the data is reliable, or whether it is what it should be.
方向音痴
New User
Posts: 9
Joined: Tue Mar 02, 2010 10:13 pm

Re: [feature request] Skip hash check

Post by 方向音痴 »

loki wrote: The problem with this specific example is that Deluge has no idea whether the data is reliable, or whether it is what it should be.
Well, it shouldn't have to. This is useful when you are moving from one program to another with a lot of data: you can skip hashing because you *know* it's already 100% complete and the files are OK, and maybe force the check if, say, one of the files is missing. This is pretty much what uTorrent allows, and it would be nice to have the same feature in Deluge. It also helps when I add torrents that I downloaded with a different program (such as rTorrent), as I typically stay away from GUI-based programs, but rTorrent can't do read caching like Deluge can.
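
A hypothetical middle ground would be to skip the recheck only when every file listed in the .torrent exists on disk with the expected size, and fall back to a full hash check otherwise. A sketch of that pre-flight test (illustrative only, not Deluge's actual code; can_skip_recheck is a made-up helper):

Code:

import os
import libtorrent as lt

def can_skip_recheck(torrent_path, save_path):
    # Trust the data only if every file exists with the expected size.
    fs = lt.torrent_info(torrent_path).files()
    for i in range(fs.num_files()):
        p = os.path.join(save_path, fs.file_path(i))
        if not os.path.isfile(p) or os.path.getsize(p) != fs.file_size(i):
            return False  # missing or short file: do the real hash check
    return True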

Otherwise I am very happy with Deluge's performance. I am getting nearly the same speeds as uTorrent at only 60-75% of the iowait I was seeing with uTorrent. The only downside so far is that CPU usage is about double (20-30% CPU with encryption enabled and 15-20% with it disabled, where uTorrent was 7-10% under Wine with no encryption). I am much more concerned with I/O load than CPU load, though. I think what helped was being able to set my cache size really high, since it's a 64-bit program. I did have to modify some of the source files, as the cache was capped at 99,999 16 KiB blocks (around 1.6 GB); I upped it to 400,000 16 KiB blocks, which allows Deluge to use around 6.2 GB of memory (I might raise it a bit more).
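
For reference, in libtorrent 1.x the disk cache is sized in 16 KiB blocks and, once the hard-coded cap is lifted, raising it is just a session setting (the cache_size setting was removed in libtorrent 2.0, which relies on the OS page cache instead). A sketch, assuming the 1.x Python bindings:

Code:

import libtorrent as lt

ses = lt.session()
# cache_size is counted in 16 KiB blocks (libtorrent 1.x only):
# 400,000 blocks * 16 KiB = ~6.1 GiB of cache.
ses.apply_settings({'cache_size': 400000})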

That said, and a bit off topic, I was curious what exactly the read cache hit ratio value means. I might be blind, but I didn't see anything about it in the docs:

[screenshot: Deluge cache statistics panel showing the read cache hit ratio]
loki
Moderator
Posts: 787
Joined: Tue Dec 04, 2007 3:27 pm
Location: MI, USA

Re: [feature request] Skip hash check

Post by loki »

I'm thinking the best way to describe the read/write cache hit ratio is as a grade: a way of telling you how well the cache is doing for the files it has cached, with a maximum of 1 being perfect.
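
Concretely, the ratio is presumably derived from libtorrent's cache counters: blocks served from the cache divided by total blocks read. An illustrative sketch using the 1.x cache_status fields (my reading of the counters, not necessarily Deluge's exact code):

Code:

# 'ses' is an existing libtorrent 1.x session.
status = ses.get_cache_status()
# blocks_read: total 16 KiB blocks read; blocks_read_hit: served from cache.
hit_ratio = status.blocks_read_hit / max(status.blocks_read, 1)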
方向音痴
New User
Posts: 9
Joined: Tue Mar 02, 2010 10:13 pm

Re: [feature request] Skip hash check

Post by 方向音痴 »

First, sorry for the wrong reply and for continuing to go off topic.

OK, I thought 1 might be the case. Since I am getting 0.98, that seems very good, though I guess it also depends on the size of each read, or are they always the same block size? Can I take it to mean that 98% of reads are being served from cache?

I tried upping the cache even further, to 700,000 blocks (seven times the normal maximum; 700,000 × 16 KiB ≈ 10.7 GiB), and I see Deluge using up to 10 GB of RAM now:

Code:

Tasks: 565 total,   2 running, 562 sleeping,   0 stopped,   1 zombie
Cpu(s):  3.9%us,  1.4%sy,  0.0%ni, 88.4%id,  2.1%wa,  1.4%hi,  2.8%si,  0.0%st
Mem:  24734748k total, 24608192k used,   126556k free,   156400k buffers
Swap:  1048568k total,   431784k used,   616784k free, 11352640k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 9827 xevious   40   0 11.4g  10g  15m S   35 45.9   1031:02 deluge
 9691 mysql     40   0  664m 143m 5056 S    0  0.6   1431:02 mysqld
10684 root      40   0  359m 100m 7544 S    0  0.4   0:54.79 gkrellm2
23370 root      40   0 2594m  76m 9020 S    0  0.3   0:12.75 SoF2MP-Test.exe
I think this is helping even more, because I have about 2 TB worth of torrents spread across two different devices (sdc and sdd): sdc is a RAID 6 array of disks and sdd is a RAID 0 array of SSDs. Running iostat -xm 10 (10-second averages), %util on sdc is around 10-20% now and almost never goes over 20%, where 30-40% during heavy seeding used to be normal. sdd still seems to sit around 7-10% %util, though:

Code:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.62    0.01    7.26    2.71    0.00   85.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.20     1.00   76.22   10.49     9.93     0.18   238.88     1.66   19.18   2.36  20.48
sdc1              0.20     1.00   76.22   10.49     9.93     0.18   238.88     1.66   19.18   2.36  20.48
sdb               0.00     0.40    0.00    0.60     0.00     0.00    13.33     0.00    0.00   0.00   0.00
sdb1              0.00     0.40    0.00    0.60     0.00     0.00    13.33     0.00    0.00   0.00   0.00
sdd               0.30     0.00  208.59    0.00    30.18     0.00   296.34     1.13    5.40   0.54  11.19
sdd1              0.30     0.00  208.59    0.00    30.18     0.00   296.34     1.13    5.40   0.54  11.19
sda               0.00     0.00    1.30    0.20     0.01     0.00    11.20     0.01   10.00   6.67   1.00
sda1              0.00     0.00    1.30    0.20     0.01     0.00    11.20     0.01   10.00   6.67   1.00
sda2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.01    0.00    5.64    1.97    0.00   88.37

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.00     0.00   54.55    1.90     8.05     0.02   292.81     1.27   22.21   2.87  16.18
sdc1              0.00     0.00   54.55    1.90     8.05     0.02   292.81     1.27   22.21   2.87  16.18
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd               0.10     0.00  209.19    0.00    30.89     0.00   302.42     0.85    4.07   0.46   9.59
sdd1              0.10     0.00  209.19    0.00    30.89     0.00   302.42     0.85    4.07   0.46   9.59
sda               0.00     2.20    0.70    3.40     0.01     0.02    16.39     0.02    4.15   1.95   0.80
sda1              0.00     0.50    0.70    2.90     0.01     0.02    13.78     0.01    4.17   2.22   0.80
sda2              0.00     1.70    0.00    0.40     0.00     0.01    42.00     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00    0.00    0.10     0.00     0.00     8.00     0.00   20.00  20.00   0.20

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.33    0.00    5.33    1.67    0.00   88.67

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.40     0.00   54.79    1.50     7.94     0.02   289.57     0.65   11.72   2.07  11.68
sdc1              0.40     0.00   54.79    1.50     7.94     0.02   289.57     0.65   11.72   2.07  11.68
sdb               0.00     0.40    0.90    1.30     0.01     0.01    12.00     0.02    8.64   8.64   1.90
sdb1              0.00     0.40    0.90    1.30     0.01     0.01    12.00     0.02    8.64   8.64   1.90
sdd               0.50     0.00  210.58    0.00    31.32     0.00   304.61     0.82    3.89   0.42   8.88
sdd1              0.50     0.00  210.58    0.00    31.32     0.00   304.61     0.82    3.89   0.42   8.88
sda               0.00     0.00    0.40    0.60     0.00     0.00    13.60     0.01   10.00   4.00   0.40
sda1              0.00     0.00    0.40    0.50     0.00     0.00    14.22     0.01    8.89   4.44   0.40
sda2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00    0.00    0.10     0.00     0.00     8.00     0.00   20.00  20.00   0.20

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.57    0.01    5.76    2.26    0.00   88.40

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.20     0.10   77.15    2.20    11.32     0.01   292.37     1.29   16.24   1.97  15.67
sdc1              0.20     0.10   77.15    2.20    11.32     0.01   292.37     1.29   16.24   1.97  15.67
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd               0.30     0.00  232.44    0.00    34.50     0.00   304.00     1.11    4.77   0.47  10.88
sdd1              0.30     0.00  232.44    0.00    34.50     0.00   304.00     1.11    4.77   0.47  10.88
sda               0.00     0.80    2.89    1.80     0.03     0.01    16.68     0.03    7.23   3.40   1.60
sda1              0.00     0.00    2.89    0.50     0.03     0.00    18.12     0.03   10.00   4.71   1.60
sda2              0.00     0.80    0.00    1.30     0.00     0.01    12.92     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.88    0.09    6.34    1.78    0.00   87.91

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.10     0.70   59.08    1.30     9.02     0.01   306.31     0.95   15.74   1.98  11.98
sdc1              0.10     0.70   59.08    1.30     9.02     0.01   306.31     0.95   15.74   1.98  11.98
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd               0.00     0.00  222.06    0.00    32.72     0.00   301.78     0.94    4.25   0.41   9.18
sdd1              0.00     0.00  222.06    0.00    32.72     0.00   301.78     0.94    4.25   0.41   9.18
sda               0.00     0.80    1.20    2.50     0.03     0.01    25.30     0.02    6.22   3.24   1.20
sda1              0.00     0.00    1.20    2.30     0.03     0.01    24.46     0.02    6.57   3.43   1.20
sda2              0.00     0.80    0.00    0.20     0.00     0.00    40.00     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.35    0.00    6.23    1.65    0.00   87.77

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.20     0.00   70.33    1.30    10.62     0.01   303.74     0.90   12.59   1.84  13.19
sdc1              0.20     0.00   70.33    1.30    10.62     0.01   303.74     0.90   12.59   1.84  13.19
sdb               0.00     0.00    0.00    0.50     0.00     0.00     8.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.50     0.00     0.00     8.00     0.00    0.00   0.00   0.00
sdd               1.10     0.00  209.29    0.00    31.91     0.00   312.27     0.96    4.61   0.46   9.69
sdd1              1.10     0.00  209.29    0.00    31.91     0.00   312.27     0.96    4.61   0.46   9.69
sda               0.00     0.80    2.60    0.40     0.03     0.00    26.67     0.03   10.33   4.67   1.40
sda1              0.00     0.00    2.60    0.20     0.03     0.00    25.71     0.03   11.07   5.00   1.40
sda2              0.00     0.80    0.00    0.20     0.00     0.00    40.00     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.76    0.00    6.48    1.84    0.00   87.92

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc               0.40     0.00   95.50    1.20    14.27     0.01   302.37     1.29   13.31   1.56  15.08
sdc1              0.40     0.00   95.50    1.20    14.27     0.01   302.37     1.29   13.31   1.56  15.08
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd               0.20     0.00  221.58    0.00    33.03     0.00   305.30     0.87    3.94   0.44   9.79
sdd1              0.20     0.00  221.58    0.00    33.03     0.00   305.30     0.87    3.94   0.44   9.79
sda               0.00     0.00    1.40    2.10     0.01     0.01    10.29     0.01    4.00   1.71   0.60
sda1              0.00     0.00    1.40    0.30     0.01     0.00    12.71     0.01    8.24   3.53   0.60
sda2              0.00     0.00    0.00    1.80     0.00     0.01     8.00     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
I don't suppose the default version of Deluge could be changed to let people use more than 99,999 blocks of cache, so I don't have to keep editing the source for future versions? I do understand that this would likely not help the vast majority of people; the 99k limit makes sense for most, since most people don't have 1+ gigabit of connectivity and a machine with 24 GB of RAM seeding 2 TB+ of torrents and pushing several TB per day.

Also, out of curiosity: is the read path that Deluge/libtorrent uses multi-threaded? I believe that with threading, and multiple blocks requested at the same time, things like NCQ can be used to get more seeks/sec. I know that when using a tool to test seek rates on my home system, it makes a big difference with disks in a RAID array, but not as much with a single disk. For example, testing seeks on a single drive I get:

1 Thread:

Code:

myth ~ # ./seeker_baryluk /dev/sdb 1
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdb [1465149168 blocks, 750156374016 bytes, 698 GB, 715404 MB, 750 GiB, 750156 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 45 seeks/second, 21.755 ms random access time (293246830 < offsets < 749064701819)
16 Threads:

Code:

myth ~ # ./seeker_baryluk /dev/sdb 16
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdb [1465149168 blocks, 750156374016 bytes, 698 GB, 715404 MB, 750 GiB, 750156 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 45 seeks/second, 21.882 ms random access time (122414226 < offsets < 749984069382)
128 Threads:

Code:

myth ~ # ./seeker_baryluk /dev/sdb 128
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdb [1465149168 blocks, 750156374016 bytes, 698 GB, 715404 MB, 750 GiB, 750156 MiB]
[512 logical sector size, 512 physical sector size]
[128 threads]
Wait 30 seconds..............................
Results: 48 seeks/second, 20.464 ms random access time (119719930 < offsets < 749150894831)
So not much of a difference there, but for a RAID array it makes a huge difference:

1 Thread:

Code:

root@sabayonx86-64: 12:55 AM :~# ./seeker_baryluk /dev/sdc 1
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdc [34951163904 blocks, 17894995918848 bytes, 16666 GB, 17065998 MB, 17894 GiB, 17894995 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 74 seeks/second, 13.405 ms random access time (9496586245 < offsets < 17893578116065)
32 Threads:

Code:

root@sabayonx86-64: 12:57 AM :~# ./seeker_baryluk /dev/sdc 32
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdc [34951163904 blocks, 17894995918848 bytes, 16666 GB, 17065998 MB, 17894 GiB, 17894995 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 1353 seeks/second, 0.739 ms random access time (575427454 < offsets < 17893811031939)
128 Threads:

Code:

root@sabayonx86-64: 12:57 AM :~# ./seeker_baryluk /dev/sdc 128
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdc [34951163904 blocks, 17894995918848 bytes, 16666 GB, 17065998 MB, 17894 GiB, 17894995 MiB]
[512 logical sector size, 512 physical sector size]
[128 threads]
Wait 30 seconds..............................
Results: 2244 seeks/second, 0.446 ms random access time (55589488 < offsets < 17894772902586)
I also noticed that with this method all the disks in my RAID array appear to be active at once (the activity lights almost all appear solid) even though it's doing random reads. When I am seeding torrents with pretty much any application I have tried (uTorrent, rTorrent, etc.), the disks flash a lot, but they don't show activity at the same time; the activity just jumps from disk to disk very quickly. With rTorrent I saw very heavy iowait (30-40% of 4 CPUs on a quad-core system), and even then the disks didn't look as solid as they do under a multi-threaded seeks/sec test.

I haven't watched my server while seeding with Deluge yet (it's remote, so it's not easy for me to do whenever I want). I was curious whether multi-threaded I/O could help Deluge in the I/O department some more, if it isn't using it already. Here is the source code for the multi-threaded seeker program I am using:

http://box.houkouonchi.jp/seeker_baryluk.c
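
The gist of that benchmark in Python, for anyone who doesn't want to compile the C version. This is a toy sketch, not the tool itself: the device path and counts are placeholders, it needs read access to the raw device, and without O_DIRECT the kernel's page cache can inflate the numbers:

Code:

import os, random, threading

DEV = '/dev/sdb'        # placeholder: block device to test
THREADS = 16            # concurrent readers, to give NCQ something to reorder
SEEKS_PER_THREAD = 500  # random 512-byte reads per thread

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)  # device size in bytes
os.close(fd)

def worker():
    fd = os.open(DEV, os.O_RDONLY)
    try:
        for _ in range(SEEKS_PER_THREAD):
            os.pread(fd, 512, random.randrange(0, size - 512))
    finally:
        os.close(fd)

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()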

Also, going from 1 to 128 threads didn't change %util during the test (the device is pinned near 100% either way); it just produced far higher seek rates at that utilization:

1 thread, 10-second average:

Code:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.43    0.00    0.31   12.24    0.00   86.02

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc              10.20     0.10   75.30    0.90     0.33     0.00     9.08     1.00   13.12  13.11  99.92
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.00     0.10    0.10    0.70     0.00     0.00     9.00     0.00    1.50   1.50   0.12
sdc3             10.20     0.00   75.20    0.20     0.33     0.00     9.08     1.00   13.25  13.24  99.80
128 threads, 10-second average:

Code:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.44    0.00    0.90   16.71    0.00   80.95

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc             276.00     0.00 2247.70    0.60     9.86     0.00     8.98   128.00   56.76   0.44 100.02
sdc1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc2              0.60     0.00    4.40    0.10     0.02     0.00     9.07     0.28   62.22  48.13  21.66
sdc3            275.40     0.00 2243.30    0.50     9.84     0.00     8.98   127.72   56.75   0.45 100.02