Smoothening upload distribution. Hope for a discussion.

Feature requests not specific to either the Mac OS X or GTK+ versions of Transmission
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

Hello Dear forum and maybe devs ;)
It's my first post, so first goes my thanks for the work :) I think it's pretty obvious that I use Transmission on my Linux and Mac boxes. Windows belongs to uTorrent, though I haven't booted it in a month or so.

[ Umm, after seeing how much I have written, you may want to skip to the Actual Request part at the end, and if you are interested keep reading the sections backwards =]

{edit}
Version: Transmission 1.82 (10007)
Box: Debian on kernel 2.6.32-trunk-amd64 ( irrelevant I think )

Nevertheless, let's get back to the subject. I'd like to present my problem and put it up for discussion, as I am not fully convinced my idea is right. If it is, maybe we can solve it :).

My problem:
I get a pretty nasty web/internet experience whenever I let Transmission have a decent upload. The term "nasty web exp" seems pretty vague, and it is. What I mean, by example, is that pages load in a strange manner, and sometimes connections get broken. It is not only the web; IMs and probably ssh also get influenced. By strange I mean a page freezes, loads a bit, freezes again, and so on. Generally I _think_ I am getting data in bursts at large intervals.
Of course I keep bandwidth caps in Transmission at a certain level. To be precise, my connection's theoretical throughput is 5120/640 kbps (down/up), which means 640/80 kBps, so I set the limits at 600/65. I think this should really be enough to handle outgoing web traffic and the rest of the stuff. The problem also remains with a lower download limit, or when only seeding, so I consider it to be connected with upload. Transmission reports that it keeps under the limit, and I more or less believe it.

My thesis:
My first idea was that the number of connections was the problem, but changing it didn't help much. My current idea, for which I have some evidence, is that Transmission keeps under the 'per second' limit but pumps everything out at once, and nothing on the way up to the router's WAN device throttles it (why would it, anyway). What I think is going on is that my ISP's devices, which stand on guard so that I don't use more bandwidth than I am paying for, work this way:

For each 0.1s (_time_interval_): accept incoming packets (e.g. send() requests) and calculate the accumulated traffic. // Upload only; I think it has to work with some finite resolution, and that resolution being low is the problem
If traffic_made > limit * _time_interval_ // meaning I have exceeded my 'interpolated' bandwidth limit (so if this kept up I would probably exceed the total 'per second' limit)
    Deduct the overhead from the next time slot's limit
If over_limit, delay packets

There is one missing note: if I go way above the limit, the overhead is subsequently deducted from every following time slot. Example: I manage to send half of my per-second limit (40 KB in my case) in 0.1 seconds. As a result, I won't be able to send anything for the next 0.4 seconds.
Makes sense to me.
Actually they have to do something like this; the method might be more sophisticated, though I doubt it. And it would be stupid to make the time interval 1s, because my whole net would bloat :) Imagine getting banned for a second every second if, let's say, a torrent upload went uncapped.
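For the sake of discussion, here is a tiny Python simulation of the throttle I am imagining. To be clear, the window size, the carry-over rule, and all the numbers are my guesses, not anything I actually know about my ISP's equipment:

```python
# Toy model of the hypothesized ISP throttle: traffic is metered in fixed
# 0.1 s windows, and any excess over a window's budget is carried forward
# as "debt" that eats the following windows' budgets.
# Everything here (window size, carry-over rule) is guesswork, not fact.

LIMIT_BPS = 80 * 1024                      # 80 kBps upstream cap
SLOT_MS = 100                              # accounting window: 0.1 s
SLOT_BUDGET = LIMIT_BPS * SLOT_MS // 1000  # bytes allowed per window (8192)

def simulate(sends):
    """sends: bytes the client tries to push in each consecutive window.
    Returns the bytes that actually get through per window."""
    debt = 0
    passed = []
    for want in sends:
        if debt >= SLOT_BUDGET:
            # still paying off an earlier burst: this window is 'banned'
            debt -= SLOT_BUDGET
            passed.append(0)
        else:
            # the window accepts the burst and meters it afterwards
            debt = max(0, debt + want - SLOT_BUDGET)
            passed.append(want)
    return passed
```

Feeding it the example above (40 KB dumped into one 0.1 s window, 8 KB attempted in each window after) gives [40960, 0, 0, 0, 0, 8192]: the burst goes through and then the next 0.4 s are dead, which matches what I described.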

My research:
To support my thesis, and to avoid making stupid ideas public and getting humiliated and laughed at :), I have written a simple network stresser: an application which sends data to another endpoint at a requested bandwidth limit with a requested resolution. It takes limit_per_second, multiplies it by _resolution_ (a fraction of a second) to get the atomic packet_size, and calls send() every _resolution_ seconds with the calculated packet_size.
What I observed was that setting the limit relatively close to my bandwidth cap with a poor resolution (0.5s or above; I mostly measured with a resolution of 0.33s) resulted in the per-second output being about twice my limit (140 kBps), and then dropping to 0 for some time. That means I was able to push data at 140 kBps (way over my limit) for a while (less than a second!), and then got a 'short' ban. Well, this is still OK for this resolution; the limit for the total data per second would be 80*3.
My best was getting 140, 140 and 0 within one second, meaning I sent so much data in 0.33s that it extrapolates to 140 kBps, twice in one second, and that beats my global limit: it averages 93.3 kBps. Sometimes I got those ~140 kBps readings pretty condensed, which meant I went really overboard, but then I got periods of whole seconds (2-3 in a row) with 0 upload.
To sum up, a 5-second average would most probably be OK. A one-second average gets quite screwed. Below 1 second it looks strange, because my ISP lets me send more than the calculated average for, let's say, 0.33s would allow.
When I set my app to output data in small packets, let's say every 0.01s, the reported transfers are very nice, staying very close to whatever I set the limit to. But then I am throttling the upload myself, by sending smaller packets more frequently.
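The core of the stresser is roughly the following (a simplified sketch: error handling and the receiving/measuring side are omitted, and the names are illustrative):

```python
import socket
import time

def packet_size(limit_bps, resolution):
    # atomic packet: the per-second budget spread over one tick;
    # note the int() truncation, which comes up again later
    return int(limit_bps * resolution)

def stress(host, port, limit_bps, resolution, duration):
    """Push roughly limit_bps bytes/second to host:port by calling
    send() every `resolution` seconds with a fixed-size packet."""
    packet = b"\x00" * packet_size(limit_bps, resolution)
    sock = socket.create_connection((host, port))
    deadline = time.monotonic() + duration
    next_tick = time.monotonic()
    try:
        while time.monotonic() < deadline:
            sock.sendall(packet)
            next_tick += resolution
            # sleep until the next tick; never sleep a negative amount
            time.sleep(max(0.0, next_tick - time.monotonic()))
    finally:
        sock.close()
```

For example, with limit_bps = 70*1024 and resolution = 0.05, each send() pushes 3584 bytes, twenty times a second; with resolution = 0.5, it pushes 35840 bytes twice a second.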

One last thing: I have checked my output with Wireshark, and my observation is that Transmission outputs the data in two large chunks, approximately twice a second. I leave it to you whether to believe me, though I can present my reasoning on request.


Actual Request:
I would like to hear what you think of this idea, and, if you agree, whether a solution giving Transmission a means to smooth its upload distribution over time would be hard to implement.
Probably the developers could answer right away, as they know how Transmission is built and have enough experience to confirm whether the phenomenon I have described can occur.
Anyway, I hope for comments, and I am open to performing more tests and providing more info. I am thinking about submitting a request ticket but I am not sure if it makes sense :) Tell me what you think.

Cheers.
Last edited by luk32 on Fri Feb 05, 2010 11:20 pm, edited 1 time in total.
Longinus00
Posts: 137
Joined: Fri Aug 21, 2009 5:46 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by Longinus00 »

You're using some 80-90% of your theoretical maximum bandwidth and you're complaining it's slowing down your connection? Have you tried testing your connection (http://www.speedtest.net/) to see your real-life bandwidth? Somehow I doubt this is Transmission's fault.
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

Thanks for the reply.
Yeah, I see no reason why it wouldn't work. I have tried speedtest and I think the remaining bandwidth is enough for HTTP requests; at least I tried to rule out the insufficient-bandwidth option. I will recheck, though with my app I got a really stable upload at about 82 kBps.

Also, to be precise, I am not complaining about a slow connection; rather it's quite strange behaviour. I am also not sure it can be called Transmission's fault. I cannot rule it out, though, and I have some indirect evidence that smoothing Transmission's upload would help. Even then, you can't really blame Transmission; I'd rather put it on the throttling algorithms and strange "resonances" between the app's upload distribution and the ISP's devices/algorithms.

I will try putting the same upload limit on my testing application with a smooth upload distribution and check the behaviour. Then I will have a comparison: Transmission uploading vs. the app with smooth upload. I will post my findings then, by noon today at the latest, I think; sooner if I cannot sleep =)
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

OK, a bit late, although I have checked it pretty thoroughly.

Research:
This is output from my network stresser:

limit set to 70kBps time interval between send() calls 0.05s (measured every 0.2s)
[ 28.1798150539 ] sent: 68.6216779049 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 28.3838539124 ] sent: 68.614381147 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 28.5878200531 ] sent: 68.6388434312 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 28.7918338776 ] sent: 68.6228006208 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 28.9957590103 ] sent: 68.6526462487 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 29.1997559071 ] sent: 68.6284949598 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 29.4037590027 ] sent: 68.6264095992 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 29.6077680588 ] sent: 68.6244045643 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 29.8118009567 ] sent: 68.6163855962 kBps, rcvd: 0.0 kBps [ 14336 / 0 B from 1 alive and 0 finished connections]
[ 30.0157649517 ] sent: 51.4796741505 kBps, rcvd: 0.0 kBps [ 10752 / 0 B from 1 alive and 0 finished connections]

limit set also to 70kBps time interval between send() calls 0.5s (measured every 0.2s)
[ 140.0654459 ] sent: 0.0 kBps, rcvd: 0.0 kBps [ 0 / 0 B from 1 alive and 0 finished connections]
[ 140.269728899 ] sent: 166.435778601 kBps, rcvd: 0.0 kBps [ 34816 / 0 B from 1 alive and 0 finished connections]
[ 140.473496914 ] sent: 0.0 kBps, rcvd: 0.0 kBps [ 0 / 0 B from 1 alive and 0 finished connections]
[ 140.677438974 ] sent: 0.0 kBps, rcvd: 0.0 kBps [ 0 / 0 B from 1 alive and 0 finished connections]
[ 140.881489992 ] sent: 166.624995911 kBps, rcvd: 0.0 kBps [ 34816 / 0 B from 1 alive and 0 finished connections]
[ 141.085438967 ] sent: 0.0 kBps, rcvd: 0.0 kBps [ 0 / 0 B from 1 alive and 0 finished connections]
[ 141.289375067 ] sent: 166.718888902 kBps, rcvd: 0.0 kBps [ 34816 / 0 B from 1 alive and 0 finished connections]
[ 141.493378878 ] sent: 0.0 kBps, rcvd: 0.0 kBps [ 0 / 0 B from 1 alive and 0 finished connections]
[ 141.69748497 ] sent: 0.0 kBps, rcvd: 0.0 kBps [ 0 / 0 B from 1 alive and 0 finished connections]
[ 141.901427031 ] sent: 166.714016332 kBps, rcvd: 0.0 kBps [ 34816 / 0 B from 1 alive and 0 finished connections]

First case: web experience pretty OK, ssh quite normal.
Second case: web rough, ssh very rough and irregular.

Both cases have pretty much the same 1-second average. When I set Transmission to limit upload to 65 kBps it is also rough. I am strongly convinced it is exactly the same case: very irregular upload (on the scale of a second) and outgoing packets getting clogged. With the upload cap at 50 kBps it seems to be OK, though I could upload 15 kBps more :) that is almost a third.

I am pretty surprised that nobody has stumbled upon this issue before. I am also more inclined now towards filing a request ticket with low/moderate priority. If the network I/O model is designed a certain way it should be very easy to adjust, though it is way above my capabilities to dive into the Transmission sources and get it right away. I hit the ground pretty hard :)
Jordan
Transmission Developer
Posts: 2312
Joined: Sat May 26, 2007 3:39 pm
Location: Titania's Room

Re: Smoothening upload distribution. Hope for a discussion.

Post by Jordan »

luk32 wrote:One last thing: I have checked my output with Wireshark, and my observation is that Transmission outputs the data in two large chunks, approximately twice a second. I leave it to you whether to believe me, though I can present my reasoning on request.

Actual Request:
I would like to hear what you think of this idea, and, if you agree, whether a solution giving Transmission a means to smooth its upload distribution over time would be hard to implement.
Probably the developers could answer right away, as they know how Transmission is built and have enough experience to confirm whether the phenomenon I have described can occur.
Anyway, I hope for comments, and I am open to performing more tests and providing more info. I am thinking about submitting a request ticket but I am not sure if it makes sense :) Tell me what you think.

Cheers.
By smoothing, do you mean outputting the data at more frequent intervals than twice a second? If so, that twice per second value comes from

Code:

    /* how frequently to reallocate bandwidth */
    BANDWIDTH_PERIOD_MSEC = 500,
in libtransmission/peer-mgr.c.

Typically, what one needs to do to keep BitTorrent from interfering with their other network traffic is to set their BitTorrent bandwidth caps lower than their ISP's cap, or to use QoS on their router. uTorrent's new uTP protocol may ameliorate this, but it's still a work in progress, and to my knowledge hasn't been implemented anywhere except uTorrent yet.

You may want to try changing that interval from 500 to 250 or some smaller number and see how things run.
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

Thanks for a reply.
That's precisely what I meant and was searching for.

The point of my network stresser was to emulate Transmission's behaviour of outputting data twice a second. The other run was to show that with a finer distribution of data (send() calls) over time, it works better.
About the typical situation you present: the whole problem is that keeping under a "per second" limit is not sufficient. I think my ISP's QoS solutions are crappy, or not precise enough, and Transmission's upload pattern interferes with them, resulting in the nasty behaviour.

I will rebuild Transmission with a smaller BANDWIDTH_PERIOD_MSEC and will edit this post to share my findings.

[edit]

OK, I have set BANDWIDTH_PERIOD_MSEC to 50 instead of 500. The general experience is much better.
The problem is that with such a low setting the reported bandwidth usage stays below the limit. With the cap set to 70 kBps it varies from 60 to 67, with about 63 as the average.
Setting the interval to 0.05s is probably going a bit overboard, and it requires tampering with the limit to get the values you want in the end, but in general, from my experience, it works better.

If you have time, I'd be glad to hear where this discrepancy between the reported value and the limit might come from, or be pointed to some documentation (if it exists :) ). In fact I have observed the same behaviour in my network stresser.
My model is simple, and I think that since the calculated atomic packet size has to be an integer, I'm losing some bytes to rounding (probably a floor() is applied there). The smaller the time step, the more rounding errors accumulate (more packets per second), so the difference between the specified and the real limit grows. I don't know if this is what happens in Transmission; I'm not even 100% sure it is the case with my app, it's just my first idea.
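To put a rough number on this rounding idea, here is a back-of-the-envelope check. The floor-per-tick scheme is how my stresser works; whether Transmission does anything similar is pure speculation on my part:

```python
import math

def effective_rate(limit_bps, dt):
    """Bytes/second actually achieved when every tick of length dt sends
    floor(limit_bps * dt) bytes: each tick's fractional byte is lost."""
    return math.floor(limit_bps * dt) / dt

# With a deliberately non-round limit the loss shows up, but it is tiny:
# less than one byte per tick, i.e. under 1/dt bytes per second.
limit = 65.3 * 1024   # illustrative limit in B/s
dt = 0.05             # 50 ms ticks
loss = limit - effective_rate(limit, dt)   # a few bytes per second here
```

So floor() alone can only cost me less than 1/dt bytes per second (20 B/s at 0.05s ticks), which would not explain a 70 vs. 63 kBps gap by itself; there must be something else going on too (sleep jitter making ticks get skipped, perhaps).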

This is probably a bit complicated matter so if you're busy then it's ok.

It would be cool if Transmission loaded this value from a config file. I haven't seen anything 'suspicious' in the $HOME/.config/transmission/settings.json file, just the enum in the file you mentioned.
That could be my actual request :) but with much lower priority, just as a convenience.

Once again, thank you for the answer :) It was what I was looking for.
Jordan
Transmission Developer
Posts: 2312
Joined: Sat May 26, 2007 3:39 pm
Location: Titania's Room

Re: Smoothening upload distribution. Hope for a discussion.

Post by Jordan »

Refilling the bandwidth buckets 20 times per second is insane overkill. Laptop batteries will drain much faster with Transmission churning that many wakeups per second!

I'm still not sure I understand the benefit of these smaller intervals. Could you explain it in a short, concise description?
Longinus00
Posts: 137
Joined: Fri Aug 21, 2009 5:46 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by Longinus00 »

Maybe the reason it's working better is that you're basically lowering the maximum bandwidth cap? Have you tried an intermediate value, say 200 msec? Have you tried running at 60 kBps with the 500 msec default?
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

Hello,
I am sorry for the late reply; I had some trouble when it turned out that my modem doesn't work with the 2.6.32 kernel after I upgraded the system on my router.

@Longinus.
To compensate, I set the limit higher than the bandwidth I wanted to use. I mentioned it earlier: I wanted to keep around 65 kBps, so I set the limit to 70 instead.

@Jordan.
Umm... I am sorry, I know I am being overdescriptive and talkative. I'll try to make it short.
My point is that Transmission's upload pattern somehow screws with my ISP's throttling system, even though it keeps under the limit on the scale of a second. I suspect that the throttling is done at a finer grain, so I'd like Transmission to throttle itself in a finer manner as well, so that no further adjustments are necessary.
This throttling on my ISP's side is the part I am not sure about. I imagine the simplest throttling method (its description took quite an amount of text, so I cut it out, though I can present it if anyone wishes). But I think it is safe to reason my way: my ISP still tries to throttle and smooth out those two large impulses coming from me. Therefore, my basic idea is to make Transmission's upload so fine and smooth that no system will think of touching and altering it. =)

Actually, I think Transmission and my net work flawlessly :) fast, with low latencies, and that's the problem: there is no natural dispersion of the data.
I can imagine that a bloated client gives a better network experience because it naturally sends its data in a more distributed manner.

Ok. Thanks for listening, I hope I made it more understandable and pleasant to read.
themonster2000
Posts: 5
Joined: Fri Feb 05, 2010 3:07 pm

Re: Smoothening upload distribution. Hope for a discussion.

Post by themonster2000 »

I'm having the same problem, and right now it's preventing me from switching from Vuze to Transmission. Vuze works perfectly fine, but as soon as I have an active torrent in Transmission my ssh connections (and probably others as well, but I don't care about them as long as ssh is not working properly) start to hang for a fraction of a second at regular short intervals (likely the 500 msec mentioned above). This makes Transmission pretty much unusable for me.
As soon as I stop all torrents the problem goes away.
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

Hmm... you could check two things. For me, setting limits at ~60% worked; the peaks were probably less harmful.
The second option is to do what I did: get the sources, change this one value to 100, and build your custom version. It is relatively easy on systems with package managers, which are pretty common nowadays. You just need the dependencies, and a simple ./configure --prefix=[dir] ; make; make install will do.
I did it on Debian.

From my experience this helps, but you may have different issues. Umm... another thing that has come to my mind is the number of connections. If you use an excessive number of connections, it can cause trouble for routers close to you, your ISP's in particular. This is less likely, though.
themonster2000
Posts: 5
Joined: Fri Feb 05, 2010 3:07 pm

Re: Smoothening upload distribution. Hope for a discussion.

Post by themonster2000 »

Lowering the value indeed seems to improve things, but if the download goes beyond 200Kb/s then the effect returns, and it becomes increasingly worse the faster the download. With Azureus I can download at 1.5MByte/s without noticing anything on my ssh connections.
Once the download stops and only the 60Kb/s upstream is used, things run smoothly, so it really seems to be the downstream that is the issue here.
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

Yeah... well, it's really strange in my opinion that there is such a difference. 1.5MB vs 200KB is an order of magnitude... and on download... truly I have no idea what the cause might be. It might be the same as for upload, but I have nothing in mind that would make sense.

Umm... actually the issue I wanted to discuss could apply to download too, but it is way harder to get such a rough download curve. Though on the other hand, at 1.5MB, maybe somehow... In fact, if it were a similar issue, it would mean that you are actually DoSing yourself by asking other peers to send you data that your downstream cannot handle... LOL. I am no expert on networks, and I don't know if different handling of data reception could relieve it; in particular, I am not sure how TCP works in detail. Your peer will send you the data anyway; there are not many options for the recipient. I don't know if you can defend yourself against the overflow of data. If you can, then you could probably improve the reception of data to avoid interference with any QoS/throttling system, if that was our problem in the first place, which I am not absolutely sure of.

Then again, your download case might be something entirely different. It is a lot of data; maybe hashing, or whatever is involved in handling the incoming data, exhausts your CPU power or something. That would be strange, because Vuze is written in Java and Transmission in C, and I don't dare question the devs' programming skills. It's just an example :)
Longinus00
Posts: 137
Joined: Fri Aug 21, 2009 5:46 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by Longinus00 »

themonster2000 wrote:Lowering the value indeed seems to improve things, but if the download goes beyond 200Kb/s then the effect returns, and it becomes increasingly worse the faster the download. With Azureus I can download at 1.5MByte/s without noticing anything on my ssh connections.
Once the download stops and only the 60Kb/s upstream is used, things run smoothly, so it really seems to be the downstream that is the issue here.
Are you sure your units are correct? You say that Transmission has problems beyond 200 kilobits and Azureus can do 1.5 megabytes. If you were to switch these two around, it seems they actually work at around the same speed: 200 kB ~= 1.5 Mb. I also highly doubt you have a connection faster than 12 Mbps.
luk32
Posts: 16
Joined: Wed Feb 03, 2010 2:20 am

Re: Smoothening upload distribution. Hope for a discussion.

Post by luk32 »

Hehe, I also noticed it, though I assumed he just didn't press shift at 200KB/s; otherwise the post would not make much sense. On the other hand, 1.5MB down and 60KB up (again assuming he meant kilobytes) seems just too asymmetric to me...

As a reminder for the future, I'd second Longinus' advice to pay attention to the units you are using. They are quite messed up in the quoted post.