
It's my first post, so first of all: thanks for your work.

[ Umm, after seeing how much I've written, you may want to skip to the Actual Request part, and if you're interested, keep reading the sections backwards =]
{edit}
Version: Transmission 1.82 (10007)
Box: Debian on kernel 2.6.32-trunk-amd64 (irrelevant, I think)
Nevertheless, let's get to the subject. I'd like to present my problem and put it up for discussion, as I'm not fully convinced my idea is right. If it is, maybe we can solve it.

My problem:
I get a pretty nasty web/internet experience whenever I let Transmission do a decent amount of uploading. The term "nasty web experience" seems pretty vague, and it is. What I mean, by example, is that pages load in a strange manner and some connections get broken. It's not only the web; IMs and probably SSH sessions are also affected. By strange I mean a page freezes, loads a bit, and so on. Generally I _think_ I receive data in bursts at large intervals.
Of course I keep bandwidth caps on Transmission at a sensible level. To be precise, my connection's theoretical throughput is 5120/640 kbps (down/up), which means 640/80 kB/s, so I set the limits to 600/65. I think this should really be enough to handle outgoing web traffic and the rest. The problem also remains with a lower download limit, or when only seeding, so I consider it to be connected with upload. Transmission reports that it stays under the limit, and I'm inclined to believe it.
My thesis:
My first idea was that the number of connections was the problem, but changing it didn't help much. My current idea, for which I have some evidence, is that Transmission stays under the per-second limit but pumps everything out at once, and nothing on the way up to the router's WAN device throttles it (why would it, anyway). What I think is going on: my ISP's devices, which stand guard that I'm not using more bandwidth than I'm paying for, work like this:
For each 0.1 s window (_time_interval_), accept incoming packets (e.g. send() requests) and accumulate the traffic made. // upload only; I think it has to work at some resolution, and that resolution being too low is the problem
If traffic_made > limit * _time_interval_ // meaning I have exceeded my 'interpolated' bandwidth limit for this window (if this kept up, I would probably exceed the total per-second limit)
    deduct the overhead from the next time slot's limit
If still over the limit, delay packets
One note is missing: if I go way over the limit, the overhead is deducted from every subsequent time slot. Example: I manage to send half of my per-second limit (40 kB in my case) in 0.1 s. As a result, I won't be able to send anything for the next 0.4 s.
Makes sense to me.
Actually, they have to do something like this. The method might be more sophisticated, though I doubt it. And it would be stupid to make the time interval 1 s, because then my whole connection would bloat.
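To make the guessed mechanism concrete, here is a tiny simulation of the slot-and-debt shaper described above. Everything in it (the names, the debt rule, the numbers) is my assumption about how the ISP box behaves, not anything I actually know:

```python
LIMIT_KBPS = 80.0               # my upload cap, kB/s
INTERVAL = 0.1                  # the shaper's assumed resolution, seconds
BUDGET = LIMIT_KBPS * INTERVAL  # kB allowed per time slot (8 kB)

def simulate(attempts):
    """attempts: kB the client tries to push out in each 0.1 s slot.
    Returns kB delivered per slot. Packets within a slot are accepted
    as they arrive; any overshoot becomes 'debt' that eats the budget
    of the following slots."""
    debt = 0.0
    delivered = []
    for want in attempts:
        if debt >= BUDGET:        # slot already consumed by an earlier burst
            delivered.append(0.0)
            debt -= BUDGET
        else:
            delivered.append(float(want))
            debt = max(0.0, debt + want - BUDGET)
    return delivered

# The example from above: dumping 40 kB (half my per-second limit) into
# one 0.1 s slot blocks the next four slots, i.e. 0.4 s of silence.
print(simulate([40, 8, 8, 8, 8, 8]))  # → [40.0, 0.0, 0.0, 0.0, 0.0, 8.0]
```

A client that never puts more than 8 kB into any 0.1 s slot never accumulates debt under this model, which is exactly the smoothing I'm asking about.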

My research:
To support my thesis, and to avoid making stupid ideas public and getting humiliated and laughed at, here is what I measured.

What I observed: with the limit set relatively close to my bandwidth cap and a coarse measuring resolution (0.5 s or above), the output calculated per second came out at about twice my limit (140 kB/s; I mostly measured at a resolution of 0.33 s), followed by drops to 0 for some time. That means I was able to output data at 140 kB/s (way over my limit) for a short time (less than a second!) and then got a 'short' ban. This is still OK at that resolution: the sum of the three per-window rates within a second should stay under 80*3.
My best was 140, 140 and 0 within one second. Meaning I sent so much data in 0.33 s that it amounted to 140 kB/s... twice in one second, and that beats my global limit: the one-second average is 93.3 kB/s. Sometimes those ~140 kB/s bursts came pretty condensed, which means I went really overboard. But then I got periods of whole seconds (2-3 in a row) with 0 upload.
To sum up: a 5 s average would most probably look fine, a 1 s average gets quite screwed, and below 1 s it looks strange, because my ISP lets me send more than the calculated average for, say, a 0.33 s window would allow.
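The arithmetic above, spelled out (the window rates are the ones I measured; the 0.33 s resolution is approximate):

```python
# Per-window upload rates (kB/s) for the three ~0.33 s windows of my
# best observation: 140, 140, then 0.
windows = [140.0, 140.0, 0.0]

# Averaged over the full second, this beats my 80 kB/s cap...
print(sum(windows) / len(windows))   # → 93.333... kB/s

# ...and the summed window rates exceed the per-second allowance of 80*3.
print(sum(windows), ">", 80 * 3)     # 280.0 > 240
```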
And when I set my own test app to output data in small packets, say one every 0.01 s, the reported transfers are very nice, staying close to whatever I set the limit to. But that is me throttling the upload myself, by sending smaller packets more frequently.
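For reference, the self-throttling in my test app boils down to something like this sketch. The function name and parameters are mine, and `send` stands for whatever actually writes to the socket:

```python
import time

def paced_send(data, send, limit_bps, tick=0.01):
    """Push `data` out through `send` at no more than `limit_bps`
    bytes/second, one small chunk per `tick` seconds, instead of
    dumping half a second's worth of traffic at once."""
    chunk = max(1, int(limit_bps * tick))  # bytes allowed per tick
    sent = 0
    deadline = time.monotonic()
    while sent < len(data):
        send(data[sent:sent + chunk])
        sent += chunk
        deadline += tick                   # schedule the next chunk...
        delay = deadline - time.monotonic()
        if delay > 0:                      # ...and sleep until it is due
            time.sleep(delay)
```

With tick = 0.01 the chunks are small enough that even a 0.1 s measuring window on the ISP side never sees a burst above the configured limit.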
One last thing: I have checked my output with Wireshark, and my observation is that Transmission pushes the data out in two large chunks, approximately twice a second. I leave it to you whether to believe me, though I can present my reasoning on request.
Actual Request:
I would like to hear what you think of this idea, and if you agree, whether the solution (giving Transmission a means to smooth its upload distribution over time) would be hard to implement.
The developers could probably answer right away, as they know how Transmission is built and have enough experience to confirm whether the phenomenon I have described can occur.
Anyway, I hope for comments; I am open to performing more tests and providing more info. I am thinking about submitting a request ticket, but I am not sure if it makes sense.

Cheers.