I've stood on other people's shoulders to write this post, so before we start: this isn't original material, just stuff researched and collated by me. A big thanks to all who did the original work.
I found the cl_interp command whilst tweaking other stuff and wondered what it was. It turns out that despite the cl_ prefix it is actually a server variable, applied to all clients on connection.
Interpolation is the estimation of intermediate data points from a known data set. Think of a graph: you plot your data and join the dots. The line between your dots shows you the interpolated data, i.e. what should probably be there. The Source engine's interpolation has to cover everyone from those connecting from Siberia to the good old UK via two cups and a piece of string, so there is a LOT of slack in there.

So what does it actually do? In short, it guesses where everything and everyone is at any one time based on the data previously sent, filling in the delay usually caused by your ping. A lower interpolation value leaves less margin for error in client predictions, meaning if you have a low ping you can make use of this by lowering interpolation and therefore getting more accurate hit registration. Here are the commands, their default values and how to optimise them, and I'll apologise for the format as it really has to be mathematical. This is directly linked to server tick rate (updates per second the server can send to each client if requested), which I think on ours is 66.
cl_interp "0.1" //how far in the past objects are predicted, in seconds.
cl_interp_ratio "2" //multiplier for the above to ensure data redundancy.
Reducing the time on this prediction setting will obviously increase the accuracy of rendered objects in game. Reduce it too far and you'll lag out; leave it too high and you're predicting too far into the future and wasting bandwidth (thereby lagging the server and clients), as any un-needed predictions are identified by the client and discarded. To explain: if you receive 50 updates per second, each update covers 0.02 seconds (1 / 50), so during the 0.1 seconds of prediction you would get 5 updates (0.1 / 0.02).
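As a rough sanity check, that arithmetic can be sketched in a few lines of Python (the 50 updates/sec figure is just the example from this paragraph, not anyone's real setting):

```python
# Example figures from above: 50 updates per second, cl_interp 0.1.
updates_per_second = 50
cl_interp = 0.1

update_interval = 1 / updates_per_second          # seconds between updates
updates_in_window = cl_interp / update_interval   # updates covered by the window

print(round(update_interval, 3))    # 0.02
print(round(updates_in_window, 1))  # 5.0
```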
OK, let's work out what ping the default values are optimised for.
To calculate this backwards we do the following: cl_interp 0.1 / 2 (to remove the effect of cl_interp_ratio) = 0.05 seconds. 1 second / 0.05 seconds per update = 20 updates per second.
Our server is currently optimised for clients receiving 20 updates per second, or predicting 200ms into the future!
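Working that backwards in Python, using the same reasoning as above (nothing new here, just the defaults):

```python
# Defaults: cl_interp 0.1, cl_interp_ratio 2.
cl_interp = 0.1
cl_interp_ratio = 2

seconds_per_update = cl_interp / cl_interp_ratio       # 0.05 s per update
updates_per_second = round(1 / seconds_per_update)     # 20 updates per second
window_ms = round(cl_interp * cl_interp_ratio * 1000)  # 200 ms total window

print(updates_per_second, window_ms)  # 20 200
```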
Now I would say the average ping on our server is about 60. Slower connections WILL suffer if cl_interp is set too low, and we don't want the server becoming unplayable, so what I'd suggest is that we optimise for a ping of 100, which seems a happy medium. It's VERY rare to see many with a ping over 100 on the server, and to be fair they're a laggy annoyance when approaching 200 and teleporting everywhere anyway. This is simple: just halve the cl_interp value to 0.05.
The tick rate on our server is 66, meaning 66 updates per second, which means an update every 0.015 seconds. During the default 0.2 seconds (200ms) of prediction the server sends 13.2 updates to each client (0.2 / 0.015). Now if we halve this to 100ms of prediction it only needs to send out 6.6 updates to each client. We've halved the amount of prediction data being sent out by the server. Imagine how much load that would remove from it, not just in bandwidth but in terms of processing power too.
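A quick Python sketch of that comparison (66 tick and the 200ms/100ms windows are the figures from this paragraph):

```python
# Updates falling inside the prediction window at 66 tick.
tick_rate = 66
tick_interval = 1 / tick_rate   # ~0.0152 s between updates

for window in (0.2, 0.1):       # default 200 ms vs proposed 100 ms
    updates = window / tick_interval
    print(f"{window * 1000:.0f} ms window -> {updates:.1f} updates")
```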
Now it may be possible to do this client side by substituting your cl_updaterate value for the server tick rate, but I'm not sure. cl_updaterate controls how many updates you receive from the server per second, and must be set equal to or lower than the server tick rate. I think cl_interp may be a locked cvar, as it could feasibly be used for artificial-lag cheating and may be forced by the server. The below should be considered experimental as it may do sweet fanny adams! The only way to know for sure is to try it. The advantage would be that we could each use a specific value tweaked to our own individual connections instead of a jack-of-all-trades blanket value.
Now, for my cl_updaterate of 50 (1 second / 50, so an update every 0.02 seconds) I need to predict 0.02 seconds into the future. This means my cl_interp should be set at 0.02. As previously mentioned, the cl_interp_ratio "2" default will take care of redundancy for me by doubling that window.
So, to sum up (applicable to clients and servers)....
cl_interp "0.1" //optimises for clients with a 200 ping.
cl_interp "0.05" //optimises for clients with 100 ping.
cl_interp "0.03" //optimises for clients with 60 ping.
cl_interp "0.025" //optimises for clients with 50 ping.
cl_interp "0.015" //optimises for clients with 30 ping.
cl_interp "0.0125" //optimises for clients with 25 ping.
cl_interp "0.0025" //optimises for WaNeY
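The whole table follows a single pattern: cl_interp = ping in seconds divided by cl_interp_ratio. A small Python loop reproduces it (WaNeY's entry excluded, obviously):

```python
# cl_interp = (ping_ms / 1000) / cl_interp_ratio
cl_interp_ratio = 2

for ping_ms in (200, 100, 60, 50, 30, 25):
    interp = ping_ms / 1000 / cl_interp_ratio
    print(f'cl_interp "{round(interp, 4)}"  // ~{ping_ms} ping')
```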
Hopefully you understood that. It took me about an hour to get my head round it properly.
It is my understanding that cl_interp doesn't do anything anymore; everything is controlled by cl_interp_ratio now. I'll have a look around for where I saw that info.
It's since been put back in, Hito. They removed cl_interp, set it as a fixed cvar and gave people cl_interp_ratio to play with, specifically because it was at the time being exploited to create fake lag (you had to shoot in front of or behind someone to hit them). As soon as they re-wrote the interp code to limit it between 0.001 and 0.1 that was less of a problem, so it was brought back. The posts I linked to above are some 6 months plus older than the ones you saw, mate. You know Valve... never make their minds up.
A final note, as I forgot to say above: you can turn off interpolation client side altogether with cl_interpolate "0". Just remember that where you aim will now be entirely dependent on your ping and may vary wildly. A split-second 100ms lag spike as you fire means they are nowhere near where they were supposed to be... Still, you may prefer it. See this post for an explanation: http://forums.steampowered.com/forums/s ... p?t=490324
I played with interp set to 0.015 yesterday instead of the normal 0.1 and I noticed the interp number in netgraph sometimes turned yellow. Does that mean anything?
Wed Jan 09, 2008 10:18 am
PiLsY.
Re: Server Interpolation values...
Errr...new to me mate!
It certainly makes a difference though. If you set it really low it's almost like playing with the netcode off. It means we can change it client side and not mess with any server settings.