High tickrate and rate SRCDS FAQ [Eng]


    Definitions

    tickrate: From Valve: During each tick, the server processes incoming user commands, runs a physical simulation step, checks the game rules, and updates all object states. After simulating a tick, the server decides if any client needs a world update and takes a snapshot of the current world state if necessary. A higher tickrate increases the simulation precision, but also requires more CPU power and available bandwidth on both server and client.
    When a client connects to a server, the client's Source Engine matches the tickrate of the SRCDS (Source Dedicated Server) it connected to.
    Server tickrate 100 = Client tickrate 100
    Server tickrate 66 = Client tickrate 66
    Server tickrate 33 = Client tickrate 33
    THE ONLY PLACE YOU CAN CHANGE THE TICKRATE IS VIA THE COMMAND STARTUP LINE. NO, THERE IS NOWHERE ELSE, GOT IT?

    SRCDS: Source Dedicated Server. The program you should be running and trying to optimise if you are here.

    FPS: Frames per Second.

    Client fps: the number of times per second the game checks for inputs, either from keyboard/mouse or incoming game packets; basically any I/O operation.

    Server fps: because there are no keyboard or mouse I/Os occurring, it only deals with how often the server checks for game packets.

    fps_max: Sets an upper limit on the frames per second the server runs at. Default=300

    sv_maxrate: The Maximum amount of data in Bytes per Second the server will send to the client; conversely, the Maximum amount of data in Bytes per Second the client can request from the server. sv_maxrate overrides the client's rate setting if sv_maxrate is less than the client's rate setting. Default=0 Maximum=30000

    sv_minrate: The Minimum amount of data in Bytes per Second the server will send to the client; conversely, the Minimum amount of data in Bytes per Second the client can request from the server. sv_minrate overrides the client's rate setting if sv_minrate is greater than the client's rate setting. Default=0

    sv_maxupdaterate: The Maximum number of Updates per Second the server will send to the client; conversely, the Maximum number of Updates per Second the client can request from the server. sv_maxupdaterate overrides the client's cl_updaterate setting if sv_maxupdaterate is less than the client's cl_updaterate setting. Default=60

    sv_minupdaterate: The Minimum number of Updates per Second the server will send to the client; conversely, the Minimum number of Updates per Second the client can request from the server. sv_minupdaterate overrides the client's cl_updaterate setting if sv_minupdaterate is greater than the client's cl_updaterate setting. Default=0

    rate: The Maximum amount of Bytes per Second the client will request from the server. rate overrides the server's sv_maxrate setting if rate is less than the server's sv_maxrate setting. Default=(Depends upon the client's STEAM Internet Connection Setting) Maximum=30000

    cl_updaterate: The Maximum number of Updates per Second the client will request from the server. cl_updaterate overrides the server's sv_maxupdaterate setting if cl_updaterate is less than the server's sv_maxupdaterate setting. Default=20

    cl_cmdrate: The Maximum number of Updates per Second the client will send to the server. Default=30 Minimum=10 Maximum=100

    NB: sv_maxupdaterate and cl_updaterate cannot cause more data to be sent to the client than the sv_maxrate and rate settings allow, or than the server's, or most likely the client's, actual available bandwidth allows. Choke occurs when either:

    The server's sv_maxupdaterate causes the amount of bandwidth to exceed the bandwidth allocated per client by sv_maxrate, or the total amount of bandwidth the server has access to,

    Or

    The client's cl_updaterate causes the amount of bandwidth required by the client to exceed the client's rate setting, or the total amount of bandwidth the client has access to.
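    The override rules above amount to simple clamping: the server minimum and maximum bracket whatever the client asks for. A minimal sketch in Python (the function names are mine, not SRCDS cvars):

```python
def effective_rate(client_rate, sv_minrate, sv_maxrate):
    """Bytes/sec the server will actually send to this client."""
    rate = client_rate
    if sv_maxrate > 0 and rate > sv_maxrate:
        rate = sv_maxrate      # server cap wins when it is lower
    if sv_minrate > 0 and rate < sv_minrate:
        rate = sv_minrate      # server floor wins when it is higher
    return rate

def effective_updaterate(cl_updaterate, sv_minupdaterate, sv_maxupdaterate):
    """Updates/sec the server will actually send to this client."""
    return max(min(cl_updaterate, sv_maxupdaterate), sv_minupdaterate)

print(effective_rate(25000, 0, 20000))    # client asked too much; server cap applies
print(effective_updaterate(101, 0, 66))   # capped at sv_maxupdaterate
```

    The same clamping happens independently for data (rate) and for updates (updaterate), which is why the two are tuned separately below.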

    More Info here: Valve's Source Multiplayer Networking Explanation

    Instructions

    1. tickrate is set by adding -tickrate 100 (for a tickrate 100 server) to the command line startup parameters. Tickrate cannot be changed on the fly via console, HLSW or rcon; it can only be changed in the command line, and the server must be restarted for the change to take effect.

    2. If you want your tickrate changes to have any noticeable benefit, you must change a few other server variables as well as change the Windows Kernel Timer Resolution (pingboosting).

    3. To change the Windows Kernel Timer Resolution (pingboost a server), all you need to do is run Windows Media Player. It does not need a file open, it just has to be running in the background; if you do not do this, your server's fps will be limited to around 64 frames a second.

    You can also use a little app that somebody wrote, which you can find here: srcdsfpsboost.zip

    The file now includes the Source Code so you can compile it yourself if you wish and I can confirm that the version is safe.

    Thank you needforspeed for the source code
    Thanks to trog for compiling it

    4. The server's fps can be seen by issuing the stats command in console, via HLSW, or via RCON STATS if you are logged in via rcon on the server.

    5. The server fps is regulated by the fps_max command. The default is 300, which, for whatever reason, ends up producing around 250 fps in RCON STATS. Don't ask me why, I've asked Valve. The next step up is to run your fps_max at 600, which then gives you 500 fps in RCON STATS. It can be set somewhere between 500 and 600 via console or HLSW, and permanently by adding the command line parameter +fps_max 600. However, if you set it at 500 you will see that your FPS according to RCON STATS will still sit at 250 fps even after a map change, so you must set your fps_max higher than what you actually want, to achieve the desired effect. Don't take my word for it, test it yourself!

    6. So you have a high tickrate, your server is pingboosted and is running at high fps. None of this is of any use to your clients (the players) if you do not change your server's rates, specifically the sv_maxrate and sv_maxupdaterate variables.

    7. sv_maxrate by default is set to 0. I personally have found that this setting is detrimental to server performance (purely subjective opinion, but there you go; feel free to ignore it until your clients start complaining about stupid lag and player warping issues that don't correlate to any actual network or CPU usage, or over-usage issues as the case may be), so set your sv_maxrate to 20000.

    8. The sv_maxupdaterate setting (default 60) must be changed to start using all this server generated data more effectively and get the data out to your players who want to run 101/101/20000/10000 cl_cmdrate/cl_updaterate/rate/cl_rate settings (yes, I know cl_rate is defunct, but some people can't be told, so I humour them and leave it in). Thus you need to change your sv_maxupdaterate to something like 1.2 times tickrate. You only have to do this if you run a tickrate higher than 50, e.g. for tickrate 66 run sv_maxupdaterate 66 or even 100, for tickrate 100 run sv_maxupdaterate 100. If you do not do this, your clients will NEVER see the full benefit of your tickrate changes, and even then, because of server load, the clients will not see the full sv_maxupdaterate or tickrate reflected in a net_graph 3.

    9. Do not run tickrate higher than 100, Valve have admitted that there will be issues if you push the tickrate too high.

    10. Make sure you have the bandwidth and CPU to cope with SRCDS running with these settings. If you don't have at least a 10Mbps Full Duplex link, you probably do not have the bandwidth to see the full benefit of following the above instructions. This is aimed at people with servers in dedicated data centres with appropriate high speed Internet connections. Most home users will not have the necessary bandwidth or hardware to take full advantage of ALL of these settings. You may, though, be able to increase your end users' overall experience just by pingboosting your server and increasing the tickrate and fps_max, whilst leaving the sv_maxrate and sv_maxupdaterate settings low.

    11. 24, 32 & 40 player servers should be run with a tickrate of no more than 66 and sv_maxupdaterate of 100. I've tried higher but you get strange issues for the clients if you do.

    12. The fps_max setting of 600 does not appear to hit the CPU as hard as other settings I have mentioned here do, your mileage may vary, but try reducing this back to default of 300 if your clients get strange lag issues and you have tried reducing the other server variables mentioned here. i.e. Change this one last!

    13. For competition servers, or any server at the 18 player or fewer mark, you should be able to use a tickrate of 100 and an sv_maxupdaterate of 100 successfully without any issues, so long as you have the bandwidth and CPU to cope!

    14. For changes to take effect, the settings must be changed in the server.cfg file (except tickrate & fps_max, which should be command line variables) and the server restarted; or, if done via RCON, a map change must be done.

    15. This information is for Windows Source Dedicated Server only, do not ask me about LINUX, I cannot help you.

    16. This was written on the basis that your SRCDS is a default SRCDS installation, with no Mods/Plugins or anything else non-standard, such as sounds, skins or maps, on the server. Using Mods/Plugins will increase CPU utilisation and thus limit the final result. Obviously you need to monitor these for your particular situation.

    17. Finally, you need to actually play on your server for several hours with all SRCDS processes at full player load, to see if there are any issues that do not show up in normal performance monitoring tools and to ensure everything is running ok. Subjective in-game experience can deviate significantly from objective server statistics, thus you will not know there is a problem unless you are on the server playing at the time it happens.

    18. Sample SRCDS Command Startup Line:
    Code:
    C:\srcds\srcds.exe -console -game cstrike -tickrate 66 +fps_max 600 +maxplayers 18 -port 27015 +exec server.cfg +map de_dust2
    Linux Kernel Timer Instructions

    Thanks to triphammer in the STEAM Linux SRCDS Forum
    You need to do a custom (re)compile of the Linux kernel in order to change the kernel interruptibility / timer.


    Since Kernel 2.6.14 you can change the HZ with "make menuconfig": just go to "Processor type and features" > "Timer frequency (XXXXX HZ)". The default HZ for 2.4 Kernels is 100. You can also change the HZ via the "USER_HZ" variable located in include/asm-<arch>/param.h.
    param.h:
    <code> #define USER_HZ 100 /* .. some user interfaces are in "ticks" */ </code>

    More along the lines of your question, you can also set the kernel timer frequency by changing the HZ variable in the same file:
    <code> #define HZ 1000 /* Internal kernel timer frequency */ </code>
    Also:
    <code> + config HZ
    + int "Frequency of the Timer Interrupt (1000 or 100)"
    + range 100 1000
    + default 1000
    + help
    + Allows the configuration of the timer frequency. It is customary
    + to have the timer interrupt run at 1000 HZ but 100 HZ may be more
    + beneficial for servers and NUMA systems that do not need to have
    + a fast response for user interaction and that may experience bus
    + contention and cacheline bounces as a result of timer interrupts.
    + Note that the timer interrupt occurs on each processor in an SMP
    + environment leading to NR_CPUS * HZ number of timer interrupts
    + per second.
    +
    endmenu
    </code>
    For server fps to cater for your high tickrate under Linux, you can recompile your 2.4 Kernel with its kernel timer resolution changed, but the easiest and probably best course of action is to use the 2.6 Kernel and change the "USER_HZ" variable (I would suggest starting at 500 and seeing what happens before experimenting with other numbers), which will enable higher server fps on your Linux server.

    Instructions for compiling the Linux kernel:

    Client Settings

    Clients must have their STEAM Internet Connection Settings set up correctly for their Internet connection. See here for an explanation of how to do this.
    The client's rate should = the server's sv_maxrate
    The client's cl_updaterate should = the server's sv_maxupdaterate, which equals the server's tickrate
    Thus, for a server with sv_maxrate 20000, tickrate 100 and sv_maxupdaterate 100, the clients should run the following settings:

    • rate 20000
    • cl_updaterate 100
    • cl_cmdrate 100
    • cl_interpolate 1
    • cl_interp 0.1
    • cl_smooth 0
    These settings will provide the best client experience so long as your server & network can cope with running with a high tickrate and the rates required to take advantage of them.
    NB: If your server settings are different to the example just mentioned, your client settings will have to change accordingly. This is just an example, do not think that these rates are optimum for all server settings, they are not, and your optimum client settings will need to change accordingly.



    Summary:

    < 20 Player servers
    -tickrate 100
    sv_maxrate 20000
    sv_maxupdaterate 100
    fps_max 600

    > 20 Player servers
    -tickrate 66
    sv_maxrate 20000
    sv_maxupdaterate 66
    fps_max 600

    Make sure you have the CPU and bandwidth to cope.
    What you need to look out for is high CPU usage on the server, and/or choke on clients that did not get it before you made changes to your server's tickrate and associated settings, and/or fps that is running constantly well below the Kernel Timer and/or below the tickrate. Or otherwise, just blatantly obvious crap lag on the server, to put it bluntly.
    i.e. If you run at 66 tickrate with 50% CPU and 100 tickrate at 90% CPU, then it's obvious that 66 tickrate is what you are going to have to run your server at.
    If you set your kernel timer to 500Hz or thereabouts, and fps_max at 600, but your server is only getting 150-200 fps constantly, then it's obvious you need to change the kernel timer and/or the fps_max to a lower setting.

    Server Bandwidth Calculation for Dummies:

    sv_maxrate and rate are the 2 variables that decide the maximum amount of bandwidth each player will use. Both are measured in Bytes per Second, so an sv_maxrate of 20000 = 20,000 Bytes per Second! A Rate of 15000 = 15,000 Bytes per Second.
    Network Speeds are by convention quoted in bits per second, whether Kilobits (Kb) 1,000 bits, Megabits (Mb) 1,000,000 bits or Gigabits (Gb) 1,000,000,000 bits.
    The other convention is b is for bit, B is for Byte, it is important not to confuse the two.
    8 bits = 1 Byte
    To calculate the amount of upload bandwidth your server must have, you multiply your sv_maxrate by the number of players on the server. Thus a sv_maxrate of 20000 with 20 players will require at least 20 * 20,000 = 400,000 Bytes per Second of Bandwidth. I say at least, because your theoretical maximum upload speed is just that, theoretical, and you will find that most connections will not sustain their theoretical maximums for long periods of time, which is exactly how GameServers must operate to provide a positive end user experience.
    Now going back to our example, we have calculated that you are going to require 400,000 Bytes per Second of Bandwidth to serve 20 players. We now need to convert this to normal Networking conventions, so we can compare apples with apples. To do this, the calculation for this example is as follows:
    400,000 Bytes * 8 bits / 1,000 = 3,200 KiloBits/Second (3,200Kbps) or 400,000 Bytes * 8 bits / 1,000,000 = 3.2 Megabits/Second (3.2Mbps)
    The point of this calculation is that whatever Bytes per Second a particular SRCDS setup requires, you need to convert that into a bit speed by multiplying the total amount of bytes generated per second by 8 (8 bits = 1 Byte), and then convert that into either kilobits or megabits by dividing by 1,000 for kilo or 1,000,000 for mega, to give you a value in Kilobits per Second (Kbps) or Megabits per Second (Mbps), whichever is easier to read, so you are able to make a correct comparison with your connection speed.
    Please realise that your X Mbps connection may be rated very close to what the server requires, but it is nearly always necessary to leave an overhead of between 10%-25% to make sure the server can always cope, since many connections are not able to constantly run at their peak theoretical speeds. So an sv_maxrate 20000 server with 20 players is probably going to require a 4Mbps Upstream Connection to adequately cope with the load.
    Final Calculation looks like this: sv_maxrate * {player number} * 8 / 1,000 = Maximum Upstream Speed in Kbps your server requires.
    This calculation will work for multiple SRCDS processes on the one physical server.
    If you want to turn this calculation around, and wish to calculate the maximum theoretical sv_maxrate your server can run for a given upload speed (in kbps) and player number, the calculation is as follows:
    upload bandwidth in kilobits per second / 8 * 1000 / player number = the theoretical maximum sv_maxrate you can run your server at.
    This Calculation only works for a single SRCDS process on a single physical server.
    Hopefully now, you can all work out just how much Upstream Bandwidth your server requires for any given sv_maxrate, upstream bandwidth and player number values.
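    The arithmetic above can be sketched in a few lines of Python (the function names and the 20% default headroom are mine, not part of SRCDS):

```python
def required_upstream_kbps(sv_maxrate, players, overhead=0.20):
    """Upstream bandwidth in kilobits/sec needed for one SRCDS, with headroom."""
    bytes_per_sec = sv_maxrate * players       # total Bytes/sec at peak
    kbps = bytes_per_sec * 8 / 1000            # 8 bits per Byte, 1000 bits per kilobit
    return kbps * (1 + overhead)               # leave 10-25% headroom; 20% here

def max_sv_maxrate(upload_kbps, players):
    """Theoretical maximum sv_maxrate for a given upstream link (single SRCDS)."""
    return min(upload_kbps / 8 * 1000 / players, 30000)  # SRCDS caps sv_maxrate at 30000

# 20 players at sv_maxrate 20000: 3200 kbps raw, 3840 kbps with 20% headroom
print(required_upstream_kbps(20000, 20))
print(max_sv_maxrate(4000, 20))
```

    This reproduces the worked example in the text: 20 players at sv_maxrate 20000 needs 3.2Mbps raw, hence the suggested 4Mbps upstream once headroom is included.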
    Below are tables with the calculation values already worked out for you.
    [Tables: pre-calculated bandwidth requirements for common sv_maxrate and player number values]

    b is for bit, B is for Byte. There are 8 bits to a Byte.

    Networking speeds are always measured in bits and quoted as multiples of 1,000 for Kilo, 1,000,000 for Mega and 1,000,000,000 for Giga.
    Do not argue, that's just the way it is!
    The really technical reason why data speeds are in multiples of 1,000, or more to the point, do not generally correlate to binary maths measurements, is that data is not sent down the wire; only a signal that represents data is sent down the wire. That signal is measured in Hertz and has nothing to do with binary maths, even though the signal is representing binary data. All you so-called networking experts should know this, and if you don't, well, you are not much of a networking expert, are you?
    Can you tell that I am sick of this argument yet? :)
    30,000 is the maximum sv_maxrate for SRCDS. That is why all the calculations above, max out at 30,000

    [Image: server fps vs. updates per second]
    This is why it is generally better to run at a higher fps_max, so long as your server can cope. I would suspect that 2,000 updates per second is the most a SRCDS process is ever realistically going to have to deal with, due to per player & tickrate/total player number considerations.
    It is important to note that in SRCDS, fps = I/O per second.
    So the goal of raising your fps_max is to keep a low ratio of server frames per second to updates per second.
    It is also important when designing a Gaming Network that all your network devices (Routers & Switches) can sustain the packets/frames per second these setups can generate. That is to say, your Network might be fine with 1 SRCDS running on 1 box, BUT run 10 boxes with 6 SRCDS each, all with high rates, and then you realise you might have a problem!

    Hardware Spec Example:

    We run 6 x 16 Player SRCDS on DUAL Xeon 3.0GHz (or better) Servers with 3GB of RAM with 100Mb Switched Network Connections into 1Gbps or 10Gbps BackBones with a 66 tickrate. So when I say good hardware with good Network Connectivity, this is the benchmark I am basing my opinions on.

    Suffice it to say we could probably run 4 x 12 player 100 tickrate servers, BUT although the difference between 33 tickrate and 66 tickrate is the difference between night and day, the difference between 66 tickrate and 100 tickrate is negligible on the Internet, when all issues are taken into consideration. Also, some maps and player numbers caused intermittent issues at 100 tickrate that a GSP does not really want to have to worry about, especially when you can have 6 x 16 player servers that run excellently at 66 tickrate! :)

    Choke

    THE MAIN CAUSE OF BAD CHOKE IS A CLIENT'S STEAM INTERNET CONNECTION SPEED BEING SET INCORRECTLY
    Please ensure that all clients' STEAM Internet Connection Speeds are set up correctly. See BAD CHOKE SOLUTION
    Choke is quite simply the server wanting to send an update to the client, but it cannot.

    • If the server cannot sustain the tickrate, you get choke (You may not actually get choke, but the server will lag very badly)
    • If the server cannot sustain the fps the tickrate requires, you get choke
    • If the server cannot sustain the fps the sv_minupdaterate requires, you get choke
    • If the server cannot sustain the sv_minupdaterate, you get choke
    • If the server connection cannot sustain the bandwidth required to support the updaterate, you get choke
    • If the server connection cannot sustain the bandwidth required to support the sv_minrate, you get choke
    • If the required bandwidth demanded by the sv_maxupdaterate exceeds the sv_maxrate, you get choke
    • If the clients connection cannot sustain the bandwidth required to support the cl_updaterate you get choke
    • If the clients required bandwidth demanded by the cl_updaterate exceeds the rate, you get choke
    Notes regarding Netgraph Updates per second measurements you need to be aware of:
    You won't get higher updates than;
    a) The servers tickrate
    b) The servers sv_maxupdaterate
    c) As fast as your server fps allows (Limited by fps_max, hardware and the Kernel Timer Resolution)
    d) As fast as your servers sv_maxrate allows
    e) As fast as the client/server connection allows
    f) As fast as the clients rate allows
    g) As fast as the clients cl_updaterate allows
    <strike>h) As fast as the clients fps allows (Limited by fps_max, hardware, and Refresh Rate)</strike>
    h) Client FPS controls how fast the client can send updates to the server; this is the OUT on the net_graph 3
    NB: Other than tickrate, choke is caused by any of the above list of things not being large enough. This usually occurs because the sv_maxupdaterate or cl_updaterate is higher than the sv_maxrate or rate allows, since the server will not send more data per second than sv_maxrate or rate allows.
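    Every bullet above reduces to one comparison: the bandwidth an updaterate demands versus the bandwidth the rate settings or connection allow. A rough sketch; the average bytes-per-update figure is an assumption you would read off net_graph 3, not a fixed constant:

```python
def will_choke(updates_per_sec, avg_update_bytes, allowed_bytes_per_sec):
    """True if the requested updaterate needs more bandwidth than rate allows.

    avg_update_bytes: measured average update size (see field 3 of net_graph 3).
    allowed_bytes_per_sec: the effective rate/sv_maxrate, or the real link capacity.
    """
    return updates_per_sec * avg_update_bytes > allowed_bytes_per_sec

# 100 updates/sec at ~250 Bytes each needs 25,000 B/s; a rate of 20000 chokes
print(will_choke(100, 250, 20000))   # True
print(will_choke(66, 250, 20000))    # False
```

    The same check applies server side (sv_maxupdaterate vs sv_maxrate) and client side (cl_updaterate vs rate).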
    Fixing Clients Choke

    Besides making sure clients follow BAD CHOKE SOLUTION and set their STEAM Internet Connection Settings correctly, the only 2 Variables that are going to really help the clients choke problems are RATE and CL_UPDATERATE.

    1. If in doubt about your STEAM Internet Connection Setting, set it 1 higher than what you have.
    2. If you are getting choke and the throughput on the net_graph 3 (see below) is lower than what you expect then raise your RATE.
    3. If you still get choke, then make sure you set the CL_UPDATERATE to the servers tickrate and then in steps of 5, lower CL_UPDATERATE until choke disappears or is at least minimised. eg. Start at 100 and try 95, 90, 85, 80 etc. etc.
    4. The blame may not always be with you, the client! Try another server, or another server on another Game Service Provider.
    5. Finally, don't buggerise around trying to fix choke problems if you have loss problems. You are just wasting your time, and everybody else's if you ask them to help fix your choke problems when you have LOSS!!! Loss is a network problem. (See point 6. of net_graph 3)
    Net_graph 3 Explanation

    [Screenshot: net_graph 3 display]
    1. fps is how many frames per second the client is rendering. This is limited by the client's fps_max setting, or by the monitor's vertical refresh rate if Vertical-Sync is enabled.
    2. ping is:
    a) netgraph ping is the round trip time for game packets, NOT including any tickrate or updaterate induced calculation delays
    b) Scoreboard latency (ping) is one way trip latency (I have to find out in which direction)
    c) rcon status command ping, well nobody really knows what this means yet, but I am aiming to find out.

    IN is what is being received by you the client, FROM the server.
    OUT is what is being sent by you the client, TO the server.

    The IN & OUT both have 3 components, starting from left to right:

    3. The size of the game packet in bytes being sent and received (Not sure if this includes UDP Segment + IP Packet overhead)

    4. The Average amount of KiloBytes Per Second being Sent or Received of GameData + UDP Segment + IP Packet overhead

    5. The Average amount of Updates being Sent or Received per Second

    If you multiply 3. by 5. and then divide by 1000 you will get a close approximation of the value of 4. which includes rounding errors because 4. and 5. are only averages. So using the numbers we see above, for IN we get (154*102.4/1000)=15.7696 with the value shown in the picture above for net_graph 3 being 15.16. Meh, close enough. :)
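    That cross-check is simple enough to script. A small sketch using the example numbers from the text (the function name is mine):

```python
def approx_kbytes_per_sec(packet_bytes, updates_per_sec):
    """Approximate net_graph field 4 (KB/s) from fields 3 (bytes) and 5 (updates/s)."""
    return packet_bytes * updates_per_sec / 1000

# IN example from the text: 154-Byte packets at an average 102.4 updates/sec
print(approx_kbytes_per_sec(154, 102.4))   # ~15.77, vs 15.16 shown in net_graph
```

    The leftover discrepancy is the rounding error the text mentions, since fields 4 and 5 are both running averages.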


    The amount of IN Updates received by the client per second (controlled by cl_updaterate) will in most cases equal the servers tickrate, but will NEVER exceed:


    • The Clients cl_updaterate
    • The Servers sv_maxupdaterate
    • The Server/Clients tickrate which are always the same, as the client will always use the same tickrate as the server it connects to
    Which ever is the smallest of those 3 numbers will determine the number you see for Updates per second RECEIVED by the client.

    If the clients AND/OR servers bandwidth is not sufficient, or is limited by the clients rate or servers sv_maxrate, then the client will NOT see the IN Updates received by the client per second equaling the servers advertised tickrate. This is one example of when the client will see choke.
    If the server does not have enough CPU to sustain the servers fps above the servers tickrate, the client will NOT see the IN Updates received by the client per second equaling the servers advertised tickrate. This is another example of when the client will see choke.

    The amount of OUT Updates sent by your computer per second (controlled by cl_cmdrate) will NEVER exceed:


    • The Clients cl_cmdrate
    • The Server/Clients Tickrate
    • The Clients Frames Per Second
    Which ever is the smallest of those 3 numbers will determine the number you see for Updates per second SENT by the client.
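    "Whichever is the smallest" in the two lists above is literally a min() over the caps involved. A sketch with hypothetical function names, ignoring the bandwidth limits discussed separately:

```python
def in_updates_per_sec(cl_updaterate, sv_maxupdaterate, tickrate):
    """Updates/sec the client can RECEIVE, before bandwidth limits apply."""
    return min(cl_updaterate, sv_maxupdaterate, tickrate)

def out_updates_per_sec(cl_cmdrate, tickrate, client_fps):
    """Updates/sec the client can SEND to the server."""
    return min(cl_cmdrate, tickrate, client_fps)

print(in_updates_per_sec(100, 66, 66))    # 66: capped by the server settings
print(out_updates_per_sec(100, 100, 85))  # 85: capped by the client's own fps
```

    If bandwidth (rate/sv_maxrate or the real connection) cannot carry that many updates, the observed figure drops below this minimum and the shortfall shows up as choke.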

    It may look like the OUT Updates sent by your computer per second do exceed the fps, but in reality they do not. It is that the net_graph 3 readings are not always perfectly in sync, or there are rounding errors in the calculations, because the two per second counts, 4. and 5., shown in net_graph 3 are only averages. There is also the error in the net_graph 3 that occurs when the Average Updates Received by you per Second magically seem to exceed the server's sv_maxupdaterate, server's tickrate, and the client's cl_updaterate, which were all set to 100 at the time the screenshot was taken, despite what is shown in the picture.

    6. Loss: lost packets due to network problems, either with your computer's connection to your ISP, your ISP, the ISP that is hosting the server, or anywhere in between. If you have loss then you will probably have choke. Do not bother trying to solve choke problems if you have loss problems. Resolving loss problems is done by following standard network troubleshooting procedures. Get a friend to help you, call your ISP, or ask in the Game Server Providers Forum for help. Helping you with network problems is outside the purview of this document, and people who know what they are doing get paid 3 or 4 figure dollar amounts an hour to solve them.

    7. Choke: quite simply the server wanting to send you data but being unable to. The reasons for this, though, are not always simple to understand, diagnose or fix. See the Choke explanation above.
    8. You bring up your net_graph by typing net_graph 3 into console. You may find it helpful to centre the net_graph using the net_graphpos 2 command, and to raise it a little so it does not overlay your HUD, using the net_graphheight 100 command in console. The net_graphheight value is a function of your screen resolution, so you will need to adjust it accordingly, with net_graphheight 100 working well for 1024x768. Increasing the value of net_graphheight moves the net_graph higher, and decreasing it lowers the net_graph.
    Here is a little script you can put into your autoexec.cfg for cycling through the various net_graphs that will work in all Valve games:
    <code>//netgraph script
    alias graph "graph1"
    alias graph1 "net_graphpos 2; net_graphheight 100; net_graph 1; alias graph graph2"
    alias graph2 "net_graphpos 2; net_graphheight 100; net_graph 2; alias graph graph3"
    alias graph3 "net_graphpos 2; net_graphheight 100; net_graph 3; alias graph graph4"
    alias graph4 "net_graphpos 1; net_graph 0; alias graph graph1"
    bind "r" "graph"</code>
    Obviously adjust net_graphheight and bind "r", where "r" is the keyboard key you use to cycle through the different net_graphs, to suit your own personal preferences.
    Important Information for both Players and Server Administrators

    The Server will not send more data and/or updates than the Client is set up to receive, unless the client violates the server minimums, in which case the server's sv_minrate and sv_minupdaterate will be used by the client. The Client cannot make the Server send more data and/or updates than the Server is set up to send.
    You should, after reading the entire article above, now know what it is that controls what the Server & Client can and cannot send & receive, how often, and why.
    If you do not, you either did not read what I have written, or some part of my explanation was not clear to you. Suffice it to say all the information you need is in here, even if you do not realise it.
    For those of you who are still struggling to comprehend the above, try the Noobies Guide to Netgraph & Ping.
    Why don't my clients get 100 Updates a Second?

    Assuming you have set the tickrate correctly to 100 (in this example) and the server is in fact running at 100 tickrate, the FIVE main causes of clients not receiving 100 Updates per Second are as follows:

    1. If you have a Windows SRCDS and your kernel timer resolution is not increased (ping boosted), you won't see 100 Updates per Second. Most likely it will be stuck at around 64, as that is how many fps SRCDS will run at. This happens a lot, even to me, because the person who updates the Windows installation and reboots the box forgets to make sure srcdsfpsboost.exe is running.
    2. If you have a Linux SRCDS on a default Linux OS installation, you probably will not see 100 Updates per Second. Most likely it will be stuck at around 50 as that is how many fps SRCDS will run at.
    3. If you do not change the sv_maxupdaterate (Default = 60) you will obviously not see 100 Updates per Second.
    4. If you do not have enough CPU for the number of players you are running, and the SRCDS fps keeps falling below the 100 mark, you will not see 100 Updates per Second.
    5. If the clients cl_updaterate is not set to 100, then obviously they will not see 100 Updates per Second.
    There are more reasons than this, but these are the main causes of not seeing as many updates as you might expect.
    This whole guide, if you have read and understood it all, seeks to address how to resolve the issue of sending and receiving as many updates as you want your server to.
    Finally, never discount that it may in fact be a client-side issue with the client computer connecting to your server, unless, of course, there has been a recent Valve SRCDS update and everything has suddenly and inexplicably gone to hell.
    For Client Side issues please refer to Fixes for FPS Problems with Counter-Strike and most other games
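The five causes above boil down to one rule: the client receives updates at the rate of the weakest link in the chain. A minimal Python sketch of that reasoning (the function is mine, purely illustrative, not anything from SRCDS itself):

```python
def updates_per_second(tickrate, server_fps, sv_maxupdaterate, cl_updaterate):
    """Updates a client can actually receive per second: the server cannot
    send more snapshots than it simulates ticks, renders frames, or is
    configured to send, and the client cannot receive more than it asks for.
    """
    return min(tickrate, server_fps, sv_maxupdaterate, cl_updaterate)

# Cause 3: default sv_maxupdaterate (60) caps a tickrate-100 server.
print(updates_per_second(100, 250, 60, 100))   # -> 60
# Cause 1: un-boosted Windows SRCDS stuck at around 64 fps.
print(updates_per_second(100, 64, 100, 100))   # -> 64
```

Raise every limit in the chain to at least 100 and the client will see 100 Updates per Second, bandwidth and CPU permitting.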


    Solving the mystery of cl_interp_ratio

    The client-side setting cl_interp (it is not supposed to exist any longer) has been replaced by the client-side setting cl_interp_ratio.
    cl_interp_ratio simply causes the interpolation delay to be calculated from the client's cl_updaterate (the number of updates the client receives per second; this will not exceed the server's sv_maxupdaterate or the server's tickrate, whichever is smaller).
    The outcome for this is as follows:
    [image: resulting interpolation delay values]
    Clients can use higher cl_interp_ratio values to accommodate packet loss & choke.
    There are 2 server-side variables that limit how large a client's cl_interp_ratio can be:
    "sv_client_min_interp_ratio" = "1" (replicated) - This can be used to limit the value of cl_interp_ratio for connected clients (only while they are connected). -1 = let clients set cl_interp_ratio to anything; any other value = the minimum value for cl_interp_ratio.
    "sv_client_max_interp_ratio" = "2" (replicated) - This can be used to limit the value of cl_interp_ratio for connected clients (only while they are connected). If sv_client_min_interp_ratio is -1, this cvar has no effect.
    We will be using the defaults for now
    The bottom line is this: interp is calculated off your updaterate. Change your updaterate and your interp will change, it is as simple as that!
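That bottom line can be written out as arithmetic. A minimal Python sketch (my own illustration, assuming interpolation delay = ratio / updaterate, with the ratio clamped by the two server cvars described above):

```python
def interp_delay(cl_updaterate, cl_interp_ratio=1,
                 sv_client_min_interp_ratio=1, sv_client_max_interp_ratio=2):
    """Interpolation delay in seconds, derived from the client's updaterate.

    The server clamps the client's ratio between its min and max, unless
    the minimum is -1, in which case clients may set any ratio.
    """
    ratio = cl_interp_ratio
    if sv_client_min_interp_ratio != -1:
        ratio = max(sv_client_min_interp_ratio,
                    min(ratio, sv_client_max_interp_ratio))
    return ratio / cl_updaterate

print(interp_delay(100))    # -> 0.01 (10 ms at updaterate 100)
print(interp_delay(66, 2))  # about 0.03 (roughly 30 ms)
```

Note how changing cl_updaterate from 100 to 66 changes the delay with no other settings touched: interp really is calculated off your updaterate.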
    P.S. This is only the theoretical interpretation of what is supposed to happen. Please do not attempt to blame the author if your reality does not agree with the theoretical explanation.
    P.P.S. This new information (18January2006) invalidates small parts of the other sections of this tickrate guide, please take this into consideration. Thank You

    In conclusion

    I hope this helps clear up some of the mystery of setting up a server with a high tickrate. Don't worry if it does not; I am still asking questions myself.
    Special Thanks to Alfred Reynolds and Martin Otten for their input that helped me put this guide together.
    If you are still having problems with SRCDS then please refer to my Troubleshooting_valve_HLDS-SRCDS Guide
    Cheers
    Whisper
    Whisper's Very Basic Competitive Counter-Strike War Strategy Guide

    PostScript: Questions Remaining

    • The actual fps a server generates does not equal the fps_max number, and certain intermediate values cause no change to the reported server fps. e.g. fps_max 300 produces approximately 250 fps; fps_max 400 still produces 250 fps; whereas fps_max 600 will raise the reported server fps to 500.

    • There are 3 measures of ping: Scoreboard, net_graph, and status/rcon status; none of them remotely agree with each other. What does each measure, and what is the relationship between them?
      • This is almost answered.

    • The net_graph 3 reported 'IN' & 'OUT', when calculated as a combined total, does not appear to exceed the sv_maxrate or rate settings, even though by definition both settings are only meant to control how much data the server can send the client in Bytes per second?

    • The sort order of 'rcon status' screen does not appear to have any order whatsoever.
      • Apparently it is based on player position on the server, which means I now know as much as before I asked the question.
      • The full answer is: Because that is the easiest way to iterate players from the code :) They are sorted in entity order, which doesn't match player id or connected time.

    • Need a precise explanation for all causes of choke and how to resolve each cause?

    • An indication of how much data is actually generated per player for a given tickrate and player number assuming the clients and server have enough bandwidth and the server has enough CPU capacity?

    • Hardware to tickrate benchmarks to provide people an indicator on how many players they can run for a given tickrate and available bandwidth?

    • Explanations for all causes of Loss according to the clients net_graph 3

    • Effect of -pingboost in command line
      • You mean the effect for SRCDS servers? For HLDS: 1 = standard, 2 = more CPU (better frames/pings), 3 = grab as much CPU as you can (not recommended when running more than one server on the computer).
        • Is pingboost still used for SRCDS? I thought it was, but I don't deal with Linux SRCDS day in day out so I wouldn't know for 100% sure, but as far as I knew it does still exist for SRCDS.
    • Kernel Timer Explanations for Linux Servers (We are slowly getting there, thanks to the contributors so far)

    • More Answers for Linux Server Admins, and make the article less Windows Centric
     
  2. antoha1998

    Messages: 42
    Likes: 1

    By the way, I've read half of it so far... Interesting...
  3. Rt.

    Messages: 396
    Likes: 121

    Very useful material. I've been looking for something like this for a long time.