First PR to address #14244.
Adds to `coder ping` common potential reasons why a direct connection to the workspace agent couldn't be established (sketched below):
- If the Coder deployment administrator has blocked direct connections (`CODER_BLOCK_DIRECT`).
- If the client has no STUN servers within its DERP map.
- If the client or agent appears to be behind a hard NAT, as reported by Tailscale's `netInfo.MappingVariesByDestIP`.
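The following is a minimal sketch of the kind of checks involved; the type and field names are illustrative placeholders, not the actual `codersdk`/CLI types:

```go
// A minimal sketch (not the actual codersdk/cli types) of the checks
// `coder ping` can surface when no direct connection is established.
package main

import "fmt"

// connDiagnostics holds hypothetical inputs the CLI would already have
// from the coordinator and from the client/agent network reports.
type connDiagnostics struct {
	BlockDirect           bool // deployment set CODER_BLOCK_DIRECT
	DERPMapHasSTUN        bool // at least one DERP region advertises a STUN server
	MappingVariesByDestIP bool // Tailscale netcheck detected a hard NAT
}

// directConnectionWarnings returns human-readable reasons a direct
// (peer-to-peer) connection is unlikely or impossible.
func directConnectionWarnings(d connDiagnostics) []string {
	var warnings []string
	if d.BlockDirect {
		warnings = append(warnings, "direct connections are blocked by the deployment (CODER_BLOCK_DIRECT)")
	}
	if !d.DERPMapHasSTUN {
		warnings = append(warnings, "the DERP map contains no STUN servers, so the public endpoint cannot be discovered")
	}
	if d.MappingVariesByDestIP {
		warnings = append(warnings, "a hard NAT was detected (the NAT mapping varies by destination IP), which defeats hole punching")
	}
	return warnings
}

func main() {
	diags := connDiagnostics{DERPMapHasSTUN: true, MappingVariesByDestIP: true}
	for _, w := range directConnectionWarnings(diags) {
		fmt.Println("warning:", w)
	}
}
```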
Also adds a warning if the client or agent has a network interface with an MTU below the 'safe' MTU for tailnet. This warning is always displayed at the end of a `coder ping`.
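Roughly, the interface check looks like the sketch below; the threshold value is an assumption here and may differ from what the CLI actually uses:

```go
// Sketch of the interface-MTU warning, assuming a "safe" MTU of 1378 bytes
// (the real threshold used by the CLI may differ).
package main

import (
	"fmt"
	"net"
)

const safeMTU = 1378 // assumed tailnet-safe MTU for this sketch

func warnLowMTU() error {
	ifaces, err := net.Interfaces()
	if err != nil {
		return err
	}
	for _, iface := range ifaces {
		// Skip interfaces that are down or loopback; they can't carry
		// tailnet traffic anyway.
		if iface.Flags&net.FlagUp == 0 || iface.Flags&net.FlagLoopback != 0 {
			continue
		}
		if iface.MTU > 0 && iface.MTU < safeMTU {
			fmt.Printf("warning: interface %s has MTU %d (< %d), which may degrade direct connections\n",
				iface.Name, iface.MTU, safeMTU)
		}
	}
	return nil
}

func main() {
	_ = warnLowMTU()
}
```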
#13617 bumped the Agent/Tailnet API minor version because it added telemetry features. However, we don't actually use those protocol features yet, so it's a bit obnoxious for our clients to ask for the newest API version.
This is particularly true of the CLI client, since it's distributed separately: if an end user installs the latest CLI and their organization's deployment hasn't fully upgraded, it will fail to connect.
Since we have a release coming up and the telemetry work won't make it, I think we should roll back to version 2.0 until we actually implement the telemetry features. That way the newest release (2.13) will work with Coder servers all the way back to 2.9.
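For clarity, this is the major/minor compatibility rule implied above, as a sketch (the real validation lives elsewhere and may differ in detail):

```go
// The major versions must match and the client's requested minor version
// must not exceed the server's. That's why a CLI asking for 2.1 fails
// against servers that only speak 2.0, and why rolling the CLI back to 2.0
// keeps it compatible with older deployments.
package main

import "fmt"

func compatible(clientMajor, clientMinor, serverMajor, serverMinor int) bool {
	return clientMajor == serverMajor && clientMinor <= serverMinor
}

func main() {
	fmt.Println(compatible(2, 1, 2, 0)) // false: new CLI, older server
	fmt.Println(compatible(2, 0, 2, 0)) // true: after rolling back to 2.0
	fmt.Println(compatible(2, 0, 2, 1)) // true: newer servers still accept 2.0
}
```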
I initially made this change while hacking wgengine to also capture WireGuard packets going into the magicsock, so that we could capture the initial WireGuard handshake.
I don't think we should ship that additional capture logic, but it seems generally useful to capture packets from the get-go on speedtest, so that you can see disco and pings before the TCP speedtest session starts.
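The ordering change is roughly the following; the `conn` interface, `StartPacketCapture`, and `Speedtest` names are placeholders rather than the real tailnet API, and the point is only that the capture starts before the speedtest dial:

```go
package main

import (
	"context"
	"fmt"
)

// conn is a hypothetical stand-in for the workspace connection.
type conn interface {
	// StartPacketCapture begins writing packets to path and returns a stop function.
	StartPacketCapture(path string) (stop func(), err error)
	// Speedtest runs the TCP speedtest session against the agent.
	Speedtest(ctx context.Context) error
}

func runSpeedtest(ctx context.Context, c conn, pcapPath string) error {
	if pcapPath != "" {
		// Capture from the get-go, before the speedtest session starts, so
		// the initial disco negotiation and pings are included in the pcap.
		stop, err := c.StartPacketCapture(pcapPath)
		if err != nil {
			return err
		}
		defer stop()
	}
	return c.Speedtest(ctx)
}

// fakeConn exists only so this sketch compiles and runs on its own.
type fakeConn struct{}

func (fakeConn) StartPacketCapture(path string) (func(), error) {
	fmt.Println("capturing to", path)
	return func() { fmt.Println("capture stopped") }, nil
}

func (fakeConn) Speedtest(ctx context.Context) error {
	fmt.Println("running speedtest")
	return nil
}

func main() {
	_ = runSpeedtest(context.Background(), fakeConn{}, "speedtest.pcap")
}
```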
When an agent receives a node, it responds with an ACK, which is relayed to the client. After the client receives the ACK, it's allowed to begin pinging.
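A minimal sketch of that handshake on the client side, with hypothetical names (the real coordinator protocol and message types differ in detail):

```go
package main

import (
	"context"
	"errors"
	"time"
)

// waitForAgentAck blocks until the agent's ACK (relayed by the coordinator)
// arrives for the node we just sent, or the context expires. Only after
// that may the client start pinging the agent.
func waitForAgentAck(ctx context.Context, acks <-chan struct{}) error {
	select {
	case <-acks:
		return nil
	case <-ctx.Done():
		return errors.New("timed out waiting for agent to acknowledge node")
	}
}

func main() {
	acks := make(chan struct{}, 1)

	// Simulate the coordinator relaying the agent's ACK shortly after the
	// client's node update is delivered.
	go func() {
		time.Sleep(10 * time.Millisecond)
		acks <- struct{}{}
	}()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if err := waitForAgentAck(ctx, acks); err != nil {
		panic(err)
	}
	// Safe to begin pinging the agent now.
}
```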
Currently, importing `codersdk` just to interact with the API requires importing tailscale, which causes builds to fail unless the consumer manually uses our tailscale fork (see the example below).
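For context, the current workaround looks something like this in a consumer's `go.mod`; the replacement path is a placeholder for wherever the fork is checked out, and the point is that every importer has to redirect `tailscale.com` themselves:

```
// go.mod of a project importing codersdk today; the path below is a
// placeholder for a local checkout of the fork.
replace tailscale.com => ../tailscale
```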