This is such a stupid bug. I swear I checked this exact piece of code for
this exact bug and it wasn't there. But after I fixed it, I had no problem
running a download for 11 hours straight.
This fix won't affect a case where the firewall is actually closing long-
running connections (which is bad behavior, but it could happen) or a case
where the Internet is just flaky and the connection drops.
But it will fix the most common case where both client and server are on
robust connections and the download times out anyway.
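The commit doesn't show the offending line, but purely as an illustration of this class of bug (assuming a reqwest-based client, which may or may not be what this project uses): a whole-request timeout also bounds reading the body, so a long download gets cut off even on a perfectly healthy connection, while a connect-only timeout doesn't.

```rust
use std::time::Duration;

// Illustrative only, not this project's actual code: the difference between
// a whole-request timeout and a connect-only timeout in reqwest.
fn build_client() -> reqwest::Result<reqwest::Client> {
    // The failure mode described above: .timeout() bounds the entire
    // request, including reading the body, so it kills long downloads.
    // reqwest::Client::builder()
    //     .timeout(Duration::from_secs(60))
    //     .build()

    // Bounding only connection setup leaves an 11-hour download alone.
    reqwest::Client::builder()
        .connect_timeout(Duration::from_secs(30))
        .build()
}
```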
I forgot to set the version in Cargo.toml files for 1.0.0.
I'm not gonna do 0.x versions because this is already live somewhere
and I don't like that 0.x adds complexity to versioning.
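For what it's worth, one way to make this harder to forget next time (just a sketch, assuming the repo is or becomes a Cargo workspace; the crate names below are placeholders) is to declare the version once at the workspace root and let member crates inherit it:

```toml
# Root Cargo.toml (member names are placeholders)
[workspace]
members = ["client", "server"]

[workspace.package]
version = "1.0.0"
```

and in each member crate:

```toml
# client/Cargo.toml
[package]
name = "client"
version.workspace = true
edition = "2021"
```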
There are a lot of missing pieces, but the big picture is like this (rough
sketch of one end after the list):
- Use 2 completely separate HTTP streams, and try to keep them alive as long
as possible, each in basically half-duplex mode
- Each stream has a long-running PUT and GET, sort of like station307
- Each end has to be terminated by a native app that either connects to a local
TCP server, or acts as a local TCP server
- No clue how it would work for multiple connections on the same port. Poorly,
I guess?
- It's probably gonna run like garbage because we're splitting one TCP
connection across 2 TCP streams, and although backpressure might work, the
ACKs will be less efficient. And the congestion control might get confused.
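To make that concrete, here's a rough sketch of what one end could look like in Rust. This is only a guess at the shape, not the actual design: it assumes tokio, reqwest (with the stream feature), tokio-stream, bytes, futures-util, and anyhow, and the relay URLs, paths, and local port are all invented.

```rust
use futures_util::StreamExt;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;
use tokio_stream::wrappers::ReceiverStream;

// Invented relay endpoints; the real protocol and URLs are TBD.
const UPLOAD_URL: &str = "https://relay.example/tunnel/up";
const DOWNLOAD_URL: &str = "https://relay.example/tunnel/down";

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // This end acts as the local TCP server (e.g. for Tracy to connect to).
    let listener = TcpListener::bind("127.0.0.1:9000").await?;
    let (socket, _) = listener.accept().await?;
    let (mut tcp_read, mut tcp_write) = socket.into_split();
    let client = reqwest::Client::new();

    // Upload half: everything read from the local socket becomes the body
    // of one long-running PUT.
    let (tx, rx) = tokio::sync::mpsc::channel::<Result<bytes::Bytes, std::io::Error>>(16);
    let put_client = client.clone();
    let put = tokio::spawn(async move {
        let body = reqwest::Body::wrap_stream(ReceiverStream::new(rx));
        put_client.put(UPLOAD_URL).body(body).send().await
    });
    let pump = tokio::spawn(async move {
        let mut buf = [0u8; 8192];
        loop {
            let n = tcp_read.read(&mut buf).await?;
            if n == 0 {
                break; // local side closed; this ends the PUT body
            }
            if tx.send(Ok(bytes::Bytes::copy_from_slice(&buf[..n]))).await.is_err() {
                break; // PUT finished or failed
            }
        }
        Ok::<_, std::io::Error>(())
    });

    // Download half: one long-running GET, streamed back into the local socket.
    let mut down = client.get(DOWNLOAD_URL).send().await?.bytes_stream();
    while let Some(chunk) = down.next().await {
        tcp_write.write_all(&chunk?).await?;
    }

    pump.await??;
    let _upload_response = put.await??;
    Ok(())
}
```

The other end would be the mirror image: GET from the upload path, PUT into the download path, and connect out to the real local TCP server instead of listening.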
My only goal is to tunnel Tracy over it, so that I can have that remotely.
This won't affect anything, because I had manually written the not_after for
the testing keys. Even the automated tests weren't using the new_30_day
function.
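For context, that helper presumably just produces a rolling 30-day validity window, something like the sketch below; the struct and field names are made up, and only the new_30_day name comes from this note.

```rust
use std::time::{Duration, SystemTime};

// Guess at the helper's shape; the real signature and types may differ.
struct Validity {
    not_before: SystemTime,
    not_after: SystemTime,
}

// Rolling 30-day validity window starting now.
fn new_30_day() -> Validity {
    let now = SystemTime::now();
    Validity {
        not_before: now,
        not_after: now + Duration::from_secs(30 * 24 * 60 * 60),
    }
}
```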