# Port forwarding
(TTQG2MM5)
## Known issues in this document
I just changed my mind: it's probably better to make all 4 connections
POSTs and not have any GETs.
## Connection overview
Forwarding arbitrary TCP ports is similar to PTTH, but with more moving parts.
The client must run a PTTH proxy to translate between HTTP and TCP.
These programs are party to a connection:
1. A TCP client (agnostic of PTTH)
2. The PTTH client-side proxy
3. The PTTH relay server
4. The PTTH server-side proxy (integrated into ptth_server)
5. A TCP server (agnostic of PTTH)
To establish a connection:
1. The TCP server must be bound to a port.
2. ptth_server GETs out to the relay.
3. The user starts the PTTH proxy and it binds to a port.
4. The user tells the TCP client to connect to the PTTH proxy.
5. The PTTH proxy GET+POSTs out to the relay, which assigns a connection ID.
6. The relay holds open that POST and returns ptth_server's GET.
7. ptth_server connects to the TCP server.
8. ptth_server GET+POSTs out to the relay.
Error handling:
Under any error condition, either the client-side proxy or ptth_server will
notify the relay, and the relay will close the upstream and downstream.
When the client proxy sees its streams close, it closes its connection to the
TCP client. When ptth_server sees its streams close, it closes its connection
to the TCP server.
The relay is also allowed to time out idle connections.
## Client terminator's POV
- Bind to a TCP port on localhost
- Listen for a TCP connection
- Request to `connect` endpoint
- If it gives us an ID, take it and request to `upstream`
- Accept the TCP connection
- Stream down and up until any of the 3 connections closes
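A minimal sketch of that flow, assuming reqwest (with its `stream` feature), tokio, tokio-util (with its `io` feature), and futures-util. The relay URL, paths, query parameters, and the connection-ID header name are all invented for illustration, authentication is omitted, and it handles a single TCP connection with no error cleanup:
```rust
use futures_util::StreamExt;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpListener;
use tokio_util::io::ReaderStream;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Bind a local port for the PTTH-agnostic TCP client to dial, and
    // (simplifying the steps above) accept its connection up front.
    let listener = TcpListener::bind("127.0.0.1:4010").await?;
    let (tcp, _) = listener.accept().await?;
    let (tcp_read, mut tcp_write) = tcp.into_split();

    let http = reqwest::Client::new();

    // `connect`: on success the response stream is our downstream, and a
    // header carries the connection ID. (Path, query, and header names
    // here are guesses, not ptth_relay's real ones.)
    let downstream = http
        .post("https://relay.example/connect?server=my_server&port=db")
        .send()
        .await?
        .error_for_status()?;
    let connection_id = downstream
        .headers()
        .get("X-PTTH-Connection-Id")
        .ok_or("relay did not return a connection ID")?
        .to_str()?
        .to_owned();

    // `upstream`: stream everything the TCP client sends us as the body.
    let upstream_body = reqwest::Body::wrap_stream(ReaderStream::new(tcp_read));
    let upstream = http
        .post(format!("https://relay.example/upstream/{connection_id}"))
        .body(upstream_body)
        .send();

    // Copy the downstream (the `connect` response body) to the TCP client.
    let mut chunks = downstream.bytes_stream();
    let copy_down = async {
        while let Some(chunk) = chunks.next().await {
            tcp_write.write_all(&chunk?).await?;
        }
        Ok::<_, Box<dyn std::error::Error>>(())
    };

    // Run both directions until either side closes.
    let (_up, down) = tokio::join!(upstream, copy_down);
    down?;
    Ok(())
}
```
The important property is that both long-lived bodies (the `upstream` request and the `connect` response) are open-ended streams, so the proxy is just copying bytes in each direction until something closes.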
## Server terminator's POV
- Request to `listen` endpoint
- If it gives us an ID, connect to a TCP port on localhost
- If that succeeds, take the ID: request to `accept`, then request to `downstream`
- Stream up and down until any of the 3 connections closes
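A matching sketch for the server side, using the same crates and the same invented paths. It additionally assumes the `listen` response carries the connection ID in its body and that a long-poll timeout comes back as a non-200 status, and it hardcodes the opaque port `db` to `127.0.0.1:5432`; the real ptth_server does its own mapping and auth:
```rust
use futures_util::StreamExt;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio_util::io::ReaderStream;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let http = reqwest::Client::new();

    loop {
        // `listen`: long-poll until the relay has a client for opaque
        // port "db", or the poll times out and we try again.
        let listen = http
            .get("https://relay.example/listen?port=db")
            .send()
            .await?;
        if listen.status() != reqwest::StatusCode::OK {
            continue;
        }
        let id = listen.text().await?;

        // Connect to the real TCP server that "db" maps to.
        let tcp = TcpStream::connect("127.0.0.1:5432").await?;
        let (tcp_read, mut tcp_write) = tcp.into_split();

        // `accept`: the response body is our half of the upstream.
        let accepted = http
            .post(format!("https://relay.example/accept/{id}"))
            .send()
            .await?
            .error_for_status()?;

        // `downstream`: stream the TCP server's replies as the request body.
        let downstream_body = reqwest::Body::wrap_stream(ReaderStream::new(tcp_read));
        let downstream = http
            .post(format!("https://relay.example/downstream/{id}"))
            .body(downstream_body)
            .send();

        // Copy upstream bytes (arriving on the `accept` response) to the
        // TCP server, while the `downstream` POST runs.
        let mut upstream = accepted.bytes_stream();
        let copy_up = async {
            while let Some(chunk) = upstream.next().await {
                tcp_write.write_all(&chunk?).await?;
            }
            Ok::<_, Box<dyn std::error::Error>>(())
        };

        let (_down, up) = tokio::join!(downstream, copy_up);
        up?;
    }
}
```
A production proxy would spawn a task per connection and go straight back to `listen` instead of serving one client at a time.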
## Relay's POV
```
client proxy                        ptth_server
upstream  (request body)   -->  accept     (response body)
connect   (response body)  <--  downstream (request body)
```
When a client opens a connection, the relay will see requests and responses
on these endpoints in this order:
1. listen request
2. connect request
3. listen response
4. accept request
5. connect response (client downstream)
6. accept response (server upstream)
7. upstream request (client upstream)
8. downstream request (server downstream)
The order is relaxed for steps 5 - 8. If any connection closes, the other 3
will close, but the specific order doesn't matter much.
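One way the relay could do the pairing in steps 1 - 3 is to park each `listen` request behind a oneshot channel, keyed by server name and opaque port, and complete it when a matching `connect` arrives. The sketch below shows only that bookkeeping (the wiring of the `accept`, `upstream`, and `downstream` bodies to each other is omitted), and none of these names are taken from ptth_relay:
```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::oneshot;

/// Which forwarded port: (server_name, opaque_server_port).
type PortKey = (String, String);

#[derive(Default)]
struct RelayState {
    /// Servers parked on `listen`, waiting for a client. Sending on the
    /// oneshot is what completes the long-polled HTTP response (step 3).
    listeners: HashMap<PortKey, oneshot::Sender<String>>,
    next_id: AtomicU64,
}

impl RelayState {
    /// A server GETs `listen` (step 1): park it until a client shows up.
    fn on_listen(&mut self, key: PortKey) -> oneshot::Receiver<String> {
        let (tx, rx) = oneshot::channel();
        self.listeners.insert(key, tx);
        rx
    }

    /// A client POSTs `connect` (step 2): if a server is parked on this
    /// port, mint a connection ID and wake it; otherwise the connect fails.
    fn on_connect(&mut self, key: &PortKey) -> Option<String> {
        let tx = self.listeners.remove(key)?;
        // A real relay would use an unguessable random ID, not a counter.
        let id = format!("conn-{}", self.next_id.fetch_add(1, Ordering::Relaxed));
        tx.send(id.clone()).ok()?;
        Some(id)
    }
}

#[tokio::main]
async fn main() {
    let mut relay = RelayState::default();
    let key = ("my_server".to_string(), "db".to_string());

    // A server parks on `listen`, then a client's `connect` wakes it up
    // with the freshly minted connection ID.
    let parked = relay.on_listen(key.clone());
    let id = relay.on_connect(&key).expect("a server was listening");
    assert_eq!(parked.await.unwrap(), id);
}
```
The later `accept`, `upstream`, and `downstream` requests could then look up the same connection ID in a second map holding the byte channels for each direction.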
**For servers:**
All calls are authenticated by the server's key, so the relay always knows
who the server is.
`listen` (opaque_server_port)
GET here to wait for clients. The relay holds the request open (long-polling)
until a client connects or a timeout expires.
`opaque_server_port` is an opaque ID that maps to a TCP port number.
Only the server knows what it means (one possible mapping is sketched below).
Response:
- An opaque connection ID shared between you and the relay
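Purely as an illustration of that opacity (not ptth_server's actual config format), the server-side mapping could be as small as:
```rust
use std::collections::HashMap;

/// The opaque IDs the relay and clients see, and the local addresses
/// that only ptth_server knows about. Entries are made up.
fn forwarding_table() -> HashMap<&'static str, &'static str> {
    HashMap::from([
        ("db", "127.0.0.1:5432"),  // opaque ID "db"  -> local Postgres
        ("ssh", "127.0.0.1:22"),   // opaque ID "ssh" -> local sshd
    ])
}

fn main() {
    let table = forwarding_table();
    // ptth_server resolves the opaque ID it got from `listen`:
    assert_eq!(table.get("db"), Some(&"127.0.0.1:5432"));
}
```
The relay and clients only ever see the keys; the local addresses never leave the server.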
`accept` (opaque_connection_id)
POST here to accept an incoming connection. On success, the relay responds
with 200 OK, and the response body is your half of the upstream.
`downstream` (opaque_connection_id)
POST here with your downstream as the request body.
**For clients:**
All calls are authenticated by the client's key, so the relay always knows
who the client is.
`connect` (server_name, opaque_server_port)
POST here to try connecting to a server. On success, the
relay responds with 200 OK, and the response stream is your downstream.
If the connection fails, the relay drops any bytes you've sent,
and responds with an error code.
The response will include an opaque connection ID as a header. Pass this
to `upstream`.
`upstream` (opaque_connection_id)
POST here with your upstream as the request body. The only response you will
get is the stream closing, which happens when the connection fails for any reason.