🚧 wip: I was working on some port-forwarding idea.
It was going to be generic over TCP and use 2 HTTP streams, one each way. The plan's written down somewhere.

branch main · parent f94b40b6b8 · commit c40abb0fe6

@@ -1,4 +1,5 @@
use std::{
    collections::HashMap,
    sync::Arc,
    time::Duration,
};

@@ -13,7 +14,10 @@ use hyper::{
use tokio::{
    spawn,
    stream::StreamExt,
    sync::mpsc,
    sync::{
        RwLock,
        mpsc,
    },
    time::interval,
};
use tracing::{

@@ -26,8 +30,53 @@ use tracing_subscriber::{
};
use ulid::Ulid;

pub struct RelayState {
struct RelayState {
    connections: HashMap <String, ConnectionState>,
    client_opaques: HashMap <String, String>,
    server_opaques: HashMap <String, String>,
}

/*

HTTP has 2 good pause points:

- Client has uploaded request body, server has said nothing
- Server has sent status code + response headers

Because we want to stream everything, there is no point in a single HTTP
req-resp pair having both a streaming request body and a streaming response body.

To move the state machine, the first request from client and server must not
be streaming.

With all that in mind, the r

*/

enum ConnectionState {
    // We got 1 connection from the client. We need a 2nd to form the upstream.
    WaitForUpstream (String, String),
    // We got 2 connections from the client. We need the server to accept
    // by sending its downstream.
    WaitForAccept (String, String, String),
    Connected (String, String, String, String),
}

// An established connection has 4 individual HTTP streams

struct EstablishedConnection {
    // Request body of 'upstream' call
    client_up: String,

    // Response body of 'connect' call
    client_down: String,

    // Response body of 'listen' call
    server_up: String,

    // Request body of 'accept' call
    server_down: String,
}

pub struct HttpService {

@@ -133,8 +182,6 @@ impl HttpService {
    }
}

#[tokio::main]
async fn main () -> Result <(), anyhow::Error> {
    use std::time::Duration;

@@ -155,3 +202,13 @@ async fn main () -> Result <(), anyhow::Error> {
    info! ("Starting relay");
    Ok (service.serve (4003).await?)
}

#[cfg (test)]
mod tests {
    use super::*;

    #[test]
    fn state_machine () {
        assert! (false);
    }
}

@@ -0,0 +1,125 @@
# Port forwarding

(TTQG2MM5)

## Known issues in this document

I just changed my mind, it's probably better to make all 4 connections
POSTs, and not have any GETs.

## Connection overview

Forwarding arbitrary TCP ports is similar to PTTH, but with more moving parts.
The client must run a PTTH proxy to translate between HTTP and TCP.

These programs are party to a connection:

1. A TCP client (agnostic of PTTH)
2. The PTTH client-side proxy
3. The PTTH relay server
4. The PTTH server-side proxy (integrated into ptth_server)
5. A TCP server (agnostic of PTTH)

To establish a connection:

1. The TCP server must be bound to a port.
2. ptth_server GETs out to the relay.
3. The user starts the PTTH proxy and it binds to a port.
4. The user tells the TCP client to connect to the PTTH proxy.
5. The PTTH proxy generates a connection ID and GET+POSTs out to the relay.
6. The relay holds open that POST and returns ptth_server's GET.
7. ptth_server connects to the TCP server.
8. ptth_server GET+POSTs out to the relay.

Error handling:

Under any error condition, either the client-side proxy or ptth_server will
notify the relay, and the relay will close the upstream and downstream.
When the client proxy sees its streams close, it closes its connection to the
TCP client. When ptth_server sees its streams close, it closes its connection
to the TCP server.

The relay is also allowed to time out idle connections.
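
This teardown rule maps nicely onto Tokio. A minimal sketch (not the WIP code above), assuming each of the 4 HTTP streams is pumped by its own task; `supervise` is a name invented here:

```rust
use tokio::task::JoinHandle;

// Wait for any one of the 4 stream-pump tasks to end, then abort the other 3,
// which closes their HTTP streams as well.
async fn supervise (
    mut client_up: JoinHandle <()>,
    mut client_down: JoinHandle <()>,
    mut server_up: JoinHandle <()>,
    mut server_down: JoinHandle <()>,
) {
    // `&mut JoinHandle` is itself a future, so the handles aren't consumed here.
    tokio::select! {
        _ = &mut client_up => (),
        _ = &mut client_down => (),
        _ = &mut server_up => (),
        _ = &mut server_down => (),
    }
    // Aborting a task that has already finished is a no-op.
    for handle in [client_up, client_down, server_up, server_down] {
        handle.abort ();
    }
}
```

The relay's idle timeout could be one more branch in the same `select!` (e.g. a `tokio::time::sleep`).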

## Client terminator's POV

- Bind to a TCP port on localhost
- Listen for a TCP connection
- Request to `connect` endpoint
- If it gives us an ID, take it and request to `upstream`
- Accept the TCP connection
- Stream down and up until any of the 3 connections closes
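
A rough sketch of that loop, using `reqwest` (with its `stream` feature) and `tokio-util` for brevity instead of the raw hyper types in the WIP code. The bind address, URLs, and the `X-PTTH-Connection-Id` header are placeholders, and since Tokio's `TcpListener` can't report a pending connection without accepting it, this version accepts the TCP connection before calling `connect`:

```rust
use futures_util::StreamExt;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpListener;
use tokio_util::io::ReaderStream;

async fn client_proxy (relay: &str) -> anyhow::Result <()> {
    let listener = TcpListener::bind ("127.0.0.1:5000").await?;
    let http = reqwest::Client::new ();

    loop {
        let (tcp, _addr) = listener.accept ().await?;
        let (tcp_read, mut tcp_write) = tcp.into_split ();

        // `connect` gives us the downstream as its response body, plus an
        // opaque connection ID to echo back on the `upstream` call.
        let downstream = http.post (format! ("{}/connect", relay))
            .send ().await?
            .error_for_status ()?;
        let conn_id = downstream.headers ()
            .get ("X-PTTH-Connection-Id")
            .and_then (|v| v.to_str ().ok ())
            .unwrap_or_default ()
            .to_string ();

        // `upstream` streams whatever the TCP client sends, as a request body.
        let upstream = tokio::spawn (
            http.post (format! ("{}/upstream", relay))
                .header ("X-PTTH-Connection-Id", conn_id)
                .body (reqwest::Body::wrap_stream (ReaderStream::new (tcp_read)))
                .send ()
        );

        // Pump the downstream back into the TCP socket until it ends.
        let mut chunks = downstream.bytes_stream ();
        while let Some (chunk) = chunks.next ().await {
            tcp_write.write_all (&chunk?).await?;
        }

        // One side closed, so tear down the other direction too.
        upstream.abort ();
    }
}
```

A real client proxy would also check the `upstream` call's result and serve more than one TCP connection at a time; both are dropped here to keep the shape visible.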

## Server terminator's POV

- Request to `listen` endpoint
- If it gives us an ID, connect to a TCP port on localhost
- If that succeeds, take the ID
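
The server side mirrors the client side, under the same assumptions (placeholder URLs, `reqwest` with the `stream` feature, and the connection ID simply returned as the `listen` response body):

```rust
use futures_util::StreamExt;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio_util::io::ReaderStream;

async fn server_proxy (relay: &str, opaque_server_port: &str, local_port: u16) -> anyhow::Result <()> {
    let http = reqwest::Client::new ();

    loop {
        // Long-poll `listen`; treat a non-2xx status as a timeout and retry.
        let resp = http.get (format! ("{}/listen/{}", relay, opaque_server_port)).send ().await?;
        if !resp.status ().is_success () {
            continue;
        }
        let conn_id = resp.text ().await?;

        // Only take the ID if the local TCP connect succeeds.
        let Ok (tcp) = TcpStream::connect (("127.0.0.1", local_port)).await else {
            continue;
        };
        let (tcp_read, mut tcp_write) = tcp.into_split ();

        // `accept`: the response body is the client's upstream.
        let upstream = http.post (format! ("{}/accept/{}", relay, conn_id))
            .send ().await?
            .error_for_status ()?;

        // `downstream`: the request body is everything the TCP server sends back.
        let downstream = tokio::spawn (
            http.post (format! ("{}/downstream/{}", relay, conn_id))
                .body (reqwest::Body::wrap_stream (ReaderStream::new (tcp_read)))
                .send ()
        );

        // Pump the client's upstream into the local TCP server until it ends.
        let mut chunks = upstream.bytes_stream ();
        while let Some (chunk) = chunks.next ().await {
            tcp_write.write_all (&chunk?).await?;
        }
        downstream.abort ();
    }
}
```

Here too, a real implementation would spawn a task per accepted connection instead of forwarding them one at a time.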

## Relay's POV

```
upstream --> accept
connect <-- downstream
```

When a client opens a connection, the relay will see these endpoints
request / respond in this order:

1. `listen` request
2. `connect` request
3. `listen` response
4. `accept` request
5. `connect` response (client downstream)
6. `accept` response (server upstream)
7. `upstream` request (client upstream)
8. `downstream` request (server downstream)

The order is relaxed for steps 5-8. If any connection closes, the other 3
will close, but the specific order doesn't matter much.
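
Inside the relay, each arrow above is one caller's request body spliced into another caller's response body. A minimal sketch of that splice, assuming a hyper 0.14 / tokio 1.x setup with hyper's `stream` feature (slightly newer crates than the diff above); `splice` is a name invented here:

```rust
use futures_util::StreamExt;
use hyper::Body;
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;

// Returns a Body to hand out as one side's response, plus the task that
// pumps the other side's request body into it, chunk by chunk.
fn splice (mut incoming: Body) -> (Body, tokio::task::JoinHandle <()>) {
    let (tx, rx) = mpsc::channel (8);
    let outgoing = Body::wrap_stream (ReceiverStream::new (rx));

    let pump = tokio::spawn (async move {
        while let Some (chunk) = incoming.next ().await {
            // Stop pumping as soon as either end hangs up.
            if tx.send (chunk).await.is_err () {
                break;
            }
        }
    });

    (outgoing, pump)
}
```

The relay would call this twice per connection: once to feed `upstream`'s request body into `accept`'s response, and once to feed `downstream`'s request body into `connect`'s response.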

**For servers:**

All calls are authenticated by the server's key, so the relay always knows
who the server is.

`listen` (opaque_server_port)

GET here to wait for clients. The relay will long-poll until a client connects,
or until a timeout.

server_port is an opaque ID that somehow maps to a TCP port number.
Only the server knows what it means.
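
For illustration only (this document doesn't fix how that mapping is stored), the server side could keep it in a small table keyed by the opaque ID:

```rust
use std::collections::HashMap;

// Maps opaque IDs (shared with the relay) to real local port numbers
// (never shared with the relay).
struct PortMap {
    ports: HashMap <String, u16>,
}

impl PortMap {
    fn resolve (&self, opaque_server_port: &str) -> Option <u16> {
        self.ports.get (opaque_server_port).copied ()
    }
}
```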

Response:

- An opaque connection ID shared between you and the relay

`accept` (opaque_connection_id)

POST here to accept an incoming connection. On success, the relay responds
with 200 OK, and the response body is your half of the upstream.

`downstream` (opaque_connection_id)

POST here with your downstream as the request body.

**For clients:**

All calls are authenticated by the client's key, so the relay always knows
who the client is.

`connect` (server_name, opaque_server_port)

POST here to try connecting to a server. On success, the
relay responds with 200 OK, and the response stream is your downstream.
If the connection fails, the relay drops any bytes you've sent,
and responds with an error code.

The response will include an opaque connection ID as a header. Pass this
to `upstream`.

`upstream` (opaque_connection_id)

POST here with your upstream as the request body. The server can only
respond by closing the stream, which it does when the stream fails for any reason.