diff --git a/issues/2020-12Dec/auth-route-YNQAQKJS.md b/issues/2020-12Dec/auth-route-YNQAQKJS.md
index 9c114f7..2fdd142 100644
--- a/issues/2020-12Dec/auth-route-YNQAQKJS.md
+++ b/issues/2020-12Dec/auth-route-YNQAQKJS.md
@@ -4,39 +4,52 @@
 
 ## Test curl commands
 
-(In production the API key should be loaded from a file. Putting it in the
-Bash command is bad, because it will be saved to Bash's history file. Putting
-it in environment variables is slightly better)
+Put your API key into a header file, like this:
 
-`curl -H "X-ApiKey: $API_KEY" 127.0.0.1:4000/scraper/api/test`
+```
+X-ApiKey: bad_password
+```
+
+Call it "scraper-secret.txt" or something else obviously secret, and don't
+check it into Git. The key expires every 30 days and needs to be rotated
+manually (for now).
+
+Export the scraper API's URL prefix to an environment variable:
+
+`export API=http://127.0.0.1:4000/scraper`
+
+Newer versions of curl can load headers from a text file. All of the
+commands below use this feature to load the API key.
+
+`curl --header @scraper-secret.txt $API/api/test`
 
 Should return "You're valid!"
 
-`curl -H "X-ApiKey: $API_KEY" 127.0.0.1:4000/scraper/v1/server_list`
+`curl --header @scraper-secret.txt $API/v1/server_list`
 
 Should return a JSON object listing all the servers.
 
-`curl -H "X-ApiKey: $API_KEY" 127.0.0.1:4000/scraper/v1/server/aliens_wildland/api/v1/dir/`
+`curl --header @scraper-secret.txt $API/v1/server/aliens_wildland/api/v1/dir/`
 
 Proxies into the "aliens_wildland" server and retrieves a JSON object listing
 the file server root. (The server must be running a new version of
 ptth_server which can serve the JSON API)
 
-`curl -H "X-ApiKey: $API_KEY" 127.0.0.1:4000/scraper/v1/server/aliens_wildland/api/v1/dir/src/`
+`curl --header @scraper-secret.txt $API/v1/server/aliens_wildland/api/v1/dir/src/`
 
 Same, but retrieves the listing for "/src".
 
-`curl -H "X-ApiKey: $API_KEY" 127.0.0.1:4000/scraper/v1/server/aliens_wildland/files/src/tests.rs`
+`curl --header @scraper-secret.txt $API/v1/server/aliens_wildland/files/src/tests.rs`
 
 There is no special API for retrieving files yet, but the existing server
 API is proxied through the new scraper API on the relay.
 
-`curl --head -H "X-ApiKey: $API_KEY" 127.0.0.1:4000/scraper/v1/server/aliens_wildland/files/src/tests.rs`
+`curl --head --header @scraper-secret.txt $API/v1/server/aliens_wildland/files/src/tests.rs`
 
 PTTH supports HEAD requests. This request will yield a "204 No Content",
 with the "content-length" header.
 
-`curl -H "range: bytes=100-199" -H "X-ApiKey: $API_KEY" 127.0.0.1:4000/scraper/v1/server/aliens_wildland/files/src/tests.rs`
+`curl --header @scraper-secret.txt -H "range: bytes=100-199" $API/v1/server/aliens_wildland/files/src/tests.rs`
 
 PTTH supports byte range requests. This request will skip 100 bytes into the
 file, and read 100 bytes.
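
For convenience, the same walkthrough can be strung together into a shell script. The sketch below only chains the curl commands from the hunk above; the `--fail`/`--silent` flags, the `$SECRET` variable, and the script name are additions for scripting (not part of the change), and it assumes `scraper-secret.txt` is in the working directory and the relay is listening on 127.0.0.1:4000.

```bash
#!/usr/bin/env bash
# scraper-smoke-test.sh (hypothetical name): exercise the scraper API
# endpoints described in the issue, in order.
set -eu

# Use the exported $API prefix if present, otherwise the default relay address.
API=${API:-http://127.0.0.1:4000/scraper}
SECRET=scraper-secret.txt   # header file containing "X-ApiKey: ..."

# Auth check - should print "You're valid!"
curl --fail --silent --header @"$SECRET" "$API/api/test"
echo

# JSON object listing all the servers known to the relay
curl --fail --silent --header @"$SECRET" "$API/v1/server_list"
echo

# Directory listings proxied through the "aliens_wildland" server
curl --fail --silent --header @"$SECRET" "$API/v1/server/aliens_wildland/api/v1/dir/"
curl --fail --silent --header @"$SECRET" "$API/v1/server/aliens_wildland/api/v1/dir/src/"

# File retrieval: HEAD first (204 with content-length), then a 100-byte range
curl --fail --head --header @"$SECRET" "$API/v1/server/aliens_wildland/files/src/tests.rs"
curl --fail --silent --header @"$SECRET" -H "range: bytes=100-199" \
    "$API/v1/server/aliens_wildland/files/src/tests.rs"
echo
```

Because of `--fail`, any non-2xx response (e.g. a rejected API key) makes the script exit non-zero, so it can double as a quick smoke test after rotating the key.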