Bug: API ProtonVPN - Invalid access token - 401 Unauthorized #2788
Comments
@qdm12 is more or less the only maintainer of this project and works on it in his free time.
|
This is possible to 'fix'. They seem to have locked the 'logicals' API endpoint behind auth now, and you basically need to supply two pieces of info. First, you need to send a cookie with this format:
You can get one by logging into the website; I've not yet worked out which 'auth API' endpoint would give you the token. It does have a huge max-life (something like 6 months), but you need to make sure nothing 'refreshes' it: generating a refreshed version will cancel the still-valid, unexpired token. Second, you also need to send the following header with the API request:
The Proton UID is a 33-character alphanumeric value, the same length as the auth token value. I've also found that this version of the logicals API endpoint is more reliable with this auth. A rough sketch of the request shape is below.
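For illustration, here is a rough Go sketch of that request shape. The x-pm-uid header matches what the code later in this thread sends; the cookie name and layout (an AUTH-<UID> cookie carrying the access token) are my assumption based on the 'AUTH-' cookies visible in the browser, not something confirmed by Proton, and the placeholder values are hypothetical.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const uid = "replace-with-your-33-char-uid"  // hypothetical placeholder
	const token = "replace-with-your-auth-token" // hypothetical placeholder

	req, err := http.NewRequest("GET", "https://account.proton.me/api/vpn/logicals", nil)
	if err != nil {
		panic(err)
	}
	// The cookie name/value layout below is assumed, not confirmed:
	req.Header.Set("Cookie", fmt.Sprintf("AUTH-%s=%s", uid, token))
	req.Header.Set("x-pm-uid", uid)

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	body, _ := io.ReadAll(res.Body)
	fmt.Println(string(body))
}
```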
|
Thank you for your quick response. I tested your technique with Postman; however, I regenerated a token three times, and each time I received an AUTH- token of 32 characters. For each token, I am encountering the same recurring issue:
I obtained the tokens by logging into my account and then going to F12 > Storage > Cookies (I am using Firefox). |
So I've been playing a little this afternoon, and the following Go should in theory work. In the bits of testing I've done, it grabs a new session and auth token from their sessions API endpoint, unmarshals that response JSON into a struct, then sets up the request for the logicals endpoint with the required Bearer token Authorization header, as that seems to be the type of auth token the sessions API returns. It then makes an HTTP request to pull the logicals API endpoint and, in this case as it was quick, just does a super dirty dump of the JSON to the console. Piping it into jq, the JSON looks okay, but that is about all the validation I've done.

```go
package main
import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

// UnmarshalProtonSession decodes the JSON returned by the sessions endpoint.
func UnmarshalProtonSession(data []byte) (ProtonSession, error) {
	var r ProtonSession
	err := json.Unmarshal(data, &r)
	return r, err
}

func (r *ProtonSession) Marshal() ([]byte, error) {
	return json.Marshal(r)
}

// ProtonSession mirrors the response from the auth/v4/sessions endpoint.
type ProtonSession struct {
	Code         int64         `json:"Code"`
	AccessToken  string        `json:"AccessToken"`
	RefreshToken string        `json:"RefreshToken"`
	TokenType    string        `json:"TokenType"`
	Scopes       []interface{} `json:"Scopes"`
	Uid          string        `json:"UID"`
	LocalID      int64         `json:"LocalID"`
}
func main() {
	// Create an unauthenticated session to get an access token and UID.
	sessions_url := "https://account.proton.me/api/auth/v4/sessions"
	session_req, err := http.NewRequest("POST", sessions_url, nil)
	if err != nil {
		os.Exit(-1)
	}
	session_req.Header.Add("x-pm-appversion", "[email protected]")
	session_req.Header.Add("x-pm-locale", "en_US")
	session_req.Header.Add("x-enforce-unauthsession", "true")

	session_res, err := http.DefaultClient.Do(session_req)
	if err != nil {
		os.Exit(-1)
	}
	defer session_res.Body.Close()

	body, err := io.ReadAll(session_res.Body)
	if err != nil {
		os.Exit(-1)
	}
	pm_session, err := UnmarshalProtonSession(body)
	if err != nil {
		os.Exit(-1)
	}

	// Query the logicals endpoint using the Bearer token and UID from the session.
	logicals_url := "https://account.proton.me/api/vpn/logicals"
	req, err := http.NewRequest("GET", logicals_url, nil)
	if err != nil {
		os.Exit(-1)
	}
	req.Header.Add("Authorization", fmt.Sprintf("Bearer %s", pm_session.AccessToken))
	req.Header.Add("x-pm-uid", pm_session.Uid)
	req.Header.Add("x-pm-appversion", "[email protected]")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		os.Exit(-1)
	}
	defer res.Body.Close()

	body, err = io.ReadAll(res.Body)
	if err != nil || len(body) == 0 {
		os.Exit(-1)
	}

	// Quick and dirty: dump the raw JSON to the console.
	fmt.Println(string(body))
}
```

I currently have some basic info on getting 'Auth Cookies' out of the API, but I'm not able to get it to work at the moment. I think I'm missing a couple of params on the request, and the info I had doesn't provide any details on the missing params, so I'm attempting to reverse engineer them. |
One thing I've noticed, which is why I switched to the route in my previous post: there is a 'refresh token' endpoint that is super annoying, as it basically kills any active tokens for that session when it gets queried. When I was watching the web browser traffic earlier, I noticed that depending on where you are on the site, their web app's client-side code gets super chatty with the refresh token endpoint; it hits it every 30 seconds or so in some cases. That is mega annoying if you're pulling tokens from the browser session storage to use, as it was basically invalidating them before I could use them in Hoppscotch or in any script I threw together. |
So I've just pushed a PR for a potential fix. If you want to test it out, I've pushed a Docker image for amd64 and arm64 to ghcr.io/mort666/gluetun:latest; if you notice any problems, please let me know. |
Thank you very much for your fork. I tested it on my side by comparing two lists of servers.json: the one that was generated and the one available here. There are 38 additional Proton servers in the generated list, which means it works. However, in the logs of the Gluetun image you built, there are some logs that I do not understand the meaning of:
What do all these warnings correspond to? Are they a list of old servers, or are they possibly a list of new servers for which it cannot retrieve the information due to this token issue? |
So I'm assuming it is just warning that those servers were not included in the server list, as they are currently marked as 'status 0'; I've confirmed in the normal Proton VPN client that those servers are 'down/disabled' and have no published services available. I've switched all my running gluetun containers to my test image, and so far I've not noticed any connectivity problems. Most of mine have LOG_LEVEL set to error, but a couple use the default, and I've noticed the same bit of spam in the logs on those. I did notice today that some of the reported servers have changed: some swapped from 'status 0' to being online overnight, and new ones were ignored between today's update check and yesterday's. |
I pulled your fork and ran an update; it worked without the 401 deny for me too. |
FYI, things broke again. I've put a bit of an update on my PR; more to follow after I finish seeing what is happening. What would be handy is some feedback on how people feel about requiring valid account auth details (either free or paid) to be provided in the config so the update check can query the API. |
The obvious answer is that the fewer sensitive details stored in the compose files the better, but if that is the most viable method right now, I would personally rather have it than not be able to use gluetun at all. Appreciate your effort here. |
I totally agree with you. I would rather use gluetun with a server-list refresh that needs credentials in the compose file than have it not working at all. |
Thanks for the feedback. I'm going to spend some time implementing the auth workflow over the next couple of days; once that is sorted, I should be able to integrate something into the update process and will push a test image to see how things work for everyone. One thing I've yet to check is whether the 'VPN' creds you can get from the Proton account management pages (the ones intended for OpenVPN usage) could be used in this scenario. Given they are creds you would normally include in a Gluetun config anyway, that would mitigate the need for extra sensitive info in the config if they did work. I'll give them a try; hopefully they work in this scenario. |
I'm not sure how viable it is, but I would be happy with only needing to run a manual update (i.e. |
So I've been testing out something that basically allows for two ways of working: one uses a username and password, the other uses the UID and refresh token from a login session. The initial implementation I'm testing at the moment 'saves session info' to a file, currently a simple text key/value store, which isn't ideal as it is plaintext session info (I'll probably change this if I stay with this idea). The first time it runs without any session file present, it requires the username and password, which I'm passing via the environment; during that first run it saves the session info to the file. Every run after that loads the file first, and if there is a valid session in it, that is used instead of the username and password. It doesn't save the username and password, just the UID, AuthToken and RefreshToken, all of which can be used to authenticate again the next time it is needed. Because the AuthToken and RefreshToken can change, the session file gets updated with any changed info after each update. So in theory you could run the update from the command line, passing the auth info via the environment just that once, to do an update and create the session store. I'm just testing this process out at the moment and making sure the file gets saved to a docker volume so it isn't lost between container teardowns. A rough sketch of the idea is below.
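For illustration, here is a minimal Go sketch of that load/save flow, assuming a JSON file on the persistent volume; the file path, environment variable name and struct fields are hypothetical and not taken from the actual PR.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// StoredSession holds only the reusable tokens, never the username/password.
type StoredSession struct {
	UID          string `json:"uid"`
	AccessToken  string `json:"accessToken"`
	RefreshToken string `json:"refreshToken"`
}

// loadSession returns the previously saved session, if the file exists.
func loadSession(path string) (StoredSession, error) {
	var s StoredSession
	data, err := os.ReadFile(path)
	if err != nil {
		return s, err
	}
	return s, json.Unmarshal(data, &s)
}

// saveSession overwrites the file with the latest tokens after each update,
// since the AccessToken and RefreshToken can rotate on use.
func saveSession(path string, s StoredSession) error {
	data, err := json.Marshal(s)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600) // readable by the owner only
}

func main() {
	const path = "/gluetun/proton_session.json" // hypothetical location on the docker volume

	sess, err := loadSession(path)
	if err != nil {
		// First run: no session file yet, so authenticate with the username and
		// password passed via the environment (variable name is hypothetical)
		// and fill sess from the auth response.
		fmt.Println("no saved session, would log in as", os.Getenv("PROTON_USER"))
	}

	// ... run the server-list update using sess here ...

	// Persist whatever tokens we ended up with for the next run.
	if err := saveSession(path, sess); err != nil {
		fmt.Fprintln(os.Stderr, "saving session:", err)
		os.Exit(1)
	}
}
```
|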
A little bit of an update after a few hours of testing on half a dozen instances of Gluetun: I've had a few niggles, mainly around the session file getting saved properly if 'weirdness' happens during an update (things like an update erroring before the file is saved, multiple updates hitting the same file out of order, and shared storage for the file, which is a no-no). I have all the instances set up with a docker volume for the /gluetun folder where servers.json is stored, with the session file saved in that folder to provide persistence. Because the tokens change on use a little more often than I originally accounted for, saving them needs to be handled with more care than in my initial implementation. I'm going to need to be extra careful about what happens if the rest of the updater code errors out, as a few of the instances rendered the session save file unusable when tokens were either updated with expired values or not updated at all. I was saving the session info as the last step of the fetchAPI call, once everything was good to go and the server list was returned to the next level up in the updater code, i.e. no pending errors and a successfully decoded API response. I may need to move the save closer to the last use of the tokens instead: even if the API call for the server list fails, the tokens end up changing, and this is one point where the corruption is creeping in; moving it should also help minimise any file-update race window (one way to harden the write itself is sketched below). I would also note that when working with the session file, the first run definitely works with just the env vars set for Username and Password as a 'one off', either provided temporarily via a .env file to docker run or in an interactive shell; the session info then gets saved and used until it becomes invalid. That minimises the need to include the info in any 'permanent' docker config.
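One common way to avoid the half-written-file problem is an atomic write: write the new contents to a temporary file in the same directory, then rename it over the old one. This is just my illustration of that pattern, not code from the PR, and the path is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// atomicWrite writes data to a temporary file in the same directory and then
// renames it over path, so a crash or error mid-write never leaves a
// half-written session file behind.
func atomicWrite(path string, data []byte) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), ".session-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // harmless no-op once the rename succeeds

	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Rename is atomic on the same filesystem: readers see either the old
	// file or the new one, never a partial write.
	return os.Rename(tmp.Name(), path)
}

func main() {
	if err := atomicWrite("/gluetun/proton_session.json", []byte(`{"uid":"example"}`)); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
|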
This may not be related and could just be something on Proton's end (or me being an idiot): after updating using this image, I'm only seeing a subset of the servers that Proton has for a country. Take New Zealand for example; after a successful update (run logs
These appear to be the same servers bundled in gluetun here. EDIT: My confusion lies in the fact that ProtonVPN actually only has 7 servers/EntryIPs in NZ, but each server has multiple ExitIPs, which I guess is what shows up in their apps. |
So if you look at the raw JSON from Proton, there is a property on each item called 'Status' that records the current operational status of a VPN server. It is a numerical value: as far as it is documented, 0 means the server is disabled and not available, and 1 means the server is available. When parsing the server list, Gluetun filters out and therefore doesn't save any servers marked with a status of 0 (see the sketch below). That is the default behaviour of the updater; I've not touched that code other than to verify it handles the API output. I've noticed there is almost a rolling nature to which servers are considered online or offline: I've updated one day and then again the next, and some servers will just swap in and out for a region.
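To make that concrete, here is a minimal Go sketch of that Status filter, using a simplified struct; the field set is illustrative rather than the full shape gluetun actually parses.

```go
package main

import "fmt"

// LogicalServer is a cut-down view of one item in Proton's logicals JSON.
type LogicalServer struct {
	Name   string `json:"Name"`
	Status int    `json:"Status"` // 1 = available, 0 = disabled
}

// filterOnline keeps only servers with Status 1, which is why 'status 0'
// servers never make it into servers.json.
func filterOnline(servers []LogicalServer) []LogicalServer {
	online := make([]LogicalServer, 0, len(servers))
	for _, s := range servers {
		if s.Status == 1 {
			online = append(online, s)
		}
	}
	return online
}

func main() {
	servers := []LogicalServer{
		{Name: "NZ#1", Status: 1},
		{Name: "NZ#2", Status: 0},
	}
	fmt.Println(filterOnline(servers)) // only NZ#1 survives the filter
}
```
|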
Hi @mort666 - thanks for all your efforts in this. Is your |
Is this urgent?
Yes
Host OS
Ubuntu
CPU arch
x86_64
VPN service provider
ProtonVPN
What are you using to run the container
docker-compose
What is the version of Gluetun
Running version latest built on 2025-01-22T08:30:14.628Z (commit 13532c8)
What's the problem 🤔
Hello,
I am using Gluetun and updating the servers.json list via a Docker container with the following command: command: update -enduser -providers protonvpn. However, it seems that the ProtonVPN API has been modified recently, and the method for connecting to retrieve the servers is no longer functioning (please see the logs I have provided).
Could you please look into this issue? Any guidance on how to resolve it or updates regarding the API changes would be greatly appreciated.
Additionally, I noticed that the message "Using mullvad servers" appears when I specify "protonvpn." Is this expected behavior, or does it indicate a configuration issue?
Thank you for your help!
Best regards,
Share your logs (at least 10 lines)
Share your configuration