When you use the Python Requests library, chances are you will run into the error “Max retries exceeded with URL”. How can you get rid of it? This guide introduces three methods for your reference.
How to Fix The Error “Max Retries Exceeded With URL”?
Method 1. Check The URL Again
Chances are the requested URL is incorrect: it may be malformed or point to a non-existent endpoint. This is a common mistake among Python beginners, but experienced programmers run into it too, especially when URLs are parsed from web pages that contain relative or scheme-less links.
Hence, one way to debug this error is to double-check your URL: print it and inspect it before making the actual connection.
# ... soup is a BeautifulSoup object parsed from the page
url = soup.select_one("#linkout")["href"]
print(url)  # prints "/api" - a relative URL that requests cannot fetch on its own
r = requests.get(url)  # fails because "/api" has no scheme or host
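When a scraped URL turns out to be relative, you can resolve it against the URL of the page it came from before making the request. A minimal sketch using the standard library’s urljoin (the URLs here are illustrative, not from the original example):

```python
from urllib.parse import urljoin

# URL of the page the link was scraped from (illustrative)
base_url = "https://example.com/articles/page1"
scraped_href = "/api"  # relative href extracted from the page

# urljoin resolves relative (and scheme-less) hrefs against the base URL
full_url = urljoin(base_url, scraped_href)
print(full_url)  # https://example.com/api
```

Now full_url is an absolute URL that requests can actually connect to.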
Method 2. Increase Your Request TimeOut
Another way to solve the “Max Retries” error – especially if the server is busy handling a large number of connections – is simply to increase the time the requests library waits for the server’s response.
In short, you wait a bit longer for the response, which increases the chances that the request finishes successfully. This approach also helps when the server is geographically far from your location.
So how can we increase the request timeout? Pass a time value (measured in seconds) as the “timeout” parameter of the “post” or “get” method:
r = requests.get(url, timeout=4)
It’s also possible to pass a two-element tuple as timeout. The first element is the connect timeout (how long to wait while establishing a connection to the server). The second element is the read timeout (how long to wait for the server to send a response once the connection is established).
Let’s say the request establishes a connection within 3 seconds and receives the data within 6 seconds after the connection is established. In that case, the response comes back as usual. But if either limit is exceeded, requests raises a Timeout exception:
requests.get('https://api.github.com', timeout=(3, 6))
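Rather than letting that exception crash your program, you can catch it and decide what to do next. A minimal sketch (the fallback message is just an illustration):

```python
import requests

try:
    r = requests.get("https://api.github.com", timeout=(3, 6))
    print(r.status_code)
except requests.exceptions.Timeout:
    # raised when either the connect or the read timeout is exceeded
    print("The request timed out - consider retrying or raising the timeout")
```

requests.exceptions.Timeout covers both ConnectTimeout and ReadTimeout, so one handler catches both cases.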
Method 3. Check If The Internet Connection is Unstable or The Server is Overloaded
The root of the problem might also lie in your internet connection or the server you are trying to reach. Unstable Wi-Fi can cause packet loss along the network path, making connections fail. In other cases, the server receives so many requests that it cannot process any more, so yours never gets a response.
To work around this, try increasing the number of retry attempts and disabling keep-alive connections. The issue will often go away after that.
Each request will take longer as a result, but that’s the price you pay. Alternatively, simply find a more reliable internet connection somewhere else.
import requests

# raise the connection retry count used by adapters created from now on
requests.adapters.DEFAULT_RETRIES = 6

s = requests.Session()
# disable keep-alive so each request opens a fresh connection
s.headers["Connection"] = "close"
s.get(url)
Conclusion
This article has shown you three ways to solve the error “Max Retries Exceeded With URL”. Apply them to your program to sidestep this issue in the future. For solutions to other URL errors in Python (such as HTTP Error 404), keep browsing our website.