-
Example Code

```python
# From the official documentation
# Run with: uvicorn main:app --reload
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello World"}
```

Description

Use the minimal example provided in the documentation and call the API 1M times. You will see that the memory usage piles up and never goes down; the GC can't free any objects. It's very noticeable once you have a real use case, like a file upload, that DoS'es your service. Here are some examples from a real service in k8s via Lens metrics:

Operating System: Linux, macOS
Operating System Details: No response
FastAPI Version: 0.74.1
Python Version: 3.10.1
Additional Context: No response
-
Are you using …?
-
What uvicorn version are you using? Do you have a health check that sends a TCP ping? If the answers are "not the latest" and "yes", then bump uvicorn to the latest version.
-
Is the application running in a Docker container? Inside a container, Python sees the memory and CPUs of the host, not the container's resource limits, which can prevent the GC from actually running. Similar problems occurred in my application before; I solved them with reference to this issue: #596 (comment)
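A quick way to see the mismatch is to compare what Python reports with the cgroup limit. This is a sketch of my own, not from issue #596, and the cgroup v1 path is an assumption about the host setup:

```python
# Compare what Python sees with the container's actual memory limit.
import os

print("CPUs visible to Python:", os.cpu_count())  # typically the host's count

# cgroup v1 path; on cgroup v2 hosts the file is /sys/fs/cgroup/memory.max
try:
    with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
        print("Container memory limit (bytes):", f.read().strip())
except FileNotFoundError:
    print("No cgroup v1 memory limit file found")
```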
-
I have solved this issue with the following settings: …
-
I have no such problem on Windows x64.
-
Running on Docker in … I didn't have memory leak issues with FastAPI 0.65.2 and uvicorn 0.14.0 in my project before. I then did a binary search across FastAPI versions (using uvicorn 0.17.6) to see where the memory leaks first appear.
-
0.69.0 was the version that introduced AnyIO in FastAPI. Release notes: https://fastapi.tiangolo.com/release-notes/#0690
-
I tested uvicorn 0.17.6 with both FastAPI 0.68.2 and 0.75.0. On 0.68.2, memory usage settled at 358 MB after 1M requests; on 0.75.0, it was 359 MB. Is there something surprising about these results?
-
I can't say exactly; my container is limited to 512 MiB, and my app's baseline consumption was already ~220 MiB, so an additional 350 MiB that then settles would be well within what I observe. It's just that prior to 0.69.0 I don't see any sharp memory increase at all.
-
Can anybody else reproduce these results?
-
How do we go about this? The issue is marked as a question, but the memory leak is certainly a problem for me when updating FastAPI. Should I open a new ticket as a "problem"?
-
To start with... people need to reply to @agronholm's question.
-
I definitely see this same memory behaviour in some of my more complex services, i.e. memory utilization just keeps climbing and seemingly nothing is ever released, but I haven't been able to reduce it to a simple service that displays the same behaviour.
-
Not sure if directly related, but I detected a leak when saving objects to the request state. The following code will retain the large array in memory even after the request was handled:
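(A hypothetical reconstruction of the kind of handler being described; the array size and attribute name are made up.)

```python
# Store a large array on request.state; the claim is that it stays
# in memory even after the response has been sent.
import numpy as np
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/leak")
async def leak(request: Request):
    request.state.large_array = np.zeros((10_000, 1_000))  # ~80 MB
    return {"ok": True}
```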
A working workaround is to null the state attribute once the request is handled. The complete example with a test script can be found here: … I'll note in addition that I tried running this code with older versions of FastAPI and got the same results (even going as far back as 0.65.2, as was suggested in an earlier note); hence, I'm not sure it's directly related.
-
In my case, I'm attaching a Kafka producer to the request.app variable (i.e. …). My question then is: how do I create a Kafka producer on startup that's accessible to endpoints without causing this leak? I want to avoid creating a new Kafka producer on every single request, because that is really inefficient, as startup of a Kafka producer takes some time.
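(Not from the thread, but one common pattern is to create the producer once at startup, store it on app.state, and reuse it from endpoints. This sketch assumes aiokafka; the broker address and topic are placeholders.)

```python
# Sketch: a single shared Kafka producer created at startup.
from aiokafka import AIOKafkaProducer
from fastapi import FastAPI, Request

app = FastAPI()

@app.on_event("startup")
async def start_producer():
    app.state.producer = AIOKafkaProducer(bootstrap_servers="localhost:9092")
    await app.state.producer.start()

@app.on_event("shutdown")
async def stop_producer():
    await app.state.producer.stop()

@app.post("/events")
async def publish(request: Request):
    # Reuse the shared producer instead of building one per request.
    await request.app.state.producer.send_and_wait("events", b"payload")
    return {"status": "sent"}
```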
-
This is the Dockerfile that can reproduce the leak:

```dockerfile
FROM continuumio/anaconda3:2019.07
SHELL ["/bin/bash", "--login", "-c"]
RUN apt update && \
    apt install -y procps \
                   vim
RUN pip install fastapi==0.81.0 \
                uvicorn==0.18.3
WORKDIR /home/root/leak
COPY client.py client.py
COPY server.py server.py
```

Run the following commands, and the memory goes to 1 GB in about 3 minutes:

```bash
docker build -t leak-debug:latest -f Dockerfile .
docker run -it leak-debug:latest bash
# in container
nohup python server.py &
nohup python client.py &
top
```
-
@agronholm Thanks. Here is server.py:

```python
# server.py
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.base import (
    BaseHTTPMiddleware,
    RequestResponseEndpoint,
)
from starlette.requests import Request
from starlette.responses import PlainTextResponse
from starlette.routing import Route


class TestMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, req: Request, call_next: RequestResponseEndpoint):
        return await call_next(req)


async def ping(request):
    return PlainTextResponse("pong")


app = Starlette(
    routes=[Route("/_ping", endpoint=ping)],
    middleware=[Middleware(TestMiddleware)] * 3,
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="127.0.0.1", port=14000)
```
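(client.py isn't shown in this excerpt; a minimal stand-in that would exercise the endpoint, assuming the requests library that the later Dockerfile installs:)

```python
# client.py -- hypothetical stand-in: hammer /_ping in a loop to drive
# up the server's memory usage.
import requests

while True:
    requests.get("http://127.0.0.1:14000/_ping")
```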
-
That Dockerfile won't build for me: … I tried using the official …
-
@agronholm

```dockerfile
FROM python:3.7.12
RUN pip install fastapi==0.81.0 \
                uvicorn==0.18.3 \
                requests
WORKDIR /home/root/leak
COPY client.py client.py
COPY server.py server.py
```
-
I can reproduce it on Python 3.7.13, but it's not reproducible on 3.8+. Notes: …

I'll not spend more time on this issue. My recommendation is to bump your Python version. In any case, this issue doesn't belong to FastAPI.
-
@Kludex …
-
Hi, is there any workaround or solution for avoiding this? It's happening on the latest versions of the libs.
-
Can you prove it with a reproducible code sample?
-
Also, what version of Python are you running?
-
Try this: …
-
I may be the only one who made this obvious mistake, but I was seeing a gradual and constant memory leak when I built and ran my FastAPI/uvicorn app in Docker. In my case it was because I forgot to disable the reload option for the production build. I'm not sure if watchfiles has a memory leak, but setting reload off made the constant growth disappear. This was running in Docker (Ubuntu 22) with: …
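(A hypothetical production entry point illustrating the fix; the module path and port are placeholders:)

```python
# Keep reload off in production so no watchfiles watcher runs alongside the app.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=False)
```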
-
Hi, I got some stats on Docker with simple test code and found that Python 3.11 + uvicorn didn't have a memory leak. (Sample code: https://github.com/kato1628/fastapi-memory-leak)

Python 3.11 + FastAPI 0.95.2 + uvicorn 0.22.0: …
-
Sometimes it's pydantic's issue; pydantic has had some memory leak issues: https://github.com/pydantic/pydantic/issues?q=memory+leak. Upgrading pydantic to the latest version solved my problem. You can use tracemalloc to find out what's happening.
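(A minimal tracemalloc sketch of my own, not from the comment: snapshot before and after a burst of requests and print the allocations that grew the most.)

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# ... drive traffic against the app here ...

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:10]:
    print(stat)
```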