I have a FastAPI application that I deployed to GCP Cloud Run. It had been working fine until yesterday, and I genuinely don't know what went wrong. The issue seems to start at this line, where I read in a .pkl file:
model = pickle.load(open(os.path.join('models', 'appartementen.pkl'), 'rb'))
The traceback:
File "pandas/_libs/internals.pyx", line 572, in pandas._libs.internals.BlockManager.__cinit__: TypeError: __cinit__() takes at least 2 positional arguments (0 given) at <module> (/app/src/api/util.py:25)
at <module> (/app/src/api/main.py:8) at
_call_with_frames_removed (<frozen importlib._bootstrap>:219) at exec_module (<frozen importlib._bootstrap_external>:728)
at _load_unlocked (<frozen importlib._bootstrap>:677)
at _find_and_load_unlocked (<frozen importlib._bootstrap>:967)
at _find_and_load (<frozen importlib._bootstrap>:983)
at _gcd_import (<frozen importlib._bootstrap>:1006)
at import_module (/usr/local/lib/python3.7/importlib/__init__.py:127)
at import_app (/usr/local/lib/python3.7/site-packages/gunicorn/util.py:358) at load_wsgiapp (/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py:39)
at load (/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py:49)
at wsgi (/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py:67)
at load_wsgi (/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py:144)
at init_process (/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py:119)
at spawn_worker (/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py:583)
Note that when I run this application locally, everything works fine.
My Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
COPY . ./
COPY src ./src/
COPY models ./models/
RUN pip install -r requirements.txt
COPY setup.py ./
CMD exec gunicorn src.api.main:app
How I deploy to Cloud Run:
gcloud builds submit --tag gcr.io/project-id/api --timeout=3600
gcloud run deploy api --image gcr.io/project-id/api --platform managed --project=project-id --region=europe-west4
requirements.txt:
fastapi==0.63.0
google-cloud-bigquery[bqstorage,pandas]==1.24.0
sentry_sdk==1.0.0
xgboost==1.3.3
scikit-learn==0.23.1
shap==0.39.0
matplotlib==3.4.1
I tried using the same version of scikit-learn according to this suggestion, but the issue remains.
Some background on why this kind of error appears: pickling converts a Python object into a byte stream so it can be stored in a file or database and later recreated as the original object. A pickle is Python-specific (it cannot be read from other languages) and is sensitive to the versions of Python and of the libraries that produced it, so pickling and unpickling only work reliably when the same (or compatible) versions are used on both sides; mismatches typically surface at load time as an AttributeError or TypeError. A related pitfall is that objects created inside a local function scope (for example a Chrome driver instantiated in a custom function of a Flask app running multiple workers in production) cannot be pickled at all and raise "can't pickle local object" errors.
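A minimal sketch of that round trip (the file name and dictionary here are made up for illustration; a pandas or scikit-learn object pickled this way is exactly what breaks when loaded under a different library version, as in the traceback above):

import pickle

# Serialise an object to a byte stream on disk, then recreate it.
data = {"price": 250_000, "rooms": 3}

with open("example.pkl", "wb") as f:
    pickle.dump(data, f)

with open("example.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == data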
This was a bug in pandas 1.3.0, and it is fixed in pandas 1.3.1. As a workaround, replace pickle.load with pandas.read_pickle.
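A minimal sketch of that workaround, reusing the path from the question:

import os
import pandas as pd

# pandas.read_pickle falls back to pandas' pickle-compatibility loader when a
# plain unpickle fails, so it can often read pickles written by older pandas.
model = pd.read_pickle(os.path.join('models', 'appartementen.pkl'))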
I just ran into this same issue when building a new container today. Unsure of the exact cause at the moment (likely an incompatibility between the pickled object's pandas version and the container's pandas version), but reverting the pandas version worked for me. The pickle was built with 1.2.5, and the container installed 1.3.0. So:
pip uninstall pandas
pip install pandas==1.2.5
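You would also want to pin that version in requirements.txt (pandas==1.2.5) so the next image build doesn't silently pick up a newer release again. As an optional, purely illustrative guard (the expected version string is an assumption based on this answer), the app could fail fast on a mismatch at startup:

import pandas as pd

# Hypothetical check: abort at import time if the installed pandas does not
# match the version the model was pickled with.
EXPECTED_PANDAS = "1.2.5"
if pd.__version__ != EXPECTED_PANDAS:
    raise RuntimeError(
        f"pandas {pd.__version__} installed, but the model expects {EXPECTED_PANDAS}"
    )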