I have set up the Kubernetes MongoDB operator according to this guide: https://adamtheautomator.com/mongodb-kubernetes/ and it works well. However, when I try to update the MongoDB version to 6.0.4, I get the following error:
{
"error":"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR:
Location4926900: Invalid featureCompatibilityVersion document in admin.system.version:
{ _id: \"featureCompatibilityVersion\", version: \"4.4\" }.
See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.
:: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0'.
See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.).
If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at
https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures."}
I have followed this guide: https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md#upgrade-your-mongodbcommunity-resource-version-and-feature-compatibility-version
This means that my config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml file looks like this:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mdb0
spec:
  members: 2
  type: ReplicaSet
  version: "6.0.4"
  featureCompatibilityVersion: "6.0"
  security:
    ...
The rest is set according to the linked guide (in the first link above).
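For reference, the spec and status that the operator is actually reconciling can be inspected directly; this is just a generic kubectl check, assuming the namespace and resource name from the guides above:
kubectl get mongodbcommunity mdb0 -n mongodb -o yaml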
The error suggests that the featureCompatibilityVersion field is being ignored, even though I have explicitly set it to "6.0". Since the documentation states that this is a supported configuration, that shouldn't happen. My question then is: am I doing something wrong, or is this a bug?
After a couple of days' research, I managed to find a way to do this, and it is annoyingly simple...
The key to all of this lies in the documentation here. Basically, MongoDB only supports upgrading one major release at a time, so in order to go from 4.4.0 to 6.0.4 you need to do it in steps:
First, change the MongoDB version from "4.4.0" to e.g. "5.0.4", whilst setting the featureCompatibilityVersion to "5.0":
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mdb0
spec:
  version: "5.0.4"
  featureCompatibilityVersion: "5.0"
  ...
After applying this, verify that the featureCompatibilityVersion is indeed 5.0 (see "Verifying the featureCompatibilityVersion" below) and that all MongoDB pods are running "5.0.4". If the MongoDB pods aren't "5.0.4", you need to restart the service (see "Restarting everything" below).
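One way to check which image each pod is actually running is the following (just a generic kubectl one-liner, not from the guide; adjust the namespace if yours differs):
kubectl get pods -n mongodb -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'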
You can now run the second step: update the MongoDB version to "6.0.4" and the featureCompatibilityVersion to "6.0":
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mdb0
spec:
  version: "6.0.4"
  featureCompatibilityVersion: "6.0"
  ...
Apply this change and verify that the featureCompatibilityVersion is indeed 6.0 and that all MongoDB pods are running "6.0.4". Once again, if the pods aren't "6.0.4", restart everything according to the procedure below.
Verifying the featureCompatibilityVersion:
kubectl port-forward service/mdb0-svc -n mongodb 27017:27017 (according to the guide)
mongosh -u mongoadmin -p secretpassword --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' (if you're using the same credentials as the guide)
Restarting everything:
kubectl delete -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb
kubectl patch pv data-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv data-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv data-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv logs-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv logs-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv logs-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl delete deployments.apps mongodb-kubernetes-operator -n mongodb
kubectl delete crd mongodbcommunity.mongodbcommunity.mongodb.com
kubectl apply -f config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml
kubectl apply -k config/rbac/ -n mongodb
kubectl create -f config/manager/manager.yaml -n mongodb
kubectl apply -f new-user.yaml -n mongodb
kubectl apply -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb
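After re-applying everything, you can watch the pods come back up with a plain kubectl watch (not part of the guide, just a convenience):
kubectl get pods -n mongodb -w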
Quick solution - If you do not care about losing the data, just empty the dbpath (data) folder and start mongod again.
Details - Thanks @Andreas-Forslöw for the solution.
In my case the issue occurred with a local installation on a Windows 11 laptop. I installed Community version 7.0 while version 5.0 was already on the machine. They coexisted (the Server folder contained both 5.0 and 7.0, and I could start both server daemons separately). After I uninstalled 5.0, I started getting this error. I then tried uninstalling and reinstalling 7.0 with no luck.
From some research and this forum post, I concluded (without hard proof, although the fix below confirmed it) that in my case the problem was the "featureCompatibilityVersion" data stored in the dbpath created by version 5.0, which was at C:\data\db. This data path and its contents were not updated during the installation of version 7.0, so it still held a featureCompatibilityVersion of 5.0. According to the docs, version 7.0 cannot be downgraded to 5.0, and my setup effectively looked like exactly that situation. This was the real issue!
Since I had no important data, I simply emptied this path and started the server daemon with the mongod command again. This repopulated the initial data (presumably replacing the conflicting 5.0 featureCompatibilityVersion reference), and the daemon started as usual. Bravo, forum posters!
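A minimal sketch of that reset on Windows, assuming MongoDB was installed as a service with the default service name "MongoDB" and the C:\data\db dbpath mentioned above (back up the folder first if the data might matter):
net stop MongoDB
rmdir /s /q C:\data\db
mkdir C:\data\db
mongod --dbpath C:\data\db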