I am working with CRDs and creating custom resources. I need to keep a lot of information about my application in the custom resource. According to the official docs, etcd works with requests up to 1.5MB. I am hitting errors like:

```
"error": "Request entity too large: limit is 3145728"
```

I believe the limit in the error is 3MB. Any thoughts on this? Is there any way around this problem?
"error": "Request entity too large: limit is 3145728"
is probably the default response from kubernetes handler for objects larger than 3MB, as you can see here at L305 of the source code:expectedMsgFor1MB := `etcdserver: request is too large`
expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max`
expectedMsgFor3MB := `Request entity too large: limit is 3145728`
expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes`
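If you want to see the limit for yourself, here is a minimal sketch using client-go (assumptions: a kubeconfig at the default path, the `default` namespace, and a made-up ConfigMap name `too-big`; any object whose request body exceeds 3145728 bytes behaves the same):

```go
package main

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and build a clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A 4 MiB payload pushes the request body past the apiserver's
	// 3145728-byte default, so the create is rejected before reaching etcd.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "too-big"},
		Data:       map[string]string{"payload": strings.Repeat("x", 4<<20)},
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Println(err) // expect: Request entity too large: limit is 3145728
}
```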
etcd does indeed have a 1.5MB default limit on request size, and the etcd documentation suggests raising it with the --max-request-bytes flag, but that would have no effect on a GKE cluster because you don't have that kind of permission on the master node.

But even if you did, it would not be ideal, because this error usually means you are storing whole objects inline instead of referencing them, which would degrade your performance.
I highly recommend that you consider these options instead (a sketch of the "reference, don't embed" idea follows the list):

- Determine whether your object includes references that are not used;
- Break up your resource into smaller objects;
- Consider a volume mount instead.
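As an illustration of referencing instead of embedding, here is a hedged sketch of what a CRD's Go types could look like (`MyAppSpec`, `ConfigRef`, and `DataURL` are hypothetical names, not an existing API):

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// MyAppSpec is a hypothetical custom resource spec that keeps the CR small
// by pointing at bulk data rather than carrying it inline.
type MyAppSpec struct {
	// ConfigRef names a ConfigMap in the same namespace that holds the
	// small (< 1 MiB) slice of configuration the controller needs.
	ConfigRef corev1.LocalObjectReference `json:"configRef,omitempty"`

	// DataURL points at the bulk data in external storage (object store,
	// database, file service), so it never passes through etcd.
	DataURL string `json:"dataURL,omitempty"`
}
```

This keeps every request to the apiserver well under the limits above, and the controller fetches the heavy data only when it needs it.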
There's also a request for a new API Resource: File (or BinaryData) that could apply to your case. It's very fresh, but it's worth keeping an eye on.
If you still need help let me know.
This happened to me when I put some large files in my Helm chart directory. Removing those files helped me resolve my issue.
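If you hit this through Helm, it is likely because Helm packages the chart directory's contents into the release it sends to the cluster, so large stray files inflate the request. A .helmignore file at the chart root keeps them out of the package; a minimal sketch (the patterns are examples, adjust them to your repo):

```
# .helmignore — matching files are excluded from the packaged chart
*.tar.gz
*.log
data/
```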