I'm using bazel on a computer with 4 GB RAM (to compile the tensorflow project). However, Bazel does not take into account the amount of memory I have, and spawns too many jobs, causing my machine to swap and leading to a longer build time.
I already tried setting the ram_utilization_factor flag through the following lines in my ~/.bazelrc:
build --ram_utilization_factor 30
test --ram_utilization_factor 30
but that did not help. How is this factor meant to be understood anyway? Should I just try other values at random?
Some other flags that might help:

- --host_jvm_args can be used to set how much memory the JVM should use, by setting -Xms and/or -Xmx, e.g. bazel --host_jvm_args=-Xmx4g --host_jvm_args=-Xms512m build //foo:bar (docs).
- --local_resources, in conjunction with the --ram_utilization_factor flag (docs).
- --jobs=10 (or some other low number; it defaults to 200), e.g. bazel build --jobs=2 //foo:bar (docs).

Note that --host_jvm_args is a startup option, so it goes before the command (build), while --jobs is a "normal" build option, so it goes after the command.
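Putting these flags together, a low-memory setup can also live in ~/.bazelrc so you don't have to retype it; startup options take a "startup" prefix there. The specific values below are illustrative assumptions for a small machine, not recommendations:

```
# ~/.bazelrc -- sketch for a RAM-constrained build (values are assumptions; tune for your machine)
startup --host_jvm_args=-Xmx2g    # cap the Bazel server JVM heap
build --jobs=2                    # run at most 2 build actions at once
build --ram_utilization_factor 30
test --jobs=2                     # apply the same limit to test runs
```

With this in place, a plain bazel build //foo:bar picks up the limits automatically.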
For me, the --jobs argument from @kristina's answer worked:
bazel build --jobs=1 tensorflow:libtensorflow_all.so
Note: --jobs=1 must follow, not precede, build, otherwise bazel will not recognize it. If you were to type bazel --jobs=1 build tensorflow:libtensorflow_all.so, you would get this error message:
Unknown Bazel startup option: '--jobs=1'.
Just wanted to second @sashoalm's comment that the --jobs=1 flag was what finally made bazel build work.
For reference, I'm running bazel on Lubuntu 17.04 as a VirtualBox guest with about 1.5 GB RAM and two cores of an Intel i3 (the host is a Thinkpad T460). I was following the O'Reilly tutorial on TensorFlow (https://www.oreilly.com/learning/dive-into-tensorflow-with-linux), and ran into trouble at the following step:
$ bazel build tensorflow/examples/label_image:label_image
Changing this to bazel build --jobs=1 tensorflow/... did the trick.