I am cross compiling my native library for Android. My .bazelrc is:
build:android --platforms=@io_bazel_rules_go//go/toolchain:android_arm64_cgo
build:android --extra_toolchains=@androidndk//:all
I build a native C++ library and then package it into a jar file using:
java_library(
    name = "mylib",
    resources = ["//:my-native-lib.so"],
)
Everything works fine and I can build my jar using bazel build --config=android //:mylib, which generates libmylib.jar containing my .so native library. So far so good.
However, I'd like to include the target architecture (in this case arm64) in the name of my jar file, for example libmylib-arm64.jar. I don't want to hardcode that name, but somehow get the CPU architecture from the platform I'm building with.
Any ideas how I can do this?
If you're cross-compiling, you don't need the CPU of the host but that of the target. One way to do this is to define the rule once for each CPU architecture you support; Bazel will then automatically enable/disable the appropriate rule for the given target architecture.
For example, in a file like multi_cpu_rules.bzl:
supported_cpus = [
    "x86_64",
    "arm",
]

def must_be_str(val):
    if type(val) != "string":
        fail("invalid type, got: " + type(val))
    return val

def format(unformatted_data, cpu):
    if type(unformatted_data) == "list":
        return [
            must_be_str(data).format(cpu = cpu)
            for data in unformatted_data
        ]
    if type(unformatted_data) == "dict":
        return {
            key: must_be_str(value).format(cpu = cpu)
            for key, value in unformatted_data.items()
        }
    return must_be_str(unformatted_data).format(cpu = cpu)

def fn_wrap_rule(function, **kwargs):
    for cpu in supported_cpus:
        values = {
            key: format(value, cpu)
            for key, value in kwargs.items()
        }
        function(
            **values
        )
The target_compatible_with attribute is available on all rules.
Then the targets can be defined in your build file as:
load(":multi_cpu_rules.bzl", "fn_wrap_rule")

fn_wrap_rule(
    java_library,
    name = "mylib-{cpu}",
    resources = ["//:my-native-lib.so"],
    target_compatible_with = [
        "@platforms//cpu:{cpu}",
    ],
)
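With the two entries in supported_cpus above, the wrapper expands the name template once per CPU. A quick sketch of the target names it would define:

```python
# Mirrors the expansion fn_wrap_rule performs on the name attribute.
supported_cpus = ["x86_64", "arm"]
names = ["mylib-{cpu}".format(cpu = cpu) for cpu in supported_cpus]
print(names)  # ['mylib-x86_64', 'mylib-arm']
```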
This will define one artifact per target architecture, and Bazel will only build the one appropriate for the currently selected platform. These targets would need to be referenced on the command line using the architecture string; however, Bazel can build all targets in a package using
bazel build //my_lib/...
Alternatively, if you only want to inspect the host platform your compiler is running on, constraints can be read using:
load("@local_config_platform//:constraints.bzl", "HOST_CONSTRAINTS")
This is a list of properties of the current platform. To pick out the CPU value it needs to be looped over; however, loops aren't allowed in BUILD files, so the loop needs to be wrapped in a function (or macro).
So add a file, e.g. called get_host_cpu.bzl, containing:
load("@local_config_platform//:constraints.bzl", "HOST_CONSTRAINTS")

def get_host_cpu():
    cpu_constraint = "@platforms//cpu:"
    for constraint in HOST_CONSTRAINTS:
        if constraint.startswith(cpu_constraint):
            return constraint[len(cpu_constraint):]
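The function simply strips the "@platforms//cpu:" prefix from the first matching constraint. Since it is plain Starlark/Python string handling, it can be checked outside Bazel against a hypothetical HOST_CONSTRAINTS list (the real one is generated by Bazel for your host):

```python
# Hypothetical value; Bazel generates the real list for your host platform.
HOST_CONSTRAINTS = ["@platforms//cpu:x86_64", "@platforms//os:linux"]

def get_host_cpu():
    # Return the part after "@platforms//cpu:" of the first CPU constraint.
    cpu_constraint = "@platforms//cpu:"
    for constraint in HOST_CONSTRAINTS:
        if constraint.startswith(cpu_constraint):
            return constraint[len(cpu_constraint):]

print(get_host_cpu())  # x86_64
```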
Then modify the rule to use this function. However, referencing the target elsewhere would then also require a call to this function, so we can additionally create an alias under the plain name. Update your build file to:
load(":get_host_cpu.bzl", "get_host_cpu")
java_library(
    name = "mylib-" + get_host_cpu(),
    resources = ["//:my-native-lib.so"],
)

alias(
    name = "mylib",
    actual = "mylib-" + get_host_cpu(),
)