Is there a term and design principle for the auditability/observability/traceability of system processes? For example, I have a product with a calculation engine at its core: it is passed a list of criteria, reads a list of transactions, and returns the list of eligible transactions. It is a bit of a black box, and I would like to build auditability/observability/traceability into the process so that the end user can understand how we arrived at that result: how, why, and what steps were taken to get there.
Is there a better term to describe this? Is there a design and/or software principle that documents it?
I've spoken to a few colleagues and we can't agree on a common name. I feel like it should be a pretty standard concept, but Google would suggest not.
A good fit would be explainability:
Explainability, in the context of decision making in software systems, refers to the ability to provide clear and understandable reasons behind the decisions, recommendations, and predictions made by the software.
Source: https://www.computer.org/csdl/magazine/so/2024/01/10372519/1T8PlmpJ9Ic
The term is widely used in machine learning, in the context of Explainable AI:
Explainability is a concept that is recognized as important, but a consensus definition is not yet available; one possibility is "the collection of features of the interpretable domain that have contributed, for a given example, to producing a decision (e.g., classification or regression)".
Source: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence
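For a rule-based engine like yours, explainability can be as simple as recording the outcome of every criterion alongside the eligibility verdict, so the end user sees *why* a transaction was included or excluded. Here is a minimal Python sketch; the `Transaction` fields, criterion names, and `Decision` shape are all illustrative, not from your system:

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    id: str
    amount: float
    currency: str

@dataclass
class Decision:
    transaction_id: str
    eligible: bool
    # One human-readable entry per criterion evaluated,
    # e.g. "amount >= 100: PASS" — this is the explanation trace.
    reasons: list = field(default_factory=list)

def evaluate(transactions, criteria):
    """Return a Decision per transaction, recording why each
    criterion passed or failed instead of a bare yes/no."""
    decisions = []
    for tx in transactions:
        decision = Decision(transaction_id=tx.id, eligible=True)
        for name, predicate in criteria.items():
            passed = predicate(tx)
            decision.reasons.append(f"{name}: {'PASS' if passed else 'FAIL'}")
            decision.eligible = decision.eligible and passed
        decisions.append(decision)
    return decisions

# Hypothetical criteria, expressed as named predicates so that
# the name doubles as the user-facing explanation.
criteria = {
    "amount >= 100": lambda tx: tx.amount >= 100,
    "currency is EUR": lambda tx: tx.currency == "EUR",
}
txs = [Transaction("t1", 150.0, "EUR"), Transaction("t2", 50.0, "EUR")]
decisions = evaluate(txs, criteria)
for d in decisions:
    print(d.transaction_id, d.eligible, d.reasons)
```

The key design choice is that the engine returns a `Decision` object rather than a filtered list: the eligible transactions are still recoverable (`[d for d in decisions if d.eligible]`), but the trace needed for an audit view comes for free.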
In another context, many SQL databases have an EXPLAIN command that describes how a query would be executed and optimized.
See https://www.postgresql.org/docs/current/sql-explain.html
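The same idea is easy to try from Python using the standard-library `sqlite3` module: SQLite's `EXPLAIN QUERY PLAN` returns rows describing the chosen execution strategy instead of running the query. (This is a sketch of the concept; PostgreSQL's `EXPLAIN` output is different and richer.)

```python
import sqlite3

# In-memory database with a tiny table and an index, so the
# planner has a real choice to explain.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("CREATE INDEX idx_amount ON transactions (amount)")

# EXPLAIN QUERY PLAN does not execute the query; it returns
# rows whose last column is a textual description of the plan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM transactions WHERE amount >= 100"
).fetchall()
for row in plan:
    print(row)
```

This is explainability for the query optimizer's "black box": the decision (which index or scan to use) is surfaced to the user in readable form.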