I've been reading through the docs trying to figure out how to commit a transaction in QLDB. Committing requires a CommitDigest, which the docs describe as:
Specifies the commit digest for the transaction to commit. For every active transaction, the commit digest must be passed. QLDB validates CommitDigest and rejects the commit with an error if the digest computed on the client does not match the digest computed by QLDB.
So CommitDigest must be computed, but I'm not quite sure what's required for its computation given this example:
// ** Start Session **
const startSessionResult = await qldbSession.sendCommand({
    StartSession: {
      LedgerName: ledgerName
    }
  }).promise(),
  sessionToken = startSessionResult.StartSession!.SessionToken!;

// ** Start Transaction **
const startTransactionResult = await qldbSession.sendCommand({
    StartTransaction: {},
    SessionToken: sessionToken
  }).promise(),
  transactionId = startTransactionResult.StartTransaction!.TransactionId!;

// ** Insert Document **
const executeStatementResult = await qldbSession.sendCommand({
    ExecuteStatement: {
      TransactionId: transactionId,
      Statement: `INSERT INTO sometable { 'id': 'abc123', 'userId': '123abc' }`
    },
    SessionToken: sessionToken
  }).promise(),
  documentId = getDocumentIdFromExecuteStateResult(executeStatementResult);

// ** Get Ledger Digest **
const getDigestResult = await qldb.getDigest({
    Name: ledgerName
  }).promise(),
  ledgerDigest = getDigestResult.Digest;

// ** Commit Transaction **
// ** **The API call in question** **
const commitTransactionResult = await qldbSession.sendCommand({
  CommitTransaction: {
    TransactionId: transactionId,
    CommitDigest: `${commitDigest}` // <-- How to compute?
  },
  SessionToken: sessionToken
}).promise();
// *******************************

// ** End Session **
const endSession = await qldbSession.sendCommand({
  EndSession: {},
  SessionToken: sessionToken
}).promise();
What do I need to hash for CommitDigest in the CommitTransaction API call?
With QLDB, the history of changes to your data is immutable—it can't be altered, updated, or deleted. And using cryptography, you can easily verify that there have been no unintended changes to your application's data. QLDB uses an immutable transactional log, known as a journal.
QLDB also executes transactions considerably faster than traditional blockchain frameworks, roughly 2-3x. A traditional blockchain solution needs its network of nodes to validate each transaction; Amazon QLDB has no such requirement.
In Amazon QLDB, the journal is the core of the database. Structurally similar to a transaction log, the journal is an immutable, append-only data structure that stores your application data along with the associated metadata. All write transactions, including updates and deletes, are committed to the journal first.
Amazon QLDB lets you access and manipulate your data using PartiQL, an open-standard query language that provides SQL-compatible access to QLDB's document-oriented data model, including semi-structured and nested data, while remaining independent of any particular data source.
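To illustrate what "nested data" looks like in PartiQL, here are two hypothetical statements (the table and field names are made up for the example):

```
-- Documents can contain nested structures and lists:
INSERT INTO Vehicle
{ 'VIN'    : '1N4AL11D75C109151',
  'Owners' : [ { 'Name' : 'Alice' }, { 'Name' : 'Bob' } ]
}

-- Nested collections can be unnested directly in the FROM clause:
SELECT o.Name
FROM Vehicle AS v, v.Owners AS o
WHERE v.VIN = '1N4AL11D75C109151'
```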
Update: the Node.js driver is now available. Take a look at https://github.com/awslabs/amazon-qldb-driver-nodejs/.
At the time of writing, the QLDB Node.js driver is still in development. Creating one yourself would be fairly difficult, so I would caution against it. That said, I can explain both the purpose and the algorithm behind CommitDigest.
The purpose is fairly straightforward: to ensure that a transaction is committed only if the server has processed the exact set of statements sent by the client (all of them, in order, with no duplicates). HTTP is request-response, so requests may be dropped, processed out of order, or duplicated. The QLDB drivers manage the communication with QLDB correctly, but having the commit digest in the protocol makes it impossible for an implementation to incorrectly retry requests and still commit the transaction. For example, consider incrementing a bank balance twice because an HTTP message was retried even though the first request succeeded.
The algorithm is also pretty straightforward: a hash value is seeded with the transaction id and then updated with the QLDB 'dot' operator. Each update 'dots' in the statement hash (the SHA-256 of the PartiQL string) as well as the IonHash of all of the bind values. The dot operator is the way QLDB merges hash values (it is the same operator used in the verification APIs): it is defined as the hash of the concatenation of the two hashes, ordered by the signed, little-endian, byte-wise comparison between them. The client and server run this algorithm in lock-step, and the server will only process the commit command if the value the client passes matches what the server computed. In this way, the server will never commit a transaction that isn't exactly what the client requested.
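The dot operator and the digest accumulation described above can be sketched in TypeScript. This is a simplified illustration, not the driver's actual implementation: it assumes statements with no bind values and seeds the digest with a plain SHA-256 of the transaction id string, whereas the real drivers go through the Ion hash machinery. The function names (`dot`, `computeCommitDigest`) are my own:

```typescript
import { createHash } from "crypto";

function sha256(data: Buffer): Buffer {
  return createHash("sha256").update(data).digest();
}

// Signed, little-endian byte-wise comparison: compare from the
// least-significant (last) byte, treating each byte as signed.
function compareHashes(h1: Buffer, h2: Buffer): number {
  for (let i = h1.length - 1; i >= 0; i--) {
    const diff = h1.readInt8(i) - h2.readInt8(i);
    if (diff !== 0) return diff;
  }
  return 0;
}

// The QLDB 'dot' operator: hash of the concatenation of the two
// hashes, ordered by the comparison above.
function dot(h1: Buffer, h2: Buffer): Buffer {
  if (h1.length === 0) return h2;
  if (h2.length === 0) return h1;
  const concatenated =
    compareHashes(h1, h2) < 0
      ? Buffer.concat([h1, h2])
      : Buffer.concat([h2, h1]);
  return sha256(concatenated);
}

// Simplified commit-digest accumulation: seed with the transaction id,
// then 'dot' in each statement hash in order. NOTE: assumes no bind
// values; real drivers fold in the IonHash of each bound parameter too.
function computeCommitDigest(transactionId: string, statements: string[]): Buffer {
  let digest = sha256(Buffer.from(transactionId, "utf8"));
  for (const statement of statements) {
    digest = dot(digest, sha256(Buffer.from(statement, "utf8")));
  }
  return digest;
}
```

Note that the byte-wise ordering step makes `dot` independent of argument order (`dot(a, b)` equals `dot(b, a)`), which is one reason the same operator can be used to merge hashes in the verification APIs.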