I am attempting to implement a DraftJS editor that highlights words in a transcription while its recorded audio is playing (kind of like karaoke).
I receive the data in this format:
```js
[
  {
    transcript: "This is the first block",
    timestamps: [0, 1, 2.5, 3.2, 4.1, 5],
  },
  {
    transcript: "This is the second block. Let's sync the audio with the words",
    timestamps: [6, 7, 8.2, 9, 10, 11.3, 12, 13, 14, 15, 16, 17.2],
  },
  ...
]
```
I then map this received data to ContentBlocks and initialize the editor's ContentState with them using ContentState.createFromBlockArray(blocks).
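For reference, this is roughly how I build the blocks. It's a sketch under my assumptions: blocksFromData is just an illustrative name, and I pad characterList to match the text length, which manually constructed ContentBlocks seem to need:

```js
import {
  CharacterMetadata,
  ContentBlock,
  ContentState,
  EditorState,
  genKey,
} from "draft-js";
import { List, Repeat } from "immutable";

// Sketch: one ContentBlock per { transcript, timestamps } item.
// `data` is the array of objects shown above.
function blocksFromData(data) {
  return data.map(
    ({ transcript }) =>
      new ContentBlock({
        key: genKey(),
        type: "unstyled",
        text: transcript,
        // characterList needs one entry per character of text
        characterList: List(
          Repeat(CharacterMetadata.create(), transcript.length)
        ),
      })
  );
}

const contentState = ContentState.createFromBlockArray(blocksFromData(data));
const editorState = EditorState.createWithContent(contentState);
```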
It seems like the "DraftJS" way of storing the timestamp metadata would be to create an Entity for each word with its respective timestamp, then scan through the currentContent as the audio plays and highlight every entity up to the current elapsed time. I'm not sure this is the right approach, though: rescanning the entire content on every playback tick doesn't seem performant for large transcriptions. Concretely, I imagine tagging each word would look something like the sketch below.
Note: the transcript needs to remain editable while maintaining this karaoke functionality.
Any help or discussion is appreciated!
I ended up doing exactly what I described in the question: store the timestamp metadata in DraftJS entities, one per word. After a few more weeks with DraftJS, this does seem to be the correct approach.
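In case it helps anyone, one way to drive the highlighting is a CompositeDecorator that matches the timestamp entities, with a span component that compares each entity's time against the player's elapsed position. This is a sketch under my assumptions: AudioTimeContext is a hypothetical context that your audio player keeps updated (e.g. from the timeupdate event).

```js
import React from "react";
import { CompositeDecorator } from "draft-js";

// Hypothetical context carrying the audio player's elapsed time in seconds.
const AudioTimeContext = React.createContext(0);

// Strategy: find every range covered by a TIMESTAMP entity.
function timestampStrategy(contentBlock, callback, contentState) {
  contentBlock.findEntityRanges((character) => {
    const entityKey = character.getEntity();
    return (
      entityKey !== null &&
      contentState.getEntity(entityKey).getType() === "TIMESTAMP"
    );
  }, callback);
}

// Highlight a word once the audio has reached its timestamp.
function TimestampSpan(props) {
  const currentTime = React.useContext(AudioTimeContext);
  const { time } = props.contentState.getEntity(props.entityKey).getData();
  return (
    <span
      style={{
        backgroundColor: time <= currentTime ? "yellow" : "transparent",
      }}
    >
      {props.children}
    </span>
  );
}

const decorator = new CompositeDecorator([
  { strategy: timestampStrategy, component: TimestampSpan },
]);

// Attach the decorator when creating the editor state:
// const editorState = EditorState.createWithContent(contentState, decorator);
```

With this setup there's no manual scan on each tick: updating the context value re-renders the decorated word spans, and because the entities are MUTABLE the transcript stays editable without losing its timestamps.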