I'm not sure how to go about this. I've only seen examples of shared memory used for interprocess communication, but I was wondering if I could leverage it from within a server to take, say, periodic snapshots of certain objects and dump them in some format into shared memory which, if my program crashes, could be retrieved on restart for partial recovery. Is this feasible? If so, what can I take a look at to get started?
UPDATE: I read somewhere that shared memory on Linux (I am on Linux) is persistent, so I was thinking I might be able to save state snapshots periodically without the need for a helper process. Say, for example, a continuously updated struct which I dump to shared memory every few seconds. The reason I would opt for shared memory instead of a file is purely speed, as the state would be a lot of binary data.
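To make the idea concrete, here is a minimal sketch of the kind of thing I mean, assuming a POSIX shared memory object (shm_open/mmap). The segment name "/myserver_state" and the server_state struct are made up for illustration:

```c
/* Sketch only: periodically copy a state struct into a POSIX shared
 * memory object so it survives a crash of this process (the object
 * persists until shm_unlink() or reboot). Link with -lrt on older
 * glibc. Names here are hypothetical. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct server_state {
    long counter;
    char data[4096];
};

int main(void) {
    int fd = shm_open("/myserver_state", O_CREAT | O_RDWR, 0600);
    if (fd == -1) return 1;
    if (ftruncate(fd, sizeof(struct server_state)) == -1) return 1;

    struct server_state *snap = mmap(NULL, sizeof *snap,
                                     PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);
    if (snap == MAP_FAILED) return 1;

    struct server_state live = {0};
    for (;;) {
        live.counter++;                   /* ... real work updates 'live' ... */
        memcpy(snap, &live, sizeof live); /* snapshot every few seconds */
        sleep(5);
    }
}
```

On restart, the recovery path would shm_open the same name (without O_CREAT) and read the struct back; a sequence number or checksum would be needed to detect a snapshot that was torn mid-copy by the crash.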
Just an idea (not tried) on Unix-like systems.
Do a fork(2) and send a SIGTRAP signal to the child process (or any signal whose default action creates a core dump).
fork() makes a copy of the original process's address space, so the core dump captures the full memory state at that moment. It can then be analysed with gdb (or similar). Of course, this is for inspection rather than recovery...
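A rough sketch of that approach, assuming core dumps are enabled (e.g. ulimit -c unlimited) and that the kernel's core_pattern writes a core file; the function name is made up:

```c
/* Sketch: fork a child that immediately raises a core-dumping signal,
 * so the resulting core file is a snapshot of the parent's memory as
 * it was at fork time, while the parent keeps running. */
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void snapshot_core(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: same memory image as the parent; dump it and exit. */
        signal(SIGTRAP, SIG_DFL);   /* ensure the default action (core dump) */
        raise(SIGTRAP);
        _exit(0);                   /* not reached if the signal dumps core */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);      /* parent resumes after the child dumps */
    }
}
```

The core file can then be opened with gdb against the running binary to inspect the variables at snapshot time.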
You can create a gdbinit file and dump the variables from a script that calls gdb with the core file.
Why is shared memory needed? Wouldn't it be good enough to dump the state to disk?
I think this can be used for recovery as well. Perl's -u command-line argument does a similar thing: it parses the script file and then dumps a core file. That core file can be used by the undump program to load the core directly into memory and start perl without the parsing phase.