Hello, I would like to share small amounts of data (< 1K) between Python processes. The data is physical PC/104 I/O data which changes rapidly and often (24x7x365). There will be a single "server" writing the data and multiple clients reading portions of it. The system this will run on uses flash memory (a CF card) rather than a hard drive, so I'm worried about wearing out the flash memory with a file-based scheme. I'd also like to use less power (processor time) as we are 100% solar powered.
Thank You
UPDATE: We slowed the maximum data update rate down to about 10 Hz, but more typically 1 Hz. Clients will only be notified when a value changes rather than at a constant update rate. We have moved to a multiple servers/multiple clients model, where each server specializes in a certain type of instrument or function. Since it turned out that most of the programming was going to be done by Java programmers, we ended up using JSON-RPC over TCP. The servers will be written in Java, but I still hope to write the main client in Python and am investigating JSON-RPC implementations.
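For reference, here is a minimal sketch of what a JSON-RPC 2.0 call over raw TCP could look like from the Python client side. It assumes the servers frame each message as a single newline-terminated JSON object; the host, port, method name, and parameters are placeholders, not part of our actual protocol:

```python
import json
import socket

def call(host, port, method, params, request_id=1):
    """Send one JSON-RPC 2.0 request over TCP and return the decoded reply."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }
    with socket.create_connection((host, port)) as sock:
        # Newline-delimited framing is an assumption; adjust to match the servers.
        sock.sendall((json.dumps(request) + "\n").encode())
        reply = sock.makefile("r").readline()
    return json.loads(reply)

# Example: ask a hypothetical instrument server for the latest reading.
# print(call("localhost", 5000, "get_value", {"channel": 3}))
```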
Processes don't share memory with other processes by default. Threads share memory with the other threads of the same process.
Shared memory can be a very efficient way of handling data in a program that uses concurrency. Python's mmap module maps a file or a block of anonymous memory into a process's address space; when several processes map the same region, they see the same bytes, which makes it an efficient way to share even large amounts of data between processes, threads, and other concurrently running tasks.
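Here is a minimal sketch of that idea, assuming a Linux system where /dev/shm is a tmpfs, so the backing "file" lives in RAM and never touches the CF card. The path, size, and record layout are made up for illustration:

```python
import mmap
import os
import struct

SHM_PATH = "/dev/shm/io_data"   # hypothetical tmpfs-backed path (RAM, no flash wear)
SIZE = 1024                     # < 1K of shared data

# Writer ("server") side: create the backing file once and map it read/write.
fd = os.open(SHM_PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Pack a sample reading (sequence number + float value) at offset 0.
buf[:12] = struct.pack("<Qf", 42, 3.14)

# Reader ("client") side: open the same file read-only and map it.
rfd = os.open(SHM_PATH, os.O_RDONLY)
rbuf = mmap.mmap(rfd, SIZE, prot=mmap.PROT_READ)
seq, value = struct.unpack("<Qf", rbuf[:12])
```

In a real setup you would also need some synchronization (e.g. a sequence counter the writer bumps after each update) so readers can detect torn or stale reads.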
An alternative to writing the data to a file in the server process might be to write directly to the client processes:
Use UNIX domain sockets (or TCP/IP sockets if the clients run on different machines) to connect each client to the server, and have the server write into those sockets. Depending on your particular processing model, choosing a client/socket may be done by the server (e.g. round-robin) or by the clients signalling that they're ready for more.
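A rough sketch of that approach with UNIX domain sockets, where the server pushes each update to a connected client as a newline-delimited JSON object; the socket path and message fields are placeholders:

```python
import json
import os
import socket

SOCK_PATH = "/tmp/io_server.sock"   # hypothetical socket path

def serve(updates):
    """Server side: accept one client (for brevity) and push updates to it."""
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(5)
    conn, _ = srv.accept()
    for update in updates:          # e.g. {"channel": 3, "value": 1.25}
        conn.sendall((json.dumps(update) + "\n").encode())
    conn.close()

def consume():
    """Client side: yield updates as they arrive."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK_PATH)
    for line in cli.makefile("r"):
        yield json.loads(line)
```

This keeps everything off the CF card entirely, and the kernel blocks idle readers on the socket, so no CPU is spent polling between updates.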