
Python multiple threads/ multiple processes for reading serial ports

I'm trying to write a Python class that uses parallel processing/threading to read two serial ports (/dev/ttyS1 and /dev/ttyS2). Both ports run at 19200 baud and are constantly active. I'm using pySerial for this.

Both read operations need to run continuously and concurrently. I'm wondering whether I should use the thread library, the threading library, or the multiprocessing library. My only worry is the global interpreter lock, which doesn't give true parallelism for heavy I/O. If the GIL doesn't affect me, I'll use the threading/thread module; however, if it does, I would need to cross-compile the Python multiprocessing support, because this is on an embedded system.

So my code would typically have: thread/process 1 reading ttyS1, performing some string operations on each line, and writing the result to a buffer; thread/process 2 doing the same for ttyS2 into another buffer; plus other functions, etc. These buffers are then used by other parts of the code. Something like the sketch below.
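Roughly what I have in mind (a rough sketch using pySerial and the threading module; the port settings and the string processing are placeholders for my real code):

import threading
import serial  # pySerial

def reader(port, buf, lock):
    """Continuously read lines from `port`, process them, and append to `buf`."""
    ser = serial.Serial(port, baudrate=19200)
    while True:
        line = ser.readline()   # blocking read
        line = line.strip()     # placeholder for the real string operations
        with lock:
            buf.append(line)

buf1, lock1 = [], threading.Lock()
buf2, lock2 = [], threading.Lock()
t1 = threading.Thread(target=reader, args=('/dev/ttyS1', buf1, lock1))
t2 = threading.Thread(target=reader, args=('/dev/ttyS2', buf2, lock2))
t1.daemon = t2.daemon = True
t1.start()
t2.start()
# ... other parts of the code consume buf1 and buf2 ...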

Also, does multiprocessing in Python require multiple cores/CPUs?

Thanks for reading!

asked Nov 24 '11 by kal

2 Answers

I'm not an expert on the subject by any means, but I keep finding that the additional subtleties threading requires are not worth the effort when I can parallelise via processes instead.

A third module that you did not mention among the alternatives is subprocess.

EDIT on request of OP: You can achieve parallel processing by creating a separate script for each serial interface. This is a quick demo; it assumes both files are in the same directory.

File com.py - the serial script - This is just a mock, but the idea here is that the script runs autonomously, and only uses stdin and stdout to communicate with the master program.

import sys

counter = 0
while True:  # The program never ends... it will be killed when the master is over.
    counter += 1
    sys.stdin.readline()                                    # wait for a command from the master
    sys.stdout.write('Serial from com1 is %d\n' % counter)  # reply on stdout
    sys.stdout.flush()                                      # flush so the master sees it immediately

File master.py - the main program

from subprocess import Popen, PIPE
from time import sleep

# Start com.py as a child process and talk to it over its stdin/stdout pipes.
p = Popen(['python', './com.py'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
print "serial communication started."  # com.py is running, but we moved on!
for i in range(3):
    p.stdin.write('<command-here>\n')                     # send a command to the child
    print "command sent."
    print "received : %s" % p.stdout.readline().strip()   # read the child's one-line reply
    sleep(1)

Finally, this is a dump of the expected output:

mac@jabbar:~/Desktop$ ./master.py 
serial communication started.
command sent.
received : Serial from com1 is 1
command sent.
received : Serial from com1 is 2
command sent.
received : Serial from com1 is 3

HTH!

answered Oct 19 '22 by mac


The GIL is released during read operations, so it shouldn't affect you much. Cross-compiling multiprocessing sounds like overkill, or at least premature optimization. Do keep the code modular so you can switch later, though.
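For example, the reading logic can live in one plain function and get wrapped in either a Thread or a Process at startup (a sketch only; the pySerial calls, the queue handling, and the USE_PROCESSES flag are my assumptions, not something the answer prescribes):

import Queue
import threading
import multiprocessing

def read_port(device, out_queue):
    """Read lines from `device` forever and push stripped lines onto `out_queue`."""
    import serial  # pySerial
    ser = serial.Serial(device, baudrate=19200)
    while True:
        out_queue.put(ser.readline().strip())

USE_PROCESSES = False  # flip this later if threading turns out to be a bottleneck

if USE_PROCESSES:
    q = multiprocessing.Queue()
    worker = multiprocessing.Process(target=read_port, args=('/dev/ttyS1', q))
else:
    q = Queue.Queue()
    worker = threading.Thread(target=read_port, args=('/dev/ttyS1', q))

worker.daemon = True
worker.start()
print "first line received: %s" % q.get()

The rest of the code only talks to the queue, so swapping the worker type later is a one-line change.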

I do believe the threading performance will depend on your OS. Your mileage will vary, especially on an embedded system.

If you have an hour to spare, there's a talk on the GIL by David Beazley (PDF slides here). For high-performance threading, you'll want to see it to get the nasty details on how threading, the GIL, and the OS can all work together to kill performance.

answered Oct 20 '22 by Petr Viktorin