Following step 6 of Adrian's guide and some others, I managed to stream 320x240 frames at 10 fps with 0.1 s latency from my Raspberry Pi to my laptop. The problem is that when I test this system in my lab (which is equipped with an antique router), it can only stream 1-2 fps with 1-1.5 s latency, which is totally unacceptable for what I intend to do with those frames.
Right now, my method is simple and straightforward: the server on the Raspberry Pi captures a frame and stores it as a 320x240x3 matrix as in the guide mentioned above, then pickles that matrix and keeps pumping it over a TCP socket. The client on the laptop keeps receiving these frames, does some processing on them, and finally shows the result with imshow. My code is rather long for a post (around 200 lines), so I would rather avoid showing it if I can.
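For reference, a minimal sketch of this kind of pickle-over-TCP setup (hypothetical helper names; the actual ~200-line code is not shown): each pickled frame is usually preceded by a length prefix so the client knows exactly how many bytes belong to the next frame before unpickling.

```python
import pickle
import socket
import struct

def send_frame(conn, frame):
    # Prefix each pickled frame with its 4-byte big-endian length
    payload = pickle.dumps(frame)
    conn.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exact(conn, n):
    # recv() can return fewer bytes than asked for, so loop until n arrive
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed mid-frame')
        buf += chunk
    return buf

def recv_frame(conn):
    # Read the length header first, then exactly that many payload bytes
    (length,) = struct.unpack('>I', recv_exact(conn, 4))
    return pickle.loads(recv_exact(conn, length))
```

Without some framing like this, TCP's stream semantics make it easy to hand pickle a partial frame.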
Is there any way to reduce the size of each frame's data (the pickled 320x240x3 matrix is about 230 kB), or is there a better way to transmit that data?
EDIT:
Okay guys, the exact length of the pickled array is 230563 bytes, and the payload is at least 230400 bytes, so the overhead is no more than 0.07% of the total packet size. I think this narrows the problem down to wireless connection quality and the method of encoding the data to bytes (pickling seems to be slow). The wireless problem can be solved by creating an ad-hoc network (sounds interesting, but I have not tried this yet) or simply buying a better router, and the encoding problem can be solved with Aaron's solution. Hope that this will help future readers :)
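As a quick sanity check (a minimal sketch, not from the original post), the overhead arithmetic can be verified directly by pickling a frame-sized array and comparing against the raw pixel count:

```python
import pickle
import numpy as np

# A dummy 320x240x3 frame of unsigned bytes, like the streamed frames
arr = np.zeros((320, 240, 3), dtype='B')

payload = len(pickle.dumps(arr))
raw = arr.nbytes  # 320 * 240 * 3 = 230400 bytes of actual pixel data

print(payload, raw, (payload - raw) / payload)
```

The pickle header adds only a couple hundred bytes on top of the 230400-byte payload, confirming that protocol overhead is not the bottleneck.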
tl;dr: struct is actually slow. Instead of pickle, use np.ndarray.tobytes() combined with np.frombuffer() to eliminate the overhead.
I'm not well versed in opencv, which is probably the best answer, but a drop-in approach to speeding up transfer could be to use struct to pack and unpack the data to be sent over the network instead of pickle.
Here's an example of sending a numpy array of known dimensions over a socket using struct:
import numpy as np
import socket
import struct
#----- server ------
conn = socket.socket()
#connect socket somewhere
arr = np.random.randint(0, 256, (320, 240, 3), dtype="B") # unsigned bytes "B": camera likely returns 0-255 pixel values
conn.sendall(struct.pack('230400B', *arr.flat)) #230400 unsigned bytes; sockets have no write(), use sendall()
#----- client ------
conn = socket.socket()
#connect socket somewhere
data = b''
while len(data) < 230400: #recv() may return fewer bytes than requested, so loop until a full frame arrives
    data += conn.recv(230400 - len(data))
arr = np.array(struct.unpack('230400B', data), dtype='B').reshape((320, 240, 3))
EDIT
A little digging shows numpy has a tobytes function that exposes a memory view of the data as a bytes object. This basically does the work of struct without needing every element unpacked as a separate argument in the encoding call. This prompted me to also see if we could do the unpacking too, and as long as you're okay with flying by the seat of your pants a little bit (interruptions or errors would not be caught gracefully), we can pack and unpack the data with almost zero overhead, making the only limiting factor your network.
testing script:
from time import time
import pickle
import struct
import numpy as np

arr = np.random.randint(0, 256, (320, 240, 3), dtype="B") # unsigned bytes "B": camera likely returns 0-255 pixel values

t = time()
for _ in range(100):
    arr2 = pickle.loads(pickle.dumps(arr))
print(f'pickle pack, pickle unpack: {time()-t} sec')

t = time()
for _ in range(100):
    arr2 = np.array(struct.unpack('230400B', struct.pack('230400B', *arr.flat)), dtype='B').reshape((320, 240, 3))
print(f'struct pack, struct unpack: {time()-t} sec')

t = time()
for _ in range(100):
    arr2 = np.array(struct.unpack('230400B', arr.tobytes()), dtype='B').reshape((320, 240, 3))
print(f'numpy pack, struct unpack: {time()-t} sec')

t = time()
for _ in range(100):
    arr2 = np.frombuffer(arr.tobytes(), dtype="B").reshape((320, 240, 3))
print(f'numpy pack, numpy unpack: {time()-t} sec')
prints:
pickle pack, pickle unpack: 0.005013704299926758 sec
struct pack, struct unpack: 3.558577299118042 sec
numpy pack, struct unpack: 1.2988512516021729 sec
numpy pack, numpy unpack: 0.0010025501251220703 sec
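Over the wire, the fast path from the timings above might look like the following (a sketch with an assumed fixed frame shape, not the poster's actual code). Since every frame has the same known size, no length prefix is needed; the receiver just reads exactly 230400 bytes per frame, looping because recv_into() can deliver partial reads:

```python
import socket
import numpy as np

FRAME_SHAPE = (320, 240, 3)    # assumed fixed frame shape
FRAME_BYTES = 320 * 240 * 3    # 230400 bytes per frame

def send_array(conn, arr):
    # tobytes() hands the raw pixel buffer straight to the socket
    conn.sendall(arr.tobytes())

def recv_array(conn):
    # recv_into() may deliver fewer bytes than requested, so loop until full
    buf = bytearray(FRAME_BYTES)
    view = memoryview(buf)
    got = 0
    while got < FRAME_BYTES:
        n = conn.recv_into(view[got:], FRAME_BYTES - got)
        if n == 0:
            raise ConnectionError('socket closed mid-frame')
        got += n
    # frombuffer() reinterprets the bytes as an array without copying
    return np.frombuffer(buf, dtype='B').reshape(FRAME_SHAPE)
```

recv_into() with a preallocated bytearray also avoids the repeated allocations that `data += conn.recv(...)` incurs per chunk.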