I would like to make a big display by putting monitors side by side.
Any layout (3x4, etc), but let's stick with 2x2 for now.
Presumably I have to open the video file, take each frame, divide it into 4 parts, and write each part into a new video file (with a suitable header).
Are there any FOSS components or libraries which can help with this, or do I have to code it all myself?
Oh, btw, I would also like to do the same with still images.
Update: I might need many monitors. I had been thinking of a Windows-based controller communicating over TCP/IP with a bunch of embedded devices, one per display. I thought that wasn't relevant to the question, but it might save people from looking for alternative solutions that wouldn't fit.
Update: thanks for all of the comments and questions. I might need to drive up to 20x20 monitors, or maybe even more (think of a "video wall" made from 21" TFTs).
If one single magic graphic card can handle this, then that is obviously the way to go.
Otherwise, I will have a "controller" PC which lets the user select video files, slices them appropriately, and sends each section to one MCU which controls a single display. The MCUs will store their slice of each video stream, and later the controller will send a short command over TCP/IP telling each one to start playing its slice of video #X. That ought to keep them in sync (I assumed I would have to do that, which is why the original question didn't explain it and just asked how to slice).
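For what it's worth, a sketch of that start command, assuming one short plain-text message per MCU over TCP (the addresses, port 9000, and the PLAY verb are all made up for illustration; printed as a dry run):

```shell
#!/bin/sh
# Fan a "PLAY <video-id>" command out to each display controller.
# Addresses, port, and verb are illustrative placeholders.
VIDEO_ID=3
for ip in 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14; do
  # A real send might be: printf 'PLAY %s\n' "$VIDEO_ID" | nc "$ip" 9000
  echo "PLAY $VIDEO_ID -> $ip"
done
```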
Use mencoder with the -vf option and crop=w:h:x:y as a filter (width:height:x-offset:y-offset). By running it once per tile (a grid of c columns by r rows needs c*r crops) you can generate the necessary number of videos, even from a batch file.
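As a batch-file sketch for the 2x2 case, assuming a 1920x1080 source (input.avi, the output names, and the lavc codec choice are placeholders; "echo" makes it a dry run):

```shell
#!/bin/sh
# Slice a 1920x1080 source into a 2x2 grid of 960x540 tiles with mencoder.
W=1920; H=1080; COLS=2; ROWS=2
TW=$((W / COLS)); TH=$((H / ROWS))
for r in 0 1; do
  for c in 0 1; do
    X=$((c * TW)); Y=$((r * TH))
    # Drop the leading "echo" to actually run mencoder.
    echo mencoder input.avi -vf crop=$TW:$TH:$X:$Y \
         -ovc lavc -oac copy -o tile_${r}_${c}.avi
  done
done
```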
For still images the analogous solution is ImageMagick's convert with the -crop option.
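For stills the whole grid can come from a single call, since -crop also accepts an NxM@ grid specifier (filenames below are placeholders; printed as a dry run so nothing is written):

```shell
#!/bin/sh
# Dry run: print the ImageMagick command slicing a still into a 2x2 grid.
# The "@" flag makes -crop cut an even grid; +repage clears each tile's
# stored page offset so the tiles stand alone.
CMD="convert input.png -crop 2x2@ +repage tile_%d.png"
echo "$CMD"
# A single explicit tile would instead use e.g.: -crop 960x540+960+540
```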