
Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone)

I'd like to get real-time video from the iPhone to another device (either desktop browser or another iPhone, e.g. point-to-point).

NOTE: It's one-to-one for now, not one-to-many. Audio can be part of the stream or carried over a phone call on the iPhone.

There are four ways I can think of...

  1. Capture frames on the iPhone, send them to a media server, and have the media server publish the real-time video through its web server.

  2. Capture frames on the iPhone, convert them to images, send them to an HTTP server, and have JavaScript/AJAX in the browser reload the images from the server as fast as possible.

  3. Run an HTTP server on the iPhone, capture one-second movies and create M3U8 files on the iPhone, and have the other user connect directly to the iPhone's HTTP server for live streaming.

  4. Capture one-second movies on the iPhone, create M3U8 files on the iPhone, and upload them to an HTTP server; the other user connects to that server for live streaming. This seems like a good answer; has anyone gotten it to work?
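
For reference, options 3 and 4 both come down to serving an HTTP Live Streaming playlist. A minimal live playlist of one-second segments might look like the sketch below (segment names are illustrative; a live playlist has no `#EXT-X-ENDLIST` tag, and the server rewrites the file as new segments arrive while the player re-polls it):

```
#EXTM3U
#EXT-X-TARGETDURATION:1
#EXT-X-MEDIA-SEQUENCE:42
#EXTINF:1,
segment42.ts
#EXTINF:1,
segment43.ts
#EXTINF:1,
segment44.ts
```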

Is there a better, more efficient option? What's the fastest way to get data off the iPhone? Is it ASIHTTPRequest?

Thanks, everyone.

asked Apr 19 '11 16:04 by Jordan


2 Answers

Sending raw frames or individual images will never work well enough for you, because of the sheer amount of data and number of frames. Nor can you reasonably serve anything from the phone: WWAN networks put up all sorts of firewalls. You'll need to encode the video and stream it to a server, most likely over a standard streaming protocol (RTSP, RTMP). There is an H.264 encoder chip on the iPhone 3GS and later. The problem is that it is not stream-oriented; that is, it outputs the metadata required to parse the video last. This leaves you with a few options.

  1. Get the raw data and use FFmpeg to encode on the phone (this will use a ton of CPU and battery).
  2. Write your own parser for the H.264/AAC output (very hard).
  3. Record and process in chunks (this adds latency equal to the chunk length, and drops around 1/4 second of video between chunks as you stop and restart the sessions).
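
The cost of option 3 can be sanity-checked with a back-of-the-envelope model using the figures from the answer (one-second chunks, roughly a quarter second lost per writer restart); the function below is just that rough model, not measured data:

```python
def chunk_overhead(chunk_s, gap_s, n_chunks):
    """Rough model of chunked recording.

    Latency grows by one full chunk length (a chunk must finish
    recording before it can be uploaded), and each stop/start of
    the recording session drops roughly gap_s seconds of video.
    """
    latency = chunk_s               # end-to-end delay added by chunking
    lost = gap_s * (n_chunks - 1)   # video dropped at chunk boundaries
    return latency, lost

# One minute of video in 1-second chunks, 0.25 s lost per restart:
latency, lost = chunk_overhead(1.0, 0.25, 60)
print(latency, lost)  # about 15 seconds of the minute is dropped
```

So with one-second chunks the added latency is modest, but nearly a quarter of the footage disappears at the boundaries, which is why the overlap trick in the other answer matters.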
answered Oct 16 '22 11:10 by wombat57


"Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions)."

I have just written such code, and it is quite possible to eliminate that gap by overlapping two AVAssetWriters. Since this approach uses the hardware encoder, I strongly recommend it.
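
A toy model of the overlap trick (illustrative only, not the answerer's code): if each new writer is started a little before the previous one finishes, the overlap covers the frames the finishing writer would otherwise drop, so consecutive chunks leave no gap.

```python
def segments(total_s, chunk_s, overlap_s):
    """Schedule alternating writers so each new chunk starts
    overlap_s seconds before the previous one stops; the overlap
    absorbs the per-restart loss at chunk boundaries."""
    segs, t = [], 0.0
    while t < total_s:
        segs.append((t, min(t + chunk_s, total_s)))
        t += chunk_s - overlap_s
    return segs

# With a 0.25 s overlap, 1 s chunks over 5 s of video leave no gap:
s = segments(5.0, 1.0, 0.25)
assert all(s[i + 1][0] <= s[i][1] for i in range(len(s) - 1))
```

The price is that two writer sessions (and two encodes of the overlapped frames) are briefly active at once, which the hardware encoder can absorb far more cheaply than a software encode.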

answered Oct 16 '22 12:10 by Satoshi Nakajima