## Deliverables
On the Android platform the list of supported encoders may vary from one device to another, so to be sure that your application will work on all available devices you need to rely on the encoders that are part of the core platform. The supported media formats are listed here:
<[login to view URL]>
So it looks like there could be a problem with both H.264 and AAC encoding; a runtime check for these encoders is sketched below.
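As a rough illustration of how encoder availability could be verified at runtime, here is a minimal Java sketch using the public MediaCodecList API; the class name and the idea of gating streaming on this check are our own assumptions, not part of your brief.

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

public final class EncoderCheck {

    // Returns true if the device exposes an encoder for the given MIME type,
    // e.g. "video/avc" for H.264 or "audio/mp4a-latm" for AAC.
    public static boolean hasEncoderFor(String mimeType) {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) {
                continue;
            }
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase(mimeType)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```

Application code could call `hasEncoderFor("video/avc")` and `hasEncoderFor("audio/mp4a-latm")` at startup to decide whether streaming can be enabled on that particular device.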
Working via the RTP protocol, you need access to individual media frames (and, furthermore, the audio and video streams are sent separately, over different network ports). With the high-level video capture API (the MediaRecorder class) we instead get a multiplexed stream (video + audio together) in a serialized form ready to be written to a file (3GP or MPEG-4). In other words, there are no separate video and audio streams, only a media-file stream, so we have no access to the particular frames we would need to put into RTP packets.
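To make the limitation concrete, here is a minimal sketch of a typical MediaRecorder setup; the output path parameter and the chosen formats are assumptions for illustration. Everything the recorder captures goes through its internal muxer into a single container, and there is no callback that exposes individual encoded frames.

```java
import android.media.MediaRecorder;
import java.io.IOException;

public final class MuxedCaptureSketch {

    // Configures MediaRecorder in the usual way. The only output it offers is a
    // serialized MPEG-4/3GP container (muxed audio + video) written to a path or
    // file descriptor; no API here hands us per-frame encoded data.
    public static MediaRecorder startCapture(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setOutputFile(outputPath); // container stream, not frames
        recorder.prepare();
        recorder.start();
        return recorder;
    }
}
```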
To work with the RTP protocol we need separate, raw audio and video streams so they can be sent independently (this is how it works in the context of RTSP; the SIP or custom protocol you mention probably does not require this separation, but I believe it still requires access to individual frames).
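To show why per-frame access matters, here is a simplified sketch of RTP packetization over UDP following the RFC 3550 header layout. The payload type, SSRC handling, and the omission of H.264 fragmentation rules (RFC 6184) are simplifications on our part; separate instances would be used for the audio and video sessions.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public final class RtpSenderSketch {

    private final DatagramSocket socket;
    private final InetAddress peer;
    private final int port;          // audio and video use separate ports/sessions
    private final int payloadType;   // e.g. a dynamic payload type (96..127)
    private final int ssrc;
    private int sequenceNumber = 0;

    public RtpSenderSketch(InetAddress peer, int port, int payloadType, int ssrc) throws Exception {
        this.socket = new DatagramSocket();
        this.peer = peer;
        this.port = port;
        this.payloadType = payloadType;
        this.ssrc = ssrc;
    }

    // Wraps one encoded frame (or fragment) into an RTP packet (RFC 3550).
    // This is exactly the step that needs access to individual frames,
    // which MediaRecorder's muxed file output does not provide.
    public void send(byte[] frame, long timestamp, boolean lastOfFrame) throws Exception {
        ByteBuffer packet = ByteBuffer.allocate(12 + frame.length);
        packet.put((byte) 0x80);                                        // V=2, no padding/extension/CSRC
        packet.put((byte) ((lastOfFrame ? 0x80 : 0x00) | payloadType)); // marker bit + payload type
        packet.putShort((short) (sequenceNumber++ & 0xFFFF));           // sequence number
        packet.putInt((int) timestamp);                                 // media timestamp
        packet.putInt(ssrc);                                            // synchronization source
        packet.put(frame);                                              // payload: one frame/fragment
        socket.send(new DatagramPacket(packet.array(), packet.position(), peer, port));
    }
}
```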
Either way, our understanding is that with the output provided by a MediaRecorder object it is very difficult to implement streaming via standard protocols such as RTP. That is why we suggest going one level lower, to the NDK, and seeing whether we can get access to the raw data there.
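If the NDK route is taken, the Java side of the boundary could look roughly like the following; the library name, method names, and callback shape are purely hypothetical placeholders to show where per-frame data would surface.

```java
// Hypothetical Java-side boundary for a native (NDK) capture module.
// Nothing here is an existing API: the library and methods would be
// defined by the native code we propose to investigate.
public final class NativeCaptureBridge {

    static {
        System.loadLibrary("rawcapture"); // hypothetical .so built with the NDK
    }

    // Starts native capture; the native layer would deliver raw or encoded
    // frames back through the callback below instead of a muxed file.
    public static native boolean startNativeCapture();

    public static native void stopNativeCapture();

    // Called from native code (via JNI) with one frame at a time,
    // which is the granularity RTP packetization needs.
    public static void onFrameAvailable(byte[] frame, long timestampUs, boolean isVideo) {
        // hand the frame to the RTP sender (see the packetization sketch above)
    }
}
```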
Please let us know whether you agree with this approach or whether you would suggest an alternative one. We are interested in receiving bids from those who can deliver the technology requested.