In previous posts, we introduced WebRTC technology and its basic concepts. As discussed, WebRTC is designed for peer-to-peer communication between browsers. However, you can still use it in multiparty applications with one-to-many or many-to-many attendees. To build such a project, though, you will need WebRTC servers most of the time.
In this post, we will introduce WebRTC servers and new concepts such as the Multipoint Conferencing Unit (MCU), the Selective Forwarding Unit (SFU), transcoding, and simulcasting.
Mesh Topology
Mesh is the simplest topology for a multiparty application. In this topology, every participant sends its media to, and receives media from, all other participants. We call it the simplest because it is the most straightforward method: there are no tricky parts and no central unit such as a WebRTC server.
- It requires only a basic WebRTC implementation.
- Since participants connect to each other peer-to-peer, there is no need for a central media server.
- Only a restricted number of participants (roughly 4-6) can connect to each other.
- Since each participant sends media to every other one, each needs N-1 uplinks and N-1 downlinks (for N participants).
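The link counts above can be sketched as a small function; the function name is ours, but the arithmetic follows directly from the topology:

```typescript
// Sketch: per-participant and total stream counts in a mesh topology.
// For N participants, each one maintains N-1 uplinks and N-1 downlinks,
// so the session carries N * (N-1) streams in total.
function meshLinks(participants: number) {
  const peers = participants - 1;
  return {
    uplinksPerParticipant: peers,
    downlinksPerParticipant: peers,
    totalStreams: participants * peers,
  };
}

// With 6 participants, every client already handles 5 uplinks and
// 5 downlinks, which is why mesh tops out at roughly 4-6 attendees.
console.log(meshLinks(6)); // { uplinksPerParticipant: 5, downlinksPerParticipant: 5, totalStreams: 30 }
```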
Mixing Topology and MCU
Mixing is another topology, where each participant sends its media to a central server and receives one media stream back from it. This stream may contain some or all of the other participants' media. This central server is called an MCU.
- The client side requires only a basic WebRTC implementation.
- Each participant has only one uplink and one downlink.
- Since the MCU decodes and re-encodes every participant's media, it requires high processing power.
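To see where the processing cost comes from, here is a rough sketch of the work an MCU does per mixed frame, assuming it decodes every inbound stream and encodes one mixed output per attendee (a common design, though not the only one):

```typescript
// Sketch: operations an MCU performs for N participants, under the
// assumption of one decode per inbound stream and one personalized
// mixed encode per attendee.
function mcuServerWork(participants: number) {
  return {
    decodes: participants, // one decode per inbound stream
    encodes: participants, // one mixed output encoded per attendee
    // each client still has just a single uplink and downlink
    clientLinks: { uplinks: 1, downlinks: 1 },
  };
}

// Unlike mesh, client load stays constant, but the server's
// encode/decode load grows linearly with the participant count.
console.log(mcuServerWork(10));
```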
Routing Topology and SFU
Routing is a multiparty topology where each participant sends its media to a central server and receives all other participants' media from that server. This central server is called an SFU.
- An SFU requires less processing power than an MCU.
- Each participant has one uplink and N-1 downlinks (for example, four downlinks with five participants).
- An SFU requires a more complex design and implementation on the server side.
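The routing trade-off can be sketched the same way (function name ours): client uplinks stay at one, downlinks grow with the room size, and the server only forwards packets instead of transcoding them.

```typescript
// Sketch: link counts in a routing (SFU) topology with N participants.
// The server forwards streams without decoding or encoding them,
// which is why it needs less processing power than an MCU.
function sfuLinks(participants: number) {
  return {
    uplinksPerParticipant: 1,
    downlinksPerParticipant: participants - 1,
    forwardedStreamsOnServer: participants * (participants - 1),
  };
}

console.log(sfuLinks(5)); // { uplinksPerParticipant: 1, downlinksPerParticipant: 4, forwardedStreamsOnServer: 20 }
```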
Transcoding
Transcoding is the process of decoding compressed media, changing something in it, and then re-encoding it. Change is the keyword of this process. So what can be changed in media?
First, you can change the codec, since not every codec is compatible with every protocol or player.
Moreover, transrating is a change to the bitrate of the media. For example, changing the bitrate from 600 kbps to 300 kbps is transrating.
Another change is trans-sizing, which alters the frame size of the media. For example, changing the frame size from 1280×720 (720p) to 640×480 (480p) is trans-sizing.
Besides these, many other changes and filtering processes are available in the video processing area.
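As an illustration, all three kinds of change can be expressed as arguments to a transcoding tool such as ffmpeg. The file names below are placeholders, and this is only a sketch of one possible command, not a recommended production setting:

```typescript
// Sketch: an ffmpeg argument list combining the three changes above.
// "input.mp4" / "output.webm" are placeholder file names.
const transcodeArgs = [
  "-i", "input.mp4",
  "-c:v", "libvpx-vp9", // codec change: re-encode the video as VP9
  "-b:v", "300k",       // transrating: target bitrate of 300 kbps
  "-s", "640x480",      // trans-sizing: scale frames down to 480p
  "output.webm",
];

// A wrapper process would pass these to the ffmpeg binary, e.g.
// spawn("ffmpeg", transcodeArgs) in Node.js.
console.log(transcodeArgs.join(" "));
```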
Adaptive Bitrate and Simulcast
Adaptive bitrate streaming adjusts video quality according to network quality. In other words, if network quality is low, the server decreases the video bitrate. This is necessary to provide uninterrupted streaming over low-quality network connections. Clearly, the stream must be available at different bitrates for the adaptive bitrate technique to work. One way to obtain different bitrates is transrating: the server produces several streams with different bitrates from the original one. However, transrating is expensive in terms of processing power.
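The selection step can be sketched as follows; the bitrate ladder and function name are illustrative, and real servers measure bandwidth continuously rather than once:

```typescript
// Sketch: picking a rendition for adaptive bitrate streaming from a
// ladder of pre-produced (transrated) bitrates, highest first.
const ladderKbps = [1200, 600, 300]; // illustrative values

function pickBitrate(measuredKbps: number): number {
  // Choose the highest rendition the measured bandwidth can sustain,
  // falling back to the lowest one on very poor connections.
  return ladderKbps.find((b) => b <= measuredKbps) ?? ladderKbps[ladderKbps.length - 1];
}

console.log(pickBitrate(700)); // 600
console.log(pickBitrate(100)); // 300
```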
An alternative to transrating for providing adaptive bitrate is simulcast. In this technique, the publisher sends multiple streams with different bitrates instead of a single stream, and the server selects the best stream for each client by considering its network quality.
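In the browser, simulcast layers are declared through the sendEncodings option of RTCPeerConnection.addTransceiver. The layer names and bitrate values below are illustrative, not prescribed:

```typescript
// Sketch: three simulcast layers a publisher could offer. Each entry
// is an RTCRtpEncodingParameters object; rid names a layer, maxBitrate
// caps its bandwidth, and scaleResolutionDownBy shrinks its frames.
const simulcastEncodings = [
  { rid: "low",  maxBitrate: 150_000,   scaleResolutionDownBy: 4 },
  { rid: "mid",  maxBitrate: 500_000,   scaleResolutionDownBy: 2 },
  { rid: "high", maxBitrate: 1_500_000, scaleResolutionDownBy: 1 },
];

// In a browser this would be used roughly as:
// pc.addTransceiver(videoTrack, { direction: "sendonly", sendEncodings: simulcastEncodings });
// The SFU then forwards whichever layer fits each viewer's bandwidth.
console.log(simulcastEncodings.map((e) => e.rid)); // [ 'low', 'mid', 'high' ]
```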
What do WebRTC Servers do?
After introducing the concepts above, we can now explain the necessary features of WebRTC servers.
Although the mesh topology does not require a central media server, it still needs a signaling server; a WebRTC server can meet this need.
Moreover, a WebRTC server can act as the MCU or the SFU in the mixing and routing topologies.
Additionally, a WebRTC server must support transrating or simulcast to keep connections healthy under weak network conditions.