Media Streaming Techniques
Chapter 7: Media Streaming and Storage
In this chapter, we learn about media streaming techniques and typical devices used within streaming. Streaming is the process of media delivery via computer networks, most notably the internet.
Learning Outcomes
- To explain streaming and how media is transmitted via computer networks
- To give an outline of the encoding and storage of video material.
Media Streaming is the process of transmitting audio and video signals via computer networks, most notably the internet. It requires three parts: a source (to encode the stream), a server (to host the streaming service), and a browser or player (to view the stream).
Streaming is an increasingly important technology to learn about as content continues to be delivered to a variety of internet connected devices.
Internet Protocol Television (IPTV) is effectively the streaming of programmes (both TV and radio) and movies over the internet instead of via terrestrial broadcast. The media streamed may be live (e.g. news) or on-demand (e.g. movies, programmes etc.). IPTV is usually delivered over a ‘closed’ or ‘subscriber’ network, e.g. VirginTV (UK), with a specified minimum Quality of Service. It should not be confused with Internet TV (a.k.a. Web TV), which is transmitted using the same protocols but primarily consumed via a web browser on the ‘open’ internet (e.g. BBC iPlayer).
7.1 Stream Creation
7.1.1 Capture
The media stream can be pre-recorded or a live feed that is ‘captured’ and run through an encoder. An audio live feed can be used for internet radio; it requires a sound card to capture the audio input. Sound capture devices can be internal (e.g. sound cards or an integral motherboard device) or external (e.g. an audio interface), though it would be wise to review the earlier chapter on professional versus domestic signal levels before assuming that a built-in motherboard device can handle your incoming audio signal. Similarly, live video comes from a camera (a webcam for lower quality, or a video camera for higher quality), captured via a video capture card, an IEEE 1394 (a.k.a. FireWire) connection or, for domestic quality, a USB device. Some USB plug-in devices carry both audio and video signals; it is the software in the encoder that is set to look for where the feeds are attached to the computer.
Current tablets and smartphones can be used as internet streaming devices. These, along with some cameras, are already ‘cloud connected’ for storage and sharing, while IP cameras (viewable and controllable on streaming sites) can be used for home security and similar applications. It is possible that these will become more ubiquitous and gain greater streaming functionality in the future.
7.1.2 Encoding
Encoding is performed by an encoding software package such as Adobe Flash Live Media Encoder. A ‘live’ feed or pre-recorded media must be compressed and fed into the stream at a suitable bit rate and in a format that the media server can ingest and redirect. Any audio/video compression process requires the matching decompressor at the play-out destination. Compression (bit-rate reduction) at the transmission end can use either one-pass or two-pass encoding, and either a constant bit rate (CBR) or a variable bit rate (VBR).
Live encoding must be done in real time, ‘on the fly’, so it is one-pass encoding, i.e. the image data is analysed and compressed in a single pass. Pre-recorded material can use a multi-pass (usually two-pass) process, which gives higher encoding quality but cannot be used for live stream encoding.
Constant bit rate (CBR) is used for media streaming because the encoding quality and the feed to the server stay at the same bit rate, which can therefore be set to the maximum level the process will allow. Variable bit rate (VBR) is controlled either by a bit-rate range (minimum to maximum) or by an average bit rate (each pass is averaged, and the averages are then re-averaged) to achieve a close-to-uniform bit rate for the stream. Variable bit rate is used with a multi-pass encoding process.
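As a rough illustration of the difference (a sketch only: it assumes the ffmpeg tool is installed, and "input.mp4" and the 2.5 Mbit/s target are placeholder values), a one-pass CBR encode and a two-pass encode might be scripted as follows:

```python
# Sketch: one-pass CBR and two-pass encoding using ffmpeg via Python.
# Assumes ffmpeg is installed; "input.mp4" is a placeholder source file.
import subprocess

# One-pass CBR at 2.5 Mbit/s: minrate, maxrate and bufsize pin the bit rate,
# much as a live encoder must do on the fly.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264", "-b:v", "2500k",
    "-minrate", "2500k", "-maxrate", "2500k", "-bufsize", "5000k",
    "-c:a", "aac", "-b:a", "128k",
    "cbr_output.mp4",
], check=True)

# Two-pass encode: the first pass only analyses the material (output discarded;
# on Windows use NUL instead of /dev/null), the second pass uses that analysis
# to distribute bits more efficiently.
subprocess.run([
    "ffmpeg", "-y", "-i", "input.mp4",
    "-c:v", "libx264", "-b:v", "2500k", "-pass", "1",
    "-an", "-f", "null", "/dev/null",
], check=True)
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264", "-b:v", "2500k", "-pass", "2",
    "-c:a", "aac", "-b:a", "128k",
    "two_pass_output.mp4",
], check=True)
```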
7.2 Network Connectivity
7.2.1 IP address (Internet Protocol Address)
Every computer device that can be connected to the internet requires a unique address so that it can be found, rather like a telephone number. To host a media stream that can be found on the internet, the media server therefore requires a static IP address. Like telephone numbers in a directory, IP addresses are linked to meaningful names, which are resolved by a Domain Name System (DNS) process running on a web-hosting server. Consequently media servers and web servers are closely coupled, and media can be found using a web-based URL (Uniform Resource Locator) such as www.youtube.com.
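The name-to-address lookup that the DNS performs can be demonstrated with a few lines of Python using the standard socket module (a minimal sketch; the host name is simply the example given above):

```python
# Minimal sketch: resolving a host name to an IP address, as a DNS lookup does
# behind every URL.
import socket

ip = socket.gethostbyname("www.youtube.com")
print("www.youtube.com resolves to", ip)
```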
The source device (the computer or IP enabled camera etc.) needs to be found by the media server. This means the device has to ‘join’ the media server’s network (a more permanent connection) or pass on its IP address for the session (a temporary connection lasting until the session ends).
Network Router
The link between the source device and the media server may not be a direct connection but may run through other connecting devices (network servers and routers). A router is a device that redirects data to another connected device, either on its own network or on another. This is essentially how the interconnectivity of the internet works: data is routed from the server of the source device’s internet service provider across other networks until it is finally redirected to the destination.
Multiplexer (MUX)
It would clearly be inefficient if the stream of data being passed to a media server were dedicated solely to one source device. This would be a poor use of bandwidth (strictly the range of frequencies available, but in practice the data-carrying capacity of the connection), so multiple devices are streamed simultaneously. A multiplexer (MUX) is a device that combines several input streams into a single output stream, which is then split back into individual streams by a de-multiplexer (DEMUX).
Figure 7-1 : Multiplexer to Demultiplexer
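A toy sketch of the idea (illustrative only, with made-up packet labels): packets from several sources are tagged with a stream identifier, interleaved onto one output, and then separated again at the far end.

```python
# Toy illustration of multiplexing: packets from several sources are tagged
# with a stream id, interleaved onto one output, then split back out again.
from itertools import zip_longest

def mux(*streams):
    """Interleave packets from several streams, tagging each with its source."""
    for packets in zip_longest(*streams):
        for stream_id, packet in enumerate(packets):
            if packet is not None:
                yield (stream_id, packet)

def demux(muxed, n_streams):
    """Split a tagged single stream back into per-source lists."""
    out = [[] for _ in range(n_streams)]
    for stream_id, packet in muxed:
        out[stream_id].append(packet)
    return out

audio = ["a0", "a1", "a2"]
video = ["v0", "v1", "v2"]
combined = list(mux(audio, video))   # one interleaved stream
print(demux(combined, 2))            # [['a0', 'a1', 'a2'], ['v0', 'v1', 'v2']]
```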
7.2.2 Web-Host Servers
For a stream to be found it must be made available to web browsers or stand-alone players (e.g. Windows Media Centre). The web-host server handles the web-site connectivity; it need not be the same computer as the media server, but the two need a connection between them that is not prone to interruption. The web-host typically serves an HTML (HyperText Markup Language) page containing a plug-in media player connected to the media stream managed by the media server, and it hosts the web site in which that page resides. Consequently it must link to the internet and to the Domain Name System (DNS) process so that the IP address can be resolved from the web-site name and routing can direct the end user’s browser to the host.
Figure 7-2 : Connection diagram
The browser only needs to connect to the internet via its Internet Service Provider to link to the media stream’s web-host server (see Figure 7-2, Connection diagram). Hence any internet device with media playing capability can view the stream, including smartphones, tablets and PCs, provided it has the right decompressing codec and can manage the bit rate of the stream from the media server. This is why it is important to offer several streams of differing quality, bit rate and format (e.g. a Windows Media Video .WMV file may not play on an iPad without a conversion app).
A further file often created alongside the stream-hosting web page is an announcement file. These are particularly important for making potential viewers aware of the content and for setting links to the media stream.
Podcast and Vodcast
Podcasts are audio files that are available for download from a web-hosting service (vodcasts are podcasts that include video). They differ from media streaming in that the content is downloaded and then played on the user’s device, whereas streamed media is viewable but not downloaded to the device. Podcasts are often announced by RSS feeds (short web content files) to which your device can subscribe.
7.3 Media Streaming Servers
A media server is additional software that runs on a typical web server (or on a file server with a web-host server connection). It requires a static IP address, so that its address does not alter between sessions. The media server software adds protocols beyond those found on a simple web server. In addition to Hypertext Transfer Protocol (HTTP), which is built into the web server, these additional protocols differ between proprietary server products. Real Time Messaging Protocol (RTMP) and HTTP Dynamic Streaming (HDS) are used by Adobe Flash systems; Microsoft Media Services (MMS) is no longer supported for Windows streaming, which now uses HTTP and Real Time Streaming Protocol (RTSP); and HTTP Live Streaming (HLS) is the protocol for Apple iOS based systems.
Adaptive streaming (HDS, HLS and Microsoft Smooth Streaming) requires the stream to be fragmented (which is how HTTP delivers content) and may use the MPEG-DASH standard. In adaptive streaming the content is streamed as fragments encoded at a variety of bit rates, with the client automatically selecting the most appropriately sized next fragment based on its current playback state, to minimise buffering. This differs from the older method of providing several dedicated streams at constant bit rates, with the client selecting the one most appropriate to its computer’s (or router’s) connection capability.
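A sketch of the client-side decision in adaptive streaming (the bit-rate ladder and the 80% safety margin are illustrative values, not taken from any particular player): pick the highest-rate fragment that fits comfortably within the measured throughput.

```python
# Sketch of adaptive bit-rate selection: choose the highest rendition whose
# bit rate fits the measured throughput, with headroom to avoid buffering.
RENDITIONS_KBPS = [400, 800, 1500, 2500, 4500]   # available fragment bit rates
SAFETY_MARGIN = 0.8                              # use only 80% of measured throughput

def choose_rendition(measured_throughput_kbps):
    budget = measured_throughput_kbps * SAFETY_MARGIN
    suitable = [r for r in RENDITIONS_KBPS if r <= budget]
    return suitable[-1] if suitable else RENDITIONS_KBPS[0]

for throughput in (300, 1200, 3000, 8000):
    print(throughput, "kbit/s measured ->", choose_rendition(throughput), "kbit/s fragment")
```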
7.3.1 Content Delivery
Unicast
In a Unicast scenario the client connects to the server on a one-to-one basis. The number of clients is limited by bandwidth considerations.
Multicast
In a multicast scenario the server streams to a multicast IP address (a special address on the client’s network). This is one-to-many delivery and is an effective means of reaching many clients with less bandwidth overhead.
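A minimal sketch of the client side of multicast in Python (the group address 239.1.1.1 and the port are illustrative; 239.0.0.0/8 is an administratively scoped multicast range):

```python
# Sketch of a client joining a multicast group (the one-to-many case).
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the network stack (and the local router) that we want this group's traffic.
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(2048)   # blocks until a packet arrives
print(f"received {len(data)} bytes from {sender}")
```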
UDP v TCP
All content (including streams) is delivered across a network in packets. With User Datagram Protocol (UDP) the stream is sent without first establishing a connection, and no acknowledgement of receipt is made. UDP is seen as unreliable, but it is simpler and quicker. Transmission Control Protocol (TCP) is bi-directional, so it checks for receipt and retransmits missing packets. TCP is seen as reliable but slower. A good discussion of this is at http://en.wikipedia.org/wiki/User_Datagram_Protocol#Comparison_of_UDP_and_TCP
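The difference can be seen at the socket level; the sketch below uses placeholder addresses (192.0.2.x is a documentation range) and is illustrative only:

```python
# Sketch contrasting UDP and TCP sends.
import socket

payload = b"one packet of stream data"

# UDP: no connection set-up, no acknowledgement; the packet is simply sent.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(payload, ("192.0.2.10", 5004))      # fire and forget

# TCP: a connection is established first, and the stack retransmits lost
# segments until they are acknowledged (or the connection times out).
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.10", 8000))
tcp.sendall(payload)
tcp.close()
```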
7.3.2 Live Streaming
A live stream needs to be seen at the time of broadcast. It requires a connection to a publishing point on the media server, which in turn connects to an encoding device; this may be another computer or a camera with IP-addressable capability. The publishing point provides the connection between the content (live stream or pre-recorded) and the client’s computer, which links to it via a web-host request from the internet.
7.3.3 Video on Demand (VoD)
If the stream is recorded then it can be treated in the same way as any pre-recorded media file. Note: it is not advisable to record an encoded stream and then re-encode it, as this would severely compromise its quality.
Streams and Playlists
Media streaming servers can hold many pre-recorded files ready for streaming, often collected into separate playlists (one media file plays immediately after another). These playlist or file streams can run on a continuous loop or await selection by a viewer through the browser. This latter selection method is called video-on-demand, although it applies equally to audio files.
Bandwidth considerations
A media server can manage several streams and be linked to several web-hosts at the same time. This requires careful planning of the number of streams the media server can handle, which is a function of its connection bandwidth. The more simultaneous streams being handled, the lower the bit rate available for each of those streams. If a media server has a 1 Gbit/s connection then it could handle at most 1000 x 1 Mbit/s streams. However, full utilisation of the bandwidth like this is not normally done; there are recommended bit rates for video streams based on destination image size and aspect ratio, e.g. a 1280 x 720 HD video with stereo audio will require a rate of around 2.5 Mbit/s. A good source for this is:
http://www.adobe.com/devnet/adobe-media-server/articles/dynstream_live/popup.html
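A back-of-envelope version of this calculation (the 70% utilisation figure is an assumed planning margin, not a fixed rule):

```python
# How many simultaneous streams could a given server connection support at a
# given per-stream bit rate?
def max_streams(server_bandwidth_mbps, stream_bitrate_mbps, utilisation=0.7):
    """Utilisation < 1 leaves headroom; full use of the link is not advisable."""
    return int((server_bandwidth_mbps * utilisation) // stream_bitrate_mbps)

# 1 Gbit/s connection, 2.5 Mbit/s per 1280x720 stream, 70% utilisation:
print(max_streams(1000, 2.5))   # -> 280 streams
```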
Push – Pull
The relationship between the encoding source device and the media server is defined by which device initiates the stream: either the stream is PULLED from the source device by the media server (needed for video-on-demand), or it is PUSHED to the server by the source device to start the service (a broadcast need). The media server needs to know how the stream is to be initiated before it can start the service.
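A sketch of a push, assuming the ffmpeg tool is installed and using placeholder values for the input file, server URL and stream key: the encoder reads a file in real time and pushes it to an RTMP publishing point.

```python
# Sketch of a PUSH: an encoder pushes pre-recorded material to an RTMP
# publishing point in real time. Server URL, stream key and input file are
# placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-re",                 # -re reads the input at its native rate (real time)
    "-i", "input.mp4",
    "-c:v", "libx264", "-b:v", "2500k",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv",                     # RTMP expects an FLV-wrapped stream
    "rtmp://media.example.com/live/streamkey",
], check=True)
```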
7.4 Storage
7.4.1 Read – Write speed
All devices used to store data (including audio and video) need to be able to write to the storage medium faster than the transmitted data is fed to them; otherwise the data must be buffered (held in temporary memory on a faster device) and then read from the buffer to maintain the data transmission sequence. Consequently, if a device that cannot cope with the data transmission rate is used, it will fail or lose data, e.g. using a low-class SD card in a camcorder.
Data is written to storage devices and stored in a binary format, but unlike data transmission the kilo/mega/giga/tera sizes are based on multiples of 1024 (2^10), not 1000. The speed at which data can be read from a storage device may also be slower than required for real-time playback, which makes the playback prone to stutter and freeze.
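A small worked example of both points (the 50 Mbit/s recording rate and 80 Mbit/s sustained write speed are illustrative figures, not taken from the text):

```python
# Storage sizes use powers of 1024 (binary units), while transmission rates
# use powers of 1000, and a device must write faster than the incoming stream
# to avoid buffering.
KIB, MIB, GIB = 1024, 1024**2, 1024**3

capacity_bytes    = 32 * GIB           # a "32 GB" device, counted in binary units
incoming_rate_bps = 50 * 1_000_000     # 50 Mbit/s stream being recorded
device_write_bps  = 80 * 1_000_000     # sustained write speed of the device

print("Device can keep up without buffering:", device_write_bps >= incoming_rate_bps)
print("Recording time at this rate (minutes):",
      round(capacity_bytes * 8 / incoming_rate_bps / 60))
```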
7.4.2 Simple Storage Devices Considerations
Tapes – early tapes (DVCAM, DV) required striping, i.e. recording a continuous timecode onto the tape before use. Later tape devices (including mini-DV camcorders) made this unnecessary, though any discontinuities or repetitions in the timecode could still cause problems when ingesting material into an editing workstation.
Cards – cards such as SDHC have a class rating, which denotes the minimum sustained write speed of the card in megabytes per second, and a storage size in gigabytes. Always check that the card will work with the device, and check whether there is a device firmware update, particularly if the device is more than a year old.
USB sticks – as with cards, their read/write speeds differ wildly; check using a speed-testing application. It is usually better to transfer video files to a hard disk before using them for playback or editing.
CD and DVD disks have a read/write speed that depends on the quality of the disk (its rated recording speed), but another consideration is the data rate used in the writing process from software such as a video non-linear editor (NLE). An ‘average’ bit rate (variable bit rate, VBR, based on peak and minimum values) or a ‘constant’ bit rate (CBR) needs to be selected that will not only write to the disk but also allow the disk to be played on the output device. Computers can write to disks comfortably at 9-11 Mbit/s, but this needs to be slower (around 5 Mbit/s) when writing a DVD that is to be played on an older standard-definition DVD player. The bit rate should also be considered as part of the overall file size and the storage capacity of the disk. A good explanation of data rate calculation is given at: https://helpx.adobe.com/encore/using/project-planning.html#bit_budgeting
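A rough bit-budget check in the spirit of the linked article (the 4.7 GB figure is the nominal single-layer DVD capacity; the durations and bit rates are examples):

```python
# Does a programme of a given length fit on a disc at a chosen bit rate?
DVD_CAPACITY_BYTES = 4.7e9          # disc capacities are quoted in decimal gigabytes

def fits_on_dvd(duration_minutes, total_bitrate_mbps):
    size_bytes = duration_minutes * 60 * total_bitrate_mbps * 1e6 / 8
    return size_bytes, size_bytes <= DVD_CAPACITY_BYTES

for minutes, mbps in [(60, 9.0), (90, 5.0), (150, 5.0)]:
    size, ok = fits_on_dvd(minutes, mbps)
    print(f"{minutes} min at {mbps} Mbit/s -> {size / 1e9:.1f} GB, fits: {ok}")
```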
Hard Disks – many older hard disks (often found in laptops) spin at 5400 rpm, which is too slow for video playback; a minimum 7200 rpm disk is needed. Hard disks read/write at around 50-150 MBytes per second.
SSD – Solid State Disks are now finding favour due to their faster read/write speeds compared with traditional hard disk technology. SSD read/write speeds are between 200 and 500 MBytes per second.
7.4.3 Network Storage
Connection speed
Any network storage has to pass data via the network card (NIC – Network Interface Controller), and this connection needs to be as fast as possible (preferably a fibre connection at 1 Gbit/s, but a minimum of 10 Mbit/s for Ethernet). Unlike other storage considerations, network traffic can be bandwidth-throttled (i.e. the bit rate is reduced), which will affect speed. If the NIC is under your administration, always set it to maximum performance.
NAS, SAN, Cloud
NAS (Network Attached Storage) is what most people think of as network storage: an array of hard disks, remote from your computer, that allows file storage and is directly accessible via the network.
SANs (Storage Area Networks) form a separate network but do much the same job as NAS, differing mainly in access protocol (block-level rather than file-level access).
The Cloud is just another remote storage area (typically built on SAN technology) but accessed via the internet (typified by a URL – Uniform Resource Locator) rather than over a local area network (LAN) connection (typified by a UNC path – Universal Naming Convention).
7.4.4 Raid
RAID (Redundant Array of Independent (or Inexpensive) Disks) allows disk storage to have a measure of redundancy and/or striping, providing a secure method of retrieving data should a disk fail. For media technology only a few RAID levels (configurations) are used (typically Levels 0 and 5 – see Figure 7-3, RAID diagram).
Level 0 – usually this requires at least two disks and the data is striped across them. (Note: it can be implemented on one physical disk using two logical drives, but with little advantage.) Striping essentially means data is split into blocks and distributed across the disks. Level 0 is typically used in video storage applications because it is fast, but there is no redundancy (no duplication), so recovery from a disk crash is almost impossible; if a disk fails, the file may not be fully recoverable. If you have RAID 0 on your disks, always keep an external copy of your original audio and video files.
Level 1 – disk mirroring. This requires at least two disks but is slow, as it writes the same data twice (once to each disk). Data is easy to recover because the system has full redundancy (each disk is duplicated). This level is good for general data and possibly audio-only files; many video editors feel it is too slow for working with video files.
Level 5 – block striping with parity. This requires a minimum of three disks; data is striped across all the disks except one, and the parity for that block is written to the excepted disk. This is repeated with a different disk holding the parity for each block. One disk can fail and be rebuilt from the others using the parity blocks on the remaining disks. RAID 5 is also popular with video editors as long as the RAID controller is fast enough; it is slower than Level 0 but faster than Level 1, and has enough redundancy for disk recovery.
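A toy illustration of the parity idea behind Level 5 (two data blocks and one parity block; real arrays rotate the parity across disks as described above):

```python
# The parity block is the XOR of the data blocks, so any single missing block
# can be rebuilt from the remaining blocks.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

block_a = b"AAAAAAAA"                    # data block on disk 1
block_b = b"BBBBBBBB"                    # data block on disk 2
parity  = xor_blocks(block_a, block_b)   # parity block on disk 3

# Disk 1 fails: rebuild its block from the surviving block and the parity.
rebuilt_a = xor_blocks(parity, block_b)
print(rebuilt_a == block_a)              # True
```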