COMPRESSED-SENSING-ENABLED VIDEO STREAMING FOR WIRELESS MULTIMEDIA SENSOR NETWORKS
This work presents the design of a networked system for joint compression, rate control, and error correction of video over resource-constrained embedded devices based on the theory of Compressed Sensing (CS). The objective is to design a cross-layer system that jointly controls the video encoding rate, the transmission rate, and the channel coding rate to maximize the received video quality. First, compressed-sensing-based video encoding for transmission over Wireless Multimedia Sensor Networks (WMSNs) is studied. It is shown that compressed sensing can overcome many of the current problems of video over WMSNs, primarily encoder complexity and low resiliency to channel errors. A rate controller is then developed with the objective of maintaining fairness among different videos while maximizing the received video quality. It is shown that the rate of Compressed Sensed Video (CSV) can be predictably controlled by varying only the compressed sensing sampling rate. It is then shown that the developed rate controller can be interpreted as the iterative solution to a convex optimization problem representing the optimization of the rate allocation across the network. The error resiliency properties of compressed sensed images and videos are then studied, and an optimal error detection and correction scheme is presented for video transmission over lossy channels. Finally, the entire system is evaluated through simulation and test bed experiments. The rate controller is shown to outperform existing TCP-friendly rate control schemes in terms of both fairness and received video quality. The test bed results show that the rates converge to stable values in real channels.
Existing System:
There has been intense research and considerable progress in solving numerous wireless sensor networking challenges. However, the key problem of enabling real-time, quality-aware video streaming in large-scale multihop wireless networks of embedded devices is still open and largely unexplored.
There are two key shortcomings in systems that send predictively encoded video (e.g., MPEG-4 Part 2, H.264/AVC, H.264/SVC) through a layered wireless communication protocol stack: encoder complexity and limited resiliency to channel errors. Encoder complexity: predictive encoding requires complex processing algorithms, which lead to high energy consumption. New video encoding paradigms are therefore needed to reverse the traditional balance of a complex encoder and a simple decoder, which is unsuited for embedded video sensors.
Limited resiliency to channel errors: in existing layered protocol stacks based on the IEEE 802.11 and 802.15.4 standards, frames are split into multiple packets. If even a single bit is flipped due to channel errors, the entire packet is dropped at the final or an intermediate receiver once the cyclic redundancy check fails.
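As a minimal illustration of this failure mode (our own sketch, with a hypothetical packet layout, not code from the system described here), the plain Java snippet below shows how a receiver that validates a CRC-32 checksum discards an entire packet when even one bit is corrupted in transit.

import java.util.zip.CRC32;

public class CrcDropDemo {
    // Returns true only if the received payload matches the CRC computed at the sender.
    static boolean passesCrc(byte[] payload, long senderCrc) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue() == senderCrc;
    }

    public static void main(String[] args) {
        byte[] packet = "one video slice split across several packets".getBytes();
        CRC32 crc = new CRC32();
        crc.update(packet);
        long senderCrc = crc.getValue();

        // Flip a single bit to emulate a channel error.
        packet[5] ^= 0x01;

        // The whole packet is discarded even though only one bit is wrong.
        System.out.println(passesCrc(packet, senderCrc) ? "deliver packet" : "drop packet");
    }
}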
Proposed System:
We show that a new cross-layer optimized wireless system based on the recently proposed Compressed Sensing (CS) paradigm can offer a convincing solution to the aforementioned problems. Compressed sensing (also known as “compressive sampling”) is a new paradigm that allows the faithful recovery of signals from M ≪ N measurements, where N is the number of samples required by Nyquist-rate sampling. Hence, CS can offer an alternative to traditional video encoders by enabling imaging systems that sense and compress data simultaneously at very low computational complexity for the encoder. Image coding and decoding based on CS have recently been explored. So-called single-pixel cameras, which can operate efficiently across a much broader spectral range (including infrared) than conventional silicon-based cameras, have also been proposed. However, the transmission of CS images and video streams over wireless networks, and their statistical traffic characterization, remain substantially unexplored.
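To make the sampling model concrete, the sketch below (our own illustration, not code from the paper; the signal, matrix normalization, and dimensions are assumptions) computes M random Gaussian measurements y = Phi * x of a length-N signal with M well below N. The decoder would later recover x via sparse reconstruction, which is omitted here.

import java.util.Random;

public class CsMeasurement {
    // Computes y = Phi * x, where Phi is an M x N random Gaussian matrix with M << N.
    static double[] measure(double[] x, double[][] phi) {
        int m = phi.length, n = x.length;
        double[] y = new double[m];
        for (int i = 0; i < m; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++) sum += phi[i][j] * x[j];
            y[i] = sum;
        }
        return y;
    }

    public static void main(String[] args) {
        int n = 1024;          // Nyquist-rate samples (e.g., pixels of an image block)
        int m = 256;           // compressive measurements, M << N
        Random rng = new Random(7);

        double[] x = new double[n];
        for (int j = 0; j < n; j++) x[j] = rng.nextGaussian();   // placeholder signal

        double[][] phi = new double[m][n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                phi[i][j] = rng.nextGaussian() / Math.sqrt(m);   // normalized Gaussian entries

        double[] y = measure(x, phi);
        System.out.println("Sampling rate M/N = " + (double) m / n + ", measurements: " + y.length);
    }
}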
Video transmission using compressed sensing. We develop a video encoder based on compressed sensing. We show that, by using the difference between the CS samples of two frames, we can capture and compress the frames based on the temporal correlation at low complexity without using motion vectors.
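Because the CS measurement operator is linear, the difference of the CS samples of two frames equals the CS samples of the frame difference, so temporal redundancy can be exploited directly in the measurement domain. The sketch below illustrates this idea; the energy-ratio decision rule and its threshold are our own illustrative assumptions, not the encoder's exact criterion.

public class CsFrameDiff {
    // Linearity of CS: Phi*(f2 - f1) = Phi*f2 - Phi*f1, so the encoder can work
    // directly on measurement vectors without motion estimation.
    static double[] difference(double[] yPrev, double[] yCurr) {
        double[] d = new double[yCurr.length];
        for (int i = 0; i < d.length; i++) d[i] = yCurr[i] - yPrev[i];
        return d;
    }

    // Illustrative decision (assumption, not the paper's exact rule): send the
    // low-energy difference when frames are highly correlated, otherwise send a
    // full set of CS samples for the new frame.
    static boolean sendDifference(double[] d, double[] yCurr, double ratio) {
        double ed = 0, ey = 0;
        for (int i = 0; i < d.length; i++) { ed += d[i] * d[i]; ey += yCurr[i] * yCurr[i]; }
        return ed < ratio * ey;
    }

    public static void main(String[] args) {
        double[] y1 = {1.0, 2.0, 3.0, 4.0};   // CS samples of frame k-1 (toy values)
        double[] y2 = {1.1, 2.0, 2.9, 4.2};   // CS samples of frame k
        double[] d = difference(y1, y2);
        System.out.println("transmit difference? " + sendDifference(d, y2, 0.1));
    }
}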
Distortion-based rate control. The proposed rate controller, C-DMRC, leverages the estimated received video quality as the basis of the rate control decision. The transmitting node controls the quality of the transmitted video directly. Since the compression of the video depends linearly on the video quality, this effectively controls the data rate. By controlling congestion in this way, fairness in the quality of the received videos is maintained even across videos with very different compression ratios.
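The property exploited here is that the data rate of CS-encoded video scales with the number of measurements kept per frame, so steering quality steers the rate. The sketch below uses an assumed linear quality-to-sampling-rate model with illustrative constants (QCIF resolution, 8 bits per measurement, 15 fps), none of which are taken from the paper.

public class QualityToRate {
    // Illustrative linear model (assumption): estimated received quality grows
    // roughly linearly with the CS sampling rate M/N over the useful range.
    static double samplingRateForQuality(double targetQuality, double qMin, double qMax) {
        double s = (targetQuality - qMin) / (qMax - qMin);   // normalize to [0, 1]
        return Math.max(0.05, Math.min(1.0, s));             // clamp to a valid sampling rate
    }

    // The transmission rate then follows directly from the sampling rate,
    // so controlling quality effectively controls the data rate.
    static double bitRate(double samplingRate, int pixelsPerFrame, int bitsPerSample, double fps) {
        return samplingRate * pixelsPerFrame * bitsPerSample * fps;   // bits per second
    }

    public static void main(String[] args) {
        double s = samplingRateForQuality(0.7, 0.0, 1.0);    // toy quality scale in [0, 1]
        System.out.printf("sampling rate %.2f -> %.0f bit/s%n",
                s, bitRate(s, 176 * 144, 8, 15));            // QCIF at 15 fps as an example
    }
}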
Rate change aggressiveness based on video quality. With the proposed controller, nodes adapt the rate of change of their transmitted video quality based on an estimate of the impact that a change in the transmission rate will have on the received video quality. The rate controller uses the estimated received video quality directly in the rate control decision. If the sending node estimates that the received video quality is high, and Round-Trip Time (RTT) measurements indicate that current network congestion conditions would allow a rate increase, the node increases its rate less aggressively than a node estimating lower video quality at the same round-trip time.
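A sketch of this behavior follows; the additive update rule, step sizes, and RTT threshold are illustrative assumptions and not the exact C-DMRC control law. When the RTT indicates spare capacity, the increase step shrinks as the estimated received quality grows, so streams with poor quality claim bandwidth more aggressively.

public class RateUpdateSketch {
    // Illustrative increase/decrease step (assumption, not the paper's exact law):
    // the higher the estimated received quality, the gentler the rate increase.
    static double nextSamplingRate(double currentRate, double estQuality,
                                   double rttMs, double rttThresholdMs) {
        double step = 0.02 * (1.0 - estQuality);       // aggressiveness shrinks with quality
        if (rttMs < rttThresholdMs) {
            return Math.min(1.0, currentRate + step);  // network allows an increase
        }
        return Math.max(0.05, currentRate - 0.05);     // back off when congestion is sensed
    }

    public static void main(String[] args) {
        // Two senders observe the same RTT but estimate different received qualities.
        System.out.println(nextSamplingRate(0.5, 0.9, 80, 120));  // high quality: small increase
        System.out.println(nextSamplingRate(0.5, 0.3, 80, 120));  // low quality: larger increase
    }
}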
Optimality of the rate control algorithm. Finally, we show that the proposed rate control algorithm can be interpreted as an iterative gradient-descent-based solution to the optimal rate allocation problem (i.e., finding the rates that maximize the sum of video qualities).
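In optimization terms, the allocation problem is roughly "maximize the sum of per-source qualities q_i(r_i) subject to link capacity." The sketch below is a generic primal-dual gradient iteration for that kind of problem, using a toy concave quality function and a single shared link purely for illustration; it is not the paper's actual algorithm or quality model.

public class RateAllocationSketch {
    // Toy concave quality model q(r) = log(1 + r); concavity is what makes the
    // aggregate-quality problem convex and a gradient method applicable.
    static double qualityGradient(double r) { return 1.0 / (1.0 + r); }

    public static void main(String[] args) {
        double[] r = {0.1, 0.1, 0.1};     // per-source rates sharing one link
        double capacity = 3.0, price = 0.0, step = 0.05;

        // Gradient-style iteration: each source moves toward the point where its
        // marginal quality equals the congestion price; the price enforces capacity.
        for (int it = 0; it < 500; it++) {
            double total = 0.0;
            for (int i = 0; i < r.length; i++) {
                r[i] = Math.max(0.0, r[i] + step * (qualityGradient(r[i]) - price));
                total += r[i];
            }
            price = Math.max(0.0, price + step * (total - capacity));
        }
        for (double ri : r) System.out.printf("allocated rate %.3f%n", ri);
    }
}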
Software Requirements:
Core Java
Front End – Swing
Servlet
Back End – MySQL Server
Windows XP
Hardware Requirements:
RAM: 512 MB
Hard Disk: 80 GB
Processor: Pentium IV