
How FPGAs breathe life into analog video


Technology News |
By eeNews Europe



Abstract
Digital video broadcasting, video compression, and ever-expanding video resolutions such as 4K x 2K dominate the news in electronics magazines. Yet that little yellow RCA connector remains ubiquitous, and large numbers of people still rely on NTSC or PAL analog broadcasting for their viewing pleasure. This article looks at how FPGAs are breathing new life into this presumed-dead format.

Introduction
There are two main components to enable analog video transmission: the encoder at the transmitter (e.g., the camera) and the decoder at the receiver (e.g., the television). Most major semiconductor manufacturers offer at least one of these components – most offer both. Not surprisingly, these components are usually older designs, some perhaps having their origins as much as 20-30 years ago. After all, the standards have not changed, so why the need to update the ICs? In this article I will consider how the introduction of low-cost, high-functionality FPGAs justifies looking once again at analog video transmission.

Returning to old friends
Even high definition video sources can benefit from having that RCA connector, even when – for legacy reasons – they aren’t obliged to do so (e.g., your Blu-ray player). How many times have you had a blank screen when using HDMI and wished you had another robust output just to check you are not going mad?

Consider an HD security camera, for example, which may offer HD-SDI or analog component YPbPr outputs. Offering a simultaneous NTSC or PAL output provides an easy method to connect and test the installation, whilst the use of NTSC and PAL allows transmission over hundreds of meters of existing low-cost coaxial cable installations, something neither YPbPr nor HD-SDI can do – although of course without the resolution of HD.

Recent low-cost FPGA offerings from, for example, Altera (Cyclone) and Lattice (XP2) offer a way of adding this output at low cost. A broadcast quality NTSC encoder may use approximately 6000 logic elements, much less for a consumer grade encoder, allowing the smallest of FPGAs to be utilized[1]. And, of course, the spare logic of the FPGA may be used for additional functions, such as the camera control interface.


But once we have our own encoder and are not limited by the constraints of a decade-old ASIC design, we can improve its performance. We can change the sampling rate of the encoder to better match the sensor. For example, Sony's Effio image sensors offer 960 pixels/line compared to 720 pixels for a typical NTSC/PAL sensor. This image can be transmitted over conventional 'NTSC' transmission paths, yet gives a much more detailed image for very little added cost. Our FPGA encoder can easily be modified to transmit this extra information.
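As a back-of-the-envelope check (my own arithmetic, not from the original design), the required encoder sample rate scales directly with the active-pixel count, since the active line duration is fixed by the NTSC/PAL line timing:

```python
# Sketch: relating active-pixel count to encoder sample rate. BT.601 samples
# SD video at 13.5 MHz, giving 720 active luma samples per line; keeping the
# same active line duration but sampling faster yields more horizontal detail.

BT601_RATE_HZ = 13.5e6       # standard SD luma sample rate
BT601_ACTIVE_SAMPLES = 720   # active samples per line at 13.5 MHz

def sample_rate_for(active_samples: int) -> float:
    """Sample rate needed for a given active-pixel count, assuming the
    active line duration stays fixed by the 525/625-line timing."""
    return BT601_RATE_HZ * active_samples / BT601_ACTIVE_SAMPLES

effio_rate = sample_rate_for(960)  # 18 MHz for a 960-pixel 'Effio-style' line
```

So a 960-pixel line simply means clocking the same encoder logic at 18 MHz instead of 13.5 MHz – exactly the kind of change that is trivial in an FPGA and impossible in a fixed-function encoder IC.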

Perhaps we wish to transmit some additional data? Again, in a closed system, we can modify our custom encoder to transmit data or digital audio in the vertical blanking interval, similar to how Closed Captioning or Teletext worked.
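To illustrate the idea – the names and signal levels below are hypothetical, not taken from any real VBI standard – a data byte can be serialized onto a blanking line as runs of two luma levels, much as Closed Captioning did:

```python
# Hypothetical sketch of stuffing data into a vertical-blanking line, in the
# spirit of Closed Captioning: each bit becomes a run of luma samples at
# either a near-blanking level or a 'high' level. Levels and run lengths
# here are illustrative only.

BLANK = 60            # 8-bit luma code representing a '0' bit
HIGH = 200            # 8-bit luma code representing a '1' bit
SAMPLES_PER_BIT = 16  # pixel clocks occupied by each bit

def encode_vbi_line(payload: bytes) -> list[int]:
    """Return luma samples for one VBI line carrying `payload`, LSB first."""
    samples = []
    for byte in payload:
        for bit in range(8):
            level = HIGH if (byte >> bit) & 1 else BLANK
            samples.extend([level] * SAMPLES_PER_BIT)
    return samples

line = encode_vbi_line(b"\xA5")  # one byte -> 128 luma samples
```

A matching decoder simply slices the line back into bit periods and compares each against a threshold – both ends live in the same closed system, so the format is entirely ours to define.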

NTSC and PAL are composite video formats – the chroma and luma occupy the same frequency domain – a restriction we no longer have in a closed system. Our own video encoder can separate the luma and chroma, sending them at different frequencies, and therefore avoiding the issues with cross-color at the video decoder.
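As a sketch of this (my own illustration, not the article's actual encoder), the chroma can be quadrature-modulated onto a subcarrier placed above the luma band instead of interleaved within it at 3.58 MHz, so a simple filter split at the decoder recovers each component cleanly:

```python
# Illustrative closed-system encoder sample: luma occupies baseband, chroma
# is quadrature-modulated on a subcarrier moved clear of the luma band.
# The frequencies below are assumptions for the sketch, not a real standard.

import math

F_SAMPLE = 27e6  # illustrative encoder sample rate
F_SC = 9e6       # chroma subcarrier relocated above a ~6 MHz luma band

def encode_sample(n: int, luma: float, u: float, v: float) -> float:
    """One output sample: luma plus (u, v) chroma on the relocated carrier."""
    phase = 2 * math.pi * F_SC * n / F_SAMPLE
    return luma + u * math.sin(phase) + v * math.cos(phase)
```

Because luma and chroma no longer share spectrum, the decoder needs no comb filter at all for such a signal – a band split suffices.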

Of course, the analog video decoder has to ‘understand’ any changes we make to the transmitted signal, but if we implement that too in an FPGA, then we can again easily make the necessary changes.

Combing through the analog video decoder
Apart from providing compatibility with the encoder changes mentioned above, is there any other incentive to move a well-proven IC function to an FPGA? You will not be surprised to find the answer to that is a resounding "Yes!"

The most obvious result of viewing analog video sources on a large display is that any artifacts are, of course, larger and visually more apparent. For larger displays, the analog video decoder actually has a more stringent requirement. This problem is compounded because the flat screen displays require additional processing of the analog source before it can be properly displayed, namely de-interlacing and scaling. The de-interlacer, in particular, can amplify any artifacts left from the video decoder. This is because the de-interlacer is sensitive to motion in the image, and residual artifacts and noise left from the analog decoder cannot be discriminated from real motion in the image. The result is the de-interlacer may make the wrong mode decision resulting in additional artifacts.

A similar issue arises if the output of the video decoder is to be compressed, since all MPEG compression methods effectively send only the motion of an image. Unable to discriminate between artifacts, video source noise, and 'real' image motion, up to 20% of satellite and cable digital broadcast bandwidth can be shown to be spent sending unnecessary information. This wasted bandwidth is especially costly given the high compression ratios used by today's broadcasters, and is the difference between the viewer seeing highly visible MPEG artifacts – such as blocking – or not.


One large improvement to the video decoder that has been made by some manufacturers is to add a 3D comb filter[2]. Even on the most complex images, near-perfect, artifact-free decoding is the result. However, the memory requirement for this is large enough to require an external device, the cost of which usually precludes this desirable feature from being implemented.

A functional analog video decoder may occupy only 7000 logic elements of an FPGA, again allowing its use in the most cost-effective of devices. Indeed, quite small devices can fit multiple instances of the decoder for functions such as the quad-screen displays of security systems. The smaller FPGAs do, however, still need an external memory device for the 3D comb filter. Fortunately, memory is usually already present in the system, whether for the MPEG compression or the de-interlacing. It is therefore possible for us to implement a custom memory interface to share our small 3D comb memory requirement with the main memory. Such an interface can be made to multiplex with the existing video interface, typically BT.656, so that I/O requirements are kept to a minimum. The FPGA therefore allows us to implement a 3D comb, giving much improved and almost artifact-free images, for virtually no cost.

Since "a picture is worth a thousand words," as the old saying goes, I give you Figure 1. This is a screen capture of a zone plate test pattern. This rather esoteric image may be more familiar to you if you think of fine detail on your display – perhaps a newsreader's fine check shirt, which they seem to have a penchant for wearing. A conventional video decoder is unable to discriminate between the fine luma detail and chroma; the result is shimmering colors where there should be none (see the left-hand side of the image). A well-designed 3D comb filter can resolve these issues, thereby providing a clean output for better display and compression (see the right-hand side of the image).


Figure 1. Comb comparison (left = line comb, right = frame comb)



aCVi – An HD analog interface
There are more advantages to being old than just being able to get away with wearing socks with your sandals. Another is an awareness of things that were tried before but, for one reason or another, never made it into mainstream use. One such venture was the Japanese MUSE system for transmitting analog high-definition television.

A company told us of an increasing need to be able to transmit HD video over long cable runs; e.g., for security camera installations. In many cases, the cable installation is pre-existing and uses low cost RG-59 cable, but this cable is limited to low frequency use (<200MHz); at 1GHz its attenuation is 28dB/100m.
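To put that attenuation figure in context (my own estimate, using the common approximation that skin-effect-dominated coaxial loss scales roughly with the square root of frequency – real cables also have dielectric loss that grows linearly with frequency):

```python
# Rough sketch: estimate RG-59 loss at other frequencies from the quoted
# 28 dB/100 m at 1 GHz, assuming sqrt(f) skin-effect scaling. This is an
# approximation for illustration, not a datasheet figure.

import math

def rg59_loss_db_per_100m(f_hz: float, ref_db: float = 28.0,
                          ref_hz: float = 1e9) -> float:
    """Approximate cable loss in dB per 100 m at frequency f_hz."""
    return ref_db * math.sqrt(f_hz / ref_hz)

# Estimated loss over a 300 m run at a ~30 MHz analog baseband:
loss_300m_db = 3 * rg59_loss_db_per_100m(30e6)   # roughly 15 dB
```

The point is clear: a signal confined to a few tens of MHz of baseband sees only a modest, equalizable loss over hundreds of meters, where a 1.485 Gbps serial stream is simply buried.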

Existing methods to transmit HD video include separate analog RGB/YPbPr, which requires three coaxial cables, or HD-SDI, a serial digital transmission method which runs at a bit rate of 1.485Gbps and can only achieve short distances over such cable.

Considering this problem, I remembered MUSE (and also the European EUREKA project), which encoded HD video into a single signal. These systems use time-multiplexing, but instead I thought of the Sony Effio approach and we came up with aCVi (Advanced Composite Video Interface), which uses a modified form of the well-known NTSC analog composite video standard to create the signal for transmission [3].

Distances of greater than 300m are achievable, and in excess of 500m at 720p/60Hz with some small signal degradation. As with most analog transmission methods, the degradation is 'graceful', with none of the sudden signal cut-off encountered with digital methods.

The transmitter consists of a digital IP (intellectual property) core that is small enough to be accommodated on a small FPGA, along with standard analog parts available from numerous vendors. The output stage/encoder is also compatible with NTSC/PAL base-band transmissions.

The receiver consists again of standard analog components and a small IP core that fits in a small FPGA or can be incorporated into a larger device that many camera manufacturers already use for customising their product.

As the system is ‘closed,’ we can also transmit data (in both directions) and audio during the vertical blanking interval.


Low cost analog interfacing
FPGAs do not offer analog components, certainly not at video frequencies, so we always have to consider the additional cost of these parts in any total solution. However, analog costs may be kept down by making small modifications to the FPGA IP cores.

In the case of the aCVi digital-to-analog conversion, for example, we require a DAC (digital-to-analog converter) with 10-bit resolution and a sample rate of 150MHz. Such devices exist, such as the Analog Devices AD9705, but it will set us back $4.25 even at moderate volumes. However, our output is composite video, effectively composed of three components – the luma, the chroma, and the synchronizing signals. If we output these components separately to three DACs and sum the DAC outputs in the analog domain, we only need 8-bit resolution (ideally 9 bits) to achieve the same performance. A clear benefit is that a triple 8-bit DAC – such as the Cadeka CDK3405 – is just $1.50. This approach is shown in Figure 2.
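A simple behavioral model of the summing node makes the scheme concrete (my own sketch with assumed component weights, not the article's actual circuit values):

```python
# Illustrative model of forming composite video by weighting and summing
# three 8-bit DAC outputs at a resistive summing node. The weights below
# (sync 0.3, luma 0.7, chroma +/-0.15 about zero) are assumptions for the
# sketch, not measured values from the real design.

FULL_SCALE_V = 1.0  # illustrative composite amplitude (1 Vpp)

def dac8(code: int, weight: float) -> float:
    """Ideal 8-bit DAC, scaled so full code contributes weight * full scale."""
    assert 0 <= code <= 255
    return (code / 255) * weight * FULL_SCALE_V

def composite_out(sync_code: int, luma_code: int, chroma_code: int) -> float:
    """Analog sum of the three component DACs into one composite sample."""
    return (dac8(sync_code, 0.3)
            + dac8(luma_code, 0.7)
            + dac8(chroma_code, 0.3) - 0.15)  # centre chroma about 0 V
```

Because each converter only has to cover its own component's range, each 8-bit step is weighted down before summation – which is why three cheap 8-bit parts can stand in for one fast 10-bit part.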


Figure 2. Using a triple 8-bit DAC is only 35%
of the price of a single 10-bit DAC.



If a high speed ADC is required, then a similar approach can be taken. Again, for aCVi we needed a 150MHz ADC at 10 bits. For our development, we used an ADI part costing $40, but to avoid being thrown out of customer meetings at the first question from purchasing, we needed to bring that cost down substantially. Studying the offerings of the major manufacturers, we saw that ADCs increased substantially in cost above 100Msps. In fact, a dual 100MHz ADC could be had for much lower cost than a single 150MHz part. So we considered the possibilities of interleaving the ADCs, clocking them 180 degrees out of phase to create a single twice-speed ADC. Given that gains and offsets have to be closely matched, we chose a dual IC thinking that if the devices were on the same die we should achieve better matching. Such an approach is shown in Figure 3 using the ADI AD9216, which is available for $10, just 25% of our original chosen device.
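The merge itself is trivial on the FPGA side, as this sketch (my own, in software for clarity) shows – the hard part is the analog matching, which is exactly why a dual-ADC die was chosen:

```python
# Sketch of time-interleaving two ADCs clocked 180 degrees apart: converter A
# takes the even samples, converter B the odd ones, and the FPGA merges the
# two half-rate streams into one stream at twice the rate. Any gain or offset
# mismatch between the converters appears as a spur at half the combined
# sample rate - hence the preference for two ADCs on one die.

def interleave(adc_a: list[int], adc_b: list[int]) -> list[int]:
    """Merge two half-rate streams (A = even samples, B = odd) into one."""
    assert len(adc_a) == len(adc_b)
    merged = []
    for a, b in zip(adc_a, adc_b):
        merged.extend((a, b))
    return merged

samples = interleave([10, 30, 50], [20, 40, 60])
```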


Figure 3. Interleaving ADCs to achieve
lower-cost FPGA interfacing.


The message to take from this is that using FPGAs to implement even standard functions can be cost effective, and gives us a freedom we don't have when using off-the-shelf components.


An FPGA-based analog decoder requires an ADC as well, of course. But such devices already exist, from ADI (AD9981) or Intersil, both offering SD and HD component video capture. It is possible to utilize one channel of these 3-channel devices for an analog video source. Indeed, it is possible to utilize all three ADCs, even though they share a common clock, by using a sample rate converter in the video decoder. The FPGA decoder can be modified to accept a fixed-rate clock, allowing the digitization of three video sources even if the sources are not synchronous. A low-cost triple ADC can then be used for multiple video source decoding, such as in security digital video recording.
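The sample-rate-converter idea can be sketched as fractional-index interpolation between the fixed ADC clock and the decoder's line-locked rate (a minimal sketch of my own; a real FPGA design would use a polyphase filter rather than plain linear interpolation):

```python
# Hedged sketch of sample rate conversion: resample a stream captured at a
# fixed ADC clock to a different rate by fractional-index linear
# interpolation. Illustrative only - production designs use polyphase FIRs.

def resample_linear(samples: list[float], ratio: float) -> list[float]:
    """Resample by `ratio` = output_rate / input_rate (either < 1 or > 1)."""
    out = []
    pos = 0.0
    step = 1.0 / ratio
    while pos <= len(samples) - 1:
        i = int(pos)
        frac = pos - i
        nxt = samples[min(i + 1, len(samples) - 1)]
        out.append(samples[i] * (1 - frac) + frac * nxt)
        pos += step
    return out

halved = resample_linear([0.0, 1.0, 2.0, 3.0], 0.5)   # decimate by 2
doubled = resample_linear([0.0, 1.0, 2.0, 3.0], 2.0)  # interpolate by 2
```

One such converter per channel lets three free-running video sources all be captured through a single common-clocked triple ADC.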

There’s life in the old dog yet
Analog video transmission embodies the work of many ingenious engineers over many years. Whilst in many devices there remains only a legacy requirement to provide this feature, I hope this article has shown that, rather than being an irritant, analog video can be given a new lease of life by one of the newest of devices – the low-cost FPGA.

References

  1. PT8 Video Encoder IP core
  2. Revisiting the analog video decoder
  3. aCVi white paper
  4. Analog Devices
  5. Cadeka

About the author
Daniel Ogilvie is Technical Director of SingMai Electronics and has been involved with electronics, both at an amateur and professional level, since he was sixteen. Daniel has worked for both large and small companies as well as owning and running his own UK based company for eleven years. He has worked for companies in such diverse fields as university physics research support, high-end broadcast video, DVD recorder front end semiconductors, video decoder IC design and high volume consumer electronics in countries as diverse as Canada, USA, UK, Thailand and Singapore.

Products that Daniel has been involved in include forensic glass refractive index measurement equipment, (occasionally featured on the US TV program CSI), very low-light photon counting video processors, broadcast quality FPGA based video decoders, very high resolution real-time video processors and IC design of video input processors. Daniel is a senior member of the IEEE.

SingMai Electronics is a designer and manufacturer of intellectual property cores and hardware for video, imaging, and broadcast applications. SingMai is the originator of the aCVi interface. Based in Thailand and with sub-contractors in Singapore, SingMai has years of experience in the video market, from high end broadcast television to low cost consumer electronics.
