Digital Cooking Series


We are grateful to Bertrand Jenner, Technical Director, and Aaron Lange, Sales Director, both of Lightware Visual Engineering Asia, for helping us dissect the various formats and technologies. We highly recommend reading this column, as for the most part it is generic in nature and definitely helpful.

You may get in touch with them, should you want to.

1. Handling Audio and Video Sources for a Stable, Well-Designed System: Our Digital Cooking series kicks off by showing you how to handle the different video and audio sources in order to create a stable, well-designed system.

I love cooking. At home, I adore taking all the different sorts of ingredients and melding them together into aromatic harmony, and ultimately delivering it to my friends and family.

I’m also fortunate enough to have a job that is a lot like cooking, but instead of ingredients, we have video and audio sources, and instead of feeding mouths, we feed display devices, speakers, etc.

With the arrival of multiple digital signal types (HDMI, DVI, DisplayPort, 3G-SDI) onto the AV scene, we now have a wide variety of “ingredients” to work with, which makes designing stable, reliable solutions quite a challenge.

Now, it’s more important than ever to carefully handle and manage our signals, in order to create a stable, well-designed system.

There is a visible trend in our industry towards large centralized architectures.

By centralizing all source signals in one location and branching throughout the installation from this nucleus, we are able to build in scalability while also increasing the serviceability of the system, as our equipment is located in one place.

An overly simple example is the situation where you have multiple rooms, each with their own local sources (DVD Player, PCs, etc.), a small matrix switcher, peripheral converters, and possibly a scaler.

In this example, each room is its own separate entity. If a case arises where you wish to share the video content from one room out to other rooms, it’s simply not possible. In addition, if there are issues with the local sources, it requires technicians to visit each room to diagnose and fix them.

A centralized system makes much more sense.

By having everything in one place, it’s possible to share content across multiple rooms and when issues arise with equipment, technicians stay in their domain, rather than travel out to remote locations, thus increasing efficiency.

It’s like having an open kitchen with a central island – everything you need is within a hand’s reach.

At the heart of the centralized architecture is the matrix switcher.

As these types of architectures tend to be rather large, it takes bigger and bigger matrices to accomplish all of the matrix switching demands.

Currently there are two types of matrices in the larger domain: protocol-agnostic fiber matrix switchers and hybrid matrix switchers. Protocol-agnostic fiber matrix switchers are protocol-free, switching any signal within their allowed bandwidth.

In effect, they switch light, not any particular signal. Hybrid matrix switchers incorporate inputs and outputs on both native local connections (HDMI, DVI, etc.) along with twisted pair (CAT5/6/7) and fiber for remote extension.

So, which platform is more adept in this world of ever-changing signal types?

I would argue that the hybrid platform is much more suitable in most cases. Let’s contrast the features of each platform to get an idea.

Size:
The protocol-agnostic fiber matrix switcher is capable of switching a larger number of inputs and outputs.

Currently, this type of matrix switcher can achieve sizes of over 1,000 x 1,000. Thanks to the telecom industry, chip manufacturers developed high-density, fast switching silicon, enabling production of these large-scale switches.

Hybrid matrix switchers currently reach substantial sizes of up to 160 x 160; however, they lose the battle of overall size.

Error Handling:
For the same reason that protocol-agnostic fiber matrix switchers can achieve large sizes, they are completely indifferent to errors and protocol shifts.

The chips in these matrix switchers are meant to switch generic data, so they cannot differentiate between different types of signal content.

As a result, they cannot perform any meaningful reclocking such as TMDS (Transition-Minimized Differential Signaling) reclocking, which means that errors caused by signals travelling long distances are not corrected and get passed right through to the end device.

For higher resolutions and also noisy signals such as ones from Video Conferencing codecs, the resulting errors could cause picture dropout or audio popping. Hybrid matrix switchers excel in this category.

Since the chips used in the transmitters, matrices, and receivers are all meant for video transmission, basic TMDS reclocking can do a simple clean-up of the signal. Even the more advanced Pixel Accurate Reclocking can be implemented, where the signal is decoded to 24- or 36-bit video and then reconstructed with multiple PLL filters before being passed on to the end point. Hybrid matrix switchers dominate in this category for these reasons.

EDID Management:
Critical to all digital signal matrix switching, EDID management is an extremely important and often misunderstood feature.

Both protocol-agnostic and hybrid matrix switchers contain this feature. How and where the EDID is managed determines who wins this category. In protocol-agnostic matrix switchers, EDID is normally managed in the transmitters and cannot be set from the matrix.

In most cases, EDID management is also limited to either Emulation mode (end point EDID info is stored locally on the transmitter and emulated to the source) or Pass-Through mode (EDID is directly communicated from the endpoint to the source). Hybrid matrix switchers offer more power and flexibility when it comes to EDID management.

EDID can be managed from the matrix and sent out to the individual transmitters, or it can be managed from the transmitters themselves. Some hybrid matrix switchers even offer the ability to create EDIDs and send them out to the transmitters.

This is quite powerful, as certain unwanted resolutions can be removed from the EDID file, thus keeping the sources from outputting those resolutions (which translates to stability). Due to this flexibility and additional functionality, hybrid matrix switchers are again preferred.

Future-Proofing:
It would seem that protocol-agnostic matrix switchers would be quite future-proof, as there is only the need to swap out the transmitters and receivers in order to accommodate new signal types.

Unfortunately, these matrix switchers are normally built for quite a low bandwidth, typically between 4 and 7 Gbps per link. Newer digital formats, however, require higher and higher bandwidth: DisplayPort 1.2 requires a bandwidth of 21.6 Gbps, and HDMI 2.0 will require just over 18 Gbps.

Hybrid matrix switchers, on the other hand, can now attain very high bandwidths up to 25 Gbps (Lightware Visual Engineering). Surely you can see who gets the nod in this category.
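To get a feel for why those Gbps figures matter, a quick back-of-the-envelope calculation helps. This is an illustrative Python sketch using the standard pixel clocks for 1080p60 (148.5 MHz) and 4K60 under HDMI 2.0 (594 MHz); it is not any vendor's sizing tool.

```python
# HDMI/DVI carry three TMDS data channels; every pixel clock tick moves
# 10 bits per channel, so the aggregate rate = pixel clock x 3 x 10.

def tmds_bandwidth_gbps(pixel_clock_mhz: float) -> float:
    """Aggregate TMDS data rate in Gbps for a given pixel clock."""
    return pixel_clock_mhz * 3 * 10 / 1000.0

print(tmds_bandwidth_gbps(148.5))  # 1080p60: ~4.5 Gbps, fits a 4-7 Gbps switch
print(tmds_bandwidth_gbps(594.0))  # 4K60 (HDMI 2.0): ~17.8 Gbps, far beyond it
```

Run the two cases and the gap is obvious: yesterday's signals squeeze through a 4-7 Gbps optical path, but the new formats do not.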

Local Inputs and Outputs:
With matrix switchers now being centralized along with other source equipment, it's important to have the flexibility of entering the matrix directly from the sources, or of going from the matrix directly into other processing equipment that is also in this central equipment area.

Protocol-agnostic matrix switchers do not have the ability to accept local inputs or provide local outputs, but rather must enter the matrix via expensive fiber transmitters, which require fiber receivers on the output side.

So, to interact with local equipment is both expensive and cumbersome, as transmitters and receivers require both space and power supplies. Hybrid matrix switchers are built with this kind of interactivity in mind, as they can accept local inputs and outputs directly.

Moreover, some manufacturers are even offering additional features on their local input and output boards, such as audio embedding and de-embedding, along with audio downmixing from multi-channel to stereo, for example.

An additional bonus of having local input and output cards is the ability to provide extra processing functions inside the matrix, such as color space conversion, audio sampling rate conversion, and video range conversion.

As its very name implies, the hybrid matrix switcher provides much more flexibility and additional functionality to an installation.

Cost:
This is a tricky one, actually.

It’s much simpler to calculate the cost of the protocol-agnostic fiber matrices, as they only have one type of media – fiber. Hybrid matrices, however, have an array of boards to choose from which differ substantially in cost.

A standard DVI-D input board for instance, costs much less than a fiber input board. In real world cases, it is very rare to have all source equipment and all end points remote from the matrix (requiring extension).

For this reason, it is nearly impossible to generalize and compare pricing for the two types of central matrix switchers.

But as a generalization, you could say that as your local sources and local end points (video walls, displays, etc.) increase in number, the hybrid matrix platform will decrease in cost relative to an all-fiber platform.

From the above analysis, it is clearly preferable to go with hybrid matrix switchers when deciding on the type of matrix to use in future digital switching applications.

These types of matrix switchers are built from the ground up with actual video and audio protocols in mind, so they are much more suitable for dealing with all of the variables that come into play within the centralized switching arena.

In the kitchen, I enjoy the flexibility of having different tools for different jobs, as it offers increased efficiency and precision.

Why use a paring knife to chop chicken or a cleaver to tournée a potato?

2. Proper Recipe for EDID: We continue the cooking with a special ingredient, EDID, while highlighting the differences between Hybrid Digital Matrix Switchers and Signal-Agnostic Fiber Matrix Switchers.

Figure 1

In this article, we will look at EDID from a cooking perspective in order to give you a “recipe” for applying it to your daily lives.

Our “ingredient”: EDID
So, let’s start with the basics of EDID to give us a foundation with which to build our “recipe”.

VESA created EDID back in 1994, not knowing the impact it would have on the video world even 20 years later! EDID, which stands for Extended Display Identification Data, is often referred to as the passport of a digital display, as it is a data structure which describes the display's details and capabilities.

For traditional VGA and DVI displays the EDID contains 128 bytes of information such as manufacturer, model number, serial number, product type, phosphor or filter type, native resolution, supported resolutions and timing, display size, bezel size, etc.

With the addition of HDMI in 2002 came wide use of EDID version 1.3, with an additional extension block containing another 128 bytes of information about video timing, audio capabilities, speaker allocation, and colour information (Deep Colour, xvYCC, etc.).

This “passport” file resides in one or more read-only chips within each digital display, stored in hex format, so unless you are intimate with the file structure, it will look like alphabet soup (hey, this is Digital Cooking, right?) when opened with a text editor.

Of course, there are many software programs which can read and help you make sense of this file (See Figure 1 for an example).
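To demystify the hex a little, here is a minimal sketch (nowhere near a full parser) of how such software validates and reads the base block. The offsets follow the standard VESA EDID layout; the function name is our own.

```python
# A minimal EDID sanity check, assuming you have the raw 128-byte
# base block (e.g. dumped from a display with an EDID reader tool).

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def parse_edid_base(edid: bytes) -> dict:
    """Validate an EDID base block and pull out a couple of fields."""
    if len(edid) < 128 or edid[:8] != EDID_HEADER:
        raise ValueError("not a valid EDID base block")
    # Every 128-byte EDID block must sum to 0 modulo 256
    # (the last byte is a checksum chosen to make this true).
    if sum(edid[:128]) % 256 != 0:
        raise ValueError("EDID checksum mismatch")
    # Bytes 8-9 pack the manufacturer ID: three 5-bit letters, big-endian.
    word = (edid[8] << 8) | edid[9]
    letters = "".join(
        chr(((word >> shift) & 0x1F) + ord("A") - 1) for shift in (10, 5, 0)
    )
    return {"manufacturer": letters, "edid_version": (edid[18], edid[19])}
```

If the checksum fails, something between the display and your capture corrupted the data, which is exactly the kind of problem the rest of this article is about.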

OK, but how does it work?

Before the DVI era, most of the EDID fields were merely informative, but with the DVI mechanism they became essential, because bad values or misinterpretation would result in a ‘No Picture’.

For DVI and HDMI, the mechanism is quite straightforward.

The source tells the display it has just been connected by applying +5V to the +5V power line of the DVI or HDMI connector. Then the display acknowledges by sending back +5V on what we call a Hot Plug line.

At that moment the source queries the display about its EDID through the I2C bus (DDC line).

After analyzing the capabilities of the display, the source is going to formulate the video signal timing and send the picture to the display (See Figure 2 for a snapshot of this process).

Figure 2
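For the programmers in the kitchen, the plug sequence above can be sketched as a toy model. The class and method names here are entirely hypothetical; real hardware does all this electrically over the +5V, Hot Plug, and DDC lines.

```python
# Toy model of the DVI/HDMI hot-plug handshake described above.

class Display:
    def __init__(self, edid: dict):
        self.edid = edid

    def on_5v(self) -> bool:
        # The source applies +5V; the display answers by raising Hot Plug.
        return True

    def read_edid(self) -> dict:
        # The source queries this over the DDC (I2C) bus.
        return self.edid

class Source:
    def connect(self, display: Display):
        if not display.on_5v():
            return None  # no Hot Plug means no picture
        edid = display.read_edid()
        # A real source would now formulate a video timing the EDID allows.
        return f"sending video formatted for {edid['preferred']}"

print(Source().connect(Display({"preferred": "1920x1080@60"})))
```

The point of the sketch: no Hot Plug, no EDID read, no picture, which is why everything downstream in this article revolves around who answers that query.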

Now the recipe gets complicated
If all we ever did was plug a source into a display directly, the story would be over and we wouldn’t be writing this article, because there would be few problems with EDID implementation.

However, too many chefs entered the kitchen when we started placing other equipment between the source and the display (or Sink, as it has been termed).

By chefs, we mean manufacturers; you see, each manufacturer has their own implementation of EDID, and the poor source does not know who to listen to.

In the case where you wish to use an HDMI splitter to route a source to two locations, does the source listen to the EDID coming from the 1080P LCD in location one, or the EDID coming from the 720P projector in location two?

Thus, we can use a colloquialism to explain the situation: “too many cooks spoil the broth”.

In cases where we have multiple displays, the signal issued by each of the sources has to fit all of the displays, in order for it to be a successful installation.

But this task is not as easy as it may seem. If you think that matching the native resolution, aspect ratio, and interface of all the displays will solve the problem, think again! It's like saying a bowl of noodles will taste the same every time so long as it has the same noodles; it's what's in the broth that matters.

Even worse is trying to mix these fancy 2560×1600 resolution monitors with Full HD 1920×1080.

This, my friends, is a recipe for disaster.

Everyone has their own EDID Management recipe these days
Do you remember the time when all the HDMI product manufacturers were blindly stating that they were HDMI 1.3 compliant?

That wasn’t too long ago.

What we found out, however, is that many of them were only compliant with one or a few of the specifications of HDMI 1.3 – not all of them.

This led to many problems that would not have happened if everyone had played by the same rules.

Unfortunately, the same situation is now upon us with regards to EDID management. Most DVI and HDMI product manufacturers are stating that they have EDID Management, or even Advanced EDID Management, but we need to look into the details of their management to see how effective each one is. The difference varies greatly.

In the past, EDIDs were read directly from the display when each crosspoint was made.

Maybe you can remember experiencing black flickering while changing cross-points?

When using a matrix, almost everybody now understands that an EDID has to be presented to all the sources, regardless of whether the matrix is actively routing a source to a display.

This way, sources that are not routed to actual displays think they are connected to something because the matrix keeps the Hotplug line high and they can read an EDID from the matrix.

This is called EDID emulation.

In this way, we can ensure that our sources will consistently output a signal.

So how does one manufacturer’s EDID management recipe differ from another?

Well, the main difference lies in how each derives the optimal EDID that will eventually be presented to each source. One method is to read the EDID from a connected display and present it to the source, a setting commonly referred to as ‘External Mode’.

Memorizing and emulating an existing EDID that works for a source and a display is the common scenario in EDID management.

Unfortunately, if you can't analyze what you are sending to the source, this stage can be time-consuming and is not error-proof. It requires installers to take the largely unsuccessful ‘try and apply’ strategy of learning the EDID while testing the various combinations of sources and displays.

An emulated EDID taken from an HDMI display, for instance, may contain a lot of resolutions that can be scaled up and down to fit the native resolution of the display. But this is specific to the equipment inside that particular display, and may not be common with all the displays, even though they share the same native resolution.

For example, let’s say you want to transmit your content through a video conference codec while also displaying it locally. A lot of codecs, especially the Full HD ones, come from the video world and its broadcast standards.

One known issue is that 3G-SDI 1080P timings are quite different from consumer HDMI 1080P timings, so a signal that works on your local display might not be accepted by the codec.

Yet another strategy to attain the optimal EDID is to present a so-called Universal EDID to the source.

The Universal EDID is one which contains a very broad timing range and almost all existing computer and DTV resolutions, along with all possible audio formats. Some devices present this Universal EDID to the source when the device's ‘Internal EDID’ setting is selected.

Most likely, presenting the Universal EDID to a source will result in some picture and sound being output if your source is 100% compliant, but it often isn’t the best match for your equipment.

In a professional AV environment, using the Universal EDID can produce even worse results: guest laptops are free to select resolutions that match their internal displays if the resolution is listed, and this odd resolution might not be compatible with all the displays!

Common laptop resolutions are 16:10 aspect ratio (1280×800, 1440×900, 1680×1050), making them difficult for many video displays to accept.
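A quick sanity check on those figures, in plain Python (nothing vendor-specific here), shows why the mismatch is structural rather than incidental:

```python
# Reduce each resolution to its simplest ratio to expose the mismatch.
from math import gcd

def aspect(width: int, height: int) -> tuple:
    """Return the resolution reduced to its simplest width:height ratio."""
    g = gcd(width, height)
    return (width // g, height // g)

for w, h in [(1280, 800), (1440, 900), (1680, 1050)]:
    print(w, h, aspect(w, h))   # each reduces to (8, 5), i.e. 16:10

print(aspect(1920, 1080))       # (16, 9), the video-world norm
```

All three laptop modes reduce to 8:5 (16:10), while video displays live in a 16:9 world, so something always has to letterbox, crop, or fail.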

Now you can see that not every EDID Management recipe will produce the same “taste”.

The proper recipe
Given the right utensils, an average chef can quickly elevate the taste of their food and create dishes that were heretofore unattainable.

The same applies to EDID Management. Ask yourself this question: am I currently able to read all the display EDIDs, analyze them, mix them, and ultimately find the common working ranges by stripping out unwanted resolutions and adding the EDID features needed for audio? If your answer is yes, then you'll be able to route any signal to any end point (display, audio-video decoder, or other sink device) with the desired effect.
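At its core, that "read, analyze, mix, strip" step boils down to a set intersection. Here is a minimal sketch with made-up mode lists (not from any real installation) just to show the shape of the operation:

```python
# Hypothetical mode lists read from two sinks' EDIDs (illustrative data).
display_modes = {
    "lcd_1080p": {"1920x1080@60", "1280x720@60", "1024x768@60"},
    "projector": {"1280x720@60", "1024x768@60", "800x600@60"},
}
unwanted = {"800x600@60"}   # modes you never want a source to pick

# Intersect all sinks' capabilities, then strip the unwanted modes;
# what remains is the common working set for the emulated EDID.
common = set.intersection(*display_modes.values()) - unwanted
print(sorted(common))       # ['1024x768@60', '1280x720@60']
```

The real job is harder, of course, since timings, audio blocks, and colour data all need the same treatment, but the principle is exactly this.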

One thing is for sure: this can't be achieved every time with fixed, memorized, or cached EDIDs.

Bullet-proof EDIDs are either built from scratch or manually adjusted by the engineer while commissioning the installation.

This task requires foresight and planning to be properly implemented.

Just as in a kitchen where you must be the master of the ingredients, in AV installations, you must acquire the knowledge and experience to be the master of EDID and be able to concoct your own unique EDID recipe for proper EDID Management of each installation.

3. Category Cable Soup: Have a taste of our category cable soup and learn more about the different CAT cable types, supported by real-life examples.

Several years ago the AV industry took a huge turn and adopted cheap twisted pair as the preferred medium for transporting video and audio. With a low cost of ownership and extreme versatility, it caught on like wildfire.

In this installment, we will point out some tasty tidbits of history along with a few good present-day pointers regarding category cable (twisted pair).

With the increasing number of versions for category cable (5, 5e, 6, 6A, 7, etc.), it can be quite confusing to understand, but we hope to clear up the confusion about what we call category cable soup.

We’ve seen a multitude of extenders hit the market, from passive baluns to transport composite video, to skew-adjustable VGA (RGBHV) extenders with impressive technology.

Cable manufacturers even developed dedicated twisted pair cables that were not Cat5 certified, in order to achieve low-skew performance (skew is mainly generated by the different twist rates of the orange/green and blue/brown pairs). These super-low-skew cables used the same twist rate on all four pairs, resulting in a higher crosstalk level.

High crosstalk levels are not good for network transmission, but for dedicated video transmission, the lower skew improved the performance and lowered the overall cost.


Why? Because it solved the major issue of having to compensate for skew (which required expensive chips in the receiver), leaving only the smaller issue of filtering out the crosstalk (which could be done with a low-cost chip).

Then these same category cables went back to their origins with the arrival of the streamers: video encoders which were used for transporting Composite, S-Video, YUV and even RGBHV signals over Ethernet (and TCP/IP).

The only pitfall was that these long cable runs, which were extending video beyond 100m, could not be used for network streaming, as it exceeded the maximum 100m distance for Fast Ethernet links.

The entrance of HDMI & DVI onto the AV stage changed the whole picture.

It made so much sense that if you want to go digital, then go digital on the same cables.

But this A to D transition is not so easy, especially if you’re using legacy Cat cables which were traditionally UTP (Unshielded Twisted Pair).

Keep in mind that Cat5e was designed to transport a 100 MHz signal; its big brother, Cat6, needed to be 250 MHz compliant. Now take in the fact that these figures are quite far from the full rate of single-link DVI and its insane data rate of 1.65 Gbps per channel!
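To put those numbers side by side, here is an illustrative calculation (not a certification method; strictly speaking it mixes an analog bandwidth in MHz with a digital bit rate in Mbps, but it conveys the scale of the gap):

```python
# Certified cable bandwidth vs. per-channel TMDS bit rate.

cat5e_mhz, cat6_mhz = 100, 250        # certification frequencies
dvi_max_pixel_clock_mhz = 165         # single-link DVI ceiling

# TMDS moves 10 bits per channel per pixel clock tick.
per_channel_mbps = dvi_max_pixel_clock_mhz * 10
print(per_channel_mbps)               # 1650 -> 1.65 Gbps per pair
print(per_channel_mbps / cat6_mhz)    # 6.6x the Cat6 certified range
```

Even the better-specified Cat6 cable is being asked to carry several times the frequency range it was certified for, which is exactly why the cable's construction matters so much.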

The early HDMI/DVI solutions which cropped up used the 4 twisted pairs in the cable to pass the Red, Green, and Blue channels and the Clock used in HDMI and DVI transmission.

But anybody who has ever dissected an HDMI or DVI cable (even a cheap one) knows that its 4 twisted pairs use thicker copper, and that each pair is individually shielded.

Why would each twisted pair need to be shielded?

The nature of TMDS (Transition-Minimized Differential Signaling), which is the transmitting technology of DVI/HDMI, is not only sensitive to EMI (Electromagnetic Interference) and external noise, but it generates EMI! You see, each channel holds a high-frequency signal, and that same signal interferes with its neighbour.

By shielding each pair inside a real HDMI or DVI cable, the cable manufacturer minimizes inter-pair crosstalk and inter-pair noise (see Figure 1).

The same recipe (remember, this is Digital Cooking) is used with the overall foil that prevents emissions from the inside, and noise from the outside.

The second hidden trick of DVI and HDMI cable manufacturers is that they use the same twist rate over all 4 pairs. Simple, right?

Now you’re thinking: but what about my HDMI Cat extender; will it work over the existing cable that was used for Analog RGBHV transmission?

The answer is a big MAYBE.

Many factors come into play in order to attain a definitive yes or no. DVI/HDMI transmission is much more finicky than VGA/RGBHV for the main reason of bandwidth concerns.

Think of the massive difference between WXGA bandwidth requirements (88.5 MHz) versus HDMI 1080P/60 Hz (995 MHz). If there is not a strong link from the digital sender to the receiver, the chances are high that there will be transmission issues.

So, just as in Cat cable certification, you need to be sure that every link in between these two points is verified; that means the connectors, the patch cables, the horizontal cable all need to be from the same Cat cable class, as well as be able to perform at a given frequency (certification at 100 MHz for Cat5e, 250 MHz for Cat6, etc.).

So any break in the link might be a point of failure: going through patch panels, from female plug to female plug, and then through patch cables not made from the same cable, each of which has different physical performance.

The main recommendation across all manufacturers is to use solid-core copper cable (the same as in the IT world, where only 10% of the link may be made of patch cables, i.e., stranded copper cable).

So why do some extenders perform better than others?

The price and the technology inside make the difference.

If they have cable drivers that compensate for the length of the cable with better EQ capabilities, they'll go further. If they have higher-end reclocking technology, they will perform better, but there's always a limit.

So basically, you try to compensate for the weakness of the cable link with better electronics.

But there’s one major rule, and most manufacturers have been silent on it: the shielding of the cable.

It might be that they don't want to discourage customers from purchasing their HDMI/DVI twisted pair products just because of the high cost of re-cabling.

Wouldn’t it be great if you could digitize your entire installation by running over the same so-called ‘universal’ multimedia cable you’ve installed all around the place that could transport Ethernet or Video and Audio?

Alas, while the natural noise rejection of twisted pair could get rid of hum on the line when transporting analogue audio or video, the same effect cannot be achieved when shifting over to digital; going digital introduces a whole new set of problems, because digital signals run at more than 10 times the frequency!

So, unless the existing cable is suitable for digital transmission, chances are that issues will arise if it is used for HDMI or DVI.

Selecting a higher Cat cable class would help for sure, because higher Cat class means higher frequency. To achieve these higher frequencies offered by Cat6A and Cat7, the cable is built with per-pair foil and good overall braid.

Foil is effective at preventing noise in the 1 GHz range (where HDMI and DVI reside); the braided shield would then protect your signal from lower-frequency external noise (see Figure 2).

Both of these protective elements are necessary to ensure minimal errors during signal transmission. Ten years ago, when Cat5e S/FTP was overpriced compared to UTP, not many thought to spend a little more to future-proof their installations for the digital era.

CATx cable structures

One new technology from Valens Semiconductor called HDBaseT has recently caused a little revolution in our digital AV world.

With its secret recipe for transporting the DVI/HDMI signals on these same four twisted pairs (it is close to the 10G Ethernet technology according to some leaks), Valens Semiconductor changed the landscape.

These new HDBaseT extenders are not based on media conversion and TMDS transmission, but rather on a different approach. That’s why they can reach 100 meters and sometimes even up to 180 meters with Cat7 cable.

And the cherry on the digital cake: HDBaseT transports not only HDMI signals, but also Ethernet, RS-232, IR, and USB HID (a hidden feature, or at least one not implemented often).

In all of this soupy category cable confusion, so long as you keep a few things in mind, you can stay out of trouble when going digital.

Unshielded Twisted Pair (UTP) and even Shielded Twisted Pair (STP) are not ideal category cables for digital transmission; S/FTP is always preferred, as it lessens inter-pair noise.

Try to reduce the number of links from source to display, and if you do have to add links (patch panel, etc.), always use the same type of cable as the rest of the run.

Finally, when you can go fiber, never hesitate, as it is the ideal transmission medium for digital video.

4. Get More Fiber in your Diet: For this edition we've added the delicious ‘Fiber Optics’ dish to our special diet. This article tells us more about the difference between copper and fiber, the single-mode and multimode cable types, and the different DVI and HDMI extenders available on the market.

Singlemode and Multimode

You may have caught our last Digital Cooking article where we aimed to spell out the ABC’s of using Twisted Pair (Cat5/Cat6/Cat7), along with the journey this convenient cable has “travelled” over the years from IT networks to Analog Video transmission, and now into the Digital Video realm for HDMI, DVI, and even new signals like IR, RS-232, and Ethernet.

This time around, we stay in the cable soup, but divert to a purer and leaner variety, which is ideal for passing the newer signals of DVI, HDMI, and DisplayPort: FIBER.

Elderly people are commonly encouraged to take in substantial amounts of fiber in their diet to keep digestion flowing smoothly; we in the AV field should consider adding more fiber to our diet as well, to alleviate all the headaches that can be caused by twisted pair, and to provide a smooth-flowing path for current digital AV as well as the upcoming HDMI 2.0 and DisplayPort 1.2.

For many years, fiber optics have been used in the telecom industry and were always looked upon as the crème de la crème of transport media, almost reserved for cross-continent communications and people playing with light in space suits.

But did you know that fiber optics are used in cars?

Yes, cars, because plastic fiber doesn't rust and also has good mechanical properties.

Glass fiber is reserved for long-distance communications, but plastic fiber is quite common in the consumer world, the best example being TOSLINK digital audio, thanks to Toshiba.

The Copper vs. Fiber war (or race, I should say) has existed since the beginning and is not over.

Where fiber sets the pace in speed and distance, copper follows with better electronics and ‘not-so-cheap-anymore’ cable. Cost plays a big role in which medium is chosen for your application. Copper prices have been crazy since 2002, while fiber prices have dropped (for both plastic and silica substrates).

What fiber has gained lately is economy of termination: connectors and ways to terminate fiber have become universal and affordable.

Now, a lot of solutions developed by the industry rely on cleave-only termination; it's not that fusion splicing is gone, but it is limited to long-reach or contiguous lengths.

In many cases, you will pull only the fiber trunks or ‘cable’ and hire a field termination team, or you will just pull pre-terminated Fiber assemblies.

This copper vs. fiber war/race is ruled by telecoms and networks: CAT2 cables could transport up to 4 Mbps of data, then CAT3 up to 10 Mbps (Ethernet 10BaseT), CAT4 up to 16 Mbps (Token Ring), CAT5 up to 100 Mbps, and CAT5e up to 1000 Mbps.

You will notice that CAT6 and CAT7 have no relationship with Ethernet or any other network standard! Fiber, on the other hand, has been used to transport different protocols from 25 Mbps, then 100 Mbps, 155 Mbps, 622 Mbps, 1 Gbps, 10 Gbps, and lately even 100 Gbps.

And guess what? Copper 10Gbps already exists but doesn’t attain the same distances.

One very important thing to keep in mind when deciding with customers how to upgrade from analog to digital is the budget, and whether it is possible to switch over to fiber now rather than trying to reuse the existing cable infrastructure.

By now, you have probably already heard about the new specifications for HDMI 2.0 and DisplayPort 1.2. These new signal types are bandwidth-eating monsters, requiring just over 18 and 21.6 Gigabits per second respectively.

What an appetite!

This means that all of the current Twisted Pair cables in current applications simply cannot work with these new signal types. It’s better to just swap over to fiber now and save the headache.

If you are currently working with clients wishing to upgrade to digital, know that fiber optic cables are divided into two groups: Single-Mode Fiber and Multi-Mode Fiber.

Their bandwidth and applications are not the same, and like CAT cables, they have different TIA/EIA standards. OM1, OM2, OM3, and OM4 are used for Multi-Mode Fiber and OS1/ OS2 are used for Single-Mode Fiber.

As you can see from the above graphics, the Core of the Single-Mode is much narrower than Multi-Mode (although the cladding is the same).

OM1 is 62.5µm and OM2, OM3, OM4 are 50µm. Where they differ most is in bandwidth, which depends on the laser wavelength used, between 850nm and 1300nm (the color of the laser itself).
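For reference, the sizes above can be collected in a small lookup table. One caveat: the ~9 µm single-mode core and the 125 µm shared cladding are commonly quoted industry figures added here for comparison, not numbers from this article.

```python
# Fiber geometries quoted above; the ~9 µm single-mode core and the
# 125 µm cladding are commonly quoted figures added for comparison.
CLADDING_UM = 125.0  # identical for all the types below

core_um = {
    "OM1": 62.5,   # multi-mode
    "OM2": 50.0,   # multi-mode
    "OM3": 50.0,   # multi-mode
    "OM4": 50.0,   # multi-mode
    "OS1": 9.0,    # single-mode (approximate)
    "OS2": 9.0,    # single-mode (approximate)
}

# The much wider multi-mode core is what gives it its mechanical tolerance.
assert core_um["OM1"] > core_um["OS1"]
```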

But this is all about the physics; what you need to understand is the way the fibers are terminated: as the Multi-Mode core is thicker than the Single-Mode core, it can tolerate many splices or cleave cuts, whereas Single-Mode can only use a fusion splice with a ready-made, pricey pigtail (assembled and tested in a factory).

Splice Loss Mechanisms

So, Multi-Mode has more mechanical tolerance than Single-Mode Fiber because of the way the light beam is sent through the fiber and the way it is constructed.

A few micrometers off, and you don't have any light at the Single-Mode connection, whereas you still have signal with Multi-Mode. Single-Mode also uses expensive laser modules, compared to the LEDs and VCSELs used by Multi-Mode.

Optical Fiber Types

Copper connections have been chasing after Fiber distances.

Five years ago, any DVI or HDMI extension longer than 50m needed fiber transport. With the introduction of HDBaseT™, these connections have been extended to 100m, and even 180m for the long reach modes.

That means less market share for Fiber extension, especially for Multi-Mode Fiber extenders, with which most vendors carry DVI & HDMI signals up to 300-500m (extreme distances of up to 2600m can even be reached).

The typical extension distance for Single-Mode Fiber DVI extenders is up to 10 Km.

If you conduct a little product comparison, you'll see that not many manufacturers offer HDMI, because HDMI (or DVI with HDCP) requires two-way communication in a critical way: HDCP checks answers from the display every 2 seconds.

Taking a quick glance at the different solutions available on the market to extend DVI or HDMI, you can classify them according to the following factors:

  • Single-Mode or Multi-Mode extenders
  • DVI (one-way communication) or DVI-HDCP/HDMI (two-way communication)
  • 4 strands of fiber, 2 strands of fiber, 1 strand of fiber
  • Whether an additional CAT cable is used for DDC & HDCP communications

HDMI products have always been more expensive than DVI, not only due to the licensing and technology used, but also because they need two-way communication.

Another aspect you need to consider is the number of Fiber strands used.

Equipment using one strand of fiber costs more than those using two strands or four strands.

It might sound odd to minimize the number of strands when you have only one DVI link, but when you have 64 Fiber inputs on your matrix, the termination budget for multiple strands is much higher than for a single strand.

Plus, each extra strand of fiber per channel substantially increases the potential for termination failures, and reduces the redundancy in your system.
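As a back-of-the-envelope illustration of that termination budget (counts only; actual pricing varies), consider the 64-input matrix mentioned above:

```python
# Terminations needed for a 64-input fiber matrix, counting both ends
# of every run (illustrative arithmetic, not vendor pricing).
inputs = 64
for strands_per_channel in (1, 2, 4):
    terminations = inputs * strands_per_channel * 2
    print(f"{strands_per_channel} strand(s)/channel -> {terminations} terminations")
```

Every termination is a cost and a potential failure point, which is why single-strand designs win at scale despite pricier electronics.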

Another technical aspect brought by the two-fiber solution (1 TX, 1 RX) is the use of SFP (Small Form-factor Pluggable) modules.

The SFP module includes the media converter (optical converter) and the connectors: it determines which connector and which fiber type must be used.

It might sound versatile, as you can mix Single-Mode and Multi-Mode transmissions on the same input or output board in your matrix, but these tiny modules are costly, so if you have a fully loaded board, the budget will be higher than with a fixed media type.

Our final “dietary” comment to you is this: Today’s market offers a big variety of Fiber products, and using Fiber is very reliable and has many benefits. It is nearly future-proof for next-generation signals, impervious to EMI, and easy to install.

And the cherry on the cake: you can pull fiber in conduits along power in many countries (which is not allowed with copper cables!).

5. Digital Cooking Game 2.0: Our digital cooking chefs are experimenting with two brand new standards this month: HDMI 2.0 and HDBaseT™ 2.0. It's high time to taste them!

2.0! After Web 2.0, digital video now gets its own 2.0: HDMI 2.0 and HDBaseT 2.0 have been announced within the same time frame, and have a lot of people still wondering which standard, or what kind of cable infrastructure, they should use. This jump in version is comparable to switching from traditional pots and pans to non-stick: it's huge. In fact, apart from the "2.0", these two standards have little in common. Let your digital cooking chefs break it down for you, as 2.0 brings video and audio to a whole new level.

HDMI 2.0 was announced at the end of 2012, but the detailed specifications were not released until September 2013.

With the availability of the new UHD displays, the need for 4K distribution and transport grows bigger. Since the sources currently available use HDMI 1.4, they are limited to a maximum of 4K @ 30Hz (3840×2160 resolution, 30Hz refresh rate). That is enough for cinematographic content (whose refresh rate is 24p), but not for video content, which uses a 50/60Hz refresh rate depending on the location.

You might have noticed that the UHD demo content is either movies or super-slow content (in both camera movement and action). You haven't seen much sports content in 4K, or an adrenaline-filled demo in 4K, as it would look broken and stuttering.

KBS in Korea did some broadcast feeds in UHD, but the models and the other parts of the decor didn’t move much… for the reasons stated above.

Thanks to HDMI 2.0, we can now increase the data rate to 6 Gbps per channel (an 18 Gbps total TMDS rate, or 14.4 Gbps effective content rate), which gives us enough room for 3840×2160 @ 60Hz.
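The arithmetic behind those figures is worth sketching. The 14.4 Gbps effective rate comes from TMDS's 8b/10b coding, and a 4K @ 60 Hz, 8-bit RGB payload (active pixels only, ignoring blanking) fits comfortably inside it:

```python
# HDMI 2.0 capacity vs. the 4K @ 60 Hz payload (rough sketch).
tmds_total = 6e9 * 3               # 6 Gbps per channel, 3 channels -> 18 Gbps
effective = tmds_total * 8 / 10    # TMDS 8b/10b coding -> 14.4 Gbps of content

payload_4k60 = 3840 * 2160 * 60 * 24   # 8-bit RGB, active pixels -> ~11.9 Gbps
assert payload_4k60 < effective        # 4K @ 60 Hz fits in HDMI 2.0
```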

In the new flavor of HDMI, we'll get some nice features that compete a bit with DisplayPort 1.2, like multistream video (only 2 different streams, where DisplayPort 1.2 allows up to 63 Audio & Video streams).

At the same time (September 2013), HDBaseT Alliance released the HDBaseT 2.0 specifications too. The specification objectives are different, however.

Whereas HDMI 2.0 is chasing Display Port, Higher speeds and Higher Resolutions, HDBaseT is chasing after distribution in the house: the multimedia matrix.

More or less, HDBaseT 2.0 specs still comply with HDMI 1.4 recommendation, which HDBaseT 1.0 followed 100%.

The 5Play of HDBaseT (Video, Audio, Control, Network and Power) now becomes 6 with the addition of USB 2.0 signal (yet another “2.0”).

It now makes sense to have a common hub in the house that connects all the HDBaseT sources and displays or end points.

As you can see below, HDBaseT HomePlay focuses on networking all these devices. It goes to the point of adopting the OSI model with its 7 layers (see Figure 2, courtesy of the HDBaseT website).


Behind the specifications, stands Valens, the chip manufacturer.

For Valens, it means a new set of chips, not only in the sources and end points, but also in the middle.

So with a new standard and a new set of chips, HDBaseT can now compete with HDMI, as matrix manufacturers will have to buy such chips to achieve the same services. Today, a HDBaseT matrix relies on TMDS switching and matrixing: HDBaseT receivers, the manufacturer's own way to matrix, switch and split the HDMI signal, and HDBaseT senders.

Another new feature of HDBaseT 2.0 is that the matrix can have multiple layers, individually passing and switching each HDBaseT service (well, almost: you cannot make the matrix switch Power, but you can remotely switch an endpoint on and off).

The Designer Point of View
Shifting to HDMI 2.0 will change the whole game, as the bandwidth is now doubled from HDMI 1.4, and quadrupled from HDMI 1.3 (most of today’s sources).

Keep in mind that most of the time, we complied with HDMI 1.4 by integrating HDBaseT 1.0 transport in our designs (an easy way to transport signals on Cat cables, and also to comply with the extra bandwidth).

If we follow the HDMI 2.0 specifications, the same HDMI Category 2 cables will supposedly transport 18 Gbps signals; in practice, however, the contrary will most likely be proven.

Most cable manufacturers are not rating their cables for 18 Gbps, especially the long ones (more than 5 meters), because transporting 18 Gbps over longer distances is going to be quite painful and in some cases impossible. Currently, there is only one platform that can handle signals at this speed (Lightware's 25G Hybrid platform, which also supports DisplayPort 1.2 at a crazy 21.6 Gbps).

Shifting to HDBaseT 2.0, on the other hand, is quite easy if 10 Gbps of bandwidth is enough for your system.

Simply run Cat cables from sources to matrix, and from matrix to the endpoints, just as in a HDBaseT 1.0 design.

The only thing that is going to change is the matrix itself and the services it will provide.

Keep one thing in mind: the feature set of HDMI 2.0 will not be available over this framework. It will stay at HDMI 1.4 connectivity, while allowing the independent routing of the individual 5Play services, along with USB 2.0 support!

The user: I want 2.0!
A time will come shortly when a big sporting event (F1, Olympics, World Cup) will be recorded in 4K @ 60 Hz.

Video editors will absolutely be editing in an HDMI 2.0 environment. They will demand hi-res, deep color, etc. Mister Customer will certainly plead to have the chance to view this on their new razor-thin, curved, HDMI 2.0 display. They’ll be willing to fight for it!

Whether they will be able to pay for it (new audio receiver, high-end speakers, etc.) is another question.

HDBaseT 2.0, on the other hand, may not pass 4K @ 60 Hz, but it will certainly be more convenient and allow a new freedom of services. We see a world where both 2.0’s will find their markets.

6. 5-Play or Foreplay? How do you pick the right technology for your needs? Lightware's AV professionals will help you answer these questions in their usual style – let's see what they have on the menu for us now!

In our previous few articles we talked about HDMI & DVI transport over Twisted Pair Cat Cables and Fiber Optics, but how should you pick the right technology for your needs? For sure you can take the Restaurant approach by having a peek at the menu which shows the “dish” names and images of what’s inside.

A quick peek at the price might convince you of the quality… On the other hand, some people want to know the benefits of the dish, so they look for the ingredient list and the calories. Unfortunately, I have never seen such a menu; that's what guides, and now e-guides, are made for!

If you have to transport HDMI or DVI signals, you can try the same approach. Look for the magic HDMI extender, read lots of spec sheets, where the main criteria will be the distance extended and at which resolutions, followed by the list of features on the added services.

Since the introduction of HDBaseT and its 5-Play (Video, Audio, Control, Ethernet & Power), a lot of manufacturers got rid of their old Cat extension technology (which was based on TMDS).

These solutions often had distance and reliability limitations. Manufacturers revamped their product lines and removed the "short" Cat extenders (as if everybody needs 100m Full HD HDMI extension for every cable run).

But why do you remove a product that does the job at a lower price?

Simply because you probably do not trust it, or because you relied on OEM and your ODM/OEM partner is now focused on new technology (HDBaseT is the trend). And as a good ODM/OEM partner, they want to have the best price, so most chose to offer HDBaseT Lite, for example, which doesn't have the full 5-Play (only Video and Control) and goes 70 meters max…

This is where customers can see who the real manufacturers are – the ones that design and control their technology. Don't tell me that you need 3D/4K video and 100-meter extension between your wall plate and the projector in all the meeting rooms!

Most manufacturers were not publishing distances and resolutions for their old extenders, so HDBaseT was a relief as the technology brought some stability on the distance side.

You might like the single-cable connection for Video & Control plus remote powering of the receiver (existing solutions required two cables); however, that is 3 services out of 5.

Are you ready to pay full price to go only 20% of the distance and use 3 services out of 5?

So take out your 2-year-old catalog and have a look at what manufacturers were offering: some had pricey HDBaseT extenders, not all with 5-Play, and most had products that used 2 Cat cables; some already had remote powering, RS232 control and/or IR. They maximized the usage of that second Cat link needed for the DDC line (EDID & HDCP communications).

And guess what? It was working well and still works well.

So if you don't need a racing car to shop for groceries, you might not need, or be able to afford, a full-featured, super-efficient extender for your short HDMI extension.

So why pay for all these services you don’t use at 100%?

Let’s go back to how we choose our signal extender: first we need to know exactly what we are extending.

Just digital video, or other services as well? What is the exact length? How do I reach the end point? Is it a real direct path, or does it go through patch panels along the way? Do I have to go through existing cabling, and is that cabling UTP? If I can pull new cables, do I have to worry about EMI? How future-proof is my setup? What about fiber optics?

Here we lay out some topics for consideration:

The Services
We saw that HDBaseT is offering the 5-Play feature, which is 5 different services through one cable. So by enumerating the services needed for your project, you’ll be able to make an intelligent selection.

Knowing that alternatives to HDBaseT are still available at a lower price, you might consider them if you don't need the full 5-Play.

The Distance
HDBaseT's longest run is about 180 meters using good quality CAT7 S/FTP with a direct connection from sender to receiver (no patch panel, no interconnection). That is only for DVI and HDMI Full HD signals (8-bit, 1920×1080 @ 60Hz).

Following that, there is the Lite version of HDBaseT, limited to 70 meters maximum, at a slightly lower price. Then you might consider the 'old' TMDS-over-Cat extenders, especially the ones still in manufacturers' catalogs, because they are reliable and still offer a lot of possibilities like remote power and RS232/IR, but they need S/FTP cable to achieve 50-60 meters.

You should keep in mind that good thick DVI or HDMI cables are an excellent solution too, especially if you couple them with EQ boxes or matrices that have strong EQ on their inputs (some can equalize runs up to 60m).
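The distance reasoning in this subsection can be condensed into a small, purely illustrative helper. The function name and structure are ours; the thresholds are the figures quoted above, and real projects should weigh services, path and budget too:

```python
# Illustrative extender selection based on the distances quoted in the text.
def pick_extender(distance_m: float, needs_full_5play: bool) -> str:
    if distance_m <= 60 and not needs_full_5play:
        return "thick DVI/HDMI cable + EQ, or legacy TMDS-over-Cat"
    if distance_m <= 70 and not needs_full_5play:
        return "HDBaseT Lite"
    if distance_m <= 180:
        return "HDBaseT (good CAT7 S/FTP, direct run)"
    return "fiber extension"
```

For example, a 30 m run without the full 5-Play lands on the cheap legacy options, while anything past 180 m pushes you to fiber.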

The Path
The path of the cable is one very important thing: all extenders are designed for a point-to-point connection, running all the way through the same cable, terminated with male RJ45 connectors, and with no attenuation points along the route.

This means that you can't go through patch panels and patch cables, which are often not made from the same thick, rigid installation cable on either side. Any point of connection is your enemy in the extension business. Even with the best CAT7 patch panel, you can calculate attenuation values for Ethernet transmission, but not for TMDS or HDBaseT signal transmission, so don't even think about evaluating the losses that will occur.

The other important factor is how bad your environment is, because nowadays we commonly have to deal with heavily loaded EMI surroundings: cellphones, cell towers, CFL lights, high voltage, bad grounding, etc.

You surely know that you are not allowed to mix high-voltage and low-voltage cables in the same conduits or cable path, but invisible troublemakers like microwaves can also ruin your setup.

And we are talking about wired extension here, not wireless extenders! HDBaseT is resilient in heavy EMI situations, but not impervious to it.

The Future
This is the 1 Million dollar question: how future proof is my design?

But the more important question is: is my client ready to buy the 1 Million dollar future proof design?

We already talked about future evolutions and the race between copper and fiber optics. Copper is getting better and faster all the time, but it is not the same copper… Today, the main problem comes from using UTP Cat5 that was installed 20 years ago. This is the same challenge that faces RF distribution of DVB-S2 signals, which can't go through 1960s coax cable!

Conversely, outdated OM1 multi-mode fiber can now transport HDMI 1.4 signals, and it’s still future proof for upcoming technologies. Impressive.

At Lightware, we have a new generation of Modular Extenders (called the MODEX) that can transport all the HDMI 1.4 signals (4K/3D & Deep color, Audio return channel, Ethernet over HDMI) mixed together with the HDBaseT services (RS232/IR) and add on to it a separate SP/DIF forward audio signal layer in a single strand of fiber.

So yes, fiber would be the ultimate future-proof cable for this solution, but we can currently do all of this on a single Cat cable too, at a lower price. Again, you need to work with your client to understand their needs and budget when selecting the right solution.

7. 4K Cookbook for Beginners: 4K was all the rage at CES and ISE. Our Cooking experts decided that it would be fruitful to introduce a 4K Cookbook for beginners.

If you’ve been reading some of the AV magazines, blogs or attending a few shows lately, or even paid a visit to your local consumer TV shop, you’ve seen these massive Ultra HD screens (from 55” to 84”) displayed at sky-high prices. So how did we end up with super-high resolution displays on these shelves? From both the Cinema industry and the TV manufacturers.

Cinema first adopted the Ultra-High Resolution DCI format (4096×2160), which was double the width and double the height of their 2K resolution (2048×1080).

Essentially, this resulted in four times as many pixels on the screen. Display manufacturers followed by quadrupling their consumer 2K resolution (i.e. 4 times 1920×1080 becomes 3840×2160 or 2160p).

So, will 4K require a whole new set of pots and pans? Let's look into it further.

4K issues of transport and content

Resolution-wise, making 4K displays wasn’t too much trouble at all.

The big trouble arises when attempting to get 4K content from (the now very few) sources to the display. Refresh rate and colour coding largely affect the amount of data that needs to be transported.

Common refresh rates are 24, 25, 30, 50 and 60 frames per second (fps). 24 fps is the Cinema standard, which also became the Blu-ray Disc cinema standard. 50 and 60 fps come from the TV world, inherited from the power grid frequency, as that was historically the only time base reference available. 25 and 30 fps are leftovers from the interlace-to-progressive transition, mainly used for content that was up-sampled in TV sets.

That’s what UHD is all about: up-sampling.

How you transport 4K is also significant, so let’s talk about the interfaces, beginning with a nice graph. Just like cookbooks, it’s good to have pictures!

interface limitations

4K @ 24Hz, 25Hz & 30Hz are formats supported by HDMI 1.4 in 4:2:2 encoding. Chroma sub-sampling can be used because the human eye is less sensitive to chrominance than to luminance.

When captured in RGB, the signal is composed of equal amounts of each color, and if you convert it to YCbCr you get 4:4:4 coding: for every 4 luminance samples, you transmit 4 samples of each color-difference component (Cb & Cr). 4:2:2 drops 2 samples of Cb and 2 samples of Cr for every 4 samples of luma, reducing the amount of data, and thus the overall bandwidth needed to pass all that information.
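Assuming the standard J:a:b reading of the notation (samples counted over a 4×2 pixel block), the data reduction can be counted directly:

```python
# Samples per 4x2 pixel block under J:a:b chroma subsampling:
# 8 luma samples, plus 'a' Cb + 'a' Cr in the first row
# and 'b' of each in the second row.
def samples_per_block(a: int, b: int) -> int:
    return 8 + 2 * a + 2 * b

full = samples_per_block(4, 4)   # 4:4:4 -> 24 samples
s422 = samples_per_block(2, 2)   # 4:2:2 -> 16 samples, 2/3 of the 4:4:4 data
s420 = samples_per_block(2, 0)   # 4:2:0 -> 12 samples, half of the 4:4:4 data
assert (s422 / full, s420 / full) == (2 / 3, 0.5)
```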

Figure 3

As you can see from Figure 3, the ~10 Gbps limitations of HDMI 1.4 (10.2 Gbps) and HDBaseT (10 Gbps) restrict the transport of 4K signals to 30Hz YCbCr 4:2:2 max (no problems for anything below this limit, like the 24Hz YCbCr 4:2:0 of Blu-ray, for example).

But computers use RGB 4:4:4 at a stable refresh rate of 60 Hz, which is comfortable for the eyes and is beyond what today's interfaces can do. Only the upcoming HDMI 2.0 and DisplayPort 1.2 can transport this signal.
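A rough payload check (active pixels only; blanking and coding overhead ignored for simplicity) shows why the line is drawn where it is in Figure 3:

```python
# Raw active-pixel payloads against the ~10 Gbps HDMI 1.4 / HDBaseT ceiling.
def payload_gbps(w: int, h: int, hz: int, bits_per_pixel: int) -> float:
    return w * h * hz * bits_per_pixel / 1e9

uhd30_422 = payload_gbps(3840, 2160, 30, 16)  # 8-bit 4:2:2 -> ~4.0 Gbps: fits
uhd60_444 = payload_gbps(3840, 2160, 60, 24)  # 8-bit 4:4:4 -> ~11.9 Gbps: does not
assert uhd30_422 < 10.2 < uhd60_444
```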

If you want to learn more about colour sub-sampling, you can read the Wikipedia article on chroma subsampling.

So the only way to transport 4K at 60Hz for computer applications (using HDMI 1.4 or HDBaseT for the transport) is, unfortunately, to downgrade the signal to 4:2:0 (not shown on the graphic).

“Houston, we have a problem here…”

Why discard a major part of the chrominance just to get a more stable picture on the display, when it results in a picture that is compatible neither with post-production requirements nor with the computer world?

This is where DisplayPort 1.2 makes sense: its large bandwidth is the only one that can transport the full stream (all except 12bits depth, as shown in Figure 3).

Post-production environments like to work in 12-bit 4:4:4, and this is their major dilemma currently: they won't have the right environment to handle 4K or 4K DCI at 12-bit 4:4:4 @ 60Hz (and they don't want to work day & night on a 24Hz or 30Hz monitor!).

The workaround would be a high-resolution workstation environment, plus a second display for rendering at UHD (like the new Mac Pro, for example).

Another challenge arose when the TV manufacturers started to promote their freshly “cooked” UHD displays: what content will they show and how can they demonstrate them?

There is no permanent UHD broadcast (at least outside Korea or Japan), and there is no widely adopted broadcast standard either. The same applies with stored media: there is no Blu-ray Disc standard with UHD capabilities.

So guess what’s behind the UHD sets you’ve been watching in the shops? USB sticks or hard drives.

How do you get rid of transport issues? By having the content decoded locally. This is the cheap workaround. But as a System Designer, you can't tell your customers to generate all their content inside their displays just because there's no standardized rendering protocol.

The true 4K recipe
So we understand now that the Consumer 4K, aka UHD, will create a whole new set of headaches.

In professional applications, 4K professional displays are also available, more in the form of projectors than fixed panels, but the panels are coming.

The lack of a unified interface to connect 4K sources to these displays didn't stop manufacturers from promoting these gorgeous displays. They use four DVI Single-Link feeds, or two DVI Dual-Link feeds, and lately even four 3G-SDI feeds.

And the funny thing is that these transport methods have been achieving 4K since way before UHD was even an acronym, and they are more than reliable.

Aside from this method, you can reduce your HDBaseT links to 70m to be compliant with 4K UHD transmission, check that all your HDMI (real HDMI to HDMI) cables are High Speed with Ethernet (if you're going full speed, let's go full features too), and pray that they are really High Speed and can achieve the 10 Gbps printed on the label.

Marketing has been good for the HDMI cable manufacturers, as now we need "new cables" for compliance with the HDMI 2.0 specs (18 Gbps), based on HDMI 1.4 recommendations (see the previous Digital Cooking 2.0).

HDMI cables might be easy, but what about DVI cables? There are no High-Speed DVI cables rated for 10 Gbps, let alone 18 Gbps, so you can say good-bye to the nice anchored connector head of that DVI cable you liked for years: you'll be going 4K full-rate with HDMI "consumer" cables.

On the transport side, you can see in Figure 3 that for the true "recipe" (4K @ 60Hz) with full-feature (12-bit & 4:4:4) computer content, you'll need at least HDMI 2.0 or DisplayPort 1.2 and a 25 Gbps bandwidth infrastructure: extenders, matrices, and all your components will need to support that incredible 25 Gbps stream.

Anything less than 25Gbps will not allow for this transition, so keep that in mind when future proofing your 4K designs.
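Where does the 25 Gbps-class figure come from? A sketch using the standard CTA-861 4K @ 60 Hz timing (4400×2250 total pixels, 594 MHz pixel clock; these timing numbers are standard figures, not from this article) and TMDS's 10/8 coding overhead:

```python
# Why 12-bit 4:4:4 4K @ 60 Hz bursts through HDMI 2.0's 18 Gbps.
pixel_clock = 4400 * 2250 * 60       # CTA-861 4K60 timing -> 594 MHz
data_rate = pixel_clock * 36         # 12-bit x 3 components -> ~21.4 Gbps
line_rate = data_rate * 10 / 8       # TMDS 8b/10b overhead -> ~26.7 Gbps

assert line_rate > 18e9              # beyond HDMI 2.0's 18 Gbps link
```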

8. Live (à la minute) or Ready-made? Chefs face a decision when getting ready to cook a meal. They can choose to create a sauce from scratch (à la minute), or simply buy a ready-made packet and mix the contents with water. Both will end up as sauces, but the effect (and taste) is usually different.

When you design large digital video systems – large both in the number of inputs and outputs and in overall topology and scale – you always want to check the latest developments in network video transport first.

Network transport relies on a computer network infrastructure with network switches which may or may not be dedicated to video routing and distribution. This is an interesting option to consider, along with using a digital AV matrix switcher at the core of the installation.

What are the benefits? Expansion or Scalability.

People from this industry will tell you that Cost is also a main factor, though it can be a bit hidden: you could think the average Joe can manage a Layer 2 or 3 switch, and that a manageable switch costs only some hundreds of USD…

But the main aspect of Networked A/V transport is the Network part, so you need a Network Engineer as well as an AV Engineer for the sources & endpoints – or one of the new mutants coming out of the schools.

Having dealt with networked audio & video, the main pitch was: "Tell your customer to throw away his 32 I/O matrix when he wants to add one more input".

And the regular AV Matrix guys would reply with the big question: "What is the delay of your system?"

That’s it! You throw the dirty rock in the soup. Delay, the evil word of Networked AV.

New codecs like H265 (only published in 2013) are better than the previous generation, but cost-effective chips are far from ready. Then you have the proprietary recipes, each claiming to be better than the others, with the latest one kicking out the previous week's flavour.

Keep in mind that these Codecs are processing the audio & video, and this operation takes some time.

With pricey custom silicon chips, the delay can be reduced to lines of pixels versus frames of video (which was previously the case).

And what is available now as a standard? H264 chips.

So, a lot of H264 streamers and "exstreamers" (decoders/renderers) are now common. All of this video is like processed food, however, and it differs from fresh food – that is, delay-free video.

There is also one other bad word in the Codec world: compression.

Compression is not always bad, and it was even used in analog broadcast: interlaced video!

Dropping odd or even lines to reduce the amount of information transmitted is destructive (your eyes & brain were supposed to recreate the image). The main aspect of compression is whether or not it is destructive. Some algorithms compress the data, but don't lose information.

You've heard about a lot of "lossless" audio codecs, but sadly never got used to "lossless" compressed video. YouTube taught you not to, even though today it is the largest 4K broadcast platform (which it is!).

Figures 1-3: Lip-sync

Psycho-acoustics is a tricky science: sound arriving before sight really bothers your brain, and depending on the frequency and type of sound, the effects get worse (see Figures 1-3). So there is one rule in Networked AV to minimise lip-sync problems: embed the audio or suffer!

Then pray that good codec (Coder/Decoder set) is used to render the audio synced with the video.

What about video-only systems?

With these, you still have the biggest share of the delay pie: video processing is heavier and harder to digest than audio processing (especially with today's DSP technology, which is considered "live", meaning that your brain can't tell the difference).

The same rule applies for any processor in the distribution chain for traditional A/V switchers: codecs (Video Conferencing or otherwise), scalers, seamless switchers, effects, composers, multi-viewers, even certain fiber extenders; all of them add some frame delay.

One of the benefits of embedding the audio is that the processed image will still contain the audio message (if the device works well), so the audio stays stuck to the picture.

When it comes to codecs, you need to be sure that the packets of audio and the packets of video stay in sync. This does not always hold true, depending on the algorithm & protocol.

This is where the OSI model layers of communication enter the game. Most audio & video codecs (H264, H265 or proprietary) rely on high-level layers (called application layers, as seen in Figure 4), so the way the network is set up and how it behaves is crucial.

This choice is usually made for integration with other devices on the same network (service providers or managing devices).

Lower-level systems like AVB (or CobraNet in the past) rely on Layer 2 & Layer 3 infrastructures, so they require a gateway to integrate with higher-level devices on the network, and these gateways will introduce further delay in transmission. For example, an H264 camera could be on the same network as AVB devices, but it cannot communicate with those devices without a gateway.

Figure 4: The OSI model

As good as these protocols are, they always rely on network switches that do more than just transmit electricity (Layer 1), which is why some manufacturers have started to work on all-in-one chips that interface directly with the Physical layer or the Data-Link layer.

So more or less, you plug all the services you need to that chip (HDMI, DVI, DP, Ethernet, USB, Analog or Multi-Channel Digital Audio) and the same chip will be connected to a Core 10G Switch (all Fiber).

The only drawback of this setup is the management of services or crosspoints. Integration with third-party automation controllers is not that easy: for now, the major control brands don't offer 10G Fiber ports on their controllers, and the APIs of these chips are not widespread.

Generally speaking, network-based systems do not integrate tightly with classic A/V automation. Coming from the IT world, or having a strong IT background, always pushes their makers to have their own flavour of automation, whereas traditional Audio & Video switchers relied on third-party control through RS-232, and now TCP/IP and/or the Web. This is their legacy.

So, which one is the best, network-based transport or dedicated A/V core matrix switcher?

The answer lies within the requirements of each application. When you need truly live Audio & Video (live entertainment, medical, military, mission-critical setups), the Core AV switcher is the answer. When you need integration with IT, scalability, size (hundreds of displays with limited sources), or WAN services (watching a camera 10,000 km away), then Networked A/V might be the solution. But the truth is in between, and that's maybe tomorrow's generation of products: Core A/V switchers with Network interfaces.

If you need a mission-critical setup to operate, diagnose and react, the Core A/V system is your best choice; but if you then want to broadcast some of this information to different locations through LAN or WAN, you can have connections to the Networked A/V infrastructure. In terms of size, Network-based A/V still needs improvement.

Today, traditional switchers exist in sizes of 80×80, 128×128, 144×144, 160×160, and even 256×256 (with some crazy built-on-demand 1000×1000 units), but there aren't any networked systems that can transport 80 Full HD (2K) streams with non-destructive compression.

The “recipe” still doesn’t exist.

9. Digital Cooking in the Dark: In this instalment of Digital Cooking, we'll turn our attention to a more practical subject matter: troubleshooting.

In the midst of the Analog Sunset, installers more often than not find themselves working with HDMI instead of VGA. These installers quickly find that troubleshooting a digital installation is trickier than analog ever was. With analog signals, they could at least see some sort of image, even if it was of poor quality; the same rules do not apply on the digital side. Digital installations are like cooking with Molecular Gastronomy methods: difficult to troubleshoot, and installers find themselves "in the dark" without good tools to understand what is happening.

When you have to deal with digital AV installations, whether a single source connected to a display or a very large DVI/HDMI matrix connected to dozens of displays and unknown guest sources, the same recipe applies: if you don’t know where you are going, and what to check for, it’s like working blindfolded.

When you find yourself looking at a black screen, there are steps you can follow to systematically troubleshoot the scenario.

As we discussed in an earlier edition of Digital Cooking, DVI and HDMI transport is ruled by EDID, so sources cannot produce a valid signal without the right EDID. This means that you need to check that EDID path, working step-by-step from the Display to the Source.

Sometimes the powering sequence can be important, so when you’re into that “Turn it Off and On again” procedure, don’t forget to do it properly: turn on the display first, then each device backwards towards the source (power the source last). See Figure 1:
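As a memory aid, the recommended power-up order is simply the signal chain reversed. A hypothetical sketch (the device names and the `power_up_order` helper are ours, purely for illustration):

```python
# Hypothetical sketch of the "display first, source last" power-up
# order. Device names and the helper function are illustrative only.
signal_chain = ["source", "transmitter", "matrix", "receiver", "display"]

def power_up_order(chain):
    """Return the chain in recommended power-up order:
    start at the display and walk backwards to the source."""
    return list(reversed(chain))

print(power_up_order(signal_chain))
# The display comes up first, so each upstream device can read a
# valid EDID as soon as it powers on; the source is powered last.
```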

Following the EDID path is one of the first things to check when troubleshooting: check what EDID is actually received by the source. If it is a laptop or a computer, use the on-screen Display properties panel of the computer, or a third-party tool like the Phoenix EDID Designer, to check what EDID is received.

Look for non ‘Generic Panel’ information (‘Generic Panel’ is the name Windows gives to unidentified panels). If it’s an appliance or device, then connect an EDID Manager or analyser to see what EDID is emulated to the source. If it doesn’t look like the EDID you set in your matrix, or the one from the connected monitor, then you have a problem in your cables or your extenders.
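If you have a raw EDID dump in hand, two things are cheap to verify before digging deeper: the fixed 8-byte header and the block checksum (per the VESA EDID specification, all 128 bytes of a block must sum to zero modulo 256). A minimal sketch in Python:

```python
# Minimal sanity check for a raw 128-byte EDID block, as dumped by a
# tool such as Phoenix EDID Designer. Verifies only the two cheapest
# things: the fixed 8-byte header and the block checksum.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_looks_valid(block: bytes) -> bool:
    if len(block) != 128:
        return False
    if block[:8] != EDID_HEADER:
        return False
    return sum(block) % 256 == 0   # all 128 bytes sum to 0 mod 256

# Synthetic example: a zero-filled block with a correct header, and
# the last byte set so the checksum comes out right.
demo = bytearray(128)
demo[:8] = EDID_HEADER
demo[127] = (256 - sum(demo) % 256) % 256
print(edid_looks_valid(bytes(demo)))  # True for this synthetic block
```

A block that fails either test is corrupt somewhere along the path, so there is no point in debugging the timings it claims to contain.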

While your EDID Manager is still connected to the source, you might try to emulate a standard EDID with a single resolution that matches your display. If this solves the problem and you get a picture, then your job is almost done (just do not leave your pricey analyser at the customer’s place, especially if it’s one of those nice Quantum Data 780s).

You can now move your EDID Manager next to the display and record its EDID, then move it one step back towards the source and record again, and so on until you have recorded the EDID at each link of your signal chain. Read these EDIDs from your analyser tool and check that they are consistent.
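The comparison itself is trivial once you have the recordings: the first hop whose EDID no longer matches the display’s is your suspect. A hypothetical sketch (locations and EDID values are made up for illustration):

```python
# Sketch: compare the EDIDs recorded at each point in the signal
# chain. The first hop where the recorded EDID differs from the
# display's marks the weak link. Labels are illustrative only.
def find_weak_link(recordings):
    """recordings: list of (location, edid_bytes) ordered from the
    display back towards the source. Returns the first location whose
    EDID no longer matches the display's, or None if all match."""
    reference = recordings[0][1]          # EDID at the display
    for location, edid in recordings[1:]:
        if edid != reference:
            return location
    return None

chain = [
    ("display",  b"EDID-A"),
    ("receiver", b"EDID-A"),
    ("extender", b"EDID-B"),   # discrepancy introduced here
    ("matrix",   b"EDID-B"),
]
print(find_weak_link(chain))  # -> extender
```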

If you find there is a discrepancy, then you have found the weak link!

Another very handy tool is a portable monitor or analyser that you can move along the signal path.

Some matrix switchers give you the ability to analyse the incoming and outgoing video signals (video specs, audio specs, colour depth and even the detailed timing). See Figures 2 & 3:

Figure 2
Figure 3

Knowing the detailed timing can help you to shorten the overall switching time. When you switch from source A to B, how long does it take for source B to be displayed on the monitor?

This overall time should include the loss of sync, or any added effects that can hide this loss. Presentation switchers do that for example, by adding transition effects such as fades to “smooth out” the loss of sync while the switching occurs.
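The detailed timing also tells you the hard lower bound: nothing can appear faster than one frame period. A quick Python calculation, using the standard 1080p60 (CEA-861) timing:

```python
# Derive the frame period from the detailed timing: it sets a lower
# bound on how quickly a new source can appear after a switch.
# Values below are the standard 1080p60 (CEA-861) timing.
pixel_clock_hz = 148_500_000   # 148.5 MHz
h_total = 2200                 # active + blanking pixels per line
v_total = 1125                 # active + blanking lines per frame

frame_period_ms = h_total * v_total / pixel_clock_hz * 1000
print(f"one 1080p60 frame lasts {frame_period_ms:.2f} ms")  # 16.67 ms
```

So even a "seamless" switch costs at least one frame (~17 ms at 60 Hz); everything on top of that is resynchronisation time that good EDID and timing management can minimise.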

Troubleshooting the cabling of your matrix setup can become tricky, as all current cable certifiers and qualifiers use telecom specs, or at best CAT5e/CAT6 specifications.

However, today’s HDMI transmission over twisted pair uses one of two technologies: TMDS or HDBaseT. These are not telecom protocols, so there are no dedicated tools to troubleshoot HDMI over twisted pairs. Basic LEDs that reflect link status, and sometimes even physical interface characteristics (TP continuity, laser presence, valid EDID, etc.), are a quick and easy way to check the links. Having an error logger and data analyser is a very useful asset when you look for a weak point, especially if you need to go through patch panels and patch cables. See Figure 4:

Figure 4

Some matrix CPU boards can monitor such data and report the link status over a period of time. Data logging consumes a little processing power and also needs memory, but it’s the only way to understand what is going on while you are wiggling the cables in the patch panel. Thus, it’s best to have a matrix with these types of analytical tools, or you’ll be in the dark for longer periods of time!

Finally, once you get a picture all around the place, your automation team will take over and try to use these signal paths to control sources and displays through embedded RS-232 or Ethernet channels (carried inside the HDBaseT or fiber links).

This process will likely introduce new issues which you must overcome. As with all integrated systems, breaking them down to services is the way to approach the solution.

First get the video to work, then the audio; control will be the last step. As most matrix switchers act as gateways, the ability to test commands from the matrix is a quick way to check both your commands and the control link to the device. See Figure 5:
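When the matrix’s gateway is reachable over Ethernet, a minimal probe is often all you need to separate “bad command” from “bad link”. A hypothetical sketch in Python; the address, port and command string below are invented for illustration, so check your own matrix’s protocol documentation:

```python
# Hypothetical sketch of probing a control path through a matrix's
# Ethernet gateway. The address, port and command syntax are made up
# for illustration; consult your matrix's protocol documentation.
import socket

def send_test_command(host, port, command, timeout=2.0):
    """Send one command and return the raw reply (None on timeout)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(command.encode("ascii"))
        try:
            return sock.recv(1024)
        except socket.timeout:
            return None  # no reply: suspect the control link, not video

# Example call (needs a real matrix on the network; values invented):
# reply = send_test_command("192.168.1.50", 10001, "{PWR_ON}\r\n")
```

If the matrix answers but the end device never reacts, the problem is downstream of the gateway, which narrows the search considerably.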

Figure 5

Now that you understand much more about the troubleshooting procedures and tools for digital installations, you can see why some digital products are priced differently than others.

Implementing troubleshooting and analysing functionality in matrix switchers and extenders is not an easy task (or everybody would do it!). Our advice to you is to try out a few different solutions to see what works best for you. Don’t find yourself installing (or cooking) in the dark without a flashlight!

10Top Chef AV Recipe: In this 10th and final instalment of Digital Cooking, we come full circle in the kitchen of AV.

As we covered quite a lot of material in the previous nine articles, this one will pull the salient advice from them and give you only the crème de la crème: guidelines that every installer should follow to some degree.

So, get your printer and laminate ready, as these might be good to have handy when designing systems or troubleshooting them.

1. Don’t trust the cable!

If you didn’t bring it, pull it, and certify it, then don’t trust it.

As with all measurement equipment, your tools need regular inspection and servicing, and that includes your test cables. Always have two recently-tested cables to troubleshoot with. This guideline is #1 for a reason.

2. Go with a Matrix that you can manage.

It’s important to have the tools to manage EDID, access a frame detector, embed & de-embed audio, and run cable analysis from the Matrix. Smarter is better!

Having the ability to do some signal conversion also provides flexibility in design. When everything is converted to TMDS inside the matrix, the hybrid structure offers unparalleled functionality and gives you a critical point of management.

3. Only trust in true EDID management.

Without proper EDID settings, your source will not produce the expected video content (resolution, refresh rate, color depth, and all the features of HDMI 1.4).

In most instances, a well-tuned EDID will save you the costs of scaling receivers, and do an even better job. When not set up properly, scaling receivers can make the video look even worse by scaling up low resolutions when proper EDID management could have forced the source to output an optimal signal.

4. Remember the general CAT cable rule: in AV, UTP is Useless Twisted Pair!

Even if HDBaseT can be used over UTP Cat5e, the performance is far lower than with the STP version or, even better, S/FTP.

This is especially the case when your cables run together in a trunk.

If your field of operation is in a wide open space with single links in the void, away from cell towers, CFL lights, and any Electro-Magnetic & ESD interferences, then feel free to use UTP; otherwise, stay away from it like a chef putting too much salt in the dish!

5. Put fiber in your diet.

Don’t be scared. Fiber is a chef’s friend and should be yours too. It has zero Electromagnetic Interference, it’s uber-reliable, and it works on long distances or crowded environments (where trunking or space is an issue).

Most of the time, multi-mode fiber (cheaper than single-mode) would be enough for video transmission. Just take a look at old OM1 fiber installed 12 years ago, which is doing great passing UHD signals. This is not the case with an old CAT5e UTP…

6. HDBaseT, the life saver.

Yes, HDBaseT has solved a lot of issues. Yes, HDBaseT made it common to remote power senders and receivers. Yes, HDBaseT made it easy to have Ethernet, IR, RS-232 alongside an HDMI signal.

Heed this: not all HDBaseT implementations are equal, so beware when you select your extenders. Just take a look at Panasonic’s DIGITAL LINK website to see how different manufacturers measure up in terms of compatibility.

You will see that it is not always 5-Play compatible. And one more thing: do you really need all 5 Play services for simple computer extension in the control room?

7. Cables: read point 1. Read it again please.

The better the cable, the fewer issues you’ll have delivering the installation and maintaining it. We said good cables, not necessarily the most expensive, extravagant ones. Industrial-good means published specs with measurements, plus on-site certification and qualification. Take note that a nylon braid (inner or outer) doesn’t take 4K30 any further. 95% of installation issues occur as a result of poor cable.

8. Talk 4K, but know it.

4K30 is nice in the TV shop, and maybe at home with some computers or special media players. 4K30 is not yet common among broadcasters, nor in the next-gen Blu-ray industry.

What about Pro AV? They’d rather work with 4K60, but sorry: only DP 1.2 and HDMI 2.0 will be able to transmit 4K60, and these solutions are not on the market yet.

Another consideration is that the bandwidth necessary to route 4K60 (~20 Gbps) is only available from one manufacturer in the world at the moment, so it will take some time before other solutions hit the market.
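That figure is easy to sanity-check. Using the standard 3840×2160@60 total timing and HDMI’s 8b/10b TMDS encoding (a Python sketch; 24-bit colour assumed):

```python
# Where the ~18-20 Gbps figure for 4K60 comes from: the standard
# 3840x2160@60 total timing and HDMI's 8b/10b TMDS encoding.
h_total, v_total, refresh = 4400, 2250, 60   # active + blanking
pixel_clock_hz = h_total * v_total * refresh
print(f"pixel clock : {pixel_clock_hz / 1e6:.0f} MHz")   # 594 MHz

# 3 TMDS channels, each carrying 10 bits per pixel (8b/10b coding)
tmds_gbps = pixel_clock_hz * 3 * 10 / 1e9
print(f"TMDS rate   : {tmds_gbps:.2f} Gbps")             # 17.82 Gbps
```

17.82 Gbps is the HDMI 2.0 maximum TMDS rate, which is why anything routing 4K60 end-to-end needs on the order of 18-20 Gbps per link.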

9. The Internet is everything. Or is it?

Shall we transmit video over IP, or over a network (remember that the IP protocol uses networks, but not all networks are IP-based)? H.264 is nice, cheap, etc., but I’d rather not lie down in the operating room with an H.264 transmitter on the endoscope, thank you. In many cases, our customers need high reliability and negligible delay.

Until Skype releases a Medical Edition (and we’re not saying they won’t), it’s best to analyse the situation before choosing your transmission method.

10. Prepare your test kit.

Every decent chef has their tool kit, and the AV installer has theirs too. Keep yours up-to-date and ready. For instance, don’t use a local or freshly-unpacked device to troubleshoot the installation.

Read point 1, and bring your own tested and trusted cables. This also means that if your job involves a new type of equipment, test it in the workshop or at the office before going on site with it. New cable types? Test them. Trust is all you need when troubleshooting, and test equipment has to be verified and subsequently maintained when you get back home.