When designing a new fibre infrastructure for the data centre, it is important to understand your projected need for future bandwidth and application usage. The goal must be to select a cabling solution that meets the growing data demands of the business, is scalable, and ensures reliability and rapid deployment. The traditional choice between singlemode and multimode fibre is no longer a simple one, as the costs of the two fibre types are now very comparable. Weighing these four factors (bandwidth, scalability, reliability and deployment speed) allows for better infrastructure planning and design with future growth in mind, and helps businesses scale up economically over time without compromising performance or uptime.
Future data demands
As data demands in the data centre continue to increase exponentially, it is important to ensure that the selected fibre type supports not only current application needs (ranging from 10Gb/s to 100Gb/s) but also future needs (400Gb/s to 800Gb/s).
If we take a closer look at 400G applications, multimode fibre can support a maximum length of 100 to 150m, while singlemode (depending on the transmission method) can reach 2km or more. At first glance, a singlemode link has the advantage in supported length, but multimode fibre reduces active component costs: multimode QSFPs are typically cheaper than singlemode equivalents in parallel optic applications that require more than eight fibres. So the question is whether an 8-fibre multimode link is more cost effective than a 2-fibre singlemode link, and the answer is typically 'yes'. However, most singlemode applications use Wavelength-Division Multiplexing (WDM) technology, which transmits several signals over the same fibre so that a duplex connection can support 400G and beyond. The right choice ultimately depends on the future requirements and data demand of the data centre.
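This cost trade-off can be sketched as a simple calculation. The prices below are purely hypothetical placeholders, not vendor figures; the point is only the structure of the comparison, in which parallel multimode optics pair cheaper transceivers with a higher fibre count, while duplex singlemode WDM does the reverse.

```python
# Illustrative 400G link cost comparison. All cost figures are
# placeholder assumptions for the sake of the example, not quotes.

def link_cost(transceiver_pair: float, cost_per_fibre_m: float,
              fibres: int, length_m: float) -> float:
    """Total cost of one point-to-point link: optics plus cabling."""
    return transceiver_pair + cost_per_fibre_m * fibres * length_m

# Parallel optics over multimode: 8 fibre pairs (16 fibres), cheaper optics.
mm_cost = link_cost(transceiver_pair=2 * 900.0, cost_per_fibre_m=0.25,
                    fibres=16, length_m=100)

# WDM over duplex singlemode: 2 fibres, more expensive optics.
sm_cost = link_cost(transceiver_pair=2 * 1500.0, cost_per_fibre_m=0.15,
                    fibres=2, length_m=100)

print(f"multimode parallel link: {mm_cost:.0f}")
print(f"singlemode WDM link:     {sm_cost:.0f}")
```

With these assumed numbers the multimode link comes out cheaper at 100m, which is consistent with the 'typically yes' answer above; at longer reaches, or with different price assumptions, the balance can tip the other way.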
Scalability and future proofing are key when designing a fibre infrastructure, and the first metric in terms of scalability is length. For very short distances, especially in Top of Rack (ToR) and breakout applications, many data centres deploy direct attach cables (DACs) or active optical cables (AOCs). These provide a high-performance and reliable solution, but they do not offer a scalable migration path: with each technology update, the user has to replace the AOC or DAC cord.
The preferred solution in these instances remains a structured fibre optic cabling system which utilises transceivers. This provides the maximum amount of options for scalability and migration in the future as the active components can be easily upgraded without changing the underlying infrastructure.
We are also seeing data centre architectures evolve towards simplified designs with fewer network layers, reducing both the length and the optical loss budget of the fibre channel. These simplified architectures with shorter distances (93 per cent of data centre channels are shorter than 100m) are better placed to support lower cost multimode high-speed applications.
What about future higher speeds?
Even if multimode fibre cabling is sufficient for enterprise data centres, given their shorter distance and lower speed requirements compared to cloud data centres, the need for higher speeds in the future remains. Will port aggregation applications (eight lanes of 50G) be more cost effective once enterprise data centres move to 400G? Looking at current IEEE drafts, we are seeing more singlemode application development above 400G. The objective of the IEEE beyond 400Gb/s Ethernet study group is to achieve 800G and 1.6 Terabit over singlemode fibre at distances from 500m up to 40km, while multimode is being considered only for 800G applications up to 50 or 100m and currently not for 1.6T.
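The lane arithmetic behind these port speeds is straightforward: the aggregate rate is simply the lane count multiplied by the per-lane rate. A minimal sketch (the lane configurations shown are common examples, not an exhaustive list):

```python
# Aggregate Ethernet port speed from lane count and per-lane rate.
def aggregate_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate port speed in Gb/s."""
    return lanes * lane_rate_gbps

print(aggregate_gbps(8, 50))    # 400G as 8 lanes of 50G
print(aggregate_gbps(8, 100))   # 800G as 8 lanes of 100G
print(aggregate_gbps(16, 100))  # 1.6T as 16 lanes of 100G
```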
These developments reinforce that today's applications are supported by both fibre types. However, while a singlemode fibre infrastructure installed 25 years ago using the same duplex connector is still viable today, multimode infrastructure has had to progress from OM1 to OM2, OM3 and OM4, with OM5 now emerging.
Multimode can efficiently absorb most of the dynamic traffic that is localised within the data centre. It is a highly useful medium for supporting higher bandwidths, from 40G with duplex LC interfaces using BiDi technology to 100G/200G/400G using parallel optics, so its continued relevance within data centre designs is clear.
With this in mind, multimode connectivity requires more design consideration in terms of connector type and maximum channel attenuation. These two factors (performance and connector type) need to be reviewed to ensure the specified solution is capable of supporting the application.
With the availability of pre-terminated links, both fibre types are simple to install, require less time than field termination and are therefore more cost effective. As recommended by the latest cabling standards, pre-terminated fibre cabling is the preferred deployment type, but it needs to be carefully tested and cleaned to ensure the right performance.
For multimode cabling, technicians need to ensure the lowest possible attenuation to support high-speed connections, where attenuation becomes more critical than length. Even a small dust particle can impact the signal. Cabling manufacturers like Siemon therefore provide low-loss components, and the standards require a more professional cleaning method. For singlemode fibre, cleaning is just as crucial to ensure the signal is transmitted. Even though singlemode applications leave more room for attenuation, dust can create critical signal distortion, as the signal has to travel through a 9µm fibre core (compared to 50µm for multimode).
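The attenuation check described above amounts to summing fibre loss and connector loss and comparing the total against the application's budget. The loss values and the 1.9dB budget below are illustrative assumptions in the style of a 100G short-reach multimode application; in practice, use the figures from your components' datasheets and the relevant application standard.

```python
# Sketch of a channel insertion-loss check. All dB values here are
# illustrative assumptions, not datasheet or standards figures.

def channel_loss_db(length_m: float, fibre_loss_db_per_km: float,
                    connector_pairs: int, loss_per_pair_db: float) -> float:
    """Total channel insertion loss in dB: fibre loss plus connector loss."""
    return (length_m / 1000.0) * fibre_loss_db_per_km \
        + connector_pairs * loss_per_pair_db

# Example: 100m multimode channel with two low-loss connector pairs,
# checked against an assumed 1.9dB application budget.
loss = channel_loss_db(length_m=100, fibre_loss_db_per_km=3.0,
                       connector_pairs=2, loss_per_pair_db=0.35)
budget_db = 1.9
print(f"channel loss: {loss:.2f} dB, within budget: {loss <= budget_db}")
```

This is why low-loss components matter: with standard 0.75dB connector pairs, the same channel would exceed the assumed budget even though the length is well within the supported reach.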
With different fibre technologies available in the data centre space today, it can be difficult for data centre owners and operators to make the right choice. While multimode clearly has its place for distances up to 100m, an OS2 singlemode optical fibre infrastructure would be the suggested first choice for longer term application assurance as we transition to higher speeds, as it allows for high data availability and throughput across your various data centre environments.