Intelligent Data Centres Issue 44 | Page 53

FEATURE

As data centre applications become more resource-intensive and fluid, network managers must up their infrastructure game. The data centre environment is constantly changing, which should surprise absolutely nobody. But some changes are more profound than others, and their long-term effects more disruptive.

To be clear, data centres – whether hyperscale, global-scale, multi-tenant or enterprise – aren't the only ones affected by such fundamental changes. Everyone in the ecosystem must adapt, from designers, integrators and installers to OEM and infrastructure partners.
In the Middle East and North Africa region, for instance, the data centre market was valued at an estimated US$3.4 billion in 2022 and is expected to rise to US$10.4 billion by 2028, according to RationalStat's analysis.
Moreover, we are witnessing an increased focus on the region by international companies – Microsoft opened the first global data centre region in Qatar, while AWS launched another region in the UAE, allowing customers to run workloads and store data securely while serving end-users with lower latency. Investments and initiatives such as these bring new opportunities for a cloud-first economy.
We are witnessing the next great migration in speed, with the largest operators now transitioning to 400G applications and already planning the jump to 800G. So, what makes this latest leap significant? For one thing, the move to 400G, then 800G and eventually 1.6T and 3.2T, officially marks the beginning of the octal era, which brings with it some fundamental changes that will affect everyone.
What's driving changes in data centre infrastructure
Increases in global data consumption and resource-intensive applications like Big Data, IoT, AI and Machine Learning are driving the need for more capacity and reduced latency within the data centre. At the switch level, faster, higher-capacity ASICs make this possible. The challenge for data centre managers is how to provision more ports at higher data rates and higher optical lane counts. Among other things, this requires thoughtful scaling with more flexible deployment options. Of course, all of this is happening in the context of a new reality that is forcing data centres to accomplish more with fewer resources (both physical and fiscal). The value of the physical layer infrastructure is largely dependent on how easy it is to deploy, reconfigure, manage and scale.
Identifying the criteria for a flexible, future-ready fibre platform
Several years ago, we began focusing on the next-generation fibre platform. So, we asked our customers and partners: 'Knowing what you know now – about network design, migration and installation challenges, and application requirements – how would you design your next-generation fibre platform?' Their answers echoed the same themes: easier, more efficient migration to higher speeds; ultra-low-loss optical performance; faster deployment; and more flexible design options.
In synthesising the input and adding lessons learned from 40+ years of network design experience, we identified several critical design requirements necessary for addressing the changes affecting both our data centre customers and their design, installation and integration partners:
• The need for application-based building blocks
• Flexibility in distributing increased switch capacity
• Faster , simpler deployment and change management
Application-based building blocks
As a rule, application support is limited by the maximum number of I/O ports on the front of the switch. The key to maximising port efficiency lies in your ability to make the best use of the switch capacity. Traditional four-lane quad designs provided steady migration to 50G, 100G and 200G. But at 400G and above, the 12- and 24-fibre configurations used to support quad-based applications become less efficient, leaving significant capacity stranded at the switch port. This is where octal technology comes into play.
MPO breakouts become the most efficient multi-pair building block for trunk applications. Moving from quad-based deployments to octal configurations doubles the number of breakouts, enabling network managers to eliminate some switch layers. Moreover, today's applications are being designed for 16-fibre cabling.
Yet, not every data centre is ready to move away from its legacy 12- and 24-fibre deployments. These facilities must still be able to support and manage applications without wasting fibres or losing port counts. Therefore, efficient application-based building blocks for 8f, 12f and 24f configurations are needed as well.
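The fibre-count arithmetic behind this point can be sketched in a few lines. The figures below (8 fibres per octal application, legacy 12- and 24-fibre trunks, 16-fibre cabling) come from the article; the helper function itself is purely illustrative and assumes fibres within a trunk can be regrouped at the patch panel with breakout or conversion modules:

```python
# Illustrative sketch: how many 8-fibre (octal) applications a trunk of a
# given fibre count can carry, and how many fibres are left stranded.
# Assumes fibres within one trunk can be regrouped at the patch panel.

def octal_utilisation(trunk_fibres: int, fibres_per_app: int = 8):
    apps = trunk_fibres // fibres_per_app      # whole applications supported
    stranded = trunk_fibres % fibres_per_app   # fibres left unused
    utilisation = (apps * fibres_per_app) / trunk_fibres
    return apps, stranded, utilisation

for trunk in (12, 16, 24):
    apps, stranded, util = octal_utilisation(trunk)
    print(f"{trunk}-fibre trunk: {apps} octal app(s), "
          f"{stranded} stranded fibre(s), {util:.0%} utilisation")
```

A 12-fibre trunk strands 4 fibres per octal application (67% utilisation), while 16-fibre cabling maps natively to one octal application with nothing stranded; a 24-fibre trunk only reaches full use if conversion modules regroup its fibres across three applications. The same doubling applies at the switch face: an illustrative 32-port switch yields 256 breakout connections with 8-lane octal optics versus 128 with 4-lane quad optics.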
Design flexibility
Another key requirement is a more flexible design that enables data centre managers and their design partners to quickly redistribute fibre capacity at the patch panel and adapt their networks to support changes in resource allocation. One way to achieve this is to build modularity into the panel components, enabling alignment between point-of-delivery (POD) and network design architectures.
In a traditional fibre platform design, components such as modules, cassettes and adapter packs are panel-specific. As a result, changing components that have different configurations also involves
Ehab Kanary, VP Sales, EMEA Emerging Markets at CommScope
www.intelligentdatacentres.com