Intelligent CIO Europe Issue 34 | Page 79

There are several boxes companies need to check to successfully enable AI at scale.

First, businesses must ensure they have the right infrastructure to support the data acquisition and collection needed to prepare datasets for AI workloads.
In particular, attention must be given to the effectiveness and cost of collecting data from Edge or cloud devices where AI inference runs. Ideally, this should happen across multiple worldwide regions, leveraging high-speed connectivity and ensuring high availability.
This means businesses need infrastructure supported by a network fabric that can offer the following benefits:
• Proximity to AI data: 5G and fixed-line core nodes in enterprise data centres bring AI data from devices in the field, offices and manufacturing facilities into regional interconnected data centres for processing along a multi-node architecture.
• Direct cloud access: Provides high-performance access to hyperscale cloud environments to support hybrid deployments of AI training or inference workloads.
• Geographic scale: By placing their infrastructure in multiple data centres located in strategic geographic regions, businesses enable cost-effective acquisition of data and high-performance delivery of AI workloads worldwide.
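The trade-off between proximity, cloud access and geographic scale is ultimately a cost question. A minimal sketch of that reasoning is below; every figure (site counts, daily data volumes, per-GB transfer rates) is a hypothetical assumption for illustration, not a quoted price.

```python
# Illustrative sketch: estimating the monthly cost of moving inference data
# from Edge sites into a regional data centre. All figures are assumptions
# for illustration, not quoted prices.

def monthly_transfer_cost(sites, gb_per_site_per_day, cost_per_gb, days=30):
    """Return (total GB moved, transfer cost) for one region per month."""
    volume = sites * gb_per_site_per_day * days
    return volume, volume * cost_per_gb

# A hypothetical deployment: three regions with different site counts and
# per-GB rates (e.g. public-cloud egress vs. a direct interconnect).
regions = {
    "EU":   {"sites": 40, "gb_per_day": 25, "cost_per_gb": 0.08},
    "US":   {"sites": 60, "gb_per_day": 25, "cost_per_gb": 0.05},
    "APAC": {"sites": 20, "gb_per_day": 25, "cost_per_gb": 0.09},
}

for name, r in regions.items():
    volume, cost = monthly_transfer_cost(
        r["sites"], r["gb_per_day"], r["cost_per_gb"]
    )
    print(f"{name}: {volume:,.0f} GB/month, ~${cost:,.2f}")
```

Running numbers like these per region makes it clear where direct interconnects or regional aggregation pay for themselves.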
As businesses consider training AI/Deep Learning models, they must choose a data centre partner that can, in the long term, accommodate the power and cooling technologies that GPU-accelerated compute requires. This entails:
• High rack density: To support AI workloads, enterprises will need to get more computing power out of each rack in their data centre. That means much higher power density. In fact, most enterprises would need to scale their maximum density at least three times to support AI workloads – and prepare for even higher levels in the future.
• Size and scale: Key to leveraging the benefits of AI is doing it at scale. The ability to run hardware (GPUs) at scale is what unlocks large-scale computation.
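The rack-density point can be made concrete with a back-of-the-envelope calculation. The sketch below assumes an 8 kW legacy rack budget and roughly 6.5 kW per dense GPU server; both are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope sketch of the rack-density argument. The power
# figures (8 kW legacy racks, ~6.5 kW per GPU server) are illustrative
# assumptions, not vendor specifications.

def racks_needed(total_servers, kw_per_server, kw_per_rack):
    """Racks required when power, not floor space, is the binding constraint."""
    servers_per_rack = max(1, int(kw_per_rack // kw_per_server))
    return -(-total_servers // servers_per_rack)  # ceiling division

legacy_rack_kw = 8.0              # typical enterprise rack budget (assumed)
ai_rack_kw = 3 * legacy_rack_kw   # the "at least three times" scaling
gpu_server_kw = 6.5               # e.g. a dense multi-GPU node (assumed)

cluster = 64  # hypothetical number of GPU servers to deploy
print("legacy density:", racks_needed(cluster, gpu_server_kw, legacy_rack_kw), "racks")
print("3x density:   ", racks_needed(cluster, gpu_server_kw, ai_rack_kw), "racks")
```

Under these assumptions a legacy-density hall needs one rack per GPU server, while tripling the power budget cuts the footprint by roughly two-thirds – which is why density, not floor space, is the real constraint.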

A realistic path to AI
Most on-premises enterprise data centres aren't capable of handling that level of scale. Public cloud, meanwhile, offers the path of least resistance, but it isn't always the best environment to train AI models at scale or deploy them in production due to either high costs or latency issues.
So, what's the best way forward for companies that want to design an infrastructure to support AI workloads?
Important lessons can be learned by examining how businesses that are already gaining value from AI have chosen to deploy their infrastructure.
Hyperscalers like Google, Amazon, Facebook and Microsoft successfully deploy AI at scale with their own core and Edge infrastructure, often deployed in highly connected, high-quality data centres.
They use colocation heavily around the globe because they know it can support the scale, high density and connectivity they need.
By leveraging the knowledge and experience of these AI leaders, enterprises will be able to chart their own destiny when it comes to AI.