
FINAL WORD
The problem is not just that low sample rates are bad; they do not tell the whole story, which can lead analysis in the wrong direction.

As IT professionals, it is all too common for us to hear buzzwords peppered into conversations at every opportunity. While many ultimately fall out of favour, those that truly represent a shift in the way the industry operates earn their rightful place in the industry vernacular. One such phenomenon has been unified observability, prompted by the need for organisations to win back control over increasingly complex and distributed IT environments.

Historically, observability, the predecessor to this term, has been seen through the lens of its ability to help DevOps teams combat the challenges they face in complicated, highly distributed cloud-native environments. But this is changing; observability is becoming a function that helps teams identify and solve wider problems across application monitoring, testing and management within these environments. As a result, unified observability has emerged as the broader definition that fits this expanded set of challenges.
Implementing unified observability can be challenging, particularly for large, global organisations. Take a company with 10,000 employees, for example, all of whom will expect a robust, reliable digital experience. However, they will often be working in swiftly changing hybrid working environments – each with their own laptop configuration and Wi-Fi setup – and expecting the same digital experience they would receive on-premises.
This all comes before factoring in potentially hundreds of thousands of customers. Their unknown mix of legacy on-premises systems, cloud applications and shadow IT makes the observability quandary even more complex.
In these scenarios, a successful unified observability approach is one that can cut through silos and locales, collecting information from all data sources with full fidelity.
Tools, effectiveness
A recent survey commissioned by Riverbed and undertaken by IDC found that 90% of IT teams are using observability tools to gain visibility and effectively manage their current mix of geographies, applications and networking requirements. Around half of those teams use six separate observability tools, resulting in tens of thousands of alerts per day – far more than any IT team can feasibly address. The amount of data these tools produce, alongside the vast number of alarms, makes it difficult to ensure that all important information is collected.
This challenge is further compounded by teams that use limited or outdated tools. Almost two thirds of the IDC survey's respondents said their organisations used tools that concentrated only on the company's complex layers of hardware configurations, cloud-based services and legacy on-premises applications. The survey also revealed that 61% of IT teams felt this narrow view impeded productivity and collaboration.
This is where unified observability has found its footing. Smart IT teams are now using a single unified observability platform to consolidate telemetry from across domains and devices at full fidelity, rather than sampling and capturing only some of the data, which can leave significant gaps. It would be analogous to a company capturing only a fraction of the customer complaints it received on Black Friday: unaware of the full range of problems, unable to solve them, and left with a huge number of customers walking out of the store.
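To make that sampling gap concrete, the short Python sketch below is a hypothetical illustration, not drawn from the Riverbed or IDC research: it simulates a day's telemetry in which a rare error touches 0.5% of events, then compares what a 1% sample surfaces against full-fidelity collection. The event counts and rates are assumptions chosen purely for illustration.

```python
import random

# Hypothetical illustration: a rare error affects 0.5% of events, and a
# monitoring tool keeps only a 1% sample of the telemetry it sees.
random.seed(42)

EVENTS = 100_000
RARE_ERROR_RATE = 0.005   # 0.5% of events carry the rare error
SAMPLE_RATE = 0.01        # the tool retains 1% of events

# Generate one day of events, most of which are healthy.
events = ["rare_error" if random.random() < RARE_ERROR_RATE else "ok"
          for _ in range(EVENTS)]

# Sampling: keep each event with 1% probability.
sampled = [e for e in events if random.random() < SAMPLE_RATE]

print(f"Full fidelity records {events.count('rare_error')} rare errors")
print(f"A {SAMPLE_RATE:.0%} sample surfaces {sampled.count('rare_error')}")
```

Under these assumed numbers, full-fidelity collection records roughly 500 occurrences of the rare error, while the 1% sample typically surfaces only a handful, which is why sampled data can understate or entirely hide an emerging problem.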