nov dec | Page 9

grows exponentially with how many classes the model must identify. Since training from scratch is difficult, most industrial players take an existing model and add further training to customise it for their purpose.

VeEX: Yes, we are moving in that direction. Many service providers are beginning to adopt or explore Agentic AI to enhance customer experience, operations, and security. Recent case studies presented at several international telecom conferences show Agentic AI's first major adoption in the customer service/experience sector. There is still work to be done in developing specialised telecom AI agents for different network tasks and enabling them to communicate with each other (agent-to-agent) to achieve a fully self-healing telecom network. As more intelligent data is fed to the autonomous agents, they learn and analyse situations to make better decisions over time. The overall consensus is that standards-based Agentic AI will help achieve fully automated diagnostics and self-healing (in the cases that do not require human interaction), to prevent service interruption and maintain optimum network performance. While we are not there yet, the path is clear, and the industry will continue toward an autonomous, self-managing network.

VIAVI Solutions: In short, yes, but it is not that simple. The problem space in T&M is always shifting, so new technologies, vendors, services, and software releases will contain new (and exciting) problems that have not been seen before. Some of them will have appeared in other products or solutions in the past (e.g., an incompatible Ethernet MTU size in the integration between an O-RU and an O-DU in a new O-RAN setup involving two vendors), where general past learning can be applied. Others are genuinely novel, so it is possible to draw the wrong conclusions from past experience if one lacks the new context (e.g., the AI has not yet been trained on it).
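As a rough illustration of the O-RU/O-DU integration problem VIAVI describes, a diagnostic tool could compare the Ethernet MTU configured on each side of the fronthaul link. The config dicts and field names below are invented for this sketch; real equipment exposes such settings through vendor-specific management interfaces.

```python
# Hypothetical sketch: flag an Ethernet MTU mismatch between an O-RU and an
# O-DU on an O-RAN fronthaul link. The "eth_mtu" field name and the values
# are invented for illustration only.

def check_fronthaul_mtu(o_ru_config: dict, o_du_config: dict) -> list[str]:
    """Return a list of human-readable findings for the integration check."""
    findings = []
    ru_mtu = o_ru_config["eth_mtu"]
    du_mtu = o_du_config["eth_mtu"]
    if ru_mtu != du_mtu:
        findings.append(
            f"MTU mismatch: O-RU={ru_mtu} bytes, O-DU={du_mtu} bytes; "
            "frames larger than the smaller MTU may be dropped."
        )
    if min(ru_mtu, du_mtu) < 1500:
        findings.append("MTU below standard Ethernet 1500 bytes; check link config.")
    return findings

# Example: two vendors shipped with different defaults (invented values).
issues = check_fronthaul_mtu({"eth_mtu": 9000}, {"eth_mtu": 1500})
for issue in issues:
    print(issue)
```

The point of VIAVI's example is that a rule this simple is only written after someone has seen the failure once; an AI model that has never encountered the pattern has nothing to match against.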
Is there an argument for pooling learning on generic functions for T&M AI models?

Accedo: Yes, with strategic pooling and privacy-preserving approaches. Certain test & measurement functions are inherently generic, such as device compatibility patterns, network performance characteristics, and fundamental video streaming technology properties. These functions are consistent across providers and can benefit significantly from pooled learning. Device-specific behaviours are consistent across platforms, and shared learning can provide larger datasets for rare device combinations. However, competitive differentiation requires selective pooling. Content performance metrics, user behaviour patterns, business intelligence, and revenue strategies must remain private. We recommend that the industry establishes consortiums to define what's generic versus proprietary, develops privacy-preserving learning protocols, and creates shared models for common scenarios, while protecting competitive data. Federated learning, synthetic pattern generation, and industry-standard databases are all viable directions.

Aprecomm: Pooling learning on generic functions offers major advantages for Telecom & Media (T&M) AI models. Many industry tasks, such as fault prediction, anomaly detection, demand forecasting, and customer experience management, share similar data structures and objectives. By jointly training models on these generic functions, organisations can build stronger foundational models that recognise universal patterns in network behaviour, service quality, and customer interaction. This approach drives significant efficiency gains. It reduces duplication of effort, lowers computing and data-labelling costs, and enables smaller operators to access high-performing AI capabilities without extensive resources. Shared models can then be fine-tuned using proprietary data, accelerating innovation and improving accuracy for operator-specific contexts.
Pooling learning also enhances transfer learning and adaptability.

Bitmovin: There is definitely value in pooling AI learning across multiple services, particularly when it comes to identifying generic issues such as buffering patterns, dropped segments, or bitrate instability. Broader datasets can help models recognise recurring problems that affect the entire industry and accelerate the speed at which those issues are diagnosed. However, several caveats need to be called out, as they highlight the issues streaming services may face when pooling in this way.

Bridge: Bridge has always believed in collaboration over competition, and that when the industry works together, everyone benefits. That said, AI complicates what collaboration looks like. Once knowledge and data enter a model, control over how it's used becomes blurred. While some functions may appear generic, the truth is that what sets companies apart are the ways they interpret, structure, and present that data. For Bridge Technologies, our innovation lies in transforming T&M data into usable production functions and intuitive visualisations. We support open standards and collective progress, but that doesn't mean handing the industry's entire knowledge base over to a common AI pool. Some caution, and a lot of consideration, is healthy.

Leader: Yes, particularly where analysis of common signal behaviours can benefit multiple users or system types. For example, AI models trained to recognise typical colourimetry errors, packet loss, or timing drift could form the foundation of more universal diagnostic frameworks. However, each broadcaster's workflow is unique, and the metadata or signal environment can vary significantly. Leader believes that a hybrid approach, combining shared, domain-agnostic models with customer-specific refinement, will deliver the best balance of efficiency and accuracy.
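The privacy-preserving pooling that several respondents point to (Accedo names federated learning explicitly) works by sharing only model parameters, never raw telemetry: each operator trains locally, and a coordinator averages the resulting weights. The sketch below shows federated averaging for a toy linear model; the datasets, dimensions, and size-weighted averaging scheme are invented for illustration.

```python
# Minimal federated-averaging sketch: each "operator" fits a local model and
# only weight vectors are pooled, so raw measurements never leave their owner.
# Data and dimensions are invented for illustration.

def local_update(weights, examples, lr=0.1):
    """One pass of gradient descent on a local dataset for a linear model
    y ~ w . x, using mean-squared-error loss."""
    n = len(examples)
    for x, y in examples:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        weights = [w - lr * err * xi / n for w, xi in zip(weights, x)]
    return weights

def federated_average(weight_sets, sizes):
    """Average local weight vectors, weighted by local dataset size."""
    total = sum(sizes)
    dim = len(weight_sets[0])
    return [
        sum(ws[i] * n for ws, n in zip(weight_sets, sizes)) / total
        for i in range(dim)
    ]

# Two operators hold private datasets drawn from the same rule y = 2 * x.
op_a = [([1.0], 2.0), ([2.0], 4.0)]
op_b = [([3.0], 6.0)]

global_w = [0.0]
for _ in range(50):  # federated rounds
    w_a = local_update(list(global_w), op_a)
    w_b = local_update(list(global_w), op_b)
    global_w = federated_average([w_a, w_b], [len(op_a), len(op_b)])

print(round(global_w[0], 2))  # converges to 2.0
```

Because only `w_a` and `w_b` cross organisational boundaries, each operator keeps its raw data private while the shared model still learns the common pattern, which is exactly the trade-off the consortium proposals above are trying to formalise.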
Telestream: There is a case for selectively pooling learning on truly generic, non-differentiating functions in T&M, as long as IP boundaries are clearly established. In T&M, AI capabilities fall into two broad categories: 1) Foundational and generic capabilities: things every vendor needs to participate in the market, like standards-based metadata extraction and alarm occurrences where thresholds are defined by industry norms. Collaboration in this area can lift the baseline quality for everyone without giving away strategic advantage. 2) Proprietary and differentiating capabilities: this is where we would not pool learning. These models encode our 'secret sauce': operational playbooks, service assurance logic, and domain-specific insight into where problems actually matter. That is core IP and should remain private, customer-specific, and under our control. We are open to collaboration at the shared layer, because it improves overall quality for the whole industry.

Torque: Sure, of course there is, but I think it is unlikely to happen. Initially, AI diagnostics of broadcast network problems will work at about the same level as today's traditional T&M instruments. The tools and standards to measure objective parameters are already there: TR 101 290, network bitrate, jitter, RF signal quality, etc. Today's engineer interprets data from those measurements to understand what the problem is and how to fix it. For engineers to work effectively, it is in everyone's best interest to have a common set of measurement algorithms and a common set of measurement units.

VeEX: Yes, pooling learning on generic functions could be beneficial. Many issues, such as packet loss, latency, jitter, and certain types of optical impairments, are universal across networks. Sharing training data for these common issues could accelerate AI development, improve diagnostic accuracy, and help all systems learn faster.
The challenge lies in how we enable this without compromising intellectual property or competitive advantage. Vendors are typically cautious about exposing algorithms or proprietary methods. Ideally, the best path forward may be collaborative learning that shares generalised patterns or standardised
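To make the "universal metrics" point concrete: a check on a metric such as inter-packet jitter needs no proprietary logic at all, which is what makes it a candidate for pooling. The detector below is a deliberately simple z-score sketch; the threshold and sample values are invented for illustration, and a pooled industry model would replace this hand-set rule with one trained on shared data.

```python
# Sketch of a generic, vendor-neutral anomaly check on a universal metric
# (here inter-packet jitter in ms). Threshold and data are invented; a pooled
# model trained on shared industry datasets would play this role in practice.
from statistics import mean, stdev

def jitter_anomalies(samples_ms, z_threshold=2.5):
    """Return indices of samples more than z_threshold standard deviations
    from the mean of the series."""
    mu = mean(samples_ms)
    sigma = stdev(samples_ms)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(samples_ms)
            if abs(s - mu) / sigma > z_threshold]

# Steady stream with one spike (invented values).
stream = [2.1, 2.0, 2.2, 1.9, 2.1, 48.0, 2.0, 2.2, 2.1, 2.0]
print(jitter_anomalies(stream))  # -> [5]
```

Nothing in this function encodes a vendor's "secret sauce"; the competitive layer the respondents want to protect is what a product does with the flagged index, such as root-cause reasoning and remediation playbooks.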
EUROMEDIA 9