FUTURE DEFENCE AND SECURITY
Secure and Trustworthy Machine Learning
Improving the robustness and security of machine learning technologies
Research Project
Recent advances in machine learning, particularly deep neural networks and large language models, are transforming the design and implementation of decision-making systems. However, due to their black-box nature, brittleness, and lack of safety guarantees, significant challenges remain in their adoption in potentially high-payoff applications, such as autonomous systems and critical technologies.
The research conducted by the Information Security and Privacy Research Group at UNSW Sydney has uncovered novel attacks on existing and emerging machine learning models, spanning the entire machine learning production lifecycle, including training and testing. The team has also proposed a range of novel defences for improving the security and resilience of machine learning models.
This work has wide applicability across traditional machine learning, federated learning, and graph neural networks, in areas including transportation, social networks, recommendation systems, Internet traffic, and many more.
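To give a flavour of the evasion attacks studied in this space, the sketch below crafts an adversarial example in the FGSM (fast gradient sign method) style against a toy linear classifier. This is a minimal illustration in pure NumPy, not the group's actual code; the model weights, input, and perturbation budget are all assumed values chosen for the example.

```python
# Minimal FGSM-style evasion attack on a toy logistic-regression classifier.
# All weights and inputs here are illustrative assumptions.
import numpy as np

# Toy linear model: score = w @ x + b, predicted label = 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the hard label (0 or 1) for input x."""
    return int(w @ x + b > 0)

def fgsm_perturb(x, y, eps):
    """One gradient-sign step that pushes x towards the decision boundary.

    For logistic loss L = log(1 + exp(-t * (w @ x + b))) with t = +1/-1,
    sign(dL/dx) = -t * sign(w); the attack adds eps in that direction,
    so every feature moves by exactly eps (an L-infinity budget).
    """
    t = 1 if y == 1 else -1
    return x + eps * (-t) * np.sign(w)

x = np.array([0.2, -0.4, 0.1])       # clean input, classified as 1
y = predict(x)
x_adv = fgsm_perturb(x, y, eps=0.5)  # small, bounded perturbation
print(y, predict(x_adv))             # the label flips: 1 -> 0
```

The attack succeeds because each coordinate of the perturbation is aligned against the model's weight vector, so a small L-infinity budget produces a large change in the decision score; the same principle, applied via backpropagated gradients, underlies adversarial examples for deep networks.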
Key capabilities
> Expertise in state-of-the-art machine learning concepts, e.g., adversarial learning, generative models, and transfer learning
> Expertise across multiple application domains
Differentiators
> Unique work on studying the robustness of graph neural networks
> Proposed a range of novel attacks on federated learning systems
> Novel strategies for improving the security of deployed machine learning models
Key customers
> Any organisation (government, defence, industry) that uses machine learning technologies within their operational ecosystem
Key partnerships
> DSTG
> CSIRO
Quality accreditations and awards
> Numerous papers published at top security and machine learning conferences
> Artefact evaluation badges that attest to the validity of the released code base
unsw.to/salil-kanhere