The holistic tool for security evaluations of machine learning systems
Middlesex University London
Professional Cyber Services
Machine learning (ML) fuels Industry 4.0 and is increasingly becoming a cornerstone of smart systems. ML allows us to learn from data and to provide a timely and accurate response to specific intelligent requirements. Despite its undeniable advantages, ML has not been deployed to its full potential. One of the main concerns when deploying ML-based systems is that the ML models might harbour blind-spots in the form of security vulnerabilities, performance issues, or unknown responses. Although these blind-spots are system-dependent, our research found that existing software testing tools and techniques can hardly be applied to identify them, because those tools do not take into account details such as correctness, optimal learning rates, over-fitting, feature computation, gradient computation, and optimal regularization, among others.
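For instance, over-fitting, one of the blind-spots named above, is easy to probe with a held-out data split, yet no conventional unit-testing framework checks for it. The sketch below is a hypothetical, minimal illustration in plain NumPy (the data, model, and threshold are our own assumptions, not tied to any particular tool): an over-parameterised model memorises noisy labels, and a large gap between training and test accuracy flags the problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy labels with few samples and many features: a model can memorise noise.
X = rng.normal(size=(60, 30))
y = (X[:, 0] + rng.normal(0, 2.0, 60) > 0).astype(int)

X_train, X_test = X[:40], X[40:]
y_train, y_test = y[:40], y[40:]

# Over-parameterised linear fit: least squares on +/-1 targets.
w, *_ = np.linalg.lstsq(X_train, 2 * y_train - 1, rcond=None)
acc = lambda A, t: float(np.mean(((A @ w) > 0).astype(int) == t))

train_acc, test_acc = acc(X_train, y_train), acc(X_test, y_test)
print(f"train={train_acc:.2f} test={test_acc:.2f}")

# A blind-spot check: flag a suspicious generalisation gap
# (the 0.2 threshold is an illustrative choice).
if train_acc - test_acc > 0.2:
    print("WARNING: large generalisation gap, possible over-fitting")
```

The model fits the training set almost perfectly while generalising poorly, which is exactly the kind of defect that passes ordinary functional tests unnoticed.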
MLighter is the first tool to integrate and simplify multiple testing strategies for detecting blind-spots in machine learning. It verifies the performance, security, and functionality of your ML system before cyber-attacks can exploit it. Our clients are developers and QA testers of ML systems who need to evaluate the quality of their final ML products, which requires knowledge of different aspects of the developed systems and of the ML libraries they use. Our web-based solution includes the tools our clients need for the automatic validation of ML systems in terms of performance, security, and functionality. It includes a database of blind-spots and vulnerabilities and a flexible front-end designed to allow extensions.
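To illustrate the kind of security check such validation involves, here is a minimal, hypothetical sketch in plain NumPy (not MLighter's actual code; the model and perturbation budget are assumptions for the example): it probes a trained classifier for inputs whose predictions flip under small random perturbations, a simple robustness blind-spot that conventional software tests would miss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 around (-1,-1), class 1 around (+1,+1).
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A deliberately simple stand-in model: sign of a linear score,
# with weights fitted by least squares on +/-1 targets.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], 2 * y - 1, rcond=None)
predict = lambda pts: (np.c_[pts, np.ones(len(pts))] @ w > 0).astype(int)

def count_flips(points, eps=0.3, trials=20):
    """Count points whose prediction flips under small random perturbations."""
    base = predict(points)
    flips = 0
    for p, label in zip(points, base):
        noise = rng.normal(0, eps, (trials, 2))
        if np.any(predict(p + noise) != label):
            flips += 1
    return flips

print("unstable points:", count_flips(X))
```

Points flagged here lie close to the decision boundary; a fuller evaluation would also use targeted (gradient-based) perturbations rather than only random noise.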