Why is this important for your organization?

Model theft attacks seek to duplicate protected models and algorithms. Even when an adversary cannot directly replicate a protected model or algorithm, access to the model – including API-only access – can be combined with external information to mimic it. The resulting imitation model can then be used to gain an advantage similar to that of the original owner, or to begin predicting the original owner's anticipated moves. In other words, the adversary has gained the information contained within the imitated model.

Data poisoning attacks inject malicious data into training sets to alter the development of the algorithm or model. Here the adversary corrupts the “education” of the algorithm either to sabotage the system in a general way or to steer it toward behavior the adversary can anticipate and exploit. This can occur with the original static training dataset or through data fed into the streams used for online learning and model updates.

Adversarial/evasion attacks seek to sabotage the model in a similar way to data poisoning, but by manipulating the inputs presented to a deployed model rather than its training data.

Finally, model inversion attacks attempt to acquire information about the training data, whether that is an understanding of properties of the data, such as distributions – known as property inference attacks – or knowledge of whether a particular data point was part of the original training data – known as membership inference attacks.
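As a rough illustration of the model theft scenario described above, the short sketch below trains a stand-in “victim” classifier, queries it as a black box the way an API consumer would, and fits a surrogate on the returned predictions. The dataset, model choices, and library calls are assumptions made for the example, not a description of any particular deployed system.

    # Minimal model extraction (theft) sketch, assuming only black-box access
    # to the victim model's predictions. All models and data are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # A "victim" model the adversary cannot inspect directly.
    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    X_train, X_query, y_train, _ = train_test_split(X, y, test_size=0.5, random_state=0)
    victim = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # The adversary submits its own inputs to the black-box API, records the
    # outputs, and trains a surrogate that mimics the victim's behavior.
    stolen_labels = victim.predict(X_query)
    surrogate = LogisticRegression(max_iter=1000).fit(X_query, stolen_labels)

    agreement = accuracy_score(victim.predict(X_query), surrogate.predict(X_query))
    print(f"Surrogate agrees with the victim on {agreement:.1%} of queried inputs")

Even a simple surrogate like this can track the victim closely on the queried inputs, which is one reason limiting and monitoring query access is a commonly discussed defense.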

These attacks can occur throughout the AI and machine learning (ML) development and deployment pipeline, including at access points where only input/output access – known as black-box access – is provided. Many organizations and sectors now use and deploy AI/ML models to gain a competitive edge, and some of these models rely on personally identifiable information (PII) and personal health information (PHI). Whether your model is susceptible to robustness issues, theft of PHI/PII, loss of that competitive edge, or any of the other risks above, understanding the risk and knowing how to prevent, detect, and mitigate it are of utmost importance.

What the RAIL Offers

As experts in AI Security and Robustness, the RAIL is prepared to help organizations understand the need for AI security defenses, the risks facing their various efforts, how to put defensive measures in place, how to detect security breaches, and how to mitigate the effects of an attack. The RAIL will work with your organization to quantify your risk, harden your models and algorithms, implement detection methods, and map out potentially affected networks to develop proper mitigation strategies in case of an attack. These mitigation strategies include preventing loss and capturing lessons learned for future attacks. RAIL team members are adept at developing these strategies, defenses, and methods at all stages of AI/ML development – from ideation through deployment and online learning. Our team prides itself not only on its current work in this field with the Department of Defense, but also on its active contribution to fundamental research in this relatively young and incredibly important area. Contact the RAIL today to see how we can help your organization deploy secure and robust models!

Recent and Current Efforts

The RAIL team recently published an article in the Journal of Cybersecurity and Privacy entitled “An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks”. This article is part of a recent effort to evaluate the fundamental vulnerability of datasets to membership inference attacks. The study examined over 100 common datasets from various sources to determine which characteristics of a dataset make it more or less vulnerable to membership inference. While previous efforts in this area focused on particular models or attack types, this effort was the first of its kind to take a model- and attack-method-agnostic approach to understanding this vulnerability.
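For readers curious about what a membership inference attack looks like in practice, the sketch below implements a simple confidence-threshold baseline: records the model is unusually confident about are guessed to be training-set members. This is a generic illustration, not the method evaluated in the article, and the dataset and model choices are assumptions made for brevity.

    # Minimal membership inference sketch using a confidence-threshold baseline.
    # Dataset and model choices are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

    # Target model trained only on the "member" half of the data.
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

    # Attack score: the model's confidence in its predicted class for each record.
    conf_members = model.predict_proba(X_in).max(axis=1)
    conf_nonmembers = model.predict_proba(X_out).max(axis=1)

    # An AUC well above 0.5 means confidence alone leaks membership information.
    scores = np.concatenate([conf_members, conf_nonmembers])
    labels = np.concatenate([np.ones(len(conf_members)), np.zeros(len(conf_nonmembers))])
    print(f"Membership inference AUC: {roc_auc_score(labels, scores):.2f}")

How far that AUC sits above 0.5 varies with both the model and the dataset, and which dataset properties drive that gap is the question the study set out to answer.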


By the Numbers

  • 2 conference presentations
  • 1 publication
  • 1 line of effort
  • 1 client served

Contact Us

Reach out to us at research@rotundasolutions.com!
