Honeycomb is sponsoring InApps’s coverage of Kubecon+CloudNativeCon North America 2020.

Machine learning models, which can cost millions of dollars to produce, can be easily copied through surreptitious means, warned David Aronchick, partner and product manager for the Azure Innovations Group in the Office of the CTO at Microsoft, during a presentation at KubeCon+CloudNativeCon earlier this month.

While there is no easy way today to prevent such attacks, Aronchick advised that the best defense is to develop and run an MLOps pipeline, one that can respond quickly to new attack vectors. MLOps is the practice of integrating the data scientist’s work with the entire process of building, deploying, and running models, including security at every stage. Aronchick himself heads up the development of KubeFlow, a machine learning operations platform built for Kubernetes.

“The truths you can’t avoid here: your models will be attacked, your pipelines will have issues. And the game is all about mitigation of harms and quick recovery. And you can do that using an MLOps pipeline,” Aronchick said.

In his talk, Aronchick offered a few examples of how malicious hackers could replicate ML models for their own use.

One attack, called distillation, seeks to reproduce an otherwise-inaccessible black-box model by using its API to submit queries and log the results. For an image recognition model, for instance, an attacker could submit various shapes and record the responses. With enough input, the attacker could determine what kind of shape the model is looking for (a triangle, say). “I am able to recreate the underlying model with just the examples,” Aronchick said.
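To see how little machinery this takes, here is a minimal sketch of the distillation idea in Python. The `query_black_box` function is a hypothetical stand-in for the victim model’s prediction API, and the input dimensions are arbitrary; the surrogate training uses standard scikit-learn.

```python
# Hypothetical distillation sketch: probe a black-box API, log the
# (input, label) pairs, then train a local "student" model on them.
import numpy as np
from sklearn.neural_network import MLPClassifier

def query_black_box(x):
    """Stand-in for the victim model's prediction endpoint (assumed)."""
    raise NotImplementedError("replace with a real API call")

rng = np.random.default_rng(0)
probes = rng.random((5000, 64))   # ~5,000 synthetic queries, per the talk
labels = np.array([query_black_box(x) for x in probes])

# The student learns to mimic the victim's decision boundary from the logs.
student = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
student.fit(probes, labels)
```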

Two different researchers were able to recreate a model with this approach using fewer than 5,000 queries, or roughly two queries a minute over less than two days. “It’s really not a lot,” Aronchick said.

Another type of attack, called model extraction, targets natural language processing (NLP), in which a model is used to answer queries against a large body of text. These days, NLP implementations are often based on Google’s BERT (Bidirectional Encoder Representations from Transformers) model. Google researchers found that by presenting either random words or words drawn from the model’s own training corpus, they were able to “get a really accurate representation of what the model is looking for,” he said.

The researchers, for example, were able to reproduce the model with 86% of its accuracy through a single pass over all the words in the model’s corpus (a Wikipedia page, for instance).

Aronchick estimated that it could take only a few thousand dollars’ worth of cloud computing to approximate such a model with this technique: “It is much, much less than the millions of dollars to train that model in the first place.”
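The query-generation side of that extraction loop is also simple to sketch. In the hypothetical Python below, `victim_qa` stands in for the deployed model’s question-answering endpoint, and the word list and context document are assumptions for illustration only.

```python
# Hypothetical model-extraction sketch: fire nonsense queries built from
# random words at a QA model and log its answers as training data.
import random

vocab = open("wordlist.txt").read().split()   # any English word list (assumed)
context = open("wiki_article.txt").read()     # e.g. a Wikipedia page (assumed)

def random_query(length=8):
    """Build a query from random words, per the extraction research."""
    return " ".join(random.choices(vocab, k=length)) + "?"

def victim_qa(question, context):
    """Stand-in for the victim model's QA API (assumed)."""
    raise NotImplementedError("replace with a real API call")

dataset = [(q, context, victim_qa(q, context))
           for q in (random_query() for _ in range(10_000))]
# `dataset` can now fine-tune a local BERT copy that mimics the victim.
```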

He also delved a bit into data leakage, where an ML system may inadvertently reveal personally identifying data. The autocomplete in a browser or email reader, for instance, offers clues to the habits of the end user. “It will reveal a lot about what I write in other e-mails,” he said. Jogging apps can reveal the locations of top-secret buildings through the patterns of multiple runners. A graph of a user’s contacts could reveal a secret organization or mission, such as a drive to create a union.

While Aronchick did not offer any specific advice for preventing scofflaws from copying multimillion-dollar models, he did stress the importance of setting up a system that can easily be changed in light of such breaches. You can spend your energy responding to each attack individually, yet “at best you can stop [attacks], but you’re not adding a lot of value.” It is better to proactively set up a pipeline that can rapidly evolve to prevent new attacks.

“If you’re exposing [your model] to the world, it will be attacked. What is most important is how quickly you can update, iterate, and detect those attacks, and make changes quickly,” he said.
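One concrete detection measure in that spirit, an illustration rather than something from the talk itself, is to watch per-client query volume, since extraction attacks need thousands of queries. Below is a minimal Python sketch; the window size and threshold are assumed values to be tuned per workload.

```python
# Hypothetical sliding-window monitor: flag clients whose query volume
# looks like an extraction attempt rather than normal usage.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600           # assumed: one-hour sliding window
MAX_QUERIES_PER_WINDOW = 500    # assumed threshold; tune per workload

query_log = defaultdict(deque)  # client_id -> timestamps of recent queries

def record_and_check(client_id):
    """Log a query and return False if the client looks suspicious."""
    now = time.time()
    log = query_log[client_id]
    log.append(now)
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()           # drop queries outside the window
    return len(log) <= MAX_QUERIES_PER_WINDOW
```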

The presentation also included an example of an MLOps-based workflow, provided by Arrikto Software Engineer Yannis Zarkadas. KubeFlow serves as a CI/CD system that can wire up a data scientist’s Jupyter Notebook. The results of the notebook can be checked into GitHub, then fleshed out through training, inferencing, and hyperparameter tuning with Kale (and perhaps Spark for large-scale data processing).
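As a rough illustration of what such a pipeline definition can look like, here is a minimal sketch using the Kubeflow Pipelines SDK (kfp, v1 API). The step functions are hypothetical placeholders, not the demo’s actual code; the demo itself used Kale to generate pipelines directly from a notebook.

```python
# Hypothetical three-step Kubeflow pipeline: preprocess -> train -> deploy.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func

def preprocess() -> str:
    return "gs://bucket/clean-data"       # placeholder dataset path

def train(data_path: str) -> str:
    return "gs://bucket/model"            # placeholder model artifact path

def deploy(model_path: str):
    print("deploying", model_path)        # placeholder serving step

preprocess_op = create_component_from_func(preprocess)
train_op = create_component_from_func(train)
deploy_op = create_component_from_func(deploy)

@dsl.pipeline(name="mlops-demo")
def mlops_pipeline():
    data = preprocess_op()
    model = train_op(data.output)
    deploy_op(model.output)

# Compile to a YAML spec that the Kubeflow Pipelines UI can run.
kfp.compiler.Compiler().compile(mlops_pipeline, "mlops_demo.yaml")
```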

To view the complete presentation, log on to the KubeCon+CloudNativeCon virtual conference site. The presentation is in the breakout sessions under the “Machine Learning + Data” track.

KubeCon+CloudNativeCon is a sponsor of InApps.

Feature image by Couleur from Pixabay.