First came agile. Then came DevOps. Except they only came to a few departments, mainly the traditional IT ones. What's keeping them from reaching the most cutting-edge realms of tech? According to Luke Marsden, CEO and founder of Dotscience, enterprise-grade artificial intelligence (AI) and machine learning are being held back, in part, because the mathematicians and statisticians behind them are stuck in the days of Waterfall — still emailing code back and forth to each other.

Dotscience, a collaboration tool for end-to-end machine learning data and model management, recently surveyed enterprises. The resulting report, The State of Development and Operations of AI Applications 2019, found that 63.2% of businesses are spending between $500,000 and $10 million on their AI efforts, yet 60.6% of respondents continue to experience operational challenges.

“The challenge is that, if you already knew the answer for the things your machine learning is predicting in production, then you wouldn’t need the model because you’d already know the right answer.” — Luke Marsden, Dotscience.

Basically, most enterprise AI and machine learning initiatives are stuck at the gate. On this episode of InApps Makers, we talk to Marsden about why.

The Challenge of Machine Learning and How DevOps and the Edge Will Modernize Data Science

Also available on Apple Podcasts, Google Podcasts, Overcast, PlayerFM, Pocket Casts, Spotify, Stitcher, TuneIn

He explains that machine learning is inherently more complicated than just coding, or even than coding, integrating, deploying and monitoring plus data. It's all those coding steps plus multiple datasets in multiple versions, plus metrics, plus hyperparameters, plus different models. And the data pouring into these machine learning systems goes stale much faster.
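To make that concrete, here is a minimal sketch of what one tracked training run might record, pulling every axis Marsden lists into a single structure. The `Run` class, its field names, and the file hashing are illustrative assumptions, not Dotscience's API:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

# Hypothetical record of one training run: code version, data versions,
# hyperparameters, metrics and the resulting model, all in one place
# instead of scattered across file names and emails.
@dataclass
class Run:
    code_commit: str        # git SHA of the training code
    data_versions: dict     # dataset name -> content hash
    hyperparameters: dict   # e.g. learning rate, epochs
    metrics: dict = field(default_factory=dict)  # e.g. accuracy, loss
    model_id: str = ""      # id of the model artifact produced
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def hash_file(path: str) -> str:
    """Content-address a dataset file so 'which data' is unambiguous."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: record the run explicitly rather than encoding it in a file name.
run = Run(
    code_commit="3f2e1ab",
    data_versions={"train.csv": "sha256:<from hash_file('train.csv')>"},
    hyperparameters={"learning_rate": 0.01, "epochs": 20},
)
run.metrics = {"accuracy": 0.93}
run.model_id = "model-v4"
print(json.dumps(asdict(run), indent=2))
```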

“It’s more a problem of keeping track of what you’ve done,” explains Marsden.

Data scientists are tracking data by passing Word documents back and forth with slightly different file names. Marsden doesn't think that's so bad when you're just getting started with machine learning. But it breaks down when you try to scale a team, where everyone uses different conventions, and even more so when you have many models in production.

Dotscience looks to solve this problem with data versioning and provenance — that data's origin. And not only the data's provenance, but also the model's provenance, since a model may itself be the output of another process. Dotscience graphs all of this visually, so you can see who input which data version and which datasets trained which machine learning models.
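As a toy illustration of the provenance idea (not Dotscience's actual data model), you can treat datasets and models as nodes in a directed graph and walk the edges backward to recover a model's full lineage. The `record` and `lineage` helpers and all artifact names below are hypothetical:

```python
from collections import defaultdict

# artifact -> list of inputs it was derived from
edges = defaultdict(list)

def record(artifact: str, derived_from: list[str]) -> None:
    """Note which upstream inputs produced this artifact."""
    edges[artifact].extend(derived_from)

def lineage(artifact: str) -> set[str]:
    """Walk back from a production model to every upstream input."""
    seen, stack = set(), [artifact]
    while stack:
        node = stack.pop()
        for parent in edges[node]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Example: a cleaned dataset is itself the output of another process,
# and a model is trained on it.
record("train-v2.csv", ["raw-v1.csv", "cleaning-script@9acde12"])
record("churn-model-v4", ["train-v2.csv", "train-job@f00dfee"])

print(lineage("churn-model-v4"))
# {'train-v2.csv', 'raw-v1.csv', 'cleaning-script@9acde12', 'train-job@f00dfee'}
```

With that lineage in hand, the question Marsden raises below, which data a production model was trained on, becomes a graph walk rather than archaeology.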

“If you can’t go from a model in production to reliably being able to say which data it was trained on, then you have no hope of being able to track down the bias in the model, and you’re kind of left in the dark,” Marsden said.

This is essential when we are battling so much intrinsically biased data. But also imagine when a fatality caused by an autonomous vehicle happens. Data versioning may be the best way to track who trained that model, which datasets went into it, and what human influence shaped it. Because, as we learned with Volkswagen, even an individual developer can be held legally responsible.

And it's not just the tech side of it all, although monitoring with Prometheus seems key. Marsden says pairing the tooling with a DevOps culture is just as important. And you still need human judgment to spot whether modeling results have gone wackadoodle due to model drift or staleness.
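As a rough sketch of what monitoring with Prometheus could look like for drift and staleness, the snippet below exports prediction scores and training-data age using the open source prometheus_client library. The metric names and the stand-in model are assumptions of mine, not anything prescribed in the episode:

```python
import random
import time

from prometheus_client import Gauge, Histogram, start_http_server

# Distribution of model output scores; a drift dashboard can compare
# today's histogram against the one from training time.
prediction_score = Histogram(
    "model_prediction_score",
    "Distribution of model output scores, for drift dashboards",
)

# A crude staleness signal: how old is the data the model was trained on?
input_age_seconds = Gauge(
    "model_input_age_seconds",
    "Age of the newest training data, in seconds",
)

def predict(features: dict) -> float:
    return random.random()  # stand-in for a real model

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    trained_at = time.time()
    while True:
        prediction_score.observe(predict(features={}))
        input_age_seconds.set(time.time() - trained_at)
        time.sleep(1)
```

The point isn't the specific metrics; it's that a human watching these graphs can judge when the model's behavior has drifted from what it was trained on.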

Feature image via Pixabay.