As more enterprise organizations have recognized the utility of artificial intelligence technology, there’s been a major push to invest in and adopt new AI and ML infrastructure to drive insights and make predictions for businesses.
However, many of these solutions lack the mechanisms to unlock and operationalize the data needed to train and deploy models for high-quality AI projects. That pain point spawned the creation of Snorkel AI, which has developed an end-to-end data-centric machine learning platform for the enterprise.
Their flagship product, Snorkel Flow, has enabled large customers to quickly build and iterate from unlabeled datasets to high-quality machine learning models deployed in production. In recent years, the rise of large language (or foundation) models has significantly opened up opportunities for building AI applications, but most organizations don’t have the tools they need to actually put these models to use. To address that gap, Snorkel AI has evolved its product with the launch of Data-Centric Foundation Model Development, which gives enterprise organizations the capabilities to incorporate foundation models into their workflows.
“Big foundation models are great at generative and exploratory human-in-the-loop processes. They’re great at generating text and images, et cetera, but when you actually want to adapt them to predict or automate something at high accuracy with guarantees of performance, you need adaptation – most commonly some kind of fine-tuning or prompting,” says Ratner. “Doing this for complex, real production use cases usually requires labeled training data and the constant iteration and maintenance of that, and that’s what we’re focused on.”
Putting the capabilities to build impactful AI in the hands of more people has been Snorkel’s goal since its inception. The company spun out of Stanford’s AI Lab in 2019 and has been partnered with Greylock since 2020. Alex joined me on the Greymatter podcast to discuss the company’s journey, the evolving world of AI, and his vision for the future.
You can listen to our conversation at the link below or wherever you get your podcasts.
EPISODE TRANSCRIPT
Saam Motamedi:
In the time since Snorkel AI launched, AI has advanced significantly, and the company has likewise evolved to meet enterprises’ expanding need to get up to speed on the latest machine learning approaches.
The latest addition to their Snorkel Flow platform, released in November, enables enterprises to put foundation models to use. Today we’re going to talk about what that looks like in practice, and I’m pleased to welcome Snorkel CEO and co-founder Alex Ratner. Alex, thanks so much for joining me on Greymatter.
Alex Ratner:
Thanks so much for having me, Saam.
SM:
Alex, as you and I often talk about, AI is a very dynamic and fast-moving field. Even since you launched Snorkel AI, we’ve seen a lot of change. Let’s start by just putting everything in context. Where are we today in AI and ML, and how do you characterize Snorkel AI’s role in its adoption?
AR:
It’s indeed a fast-moving space and it’s exciting every day. A lot of where we started – and we’ll get back into this – is from this shift that you talked about from what we’ve called model-centric to data-centric AI development. And I’ll start there, since it’s still obviously where we think a lot of the core focus deserves to be. At a high level, this is an idea about where the pain points (or the blockers to AI development and deployment) are – or, you could more optimistically say, where an AI developer can productively iterate. It used to be all around the models: picking out features, building custom architectures, building bespoke infrastructure; all of that’s what we call model-centric development. And training data (the data that models learn from) used to be seen as a second-class citizen. I sometimes call this the Kaggle era of machine learning, where a machine learning developer’s journey started by downloading a dataset that was nicely labeled and curated and then trying to train their model on that.
Fast forward to today, a lot of the machine learning technologies and models have just leapt forward. We’ll talk about even the recent progress over the last couple months around foundation models in a second. But really over the last couple of years they’ve become more powerful, more push-button, more automated, more standardized, more commoditized to the point where a state-of-the-art model’s a couple lines of Python code and an internet connection to get it going (if you have the data).
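To make that concrete: with open-source tooling like the Hugging Face transformers library (one common way to do this, not necessarily what Snorkel Flow uses internally), standing up a pretrained state-of-the-art model really can be a few lines – a minimal sketch, assuming the library is installed and a network connection is available:

```python
# Minimal sketch: a pretrained state-of-the-art text classifier in a few lines.
# The pipeline downloads a default pretrained model on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Programmatic labeling cut our turnaround from weeks to hours."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The hard part, as Ratner notes, is no longer these lines of code; it’s producing the labeled data the model needs for your specific task.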
So over the last bunch of years – certainly the last seven or eight that we’ve been working on this data-centric AI movement, out of Stanford and now at the company – the game has really shifted towards the data and how you label and curate it to teach machine learning models.
The reality in most enterprises that we work with (the top 10 U.S. banks and government agencies and Fortune 500 healthcare systems) is that if you want to actually build a machine learning model for something, the balance of effort or time might look like a day to get the model and maybe several months to label the data to teach that model.
That is the trend that we started with: the field has been undergoing this shift from model-centric to data-centric development, and it’s still the key thing that we see enterprises struggling with and that we address.
Now, on top of that, there have been some really exciting developments recently around what are often called large language models – or I’ll call them foundation models in this chat. That’s partly out of loyalty to my co-founder Chris, who I know some of you know very well and who is one of the co-founders of the Stanford Center for Research on Foundation Models, and partly because I think it’s actually an appropriate name.
These foundation models are big self-supervised models. If you’re a machine learning nerd like me, or you have machine learning nerd friends anywhere in your Twitter network graph, you’ve probably seen really incredible demos of these models learning from scratch to generate text, answers to questions, or images that are quite amazing. The question we ask on the heels of all this amazing progress, and of these foundation models scaling up, is: “How does this actually connect to providing production value for our customers?” And the answer right now is that it doesn’t, in most places that we see. All this ability to generate exciting text and images doesn’t really translate to enterprise automation – and we’ll get into this more – but we see the data and data-centric development as the bridge to connect the two, and that’s the core of what we’re announcing today; we’ll get into it more in our discussion.
SM:
Awesome. I want to get into the role of Snorkel AI and the impact that the new product you’re announcing today is going to have in actually putting these foundation models to use.
But I want to step back for a moment and ask you to spend a couple minutes just motivating foundation models. I’m sure all of us have played around with one, whether it’s GPT-3 for language generation or models like DALL-E for image generation. I know the first time I used DALL-E it felt like magic. I was never good at drawing, so it was fun to actually be able to create interesting creative assets.
And so I think we’ve gotten a taste of their power, yet at the same time there’s a question of, “How do we go from these really cool demos to these things actually changing the ways we work and live and operate?” So maybe spend a couple minutes on that, Alex: what’s hype, and what’s real?
AR:
First of all, I want to plus-one that excitement about these models. I’ve also spent some time playing around with the multimodal models that can generate amazing images, and with the text-based ones. How they work is not fundamentally surprising if you’ve been watching the space. We’ve used what I guess now need to be called medium language models – the same fundamental architectures and types of models – in Snorkel Flow for years now. Things like BERT and DistilBERT are some examples in text that many of us run into, and that we use and support in the platform.
But the degree to which they’ve scaled up, based on increasing amounts of data, compute, and engineering work, and the results they’re producing, are really amazing and exciting for the field. It’s a really exciting time, even on the academic side. I can say, with my academic hat on, that we’re seeing a shift from a very toy, theoretical view of machine learning in many places to studying the properties of these gigantic foundation models that we barely yet understand. It’s an exciting time to be in machine learning.
“There’s lots of hype (there always is), but there’s real potential and real progress there that’s exciting. But I think the other side of this is that these are not anywhere near getting into production at large enterprises in critical use cases.”
And that’s something we see and something we hear from customers: they’re very pessimistic about their ability, anytime in the next couple of years, to, say, deploy GPT-3 in a real high-value use case. If you talk to a top 10 U.S. bank or a government agency, they’re still working on deploying basic machine learning models from five years ago through model risk management and at scale.
How do we solve this gap and why does it exist? I’ll highlight two core challenges. One is around what I’d call adaptation, the other one’s around deployment.
The first one is just a natural thing that anyone in machine learning is aware of in concept: these big foundation models are great at generative and exploratory human-in-the-loop processes. They’re great at generating text and images, et cetera, but when you actually want to adapt them to predict or automate something at high accuracy with guarantees of performance – as you need to do to ship in the enterprise – they need adaptation, most commonly some kind of fine-tuning or prompting. And doing this for complex, real production use cases usually requires (surprise, surprise) labeled training data and the constant iteration and maintenance of that, which obviously is where we focus and have focused for years with Snorkel AI.
The second thing is deployment: getting these gigantic models into production is hitting, and is going to hit – in my estimation and the estimation of our customers – walls around cost, latency, governance, risk, and bias for quite a bit of time. The question we ask is, “How can we still use them and get all of this exciting value, but actually bridge into something that’s a deployable artifact?”
And the answer that we’ve worked on for the last year and a half, and are going to be announcing shortly, is, again, around data-centric AI: using these models to power data development, but ultimately shipping something deployable that can actually have an impact today in the enterprise.
SM:
It sounds like customers are excited about the potential impact of these models and they see the upside, yet there are these challenges, as you point out, on both the adaptation side and the deployment side. What are we announcing today, and what impact is it going to have on operationalizing foundation models in the enterprise?
AR:
Yeah, we’re really excited too. We’re announcing a set of capabilities, embedded in our existing Snorkel Flow platform and its existing data-centric workflow, that we call Snorkel Flow Data-Centric Foundation Model Development. These are basically tools to use foundation models in a data-centric workflow that can actually lead to production value today: even if you can’t deploy a foundation model in production, you can harness its power and use it to accelerate and improve your processes. It’s exciting that we’ve already seen this in production with customers.
What this consists of is three features that we refer to as foundation model warm start, prompt builder, and fine-tuner. The fine-tuner is maybe the simplest place to start, because we’ve been doing this for years and now we’re just offering support for fine-tuning these larger model classes.
Getting a little into the weeds: one of the most reliable ways to actually get a model to high accuracy on a complex, bespoke task – say, an enterprise use case – is to what’s called fine-tune it, or retrain parts of the model for a specific objective with labeled training data. And obviously our core value prop, and our core technological contribution for years, has been programmatically, automatically, and iteratively developing that training data, or enabling users to do that. That fits immediately into our existing workflow and product. And then the other two features are all about how you can use foundation models to accelerate the auto-labeling and development of training data, even if ultimately you’re training a smaller model that can actually be deployed on your existing enterprise infrastructure.
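For readers who want to see what fine-tuning looks like mechanically, here is a minimal sketch using the open-source Hugging Face transformers and datasets libraries – an illustration of the general technique, not Snorkel Flow’s internals; the model name and toy examples are placeholders:

```python
# Fine-tune a small pretrained model on labeled examples.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Toy labeled data; in a data-centric workflow these labels would come from
# programmatic labeling rather than months of hand annotation.
train = Dataset.from_dict({
    "text": ["This agreement was executed on 01/15/2021.",
             "The parties agree to the terms below."],
    "label": [1, 0],
})
train = train.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length",
                         max_length=64),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1),
    train_dataset=train,
)
trainer.train()
```

The scarce ingredient in this loop is the `label` column, which is exactly the part Snorkel’s programmatic approach targets.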
And these features really came out of both the research side – there are several academic papers we’ve posted from Snorkel and from Stanford over the last 12-plus months – and out of customer questions. Really, customers asking us, “Hey, we have all these people who are excited, they’re playing around with GPT-3 on their laptops, but we’re not going to ship this anytime soon.” We had dinner with a customer at a top three U.S. bank we work with, who likened his attempt to get GPT-3 through model risk management to a Don Quixote-like tilting-at-windmills activity that he was doing just as, quote-unquote, “art.”
So how do you actually get this into production for use cases that matter to the enterprise? Part of our core view, and what we’re releasing with this suite, is the ability to use foundation models to auto-label data and jumpstart the labeling. Right in Snorkel Flow, you put in the class names of what you’re trying to label, and you basically instantly get up to the baseline of what a foundation model can auto-label. For some of the lower-hanging fruit, the right foundation model may magically get some classes correct out of the box.
For others, it almost certainly won’t, because these foundation models don’t just work out of the box. It’s always dangerous to draw analogies between AI and humans, but in this sense, think of it as a generalist who’s read lots of Reddit and learned basic English; if you then ask them to interpret an insurance document or a medical report or a complex financial document, they need specialist training.
So the foundation model is not going to get you through the trickier parts of the problem out of the box, but our warm start capability allows you to apply what are called zero-shot techniques right from the beginning. It jumpstarts your progress in building your dataset and building your models. And then our prompt builder allows you to prompt these large language models, using natural language or code templates, to help auto-label data in targeted ways.
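As a rough illustration of the zero-shot idea – class names in, candidate labels out – here is a minimal sketch using the open-source Hugging Face zero-shot pipeline; this is an analog of the technique, not the Snorkel Flow implementation, and the document and class names are made up:

```python
# Zero-shot "warm start": score a document against plain-language class names.
from transformers import pipeline

# Downloads an NLI-based model (e.g., facebook/bart-large-mnli) on first use.
zero_shot = pipeline("zero-shot-classification")

doc = "This agreement is executed by and between Acme Corp and the undersigned."
class_names = ["party name", "execution date", "governing law"]

result = zero_shot(doc, candidate_labels=class_names)
# The top-scoring class names can seed an initial auto-labeled training set.
print(list(zip(result["labels"], result["scores"])))
```

High-confidence predictions become a first pass at training labels; the low-confidence tail is where targeted prompting and iteration come in.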
But then you can also add in other approaches for labeling. You can use our iterative workflow to correct the errors, and ultimately you can distill all of this into a deployable model that might be 10,000 times smaller and also more accurate at the target task – we’re releasing some case study results about that as well. This is a way to actually get this foundation model power bridged to complex enterprise use cases that can actually ship to production, which we don’t see anywhere today in terms of existing solutions. We’re excited to fill that gap – which is quite big, existential, and urgent for enterprises who want to stay up to date with AI – with these capabilities.
“This is a way to actually get this foundation model power bridged to complex enterprise use cases that can actually ship to production.”
SM:
Just to make it concrete for folks, Alex, is there a good customer or use case you could talk about? As I hear you characterize those different aspects, the things going through my mind are: how much faster can I get my data ready for machine learning? What’s the impact on the performance of the models I’m using, and how might that drive business performance around my use case, and just overall accuracy and performance?
AR:
Let’s dive into that. If you’re interested in the nerdy details, there’s more in both academic publications and some of the stuff we’re releasing today in terms of case studies. But let me actually take a step back and just give one other anecdote that I like to motivate with.
I was trying to generate a Snorkel AI logo. Our logo is a little blue octopus wearing a snorkel underwater. I don’t think there was enough support in the data distribution for something as ridiculous as that. I could get scuba gear on an octopus; I couldn’t get [an image of] a snorkel underwater on an octopus. But I got some pretty amazing, astonishing images, and it took me about 30 tries. I went through 30 samples, tweaking the prompt and generating samples until I found one that looked reasonable, and it was a pretty awesome result. For an exploratory, generative, human-in-the-loop process, that’s an amazing experience and outcome. It’s awesome to see this boom in creativity around shaping those workflows and harnessing that power.
But now shift to the enterprise mindset and the framing of most enterprise problems, which is automating or predicting or classifying something at high accuracy. What took me 30 tries is a 3.3% hit rate. That is far below what’s acceptable to even consider shipping to production. How do we bridge that gap? Let me give some examples that have actually been built using the beta version of these capabilities.
One setting is at a top three U.S. bank that’s a customer of ours: some internal experiments around an AML/KYC application (anti-money laundering, know your customer). I’ll obviously obfuscate some of the details, but at a very high level this problem – which we previously supported in Snorkel Flow without foundation models helping as directly – consists of pulling out, tagging, or extracting dozens or hundreds of pieces of information from complex, multi-hundred-page customer documents that then power the KYC process. Very variegated, very complex and messy; it’s not a standard form or a simple problem.
By applying these warm start and then prompt builder approaches to it, the really exciting thing, first of all, is that you could use warm start to basically go from no labeled training data to labeled training data and a resulting model. For a handful of these extractions – in this case, the easy part – that model actually got above the performance bar; I’m obviously going to obfuscate the specific number they hold as their internal bar, but let’s say 90% accuracy. What’s the customer name, what’s the execution date of this document, et cetera – some of the simpler types of things we were trying to label.
And then, quite predictably, for some of the more bespoke, complex tasks – I think about 80% of the fields they were trying to extract for this task – the foundation model’s basic zero-shot approach got maybe 30, 40, 50% of the examples correct. Nowhere near production quality, but a really powerful way to jumpstart your efforts. Then what you could do is prompt these large language models – literally using natural language prompts to auto-label – along with other, more surgical approaches using regular expressions, patterns, or other things we call labeling functions inside Snorkel Flow, for which we have an iterative development environment. And within a few hours, all of these got up above the performance bar.
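For a sense of what combining those signals looks like, here is a minimal sketch using the open-source Snorkel library (not Snorkel Flow itself); the labeling functions, labels, and documents are hypothetical, and a prompted foundation model could contribute votes as just another labeling function:

```python
# Combine noisy labeling heuristics and aggregate them into training labels.
import re
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NO_DATE, HAS_DATE = -1, 0, 1

@labeling_function()
def lf_date_pattern(x):
    # A "surgical" regular-expression rule: vote HAS_DATE on date-like strings.
    return HAS_DATE if re.search(r"\d{1,2}/\d{1,2}/\d{4}", x.text) else ABSTAIN

@labeling_function()
def lf_executed_keyword(x):
    # A weaker heuristic keyed on wording that often precedes execution dates.
    return HAS_DATE if "executed on" in x.text.lower() else ABSTAIN

df = pd.DataFrame({"text": [
    "This agreement was executed on 01/15/2021.",
    "The parties agree to the terms below.",
    "Signed 3/4/2020 in New York.",
    "No relevant information here.",
]})

# Apply all labeling functions to get a matrix of noisy votes per document...
L_train = PandasLFApplier([lf_date_pattern, lf_executed_keyword]).apply(df)

# ...then aggregate the votes into probabilistic labels, which can train a much
# smaller, deployable model (e.g., a BERT-class or scikit-learn classifier).
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train)
print(label_model.predict_proba(L_train))
```

The iterate-and-correct loop Ratner describes amounts to adding, editing, and re-weighting functions like these until the distilled model clears the performance bar.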
This is a process that previously would’ve taken eight to nine figures’ worth of legal fees to do by hand; even just labeling the training data in Snorkel Flow previously would’ve taken days or weeks. Now it’s down to hours using the acceleration from these foundation model techniques. That’s one exciting example.
There are others. It especially helps when you have many, many classes you’re trying to classify. Most examples you see out there of using foundation models for classification or prediction are, from a machine learning perspective, very toy – and I don’t mean that aggressively. Take these restaurant reviews and classify them as happy or sad, or classify these customer questions into one of 10 categories. Out in the real world, where there’s real complexity, you often have settings with hundreds or thousands of classes.
And so another area where we’ve seen it help is using warm start and prompting to auto-label huge numbers of classes, and then being able to make targeted iterations on the classes where there are still gaps. Again, this is a massive accelerant for these very big, what are often called high-cardinality machine learning problems that we see a lot of as well.
SM:
Yeah, that’s a great example Alex, and as I hear you talk, and correct me if I’m wrong, it feels like foundation models are really an accelerant of the data-centric paradigm to machine learning application building that Snorkel brings to customers. And they’re an accelerant across vectors. On the data management and preparation side, they’re a new approach to much more quickly get my data labeled and in a state where it’s ready for machine learning. And then as I walk down the life cycle towards deployment, leveraging these things can boost accuracy and just drive better and better results.
But what’s interesting as I hear you talk is that there are really exciting new things happening, and yet the paradigm shift that we’ve been pioneering at Snorkel for many years – shifting from model-centric to data-centric – is, if anything, more pronounced. You have these much more powerful models, but still, to get them to the place where they’re performing at the accuracy levels and with the controls and guarantees that you need in an enterprise context, it all comes back to the data.
AR:
And even just taking a step back from what specifically we’re announcing today, it’s worth noting – of course I’m a little biased here, having worked on data-centric AI and this push for the last seven or eight years – that we view this whole area of foundation models as intensely aligned with this idea of data centricity.
If you look at the latest progress – I was looking at the OpenAI Whisper paper recently, and the notes on Stable Diffusion (we have some collaborators among the LAION AI folks, who are working on some cool CLIP stuff) – the architectures of these foundation models are not changing that rapidly. There’s lots of innovation, of course, but they’re fairly standardized. The engineering efforts are, I think, incredible. But a lot of the effort, and a lot of the differentiation you’re seeing, is in how you curate the data sources for training these models.
So all the way from the start of building these foundation models to fine-tuning them or adapting them for specific use cases, it really is all about the data. That’s the really unique vector here, and there’s a lot of recent work showing that, some from us and some from others. First and foremost, the biggest concentric circle of what we care about at Snorkel – this idea that AI is data-centric, that it’s about developing your data – is very central to how foundation models are working and advancing today.
And then down to the specific idea in the Snorkel Flow platform of being able to programmatically label your data and iterate in this data-centric way – it fits perfectly there, in our view, because you can use it to leverage the best of these gigantic generalist models while really sharpening, iterating, and fine-tuning them into the specialist you need to actually perform at high accuracy in production, which is the final output we produce.
SM:
Awesome. Alex, we’ve spent a lot of time talking about the upside potential of these models. Let’s talk a little bit about the other side of the ledger. What are the risks around using these models? And when you talk to your customers, what are the things that are top of mind for them? One thing I’m interested in is whether things like explainability and actually auditing model predictions change in this world, and how customers capture the upside while also mitigating the risks.
AR:
It’s a great question. Obviously, a lot of our approach – which is very oriented towards using these foundation models to assist in the development of smaller models that can emphasize explainability, interpretability, and guarantees more stridently – is based around exactly the challenges that we’re all facing here with foundation models. Even the way that we’ve studied foundation models on the machine learning academic side has changed. I was up at the University of Washington yesterday, where I’m on faculty, and was talking with some folks there about how it’s really changing machine learning: in many ways, from a very formal field – very theory- and formalism-heavy, with guarantees about explainability or bias, et cetera – to an empirical field of emergent properties, where you just poke at these things and see what comes out. A lot of the research about foundation models is like, “Hey, I tried this prompt, and if I add ‘think really carefully about your answer’ before the prompt, then it works better – isn’t that neat?”
We’re really just starting the process of poking with sticks at these gigantic things and their really fantastic emergent properties. Now, from a governance, risk, and bias perspective, that is, and should be, a little bit terrifying. And that’s why we see the blocker. It’s not pessimism about these models; it’s just the reality that it’s going to take a while before they get to being served in production. This is a little bit of a glib phrasing, but it’s going to take a while before, let’s say, a large U.S. company plugs some model that was trained on Reddit directly into their customer-facing chatbot – or I guess that has happened in certain experimental tech-company forays, to predictably terrifying results.
We have a long road to figure out how to really explain, interpret, de-bias, and put governance controls and guarantees around these foundation models. And that again feeds into our fairly unique way of using them: harnessing them for a data development and data-centric process that can be separately audited, controlled, explained, and de-biased, rather than just plugging them right into production.
Really, our excitement here, at a company and a technology level, is in filling this gap between the amazing surge of foundation model progress that we see on our Twitter feeds and out in the public domain, and real, complex, high-value enterprise use cases. We see Snorkel Flow, our platform, as that bridge. We see data-centric AI development as that bridge, and we’re already seeing it work with customers, as we went through in some of those stories. But I’m curious to hear, Saam, what you see in the broader landscape from your POV, because it’s pretty exciting out there right now.
“We have a long road to figure out how to really explain, interpret, de-bias and put governance controls and guarantees around these foundation models.”
SM:
Yeah, it’s not going to surprise you. I see it both from founders who are coming into our offices and sharing more about their companies, and from customers. Later today I’m hosting a large bank, and I looked at the topic list; number one on the list of things they want to talk about is foundation models. I think foundation models have really captured the zeitgeist, not just in AI but in software generally. And certainly our view is that we are in the early days of a new computing trend, and literally everything we do – ranging from how we live to, importantly, how we work and run business workflows – is going to be transformed by the advances in AI, and around foundation models in particular.
We’ve crossed the chasm on tractability: we’re seeing results on both the generative side and the discriminative side where it feels real. People can’t yet fully connect the dots and say, “Okay, I get exactly where the business value is,” but they’re like, “No, no, no – clearly this stuff’s performance has tipped.”
And importantly, it’s just getting so much better as we keep throwing data at these models that this is going to deliver on the promise of AI. We’ve been investing in AI for a very long time at Greylock, Alex, as you know. And if you go talk to customers, for example, most CIOs have been thinking about AI for many, many years now.
But I would say – and I’d be curious about your thoughts – most people probably feel like we’re still in the trough of disillusionment. They’ve been told this promise of AI, but when they actually look in their environments and ask, “Where is AI having an impact on my business?”, that story is still thin. I think there are a number of things that will be required to tip that story, but it’s going to tip really fast now, and foundation models end up being an important part of that. So we’re very, very excited.
Now, I think we’re also sober around [the fact that] it’s one thing to show a really cool demo and it’s another thing, to use your parlance, to adapt, deploy, and utilize a model in production that actually drives business value. I think our view is: how do you take the advantages these foundation models have – the scale of pre-training that goes into them, the characteristics around transferability of performance? We don’t believe that, for most use cases where performance really matters, you’re going to have one magic model to solve it all. Instead, what’s going to happen is you’re going to take these models, take advantage of the scale of pre-training that’s happened, and then really train on top and fine-tune to your use case.
And so if I’m a large bank, I might have my own foundation model, and maybe that’s bank-wide, maybe that’s use-case-wide. Maybe I’m looking at a specific set of financial documents and I want a foundation model just for that, in the context of my environment. When those models get trained and deployed, that’s when we’ll really see the power of all these research advancements actually having enterprise impact.
Candidly, that’s why we think Snorkel and platforms like it can be the missing link, or bridge, between the potential these foundation models have and actually realizing their impact in production. That’s why, personally, I’m just so excited for our new product to get out there, because I think we could see a dramatic acceleration of AI impact across many, many enterprises.
AR:
Saam, it’s really exciting, and I won’t even pretend it’s a surprise that there’s concordance in our views, because, obviously. But I do want to touch on a couple things you said, just to really plus-one them and emphasize what we’re seeing.
Number one, this tremendous uptick in excitement: we’re seeing that. It’s driving incredible and exciting momentum in our inbound and in our business, and a lot of it is around these features that we’re now publicly announcing but have been previewing with prospects and customers, in some form or another, for six to nine-plus months now. It’s very exciting, I think, on both sides – for customers and potential users of AI in the enterprise, but also for vendors who have something to say about foundation models, like we do.
Number two, there’s also definitely a healthy spoonful of this skepticism and disillusionment. We’ve seen this before, and this is where we’ve lived and still live around the blocker of training data. An enterprise spends a huge amount of money building a center of excellence – a well-heeled and well-skilled team. They’ve got all the fanciest, state-of-the-art machine learning models that are out there, often in the open source, and then they’re blocked on asking some line-of-business team to label data for six to 12 months before they can train a single model. We’ve seen that disillusionment and we’ve helped to solve it; that’s been our mission so far. We’re seeing a similar kind around foundation models: tremendous energy and excitement, but they can’t get it to production. How do we do that? Again, we think data and Snorkel Flow can be the bridge there, and we’re already seeing it happen. So it’s exciting.
And then the final thing to add on is that I completely agree with the idea that… well, it’s baked into the name and the nomenclature that both of us chose for this podcast: foundation models. The idea is there are foundations to build on, but you’ve still got to build the house. There’s not just one prefab house that’s going to work for every single use case or building; that’s where all of the adaptation, the specificity, and all the really detailed design specs have to happen.
If you think of foundation models as, “Okay, they’re going to be this magic out-of-the-box solution,” you’re living in a fantasy world, you’re living in the cherry-picked demos and toy problems world. But if you think of them as just an incredibly strong and powerful foundation that you can then do all your building on top of, in a massively accelerated and really promising way, then I think you’re on the right track and we’re here to support that with Snorkel Flow.
“The idea is there are foundations to build on, but you’ve still got to build the house. There’s not just one prefab house that’s going to work for every single use case or building; that’s where all of the adaptation, the specificity, and all the really detailed design specs have to happen.”
SM:
I love the house analogy because I think it absolutely nails both the potential and what has to happen to really get these things operationalized. Alex, we’ve talked about today’s really exciting product announcement, but I’d be remiss if I didn’t ask for more. What’s next, and what are the next couple things coming on the roadmap?
AR:
Unsurprisingly, a lot of them actually have to do with foundation models; that whole foundation metaphor really extends. A couple things are worth noting. First, we’re doing a lot of exciting stuff around image and video that’s having some exciting customer impact. We have a large e-commerce company we’ve been working with that, with our beta product for image, was able to take a four-to-six-plus-week process and do it in a few hours, at greater quality compared to manual labeling. It’s all about programmatic labeling and data-centric iteration, and it’s also powered by some of these prompt builder and warm start techniques that we talked about. Going into some of these exciting new modalities, where we’ve done research for years but haven’t launched a product – that’s coming out soon, and we’re really excited about it. And it does also connect to foundation models.
Another theme, which is a little bit different but super important – and it ties in, of course, to everything we’ve talked about here – is ways of empowering and connecting to what we call subject matter experts and annotators: the people who actually know how to label the data, know how to validate it, and know the business value attached to the model’s output. These folks are critical. If we take a step back and ask, “What’s one indicator that an enterprise is not going to successfully adopt AI?” I’d say it’s over-siloing and separation of the data scientists and the subject matter experts, whoever they may be – lawyers, underwriters, line-of-business analysts, whatever the specific title.
These two types of canonical roles have to work in lockstep to actually get AI developed and deployed to production, and we’re really doubling down on our suite of collaboration tools to support that. Whether you’re doing this to validate or correct the foundation model with your subject matter experts’ help, or just to get some manual labels for validation, that collaboration is critical, and it’s a big investment that we’ve been making and are continuing to make with partners.
SM:
Exciting themes and a lot of good stuff coming on the roadmap. Alex, this has been an awesome conversation, and I come out of it really excited about foundation models, Snorkel’s role in them, and the impact we’re going to see them have across enterprises and businesses.
Maybe as a closing question: we talked about the roadmap, but what can people get their hands on today, and how do people start deploying Snorkel to leverage foundation models in their environments?
AR:
We’re really excited today to be announcing this Data-Centric Foundation Model Development suite, with three key features. There’s foundation model warm start, which gets your data auto-labeled and ready just by putting in the class names, using the magical power of foundation models. There’s foundation model prompt builder, which helps you use foundation models to auto-label and iteratively develop your data, alongside the full suite of other approaches supported in Snorkel Flow – you can use them all together. And then there’s foundation model fine-tuning: if you can use foundation models in production, you can use that; otherwise, you can use the existing suite of models, which are often much smaller and more deployable, to get something out there.
Those together are just additional features in the existing Snorkel Flow platform and workflow. We’re going to be sharing a lot of information and some demos today, and there are some upcoming demo walkthrough sessions where we’ll go through the workflows you’re now empowered to do in the platform. And if you’re interested in getting hands-on with the platform, just shoot us a note and request a demo. We’re really excited to get this out there and actually be the bridge between this amazing progress in the field of foundation models and real, complex enterprise production use cases – solving the adaptation and deployment problems. Come check it out on our website.
SM:
Awesome. Alex, thanks for joining us today.
AR:
Thank you so much for having me.