Building Trust in AI

My current MacBook wallpaper is an old map of San Francisco from 1873. I’ve always appreciated old maps for the mix of art, science, and history they represent. More importantly, maps always showed me various ways to get from point A to point B, whether road tripping across the state or backpacking through Yosemite. Before we had iPhones with little voices guiding us back when we went off track, we needed physical maps before we embarked on the journey (lest we run into “Dragons” in this “Terra Incognita”). Even when we had books of road maps (remember Thomas Guides?), we still asked friends and locals for their insights on the best way to reach our destination. We trusted that the map data was correct, but we still needed to interpret and optimize it.

There are similarities between maps and AI. Simply put, when using AI, you start with a bunch of data (point A), and it ultimately gives you answers (point B). But unlike maps, AI has a trust gap. Enterprises and data scientists are using AI to make loans, optimize ads, and hire employees, but they often don’t understand what is driving the decisions in these models. This is known as the “black box” problem.

Today I’m excited to announce Greylock’s investment in Truera, which is bridging this trust gap and solving the black box problem in AI. We led the Seed round and were joined by our friends at Wing VC and Conversion Capital. I am a director on the Board.

Companies and governments want high-quality AI models that are trusted, fair, safe, and compliant, and that work in the real world. But that is really hard to do if these large organizations don’t understand their own AI models. Truera provides the first Model Intelligence platform that enables data scientists and non-data scientists alike to understand exactly why a model makes a prediction. Designed from the ground up, the Truera platform helps companies analyze machine learning models to improve model quality, address unfair bias, and build trust, leading to measurably improved business and societal outcomes.

The underlying technology of Truera is powered by breakthrough AI explainability technology that was developed by two of the co-founders, Anupam Datta (Chief Scientist) and Shayak Sen (CTO), during 6+ years at Carnegie Mellon where Anupam is a professor and Shayak earned his PhD. Anupam and Shayak literally wrote the papers (here and here) on AI explainability.

In parallel, Will Uppington (CEO) was leading product and customer-facing teams at a previous startup, where he experienced the black box challenges of ML firsthand and observed how a lack of model intelligence negatively impacted model development, sales, and customer success.

I first met Will over twenty years ago, when he and I worked at different consulting firms in San Francisco after college. Our professional and personal friendship began then and grew over the next two decades. We both went on to work at different technology investment firms, and Will and his wife Lauren were classmates of mine at business school. After I joined Greylock and Will became an executive at Bloomreach, he and I would frequently compare notes on companies and tech, and I would end every conversation with the innocent statement, “Hey, don’t forget to give me a call when you start your own company!” Last year, I finally got the call. Will had teamed up with Anupam and Shayak in 2019 and formed the “dream team” — combining deep technology from years of field research with practical product experience.

This alchemy of technology and product vision is already being appreciated by their customers. Standard Chartered (SC), an early Truera customer, is a pioneer in the responsible use of AI. SC understands that the effective use of data and analytics within the Bank is not just a competitive advantage, but also a key enabler to better serve clients. These needs are shared by every financial services institution and every enterprise. Customers are not putting their AI models into production because they can’t trust them, explain them, or manage them. Compounding this black box problem is the fact that AI governance and compliance requirements driven by regulators and governments are compelling companies to document and explain every decision made by AI models. In a world where consumers feel the power of any negative bias created by models based on gender, race, or other factors, ensuring AI-powered applications are fair and transparent is critical.

Truera is officially coming out of stealth today with their Model Intelligence platform. Interested in seeing how the platform works? Sign up for a demo here, and check out the company’s blog posts, Introducing Truera and Why Machine Learning Needs Model Intelligence. I’m privileged to be partnering with the team, and excited to watch as they help enterprises navigate AI through the first Model Intelligence platform.
