Sandeep Earayil, Vice President - Machine Learning and Quantitative Analysis, Credit Suisse on Applied AI For Actual Impact

Developing ML Algorithms that Drive both Action and Trust


We invited Sandeep Earayil, Vice President - Machine Learning and Quantitative Analysis, Credit Suisse to share his thoughts on “Applied AI For Actual Impact” at our recent Data ROI event. You can watch his full session here or read on for a few highlights.



Combining Internal and External Data to Deliver Action-Based Predictive Analytics

Organizations are collecting and managing more data than ever before, both internal and external.

For banking and financial services companies that invest in and trade stocks, this combination of internal and external data is where the magic happens. As Sandeep explains, “When we talk about internal data, there is obviously the perspective a financial analyst has on a company they've historically covered. Externally, that same company's stock is creating its own trading patterns. That's where the overlap between internal and external data happens, and you can see what kind of trends are emerging.

So a really good example is, let's say you're trying to predict Home Depot's earnings, or Disney's. Weather is actually an important factor for both: having a lot of bad weather days in Florida may not be a good thing for Disney, but it is probably a good thing for Home Depot. Now, historically, we've not looked at weather predictions and weather patterns as a predictor of sales. Then you ask: ‘Have there been other indications? Have we observed this pattern historically, when a similar thing has occurred?’ For example, if DC or Florida had five more bad weather days in the last quarter than the previous quarter, could that be what's impacting consumer behavior?”
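To make the idea concrete, here is a minimal sketch of how an external weather signal might be joined to internal sales data as a predictor. The column names, figures, and the simple linear model are illustrative assumptions for this article, not Credit Suisse's actual pipeline.

```python
# Minimal sketch: augmenting internal sales data with an external weather
# signal. All column names and values below are hypothetical illustrations.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Internal data: historical quarterly sales for a retailer (illustrative).
sales = pd.DataFrame({
    "quarter": ["2021Q1", "2021Q2", "2021Q3", "2021Q4"],
    "sales": [32.3, 41.1, 36.8, 35.7],  # in $bn, made-up figures
})

# External data: count of "bad weather" days per quarter in key regions.
weather = pd.DataFrame({
    "quarter": ["2021Q1", "2021Q2", "2021Q3", "2021Q4"],
    "bad_weather_days": [18, 7, 12, 15],
})

df = sales.merge(weather, on="quarter")
# Quarter-over-quarter change in bad weather days, per the example above.
df["bad_weather_delta"] = df["bad_weather_days"].diff().fillna(0)

model = LinearRegression().fit(df[["bad_weather_delta"]], df["sales"])
print(model.coef_)  # sign suggests whether more bad weather helps or hurts
```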

Knowing what data you have, and then how it can be used to solve real-world business problems, is the first step in achieving applied AI success.


Explainable and Reproducible AI

Explainable artificial intelligence (XAI) refers to the processes and techniques that justify and explain how machine learning algorithms “think.” In other words, it outlines how the AI arrives at any given conclusion.

Reproducibility refers to the ability of an independent research team to produce the same results using the same AI method, based on the documentation produced by the original research team. Reproducibility and explainability are both paramount to ensuring human users can trust the results and outputs created by machine learning algorithms.
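In practice, reproducibility starts with small habits like pinning random seeds and fingerprinting exactly what went into a training run. The sketch below is a minimal illustration of that idea, assuming plain Python and NumPy; it is not a specific MLOps product or Credit Suisse's tooling.

```python
# Minimal sketch of reproducibility hygiene: pin the random seeds and record
# a fingerprint of the exact data and configuration used, so an auditor can
# re-run the same experiment months later. All names here are illustrative.
import hashlib
import json
import random

import numpy as np

def fingerprint(data_bytes: bytes, config: dict) -> str:
    """Hash the training data and hyperparameters into one audit ID."""
    h = hashlib.sha256()
    h.update(data_bytes)
    h.update(json.dumps(config, sort_keys=True).encode())
    return h.hexdigest()

config = {"model": "gbm", "seed": 42, "learning_rate": 0.1}
random.seed(config["seed"])
np.random.seed(config["seed"])

data = np.random.rand(100, 4)        # stand-in for the real training set
run_id = fingerprint(data.tobytes(), config)
print(f"audit id: {run_id}")         # store alongside the trained model
```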

According to Sandeep, these attributes are especially important in business environments: “The reproducibility of the models is one area where there's a little bit of friction, right? If you're a business leader and you want to make decisions based on AI insights, then you need to have full confidence in how that insight was generated. And if you get audited six months down the line, you need a very clear way of reproducing those results, right?

A good MLOps tool allows you to explain your results, document your results, and make them fully immutable and auditable, for every insight that is being produced. Most organizations place a very high importance on explainability, and there is a lot of governance around AI ethics to make sure that there is no bias in the models: being able to run several different combinations of models to see not just what's the best prediction, but also what's the least biased model. So all of these things eventually help the business have more trust and confidence in those insights.”
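That last point, comparing several candidate models on bias as well as accuracy, can be sketched with a simple fairness metric. The demographic parity gap used below, and the stand-in predictions, are illustrative assumptions rather than a prescribed governance process.

```python
# Minimal sketch of picking not just the most accurate model but the least
# biased one. The fairness metric and candidate models are illustrative.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)          # protected attribute (0/1)
y_true = rng.integers(0, 2, size=1000)

candidates = {
    "model_a": rng.integers(0, 2, size=1000),  # stand-ins for real predictions
    "model_b": rng.integers(0, 2, size=1000),
}

for name, preds in candidates.items():
    acc = (preds == y_true).mean()
    gap = demographic_parity_gap(preds, group)
    print(f"{name}: accuracy={acc:.3f}, parity gap={gap:.3f}")
# Choose the candidate with acceptable accuracy AND the smallest parity gap.
```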


WANT TO LEARN MORE?

Register For The Data ROI Virtual Event To Watch Sandeep’s Full Session On-demand


