Early in developing a machine learning project, it is tempting to focus primarily on building the model. Faith is placed in the notion that if the model is accurate, the other details will fall into place. This neglects the most crucial fact of all — the model is created to be used in the real world. The return on your AI investment can only be achieved when people use your model.
A focus on the ultimate users of the model is essential in traditional industries, where machine learning adoption must coexist with established processes. When combined with those existing processes, the machine learning model's predictions should produce greater efficiencies, reduced human workload, and less wasted effort overall.
But these traditional companies are big, and they move slowly. James Taylor, CEO of Decision Management Solutions, often endearingly refers to his consulting clients as "big boring companies," and best-selling author Tom Davenport has been known to ask via Twitter, "Are you doing boring AI?"
Change is hard, and one failed implementation can scare off innovators within these companies from surfacing opportunities. Could it be that these kinds of projects are too ambitious? If companies planned for a sober, careful, and "boring" incremental improvement from the start, would they be more successful? What would this look like?
There are no perfect models
I'd like to suggest a particular brand of boring machine learning model: give up on the perfect model that would take you, instantly, from zero to 100% automation. Your long-term goal can be to automate increasingly, but your short-term goal should always be to begin by identifying and replacing the most costly processes. And this should be done on a case-by-case basis.
The machine learning model isn’t capable of producing these benefits on its own. The benefits come from the deployment of the model resulting in the automation of some tasks and the more efficient routing and processing of others, including exception handling for automation.
People play an important role at two stages:
- From the early stages of model development, the modeler and subject matter experts can identify where humans in the loop would be most cost effective.
- During deployment, the machine learning model will flag exceptions that require humans to process.
Consider a few examples:

- A medical company shouldn’t wait for its organization to be 100% compliant in moving to electronic health records. Rather, it should build a model now that works with the variables available in electronic form, generates preliminary scores, and routes cases to a human reviewer when missing data might change the model’s outcome.
- During loan processing, a model indicates that an applicant is likely the same person as someone with a prior default. That application should be routed to a human to verify the applicant’s identity.
- A company uses text mining to process personnel records and assign tasks based on individuals’ skill sets. The model produces weak propensity scores for certain cases. Those cases should be routed to a person for review, while cases with confident propensity scores proceed to the next step.
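The routing pattern behind these examples can be sketched in a few lines. This is a minimal illustration, not any company's implementation: the `route_case` function and the 0.8 review threshold are hypothetical, standing in for whatever confidence measure and cutoff a real deployment would tune.

```python
# Hypothetical confidence-based triage: cases the model scores
# confidently proceed automatically; the rest go to a human.
REVIEW_THRESHOLD = 0.8  # assumed cutoff; tune per use case

def route_case(case_id, propensity_score):
    """Return the next step for a case based on model confidence."""
    if propensity_score >= REVIEW_THRESHOLD:
        return (case_id, "auto_process")
    return (case_id, "human_review")

cases = [("A-101", 0.95), ("A-102", 0.42), ("A-103", 0.81)]
routed = [route_case(cid, score) for cid, score in cases]
# → [('A-101', 'auto_process'), ('A-102', 'human_review'), ('A-103', 'auto_process')]
```

The key design choice is that the threshold, not the model, encodes the business trade-off: lowering it automates more cases at the cost of more errors slipping past review.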
A model's performance will never be uniform across all situations. There will always be identifiable cases for which the model is well suited and others for which it is poorly suited. You can maximize ROI when you identify these exceptions early.
ML models are not magic
Machine learning models, ultimately, are time management tools: they perform triage, routing cases to where they can best be handled. Using them this way prevents needless delays. The takeaway: take action now and make progress where progress can most easily be made. Don't let the perfect model be the enemy of the good model.
CloudFactory will discuss this strategy with industry experts over the next three months.
- Dean Abbott, Chief Data Scientist at SmarterHQ, will bring his decades of experience building machine learning models and explore how people help models process challenging exceptions.
- James Taylor, founder and CEO of Decision Management Solutions, has spent a career helping companies integrate their existing decision-making processes with the potential of machine learning models.
- Ian Barkin, Chief Strategy & Marketing Officer at Sykes, will remind us that not all automation involves machine learning models. There is also robotic process automation (RPA), and RPA practitioners have to deal with exception handling, too.
We hope you’ll join us. Our first conversation with Dean will be via LinkedIn Live on Tuesday, Jan. 26 at 12 p.m. EST. You can register to receive an email reminder before our chat with Dean airs. We also encourage you to follow CloudFactory on LinkedIn for updates about future discussions with industry experts.