“How accurate does my model need to be?”
This is a question I get asked all the time, and the universal answer is: it depends. Virtually any decision a human makes can be modeled by a computer. IBM’s Watson proved that by playing Jeopardy!. Was Watson always right? No. Did IBM prove that Watson could simulate human decision making? Yes.
The question is: how accurate did Watson need to be in order to compete? It depends. It depends on the type of questions asked. It depends on the quality of the opponents. It depends on the score and the questions left in the game.
In the business world, the same kinds of questions need to be asked. All too often I run into vendors promising models without a full understanding of the circumstances surrounding the question I need answered. They pledge all sorts of fancy accuracy metrics that speak to questions they figure I would want answered, but fail to answer the one question I’ve actually asked: how accurate do I need to be in order to make better decisions, and what is the cost of increasing that accuracy?
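That question can be framed in plain dollar terms. Here is a minimal sketch, using entirely hypothetical numbers (the per-decision gain, loss, and volume are illustrative assumptions, not figures from any real engagement): it computes the expected value of acting on a model’s predictions at a given accuracy, so you can compare the value of an accuracy improvement against what a vendor would charge for it.

```python
# Hypothetical cost-benefit sketch: is more accuracy worth paying for?

def expected_value(accuracy, gain_correct, loss_wrong, decisions):
    """Expected net value of `decisions` decisions made at a given accuracy.

    Each correct decision earns `gain_correct`; each wrong one costs `loss_wrong`.
    All parameters are assumptions supplied by the business, not model outputs.
    """
    return decisions * (accuracy * gain_correct - (1 - accuracy) * loss_wrong)

# Illustrative numbers: each correct call earns $100, each miss costs $40,
# across 10,000 decisions per year.
baseline = expected_value(0.80, gain_correct=100, loss_wrong=40, decisions=10_000)
improved = expected_value(0.85, gain_correct=100, loss_wrong=40, decisions=10_000)

# The value of five more points of accuracy. If the vendor charges more than
# this to get from 80% to 85%, the extra accuracy is not worth buying.
uplift = improved - baseline

print(f"Baseline EV (80% accurate): ${baseline:,.0f}")
print(f"Improved EV (85% accurate): ${improved:,.0f}")
print(f"Value of +5 pts accuracy:   ${uplift:,.0f}")
```

The point is not the arithmetic; it is that “how accurate is the model?” only becomes answerable once the cost of a wrong decision and the value of a right one are on the table.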
I realize I may be biased. I help companies build internal analytics practices, often by decreasing long-term costs. Teach a man to fish; it costs less than buying him a fish every evening.