December 19, 2019 - 3 min read

Bricks for impactful AI

If you want to make AI useful, you have to stop thinking inside a box and adopt a cross-discipline approach. The mistake of staying too narrow has been made in other fields and could easily happen in data science too. Fortunately, it doesn’t have to be this way.

Decision intelligence: new methods for new tools

Decision intelligence is a practical discipline that sits at the interface of business and data science. AI is trendy, everyone gets that, and there is real value to build with it. Sadly, blindly funding AI projects to earn a badge of innovation is still one of the most common ways it gets used. Decision intelligence (DI) is a method that tries to answer questions like:

  • If I build this tool, what would be its benefits for X?
  • If this tool makes step A better, would it have drawbacks on step B?

Its biggest advantage is that it lets you abstract away the technical parts and collaboratively build solutions with greater value. There is a lot to explore in decision intelligence; a good starting point is this introduction by Cassie Kozyrkov, and for more depth I would suggest Link by Lorien Pratt.
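
To make that kind of trade-off question concrete, here is a minimal, purely hypothetical sketch in Python. The function name, probabilities and values are all invented for illustration and are not part of any formal DI toolkit; the point is only to show how a benefit on step A can be weighed against a drawback on step B before anything gets built.

```python
# Toy illustration: weighing a tool's expected benefit on one step against
# its expected drawback on another, before deciding whether to build it.

def expected_net_value(p_success, benefit_step_a, drawback_step_b):
    """Expected net value of building the tool, in arbitrary business units."""
    return p_success * benefit_step_a - drawback_step_b

# Hypothetical numbers: 70% chance the tool works as intended,
# it would save 100 units on step A but add 30 units of friction on step B.
if __name__ == "__main__":
    net = expected_net_value(p_success=0.7, benefit_step_a=100, drawback_step_b=30)
    print(f"Expected net value: {net:+.1f} units")  # +40.0 -> worth exploring further
```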

Making deep-learning wide

In the field of deep learning, being too narrow also raises concerns. Current algorithms are trained with narrow targets, so they can become very good at limited, specific tasks. They can identify cats and dogs in pictures with extreme accuracy, but they cannot learn to do something fundamentally different from their initial target (which is deliberately unfair to how much can actually be accomplished with neural networks). In his paper “On the Measure of Intelligence”, Francois Chollet argues that if you want to build flexible intelligence, you need to train a system that goes beyond specific skills. Borrowing elements from cognitive psychology, he suggests that the ability to generalise should be taken into account when training models. For now, AI is not about imitating intelligence or generalising behavior, but about organizing data, information and knowledge to some extent. Cross-field connections are still very much a human ability, and ongoing debates discuss whether a machine could ever reach this level of complexity.

Building “Tech for Good”

Tech for Good is about making technology as useful as possible rather than building it for its own sake. The decision to build a technology-based tool often comes down to the benefits it will bring. Economically, focusing on GDP-based growth aims at survival, not at thriving. As this Harvard Business Review article explains very well, GDP-based growth was originally a way to assess how to survive during wartime, yet you just need to turn on any news channel to hear that growth equals an increase in GDP. There is a big difference between surviving and flourishing. Interestingly, this McKinsey featured insight suggests that more complex measurements of true growth could be used. Trying to model economic welfare in a broader sense, they include the following:

  • GDP
  • Consumption (income that is not saved)
  • Consumption inequality (aversion to inequality)
  • Risk of unemployment
  • Leisure
  • Health and longevity

No model is perfect, and finding ways to include externalities such as environmental sustainability more directly is a real challenge, but doing so would be highly beneficial.
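
As a minimal sketch of what a composite measure could look like (with entirely made-up weights and scores, and a simple weighted average rather than the actual McKinsey methodology), the components listed above could be combined like this:

```python
# Purely illustrative sketch: combining the welfare components listed above
# into a single composite index with made-up weights that sum to 1.0.

WEIGHTS = {
    "gdp": 0.25,
    "consumption": 0.20,
    "consumption_equality": 0.15,   # higher = less inequality
    "employment_security": 0.15,    # higher = lower risk of unemployment
    "leisure": 0.10,
    "health_longevity": 0.15,
}

def welfare_index(scores):
    """Weighted average of normalized component scores (each in [0, 1])."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical country scores, normalized to [0, 1].
scores = {
    "gdp": 0.8,
    "consumption": 0.7,
    "consumption_equality": 0.5,
    "employment_security": 0.6,
    "leisure": 0.65,
    "health_longevity": 0.75,
}
print(f"Composite welfare index: {welfare_index(scores):.2f}")  # 0.68
```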


I like this trend of making connections across fields to unlock the real value behind technology. In my opinion, there is a strange but compelling paradox: we need to work hard within intricate networks of disciplines to make technology a breeze to use.

As usual, comments are welcome.

