We are now in an age when the simplest kind of intelligence built into products and services is marketed as “AI”. This is a regrettable consequence of current marketing practice, which seems to extend to individuals, products and even job postings. For instance, it isn’t unusual these days to see companies hiring “AI developers” with certified “credentials in AI”.
As a professional in the AI and machine learning space, I have come across, and perhaps to an extent been complicit in, such hype. With time, however, you gain perspective and collect feedback. Of late, the more strident the clarion calls of “AI this” and “AI that” in product marketing, the more dismissive ordinary consumers seem to become of new technology. I truly think this “performance undersupply” (to use a phrase coined by Tinymagiq’s Kumaran Anandan) in AI marketing is a bit of a regression (pun intended).
For instance, tools with natural language processing are routinely called out as being “AI”. Let’s dig a little deeper:
- Text mining, the extraction of information from documents, requires the mathematical representation, modeling and characterisation of corpora of text. Stemming, lemmatization and other tasks commonly seen in text mining fit into this category.
- Models built on top of such representations, using them as input data, learn statistical relationships between them. Term-frequency histograms, TF/IDF models and the like are examples of such statistical models.
- End-to-end deep learning systems that perform higher-order statistical modeling on such representations can learn and generate more complex patterns in text data. This is what we see with language translation models.
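To make the second category concrete: a TF/IDF score is just statistical bookkeeping over word counts, with nothing resembling understanding. Below is a minimal sketch in pure Python; the toy corpus and whitespace tokenizer are my own illustrative assumptions, not anything from a specific product.

```python
import math
from collections import Counter

# A toy corpus (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

def tokenize(doc):
    # Naive whitespace tokenizer; real pipelines add stemming, lemmatization, etc.
    return doc.lower().split()

docs = [tokenize(d) for d in corpus]

def tf(term, doc):
    # Term frequency: the share of a document's tokens that are this term.
    return Counter(doc)[term] / len(doc)

def idf(term, docs):
    # Inverse document frequency: terms that appear in fewer documents
    # score higher. Smoothed to avoid division by zero.
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / (1 + df)) + 1

def tf_idf(term, doc, docs):
    # The "model" is just a product of two counts-based statistics.
    return tf(term, doc) * idf(term, docs)
```

A ubiquitous word like “the” gets a lower IDF than a rarer word like “mat” — the entire mechanism is counting and arithmetic over a corpus, which is exactly why it is an enabler rather than intelligence.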
Note that none of the above truly implies intelligence. While statistical methods and techniques are used extensively to represent and model text data, there is no memory, no context capture, and no knowledge base. There is no agent in question, and these capabilities can therefore at best be described as enablers of artificial intelligence.
A post on LinkedIn by Kevin Gray discusses the same problem of marketing machine learning capabilities as AI. My response to his post is below, and perhaps it provides additional context to the discussion above on NLP/NLU/NLG and why these should be considered enablers of AI, not AI in and of themselves.
The contention here seems to be on the matter of whether something can be described (and by extension marketed) as AI or not.
Perhaps it is more helpful to think of ML algorithms/capabilities such as NLU/NLP/NLG (as with audio/image understanding, processing and generation tasks) as _enablers_ of intelligent systems, and not the intelligence itself.
This distinction can perhaps help ensure that consciousness, memory, context understanding and other characteristics of real-world intelligent agents are not glossed over in our quest to market one specific tool in the AI toolkit.
Coming to multiple regression – clearly a “soothsayer” or forecaster (in the trading sense, perhaps) is valued for their competence and experience, which bring context and the other benefits of real-world intelligent agents I mentioned. When a regression model makes a prediction along similar lines, it brings no such context, and is therefore not in and of itself an intelligent system. So in summary, I’d say that NLP/NLU/NLG and similar capabilities are not “AI”, just as stepwise multiple regression isn’t.
From my comment.
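The regression point can be made concrete. An ordinary least-squares fit (sketched here for a single predictor, with made-up data points) produces its “prediction” by pure arithmetic on past observations — there is no context, memory, or judgment anywhere in the mechanism:

```python
# Made-up observations for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope and intercept.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    / sum((x - mean_x) ** 2 for x in xs)
)
intercept = mean_y - slope * mean_x

def predict(x):
    # The model will happily extrapolate to any x, sensible or not --
    # exactly the absence of context described above.
    return intercept + slope * x
```

Unlike the experienced forecaster, the fitted line cannot recognise when an input falls outside the regime its data came from; it just evaluates `intercept + slope * x`.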
Coming back to the topic at hand, we can probably all acknowledge that marketers won’t stop applying the “AI” buzzword to all things under the sun anytime soon. That said, we can rest easy: with a little effort, we can usually work out the true capability of the marketed product or service in question. Mental models like those described above can help contextualize and rationalize the hype as and when we see it.