The Importance of Questioning Marketecture
In any given year, a handful of strategies and technologies rise to the level of buzzword, and with them often comes the assumption that if those strategies or technologies are present, the solution is automatically better. But is it?
Think of object-oriented programming (OOP), service-oriented architecture (SOA), open source, Agile, Cloud, Blockchain…and in the automation realm we are seeing this with machine learning and artificial intelligence.
This is often Marketecture hype, and it sounds something like this:
Vendor: “Our product uses machine learning in the cloud, so it’s better.”
Here are three things you can do to retain a critical and objective mindset, differentiating between the hype and the reality:
1. Rephrase Without the Buzzwords
Rephrase what you are hearing and reading. Machine learning is often just a form of automated data analysis, and the Cloud is often just someone else’s data center.
After doing this, do you still think the same, or were you getting wrapped up in the excitement of the buzzwords themselves?
2. Go Past the PowerPoint
Before you hand over decision-making to a product, get a better conceptual understanding of the variables and logic going into its algorithms.
“Correlation does not equal causation” is a fundamental lesson in statistics, and is an important point to remember anytime data analytics is involved.
Kalev Leetaru, in his article “A Reminder That Machine Learning Is About Correlations Not Causation” reminds us of that fact, using the example “it is entirely possible to learn that a certain shade of color in a purchase button on a website makes it more likely that users will complete a sales transaction” (correlation) but notes “the problem is that it is unlikely that that specific color is the triggering factor” (causation).
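The button-color example can be reproduced in a few lines of code. The sketch below (variable names and numbers are illustrative, not taken from Leetaru's article) shows two quantities that never influence each other yet appear strongly correlated, because both are driven by a hidden third factor:

```python
# A toy demonstration that correlation does not imply causation:
# x and y are both driven by a hidden confounder z, but neither causes
# the other. (All names and values here are hypothetical illustrations.)
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(size=n)              # hidden confounder (e.g. overall site traffic)
x = z + 0.5 * rng.normal(size=n)    # "button-color score" -- driven only by z
y = z + 0.5 * rng.normal(size=n)    # "completed sales"    -- also driven only by z

r = np.corrcoef(x, y)[0, 1]
print(f"correlation(x, y) = {r:.2f}")  # strong correlation despite zero causal link
```

A model trained on `x` and `y` alone would happily report that button color "predicts" sales; only knowledge of the data-generating process reveals that neither variable causes the other.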
Would you trust a decision system with a Magic 8 Ball behind the scenes? Do you know there isn’t one?
3. Consider the Outcomes from Multiple Angles
It’s tempting to only look at the possible positive outcomes, or assume that machines are inherently more intelligent than humans after seeing examples of IBM’s Deep Blue defeating a chess champion or Watson winning at Jeopardy.
However, machines are equally capable of learning undesirable behaviors, being programmed with bias, or making incorrect decisions based on the limits of their input data.
In 2016, Microsoft launched an A.I.-powered chatbot named Tay, marketed as the “AI with zero chill,” that could learn to engage in conversation on Twitter. Microsoft learned this lesson only 16 hours later, when the bot was taken down after it had begun posting racist tweets.
Ask the question: is the system working in your interest, and will it continue to do so?
As automation leaders, we must foster the critical thinking necessary to ensure that Inspiring Automation does not become Ill-Advised Automation.