Artificial Intelligence & Machine Intelligence: A Non-Scientist's Point of View
Artificial Intelligence, or AI as nearly everyone calls it, is an exciting term with very little common agreement on what it actually means. Defining the term is extremely challenging because, unlike a “cat”, which is easily agreed upon and can be pointed to, AI is more of a concept. The challenge is closer to defining “liberty”: everyone has their own personal idea of what it means, and getting even five people to agree on a definition can be almost impossible.
Working for Captario, I can see that what we do today could easily have been considered AI ten years ago, yet is not so extraordinary now. Today what we do is more comfortably called Machine Intelligence (MI). As the dreaded “Sales Guy”, I have been providing technology solutions to my customers for more than twenty years. Over this time, AI’s changing definition has plagued my efforts to be transparent and accurate in describing the shifting portfolio of products and solutions I have represented. Of late, I have debated many times with Captarian analysts whether features (existing and proposed) should be called AI, MI, or just clever workflows. In Captario SUM we use an extremely flexible modeling schema to analyze pharmaceutical industry challenges, then put the complex models through a proprietary Monte Carlo simulation engine to answer hard questions that at first might have appeared beyond reach.
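For readers unfamiliar with the technique, the Monte Carlo idea itself is simple: run a model thousands of times with randomized inputs and read answers off the distribution of outcomes. The sketch below is a minimal, purely illustrative example in Python — the phase names, durations, and success probabilities are made-up assumptions, and this is in no way Captario's engine or schema.

```python
import random

# Hypothetical drug-development program: each phase has an uncertain
# duration range (years) and a probability of success. These numbers
# are illustrative assumptions only.
PHASES = [
    ("Phase I",   (1.0, 2.0), 0.6),
    ("Phase II",  (1.5, 3.0), 0.4),
    ("Phase III", (2.0, 4.0), 0.6),
]

def simulate_once(rng):
    """One trial: return total time to launch if every phase succeeds, else None."""
    total = 0.0
    for _name, (low, high), p_success in PHASES:
        total += rng.uniform(low, high)   # random draw for phase duration
        if rng.random() > p_success:      # phase fails -> program stops
            return None
    return total

def monte_carlo(n_trials=100_000, seed=42):
    """Repeat the trial many times and summarize the outcomes."""
    rng = random.Random(seed)
    times = [t for t in (simulate_once(rng) for _ in range(n_trials)) if t is not None]
    p_launch = len(times) / n_trials
    avg_time = sum(times) / len(times) if times else float("nan")
    return p_launch, avg_time

if __name__ == "__main__":
    p, t = monte_carlo()
    print(f"Estimated probability of reaching launch: {p:.1%}")
    print(f"Average time to launch (successful runs): {t:.1f} years")
```

Run enough trials and the simulation converges on answers — "how likely is launch, and when?" — that would be tedious or impossible to derive by hand once the model grows realistic.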
I have crafted my own definition: “Artificial Intelligence is the computer-based technology by which humans can off-load challenging but routine work to machines.”
But Dave, you say, that’s too simple. My response is, “What is wrong with that?” The experts’ definition is constantly evolving and looks different every five years or so, as the bar of “intelligence” for non-human thought keeps getting raised. Non-human thought: what does that even mean?
AI and MI are all about using machines (computers) to relieve humans of routine tasks by crunching numbers more efficiently than a human brain can. It is the complexity of “routine” that keeps changing. The visionaries who feel AI is all about computers that can think independently and develop their own code are trying too hard to count the angels dancing on the head of a pin. Technology that advances for the sake of novelty without benefitting humanity is neither useful nor desirable.
The value of separating AI from MI is minimal. I understand the viewpoint of the purists who envision AI creating new lines of thought and discovery. Pure AI is exciting, but it is also the Pandora’s box feared by Stephen Hawking, who warned that “the development of full artificial intelligence could spell the end of the human race.” That distrust goes a little too far for me, but I understand some of his concern.
Isaac Asimov’s three laws of robotics:
The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm.
The second law is that a robot shall obey instructions given to it by humans, except where those instructions conflict with the first law.
The third law is that a robot shall protect its own existence, so long as doing so does not conflict with the first two laws.
sound like a possible safeguard that could be implemented, but they won’t prevent abuses of AI/MI. The immensely powerful cloud computing platforms of Amazon, Google, Microsoft, Oracle, and others are real, practical, and alive today. I hate to gainsay a couple of geniuses, but I see AI as a friendly tool for humanity’s benefit rather than a harbinger of doom. Ultimately, the question of what full AI is exceeds my scope.
In my reading I have seen additional concerns that AI and MI carry inherent bias that needs to be addressed. No system is perfect, and trial and error is often the best path to identifying inequity. Simply put, question the bias of any AI or tool before you accept what it produces for you. Perhaps the best path forward is to create a framework for researchers and intelligence scientists to operate under.
Machine Intelligence (which I prefer to the ominous overtones of AI) is, then, a tool for off-loading routine and tedium from humanity to machines. Machines don’t get bored or lazy, and they make no errors caused by inattentiveness. An ideal and practical use of MI is akin to the algorithms used by Google and LinkedIn to scour the Internet and alert me to news on topics like Drug Discovery, or on a connection who was recently in the news. MI has automated the almost impossible task of regularly searching out news on all my contacts. When exciting promotions or innovations occur, they are delivered directly to me instead of my having to search them out or miss them altogether.
These same directed (or semi-directed) searches are then built upon by the MI/AI to keep me alerted to similar companies and people. Valuable information is delivered to me that I never even thought to look for. Google News knows me and constantly drops unexpected but valuable articles into my feed, something my Boston Globe digital subscription never quite accomplishes.
LinkedIn and Google are just two of the tools running MI algorithms on my behalf. I also receive machine-driven investment advice from Fidelity and Motley Fool. There are dozens, if not hundreds, of machine bots running to feed me data. By the same token, there are thousands of other machine algorithms running to analyze me and serve my information to their masters. Have you ever been speaking with a friend about buying a new product, only to have Amazon “magically” put it in front of you the next time you visit the site? How many times has Google’s autofill jumped to your question in three keystrokes or less? There is definitely an Orwellian overtone to some of the help I receive daily via my smartphone and laptop.
So, what’s my point? AI and MI are exciting tools that can also be a bit terrifying. Visionaries like Stephen Hawking and Isaac Asimov have shared their opinions, but as a technology solutions salesman I feel that Ronald Reagan stated the best policy: “trust, but verify.”