Shell's TechBlabber

…ShelLuser blogs about stuff ;)

Is AI the new ‘religion’?

In past years the topic of artificial intelligence has been raised numerous times, and not always in a positive way. Take for example Professor Hawking, who warned that AI could easily pose a threat to mankind because it can improve itself far faster than biological evolution allows us to. Or what to think of all the reported issues with self-driving cars?

Tesla’s CEO has already urged governments to intervene in order to stop the threat which, according to him, AI poses.

But how feasible are those fears?

Pseudo Science

If you follow global developments a little (the Internet is great for this) then you may come into contact with many so-called “pseudo-science projects”; the Indiegogo website is somewhat notorious for these. They are often fundraising projects that aim to set something in motion which is roughly based on scientific fact, but which doesn’t bother itself too much with all the details.

A good example is the WaterSeer project. In short: in order to help fight drought in poorer countries this project suggests using condensation to extract water from the air. The theory is extremely simplistic: provide a cooler area in the ground, get air into it and watch the magic happen. All well and good, but in real life things don’t really work that way. Sure, you can condense water from the air; anyone who has cooked potatoes and taken the lid off the pan knows as much. But it takes a lot more than merely a difference in temperature: the air has to be cooled below its dew point, and how far that lies below the ambient temperature depends entirely on how much moisture the air holds. In fact, while WaterSeer keeps drawing attention to providing cooler areas, my example of the cooking pan already showed you that this condensation process can even take place when it’s awfully hot, as long as the air is moist enough.
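To put a rough number on this, here is a minimal sketch using the Magnus approximation for the dew point. The scenario values (a steamy kitchen versus dry desert air) are my own illustrative assumptions, not figures from the WaterSeer project:

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Approximate dew point in degrees Celsius via the Magnus formula.

    temp_c: air temperature in degrees Celsius
    rel_humidity: relative humidity as a fraction (0..1)
    """
    a, b = 17.62, 243.12  # Magnus coefficients for water vapour
    gamma = math.log(rel_humidity) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# Steamy air above a cooking pot: condenses on anything slightly cooler.
print(round(dew_point_c(35.0, 0.90), 1))  # → 33.1

# Hot, dry air (10% humidity): the collector would have to chill the air
# below freezing before a single drop forms.
print(round(dew_point_c(35.0, 0.10), 1))  # → -1.2
```

Both samples are at the same 35 °C, yet one condenses against a barely cooler lid while the other needs sub-zero cooling: the humidity, not the temperature difference, does the heavy lifting.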

Now, you might wonder what all of this has to do with Artificial Intelligence. It’s not directly related of course, but I do wonder…

(ab)Using the lack of in-depth knowledge

See, what makes a project like WaterSeer ‘work’ where revenue is concerned is that people don’t stop to think about all the important details. We all know that water can condense, and trying to use this technique to benefit poorer countries may sound like a good idea at first, so you might be tempted to donate in order to help make this work.

I can’t help but wonder about the parallels with the development of Artificial Intelligence.

See, there have been numerous occasions where things went wrong. Now, there’s nothing bad about that: never underestimate the things you can learn from your mistakes. But I can’t help but wonder when the parties involved blame the whole thing on the AI itself. At first it may sound very plausible: the AI routines really worked, but they worked so well that things went in directions which weren’t foreseen at first. AI is real, and it’s something to be aware of and be careful with!

That may sound very plausible at first, but it also ignores every other possible scenario. See: at its very core an AI only does what it’s asked to do. It operates within preset parameters. Sure, it is possible that an AI may “choose” to perform actions which weren’t foreseen (for example because we expected it to do something different), but even so: those actions are still predetermined options set by its programmers.

Which raises the question for me: was the AI really that intelligent, or were the programmers simply not up to the challenge?

Leaving out facts to make it sound better

The problem is that the art of sharing half-truths is happening more often than you may realize. Very often reporters and/or reviewers hardly know much about a specific topic, yet still report their findings to others based on the information which has been given to them, without bothering too much to verify those findings and/or conclusions.

For example: Ransomware.

If you follow the news then you might almost think that the Internet is full of viruses comparable to a cold virus: the kind you can easily come into contact with, and if that happens then there’s little you can do to protect yourself other than trusting your immune system. In the case of computer viruses this would be your anti-virus software, right?

Well… Half-truths.

See: while it is possible that you can come into contact with these things, it takes a little more than bad luck to actually get infected. Most ransomware is spread using e-mail attachments in spam messages. You know, the well-known “You won 50.000 dollars! Please click the attachment to fill in your details”.

At that point in time you have come into contact with ransomware, but there’s no way in heck that it’ll ravage your network “just like that”. That only happens when someone is careless enough to actually open the attachment and start the program.

Once that has happened, the program is on your so-called LAN: your internal network. Internally, a network usually doesn’t try to protect itself from other computers on that same network, because those are considered to be (somewhat) trusted. And that is exactly what viruses such as ransomware heavily exploit. Well, that and the known vulnerabilities in Windows of course.
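That implicit trust can be illustrated with a minimal sketch. The service below (a made-up example, bound to the loopback address and an arbitrary port so it runs anywhere) greets whoever connects first, with no authentication step at all, which is how many LAN services behave towards their “trusted” neighbours:

```python
import socket
import threading

def lan_service(port, ready):
    """Minimal sketch of a typical LAN service: it serves whoever connects
    first, with no check on who the peer is. Implicit trust, nothing more."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))  # stands in for the machine's LAN address
    srv.listen(1)
    ready.set()                    # signal that we are accepting connections
    conn, _peer = srv.accept()     # no authentication, no peer verification
    conn.sendall(b"welcome, neighbour")
    conn.close()
    srv.close()

def connect_as_anyone(port):
    """Any client that can reach the port gets the same friendly greeting."""
    ready = threading.Event()
    t = threading.Thread(target=lan_service, args=(port, ready))
    t.start()
    ready.wait()
    cli = socket.create_connection(("127.0.0.1", port))
    msg = cli.recv(64).decode()
    cli.close()
    t.join()
    return msg

print(connect_as_anyone(50007))  # → welcome, neighbour
```

Replace the harmless greeting with a file share or remote-management endpoint and you have the picture: once malware is *inside* the network, every such trusting service is within reach.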

Yet what most news agencies forget to mention is that when you’re accessing the Internet you’re on a so-called public network (the Internet itself) and your computer sits behind several layers of defense. For starters your router (and that of your Internet provider), which will have firewall support to block incoming traffic. But the firewall on your computer is also a line of defense here. That well-known Windows vulnerability I mentioned above can’t be exploited from the Internet “just like that”; that’s simply not how this whole thing works.

So it’s all a lie then?

Most definitely not. That’s why I have mentioned ‘half-truths’ a few times now. It’s not a lie, but it does leave out very important details of the whole story. In the example above, the fact that someone opened an attachment is often ignored. Instead people only focus on the events that happen after that: how the virus exploited a Windows vulnerability and spread across a whole network. Nobody bothers with the fact that someone let that virus in in the first place.

So where does ‘religion’ come into play?

With all due respect to those people who do follow a religion… but to me a religion is mostly about control: a human-built hierarchy where some people feel more important and special than others. And this hierarchy only works as long as people grant those others their power. And when it comes to topics which try to explain the unknown, it often boils down to having a little faith (definitely no pun intended here).

But wouldn’t you agree that AI shows the same kinds of examples?

Here we have several rich people who “worked with AI” and who ran their “own AI projects”, and judging by that surely they know what they’re talking about, right? Even if you’re a disbeliever, that will soon be fixed once they manage to gain plenty of media attention, thus making sure that many people have heard of them.

Heck, some even went public by actively trying to put several regulations on the development of AI. All that’s needed is a little trust in their intentions.

It’s easy to follow these ‘prophets’, especially if you base yourself only on what little information they’re sharing. But the devil is usually in the details. As mentioned earlier: was the AI to blame for the mishaps, or were the programmers who tried to make it work at fault here?

Obviously it’s not the human factor, because “they” know what they’re doing right?

But do they really?


July 31, 2017 - Posted by | Editorial, TechBlabber

