AI and data – do you need to be worried?

Recently, a number of tech celebrities (Stephen Hawking, Elon Musk, Bill Gates, etc.; you can google the details, here is one news story) have expressed their concerns about harmful AI and the future of humanity. But really, do you need to worry? On this subject and beyond, I have a few words to say.

Nowadays, it seems to me that most people do not directly say they are working on AI. The term has been largely abandoned by academia for some years now, ever since the “letdown” of first-generation AI, which centered on rule-based logic and reasoning. Because AI is hard, people have found other ways to work on it, mainly via machine learning, data mining, computer vision, speech recognition, natural language processing, and so on. With the rise of these fields, all of which rely on the availability of data, today we say “big data” more often than “AI”. But what does data have to do with AI?

Here I give an informal definition of “new AI”: AI is the ability to understand real data and to generate (non-random and original) data. This definition might be imprecise, and I am not offering a rigorous argument here, but think about it: if a machine can understand all kinds of data, from visual data, voice data, and natural language text to network data, human interaction data, and more, is it any less intelligent than a human? Maybe something is still missing: if the machine can also create (non-random and original) data, e.g., generate meaningful figures, sounds, texts, theorems, and so on, then we can basically call it a painter, a writer, or a scientist, because it has all the expertise and can do creative work. It would then be more intelligent than most people.
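
As a toy illustration of what “generating non-random data” can mean at its most basic, here is a character-level Markov chain. This is purely my own illustrative sketch (the corpus string and everything else in it are made up for the example), not anything proposed in this post; it also shows why such toys are nowhere near the “original” creation described above, since the output merely recombines patterns already present in its training data.

    # Toy "data generation": a character-level Markov chain that samples
    # text following the statistics of a training string.
    import random
    from collections import defaultdict

    corpus = "the machine understands data and the machine generates data"

    # Record which character tends to follow each character in the corpus.
    transitions = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        transitions[current].append(following)

    # Sample a new string: non-random in the sense of matching the corpus
    # statistics, but clearly not "original" or creative.
    random.seed(0)
    char = random.choice(corpus)
    chars = [char]
    for _ in range(60):
        char = random.choice(transitions.get(char, list(corpus)))
        chars.append(char)
    print("".join(chars))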

This data point of view is superior to the original conception of AI, whatever it was called, because it enables us to make real progress and to do much more (I am not going to demonstrate this here either, and I suspect many “AI” researchers hold similar views). And if we look at what current “AI” research actually does, it is basically so-called data mining (please don’t confuse this general notion of data mining with the data mining community in academia), with a particular focus on data understanding. Take machine learning, for example: its basic principle is to feed data to machines and have them understand/recognize it on their own, so that they can extract something useful, make predictions, and so on. But not create! Machine learning is currently not focused on generating real data (although there are some trends in that direction).
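
To make the “understand and predict, but not create” point concrete, here is a minimal sketch of that machine learning workflow. The choice of scikit-learn, the iris dataset, and logistic regression are all just illustrative assumptions on my part, nothing specific to the argument:

    # A minimal sketch of the standard machine learning workflow:
    # feed labeled data to the machine, let it extract a pattern, then predict.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # A small labeled dataset: flower measurements -> species labels.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Understanding": the model extracts a decision rule from the data.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # "Prediction": apply the learned rule to unseen data.
    print("test accuracy:", model.score(X_test, y_test))

    # Note what is absent: nothing here generates new, original data.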

If we say that a machine’s ability to understand real data is weak AI, and its ability to generate (non-random and original) data is strong AI, then we are squarely in the weak-AI phase. And it is easy to see that, without strong AI, weak-AI machines are not so dangerous. You can say that cars or weapons are dangerous, and maybe they are, but ultimately that depends on the people who use them and the conditions under which they are used. People’s worry about AI machines is different: that the machines can get out of control and may destroy the human race. That might one day be true, but I think weak AI can never do this (without humans).

So how far away are we from strong AI? I think pretty far. We might achieve a little bit of strong AI in some specialized fields, but general strong AI is still well beyond our reach. We are getting there eventually, and people might need to worry about it at some point before it arrives, but I would guess not now. Of course, this might just be the pessimism of a practitioner; the opposite view may likewise be the optimism of non-practitioners.

To conclude, I think the takeaways from this post are:

  • We need to adopt a new point of view on AI, one that is all about data, and there is so much we can do with it without achieving what people usually picture as human-like AI agents (we did not build a giant bird in order to fly, and we did not build a robot arm to wash clothes, did we?).
  • As researchers working with data, we really need to think about the big picture of AI and work towards it solidly, for example by establishing the fundamental principles and methodologies of learning from data, rather than being trapped by all kinds of data applications.
  • Let’s not worry about harmful AI right now (though you should worry about things like information security, which is somewhat related); people did not worry this way before the car or the plane came around, right? (Well, maybe some people did.) “Weak AI” (as defined above) is more powerful than, but similar to, cars and planes: it is ultimately controlled by humans, and it can be dangerous if humans mishandle it. The real danger, machines getting out of control and conquering human beings, is possible, and you will need to worry about it, but not before we can really approach strong AI (so don’t worry about it now).

    Andrew Ng:

    I think the fears about “evil killer robots” are overblown. There’s a big difference between intelligence and sentience. Our software is becoming more intelligent, but that does not imply it is about to become sentient.

    The biggest problem that technology has posed for centuries is the challenge to labor. For example, there are 3.5 million truck drivers in the US, whose jobs may be affected if we ever manage to develop self-driving cars. I think we need government and business leaders to have a serious conversation about that, and think the hype about “evil killer robots” is an unnecessary distraction.

    from https://medium.com/backchannel/google-brains-co-inventor-tells-why-hes-building-chinese-neural-networks-662d03a8b548