AI for Dummies: Dummy Version

This is an introductory post for those looking to better grasp what the term AI actually means. It’s a companion guide that reviews some of the basic math needed and walks through popular topics within the scene. Later on I plan to walk through the basics of neural networks, which are the fundamental backbone of AI.

History vs Today

AI itself is an alternative to what we’ve mainly been relying on. Ever since computers shaped our world, we’ve relied on algorithms to empower our daily lives. An algorithm can be as simple as: count upward from 0 to 100, and each time you land on a prime number, double it and continue counting from the doubled value. That algorithm would produce the following:

0, 1, 2, 4, 5, 10, 11, 22, 23, 46, 47, 94…
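As a taste of the Python experiments to come, here’s a minimal sketch of that counting rule (the function names are mine, just for illustration):

```python
def is_prime(n):
    """Return True if n is a prime number."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def count_and_double(limit=100):
    """Count upward from 0 to `limit`; each time a prime is reached,
    double it and continue counting from the doubled value."""
    seq = []
    n = 0
    while n <= limit:
        seq.append(n)
        if is_prime(n):
            n *= 2  # jump to double the prime
            if n <= limit:
                seq.append(n)
        n += 1
    return seq

print(count_and_double())  # 0, 1, 2, 4, 5, 10, 11, 22, 23, 46, 47, 94, ...
```

Note how rigid this is: the rule is spelled out step by step, and the program can do exactly this and nothing else.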

The beauty of an algorithm is how little overhead it comes with: simple problems can be solved really quickly this way. The drawback is that algorithms don’t scale well. We can’t keep writing a new one every time we run into a new problem. If you wanted to build a self-driving car the ‘algorithmic’ way, you’d need to shove in tons of algorithms, and even then it wouldn’t be able to cope with new and unexpected scenarios. By the time a stop sign was tilting after a recent storm, there wouldn’t be an algorithm ready to react to that new and unexpected change!

We humans are quite good at adapting to new environments. Some historians believe that because we used to move around a lot, we had to learn how to adapt to new lifestyles, which led to our brains gradually expanding. Either way, it’s a fascinating skill that most of us take for granted. Teaching a robot to untangle cords is an extremely messy task, even though it seems simple to us. Yet robots are great at repeating tasks, while we’re not.

So where does AI come into all of this?

The beauty of Artificial Intelligence lies in the name itself: we attempt to ‘artificially’ reproduce intelligence, mainly by mimicking our own brains and actions. For decades this concept has made people speculate about the negative aspects of what the outcome could look like, and rightfully so: artificial intelligence is the act of recreating ourselves in the form of machines. There is a belief that machines will never outdo humans in terms of creativity, although from a psychological perspective creativity can be described as the recognition of unusual patterns. If machines can already recognize patterns (which they have proven in the form of board games), then what’s stopping them from going even further?

One thing preventing machines from taking that giant leap is actually a daily problem that AI researchers struggle with: meta-learning. It’s the art of generalizing and adapting to new environments, which I discussed above. The idea, when you stumble into a new problem, is to take a step back and observe humans. In this case we need to look at what babies do. At a young age they play with a wide variety of toys and build up a form of generalized intelligence; that intelligence is later able to adapt to completely new scenarios without much hassle. Humans have the advantage of millions of years of evolution, so recreating that within a few decades is definitely not an easy task.

Today, AI is limited to niche tasks, and it might not always seem as groundbreaking as it really is. It may even look like just another algorithm; consider, though, that the famous chess engine Stockfish was annihilated at chess by DeepMind’s AlphaZero AI a few years back. Clearly there’s something going on, but it’s hard to say where we are in terms of reaching the Singularity.

How does it work?

So how does AI work today? Diving deep into the low-level specifics is due for future posts, but generalizing the infrastructure should give some good insights.

AI consists of multiple components. Two that are quite major for the industry are NLP (Natural Language Processing) and ML (Machine Learning). I’ve already discussed meta-learning briefly above, although the other two have grown a lot more. Generally, ML is how the computer learns, and there are three main branches of building an ML model: supervised learning, unsupervised learning, and reinforcement learning, all of which are nowadays often combined with deep neural networks under the banner of DL (Deep Learning). The last branch is the most exciting one, as it’s the closest to how humans learn, and it’s a likely contender for reaching Singularity status.

Deep Reinforcement Learning

The reward-driven flavor of deep learning (technically, reinforcement learning paired with deep neural networks) means that you reward and punish the computer with a points system of some kind. The usual analogy is dangling a carrot on a stick while letting the subject teach itself. When it comes to games it’s quite easy to give the computer a goal: the closer to the finish line you are, the more you’re rewarded. On top of that, if you get there faster, you get even more points; maybe collecting coins rewards you further still. Throughout millions of simulations the algorithm gradually becomes better and better, without a tangible limit. With this comes a big challenge: how do you run millions of iterations in real life? Practically, you cannot. You can always train the AI in a simulation and use it in real life, which works well for some applications, but a true AI should probably be able to get by without this handicap, since today’s simulations are not as accurate as the real world.
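The reward-and-punish idea can be sketched in a few lines with tabular Q-learning, the simplest form of reinforcement learning (no deep network here; the corridor environment and all parameter values are my own toy choices). The agent starts at the left end of a short corridor and is rewarded only for reaching the right end:

```python
import random

def train(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a tiny corridor: start at state 0, goal at
    the right end. Reward +1 for reaching the goal, 0 otherwise."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the best-known action,
            # sometimes explore a random one
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: nudge the estimate toward reward plus
            # discounted value of the best next action
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(4)]
print(policy)
```

After training, the learned values favor moving right in every state: the agent has taught itself the route to the carrot purely from the points system, with no explicit algorithm for “go right.”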

So what about it? The interesting part about this topic is how closely it aligns with how humans learn. It doesn’t rely on us feeding it labeled data; it feeds itself by absorbing the environment. Classical ML is generally interesting as it helps us do a variety of tasks that were practically impossible before, but it’s never able to do more than what we teach it to.

Natural Language Processing

NLP is the art of getting computers to make sense of human language, concentrating huge amounts of data into generalized concepts. There are many techniques out there, but most are bound by the same supervised ML approach: we feed in a bunch of pictures of items from different angles, of different qualities and different variations, and then tell the computer “this is a shovel.” Once enough pictures have been fed in, the algorithm should be able to recognize future items on its own. The problem is that this is a linear process; again, we’re limited by the data we humans feed into it. There are interesting things that can be done with this, but it’s not AI; it’s not something that we can consider conscious.

A way around this limitation is, again, the generalization skill. Meta-learning comes in here: if we can teach the machine the color blue, it might be able to apply that knowledge to the shovel once it recognizes that it is a shovel. The problem is deriving that prior knowledge and applying it elsewhere.

The next stop?

So what comes next? We’ve already theorized a lot of the components needed to build something robust, but are we missing something? The big thing about AI that we probably don’t understand well enough yet is setting a goal: how do we tell the AI that it should empower us humans instead of destroying us? Using the Pythagorean theorem to measure how close an AI is to reaching a point doesn’t even come close to the complexity of being useful in real life.
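For contrast, here’s what that Pythagorean-style goal looks like in code (a hypothetical reward function of my own, of the kind used in toy navigation tasks): the entire “purpose” of the agent is compressed into one distance formula.

```python
import math

def distance_reward(position, goal):
    """Reward based on straight-line (Euclidean) distance to the goal:
    the Pythagorean theorem, negated so that closer means better."""
    return -math.dist(position, goal)

print(distance_reward((0, 0), (3, 4)))  # -5.0, the classic 3-4-5 triangle
print(distance_reward((3, 3), (3, 4)))  # -1.0, closer, so higher reward
```

A single number like this works fine for reaching a point on a map, but “empower humans instead of destroying them” has no such tidy formula, and that gap is exactly the alignment problem.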

The next question one might have is regarding NLP: how do we translate the binary thinking that a computer generally has into some artificial language that we can comprehend? You can always teach a computer that a red ball is ‘red’ plus ‘ball’ combined, but if you teach it the color blue, will it be able to recognize a blue ball? Not really, as that requires actual intelligence and not just image recognition, a front on which we haven’t really made progress. Furthermore, translating that knowledge into vocabulary is the next step, and an even harder one. NLP is likely going to be a necessary component, because without it we’re unable to communicate with the AI; we’ll essentially be on two different islands. Language has historically been so important to our survival that without it we wouldn’t have made it. That same test is coming close to us again, and it’ll be an ever so important one.

NLP is an exciting and probably really important area of research. One thing that might come before it is putting chips in our own brains and boosting our cognitive performance before incorporating intelligent robots into our lives, which is something we’re making steady progress towards. AI is a big field with loads of data, and we’ll likely need to encapsulate it into a generalized answer. Right now AI is dangerous, and if we were more intelligent than we are today, we might not set the world on fire; so putting our endeavour on hold until we’re more confident in ourselves is likely the way to go.

With that cognitive boost in mind, we should be able to come to an understanding with the robots, and it might actually become a reality sooner rather than later, not to mention the performance boost helping us create the AI much faster. Neuralink probably still has a decade or so to go before we’ll actually be able to access our prefrontal cortex. Not everyone will be willing to put a chip in their head, so NLP research will still be relevant in that regard.

Conscious robots?

So when does AI reach consciousness? I’ve briefly gone over some of the components, but at what point can it recognize itself? There are various ideas on how consciousness works, and some argue that everything is conscious at some level. I won’t get into the philosophical debates, so I’ll go by the definition that the AI has become conscious when it can formulate social contracts with us humans. Some might argue that robots today are already able to conduct social contracts in the way that they empower us; I would argue that we’re making a social contract with ourselves: when we code these robots, it’s basically mirrored back to us in the form of actions and events. Either way, the point when this notion becomes a reality is probably when its intelligence coincides with our own, although that intelligence remains impossible to measure properly, as we still don’t understand how our own intelligence works.

So that’s an overview of how the AI sphere looks right now. There is exciting research happening all around. To help keep up with the technology, I’m going to start sharing experiments with you: they’ll be conducted and detailed in Python in the simplest manner possible. I also hope to teach some of the math involved.

I write about AI-related topics and publish on the weekends. I’m interested in empowering people to have an impact, as I believe that’s the optimal way of getting there.
