GoodAI’s Marek Rosa Part 1 - Solving general AI would change everything

Marek Rosa, photo: archive of Marek Rosa

Prague-based video game designer Marek Rosa, founder of Keen Software House, recently established a firm focussing on artificial intelligence. Last July, Rosa launched GoodAI, contributing 10 million dollars of his own money to fund research into the development of general artificial intelligence, and unveiled a roadmap outlining how the company hopes to achieve its goals within as little as 10 years. Can general AI be achieved? By which methods? Those are some of the questions we discussed.

“Usually narrow or weak AI works well at what it is designed to do. But it works well only within a very specific domain and only at what it was trained to do. Self-driving cars, insurance company programs, face, people or object recognition AI, or chatbots (where you have AI which partially understands what you are saying or writing) are examples. The knowledge that AI gains in its respective area cannot be transferred to a different domain. Similarly, the AI cannot communicate with people or ask for more feedback or clarification. The AI also cannot learn gradually, meaning one skill on top of another.”

From what you are saying, weak AIs are always going to be stuck serving only the purpose for which they were designed. How does that differ from your goals at GoodAI? What is your long-term goal?

“The long-term goal is to create artificial intelligence with a human-level skill set. To get there, we will need to design an AI architecture which will enable the AI to learn, to acquire skills gradually - basically, acquiring one skill - solving a set problem - and then another and another and so on. One skill on the list could be the ability to learn much more efficiently. Learning to learn more efficiently is an important aspect in itself. If you have this kind of a system, then there are practically no limits to what the system could learn to solve. People, if they have enough time, can solve anything.”

Does that mean that the AI, from the very beginning, has to be hardcoded with some very specific instructions to set off the process, to get the ball rolling?

“Yes, of course. At the same time, the hardcoded parts of the algorithm should be minimal. We have two classes of skills or abilities: the first are called intrinsic skills or atomic properties - these are the skills that the system needs to have from Day 1, from the moment it is launched. And then there is another class of skills, which are learned. We plan for the AI to learn these at what we call School for AI. This is where the AI will learn many tasks. It is similar to the way education works with people. When you are born you have some DNA, you have some neurons, and they have to function in a certain way; if there is some error or if something goes wrong, you will not learn. Then there is the external part: the environment, the teachers, the society where you learn useful skills. So, it is very similar, in a way.”
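To make Rosa's two classes of skills concrete, here is a minimal sketch of the idea: a tiny hardcoded core that can try behaviours and read a teacher's signal, plus skills acquired one after another in a curriculum. Every name and mechanism below (Agent, Task, the toy curriculum) is an illustrative assumption, not GoodAI's published architecture.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    reward: Callable[[int], float]  # teacher's scalar signal, not language

class Agent:
    def __init__(self):
        # Intrinsic, hardcoded core: try candidate behaviours, read a
        # reward signal, remember what worked. Kept deliberately minimal.
        self.skills: dict[str, int] = {}  # learned skills, acquired in school

    def attend_school(self, curriculum: list[Task]) -> None:
        for task in curriculum:
            # Gradual learning: earlier skills seed the search for new ones.
            candidates = list(self.skills.values()) + list(range(10))
            best = max(candidates, key=task.reward)
            self.skills[task.name] = best
            print(f"learned {task.name!r} -> behaviour {best}")

school = [
    Task("count to five", reward=lambda b: -abs(b - 5)),
    Task("count to seven", reward=lambda b: -abs(b - 7)),
]
Agent().attend_school(school)
```

The point of the sketch is the shape of the loop: the hardcoded part stays small, and each new skill is searched for partly by reusing skills learned earlier - Rosa's “one skill on top of another”.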

From talking to you in the past, you seem to be taking an evolutionary approach to tackling this. How does the AI actually interact in order to learn?

“There can be different ways of doing this. With our School for AI, which is an automated and simulated environment - like a virtual world in which the AI ‘lives’ - it is set various tasks which it has to learn [One task was to learn, by itself, how to play the early video game Pong, which the AI reportedly succeeded at - Ed. note]. It can be a matter of showing unique patterns to the AI to push it in a direction so that it learns to do something with the patterns. Even this communication channel is hardcoded - it is not a language or anything high-level like that; they are signals. Signals that push the AI in the right direction. Basically, you need it like this until you theoretically get to the level where you can communicate through language, and then it should be much easier, because language has many ways to convey a message. It is potentially much more rewarding than sending simple reward or punishment signals, error statements or similar messages.

“Still, there are many different ways to try and achieve this. At the core, you are sending some data to the AI and the AI tries to come up with some models, some hypotheses, that will explain the data or react to it in the way that you want. The AI needs to be able to evaluate multiple possible solutions, and you repeat this process until the AI begins to solve the problem or present possible solutions to it. Today many of the problems which can be solved are very narrow. In the case of face recognition, you show the AI pictures in the form of pixels, along with some signal or value such as ‘the object you are looking for is there or it isn’t’, or maybe some other additional information. The AI then tries to form a model which answers the problem.”
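For a concrete picture of that loop - data plus a simple right-or-wrong signal in, candidate models out, keep whichever scores best - here is a toy sketch. The brightness-threshold task stands in for the face-recognition example; it is an invented stand-in, not GoodAI's actual training setup.

```python
import random

random.seed(0)

# Labelled data: (bright-pixel count, "the object is there or it isn't").
# The hidden rule the learner must recover is "present if count > 60".
examples = [(x, x > 60) for x in (random.randint(0, 100) for _ in range(200))]

def accuracy(threshold: int) -> float:
    """Score one hypothesis: presence predicted when count > threshold."""
    return sum((x > threshold) == label for x, label in examples) / len(examples)

# Hypothesis search: propose candidate models, evaluate each against the
# teacher's signal, keep the best - repeated model-forming in miniature.
best = max(range(101), key=accuracy)
print(f"learned threshold {best}, accuracy {accuracy(best):.2%}")
```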

The quest for AI, or general AI, is a major undertaking now being pursued by many different companies and research institutes around the world. What inspired you to take on such a task?

“I have been interested in AI since forever. I know that if we ever successfully achieve artificial intelligence, we will be able to use it to solve a great many problems which are currently beyond our reach. I also view this as a certain leverage: instead of humans trying to work on certain tasks, we could create AI that would then scale itself and clone itself and tackle some of these problems. Since it should be able to learn, the idea is that it would solve things much more efficiently than if you or I were trying to do it. So, I see it as a way of being more efficient.”

We are talking about problems which can be broad or great in scope: anything from economic problems to food crises to medicines and cures for diseases… all kinds of potential applications where we can really only see the tip of the iceberg, correct?

Illustrative photo: Alejandro Zorrilal Cruz / Public Domain
“Yes. I would also add, for example, the problem of space exploration and other scientific questions about the Universe, how things came to be. There, I think the risk is either that it will take us too long to find the answers or that there will be some limitation to our intelligence - maybe the answers are beyond our understanding. The best solution to that, in my opinion, is trying to understand intelligence, to try and create artificial intelligence, and then use AI to augment our own intelligence.”

How different to us might a created AI be?

“I actually think we could be quite similar, at least at first. In the beginning, the General AI or Strong AI could be quite similar, although of course there will also be differences. As it learns more and grows, the differences would become more marked. But even we, of course, have the ability, within certain limits, to improve ourselves, to add to our own skill sets. Every day, each of us learns something new and we use these skills to learn new things. If we learn to read and write, it takes us far beyond just oral communication and is far more efficient. The drive to learn is something that needs to be a default in the AI which we are designing. This AI will always try to improve itself. It should be able to uncover solutions too complex or time-consuming for us.”

That is an interesting point: how Strong AI, if achieved, would be close to how human beings think about or face problems or operate, before it moves to the next step. The reason I ask is that some researchers are taking the approach of what is called embodied AI, which is basically to provide a bipedal robot body, not unlike our own, with cameras and sound detection and response, for the AI to interact with the world. There, the idea is that for the AI to evolve it has to experience its surroundings in a similar way to us. This is explored in a BBC Horizon documentary called The Hunt for AI, which showed robots in Berlin communicating through a language they themselves were inventing. But perhaps if you go that route, the AI is more of an imitation, in a sense, of us. That is not necessarily the goal for you, is it?

“Well, the short-term goal would be that, but the long-term goal would be something which goes beyond our own skills. It can be beneficial to start by emulating what we know how to do, in such a way that the AI will later continue with self-exploration and so on. It will be biased by what it has learned from us but will also discover new skills. The key is for the skills it adopts, or adapts to, to be safe for us. We do not want to ever be in a situation where the AI would invent something dangerous for us. The AI should understand the world in a similar way to us. If it has a similar sense of the world, that should increase the chances that the AI will behave the way we want it to.”

In Part 2: the risks and dangers of AI and why, according to Rosa, it is a goal ultimately worth pursuing.