E65: Bo Morgan, Technology Lead at DreamWorks Animation – Interview

October 13, 2016

https://www.linkedin.com/in/bomorgan

This podcast is with Bo Morgan. Bo is the Technology Lead at DreamWorks Animation. His research focuses on Social and Emotional Learning (SEL) Artificial Intelligence (AI) models: trying to make a machine, a robot, or maybe even a character more human on an emotional level. Before that Bo was at an AI startup, AIBrain, and before that he received his PhD from the MIT Media Lab in 2011.

In this interview we talk about how to make robots more intelligent, where to even start, and what it means to have intelligence.

Here are some other things we talk about:

-What did you research at MIT?
-How do you train a robot to become more intelligent?
-Can you tell us about AIBrain, the AI startup?
-How did you get interested in cognitive architecture?

Transcript

David Kruse: Hey everyone. Welcome to another episode of Flyover Labs. Today we have a really interesting guest with us, Bo Morgan. Bo is the Technology Lead at DreamWorks Animation and his research focuses on social and emotional learning AI models. Before that Bo had an AI startup that we’ll talk about, and before that he received his PhD at the MIT Media Lab. So I invited Bo on the show because I’m quite interested to learn more about his background, what he has worked on in the past, and a little bit of what he is doing now. So Bo, thanks for coming on the show today.

Bo Morgan: Sure, no problem Dave.

David Kruse: So you have a pretty interesting background. Do you mind just giving us a little bit of an overview of how you came to where you are now?

Bo Morgan: Yeah. The social emotional learning track is an interesting story. It took kind of a circuitous route. In high school I started programming neural networks because my dad gave me a book on C++, fuzzy logic, and neural networks, and it had a couple of examples that I typed into the computer. I did a science project using these things, which won me an award. That got me into MIT and let me talk to some of the most amazing researchers in the field of neural networks, who had since moved on from these things.

David Kruse: I’m sorry, what year was that that you started?

Bo Morgan: I got into – I started programming when I was seven years old. I started programming the neural networks when I was in high school, maybe 16.

David Kruse: Wow!

Bo Morgan: 15.

David Kruse: Okay.

Bo Morgan: And then I got to MIT when I was 18, that was ’99.

David Kruse: Wow! So you were quite ahead. I mean neural networks have of course become popular now, but anyway, go on …

Bo Morgan: Neural networks are really simple. A lot of people download toolkits to do neural networks in Python, and I think they are still simple. You can write them with 11 lines of Python. It’s just a matrix multiplication and a couple of additions. It’s really a very simple algorithm.
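
For readers who want to see that claim made concrete, here is a minimal sketch in roughly that spirit; the toy data, network shape, and training loop are illustrative choices, not Bo's original code:

```python
import numpy as np

# A tiny two-layer network, in the spirit of "it's just a matrix
# multiplication and a couple of additions". The third input column
# of ones acts as a bias term. Toy XOR-like data for illustration.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])
rng = np.random.default_rng(0)
W0, W1 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(10000):
    h = sigmoid(X @ W0)                  # hidden activations
    out = sigmoid(h @ W1)                # network output
    d_out = (y - out) * out * (1 - out)  # output-layer error signal
    d_h = (d_out @ W1.T) * h * (1 - h)   # backpropagated error
    W1 += h.T @ d_out                    # simple gradient step
    W0 += X.T @ d_h
print(out.round(2))                      # approaches [[0],[1],[1],[0]]
```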

David Kruse: Fair enough, that’s true, that’s true. But now they’ve gotten so much more popular because of the data and the computing power. I mean they weren’t as popular in 1999, is that right?

Bo Morgan: Back then we didn’t have GPUs and Nvidia leading the charge, showing us it was worth investing in that approach.

David Kruse: Interesting, okay. All right, sorry to interrupt, keep going. Now you are at MIT.

Bo Morgan: No, no, that’s totally cool, we are going to go all over the place I’m sure. So let’s see, I’m at school talking with Marvin Minsky, who invented neural networks, perceptrons, or at least worked on them; it’s a large field. But he wrote a book called Perceptrons, and it was about neural networks and some of their limitations and some of the things that they could do, because he was helping to build these things. And this led me to cognitive architectures and working under his graduate student Push Singh. They had theories that were so abstract; they were like fearless of abstraction. The neural networks probably reflect the way the neurons work to some extent, to a frequency-based extent, not if you actually deal with the timing of the spikes. But at some level they were so abstract that they were dealing with these very abstract neural architectures, and so I got interested in reflective thinking. Social and emotional models came in with the fifth and sixth layers of Marvin’s Emotion Machine theory: the self-reflective and self-consciously reflective layers involve personalities, identities, selves, and others that are modeled after themselves. You have two levels of mental recursion. So not only are you thinking about the properties of the individual, yourself and others as individuals, but you can think about individuals’ thoughts about individuals. And it’s this doubly recursive type of self-modeling that allows you to do guilt and pride, some of the strongest emotions that I saw modeled with AI under Marvin. And so then you can start to think: what would she think of me, and of what I’m doing?

David Kruse: Wow! Okay, and so how do you even begin to model something like guilt or pride in a computer, in an algorithm?

Bo Morgan: That’s exactly – that’s a great question, and for any type of thinking you would ask that question: what’s a machine that could do the same thing, right? And there are different kinds of simple machines that can do these things. Say you model it as a goal-oriented machine, for example. You don’t have to, but let’s say that we do. So there is a goal-oriented machine. It makes plans towards a goal, it acts to carry out the plan, and it checks that the plan seems to work. These need goals, so where do the goals come from, right? If we are going to talk about a machine that has goals and can do these things and call that intelligence, where do the goals come from? And so then you start to introduce culture. Basically, to make a long story short, humans have culture and they communicate it from parents to children, and so there are these special social relationships that communicate via language and via other means from parents to children. So there is this special model in the child’s mind that says, this is where I get my goals. I’m sorry if I’m rambling, but did that answer your question?

David Kruse: Yeah. No, I mean – and then how do you actually take that and create a model, in even more detail? That was a good overview. But how do you take that and actually start programming a model to reflect?

Bo Morgan: Oh, how do you program it; good, good, good, yes. I would use a hierarchical approach, to think of categories of the types of mind, right. So the brain is divided into six segments along the spinal column, basically, and so you can think of it in six parts. The parts of the brain are useful for thinking of different functional areas, and you might want to build those in a computer program. But cognitive-architecture-wise there is a different approach, though you still think hierarchically. So at the highest level there are layers. These can be an array in a Python program. You have six layers; you have an array with six slots. The computer program is not complicated yet. It’s what the layers mean that is complicated. When you look at Marvin’s theory, for example, it’s a six-layer theory; there are other four-layer theories that are more conventional. But basically the bottom four layers are always the same: there is built-in reactive thinking, there is learned reactive thinking, there is deliberative thinking, and there is reflective thinking. These are all pretty standard ideas in robotics. So at the bottom layer you have the API for the robot; these are the things that don’t change. These are the sensory inputs and motor control outputs. If you are going to solve a problem you are going to use the actuators, and you are going to see whether or not these problems are accomplished through these estimators, and so that’s your lowest level of API. At the layer above that you have learned reactions. So there are sequences, little compiled programs you can learn or hand-program, that will do things using this lowest level of API; that layer changes through experience. The deliberative layer, layer three, is where you build plans towards goals. So you build a planning machine that has some kind of language, some logic that abstracts a model of the world, and uses that to reason and try to accomplish specific configurations of the world’s partial state, if you will. Then there is the reflective thinking layer, which will debug the planning layer: learn about how to plan, how to think. So my PhD was sort of the fourth layer, and then there are the fifth and sixth, which we’ve discussed, the self-reflective and self-consciously reflective.
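
As a rough illustration of that top-level structure, here is what the six-slot array might look like in Python; the layer names follow Minsky's Emotion Machine as Bo describes it, but the code itself is only a sketch:

```python
# Sketch: the six layers as a simple ordered structure. Layer names
# come from the theory; the dict shape is an illustrative assumption.
LAYER_NAMES = [
    "built-in reactive",            # 1: fixed sensor/actuator API
    "learned reactive",             # 2: learned stimulus-response sequences
    "deliberative",                 # 3: builds plans towards goals
    "reflective",                   # 4: debugs the planning layer
    "self-reflective",              # 5: models of self and others
    "self-consciously reflective",  # 6: doubly recursive self-models
]

mind = [{"name": name, "agencies": []} for name in LAYER_NAMES]
assert len(mind) == 6               # "an array with six slots"
```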

David Kruse: Got you, okay. And yeah, do you have a project, maybe you covered this in your dissertation, where you implemented at least the fourth layer, and how did it work?

Bo Morgan: Yes, it’s open source and it’s called FUNK2: F-U-N-K and then the numeral 2. It’s on GitHub.

David Kruse: Okay.

Bo Morgan: And that has all four layers. Oh, back to the hierarchical ideas; sorry, I got stuck on the first level of the hierarchy, which is the six categories. But then there are agencies within layers. These are groups of resources that solve a certain type of problem: you will have a physical agency and a sensory agency in the built-in reactive layer. The resources that are the parts of those can be arbitrarily complex or simple units, but basically they are things that can be activated or suppressed; they are kind of parallel processes, if you want to think of them that way. If you want to think of them as physical parts of the brain, they are like fMRI images that turn on and off, but that’s a horrible analogy. Parts of the brain do not implement these abstract types of resources, but there would be some kind of computational equivalent, right. I don’t know the modern neural term for patterns, temporal patterns across the brain, and I don’t know what the best analogy would be. But you know there is electrical activity that is spiking around; I forget the actual term, whether it’s sparse coding, but anyway. So, three levels of hierarchy: layers, agencies, resources.
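
To make the layers-agencies-resources hierarchy concrete, here is a hypothetical sketch; none of these class names come from FUNK2, they just mirror the structure Bo describes:

```python
# Hypothetical sketch of the three-level hierarchy: layers contain
# agencies, agencies contain resources, and resources are parallel
# processes that can be activated or suppressed.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Resource:
    name: str
    run: Callable[[], None]      # the unit of work this resource does
    active: bool = False

    def activate(self): self.active = True
    def suppress(self): self.active = False

@dataclass
class Agency:
    name: str                    # e.g. "physical" or "sensory"
    resources: List[Resource] = field(default_factory=list)

@dataclass
class Layer:
    name: str                    # e.g. "built-in reactive"
    agencies: List[Agency] = field(default_factory=list)

def step(layers: List[Layer]) -> None:
    """Run every active resource once, as loosely parallel processes."""
    for layer in layers:
        for agency in layer.agencies:
            for r in agency.resources:
                if r.active:
                    r.run()
```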

David Kruse: Interesting, okay. And do you have a specific example, maybe at MIT or outside MIT, where it was like, hey, we have this goal of training this robot or AI to do this, and this is how we made that happen?

Bo Morgan: Yeah, yeah, for sure. I worked on – well, it wasn’t a real robot that I worked on at MIT, it was a simulated model.

David Kruse: Oh! Yeah, that’s fine. Yeah.

Bo Morgan: So that’s one decision you have to make, whether you build a real robot or run a simulation. What I had was a very basic logical simulation. It had real numbers and very minimally realistic physics in real time. It was a basic blocks world, [inaudible] and it would move a claw left and right, and you could drop the claw and pick up whatever was below it, basically, like those arcade claw machines. And so it could stack blocks if they were the right shape. Then you can give it goals like, I want the red triangle on top of the green cube, that kind of thing, or the red pyramid on top of the green cube. And so it would learn to stack these blocks, and I built a system that would do that. It would execute plans to do that, and then it would learn when different types of plans succeed or fail. So it showed the deliberative layer learning about the physical world and the physical actions, which caused the models to be updated. And then it had a layer above it, a reflective layer, that watched the planning machine manipulate plans and decide to execute specific plans based on different planning algorithms. It would make a correlation between deciding to execute a plan and a failure object versus a success object being connected to that plan, and so it could learn the results of planning, the decisions to execute different types of plans, and learn how to think, basically.

David Kruse: Interesting.

Bo Morgan: The specific example was two blocks on a table; let’s say a green pyramid and a blue cube again. They are both on the table, and you know two plans, basically – two methods for how to plan, which in the analogy is how to think. One is to go through your oldest learned plans and start with those, and then come forward in time, trying each one in your imagination, imagining the effects of that plan and seeing if it accomplishes your goal in your imagination, and then you say, oh, it does accomplish my goal, I am going to execute it. There is another way of planning where you go to your most recently learned plan and do this imaginative process backwards in time through all the plans you learned previously. These two different methods of planning are good for different types of problems. So what the system would learn is that when it executed the oldest plans first for solving a problem involving triangles on cubes, or something like that, it learned that executing the oldest plan for stacking a triangle on the cube resulted in a failure object. Actually it was stacking two blocks; sorry, I’m misremembering. It’s a complicated model that I’m happy to let rest for a few years, because I focused on it for so long. But the goal was to stack a block upon a block, and the shape didn’t matter. What it did was, it found this old plan for stacking a cube upon a pyramid, and it failed. And so it categorized that: deciding to execute that type of plan, the old plan, led to a failure object attached to the plan. There was a reflective goal to have the planning machine avoid creating failure objects, and so reflectively, the thinking had failed as well, because it had resulted in a plan that had a failure object attached to it. And so what it would learn is, when I see this type of problem, I should not execute that plan, because it will lead to an expectation failure. So that’s a very specific example of a relatively complicated form of thinking.
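
Here is a hypothetical sketch of those two planning strategies and the reflective bookkeeping around them; the function names and data shapes are invented for illustration and are not from Bo's dissertation code:

```python
# Two ways to search learned plans (oldest-first forward vs.
# newest-first backward), with success/failure outcomes recorded
# against the *decision* about how to plan. All names illustrative.
from collections import defaultdict

def plan_oldest_first(plans, goal, simulate):
    for plan in plans:                 # start from the oldest learned plan
        if simulate(plan) == goal:     # imagine the plan's effects
            return plan
    return None

def plan_newest_first(plans, goal, simulate):
    for plan in reversed(plans):       # start from the most recent plan
        if simulate(plan) == goal:
            return plan
    return None

STRATEGIES = {"oldest-first": plan_oldest_first,
              "newest-first": plan_newest_first}
outcomes = defaultdict(lambda: {"success": 0, "failure": 0})

def reflective_step(problem_type, strategy, plans, goal, simulate, execute):
    plan = STRATEGIES[strategy](plans, goal, simulate)
    result = "success" if plan and execute(plan) == goal else "failure"
    # The reflective layer learns which way of *thinking* works for
    # which type of problem, from a single outcome.
    outcomes[(problem_type, strategy)][result] += 1
    return result
```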

David Kruse: And how would it know if it succeeded or not? I mean of course we know, but how would the…

Bo Morgan: It has perceptions of its environment, but it doesn’t have any real sensors, because it’s a simulation. So it can see that that’s a block and it’s blue, and these are all symbolic in the mind of the reasoner; they are directly input. And so when it has a goal like stacking the blue block upon the red pyramid, it can look at the table and see very clearly, in logical terms, what is true and what isn’t true, and so it can clearly compare: okay, here is my goal, and here’s what’s true in the world, and they are not the same; they are not what I expected.
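
A minimal illustration of that kind of symbolic comparison; the predicates are made up, but the idea of diffing a goal against perceived world state is exactly this small:

```python
# Goal and world state as sets of symbolic assertions, compared
# directly -- no real sensors needed in a simulation.
world = {("on", "blue-block", "table"),
         ("on", "red-pyramid", "table"),
         ("color", "blue-block", "blue")}

goal = {("on", "blue-block", "red-pyramid")}

unmet = goal - world          # assertions expected but not observed
if unmet:
    print("expectation failure:", unmet)
```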

David Kruse: Got you. And what was one of the most difficult parts of that project? Maybe there were many difficult parts, but…

Bo Morgan: Oh! The politics.

David Kruse: Really?

Bo Morgan: To tell you the truth, yeah. It’s just so hard to do. It’s a very theoretical AI project that has no direct real-world application. I am so excited to take this learning algorithm, which scales to N layers. You can do N layers of reflection. So given one failure, one expectation failure, if you have N layers of reflective thinking you have N things you can learn, which is the opposite of big data. It is the epitome of the opposite of big data, and I focused on that specifically: one-shot learning, where you can learn an enormous amount from the smallest failure.
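
A schematic sketch of that "N lessons from one failure" idea, assuming each reflective layer simply observes and learns from the layer below it; the shape of the loop is the point, not the details:

```python
# One expectation failure propagates up through N reflective layers,
# each drawing its own lesson about the layer below. Purely schematic.
def propagate_failure(failure: str, n_layers: int) -> list:
    lessons = []
    event = failure                  # e.g. "plan P failed on goal G"
    for level in range(1, n_layers + 1):
        lesson = f"layer {level}: avoid what produced <{event}>"
        lessons.append(lesson)
        event = lesson               # the layer above reflects on the
                                     # lesson just drawn below it
    return lessons

for lesson in propagate_failure("stack(cube, pyramid) -> failure", 4):
    print(lesson)                    # one failure, four things learned
```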

David Kruse: Interesting.

Bo Morgan: And I’m excited to implement it.

David Kruse: Right.

Bo Morgan: I mean in a practical way. I’ve shown it working theoretically.

David Kruse: Okay, and is that – I know on your website you have some current research projects – is that what you hope to implement?

Bo Morgan: I do. I’m trying to simplify the algorithm. I went hog wild implementing a programming language in my PhD, and I learned a lot; that was my goal. I’m kind of a bastard in that way, but I implemented this very theoretical, abstract algorithm on top of it. So what I plan to do now is write it in JavaScript, which is a relatively beautiful language – not as beautiful as Scheme, but it’s a very beautiful language, very similar – and implement it in JavaScript, very abstract, very simple to understand, and just get a little demo to run. I bet I can get it to run faster than what I wrote in C, which is a very low-level programming language; I was trying to make everything run super fast and actually had it run extremely slow. But I learned a lot about computer science through that experience.

David Kruse: Right. And your vision is so interesting, because often such a big problem is that to train these models you need just a huge amount of data, which is tough, especially for robotics.

Bo Morgan: Usually, yeah, yeah, which is awesome, because we have a lot of data suddenly. It will solve a lot of problems for us, which is cool.

David Kruse: For certain things, but sometimes it’s hard to get that data. So if you could start learning…

Bo Morgan: Totally. You don’t want to get that data sometimes.

David Kruse: Right, that’s very hard. So with this new research project, what would be a good use case or goal? If you are writing the JavaScript and you can do X, what would you do?

Bo Morgan: I’m thinking of teaching an open-source programming class where we use the tools and visualize the algorithm running. If I can get 30 students a year learning how to manipulate those ideas, understanding this kind of learning from a single failure, I think that would amplify the idea, and I would work at the same time on building the algorithm, you know, having a research group to focus on that, even if it were just myself.

David Kruse: Interesting, I like that vision, that would be great. I mean, it could help in lots of industries. Do you see any specific industries where it could be especially useful, maybe early on? I don’t know if you’ve thought about that?

Bo Morgan: Yeah, yeah, that’s a very good question. I’ve been applying it to social and emotional learning and making interactive robots and games for kids, using this cognitive architecture, the six-layered model – getting up to layer four, which was the focus of the learning algorithm – and training children, like the current brain-training or educational software does. Most of this neurally based or physiologically based brain-training software is focused on, in terms of the model, layers one through three, and very little of layer three, which is deliberative thinking, is represented. Most of it is perceptual; most of it is motor control, reaction timing. There is some visual path planning. There is a lot of visual training, a lot of auditory training. There is some executive function training. There is one form of reflective thinking that I have seen on one of these brain-training websites, which has to do with task switching, I believe. It’s based on a psychological experiment where they have a game where they present colored numbers, and they say, if the task is task one, you press the button if the number is even; if the task is task two, you press the button if the color is red. And so they cycle these numbers at you, of varying colors and values, and the task switches every once in a while, with a little popup flag that just says, okay, task two, task one, and they change the task that way. So that is a very, very basic form of reflective control, and that’s all that I’ve seen. I think we can go much further with clearer and more advanced models of reflective thinking, in terms of AI agents that are competitive, that can think in these ways really well, and they can play games with children, who can think really fast and are learning to think in those ways. I think that would be a fun sport, for children to play these games and compete against these highly intelligent agents. But getting up to layers four, five and six, social emotional learning, there are a lot of children that could benefit from those tools and those learnings.
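
For concreteness, here is a tiny sketch of that task-switching game's core rule; the stimuli and switching schedule are invented:

```python
# Task-switching game: colored numbers cycle past; under task 1 you
# respond when the number is even, under task 2 when the color is red.
import random

def should_press(task: int, number: int, color: str) -> bool:
    if task == 1:
        return number % 2 == 0      # task 1: press on even numbers
    return color == "red"           # task 2: press on red stimuli

random.seed(0)
task = 1
for trial in range(10):
    if trial % 4 == 0:              # the popup flag switches the task
        task = 2 if task == 1 else 1
        print(f"-- switch to task {task} --")
    number = random.randint(0, 9)
    color = random.choice(["red", "green", "blue"])
    press = should_press(task, number, color)
    print(trial, number, color, "PRESS" if press else "")
```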

David Kruse: Interesting, and do you have an example of what a game like that would be, if you have your ideal in mind?

Bo Morgan: Yeah, let’s see, we had a few ideas for conversational agents at AIBrain.

David Kruse: Oh! Yeah, do you want to talk about AIBrain quickly, since…

Bo Morgan: Oh! Yeah, sorry. AIBrain was a startup that I was a part of a couple of years ago, 2013 and 2014, and we presented at the Consumer Electronics Show a little toy robot that was made by Bona Vision in Korea, which is another company, a sister company basically; we shared a CEO. He is a serial entrepreneur who had a successful startup security company in Korea, and since then he has had, I don’t know how many companies, but I know of five. A very interesting, successful man, Richard Shinn. He has this vision to build true AI, and he is starting with these toy robots for kids. He has lots of educational robots, and so I was trying to turn this towards an educational activity that would do social and emotional learning games with children, inspired by neuroscience and these upper layers of Marvin Minsky’s model. So there were conversational agents, and you could have games with children that were based in social online worlds, where parts of the game were with their friends from school, which was tied in to the educational programs at school, or you could have a database to look up these friends. The social environment would allow you to communicate with artificial agents in the beginning, so you could have your little robot friend that is a gift to you at the start of the game, and then your robot friend introduces themselves and introduces you to this world of social and emotional learning, where you make little games out of the different cognitive tasks. So for example, you go to the mall and you have to get groceries and also get your haircut and do a couple of other tasks, and there are different stores in the mall where each of these things is done, and you have to make a plan for getting around this mall. So this was executive function and visual processing, and part of it was trying to focus on social and emotional learning. One of the basic aspects of social thinking was theory of mind: the ability to think about what someone else knows, or to think about what their goals are. Part of this was, in the mall situation, there would be obstacles, like a pillar in the middle of the room. You can’t see through the pillar if you are a first-person character. The user actually has a God’s-eye view, but for the character you are controlling and the various characters in the scene, you can see that they can’t see everything in the room. So people can be hiding from other people, objects can be hidden, and someone can be confused: where are my keys? He is walking around the mall, and the keys are behind the pillar, and so you can help him, because you know that he doesn’t know where the keys are. You can actually point to the keys, and he can see you point to the keys. This kind of triangulation, obscuring vision with physical objects in the room, allows children to build models of what people must be thinking about other people. Those are the types of thinking that I would really like to get into these educational games, and keep them fun, keep them on a social level, because actually they really are social. So later in the game we start introducing real friends; it’s not just artificial agents you are interacting with.
It’s like, your friend Billy from school is going to play the mall scene with you, and now you are both characters, communicating with each other using the same social tools you used with the artificial agents.
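
Here is a hypothetical sketch of the visibility check behind that theory-of-mind mechanic; the 2D geometry and coordinates are invented for illustration:

```python
# A character can't see an object if a pillar sits between them;
# the player, with a God's-eye view, can reason about what the
# character doesn't know.
def blocked(viewer, target, pillar, radius=1.0):
    """True if the pillar (a circle) intersects the viewer->target segment."""
    (vx, vy), (tx, ty), (px, py) = viewer, target, pillar
    dx, dy = tx - vx, ty - vy
    seg_len2 = dx * dx + dy * dy or 1e-9
    # Project the pillar center onto the sight line, clamped to the segment.
    t = max(0.0, min(1.0, ((px - vx) * dx + (py - vy) * dy) / seg_len2))
    cx, cy = vx + t * dx, vy + t * dy
    return (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2

keys, pillar = (5, 5), (3, 3)
character = (1, 1)                                  # he can't see the keys...
player_sees = not blocked((0, 8), keys, pillar)     # ...but you can
if player_sees and blocked(character, keys, pillar):
    print("point at the keys to help him")
```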

David Kruse: Interesting, wow! Okay, that’s cool. And are they still working on that at AIBrain, or is it…

Bo Morgan: They are still working on it, yeah, yeah. So AIBrain is still going strong. I think they are still working with Daniel Colfax, who is a professor of planning at Budapest University. So they have a very strong layer three, basically, the deliberative thinking layer, and Dan builds planners. He has a very reflective view of planning. So I think they are going to get up to these higher layers, and Richard is very focused on social robotics, so I think the social, emotional trajectory they are on will continue.

David Kruse: Interesting. And just for the audience, we are not going to talk a whole lot about DreamWorks today, but what prompted you to go to DreamWorks? Maybe that’s all you can talk about, but at least I’ll ask.

Bo Morgan: Yeah, I work on artificial intelligence at DreamWorks. I am continuing to work on social emotional learning models. DreamWorks Technology is very focused on children, so my work is very inspired by those same kinds of educational goals. I think it’s safe for me to say that I’m not working in the production film part of the business. I’m working in a part of the business that’s really focused on advanced artificial intelligence, storytelling types of artificial intelligence: film, cinema, storytelling. We’ve just been bought by Comcast, as you may know, and it’s very exciting to see the transition and where I will land, basically. I’m super excited about the stuff that I’m working on right now.

David Kruse: Well, okay, cool. Well, maybe in the future we can have you back on and you can talk more about that.

Bo Morgan: For sure, yeah. It will be public after the Comcast deal goes through. We really can’t make any legal moves at the moment.

David Kruse: Yeah, yeah, fair enough, okay, cool. And so we are getting kind of near the end of the interview, but I’m curious to ask about a presentation you gave on a Cognitive Architectural Map of AI Startups. It’s a really interesting presentation; it’s on your website, and there is a lot there. We’ve actually talked about a fair amount of it already, so we can dive in a little bit more. But one question I had for you: you talked about where AI startups should focus. You don’t necessarily want to focus too late, or if there are other companies already in production and commercialized, maybe you should focus someplace else. Can you get into that a little bit?

Bo Morgan: For sure, yeah. That was a point I wanted to make about the varying maturity of different technologies. The idea begins as a philosophical thought, then it progresses – I know I’m not going to remember my own model, but there were six progressions. I think it went to a mathematical theory, and then it went to a computational implementation, and then to a practical application in a research setting, and then to a practical implementation in a commercial setting, and then to a startup idea, where this is not just something that a company is making and using in a company product; it stands alone as a company.

David Kruse: Yeah, got you.

Bo Morgan: And so in that gradation I was trying to say, not which one is right, but where should you focus, and what are the guidelines for determining where you focus in that spectrum? Am I going to study theoretical math? Am I going to look at the startups that are out in the world? What should I be looking at, what should I be learning how to make? I think one kind of graduate student is going to focus on theoretical math and the algorithms that haven’t been implemented computationally yet. Some graduate students will be, and should be, more into learning applied business and product skills, and so they will take something that’s already been implemented computationally, learn how to use it, and build products with it, kind of like using the neural network toolkits to make products. And the businessperson, right, trying to figure out what the next startup idea is, should be looking at these AI ideas that have just been implemented and haven’t been proven in products yet. And you should also be looking at product ideas that are out there that maybe could stand alone as a startup which focuses just on that.

David Kruse: Got you, that makes sense. No, that’s interesting, okay. And so we are almost near the end here. I do have kind of one random question for you. You mentioned at the beginning how within 11 lines of Python code you can implement a neural network. So do you think these AI startups are tweaking all these neural networks? I mean, there are lots of ways to tweak the parameters around a neural network, of course, but the actual guts of the neural network – are they tweaking the actual code for that, or are they just kind of taking what…

Bo Morgan: I really don’t know, and I think it’s different in every case. I think doing a successful startup is not going to be about tweaking a neural network. In my minimal experience – I was at AIBrain for two years – the number of hats you have to wear, and the number of stories you have to tell to different people about what you’re doing and what the value of what you are doing is to them… If you are sitting around on your computer tweaking neural networks, you should be doing that for like 15 minutes out of the day, because somebody needs to be building the product, which is not going to be AI, and there is so much more to it than just neural networks. There is the user interface; there are all kinds of interesting intelligent parts of the application. I think neural networks are a really useful tool, and they require a scaffolding, which is pretty standard. One part is feature extraction, and engineering features that will make learning how to solve your problem easy is an art. The training of the neural network, as you said, the tweaking of the parameters, that’s an art; that’s a lot of fun. Building your own neural networks is, as you said, often part of that process. But then there is all the GPU math, right, using these Nvidia Tesla 5,000-core processors to do that math in parallel. Oh my God, it’s so much fun. I wish I was doing it, but I have other things to do, and that wouldn’t be working toward my goals, even though it sounds like a lot of fun and I should do it in my open-source hobby time – you know, if you aren’t already fulfilled with other open-source projects. Those are such cool machines that I think working on that is awesome. But for the startups, we are going to make money with this; it’s not just fun for a hobby. You really have to focus on what’s the business plan, how are we going to make money, and it comes down to sometimes walking around knocking on doors. I had a buddy with a startup company around finding people jobs. He could match your personality and your whole profile and all your previous job search history to recommend jobs, and it could consider what you like in the workplace, what kind of culture you would like to have, all these different social traits of the company. It was an awesome website, and it worked really well. But he had to go on foot to try to recruit people, old-style recruiting. This was a recruiting company, and he had to hit the pavement and go recruit some people just to grease the machine.

David Kruse: And that’s what it takes. I mean you are right, part of it’s the technology and part of it’s the hustle and the other part of it’s the…

Bo Morgan: Totally. And then there is a big difference between AI and AI startups.

David Kruse: Yeah, good point, good point, okay. Well, I think that’s a pretty good way to end the interview. It’s too bad, I have a lot more questions for you, and maybe we’ll have you back on down the road where you can talk about your DreamWorks work a little bit more. But for now, I really appreciate your time and thoughts; I learned a lot, and I hope the audience learned a lot too.

Bo Morgan: Thank you so much Dave. It’s been a pleasure.

David Kruse: Definitely, and thanks everyone for listening to another episode of Flyover Labs. As always I greatly appreciate it and we’ll see you next time on here. Bye everyone.