E82: Sachin Chitta, Founder & CEO of Kinema Systems – Interview

December 15, 2016

https://www.linkedin.com/in/sachin-chitta-7696491

This excellent robotics interview is with Sachin Chitta. Sachin is the founder and CEO of Kinema Systems. Kinema Systems is a robotics manipulation startup in Silicon Valley. What they’re doing looks like it’s out of a Sci-Fi movie.

Their initial product, Kinema Pick, automatically picks boxes off pallets. It's the world's first self-training, self-calibrating solution for robotic depalletization.

Sachin has a deep history of robotics manipulation at SRI and Willow Garage. He received his PhD in 2005 from the University of Pennsylvania.

I was lucky enough to talk to Sachin about his experience and how they built Kinema Pick.

Here are some other things we talk about:

-Why did you choose depalletization?
-When you bring the Kinema Pick into a new client, how do you train it? Turns out, it’s pretty easy.
-How fast does the Kinema Pick pick up boxes?
-Who’s on your team?

Transcript

David Kruse: Hey everyone. Welcome to another episode of Flyover Labs, and today we get to talk to Sachin Chitta. Sachin is Founder and CEO of Kinema Systems, and what they are doing is pretty cool. They are a robotics startup in Silicon Valley, and they are building advanced robotics manipulation applications. Their initial product, Kinema Pick, is the world's first self-training, self-calibrating software solution for robotic depalletizing. So essentially they take stuff off pallets, and that's a tough problem for a robot to crack. Sachin has a long history in robotics manipulation at SRI and Willow Garage, and he received his PhD in 2005 from the University of Pennsylvania. So I invited Sachin on the show because I'm interested in what he's done, and is doing now, around robotic manipulation, and curious how he does it. So Sachin, thanks for coming on the show today.

Sachin Chitta: Thanks for having me on the show.

David Kruse: Definitely. And before we talk about what you are working on now, can you give us a little bit about your background?

Sachin Chitta: Yeah, sure. So I have a PhD in Mechanical Engineering from Penn, after I did my bachelors, again in Mechanical Engineering, in India. I came over to Penn and worked in the GRASP Lab, which is a pretty famous, pretty old lab where there is lots of cool work in manipulation, drones, locomotion, all kinds of things. I worked there and did a postdoc there as well, on a program called LittleDog, which was a small robot, kind of like BigDog but a smaller version, and we were trying to train it to go over rough terrain. After that I went to Willow Garage, where I was one of the early employees, and I actually went through all the projects that Willow did, most of which people have not heard of, including an autonomous car and an autonomous boat project. But eventually we ended up working on the PR2 robot and the Robot Operating System, and at Willow I led the group that worked on all the manipulation components; in particular the software framework called Arm Navigation, which then evolved into MoveIt, which became the most popular open-source software for manipulation. There was also another project I led called ros_control. This was intended more for people who were building their own robots and needed controllers for them, and we wanted to create a nice library that they could just use. After Willow Garage shut down, I went to SRI, which is a pretty famous non-profit research lab in the Bay Area. There is lots of cool technology that has come out of there, including Siri and Intuitive Surgical. At SRI, among other things, I was leading the software side of the robotics group, and we developed technology that eventually became Verb Surgical, which is a medical robotics company, a joint venture between Google and Johnson & Johnson.
After that, along with a colleague of mine – the two of us had worked together since Willow Garage – we created Kinema Systems, and since then I've been working at Kinema.

David Kruse: Well, that's quite a background in robotics. We could spend the whole podcast just going over all those projects you worked on. Oh my! You were kind of in the middle of it all out there. Interesting! So I guess, what were one or two of your – I don't know if favorite is the right word – but more meaningful projects, projects that really changed your perspective on robotics, or that you really enjoyed, in your past?

Sachin Chitta: I think I have enjoyed all my projects. They have all been fun. I think the ones that had the largest impact, at Willow Garage and at SRI, have been particularly enjoyable, because it was clear that we were doing something that would be useful, not just for us, but for a lot of people who were getting into robotics and enabling new applications. So doing MoveIt, creating that framework at Willow Garage, was extremely satisfying, because we knew that there were hundreds if not thousands of people out there looking for a software platform they could program robots with, and it didn't exist in a nice form where they could just take it and incorporate it into their robots. So that was, I think for a lot of us, really satisfying to be involved in. And then at SRI, doing a lot of the medical robotics work was incredibly satisfying because of the impact that medical robots have on the world, and looking at new ways we can take this technology forward and change the way that surgeons use medical robots.

David Kruse: And maybe it's confidential, but if it's not, can you share some of the projects you worked on at SRI around robotic surgery?

Sachin Chitta: So apart from saying that this was new technology that we applied to surgical robotics, everything else and all other information regarding that is confidential.

David Kruse: Okay, got you, fair enough. And then, since it's definitely applicable to Kinema, can you tell us a little bit more about MoveIt and what you created around software for manipulation? How did people develop manipulation robotics before, and how can they use MoveIt now to make their lives easier?

Sachin Chitta: So, stepping back in time, manipulation has been strongly associated with commercial systems. There are lots of industrial robot manufacturers, and each one has its own software and framework that you can use with its robots. The frameworks are proprietary, so unless you are using that robot you can't really get access to the software. As people started building more and more innovative robots, including at Willow with the PR2 and other projects, there was definitely a shortcoming: we didn't have software that you could use to control these robots and do new kinds of tasks that go beyond what was already there. So our goal was to create something that was open source, so it was easy for people to incorporate into their projects, but also very capable, letting people do things beyond what was traditionally being done with industrial systems. That's why we created MoveIt, and MoveIt had components that would let you integrate with new hardware, new robots in particular, or even with existing robots, and bring some of this new technology into them. So it was what we called robot agnostic. You didn't have to worry about which vendor you had bought the robot from, or whether you had made it yourself. You could build up the robot model and then be able to do cool things with it almost immediately. And one of the key things that MoveIt provided was what's known as motion planning. Traditionally, robots are programmed by telling them every single waypoint that they need to go through; that's how industrial robots are still programmed. With MoveIt that changed. Now, instead of telling it you need to take this exact path, you just tell it, this is where you are and this is where you need to go, you give it an idea of what's around the robot, and the framework does all that planning for you.
And this is a huge change, because you no longer have to rely on a body of experts or on someone else to do this for you; you can just do it on your own.
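The shift Sachin describes, from hand-specified waypoints to giving the planner just a start, a goal, and a map of the environment, can be illustrated with a toy example. This is a minimal grid-search sketch, not MoveIt's actual approach (MoveIt typically uses sampling-based planners over the robot's joint space); the grid, the obstacle wall, and the function name here are all invented for illustration:

```python
from collections import deque

def plan_path(start, goal, obstacles, width, height):
    """Breadth-first search over a 2D grid: given only a start, a goal,
    and a map of obstacles, return a collision-free path, with no
    hand-specified waypoints required."""
    frontier = deque([start])
    came_from = {start: None}  # parent pointers for path reconstruction
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk parent pointers back to the start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in obstacles and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no collision-free path exists

# A wall with a gap at the top: the planner routes around it on its own.
wall = {(2, y) for y in range(4)}
path = plan_path((0, 0), (4, 0), wall, 5, 5)
```

The point of the abstraction is the interface: the caller supplies where the robot is, where it needs to go, and what is around it, and never enumerates waypoints.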

David Kruse: That’s really interesting. Can you give a use case of how MoveIt would help with that motion planning, versus not using it, and maybe if…

Sachin Chitta: Yeah definitely.

David Kruse: All right, go ahead.

Sachin Chitta: There are lots of examples, but in particular, most robots cannot deal with changes in the environment. So imagine a household robot that’s going around trying to do stuff, and somebody is walking by and putting new things on the table, or moving a piece of furniture, or a chair rolls by; traditional robots would not be able to react to something like that. Now, with frameworks like MoveIt, when you are doing manipulation you can build a map of the environment and continuously update it. So you know where everything is, and you can get your arm to react to those changes so that you will not be colliding with anything there. So whether it’s a household robot or an industrial system where the scene, where what’s around the robot, is constantly changing, you would be able to do this and not have to rely on fixed paths. If you were solely reliant on fixed paths and suddenly something got in the way, in traditional systems there is no way to get around that; the robot would just hit what’s there. But with frameworks like MoveIt, you can now go around it online.

David Kruse: So the MoveIt framework must connect with the vision system, but I imagine the vision system is different on every robot. So I suppose you might have to calibrate it? I guess, how would you set that up, or how does that work?

Sachin Chitta: Typically you abstract the data that you are dealing with, so you can plug in different kinds of vision systems and still be able to use the framework. With frameworks like MoveIt you would typically use 3D information, and that 3D information is represented as a point cloud. So you get that information in regardless of what type of sensor you are using, and then use it to build some kind of representation of what’s around you.
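The sensor-agnostic point-cloud abstraction Sachin mentions can be sketched simply: any sensor that produces depth (stereo, structured light, time-of-flight) can be back-projected into 3D points through a pinhole camera model, after which downstream code no longer cares which sensor it was. A minimal sketch, with the camera intrinsics and the tiny depth image purely hypothetical:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (rows of depths in metres) into a list
    of (x, y, z) points using a pinhole camera model with focal lengths
    (fx, fy) and principal point (cx, cy)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z is None or z <= 0:
                continue  # no valid sensor return at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Hypothetical intrinsics; a 4x4 depth image of a flat plane 1 m away.
cloud = depth_to_point_cloud([[1.0] * 4] * 4, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

Everything downstream (occupancy maps, collision checking) consumes the point list and is indifferent to the sensor that produced the depth.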

David Kruse: Got you, okay. All right, well, we could keep talking about that all day. Let’s talk about Kinema. But before we do, I’m curious, how did you originally get interested in robotics, way back in the day?

Sachin Chitta: My dad was actually a robotics professor in India.

David Kruse: Oh! Cool.

Sachin Chitta: So it kind of happened almost naturally, you could say. I was always interested in things that move, things that affect the environment and can actually do tasks, and that’s how my interest in robotics came about.

David Kruse: Did your dad expose you to robotics quite a bit growing up? Did you have robots at your house, or…

Sachin Chitta: Yeah, we did have robots. We had robotic kits in the house, and I built some robots as well. And of course, in my dad’s lab there were always robot arms, including the PUMA, which was one of the early successful industrial robots. You don’t see too many of them anymore, but that was one of the robots he had in his lab. So there was a lot of exposure to robotic systems pretty early for me.

David Kruse: Yeah, what a fun upbringing; that’s cool. So let’s talk about Kinema a little bit. I tried to give a brief overview, but can you give us an overview and a little background – who is on your team, and are you working with any partners? Yeah, that would be great, just a little overview.

Sachin Chitta: Yeah, definitely. So at Kinema Systems we are developing software solutions, and our initial target market is robotic picking. It’s one of the most common tasks that robots already do in industry, but there are a lot of cases where the current technology is not able to deal with the variety and the lack of structure. Particularly with logistics and e-commerce, you no longer have just uniform objects that you can pick. Everything is different in a pallet or a bin or any kind of storage. So doing robotic picking in those kinds of scenarios was not possible, or has not been done yet, and that’s basically the biggest differentiator we bring to the table: dealing with these situations where there is a lack of structure or there is variety, so that we can address the actual problems facing logistics, warehouses, shipping, and even manufacturing as well. Our first product is called Kinema Pick and is targeted towards depalletizing. Depalletizing is where you have a bunch of boxes on a pallet, and typically they are removed, or depalletized, onto a conveyor or a set of conveyors. If the pallet were uniform, as in all the boxes were always the same, this would not be too hard a task, but there are lots of mixed-SKU pallets, or single-SKU pallets where you don’t know which box is on the pallet, and in situations like these we think Kinema Pick is one of the few, or maybe the only, solutions that offers an actual way to tackle depalletizing.

David Kruse: And the videos you have of the robot depalletizing are pretty cool. I’d like to post those with the podcast. That definitely looks like the future.

Sachin Chitta: Definitely, yeah, and to create those – a lot of people do look at depalletizing, but they look at very standard, nice boxes. We are actually in an industrial area here, so we just went around. We’ve got distributors in our area who get a lot of boxes, so we went to them and actually bought boxes that were fairly battered, so not at all an easy problem to handle. The key was that we wanted to make sure we were testing with real stuff from the start, and so if you look at a lot of the boxes there, they are in pretty bad shape, and still our system is able to robustly pick them.

David Kruse: Definitely, and who is on your team right now?

Sachin Chitta: So I started Kinema with a colleague of mine, Dave Hershberger. We’ve worked together since the Willow Garage days, and Dave worked on what’s often called the most popular component of the Robot Operating System; it’s called RViz. He’s a fantastic engineer and we’ve been working together on this for a while now. So he is my co-founder. And then we also have fantastic people for vision and 3D perception, and we’ve hired automation people who have experience deploying these systems on actual factory floors.

David Kruse: Interesting. And so how did you create the first Kinema Pick? I mean, I guess that’s kind of a broad question, but had you raised money at that point? Did you create a prototype in order to raise money, or how did that whole process work to get the first one going?

Sachin Chitta: We have raised money from Silicon Valley, and we used that initial funding to build an initial prototype and then move up to a larger robot and build a complete product. When we started, we actually looked at a bunch of different industrial tasks and talked extensively to customers, asking them, what are the tasks where you have the biggest need but the solutions are just not there? We got a fairly large list, five to 10 tasks from each of these customers, and picking came out at the top quite a bit. Once we decided we were going to do something in picking, we again slowed down and talked to customers and said, there are lots of flavors of tasks you can do in picking; which ones are the most important? We identified those, and that’s how we came to depalletizing. And once we had done that, we built up the product, again talking to customers about how the layout would look and what the typical features of the product needed to be, and used that information to create the product.

David Kruse: Got you, and how does the Kinema Pick work? Let’s say you bring it into a manufacturing plant or distribution center; can you just describe what type of training is required, and how do you get it set up and going?

Sachin Chitta: Yeah, Kinema Pick is what we call self-calibrating and self-training. One thing that was high on our feature list right from the beginning was to make it easy to deploy. A lot of robotic solutions really require a lot of training, a lot of hand-holding, and we wanted to create something that would be much, much easier for end users or even system integrators to deploy. So that’s why we built in this capability to self-calibrate and self-train. The way the process starts is you choose the robot and you say, okay, this is where I am going to install the work cell. We’ve got a GUI in which the end users can build up a model of the work cell, so that the robot knows what’s around it. Then there is a really simple training step where you tell it where it’s picking from, so you tell it approximately where the pallet is, and you tell it where it’s dropping off, so you tell it where the conveyor is. And there is one more step, which is calibrating the camera to the robot, and that happens automatically. You put up a target, you press a button, and it calibrates itself. After that, quite literally, you put a pallet in front of it, you just say go, and the robot starts picking.
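The push-button camera-to-robot calibration step amounts to solving for the transform between the camera frame and the robot frame from observations of a known target. Kinema's actual method isn't described in the interview; as an illustration of the underlying math, here is a least-squares rigid fit in 2D (a planar stand-in for full 6-DoF hand-eye calibration), with the function name and the synthetic check points invented for this sketch:

```python
import math

def fit_rigid_2d(cam_pts, robot_pts):
    """Least-squares rigid transform (rotation theta, translation t)
    mapping points observed in the camera frame onto the same points
    measured in the robot frame: collect correspondences from a target,
    then solve in closed form."""
    n = len(cam_pts)
    cxm = sum(p[0] for p in cam_pts) / n    # centroid of camera points
    cym = sum(p[1] for p in cam_pts) / n
    rxm = sum(p[0] for p in robot_pts) / n  # centroid of robot points
    rym = sum(p[1] for p in robot_pts) / n
    # Cross-covariance of the centred point sets.
    sxx = sxy = syx = syy = 0.0
    for (px, py), (qx, qy) in zip(cam_pts, robot_pts):
        ax, ay = px - cxm, py - cym
        bx, by = qx - rxm, qy - rym
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    tx = rxm - (c * cxm - s * cym)
    ty = rym - (s * cxm + c * cym)
    return theta, tx, ty  # rotate by theta, then translate by (tx, ty)

# Synthetic check: a known transform should be recovered from exact data.
true_theta, true_t = 0.5, (1.0, 2.0)
cam = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
c0, s0 = math.cos(true_theta), math.sin(true_theta)
robot = [(c0 * x - s0 * y + true_t[0], s0 * x + c0 * y + true_t[1])
         for x, y in cam]
theta, tx, ty = fit_rigid_2d(cam, robot)
```

Once the transform is known, every box detected in camera coordinates can be converted into robot coordinates for picking, which is why the step only needs to run once per install.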

David Kruse: Wow! That’s amazing. And do you have these in production now? Are they in manufacturing plants right now, or do you have some…?

Sachin Chitta: We do have them deployed now, and we have a lot of traction and interest, so we are starting to look at more production installs over the course of this year and early next year, and we are also doing it internationally.

David Kruse: Interesting. And let’s see, can you tell me – oh! Yeah, I was curious about safety. In the video it looks like you can kind of walk around this robot; it’s not necessarily caged off. How do you handle the safety issue?

Sachin Chitta: So these robots are fairly big and fast, and usually you would put up an actual safety fence. Just for the video we don’t have it, but when you actually do a deployment you would have to put a safety fence around the robot.

David Kruse: That’s good.

Sachin Chitta: Especially because they move fast, and they need to move fast because our end customers want good, fast cycle times so they can get product out of or into the warehouse as quickly as possible.

David Kruse: How fast can it depalletize compared to a human? Do you have those numbers if they are public?

Sachin Chitta: Yeah. In general these systems have to be fairly fast, and the numbers we’ve gotten requests for can range anywhere between four and about eight or even 10 seconds per box, which is pretty fast, and that tends to also approximate how fast a person can do it. The heavier the boxes, of course, the more time it takes, and the more physically intensive it is for a person. Whereas for the robot it’s not very difficult to lift heavy weights, and that’s where we think these robots will make a huge difference: they’d help protect workers at our end customers from having to pick some of these large, heavy boxes that they have to handle right now.
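As a quick sanity check on those numbers, the quoted per-box cycle times translate directly into hourly throughput:

```python
def boxes_per_hour(seconds_per_box):
    """Convert a per-box cycle time into hourly throughput."""
    return 3600.0 / seconds_per_box

# The quoted 4-10 second cycle times bracket the achievable rate:
fast = boxes_per_hour(4)    # 900 boxes per hour
slow = boxes_per_hour(10)   # 360 boxes per hour
```

So even the slow end of the range keeps a conveyor fed at several boxes a minute, sustained for a full shift without fatigue.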

David Kruse: No, that makes sense. And can you tell us how the Kinema Pick works? You don’t have to get into the very narrow details, but what type of sensors are on there, and how does it take that in? I imagine you are using ROS and probably MoveIt, but can you give us a feel for how it works?

Sachin Chitta: We’ve actually been moving away from ROS and MoveIt. We started doing that consciously, and at this point we have a fairly large amount of our own IP in the system that’s been developed at Kinema, and we use a combination of 3D and 2D sensing to find the boxes. Overall it’s a fairly simple structure, and we wanted to keep it that way. It’s all off-the-shelf hardware. We don’t make any special hardware of our own. We are essentially putting stuff together, and we deliberately did it that way to keep the system simpler. The more custom hardware you have to build, the more complex the system can get, and our focus has been on the software – the perception and the motion planning – so that we can create this differentiated solution for the end customer.

David Kruse: Got you, and did you move away from ROS and MoveIt because of the IP issue, is that it, or do you want more control, or…

Sachin Chitta: No, I think it’s just a natural thing that happens when you start developing a product. ROS and MoveIt have been great platforms for research, but once you start focusing on industrial-grade products especially, you do have to take more control of individual components; you want to improve on what’s already out there, and the framework or the architecture might not be the right one for the particular things you are now looking at. All of those contributed to focusing on our problem at hand and trying to design the best architecture and the best system for it. ROS and MoveIt are great platforms for more general robotic systems, but once you have the need to optimize for the particular set of tasks you are doing, I think most people will find themselves naturally moving away from those architectures and finding a lot of new things that they then have to implement to create these deployable systems.

David Kruse: Got you, okay. And what’s your vision over the next five years? How do you want to improve the Kinema Pick, or are you working on other robots around manipulation that you want to deploy eventually?

Sachin Chitta: Our focus is going to be on manipulation, and it’s a huge market. It’s hard to even try and quantify it; it’s beyond a $1 billion opportunity, because if you look at commerce and trade, it’s basically things in motion. Things moving from one country to another, one city to another, warehouse to warehouse, and then warehouse to store – there is a huge opportunity for robotics and manipulation to make a difference. And it’s underserved right now. There are not many companies; there are all the traditional robotics companies and the traditional vendors, but there is a bunch of tasks where you need more interaction with the environment, more feedback from the environment, which you can get from a lot of these new 3D sensors, and you need the smarts to be able to take advantage of that information to do tasks that were not possible before. Manipulation especially is not an easy problem, because you are literally affecting the world. You are contacting the world, and that makes it very different from other problems. And with that comes the opportunity, which is just huge. In robotic picking alone, like I mentioned, there are different flavors and different kinds of things you’d have to do, from depalletizing, palletizing, picking individual objects, and lots of things beyond that. So just picking by itself we think will keep us busy for quite a while, and our vision is to be the robotic picking solution in the world. So any time people come up against a problem where they need to pick something and they can’t do it with any traditional technique, we want them to think, okay, Kinema Pick is the solution for this.

David Kruse: Interesting, that’s a good future. Because didn’t Amazon have a picking challenge? It’s a lot different than what you guys do with the pallets, but…

Sachin Chitta: Yeah, in the first challenge I was actually one of the co-organizers of the Amazon Picking Challenge. It was at a big robotics conference, and they wanted to look at how robots could pick from, essentially, Kiva shelves. It’s an incredibly complex problem, and there are lots of things that are very hard for robots and pretty easy for people to do. So it’s going to take a fair amount of work to come up with solutions for tasks like that. People are just incredibly creative, incredibly flexible, and robots are not yet. It’s going to take a lot of work to be able to automate tasks like that.

David Kruse: Got you, interesting, okay. Not surprised you were a part of that, the Amazon Picking Challenge. And so we are almost out of time here, and the last question I have for you is, I was curious how you continue to learn and find new ideas. Are you talking to your team, are you talking to people in the industry, are you reading journals or papers? What do you do to stay on the cutting edge as much as possible?

Sachin Chitta: We keep in touch with the community. A lot of us have deep roots in the research and academic community. We sponsor things at conferences, so we go to conferences, and that gives us an opportunity to talk to students who have the newest ideas, or to faculty who have the newest ideas. We go to trade shows, and that gives us an opportunity to see what’s newest on the application side. And yes, we do read papers and journals just to keep in touch with what’s going on and what’s the latest coming out; in today’s world it’s fairly easy to gain all that knowledge on the internet. So we make an effort to stay as up to date as we can. We look at the newest tools coming out, whether it’s for machine learning or motion planning or 3D perception; we’ll keep an eye on all that, just by making sure we stay connected to the community.

David Kruse: Interesting. I think that’s a good way to end this podcast and Sachin, I definitely appreciate your time and hearing about your experience and your thoughts on the future of robotics. You have quite a past, so I know I learned a lot and I’m sure the rest of the audience did too.

Sachin Chitta: Thank you very much for having me. This was very enjoyable.

David Kruse: Definitely, and thanks to everyone for listening to another episode of Flyover Labs. As always, I appreciate it and we’ll see you next time. Thanks Sachin, thanks everyone. Bye.

Sachin Chitta: Thank you.