
Hacking a vacuum to build a 3D model of our office

Halvor Bø · 09.04.2022

One of the biggest developments in computer vision over the last few years is the move to the edge. We decided to run a hackathon to experiment with techniques for generating 3D models from edge data.

Introducing the Marc Zuc, our Roomba. It serves two purposes. The first is to clean our office, because nobody is going to do it otherwise. The second is to serve as the platform for our experiments in robotics and computer vision.

To begin with, we had the vacuum clean the office. We wanted to visualize the office through the data we had from the Roomba. The only problem was that we didn't have a camera. "How can you map out a space without cameras?" is probably what you're wondering.

Technique for mapping out the office

We started by connecting to the Roomba and figuring out its API. In the beginning we ran into some technical problems with the API; more specifically, our poor Roomba ran into things and got stuck.

We wrote our own library to talk to the Roomba. It's available here for anyone who's interested.
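For context, Wi-Fi-enabled Roombas generally expose a local MQTT-over-TLS interface on port 8883, with commands published as JSON messages. The snippet below is a minimal sketch of that handshake using paho-mqtt; the host, BLID, and password are placeholders, and the exact topics and payloads can vary by model, so treat this as illustrative rather than a description of our library's internals.

```python
import json
import ssl
import time

import paho.mqtt.client as mqtt

# Placeholders: the robot's LAN address and the BLID/password
# credential pair obtained during pairing.
HOST, BLID, PASSWORD = "192.168.1.42", "31178508", "robot-password"

client = mqtt.Client(client_id=BLID, protocol=mqtt.MQTTv311)  # paho-mqtt 1.x API
client.username_pw_set(BLID, PASSWORD)
client.tls_set(cert_reqs=ssl.CERT_NONE)  # the robot uses a self-signed cert
client.tls_insecure_set(True)
client.connect(HOST, 8883)
client.loop_start()

# Commands are small JSON messages published on the "cmd" topic.
client.publish("cmd", json.dumps(
    {"command": "start", "time": int(time.time()), "initiator": "localApp"}))
```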

After that we started running the Roomba around the office.

The main problem at this point was that the Roomba was not doing exactly what we told it to. Every time its actual motion diverged from the commanded motion, the data it produced ended up in a different reference frame.

Visualization of the robot changing direction and changing reference frame
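To make that concrete: merging the runs boils down to dead-reckoning a pose from the odometry and then applying a 2D rigid transform to bring each run into a shared frame. Below is a minimal sketch of both steps; the (distance, angle) delta format is an assumption about what the odometry looks like, not the exact shape of the Roomba's data.

```python
import math

def integrate_odometry(deltas, pose=(0.0, 0.0, 0.0)):
    """Dead-reckon a global (x, y, heading) path from per-step
    (distance, angle) odometry deltas reported by the robot."""
    x, y, theta = pose
    path = [(x, y)]
    for distance, angle in deltas:
        theta += angle                   # radians turned this step
        x += distance * math.cos(theta)
        y += distance * math.sin(theta)
        path.append((x, y))
    return path

def to_global_frame(points, origin, heading):
    """Rotate and translate one run's points from its local frame
    into the shared office frame, given where that run started."""
    ox, oy = origin
    c, s = math.cos(heading), math.sin(heading)
    return [(ox + c * px - s * py, oy + s * px + c * py)
            for px, py in points]
```

With a transform like this, two runs that started at different positions and headings can be stitched into the same office map.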

So we reset the robot and started merging the data together. What we had at this point was a map of the office that was more or less correct.

The next thing we looked at was how to map out the office in a more visual way: generating an actual 3D model. We played around with a photogrammetry solution. For those who don't know, photogrammetry is a technique that stitches pictures together by finding the same spots across them and automatically reconstructs a 3D model, without knowing where the camera was. We had mixed results.

Broken 3D models from the photogrammetry
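The step where photogrammetry tends to break down indoors is feature matching: flat walls and uniform floors give the matcher too few distinctive points to latch onto. Here is a minimal sketch of that matching step using OpenCV's ORB detector; the filenames are placeholders.

```python
import cv2

# Two overlapping photos from a run (placeholder filenames).
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two photos; too few good matches on
# textureless office walls is what tends to break the reconstruction.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate matches")
```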


The idea here would be to put a camera on the vacuum, have it drive around, and generate a 3D model afterwards. The next thing we tried was putting the data from our previous runs into Unity. We were able to generate the 3D model below.
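One simple way to get a 2D map like ours into Unity is to extrude the wall outline into vertical quads and save it as a Wavefront OBJ, which Unity imports natively. This is a sketch of that approach, assuming the merged map has been reduced to an ordered list of (x, y) wall points, not the exact pipeline we used.

```python
def write_obj(outline, path="office.obj", height=2.5):
    """Extrude an ordered list of (x, y) wall points into vertical
    quads and write them as a Wavefront OBJ (y-up, 1-indexed faces).
    The 2.5 m wall height is an assumed ceiling height."""
    with open(path, "w") as f:
        n = 0  # vertices written so far
        for (x0, y0), (x1, y1) in zip(outline, outline[1:]):
            # Four corners of one wall segment, floor to ceiling.
            f.write(f"v {x0} 0 {y0}\n")
            f.write(f"v {x1} 0 {y1}\n")
            f.write(f"v {x1} {height} {y1}\n")
            f.write(f"v {x0} {height} {y0}\n")
            f.write(f"f {n + 1} {n + 2} {n + 3} {n + 4}\n")
            n += 4
```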

So why did we do all this?

We are working on a tech demo for mapping out 3D space on the edge, for example on a phone. We want to create a solution that lets a phone understand what's around it in a way that was not possible before.