How Starship Delivery Robots Know Where They Are Going

(plus how to make your own 1:8 scale paper robot model)

author: Joan Lääne, Mapping Specialist, Starship Technologies

Every September, when the new school year begins, many children face the unknown. Not only a new school and the new people they will meet, but also the journey they have to make every day. They need to learn and remember how to navigate the world on their own on the way to and from the classroom. A parent can make this easier by accompanying their child on the first few trips back and forth, usually pointing out interesting landmarks along the way, such as tall or brightly colored buildings or signs along the path. Eventually, the trip to school becomes trivial for the child to make and remember: the child has formed a mental map of the world and how to move through it.

Starship Technologies provides a convenient last-mile delivery service with a fleet of sidewalk delivery robots navigating the world on a daily basis. Our robots have made over 100,000 deliveries. To get from point A to point B, a robot needs to plan a route, and for that it needs some kind of map. Although many publicly available mapping systems such as Google Maps and OpenStreetMap already exist, they have the limitation of being designed with car navigation in mind, focusing mainly on road mapping. Because our delivery robots travel on sidewalks, they need a precise map of where it is safe to drive and where to cross the street, just as a child needs a mental map of how to get to school safely and on time every day. So how is this map generated?

The first step in creating a delivery robot map is to survey the area of interest and generate a preliminary 2D map on top of satellite imagery, in the form of simple interconnected lines representing sidewalks (green), crossings (red) and approaches (purple), as shown in the figure below.

The system treats this map as a node graph, which can be used to generate a route from point A to point B. The system can identify the shortest and safest path for a robot, as well as calculate the distance and time it would take to drive the route. The advantage of this process is that everything can be done remotely, before the robots physically arrive at the site.
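To make the node-graph idea concrete, here is a minimal sketch of routing over such a map with Dijkstra's algorithm. The edge types, robot speeds and node names are illustrative assumptions, not Starship's actual data model; the point is only that once sidewalks and crossings are edges in a weighted graph, the fastest safe route falls out of a standard shortest-path search.

```python
import heapq

# Assumed illustrative speeds: robots slow down at street crossings.
SPEED_MPS = {"sidewalk": 1.5, "crossing": 0.8}

def shortest_route(edges, start, goal):
    """Dijkstra over edges given as (node_a, node_b, length_m, kind)."""
    graph = {}
    for a, b, length, kind in edges:
        cost = length / SPEED_MPS[kind]            # travel time in seconds
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))  # sidewalks are bidirectional
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (time + cost, nxt, path + [nxt]))
    return None

# Hypothetical map fragment: a short path with one crossing vs. a long detour.
edges = [
    ("A", "B", 120, "sidewalk"),
    ("B", "C", 15, "crossing"),
    ("C", "D", 90, "sidewalk"),
    ("A", "D", 400, "sidewalk"),
]
time_s, path = shortest_route(edges, "A", "D")
# fastest route: A -> B -> C -> D (158.75 s), beating the 400 m detour
```

In a real system each edge would also carry safety attributes (crossing type, slope, curb cuts) folded into the cost, but the graph search itself stays the same.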

The next step involves showing the robots what the world looks like. Similar to the parent-child analogy, robots need a little hand-holding when they first explore an area. As the robot drives, the cameras and the multitude of sensors on the robot collect data about the world around it. This includes thousands of lines that come from detecting the edges of distinctive features, such as buildings, streetlight poles and rooflines. The server can then build an offline 3D map of the world from these lines, which the robot can use from then on. Like the child, the robot now has a model of the world with guiding landmarks and can understand where it is at any given time.
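The localization idea can be sketched in miniature. The real system matches thousands of 3D line features; this toy version, under heavy simplifying assumptions, reduces each landmark to a 2D point and grid-searches candidate poses, picking the one whose transformed observations land closest to the map landmarks. All coordinates and the search resolution are made up for illustration.

```python
import math

# Hypothetical map landmarks in world coordinates (e.g. pole, corner, roof edge).
MAP_LANDMARKS = [(0.0, 5.0), (4.0, 5.0), (2.0, 9.0)]

def transform(pt, pose):
    """Move a point from the robot's frame into the world frame for a candidate pose."""
    x, y, theta = pose
    px, py = pt
    return (x + px * math.cos(theta) - py * math.sin(theta),
            y + px * math.sin(theta) + py * math.cos(theta))

def score(observations, pose):
    """Sum of distances from each transformed observation to its nearest map landmark."""
    total = 0.0
    for obs in observations:
        wx, wy = transform(obs, pose)
        total += min(math.hypot(wx - mx, wy - my) for mx, my in MAP_LANDMARKS)
    return total

def localize(observations):
    """Brute-force search over a coarse grid of (x, y, heading) candidates."""
    candidates = [(x * 0.5, y * 0.5, t * math.pi / 8)
                  for x in range(21) for y in range(21) for t in range(16)]
    return min(candidates, key=lambda p: score(observations, p))

# A robot at (2.0, 3.0) facing +x would see the three landmarks at these
# positions in its own frame:
observations = [(-2.0, 2.0), (2.0, 2.0), (0.0, 6.0)]
pose = localize(observations)  # recovers (2.0, 3.0, 0.0)
```

A production localizer would use a far better optimizer than a grid search and match oriented 3D lines rather than points, but the principle of scoring candidate poses against a prior map is the same.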

Since our robots have to cover different areas at the same time to complete all their deliveries, maps built by different robots must be stitched together into a unified 3D map of a given area. The unified map is created piece by piece, processing different parts of the new area until the result looks like a huge completed puzzle. The server compiles this map from the line data previously collected by the robots. For example, if the same roof was detected by two robots, the software works out how that piece connects to the rest of the map. Each colored line in the image below represents one piece of mapping that has been added to the map.
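The stitching step can be illustrated with a toy sketch. Here, under the simplifying assumption that each map piece is a set of named 2D landmarks in its robot's local frame, a shared landmark (the "same roof seen by two robots") gives the offset that shifts one piece into the other's frame. The landmark names and coordinates are invented; real alignment works on full 3D line matches, not named points.

```python
def stitch(piece_a, piece_b, matches):
    """Merge piece_b into piece_a's frame.

    piece_*: dict of landmark name -> (x, y) in that robot's local frame.
    matches: landmark names observed in both pieces.
    """
    # The average offset between matched landmarks estimates the translation
    # that carries piece_b's frame onto piece_a's frame.
    dx = sum(piece_a[m][0] - piece_b[m][0] for m in matches) / len(matches)
    dy = sum(piece_a[m][1] - piece_b[m][1] for m in matches) / len(matches)
    merged = dict(piece_a)
    for name, (x, y) in piece_b.items():
        merged.setdefault(name, (x + dx, y + dy))  # skip duplicates already in A
    return merged

# Two hypothetical pieces sharing one roof observation:
piece_a = {"roof_1": (10.0, 20.0), "pole_1": (12.0, 25.0)}
piece_b = {"roof_1": (0.0, 0.0), "sign_7": (3.0, 4.0)}  # same roof, local origin
merged = stitch(piece_a, piece_b, ["roof_1"])
# sign_7 lands at (13.0, 24.0) in the unified frame
```

With rotation and scale unknown as well, the same idea generalizes to estimating a rigid transform from several matched features, but one shared landmark is enough to show how pieces click together like puzzle parts.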

The last step of the mapping process, before robots can drive fully autonomously, is calculating exactly where the sidewalk is and how wide it is. This is computed by processing the camera images the robot took while exploring the area, combined with the pre-created 2D map based on satellite imagery.

During this process, more detail is added to the map to precisely define the safe zones in which robots can drive.

Of course, the world around us is not static. There are daily and seasonal changes in the landscape, construction and renovation, which change the way the world looks. How does this affect the areas mapped for robots? In practice, the robot software handles small to medium changes in a mapped area quite well. The 3D models are robust and contain such a vast amount of data that a tree cut down here or a building knocked down there usually does not challenge the robot's ability to locate itself or use the map. In addition, as the robot drives each day, it keeps collecting more data that is used to update the 3D maps over time. But if an area is completely reshaped, or new sidewalks are built, the solution is simple: the map is updated using new data collected by the robot. Then other robots can drive autonomously in the same area again as if nothing had happened. Keeping maps up to date is key to robots driving safely and autonomously.

As you can no doubt tell by now, I really enjoy playing with 3-dimensional space concepts. Ever since I played the first 3D first-person shooter computer game (Wolfenstein 3D), 3D worlds in the digital domain have been an interest of mine. I wanted to create my own 3D worlds for computer games, so I found ways to edit existing game levels. Later, I tried 3D computer modeling, which fascinated me. With the popularization and accessibility of 3D printers, I also began to physically print models. But long before that, during school summer vacations, I loved making paper models of different buildings and vehicles. It was a simple and cheap way to make something with my own hands, and it was fascinating to see how a 2D layout on a piece of paper, with a little cutting, bending and gluing, could be turned into a 3D model. In a way, creating a paper model, or "unfolding", is the opposite of mapping: it creates a 2D representation of the surface of a 3D object.

Since I have a passion for paper models, I decided to make one of our Starship delivery robot. The goal of creating this model is to let others who share the same passion build their own version of our delivery robot. Making paper models is a fun challenge, and the finished model makes a beautiful decorative item. As with generating 3D maps for a robot, creating a paper model requires precision, accuracy, and spatial thinking about how all the parts fit together. Also a little patience.

I have put together instructions for you to create your own paper delivery robot, and I would love to see your work. Have fun and good luck building your own paper delivery robot model!

Please post a picture of your robot on Instagram and tag @StarshipRobots so I can find it!

Please find the model and instructions for the Starship paper delivery robot here

© Starship Technologies. The design of the Starship® delivery robot and aspects of the described technologies are proprietary and protected by copyright and other intellectual property laws.
