Dev Update: Preview of NavAbility Mapper, a Construction Site Example

NavAbility Mapper is making it easier to build, manage, and update large-scale maps in autonomy. This post will briefly showcase the building of a 3D map using an open construction-site dataset.

A key feature of NavAbility Mapper is that it strongly decouples the map building process so you can focus on the task at hand. For example, with NavAbility Mapper your team can efficiently split up the following map-building tasks:

    • Importing data into the cloud (e.g. ROS bags)
    • Inspecting and selecting data of importance
    • Map tuning using both parametric and non-Gaussian SLAM solvers
    • Resolving conflicts in areas with contradictory data
    • Manually refining results with human input in ambiguous areas where automation needs the human touch
    • Exporting the map to your robots in various formats

In this post we’ll give a preview of a concrete case study we’re building in the construction space.

Sneak Preview: Mapping a Construction Site

Side-by-side view of the 3D world map (left) and first-person camera data (right). Notice how information is aggregated and improved in the 3D world map as more data is ingested.

Challenges in Mapping

Building and maintaining 3D spatial maps from fragmented sensor data in a construction environment presents a number of challenges, for example:

    • There is no single sensor that solves all problems, and the key to robust mapping is flexibly solving data fusion problems from multiple (heterogeneous) sensors.
    • Measurement data itself is not perfect, and each measurement has unique errors or ambiguities that are hard to model, predict, or reduce to a Gaussian-only error model. This is especially true when incorporating prior information (CAD models) or resolving contradictions in dynamic environments.
    • Verifying map accuracy in a dynamic environment (i.e. construction) is a delicate balance between automation and user input, and requires continuous validation as new data is added.
      • We jokingly say you’re doing your job correctly in construction only if the map keeps changing.
    • Maps need to be shared – between automation, human stakeholders, and ideally CAD/BIM software – and this requires a rich representation of maps, not just a networked filesystem.
    • Leveraging data collection from mobile equipment (possibly hand-held) provides more opportunities for collaborative robotic systems, but requires significantly more advanced data processing capabilities.

Diving into Mapper

We’re resolving these issues with NavAbility Mapper by building it to be sensor-flexible and suitable for enterprise use.

Flexible Sensor Types

Firstly, let’s take a quick look at some of the specific sensor aspects of NavAbility Mapper. No single sensing modality can do it all, so Mapper is designed from the ground up to combine various sensor types (“apples and oranges”) into a common, apples-to-apples joint inference framework, for example:

    • LIDAR produces semi-dense point clouds, but its cost and size means it is not always available
    • Inertial sensors provide self-contained estimates, but they require complex calibration and tricky data processing considerations
    • Camera imagery is ubiquitous, but it also requires its own calibration and must contend with lighting variations and scene obstructions

In short, no sensor gives you a complete solution. We believe how you merge the sensor data is what makes (or breaks) a solution. NavAbility Mapper is designed to be flexible, incorporating a range of different sensor types out of the box, with the ability to extend it as needed.

In this post, we’ll look at the three that are available in the construction data.

3D LIDAR Scans

LIDAR scans are a popular sensor type for mapping and localization.  One of the key operations is to be able to calibrate and align point clouds, also known as the registration problem.  An example of a LIDAR alignment problem is shown in Figure 1 below.

A key feature of NavAbility Mapper is that it employs multiple methods to align point clouds.  We integrate Gaussian techniques, non-Gaussian techniques, and supervisory human intervention to enable an efficient mapping process. Ideally, everything aligns automatically, but in cases where it doesn’t (the critical cases!) we use novel solvers and judicious human intervention to ensure robust autonomy.

Even in cases of high ambiguity (when the going gets really tough!), the non-Gaussian alignment correlations are used directly as measurements in a factor graph model for further joint processing with other sensor data.

Figure 1: Two point clouds from the Construction dataset before alignment.
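
To make the registration step concrete, here is a minimal sketch of pairwise point cloud alignment using the open-source Open3D library and its ICP routine. Open3D is used purely for illustration and is not the solver inside NavAbility Mapper; the file paths and parameter values are placeholders.

    import numpy as np
    import open3d as o3d

    def align_scans(source_path, target_path):
        """Return a 4x4 transform that maps the source scan onto the target scan."""
        source = o3d.io.read_point_cloud(source_path)  # e.g. "scan_0042.pcd" (placeholder)
        target = o3d.io.read_point_cloud(target_path)

        # Downsample to keep the correspondence search tractable.
        source = source.voxel_down_sample(voxel_size=0.1)
        target = target.voxel_down_sample(voxel_size=0.1)

        # Point-to-point ICP from an identity initial guess; in practice a coarse
        # initial alignment (e.g. from odometry) would be supplied here.
        result = o3d.pipelines.registration.registration_icp(
            source, target,
            max_correspondence_distance=0.5,
            init=np.eye(4),
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        )
        return result.transformation

ICP on its own is a local, unimodal method; it is exactly the ambiguous cases it cannot resolve that the non-Gaussian machinery described above is designed to handle.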

IMU Data

Inertial measurement units – gyros and accelerometers – may or may not be available in a given setup, but when present they provide a valuable data input for fully autonomous data processing.  The figure below shows a short gyro rate measurement segment between keyframes in the Construction dataset.  This data clearly shows a mobile sensor platform rotating aggressively on three axes while collecting data!

NavAbility Mapper fuses this data with other sensors into a unified mapping solution.

Figure 2: A short three-axis rotation rate data segment, as measured by gyroscopes firmly mounted to the measurement platform.
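
As a simple illustration of what happens between keyframes, the sketch below integrates a segment of three-axis gyro rates into a relative rotation using NumPy and SciPy. It is a toy example with synthetic data; a real pipeline would additionally model biases, noise, and accelerometer measurements (for example via IMU preintegration).

    import numpy as np
    from scipy.spatial.transform import Rotation

    def integrate_gyro(timestamps, rates):
        """timestamps: (N,) seconds; rates: (N, 3) body angular rates in rad/s."""
        delta = Rotation.identity()
        for i in range(1, len(timestamps)):
            dt = timestamps[i] - timestamps[i - 1]
            # Zero-order hold on the rate over this interval.
            delta = delta * Rotation.from_rotvec(rates[i - 1] * dt)
        return delta  # relative rotation between the two keyframes

    # Synthetic example: a constant 90 deg/s yaw rate for one second.
    t = np.linspace(0.0, 1.0, 101)
    w = np.tile([0.0, 0.0, np.deg2rad(90.0)], (101, 1))
    print(integrate_gyro(t, w).as_euler("xyz", degrees=True))  # roughly [0, 0, 90]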

Camera Data

Camera imagery is another popular (and ubiquitous) data source for mapping and localization.  While camera data is easy to capture, challenges such as lighting changes, obstructions, and dynamic scenery complicate its use.

Camera data, in combination with other sensors, is a valuable source of information for mapping and localization.  We incorporate camera data into the factor graph, where it can be extracted and used to improve the mapping result.

Stereo or structured light cameras provide reasonable depth estimates through computer vision processing.  In general, camera data processing can be done via brute-force matching, sparse feature extraction, or semi-dense methods.
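
As one concrete (and generic) example of extracting depth from camera data, the sketch below computes a depth map from a rectified stereo pair with OpenCV’s semi-global block matcher. It assumes calibration and rectification have already been done, and the matcher parameters are illustrative only; this is not the camera pipeline inside NavAbility Mapper.

    import cv2
    import numpy as np

    def stereo_depth(left_path, right_path, focal_px, baseline_m):
        left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
        right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,   # must be divisible by 16
            blockSize=5,
        )
        # StereoSGBM returns fixed-point disparities scaled by 16.
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan   # mask invalid matches

        # For a rectified pair: depth = focal_length * baseline / disparity.
        return focal_px * baseline_m / disparity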

More to follow on camera data!

Figure 3: A selection of camera views captured while the platform was in motion during data collection.

NavAbility Mapper for Enterprise Use

Multisensor Calibration

Naturally, the combination of multiple sensors requires calibration of each sensor individually (a.k.a. intrinsics) as well as the inter-sensor transforms (a.k.a. extrinsics).  Often, these calibration parameters are computed through optimization routines, not unlike the underlying mapping or localization problem itself, sometimes referred to as simultaneous localization and mapping (SLAM). 

A feature of NavAbility Mapper is that calibration is treated similarly to localization and mapping, solving both problems at the same time.
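
As a toy illustration of why calibration looks like the mapping problem itself, the sketch below estimates a six-degree-of-freedom extrinsic transform between two sensors from matched 3D points using a generic least-squares solver. The point correspondences and solver choice are assumptions for illustration; Mapper solves calibration jointly inside the factor graph rather than as a separate step like this.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, pts_a, pts_b):
        # params = [rotation vector (3), translation (3)] taking sensor A points into sensor B's frame.
        rot = Rotation.from_rotvec(params[:3])
        return (rot.apply(pts_a) + params[3:] - pts_b).ravel()

    def calibrate_extrinsics(pts_a, pts_b):
        """pts_a, pts_b: (N, 3) arrays of corresponding points seen by the two sensors."""
        sol = least_squares(residuals, np.zeros(6), args=(pts_a, pts_b))
        return Rotation.from_rotvec(sol.x[:3]), sol.x[3:]   # extrinsic rotation and translation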

Gaussian and Non-Gaussian Algorithms

Robust mapping requires more than traditional parametric (Gaussian-only) processing.  NavAbility develops both non-Gaussian and parametric algorithms that operate at both the measurement and joint factor graph inference level for more robust computations.  While non-Gaussian techniques are more computationally intensive, the higher robustness can dramatically improve overall mapping process timelines. 

NavAbility Mapper combines both techniques at the heart of the software (the factor graph) to ensure your map is always stable and reliable in enterprise applications.
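
To make the difference tangible, the short sketch below builds a bimodal belief for a single ambiguous range measurement (for example, a return that could have come from either of two parallel walls) and fits the single Gaussian a parametric solver would be forced to keep. The numbers are invented for illustration only.

    import numpy as np

    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    x = np.linspace(0.0, 10.0, 1000)

    # Two plausible explanations of the same measurement: a mode at 3 m and one at 7 m.
    belief = 0.6 * gaussian(x, 3.0, 0.3) + 0.4 * gaussian(x, 7.0, 0.3)

    # The best single-Gaussian fit matches the mean and variance, but its mean sits
    # between the modes, in a region the true belief says is very unlikely.
    mean = np.trapz(x * belief, x)
    sigma = np.sqrt(np.trapz((x - mean) ** 2 * belief, x))
    print(f"Gaussian approximation: mean = {mean:.2f} m, sigma = {sigma:.2f} m")

A non-Gaussian solver can carry both modes forward until other sensor data disambiguates them.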

Multi-Stakeholder Access to Maps and Privacy

Collecting, ingesting, organizing and then producing maps is only part of the overall mapping problem.  The goal is to produce a digital twin representation of ongoing operations, one that can be used for everything from automation to progress reports. 

In construction, the map is inherently dynamic, must be constantly updated, and must be available to a variety of stakeholders and end-users.  NavAbility understands these stakeholders may be either human or robotic, and we strongly believe in defining a common reference for human+machine collaboration through a shared understanding of the same spatial environment.

NavAbility maps are:

    • Built and persisted in the cloud for easy access
    • Optimized and indexed for efficient access whether by human or machine
    • Secured by state-of-the-art cloud security and user authorization to ensure your data is kept private

Figure 4: Screen capture of a 3D point cloud map from the NavAbility Mapper SLAM solution.

More details will follow in future posts, and we invite you to reach out to NavAbility with questions or interest. Follow us on LinkedIn to keep up to date with new articles on how Mapper can empower customer and end-user products, services, and solutions.

September Update: Announcing NavAbility Mapper and New Features

What do you get when you cross a world-class robotics conference (ICRA2022) with a localization and mapping startup? A brand-new product!

ICRA2022 was a game-changer for NavAbility, one that took almost three months for us to digest. We want to thank the participants for their overwhelming support at our tutorial as well as their invaluable feedback on our product direction. 

We’ve listened – from construction leaders through to agricultural automators – and we have woven your needs into a new product that we will start releasing over the coming months.

Announcing NavAbility Mapper

A key takeaway from the conference is that robotic mapping is an ongoing challenge, one that our software is uniquely positioned to solve. So we’re taking our toolset and designing ways to make your mapping problems simpler, faster, and easier to address. 

NavAbility Mapper is a cloud-based SLAM solver that allows you to build, refine, and update enterprise-scale maps for your automation. We’re excited about building living, breathing maps of your environment that give your robots trustworthy navigation in dynamic spaces. 

At the moment we’re focusing on providing examples in construction automation, warehouse automation, and marine applications, but we’re also looking for users in the Agriculture 4.0 space who want to build the next generation of agricultural robotics.

More information on NavAbility Mapper can be found on our Products page. Follow us on LinkedIn to keep up to date as we release Mapper features!

Mapping a Construction Site with NavAbility Mapper

We’re going to use real-world data to demonstrate how we’re making it easier to build, manage, and update large-scale maps in autonomy. We’ll start with a Hilti Challenge dataset, an open construction-site dataset that can be found at HILTI SLAM CHALLENGE 2022.

In upcoming posts we will be documenting how NavAbility Mapper solves key challenges in producing and maintaining construction site maps.

In the meantime here’s a sneak preview of the raw data and the maps we are producing (visualized using the new FoxGlove integration):

Raw camera and LIDAR data available from the Hilti Challenge dataset
Preliminary 3D map from the NavAbility Mapper SLAM solution (using the new FoxGlove integration to visualize it)

New Features in NavAbility Cloud

In addition to this, we’re adding features that you asked for, including new visualizations and new ways to build and solve your mapping challenges. We’ll dive in and highlight a few of these in later posts.

If you have any questions about these features please feel free to contact us on Slack!

Multiple Solvers

A frequent request was to allow the NavAbility cloud to produce SLAM solver answers using traditional Gaussian solving in addition to the multimodal solutions. This is commonly called parametric SLAM, or unimodal SLAM. 

You can now produce both parametric and multimodal solutions by specifying the type when requesting a SLAM solve. This is available in v0.4.6 of the Python SDK.     
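
Purely as an illustration of the workflow (not the actual SDK API), requesting both solve types for the same session might look something like the pseudo-usage below. The client, method, and parameter names are placeholders we made up for this sketch; please refer to the Python SDK documentation for the real calls.

    # Hypothetical pseudo-usage only: the names below are NOT the real NavAbility SDK API.
    def request_both_solves(client, session_id):
        parametric = client.solve(session_id, solver="parametric")   # Gaussian / unimodal solve
        multimodal = client.solve(session_id, solver="multimodal")   # non-Gaussian solve
        return parametric, multimodal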

Visualization with FoxGlove

Do you want to see your data in all its 3D glory? In addition to the topological graphs and the 2D spatial graphs in the NavAbility App, we’re integrating FoxGlove into the App so you can use the FoxGlove tools to examine your results.

We’ll write a post on how to do this in the coming weeks. 

Big Data in NavAbility Cloud

The marine surface vehicle example highlighted the need to allow users to tightly link their big data (e.g. camera images, radar data, and LIDAR scans) to the factor graph for efficient querying.

We now have endpoints to upload, link, and download big data related to variables in your graph. This allows you to upload all the raw sensor data, solve graphs, and later query it efficiently for additional processing. This is currently available via the NavAbility App and in v0.4.7 of the Julia SDK. Let us know if you want us to prioritize adding it to the Python SDK.

NavAbility App demonstrating the big data available across a user’s robots. This data is indexed by the factor graph and can be efficiently queried to, say, find all images around a specific location.

Announcing the NavAbility Tutorials

We’re excited to announce the NavAbility tutorials, which demonstrate how real-world robot navigation challenges (like false loop-closures and distributing computations) are addressed using the NavAbility platform.

The tutorials are available as online Binder notebooks, short videos, and a GitHub repository that you can pull down if you want to run the examples locally. For the moment they are available in Python and Julia, but we expect to release the JavaScript versions soon!

The tutorials are designed to be:

    • Zero-footprint examples that take you from problem definition to results and analysis in about 15 minutes
    • Easily run in the browser as JupyterHub notebooks, though you’re free to pull the code to your local machine
    • Quickly digested with our accompanying YouTube videos
    • Open to everyone without logins – all tutorials will always be available and can be run at any time using our guest account

Note: The cloud solving is done with a community-level version of our solver, so they may take a few moments to run. Reach out to us either by email or on Slack if you’d prefer to use a high-performance solver.

We’ll continue to grow the library of tutorials in the coming months. If you have any questions or suggestions please reach out to us at info@navability.io. Subscribe to our newsletter to keep up to date with new tutorials!

Your First 10 Steps in the Robotics Journey – a Roadmap

What to expect when you’re expecting a robot?… Buy robot, build solution, take over pizza industry, right?

But robotics is an emerging, constantly changing field. It’s a journey with unique challenges. In this blog post we want to provide our take on the steps in the journey and how to set yourself up for success.

We’ll use this as a roadmap for upcoming blog posts, YouTube videos, and in Jim’s Random Walks livestream, so subscribe to keep up to date.

Steps and challenges

Start: The robotics journey

Whether you’re disrupting an established industry with an exciting automation project, or you’re a hobbyist exploring the latest technology, welcome!

Challenge: There’s a wealth of information, libraries, and academic papers on robotics… Where to start?

Solution: Let’s map it out, and we’ll discuss each step (and pitfalls) in the coming months.

1. Hardware Choice

The first step is generally choosing hardware. This is the device that’s going to solve the physical world problem: anything from a simple cellphone, through a tracked vehicle, to a submersible. Luckily, there are endless hardware options to choose from across a wide price range! More to follow on this topic.

Challenge: You need to pick a hardware platform that will solve a physical world problem.

Solution: Pick a hardware platform that matches your needs, or build it!

2. Sensor Choice

Next you’d like to give your hardware sufficient sensors to solve the problem at hand. There’s a variety of sensors – cameras, LIDARs, RADARs, compasses, and many more – and picking a set that should solve the problem (within a budget, some of this equipment can get pricey!) is the next hurdle you need to overcome.

Challenge: You want to give the robotic equipment (a.k.a. robot) sufficient information to understand its environment. We call it the Sensor Goldilocks problem – not too much, not too little, just the right amount of sensor data.

Solution: You choose which sensors you want to use for your application. This requires some guessing because you don’t yet know what you’re going to need later on. We’ll talk through some options and how you can confidently choose sensors that will carry you through the steps that follow.

3. Integrating Data

Now we switch from hardware to software. You need to bring all your sensor data and your actuation devices into one place so that you can start processing it and building the software logic.

Challenge: You want to integrate your sensors to start understanding your environment. How do you consolidate the raw data from all your sensors, say a LIDAR, a camera, a compass, and a GPS? Choose carefully, because we’ve often made poor choices here and ended up having to write our own device drivers. Pick correctly and integration is a breeze.

Solution: At this point you’re probably deciding whether or not to use ROS (Robot Operating System), which version to use, and what packages it supports out of the box. We’ll talk more about these design considerations in this step.
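
If you do go the ROS route, a minimal ROS 1 (rospy) sketch of consolidating two sensor streams into a single callback with approximate time synchronization is shown below. The topic names are assumptions for illustration; substitute the ones your drivers publish.

    import rospy
    import message_filters
    from sensor_msgs.msg import Image, PointCloud2

    def synced_callback(image_msg, cloud_msg):
        # Both messages arrive together with nearly matching timestamps, ready to be
        # handed to the perception / mapping pipeline as one consolidated sample.
        rospy.loginfo("image %s + cloud %s", image_msg.header.stamp, cloud_msg.header.stamp)

    rospy.init_node("sensor_consolidation")
    image_sub = message_filters.Subscriber("/camera/image_raw", Image)
    cloud_sub = message_filters.Subscriber("/lidar/points", PointCloud2)
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, cloud_sub], queue_size=10, slop=0.05)
    sync.registerCallback(synced_callback)
    rospy.spin()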

4. Building a Map

Before the robot can do truly useful actions, it needs a robust understanding of its environment. This is called perception (a.k.a. SLAM, mapping, spatial awareness, etc.) and it is the critical step where disparate raw data is converted into information. This is a rapidly developing research area (funny story: reproducing a human’s spatial awareness is quite challenging) and allowing you to do it easily, robustly, and in a scalable way is NavAbility’s mission.

Challenge: Converting camera images, LIDAR scans, compass bearing, GPS location, beacons, etc. into one robust, consolidated map of the world. Assume that you’re also going to have imperfect information and will need to design for that – this is true for every scenario we’ve ever worked with.

Solution: Start reading up on SLAM, dig into an existing library, or use our cloud services to get going quickly.
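
To show the shape of the problem, here is a tiny 2D pose-graph example using GTSAM as one example of “an existing library”: poses are variables, odometry and a loop closure are factors, and optimization produces the consolidated estimate. The numbers are made up, and this Gaussian example is only illustrative; NavAbility’s own solver is a different, non-Gaussian-capable factor graph implementation.

    import numpy as np
    import gtsam

    graph = gtsam.NonlinearFactorGraph()
    noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))

    # Anchor the first pose at the origin.
    graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), noise))

    # Odometry: drive 2 m forward, twice.
    graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2, 0, 0), noise))
    graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2, 0, 0), noise))

    # A loop-closure style measurement observing pose 2 directly from pose 0.
    graph.add(gtsam.BetweenFactorPose2(0, 2, gtsam.Pose2(4.1, 0.1, 0), noise))

    # Deliberately poor initial guesses, then optimize.
    initial = gtsam.Values()
    for key, x in enumerate([0.0, 1.8, 4.3]):
        initial.insert(key, gtsam.Pose2(x, 0.2, 0.0))

    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    print(result.atPose2(2))   # consolidated estimate of the final pose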

5. Using Landmarks

Landmarks are identifiable features that contain information and help localize the robot, like a docking station, a tree, or a cup. Converting raw data into landmarks is critical for closing the loop in robotics and answers challenges like: “I’ve seen this before, so I must be here.” Turning raw data (like camera images) into robust landmarks and relating those landmarks to your current position is an important part of robotics.

Challenge: How do you convert raw sensor data into robust information so you can identify landmarks (known information, or objectives, in a map)? A great use-case of this is finding a docking station in a room.

Solution: There are great libraries that help convert sensor data into information, and we’ll discuss these in regular video posts. For example, see the YouTube video on using AprilTag fiducial markers to convert raw camera data into real-world landmarks; a minimal detection sketch follows below.
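
A minimal sketch of that AprilTag example, using the open-source pupil_apriltags detector together with OpenCV, is shown below. The image path, camera intrinsics, and tag size are placeholders; use your own calibrated values.

    import cv2
    from pupil_apriltags import Detector

    detector = Detector(families="tag36h11")

    frame = cv2.imread("frame.png")                  # placeholder image path
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    detections = detector.detect(
        gray,
        estimate_tag_pose=True,
        camera_params=(600.0, 600.0, 320.0, 240.0),  # fx, fy, cx, cy (placeholders)
        tag_size=0.16,                               # tag edge length in metres
    )

    for det in detections:
        # pose_R / pose_t give the tag pose in the camera frame: a ready-made landmark.
        print(det.tag_id, det.pose_t.ravel())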

From research to product!

Great! Once you have proven out the idea, you need to take it to the next level.

These steps might occur in any order, but we’ve documented the journey in the sequence in which we find users like to implement them.

6. Adding Memory

Once you flip the off switch, you don’t want to lose your data. On Monday I turned the robot off. On Tuesday I have to start over. I want to reuse yesterday’s information. Persistence is key to real-world robotics, but it’s quite a challenge because saving logs won’t cut it outside of a lab.

Challenge: How do you transmit, save, query, and visualize your robot’s data over time? Furthermore, how do you give it yesterday’s map to use as prior information for today?

Solution: Integrate a persistence layer that saves and indexes your robot’s data, both temporally as well as spatially. You build multiple data sessions and use yesterday’s information as prior data to improve today’s navigation.
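
To make the persistence idea concrete, here is a minimal sketch that stores timestamped poses in SQLite with simple temporal and spatial indexes, so a later run can query yesterday’s session as prior data. It is only an illustration of the concept; a production system (or the NavAbility cloud) would use a proper spatial database or cloud store rather than a local file.

    import sqlite3

    db = sqlite3.connect("robot_memory.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS poses (
            session TEXT,
            stamp   REAL,   -- unix time, seconds
            x REAL, y REAL, theta REAL
        )
    """)
    db.execute("CREATE INDEX IF NOT EXISTS idx_time  ON poses (session, stamp)")
    db.execute("CREATE INDEX IF NOT EXISTS idx_space ON poses (x, y)")

    # Save today's estimates as they are produced...
    db.execute("INSERT INTO poses VALUES (?, ?, ?, ?, ?)",
               ("2022-09-05", 1662379200.0, 12.3, 4.5, 0.7))
    db.commit()

    # ...and on the next run, pull yesterday's poses near a region of interest as prior data.
    prior = db.execute(
        "SELECT x, y, theta FROM poses"
        " WHERE session = ? AND x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
        ("2022-09-05", 10.0, 15.0, 2.0, 7.0),
    ).fetchall()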

7. Using Prior Data

Prior data was mentioned in persistence, but what if I want to include blueprints, CAD models, or known locations? This is invaluable in maps where you’re doing construction, navigating a congested harbor, or finding a box in a warehouse. Luckily, you already have a persistence layer, so the challenge is to represent this information as dynamic landmarks.

Challenge: How do I convert prior information (blueprints, CAD models, Google Maps, known AprilTag positions) to landmarks so that I can use it for navigation?

Solution: Design a data integration layer between your prior data (e.g. CAD model) and your map so that the prior data becomes persisted memory as dynamic landmarks.

8. Cooperating Robots

Multiple robots can share a map so that they can operate together. This is the vision: coordinated, cooperating robots solving real-world problems!

Challenge: You have many robots operating and want them to share information and coordinate operations. How do you share information, build a common map, and coordinate actions?

Solution: You consolidate each robot’s sensor data into a global map and stream information to each robot from the global map. Each robot then has a small local map and a much larger global map in the cloud! We live, eat, and dream about this, so we’ll discuss this in many forthcoming posts.

9. Many Robots, Many Maps

As you add new environments (new maps) your data grows exponentially. Managing this data and using it between robots becomes an enterprise challenge.

Challenge: You have many robots operating in a variety of environments. How do you store, query, and leverage all of that information?

Solution: Each environment becomes a shared global map that grows as robots explore and interact with it. Environments become living global memory for any robot that interacts with them.

10. Onward Robotics Journey!

To #INF and beyond!

That’s the first 10 steps in the robotics journey!

Feel free to reach out to us if you’re on this road; we would like to understand how you’re solving these problems and discuss how we can help you move faster.

The full map of the Robotics Journey

We’ve compiled this as a mini-map with notes and considerations. Please feel free to download it, print it out, or use it in discussions.

How can we help?

We want to help you in your journey. Where are you on the roadmap? What is your minimum viable navigation solution? What exciting projects are you working on that are over budget, not getting to market, or at risk of being cancelled because of pitfalls like these?

Contact us

Find out how we can help you in your navigation journey

Announcing our YouTube channel and Livestream on all things robotics!

We’re excited to announce our NavAbility YouTube channel on all things robotics!

We’ll dive into interesting topics about robots, sensors, navigation, and coordination – the “what to expect when you’re expecting a robot” for everyone from commercial users through to home hobbyists. Jim will also be doing a YouTube Live stream to discuss the latest video, answer questions, and talk about industry news.

Subscribe to the NavAbility YouTube channel to follow us as we release these discussions. We also love communication, so if you have a topic in mind please comment on the videos or email us at info@navability.io.