Dev Update: Preview of NavAbility Mapper, a Construction Site Example
NavAbility Mapper makes it easier to build, manage, and update large-scale maps for autonomous systems. This post briefly showcases building a 3D map from an open construction-site dataset.
A key feature of NavAbility Mapper is that it strongly decouples the map-building process so you can focus on the task at hand. For example, with NavAbility Mapper your team can efficiently split up the following map-building tasks:
- Importing data into the cloud (e.g. ROS bags)
- Inspecting and selecting data of importance
- Map tuning using both parametric and non-Gaussian SLAM solvers
- Resolving conflicts in areas with contradictory data
- Manually refining results in ambiguous areas where automation needs a human touch
- Exporting the map to your robots in various formats
In this post we’ll give a preview of a concrete case study we’re building in the construction space.
Sneak Preview: Mapping a Construction Site
Side-by-side view of the 3D world map (left) and first-person camera data (right). Notice how information is aggregated and improved in the 3D world map as more data is added.
Challenges in Mapping
Building and maintaining 3D spatial maps from fragmented sensor data in a construction environment presents a number of challenges, for example:
- There is no single sensor that solves all problems, and the key to robust mapping is flexibly solving data fusion problems from multiple (heterogeneous) sensors.
- Measurement data itself is imperfect: each sensor has unique errors or ambiguities that are hard to model, predict, or reduce to a Gaussian-only error model. This is especially true when incorporating prior information (e.g. CAD models) or resolving contradictions in dynamic environments.
- Verifying map accuracy in a dynamic environment (i.e. construction) requires a delicate balance between automation and user input, as well as continuous validation as new data is added.
- We jokingly say you’re doing your job correctly in construction only if the map keeps changing.
- Maps need to be shared – between automation, human stakeholders, and ideally CAD/BIM software – and this requires a rich representation of maps, not just a networked filesystem.
- Leveraging data collection from mobile equipment (possibly hand-held) provides more opportunities for collaborative robotic systems, but requires significantly more advanced data processing capabilities.
Diving into Mapper
We’re addressing these challenges by building NavAbility Mapper to be sensor-flexible and suitable for enterprise use.
Flexible Sensor Types
Firstly, let’s take a quick look at some of the specific sensor aspects of NavAbility Mapper. No single sensing modality can do it all, so Mapper is designed from the ground up to combine various sensor types (“apples and oranges”) into a common “apples and apples” joint inference framework, for example:
- LIDAR produces semi-dense point clouds, but its cost and size mean it is not always available
- Inertial sensors provide self-contained estimates, but they require complex calibration and tricky data processing
- Camera imagery is ubiquitous, but it requires its own calibration and must contend with lighting variations and scene obstructions
In short, no sensor gives you a complete solution. We believe how you merge the sensor data is what makes (or breaks) a solution. NavAbility Mapper is designed to be flexible, incorporating a range of different sensor types out of the box, with the ability to extend it as needed.
In this post, we’ll look at the three sensor types available in the construction dataset.
3D LIDAR Scans
LIDAR scans are a popular sensor type for mapping and localization. A key operation is calibrating and aligning point clouds, also known as the registration problem. An example of a LIDAR alignment problem is shown in Figure 1 below.
A key feature of NavAbility Mapper is that it employs multiple methods to align point clouds. We integrate Gaussian techniques, non-Gaussian techniques, and supervisory human intervention to enable an efficient mapping process. Ideally, everything aligns automatically, but in cases where it doesn’t (the critical cases!) we use novel solvers and judicious human intervention to ensure robust autonomy.
Even in cases of high ambiguity (when the going gets really tough!), the non-Gaussian alignment correlations are used directly as measurements in a factor graph model for further joint processing with other sensor data.
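To make the registration step concrete, here is a minimal sketch of pairwise scan alignment using the open-source Open3D library. This illustrates the general technique, not the Mapper solver itself, and the file names and voxel size are placeholders:

```python
# Minimal pairwise LIDAR registration sketch using Open3D (illustrative only;
# not the NavAbility Mapper solver). File paths and parameters are placeholders.
import numpy as np
import open3d as o3d

# Load two overlapping scans (placeholder file names).
source = o3d.io.read_point_cloud("scan_000.pcd")
target = o3d.io.read_point_cloud("scan_001.pcd")

# Downsample to speed up and regularize the alignment.
voxel = 0.1  # meters, illustrative
source_down = source.voxel_down_sample(voxel)
target_down = target.voxel_down_sample(voxel)
target_down.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))

# Point-to-plane ICP refines an initial guess (identity here) into a
# relative transform between the two scans.
result = o3d.pipelines.registration.registration_icp(
    source_down, target_down,
    max_correspondence_distance=2 * voxel,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("Fitness:", result.fitness)          # fraction of inlier correspondences
print("Relative transform:\n", result.transformation)
```

When the unimodal assumption behind ICP breaks down (low overlap, repetitive structure), a single best-fit transform like this is exactly what gets replaced by the non-Gaussian alignment correlations described above.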

IMU Data
Inertial measurement units – gyros and accelerometers – may or may not be available in a given setup, but they provide valuable input for fully autonomous data processing. The figure below shows a short segment of gyro rate measurements between keyframes in the construction dataset. The data clearly shows a mobile sensor platform rotating aggressively on three axes while collecting data!
NavAbility Mapper fuses this data with other sensors into a unified mapping solution.
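For intuition, the sketch below integrates body-frame gyro rates between two keyframes into a relative rotation using SciPy. It is a toy illustration with synthetic data; a real preintegration pipeline also handles accelerometer data, sensor biases, and noise models:

```python
# Toy gyro integration between two keyframes (illustrative; real IMU
# preintegration also handles accelerometers, biases, and noise models).
import numpy as np
from scipy.spatial.transform import Rotation

def integrate_gyro(rates, dt):
    """Compose small axis-angle increments from body-frame gyro rates.

    rates: (N, 3) array of angular velocities [rad/s]; dt: sample period [s].
    Returns the relative rotation accumulated over the segment.
    """
    R = Rotation.identity()
    for omega in rates:
        R = R * Rotation.from_rotvec(omega * dt)  # right-multiply: body-frame increment
    return R

# Example: 200 synthetic samples at 100 Hz of an aggressively rotating platform.
rng = np.random.default_rng(0)
rates = rng.normal(0.0, 1.5, size=(200, 3))  # synthetic rad/s data
delta_R = integrate_gyro(rates, dt=0.01)
print("Relative rotation (deg):", delta_R.as_euler("xyz", degrees=True))
```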

Camera Data
Camera imagery is another popular (and ubiquitous) data source for mapping and localization. While camera data is easy to capture, challenges such as lighting, obstructions, and dynamic scenery complicate its use.
In combination with other sensors, however, camera data becomes a valuable input: we incorporate it into the factor graph, where it can be extracted and used to improve the mapping result.
Stereo or structured-light cameras provide reasonable depth estimates through computer vision processing. In general, camera data can be processed via brute-force matching, sparse feature extraction, or semi-dense methods, as sketched below.
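As a concrete example of the sparse-feature route, this sketch extracts and matches ORB features between two frames with OpenCV. The image file names are placeholders, and this is one common technique rather than Mapper’s internal pipeline:

```python
# Sparse feature extraction and matching between two camera frames using
# OpenCV ORB (illustrative; image file names are placeholders).
import cv2

img0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# Brute-force Hamming matching with cross-checking for one-to-one matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

print(f"{len(matches)} matches; best distance {matches[0].distance:.0f}")
```

Matched features like these provide the raw correspondences that can be turned into factor graph measurements alongside LIDAR and IMU data.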
More to follow on camera data!
NavAbility Mapper for Enterprise Use
Multisensor Calibration
Naturally, the combination of multiple sensors requires calibration of each sensor individually (a.k.a. intrinsics) as well as of the inter-sensor transforms (a.k.a. extrinsics). Often, these calibration parameters are computed through optimization routines not unlike the underlying localization and mapping problem itself, commonly referred to as simultaneous localization and mapping (SLAM).
A feature of NavAbility Mapper is that calibration is treated just like localization and mapping, allowing all of these problems to be solved jointly.
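To illustrate the idea of solving calibration jointly with localization, here is a toy 2D sketch using SciPy’s least-squares solver: three robot poses and a fixed body-frame sensor offset (the extrinsic) are estimated in one optimization over the same landmark measurements. All values, frames, and the solver choice are assumptions for illustration:

```python
# Toy joint calibration + localization sketch (illustrative only, not the
# NavAbility solver): estimate three 2D robot poses AND a fixed body-frame
# sensor offset (the extrinsic) in a single least-squares problem.
import numpy as np
from scipy.optimize import least_squares

landmarks = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])     # known map
true_poses = np.array([[2., 3., 0.2], [5., 5., 1.4], [7., 2., -0.9]])  # x, y, heading
true_ext = np.array([0.4, 0.1])  # sensor offset in the body frame (to be recovered)

def predict(poses, ext):
    """Landmarks expressed in each pose's sensor frame: R^T (l - p) - ext."""
    rows = []
    for x, y, th in poses:
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        rows.append((landmarks - [x, y]) @ R - ext)
    return np.concatenate(rows).ravel()

rng = np.random.default_rng(1)
meas = predict(true_poses, true_ext) + 0.01 * rng.normal(size=24)

def residual(params):
    return predict(params[:9].reshape(3, 3), params[9:]) - meas

# Initialize poses from a rough odometry-like guess and the extrinsic at zero.
x0 = np.concatenate([true_poses.ravel() + 0.3, [0.0, 0.0]])
sol = least_squares(residual, x0)
print("Estimated extrinsic:", np.round(sol.x[9:], 3))  # recovers ~[0.4, 0.1]
```

The design point is that the extrinsic is just another variable in the same optimization, constrained by the very measurements used for localization and mapping.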
Gaussian and Non-Gaussian Algorithms
Robust mapping requires more than traditional parametric (Gaussian-only) processing. NavAbility develops both parametric and non-Gaussian algorithms that operate at the measurement level as well as the joint factor graph inference level for more robust computations. While non-Gaussian techniques are more computationally intensive, their added robustness can dramatically shorten the overall mapping timeline.
NavAbility Mapper combines both techniques at the heart of the software (the factor graph) to ensure your map is always stable and reliable in enterprise applications.
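A toy example helps show why a Gaussian-only model can fall short. Below, an ambiguous measurement consistent with two data-association hypotheses is modeled as a Gaussian mixture; the mixture keeps both modes alive, while a single Gaussian would collapse them to an average that neither hypothesis supports. The numbers are purely illustrative:

```python
# Toy 1D example of a non-Gaussian measurement: an ambiguous observation
# consistent with two data-association hypotheses. All numbers illustrative.
import numpy as np

def mixture_pdf(x, means, weights, sigma):
    """Density of a simple Gaussian mixture over hypothesis positions."""
    norm = sigma * np.sqrt(2 * np.pi)
    return sum(w * np.exp(-0.5 * ((x - m) / sigma) ** 2) / norm
               for m, w in zip(means, weights))

x = np.linspace(-2.0, 12.0, 1401)
hypotheses = [2.0, 8.0]                      # two plausible feature positions
density = mixture_pdf(x, hypotheses, weights=[0.6, 0.4], sigma=0.5)

# The mixture keeps both modes alive; a single Gaussian fit collapses them
# to a weighted mean that neither hypothesis actually supports.
print("Mixture peak near:", round(float(x[np.argmax(density)]), 1))  # ~2.0
print("Gaussian-only mean:", 0.6 * 2.0 + 0.4 * 8.0)                  # 4.4
```

Carrying such multimodal beliefs through the factor graph, rather than collapsing them early, is what keeps the critical ambiguous cases recoverable.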
Multi-Stakeholder Access to Maps and Privacy
Collecting, ingesting, organizing, and then producing maps is only part of the overall mapping problem. The goal is to produce a digital twin representation of ongoing operations, one that can be used for everything from automation to progress reports.
In construction, the map is inherently dynamic, must be constantly updated, and must be available to a variety of stakeholders and end-users. NavAbility understands these stakeholders may be human or robotic, and we strongly believe in defining a common reference for human+machine collaboration through a shared understanding of the same spatial environment.
NavAbility maps are:
- Built and persisted in the cloud for easy access
- Optimized and indexed for efficient access whether by human or machine
- Secured by state-of-the-art cloud security and user authorization to ensure your data is kept private
More details will follow in future posts, and we invite you to reach out to NavAbility with questions or interest. Follow us on LinkedIn to stay up to date with new articles on how Mapper can empower customer and end-user products, services, and solutions.