ICRA Update: One Week to Go, Preparing Tutorials and SDKs

We’re looking forward to our workshop on May 27th at the premier robotics conference, IEEE’s International Conference on Robotics and Automation (ICRA), in Philadelphia!

We encourage anyone interested in building robotic systems to come meet us in Philadelphia during ICRA week. If you can’t be there in person, sign up for complimentary virtual attendance and join us on Gather.Town (we will be supporting both in-person and virtual attendees). We will be at the conference for the full week, so feel free to reach out to set up a meeting. We would love to meet you, visit your booth, or share a coffee and discuss next-generation robotics.

Tutorials are Now Available!

Our workshop session is aimed at providing multiple levels of engagement – from brief overviews of how to solve complex real-world navigation problems, through to trying the tutorial code snippets for yourself. Everything is available on the NavAbility App page if you want to take an early peek!

Upgrading SDKs for Tutorials

We are working hard to provide both “zero install” and local install options for visitors.  We are also improving our SDKs for easier interfacing from different programming languages such as Python.  Our ICRA tutorials will show how our SDKs can readily be integrated into your existing software stack, making the features of our technology available with the least amount of effort.

Who Should Attend

Robotic system developers, integrators, OEM and sensor manufacturers, navigation system experts, and project leads alike will find this workshop insightful and constructive.  We also encourage researchers in simultaneous localization and mapping (SLAM) to visit.

Stay Tuned

We will be updating our ICRA Landing Page.  See you in Philadelphia May 23rd – 27th!  Reach out, follow us, or subscribe to our feeds for more info!

How can we help?

Let us know if you have any specific questions about our technology or company, or about challenging navigation, localization, or mapping problems you have encountered.  We want to help!

Visit us at the NASA LSIC 2022 Spring Meeting!

The Lunar Surface Innovation Consortium, which supports NASA’s Lunar Surface Innovation Initiative, is hosting its Spring Meeting this week.  NavAbility is presenting at the event and will be participating in the breakout sessions.  Come visit us at the event to learn how we are enabling more capable, distributed, and robust robotic technologies through hybrid open-source and platform software!

This event is a great opportunity to connect with a community of experts in a variety of advanced robotic technologies, and to learn about ongoing innovation from industry, academia, private companies, and national labs.  See you there on 4–5 May at Johns Hopkins APL in Laurel, MD, USA!

Download: NavAbility Poster 2022 with hyperlinks (PDF)

How can we help?

Let us know if you have any specific questions about our technology or company, or about challenging navigation, localization, or mapping problems you have encountered.  We want to help!

We’re updating the NavAbility Cloud!

In preparation for our tutorials at ICRA 2022 (join us on May 27th, in person or remotely on Gather.Town!), NavAbility is excited to announce the deployment of our first official production environment of the NavAbility Cloud. Our mission is to enable more vibrant robotics automation, and this is an important step in empowering our community with a reliable, stable, and secure NavAbility Cloud.

These changes will enable exciting upcoming features such as automated calibration, big data integration, and simplified user authentication. More to follow as we release these features!

Timeframe and Affected Services

The deployment will begin at 8:00pm PST on April 14, 2022. You may experience some instability of existing environments during the deployment as we cut various services over into production.

Once the deployment is complete, we will announce it in the ‘general_help’ channel on the NavAbility Slack. After that announcement, you should switch to using the NavAbility Cloud via the following URLs:
https://app.navability.io
https://api.navability.io
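
Once you have switched over, a quick way to confirm the new endpoints are reachable from your environment is a simple HTTP check. The sketch below uses Python’s requests library and only verifies that the hosts respond; it does not authenticate or exercise any NavAbility functionality.

# Minimal reachability check for the new NavAbility Cloud endpoints.
# Illustrative sketch only; it does not use any NavAbility API.
import requests

ENDPOINTS = [
    "https://app.navability.io",
    "https://api.navability.io",
]

for url in ENDPOINTS:
    try:
        response = requests.get(url, timeout=10)
        print(f"{url} -> HTTP {response.status_code}")
    except requests.RequestException as err:
        print(f"{url} -> unreachable ({err})")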

As a consequence of this deployment, we will be asking our existing users to recreate their user accounts. Please go to https://app.navability.io, click Login, and then sign up. If you need your existing data migrated to your new account, please email us at support@navability.io by May 13, 2022.

Thank you for your patience and understanding while we take this milestone step in our journey.

First 10 steps of the Robotics Journey – a roadmap

What to expect when you’re expecting a robot?… Buy a robot, build a solution, take over the pizza industry, right?

But robotics is an emerging, constantly changing field. It’s a journey with unique challenges. In this blog post we want to provide our take on the steps in that journey and how you can set yourself up for success.

We’ll use this as a roadmap for upcoming blog posts, YouTube videos, and in Jim’s Random Walks livestream, so subscribe to keep up to date.

Steps and challenges

Start: The robotics journey

Whether you’re disrupting an established industry with an exciting automation project, or you’re a hobbyist exploring the latest technology, welcome!

Challenge: There’s a wealth of information, libraries, and academic papers on robotics… Where to start?

Solution: Let’s map it out, and we’ll discuss each step (and pitfalls) in the coming months.

1. Hardware Choice

The first step is generally choosing hardware. This is the device that’s going to solve the physical world problem: anything from a simple cellphone, through a tracked vehicle, to a submersible. Luckily, there are endless hardware options to choose from across a wide price range! More to follow on this topic.

Challenge: You need to pick a hardware platform that will solve a physical world problem.

Solution: Pick a hardware platform that matches your needs, or build it!

2. Sensor Choice

Next, you’d like to give it sufficient sensors to solve the problem at hand. There’s a variety of sensors – cameras, LIDARs, RADARs, compasses, and many more – and picking a set that should solve the problem (within a budget; some of this equipment can get pricey!) is the next hurdle you need to overcome.

Challenge: You want to give the robotic equipment (a.k.a. robot) sufficient information to understand its environment. We call it the Sensor Goldilocks problem – not too much, not too little, just the right amount of sensor data.

Solution: You choose which sensors you want to use for your application. This requires some guessing because you don’t know exactly what you’re going to need later on. We’ll talk through some options and how you can confidently choose sensors that will hold up in the next steps.

3. Integrating Data

Now we switch from hardware to software. You need to bring all your sensor data and your actuation devices into one place so that you can start processing it and building the software logic.

Challenge: You want to integrate your sensors to start understanding your environment. How do you consolidate the raw data from all your sensors, say a LIDAR, a camera, a compass, and a GPS? Choose carefully, because we’ve often made poor choices here and ended up having to write our own device drivers. Pick correctly and integration is a breeze.

Solution: At this point you’re probably deciding whether or not to use ROS (the Robot Operating System), which version to use, and what packages it supports out of the box. We’ll talk more about these design considerations in this step.
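
If you do go the ROS route, consolidating sensor streams is largely a matter of subscribing to the right topics. Here is a minimal ROS 1 (rospy) sketch that brings a LIDAR scan, a camera image, and a GPS fix into a single node; the topic names are assumptions and will depend on your specific drivers.

#!/usr/bin/env python
# Minimal ROS 1 sketch that consolidates a few common sensor streams
# into one node. Topic names are assumptions; adjust to your drivers.
import rospy
from sensor_msgs.msg import Image, LaserScan, NavSatFix


def on_scan(msg):
    rospy.loginfo("LIDAR scan with %d ranges", len(msg.ranges))


def on_image(msg):
    rospy.loginfo("Camera frame %dx%d", msg.width, msg.height)


def on_gps(msg):
    rospy.loginfo("GPS fix lat=%.6f lon=%.6f", msg.latitude, msg.longitude)


if __name__ == "__main__":
    rospy.init_node("sensor_consolidator")
    rospy.Subscriber("/scan", LaserScan, on_scan)            # LIDAR driver
    rospy.Subscriber("/camera/image_raw", Image, on_image)   # camera driver
    rospy.Subscriber("/gps/fix", NavSatFix, on_gps)          # GPS driver
    rospy.spin()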

4. Building a Map

Before the robot can do truly useful actions, it needs a robust understanding of its environment. This is called perception (a.k.a. SLAM, mapping, spatial awareness, etc.), and it is the critical step where disparate raw data is converted into information. This is a rapidly developing research area (funny story: reproducing a human’s spatial awareness is quite challenging), and letting you do it easily, robustly, and at scale is NavAbility’s mission.

Challenge: Converting camera images, LIDAR scans, compass bearing, GPS location, beacons, etc. into one robust, consolidated map of the world. Assume that you’re also going to have imperfect information and will need to design for that – this is true for every scenario we’ve ever worked with.

Solution: Start reading up on SLAM, dig into an existing library, or use our cloud services to get going quickly.
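
To make the idea concrete, here is a minimal 2D pose-graph sketch using GTSAM’s Python bindings (one of the libraries mentioned later in this post, and just one of several you could pick): three poses connected by odometry factors plus a prior on the first pose, then optimized. The numbers are purely illustrative, and the API shown assumes GTSAM 4.x.

# Minimal 2D pose-graph sketch with GTSAM (one possible SLAM library).
# All values are illustrative only.
import gtsam
import numpy as np

graph = gtsam.NonlinearFactorGraph()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry: roughly 1 m forward between consecutive poses.
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# Initial guesses are deliberately off; the optimizer corrects them.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.1, 0.1, 0.05))
initial.insert(2, gtsam.Pose2(1.2, -0.1, 0.0))
initial.insert(3, gtsam.Pose2(2.1, 0.1, -0.05))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)

The same pattern – variables for poses, factors for measurements – carries over directly to the landmark and multi-robot steps below.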

5. Using Landmarks

Landmarks are identifiable features that carry information and help localize the robot – a docking station, a tree, or a cup. Converting raw data (like camera images) into robust landmarks, and relating those landmarks to your current position, is what closes the loop in robotics: “I’ve seen this before, so I must be here.”

Challenge: How do you convert raw sensor data into robust information so you can identify landmarks (known information, or objectives, in a map)? A great use-case of this is finding a docking station in a room.

Solution: There are great libraries that help convert sensor data into information, and we’ll discuss these in regular video posts. For example, see our YouTube video on using AprilTag fiducial markers to convert raw camera data into real-world landmarks, and the sketch below.
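
To give a taste of what that looks like in code, here is a minimal sketch using the lib-dt-apriltags Python wrapper (linked in the video references later in this post). It detects tags in a grayscale image and, given camera intrinsics and the physical tag size, estimates each tag’s pose relative to the camera; the intrinsics, tag size, and file name below are placeholders.

# Minimal AprilTag landmark sketch using the dt_apriltags Python wrapper.
# Camera intrinsics (fx, fy, cx, cy), tag size, and file name are placeholders.
import cv2
from dt_apriltags import Detector

detector = Detector(families="tag36h11")

image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
camera_params = (600.0, 600.0, 320.0, 240.0)  # fx, fy, cx, cy in pixels
tag_size = 0.16                               # tag edge length in meters

tags = detector.detect(
    image,
    estimate_tag_pose=True,
    camera_params=camera_params,
    tag_size=tag_size,
)

for tag in tags:
    # pose_t is the tag position in the camera frame; a usable landmark.
    print(f"tag {tag.tag_id}: t = {tag.pose_t.ravel()}")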

From research to product!

Great! Once you have proven out the idea, you need to take it to the next level.

These steps might occur in any order, but we’ve documented the journey in the sequence we find users like to implement them.

6. Adding Memory

Once you flip the off switch, you don’t want to lose your data. On Monday I turned the robot off. On Tuesday I have to start over. I want to reuse yesterday’s information. Persistence is key to real-world robotics, but it’s quite a challenge because saving logs won’t cut it outside of a lab.

Challenge: How do you transmit, save, query, and visualize your robot’s data over time? Furthermore, how do you give it yesterday’s map to use as prior information for today?

Solution: Integrate a persistence layer that saves and indexes your robot’s data, both temporally and spatially. You build up multiple data sessions and use yesterday’s information as prior data to improve today’s navigation.
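
As a minimal illustration of the idea (this is not the NavAbility persistence layer, just a generic sketch), you can index each observation by session, timestamp, and pose in a small SQLite database so that yesterday’s session can be queried as prior information today.

# Illustrative persistence sketch: index observations by session, time,
# and pose so a previous session can be loaded as prior information.
import sqlite3
import time

db = sqlite3.connect("robot_memory.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS observations (
           session TEXT, stamp REAL, x REAL, y REAL, theta REAL, payload BLOB
       )"""
)

def record(session, x, y, theta, payload=b""):
    db.execute(
        "INSERT INTO observations VALUES (?, ?, ?, ?, ?, ?)",
        (session, time.time(), x, y, theta, payload),
    )
    db.commit()

def prior_session(session):
    """Load a previous session's trajectory to seed today's map."""
    return db.execute(
        "SELECT stamp, x, y, theta FROM observations WHERE session = ? ORDER BY stamp",
        (session,),
    ).fetchall()

record("tuesday", 1.0, 2.0, 0.1)
print(prior_session("monday"))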

7. Using Prior Data

Prior data was mentioned in the persistence step, but what if I want to include blueprints, CAD models, or known locations? This is invaluable when you’re mapping a construction site, navigating a congested harbor, or finding a box in a warehouse. Luckily, you already have a persistence layer, so the challenge is to represent this information as dynamic landmarks.

Challenge: How do I convert prior information (blueprints, CAD models, Google Maps, known AprilTag positions) to landmarks so that I can use it for navigation?

Solution: Design a data integration layer between your prior data (e.g. CAD model) and your map so that the prior data becomes persisted memory as dynamic landmarks.
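
For example, if a blueprint or CAD model gives you known AprilTag placements, you can turn them into landmark priors directly. Below is a minimal sketch assuming a simple landmark record of our own; the tag IDs, coordinates, and uncertainty are placeholders.

# Sketch: turn known tag positions from a blueprint into landmark priors.
# Tag IDs, coordinates, and the uncertainty value are placeholders.
from dataclasses import dataclass

@dataclass
class LandmarkPrior:
    tag_id: int
    x: float       # meters, in the blueprint/world frame
    y: float
    sigma: float   # how much we trust the blueprint, in meters

# Known tag placements taken from the blueprint or CAD model.
BLUEPRINT_TAGS = {
    7: (12.5, 3.2),
    11: (0.0, 8.7),
}

def landmarks_from_blueprint(tags, sigma=0.05):
    """Convert blueprint tag positions into landmark priors for the map."""
    return [LandmarkPrior(tag_id, x, y, sigma) for tag_id, (x, y) in tags.items()]

for lm in landmarks_from_blueprint(BLUEPRINT_TAGS):
    print(lm)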

8. Cooperating Robots

Multiple robots can share a map so that they can operate together. This is the vision: coordinated, cooperating robots solving real-world problems!

Challenge: You have many robots operating and want them to share information and coordinate operations. How do you share information, build a common map, and coordinate actions?

Solution: You consolidate each robot’s sensor data into a global map and stream information to each robot from the global map. Each robot then has a small local map and a much larger global map in the cloud! We live, eat, and dream about this, so we’ll discuss this in many forthcoming posts.

9. Many Robots, Many Maps

As you add new environments (new maps), your data grows rapidly. Managing this data and sharing it between robots becomes an enterprise challenge.

Challenge: You have many robots operating in a variety of environments. How do you store, query, and leverage all of that information?

Solution: Each environment becomes a shared global map that grows as robots explore and interact with it. Environments become living global memory for any robot that works in them.

10. Onward Robotics Journey!

To #INF and beyond!

Those are the first 10 steps in the robotics journey!

Feel free to reach out to us if you’re on this road – we would like to understand how you’re solving these problems and discuss how we can help you move faster.

The full map of the Robotics Journey

We’ve compiled this as a mini-map with notes and considerations. Please feel free to download it, print it out, or use it in discussions.

How can we help?

We want to help you in your journey. Where are you on the roadmap? What is your minimum viable navigation solution? What exciting projects are you working on that are over-budget, not getting to market, or may be cancelled because of pitfalls like these?

New YouTube video: Easily using camera data for navigation and localization

You have a camera on your robot… but now what?

We’re starting the discussion on how to convert camera data into information that can be used for navigation and localization.

Next up: Loop closures and why that’s the crux of a robust navigation solution!

References:
* Edwin Olson’s AprilTags paper: https://april.eecs.umich.edu/media/pdfs/olson2011tags.pdf
* C/C++ library: https://github.com/AprilRobotics/apriltag
* A Python library: https://github.com/duckietown/lib-dt-apriltags
* Julia library: https://github.com/JuliaRobotics/AprilTags.jl

We’re all learning here, so please feel free to comment about other good wrappers of the AprilTags library!

NavAbility SDKs Released!

NavAbility SDKs are now available in beta for Python, Julia, and JavaScript!

At NavAbility we’re excited to make robot navigation available to all! From enterprise commercial companies through to high-school hobbyists, our software lets you build out a complete robot navigation system for any application.

Navigation AI – also called a SLAM system, perception, or simply mapping – is the critical technology bottleneck blocking robots from working in the real world. Read more here if you’d like to know how we’re solving this challenge.

With this in mind we’re excited to announce our Python, Julia, and Javascript SDKs!

The SDKs can be found on our website and in our GitHub repositories.

This is a first release and the SDKs are still in active development. A few notes for users:

    • Expect that these will change as we introduce new features, so we recommend following NavAbility news on LinkedIn or joining the Slack channel to keep up to date.
    • We recommend using the “Guest” user for testing (which is open and accessible to all), but if you would like a user account of your own, feel free to reach out to info@navability.io.
    • We’re actively prioritizing features, so please feel free to open issues against the GitHub repositories if you would like specific functionality prioritized.

If you have any questions or feedback, please reach out to us at help@navability.io!
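
To give a flavor of the workflow, here is a short outline of using the Python SDK with the open “Guest” user: connect, add a pose variable to a session’s factor graph, and request a cloud solve. The names below (NavAbilityHttpsClient, Client, addVariable, solveSession) follow the SDK examples at the time of this release but may differ between versions, so treat this as a sketch and check the repository README for the current API.

# Sketch only: names follow the SDK examples at release time and may have changed.
import asyncio

from navability.entities import Client, NavAbilityHttpsClient, Variable, VariableType
from navability.services import addVariable, solveSession


async def main():
    # Connects to the NavAbility Cloud API.
    navability_client = NavAbilityHttpsClient()
    # The open "Guest" user with an illustrative robot and session name.
    client = Client("Guest", "MyExampleBot", "TestSession1")

    # Add a single 2D pose variable to the session's factor graph.
    await addVariable(navability_client, client, Variable("x0", VariableType.Pose2.value))

    # Ask the cloud to solve the (tiny) graph.
    await solveSession(navability_client, client)


asyncio.run(main())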

New YouTube video: Factor graphs and their importance in robotics

We promised to have a conversation on all things robotics, and a great place to start that conversation is with factor graphs. This is the topic of the second video on the NavAbility YouTube channel.

We also love communication, so if you have a topic in mind please comment on the videos or email us at info@navability.io.

Start small, dream big! How does NavAbility empower teams to deliver?

At NavAbility, we believe that robotics and autonomy are hard problems because the field is emerging and changing continuously. They require a journey. In this article, we discuss how NavAbility empowers you to deliver on your projects.
 
We want to make solutions that are accessible to organizations of all sizes. The current state of the industry is a collection of technologies that rarely interoperate and require significant upfront cost to implement. We want to change that. We want you to be able to get started with a minimum viable navigation solution, and then provide a comprehensive, robust suite of tools that lets you scale and grow that solution into a production-ready product.
 

We are developing that platform.

The NavAbility Platform ranges from a zero-install setup (stay tuned for our tutorials app!) through to a fully functional on-device deployment. It is grounded in our next-generation technology, MM-iSAMv2, allowing almost limitless possibilities for your project journey.
 
This is where we need your help! We are focusing on building the best-in-class navigation software, but we need you to integrate your projects on top of it. It is early days, but already our technology can perform in place of GTSAM, Nav2, or SLAMCore. From there, we cannot wait for you to try the additional game-changing features made possible by our next generation technology.
 
The NavAbility Platform will give any organization a significantly lower cost of ownership, a much quicker time to market, and reduced risk, with a clear roadmap as you scale up. Reach out to discuss how we can help each other, and continue reading to learn more about how our strategy and technology enable these outcomes.

Lower cost of ownership

Deploy today with our ready to use products

First, the algorithm<->hardware trade-off. There is often a trade-off between the quality of your algorithms and the quality of your hardware. On one hand, you can use an algorithm with simplifying assumptions for speed and performance, but this may require an expensive sensor. On the other hand, a more robust algorithm may run more slowly but allow a lower-cost sensor. The issue is that this choice has to be made very early in the project, with little knowledge of how the design may iterate. If the wrong choice is made, or the technical requirements change, the result can be a high cost. The MM-iSAMv2 technology allows you to start with the simplest available hardware while you develop and test, and only upgrade when you need to. The underlying algorithms leverage factor-graph and manifold mathematics, enabling them to solve the most challenging problems without compromising performance. In the end, you can engineer for cost rather than be stuck with whatever you started with.

Second, the true cost of implementation. Once you get everything working in the lab or on your desktop computer, how do you bring that to market? There is a huge gap between a proof-of-concept and a running production product. NavAbility is providing a ready-to-use cloud platform that can be used from initial concept to scaled-up production system. This way you can focus on your specific product rather than on how to host your computations.

Third, the distributed computing problem. The MM-iSAMv2 technology allows data syncing between edge devices and the cloud, giving you the flexibility to run large computations in the cloud and share that data with the edge. This reduces the overall cost of the compute hardware on the edge if your application has any sort of connection to the cloud.

As we stated above, robotics and autonomy are hard problems. They require a journey. Often it takes weeks to properly validate that a technology will work for your needs. In addition, your organization may have to hire costly personnel to develop navigation solutions. NavAbility’s core competency is navigation, and we are here to help. Together we are able to solve the navigation problem you are encountering.
 
The MM-iSAMv2 technology is flexible, so you can get a Minimum Viable Product running and start gathering feedback from your customers. When the timing is right, or if something is not right, it’s easy to drop in new or additional hardware or iterate on the algorithm. In the end, we are always here to help you optimize your designs as you’re approaching launch day.

Faster time to market

Engage our experts to rapidly develop your novel applications


Reduce project risk

Leverage our next generation technology to avoid costly R&D

One of the greatest risks on a project is missing the targets. In an R&D project, you might go over budget. When developing a product, the market might not bear the cost needed to deliver. The best solution to these challenges is to get feedback quickly.
 
The platform we are building is there to help you understand what is possible and how to get there quickly. You can validate designs with customers through simulations on the NavAbility Cloud or even build a one-off Proof-of-Concept using AprilTag stickers and cell-phone cameras. This isn’t the final solution, but gathering early data to validate the viability of an idea dramatically reduces the risk.
 
The MM-iSAMv2 technology is built into an open-source suite known as Caesar.jl. Our success stories are published examples and papers; our struggles are out there in the open. You know what will work on day one and what needs additional R&D budget. Why trust a single company when you can trust our community and open codebase?

How can we help?

We want to build the platform that you need. What is your minimum viable navigation solution? What projects are you working on that are over-budget, not getting to market, or may be canceled? What technologies are you using today and how can they be better? The more we know, the better we can support your needs!

Application Example: Marine Vehicle Mapping Systems for Collision Avoidance and Planning

We’re all excited for the Jetsons-like world with robots in every aspect of life, solving a variety of challenging problems. But most robots are failing to leave safe, controlled lab spaces. Why?

The Imperfect Information Problem: Operating in the real-world requires robust solutions that can readily manage the chaos of dynamic environments and imperfect data. In the case of robotics, this is the deciding factor between a commercially-successful robot and a failure to launch.

Why Marine Vehicles: Coordinating marine vehicles is an ideal example of automation in a complex, dynamic environment. How do you ensure that your vehicles can safely track and operate around other ships, make the most of the variety of potentially-disagreeing sensors, and robustly handle the busy environment while completing critical tasks?

The NavAbility Case Study: At NavAbility we’re using data from MIT Sea Grant’s REx/Philos vehicles to demonstrate how any robot can extract map information from multiple sensors, identify and track dynamic objects like ships, and use prior information to navigate effectively in a dynamic environment.


Announcing our YouTube channel and Livestream on all things robotics!

We’re excited to announce our NavAbility YouTube channel on all things robotics!

We’ll dive into interesting topics about robots, sensors, navigation, and coordination – the “what to expect when you’re expecting a robot” for everyone from commercial users through to home hobbyists. Jim will also be doing a YouTube Live stream to discuss the latest video, answer questions, and talk about industry news.

Subscribe to the NavAbility YouTube channel to follow us as we release these discussions. We also love communication, so if you have a topic in mind please comment on the videos or email us at info@navability.io.