September 5, 2025

Inside the Code: How 5 Programming Languages Power arculus Robotics

At arculus, we rely on a mix of programming languages to power arculees, our autonomous mobile robots (AMRs), and our Fleet Management software. Each language is chosen for a reason: reliability, speed, flexibility, or usability. In this post, we take a closer look at the languages that make up our robotics stack.

From mapping, localising and navigating their way through warehouses to coordinating an entire fleet, Autonomous Mobile Robots (AMRs) like our arculees don’t speak just one language. After all, running real-time algorithms on an embedded system requires very different tools than designing UIs or scaling backend services. That’s why the arculus tech stack spans multiple programming languages, each chosen for what it does best. In this post, we’ll explore which ones we use and how they come together to keep our robots and fleet running smoothly.

The Languages of arculus

1. Rust for Reliability

At arculus, we use Rust for programming our Fleet Manager due to its unique ability to combine performance with safety and robustness. For mission-critical systems, such as coordinating arculees in warehouses, these qualities are non-negotiable. Unlike many other languages, Rust enforces strict rules that prevent common programming errors, such as memory leaks and data races, which can cause unpredictable behaviour or even system crashes. Its ownership model and borrow checker act as compile-time safeguards, guaranteeing memory safety without slowing down the system.

By catching bugs early in the development process, Rust allows our developers to write code that is not only fast but also remarkably reliable. This reliability is essential when a fleet of robots must work together in complex environments, where mistakes could result in downtime, collisions, or lost productivity.

Andy from the Fleet team believes Rust is an excellent choice for the arculus tech stack

Rust’s value extends beyond its core language features. It boasts a thriving community and a mature ecosystem with many high-quality libraries and tools. The Cargo package manager, for example, handles dependency management and builds smoothly and efficiently, accelerating development cycles. Andy Brinkmeyer, our Software Developer from the Fleet team, believes,

“Choosing Rust as our main backend language was the best decision for our tool stack because it provides comprehensive solutions, eliminating the need for additional tooling for building and dependency management. It’s part of a robust and growing ecosystem, simplifying our development process and integrating well with our systems.”

2. C: The Language of Hardware Control

The closer you get to the hardware, the more simplicity and speed matter. That is why our Embedded team programs low-level firmware in C. The language offers precise control over hardware and efficient execution, making it ideal for safety-critical systems such as motor controllers and sensor interfaces. With C, we can guarantee that instructions will run exactly when and how we need them to. As Kayas Ahmed, our Embedded Software Engineer, explains,

“C's simplicity and speed are fundamental to achieving predictability in embedded software applications. Its straightforward nature allows for precise control over hardware, and its efficient execution helps ensure that operations occur within strict time constraints, both crucial for reliable performance in embedded systems.”

While C is the foundation of our embedded stack, other languages play supporting roles. For example, C++ facilitates module abstraction, enhancing reusability and simplifying testing, while Python is the preferred choice for quickly building proof-of-concept utilities and testing tools.

3. C++: The Muscle Behind Real-Time Robotics

If C is the foundation, C++ is the engine that drives our robots. Dennis Schradick from the Functions team affirms, “C++ is the main language of our stack.” Most of the core robotics logic, from localisation, mapping, and state estimation to planning, control loops, and sensor fusion, runs here. Its deterministic execution ensures predictable timing for safety-critical loops, while its high performance and low latency enable real-time control even under complex workloads.

Dennis, our Robotics Software Engineer, busy working at the arculus office

Running on NVIDIA Jetson boards, our robots leverage C++ to unlock GPU acceleration through CUDA and TensorRT, enabling fast and optimised inference for machine learning and other compute-intensive tasks. Strong multithreading support further allows precise scheduling across Robot Operating System (ROS) 2 executors and callback groups, preventing bottlenecks and maintaining system responsiveness.

C++ also connects seamlessly to the broader robotics ecosystem. Mature libraries like Eigen, Ceres, OpenCV, and PCL provide ready-made tools for mathematics, vision, and optimisation, letting the team focus on higher-level functionality rather than reinventing the wheel. “This is what makes C++ the workhorse of our stack: it is fast, efficient, and deeply embedded in the robotics world,” concludes Dennis.

4. TypeScript: Clarity for Operators

Robots may not need user interfaces (UIs), but people do. At arculus, the frontend team relies on TypeScript to build UIs. “We use TypeScript as it is the de facto language for frontend development and mandatory by our used framework, Angular,” explains Axel Jäger, Team Lead Frontend.

While most of the visualisation and interactive elements run directly in the web browser, where JavaScript is the native language, TypeScript, a strongly typed superset of JavaScript, provides added clarity and reliability. As a result, it becomes easier to manage large, complex codebases while the program continues to run seamlessly in the browser. In some parts of the visualisation, OpenGL Shading Language (GLSL), a specialised language that executes directly on graphics hardware for high-performance rendering, is also used. Axel elaborates,

“Since we have to visualise warehouse layouts with thousands of items in a browser-based user interface, we try to offload as much as possible to the GPU, keeping the user interface as snappy as possible.”

This combination of TypeScript and GLSL allows arculus to ensure smooth visualisations even for complex warehouses, making performance and utility go hand in hand.

The arculus codebase is shaped by the strengths of different programming languages

5. Python: A Tool for Fast Development

Python is everywhere at arculus. It may not power the mission-critical loops, but it is the language of flexibility and speed. The Functions team uses it to train neural networks in PyTorch and TensorFlow before deploying them in optimised C++ pipelines. It also drives lightweight ROS 2 nodes, backend services, and orchestration scripts.

In the Embedded team, Python supports rapid prototyping and firmware interface testing. It is the language to reach for when trying out an idea quickly, connecting different systems together, or automating routine tasks.

Some Other Languages in our Tool Stack: A Word on JavaScript and React

For debugging and diagnostics, our team leans on JavaScript and React to build browser-based interfaces that anyone — operators or developers — can access from any device without setup.

React’s modern framework allows rapid iteration on complex, interactive dashboards, while the rich JavaScript ecosystem provides charts, graphs, and even 3D visualisations for TF trees, node graphs, and other diagnostic data. Integration with Python backend services is straightforward via REST or WebSockets, enabling real-time monitoring and control. These dashboards power a variety of use cases, from inspecting topics and triggering services or actions to recording and downloading rosbags, making complex robot systems more transparent and manageable at a glance.
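
To make the REST/WebSocket integration concrete, here is a hedged sketch of the kind of JSON payload a Python diagnostics backend might expose to a browser dashboard (the field names and robot identifier are assumptions for illustration, not the actual arculus API):

```python
# Illustrative diagnostics payload (assumed field names, not the arculus API):
# a dashboard could fetch this over REST or receive it via a WebSocket.
import json

def diagnostics_snapshot(robot_id, topics, battery_pct):
    """Serialise a minimal robot status snapshot for a browser frontend."""
    return json.dumps({
        "robot": robot_id,
        "topics": topics,          # e.g. ROS 2 topics currently being inspected
        "battery_pct": battery_pct,
    })

snapshot = diagnostics_snapshot("arculee-07", ["/scan", "/tf"], 87)
```

A real service would serve such snapshots from an HTTP endpoint or push them over a WebSocket for live monitoring.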

Wrapping Up

No single language can power an entire robotics stack. Rust provides reliability, C offers precise control over hardware, and C++ is the go-to language for building fast and efficient robotics software. Similarly, TypeScript delivers clarity to operators, while Python fuels experimentation and AI development. JavaScript fills in the gaps where quick browser-based tools are needed. Together, these languages form a carefully balanced toolkit that allows our robots to be safe, robust, and adaptable. In robotics, performance is not only engineered into the hardware; it is also written into the code.

August 26, 2025

Free Navigation at arculus: Precise, Tagless Localisation

The arculees, our Autonomous Mobile Robots, must navigate through their environment to ensure smooth intralogistics operations. While they previously relied on physical markers at client sites, that’s no longer necessary. With the development of our Free Navigation technology, the arculees can now localise and move freely, i.e., without any pre-installed markers. Let’s hear from Dennis Schradick, our Robotics Software Engineer, on how this advanced technology works, reducing costs while improving efficiency.

Autonomous Mobile Robots (AMRs) like our arculees navigate through an environment to efficiently perform their designated tasks, such as material transportation in warehouses. Previously, the arculees relied on physical floor landmarks such as QR code markers to find their way at a site. While these markers enabled millimetre-level localisation accuracy, they required time- and cost-intensive pre-installation, calibration, and maintenance. To make navigation smoother and marker-independent, arculus adopted a technology called free navigation. Dennis Schradick, our Robotics Software Engineer, elaborates,

"Free navigation is what we call it at arculus. It is the technology that allows our arculee to localise itself and therefore navigate around in any environment without the need to alter the site physically. With free navigation, we are only relying on what's already there, both in terms of the operating environment and the sensor setup of the arculee."

Dennis working with arculees in the testing area of our Munich office

How does Free Navigation Work?

At its core, free navigation enables the arculees to understand where they are in an environment and move autonomously. Dennis explains how it works through a combination of sensing and localisation techniques:

1. Sensing

For a robot to move autonomously, it first needs to perceive its own motion and its surroundings. The arculee achieves this through two types of sensors: relative sensors and absolute sensors.

a. Relative Sensors

Relative sensors provide estimates of how the robot has moved based on its own internal measurements. These include:

  • Wheel speed sensors: measure how fast the wheels are turning.
  • Inertial Measurement Unit (IMU): measures acceleration and rotational velocity.

By mathematically integrating these measurements, the system can estimate how far the robot has travelled and how much it has rotated, giving an estimate of the robot’s current position and orientation relative to its starting point. However, relative sensors are subject to drift: sensor noise and wheel slip accumulate, so the estimates become less accurate over time.
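
This integration step can be sketched with a minimal unicycle model (a deliberately simple illustration with assumed names and units, not arculus code):

```python
# Dead-reckoning sketch: integrate forward speed (from wheel sensors) and
# yaw rate (from the IMU) into a 2D pose estimate. The unicycle model and
# step sizes are illustrative assumptions.
import math

def integrate_odometry(pose, v, omega, dt):
    """Advance (x, y, heading) by one step of forward speed v [m/s]
    and yaw rate omega [rad/s] over dt seconds."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Drive straight for 1 s at 0.5 m/s, then turn in place by 90 degrees.
pose = (0.0, 0.0, 0.0)
for _ in range(100):                      # 100 steps of 10 ms
    pose = integrate_odometry(pose, 0.5, 0.0, 0.01)
for _ in range(100):
    pose = integrate_odometry(pose, 0.0, math.pi / 2, 0.01)
```

In practice each integration step also accumulates the sensor noise mentioned above, which is why an absolute reference is needed.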

b. Absolute Sensors

To counter the drift issues of relative sensors, arculee uses absolute sensors such as laser scanners (LiDAR) to obtain a precise, external reference of its position in the environment. These scanners were originally part of the robot’s safety system, ensuring safe operation around humans. But with free navigation, they now also serve the additional function of enabling accurate localisation.

The laser scanners provide an absolute reference by recognising features in the environment, which improves navigation accuracy and allows the robot to maintain reliable position estimates without physical markers.

Laser (LiDAR) scanner, installed initially for ensuring safety, also assists in accurate localisation

2. Localisation

Once the sensors have gathered data, the next step is to determine the robot's exact location within the environment. This process is known as localisation, and it begins with mapping through a technique known as Simultaneous Localisation and Mapping (SLAM).

SLAM: Building a Map to Get Around

SLAM creates a contour map or grid-based map of the robot’s environment. This map resembles a table-like structure where each cell represents a small portion of the space. Each cell is marked as either occupied (e.g., by a wall or pillar) or free (when the laser detects nothing).

During operation, the robot continuously generates a "snapshot" of its surroundings using the laser scanner. It then matches this snapshot with the pre-generated SLAM map using nonlinear optimisation techniques. This matching process estimates where the robot is most likely positioned in the map.
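
The matching idea can be illustrated with a deliberately tiny sketch. The real system uses nonlinear optimisation over a SLAM map; the brute-force grid search and toy map below are assumptions made only to show how candidate poses can be scored against occupied cells:

```python
# Toy scan-to-map matching: score candidate position offsets by how many
# laser endpoints land on occupied grid cells, and keep the best offset.
# The map, scan, and search window are illustrative assumptions.
OCCUPIED = {(5, y) for y in range(10)}          # a wall of cells at x = 5

def match_score(scan_points, dx, dy):
    """Count scan endpoints that coincide with occupied map cells
    when the robot's pose guess is shifted by (dx, dy)."""
    return sum((px + dx, py + dy) in OCCUPIED for px, py in scan_points)

# Scan as seen from a robot whose true position is offset by (2, 0)
# from its current guess.
scan = [(3, y) for y in range(10)]

best = max(
    ((dx, dy) for dx in range(-3, 4) for dy in range(-3, 4)),
    key=lambda c: match_score(scan, *c),
)
```

Here the search recovers the (2, 0) offset because only that shift places every scan point on the wall; the production system reaches the same kind of answer far more efficiently via optimisation.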

By combining this absolute position estimate with the data from relative sensors (IMU and wheel speed), the arculee can accurately determine its current location and orientation within the warehouse. Dennis explains:

"This fusion of relative and absolute sensing, anchored by the SLAM map, ensures robust localisation even over long distances without relying on QR codes or physical landmarks."
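
The fusion step can be sketched as a constant-gain correction (a simplified stand-in for the actual lightweight state estimation, whose details are not published here): each absolute scan-match fix pulls the drifting odometry estimate back toward the external reference.

```python
# Minimal fusion sketch (illustrative only): blend a drift-prone relative
# estimate with an absolute fix using a constant gain. The gain value is
# an assumption for the example.
GAIN = 0.8   # trust placed in the absolute fix

def fuse(odometry_estimate, absolute_fix, gain=GAIN):
    """Pull the relative estimate toward the absolute reference."""
    return tuple(o + gain * (a - o) for o, a in zip(odometry_estimate, absolute_fix))

# Odometry has drifted 1 m in x; the scan match says we are at the origin.
fused = fuse((1.0, 0.0), (0.0, 0.0))
```

Real estimators weight each correction by sensor uncertainty rather than a fixed gain, but the principle of anchoring relative estimates to absolute fixes is the same.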

SLAM map, along with the relative and absolute sensing, ensures robust localisation in warehouses

How Do arculees Solve the Accuracy Challenge?

High-accuracy scenarios, such as entering a pallet handover station or picking up a table, are crucial for arculee’s performance. Such use cases pose a unique technical challenge. To handle these, arculee switches to a specialised mode of localisation that no longer references the global map, but instead focuses solely on the object in front of it.

“Think of it like parking a car: once you see the parking space, your only goal is to drive into it precisely; everything else in the environment becomes irrelevant,” explains Dennis. “Similarly, arculee approaches the target, detects it using a robust template-matching algorithm, and then begins precise positioning relative to that object.”

The arculees use a targeted, fine-grained contour map for their accurate and reliable positioning

At this point, arculee uses a high-resolution contour map with accuracy in the millimetre range to match incoming laser scans and guide movement. This targeted, fine-grained scan matching allows the robot to move in and out of stations reliably and accurately.
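
The idea of aligning a known station contour with incoming scans can be sketched as a brute-force 1D search (the template points, millimetre values, and search window below are illustrative assumptions, not the production template-matching algorithm):

```python
# Toy template matching: find the offset that best aligns a known station
# contour (the template) with the measured scan points, by minimising the
# distance from each shifted template point to its nearest scan point.
TEMPLATE = [0.0, 10.0, 20.0]            # station feature positions [mm], assumed

def alignment_error(scan, offset):
    """Sum of distances from each shifted template point to the nearest scan point."""
    return sum(min(abs(t + offset - s) for s in scan) for t in TEMPLATE)

scan = [3.0, 13.0, 23.0]                # station seen 3 mm off-centre
best_offset = min(range(-10, 11), key=lambda o: alignment_error(scan, o))
```

The recovered offset tells the robot how far it must still move to be positioned precisely relative to the station; the real system does this in 2D at millimetre resolution.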

Whether it’s an in-house-designed station or a customer-provided object, arculee can quickly model the geometry and begin operating with precision, without any need for QR code markers. This flexibility makes it highly adaptable in diverse warehouse environments.

Fast + Reliable = Efficient

One noteworthy feature of arculee’s navigation system is its ability to provide fast and accurate absolute updates of the robot’s position, all while running on a small, integrated, low-power compute platform.

Dennis highlights that this efficiency comes from a smart architecture that balances cutting-edge precision with lightweight performance:

"We achieve this (efficiency) because while we use state-of-the-art non-linear optimisation techniques to find the best match of the laser scan with the map, on the backend, we rely on a lightweight state estimation that is very fast compared to some of the frameworks that are popular within the industry."

Dennis, in discussion with Eugenio

Making a Difference

Free navigation is more than a clever trick; it’s a crucial step in how AMRs like arculees move, adapt, and scale. By using what’s already present (the robot’s onboard sensors and the natural contours of the environment), arculus eliminates the need for costly infrastructure while enhancing precision and safety.

The result? Flexible automation that stays accurate without constant maintenance.

As Dennis puts it,

"We’re using the tools we already have on the robot in smarter ways. That’s what makes the difference."


Learn more in the video below!

May 22, 2025

Embedded Software in arculees: Driving Communication & Control

An Autonomous Mobile Robot (AMR) has various components that work together to ensure smooth and efficient operation. But for these components to communicate and function together seamlessly, the robot relies on something crucial: an embedded system. In this blog post, Kayas Ahmed, Embedded Software Engineer at arculus, explains the essential functions and purpose of the embedded software in AMRs.

What is Embedded Software?

Embedded software is purpose-built code that interfaces directly with a robot’s hardware components, like sensors and actuators. Unlike general-purpose software that can handle a variety of tasks, embedded software is designed to perform one specific function with a high degree of precision and reliability. Its job is to ensure that the robot’s hardware behaves exactly as expected, every time.

Kayas Ahmed, Embedded Software Engineer at arculus, explains the role of embedded systems in arculees (our AMRs), “The embedded software communicates on interfaces, on all the components, all the sensors, on the Printed Circuit Board (PCB) in the robot. In the system hierarchy, embedded comes right after the robot's electronics, making it alive. For example, a visible function of the embedded is to control features like LED blinking and manage motor control.”

In the arculee tech stack, embedded software comes right after the electronics, working closely with it

How it All Works

The embedded software ensures a predictable response time and executes its specific function by ensuring that all the different components, such as sensors, motors, and processors, communicate effectively. That is where communication protocols come into play.

Communication Protocols

Serial communication protocols are a set of rules that define how devices transmit data to each other. These protocols enable the embedded software to send and receive data from the various components of the robot in real-time.

The arculee uses different serial communication protocols to exchange data between the sensors, motors, and the Linux module, which acts as the main processor running on the PCB. Kayas mentions the following protocols:

  • Universal Asynchronous Receiver/Transmitter (UART): This protocol is for communicating with the Linux module, which serves as the central processing unit of the robot. It’s lightweight, fast, and ideal for simple, direct communication.
  • Inter-Integrated Circuit (I2C): This is a synchronous protocol for connecting low-speed devices, such as sensors.
  • Serial Peripheral Interface (SPI): It is a synchronous serial communication protocol for transferring data between sensors, microcontrollers, and peripherals.
  • Controller Area Network (CAN): Originally developed for the automotive industry, CAN is ideal for robust, real-time communication in noisy environments. Its functions include coordinating communication between various control units, such as those managing motor functions.
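
To make the role of such protocols concrete, here is a generic framing sketch in Python (the start byte, length field, and additive checksum are illustrative assumptions, not the arculee wire format): wrapping a payload this way lets the receiver detect corrupted frames.

```python
# Illustrative serial framing: start byte + length + payload + checksum.
# All constants are assumptions for the example, not a real protocol spec.
START = 0xAA

def encode_frame(payload: bytes) -> bytes:
    """Wrap a payload in a frame with a simple additive checksum."""
    checksum = sum(payload) & 0xFF
    return bytes([START, len(payload)]) + payload + bytes([checksum])

def decode_frame(frame: bytes) -> bytes:
    """Validate framing and checksum, returning the payload."""
    if frame[0] != START:
        raise ValueError("missing start byte")
    length = frame[1]
    payload, checksum = frame[2:2 + length], frame[2 + length]
    if (sum(payload) & 0xFF) != checksum:
        raise ValueError("checksum mismatch")
    return payload

roundtrip = decode_frame(encode_frame(b"\x01\x02\x03"))
```

Protocols like CAN build far stronger integrity checks (CRCs, arbitration, acknowledgement) on top of this basic idea.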

Each of these protocols has its own strengths, and the embedded system knows which one to use based on the component it is interfacing with. This layered, modular approach enables the arculee to maintain smooth data flow between all its critical systems.

Managing Complex Processes

The arculee performs complex processes, which require efficient movement of motors and the processing of data from multiple sensors. So, how does embedded software handle it all?

“To handle complex tasks, we split the problem into smaller tasks,” explains Kayas. “We use an open-source, real-time operating system called FreeRTOS, which allows us to run multiple tasks concurrently on a single microcontroller. For example, in managing LED, the task is simplified and divided into: 1. requesting a new LED pattern and 2. applying the pattern in real-time. This division of labour ensures that each part of the system operates smoothly without interference.”
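
The task split Kayas describes can be mimicked in Python with threads and a queue (an analogy only; the firmware runs C tasks under FreeRTOS): one task requests a new pattern, a second applies it, and neither blocks the other.

```python
# Python analogy of the FreeRTOS LED task split: a request task hands a
# new pattern to an apply task via a thread-safe queue. Names and the
# pattern string are illustrative assumptions.
import queue
import threading

pattern_requests: "queue.Queue[str]" = queue.Queue()
applied = []

def request_task():
    pattern_requests.put("blink_fast")      # task 1: request a new LED pattern

def apply_task():
    pattern = pattern_requests.get(timeout=1.0)
    applied.append(pattern)                 # task 2: apply whatever arrives

t1 = threading.Thread(target=request_task)
t2 = threading.Thread(target=apply_task)
t2.start(); t1.start()
t1.join(); t2.join()
```

In FreeRTOS the same decoupling is achieved with tasks and queues on a single microcontroller, with the scheduler guaranteeing each task its turn.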

The embedded software manages the LED lights on the arculees 

In the arculee, this distribution of tasks across multiple microcontrollers ensures that the system is more efficient and can handle the intricacies of motor control, sensor data processing, and other essential tasks without interference.

The Time Challenge

One of the biggest challenges in embedded software is meeting strict timing requirements. Kayas believes embedded systems are all about predictability: tasks, especially motor control algorithms, must execute within a specific time frame. A motor control algorithm has very strict timing needs, and completing it within the required window is crucial for the system’s operation. That is why the embedded software must work flawlessly to guarantee a reliable AMR with predictable response times.
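
The requirement can be made concrete with a small deadline-check sketch (Python for illustration only; the 10 ms budget is an assumed figure, and the firmware itself runs under FreeRTOS, not Python):

```python
# Deadline-checked control step: measure how long one iteration takes and
# flag any overrun of the assumed time budget.
import time

DEADLINE_S = 0.010                          # assumed 10 ms control period

def run_step(step_fn):
    """Run one control step and report (elapsed seconds, met deadline?)."""
    start = time.monotonic()
    step_fn()
    elapsed = time.monotonic() - start
    return elapsed, elapsed <= DEADLINE_S

elapsed, on_time = run_step(lambda: None)   # a trivial step finishes in budget
```

A real-time operating system goes further: rather than merely detecting overruns, it schedules tasks so that deadlines are met by construction.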

Kayas busy working in the testing area of the arculus office

Adaptation: The Key to a Successful Embedded System

Just like any tech system, AMRs also evolve. New components are added, and features continue to improve. Therefore, the embedded software must also adapt and evolve. That’s why the arculees support over-the-air (OTA) updates, allowing engineers to upgrade and enhance the software remotely as needed.

Kayas explains, “Our embedded software supports OTA updates via the Linux module. It means that when the Linux module is updated, the software for the microcontrollers is also updated, allowing us to integrate new components or functionalities seamlessly.”

This adaptability ensures that the AMR can evolve, supporting new components or features without needing a complete revamp of the entire system. An up-to-date embedded system makes it easier to ensure the reliable and autonomous function of the robots.

Bottom Line

In short, embedded software is crucial for bringing arculees to life. It ensures the robots operate smoothly, guarantees effective communication between components, and adapts continuously with the addition of new features.


Learn more in the video below!

March 25, 2025

Inside the arculee Brain: How Does a Robot Think?

At the core of every arculee, our Autonomous Mobile Robot (AMR), is its brain. This powerful software system processes sensor data, executes commands, and guides the robot through complex environments. Software Engineer Thomas Fuhrmann explains how it works and why it’s essential to the robot’s performance.

What is the arculee Brain?

The brain is the central software of an arculee that manages all the tasks of the robot, including navigation, decision-making, and command execution. It receives orders from an outside source, like a fleet manager, processes its surroundings through sensor data, decides movements, and ensures overall efficiency in AMR operations.

Thomas Fuhrmann, Software Engineer at arculus, uses the analogy of the human brain to clarify its functions:

"The arculee brain functions similarly to a human brain. If you imagine an arculee as a body, the chassis is like the physical form; the electronics and cable harnesses are the organs and blood vessels. Just like in humans, the brain controls everything for the robot."

Processing the Environment

Thomas highlights three main steps the arculee brain follows when processing its surroundings:

  1. Sensing: The arculee uses different sensors to gather information about its environment, similar to how humans rely on sight or hearing. For example, sensors like LiDAR generate detailed point clouds that map the robot’s surroundings.
  2. Interpreting: The brain processes the sensor data to interpret the real world. For instance, it can locate itself and recognise objects and obstacles that might be present in the robot’s path.
  3. Implementing: Based on the interpretation, the robot makes timely decisions, such as stopping or altering its route to avoid collision with an object in the way.
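
The three steps above can be sketched as a minimal loop (the function names and the 0.5 m obstacle threshold are illustrative assumptions, not the arculee brain itself):

```python
# Sense -> interpret -> implement, as a minimal control cycle.
def sense():
    """Stand-in for LiDAR processing: report distance to the nearest obstacle."""
    return {"obstacle_distance_m": 0.4}

def interpret(observation):
    """Decide whether the observation means an obstacle is in the path."""
    return observation["obstacle_distance_m"] < 0.5

def implement(obstacle_ahead):
    """Turn the interpretation into a motion command."""
    return "stop" if obstacle_ahead else "continue"

command = implement(interpret(sense()))
```

The real brain runs this cycle continuously, with far richer interpretation (localisation, object recognition) feeding far richer decisions (replanning, speed adjustment).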
LiDAR sensors on one of the arculees

The Brain’s Tech Stack

Developers at arculus build the arculee brain entirely in-house and avoid relying on external frameworks to maintain full control over performance and system behaviour. Thomas explains that they use C++ for the core system because it provides the speed and low-level access required for efficient robotics. For user interface components, they use Python, which allows for faster development and more efficient testing.

To improve modularity and efficiency, the developers plan to migrate the stack to ROS2, a well-known robotics framework that enables clearer system architecture and easier integration of new components. We will explore this migration in a future blog post.

Thomas, working on the arculee brain code

Challenges and Solutions

Developing a sophisticated piece of technology like the arculee brain comes with its own set of challenges. Thomas points out the following three:

  1. Code Maintenance: The brain stack, including the code base, has evolved over the years, becoming increasingly complex. As features and functionalities expanded, maintaining the code became more challenging due to increased interdependencies within the program and legacy structures. The team tackles this by constantly checking and conducting frequent debugging sessions.
  2. Managing Complexity: Our AMRs navigate autonomously in dynamic environments. This requires a solid system that can handle a wide range of possible scenarios and corner cases for maximum efficiency, safety, and optimal performance.
  3. Testing and Validation: The office testing area allows developers to test the arculees in a controlled environment to ensure quality before deployment. However, the robots don’t always behave as expected, especially during the early stages of development. To improve performance and stability, the team runs countless tests and refines the software based on real-world behaviour.
Testing real-world performance and function is crucial to developing the arculee brain

Sharper Brain, Stronger arculees

The arculee brain handles all the core functions of the robot and can thus make or break it. Thomas sees a well-designed brain as key to precision, reliability, and adaptability. As development continues, improvements in navigation and decision-making will help arculees respond more effectively to increasingly complex environments. A sharp brain doesn’t just keep an AMR moving—it sets the foundation for smarter automation.


Learn more in the video below!

March 19, 2025

This Is How Prototyping Makes Intralogistics Solutions Better

Prototyping is essential in the development of software for intralogistics, allowing teams to test ideas, refine solutions, and enhance user experience before full implementation. In this article from our Product Management series, Georg Held, Product Manager at arculus, shares how we use prototyping to drive innovation, optimise software performance, and improve efficiency.

Prototyping in Intralogistics

Intralogistics systems, supporting warehousing and production, require a well-integrated hardware-software solution. While many monolithic systems perform well, they often hinder innovation. Exploring, testing, and refining new ideas before implementation is challenging. To address this, arculus leverages prototyping to gain insights before adopting new solutions. This allows us to collect early feedback, minimise risk, and improve efficiency to maximise product potential.

Let’s explore how arculus uses prototyping to innovate in planning, installing and managing a fleet of Autonomous Mobile Robots (AMRs).

What is Hardware-Software Prototyping?

Today's prototyping culture goes back to Eli Whitney's promotion of "interchangeable parts". Around 1800, he advocated standardising identical components for easy replacement. This enabled mass production, simplified repairs, and improved integration and testing of new parts. It also allowed cost-effective iterations to improve products, whether still in development or already on the market, forming the foundation for modern prototyping.

Nearly two centuries later, in the 1990s, the term hardware-software co-design emerged as embedded systems gained recognition. Today, arculus builds on this concept with a strong prototyping culture. Hardware-software prototyping, as the name suggests, combines both hardware and software, allowing users to test products before full production. This approach enables effective testing and fosters innovation, efficiency, and clarity. There are many ways to engage users:

  1. Physical models, even non-functional ones, are used to represent form and design;
  2. Mock-ups that demonstrate the visual design of a concept;
  3. Simulators that replicate system behaviours;
  4. Clickable prototypes that allow users to interact with the system.

Numerous benefits, such as the early identification of user experience problems, justify the time and effort required to create and test prototypes. Feedback from these tests is critical for future product development iterations, and establishing a learning culture through prototype experimentation is essential. As discussed in the previous blog post, prototypes are a key part of product discovery at arculus.

The Strength of Prototyping at arculus

The high complexity of intralogistics systems requires a strong prototyping culture. At arculus, we rely on it not only for hardware but also to guide product development. By incorporating prototypes early, we gain valuable insights from the start, enabling continuous evaluation and adjustment throughout our discovery process. This agility ensures we integrate feedback early on, optimising resources while refining the user experience. Crucially, recognising when to stop prototyping, once a hypothesis has been validated, keeps product delivery efficient and on track.

From Wireframes to Clickable Prototypes

After agreeing on the user journey, we immediately moved to wireframes of the desired functionality, adding brief descriptions for context. This created a collection of screens covering key parts of the user interface, while revealing gaps to address. Using a collaborative tool like Figma, we efficiently captured extensive feedback in real time, enabling continuous iteration.

Wireframes evolving to a clickable prototype using Figma

To better reflect the user experience, we compiled a clickable prototype by adding interactivity to the wireframes. This helped overcome challenges from our legacy software toolchain, as the previous solution lingered in people's minds. The visual, and more importantly the interactive approach, encouraged users to explore new solutions, helping them quickly adapt to move beyond past systems and actively engage. In the end, the validated prototype answered many of the questions that our development teams had, as they built the real application.

Understanding System Performance

Later on, as an important part of the user journey for the arculus Fleet Manager, we simulated a fleet of mobile robots to see the heart of the software in action without connecting to real vehicles. This framework helped us explore possible transport scenarios for the first version of the application. Using a challenging transport matrix, we configured a large-scale simulation scenario—another way of prototyping that allowed us to understand performance on all levels of the software at an early stage in the product discovery and development phase.

Using Mock-Ups to Test and Understand Software Performance

In this phase we validated the required infrastructure, architecture, and algorithms for the arculus Fleet Manager. We set up dozens of driveways, deployed 100 mock-up vehicles, and established 1000 handover stations. The simulation ran smoothly for hours, and the new software toolchain achieved the goal of having a stable foundation that optimised material flow in intralogistics for our customers.
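A scenario like the one above can be imitated in miniature. The sketch below is purely illustrative: the greedy dispatcher, the use of station indices as a stand-in for driveway distances, and all numbers are our own inventions, not the arculus simulation framework.

```python
import random

# A toy stand-in for the large-scale scenario described above. Everything
# here (the greedy dispatcher, station indices as distances, all numbers)
# is our own illustration, not the arculus simulation framework.

random.seed(42)

NUM_VEHICLES = 100
NUM_STATIONS = 1000

# Transport matrix as a sparse dict: (from_station, to_station) -> pending jobs.
transport_matrix = {}
for _ in range(2000):
    a, b = random.sample(range(NUM_STATIONS), 2)
    transport_matrix[(a, b)] = transport_matrix.get((a, b), 0) + 1

# Each mock vehicle only tracks its current station and completed transports.
vehicles = [{"station": random.randrange(NUM_STATIONS), "done": 0}
            for _ in range(NUM_VEHICLES)]

def tick(jobs, fleet):
    """One simulation step: give every vehicle the pending transport whose
    pickup station is nearest (station-index distance as a crude proxy)."""
    for v in fleet:
        if not jobs:
            return
        (a, b), _ = min(jobs.items(), key=lambda kv: abs(kv[0][0] - v["station"]))
        jobs[(a, b)] -= 1
        if jobs[(a, b)] == 0:
            del jobs[(a, b)]
        v["station"] = b  # the vehicle ends up at the drop-off station
        v["done"] += 1

for _ in range(10):  # run ten simulation steps
    tick(transport_matrix, vehicles)

print(sum(v["done"] for v in vehicles), "transports completed")
```

Even a toy like this surfaces the questions that matter at scale: how assignment cost grows with the number of pending transports, and where vehicles accumulate over time.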

The prototype proved that the core components of traffic management were suitable for large-scale deployment. However, for the MVP (Minimum Viable Product) we opted for a more robust, purpose-built simulator developed from scratch.

In one of our future blog posts, we will provide insights into the simulation capabilities of the arculus Fleet Manager.

Connecting the Dots: From Prototype to Physical Movement

The next step was connecting to real vehicles. Since the software on the robots already had defined interfaces and communication protocols, we could quickly perform the first driving actions. Naturally we recognised the need for a more robust interface between the arculus Fleet Manager and the vehicles for real projects, but taking a step-by-step approach reduced the complexity of software-hardware interaction early on.

Instead of defining and implementing the entire interface at once, we iteratively developed the most important functionalities for validation. One goal was to confirm that the new arculus Fleet Manager, with its high user experience standards, could guarantee safety and accuracy in real-world scenarios. For instance, through prototyping we tested the hypothesis that users wouldn’t need to enter error-prone coordinates, but could rely on a visual interface for the alignment of all layout objects. Benchmarking vehicle positions between the layout and reality took several iterations, but each step confirmed we were on the right track.
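The benchmarking step above boils down to comparing two sets of coordinates. As a purely illustrative sketch (station names and all numbers are invented, not arculus data), the deviation per layout object can be computed like this:

```python
import math

# Hypothetical numbers for illustration: where a station sits according
# to the layout vs. where it was actually measured on the floor (metres).
# Neither the names nor the values come from arculus.
layout_positions   = {"station_1": (2.00, 5.00), "station_2": (7.50, 1.25)}
measured_positions = {"station_1": (2.03, 4.96), "station_2": (7.41, 1.30)}

def position_errors(layout, measured):
    """Euclidean deviation between layout and measured position, per station."""
    return {name: math.dist(layout[name], measured[name]) for name in layout}

errors = position_errors(layout_positions, measured_positions)
worst = max(errors.values())
print(f"worst deviation: {worst * 100:.1f} cm")
```

In practice, one would iterate such measurements until the worst deviation falls below the accuracy the project requires.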

Hardware-Software Codesign and Prototyping in the arculus Test Area

The prototypical approach also helped us gain the trust of critical users, especially the field engineering teams responsible for commissioning projects at customer sites. Beyond validating the initial goals, prototyping also revealed other crucial improvements to the user journey. To capture those during typical testing sessions, we created systematic scripts with expected results, leading to clear actions for related teams. And half a day later the backlog already reflected new items for upcoming sprints. Rigorous prioritisation enabled teams to stay focused, and subsequent test sessions confirmed that development was moving in the right direction.

In an upcoming blog post, we will share more insights on how the arculus Fleet Manager’s user interface supports rapid project deployment.

Our Recommendation

There is no one-size-fits-all approach to prototyping. So here is our recommendation for getting the most out of it:

  1. Select and agree on the component or feature to prototype. Often prototypes are not for the most technically complex topics, but for those requiring the right user experience. Focus on the hypothesis and goals you want to achieve and strictly adhere to them to complete concepts quickly.
  2. Prioritise between low-fidelity and high-fidelity. Although prototypes support more efficient development, consider how much effort and complexity to invest. Low-fidelity prototypes, even on paper, are often enough in the early stages of product development. Remember: the priority is to validate the concept, not how you build the prototype.
  3. Discuss and choose the right prototyping tools. The most suitable tool depends on the type of feedback you need. Over the years we've learned not to limit ourselves to a few tools, but to embrace cutting-edge solutions, and to stay open to trying new ones. Also, don't feel obligated to reuse the same prototype for subsequent development stages.

Finally, with the advent of generative Artificial Intelligence (AI), it has never been easier to start prototyping. AI not only aids in creating prototypes but also in quickly analysing and evaluating feedback. As this technology evolves, we continue to explore innovative ways to prototype, accelerating our product development in the future.

March 5, 2025

How User Journey Mapping Helps Us Deliver Real Product Value

User Journey Mapping is a powerful tool for defining product scope and ensuring a seamless user experience. In this post we explore how it helped refine the arculus Fleet Manager, from identifying user needs to structuring functionality. By leveraging insights from previous versions, market feedback, and internal collaboration, we built a strong foundation for future development. This article is part of a series on creating value through strong product management—written by Georg Held, Product Manager in the Software Team at arculus.

Why We Turned to a User Journey Map

When the idea of revamping the arculus Fleet Manager came up, we quickly realised that a strong focus on all aspects of product development was essential to building a valuable product. In this context, finding the right product value was a key point of the debate. Additionally, we realised that there isn’t a single typical user of a Fleet Management System for Autonomous Mobile Robots (AMRs). Conversations with insiders of our legacy software revealed a wide range of expectations on top of the technical challenges of defining the product’s underlying architecture.

As we described in our previous post about product discovery, thoroughly analysing and defining the product's scope, users, and relevant operational workflows would be crucial before starting development. However, we also encountered many opinions on what to tackle first. Therefore, setting focus and priorities without sacrificing oversight became essential. Let’s dive into how user journey mapping helped us rapidly advance with defining the scope and prioritising it.

What is a User Journey Map?

The goal of a user journey map is to gain a complete understanding of how users interact with the product throughout its defined stages. More precisely, it provides valuable insights into users' needs, expectations, and pain points—all essential for understanding users’ experiences and analysing operational workflows.

A user journey map is typically represented in a graphical map that provides an overview and supports collaboration between stakeholders

Visualising the entire user flow reveals gaps in the user experience and offers opportunities for innovation. It also helps us identify the development teams required for specific steps in the process, which later informs how ownership is distributed across product development. User Journey Mapping is a prerequisite for identifying and designing a seamless user experience for a product, or, as in our case, an entire system.

To learn more about User Journey Maps, visit the Yale University Quick Definition website.

The User Journey Map for the arculus Fleet Manager

To get started, we broke down the project process in which the software is typically used:

  • End-of-line testing within the arculus AMR production
  • Project planning in pre-sales
  • Specification phase after customer quote acceptance
  • System and field commissioning
  • Operations, service & support

In each of these phases we structured the following:

  • User
  • Scope
  • Objectives
  • Interaction with the system

This quickly filled a large whiteboard, but also revealed the need for improvements and gaps in user details, especially in how they interact with the system. Through relentless user interviews, the full picture emerged. Below is a simplified version of our initial user journey map:

Simplified version of our initial user journey map: phases from pre-sales to after-sales, key user roles (Technical Sales, Field Engineers, Operators, Support), their tasks, current processes, and planned improvements

Since we already had great insights from previous versions of the arculus Fleet Manager and the market, we could quickly map the required functionality and the existing pain points. These shaped our first priorities, which we then refined further. Wireframes and prototypes helped validate hypotheses, leading to better concepts for users to achieve their goals at each step of the process. This resulted in many cards highlighting the specific functionality required.

As a last step, we further broke down the functionality by assigning it to the most relevant product development teams. This added structure and clarified ownership.

Learn more about the development of the arculus Fleet Manager here.

Conclusions and What’s Next?

In general, the user journey map is a great tool for understanding users, their goals, and operational tasks. It helped us define the product scope and, more importantly, prioritise the key things for an MVP (Minimum Viable Product) of the system. It also guided us in achieving a great user experience.

Finally, the tool itself—we used Miro—allowed us to collaborate efficiently. Based on the feedback we received, we highly recommend user journey mapping for the intralogistics industry. Now and in the future, this framework will remain a living tool for developing new products. It is also the basis for breaking down product ideas by applying user story mapping in each team's backlog.

In the next blog post we will explain how a well-aligned user journey helps when getting started with prototyping, specifically by extracting wireframes to collect first user feedback.

This post is a part of our Product Management Best Practices series. Stay tuned for more articles coming soon!

February 25, 2025

A Tale of Two Technologies: Smarter Robotics with NVIDIA’s Orin NX and Debian

Autonomous Mobile Robots (AMRs), like our arculees, rely on several advanced technologies. Two key components are NVIDIA’s Orin NX and Debian, a Linux-based operating system known for its stability and flexibility. Together, the two provide prime computing power and reliability to run our robots smoothly and safely. Hassan Saeed, DevOps Engineer at arculus, explains how this combination supports our arculees and why it’s the ideal setup for our development.

Better Robotics with NVIDIA’s Orin NX

The arculees, our Autonomous Mobile Robots (AMRs), improve warehouse productivity, especially through their smooth navigation and modern safety features. However, to navigate safely and efficiently, our robots must process their surroundings in real time. This includes identifying obstacles, recognising markers, and making fast decisions. This requires strong hardware capable of handling complex data on the go.

This is where NVIDIA’s Orin NX, a compact System on Chip (SoC), comes in. Hassan Saeed, our DevOps Engineer, describes it as “a powerful board with sophisticated data processing capacity, along with advanced CPU and GPU capabilities.” By rapidly computing information from the robot’s software and input devices, Orin NX improves obstacle avoidance and marker recognition, allowing arculees to navigate seamlessly. Its GPU further optimises AI-driven tasks through efficient parallel processing, making complex computations faster and more reliable.

NVIDIA’s Orin NX Board

Debian - the Classic Operating System

Debian is a free, Linux-based, open-source system known for its stability, security and flexibility. These features make it an ideal choice for robotics, where a dependable operating system adaptable to various hardware configurations is essential for smooth functioning. Hassan expands on the multiple benefits of this strong OS:

"Debian provides a strong foundation for various software applications and supports a wide range of hardware. For example, you might have used Ubuntu on your computers. It is also a derivative of Debian with a user interface (UI) similar to Windows. Yet, Debian’s popularity stems from its security and robust package management system, making it a preferred choice for many developers and enterprises."

Debian is one of the oldest Linux distributions, offering stability and flexibility to users

Orin NX and Debian: a Powerful Combination

By combining Debian’s stability and flexibility with Orin NX’s real-time processing and energy efficiency, Hassan and his team created a system that ensures smooth navigation, real-time decision-making, and long-term adaptability for arculees. He delves deep into how they made the two technologies work together to achieve an optimal ecosystem at arculus:

Flexibility and Stability: Debian is a natural fit for the Orin NX hardware because it is flexible and stable. Its broad hardware support ensures smooth integration with the Orin NX, while its robust architecture makes the robot’s system more reliable. 

Simplifying Updates and Adaptations: Pairing Debian with Orin NX streamlines the software update process for arculus. Integration with Debian’s package management system makes rolling out updates and security patches efficient. The regular support from Debian and NVIDIA ensures the software remains secure, up-to-date, and adaptable.

A Rich Ecosystem and Strong Community: Debian’s extensive software ecosystem and active community provide a solid foundation for innovation. It supports various robotics applications and ensures continuous development and troubleshooting resources, which are invaluable for development cycles at arculus.

Seamless Integration for Core Functionality: Debian's strong support for robotics software aligns perfectly with Orin NX’s high-performance hardware capabilities. This integration enables arculus to deliver advanced robotics functionalities such as smooth navigation, obstacle avoidance, and precise task execution with reliability and security.

Finally, while this combination of two technologies already greatly serves the arculees, Orin NX offers various AI features that are especially relevant to robotics. Hassan and his team are actively exploring its Machine Learning (ML) aspects, such as predictive analytics, decision-making, data-driven processing, and computer vision. The aim is to utilise these features to further improve our AMRs and software.

Hassan busy working at his station at the arculus office in Munich

Integration challenges and how arculus solved them

Implementing NVIDIA’s Orin NX in the arculus ecosystem meant facing specific challenges. One of the main caveats, in this case, was the hardware incompatibility. Hassan explains, “NVIDIA’s kernel is designed to work with its own hardware, so it failed to recognise arculus’ custom-built board — the Robot Control Unit (RCU).”

The RCU, also known as the arculee’s heart, has multiple peripherals and devices that control the robot’s movements. So, to fully reap the benefits of Orin NX without compromising the functionality of the custom-built board, the team had to make some adjustments.

“We configured all the devices according to the kernel provided by NVIDIA and also tweaked its device tree to ensure that the RCU’s components were fully supported,” continues Hassan, “It took some adjustments, but in the end, we integrated Orin NX smoothly and made the system work perfectly.”

The arculus RCU and NVIDIA’s Orin NX together at work

Three Cheers to arculus, Debian, and Orin NX

Choosing Debian and NVIDIA’s Orin NX for product development wasn’t just about compatibility. It was a strategic decision based on performance and practicality. Hassan Saeed, DevOps Engineer, explains what motivated the team:

“Debian’s popularity and extensive package repository made it an obvious choice for us. Its flexibility in supporting multiple programming languages, including C and Python, aligns perfectly with arculus’ software and development needs.”

At the same time, Orin NX stood out for its powerful hardware features, which are necessary for robotics. With external memory options, NVMe device integration, high CPU and RAM capacity, and advanced GPU capabilities, it provided the performance arculus needed. Hassan adds:

“Ultimately, the compatibility between Debian and Orin NX helped us optimise both software and hardware, ensuring efficiency in our operations at arculus.”


Learn more in the video below!

February 17, 2025

This Is How arculus Scales Software With Product Discovery

Product management in intralogistics has long been project-driven, often prioritising execution over long-term scalability. But to build better software products, a shift towards Product Discovery is essential. In this blog post, we explore why this approach matters, the challenges of moving from custom-built solutions to a structured discovery process, and how it has helped us improve our product roadmap and decision-making. This post kicks off a series on creating value through strong product management—written by Georg Held, Product Manager in the Software Team at arculus.


The Shift Toward Product Discovery

In the past, gathering requirements and handing them to engineering was often seen as one of the product manager’s primary responsibilities. While that remains true, Marty Cagan brought more focus to it by introducing the term "discovery" in 2007, describing it as finding out what to build. In fact, he emphasised that finding the right solutions is harder than building them (see The Origin of Product Discovery | Silicon Valley Product Group).

And this is precisely what we at arculus have observed in software: in intralogistics, there is too much focus on execution. In an industry where a few experts design highly complex and customised systems, understanding the real problems of the market is difficult. In the past, project-based implementation addressed specific, individual challenges. Today however, speed, efficiency, and adaptability demand more than just custom solutions. Established product management with a stronger focus on market needs is essential to scale your business, serve a broader range of customers, and keep costs down.

Info: This article kicks off a series of posts on creating value through a strong product management focus. Stay tuned!

What is Product Discovery?

A person standing at a crossroads sign labeled 'How' and 'What,' symbolizing decision-making and strategic choices in product discovery.

Because we draw from many different sources to define product scope, product discovery provides a strong framework for avoiding misguided development. Through continuous discovery, we test ideas against market problems, user needs, and business goals. Cycles of exploration, experimentation, and validation ensure the focus remains on promising concepts that users will appreciate. Therefore, the focus of product discovery is on “What”.

(Image generated by Gemini AI)

In future posts, we will explore these cycles further, for example, by using prototypes to assess user needs. Naturally this process isn’t always straightforward, as it lacks predictability. Regular decision points foster collaboration on how to move forward with an idea, helping to navigate uncertainty.

Ultimately it’s crucial to avoid wasting time on ineffective concepts. That’s why speed plays a key role in the early stages of exploration. Reducing effort on unviable ideas is essential, especially since teams should focus on delivering real value.

This is where product discovery comes in. It helps ensure teams move fast in the right direction and continue to innovate. For more details on addressing risks upfront, solving problems collaboratively, and focusing on outcomes (customer and business problems) vs. outputs (features), see Discovery vs. Documentation | Silicon Valley Product Group.

Product Discovery at arculus

At arculus, we describe product discovery as capturing market problems and finding specific user needs for our system. To capture these problems, we focus on:

  • Industry & market pain points through trend reports, competitive analysis and insider insights;
  • Customer requirements through early opportunity scanning;
  • User experience along the product journey through target user interviews and end-to-end testing;
  • Technical trends and the management of legacy architecture and technology.

All of these contribute as "insights", which lead to what we call a product idea. To develop these further, a team of product and technical experts explores solutions to the problems. Importantly we always keep the end user in mind. This enables a vertical perspective—crucial for the arculus system—which integrates both software and hardware. Too often we’ve seen these treated as entirely separate due to their technical nature, even though true functionality depends on seamless integration across all layers, from user interaction to underlying hardware.

The team creates concepts for an idea and supports them with whiteboard sketches or even clickable prototypes where useful. We then plan product increments to determine overall priorities. Based on this, we either opt out or continue planning and development. At this stage, our product departments have found what Marty Cagan calls the "right product”. In addition, there is a high level of transparency and collaboration among all stakeholders, a prerequisite for good, high-performing teams.

The Product Discovery and Delivery processes

We use Jira Product Discovery to facilitate this process, leading to the arculus product roadmap. Although any other tool could also work, this one provides the flexibility needed to orchestrate all items efficiently and transparently. Nevertheless, the arculus product team has found that rigorous discipline is more important than perfect tools.

Lastly, once a product discovery is done, the necessary product increment (PI) planning will bring the idea to the Development teams. If you want to know more about the delivery at arculus (“How”), read our previous post about fast development here.

Dealing with Project Requirements

As mentioned earlier, product teams dealing with many intralogistics projects often encounter various individual customer requests. While this is usually the norm, timing these requests is critical for any product manager. Additionally, handling requirements on a deal-by-deal basis leads to uncoordinated concepts.

As arculus grew, this became a real challenge and prevented us from consistently delivering a marketable solution. In fact our teams were delivering projects but had little time to follow a sustainable product roadmap. As a result the product architecture and business case suffered, leading us to plan a new version of arculus Fleet Manager (read more about it here).

This was not only a technical decision but also a shift towards excellence in what we build. By applying product discovery as described above, we broke new ground in addressing individual needs. Some of the best practices we implemented are outlined below:

  1. Collaboration with the sales department(s): This allows the product team to understand potential product gaps in the pre-sales phase before an offer is made and provides enough lead time for thorough product discovery. While we may choose not to pursue some opportunities or fail to win them, the effort is worthwhile because it improves our overall understanding of the market.
  2. Strict focus on the problem: This applies especially to customer requirements typically described as "one-liners" in a project's requirements catalogue. Among other things, this means evangelising the relevant departments so that customer interactions start with the problem rather than the solution. As Simon Sinek’s Golden Circle suggests, arculus teams relentlessly ask “Why” to identify problems.
  3. Support project teams with product documentation and expertise: This helps the team understand the portfolio upfront and during specification negotiations with customers. This is also a potent tool for steering discussions toward existing features. We regularly learn that these features are both sufficient and appreciated.
The commercial team at arculus watching a presentation about new product features
  4. Housekeeping in product discovery to identify risks early:
       • Process: We created a product discovery process to keep track of the product discovery backlog (currently over 150 ideas). This allows us to see the status of an idea throughout its evolution: from the initial stage (parking lot) through discovery, planning, and development. The development status is captured via links to the implementation item in each team’s backlog.
       • Fields: Each idea has fields to identify affected teams, customers, and even relevant business goals. It also captures time factors, such as when ideas should reach a product release and other milestones, ensuring that the roadmap reflects these priorities.
       • Template: Using the arculus product discovery template, we have structured the way we capture ideas to make them easily understandable for anyone.

Responding to all product ideas is typically challenging. That is why the recommendations above, along with strategic direction, e.g., in focus markets, help us make the right decisions. As a result, we constantly update the product roadmap. This also gives all product management stakeholders confidence in what the team delivers. In the end we are positive that we made the right decisions to make our customers happy.

Conclusions and What is Next?

By applying a thorough product discovery approach to product management, arculus has already taken huge steps toward building better software solutions. We gained coordination, structure, speed, and oversight in creating the roadmap. This led to making more informed decisions and as a result, a better product-market fit.

As product discovery is a constant process, it is important to keep evolving and improving it. Among other factors, increasing the product value remains a key focus. Hence, we constantly evaluate performance using available data points and feedback loops to keep prioritising the right product ideas, ultimately creating value in the market. This will bring arculus' Product Management team closer to the famous “The Ultimate Guide of a Product Manager Day-by-Day Activity” by Lucas Balbino.

Stay tuned for the next post of our Product Management Best Practices series!

January 20, 2025

5 Intralogistics Trends to Look Out For in 2025

Labour shortages, high productivity demands, and ever-changing markets are reshaping the supply chain industry. To stay competitive, companies must understand and adopt the technologies and strategies defining warehousing in 2025. This blog post highlights the top five intralogistics trends to help you address industry challenges and transform your business this year.

Technological advancements, market analysis, and industry research define the best practices to improve intralogistics. From streamlining processes to full-scale automation, warehouse solutions have improved tremendously in the last few years. Yet, shifting markets, workforce shortages, and rising productivity demands leave room for innovation. As technology evolves, new trends emerge to address these challenges. Let’s explore the key intralogistics trends for 2025 to help your business stay ahead this year.

Top 5 Trends in Intralogistics for 2025

1. Warehouse Automation

Labour shortages continue to challenge the intralogistics industry while rising demand for faster deliveries pressures businesses to boost productivity. To address this issue, many companies have turned to warehouse automation, streamlining operations and improving efficiency.

The numbers speak for themselves: the European logistics automation market is projected to grow at a compound annual growth rate (CAGR) of 11.18% from 2024 to 2033. This trend is not just a passing phase — it works and is here to stay!

arculee M, one of our Autonomous Mobile Robots (AMRs), in the testing area of the arculus office

2. Robotics

When it comes to warehouse automation, the most successful trend is the adoption of mobile robots, such as Automated Guided Vehicles (AGVs) and Autonomous Mobile Robots (AMRs). Their popularity stems from their ability to increase supply chain productivity by minimising errors, reducing costs, performing repetitive tasks quickly, and increasing safety.

With so much to offer, it is no wonder mobile robots will remain in demand. Their market size is thus expected to reach USD 29.86 billion in 2025.

3. Artificial Intelligence (AI)

Artificial intelligence (AI) and machine learning (ML) are no longer just buzzwords; they are transforming intralogistics by enhancing efficiency and accuracy. These technologies enable real-time inventory management, optimise storage solutions, and streamline order fulfilment processes.

For example, AI-driven predictive analytics enable better equipment maintenance, reducing breakdowns and operational costs. Similarly, AI offers accurate demand forecasts, ensuring better resource allocation and lower overall expenditure. It is safe to say that integrating AI and ML in warehouses will remain a key trend in 2025.

4. Internet of Things (IoT) and Real-time Monitoring

Another trend to improve supply chain efficiency is the Internet of Things (IoT). This field involves interconnected devices, software, and other technologies with sensors and processing abilities that connect and communicate over the internet.

IoT enables real-time monitoring of warehouses, allowing for immediate data collection and analysis, streamlining intralogistics, improving management, and enhancing decision-making. Companies should harness this potential to increase efficiency, optimise processes, and meet market demands.

Team arculus learns how easy it is to operate arculees, our AMRs

5. Sustainability

With regulations like the EU Green Deal, the Corporate Sustainability Reporting Directive (CSRD), and the Securities and Exchange Commission’s (SEC) Climate Disclosure Rules, embracing sustainability principles has become essential.

To comply, companies are increasingly adopting green logistics practices, such as utilising electric vehicles. Another key focus is obtaining certifications to accurately report CO₂ emissions, an essential prerequisite for meeting these new standards. These efforts align with the growing consumer demand for eco-friendly products and services. The hope is to make the future green!

Looking Ahead

The intralogistics industry is set to grow, with technology driving faster, more efficient solutions. Innovations in robotics, data analytics, machine learning, AI, and automation are transforming warehouses. Don't let these trends pass you by—integrate them into your supply chain processes to keep your business flourishing in the future.

December 11, 2024

How do arculees Find their Way: AMR Mapping and Navigation Explained

Autonomous Mobile Robots (AMRs) can locate and navigate themselves in an environment—this is how they accomplish their tasks of assembly and transportation. But how do they know where they are and where they need to be? Behind these simple questions lies the science of localisation, mapping, and navigation. This blog post explains how our AMRs, the arculees, locate, map, and navigate to ensure accurate, efficient, and autonomous movement.

Autonomous Mobile Robots (AMRs), like the arculees, must navigate different environments, such as warehouses or other facilities, to move from one place to another and perform a task. The first step towards navigation is mapping: creating a map of the environment. Without a map, there is no frame of reference; we don't know where the robot is or where it needs to go.

How do arculees Generate a Map?

Mobile robots rely on data from their surroundings to create accurate maps for navigation. They need two types of information: their own location and the location of objects around them. The arculees collect this sensor data through two LiDAR (Light Detection and Ranging) scanners, an IMU (inertial measurement unit), and wheel odometry.

LiDAR Scanners: They use laser beams to measure distances to surrounding objects, generating a point cloud that the robot's software uses to build a map.

Inertial Measurement Unit (IMU): The IMU is an electronic device that provides data on a robot’s acceleration and rotation.

Wheel Odometry: The process of using data from wheel encoders to estimate the robot's position and orientation over time, calculated from how far each wheel has travelled relative to the robot's starting point.

Together, these tools help the arculees create a map representative of their real environment and determine their position within it.
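To make wheel odometry concrete, here is a minimal sketch of dead reckoning for a differential-drive robot. The geometry, variable names, and wheel-base value are illustrative assumptions, not arculus code:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckon a new pose from the distances each wheel travelled."""
    d_center = (d_left + d_right) / 2.0        # distance moved by the robot centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading (radians)
    # Integrate along the average heading over the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Robot starts at the origin facing along +x and drives 1 m straight ahead.
pose = update_pose(0.0, 0.0, 0.0, d_left=1.0, d_right=1.0, wheel_base=0.5)
print(pose)  # → (1.0, 0.0, 0.0)
```

Because encoder-based estimates accumulate error (wheel slip, uneven floors), odometry alone drifts over time; that is why it is fused with LiDAR and IMU data rather than trusted on its own.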

Two developers at arculus working on computers on AMR Mapping and Navigation techniques
Our developers at work to ensure the arculees navigate accurately

SLAM - Simultaneous Localisation and Mapping

The technique of combining data from the scanners, IMU, and wheel odometry to generate a map while simultaneously tracking the robot's position is called SLAM: simultaneous localisation and mapping. To create an accurate map, you also need to know where the robot is on that map; that is the "localisation" in the acronym. With the scanners and sensors in place, you drive the robot around, collecting both the scanner data and the data on where the robot is, and combine the two to build the map.
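The mapping half of that combination boils down to projecting each laser reading from the robot's frame into the map frame using the current pose estimate. A minimal sketch, with made-up poses and beam angles rather than real arculee scan parameters:

```python
import math

def scan_to_map_points(pose, ranges, angle_min, angle_step):
    """Project LiDAR range readings into map coordinates, given the robot pose."""
    x, y, theta = pose
    points = []
    for i, r in enumerate(ranges):
        beam_angle = theta + angle_min + i * angle_step  # beam direction in the map frame
        points.append((x + r * math.cos(beam_angle),
                       y + r * math.sin(beam_angle)))
    return points

# Robot at (2, 1) facing +x; three beams at -90°, 0°, +90° hit obstacles
# at 1 m, 2 m, and 1 m respectively.
pts = scan_to_map_points((2.0, 1.0, 0.0), [1.0, 2.0, 1.0],
                         angle_min=-math.pi / 2, angle_step=math.pi / 2)
print([(round(px, 2), round(py, 2)) for px, py in pts])
# → [(2.0, 0.0), (4.0, 1.0), (2.0, 2.0)]
```

Accumulating such points over many poses produces the point cloud from which the 2D map is built; the hard part of SLAM is that the pose itself must be estimated from those same scans.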

Once the map is there, the robot software loads the map as a reference frame and is then able to drive within it. Since the map itself has a coordinate system, the software can tell the robot where it should move and where it is, creating a route and driving from A to B.
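Driving from A to B on such a map amounts to searching for a route through the free cells of the grid. Our actual planner is more sophisticated, but a breadth-first search over an occupancy grid captures the idea (grid values and coordinates here are invented for illustration):

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Shortest route on an occupancy grid via breadth-first search.
    grid[r][c] == 0 means free space, 1 means an obstacle."""
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            route = []
            while cell is not None:       # walk back to the start
                route.append(cell)
                cell = came_from[cell]
            return route[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall blocking the direct route
        [0, 0, 0]]
print(shortest_route(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The robot then follows the returned sequence of cells, converted back into map coordinates through the grid's resolution and origin.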

Tools and Techniques: Mapping, Localisation, and Navigation

Localisation, mapping, and navigation are the processes that go hand in hand to ensure autonomous driving of mobile robots like the arculees. However, there are certain tools and techniques to make the processes work:

At arculus, our tools, algorithms and sensors include the SLAM toolbox, Google’s Cartographer, IMU, wheel odometry, and two laser scanners. The SLAM toolbox is an open-source 2D SLAM tool specifically developed for Robot Operating System 2 (ROS2). Both Cartographer and the SLAM toolbox are used for real-time localisation and mapping across multiple sensor configurations and platforms.

The map of the arculus office generated with Cartographer and LiDAR scanners
Map of our office generated with Cartographer and LiDAR scanners

Meanwhile, there are two ways to record a map:

Offline: We create a recording by driving around the environment whose map we need and collect data. This data includes scanner information, IMU (inertial measurement unit) readings, and wheel odometry. Afterwards, we process the recording with Cartographer to generate the map offline.

Online: We primarily use Cartographer for online recording. It simply processes live data from the robot in real time to create the map.

Once the arculees have a laser map, they can drive automatically. However, to complete tasks, operate safely, and avoid deadlocks, an efficient traffic management system oversees the robots, giving them specific driving orders. This system lives in our Fleet Manager. The best path is calculated by weighing factors such as battery charge, distance, and the driveways available to the robots. The Fleet Management Software then sends step-by-step instructions to the AMR, guiding it through one action at a time. As the robot moves, the fleet manager continuously updates the route, ensuring safe, smooth, and accurate navigation.
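Weighing several factors against each other usually means scoring each candidate route with a cost function and picking the cheapest. The sketch below is a deliberately simplified illustration of the idea; the field names, weights, and charging heuristic are assumptions, not the Fleet Manager's actual logic:

```python
def route_cost(route, battery_level, distance_weight=1.0, battery_weight=5.0):
    """Illustrative cost: shorter routes win, but a low battery increasingly
    penalises routes that end far from a charging station."""
    low_battery_penalty = battery_weight * (1.0 - battery_level)
    return (distance_weight * route["distance_m"]
            + low_battery_penalty * route["dist_to_charger_m"])

def pick_route(candidates, battery_level):
    """Choose the candidate route with the lowest cost."""
    return min(candidates, key=lambda r: route_cost(r, battery_level))

candidates = [
    {"name": "direct",      "distance_m": 40.0, "dist_to_charger_m": 30.0},
    {"name": "via_charger", "distance_m": 55.0, "dist_to_charger_m": 2.0},
]
print(pick_route(candidates, battery_level=0.9)["name"])  # plenty of charge → direct
print(pick_route(candidates, battery_level=0.2)["name"])  # low battery → via_charger
```

In a real fleet, the cost would also account for traffic from other robots and the risk of deadlocks, and it is re-evaluated continuously as conditions change.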

A screenshot of the arculus Fleet Manager software
The Fleet Management Software tells the robot which route to take

Current Challenges and Hopes

Like any other technological advancement, AMR mapping and navigation come with challenges, but there are always ways to find better, more viable solutions. Our current mapping techniques work well in small to medium areas; creating maps of large, dynamic environments is harder. Our team of developers at arculus addresses this by recording the data and later processing it offline on more powerful computers, where computational constraints are no longer a concern.

Although the present mapping processes are effective, arculus aims to improve and innovate continuously. That is why our ultimate goal is to keep updating and refining the AMR mapping methods.

Hopes for the Future

Future advancements in mapping and navigation promise faster algorithms and higher map accuracy. Improvements in accuracy can address persistent issues. For example, enhanced precision would enable robots to navigate more efficiently, reducing errors and the need for manual corrections.

These advancements hold great potential for the role of autonomous mobile robots. With greater accuracy, robots could move faster and operate even more reliably. As a result, accuracy can create a more stable and efficient solution for dynamic environments.

The Way Forward

Mapping and navigation in autonomously driven vehicles is an emerging but crucial part of robotics. As such, it has ample room for growth and advancement. While there is no silver bullet to improve everything all at once, the tools developers use for mapping and navigation are composed of smaller components and algorithms, so a slight improvement in one part can positively affect the entire process. At the same time, staying connected to recent and relevant research, reading publications, and keeping up to date on what's happening in the field is essential for future breakthroughs. There is much room for improvement, and tools need updating, especially with artificial intelligence entering the equation.

CONTACT

arculus GmbH
Balanstrasse 73 
Haus 10
D-81541 München

info@arculus.de