Path Planning: Finding Geometric Routes Through Space

Path planning is the problem of computing a geometric route from a start position to a goal position that avoids obstacles. Unlike full motion planning, path planning focuses on the spatial route without considering dynamics or timing. It is foundational to both mobile robot navigation and manipulator arm control.

What Is Path Planning?

Path planning is the computational problem of finding a geometric route from a start position to a goal position that avoids obstacles in the environment. The problem is defined in a search space — either the physical workspace (for mobile robots) or the configuration space (for manipulators) — with obstacle regions that must be avoided.

The computational complexity of path planning depends on the representation and dimensionality of the search space. Grid-based methods discretize the space into cells and search for a path using graph algorithms (A*, Dijkstra). The complexity scales exponentially with dimension, making grid methods impractical for high-dimensional spaces. Sampling-based methods (RRT, PRM) avoid explicit discretization by randomly sampling the continuous space and building sparse graphs, making them practical for 6-7 dimensional manipulator configuration spaces.
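To make the grid-based approach concrete, here is a minimal sketch of A* on a 4-connected 2D occupancy grid. The `astar` function and the 0/1 grid encoding are illustrative assumptions, not a production planner; Manhattan distance is used as the admissible heuristic, so the returned path is optimal for unit-cost moves.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle).

    Manhattan distance is admissible for 4-connected unit-cost moves,
    so the first time the goal is expanded the path is optimal.
    """
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()  # tie-breaker so heap never compares cells
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:
            path = []  # reconstruct by walking parent pointers
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(
                        open_set, (ng + h((nr, nc)), ng, next(tie), (nr, nc), cell))
    return None  # no collision-free path exists
```

Because every free cell is a graph node, the number of nodes (and hence work) grows exponentially with dimension, which is exactly why grid search is reserved for low-dimensional spaces.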

Path planning is distinct from trajectory planning (which adds timing), motion planning (which adds dynamics), and task planning (which determines the sequence of actions). In a complete manipulation pipeline, task planning determines what to do, path planning determines where to move, and motion planning determines how to move there.

Historical Context

Path planning has roots in computational geometry and artificial intelligence. The A* algorithm (Hart et al., 1968) provided the first optimal graph search algorithm with heuristic guidance, becoming the standard for grid-based path planning. Potential field methods (Khatib, 1986) introduced a continuous approach using artificial forces.

The development of sampling-based methods in the 1990s (PRM by Kavraki et al., 1996; RRT by LaValle, 1998) revolutionized path planning for high-dimensional spaces, enabling practical planning for robot manipulators. These methods remain the workhorses of robot path planning, implemented in widely-used libraries like OMPL.

Learned path planning emerged in the late 2010s, with networks trained to predict sampling distributions (Ichter et al., 2018) or directly output paths (Qureshi et al., 2019). These methods dramatically reduce planning time but sacrifice completeness guarantees.

Practical Implications

For manipulation teams, path planning is typically handled by established planning frameworks (MoveIt, OMPL) rather than implemented from scratch. The primary practical concern is ensuring that the environment representation (obstacle point clouds, collision meshes) is accurate and up-to-date.

For teams developing learned planners, the key data requirement is diverse environment configurations with solved planning problems. Claru provides environment-rich datasets with obstacle configurations and collision-free path annotations suitable for training learned planners that generalize across deployment scenarios.

Common Misconceptions

MYTH

Path planning always finds the shortest path.

FACT

Only certain algorithms (A*, Dijkstra) guarantee optimality. Sampling-based methods (RRT) find feasible but generally suboptimal paths. RRT* provides asymptotic optimality but requires more computation. For many robotics applications, any collision-free path is acceptable.

MYTH

Path planning is solved and no longer an active research area.

FACT

While classical methods work well in static environments, path planning in dynamic environments, under uncertainty, and in high-dimensional spaces remains challenging. Learned planning methods and real-time replanning are active research frontiers.

Key Papers

  1. LaValle, S. M. "Planning Algorithms." Cambridge University Press, 2006.
  2. Hart, P. E., Nilsson, N. J., and Raphael, B. "A Formal Basis for the Heuristic Determination of Minimum Cost Paths." IEEE Transactions on Systems Science and Cybernetics, 1968.
  3. Ichter, B., Harrison, J., and Pavone, M. "Learning Sampling Distributions for Robot Motion Planning." ICRA, 2018.

How Claru Supports This

Claru provides purpose-built training datasets for frontier AI and robotics teams, with the data quality and diversity that path planning systems require for production deployment.

What Is Path Planning?

Path planning computes a collision-free geometric path from a start location to a goal location in a known or partially known environment. The output is a sequence of waypoints or a continuous curve that the robot can follow without colliding with obstacles. For mobile robots, the search space is typically 2D or 3D Euclidean space. For manipulators, it is the robot's configuration space.

The distinction between path planning and motion planning is subtle but important. Path planning finds the geometric route — the shape of the path through space. Motion planning adds timing and dynamics — how fast the robot moves along the path, subject to velocity and acceleration limits. In practice, many systems solve path planning first and then apply time parameterization (velocity profiling) to produce a full motion plan.
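The time-parameterization step can be sketched for the simplest case: a rest-to-rest trapezoidal velocity profile along one straight segment. The `segment_time` helper is a hypothetical illustration under those assumptions, not a general time-optimal parameterizer.

```python
import math

def segment_time(d, v_max, a_max):
    """Traversal time for a straight segment of length d, starting and
    ending at rest, under a trapezoidal velocity profile."""
    d_accel = v_max ** 2 / a_max  # distance spent accelerating + decelerating
    if d < d_accel:
        # Triangular profile: the segment is too short to reach v_max.
        return 2 * math.sqrt(d / a_max)
    # Trapezoid: accelerate to v_max, cruise, decelerate.
    return 2 * v_max / a_max + (d - d_accel) / v_max
```

Applying this per waypoint segment turns a purely geometric path into a timed trajectory that respects the stated velocity and acceleration limits.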

Classic path planning algorithms include A* search on discretized grids, Dijkstra's algorithm for graph-based representations, and potential field methods that treat the goal as an attractor and obstacles as repulsors. For continuous spaces, sampling-based methods (RRT, PRM) are standard. Learned path planners use neural networks to predict paths from environment representations, offering significant speed advantages for real-time applications.
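For contrast with grid search, a stripped-down 2D RRT is sketched below. The interface (a user-supplied `collision_free(a, b)` edge check, axis-aligned sampling bounds, a fixed goal bias) is an assumption for illustration; real implementations, e.g. in OMPL, are considerably more sophisticated.

```python
import math
import random

def rrt(start, goal, collision_free, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Minimal 2D RRT: grow a tree from start by extending the nearest
    node toward random samples; return a path once a node lands within
    goal_tol of the goal. Finds a feasible path, not an optimal one."""
    rng = random.Random(seed)
    nodes, parents = [start], {0: None}
    for _ in range(max_iters):
        # Occasionally sample the goal itself to speed convergence.
        sample = goal if rng.random() < 0.1 else (
            rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not collision_free(near, new):
            continue  # reject extensions that cross obstacles
        parents[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:  # walk parent pointers back to start
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]
    return None
```

Note the contrast with A*: no discretization of the space, no optimality guarantee, and the result typically needs post-processing (shortcutting, smoothing) before execution.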

Path Planning at a Glance

A* — optimal search on discretized grids
RRT — sampling-based planner for continuous spaces
D* — incremental replanning in dynamic environments
PRM — multi-query roadmap planner
2D/3D — typical search dimensions for mobile robots
<10ms — typical learned-planner inference time

Data-Driven Path Planning

Learned path planners train on datasets of solved planning problems to predict collision-free paths without expensive search. The training data consists of environment maps (occupancy grids, point clouds, or signed distance fields), start and goal positions, and optimal or near-optimal solution paths. At test time, the network predicts a path that is verified for collision-freeness before execution.
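The verification step mentioned above might look like the following sketch: densely interpolate between the predicted waypoints and test each interpolated point against the environment. The `occupied(x, y)` predicate is an assumed interface standing in for whatever map representation the system uses.

```python
import math

def path_is_collision_free(path, occupied, resolution=0.05):
    """Verify a (possibly learned) planner's output before execution.

    Interpolates each waypoint segment at the given spatial resolution
    and rejects the path if any sampled point lies in an obstacle.
    """
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        n = max(2, int(dist / resolution) + 1)
        for k in range(n + 1):
            t = k / n
            if occupied(x0 + t * (x1 - x0), y0 + t * (y1 - y0)):
                return False
    return True
```

A cheap check like this is what lets learned planners sacrifice completeness safely: an invalid prediction is caught and the system can fall back to a classical planner.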

For mobile robot navigation, learned path planners have achieved near-human performance in complex environments. The training data comes from both simulation (procedurally generated environments with known solutions) and real-world mapping (SLAM-generated maps with human-driven paths). For manipulation, learned path planners operate in configuration space and require training data with diverse obstacle configurations relevant to the target deployment environment.

Claru provides environment-rich data with diverse obstacle configurations, enabling teams to train learned path planners that generalize across deployment scenarios. Each dataset includes calibrated environment representations alongside collision-free trajectory annotations.


Frequently Asked Questions

What is the difference between path planning and motion planning?

Path planning finds the geometric route (where to go). Motion planning adds timing and dynamics (how to get there, how fast). Path planning ignores velocity and acceleration limits; motion planning respects them. In practice, path planning plus time parameterization approximates full motion planning for many applications.

Does path planning work in unknown environments?

Classical path planning requires a known map. For unknown environments, reactive methods (potential fields, learned policies) or online replanning approaches (D* Lite, receding horizon planning) are used. Learned path planners can generalize to unseen environments if trained on sufficiently diverse environments.

How much training data do learned path planners need?

Typically 100K-1M solved planning problems across diverse environments. The diversity of environments matters more than the volume of problems per environment. For manipulation, the environment diversity should match the target deployment domain.

Need Path Planning Training Data?

Claru provides diverse environment representations with collision-free path annotations for training learned path planners.