📄️ 3.1 The Mirror World
Before we deploy code to a $10,000 robot, we must first prove it works in the Mirror World: a simulated environment that mimics the laws of physics. Simulation is not just a "nice to have"; in modern robotics, it is the primary development environment. It allows us to iterate rapidly, test dangerous scenarios safely, and train AI models on massive datasets that would be impossible to collect in the physical world.
📄️ 3.2 Defining the Body (URDF & SDF)
To simulate a robot, we must first define its body. We need to tell the simulator about the robot's physical properties: its shape, size, mass, and how its parts are connected. Just as an architect needs blueprints before building a house, a roboticist needs a description file before simulating a robot. We do this using XML-based description formats: the Unified Robot Description Format (URDF), the standard in ROS, and the Simulation Description Format (SDF), which Gazebo uses and which adds simulation-specific details such as physics properties and world descriptions.
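As a sketch of what such a description looks like, here is a minimal, hypothetical URDF for a two-link robot (the link and joint names are illustrative, not from any real robot). Each `<link>` carries visual geometry and inertial properties, and a `<joint>` declares how the links connect:

```xml
<?xml version="1.0"?>
<!-- Hypothetical minimal robot: a box base with one revolving arm link -->
<robot name="simple_arm">
  <link name="base_link">
    <visual>
      <geometry><box size="0.2 0.2 0.1"/></geometry>
    </visual>
    <inertial>
      <mass value="1.0"/>
      <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.01"/>
    </inertial>
  </link>
  <link name="arm_link">
    <visual>
      <geometry><cylinder radius="0.02" length="0.3"/></geometry>
    </visual>
    <inertial>
      <mass value="0.5"/>
      <inertia ixx="0.005" ixy="0" ixz="0" iyy="0.005" iyz="0" izz="0.001"/>
    </inertial>
  </link>
  <!-- Revolute joint: arm_link rotates about the z-axis, limited to ±90° -->
  <joint name="base_to_arm" type="revolute">
    <parent link="base_link"/>
    <child link="arm_link"/>
    <origin xyz="0 0 0.05"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1.0"/>
  </joint>
</robot>
```

Note how the file describes structure (shape, mass, connectivity) but says nothing about behavior; that is the physics engine's job, covered next.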
📄️ 3.3 The Laws of Physics (Gazebo)
Defining the visual appearance of a robot is easy. Defining how it moves and interacts with the world is hard. This is the job of the Physics Engine. In the ROS 2 ecosystem, Gazebo is the default simulator. While it offers several physics backends (including Bullet, DART, and Simbody), the Open Dynamics Engine (ODE) has historically been the default.
📄️ 3.4 Visualizing Reality (Unity & Isaac)
While Gazebo is excellent for physics and general robotics, it sometimes falls short when we need photorealism. For classical robotics (navigation, planning), geometric shapes are often enough. But for modern AI, especially Vision-Language-Action (VLA) models and computer vision, the visual fidelity of the simulation is critical.