Software Setup

This page covers SDK installation, network connection, ROS2 with MoveIt2 dual-arm planning, the browser teleop panel, VLA model integration, and the one-click data pipeline: everything from network discovery to autonomous manipulation.

Step 1 — SDK Installation

SDK Installation

The VLAI L1 is controlled via the roboticscenter Python SDK, which provides both high-level task APIs and low-level joint control. Install it on your host PC.

Create a virtual environment

python -m venv ~/.venvs/vlai
source ~/.venvs/vlai/bin/activate

Install the SDK

pip install roboticscenter[l1]

Verify installation

python -c "from roboticscenter import L1; print('SDK OK')"
rc --version   # command-line tool

Step 2 — Network Connection

Connecting to the L1

The L1 runs its own onboard ROS2 stack and exposes a gRPC control API over your local network. Your host PC communicates with it over WiFi or Ethernet.

Initial network setup

# Power on the L1 — it will connect to the configured WiFi automatically
# Then discover it on your network:
rc discover
# Output: L1-XXXX found at 192.168.1.45 (port 8888)

Connect and verify

rc connect --device l1 --host 192.168.1.45
# Output: Connected to VLAI L1 (firmware v2.1.4, battery: 87%)

# Or use the Python SDK:
from roboticscenter import L1

robot = L1(host="192.168.1.45")
robot.connect()
print(robot.get_status())
# {'battery': 87, 'arm_left': 'ready', 'arm_right': 'ready', 'base': 'ready'}
robot.disconnect()
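
get_status() returns a plain dict, as shown above. A small guard built on it can gate a script on robot readiness; this is a sketch assuming the field names from the example output, and the battery threshold is an arbitrary choice:

```python
def is_ready(status, min_battery=20):
    """True when every subsystem reports 'ready' and battery is above the threshold."""
    subsystems = {k: v for k, v in status.items() if k != "battery"}
    return (status.get("battery", 0) >= min_battery
            and all(v == "ready" for v in subsystems.values()))

# Using the example status dict from above:
status = {"battery": 87, "arm_left": "ready", "arm_right": "ready", "base": "ready"}
print(is_ready(status))  # True
```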

Set a static IP (recommended for lab use)

rc config set network.static_ip 192.168.1.100
rc config set network.gateway 192.168.1.1
rc config apply   # reboots the L1 network stack

Step 3 — ROS2 + MoveIt2

ROS2 with MoveIt2 Dual-Arm Control

The L1 ships with ROS2 Humble running onboard. Your host PC joins the robot as a ROS2 node over the same network, so the host needs ROS2 Humble as well.

Install ROS2 Humble on host (Ubuntu 22.04)

sudo apt update && sudo apt install ros-humble-desktop \
  ros-humble-moveit ros-humble-ros2-control \
  ros-humble-ros2-controllers -y

Launch the L1 ROS2 bridge

# On the L1 (via SSH or the onboard terminal):
ros2 launch vlai_l1_ros2 l1_bringup.launch.py

# On your host PC:
source /opt/ros/humble/setup.bash
export ROS_DOMAIN_ID=42   # must match the L1's domain ID
ros2 topic list   # should show /l1/left_arm/joint_states, etc.

Dual-arm MoveIt2 planning

source /opt/ros/humble/setup.bash
ros2 launch vlai_l1_moveit l1_moveit.launch.py

# In another terminal — plan and execute a bimanual task:
ros2 run vlai_l1_moveit bimanual_demo
# Executes: left arm picks object, right arm receives and places

Individual arm control via Python

from roboticscenter import L1
import numpy as np

robot = L1(host="192.168.1.45")
robot.connect()

# Move left arm to Cartesian pose (position + quaternion)
pose = {
    "position": [0.4, 0.1, 0.35],    # x, y, z in meters from base
    "orientation": [0, 0, 0, 1]      # quaternion xyzw
}
robot.left_arm.move_to_pose(pose, speed=0.3)

# Read current joint state
state = robot.left_arm.get_joint_state()
print("Left arm joints:", state.positions)  # 8 values in radians

robot.disconnect()
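
move_to_pose takes an xyzw quaternion. If you think in Euler angles, a small converter helps; this sketch uses the common ZYX (yaw-pitch-roll) convention, which is an assumption, so verify it against your frame conventions before relying on it:

```python
import math

def euler_to_quat_xyzw(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians, ZYX convention) to an xyzw quaternion."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    w = cr * cp * cy + sr * sp * sy
    return [x, y, z, w]

# Zero rotation gives the identity quaternion used in the example pose above:
print(euler_to_quat_xyzw(0, 0, 0))  # [0.0, 0.0, 0.0, 1.0]
```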

Mobile base control

from roboticscenter import L1

robot = L1(host="192.168.1.45")
robot.connect()

# Drive forward 1 meter at 0.5 m/s
robot.base.move(x=1.0, y=0.0, speed=0.5)

# Rotate 90 degrees clockwise
robot.base.rotate(angle=-90, speed=0.3)  # degrees

# Adjust lift height (106 to 162 cm)
robot.base.set_lift_height(130)   # cm

# Stop
robot.base.stop()
robot.disconnect()
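
base.rotate takes signed degrees with clockwise negative, per the snippet above. When steering toward an absolute heading, it helps to normalize the turn to the shortest direction first; a minimal sketch, assuming headings measured in degrees with counterclockwise positive:

```python
def shortest_turn(current_deg, target_deg):
    """Signed turn in (-180, 180] that brings the current heading to the target."""
    delta = (target_deg - current_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta

print(shortest_turn(0, 270))   # -90.0 (clockwise is the shorter way)
print(shortest_turn(350, 10))  # 20.0
```

The result can be passed straight to robot.base.rotate(angle=...), since both use the same sign convention.
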

Step 4 — Browser Teleop

Browser Teleoperation Panel

The L1 includes a built-in browser teleop panel — no software install required. Navigate to the L1's IP on port 8888.

Access the panel

# Open in browser:
http://192.168.1.45:8888

# Or launch via CLI:
rc teleop --device l1

The panel provides:

  • WASD keyboard control for mobile base
  • Left/right arm Cartesian joystick (click-drag in 3D viewport)
  • Gripper open/close buttons
  • Camera feed from all mounted cameras
  • One-click episode recording start/stop
  • Battery and joint state status panel

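The panel's WASD scheme can be mirrored in a script driving robot.base.move. A minimal mapping sketch; the choice that A/D strafe sideways (rather than rotate) is an assumption, so adapt it to taste:

```python
# Map a pressed key to an (x, y) motion increment in meters.
# Assumption: W/S drive forward/backward along x, A/D strafe left/right along y.
KEYMAP = {
    "w": (0.1, 0.0),
    "s": (-0.1, 0.0),
    "a": (0.0, 0.1),
    "d": (0.0, -0.1),
}

def key_to_motion(key):
    """Return the (x, y) increment for a key, or (0, 0) for unmapped keys."""
    return KEYMAP.get(key.lower(), (0.0, 0.0))

print(key_to_motion("W"))  # (0.1, 0.0)
```

Each returned pair would then be fed to robot.base.move(x=..., y=..., speed=...).
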
VR teleop (Developer Pro and Max)

rc teleop --device l1 --mode vr
# Opens a WebXR session — put on Meta Quest and visit the displayed URL

Step 5 — VLA Integration

Vision-Language-Action Model Integration

The L1 Developer Pro and Max tiers include onboard compute capable of running VLA inference locally. On any tier, you can instead run inference on a host PC and stream actions to the robot.

Run OpenVLA on host PC (any tier)

pip install roboticscenter[vla]

from roboticscenter import L1
from roboticscenter.vla import OpenVLAClient

robot = L1(host="192.168.1.45")
robot.connect()

vla = OpenVLAClient(
    model="openvla/openvla-7b",
    device="cuda"   # or "cpu" for slower inference
)

# Capture observation
obs = robot.capture_observation()   # returns RGB image + joint state

# Get action from VLA (text-conditioned)
action = vla.predict(
    image=obs["image"],
    instruction="Pick up the blue block and place it on the red plate"
)
# action: dict of arm joint deltas + gripper command

# Execute action on the robot
robot.execute_action(action)
robot.disconnect()
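
The action contains per-joint deltas, per the comment above. Before sending model output to hardware, it is sensible to clamp each delta to a safe magnitude; a minimal sketch, where the exact dict layout and key names are assumptions to adjust to whatever vla.predict actually returns:

```python
def clamp_action(action, max_delta=0.05):
    """Limit every numeric joint delta to [-max_delta, +max_delta] radians.

    Non-numeric entries (e.g. a discrete gripper command) pass through unchanged.
    """
    clamped = {}
    for name, value in action.items():
        if isinstance(value, (int, float)):
            clamped[name] = max(-max_delta, min(max_delta, value))
        else:
            clamped[name] = value
    return clamped

# Hypothetical action layout:
action = {"left_j0": 0.12, "left_j1": -0.02, "gripper": "close"}
print(clamp_action(action))  # {'left_j0': 0.05, 'left_j1': -0.02, 'gripper': 'close'}
```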

On-device VLA inference (Developer Pro/Max)

rc deploy vla \
  --model openvla/openvla-7b \
  --quantize int4   # fits in 6GB VRAM on V3 compute (70 TOPS)

# Now VLA runs on the L1's onboard compute — no host PC needed:
rc run policy \
  --task "Pick up the blue block and place it on the red plate" \
  --max_steps 50
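
The int4 figure follows from simple arithmetic: weight memory is roughly parameter count times bits per weight, ignoring activations and KV cache. A back-of-the-envelope check:

```python
def weight_vram_gb(num_params, bits_per_weight):
    """Rough weight-only footprint in GB (1 GB = 1e9 bytes); excludes activations and KV cache."""
    return num_params * bits_per_weight / 8 / 1e9

# A 7B-parameter model quantized to int4:
print(weight_vram_gb(7e9, 4))  # 3.5 GB of weights, leaving headroom within 6GB
```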

Troubleshooting

Common Issues

Error 1 — rc discover finds no devices

The usual cause is that the L1 and your host are not on the same subnet. Enterprise WiFi networks often enable client isolation; check with IT or use a dedicated router.

# Try direct connection by IP if you know it:
rc connect --device l1 --host 192.168.1.45

# Or connect the L1 via Ethernet directly to your laptop:
# Set your laptop to 192.168.2.1/24, L1 will appear at 192.168.2.100
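
When discovery fails but you suspect the IP, you can also probe the control port (8888, from the discovery output earlier) directly with the Python standard library:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("192.168.1.45", 8888))  # True when the L1 control API is reachable
```
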

Error 2 — MoveIt2 planning fails: no path found

The target pose is outside the arm's reachable workspace or in collision with the robot body.

# Check reachability first:
from roboticscenter import L1
robot = L1(host="192.168.1.45")
robot.connect()
reachable = robot.left_arm.check_pose_reachable(
    position=[0.4, 0.1, 0.35])
print("Reachable:", reachable)  # if False, adjust target pose
robot.disconnect()
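
If a target fails the reachability check, nudging it upward along z often helps (e.g. a grasp slightly above clutter). A sketch of that retry loop; check_pose_reachable is the SDK call shown above, abstracted here as an arbitrary predicate so the logic stands alone:

```python
def find_reachable_z(position, is_reachable, step=0.02, max_tries=5):
    """Raise the target in step-meter z increments; return the first pose the
    predicate accepts, or None if all attempts fail."""
    x, y, z = position
    for i in range(max_tries):
        candidate = [x, y, z + i * step]
        if is_reachable(candidate):
            return candidate
    return None

# Demo with a stub predicate that accepts poses at z >= 0.40:
reachable_stub = lambda p: p[2] >= 0.40
print(find_reachable_z([0.4, 0.1, 0.35], reachable_stub))
```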

Software Working? Start Collecting Data.

Once the arms are moving, the next step is teleoperation and dataset recording.