Fairly new to this, so let me know if there's a better way of doing things!
I'm attempting dynamic collision avoidance with a Kinect V1 and the UR5 MoveIt package on ROS Kinetic. The PC is running Ubuntu 16.04 and I have the following packages installed:
- camera_pose
- octomap_mapping
- octomap_ros
- OpenNI
- openni_camera
- perception_pcl
- robotiq
- trac_ik
- universal_robot
- ur_modern_driver
- ur_scripts
I've been following the MoveIt perception/configuration tutorial (link below) in an effort to have MoveIt process the point cloud into an octomap and display the occupied voxels: http://docs.ros.org/kinetic/api/moveit_tutorials/html/doc/perception_configuration/perception_configuration_tutorial.html
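For reference, the sensors YAML I load from the sensor manager launch file looks roughly like the sketch below, following that tutorial. The file name, topic, and numeric values are just what I'm using with openni_camera on my machine, so treat them as assumptions rather than required names:

```yaml
# sensors_kinect_pointcloud.yaml -- loaded via <rosparam> from the
# *_moveit_sensor_manager.launch.xml in the MoveIt config package
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    # Topic published by openni_camera on my setup; yours may differ
    point_cloud_topic: /camera/depth_registered/points
    max_range: 2.0            # metres; points beyond this are ignored
    point_subsample: 1        # use every point
    padding_offset: 0.1       # self-filtering padding around robot links (m)
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud
```

I also set the `octomap_frame` and `octomap_resolution` parameters in the sensor manager launch file, as the tutorial describes.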
I'm also aware of this other thread, which is extremely similar; however, I wasn't able to achieve a working result using just the information there: https://answers.ros.org/question/195770/moveit-collision-avoidance-with-point-cloud-2/
I have OpenNI working, so I can visualise the PointCloud2 fine in MoveIt, and I have completed all the steps in the tutorial, including altering the sensor manager launch and sensors YAML files. However, when I echo the planning scene topic, `/move_group/monitored_planning_scene`, I get nothing. This is despite trying a number of static transform publishers to arrange everything with respect to `/world` (I would post images of my tf trees but I don't have enough karma).
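For completeness, this is roughly how I've been publishing the transform and checking the planning scene. The frame names and offsets are placeholders for where my Kinect sits relative to the robot, not something I'm confident is correct:

```bash
# Static transform from the planning frame to the camera
# (x y z yaw pitch roll parent child period_ms):
rosrun tf static_transform_publisher 0.5 0.0 1.0 0 0 0 world camera_link 100

# Sanity-check that the transform is actually being published:
rosrun tf tf_echo world camera_link

# This is the echo that comes back empty for me
# (--noarr keeps any octomap data array from flooding the terminal):
rostopic echo /move_group/monitored_planning_scene --noarr
```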
Can anyone shed any light on this?