Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Reading PointCloud2 in C++

How do you read a PointCloud2 in C++? I'm interested in getting the depth (the z value). Maybe an equivalent of this: [reading pointcloud2 in python](https://code.ros.org/trac/ros-pkg/ticket/4440).
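A minimal sketch of one common approach, using `sensor_msgs::PointCloud2Iterator` from the `sensor_msgs` package to walk the x/y/z fields (the topic name `/points` is a placeholder):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

// Iterate over the x/y/z fields of an incoming cloud and print the depth (z).
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud)
{
  sensor_msgs::PointCloud2ConstIterator<float> iter_x(*cloud, "x");
  sensor_msgs::PointCloud2ConstIterator<float> iter_y(*cloud, "y");
  sensor_msgs::PointCloud2ConstIterator<float> iter_z(*cloud, "z");
  for (; iter_z != iter_z.end(); ++iter_x, ++iter_y, ++iter_z)
  {
    ROS_INFO("x: %f, y: %f, depth (z): %f", *iter_x, *iter_y, *iter_z);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_reader");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/points", 1, cloudCallback);  // placeholder topic
  ros::spin();
  return 0;
}
```

Alternatively, `pcl::fromROSMsg` (from pcl_conversions) converts the message into a `pcl::PointCloud<pcl::PointXYZ>` whose points can be indexed directly.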

RGB and Depth sensor_msgs::Image vs XYZRGB sensor_msgs::PointCloud2

Hello. I have ROS running across multiple machines on a limited network, and I have two options: transfer an XYZRGB sensor_msgs::PointCloud2 directly, or transfer two sensor_msgs::Image messages (one RGB and one depth) and assemble the cloud remotely from the RGBD data and the camera intrinsics. The assembled cloud is identical to the original sensor_msgs::PointCloud2; I checked. From the data I collected, I noticed that transferring the two sensor_msgs::Image messages was 85% faster even in the worst-case scenario, and I don't know why. Is there any reason why it would be faster? Thanks in advance.
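For reference, a minimal sketch of the remote assembly step described above, back-projecting a depth image (assumed to be in metres, CV_32F) through the pinhole model; `fx`, `fy`, `cx`, `cy` are assumed to come from the CameraInfo message:

```cpp
#include <opencv2/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Back-project a depth image into an organized XYZ cloud using the pinhole
// model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
pcl::PointCloud<pcl::PointXYZ> depthToCloud(const cv::Mat& depth,
                                            double fx, double fy,
                                            double cx, double cy)
{
  pcl::PointCloud<pcl::PointXYZ> cloud;
  cloud.width = depth.cols;
  cloud.height = depth.rows;
  cloud.is_dense = false;
  cloud.points.resize(depth.total());
  for (int v = 0; v < depth.rows; ++v)
    for (int u = 0; u < depth.cols; ++u)
    {
      const float z = depth.at<float>(v, u);
      pcl::PointXYZ& p = cloud.at(u, v);
      p.z = z;
      p.x = (u - cx) * z / fx;
      p.y = (v - cy) * z / fy;
    }
  return cloud;
}
```

As for the speed difference, one likely factor is simply message size: an XYZRGB cloud produced by PCL typically carries 32 bytes per point, while a depth image (2 or 4 bytes per pixel) plus an RGB image (3 bytes per pixel) is considerably smaller, and images can additionally be compressed with image_transport.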

Generate PointCloud2 without sensor

How can I construct a sensor_msgs::PointCloud2 message from scratch in C++? I have looked at the structure and documentation of that message, but I don't know how to build it up. Does someone have example code with dummy data?
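A minimal sketch using `sensor_msgs::PointCloud2Modifier` to declare the fields and fill in dummy data (the frame id and topic name are placeholders):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "dummy_cloud_publisher");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("dummy_cloud", 1);

  sensor_msgs::PointCloud2 cloud;
  cloud.header.frame_id = "map";  // assumed frame
  cloud.height = 1;               // unorganized cloud

  // Declare x/y/z float32 fields and size the data buffer for 100 points.
  sensor_msgs::PointCloud2Modifier modifier(cloud);
  modifier.setPointCloud2FieldsByString(1, "xyz");
  modifier.resize(100);

  // Fill in dummy data through the field iterators.
  sensor_msgs::PointCloud2Iterator<float> iter_x(cloud, "x");
  sensor_msgs::PointCloud2Iterator<float> iter_y(cloud, "y");
  sensor_msgs::PointCloud2Iterator<float> iter_z(cloud, "z");
  for (size_t i = 0; i < 100; ++i, ++iter_x, ++iter_y, ++iter_z)
  {
    *iter_x = 0.01f * i;
    *iter_y = 0.0f;
    *iter_z = 1.0f;
  }

  ros::Rate rate(1);
  while (ros::ok())
  {
    cloud.header.stamp = ros::Time::now();
    pub.publish(cloud);
    rate.sleep();
  }
  return 0;
}
```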

Trouble visualizing PointCloud2 in Rviz

Hello, I successfully publish a PointCloud2 message to a predefined topic (/output) that rviz's PointCloud display subscribes to, but the point cloud does not show up in the visualization. I know it publishes successfully because there is no error message in the rviz display panel. In the launch file for my program, I launch the demo.launch for my TX90 robot that was created through the moveit_setup_assistant; this starts rviz and visualizes my TX90 robot, but not the PointCloud2 message. I believe this may be a problem with TF, so I added a static_transform_publisher to my launch file, but this does not change anything except add the frame `head` to my transform tree (which I found using tf view_frames); I am still not able to view the point clouds in rviz. It says I do not have enough karma to post my frames.pdf file. My code for this portion is as follows:

```cpp
ros::Publisher pc_pub = n.advertise<sensor_msgs::PointCloud2>("/output", 1);

boost::filesystem::path pkgPath, imgPath;
std::stringstream ss;
pkgPath = ros::package::getPath("mover_node");
ss << pkgPath.c_str() << "/Wound_3d_mesh.pcd";
std::string fpath = ss.str();

sensor_msgs::PointCloud2 cloud_msg;
pcl::PointCloud<pcl::PointXYZ> cloud;
pcl::PCLPointCloud2 point_cloud2;

pcl::io::loadPCDFile(fpath, cloud);
//pcl::PointCloud<pcl::PointXYZ>::ConstPtr tmp_cloud = cloud;
pcl::toPCLPointCloud2(cloud, point_cloud2);
pcl_conversions::fromPCL(point_cloud2, cloud_msg);
cloud_msg.header.frame_id = "head";
cloud_msg.header.stamp = ros::Time::now();

pc_pub.publish(cloud_msg);
```
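In case it is useful for reproducing the setup, a minimal sketch of broadcasting the `head` frame from C++ with tf2; the parent frame `base_link` and the identity transform are assumptions, and a `static_transform_publisher` line in the launch file achieves the same thing:

```cpp
#include <ros/ros.h>
#include <tf2_ros/static_transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "head_frame_broadcaster");
  tf2_ros::StaticTransformBroadcaster broadcaster;

  geometry_msgs::TransformStamped tf_msg;
  tf_msg.header.stamp = ros::Time::now();
  tf_msg.header.frame_id = "base_link";  // assumed parent frame
  tf_msg.child_frame_id = "head";        // frame used by the published cloud
  tf_msg.transform.rotation.w = 1.0;     // identity rotation, zero translation

  broadcaster.sendTransform(tf_msg);
  ros::spin();
  return 0;
}
```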

Equivalent of Marker frame_locked for PointCloud2 in rviz?

I have point clouds that are locked to (have an unchanging transform relative to) a moving frame. The rviz point cloud won't move with the moving frame: at the one timestamp at which it was captured, it remains at that static location. I'd like to be able to set a non-zero decay time in rviz and still see the points for a few seconds after they are captured. It's possible to set the rviz global frame to the moving frame, and the visualization is then correct, except that I'd like to visualize from a static frame like map instead. I can solve this by republishing a limited buffer of old point clouds that are in the moving frame and constantly updating their timestamps to the current time, but is there something off the shelf to do this? Can point_cloud_assembler help me here?
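A minimal sketch of the republishing workaround described above: keep the last few clouds (still expressed in the moving frame) and restamp them to the current time on a timer, so rviz keeps transforming them with the frame's current pose. Buffer size, rate, and topic names are placeholders:

```cpp
#include <deque>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

std::deque<sensor_msgs::PointCloud2> buffer;  // last few clouds, still in the moving frame
ros::Publisher pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  buffer.push_back(*msg);
  if (buffer.size() > 5)  // keep roughly the last few seconds of clouds
    buffer.pop_front();
}

void timerCallback(const ros::TimerEvent&)
{
  // Restamp every buffered cloud to "now" so rviz transforms it with the
  // current pose of the moving frame instead of the capture-time pose.
  for (sensor_msgs::PointCloud2& cloud : buffer)
  {
    cloud.header.stamp = ros::Time::now();
    pub.publish(cloud);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_restamper");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::PointCloud2>("clouds_restamped", 10);
  ros::Subscriber sub = nh.subscribe("cloud_in", 10, cloudCallback);
  ros::Timer timer = nh.createTimer(ros::Duration(0.1), timerCallback);
  ros::spin();
  return 0;
}
```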

Code examples of Pointcloud Autonomous Navigation?

I am currently using a VLP-16 device that produces PointCloud2 messages. I have tried without success to convert them to LaserScan for autonomous navigation (as that appears to be the popular choice) and would like to consider alternatives. Where can I find existing robots, with source code and simulation support, that navigate using PointCloud/PointCloud2 data produced by LIDAR devices?

I'm trying to transform a point cloud (rotation about z) using the following code, but I get an error about the data types used in the transformPointCloud() method. What am I doing wrong here, or is there a simpler method?

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_ros/transforms.h>
#include <Eigen/Dense>

void callback(const sensor_msgs::PointCloud2ConstPtr &cloud_msg)
{
  ROS_INFO("Message received:");

  // Rotation about z by theta.
  Eigen::Matrix4f transform_1 = Eigen::Matrix4f::Identity();
  float theta = M_PI / 18;  // the angle of rotation in radians
  transform_1(0, 0) =  cos(theta);
  transform_1(0, 1) = -sin(theta);
  transform_1(1, 0) =  sin(theta);
  transform_1(1, 1) =  cos(theta);

  printf("Method #1: using a Matrix4f\n");
  std::cout << transform_1 << std::endl;

  sensor_msgs::PointCloud2 op_cloud;
  pcl_ros::transformPointCloud(transform_1, *cloud_msg, op_cloud);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "cloud_trfm");
  ros::NodeHandle nh;
  // topic name, buffer size until the node can read, callback fn
  ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, callback);
  ros::spin();
  return 0;
}
```

How to compute the centroid of a PointCloud2

I am wondering how I can compute the centroid of a sensor_msgs/PointCloud2. Is there a direct way to do this? So far I have only found the pcl::CentroidPoint<PointT> class template reference ([link text](http://docs.pointclouds.org/trunk/classpcl_1_1_centroid_point.html#details)) on the website.
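A minimal sketch of one common route: convert the message to a PCL cloud with pcl_conversions and use `pcl::compute3DCentroid` (topic name is a placeholder):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/centroid.h>
#include <pcl_conversions/pcl_conversions.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  // Convert the ROS message into a PCL cloud.
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromROSMsg(*msg, cloud);

  // compute3DCentroid averages x, y, z; the fourth component is set to 1.
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(cloud, centroid);
  ROS_INFO("Centroid: %f %f %f", centroid[0], centroid[1], centroid[2]);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "centroid_node");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("cloud_in", 1, cloudCallback);  // placeholder topic
  ros::spin();
  return 0;
}
```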

Maintain/keep all the PointCloud2 in Rviz

I am publishing PointCloud2 messages and viewing them in Rviz, but when I publish the current PointCloud2 message on the topic, the previous message disappears. How can I keep all the PointCloud2 messages visible in Rviz?
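One workaround is a small accumulator node, sketched below under the assumption that all clouds share the same fixed frame: concatenate each incoming cloud into a growing PCL cloud and republish the running total (increasing the display's Decay Time in rviz is another option for keeping points around):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

pcl::PointCloud<pcl::PointXYZ> accumulated;  // running union of all received clouds
ros::Publisher pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ> incoming;
  pcl::fromROSMsg(*msg, incoming);
  accumulated += incoming;               // concatenate points
  accumulated.header = incoming.header;  // keep the latest stamp/frame

  sensor_msgs::PointCloud2 out;
  pcl::toROSMsg(accumulated, out);
  pub.publish(out);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_accumulator");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::PointCloud2>("accumulated_cloud", 1);
  ros::Subscriber sub = nh.subscribe("cloud_in", 1, cloudCallback);
  ros::spin();
  return 0;
}
```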

How to know Ring/Layer/Laser(/Color) from PointCloud2

Hi, friends. I am using Ubuntu 14.04 and ROS Indigo with a **Velodyne VLP-16** sensor. I am trying to extract the useful information from `PointCloud2` messages and reorder it into the format I need. I can already extract the x, y, z coordinates **in Python**, but **my current question is**: how can I know the **layer** of each point, i.e. **which of the 16 lasers** measured that point? Do the **colors** (r, g, b values) contain that information? How could I determine the layer from the **color values** unpacked like below? Sorry, I can't find the definition of rgb or any examples.

```python
r = (pack & 0x00FF0000) >> 16
g = (pack & 0x0000FF00) >> 8
b = (pack & 0x000000FF)
```

Or should I calculate the elevation angles and derive the layer information from those? I am so confused now. I would appreciate it if you could post some Python code examples. Thanks for your answers and best wishes.
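For illustration, a minimal C++ sketch that reads the per-point `ring` field, assuming the cloud comes from the standard velodyne_pointcloud driver, which publishes a `ring` field holding the laser index (0-15 on a VLP-16); in Python the same field can be requested with `pc2.read_points(msg, field_names=("x", "y", "z", "ring"))`:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

// Read the per-point "ring" field (the laser index) alongside x, y, z.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud)
{
  sensor_msgs::PointCloud2ConstIterator<float>    iter_x(*cloud, "x");
  sensor_msgs::PointCloud2ConstIterator<float>    iter_y(*cloud, "y");
  sensor_msgs::PointCloud2ConstIterator<float>    iter_z(*cloud, "z");
  sensor_msgs::PointCloud2ConstIterator<uint16_t> iter_ring(*cloud, "ring");
  for (; iter_x != iter_x.end(); ++iter_x, ++iter_y, ++iter_z, ++iter_ring)
  {
    ROS_INFO("x: %f, y: %f, z: %f, ring: %u",
             *iter_x, *iter_y, *iter_z, static_cast<unsigned int>(*iter_ring));
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ring_reader");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, cloudCallback);
  ros::spin();
  return 0;
}
```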

sensor_msgs PointCloud2 color value?

Does the sensor_msgs/PointCloud2 data type also contain the RGB values of the image? Currently I'm using the code below, and when I print the converted PCL type it also shows RGB values. How is that possible?

```cpp
ros::Publisher marker_pub;

void cloud_cb (const sensor_msgs::PointCloud2ConstPtr& input)
{
  pcl::PointCloud<pcl::PointXYZRGB> output;
  pcl::fromROSMsg(*input, output);
  std::cout << output << std::endl;
}

int main (int argc, char** argv)
{
  ros::init(argc, argv, "cloud_color");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe<sensor_msgs::PointCloud2>("input", 1, cloud_cb);
  marker_pub = nh.advertise<visualization_msgs::Marker>("visualization_marker", 1);
  // Spin
  ros::spin ();
}
```

As far as I can see, the sensor_msgs/PointCloud2 documentation shows the fields below:

```
std_msgs/Header header
uint32 height
uint32 width
sensor_msgs/PointField[] fields
bool is_bigendian
uint32 point_step
uint32 row_step
uint8[] data
bool is_dense
```

where `data`, I thought, should represent only the depth of the voxels. How is it returning RGB values when converted to the PCL data type? I'm a little confused, because I haven't used this code for a year.
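For what it's worth, a minimal sketch that prints the fields declared in an incoming cloud; when the publisher includes color, an `rgb` (or `rgba`) entry appears alongside x, y, z in `fields`, and that is what `pcl::fromROSMsg` uses to fill a colored PCL point type. The `data` array is just raw bytes laid out according to `fields`, not only depth:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Print the name, offset, datatype and count of every declared field.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud)
{
  for (const sensor_msgs::PointField& field : cloud->fields)
  {
    ROS_INFO("field '%s' at offset %u, datatype %u, count %u",
             field.name.c_str(), field.offset,
             static_cast<unsigned int>(field.datatype), field.count);
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "field_lister");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("input", 1, cloudCallback);
  ros::spin();
  return 0;
}
```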

rviz stops node from publishing

Hello, I have a weird problem with rviz in ROS. I have a node that receives a PointCloud, formats it into a PointCloud2, and publishes the PointCloud2 message afterwards. I can display that point cloud in rviz as well. So far so good. Now here is my problem: whenever I increase the number of points in the point cloud and try to display it in rviz, it doesn't work; it seems to stop my node from publishing. When I close rviz, my node immediately starts publishing again. It is weird. The increase in points has no effect on the publisher itself, but as soon as rviz gets involved the node no longer publishes consistently (it still publishes sometimes, but not as fast as before rviz was involved). What could be the reason for this strange behavior? Thanks in advance!

Build an Octomap using PointCloud2 data

I could use a little help getting Octomap to publish occupancy grid cell data. I have been reading the explanation on the ROS wiki for how to build an Octomap using the octomap_server node and PointCloud2 data. I have set up the following:

1. My RealSense camera is publishing data on the topic /cloud_in.
2. I am publishing /tf frames from "base_link" to "camera_link" at 5 Hz.
3. I am running `rosrun octomap_server octomap_server_node`.

What else do I need to do to get Octomap publishing occupancy grid cells on the topic /occupied_cells_vis_array? I might be confused about how to set up the tf frames to match the data coming in on the /cloud_in topic. My understanding is a bit hazy when it comes to tf frames and how they are matched with a topic.

Target pose from pointcloud

Hi, I'm using ROS Kinetic and PCL to find a target point. The aim is to use MoveIt to move to the target point/pose. I used PCL to create a concave hull and, as shown in the code below, I publish the tf frame of the first point in cloud_hull. However, I'm lost trying to get the pose as a geometry_msgs type or another format valid for path planning in MoveIt. I can visualize the frame and get the coordinates; I want to know how to convert it to a PoseStamped or PointStamped so I can use the point as a goal.

```cpp
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_hull (new pcl::PointCloud<pcl::PointXYZ>);
pcl::ConcaveHull<pcl::PointXYZ> chull;
chull.setInputCloud (cloud_projected);
chull.setAlpha (0.1);
chull.reconstruct (*cloud_hull);

static tf::TransformBroadcaster br;
tf::Transform part_transform;
part_transform.setOrigin( tf::Vector3(cloud_hull->at(1).x, cloud_hull->at(1).y, cloud_hull->at(1).z) );
tf::Quaternion q;
q.setRPY(0, 0, 0);
part_transform.setRotation(q);
br.sendTransform(tf::StampedTransform(part_transform, ros::Time::now(), world_frame, "Potshell_outline"));
```

[Here](https://answers.ros.org/question/35084/transform-a-point-from-pointcloud/?answer=35086#post-id-35086) is an answer I was hoping to use, but I'm not sure how to get the header of cloud_hull:

```cpp
geometry_msgs::PointStamped pt;
geometry_msgs::PointStamped pt_transformed;
pt.header = myCloud->header;
pt.point.x = myCloud->points[1].x;
pt.point.y = myCloud->points[1].y;
pt.point.z = myCloud->points[1].z;

tf::TransformListener listener;
listener.transformPoint("target_frame", pt, pt_transformed);
```

If I use `pt.header = cloud_hull->header;` I get: no known conversion from `pcl::PCLHeader` to `std_msgs::Header_`. Any help would be highly appreciated. In short, I want to get the point coordinates in cloud_hull as a `PoseStamped` message to set as a target. Thanks.
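A minimal sketch of the missing conversion, assuming pcl_conversions is available: `pcl_conversions::fromPCL` turns the `pcl::PCLHeader` into a `std_msgs::Header`, after which the hull point can be wrapped in a PoseStamped goal:

```cpp
#include <geometry_msgs/PoseStamped.h>
#include <pcl_conversions/pcl_conversions.h>

// Build a PoseStamped goal from one point of the concave hull.
geometry_msgs::PoseStamped hullPointToPose(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud_hull)
{
  geometry_msgs::PoseStamped pose;
  // Convert pcl::PCLHeader -> std_msgs::Header (frame_id and stamp).
  pcl_conversions::fromPCL(cloud_hull->header, pose.header);
  pose.pose.position.x = cloud_hull->at(1).x;
  pose.pose.position.y = cloud_hull->at(1).y;
  pose.pose.position.z = cloud_hull->at(1).z;
  pose.pose.orientation.w = 1.0;  // identity orientation; adjust for the real approach direction
  return pose;
}
```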

Problem calculating coordinates from PointCloud2

Hi all, this is my callback:

```cpp
void get_point_cloud(const sensor_msgs::PointCloud2 &img)
{
  sensor_msgs::PointCloud2 img1;
  tf::TransformListener listener;
  tf::StampedTransform transform;
  listener.lookupTransform(img.header.frame_id, "odom", ros::Time(0), transform);
  pcl_ros::transformPointCloud("odom", img, img1, listener);

  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
  pcl::fromROSMsg(img1, *cloud);

  cv::Mat frame;
  if (cloud->isOrganized())
  {
    frame = cv::Mat(cloud->height, cloud->width, CV_8UC3);
    for (int h = 0; h < frame.rows; h++)
    {
      for (int w = 0; w < frame.cols; w++)
      {
        pcl::PointXYZRGBA point = cloud->at(w, h);
        if (point.y > 0)
        {
          Eigen::Vector3i rgb = point.getRGBVector3i();
          frame.at<cv::Vec3b>(h, w)[0] = rgb[2];
          frame.at<cv::Vec3b>(h, w)[1] = rgb[1];
          frame.at<cv::Vec3b>(h, w)[2] = rgb[0];
        }
        else
        {
          frame.at<cv::Vec3b>(h, w)[0] = 0;
          frame.at<cv::Vec3b>(h, w)[1] = 0;
          frame.at<cv::Vec3b>(h, w)[2] = 0;
        }
      }
    }
  }
  cv::imshow("frame", frame);
  cv::waitKey(1);
}
```

I want to use PointCloud2 data as a laser for my SLAM package, so I try to convert `img` from the `camera_depth_optical_frame` frame into `img1` in the `odom` frame and then apply those coordinates to my map. When I run this code I get:

```
terminate called after throwing an instance of 'tf2::LookupException'
what(): "camera_depth_optical_frame" passed to lookupTransform argument target_frame does not exist.
```

What is the problem? I can see the `odom` frame in the tf tree! And my other problem is: when I don't use the tf transform and use only the `img` data (in camera_depth_optical_frame), some points have `point.y < 0`. What does that mean? I thought those points must have `point.y` greater than 0!
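For reference, a minimal sketch of the usual pattern: keep the TransformListener alive outside the callback (one constructed inside the callback has had no time to fill its buffer) and wait for the transform before using it. Topic name and timeout are placeholders:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <tf/transform_listener.h>
#include <pcl_ros/transforms.h>

// Keep the listener alive for the lifetime of the node so it can
// accumulate /tf messages before the first cloud arrives.
boost::shared_ptr<tf::TransformListener> listener;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& img)
{
  sensor_msgs::PointCloud2 img1;
  // Wait until the odom -> camera frame transform is actually available.
  if (!listener->waitForTransform("odom", img->header.frame_id,
                                  img->header.stamp, ros::Duration(1.0)))
  {
    ROS_WARN("Transform not available yet");
    return;
  }
  pcl_ros::transformPointCloud("odom", *img, img1, *listener);
  // ... process img1 as before ...
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_to_odom");
  ros::NodeHandle nh;
  listener.reset(new tf::TransformListener());
  ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);  // placeholder topic
  ros::spin();
  return 0;
}
```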

Kinect2 doesn't capture all point clouds

Hi everyone, my Kinect 2 does not save the entire point cloud that is displayed in rviz or in the viewer; it ignores surfaces and some elements (walls, objects, ground, etc.). The strange thing is that in the viewer the points are there. For example:

![image description](https://imageshack.com/a/img922/2676/3gKQyA.png)
![image description](https://imageshack.com/a/img922/9625/p67PAK.png)
![image description](https://imageshack.com/a/img922/3682/LmWCtD.png)

I use this code to capture the XYZ RGB data:

```python
import rospy
from std_msgs.msg import String
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2
import struct
import ctypes
import time, os, sys

path_data = os.getcwd() + "/data/raw"
flag = 0

def saveCloud_txt(fileName):
    fileName = fileName + "_" + time.strftime("%d-%m-%y") + '-' + time.strftime("%I-%M-%S")
    completeName = os.path.join(path_data, fileName + '.txt')
    f = open(completeName, "a")  # opens file with name of "[fileName].txt"
    dataFile = "%" + "X" + "\t" + "Y" + "\t" + "Z" + "\t" + "R" + "\t" + "G" + "\t" + "B" + "\n"
    f.write(dataFile)
    return f

def callback(data):
    global flag
    rospy.loginfo('Data acquiring')
    pc = pc2.read_points(data, skip_nans=True)
    int_data = list(pc)
    for P in int_data:
        x = P[0]
        z = -P[1]
        y = P[2]
        s = struct.pack('>f', P[3])
        i = struct.unpack('>l', s)[0]
        pack = ctypes.c_uint32(i).value
        R = (pack & 0x00FF0000) >> 16
        G = (pack & 0x0000FF00) >> 8
        B = (pack & 0x000000FF)
        dataFile = str(x) + "\t" + str(y) + "\t" + str(z) + "\t" + str(R) + "\t" + str(G) + "\t" + str(B) + "\n"
        f.write(dataFile)
        print "X: " + str(x) + "; Y: " + str(y) + "; Z: " + str(z) + "; R: " + str(R) + "; G:" + str(G) + "; B:" + str(B)
        del x, y, z, R, G, B
    print "--------------------------------------------------------------------------------------------"
    flag = 1

def ros_listener():
    global flag
    # In ROS, nodes are uniquely named. If two nodes with the same
    # name are launched, the previous one is kicked off. The
    # anonymous=True flag means that rospy will choose a unique
    # name for our 'listener' node so that multiple listeners can
    # run simultaneously.
    rospy.init_node('Kinect_Capture', anonymous=True)
    rospy.Subscriber('/kinect2/sd/points', PointCloud2, callback)
    while flag == 0:
        nop_op = 0
    # spin() simply keeps python from exiting until this node is stopped
    # rospy.spin()

if __name__ == '__main__':
    fileName = raw_input('File name: ')
    print "5"
    time.sleep(1)
    print "4"
    time.sleep(1)
    print "3"
    time.sleep(1)
    print "2"
    time.sleep(1)
    print "1"
    time.sleep(1)
    f = saveCloud_txt(fileName)
    ros_listener()
```

SICK MRS6124 detecting phantom data; how to filter PointCloud2 data by distance?

Hi everyone, I am using the [SICK MRS6124](https://www.sick.com/de/en/detection-and-ranging-solutions/3d-lidar-sensors/mrs6000/mrs6124r-131001/p/p533545?ff_data=JmZmX2lkPXA1MzM1NDUmZmZfbWFzdGVySWQ9cDUzMzU0NSZmZl90aXRsZT1NUlM2MTI0Ui0xMzEwMDEmZmZfcXVlcnk9JmZmX3Bvcz0xJmZmX29yaWdQb3M9MSZmZl9wYWdlPTEmZmZfcGFnZVNpemU9OCZmZl9vcmlnUGFnZVNpemU9OCZmZl9zaW1pPTkzLjA=) with the [sick_scan](http://wiki.ros.org/sick_scan) package to produce a PointCloud2 message, which I then pass into packages such as costmap_2d. The data point I detect sits at 0,0 relative to the laser frame (the origin frame of the PointCloud2 data). This is obviously a problem, as the robot begins to avoid this phantom obstacle. The user manual for my LIDAR says it has a minimum operating distance of 0.5 m, so it doesn't make sense that I would be getting a point returned at 0,0. I am hoping that someone can show me how to filter the PointCloud2 data to exclude all data closer than 0.5 m. Thank you everyone in advance!
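A minimal sketch of one way to drop the phantom return with PCL: convert the message, remove everything inside a 0.5 m box around the sensor origin with `pcl::CropBox`, and republish (topic names are placeholders):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/crop_box.h>
#include <pcl_conversions/pcl_conversions.h>

ros::Publisher pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromROSMsg(*msg, *cloud);

  // Remove everything inside a 0.5 m box centred on the sensor origin,
  // which covers the phantom return at (0, 0).
  pcl::CropBox<pcl::PointXYZ> box;
  box.setInputCloud(cloud);
  box.setMin(Eigen::Vector4f(-0.5f, -0.5f, -0.5f, 1.0f));
  box.setMax(Eigen::Vector4f( 0.5f,  0.5f,  0.5f, 1.0f));
  box.setNegative(true);  // keep the points OUTSIDE the box

  pcl::PointCloud<pcl::PointXYZ> filtered;
  box.filter(filtered);

  sensor_msgs::PointCloud2 out;
  pcl::toROSMsg(filtered, out);
  out.header = msg->header;
  pub.publish(out);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "near_range_filter");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::PointCloud2>("cloud_filtered", 1);
  ros::Subscriber sub = nh.subscribe("cloud", 1, cloudCallback);
  ros::spin();
  return 0;
}
```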

How to filter PointCloud2 data based on distances?

Hi everyone, I am using a [SICK MRS6124](https://www.sick.com/de/en/detection-and-ranging-solutions/3d-lidar-sensors/mrs6000/mrs6124r-131001/p/p533545?ff_data=JmZmX2lkPXA1MzM1NDUmZmZfbWFzdGVySWQ9cDUzMzU0NSZmZl90aXRsZT1NUlM2MTI0Ui0xMzEwMDEmZmZfcXVlcnk9JmZmX3Bvcz0xJmZmX29yaWdQb3M9MSZmZl9wYWdlPTEmZmZfcGFnZVNpemU9OCZmZl9vcmlnUGFnZVNpemU9OCZmZl9zaW1pPTkzLjA=) and experiencing a problem where a data point appears at 0,0. I am trying to filter this point out, but I don't understand how to use [pcl_ros](http://wiki.ros.org/pcl) to filter points based on their distance. I want to do this because the LIDAR has a working range of 0.5 m to 200 m, but clearly 0,0 is not 0.5 m away... If anyone could show me how to filter PointCloud2 data by distance from the sensor, that would be a huge help. Thank you in advance, everyone.
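A minimal sketch of a plain per-point range check, keeping only points whose Euclidean distance from the sensor lies within the working range; the 0.5 m and 200 m limits and the topic names are assumptions taken from the question:

```cpp
#include <cmath>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

ros::Publisher pub;
const double kMinRange = 0.5;    // assumed sensor minimum range in metres
const double kMaxRange = 200.0;  // assumed sensor maximum range in metres

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ> in, out;
  pcl::fromROSMsg(*msg, in);

  // Keep only the points whose distance from the sensor origin is in range.
  for (const pcl::PointXYZ& p : in.points)
  {
    const double range = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (range >= kMinRange && range <= kMaxRange)
      out.points.push_back(p);
  }
  out.width = out.points.size();
  out.height = 1;
  out.is_dense = true;

  sensor_msgs::PointCloud2 filtered;
  pcl::toROSMsg(out, filtered);
  filtered.header = msg->header;
  pub.publish(filtered);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "range_filter");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::PointCloud2>("cloud_filtered", 1);
  ros::Subscriber sub = nh.subscribe("cloud", 1, cloudCallback);
  ros::spin();
  return 0;
}
```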

Generate and publish PointCloud2 in ros2?

What's the most convenient way to generate a bunch of points in a loop, assign xyz (and possibly rgb), and then publish them as a PointCloud2? I'm mainly interested in a full C++ code example, but Python would be useful to have here also, and versions with and without PCL would be useful too. pcl_ros isn't available in Crystal, but there is pcl_conversions (`ros-crystal-pcl-conversions`). I encountered some oddities with linking to PCL in colcon/ament last time I tried; maybe that is better addressed in a separate question, but CMakeLists.txt excerpts are additionally helpful.
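A minimal PCL-free sketch for ROS 2, using `sensor_msgs::PointCloud2Modifier` to lay out the xyz fields and the field iterators to fill them (frame and topic names are placeholders; header paths and the publisher/QoS API are shown as in recent distros and may differ slightly in Crystal):

```cpp
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <sensor_msgs/point_cloud2_iterator.hpp>

using namespace std::chrono_literals;

class CloudPublisher : public rclcpp::Node
{
public:
  CloudPublisher() : Node("cloud_publisher")
  {
    pub_ = create_publisher<sensor_msgs::msg::PointCloud2>("cloud", 10);
    timer_ = create_wall_timer(1s, std::bind(&CloudPublisher::publishCloud, this));
  }

private:
  void publishCloud()
  {
    sensor_msgs::msg::PointCloud2 cloud;
    cloud.header.frame_id = "map";  // assumed frame
    cloud.header.stamp = now();
    cloud.height = 1;

    // Declare x/y/z float32 fields and size the data buffer for 100 points.
    sensor_msgs::PointCloud2Modifier modifier(cloud);
    modifier.setPointCloud2FieldsByString(1, "xyz");
    modifier.resize(100);

    sensor_msgs::PointCloud2Iterator<float> iter_x(cloud, "x");
    sensor_msgs::PointCloud2Iterator<float> iter_y(cloud, "y");
    sensor_msgs::PointCloud2Iterator<float> iter_z(cloud, "z");
    for (size_t i = 0; i < 100; ++i, ++iter_x, ++iter_y, ++iter_z)
    {
      *iter_x = 0.01f * i;  // dummy data
      *iter_y = 0.0f;
      *iter_z = 1.0f;
    }
    pub_->publish(cloud);
  }

  rclcpp::Publisher<sensor_msgs::msg::PointCloud2>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<CloudPublisher>());
  rclcpp::shutdown();
  return 0;
}
```

On the build side this only needs `find_package(sensor_msgs REQUIRED)` plus `ament_target_dependencies(<target> rclcpp sensor_msgs)`, which sidesteps the PCL linking issue entirely.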

Issue with message_filters and PointCloud2 in python

I want to subscribe to point cloud data and image data, sync them, and then publish them again. I have written a short and simple script for this:

```python
import rospy
from sensor_msgs.msg import Image as SesnorImage
from sensor_msgs.msg import CameraInfo
# from sensor_msgs.msg import PointCloud2
import message_filters


class synchronizer:

    def __init__(self):
        self.pub_Image = rospy.Publisher('image_raw_sync', SesnorImage, queue_size=1)
        self.pub_Cam_Info = rospy.Publisher('camera_info_sync', CameraInfo, queue_size=1)
        # self.pub_Lidar = rospy.Publisher('rslidar_points_sync', PointCloud2, queue_size=1)

        self.imageInput = message_filters.Subscriber('/image_raw', SesnorImage)
        self.cameraInfo = message_filters.Subscriber('/camera_info', CameraInfo)
        # self.lidar = message_filters.Subscriber('/rslidar_points', PointCloud2)

        self.ts = message_filters.TimeSynchronizer([self.imageInput,
                                                    self.cameraInfo
                                                    # , self.lidar
                                                    ], 10)
        self.ts.registerCallback(self.general_callback)

        self._image_raw = SesnorImage()
        self._camera_info = CameraInfo()
        # self._rslidar_points = PointCloud2()

    def general_callback(self, image_raw, camera_info
                         # , rslidar_points
                         ):
        self._image_raw = image_raw
        self._camera_info = camera_info
        # self._rslidar_points = rslidar_points

    def publisher(self):
        while True:
            self.pub_Image.publish(self._image_raw)
            self.pub_Cam_Info.publish(self._camera_info)
            # self.pub_Lidar.publish(self._rslidar_points)


if __name__ == '__main__':
    rospy.init_node('synchronizer')
    synchronizer = synchronizer()
    synchronizer.publisher()
    rospy.spin()
```

But when I uncomment the commented lines, there is no data on the published topics! As soon as I add the PointCloud2 messages, everything goes bad, and I don't know why. Why is this happening?