Optimize your 3D Data for Surface-based 3D-Matching with MVTec HALCON


Hello and welcome. In a previous tutorial, we got to know the basics of surface-based matching with MVTec HALCON. In this video, we'll learn how to prepare your 3D data to improve the robustness and speed of surface-based matching.

Let's have a look at an example. Using this setup, we want to locate these wooden blocks. For simple objects like this, you can hand-craft your 3D object model. Here, we can use gen_box_object_model_3d with the dimensions of the wooden block in meters. Let's take a look at it: right now, it is a simple geometric primitive.
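In HDevelop, this step might look like the following sketch; the pose and the dimensions are illustrative placeholders, not the actual measurements from the video.

```
* Box model at the origin; dimensions in meters (illustrative values).
* The pose is given as [TransX, TransY, TransZ, RotX, RotY, RotZ, Type].
gen_box_object_model_3d ([0, 0, 0, 0, 0, 0, 0], 0.06, 0.03, 0.015, ObjectModel3D)
```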
Note that when creating a surface model, there are specific requirements for defining the model: First, the 3D object model that is used as the input needs to contain points. Second, it has to contain either point normals, a polygon mesh, or a 2D mapping. Currently, the generated 3D box contains no such data. Using triangulate_object_model_3d, we can generate points and a triangular mesh for the box.
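A sketch of this step is shown below; the 'implicit' method is an assumption on my part, so check the operator reference for the method and generic parameters that fit your data.

```
* Generate points and a triangular mesh for the box primitive.
* 'implicit' is shown as one possible method; see the reference
* documentation of triangulate_object_model_3d for alternatives.
triangulate_object_model_3d (ObjectModel3D, 'implicit', [], [], TriangulatedObjectModel3D, Information)
```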
When inspecting it, we can check out the primitive again, as well as the triangles and the points.

One last hint: Before creating the surface model, it's a good idea to check your object model for unnecessary surfaces. For example, some CAD models have surfaces on the inside. Those surfaces can never be observed by a camera and thus should be removed. In our case, because of the symmetry of our object, we could remove some redundant triangles on the back side of the box.
Speaking of symmetry: Have a look at the documentation of the operator set_surface_model_param. To speed up the matching, you can define symmetries of your object; in our case, we can define the symmetry pose like this. Additionally, you can restrict the range of rotations in which the surface model is searched for.

Before we start optimizing performance, however, we need to create the surface model. Then, we can try to locate it in some 3D scenes.
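Creating the surface model and declaring a symmetry might look like this sketch; the sampling distance and the symmetry pose are illustrative assumptions, and you would derive the symmetry from your own object.

```
* Create the surface model; 0.03 is a common starting value for
* the relative sampling distance, not a tuned one.
create_surface_model (TriangulatedObjectModel3D, 0.03, [], [], SurfaceModelID)
* Example: a 180° rotational symmetry around the z-axis of the model
* (the pose is illustrative; derive it from your own object).
create_pose (0, 0, 0, 0, 0, 180, 'Rp+T', 'gba', 'point', SymmetryPose)
set_surface_model_param (SurfaceModelID, 'symmetry_poses', SymmetryPose)
```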
First, let's take a closer look at where the 3D scene is coming from. Here, the 3D sensor returned a 2D mapping, consisting of an image triple that contains the X, Y, and Z coordinates of the 3D points. The data of 3D sensors often contains invalid pixels where the 3D scene could not be reconstructed. Those pixels often have a gray value of zero and thus appear black. They can lead to false results and increased runtime, so we highly recommend filtering them out, by using a threshold, for example. In this case, we already did this when acquiring the images. And one last tip: If your 3D data is noisy, you can apply a median filter to the Z-image.
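Both tips could be sketched like this; the threshold bounds are illustrative and depend on your sensor's value range.

```
* Keep only valid pixels; here, invalid pixels are assumed to be 0.
* The bounds are illustrative and depend on your sensor.
threshold (Z, RegionValid, 0.001, 10.0)
reduce_domain (Z, RegionValid, ZValid)
* Optional: suppress noise in the Z-image with a median filter.
median_image (ZValid, ZMedian, 'circle', 2, 'mirrored')
```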
Now, using the operator xyz_to_object_model_3d, we can create a 3D object model of the scene. Alternatively, the 3D scene could also be the output of reconstruct_surface_stereo, using a stereo camera setup, for example. Regardless of the source of the 3D point cloud, we next use find_surface_model to locate the surface model. For this, the 3D scene just needs to contain points, and either point normals or a 2D mapping. Then, we visualize the result of the matching.
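These two steps might look like the following sketch; the sampling, key point, and score parameters are typical starting values, not values tuned for this scene.

```
* Create the 3D scene from the X, Y, and Z images.
xyz_to_object_model_3d (X, Y, Z, ObjectModel3DScene)
* Locate the surface model in the scene; 0.05 / 0.2 / 0.3 are
* common starting values for RelSamplingDistance, KeyPointFraction,
* and MinScore.
find_surface_model (SurfaceModelID, ObjectModel3DScene, 0.05, 0.2, 0.3, 'false', [], [], Pose, Score, SurfaceMatchingResultID)
```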
You might note that only one match is found; this is the default setting of find_surface_model. Using its generic parameters, we can set 'num_matches' to '5', just to provoke some more problems. Now, when visualizing the result, the three wooden blocks are found correctly. However, there are also false matches in the background of the scene, and their score is nearly as high as the score of the correct matches.
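Requesting more matches via the generic parameters could look like this; the other parameter values are the same illustrative starting values as before.

```
* Request up to five matches via the generic parameters.
find_surface_model (SurfaceModelID, ObjectModel3DScene, 0.05, 0.2, 0.3, 'false', ['num_matches'], [5], Poses, Scores, SurfaceMatchingResultID)
```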
One way to resolve this would be to use edge information contained in the 3D scene; more on this in another tutorial. Another way to avoid this problem is to remove the background from the scene, which can also speed up the matching significantly. There are different approaches available for this, both in the 2D and in the 3D data.
For example, if you have a planar background, you can apply a threshold to the Z-image, or use select_points_object_model_3d to select only points near your camera, without the background. Alternatively, you could also save an image of the empty background. Then, in the Z-image, you could use sub_image to subtract the background. In the 3D data, you could use distance_object_model_3d and select_points_object_model_3d to select only points with a certain minimum distance from the background. Note also the operator regiongrowing, which can yield fast and robust results on Z-images and is especially useful for tilted and uneven backgrounds. If possible, we recommend using the 2D methods, since they are more efficient.
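The 3D variant might be sketched as follows; ObjectModel3DBackground is an assumed name for a previously acquired scan of the empty background, and the 5 mm distance is an illustrative value.

```
* distance_object_model_3d stores the point-to-background distances
* in the '&distance' attribute of the scene model.
distance_object_model_3d (ObjectModel3DScene, ObjectModel3DBackground, [], 0, [], [])
* Keep only points at least 5 mm away from the background.
select_points_object_model_3d (ObjectModel3DScene, '&distance', 0.005, 1.0, ObjectModel3DForeground)
```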
Let's have a look at the Z-image here. Using the Gray Histogram tool, we can threshold the image and separate the background from the foreground. Then, we can insert the respective code. Next, we reduce the domain of the Z-image using the resulting region. Let's have a look at the result: First, note that the runtime of find_surface_model has been reduced considerably. Unfortunately, there are still false matches; now, however, their score is significantly lower. Thus, we can go back to find_surface_model and simply increase MinScore.
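Put together, the 2D background removal and the stricter score might look like this sketch; the threshold bounds stand in for the values read off the Gray Histogram tool, and the MinScore of 0.5 is an illustrative increase over the earlier starting value.

```
* Segment the foreground in the Z-image; the bounds would come from
* the Gray Histogram tool and are illustrative here.
threshold (Z, RegionForeground, 0.3, 0.5)
reduce_domain (Z, RegionForeground, ZForeground)
xyz_to_object_model_3d (X, Y, ZForeground, ObjectModel3DScene)
* With the background removed, a higher MinScore rejects the
* remaining low-scoring false matches.
find_surface_model (SurfaceModelID, ObjectModel3DScene, 0.05, 0.2, 0.5, 'false', ['num_matches'], [5], Poses, Scores, SurfaceMatchingResultID)
```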
Finally, we can reset the program and go through all 3D scenes. The wooden blocks are now found correctly.

This concludes the tutorial. Next, you can check out our tutorial on edge-supported surface-based matching. Thank you for watching.

