Industry 4.0: Machine vision in 3D


Machine vision systems based on video cameras and image sensors have existed for more than 20 years, but the pace of their development and adoption has increased significantly in recent years. Initially machine vision was used mainly in scientific and military fields, but since the early 2000s, with advances in optical sensors for image acquisition and digitization, it has become far more widespread. Growing performance has made it possible to use machine vision not only for object recognition (where 2D systems initially dominated) but also, with the development of 3D technology, in lidars based on non-scanning designs.

All this has allowed machine vision to step confidently into automation and security systems, including industrial ones. The essence of such systems is not only measuring the distance to objects, but also identifying them and recognizing their position and volume, that is, taking the depth of the scene and the object into account: the two-dimensional image is transformed into a three-dimensional one in which each pixel carries not just brightness but also range. Such a representation differs greatly from the biological vision we are used to.

Machine vision in 3D

Applications for "true" 3D machine vision, which does not merely measure the distance to some averaged surface treated as a plane but estimates the depth of the scene and the position of objects within it, include logistics, quality control, navigation, robotics, accurate face recognition (including partially hidden faces), security and protection systems, systems that prevent industrial injuries, and video surveillance. Such technology can solve many of the problems that traditional 2D devices face today. It is the combination of high-resolution depth data with powerful classification algorithms that opens up wide opportunities in these areas.

Several technologies are used to obtain such images, which encode depth rather than color or brightness. These images are not what we usually picture: they do not match human vision and are intended for processing by programmed automata or systems with elements of artificial intelligence (AI). In general terms, the result is a set of points graded by distance/brightness or distance/color, and the fine detail of an object's shape is often simplified or lost.
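The idea of a pixel-plus-range representation can be illustrated with a minimal sketch: a depth map (one range value per pixel) is back-projected into a 3D point set using a pinhole-camera model. The intrinsics `fx`, `fy`, `cx`, `cy` and the function name are illustrative assumptions, not part of any particular sensor's API; real systems obtain these values from calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (range in meters per pixel) into an
    N x 3 point cloud using a simple pinhole-camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # horizontal offset scaled by range
    y = (v - cy) * z / fy  # vertical offset scaled by range
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 4x4 depth map: a flat surface 2 m from the camera
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

Each output row is one scene point; brightness or color channels, when present, simply ride along as extra columns per point.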

At first glance everything seems clear and simple. However, this is not the case. Proper operation of such systems requires suitable image sensors; lenses (sometimes with autofocus and an adjustable aperture); illumination (usually pulsed, fairly powerful lasers or LEDs at a specific wavelength, most often outside the visible range); and hardware and software for high-speed direct and post-processing using various algorithms. In addition, the systems must be calibrated for distance (not at a single point, but at points across the capture area) and for temperature, and measures must be taken to compensate for external illumination.
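The calibration and ambient-light compensation mentioned above can be sketched as follows. This is a simplified illustration under stated assumptions: the function names, the single-range offset map, and the signal threshold are all hypothetical; real sensors calibrate at multiple ranges and temperatures and use vendor-specific confidence metrics.

```python
import numpy as np

def calibrate_offsets(measured, true_distance):
    """Per-pixel range-offset map from one frame of a flat target at a
    known distance (a deliberately simplified calibration step)."""
    return measured - true_distance

def correct_frame(raw_depth, offsets, ambient, active):
    """Subtract calibration offsets, then mark pixels unreliable where
    the actively illuminated signal barely exceeds the ambient-only
    reading (crude external-illumination compensation)."""
    depth = raw_depth - offsets
    snr = active - ambient
    depth = np.where(snr < 5.0, np.nan, depth)  # illustrative threshold
    return depth

# Calibration: flat target at a known 2.0 m, sensor reads 5 cm long
measured = np.full((3, 3), 2.05)
offsets = calibrate_offsets(measured, 2.0)

# A working frame plus illumination-off / illumination-on captures
raw = np.full((3, 3), 3.05)
ambient = np.full((3, 3), 10.0)
active = np.full((3, 3), 100.0)
active[0, 0] = 12.0  # one pixel washed out by sunlight
depth = correct_frame(raw, offsets, ambient, active)
print(np.isnan(depth[0, 0]), depth[1, 1])  # True 3.0
```

The key point is that both corrections are per-pixel: a single global offset or a single brightness threshold would not cover the whole capture area, which is exactly why calibration at points across the field of view is required.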

For automation and safety systems in production, all of the above requires the most careful design. The analysis usually has to be carried out in real time, with minimal delay and fast response, which imposes even stricter speed requirements; the final transmitted information is sometimes compressed, often by intelligent methods.
