1. College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, P. R. China; 2. State-Owned Jinjiang Machine Factory, Chengdu 610043, P. R. China
Abstract: With the rapid development of the machining and manufacturing industry, welding has been widely used to form connections between structural parts. At present, manual methods are often used for welding and quality inspection, with low efficiency and unstable product quality. To meet the requirements of visual inspection of weld feature size, a visual inspection system for weld feature size based on line structured light (LSL) is designed and built in this paper. An adaptive light stripe sub-pixel center extraction algorithm and a feature point extraction algorithm for welding light stripes are proposed. The experimental results show that the detection error of the weld width is 0.216 mm, the detection error of the remaining height is 0.035 mm, a single measurement costs 109 ms, and the inspection stability and repeatability of the system are within 1%. Our approach can meet the online detection requirements of practical applications.
Key words: optical inspection; weld; feature size; light stripe center extraction; feature point extraction
Welding is commonly used to join metals or other thermoplastic materials, thanks to its high connection strength and high reliability. Nowadays, with the rapid development of machining and manufacturing, it has been widely used in forming connections of structural parts. The quality of welding determines the quality of the product and its service life to a great extent. At present, in most manufacturing companies in China, the welding and welding seam grinding processes mainly adopt manual methods: Human eyes are depended on to judge whether the characteristic size of the welding seam is reasonable and whether there are defects such as welding knobs, pores, or cracks, as well as to determine the quality of the weldment. However, manual detection is inefficient and suffers from missed detections and misjudgments. Manual grinding seriously affects product quality with low production efficiency, and sometimes even damages the metal base material because of poor accuracy and low efficiency. Therefore, improving the efficiency of welding operations and the quality of welding products is critical to modern welding production.
Active visual inspection technology based on structured light has the advantages of high accuracy, fast measurement speed, and good anti-interference. It has become a research hot spot in recent years and has been applied to the high-precision inspection of parts[1-3]. The optical measurement method is based on modern optical technology and combines multi-disciplinary technologies such as optoelectronics, computer vision, and graphics. Thanks to its simple structure, high detection efficiency, and accurate measurement, it has become the first choice for optical measurement and has developed rapidly. It is widely used in the fields of dimensional measurement, three-dimensional topography reconstruction, and surface quality inspection.
The structured light measurement method can meet the requirements of high accuracy, high stability, and high real-time performance for welding seam detection. Accurately obtaining the remaining dimensions of the welding seam after welding is the key factor to effectively repair the welding seam and improve the grinding quality of the weld[4]. Therefore, in this paper, a three-dimensional visual measurement method based on line structured light is proposed to position weld feature points and to detect geometric feature sizes.
The visual inspection system designed in this paper aims to realize the automatic measurement of the weld geometry. The detection system and the geometric quantities to be measured are shown in Fig.1. The geometric quantities are the width and the remaining height of the weld. The system developed in this paper needs to ensure high measurement accuracy and stability in real time while measuring geometric dimensions.
Fig.1 Detection system and geometric quantity to be measured
A typical weld generally consists of a weld zone, a fusion line, a heat-affected zone, and the parent material, as shown in Fig.2(a). The geometrical parameters of the weld are shown in Fig.2(b). Due to practical considerations and the difficulty of visual measurement, the system only measures the weld width and the remaining height. The detection error of the weld width must be within 0.25 mm and the detection error of the remaining height must be within 0.10 mm.
Fig.2 Components and structural parameters of a weld
The vision detection system consists of a laser projection module and an image acquisition module. We choose a layout with a tilted laser and a vertical camera, which is easy to realize: the structure is simple, and the welding seam in the image is enlarged.
The hardware platform of the visual inspection system for weld feature size based on line structured light is shown in Fig.3. It is mainly composed of a line structured light sensor, a fixture module, a detection object, a stepper motor, and an optical experiment bench.
Fig.3 Hardware platform of the visual detection system
Since the actual light stripe has a certain width, the center line of the light stripe must be extracted first in the implementation of the 3D visual measurement technology based on line structured light. Therefore, the accuracy of extracting the center of the light stripe is very important to the performance of the entire system and directly affects the measurement accuracy. Based on an investigation of line structured light stripe characteristics, the influencing factors of center extraction, and its principle, this paper proposes an adaptive optimization algorithm and conducts detailed experimental verification, multi-dimensional algorithm evaluation, and result analysis.
Due to the influence of external environmental factors, the vibration of the device during operation, and the hardware of the system itself, noise often exists in the light stripe image acquired by the sensor. In order to reduce the influence of noise and improve the signal-to-noise ratio of the image, median filtering is used to denoise the original image. Then, the maximum inter-class variance method (Otsu's method) is used to segment the light stripe image, and the binarized light stripe is well extracted to ensure the accurate extraction of subsequent light stripe centers. The image preprocessing results are shown in Fig.4.
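As a concrete illustration of this segmentation step, the maximum inter-class variance (Otsu) threshold can be sketched in pure Python. The function below is an illustrative re-implementation operating on a flat list of 8-bit gray values, not the system's actual code:

```python
def otsu_threshold(pixels):
    """Return the threshold maximizing inter-class variance (Otsu's
    method) for a list of 8-bit gray values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_b = 0.0      # cumulative gray-level sum of the background class
    w_b = 0          # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background around 20, bright stripe around 220.
img = [20] * 90 + [25] * 10 + [215] * 15 + [220] * 5
t = otsu_threshold(img)
binary = [255 if p > t else 0 for p in img]
```

After thresholding, stripe pixels map to 255 and background to 0, matching the binarized images described above.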
Fig.4 Image preprocessing results
Traditional line structured light stripe center extraction algorithms fall into two categories: those extracting the geometric center and those extracting the energy center of the light stripe. The former mainly includes the geometric center method, the threshold method, and the skeleton refinement method[5]. These algorithms are fast but less accurate. The latter mainly includes the directional template method, the gray center of gravity method, the Steger method, and the curve fitting method[6-8]. The directional template method has poor stability, and the accuracy of its center extraction is average. Based on the principle of the gray barycenter method, this paper proposes a robust adaptive center extraction algorithm for the case in which the width of each cross section of the actual light stripe differs and there are multiple gray levels[9-10].
(1) Detecting the boundary of the light stripe
Pixels are retrieved column by column from the two ends of the binarized light stripe image toward the middle. When the gray value f(x, y) = 255 is detected for the first time in a column on the left end, or the gray value f(x, y) = 255 is detected for the last time in a column on the right, the scanning detection is stopped. The pixels obtained at this time are recorded as P_l and P_r, and their coordinates are (x_l, y_l) and (x_r, y_r). The set of pixels in the target area of the light stripe can be expressed as {f(x, y) | x_l ≤ x ≤ x_r}.
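The column-by-column boundary scan can be sketched as follows; `stripe_bounds` is a hypothetical helper that returns only the column indices x_l and x_r of the boundary pixels, under the assumption that the input is a row-major 0/255 binary image:

```python
def stripe_bounds(binary):
    """Find the leftmost and rightmost columns of a binarized stripe.

    `binary` is a row-major 2-D list of 0/255 values; returns the column
    indices (xl, xr) of the first and last columns containing stripe
    pixels, or None if no stripe pixel exists.
    """
    rows, cols = len(binary), len(binary[0])
    xl = xr = None
    for x in range(cols):                     # scan left -> right
        if any(binary[y][x] == 255 for y in range(rows)):
            xl = x
            break
    for x in range(cols - 1, -1, -1):         # scan right -> left
        if any(binary[y][x] == 255 for y in range(rows)):
            xr = x
            break
    return None if xl is None else (xl, xr)

# 4 x 8 toy image with a stripe occupying columns 2..5
img = [[0] * 8 for _ in range(4)]
for x in range(2, 6):
    img[2][x] = 255
bounds = stripe_bounds(img)   # -> (2, 5)
```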
Since the width of the light stripe varies in the length direction, this paper detects the upper and the lower boundaries of the light stripe in the target area to obtain the width of each section, and adaptively extracts the center point within this range. The pixel set f_i(x, y) of the ith section of the light stripe that participates in the calculation of the center point is
f_i(x, y) = {f(x, y) | y_up(i) ≤ y ≤ y_down(i)}
where f_i(x, y) represents the pixel with coordinates (x, y) on the ith section, i = x + 1; y_up(i) and y_down(i) are the vertical coordinates of the upper and the lower boundaries of the ith light stripe section, respectively; and w_i is the width of each section, w_i = y_down(i) − y_up(i).
(2) Adaptive light stripe sub-pixel center extraction
The difference in pixel gray values at the center of the stripe is small, which is a grayscale characteristic of the actual light stripe. At the same time, the gray-center-of-gravity method is susceptible to noise. In this paper, the quadratic weighted center-of-gravity method based on the adaptive width is used to extract the center point P of each section. The calculation principle of this method is shown in Fig.5. This method uses the pixel information within the adaptive width range and raises the weights of the gray values in the central area of the light stripe. The solid green dots in the figure are the extracted sub-pixel center points.
Fig.5 Schematic diagram of the quadratic weighted barycenter method based on the adaptive width
Convolution calculations along the normal direction of each point have low efficiency and insufficient stability. Moreover, the research objects in this section are horizontal light stripes, so there is no need to consider their directivity. Only the ordinate y_c(i) of the center point of the ith section needs to be calculated; then the point (x, y_c(i)) is the extracted initial center point (where x = i − 1). y_c(i) is calculated as
y_c(i) = Σ_y [y · f_i(x, y)²] / Σ_y [f_i(x, y)²],  y_up(i) ≤ y ≤ y_down(i)
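A minimal sketch of the per-section centroid, assuming "quadratic weighting" means squaring the gray value so that the brighter stripe core dominates (the paper's exact weighting may differ):

```python
def section_center(gray_col, y_up, y_down):
    """Sub-pixel ordinate of one stripe section via a squared-gray
    weighted centroid over the section's adaptive width [y_up, y_down].

    Squaring the gray value emphasizes the bright core of the stripe;
    this is one plausible reading of the 'quadratic weighted' barycenter.
    """
    num = den = 0.0
    for y in range(y_up, y_down + 1):
        w = float(gray_col[y]) ** 2     # quadratic weight
        num += y * w
        den += w
    return num / den if den else None

# Symmetric intensity profile centered at y = 5
col = [0, 0, 0, 40, 120, 200, 120, 40, 0, 0]
center = section_center(col, 3, 7)   # -> 5.0
```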
(3) Optimized relocation based on center point dispersion analysis
In order to further improve the accuracy of center point extraction and reduce the impact of background noise, this paper analyzes the initial center points to optimize their relocation. Then the least squares method is used to perform center point fitting, and the final sub-pixel light stripe center is obtained.
Suppose a window with a length of a, whose center is set at the initial center point of each section, moves from left to right along the light stripe sections. Each time it moves by one section, the average y-coordinate of the initial center points in the window is calculated to obtain the optimized center point
ȳ_c(i) = (1/a) Σ_m y_c(m),  i − (a − 1)/2 ≤ m ≤ i + (a − 1)/2
where i and m represent the serial numbers of the light stripe sections; y_c(i) and y_c(m) are the ordinates of the initial center points of the ith and the mth sections, respectively; ȳ_c(i) is the center of the current optimization window; and a is the calculation window size, which is usually an odd number (a ≥ 1).
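The moving-window averaging step can be sketched as below; the endpoint handling (shrinking the window at the stripe ends) is an assumption, since the paper does not specify it:

```python
def smooth_centers(ys, a=5):
    """Average each initial center ordinate over a window of `a`
    sections centered on it (a odd); endpoints use only the part of
    the window that stays inside the stripe."""
    n, h = len(ys), a // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)
        out.append(sum(ys[lo:hi]) / (hi - lo))
    return out

ys = [5.0, 5.1, 9.0, 5.0, 4.9]          # one noisy section at index 2
smoothed = smooth_centers(ys, a=3)
```

Averaging pulls the noisy ordinate toward its neighbors before the dispersion check below is applied.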
According to the light stripe boundary points P_l and P_r obtained in the previous section, the slope threshold k_T is calculated to evaluate the dispersion of the initial center points
k_T = (y_r − y_l)/(x_r − x_l)
Then the center points ȳ_c(i) after the average optimization are traversed along the length of the light stripe, and the slope k_1 between the current center point ȳ_c(i) and the (i − b)th section center point ȳ_c(i − b) is calculated as
k_1 = [ȳ_c(i) − ȳ_c(i − b)]/b
A dispersion analysis is conducted based on the slopes k_T and k_1 to determine whether there is a large deviation of the initial center point. According to the analysis of the experimental data in this paper, when k_1 is more than 10 times k_T, it is considered that there is a large deviation in the center point, and it needs to be optimized and repositioned, taking the (i − b)th section center point ȳ_c(i − b) as the reference point for relocation.
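A sketch of the dispersion check and relocation, assuming b = 1 and that relocation extrapolates from the reference point along the global slope k_T (the paper's exact relocation formula is not reproduced here):

```python
def relocate_outliers(ys, k_t, b=1, factor=10.0):
    """Scan the optimized centers; where the local slope k1 between
    section i and section i-b exceeds `factor` times the global slope
    threshold k_t, reposition y[i] from the reference point y[i-b].

    Extrapolating along k_t from the reference is one plausible reading
    of the relocation step, not the paper's exact formula.
    """
    out = list(ys)
    for i in range(b, len(out)):
        k1 = (out[i] - out[i - b]) / b
        if abs(k1) > factor * max(abs(k_t), 1e-6):
            out[i] = out[i - b] + b * k_t   # relocate from the reference
    return out

centers = [5.0, 5.01, 5.02, 9.5, 5.04]   # section 3 deviates sharply
fixed = relocate_outliers(centers, k_t=0.01)
```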
The sub-pixel coordinates of the center point obtained through the above two optimization steps are (i − 1, ȳ_c(i)).
Fig.6 shows the result of extracting the center points of the light stripe with the proposed optimization algorithm. Fig.7 shows the comparison between the extraction results of three traditional algorithms and the proposed optimization algorithm. It can be seen from Fig.7 that, macroscopically, the lines fitted by the four algorithms are almost all located at the actual center of the light stripe, with little difference between them; the position of the center line is relatively accurate.
Fig.6 Light strip center extraction results of the proposed optimization algorithm
Fig.7 Light strip center extraction results of the four algorithms
The standard deviation of the distances from the extracted center points of the light stripe to the fitted straight line is used to characterize the accuracy. A small standard deviation indicates that the extracted center points are less discrete and the algorithm performs at a higher precision. On the contrary, a large value means that the center point extraction of the algorithm is unstable.
The calculation formula of the standard deviation of the distances from the extracted center points to the fitted straight line is
σ = sqrt[(1/n) Σ_i (d_i − d̄)²]
where d_i is the distance from the ith center point to the fitted straight line; d̄ the average of the distances from each center point to the fitted straight line; and n the number of center points extracted.
d_i is calculated according to the distance formula from a point to a straight line
d_i = |a·x_i + b·y_i + c| / sqrt(a² + b²)
where (x_i, y_i) is the center point of the ith light stripe section, and a, b, c are the coefficients of the linear equation ax + by + c = 0 of the fitted light stripe.
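The accuracy metric of this subsection (point-to-line distances and their standard deviation) can be computed directly:

```python
import math

def center_line_std(points, a, b, c):
    """Standard deviation of the distances d_i from extracted center
    points to the fitted line a*x + b*y + c = 0, the accuracy metric
    used to compare the extraction algorithms."""
    norm = math.hypot(a, b)
    d = [abs(a * x + b * y + c) / norm for x, y in points]
    mean = sum(d) / len(d)
    return math.sqrt(sum((di - mean) ** 2 for di in d) / len(d))

# Centers scattered around the horizontal line y = 5 (0*x + 1*y - 5 = 0)
pts = [(0, 5.1), (1, 4.9), (2, 5.0), (3, 5.2), (4, 4.8)]
sigma = center_line_std(pts, 0.0, 1.0, -5.0)
```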
The accuracy level, the running time t, and the standard deviation σ of each algorithm are shown in Table 1. It can be seen that, compared with the three traditional algorithms, the proposed algorithm presents an obviously better extraction effect. The positions of the center points are accurate and stable, and the fitted centerline fully characterizes the actual appearance of the light stripe.
Table 1 Accuracy analysis of different algorithms
After the in-depth study of the stripe center extraction algorithm, the next step is to perform image preprocessing on the weld stripe image. Based on the recognition of the light stripe type and with the center extraction optimization algorithm proposed in this paper, the feature points are accurately extracted to obtain the actual geometric feature size of the weld.
There are few feature points to be extracted in the light stripe image of the weld, so the feature points can be directly extracted after image preprocessing. Image preprocessing generally includes region of interest (ROI) extraction, image filtering, and threshold segmentation. Combined with the system parameters, the characteristic size of the weld can be output[11]. The general steps are shown in Fig.8.
Fig.8 Flow chart of welding stripe image processing and feature point extraction algorithm
It is difficult to guarantee high welding quality during the welding process. Welds may have defects such as pits, pores, and incomplete penetration. As a result, the geometric characteristics of the sections of the actual weld differ, and they are not all arc-shaped. Therefore, the algorithm for extracting weld feature points should also be able to adapt to different situations; that is, it must accurately extract the characteristic points of welding seam light stripes with different geometric shapes.
(1) Multi-type seam light stripe image segmentation
The camera captures multiple weld seam images with different cross sections. The four most representative light stripe images were selected as the objects. The obtained light stripe images of the weld are shown in Fig.9. The quality of the weld shown in Fig.9(a) is the best: the modulated light stripe appears as an arc and the feature points are easy to extract. The weld quality in Fig.9(b) is the second best: the light stripe is roughly two arcs. The welds shown in Fig.9(c, d) have obvious welding defects. Their quality is poor, the characteristics of the light stripe are not obvious, and the feature points are hard to position.
Fig.9 Light stripe images of different types of welds
Next, the light stripe images of the weld were preprocessed. Since the gray level difference between the light stripe area and the background in the image was obvious, it was easy to segment the light stripes. The preprocessing algorithm for the light stripe image used in this paper is shown in Fig.10.
Fig.10 Image preprocessing algorithm for weld light stripe
Median filtering was used to remove salt-and-pepper and isolated point noise. The images were binarized by the Otsu method. Since the edge of the binarized light stripe after threshold segmentation appeared terraced, a morphological closing operation was used to smooth the light stripe edge[12-13]. The final processing results are shown in Fig.11. The light stripe target of the weld is accurately extracted, and the preprocessing algorithm works well.
Fig.11 Images of binarized weld light stripe
(2) Extraction algorithm of weld feature points based on the two-parameter threshold
After the image preprocessing, the sub-pixel center points of the light stripe of the weld seam were extracted using the adaptive center extraction optimization algorithm described above. The center points were stored in the point set Weld_crtP[14-15]. Since efficiency is particularly important in the automated inspection of weld quality, we directly extracted the feature points of the light stripe of the weld based on the central point set Weld_crtP.
The welding light stripe was divided into three parts S1, S2, and S3, as shown in Fig.12. The characteristic points W_L and W_R of the weld zone are the start and the end positions of the weld. The characteristic point W_D is the lowest point of the weld, and the characteristic point W_U is the highest point of the weld. Through the extraction of these four feature points, the width and the remaining height of the weld can be calculated.
Fig.12 Division of light stripe area of weld
By analyzing the morphological characteristics of the light stripe of the weld in the image, the feature point extraction was divided into two parts: the start and end feature points W_L and W_R of the weld seam, and the height feature points W_D and W_U. In this paper, the two-parameter threshold method was used to extract the characteristic points W_L and W_R of the weld. The specific implementation steps of the algorithm are as follows.
Step 1: The moving rectangular window W is initialized in the image with a size of 2a + 1, and the center of the window is placed at the (a + 1)th sub-pixel center point of the extracted light stripe. The window with Winsize = 3 is shown in Fig.13.
Fig.13 Moving rectangular window for feature point extraction of welds
Step 2: Calculate the average slopes k_L and k_R of the light stripe on both sides of the window center, and the average ordinates y_L and y_R.
Step 3: Calculate the average slope difference Δk and the average ordinate difference Δy across the two sides of the rectangular window center.
Step 4: Compare the average slope difference Δk and the average ordinate difference Δy with the slope threshold k_W,T and the ordinate threshold y_W,T to determine whether the center point of the rectangular window is the feature point of the light stripe to be extracted. If Δk and Δy are both greater than the thresholds k_W,T and y_W,T at the same time, the window stops moving and the detected feature point W_L (W_R) is output. If not, the window moves one pixel from left to right (right to left) and the procedure returns to Step 2.
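Steps 1 to 4 can be sketched as a single scan; the window size and the thresholds k_thr, y_thr below are illustrative placeholders, not the paper's calibrated k_W,T and y_W,T values:

```python
def find_toe(xs, ys, a=3, k_thr=0.3, y_thr=0.5, from_left=True):
    """Slide a (2a+1)-point window along the stripe center points and
    return the index where both the average-slope difference and the
    average-ordinate difference across the window center exceed their
    thresholds: the two-parameter threshold test for a weld toe
    (W_L from the left, W_R from the right).
    """
    n = len(ys)
    order = range(a, n - a) if from_left else range(n - a - 1, a - 1, -1)
    for i in order:
        # average slopes of the stripe on each side of the window center
        left = [(ys[j + 1] - ys[j]) / (xs[j + 1] - xs[j])
                for j in range(i - a, i)]
        right = [(ys[j + 1] - ys[j]) / (xs[j + 1] - xs[j])
                 for j in range(i, i + a)]
        dk = abs(sum(right) / a - sum(left) / a)
        # average ordinates of the points on each side
        dy = abs(sum(ys[i + 1:i + a + 1]) / a - sum(ys[i - a:i]) / a)
        if dk > k_thr and dy > y_thr:
            return i          # both thresholds exceeded: feature point
    return None

# Flat base plate, then a weld bead rising to a plateau
xs = list(range(12))
ys = [5, 5, 5, 5, 5, 5, 6, 7, 8, 8, 8, 8]
toe = find_toe(xs, ys)
```

Scanning the same data with `from_left=False` would locate the right-hand toe W_R symmetrically.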
The results of the weld feature point extraction are shown in Fig.14. The solid dots are the extracted feature points. It can be seen that although the types of welds are different, the required toe and height feature points can be accurately extracted in all of them.
Fig.14 Feature point extraction results of the light stripe of the weld
After the software functions of the visual inspection system were realized, a detection test was performed on the weld object constructed in this paper to analyze the accuracy of the core algorithms of the system and the detection performance of the entire system.
The high-precision measuring equipment selected was an absolute measuring arm from Hexagon, whose contact measurement accuracy was ±0.051 mm. The measurement results of the system in this paper were compared with the true values obtained by the measuring arm, and the accuracy, stability, and repeatability errors of the constructed system were tested and analyzed.
In order to effectively evaluate the accuracy of the geometric measurement of the weld in this paper, two approximately straight segments of the weld sample were selected, as shown in the wire frame in Fig.15(a); Fig.15(b) shows the geometric quantities of the weld to be measured: the width w and the remaining height h. The linear motion and image acquisition parameters of the system were readjusted, the light stripe images were acquired in real time, and the measurement results were output. In total, 60 sets of geometric data of the weld section were obtained.
Fig.15 Weld sample and its measured geometry
The actual dimensions of the weld geometric quantities w and h and the measurement results of the system in this paper are shown in Table 2. The average absolute measurement error of the geometric quantity w is 0.216 mm, and that of h is 0.035 mm. Both are within the range of the system measurement error indexes.
Table 2 Accuracy test results of weld feature size measurement
In order to clearly show the distributions of the measured value and the true value of each sampled section, the true and measured value graphs drawn according to the data in Table 2 are shown in Fig.16. Fig.16(a) shows the weld width w, and Fig.16(b) shows the remaining height h of the weld. It can be seen that the trend of the measurement results of the system is basically consistent with the true values; that is, the system can accurately reflect the actual morphological changes of the test piece. Specifically, in Fig.16(a), the measured values of the weld width of each section are smaller than their true values. The main reason is that the height of the weld under test is low and the gradient of the boundary area between the two welded plates is small, which causes a certain deviation when the algorithm extracts the feature points of the weld width and shortens the distance inward.
In Fig.16(b), the measured values of the remaining height in each section fluctuate above and below the true values. Since the remaining height is less than 1 mm, this variation may be caused by the relative position of the height feature points, or by the uncertainty in manually selecting the height detection point used to calculate the true value.
Fig.16 Distribution curves of true values and measured values of weld geometry
It can be seen from the absolute error distribution chart in Fig.17 that the absolute deviation of w fluctuates within the range of 0.1—0.3 mm, with an average error of 0.216 mm, and no obvious deviation is caused in feature point extraction. The measurement error of the remaining height h is in the range of 0.025—0.045 mm, with an average error of 0.035 mm, which indicates that the measurement of the remaining height is accurate and stable, and meets the actual detection requirements.
Fig.17 Absolute error distribution of weld feature size measurement
In order to meet the needs of visual inspection of the geometric characteristics of weld seams, this paper proposes a 3D visual inspection approach based on line structured light, and combines structural design, image processing, and software development technologies to design and implement a visual inspection system for weld geometric dimensions. The test results showed that the detection error of the weld width was 0.216 mm, the detection error of the remaining height was 0.035 mm, and the maximum processing time of a single image was only 109 ms, which reached the development goals of this system. Some conclusions can be drawn.
(1) This paper proposed an adaptive light stripe sub-pixel center extraction algorithm and multi-dimensional accuracy evaluation indexes. They can effectively solve the problems of uneven light stripe width and non-uniform gray distribution, with high extraction accuracy, good real-time performance, and good robustness.
(2) For different types of weld images, a differentiated image preprocessing method based on ROI division is used to effectively filter out the background noise of the light stripe image and accurately extract the light stripe target.
(3) A fusion algorithm of type recognition and corner location is proposed. The corners are initially located while the type of the image is recognized, and the center of the light stripe is extracted to obtain the sub-pixel feature points of the weld. The algorithm achieves fast and accurate feature point extraction.
Transactions of Nanjing University of Aeronautics and Astronautics, No. 3, 2021