
1 Introduction

Injection moulding is a versatile manufacturing process used for producing a wide range of everyday objects. The process is based on injecting material into a mould under high pressure and temperature, after which the material cools and hardens, completing the injection moulding cycle. The approach works for a wide range of materials and is very fast, making it one of the most commonly used means of manufacturing today. Figure 1 shows a few examples of everyday objects produced through injection moulding: a computer mouse, a plate, and a pen.

Fig. 1. Examples of products manufactured using injection moulding. Left: computer mouse. Center: plate. Right: pen.

When doing injection moulding, it is important to ensure consistent quality of the produced products. This encompasses both mechanical properties, such as dimensions, roughness, and strength, and visual appeal, or surface “appearance”. For the former, several standards and recommendations exist for ensuring consistent quality objectively; for the latter, however, no guidelines exist, and most quality control is today done by manual visual inspection. Manual visual quality control, which is in addition often repeated multiple times and averaged for consistency, is an expensive and slow operation that scales poorly. Since consumers value visual quality highly, however, it is a task that can rarely be neglected. Today we often see a compromise, where humans periodically assess the quality of samples and adjust production parameters accordingly to keep the visual quality within bounds. This means that whole batches may be discarded due to late identification of errors. It also often means that quality is evaluated subjectively, with the risk of quality varying over time and across production locations.

It is apparent that the ability to automatically and objectively quantify the visual surface quality of produced parts has the potential to save manufacturers considerable resources in production. In this work, we deal with the surface quality of injection moulded plastic (acrylonitrile butadiene styrene, ABS), and we demonstrate an objective surface quality measure based on multispectral images captured in a controlled lighting environment. Here, visual quality is synonymous with high color homogeneity, i.e. the produced plastic part has the correct color everywhere, whereas low-quality parts may exhibit smaller patches of discoloring. Our main contributions are: (a) proposing an approach that strongly emphasizes surface discolorings using multispectral imaging, and (b) presenting a Fourier-based, rotation-invariant quality measure that can potentially be translated to current human-based scores.

2 Related Work

It is well known that injection moulding parameters have a significant effect on the visual appearance and quality of moulded parts. In particular, color and gloss, two of the main contributors to visual appearance, are highly affected by production parameters [6]. Mould temperature and packing pressure are identified by Piscotti et al. as having an especially high impact on color and gloss. Inhomogeneity in color across the material surface is often caused by insufficient dispersion of fillers or colorants, or by the injection moulding parameters themselves [7, 8].

The assessment of color quality and consistency is itself a well-addressed topic, employed in many very different fields of research [12,13,14]. The CIE calibrated color spaces are a convenient way of obtaining device-independent color measurements [9]. In relation to quality assurance of color, the CIELAB color space in particular has been employed, as it corresponds well with human color perception [4]. The color distance metric \(\varDelta E_{00}\) calculated in this space is especially convenient, as it mimics how humans perceive color differences, and threshold values for “hardly perceptible” differences have been identified [2].

Moving beyond standard tristimulus color measurements, multispectral imaging approaches, e.g. designed to replace colorimeters that rely only on point samples, have been proposed in the food industry [12]. These have, however, not yet been ported to the field of quality control in manufacturing.

Recently, convolution-based methods have, with some success, been proposed for estimating the color inhomogeneity of samples scanned in a flatbed scanner [15]. Building on this, projector-based approaches that attempt to identify subsurface miscolorings using structured light have also been suggested [3].

3 Data

For the experiments carried out in this paper, a collection of injection moulded, unfilled acrylonitrile butadiene styrene (ABS) samples was produced. To add coloring, 4% masterbatch (MB) by weight was dry mixed into the matrix. The samples were injection moulded on an Arburg Allrounder Advance 370 S 700-290 machine with a screw diameter of 30 mm. To obtain varying color homogeneity, i.e. appearance quality, a range of relevant production parameters was varied; we chose to reuse the parameters identified by Zsíros et al. [15]. In Fig. 2, the mould used for producing the samples is shown on the left. On the right, an example of a produced sample with dark-blue coloring is shown. Notice that small but distinct color inhomogeneities exist on the sample surface.

Fig. 2. (a): Mould used for producing the plastic samples analyzed in this paper. (b): A plastic sample from the dark blue material. Notice that this sample has many visible inhomogeneities on the surface. (Color figure online)

In total, 100 samples were created. The collection covers ten different and commonly used colors, each with ten samples of varying quality. Unfortunately, absolute quality scores from human quality inspectors were not available for this experiment. Instead, an assessment panel was instructed to produce a relative ranking by ordering samples from the smallest visual error (1) to the largest visual error (10). This was done both for the ten samples within each color and across all colors using one sample of each color.

For analyzing the produced samples, we utilize a multispectral imaging device, the VideometerLab 4, to obtain high-resolution (\(2192\times 2192\) pixels) images captured at 19 different wavelengths [1]. It captures band-sequential images by diffusely illuminating the sample with LEDs operating at 19 wavelengths ranging from 365 nm to 970 nm.

Table 1. The wavelengths at which we have acquired images of our samples using the VideometerLab. Due to the limited sensitivity of the human eye, we only use bands two to fourteen; these are highlighted in bold.
Fig. 3. Raw spectral images of the sample presented in Fig. 2. The wavelengths are in increasing order from left to right and top to bottom, corresponding to the wavelengths listed in Table 1. In the five longest wavelengths (near infrared), a faint bright rectangle is visible near the middle of the sample: the sample is slightly transparent at these wavelengths, and there is a sticker on the opposite side.

As the typical human eye is only sensitive to wavelengths from 390 nm to 700 nm [10], we only use the thirteen bands that fall within this spectrum. Table 1 lists all wavelengths available from the VideometerLab 4; those shown in bold are the ones used for analysis. This is because, in this study, we are not interested in imperfections that are invisible to the human eye. The multispectral image of the material sample from Fig. 2 can be seen in Fig. 3, where each wavelength is shown as a grayscale image. Note that in the last five bands (near infrared), which fall outside the human visual spectrum, the sample becomes slightly transparent: a sticker located on the other side of the sample is faintly visible as a bright rectangular shape slightly left of the center.

The dark circles seen in the center of each image in Fig. 3 are a result of the samples being very specular: they are in fact a reflection of the camera hole of the VideometerLab 4 device itself. This is an unwanted artifact, so as a final processing step we produce a mask that excludes this center circle as well as the edges of the samples. The mask ensures that no false color inhomogeneities are introduced during data processing.

4 Method

The 13 color bands within the human visible spectrum that were acquired with the multispectral camera are initially filtered with a \(3\,\times \,3\) median kernel to reduce sensor noise and small dust particles present on the samples at the time of image acquisition. This is done because dust particles introduce unwanted high-frequency noise into the signal, which would complicate later analyses. Afterwards, the mask created during data acquisition, identifying valid pixels in the images, is applied to extract only valid information. After masking, the brightest and darkest 1% of pixels are clipped. This clipping further reduces the effect of tiny dust particles, as they make up less than 1% of the image area.
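As an illustration, these preprocessing steps can be sketched in Python with NumPy and SciPy. The array layout, the variable names (`cube`, `mask`), and the 0-indexed band range are assumptions made for this sketch, not part of the original setup.

```python
import numpy as np
from scipy import ndimage

def preprocess(cube, mask):
    """Sketch of the preprocessing steps; `cube` is assumed to be the
    (H, W, 19) multispectral image and `mask` a boolean (H, W) array
    marking valid pixels."""
    # Keep only bands 2-14 (0-indexed 1..13), i.e. the 13 bands
    # inside the human visible spectrum.
    bands = cube[:, :, 1:14].astype(np.float64)

    # 3x3 median filter per band to suppress sensor noise and dust.
    for b in range(bands.shape[2]):
        bands[:, :, b] = ndimage.median_filter(bands[:, :, b], size=3)

    # Apply the mask to extract only valid pixels: an (m, 13) array.
    valid = bands[mask]

    # Clip the brightest and darkest 1% of pixels in each band to
    # further reduce the influence of tiny dust particles.
    lo = np.percentile(valid, 1, axis=0)
    hi = np.percentile(valid, 99, axis=0)
    return np.clip(valid, lo, hi)
```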

Every valid pixel in the thirteen-channel multispectral image may be treated as an \(n=13\) dimensional observation, \(\mathbf {x} \in \mathbb {R}^{n}\). As 3,246,212 pixels in every channel were identified as valid using the predefined mask, a total of \(m=3246212\) n-dimensional points have been observed. As the distribution of these points is not centered, the mean of each channel is subtracted from the respective channel to ensure proper centering of the distribution:

$$\begin{aligned} \tilde{\mathbf {x}}_i = \mathbf {x}_i-\mu _i = \mathbf {x}_i - \frac{\sum \nolimits _{k=1}^{m} x_{ik}}{m} \qquad \forall i \in [1;n] \end{aligned}$$
(1)

and the centered observations may be gathered in an \(m\times n\) sized observation matrix, \(\mathbf {X} \in \mathbb {R}^{m\times n}\):

$$\begin{aligned} \mathbf {X} = \left[ \tilde{\mathbf {x}}_1,\ldots ,\tilde{\mathbf {x}}_m\right] ^T \end{aligned}$$
(2)
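In code, Eqs. (1) and (2) reduce to a single vectorized operation; here `valid` is assumed to be the \(m\times n\) array of valid pixels produced by the preprocessing sketch above.

```python
# Eqs. (1)-(2): subtract each channel's mean to centre the
# distribution, yielding the m x n observation matrix X.
X = valid - valid.mean(axis=0)
```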

Based on this observation matrix, the maximum autocorrelation factor (MAF) transformation can be performed [11]. This is very similar to principal component analysis (PCA), but, because it exploits the spatial information in the image, it yields results that are often more interpretable in the context of images.

The autocorrelation-based variance-covariance matrix \(\mathbf {\Sigma }_\varDelta \) is defined as:

$$\begin{aligned} \mathbf {\Sigma }_\varDelta = 2\mathbf {\Sigma }-\mathbf {\Gamma }(\varDelta )-\mathbf {\Gamma }(-\varDelta ), \end{aligned}$$
(3)

where \(\mathbf {\Sigma }\) is the covariance matrix of \(\mathbf {X}\) and \(\mathbf {\Gamma }(\varDelta )\) is the autocorrelation of the image at a specified spatial shift, \(\varDelta \), usually 1 pixel both horizontally and vertically.

The MAF components correspond to the eigenvectors, \(\mathbf {W} = [\mathbf {w}_1,\ldots ,\mathbf {w}_n]\), of \(\mathbf {\Sigma }_\varDelta \) with respect to \(\mathbf {\Sigma }\).
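A minimal sketch of the MAF transform follows, under the assumption that \(\mathbf {\Sigma }_\varDelta \) may be estimated as the covariance of the one-pixel difference image (which, for a stationary signal, equals the \(2\mathbf {\Sigma }-\mathbf {\Gamma }(\varDelta )-\mathbf {\Gamma }(-\varDelta )\) of Eq. 3), pooling horizontal and vertical shifts. The function and variable names are our own.

```python
import numpy as np
from scipy.linalg import eigh

def maf_transform(img, mask):
    """MAF transform of a masked multichannel image; `img` is assumed
    to be (H, W, n) and `mask` boolean (H, W)."""
    X = img[mask].astype(np.float64)
    X -= X.mean(axis=0)                      # centre (Eqs. 1-2)
    Sigma = np.cov(X, rowvar=False)

    # Differences between neighbouring valid pixels; their covariance
    # estimates Sigma_Delta of Eq. (3).
    dh = (img[:, 1:, :] - img[:, :-1, :])[mask[:, 1:] & mask[:, :-1]]
    dv = (img[1:, :, :] - img[:-1, :, :])[mask[1:, :] & mask[:-1, :]]
    Sigma_delta = np.cov(np.vstack([dh, dv]), rowvar=False)

    # Generalised eigenproblem Sigma_Delta w = lambda Sigma w.
    _, W = eigh(Sigma_delta, Sigma)
    return X @ W, W
```

Since `eigh` returns eigenvalues in ascending order and the most spatially autocorrelated component minimizes \(\mathbf {w}^T\mathbf {\Sigma }_\varDelta \mathbf {w}/\mathbf {w}^T\mathbf {\Sigma }\mathbf {w}\), the first column of W corresponds to the first MAF component; its scores can be scattered back into the valid pixel positions to form the component image.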

Fig. 4. The first five MAF components of the masked and transformed image, before and after high-pass filtering. The high-pass filtering is performed by subtracting a heavily blurred version of each image from the image itself.

The resulting components of the MAF transformation can be seen in Fig. 4. Note the low-frequency component visible in the first channel, caused by the sample not being perfectly evenly lit. As we are not interested in this bias, we subtract a heavily blurred version of each channel, effectively applying a high-pass filter. We use a Gaussian kernel with \(\sigma =30\) to blur the image. The result of this operation can be seen in Fig. 4b. The value of \(\sigma \) has been chosen empirically to reproduce the impurities as perceived when looking at the sample.
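As a sketch (with `maf_img` assumed to hold one MAF component reshaped back into image geometry; near the mask border a masked/normalized blur would be needed in practice):

```python
from scipy.ndimage import gaussian_filter

# High-pass filter: remove the low-frequency illumination bias by
# subtracting a heavily blurred copy (sigma = 30, chosen empirically).
highpass = maf_img - gaussian_filter(maf_img, sigma=30)
```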

As can be seen in Fig. 4, both with and without high-pass filtering, the first MAF component clearly emphasizes the inhomogeneities that are only faintly visible in the reference image shown in Fig. 2.

4.1 Feature Extraction

At this step, we discard all but the most significant component identified by the MAF, i.e. the first, as it captures the inhomogeneities in the material sample very well.

Using this single grayscale image, we propose three approaches for extracting information about the surface quality: two Fourier-based methods and one autocorrelation-based method.

Fourier-based method: Here, we compute the 2D Fourier transform of the image, which yields the complex-valued 2D Fourier spectrum,

$$\begin{aligned} F(u,v) = \sum \limits _{m=0}^{M-1} \sum \limits _{n=0}^{N-1} f[m,n] e^{-j2\pi (\frac{um}{M} +\frac{vn}{N})}. \end{aligned}$$
(4)

To obtain rotational invariance, we compute the radial average, \(A(r)\), of the amplitude in the Fourier domain, to get something similar to a one-dimensional power spectrum of the image:

$$\begin{aligned} A(r)= \frac{1}{2\pi }\int \limits _{0}^{2\pi } \left| F\left( r\sin \phi ,r\cos \phi \right) \right| d\phi . \end{aligned}$$
(5)
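A discrete version of Eq. (5) can be sketched by binning the Fourier amplitudes by integer radius from the zero-frequency centre; the implementation details below are our assumptions.

```python
import numpy as np

def radial_average(img):
    """Radially averaged Fourier amplitude A(r) of a 2-D image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    amp = np.abs(F)                           # amplitude; phase is discarded
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    y, x = np.indices(img.shape)
    r = np.hypot(y - cy, x - cx).astype(int)  # integer radius per pixel
    # Mean amplitude over each radius bin.
    return np.bincount(r.ravel(), weights=amp.ravel()) / np.bincount(r.ravel())
```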

This power spectrum is shown in Fig. 5; the phase information is discarded. From the power spectrum, \(A(r)\), we can compute the average radial amplitude:

$$\begin{aligned} A_{avg}= {\int \limits _0^{\min (M,N)}} A(r)dr \end{aligned}$$
(6)

The upper limit of the integral is chosen such that \(A_{avg}\) remains finite, as the Fourier transform is cyclic.

Second Fourier-based method: Using the above-defined \(A(r)\), we compute the average radial amplitude only over the range \(r=20\) to \(r=80\). This range was chosen because the ordering of the curves in Fig. 5 within this range correlates well visually with the human-based quality ordering. This can be interpreted as a mid-pass (band-pass) filter. The range is indicated by the dashed red vertical lines in Fig. 5.

$$\begin{aligned} A_{mid}= {\int \limits _{20}^{80}} A(r)dr \end{aligned}$$
(7)
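Given the discrete \(A(r)\) from the sketch above, both features reduce to numerical integration over the chosen radius ranges (trapezoidal rule here; `first_maf_image` is an assumed name):

```python
import numpy as np

A = radial_average(first_maf_image)           # from the sketch above
r_max = min(min(first_maf_image.shape), A.size)
A_avg = np.trapz(A[:r_max])                   # Eq. (6)
A_mid = np.trapz(A[20:81])                    # Eq. (7), r in [20, 80]
```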
Fig. 5. Radially averaged Fourier transform of the first MAF component, shown for one sample of each color. (Color figure online)

Autocorrelation-based method: Here, we compute the correlation length of the first MAF component using its weighted autocorrelation, which is given by:

$$\begin{aligned} \mathbf {\Gamma }_{weighted}(\varDelta ) = \frac{\mathbf {\Gamma }(\varDelta )}{N-|\varDelta |}, \end{aligned}$$
(8)

where N is the length of the discrete signal. The correlation length, \(l\), is then defined as the distance at which the autocorrelation drops below \(1/e\) for the first time [5]:

$$\begin{aligned} \mathbf {\Gamma }_{weighted}(l) = \frac{1}{e}\quad \text {(first occurrence)} \end{aligned}$$
(9)
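A one-dimensional sketch of Eqs. (8) and (9), assuming the autocorrelation is normalised so that \(\mathbf {\Gamma }_{weighted}(0)=1\) (implicit in Eq. 9); `signal` could, for instance, be a row of the first MAF component:

```python
import numpy as np

def correlation_length(signal):
    """First lag at which the weighted autocorrelation drops below 1/e."""
    s = signal - signal.mean()
    N = s.size
    gamma = np.correlate(s, s, mode="full")[N - 1:]  # lags 0 .. N-1
    weighted = gamma / (N - np.arange(N))            # divide by N - |Delta|
    weighted /= weighted[0]                          # normalise: Gamma(0) = 1
    below = np.nonzero(weighted < 1.0 / np.e)[0]
    return below[0] if below.size else N             # Eq. (9)
```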

5 Results

In the following, we summarize the results obtained with the proposed quality features and compare them to the results of the human quality assessment panel. For compactness, we only summarize the rankings of a subset of the samples, as these were deemed to span the variation in our method’s performance well. This subset includes the samples “dark blue”, “yellow”, “dark grey”, “light grey”, “dark brown”, and “one of each color”.

We employ each of the three quality measures presented in the previous section and rank the samples according to them. Figures 6, 7, 8 and 9 visualize these rankings: the first MAF component of each sample is overlaid with the human-assigned rank (10 being the worst), and the samples are ordered from left to right according to the respective method’s ranking. Note that the extremes of the rankings generally conform well with those of the human assessment panel. Around the middle of the rankings, we observe somewhat more variation in the assignments, as would be expected.

Fig. 6. Comparison of how the dark blue samples were ranked by the human panel and our proposed quality features. The numbers shown in the center of each sample are the ranks assigned by the human assessment panel. The ordering from left to right corresponds to the method’s ranking.

Fig. 7. Comparison of how the light grey samples were ranked by the human panel and our proposed quality features. The numbers shown in the center of each sample are the ranks assigned by the human assessment panel. The ordering from left to right corresponds to the method’s ranking.

Fig. 8. Comparison of how the dark brown samples were ranked by the human panel and our proposed quality features. The numbers shown in the center of each sample are the ranks assigned by the human assessment panel. The ordering from left to right corresponds to the method’s ranking.

Fig. 9. Comparison of how one sample of each color was ranked by the human panel and our proposed quality features. The numbers shown in the center of each sample are the ranks assigned by the human assessment panel. The ordering from left to right corresponds to the method’s ranking.

For a quantitative evaluation of our method’s performance, we employ Spearman’s and Kendall’s rank correlation coefficients to compare our rankings with the human rankings. These correlation coefficients are summarized in Tables 2 and 3. As may be seen from the tables, all quality features perform decently, with an average Spearman’s rank correlation coefficient above 0.5 and an average Kendall rank correlation coefficient slightly above 0.4. Noteworthy is that the individual features perform well on different samples, indicating that they may be able to complement one another. We have not investigated this further but will look into it in future work.
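Both coefficients are readily available in SciPy; a toy comparison with hypothetical rank vectors illustrates the computation:

```python
from scipy.stats import spearmanr, kendalltau

human = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]    # panel ranks (hypothetical)
method = [2, 1, 3, 5, 4, 6, 8, 7, 9, 10]   # a method's ranks (hypothetical)
rho, rho_p = spearmanr(human, method)
tau, tau_p = kendalltau(human, method)
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f}), "
      f"Kendall tau = {tau:.2f} (p = {tau_p:.3f})")
```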

Table 2. Spearman’s rank correlation coefficient of each method compared with the human panel’s sort order, for each of the six orderings produced by the panel. Higher values are better.
Table 3. Kendall rank correlation coefficient of each method compared with the human panel’s sort order, for each of the six orderings produced by the panel. Higher values are better.

6 Discussion and Conclusion

We have in this paper demonstrated a method for automatically assessing the visual quality of injection moulded plastic surfaces. The approach is based on multispectral imaging in conjunction with MAF dimensionality reduction, and we propose three different inhomogeneity measures for quantifying surface quality. All three measures rank the samples in agreement with a human panel, with an average Spearman’s rank correlation coefficient above 0.5 and an average Kendall rank correlation above 0.4. Generally, the methods robustly identify the best and worst samples, whereas medium-range quality estimates are more uncertain. If only one of our variants should be picked, we propose \(A_{avg}\), as it has the most, and highest, statistically significant rank correlation coefficients.

For future work, we will try to obtain absolute values for sample quality scores, in order to find a direct mapping from our feature scores to the current standard in human visual quality control.

Currently, our method only looks at inhomogeneities and does not consider whether the base color of the sample is within specifications. Future work could incorporate this nuance as well, as this too is an important part of the quality.

An interesting observation is that the proposed features each perform well on different material samples. This could indicate that even better overall ranking performance may be obtained by concatenating the features before predicting material quality. In addition, using additional MAF components and/or extracting more than one range from the Fourier spectrum is a subject for future research.