1 Introduction

With the development of the internet, smart factories and smart warehouses (Liu et al. 2018; Lee et al. 2019; Scheer 2012) have gradually emerged. Machines have gradually taken over simple and repetitive tasks, and in places with harsh production environments, unmanned operation is being adopted rapidly to reduce the number of people exposed to such conditions.

Most factories with harsh environments reserved neither mounting points for sensors nor the network cables needed for data transmission. The factory structure is designed in advance and is not easy to change, and adding data cables would greatly increase cost and introduce hazards into the working environment, so the warehouse should be monitored without cabling. The material volume calculation (Riccabona et al. 1995; Fojtík 2014; Zhongyi et al. 2019) for an intelligent warehouse with distributed computing equipment proposed in this paper is designed for such harsh warehouse environments.

Two important concepts are involved: distributed systems and edge computing. Firstly, distributed systems have a wide range of applications in smart factories. A general feature of manufacturing systems is the transmission and processing of data between one another, and distributed systems provide the coordination that makes global information available for further calculation and better decision making (Poonpakdee and Koiwanit 2018). The distributed architecture we use is a centralized system; Fig. 1 presents the simulated scenario. A server centralizes all functions and information taken from clients by connecting to them directly, and clients share their resources by sending information to and receiving information from the server (Minar 2002). Secondly, edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where they are needed, improving response times and saving costs (Hamilton 2019). Edge computing is performed at the network edge, near the device or data source, and can provide real-time data processing, data cleaning, and privacy protection (Shi et al. 2016). Pizoń and Lipski (2016) argued that edge computing enables dynamic monitoring and data processing. Distributed computing builds on the rapid development of the network, and edge computing is related to the distributed automation of factories. Edge computing combines decentralized processing with centralized upload, which reduces data transmission time and transmission errors and improves data security while reducing bandwidth (Chen et al. 2017). The primary advantage of centralized systems is their simplicity. Shu et al. (2016) provide a method for collecting and processing data via cloud integration. Cyber-physical systems (CPSs) have three fundamental, conflicting attributes: safety, security, and sustainability (Baheti and Gill 2011). For edge computing, De Brito et al. (2016) proposed a deployment of programmable fog nodes featuring inter-node Peer-to-Peer (P2P) communication and service orchestration without centralized control. The manufacturing industry is exploring the use of cloud computing to enhance the efficiency of plant operations, improve product quality, and so on (Georgakopoulos et al. 2016). However, the internet cannot be used in our environment, so the distributed edge computing structures provided by others cannot be used directly. We use a computer as the centre, set up all edge computing modules to start automatically, and keep all of them independent.

Distributed systems and edge computing are both used in our system. There is severe dust and large-scale machinery operating in this kind of warehouse, yet someone needs to monitor whether the materials in each warehouse are running low; missing materials need to be replenished in time, otherwise production efficiency will suffer. The edge computing equipment used in this article connects each computing module to a network through a wireless router (Ikram and Thornhill 2010; Paavola and Leiviska 2010) and monitors the volume of materials in the warehouse in real time. This article mainly contributes the data processing of distributed edge computing modules and the networking of the distributed equipment. Each module is mainly composed of a mini processor, a Livox Horizon LiDAR, a camera, and a wireless router. After power-on, each module connects to the network through the router, the equipment can be started by remote login, and the central control room can also access the data through a high-power wireless link. The main contributions of this paper are as follows:

  1. Distributed edge computing devices transmit data through wireless networking.

  2. Materials are replenished through real-time volume calculation and intelligent notification, reducing the personnel working in harsh environments.

  3. The sand pile model is used to predict the parts of the pile that cannot be scanned by the LiDAR, removing the redundancy of installing sensors at the bottom of the warehouse.

Fig. 1

Centralized systems. The system has a base station and all sensors connect to it; the base station can read data from every sensor, but the sensors are independent of one another

2 Method

The intelligent material warehouse has two essential components: the edge computing end and the monitoring room. The edge computing end is responsible for two tasks: volume calculation and point cloud colouring (Fig. 15). Point cloud colouring attaches image information to the 3D point cloud so that it can be displayed in the monitoring room. There is a great deal of dust in the material area of the actual warehouse, which greatly affects measurement accuracy. A depth camera such as the ZED stereo camera, which is based on RGB binocular stereo vision (Bauer et al. 2019), can reach a distance of 20 m but has difficulty meeting the requirements for both distance and accuracy, so we choose LiDAR to obtain three-dimensional data. Traditional multi-line spinning LiDAR cannot obtain dense point cloud data, and the viewing angle of the actual scene is only about 90°, so a 360° LiDAR wastes much of its performance. The Livox Horizon is a non-repetitive-scanning LiDAR with a field of view (FOV) of \(81.7^{\circ }\) (horizontal) \(\times \) \(25.1^{\circ }\) (vertical). Two LiDARs are arranged on the left and right sides of the warehouse to achieve full coverage, and by accumulating multiple frames of LiDAR data, a denser point cloud can be obtained. To splice the left and right point clouds, the transform matrix from the left point cloud to the right point cloud needs to be calibrated; the left point cloud is converted into the right point cloud frame by multiplying by this transform. After merging the left and right point clouds, the merged cloud is aligned to the map, and the ground and wall points are removed by plane constraints. We want to transmit the data to the monitoring room over a long-distance wireless link, but transmitting images takes more bandwidth, so we colour the point cloud and transmit the coloured point cloud to the monitoring room, saving wireless bandwidth. Point cloud colouring (PDAL Contributors 2018) fuses LiDAR data with two-dimensional image data to generate data similar to an RGB-D image, so the monitoring room can see RGB-D-like information. In addition, the coloured point cloud can be used for 3D reconstruction (Newcombe et al. 2011) in places where RGB-D cameras are not convenient to use. The intelligent warehouse system can be broken down into several parts: the hardware framework, volume calculation, LiDAR-camera calibration and point cloud colouring, which will be introduced one by one.

2.1 The hardware system

The hardware system of the intelligent material warehouse includes distributed edge computing modules, each composed of a Jetson Nano development board, a Livox Horizon LiDAR and a Huawei WiFi AX3 wireless router, as shown in Fig. 4a, b. The Jetson Nano is a small processor equipped with a small camera used to collect images; it is responsible for LiDAR point cloud processing and image recognition. The Huawei routers adopt a wireless networking configuration: each station uses an auxiliary transceiver to interact with the transceivers of neighbouring stations, exchanging network parameters with them and configuring its neighbour list accordingly, so that the stations exchange and update configuration information and form a self-adaptive network. After power-on, the processor and the program start automatically and connect to the wireless network. The module hardware connection diagram is shown in Fig. 2.

Fig. 2

Module hardware connection diagram

The edge computing modules are installed in the warehouse in a distributed manner, as shown in Fig. 3. A traditional sensor arrangement requires the LiDAR to be connected to the processor through a network cable; however, in a long warehouse environment we need multiple LiDARs, so connecting each LiDAR to the network by pulling cables is very inconvenient. The edge computing modules are suitable for installation in this environment. Each edge computing module performs its calculations independently, and the processor performance required for the edge computation does not need to be high. All modules are combined into a local area network through the routers, and data are sent through a high-power wireless (Farkas 2011) transmitting device (Fig. 4). The entire system is based on the Robot Operating System (ROS), a distributed framework in which loosely coupled nodes can run on different computing platforms and communicate through topics. Only one master is allowed in a ROS system, so the host in the control room is set as the master, and all other modules in the same local area network are combined as clients. Each material warehouse measures \(20\times 10\times 4\,\text{m}^3\), inside which it is not easy to install edge computing modules, so the modules are installed on the left and right columns in front of the warehouse. We use an algorithm to predict the height of the material that cannot be scanned, to improve the accuracy of the volume calculation.
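As an illustration of this ROS networking, a minimal sketch of an edge-module node is given below; the node, topic and host names are hypothetical rather than the ones used in our deployment, and each Jetson simply points its ROS_MASTER_URI at the control-room host before starting.

```python
#!/usr/bin/env python
# Minimal sketch of an edge-module publisher node (hypothetical names).
# The control-room host runs roscore; each Jetson points ROS_MASTER_URI at it,
# e.g. export ROS_MASTER_URI=http://control-room-host:11311
#      export ROS_IP=<this module's LAN address>
import rospy
from std_msgs.msg import Float32

def compute_volume():
    # Stand-in for the LiDAR processing pipeline described in Sect. 2.2.
    return 0.0

def main():
    rospy.init_node("edge_module_1")                      # one node per warehouse module
    pub = rospy.Publisher("/warehouse_1/volume", Float32, queue_size=1)
    rate = rospy.Rate(1.0)                                # publish once per second
    while not rospy.is_shutdown():
        pub.publish(Float32(data=compute_volume()))
        rate.sleep()

if __name__ == "__main__":
    main()
```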

Fig. 3

The module installed on the warehouse

Fig. 4

a The front view of the module. b The top view of the module. c The side view of the module. d The high-power wireless device. The figure shows the edge computing module hardware and the high-power wireless device used for data transmission; the edge computing module is small and convenient to install in a factory

2.2 The point cloud correction

The point cloud correction has two parts: one is the correction of the point clouds from the left and right modules of the warehouse and their splicing into a complete point cloud; the other is the correction of the spliced point cloud to a 3D map pre-built by LiDAR Odometry and Mapping (LOAM) (Zhang and Singh 2014). The LiDAR data of the modules on the left and right sides form two intersecting sets and cannot be directly spliced into a complete warehouse point cloud; calibration is needed to find the transform between the left and right point clouds, after which the two sets of data are stitched together. We choose the right side as the source cloud and the left side as the target cloud, find the transform from left to right, and rotate the left point cloud to match the right one. The 3D point cloud map of the warehouse is constructed in advance, and the data collected in real time are at the coordinate origin by default, so a transform needs to be calibrated to project the collected 3D point cloud onto the corresponding warehouse in the map. Because the warehouses resemble one another and the transformation distance between the pre-built three-dimensional model and the data is large, the calibration cannot be completed automatically by an algorithm in one step, so it is performed in two steps. First, the point cloud data are manually adjusted to the approximate position of the specific warehouse. Second, to obtain an accurate transform matrix, we use the normal-distributions transform (NDT) (Magnusson et al. 2009) algorithm to precisely match the point cloud data. Similar registration algorithms include Iterative Closest Point (ICP) (Besl and McKay 1992), but ICP requires a good initial value and, because of the algorithm's limitations, its final iterative result may fall into a local optimum. NDT is a method of compactly representing the surface of an object. The first step of NDT is to divide the point cloud into cells by k-means clustering (Duda and Hart 1973) and to compute the PDF of each cell from the points it contains, assuming the PDF is a Gaussian distribution. This PDF can be understood as a generative process for the surface points of each cell (Magnusson et al. 2009), in other words, a local model of the measurement points in that cell. For a D-dimensional normal random process, the likelihood of having measured \({\mathbf{x}}\) is

$$\begin{aligned} p({\mathbf{x}})=\frac{1}{(2 \pi )^{D / 2} \sqrt{|{\Sigma }|}} \exp \left( -\frac{({\mathbf{x}}-{\boldsymbol{\mu }})^{\mathrm {T}} {\Sigma }^{-1}({\mathbf{x}}-{\boldsymbol{\mu }})}{2}\right) \end{aligned}$$
(1)

where \({\boldsymbol{\mu }}\) and \({\Sigma }\) denote the mean vector and covariance matrix of the reference scan surface points within the cell where \({\mathbf{x}}\) lies.

$$\begin{aligned} \begin{aligned} {\boldsymbol{\mu }}&=\frac{1}{m} \sum _{k=1}^{m} {\mathbf{y}}_{k} \\ {\Sigma }&=\frac{1}{m-1}\sum _{k=1}^{m}\left( {\mathbf{y}}_{k}- {\boldsymbol{\mu }}\right) \left( {\mathbf{y}}_{k}-{\boldsymbol{\mu }}\right) ^{\mathrm {T}} \end{aligned} \end{aligned}$$
(2)

where \({\mathbf{y}}_{k=1, \ldots , m}\) are the positions of the reference scan points contained in the cell.
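As an illustration of Eq. 2, a minimal NumPy sketch of the per-cell statistics is given below; the assignment of reference points to cells (the k-means step) is assumed to have been done already.

```python
import numpy as np

def cell_statistics(points):
    """points: (m, 3) array of reference scan points in one cell (Eq. 2)."""
    mu = points.mean(axis=0)                       # mean vector of the cell
    diff = points - mu
    sigma = diff.T @ diff / (points.shape[0] - 1)  # unbiased covariance matrix
    return mu, sigma
```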

Using NDT registration, the goal is to find the pose of the current scan that maximizes the likelihood of the current scan in the reference frame. The optimized parameters are the rotation and translation of the pose estimate of the current scan. Given a set of points of the current scan \({X}=\left\{ {\mathbf{x}}_{1}, \ldots , {\mathbf{x}}_{n}\right\} \), a pose \({\mathbf{p}}\), which is the parameter vector to be optimised, and a transform function \(T({\mathbf{p}}, {\mathbf{x}})\) that transforms point \({\mathbf{x}}\) in space by \({\mathbf{p}}\), the score function for the current parameter vector can be formulated as Eq. 3

$$\begin{aligned} s({\mathbf{p}})=-\sum _{k=1}^{n} {\tilde{p}}\left( T\left( {\mathbf{p}}, {\mathbf{x}}_{k}\right) \right) \end{aligned}$$
(3)

from which the optimal transform parameter vector \({\mathbf{p}}\) can be computed iteratively.
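The sketch below evaluates Eqs. 1 and 3 for a candidate pose, assuming the pose is given as a rotation matrix R and translation t and that each current-scan point has already been assigned to a cell with known mean and covariance; the optimiser that actually updates the pose (e.g. the Newton iteration in Magnusson et al. 2009) is omitted.

```python
import numpy as np

def gaussian_likelihood(x, mu, sigma):
    """Eq. 1: likelihood of a transformed point under its cell's normal distribution."""
    d = x - mu
    norm = (2.0 * np.pi) ** (len(x) / 2.0) * np.sqrt(np.linalg.det(sigma))
    return np.exp(-0.5 * d @ np.linalg.inv(sigma) @ d) / norm

def ndt_score(R, t, scan_points, cells):
    """Eq. 3: negative sum of per-point likelihoods for the pose (R, t).
    cells maps a point index to its (mu, sigma); points with no cell are skipped."""
    score = 0.0
    for k, x in enumerate(scan_points):
        if k not in cells:
            continue
        mu, sigma = cells[k]
        score -= gaussian_likelihood(R @ x + t, mu, sigma)
    return score
```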

The 3D scene map of the warehouse is built in advance, and the real-time LiDAR data need to be projected onto the corresponding warehouse in the scene map. Each local point cloud is multiplied by the solved transform to become a point cloud in the global map, expressed as:

$$\begin{aligned} {X}_{global} =T({\mathbf{p}}_{optimal}, {X}_{local}) \end{aligned}$$
(4)

where \({X}_{global}\) represents the points projected onto the 3D map of the corresponding warehouse, \({\mathbf{p}}_{optimal}\) is the optimal transform parameter vector computed by NDT, and \({X}_{local}\) is the merged point cloud of the calibrated left and right LiDAR data.

If the point cloud were transformed every frame and then filtered to calculate the volume, each frame of data would incur the transform cost. Therefore, when calculating the volume, we do not transform the data every frame; instead, we rotate the map into the point cloud's coordinate frame once, when the program loads the map, which is more efficient:

$$\begin{aligned} {M}_{local} =T^{-1}({\mathbf{p}}_{optimal}, {M}_{global}) \end{aligned}$$
(5)

where \({M}_{local}\) represents the map in the LiDAR coordinate system and \({M}_{global}\) represents the original map in the global coordinate system.
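With the optimal pose written as a 4 × 4 homogeneous matrix, Eqs. 4 and 5 each reduce to a single matrix product; a minimal sketch follows, with illustrative variable names.

```python
import numpy as np

def apply_transform(T, points):
    """Transform an (n, 3) point cloud by a 4x4 homogeneous matrix T."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :3]

# Eq. 4: project the merged local cloud into the global map frame.
#   X_global = apply_transform(T_map, X_local)
# Eq. 5: done once at start-up, bring the map into the LiDAR frame instead.
#   M_local = apply_transform(np.linalg.inv(T_map), M_global)
```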

Calculating the volume of the material requires removing the wall and ground from the spliced LiDAR data; what remains is the object whose volume is to be calculated. Because we want to project the LiDAR data onto an image, we construct a filter to remove the wall, ground and outliers:

$$\begin{aligned} {\left\{ \begin{array}{ll} {a_i*x+b_i*y+c_i*z+d_i = 0}\\ y< const\\ z < const \end{array}\right. } \end{aligned}$$
(6)

where \(a_i\), \(b_i\), \(c_i\), \(d_i\) are plane parameters, and the \(y\) and \(z\) bounds are length and height constraints.
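A minimal sketch of this filter is given below, assuming the plane coefficients for the ground and walls have been fitted beforehand (e.g. by RANSAC) and using an illustrative distance threshold; points close to any fitted plane or outside the length and height bounds are dropped.

```python
import numpy as np

def filter_cloud(points, planes, y_max, z_max, plane_eps=0.05):
    """points: (n, 3); planes: list of (a, b, c, d) with a^2 + b^2 + c^2 = 1.
    Removes points near any wall/ground plane and outside the y/z bounds."""
    keep = (points[:, 1] < y_max) & (points[:, 2] < z_max)
    for a, b, c, d in planes:
        dist = np.abs(points @ np.array([a, b, c]) + d)  # point-to-plane distance
        keep &= dist > plane_eps                         # drop points lying on the plane
    return points[keep]
```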

Through these constraints, a filter is formed to remove the wall, ground and outliers from the LiDAR data, and the remaining point cloud is the sand pile. However, because LiDAR cannot see through the material, the back of the sand pile cannot be scanned, so part of the pile is missing; if the volume were computed directly from this projection, there would be a large error. We project the sand pile data into a mat (an image matrix data type), where each pixel represents 0.1 m \(\times \) 0.1 m. If the area corresponding to a pixel contains points, the average height of those points is used as the pixel's grey value; if a pixel has no corresponding height, that height is occluded and is temporarily filled with 0. To fill in the occluded heights, we follow studies of the angle of repose of sand piles (Al-Hashemi and Al-Amoudi 2018); the sand pile model is shown in Fig. 5, and our actual sand pile can be approximated in the same way.

Fig. 5

a The sand pile model. b The real sand pile. According to the sand pile model and the actual sand pile, the pile can be approximated as symmetrical, and the shaded area can be predicted

The sand pile is approximately symmetric with respect to its highest point. We traverse each column of the image; where there is a boundary between the non-zero height area and the occluded area with value 0, that pixel is the projection of the highest point of the sand pile \((x_i, y_i)\). For each pixel with grey value 0, we compute its pixel distance \(d\) from the highest point of the column; because the pile is approximately symmetric, moving the same distance \(d\) to the other side of the highest point gives the symmetric position \((x_i, y_{i-d})\), and its grey value is used as the grey value of the occluded pixel, that is, as the predicted value of the occluded height. Once all pixels with height 0 in the sand pile area have been predicted, an approximate projection of the entire sand pile is obtained. The volume is then obtained by integrating over the grid, expressed by the formula:

$$\begin{aligned} V =\sum _{i=1}^{N} s_{i} \cdot h_{i} \end{aligned}$$
(7)

where \(s_i\) represents the area corresponding to each pixel and \(h_i\) is the pixel's grey value, i.e., the height of the sand pile at that pixel. Summing all the pixel values and multiplying by the base area represented by each pixel gives the volume of the sand pile.
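With the completed projection image, Eq. 7 is a single reduction; a minimal sketch assuming each pixel covers 0.1 m × 0.1 m:

```python
import numpy as np

def pile_volume(height, cell_size=0.1):
    """Eq. 7: sum of per-pixel heights times the base area of one pixel."""
    return float(height.sum() * cell_size * cell_size)
```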

2.3 The point cloud fusion

In order to colour the LiDAR point cloud, it is necessary to calibrate the extrinsic parameters between the LiDAR and the camera (Dhall et al. 2017; Wang et al. 2017). In this solution, the corners of the calibration board are used as the calibration target. Thanks to the non-repetitive scanning of the Livox LiDAR, the point cloud is denser, making it easier to find the accurate positions of the corner points in the LiDAR point cloud (An et al. 2020). The basic principle of the calibration is to compute the conversion relationship between the xyz coordinates of the same target in the LiDAR coordinate system and its xy coordinates in the camera coordinate system. Because the corner points are obvious targets in both the point cloud and the photos, this reduces the calibration error. The calibration steps include calibration of the camera's internal parameters, calibration preparation and data collection, and calibration of the extrinsic parameters. There are many ways to calibrate the internal parameters of a camera (Heikkila and Silven 1997); we use MATLAB tools, and since camera calibration is a mature procedure, this article does not introduce it in detail. Calibration preparation and data collection include: first, preparing the calibration scene, using the four corners of the calibration board as the target, choosing a relatively open environment, and ensuring that the LiDAR is more than 3 m from the calibration board; second, connecting the LiDAR and camera to view the point cloud and record the point cloud data packets and photos; third, calibrating the extrinsic parameters. In this section, we use a camera and a Livox LiDAR to realize the function of an RGB-D camera. Colouring the point cloud means computing the corresponding camera pixel coordinates from the xyz coordinates of each point and the obtained intrinsic and extrinsic parameter matrices; the RGB value of that pixel is then assigned to the point for display, so that the LiDAR point cloud shows the real colours. We project the LiDAR point cloud onto the image pixels using the pre-calibrated extrinsic values so as to colourize the LiDAR point cloud, and then use a sequence of colourized point clouds to recover a dense colourful map of an area. The advantages of this sensor set are a longer detection range and better performance in outdoor scenarios. The data processing pipeline is shown in Fig. 6.
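A minimal sketch of this projection step is given below, assuming a pinhole camera model with intrinsic matrix K and LiDAR-to-camera extrinsics R, t; the function and variable names are illustrative.

```python
import numpy as np

def colour_points(points, image, K, R, t):
    """Project LiDAR points into the camera image and attach RGB values.
    points: (n, 3) in the LiDAR frame; image: (H, W, 3); returns (m, 6) xyzrgb."""
    cam = points @ R.T + t                      # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0                    # keep points in front of the camera
    cam = cam[in_front]
    pix = cam @ K.T                             # pinhole projection
    u = (pix[:, 0] / pix[:, 2]).astype(int)     # column index
    v = (pix[:, 1] / pix[:, 2]).astype(int)     # row index
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image[v[valid], u[valid]]
    return np.hstack([points[in_front][valid], rgb])
```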

Fig. 6

Data processing pipeline for colouring the point cloud

3 Experiment

The experiment was performed offline by recording rosbag, a set of tools for recording from and playing back to ROS topics. We need to calibrate the extrinsic parameters between the two LiDARs and from the LiDAR point cloud data to the map. The sand pile point cloud used for the calculation is extracted by the fitted plane and height constraints. The filtered point cloud is projected into pixels, where each pixel represents a grid of \(0.1\,\text{m}\times 0.1\,\text{m}\). After the camera and LiDAR are calibrated, we can use the extrinsic parameters to colour the point cloud. The coloured point cloud carries image information and can achieve an effect similar to an RGB-D camera. In this section, the detailed calibration process and results, the prediction results and volume calculation for the occluded area, and the results of point cloud colouring will be introduced.

3.1 Calibration of extrinsic parameters

The point cloud collected by the two edge computing devices is shown in Fig. 7a. Each LiDAR takes its own position as the coordinate origin, which causes the data from the two LiDARs to cross and be misaligned, so accurate extrinsic parameters cannot be computed by directly applying a registration algorithm. Because the two edge computing devices are fixed on the warehouse and the extrinsic parameters only need to be calibrated once, we manually adjust the point cloud on the right to align with the point cloud on the left to obtain a first rough transform \(T_1\) (Fig. 7b, c), and then use the NDT method to register the clouds and obtain a precise transform \(T_2\) (Fig. 7d), so the extrinsic transform from the right to the left is \(T = T_2\times T_1\).

Fig. 7

a The raw LiDAR data from the distributed edge computing modules. b, c The point clouds manually adjusted to a rough position. d The registered point cloud. The left point cloud, which cannot be registered directly, is manually rotated toward the right point cloud so that the two clouds are roughly aligned, and the NDT algorithm is then used to obtain a more accurate transform

We also ran experiments with GO-ICP (Yang et al. 2015) without giving an initial value, but the LiDAR data could not be matched to the target warehouse. We then tried manually adjusting the LiDAR data and map data to relatively close positions, but GO-ICP still could not achieve the desired effect. Figure 8 shows the results.

Fig. 8

a The manually adjusted LiDAR data and map data. b The result of registration with GO-ICP. c The result of registration with NDT. Comparing the registration results, NDT performs better than GO-ICP in our environment

To facilitate manual adjustment of the point cloud, the calibration is performed from the LiDAR point cloud data to the map, giving the extrinsic transform \(T_{cloud}^{map}\). In the actual material calculation, however, to reduce computation and avoid rotating the point cloud every frame, we only need to rotate the map to the LiDAR point cloud coordinates once, when the program initializes and loads the map. The result of the LiDAR point cloud and map calibration is shown in Fig. 9. Rotating the map to the LiDAR coordinates is equivalent to multiplying the map by the inverse transform \((T_{cloud}^{map})^{-1}\), formulated as \({M}_{local} =(T_{cloud}^{map})^{-1}\times {M}_{global}\); the result is shown in Fig. 10.

Fig. 9

a The sub-map of the warehouse where the edge computing module is installed. The white point cloud is the merged LiDAR point cloud and the coloured point cloud is the sub-map; both were adjusted manually. b The result of registration with NDT

Fig. 10

Top view of the warehouse map rotated into the coordinate frame of the edge computing module's merged LiDAR point cloud

3.2 Point projection and prediction

The point cloud projection projects the filtered point cloud into pixels, where the grey value of each pixel represents the height of the material. First, we determine the range of the point cloud on the x-axis and the y-axis, and then project the point cloud with a grid size of \(0.1\,\text{m}\times 0.1\,\text{m}\). Multiple points may be projected into the same pixel, so the average of their heights is used as the pixel's grey value. In practice, two images of the same size are used: one image \(M_1\) stores the sum of the heights of all projected points, and the other image \(M_2\) stores the number of points projected onto each pixel. After traversing all the points, \(M_1/M_2\) gives the average height used as the grey value of each pixel.
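A minimal sketch of this two-image projection is given below; it assumes the filtered point cloud has already been shifted so that its x and y coordinates are non-negative.

```python
import numpy as np

def project_to_grid(points, x_range, y_range, cell=0.1):
    """Project filtered points onto a 0.1 m grid; grey value = mean height per cell."""
    rows = int(np.ceil(x_range / cell))
    cols = int(np.ceil(y_range / cell))
    m1 = np.zeros((rows, cols))                 # sum of heights per pixel (M1)
    m2 = np.zeros((rows, cols))                 # number of points per pixel (M2)
    for x, y, z in points:
        i, j = int(x / cell), int(y / cell)
        if 0 <= i < rows and 0 <= j < cols:
            m1[i, j] += z
            m2[i, j] += 1
    return np.divide(m1, m2, out=np.zeros_like(m1), where=m2 > 0)
```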

After projecting all the points, some pixels inside the scanned area may receive no projected points because the grid resolution is not small enough. These pixels do not conform to the sand pile model and are not part of the image prediction; image dilation is applied directly to the image to fill in these blank points in the scanned area. The prediction of the area on the back of the sand pile that cannot be scanned is based on the model that the sand pile is symmetric about its highest point. While traversing the columns of the image, the edge of each sand pile and its highest point are recorded so that each pile can be completed: since the pile is symmetric about the highest point, the height of the unscanned area can be predicted according to the sand pile model, and the distance from the highest point to an area with a grey value of zero is not greater than the distance from the highest point to the edge of the pile. After all the columns have been traversed, the prediction of the height of the unscanned part of the sand pile is complete (Fig. 11).
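A minimal sketch of the completion step is given below; it uses OpenCV dilation for the gaps inside the scanned area and mirrors each column about its last scanned (highest) pixel, assuming the occluded side lies at larger row indices. This is only an illustration of the symmetric sand pile model described above, not the exact implementation.

```python
import numpy as np
import cv2

def complete_pile(height):
    """Fill gaps inside the scan, then mirror occluded pixels about each column's peak."""
    filled = cv2.dilate(height.astype(np.float32), np.ones((3, 3), np.uint8))
    out = np.where(height > 0, height, filled)           # keep measured heights
    rows = out.shape[0]
    for j in range(out.shape[1]):
        col = out[:, j]
        nz = np.nonzero(col)[0]
        if nz.size == 0:
            continue
        peak = nz[-1]                                     # last scanned row: boundary / highest point
        for r in range(peak + 1, rows):
            mirror = 2 * peak - r                         # symmetric position about the peak
            if col[r] == 0 and mirror >= 0:
                col[r] = col[mirror]
    return out
```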

Fig. 11

a The point cloud projected onto a mat. b The prediction of the sand pile on the mat. The red circled area lies on the scannable side; its pixels have no grey values because the point cloud does not fully cover the grid during projection. It is not within the range predicted by the sand pile model, and the heights of the scannable area must not be set to zero, so the dilation algorithm fills in these interior heights in the image. The blue area is the area predicted based on the model

To evaluate the accuracy of the calculation and of the sand pile model prediction, we did two sets of experiments. One set compares the actual volume changes of the materials with the changes obtained through the volume calculation. The other compares the prediction accuracy of the sand pile model with that of simply doubling the scanned volume. In the actual working environment, the amount of material carried by the forklift each time lies within a range; in our experiment, the actual value of each load is 2.6–2.8 \(\text{m}^3\). For the accuracy of the calculation, we obtain the experimental results shown in Fig. 12; the average accuracy reaches 0.85.

Fig. 12

a The calculated volume change and the actual volume change. b The accuracy of the calculation. The warehouse is filled with material; we remove part of the material, calculate the volume change, and evaluate the accuracy of the volume calculation method

Fig. 13

a The volumes calculated by different methods and the real value. b The accuracy of the volume predicted by the sand pile model and the accuracy of doubling the scanned volume

To evaluate the accuracy of the sand pile model, we piled up several small sand piles in the material warehouse and scanned them with the LiDAR to calculate their volumes. The sand pile model achieves high accuracy, as shown in Fig. 13.

3.3 LiDAR camera calibration and colouring point cloud

LiDAR-camera calibration is the preparation for colouring the point cloud. This calibration can be divided into three steps: the first step is the calibration of the camera, the second step is the corner extraction from the LiDAR point cloud, and the third step is the calibration of the LiDAR-camera extrinsic parameters. The camera's internal parameters can be calibrated easily using the calibration board and MATLAB tools. The seed point can be selected easily by clicking on the point cloud displayed in RVIZ (a 3-dimensional visualization tool for ROS); based on this point, the program extracts a plane from the region of interest (ROI) within a certain range of the point cloud, and the corner extraction program extracts the corners in this ROI. The calibration process is shown in Fig. 14. After preparing the camera's internal parameters and the LiDAR's corner points, the extrinsic parameters of the LiDAR and camera can be calibrated by the algorithm (Wang et al. 2017). As Fig. 6 shows, we input the camera image and the LiDAR-camera extrinsic parameters to re-project the 3D points onto the 2D image and colourize the point cloud (Fig. 15).

Fig. 14

a The camera calibration. b The ROI area determined by the clicked point. c The corners extracted by the program. In c, the pink points circled in green are the corner points extracted by the program; some corner points are not circled

Fig. 15

a The raw LiDAR data from the distributed edge computing module. b The camera image used for colouring. c The colourized point cloud. The point cloud and image are captured and colourized in the edge computing module; the coloured point cloud can be transmitted to the control room for display

4 Conclusion

The intelligent warehouse monitoring system based on edge computing has been initially completed, and its function of calculating the warehouse material volume has been achieved. We have shown that distributed systems and edge computing in special factories and warehouses that cannot be cabled directly can form a network through wireless routers. The distributed edge computing modules can calculate the volume of the materials and colour the point cloud, and our experiments show that the sand pile model achieves high calculation accuracy. Transmitting coloured point clouds instead of images saves the wireless network's bandwidth and realizes long-distance data transmission. In the future, the key technologies for broader applications are network bandwidth and the stability of the distributed nodes. Moreover, cloud technology is relatively mature, and in the future it can also be used for data transmission across regions and long distances.