Abstract
This interdisciplinary research combines computer vision with stage lighting design to automatically detect light fixtures' positions for creating light animations. Multiple programmable light fixtures are often used in theaters, the event industry, and interactive installations. When creating complex animations, such as a wave traveling through an array of light fixtures from one side to the other, all lights' positions must be known beforehand. Traditionally, the position of each light is marked in the technical plan. However, technicians make mistakes during installation and sometimes install a light in a different position. In such a case, time-consuming troubleshooting is needed to determine which light is misplaced and either correct the position in the software or manually move the light to the correct position. Our system saves time during installation and produces light ID and position pairs that users can load into various lighting control software. As a result, users can improvise and change light positions more intuitively without needing a technical plan. Our system reduces installation costs and enables rapid prototyping of light shows, allowing previously impossible organic designs. We verified the system in a controlled experiment and measured the influence of camera resolution on accuracy.
1 Introduction
1.1 Use Case
We focus on large-scale dynamic light installations. Consider the light installation at 131 South Dearborn, Chicago, USA [22] as an example of an ideal use case. The installation consists of 925 glass bubbles, and each bubble has its own light source. Installing 925 individually programmable light sources and ensuring that the wiring is correct poses a considerable challenge and is prone to mistakes during installation. We aim to automate the light position mapping process with the proposed system.
Another use case we tested was mapping multiple programmable LED rings and strips. Instead of manually marking each fixture's position, we automated the process. As a result, the setup can be changed frequently to help designers find the ideal configuration and immediately test it. Please watch the experiment video [15] documenting the process.
1.2 Programming a Light Show
Without knowing the light positions, we can create only simple light shows, such as changing light parameters uniformly or creating animations based on noise. To create a more complex light show, we need to distinguish individual lights and know their positions.
We can then create a simulation with light sources represented as points in space. See Figure 1. We add virtual objects and animate their positions. We can achieve various light effects by detecting collisions between the light sources' positions and the volumes of moving virtual objects: we turn a light source on when it is inside a virtual object and off when it is outside. We can also use multiple virtual objects to create more elaborate animations or map them interactively based on sensor inputs. For example, one can map people's movement to light intensity in different sectors.
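The collision test described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual codebase: the class and method names are hypothetical, lights are 3D points, and the virtual object is a sphere swept through space.

```java
// Sketch of the collision test described above: a light turns on while its
// detected position lies inside a moving spherical "virtual object".
// Class and method names are illustrative, not from the paper's software.
public class CollisionAnimation {
    // True when point (lx, ly, lz) lies inside the sphere at (cx, cy, cz) with radius r.
    static boolean inside(double lx, double ly, double lz,
                          double cx, double cy, double cz, double r) {
        double dx = lx - cx, dy = ly - cy, dz = lz - cz;
        return dx * dx + dy * dy + dz * dz <= r * r;
    }

    // Computes on/off states for all lights in one animation frame.
    static boolean[] frame(double[][] lights, double cx, double cy, double cz, double r) {
        boolean[] on = new boolean[lights.length];
        for (int i = 0; i < lights.length; i++) {
            on[i] = inside(lights[i][0], lights[i][1], lights[i][2], cx, cy, cz, r);
        }
        return on;
    }

    public static void main(String[] args) {
        double[][] lights = { {0, 0, 0}, {1, 0, 0}, {2, 0, 0} };
        // A sphere of radius 0.6 sweeping along x: at cx = 1 only the middle light is inside.
        boolean[] on = frame(lights, 1.0, 0.0, 0.0, 0.6);
        System.out.println(on[0] + " " + on[1] + " " + on[2]); // false true false
    }
}
```

Animating the sphere's center across frames produces the traveling-wave effect; mapping the center to sensor input instead yields the interactive behavior mentioned above.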
1.3 Light Network Control
Light fixtures are often controlled using the DMX512 [10] protocol. Traditionally, the DMX address (a unique ID of the light) is selected using a hardware DIP switch directly on the light fixture. Another option is to use Remote Device Management (RDM [9]) commands from the control software [6]. However, RDM is only available with some DMX light fixtures, so we often rely on manual selection, making changing DMX addresses difficult. Each light with the appropriate DMX address must then be installed in the position given by the technical plan.
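As a hypothetical illustration of DMX addressing (not the paper's wiring scheme), a run of identical fixtures is often assigned addresses by index: a DMX universe carries 512 channels, so with a fixed channel count per fixture the universe and start address follow from the fixture's position in the chain.

```java
// Hypothetical helper showing a common DMX addressing convention:
// 512 channels per universe, fixtures packed so none spans a universe boundary.
public class DmxAddressing {
    // Returns {universe, startChannel} (start channel is 1-based) for a 0-based fixture index.
    static int[] address(int fixtureIndex, int channelsPerFixture) {
        int perUniverse = 512 / channelsPerFixture; // whole fixtures that fit in one universe
        int universe = fixtureIndex / perUniverse;
        int start = (fixtureIndex % perUniverse) * channelsPerFixture + 1;
        return new int[]{universe, start};
    }

    public static void main(String[] args) {
        // 170 RGB fixtures (3 channels each) fill universe 0 (510 channels used);
        // fixture index 170 therefore starts universe 1 at channel 1.
        int[] a = address(170, 3);
        System.out.println("universe " + a[0] + ", channel " + a[1]); // universe 1, channel 1
    }
}
```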
To control the lights, we most often use the Art-Net [2, 22] or sACN [1, 11] protocol, extensions of the DMX standard that enable us to control more lights and use network topology and devices such as Ethernet switches to send DMX packets over the network.
In essence, we can control individual lights from a single networked computer. We need an Ethernet adapter and an Art-Net/sACN node to convert the signal to DMX. The DMX signal is then sent to a DMX driver that maps DMX values to the voltage and current used to dim individual lights. See Figure 2.
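To make the network path concrete, the following sketch assembles an ArtDmx packet, the Art-Net message that carries one universe of DMX values; the field layout follows the publicly documented Art-Net specification, but this is a simplified illustration (no sequence management, socket code, or universe-range checks), not the paper's implementation.

```java
// Minimal sketch of an ArtDmx packet as sent over UDP (Art-Net uses port 6454).
// Layout per the public Art-Net specification; simplified for illustration.
public class ArtDmx {
    static byte[] packet(int universe, int sequence, byte[] dmx) {
        byte[] p = new byte[18 + dmx.length];
        byte[] id = {'A', 'r', 't', '-', 'N', 'e', 't', 0};
        System.arraycopy(id, 0, p, 0, 8);
        p[8] = 0x00; p[9] = 0x50;                 // OpCode 0x5000 (ArtDmx), little-endian
        p[10] = 0; p[11] = 14;                    // protocol version 14
        p[12] = (byte) sequence;                  // sequence number (0 disables sequencing)
        p[13] = 0;                                // physical input port (informational)
        p[14] = (byte) (universe & 0xFF);         // SubUni: low 8 bits of the 15-bit address
        p[15] = (byte) ((universe >> 8) & 0x7F);  // Net: high 7 bits
        p[16] = (byte) (dmx.length >> 8);         // data length, high byte first
        p[17] = (byte) (dmx.length & 0xFF);
        System.arraycopy(dmx, 0, p, 18, dmx.length);
        return p;
    }

    public static void main(String[] args) {
        byte[] dmx = new byte[512];
        dmx[0] = (byte) 255;                      // channel 1 at full intensity
        byte[] p = packet(0, 1, dmx);
        System.out.println(p.length);             // 530 (18-byte header + 512 channels)
    }
}
```

In practice, such a packet would be sent via a `DatagramSocket` to the Art-Net node's IP address, which then outputs the DMX signal to the driver.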
2 State of the Art
Various systems that use a camera to control lights exist. Most of them deal with finding a user or controlling a light to track the user. In most cases, available solutions rely on knowing the light fixtures' positions in space. We are trying to solve a different problem: localizing light sources relative to the camera. Still, some principles can be used for both problems.
The Luxapose [14] system uses a mobile phone as a camera and modified light fixtures to track the phone's position. The light sources are modulated to produce pulses of light with encoded position and ID information that can be captured in a single frame by exploiting the rolling-shutter effect. While we are trying to determine the position of the light fixture, Luxapose tracks a user. Theoretically, we could repurpose the system and use the known camera position to determine the light position. Unfortunately, Luxapose requires light source modification, which is not practical in our use case. Moreover, the camera position relative to the light source would have to be known; while feasible, this would require precise on-site measurement.
Similar to Luxapose, a paper by Hossan et al. [13] uses multiple known light fixtures to localize the camera position. We cannot use triangulation methods to determine the camera position when using a single camera in our system. In practice, finding the precise camera position relative to the light fixture would have to be repeated for all lights, as we cannot guarantee their positions relative to each other. Such manual measurements would completely negate any benefits of automation. The authors use the pixel count of the detected blob to measure the distance from the camera to the light fixture. A similar approach could be adopted while moving the camera back and forth to determine the distance from the camera to the light source.

Other approaches [24] use a photodiode installed in the light fixture as a sensor to track objects based on changes in light propagation. Multiple light fixtures have to be taken into account, or a quadrant photodiode is used [7]. The camera is no longer needed in these cases, but the light fixtures have to be modified to include the appropriate sensor and communication unit. BlackTrax [8], a commercial solution for motorized lights, uses multiple cameras and an infrared beacon to track a moving target.

Hiller and Zakhor developed a system to detect lights' positions and even classify them as fluorescent bulbs, tubes, or LED lights [12]. The project relies on a Tango tablet [17] with a DSLR camera and an IMU aligned in one package. The user walks under the lights to map their positions and create a spatial map. Tracking is only possible with the sensors perpendicular to the light fixture, which is fine when detecting ceiling-mounted lights but would fail with vertical or organically shaped installations. The reported error is also not suitable for the light-show scenario, with some lights being false positives and others missed during detection.
3 Software
3.1 Programming Environment
The main program is written in the Java-based Processing framework [4]. To acquire the camera stream, we used a GStreamer [23] based library [11]. To perform computer vision tasks, we used the Java wrapper of the OpenCV library [5].
3.2 Light Fixture Detection
To find a light's position, we compute the frame difference between a frame with all lights turned off and a frame with a single light turned on. We then threshold the resulting absolute difference to a binary image and find the contours of the light blobs. Finally, we sort the found blobs by area, select the blob with the largest area, and find its centroid. The resulting x and y coordinates are saved and associated with the light's DMX address. When all the lights have been processed, we normalize their coordinates and save all the information into a JSON file. Later, this file can be loaded into light control software such as TouchDesigner [21], Madrix [3], openFrameworks [20], Processing [4], vvvv [18], and similar software used to create light shows.
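The detection steps above can be sketched in pure Java on grayscale frames stored as `int[h][w]`. The paper's implementation uses OpenCV's contour finding; the flood-fill blob labeling below is a self-contained stand-in for it, and the class and method names are illustrative.

```java
import java.util.ArrayDeque;

// Sketch of the detection pipeline: difference the "off" and "on" frames,
// threshold to a binary image, label bright blobs, and return the centroid
// of the largest blob. Flood fill stands in for OpenCV's findContours.
public class LightDetector {
    // Returns {x, y} centroid of the largest bright blob in |on - off|, or null if none.
    static double[] detect(int[][] off, int[][] on, int threshold) {
        int h = off.length, w = off[0].length;
        boolean[][] bin = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                bin[y][x] = Math.abs(on[y][x] - off[y][x]) > threshold;

        boolean[][] seen = new boolean[h][w];
        double[] best = null;
        int bestArea = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!bin[y][x] || seen[y][x]) continue;
                // Flood fill one blob, accumulating its area and coordinate sums.
                int area = 0; long sx = 0, sy = 0;
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[]{x, y});
                seen[y][x] = true;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    area++; sx += p[0]; sy += p[1];
                    int[][] nb = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
                    for (int[] d : nb) {
                        int nx = p[0] + d[0], ny = p[1] + d[1];
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h
                                && bin[ny][nx] && !seen[ny][nx]) {
                            seen[ny][nx] = true;
                            stack.push(new int[]{nx, ny});
                        }
                    }
                }
                if (area > bestArea) {
                    bestArea = area;
                    best = new double[]{(double) sx / area, (double) sy / area};
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] off = new int[8][8], on = new int[8][8];
        on[3][4] = 200; on[3][5] = 200; on[4][4] = 200; // one small lit blob
        double[] c = detect(off, on, 50);
        System.out.println(c[0] + " " + c[1]);          // centroid, approx (4.33, 3.33)
    }
}
```

In the full pipeline, this detection runs once per light while cycling DMX addresses; each centroid is then normalized and written out with its DMX address to the JSON file.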
3.3 GUI and User Input
We have also developed a GUI for easier use. We provide multiple options, such as selecting a target IP, setting custom binary thresholds, selecting from multiple cameras, and setting minimal and maximal blob sizes. Users can also select whether to cycle through all lights automatically or one by one by clicking a button. If needed, the user can manually adjust every detected light position with the mouse. Users can also create custom masks to select areas of the camera image that should be ignored during detection. Furthermore, a perspective corner-pin transformation can be applied to the camera image. Acquiring an undistorted camera image is essential for correctly measuring uniform distances between light fixtures.
4 Experiment
4.1 Lights Setup
We tested our system on a setup with 30 individually programmable LED light sources controlled via the Art-Net network. The lights were installed on a 250 cm by 125 cm wooden plate, all facing the same direction. See Figure 3. Each light is 3.5 cm in diameter, with 6 LEDs controlled as a single symmetrical light source.
4.2 Camera
We used a Logitech C922 camera, a widely available and affordable standard web camera with a USB interface. The camera was positioned perpendicular to the wooden plate, 272 cm away. The camera's diagonal field of view (FOV) is 78°, its horizontal FOV is 70.42°, its vertical FOV is 43.3°, and its focal length is 3.67 mm. We tested the setup twice, once at a camera resolution of 640×480 px and once at 1920×1080 px, to determine whether the resolution used correlates with accuracy.
5 Results
Experiment data is available at Zenodo (DOI: 10.5281/zenodo.6814223) [16]. All installed lights were detected correctly. See the results in Table 1. At 640×480 px camera resolution, 1 pixel corresponded to 4.53 mm in the plane of the wooden plate; the average position error was 2.912 mm on the horizontal axis and 2.426 mm on the vertical axis. The maximum error was 2 pixels on the horizontal axis and 1 pixel on the vertical axis, with standard deviations of 0.678 px and 0.507 px, respectively.
At 1920×1080 px camera resolution, 1 pixel corresponded to 2.02 mm in the plane of the wooden plate; the average position error was 2.885 mm on the horizontal axis and 1.947 mm on the vertical axis. The maximum error was 4 pixels on the horizontal axis and 6 pixels on the vertical axis, with standard deviations of 1.069 px and 1.318 px, respectively.
6 Discussion
We assumed that a higher camera resolution and a shorter distance to the lights would produce more accurate results, as 1 cm would be represented by more pixels in the camera image. Resolution does not play a crucial role: we have not observed a significant accuracy increase when using 1920×1080 px over 640×480 px.
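A back-of-the-envelope conversion using the reported pixel scales (4.53 mm/px at 640×480, 2.02 mm/px at 1920×1080) illustrates why the resolutions end up comparable in millimeter terms: the higher resolution has a finer scale but larger maximum pixel errors.

```java
// Converts the reported pixel errors to millimeters using the measured
// mm-per-pixel scales from the experiment (Sect. 5).
public class ErrorScale {
    static double pxToMm(double px, double mmPerPx) { return px * mmPerPx; }

    public static void main(String[] args) {
        // Maximum horizontal error: 2 px at 4.53 mm/px vs. 4 px at 2.02 mm/px.
        System.out.println(pxToMm(2, 4.53)); // about 9.06 mm at 640x480
        System.out.println(pxToMm(4, 2.02)); // about 8.08 mm at 1920x1080
    }
}
```

Both maxima land near 8–9 mm, consistent with the observation that the higher resolution brings no significant accuracy gain.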
To propagate an animated wave through the lights correctly, we need to know whether a particular light is to the left or right of another. We successfully obtained correct relative relationships between lights irrespective of the camera resolution used. High absolute accuracy can be beneficial but is not necessary for light-show creation.
The light source shines directly at the camera and produces a lens flare. Lens flare does not worsen the detection as long as it is uniform in all directions, in which case we can assume the light source lies at the flare's center.
7 Conclusion
We offer a robust way to automatically detect installed light fixtures' positions with a single monocular camera. Furthermore, we save the information with each light's DMX address in a machine- and human-readable format for use in various light control software. Our proposed automatic light detection method can be thought of as a proof of concept with clear time- and cost-saving benefits. More importantly, it opens the way for new stage light design possibilities. More testing in real-world scenarios is needed to further verify the presented system's viability.
8 Limitations
An unobstructed view of all light sources is needed to detect their position correctly. Position mapping is possible only if we find a view in which the light fixtures do not overlap. For example, it would be challenging to automatically get the positions of light fixtures organized in a 3D helix shape.
The best use case for our system is light sources positioned in a single layer, for example, lights hanging from the ceiling. A problem occurs when the lights are organized in multiple layers, such as multiple lights on a single string or on rods beside each other. It might not be practical to use our method in such a case.
Reflections cause another limitation. When reflective surfaces surround the light sources, several false-positive hot spots can appear in the camera image that might be hard to distinguish from the actual light source. For example, a chrome-plated ceiling has high reflectivity. In this case, reflections have to be eliminated before detection.
9 Future Research
We can improve the tool to enable 3D position mapping. We can use a binocular stereoscopic camera to calculate the distance from the camera to the light. Alternatively, we can reposition a single monocular camera to acquire two points of view and achieve the same result. We could further improve usability by merging multiple cameras to cover a larger area and map large-scale installations. Another approach would be to enable sequential mapping: after mapping one place, the user would physically move the camera to cover a neighboring area. The relative change of the camera origin could be calculated with a standard SLAM method [19].
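The stereoscopic idea above reduces to standard stereo triangulation: with two horizontally offset views separated by a baseline B and a focal length f expressed in pixels, the depth of a light follows from its pixel disparity d as Z = f·B/d. The values below are illustrative, not measurements from the experiment.

```java
// Sketch of stereo depth recovery for a detected light: Z = f * B / d,
// where f is the focal length in pixels, B the camera baseline, and d the
// disparity (horizontal pixel offset of the light between the two views).
public class StereoDepth {
    static double depth(double focalPx, double baselineMm, double disparityPx) {
        return focalPx * baselineMm / disparityPx;
    }

    public static void main(String[] args) {
        // Illustrative values: f = 1000 px, baseline 100 mm, disparity 40 px.
        System.out.println(depth(1000, 100, 40)); // 2500.0 (mm from the camera)
    }
}
```

Combining this per-light depth with the already detected 2D centroids would yield the full 3D position of each fixture relative to the camera.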
10 Declarations
Availability of data and materials
Experiment data is available at DOI: 10.5281/zenodo.6814223 [16].
References
SACN (2019). https://artisticlicenceintegration.com/technology-brief/technology-resource/sacn-and-art-net/
Art-net (2020). https://art-net.org.uk/
Madrix lighting control (2022). https://www.madrix.com/
Fry, B., Reas, C.: Processing (2004). https://processing.org/
Bradski, G.: The openCV library. Dr. Dobb’s J. Softw. Tools Prof. Programmer 25(11), 120–123 (2000)
Choi, S.-I., Lee, S., Koh, S.-J., Lim, S.-K., Kim, I., Kang, T.-G.: Reliable transmission for remote device management (RDM) protocol in lighting control networks. In: Jeong, Y.-S., Park, Y.-H., Hsu, C.-H.R., Park, J.J.J.H. (eds.) Ubiquitous Information Technologies and Applications. LNEE, vol. 280, pp. 51–58. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-41671-2_8
Cincotta, S., Neild, A., He, C., Armstrong, J.: Visible light positioning using an aperture and a quadrant photodiode. In: 2017 IEEE Globecom Workshops (GC Wkshps), pp. 1–6 (2017). https://doi.org/10.1109/GLOCOMW.2017.8269150
Eichel, J.A., Clausi, D.A., Fieguth, P.: Precise high speed multi-target multi-sensor local positioning system. In: 2011 Canadian Conference on Computer and Robot Vision, pp. 109–116 (2011). https://doi.org/10.1109/CRV.2011.59
ESTA: American national standard ANSI e1.20 - 2006 entertainment technology RDM remote device management over dmx512 networks. Technical report, 875 Sixth Avenue, Suite 1005, New York, NY 10001, USA (2006). https://webstore.ansi.org/preview-pages/ESTA/preview_ANSI+E1.20-2006.pdf
ESTA: American national standard ANSI e1.11 - 2008 (r2018) entertainment technology-usitt dmx512-a asynchronous serial digital data transmission standard for controlling lighting equipment and accessories. Technical report, 630 Ninth Avenue, Suite 609, New York, NY 10036 USA (2018). https://tsp.esta.org/tsp/documents/docs/ANSI-ESTA_E1-11_2008R2018.pdf
Colubri, A., et al.: Processing video library (2022). https://github.com/processing/processing-video
Hiller, C., Zakhor, A.: Fast, automated indoor light detection, classification, and measurement. Electron. Imaging 2018(15), 2711–2714 (2018)
Hossan, M.T., Chowdhury, M.Z., Islam, A., Jang, Y.M.: A novel indoor mobile localization system based on optical camera communication. Wirel. Commun. Mob. Comput. 2018, 9353428 (2018). https://doi.org/10.1155/2018/9353428
Kuo, Y.S., Pannuto, P., Hsiao, K.J., Dutta, P.: Luxapose: indoor positioning with mobile phones and visible light. In: Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, pp. 447–458. MobiCom 2014, Association for Computing Machinery, New York, NY, USA (2014). https://doi.org/10.1145/2639108.2639109
Leischner, V.: Automatic light position detection prototype v2 (2022). https://youtu.be/xAghkKOFq-g
Leischner, V.: Light camera position detection - experiment data (2022). https://doi.org/10.5281/zenodo.6814223
Marder-Eppstein, E.: Project Tango. In: ACM SIGGRAPH 2016 Real-Time Live!, p. 25 (2016)
McDirmid, S.: Usable live programming. In: Proceedings of the 2013 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming & Software, pp. 53–62 (2013)
Mur-Artal, R., Montiel, J.M.M., Tardos, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Rob. 31(5), 1147–1163 (2015)
Noble, J.: Programming Interactivity: A Designer’s Guide to Processing, Arduino, and Openframeworks. O’Reilly Media Inc, California (2009)
Rousset, I.: Touchdesigner (2022). https://derivative.ca/
Růžičková, J.: Walk on clouds (2019). https://www.lasvit.com/project/131-south-dearborn/intro
Taymans, W., Baker, S., Wingo, A., Bultje, R.S., Kost, S.: GStreamer application development manual (1.2.3). Published on the web, p. 72 (2013)
Wang, W., Wang, Q., Zhang, J., Zuniga, M.: PassiveVLP: leveraging smart lights for passive positioning. ACM Trans. Internet Things 1(1), 1–24 (2020). https://doi.org/10.1145/3362123
Acknowledgements
The research was consulted with doc. Ing. Zdeněk Míkovec, Ph.D.
Funding
The author works commercially in the interactive installations industry that can benefit from the proposed automatic light position detection system. This research has been supported by the project funded by a grant SGS22/172/OHK3/3T/13 and by RCI (CZ.02.1.01/0.0/0.0/16 019/0000765).
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
About this paper
Cite this paper
Leischner, V. (2023). Light Fixtures Position Detection Using a Camera. In: Biele, C., Kacprzyk, J., Kopeć, W., Owsiński, J.W., Romanowski, A., Sikorski, M. (eds) Digital Interaction and Machine Intelligence. MIDI 2022. Lecture Notes in Networks and Systems, vol 710. Springer, Cham. https://doi.org/10.1007/978-3-031-37649-8_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-37648-1
Online ISBN: 978-3-031-37649-8
eBook Packages: Intelligent Technologies and Robotics (R0)