This research implements a system that detects and records various human activities in indoor scenes; for example, it detects who brings in or takes out an object and records an image of the handled object together with the timestamp of the incident. The system is built on ROS2, a widely used distributed communication framework for robotics based on a micro-services architecture, which allows each detection subprocess to run as a separate node and improves the maintainability of each module. This paper reports the constructed system, which combines visual human and pose detection, object detection, and recognition of object-handling activities. Because the system separates not only service processes but also hardware, it can run computationally heavy machine-learning models simultaneously on multiple GPU-equipped PCs.
- Interactive human-space design and intelligence
- Human-robot interaction