Abstract:
In recent decades, smart buildings have received considerable research attention due to the increased demand for connected and integrated technology. Based on data collected by sensors, alarms, lighting, access control, heating and cleaning can be adjusted according to human activity and the actual needs of the occupants, resulting in efficient energy management and operating cost savings. However, in most cases, these sensors are application-specific, which limits their usefulness and scalability. Among the available sensing technologies, acoustic methods often rely on microphones, which can raise privacy concerns. In this work, we use an electrodynamic loudspeaker in combination with a convolutional neural network algorithm to extract and classify the features of indoor events from the sound field. We show how the loudspeaker impedance is sensitive, through the modal response of the room, to changes in occupancy or room layout (presence of people, movement or removal of furniture), door or window opening, or temperature variation. This gives the loudspeaker a new functionality in addition to sound reproduction. Theoretical analysis and experiments in real rooms demonstrate the accuracy and effectiveness of this acoustics-based approach for supervised classification of indoor events.
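To make the classification idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a 1D convolutional network that maps a loudspeaker impedance magnitude spectrum to indoor-event classes. The input length, number of classes, and network size are assumptions chosen for illustration only.

```python
# Hypothetical sketch: 1D CNN classifying room-state events from a
# loudspeaker impedance magnitude spectrum. Shapes and class count are
# assumptions, not values from the paper.
import torch
import torch.nn as nn

N_FREQ_BINS = 512   # assumed number of frequency points per impedance sweep
N_CLASSES = 4       # assumed classes, e.g. empty / occupied / door open / window open

class ImpedanceCNN(nn.Module):
    def __init__(self, n_classes: int = N_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),  # fixed-length output regardless of sweep length
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_bins) impedance magnitude spectrum
        h = self.features(x)
        return self.classifier(h.flatten(1))

if __name__ == "__main__":
    model = ImpedanceCNN()
    dummy = torch.randn(2, 1, N_FREQ_BINS)  # two synthetic impedance sweeps
    print(model(dummy).shape)               # torch.Size([2, 4]) class logits
```

In practice, such a model would be trained on labelled impedance sweeps recorded under the different room conditions described above; the sketch only fixes the input/output interface.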