• Media type: E-article
  • Title: Collaborative duty cycling strategies in energy harvesting sensor networks
  • Contributors: Long, James; Büyüköztürk, Oral
  • Published: Wiley, 2020
  • Published in: Computer-Aided Civil and Infrastructure Engineering
  • Language: English
  • DOI: 10.1111/mice.12522
  • ISSN: 1093-9687; 1467-8667
  • Keywords: Computational Theory and Mathematics; Computer Graphics and Computer-Aided Design; Computer Science Applications; Civil and Structural Engineering; Building and Construction
  • Description (Abstract): Energy harvesting wireless sensor networks are a promising solution for low-cost, long-lasting civil monitoring applications, but management of energy consumption is a critical concern in ensuring these systems provide maximal utility. Many common civil applications of these networks are fundamentally concerned with detecting and analyzing infrequently occurring events. To conserve energy in these situations, a subset of nodes in the network can assume active duty, listening for events of interest, while the remaining nodes enter low-power sleep mode to conserve battery. However, judicious planning of the sequence of active node assignments is needed to ensure that as many nodes as possible can be reached upon the detection of an event, and that the system maintains capability during periods of low energy harvesting. In this article, we propose a novel reinforcement learning (RL) agent, which acts as a centralized power manager for this system. We develop a comprehensive simulation environment to emulate the behavior of an energy harvesting sensor network, with consideration of spatially varying energy harvesting capabilities and wireless connectivity. We then train the proposed RL agent to learn optimal node selection strategies through interaction with the simulation environment. The behavior and performance of these strategies are tested on real, unseen solar energy data to demonstrate the efficacy of the method. The deep RL agent is shown to outperform baseline approaches on both seen and unseen data.
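  • Note: To make the duty-cycling setup in the abstract concrete, below is a minimal, purely illustrative Python sketch. It is not the authors' implementation: the node count, energy costs, harvest model, and the greedy baseline policy are all assumptions made for demonstration, whereas the paper trains a deep RL agent against a full simulation environment driven by real solar data.

```python
import math
import random

# Purely illustrative sketch of the duty-cycling problem described in the
# abstract. All constants, the harvest model, and the greedy baseline policy
# are assumptions made for demonstration; they are not taken from the paper.

NUM_NODES = 10
ACTIVE_SET_SIZE = 3       # assumed number of nodes kept listening per step
BATTERY_CAPACITY = 100.0
ACTIVE_COST = 5.0         # assumed energy drawn per step by an active node
SLEEP_COST = 0.5          # assumed energy drawn per step by a sleeping node

def harvest(node, t):
    """Toy spatially and temporally varying harvest (stand-in for solar data)."""
    spatial = 0.5 + node / NUM_NODES                     # some nodes get more sun
    daylight = max(0.0, math.sin(2 * math.pi * t / 24))  # crude day/night cycle
    return 6.0 * spatial * daylight * random.random()

def step(batteries, active_set, t):
    """Advance one time step; reward = active nodes that still hold charge."""
    reward = 0
    for node in range(NUM_NODES):
        cost = ACTIVE_COST if node in active_set else SLEEP_COST
        batteries[node] = max(0.0, min(BATTERY_CAPACITY,
                                       batteries[node] + harvest(node, t) - cost))
        if node in active_set and batteries[node] > 0.0:
            reward += 1   # this node could have heard an event
    return reward

def greedy_policy(batteries):
    """Baseline manager: activate the nodes with the most stored energy."""
    ranked = sorted(range(NUM_NODES), key=lambda n: batteries[n], reverse=True)
    return set(ranked[:ACTIVE_SET_SIZE])

batteries = [BATTERY_CAPACITY] * NUM_NODES
total = sum(step(batteries, greedy_policy(batteries), t) for t in range(24 * 30))
print("coverage over a simulated month:", total)
```

    The greedy rule here is a plausible stand-in for the kind of baseline the trained deep RL power manager is reported to outperform; the RL agent instead learns the active-set assignment sequence by interacting with the simulation.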