Abstract
Photonic reservoir computing has been shown to solve a variety of complex problems. Although training a reservoir computing system is much simpler than training other neural network approaches, it still requires a considerable amount of resources, which becomes an issue when retraining is required. Transfer learning is a technique that allows us to re-use information between tasks, thereby reducing the cost of retraining. We propose transfer learning as a viable technique to compensate for the unavoidable parameter drift in experimental setups. Compensating for this parameter drift usually requires retraining the system, which is very time- and energy-consuming. Based on numerical studies of a delay-based reservoir computing system with semiconductor lasers, we investigate the use of transfer learning to mitigate these parameter fluctuations. Additionally, we demonstrate that transfer learning applied to two slightly different tasks allows us to reduce the number of input samples required to train the second task, thus reducing the amount of retraining.
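The paper itself reports numerical studies of a laser-based delay reservoir; as a rough illustration of the transfer-learning idea described in the abstract, the sketch below reuses the readout weights trained on one reservoir as a regularization prior when retraining on a "drifted" reservoir with far fewer samples. A generic software echo-state network stands in for the photonic system, and all parameter names, network sizes, and the toy prediction task are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch only: a generic echo-state network stands in for the
# delay-based laser reservoir; all values below are assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(u, W_in, W_res, leak=0.3):
    """Drive a simple leaky-tanh reservoir with a 1-D input sequence u."""
    n = W_res.shape[0]
    x = np.zeros(n)
    states = np.empty((len(u), n))
    for t, u_t in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W_res @ x)
        states[t] = x
    return states

def ridge_readout(X, y, alpha=1e-4, w_prior=None):
    """Ridge-regression readout; if w_prior is given, regularize toward it
    (a simple form of transfer learning for the output weights)."""
    n = X.shape[1]
    A = X.T @ X + alpha * np.eye(n)
    b = X.T @ y + (alpha * w_prior if w_prior is not None else 0.0)
    return np.linalg.solve(A, b)

# Reservoir before and after a small parameter drift (hypothetical values).
n_nodes = 100
W_in = rng.normal(0, 0.5, n_nodes)
W_res = rng.normal(0, 1.0, (n_nodes, n_nodes))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius 0.9
W_res_drifted = W_res * 1.02                               # mimic slow drift

# Toy task: one-step-ahead prediction of a noisy sine wave.
u = np.sin(0.2 * np.arange(3000)) + 0.05 * rng.normal(size=3000)
target = np.roll(u, -1)

# 1) Full training on the original reservoir (many samples).
X_full = run_reservoir(u[:2000], W_in, W_res)
w_source = ridge_readout(X_full, target[:2000])

# 2) After drift, retrain with only 200 samples, reusing w_source as a prior.
X_small = run_reservoir(u[2000:2200], W_in, W_res_drifted)
w_transfer = ridge_readout(X_small, target[2000:2200], alpha=1e-2, w_prior=w_source)

# Evaluate on held-out data from the drifted reservoir.
X_test = run_reservoir(u[2200:2900], W_in, W_res_drifted)
err = np.sqrt(np.mean((X_test @ w_transfer - target[2200:2900]) ** 2))
print(f"transfer-learned readout RMSE on drifted reservoir: {err:.3f}")
```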
Original language | English |
---|---|
Pages (from-to) | 949-961 |
Number of pages | 13 |
Journal | Nanophotonics |
Volume | 12 |
Issue number | 5 |
Early online date | 18 Oct 2022 |
DOIs | |
Status | Published - 1 Mar 2023 |
Bibliographic note
Funding Information: Research funding: This research was funded by the Research Foundation Flanders (FWO) under grants G028618N, G029519N and G006020N. Additional funding was provided by the EOS project “Photonic Ising Machines”. This project (EOS number 40007536) has received funding from the FWO and F.R.S.-FNRS under the Excellence of Science (EOS) programme.
Publisher Copyright:
© 2022 the author(s), published by De Gruyter, Berlin/Boston 2022.