An invariants based architecture for combining small and large data sets in neural networks
Abstract
We present a novel architecture for an AI system that combines a priori knowledge with deep learning. In a traditional neural network, all available data is pooled at the input layer. Our alternative network is constructed so that partial representations (invariants) are learned in the intermediate layers; these can then be combined with a priori knowledge or with other predictive analyses of the same data. Because learning is more efficient, smaller training datasets suffice. Moreover, since the architecture admits a priori knowledge and interpretable predictive models, the interpretability of the overall system increases while the data can still be fed to a black-box neural network. Our system uses networks of neurons rather than single neurons so that approximations (invariants) of the output can be represented.
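The paper itself gives no code. As an illustrative sketch only, the idea of an intermediate invariant layer fused with a priori knowledge could look like the forward pass below; all layer sizes, the choice of fusion by concatenation, and the use of an external "prior" feature vector are our assumptions, not details from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): raw input, invariant
# representation, a priori feature vector, and output.
d_in, d_inv, d_prior, d_out = 8, 4, 2, 1

# Sub-network of neurons that learns a partial representation
# (an "invariant") of the output from the raw data.
W1 = rng.normal(size=(d_in, d_inv))
b1 = np.zeros(d_inv)

# Output head that combines the learned invariant with a priori
# knowledge, e.g. the output of an interpretable model on the same data.
W2 = rng.normal(size=(d_inv + d_prior, d_out))
b2 = np.zeros(d_out)

def forward(x, prior):
    """Forward pass: invariant branch, then fusion with a priori features."""
    invariant = relu(x @ W1 + b1)                    # intermediate invariant layer
    fused = np.concatenate([invariant, prior], axis=-1)
    return fused @ W2 + b2

x = rng.normal(size=d_in)      # raw data sample (possibly from a small set)
prior = np.array([0.5, -1.0])  # a priori / interpretable-model features
y = forward(x, prior)
print(y.shape)  # (1,)
```

In this sketch the network is never trained; the point is only the dataflow: the raw data reaches the output solely through the invariant representation, and the a priori features enter after that bottleneck, which is where the interpretability claim of the abstract attaches.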
Organization | Hogeschool Utrecht |
Department | Kenniscentrum Digital Business & Media |
Research group | Artificial Intelligence |
Published in | Proceedings of BNAIC/BeneLearn 2021. BnL, Luxembourg, Pages: 748-749 |
Date | 2021-11-10 |
Type | Conference contribution |
ISBN | 0-2799-2527-X |
Language | English |