Trust in cognitive models: understandability and computational reliabilism

Javed, Noman; Pirrone, Angelo; Bartlett, Laura; Lane, Peter; and Gobet, Fernand (2023) Trust in cognitive models: understandability and computational reliabilism. In: AISB 2023 convention proceedings. The Society for the Study of Artificial Intelligence and Simulation of Behaviour. ISBN 978-1-908187-85-7

The realm of knowledge production, once considered a solely human endeavour, has been transformed by the rising prominence of artificial intelligence. AI not only generates new forms of knowledge but also plays a substantial role in scientific discovery. This development raises a fundamental question: can we trust knowledge generated by AI systems? Cognitive modelling, a field at the intersection of psychology and computer science that aims to understand human behaviour under various experimental conditions, underscores the importance of trust. To address this concern, we identify understandability and computational reliabilism as two essential aspects of trustworthiness in cognitive modelling. This paper examines both dimensions of trust, taking as a case study a system for semi-automatically generating cognitive models. These models are evolved interactively as computer programs using genetic programming. The choice of genetic programming, coupled with simplification algorithms, aims to produce understandable cognitive models. To discuss reliability, we adopt computational reliabilism and demonstrate how our test-driven software development methodology instils reliability both in the model generation process and in the models themselves.
