
Let u denote the temporal order of the update within a trial (i.e., u = 1 for the first update and u = 2 for the second update). In this case, the judgment at the time the agent’s prediction is observed is given by

\[
g_t^u = \begin{cases} 1 & \text{if } (a_t = 1 \text{ and } q_t > 0.5) \text{ or } (a_t = 0 \text{ and } q_t < 0.5) \\ 0 & \text{otherwise,} \end{cases}
\]

and the judgment at the end of the trial is given by

\[
g_t^u = \begin{cases} 1 & \text{if } c_t = 1 \\ 0 & \text{otherwise.} \end{cases}
\]

The ability belief updated at each time step is the most recent estimate. We also considered several reinforcement-learning (non-Bayesian) versions of these three models, none of which performed as well as their Bayesian counterparts (see Supplemental Information for details).
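As a concrete illustration, here is a minimal Python sketch of the judgment variable g_t^u defined above. This is not the authors’ code; the function and variable names (agent_prediction, own_estimate, correct) are assumptions introduced only for illustration.

```python
# Minimal sketch (assumption, not the authors' code) of the judgment variable
# g_t^u in the evidence-based expertise model described above.

def judgment_at_prediction(agent_prediction: int, own_estimate: float) -> int:
    """g_t^u at the first update (u = 1): 1 if the observed agent's prediction
    a_t agrees with the participant's own probability estimate q_t, else 0."""
    agrees_up = agent_prediction == 1 and own_estimate > 0.5
    agrees_down = agent_prediction == 0 and own_estimate < 0.5
    return 1 if (agrees_up or agrees_down) else 0


def judgment_at_feedback(correct: int) -> int:
    """g_t^u at the second update (u = 2): 1 if the agent's prediction turned
    out to be correct at the end of the trial (c_t = 1), else 0."""
    return 1 if correct == 1 else 0
```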
fMRI analysis was also carried out using FSL (Jenkinson et al., 2012). A GLM was fit in prewhitened data space. A total of 28 regressors (and their temporal derivatives, except for the 6 motion regressors produced during realignment) were included in the GLM for each of the four runs/sessions collected during scanning: the main effect of the first decision-making phase for predictions about people (condition 1), algorithms (condition 2), and assets (condition 3); the main effect of the observed agent’s prediction for people (condition 1) and algorithms (condition 2); the main effect of the interstimulus interval (conditions 1 and 2); the main effect of the feedback phase for AC, DC, AI, and DI trials for people (condition 1) and algorithms (condition 2); the main effect of the feedback phase for assets (condition 3); the main effect of the presentation screen at the beginning of each run; the interaction between chosen subjective EV and the decision-making phase separately for people, algorithms, and assets; the interaction between expertise and the decision-making phase separately for people and algorithms; the interaction between simulation-based aPEs and the other agent’s prediction separately for people and algorithms; the interaction between rPE and the feedback phase separately for people, algorithms, and assets; the interaction between evidence-based aPEs and the feedback phase separately for AC, DC, AI, and DI trials for people and algorithms; and 6 motion regressors. The ITI event was not modeled. See the main text for the definition of the AC, DC, AI, and DI trials. We defined additional contrasts of parameter estimates (COPEs) for expertise and for expertise prediction errors of agents, independent of agent type, as a (1 1) contrast of the relevant people and algorithms regressors, as well as COPEs for the difference (1 −1) in expertise and in expertise prediction errors between people and algorithms. To search for common expertise prediction errors at feedback, we defined a (((AC + DC) + (AI + DI)) × people + ((AC + DC) + (AI + DI)) × algorithms) contrast.
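To make the contrast definitions concrete, the following Python sketch shows how the (1 1) and (1 −1) expertise COPEs and the common expertise-prediction-error contrast could be assembled as weight vectors over named regressors. This is an illustrative sketch under assumed regressor names and ordering; it does not reproduce the authors’ FSL design files.

```python
# Illustrative sketch (assumptions, not the authors' FSL design files):
# contrast vectors expressed as weights over hypothetical regressor names.
import numpy as np

# Hypothetical column names for the regressors entering these contrasts.
regressors = (
    ["expertise_x_decision_people", "expertise_x_decision_algorithms"]
    + [f"evid_aPE_x_feedback_{cell}_{agent}"
       for agent in ("people", "algorithms")
       for cell in ("AC", "DC", "AI", "DI")]
)


def contrast(weights: dict) -> np.ndarray:
    """Build a contrast vector with the given weights; unnamed columns get 0."""
    c = np.zeros(len(regressors))
    for name, w in weights.items():
        c[regressors.index(name)] = w
    return c


# Agent-independent expertise effect: a (1 1) contrast over people and algorithms.
expertise_common = contrast({"expertise_x_decision_people": 1,
                             "expertise_x_decision_algorithms": 1})

# People vs. algorithms difference in expertise: a (1 -1) contrast.
expertise_diff = contrast({"expertise_x_decision_people": 1,
                           "expertise_x_decision_algorithms": -1})

# Common expertise prediction errors at feedback:
# ((AC + DC) + (AI + DI)) x people + ((AC + DC) + (AI + DI)) x algorithms.
ape_common = contrast({f"evid_aPE_x_feedback_{cell}_{agent}": 1
                       for agent in ("people", "algorithms")
                       for cell in ("AC", "DC", "AI", "DI")})
```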
