A decade ago, three researchers at the University of Pennsylvania coined the term “algorithm aversion” to describe how people quickly lost faith in a weather-forecasting program when it made a mistake, even when it was demonstrably more accurate than the human forecasters it was compared against.
This has stayed at the back of FT Alphaville's mind whenever we've written posts about whether ChatGPT could help you day trade, pass the CFA exam, get an economics degree, decipher central bank chatter, or get a job as a sell-side analyst.
In other words, while we continue to ask whether artificial intelligence can do all these things, we should also ask whether people will actually let it. That question seems entirely appropriate given the vast sums companies are spending on AI infrastructure: would the money be wasted if people fundamentally don't trust or like the results, even when they are good?
That's why this recent paper by Gertjan Verdickt and Francesco Stradi is so interesting. Here's the abstract:
Do investors trust AI-based analyst forecasts? We address this question using four incentivized experiments with 3,600 U.S. participants. Our results show that while investors update their return expectations in response to the forecast, they respond less when an analyst incorporates AI. This lower trust stems from the lower perceived credibility of AI-generated forecasts. We uncover other important nuances: women, Democrats, and investors with higher AI literacy respond more strongly to AI forecasts. In contrast, the complexity of the AI model reduces the likelihood of a return update. Additional manipulations show that forecast providers do not increase reactions to their content. Overall, our results challenge prevailing notions about the use of AI in financial decision-making.
Here's how it worked: Verdickt and Stradi split a group of Americans into three groups to see how much they trusted a stock market forecast from Goldman Sachs made purely by humans, one made purely by an “advanced AI model”, and one from “Goldman Sachs analysts using a sophisticated AI model”. Otherwise, the reports were identical.
The researchers then examined how the forecasts affected the participants' own expectations. And lo and behold: reports written or assisted by AI proved less influential than those written by humans.
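To make “less influential” concrete: studies like this typically measure how far participants move their stated return expectation towards the forecast they were shown, and whether that movement shrinks in the AI treatment arms. Below is a minimal sketch of such a belief-updating regression on simulated data; the variable names, effect sizes and the OLS-with-interactions specification are our own illustrative assumptions, not the paper's actual design.

```python
# A minimal sketch, NOT the paper's specification: a standard
# belief-updating regression. "Responding less" to an AI-labelled
# forecast shows up as negative coefficients on the interaction terms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3_600  # the experiments used roughly 3,600 U.S. participants

df = pd.DataFrame({
    # Treatment arms: pure-human (baseline), pure-AI, human + AI
    "arm": rng.choice(["human", "ai", "human_ai"], size=n),
    # Participant's prior expected return and the forecast shown (%)
    "prior": rng.normal(6, 4, size=n),
    "forecast": rng.normal(8, 2, size=n),
})
df["gap"] = df["forecast"] - df["prior"]

# Simulated behaviour: participants move ~60% of the way towards a
# human forecast, less towards AI-labelled ones (illustrative numbers)
weight = df["arm"].map({"human": 0.60, "ai": 0.35, "human_ai": 0.45})
df["update"] = weight * df["gap"] + rng.normal(0, 1, size=n)

# update_i = a + b*gap_i + c*(gap_i x arm_i) + e_i
model = smf.ols(
    "update ~ gap * C(arm, Treatment(reference='human'))", data=df
).fit()
print(model.params.round(2))
```

In this framing, a negative coefficient on the gap-times-AI-arm interaction is the statistical fingerprint of algorithm aversion: the same signal moves beliefs less once AI is mentioned.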
However, there were some interesting nuances, as the abstract suggests.
Specifically, women, people who identified politically as Democrats, and people more familiar with AI were more likely to update their own forecasts if those deviated significantly from the purportedly machine-written or machine-assisted report:
. . . On average, women are more likely to update their return expectations based on AI-generated forecasts, especially when their initial expectations are further off the mark. While the average investor moves away from signals from AI sources, female investors appear to update their expectations in line with the signals.
. . . Democrats are more likely to adjust their return expectations towards AI forecasts, especially when there is a larger gap between their prior expectations and the forecast. In other words, Democrats are more receptive to AI-generated forecasts.
. . . Finally, higher AI literacy is associated with a significantly larger update of return expectations in the direction of the signal when a human-machine forecast is received.
Many people's priors will probably be confirmed here.
It is also notable that people became more suspicious the more complex the method sounded. So ordinary least squares regression was more influential than deep learning techniques, or even than a “best linear unbiased estimator”, which is just a grander name for the same OLS.
In other words, if you want to use AI to do your sell-side research, don't announce it too loudly, and call the model something like “Ye Olde AdaBoost” or “Homespun Learning”.