Last Friday they fired Joan from HR. As they burned her access, she wasn't allowed to post.
The point of her trial, had the HumanOps AIs been asked to explain it, was the performance of the people she oversaw. They lost no work days during pandemics. They were never late when traffic jammed or bus drivers went on strike. Their productivity rose when they gave birth. Their bonuses and their bills rose in benevolent harmony.
There was no malfeasance proved or investigated, nor were her team's victories in customer satisfaction metrics taken into account. The cost anomaly by itself was enough, and statistical miracles weren't allowed. (The programs that had fired her could have been trained to inquire, with the phenomenology of curiosity and near-angelic insight, into the why and the how, and perhaps to stay their judgement in service of a higher-order utility function; but they hadn't been, so they did not.)
After Joan left, others left too, unshielded from randomness or finding that shelter had a cost they couldn't afford. Morale, had it been asked by someone to whom the truth was owed, fell. Effectiveness, as such things had been measured before, did as well. But the investor AIs noticed that the metrics they had been programmed to watch had been improved by the company AIs (as those had been designed to do), so they looked upon the company with favor, and all was right with the stock.
Hahahahahah!!! Brilliant, as always! And the worst part is that it is often true, even without AI supervision. Dilbert 2.0!