So in that sense there is nothing new or surprising here.
But: I do think there's a potential "difference in degree becomes a difference in kind" situation here. Having humans in the loop still provides some ethical friction. In the Milgram experiments, degrees of compliance varied with both the degree of pain inflicted and the justification / authority offered for inflicting it.
But automated agents need not have any such friction. So there's potential for a huge increase in how easy it is to both carry out, and justify to yourself, the delegation of unethical actions.