The general idea is that there could be a utilitarian argument for substituting robots for soldiers on the ground. The reasoning is that war causes casualties and if casualties can be avoided, they should be. Thus, sending “bots” into war instead of human beings is justified.
As a just war theorist, I find utilitarianism troubling in general, and I find this line of reasoning problematic in particular. The problem is that a robot cannot, by definition, have human reasoning skills; if it does, as the post above notes, it is human. What makes us, as humans, distinct from the objects around us is our ability to reason; if a thing gains these abilities, it is human…
So, if we give non-reasoning objects the ability to wage close-combat wars, we are sending machines into a situation that may require human reasoning… and in instances where that reasoning is most crucial, the implications of failure are horrific.
This is leaving aside the very real technical barriers to creating such bots in the first place. At the International Society of Military Ethics conference in 2011, this problem was the topic of many papers — you can find some of them online here.
There were several panels concerning either remotely operated ways of fighting wars or autonomous robots in combat situations. The technical problem with using drones is the lag time between the operator and the drone: it is short, but significant in combat, where fractions of a second can make the difference between shooting an unarmed child and shooting someone who is pointing a gun at you.
The problem with autonomous robots is that it’s nearly impossible to write a computer program that implements just war principles with a level of accuracy similar to that of a human being. In other words, they can’t make the robot human enough to trust it in combat.
This brings up a challenge to the idea that warbots are justified by utilitarianism. If the only persons counted in the calculation of utility are the soldiers, their families, and their country, then it seems reasonable to conclude that utilitarianism supports warbots. But if the calculation (rightly) includes all of the persons impacted by the decision, including the people the warbots shoot at, and accounts for the substantial risk that countries with warbots will enter wars they wouldn't enter if they had to send human soldiers, then the utilitarian verdict is far less favorable to warbots.