Why I rewrote it this way and where I think the show went terribly wrong:
I “fixed” the robot’s sudden flashes of conscience and the Doctor’s intention to destroy: I made one more organized and programmed, and the other more chaotic and human. The latter especially matters, I think, because at first the Doctor was all like “no, you kill to survive, you’re not a murderer” (read: “I am trying to make a bloody murder machine look more human”), and then he was like “I’m going to have to kill you” (as in: “Oh you know what? Idgaf. I guess imma be a murderer too”). That’s not ok.
The point I’m trying to make here:
the Doctor would see more humanity in a machine than there ever had been. In a sense, we are the Doctor here, desperately wishing for the machine to stop killing and feel some remorse. But it can’t. It only checks the facts and doesn’t step outside its program. It checks whether the Doctor’s speech about the broom is true or false. The Doctor, on the other hand, starts doubting his own humanity at that point. If killing another being was driven only by the need to destroy a dangerous opponent, and afterwards brought no guilt, no sense that something wrong had been done, then neither of them was showing a bit of humanity. And in the end, the only thing that makes a difference is that one feels something while the other doesn’t.