But I can ask the decision maker to explain his decision-making process, or the arguments and beliefs that led to his conclusion. So, kinda debuggable?
Their answer to your question is just the output of another black-box neural net! Its output may or may not have much to do with the original decision, but it can produce words that will trick you into thinking they are related! Scary stuff. I’ll take the computer any day of the week.
No. In most cases (if the "thumbing of the scale" was small and not blatant) they can lie and generate a plausible argument that doesn't involve the actual factor that determined their decision. And small, specific details don't need to match exactly across cases, since nobody expects perfect recall or perfect consistency from humans.
If anything, the neural network is more debuggable, since you can verify that the decision process you're analyzing (even if complex and hard to understand) was the one actually used for this decision and the same one applied to all the other decisions.
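As a rough illustration of that point, here's a minimal sketch (hypothetical names, assuming a PyTorch model with a scalar score output): log a hash of the weights and the exact input alongside each decision, and an auditor can later re-run the identical computation and confirm the same process produced every decision.

    import hashlib
    import torch

    def weights_fingerprint(model: torch.nn.Module) -> str:
        """Hash the parameters so we can prove exactly which model made a decision."""
        h = hashlib.sha256()
        for name, param in sorted(model.state_dict().items()):
            h.update(name.encode())
            h.update(param.detach().cpu().numpy().tobytes())
        return h.hexdigest()

    def audited_decision(model: torch.nn.Module, x: torch.Tensor) -> dict:
        """Return the decision plus everything needed to reproduce it later."""
        model.eval()
        with torch.no_grad():
            score = model(x)
        return {
            "decision": bool(score.item() > 0.5),  # illustrative threshold
            "model_hash": weights_fingerprint(model),
            "input": x.tolist(),  # log the exact input that was scored
        }

    # Stand-in for the real decision model; any nn.Module with a scalar output works here.
    model = torch.nn.Linear(4, 1)
    record = audited_decision(model, torch.randn(4))
    # Re-running audited_decision with the logged weights and input reproduces
    # the same record -- unlike asking a human to reconstruct their reasoning
    # after the fact.

Nothing fancy, but it shows the asymmetry: the machine's process can be frozen, replayed, and compared across cases; a human's can only be retold.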
Debuggable and explainable AI is necessary but not sufficient. The societal implications and questions are profound and may be even harder to solve (see other comments in this thread).