So much seems to hinge on the definitions of words that in practice carry a diversity of meanings.
The following conclusion I think is consistent with @sugato’s analysis:
- A statement such as "it is a fact that neural nets produce racist and sexist outcomes" can be understood as coming from an analysis that is driven 100% by desire, one that always reflects the minds of the people behind it.
Which raises a question in my mind: how far is it useful and productive to take such an analysis?
I think it is useful to consider how we might test for validity, accuracy, or skill. (The term "skill" in this case being a term of art, for instance, for how well a hurricane model predicts the path and strength of a storm.)
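To make the term-of-art concrete, here is a minimal sketch of one common form of skill score: how much a model's error improves on some naive reference forecast. The numbers, the mean-squared-error choice, and the function names are all my own illustrative assumptions, not any particular agency's definition.

```python
# Sketch of a forecast skill score: 1 - (model error / reference error).
# All data below is made up for illustration.

def mse(pred, obs):
    """Mean squared error of a list of predictions against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_pred, reference_pred, obs):
    """1.0 = perfect; 0.0 = no better than the reference; < 0 = worse."""
    return 1.0 - mse(model_pred, obs) / mse(reference_pred, obs)

obs = [10.0, 12.0, 14.0, 16.0]        # observed values (hypothetical)
reference = [13.0] * 4                # naive reference: forecast the mean every time
model = [10.5, 12.5, 13.5, 15.5]      # the model being validated

print(skill_score(model, reference, obs))
```

A score near 1 means the model adds a lot over the naive baseline; a score near 0 means it adds essentially nothing, which is exactly the kind of fact one would want published about a deployed model.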
Consider the case of this article: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing – "There's software used across the country to predict future criminals. And it's biased against blacks."
Propublica.org's claim that this risk assessment model was biased is based on what many would call "fairly objective" measures of accuracy. But that analysis is arguably also "driven 100% by desire". And so on and so on.
I would instead emphasize that it is unknown whether the risk assessment model in question was validated, how well it was validated, and where the data came from. In other words, I would like to see more analysis of the kind propublica.org published, and more transparency in the process. The value of a practice of openness and transparency is, IMO, the more useful and important lesson to be learned from this.
One also hopes that the people using the software took into account that its predictions were accurate only about 60% of the time.
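To show why overall accuracy alone is not enough, here is a minimal sketch (not ProPublica's actual methodology or data) of two checks one would want to see published: overall accuracy and the false positive rate broken out by group. Every record below is invented for illustration; the point is only that a model can hit a given headline accuracy while its errors fall very differently on different groups.

```python
# Hypothetical records: (group, predicted_high_risk, actually_reoffended).
# All data is made up to illustrate the two checks.
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 0),
]

def accuracy(predicted, actual):
    """Fraction of predictions that matched the outcome."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def false_positive_rate(predicted, actual):
    """Among people who did NOT reoffend, the fraction flagged high-risk."""
    flags = [p for p, a in zip(predicted, actual) if a == 0]
    return sum(flags) / len(flags) if flags else 0.0

preds = [p for _, p, _ in records]
actuals = [a for _, _, a in records]
print(f"overall accuracy: {accuracy(preds, actuals):.0%}")

for group in ("A", "B"):
    g_preds = [p for g, p, _ in records if g == group]
    g_actuals = [a for g, _, a in records if g == group]
    print(f"group {group} false positive rate: "
          f"{false_positive_rate(g_preds, g_actuals):.0%}")
```

In this toy data the overall accuracy is a tolerable-sounding 60%, yet group A's false positive rate is twice group B's. Both numbers, plus where the data came from, are what a transparent validation would disclose.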
I would even suggest that openness and transparency are consistent with – if not suggested or implied by – Right Speech, Right View, and Right Effort.