Predictions made by algorithms often have built-in biases and are unfair to the individual.
Models of society are used to build algorithms that make predictions about the future. Those predictive algorithms are then treated like gods, when in fact they are as fallible as the people who created them. Victims of injustice caused by these algorithms aren't seen as people by those enforcing the decision, and the enforcers don't have to feel bad because "the system" told them this was the best option. We wind up with a cold machine hurting people on the output end, far removed from the engineers who designed it, and operated by people who see it as all-knowing.
A way to minimize these sorts of injustices is to figure out how closely the metric we optimize actually tracks the real definition of success, much like Good Metrics.
Example
An algorithm that implicitly factored in race when estimating the likelihood of recidivism for a person about to be sentenced for a crime. It would routinely score Black and Latino men from poor, heavily policed neighborhoods as higher risk, because those neighborhoods have more recorded crime and therefore (the algorithm infers) these men are more likely to commit a crime when they get out.
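A minimal sketch of that mechanism, with invented feature names and weights (this is not any real sentencing model): race never appears as an input, but a neighborhood feature that correlates with race carries it into the score anyway, so two people with identical records get very different risk ratings.

```python
# Hypothetical "race-blind" risk score. Features and weights are made up
# for illustration; the point is that the proxy does the discriminating.

from dataclasses import dataclass


@dataclass
class Defendant:
    prior_convictions: int
    neighborhood_crime_rate: float  # arrests per 1,000 residents -- a proxy
                                    # that correlates with race in segregated,
                                    # heavily policed areas


def risk_score(d: Defendant) -> float:
    """Toy recidivism score: no race field anywhere,
    yet the neighborhood term smuggles it in."""
    return 0.4 * d.prior_convictions + 0.05 * d.neighborhood_crime_rate


# Two defendants with identical criminal histories:
suburban = Defendant(prior_convictions=1, neighborhood_crime_rate=8.0)
inner_city = Defendant(prior_convictions=1, neighborhood_crime_rate=60.0)

print(risk_score(suburban))    # ~0.8 -> likely rated "low risk"
print(risk_score(inner_city))  # ~3.4 -> likely rated "high risk", same record
```

The model looks neutral on paper, which is exactly what lets its operators treat the output as objective.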