When we look at our mirror image, we see ourselves as others see us. "Citizen Mirror" reflects the viewer as data. This mirror reads personal information such as age, gender, mood, and other supposedly meaningful attributes from the human face, reducing the viewer to a box of superficial information.
The data box created by this machine has little to do with the self-image we hold and hardly represents reality. One's own face is exposed not only to other people but increasingly to machines that have learned to categorise the masses - but according to which models?
Machine learning algorithms are already being used to monitor and control people. Power over these systems lies in the hands of the corporations that collect the most data about the world. The systems are trained on categorised data, and the labelling of that data is done by people, detached from any public discourse. The later decisions of the trained systems can no longer be traced back to these sources: the values passed on hide behind the supposed infallibility of the machine and go unquestioned.
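The mechanism described above can be made concrete with a minimal sketch. Everything here is invented for illustration: the feature vectors, the category names, and the toy nearest-neighbour "classifier" stand in for the far larger systems the text refers to. The point is only that the human-assigned labels flow unchanged into the machine's output, while their origin disappears from view.

```python
# Toy sketch: human-chosen labels become the machine's "truth".
# All features and category names below are hypothetical.
import math

# Feature vectors a face-analysis system might hypothetically extract.
training_features = [(0.2, 0.9), (0.3, 0.8), (0.7, 0.1), (0.8, 0.2)]

# Labels assigned by human annotators. Their worldview enters the
# system here, before any "learning" takes place.
training_labels = ["category_A", "category_A", "category_B", "category_B"]

def classify(face):
    """1-nearest-neighbour: return the annotator label of the closest face."""
    distances = [math.dist(face, f) for f in training_features]
    return training_labels[distances.index(min(distances))]

# The output reads like a fact about the viewer, but it is only the
# nearest annotator-assigned label; who chose it, and why, is invisible.
print(classify((0.25, 0.85)))  # → category_A
```

Even in this tiny example, the viewer standing in front of the mirror sees only "category_A", never the four labelling decisions behind it.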
The question arises whether we must simply watch as the world views of the privileged are manifested and reproduced in an "infallible" system, or whether it is possible to make the sources transparent and to embed human diversity and non-discriminatory values in these systems, for a fairer world.