Critique of Machine Learning

Another important criticism leveled at the algorithms underlying technologies employed by the digital humanities is that they are not only opaque to the user, but also contain their own hidden presuppositions and biases that are not evident to humanities scholars. Such non-transparency may have a deleterious effect on the results these algorithms produce. This is the case with “digital readings”, or computational approaches to literary criticism, which were originally intended to correct some of the problematic aspects of traditional reading practices (Dobson, 2015). The enthusiasm with which these tools have been adopted in some quarters obscures the inner workings of the computational methods, which therefore become a “black box”. These techniques make it possible to analyze vast archives, in a shift from what digital humanities researcher Matthew Jockers calls microanalysis (analogous to close reading) to macroanalysis (analogous to distant reading) (Dobson, 2015). One of the purposes of the technological and algorithmic turn in the humanities was to produce more accurate and objective knowledge, free of subjective interpretation and of cultural and political biases.

Fundamental to this enterprise are machine learning algorithms, which will be discussed and developed further in a later course. For the current discussion, it suffices to distinguish supervised from unsupervised algorithms. Supervised machine learning approaches were designed for categorization tasks; the data might, for instance, consist of groupings of variable-length words. The algorithm is supervised in the sense that it is “trained” on one partition of the available data, known as the training set, which is presented to the algorithm along with its corresponding labels so that it “learns” the data; in other words, it learns to label the data properly according to the established categories. The resulting model is then tested on another partition of the data, the testing set. The result is a model that, when presented with unknown data, labels those data with the pre-specified categories chosen by the researcher. It is therefore clear that supervised approaches are to a large degree subjective: the categories into which data are placed are chosen by the researcher, as are the training (and testing) data themselves and the labels attached to members of the training set.

By contrast, unsupervised algorithms determine the labels and the categorization model from the data alone, without human guidance. Training data are not supplied with labels and are not pre-organized into categories. Unsupervised learning is therefore presumably free of the researcher’s subjective influences. However, purely unsupervised learning is not possible (Dobson, 2015). Although the data are unlabeled and unorganized, researchers still select the original data on which the algorithm “trains itself”. Data can therefore never be analyzed completely disentangled from subjective intent. The result is that the “black box” of unsupervised machine learning still bears the subjective and biased traces of the model developer. For the digital humanist adopting machine learning tools, including unsupervised learning algorithms, this means that it is not clear what happens to the data used as input, or how those data are transformed by machine learning approaches.
The “black box” nature of these techniques also makes the process of finding and correcting errors much more difficult (Dobson, 2015).
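To make the supervised/unsupervised distinction above concrete, the following is a minimal sketch in Python, assuming the scikit-learn library is available. The tiny corpus, the “gothic” and “pastoral” labels, and the choice of two clusters are hypothetical; they stand in for the researcher’s decisions discussed above, and they mark exactly the points at which subjectivity enters both approaches.

# A minimal, illustrative sketch (assuming scikit-learn); the corpus, the labels,
# and the number of clusters are hypothetical researcher choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import KMeans

# Hypothetical documents and researcher-chosen category labels.
docs = [
    "ruined abbey midnight storm ghost",
    "shepherd meadow spring lambs brook",
    "castle crypt thunder spectre gloom",
    "harvest orchard sunshine village dance",
]
labels = ["gothic", "pastoral", "gothic", "pastoral"]

# The researcher also chooses how the texts are represented numerically.
X = TfidfVectorizer().fit_transform(docs)

# Supervised: train on a labelled partition (training set), test on the rest.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0)
model = MultinomialNB().fit(X_train, y_train)       # "learns" the labelled data
print("predicted labels:", model.predict(X_test))   # applies the chosen categories

# Unsupervised: no labels are supplied, but the corpus and the number of
# clusters (k = 2) are still decided by the researcher.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters)

Even in the unsupervised branch, the selection of the documents and of the number of clusters reflects the developer’s intent, which is the sense in which Dobson (2015) argues that purely unsupervised learning is not possible.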

License

Contemporary Digital Humanities Copyright © 2022 by Mark P. Wachowiak is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.
