Visualization
The digital humanities emerged from text, and text-based work remains the predominant form of digital humanities scholarship. However, data visualization enabled by computational techniques, along with visual analysis and data exploration through visualizations, is increasingly employed in humanities research, and many digital humanities areas are consequently developing substantial visual components. Such components are evident in research in art and architectural history, museum studies, archaeology, and cultural heritage, but are gaining importance in other humanities areas as well. Beyond data visualization, other visual techniques now have humanities applications: image analysis, wherein patterns are detected in large collections of images; perception-based techniques, particularly important in analyzing architectural objects; and spatial modelling and 3D reconstruction of historical objects. Consequently, many digital humanities research questions depend on the utilization and production of graphical, visual information (Münster & Terras, 2020).
Visual analytics is a new discipline that integrates computer visualizations, human interaction with these visualizations, and data exploration, leveraging human cognitive abilities to gain new understandings and insights from the visualized data. Some recent work in the digital humanities has incorporated visual analytics for research into humanities questions. In recent work on visual data exploration, a visual analytics system was designed for navigating the content of large digital historical document corpora. The system incorporates new techniques in dynamic multilayer network visualization. It was designed to make these visualizations accessible to humanities scholars who are not experts in data analysis and visualization, but whose work can nevertheless benefit from visualizing and analyzing networks showing the linkages, for example, among different kinds of named entities appearing and co-appearing in the collections (Bornhofen & Düring, 2020). Other recent visual analytics applications to the digital humanities include interactive data exploration for researching historical linguistics (Butt & Beck, 2020), visualization of event-based networks to investigate connections between disparate entities and the resulting historical narratives (Filipov et al., 2021), and visual text analysis combined with statistical analysis for uncovering interesting knowledge associations and groups of common interests between researchers and practitioners working in different research areas (Benito-Santos & Therón Sánchez, 2020).
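The kind of entity co-occurrence network described above can be illustrated with a minimal sketch. The entities and documents here are hypothetical, and the counting scheme (one co-occurrence per unordered entity pair per document) is one common convention, not the specific method of the system cited:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_network(documents):
    """Build a weighted co-occurrence network: nodes are named entities,
    and an edge's weight counts the documents in which both entities appear."""
    edges = Counter()
    for entities in documents:
        # Each unordered pair of distinct entities in a document
        # contributes one co-occurrence to that pair's edge weight.
        for a, b in combinations(sorted(set(entities)), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical named entities extracted from three historical documents
docs = [
    {"Goethe", "Schiller", "Weimar"},
    {"Goethe", "Weimar"},
    {"Schiller", "Jena"},
]
network = cooccurrence_network(docs)
print(network[("Goethe", "Weimar")])  # co-appear in two documents -> 2
```

A visual analytics system would render such weighted edges graphically and let the scholar filter, zoom, and re-layer them interactively; the data structure above is only the underlying network.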
For advancing digital humanities scholarship, new directions may be taken in which innovative ideas are drawn from areas not normally associated with computation, such as postmodern philosophy. For instance, because of the increasing importance of data visualization in the digital humanities, including digital art history, literary studies, and digital cultural heritage, interactivity as a core feature of visualizations is becoming a focus area for humanities scholarship. Interactivity is especially relevant for the visualization of digital cultural heritage data, which consist of large, multidimensional, and heterogeneous datasets. Consequently, new lines of inquiry are being pursued for visualizing and interactively exploring these complex data. One direction draws on the thought of the French continental philosopher Gilles Deleuze (1925–1995), who proposed the idea of the “fold”, for incorporating the complexity of cultural heritage data into dynamic visualizations. Deleuze held that the world consists of the “actual” and the “virtual”, which are intertwined with and mutually dependent upon each other. Here, the “virtual” means the “ideal” in a Platonic sense, but existing in reality. These two components are related through “folds”. Transposing Deleuze’s sense of the virtual into the “digital”, where the virtual denotes computer-generated “reality”, the idea of the fold is proposed as a metaphor for digitally existing objects, in which only a small amount of the available information is perceptible at a time, while the whole “information space” is a present yet invisible reality (Brüggemann et al., 2020). The “fold” consists of the explication–implication–complication triad.
Explication is “unfolding”, akin to opening a book: the process of dividing something into parts or subsections in which hidden connections are uncovered. Implication is “folding”, analogous to closing the book: reducing size, complexity, or detail and, in effect, “hiding” something that was previously unfolded. Explication and implication are closely related, as new, unexpected results and connections can emerge from folding and unfolding. Complication takes place within human information processing and attempts to explain the accumulation of information and the connectedness of perceivable objects (Brüggemann et al., 2020).
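The explication/implication pair can be sketched in code as expand/collapse operations over an information space of which only part is visible at once. The class, data, and method names below are purely illustrative glosses on the metaphor, not an implementation from the cited work:

```python
class FoldedSpace:
    """A nested information space in which only unfolded parts are perceptible."""

    def __init__(self, data):
        self.data = data       # the full space: present, but mostly invisible
        self.unfolded = set()  # keys currently explicated (made visible)

    def explicate(self, key):
        """Unfold: make the detail under `key` perceptible."""
        self.unfolded.add(key)

    def implicate(self, key):
        """Fold: hide the detail under `key` again."""
        self.unfolded.discard(key)

    def visible(self):
        """What the viewer perceives: folded parts appear only as placeholders."""
        return {k: (v if k in self.unfolded else "...")
                for k, v in self.data.items()}

# Hypothetical cultural heritage collections
space = FoldedSpace({"paintings": ["A", "B"], "letters": ["C"]})
space.explicate("paintings")
print(space.visible())  # {'paintings': ['A', 'B'], 'letters': '...'}
```

In an interactive visualization, explicating and implicating would correspond to expanding and collapsing regions of the display while the whole dataset remains latently present.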
In data visualization, the concept of the fold was proposed as an approach for incorporating both interaction and encoding, that is, the visual representation of data. Encoding and interaction have generally been treated independently of each other, with the latter following the former, after decisions about the visual representation have been made. Although encoding is well understood, the interactive capabilities of a visualization still need to be made sense of. Adapting the idea of the fold, visualizations can be conceptualized through explication, implication, and complication, and interactive visualizations can be considered as “portals into coherent, elastic, and ultimately infinite information spaces” (Brüggemann et al., 2020). Deleuze’s ideas on the fold can be employed for developing a “critical framework for interactive data visualization consisting of operations, qualities, and questions for their design and interpretation” (Brüggemann et al., 2020). In this way, a modern philosophical idea can be adapted to provide a framework for studying, critiquing, and interpreting interfaces, and to enable new ways of thinking about data visualization in a humanistic way (Brüggemann et al., 2020).
New types of visualizations are being explored in the digital humanities, drawing from mathematical disciplines. Research in this field often requires the processing and analysis of linked data, and, consequently, techniques and algorithms from graph theory can be employed to solve some of the problems that arise with this type of data. Many types of graph visualizations, including network visualizations, are used to represent relationships, such as those studied in conjunction with social media or influences among novelists or artists. As a result, research in graph visualization continues with the specific goal of benefiting digital humanities work. For instance, circular graphs have recently garnered interest among digital humanists because of their potential to comprehensively model and visualize interconnected data. Software has been developed to facilitate assessment of this method, with results demonstrated on real-world examples from digital humanities applications (Ryabinin et al., 2020).
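The basic geometry behind a circular graph visualization is simple: nodes are placed evenly on a circle and relationships are drawn as chords between them. The sketch below computes such a layout; the node names are hypothetical, and real tools (and the software cited above) add interaction, edge bundling, and styling on top of this:

```python
import math

def circular_layout(nodes, radius=1.0):
    """Place nodes evenly on a circle of the given radius.
    Edges between nodes can then be drawn as chords of the circle."""
    n = len(nodes)
    return {node: (radius * math.cos(2 * math.pi * i / n),
                   radius * math.sin(2 * math.pi * i / n))
            for i, node in enumerate(nodes)}

# Hypothetical influence network among four novelists
positions = circular_layout(["Austen", "Eliot", "Woolf", "Joyce"])
```

Because every node sits on the same circle, no node is privileged by its position, and dense interconnections remain legible as chords crossing the interior, which is one reason circular layouts suit highly interconnected humanities data.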
A web portal is a website that combines and provides a consistent representation of information obtained from diverse web sources, including online forums and user comments. A semantic portal is a specific type of web portal that is navigated through semantic relationships, or associations that exist between data. Consequently, linked data are particularly relevant for semantic portals. For cultural heritage, early semantic portals provided data aggregation and search and browsing functions. Currently, integrated, interactive software tools are available on these portals. Some researchers have proposed that the “third generation” of semantic portals for cultural heritage will feature artificial intelligence to solve research problems algorithmically, under human guidance, although realizing this vision poses both humanistic and technical challenges. A key feature of this third generation, especially important for humanities questions and discussed below, is that these systems provide transparent justification and rationalization of the procedure, or “reasoning”, taken by the algorithms. In other words, researchers need to understand how an AI algorithm arrived at a particular conclusion (Hyvönen, 2020).
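Navigation through semantic relationships can be illustrated with linked data in its simplest form: subject–predicate–object triples, where browsing means following a predicate from one resource to the next. The triples and predicate names below are illustrative, not drawn from any particular portal:

```python
# A tiny linked-data store as subject-predicate-object triples
# (illustrative cultural heritage facts)
triples = [
    ("Rembrandt", "created", "The Night Watch"),
    ("The Night Watch", "locatedIn", "Rijksmuseum"),
    ("Rembrandt", "bornIn", "Leiden"),
]

def related(subject, predicate):
    """Follow one semantic association: all objects linked to
    `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Browsing by association: from the artist to a work, then to its museum
work = related("Rembrandt", "created")[0]
print(related(work, "locatedIn"))  # ['Rijksmuseum']
```

A production semantic portal would store such triples in an RDF database and query them with a language like SPARQL, but the navigational principle, chaining associations between entities, is the same.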