Introduction To Computation

This section provides a brief introduction to computer science that will be helpful in the study of the digital humanities.  When one thinks of “computer science”, one normally associates it with the study of computers in general.  Such a definition is very broad and includes the study of how to write computer programs, or instructions that are executed by the computer to solve a specific problem or to perform a specific task.  Computer science may also include the study of the uses and applications of computers and software.  However, an important characteristic that must be included in any definition of computer science is that it is the study of algorithms (Schneider & Gersting, 2018).  Algorithms, specifically algorithms used in the digital humanities, will be discussed in subsequent sections.  For now, it is sufficient to state that the study of algorithms includes their mathematical properties, their hardware realizations (that is, how algorithms are related to the electronics and circuitry in computers, and how computer hardware implements algorithms), and their linguistic realizations in software; in other words, implementations of algorithms as code in the form of programs written in various programming languages.

 

Computer science is also the study of the applications of algorithms.  Informally, an algorithm can be considered an ordered sequence of instructions that is guaranteed to solve a specific problem.  The word “algorithm” derives from the name of the Persian mathematician Al-Khowarizmi.  There are three main operations in the construction of algorithms:

 

  1. Sequential operations, which execute consecutively, one at a time, in sequence;
  2. Conditional operations, in the form of if (something is true) then (do something); and
  3. Iterative operations (looping operations), or operations that are repeated a certain number of times (execute in a loop), or until a condition is satisfied.
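The three operations can be illustrated with a short sketch in Python.  The list of numbers and the threshold are hypothetical values chosen only for illustration.

```python
numbers = [4, 7, 1, 9, 3]   # hypothetical input data

# 1. Sequential operations: statements execute one at a time, in order.
total = 0
count = 0

# 3. Iterative operation: the loop body repeats once for each item.
for n in numbers:
    # 2. Conditional operation: if (something is true) then (do something).
    if n > 3:
        total += n
        count += 1

print(total, count)  # prints: 20 3
```

Every algorithm, however complex, is built by combining these three kinds of operations.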

 

An example will help to illustrate the concepts of an algorithm (Schneider & Gersting, 2018).

 

Suppose that a user wants to program a digital video recorder (DVR).  The concept of “programming” for this DVR is analogous to programming a computer.  The steps required to “program” the DVR constitute an algorithm.  An example of an algorithm for this application may be stated as follows:

 

  1. If the clock and the calendar are not correctly set, the user should refer to a specific page of the instruction manual and follow the instructions before proceeding.
  2. The user repeats Steps 2a – 2d for each program that is to be recorded.
    • a. Enter the channel number that is to be recorded and press the button labeled CHAN.
    • b. Enter the time that the recording is to start and press the button labeled TIME START.
    • c. Enter the time that the recording is to stop and press the button labeled TIME FINISH, completing the programming of one recording.
    • d. If the user does not want to record anything else, then the user should press the button labeled END PROG.
  3. Finally, the DVR is turned off. The DVR is now in timer mode and is ready to record again.
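The steps above can be sketched in Python.  The `program_dvr` function and its inputs are hypothetical stand-ins for the DVR’s buttons and display; the sketch only mirrors the structure of the algorithm, not a real device interface.

```python
def program_dvr(clock_is_set, requests):
    """Sketch of the DVR algorithm.

    requests: a list of (channel, start_time, stop_time) tuples,
    one per program the user wants to record.
    """
    # Step 1: conditional operation -- the clock and calendar must be set.
    if not clock_is_set:
        raise RuntimeError("Set the clock and calendar first (see manual).")

    schedule = []
    # Step 2: iterative operation -- repeat Steps 2a-2c for each program.
    for channel, start, stop in requests:
        schedule.append({"channel": channel,  # 2a: CHAN
                         "start": start,      # 2b: TIME START
                         "stop": stop})       # 2c: TIME FINISH
    # Step 2d / Step 3: no more programs -- end programming, timer mode.
    return schedule

recordings = program_dvr(True, [(5, "19:00", "20:00"),
                                (11, "21:30", "22:00")])
print(len(recordings))  # prints: 2
```

Note how the loop plays the role of Steps 2a – 2d, and the initial check plays the role of Step 1.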

 

The preceding was an example of an algorithm to program a DVR.  Steps 1 – 3 are sequential operations, as they are executed consecutively, in sequence.  Step 1 is a conditional operation: if the clock and calendar are not correctly set, then the user must refer to the instruction manual and set them before proceeding.  Step 2d is also a conditional operation, as the user determines whether another program is to be recorded.  Steps 2a – 2d are iterative, as they run repeatedly for each program that is to be recorded on the DVR.  The loop continues until, in Step 2d, the user indicates that no further programs are to be recorded.

 

Algorithms are crucial in computer science because if an algorithm to solve a problem can be specified, or formulated, then its solution can be automated.  The automation occurs through some computing agent, which may be a machine (such as a computer), a robot, a person, or any other agent that can carry out the algorithm’s steps.  There are, however, some unsolved problems.  Such problems, occurring very often in mathematics, science, and computer science, do not have well-defined solutions.  Some problems are unsolvable, some solutions are prohibitively slow (more time is required to solve the problem than can be budgeted for that purpose), and some solutions are not yet known.  Consequently, there are problems that cannot, at least currently, be solved algorithmically.

 

The formal definition of an algorithm is as follows: an algorithm is a well-ordered collection of unambiguous and effectively computable operations that, when executed, produces a result and halts, or stops, in a finite amount of time.  In other words, the algorithm returns a result and does not continue indefinitely.  Each component of this definition will now be considered in more detail (Schneider & Gersting, 2018).

 

The first aspect is the concept of a well-ordered collection.  Upon completion of an operation, it is always known which operation to perform next.  The next issue deals with ambiguous statements.  For instance, an algorithm may specify to “go back” and perform the steps again.  A question then arises as to which steps are to be performed again.  Another ambiguous statement may specify that an algorithm “start over”, in which case the question becomes from where the algorithm should start over.  Algorithms consist of unambiguous operations, also known as primitive operations.

 

An unambiguous operation can be understood by the computing agent without having to be further defined or simplified.  However, it is insufficient for an operation to simply be understandable.  The operation must also be able to be accomplished; in other words, it must be doable.  This is what is meant by the term “effectively computable”.  In addition, the result of an algorithm must be produced after the execution of a finite number of operations.  In other words, the algorithm cannot continue indefinitely: at some point in its operation, it must halt, or stop, after a finite number of operations.  The result returned by the algorithm may be any type of data, or any type of information that can be represented in the computer or by the computing agent.  Results may be in the form of numbers (integers or non-integer, floating point numbers), text, graphics, audio, or even a change in the computing agent’s environment.  (Integers are whole numbers, such as 0, 1, 100, 9845, -12005, etc.  Floating point numbers are numbers that have a fractional component, or that have non-zero values to the right of the decimal place, such as 3.2, 88.44, -8.7, or 0.00001.)  Some algorithms have what is known as an infinite loop, in which the algorithm will theoretically run without halting, or, in other words, will theoretically run “forever”.  Infinite loops are usually the result of an error in the specification or implementation of the algorithm, or the result of a programming error (Schneider & Gersting, 2018).
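The distinction between halting and running forever can be made concrete with a small Python sketch.  The `countdown` function is a hypothetical example, not drawn from the source.

```python
def countdown(n):
    """Halts for any non-negative n: the loop variable moves
    toward the stopping condition on every iteration."""
    while n > 0:      # this condition eventually becomes false...
        n -= 1        # ...because n decreases each time through the loop
    return "done"     # the algorithm produces a result and halts

print(countdown(5))   # prints: done

# A one-character programming error can break the halting guarantee:
#
#     while n > 0:
#         n += 1    # n grows instead of shrinking, the condition never
#                   # becomes false, and the loop runs "forever"
```

The broken variant in the comment is exactly the kind of infinite loop described above: the specification (count down to zero) is sound, but the implementation never approaches the stopping condition.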


License

Contemporary Digital Humanities Copyright © 2022 by Mark P. Wachowiak is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.