Brief History of Computing

Algorithmic problem solving is central to computer science.  In the industrial revolution of the 1800s, repetitive physical tasks were mechanized and automated.  In the “computer revolution” of the 20th and 21st centuries, repetitive mental tasks have been mechanized through algorithms and computer hardware.  This section presents a brief history of computing, starting with the early, “pre-computer” period, which ends around the year 1940.

 

In the 17th century, arithmetic for scientific research began to be automated and simplified.  Scottish mathematician John Napier invented logarithms in 1614 to simplify mathematical computations.  The logarithm of a number is the exponent to which a base must be raised to obtain that number.  For instance, with base 10 (the commonly used decimal system), log(1000) = 3, indicating that 10³ = 1000.  Sometimes, the base is explicitly stated in the mathematical notation; e.g. log₁₀(1000) = 3.  Also, log₁₀(1) = 0, since 10⁰ = 1.  In computer science, the common base is 2, as computer hardware uses binary, or base 2, arithmetic.  That is, binary digits (bits) can only have the value 0 or 1.  For example, log₂(8) = 3 because 2³ = 8, and log₂(256) = 8 because 2⁸ = 256.  As is the case with any base, log₂(1) = 0.  In mathematics and computer science, log₂ is sometimes written as lg; i.e. lg(256) = 8.
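
As a small illustrative aside (added here for clarity, and not part of the historical account), these logarithm relationships can be checked in a few lines of Python using the standard math module:

    import math

    # Verify the base 2 examples from the text:
    # log2(8) = 3 because 2**3 = 8, and log2(256) = 8 because 2**8 = 256.
    print(math.log2(8))      # 3.0
    print(math.log2(256))    # 8.0
    print(math.log2(1))      # 0.0, since 2**0 = 1

    # In general, the logarithm of x in base b is the exponent e with b**e == x.
    b, x = 10, 1000
    e = math.log(x, b)
    print(e, b ** e)         # approximately 3.0 and approximately 1000.0

The last two printed values are only approximately 3 and 1000 because floating-point arithmetic introduces tiny rounding errors; the underlying relationship is exact.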

 

Logarithms led to the invention of the first slide rule, which appeared around 1622.  The French philosopher, scientist, and mathematician Blaise Pascal designed and built a mechanical calculator, the Pascaline, in 1642, and the German mathematician Gottfried Leibniz constructed a mechanical calculator called the Leibniz Wheel in 1674.  None of these devices can properly be considered computers as the term is understood today.  The 17th century devices just discussed could represent numbers and perform arithmetic operations on those numbers.  However, they had substantial limitations.  They were not able to store information; in other words, these devices did not have what is now known as “memory” or storage capability.  Very importantly, these devices were not programmable, meaning that a user could not provide a sequence of operations to be executed by the device (Schneider & Gersting, 2018).

 

In 1801, French weaver Joseph Marie Jacquard designed an automated loom that used punched cards to create patterns.  In the 1880s, Herman Hollerith designed a programmable card processing machine to read, tally, and sort data on punched cards for the United States Census Bureau.  Hollerith founded the company that, in 1924, became known as International Business Machines, or IBM.

English mathematician Charles Babbage designed a device known as the “difference engine” in 1823, and a working model was built.  The difference engine could perform the basic arithmetic operations – addition, subtraction, multiplication, and division – to six significant digits, and in that sense it was a quite powerful computational device for its time.  It could also solve polynomial equations and other complex mathematical problems.  Babbage also worked on another device, known as the analytical engine, which was designed but never built.  The analytical engine was to be a mechanical programmable machine similar in operation to a modern computer.  Computing pioneer Ada Augusta Byron (after whom the Ada programming language was named) worked with Babbage on instructions (“codes” in today’s terminology) for solving mathematical problems.  It is important to note that these 19th century devices were mechanical and not powered electrically.  They did, however, have many features that are commonly associated with modern computers.  For instance, they could represent numbers or other data, and could perform operations to manipulate that data.  They also had a form of memory to store values in a machine-readable format; that is, in a form that could be interpreted by the mechanical device.  Additionally, and very importantly, they were programmable, in the sense that sequences of instructions could be predesigned for complex operations (Schneider & Gersting, 2018).

 

The era of the 1940s and 1950s was the era of the birth of computers, as that term is currently used.  The MARK-I computer was designed and developed in 1944.  This machine was an electromechanical computer that used a mix of relays, magnets, and gears to process and store data.  The year 1943 saw the development of the Colossus machine at Bletchley Park, where computer scientist and mathematician Alan Turing was a central figure in cryptanalysis; Colossus was built for the British code-breaking effort to decode encrypted German messages during World War II.  The Electronic Numerical Integrator and Calculator, or ENIAC, was developed in 1946.  The ENIAC is generally considered to be the first publicly known, fully electronic computer.  A very important computer pioneer is computer scientist and U.S. Navy rear admiral Grace Murray Hopper (1906 – 1992).  Hopper was a programmer on the MARK-I, MARK-II, and MARK-III systems, as well as on UNIVAC-I, the first large-scale commercial electronic computer (Schneider & Gersting, 2018).  In these early years, military-related calculations were the main applications; for example, these machines were employed in computations of missile trajectories and in message encryption and decryption (decoding) (Gaffield, 2016).

 

In 1949, Hopper began work on the first compiler, a program that translates symbolic code written by humans into machine code, the low-level instructions that a computer’s hardware can execute directly.  The first compiler was known as A-0.  Hopper was also instrumental in the development of the COBOL programming language, described below.  She is also credited with popularizing the term bug in computing, a term for “computer errors”: errors in algorithms or in programs.  She is associated with the term because she and her team discovered the first literal computer bug on September 9th, 1947, while working on a prototype of the MARK-II computer.  The “bug” was a real bug:  a moth that had caused a relay failure.  The moth was taped to the dated page of the team’s log book, where it is still preserved.

 

Another very important pioneer of the early years of computer science is John von Neumann, a Hungarian-American mathematician and computer scientist.  Von Neumann proposed a radically different computer design based on a stored program model.  He led a research group at the University of Pennsylvania in which one of the first stored program computers, called EDVAC, was built; it became operational in 1951.  The UNIVAC-I computer discussed previously was based on the EDVAC design and was the first computer to be sold commercially.  Virtually all modern computers still employ what became known as the von Neumann architecture (Schneider & Gersting, 2018).

 

The years 1950 to 1957 usually delineate what is known as the first generation of computing.  In this era, vacuum tubes were used for processing and storage.  These computers were very large, extremely expensive, and fragile, requiring a great deal of infrastructure and maintenance.  They also required highly trained users and special environments, with rigorous specifications, in which the computers could be housed and operated (Schneider & Gersting, 2018).

 

The second generation of computing generally spans the years 1957 to 1965.  This was the era in which transistors and magnetic cores were employed instead of the vacuum tubes used in the first generation.  The second generation was also the era of high-level programming languages, which enabled developers and software designers to write computer code in languages that are human readable, instead of exclusively in machine code understandable only by experts and by the machine hardware itself.  Two of the early high-level languages were FORTRAN and COBOL.  FORTRAN stands for Formula Translation, or the Formula Translator.  It was a high-level language for scientific and engineering applications.  Although it was developed in the late 1950s, FORTRAN remains a preferred language for scientific and engineering work, and it is also a very high-performance computing language, meaning that the machine code generated by a FORTRAN compiler runs very efficiently on computer hardware.  The other major high-level language developed during this period was COBOL, which stands for the Common Business Oriented Language.  As its name suggests, COBOL was used for business applications.  It was much more word-intensive (or “wordy”) than FORTRAN and, to non-scientists, more human readable.  It required considerable effort to program, but it was efficient and widely used for business software.  Millions of lines of COBOL code are still in operation to this day and must be maintained by current software developers (Schneider & Gersting, 2018).

 

The third generation of computing spans the years 1965 to 1975.  This was the era of the integrated circuit, and it was also the generation that saw the first minicomputer, a desk-sized computer that does not require a full room to house it.  The third generation also saw the birth of the software industry.  Large-scale applications for business, data processing, and scientific work, such as simulation, modeling, advanced numerical computation, and computer-enabled spacecraft, were prevalent during this period (Gaffield, 2016).

 

In the 1960s, mainframe computers came into prominence.  A mainframe computer, or simply a mainframe, is a large system used primarily for data processing and large-scale transaction processing.  Mainframes can handle very high volumes of input and output.  International Business Machines (IBM) and Unisys are two examples of mainframe manufacturers.  Mainframes support interactive, keyboard- and screen-based user terminals, and hundreds of users can perform tasks on the system simultaneously.  Mainframes are typically smaller and less computationally powerful than supercomputers, but generally have greater processing power than minicomputers (described below) and personal computers (PCs).  Because mainframe computers have a large market for business applications, COBOL is a popular language for these systems.  Java, C, C++, and assembly languages are also widely used on mainframes.

 

Minicomputers are also large computer systems developed in the 1960s, but they are smaller than mainframes.  Their main purpose was originally control, instrumentation, and communication, as opposed to the large-scale processing, computation, and record-keeping tasks that are in the domain of mainframe systems.  Although minicomputers were commercially popular in the 1960s and 1970s, the minicomputer market began to decline with the advent of dedicated workstations for high-end graphics, as well as microcomputers, in the mid-1980s, a trend that continued into the 1990s.  At present, only a small number of minicomputer architectures are in widespread use, including systems manufactured by IBM.  In addition to assembly languages, FORTRAN and BASIC are also widely used programming languages for minicomputers.

 

The fourth generation, spanning from 1975 to 1985, saw the introduction of the first microcomputers, desktop machines that became widely available to the public (Schneider & Gersting, 2018).

 

Microcomputers are small, relatively inexpensive computers whose central processing unit is a microprocessor.  Originally developed in the 1970s, their popularity greatly increased in the latter years of that decade and into the 1980s.  These systems also penetrated the home market, in what became known as personal computers, or PCs.  In addition to familiar desktop and laptop computers, game consoles, handheld devices, mobile phones, and embedded systems (in which a microprocessor becomes part of a non-computational device, like a microwave or automotive subsystem, or is integrated into an external environment) are all examples of microcomputers.  Assembly language and BASIC (Beginners All-purpose Symbolic Instruction Code) were the early programming languages provided to users of microcomputers, but compilers for a variety of languages, including C, as well as Java and the Java virtual machine, have been widely available for decades.  Microcomputers now support most, if not all, programming languages.

 

The fourth generation also featured the development of widespread computer networks, including the early version of what would become known as the Internet.  Electronic mail (e-mail, or email), graphical user interfaces (GUIs), and embedded systems (which use microprocessors or computational hardware to partially or completely control or guide industrial instruments and everyday devices) were also introduced in this generation of computing (Schneider & Gersting, 2018).

 

The fifth generation, spanning from 1985 to the present, saw the introduction of massively parallel processors that are capable of quadrillions of operations per second.  A quadrillion represents the number 10¹⁵ (1 followed by 15 zeros, or 1,000,000,000,000,000), so this era saw extremely powerful computers being developed.  It is also the era of handheld digital devices, including cell phones, which are now in common use.  Powerful user interfaces that incorporate different types of media, including sound, voice recognition, images, computer-generated graphics, video, and television, were also part of this fifth generation.  Other innovations include wireless communications, massive storage devices, and ubiquitous computing, in which computing can be performed anywhere and on a wide variety of devices, not just at an IT centre, in a dedicated computer room, or on a desktop or even a laptop machine.

 

Mention must also be made of supercomputers.  Although, like mainframes, they are large-scale systems, supercomputers are high-performance machines, where performance is typically measured by the number of floating-point operations per second, or FLOPS.  Supercomputers are also characterized by their support of parallel computing, or parallel processing.  In parallel computing, one program utilizes multiple processors simultaneously, either in the same machine, where memory is shared among all the processors (shared memory processing), or across several independent processors residing on different machines connected through a network (distributed memory processing).

Supercomputers are used primarily, but not exclusively, for scientific, mathematical, biomedical, and engineering applications.  Climate modeling, meteorological forecasting, drug design and discovery, and other tasks requiring a massive amount of computational power frequently employ supercomputers.  Although supercomputers are not new architectures, and were introduced in the 1960s, they became more widely available and more widely used in the 1970s and 1980s.  Because of the scale and complexity of the scientific, biomedical, and engineering problems with which researchers are currently faced, supercomputers of all varieties are increasingly crucial.  They are generally physically large and extremely expensive.  Consequently, supercomputing resources are frequently shared among several institutions.  For instance, the SHARCNET (Shared Hierarchical Academic Research Computing Network) consortium, part of the Compute Canada organization, hosts supercomputer resources at several institutions that are used by researchers at many universities and post-secondary institutions in Ontario.  Because of their close association with scientific and engineering applications and research, and the need for high performance, FORTRAN is a common programming language for supercomputers.  C is also widely used.  Programs running on supercomputers are frequently written in languages that explicitly support parallelism.  Of special interest in this category is Julia, which has built-in support for many different types of parallelism.
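
To give a small, concrete sense of the divide-and-combine idea behind parallel computing (this toy example is added for illustration and is not drawn from the cited sources; production supercomputer codes typically use FORTRAN, C, or Julia together with technologies such as MPI or OpenMP), the following Python sketch splits a summation across several worker processes on a single machine using the standard multiprocessing module:

    import multiprocessing as mp

    def partial_sum(chunk):
        # Each worker process independently sums its own chunk of the data.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n_workers = 4
        chunk_size = len(data) // n_workers
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        # Distribute the chunks across worker processes, then combine the results.
        with mp.Pool(processes=n_workers) as pool:
            total = sum(pool.map(partial_sum, chunks))

        print(total == sum(data))  # True: the parallel and serial results agree

Each worker computes a partial result independently, and the partial results are then combined; large-scale parallel programs follow the same pattern, only across thousands of processors and much larger datasets.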

