3.7 Invention of Film

Please read: pp 1-16

Wheeler Winston Dixon and Gwendolyn Audrey Foster. A Short History of Film, Third Edition. Rutgers University Press, 2018. https://archive.org/details/shorthistoryoffi00dixo

 


A BRIEF HISTORY OF CINEMA – Russell Sharman

Excerpt from: https://uark.pressbooks.pub/movingpictures/chapter/a-brief-history-of-cinema/#chapter-26-section-1

What is Cinema?

Is it the same as a movie or film? Does it include digital video, broadcast content, streaming media? Is it a highbrow term reserved only for European and art house feature films? Or is it a catch-all for any time a series of still images runs together to produce the illusion of movement, whether in a multiplex theater or on the 5-inch screen of a smartphone?

Technically, the word itself derives from the ancient Greek kinema, meaning movement. Historically, it's a shortened version of the French cinematographe, an invention of two brothers, Auguste and Louis Lumiere, that combined kinema with another Greek root, graphein, meaning to write or record.

The “recording of movement” seems as good a place as any to begin an exploration of the moving image. And cinema seems broad (or vague) enough to capture the essence of the form, whether we use it specifically in reference to that art house film, or to refer to the more commonplace production and consumption of movies, TV, streaming series, videos, interactive gaming, VR, AR or whatever new technology mediates our experience of the moving image. Because ultimately that’s what all of the above have in common: the moving image. Cinema, in that sense, stands at the intersection of art and technology like nothing else. As an art form it would not exist without the technology required to capture the moving image. But the mere ability to record a moving image would be meaningless without the art required to capture our imagination.

But cinema is much more than the intersection of art and technology. It is also, and maybe more importantly, a powerful medium of communication. Like language itself, cinema is a surrounding and enveloping substance that carries with it what it means to be human in a specific time and place. That is to say, it mediates our experience of the world, helps us make sense of things, and in doing so, often helps shape the world itself. It’s why we often find ourselves confronted by some extraordinary event and find the only way to describe it is: “It was like a movie.”

In fact, for more than a century, filmmakers and audiences have collaborated on a massive, ongoing, largely unconscious social experiment: the development of a cinematic language, the fundamental and increasingly complex rules for how cinema communicates meaning. There is a syntax, a grammar, to cinema that has developed over time. And these rules, as with any language, are iterative, that is, they form and evolve through repetition, both within and between each generation. As children we are socialized into ways of seeing through children’s programming, cartoons and YouTube videos. As adults we become more sophisticated in our understanding of the rules, able to innovate, re-combine, become creative with the language. And every generation or so, we are confronted with great leaps forward in technology that re-orient and often advance our understanding of how the language works.

And therein lies the critical difference between cinematic language and every other means of communication. The innovations and complexity of modern written languages have taken more than 5,000 years to develop. Multiply that by at least 10 for spoken language.

Cinematic language has taken just a little more than 100 years to come into its own.

 

In January 1896 those two brothers, Auguste and Louis Lumiere, set up their cinematographe, a combination motion picture camera and projector, at a café in Lyon, France and presented their short film, L'arrivée d'un train en gare de La Ciotat (Arrival of a Train at La Ciotat Station), to a paying audience. It was a simple film, aptly titled, of a train pulling into a station. The static camera, positioned near the tracks, captured a few would-be passengers milling about as the train arrived, growing larger and larger in the frame until it steamed past and slowed to a stop. There was no editing, just one continuous shot. A mere 50 seconds long…

And it blew the minds of everyone who saw it.

Accounts vary as to the specifics of the audience reaction. Some claim the moving image of a train hurtling toward the screen struck fear among those in attendance, driving them from their seats in a panic. Others underplay the reaction, noting only that no one had seen anything like it. Which, of course, wasn’t entirely true either. It wasn’t the first motion picture. The Lumiere brothers had projected a series of 10 short films in Paris the year before. An American inventor, Woodville Latham, had developed his own projection system that same year. And Thomas Edison had invented a similar apparatus before that.

But one thing is certain: that early film, as simple as it was, changed the way we see the world and ourselves. From the early actualite documentary short films of the Lumieres, to the wild, theatrical flights of fancy of Georges Melies, to the epic narrative films of Lois Weber and D. W. Griffith, the new medium slowly but surely developed its own unique cinematic language. Primitive at first, limited in its visual vocabulary, but with unlimited potential. And as filmmakers learned how to use that language to re-create the world around them through moving pictures, we learned right along with them. Soon we were no longer awed (much less terrified) by a two-dimensional image of a train pulling into a station, but we were no less enchanted by the possibilities of the medium with the addition of narrative structure, editing, production design, and (eventually) sound and color cinematography.

Since that January day in Lyon, we have all been active participants in this ongoing development of a cinematic language. As the novelty short films of those early pioneers gave way to a global entertainment industry centered on Hollywood and its factory-like production of discrete, 90-minute narrative feature films. As the invention of broadcast technology in the first half of the 20th century gave way to the rise of television programming and serialized storytelling. And as the internet revolution at the end of the 20th century gave way to the streaming content of the 21st, from binge-worthy series lasting years on end to one-minute videos on social media platforms like Snapchat and TikTok. Each evolution of the form borrowed from and built on what came before, both in terms of how filmmakers tell their stories and how we experience them. And inasmuch as we may be mystified and even amused by the audience reaction to that simple depiction of a train pulling into a station back in 1896, imagine how that same audience would respond to the last Avengers film projected in IMAX 3D.

We’ve certainly come a long, long way.

 

There is an ancient story about a king who was so smitten by the song of a particular bird that he ordered his wisest and most accomplished scientists to identify its source. How could it sing so beautifully? What apparatus lay behind such a sweet sound? So they did the only thing they could think to do: they killed the bird and dissected it to find the source of its song. Of course, by killing the bird, they killed its song.

The analysis of an art form, even one as dominated by technology as cinema, always runs the risk of killing the source of its beauty. By taking it apart, piece by piece, there’s a chance we’ll lose sight of the whole, that ineffable quality that makes art so much more than the sum of its parts. Throughout this text, my hope is that by gaining a deeper understanding of how cinema works, in both form and content, you’ll appreciate its beauty even more.

In other words, I don’t want to kill the bird.

Because as much as cinema is an ongoing, collaborative social experiment, one in which we are all participants, it also carries with it a certain magic. And like any good magic show, we all know it’s an illusion. We all know that even the world’s greatest magician can’t really make an object float or saw a person in half (without serious legal implications). It’s all a trick. A sleight of hand that maintains the illusion. But we’ve all agreed to allow ourselves to be fooled. In fact, we’ve often paid good money for the privilege. Cinema is no different. A century of tricks used to fool an audience that’s been in on it from the very beginning. We laugh or cry or scream at the screen, openly and unapologetically manipulated by the medium. And that’s how we like it.

This text is dedicated to revealing the tricks without ruining the illusion. To look behind the curtain to see that the wizard is one of us. That in fact, we are the wizard (great movie by the way).  Hopefully by doing so we will only deepen our appreciation of cinema in all its forms and enjoy the artistry of a well-crafted illusion that much more.


Leland Stanford was bored.

In 1872, Stanford was a wealthy robber baron, former Governor of California, and horse racing enthusiast with way too much time on his hands. Spending much of that time at the track, he became convinced that a horse at full gallop lifted all four hooves off the ground. His friends scoffed at the idea. Unfortunately, a horse's legs moved so fast that it was impossible to tell with the human eye. So he did what really wealthy people do when they want to settle a bet: he turned to a nature photographer, Eadweard Muybridge, and offered him $25,000 to photograph a horse mid-gallop.

Six years later, after narrowly avoiding a murder conviction (but that’s another story), Muybridge perfected a technique of photographing a horse in motion with a series of 12 cameras triggered in sequence. One of the photos clearly showed that all four of the horse’s hooves left the ground at full gallop. Stanford won the bet and went on to found Stanford University. Muybridge pocketed the $25,000 and became famous for the invention of series photography, a critical first step toward motion pictures.

 

The Horse in Motion. Eadweard Muybridge, 1878. Public Domain Image.

Of course, the mechanical reproduction of an image had already been around for some time. The Camera Obscura, a technique for reproducing an image by projecting a scene through a tiny hole so that it appears inverted and reversed on the opposite wall or surface (think pinhole camera), had been around since at least the 5th century BCE, if not thousands of years earlier. But it wasn't until a couple of French inventors, Nicephore Niepce and Louis Daguerre, managed to capture an image through a chemical process known as photoetching in the 1820s that photography was born. By 1837, Niepce was dead (best not to ask too many questions about that) and Daguerre had perfected the technique of fixing an image on a photographic plate through a chemical reaction of silver, iodine and mercury. He called it a daguerreotype. After himself. Naturally.

But to create the illusion of movement from these still images would require further innovation. The basic concept of animation was already in the air through earlier inventions like the magic lantern and eventually the zoetrope. But a photo-realistic recreation of movement was unheard of. That’s where Muybridge comes in. His technique of capturing a series of still images in quick succession laid the groundwork for other inventors like Thomas Edison, Woodville Latham and Auguste and Louis Lumiere to develop new ways of photographing and projecting movement. Crucial to this process was the development of strips of light-sensitive celluloid film to replace the bulky glass plates used by Muybridge. This enabled a single camera to record a series of high-speed exposures (rather than multiple cameras taking a single photo in sequence). It also enabled that same strip of film to be projected at an equally high speed, creating the illusion of movement through a combination of optical and neurological phenomena. But more on that in the next chapter.

By 1893, 15 years after Muybridge won Stanford's bet, Edison had built the first "movie studio," a small, cramped, wood-frame hut covered in black tar paper with a hole in the roof to let in sunlight. His employees nicknamed it the Black Maria because it reminded them of the police prisoner transport wagons in use at the time (also known as "paddy wagons," with apologies to the Irish). One of the first films they produced was a 5-second "scene" of a man sneezing.

Riveting stuff. But still, movies were born.

Sort of.

There was just one problem: the only way to view Edison's films was through a kinetoscope, a machine that allowed a single viewer to peer into a viewfinder and watch the strip of images run past. The ability to project the images to a paying audience would take another couple of years.

In 1895, Woodville Latham, a chemist and Confederate veteran of the Civil War, lured away a couple of Edison’s employees and perfected the technique of motion picture projection. In that same year, over in France, Auguste and Louis Lumiere invented the cinematographe which could perform the same modern miracle. The Lumiere brothers would receive the lion’s share of the credit, but Latham and the Lumieres essentially tied for first place in the invention of cinema as we know it.

Sort of.

It turns out there was another French inventor, Louis Le Prince (apparently we owe a lot to the French), who was experimenting with motion pictures and had apparently perfected the technique by 1890. But that same year, just before he was to travel to the US for a planned public demonstration – one that could have eclipsed Edison's claim on the technology – he mysteriously vanished from a train. His body and luggage, including his invention, were never found. Conspiracy theories about his untimely disappearance have circulated ever since (we're looking at you, Thomas Edison).

Those early years of cinema were marked by great leaps forward in technology, but not so much forward movement in terms of art. Whether it was Edison's 5-second film of a sneeze, or the Lumieres' 46-second film Workers Leaving the Lumiere Factory (which is exactly what it sounds like), the films were wildly popular because no one had seen anything like them, not because they were breaking new ground narratively.

There were, of course, notable exceptions. Alice Guy-Blaché was working as a secretary at a photography company when she saw the Lumieres’ invention in 1895. The following year she wrote, directed, and edited what many consider the first fully fictional film in cinema history, The Cabbage Fairy (1896):

 

https://player.vimeo.com/video/278722967?h=71ed63aa9b&dnt=1&app_id=122963

But it was Georges Melies who became the most well-known filmmaker-as-entertainer in those first few years. Melies was a showman in Paris with a flair for the dramatic. He was one of the first to see the Lumieres' cinematographe in action in 1895 and immediately saw its potential as a form of mass entertainment. Over the next couple of decades he produced hundreds of films that combined fanciful stagecraft, optical illusions, and wild storylines that anticipated much of what was to come in the next century of cinema. His most famous film, A Trip to the Moon, produced in 1902, transported audiences to the surface of the moon on a rocket ship and sometimes even included hand-tinted images to approximate color cinematography.

 

He was very much ahead of his time and would eventually be immortalized in Martin Scorsese’s 2011 film Hugo.

 

By the start of the 20th century, cinema had become a global phenomenon. Fortunately, many of those early filmmakers had caught up with Melies in terms of the art of cinema and its potential as an entertainment medium. In Germany, filmmakers like Fritz Lang and Robert Wiene helped form one of the earliest examples of a unique and unified cinematic style, consisting of highly stylized, surreal production designs and modernist, even futuristic narrative conventions that came to be known as German Expressionism. Wiene's The Cabinet of Dr. Caligari (1920) was a macabre nightmare of a film about a murderous hypnotist and is often cited as the world's first horror movie.

 

And Lange’s Metropolis (1927) was an epic science-fiction dystopian fantasy with an original running time of more than 2 hours.

 

https://www.youtube.com/watch?v=5BBnMCAIuQg

Meanwhile in Soviet Russia, Lev Kuleshov and Sergei Eisenstein were experimenting with how the creative juxtaposition of images could influence how an audience thinks and feels about what they see on screen (also known as editing, a relatively new concept at the time). Through a series of experiments, Kuleshov demonstrated that it was this juxtaposition of images, not the discrete images themselves, that generated meaning, a phenomenon that came to be known as The Kuleshov Effect. Eisenstein, his friend and colleague, applied Kuleshov’s theories to his own cinematic creations, including the concept of montage: a collage of moving images designed to create an emotional effect rather than a logical narrative sequence. Eisenstein’s most famous use of this technique is in the Odessa steps sequence of his historical epic, Battleship Potemkin (1925).

 

But it was the United States that was destined to become the center of the cinematic universe, especially as cinema grew into a global mass entertainment medium. Lois Weber was an early innovator and one of the first American directors, and the first woman, to make a narrative feature film, The Merchant of Venice (1914). Throughout her career, Weber would pursue subjects considered controversial at the time, such as abortion, birth control and capital punishment (it helped that she owned her own studio). But it wasn't just her subject matter that pushed the envelope. In her short film Suspense (1913), for example, she pioneered the use of intercutting and basically invented split-screen editing.

Others, like D. W. Griffith, followed suit (though it's doubtful Griffith would have given Weber any credit). Like Weber, Griffith helped pioneer the full-length feature film and invented many of the narrative conventions, camera moves, and editing techniques still in use today. Unfortunately, many of those innovations were first introduced in his ignoble, wildly racist (and wildly popular at the time) The Birth of a Nation (1915). Griffith followed that up the next year with the somewhat ironically titled Intolerance (1916), a box office disappointment but notable for its larger-than-life sets, extravagant costumes, and complex storyline that made Georges Melies's creations seem quaint by comparison.

Weber, Griffith and many other filmmakers and entrepreneurs would go on to establish film studios able to churn out hundreds of short and feature-length films for the movie theaters popping up on almost every street corner.

CINEMA GOES HOLLYWOOD

This burgeoning new entertainment industry was not, however, located in southern California. Not yet, anyway. Almost all of the production facilities in business at the time were in New York, New Jersey or somewhere on the Eastern seaboard. Partly because the one man who still controlled the technology that made cinema possible was based there: Thomas Edison. Edison owned the patent for capturing and projecting motion pictures, essentially cornering the market on the new technology (R.I.P. Louis Le Prince). If you wanted to make a movie in the 1900s or 1910s, you had to pay Edison for the privilege.

Not surprisingly, a lot of would-be filmmakers bristled at Edison’s control over the industry. And since patent law was difficult to enforce across state lines at the time, many of them saw California as an ideal place to start a career in filmmaking. Sure, the weather was nice. But it was also as far away from the northeast as you could possibly get within the continental United States, and a lot harder for Edison to sue for patent violations.

By 1912, Los Angeles had replaced New York as the center of the film business, attracting filmmakers and entertainment entrepreneurs from around the world. World-renowned filmmakers like Ernst Lubitsch from Germany, Erich von Stroheim from Austria, and an impish comedian from England named Charlie Chaplin, all flocked to the massive new production facilities that sprang up around the city. Universal Pictures, Metro-Goldwyn-Mayer (MGM), Warner Bros., all of them motion picture factories able to mass-produce dozens, sometimes hundreds of films per year. And they were surrounded by hundreds of other, smaller companies, all of them competing for screen space in thousands of new movie houses around the country.

One small neighborhood in the heart of Los Angeles became most closely associated with the burgeoning new industry: Hollywood.

By 1915, after a few years of failed lawsuits (and one imagines a fair number of temper tantrums), Thomas Edison admitted defeat and dissolved his Motion Picture Patents Company.

In the heyday of those early years, some of those larger studios decided the best way to ensure an audience for their films was to own the theaters as well. They built extravagant movie palaces in large market cities, and hundreds more humble theaters in small towns, effectively controlling all aspects of the business: production, distribution and exhibition. In business terms that’s called vertical integration. It’s a practice that would get them in a lot of trouble with the U.S. government a couple of decades later, but in the meantime, it meant big profits with no end in sight.

Then, in 1927, everything changed.

Warner Bros. was a family-owned studio run by four brothers, smaller than companies like Universal and MGM. But one of those brothers, Sam, had a vision. Or rather, an ear. Up to that point, cinema was still a silent medium. But Sam was convinced that sound, and more specifically, sound that was synchronized to the image, was the future.

And almost everyone thought he was crazy.

It seems absurd now, but no one saw any reason to add sound to an already perfect, and very profitable, visual medium. What next? Color? Don’t be ridiculous…

Fortunately, Sam Warner persisted, investing the company’s profits into the technology required to not only record synchronized sound, but to reproduce it in their movie theaters around the country. Finally, on October 6th, 1927, Warner Bros. released The Jazz Singer, the first film to include synchronized dialog.

 

Spoiler alert: It was a HUGE success. Unfortunately, Sam Warner didn’t live to see it. He died of a brain infection on October 5th, the day before the premiere.

Suddenly, every studio was scrambling to catch up to Warner Bros. That meant a massive capital investment in sound technology, retrofitting production facilities and thousands of movie theaters. Not every production company could afford the upgrade, and many struggled to compete in the new market for films with synchronized sound. And just when it seemed like it couldn’t get worse for those smaller companies, it did. In October of 1929, the stock market crashed, plunging the nation into the Great Depression. Hundreds of production companies closed their doors for good.

At the start of the 1930s, after this tremendous consolidation in the industry, eight major studios were left standing: RKO Pictures, Paramount, MGM, Fox, Warner Bros., Universal Pictures, Columbia Pictures and United Artists. Five of those – RKO, Paramount, MGM, Fox and Warner Bros. – also still owned extensive theater chains (aka vertical integration), an important source of their enormous profits, even during the Depression (apparently movies have always been a way to escape our troubles, at least for a couple of hours). But that didn't mean they could carry on with business as usual. They were forced to be as efficient as possible to maximize profits. Perhaps ironically, this led to a roughly 20-year stretch, from 1927 to 1948, that would become known as The Golden Age, one of the most prolific and critically acclaimed periods in the history of Hollywood.

THE GOLDEN AGE

The so-called Golden Age of Hollywood was dominated by those eight powerful studios and defined by four crucial business decisions.[1] First and foremost, at least for five of the eight, was the emphasis on vertical integration. By owning and controlling every aspect of the business, production, distribution and exhibition, those companies could minimize risk and maximize profit by monopolizing the screens in local theaters. Theatergoers would hand over their hard-earned nickels regardless of what was playing, and that meant the studios could cut costs and not lose paying customers. And even for those few independent theater chains, the studios minimized risk through practices such as block booking and blind bidding. Essentially, the studios would force theaters to buy a block of several films to screen (block booking), sometimes without even knowing what they were paying for (blind bidding). One or two might be prestige films with well-known actors and higher production values, but the rest would be low-budget westerns or thrillers that theaters would be forced to exhibit. The studios made money regardless.

The second crucial business decision was to centralize the production process. Rather than allow actual filmmakers – writers, directors, actors – to control the creative process, deciding what scripts to develop and which films to put into production, the major studios relied on one or two central producers. At Warner Bros. it was Jack Warner and Darryl Zanuck. At RKO it was David O. Selznick. And at MGM it was Louis B. Mayer and 28-year-old Irving Thalberg.

Irving Thalberg. Central Producer at MGM. Public Domain Image.

Thalberg would become the greatest example of the central producer role, running the most profitable studio throughout the Golden Age. Thalberg personally oversaw every production on the MGM lot, hiring and firing every writer, director and actor, and often taking over as editor before the films were shipped off to theaters. And yet, he shunned fame and never put his name on any of MGM's productions. Always in ill health, perhaps in part because of his inhuman workload, he died young, in 1936, at age 37.

The third business decision that ensured studios could control costs and maximize profits was to keep the "talent" – writers, directors and actors – on low-cost, iron-clad, multi-year contracts. As Hollywood moved into the Golden Age, filmmakers – especially actors – became internationally famous. Stardom was a new and exciting concept, and studios depended on it to sell tickets. But if any one of these new global celebrities had the power to demand a fee commensurate with their name recognition, it could bankrupt even the most successful studio. To protect against stars leveraging their fame for higher pay, and thus cutting in on their profits, the studios maintained a stable of actors on contracts that limited their salaries to low weekly rates for years on end, no matter how successful their films might become. There were no per-film negotiations and certainly no profit sharing. And if an actor decided to sit out a film or two in protest, their contract would be extended by however long they held out. Bette Davis, one of the biggest stars of the era, once fled to England to escape her draconian contract with Warner Bros. The studio sued her in a British court, won, and she was forced to return. These same contracts applied to writers and directors, employed by the studio as staff, not the freelance creatives they are today. It was an ingenious (and diabolical) system that meant studios could keep their production costs incredibly low.

The fourth and final crucial business decision that made the Golden Age possible was the creative specialization, or house style, of each major studio. Rather than try to make every kind of movie for every kind of taste, the studios knew they needed to specialize, to lean into what they did best. This decision, perhaps more than any of the others, is what made this period so creatively fertile. Despite all of the restrictions imposed by vertical integration, central producers, and talent contracts, the house style of a given studio meant that all of its resources went into making the very best version of a certain kind of film. For MGM, it was the "prestige" picture. An MGM movie almost always centered on the elite class, with lavish set designs and rags-to-riches stories: the perfect escapist, aspirational content for the 1930s. For Warner Bros. it was the gritty urban crime thriller: Little Caesar (1931), The Public Enemy (1931), The Maltese Falcon (1941). They were cheap to make and audiences ate them up. Gangsters, hardboiled detectives, femmes fatales, these were all consistent elements of Warner Bros. films of the period. And for Universal, it was the horror movie:

Frankenstein (1931), Dracula (1931), The Mummy (1932), all of them Universal pictures (and many of them inspired by the surreal production design of German Expressionist films like The Cabinet of Dr. Caligari).

But the fun and profits couldn’t last forever.

Three important events conspired to bring an end to the reign of the major studios and the Golden Age of Hollywood.

First, in 1943, Olivia de Havilland, a young actress known for her role as Melanie in Gone with the Wind (1939), sued Warner Bros. for adding six months to her contract, the amount of time she had been suspended by the studio for refusing to take roles she didn't want. She wasn't the first Hollywood actor to sue a studio over their stifling contracts. But she was the first to win her case. The court's decision in her favor set a precedent that quickly eroded the studios' power over talent. Soon actors became freelance performers, demanding fees that matched their box office draw and even profit participation in the success of their films. All of which took a sizeable chunk out of the studios' revenue.

Then, in 1948, the U.S. government filed an anti-trust case against the major studios, finally recognizing that vertical integration constituted an unfair monopoly over the entertainment industry. The case went to the Supreme Court and in a landmark ruling known as The Paramount Decision (only because Paramount was listed first in the suit), the court ordered that all of the major studios sell off their theater chains and outlawed the practices of block booking and blind bidding. It was a financial disaster for the big studios. No longer able to shovel content to their own theater chains, studios had to actually consider what independent theaters wanted to screen and what paying audiences wanted to see. The result was a dramatic contraction in output as studios made fewer and fewer movies with increasingly expensive, freelance talent hoping to hit the moving target of audience interest.

And then it got worse.

In the wake of World War II, just as the Supreme Court was handing down The Paramount Decision, the television set was quickly becoming a common household item. By the end of the 1940s and into the 1950s, the rise of television entertainment meant fewer reasons to leave the house and more reasons for the movie studios to panic. Some of them, like MGM, realized there was money to be made in licensing their film libraries to broadcasters. And some of them, like Universal, realized there was money to be made in leasing their vast production facilities to television producers. But all of them knew it was the end of an era.

Attribution:

Moving Pictures by Russell Sharman is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

 

License


Origins of Contemporary Art, Design, and Interiors Copyright © by Jennifer Lorraine Fraser is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
