Sound



SOUND AESTHETICS AND PRACTICE

Sound's constructed nature and the wide variety of relationships it can have to the image give sound great expressive potential—even within an illusionistic aesthetic. Characteristics of film sound that allow it to be manipulated include selectivity, nonspecificity, and ambiguity.

RENÉ CLAIR
b. Paris, France, 11 November 1898, d. 15 March 1981

René Clair epitomized the ambiguous relationship many filmmakers had with sound in the transition-to-sound period between 1928 and 1933. Whereas others like Ernst Lubitsch, Jean Vigo, and Rouben Mamoulian pushed the boundaries of the new technology, experimenting in a variety of styles, Clair initially stood among those who believed that sound would constrain the possibilities of film as a visual medium. He was hesitant to embrace sound because it increased production costs and because the industrialized cinematic practices that it introduced would jeopardize directorial control. In addition, he feared that making the camera subservient to the recording equipment would sacrifice the cinematic primacy of the image. For Clair, sound had to complement the image, not regulate it.

Clair's first sound film, Sous les toits de Paris ( Under the Roofs of Paris , 1930), features music as a characterization and atmospheric device, minimal use of dialogue, and an almost complete absence of natural sounds. Interested in the nonsynchronous relationship between sound and image, Clair avoids using sound to express information already given by the image, instead exploiting their disjunction for comic effect. In the film's climactic fight scene, when a streetlight is broken and the screen goes dark, Clair does not resort to the musical score; instead, he uses vocal and bodily sounds to express the eruption of physical violence into the story. In À Nous la liberté ( Freedom for Us , 1931), while still experimenting with asynchronous sound and image, Clair employed the musical score to mark the narrative incursion of fantasy into the story and as ironic commentary on the action.

His first English-language film, The Ghost Goes West (1935), marks a significant shift in Clair's approach to film sound. Writing the screenplay with American playwright Robert E. Sherwood, he became fully aware of the cinematic possibilities of speech. In fact, the film is closer to American dialogue-based humor than any of his previous endeavors. I Married a Witch (1942) fully immersed Clair in the screwball comedy genre, leaving behind the visually poetic style of his French period.

Clair returned to France in 1945. Les Belles de Nuit ( Beauties of the Night , 1952), among his most significant works, marked a return to his earlier sound-image experiments. The film's protagonist, Claude, can distinguish between dream and reality only by trying to make a noise. The conspicuously noiseless worlds of his dreams point metaphorically to the inexhaustible possibilities of film as a visual medium, possibilities that sound technology had partially restricted.

RECOMMENDED VIEWING

Sous les toits de Paris ( Under the Roofs of Paris , 1930), À Nous la liberté ( Freedom for Us , 1931), The Ghost Goes West (1935), Les Belles de Nuit ( Beauties of the Night , 1952)

FURTHER READING

Clair, René. Cinema Yesterday and Today . Edited by R. C. Dale. New York: Dover, 1972. Translation by Stanley Applebaum of Le Cinéma d'hier, cinéma d'aujourd'hui (1970).

Dale, R. C. The Films of René Clair . Metuchen, NJ: Scarecrow Press, 1986.

Gorbman, Claudia. Unheard Melodies: Narrative Film Music . Bloomington: Indiana University Press, 1987.

McGerr, Celia. René Clair . Boston: Twayne, 1980.

Vicente Rodriguez-Ortega

Like music, sound effects (and to a lesser extent, dialogue) speak to the emotions. Take the "simple" sound of footsteps as a character walks onscreen. Choices in reverberation, pacing, timbre, volume, and mixing (of sounds with one another) may not only determine our sense of the physical contours of the space in which the character is walking but also, in combination with the images, suggest any number of feelings: loneliness, authority, joy, paranoia. These choices, rarely noticed by the audience, are mainly imparted to the sounds not during production but after shooting stops.

Separation defines sound practices in many senses. For one thing, sound and image are recorded onto separate mediums. For another, the personnel involved in different units may never meet. The production mixer (set recordist) rarely interacts with the editing (postproduction) staff. And on a major production, dialogue, sound effects, and music are handled by discrete departments, which may remain independent of one another.

Normally, little sound other than dialogue is captured during filming. Yet even here, microphone type and placement can affect the tonal quality of a voice. Production dialogue is best taken with a microphone suspended on a boom above the actors just outside of the camera's frame line. This placement preserves the integrity of the original performance and maintains aural perspective in rough correspondence to the camera angle. When booms are not feasible, the actors can be fitted with radio mikes, small lavalieres connected to radio frequency transmitters concealed in clothing. These microphones sacrifice perspective and vocal quality for invisibility. Locations are scouted for visual impact; unless production assistants can reroute traffic and shut down air-conditioning systems, the audio environment may prove unconquerable. Under budget and schedule pressures, audio aesthetics are often sacrificed and some production sound is kept only as a "guide track" on the assumption that it can be "fixed in the mix."

Production mixers normally ask that all action cease for a few moments on each location so that they may record ambient sound or room tone, the continuous background sound (such as water lapping) in that space. Editors will later have to reinsert ambience under dialogue and effects created during postproduction for continuity with production sound. The sound crew may also take some "wild" sound (such as foghorns), not synchronized to any shot, for possible use as authentic sound effects.

Sound recording mediums have evolved rapidly in the digital age. Analog recording on 1/4-inch tape was supplanted in part by digital audiotape (DAT), which in turn was replaced by recorders with removable hard disks whose files can be transferred directly into computer workstations for editing. Methods of establishing and maintaining sync (precisely matching sound and image) have also evolved. To enable the editor to match voice and lip movement, each take was traditionally "slated" (numbered on a small blackboard held in front of the camera) and announced vocally by an assistant director, who then struck the hinged clapper stick to provide a sync point. Although slating is still done, a time code is now used to sync camera and recorder electronically.

Actors and directors almost always prefer to record dialogue directly on the set. During production the dialogue is synced up overnight with the image so that the filmmakers can select the best takes by evaluating vocal performance as well as visual variations. Later, specialized dialogue editors will make minute adjustments to salvage as much of the dialogue as possible. They eliminate extraneous noises and may combine parts of words from different takes or even scenes to replace a single flawed word.

[Figure: René Clair experimented with a musical score in À Nous la liberté ( Freedom for Us , 1931).]

Although intelligibility is the usual priority for dialogue, it can be manipulated, perhaps by increasing reverberation or volume, to characterize someone as menacing. But the main choices involve how dialogue is edited in relation to picture. To show "talking heads" can be redundant and boring. The picture editor's choice of when to shift between speaker and listener not only alters emotional identification but allows us to learn information simultaneously from one character's facial expression and the other's vocal inflection.

Any dialogue that cannot be polished or could not be captured at all during production is recorded during postproduction in a process called looping, or ADR (automated dialogue replacement). The actor repeatedly watches the scene that needs dialogue, while listening to a guide track on headphones, and then reperforms each line to match the wording and lip movements. Computers can imperceptibly stretch or shorten words to adjust a phrase that is not quite in sync.

While some sound effects are recorded during production, most are added or created later. "Spotting" sessions are held to determine what kinds of sounds are needed and where scoring will be heard. Some sounds that must be in sync are performed by a foley artist. Foleying is the looping of sound effects in a specialized studio outfitted with various walking surfaces and props. Sometimes called foley walkers because so much of their work consists of adding footsteps, foley artists create sounds by moving their bodies or props as they watch the image. Often their props do not match the original objects. A feather duster may simulate not only a flock of birds, but also leaves blowing along the street. A kiss is still just a kiss in filmmaking, but its sound may be recorded by a foley artist making dispassionate love to his or her own wrist. Because sounds like clothing rustle and footsteps are rarely noticed by the audience, they can later be subtly adjusted to help characterize the people who appear to make them. The villain's sword can be given a more ominous swishing sound than the hero's.

Sound effects that need not be recorded in sync can come from CD libraries or be freshly generated. Often recording the original source is not as convincing as inventing one. The editors of Ben-Hur (1959) found that recording real whips for the chariot race sounded less realistic than steaks slapped on a thigh. There is particular freedom to create sound effects when there is no authentic source for the image, as in monster and science fiction films. Creators of sounds often start by recording something real and then processing (altering) it. Two simple processing tricks that date from the earliest days of sound effects are reversing the original sound or changing its pitch. It is also common practice to create one new sound by "stacking" effects—layering several sources and processing them together. For instance, the voice of the Star Wars (1977) droid, R2-D2, is a combination of electronically generated sound plus water pipes, whistles, and human vocalizations. With digital technologies, a sound editor can feed into a computer a brief sample of a sound, which can then be expanded and radically modified.
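The reversing, pitch-changing, and "stacking" tricks described above can be sketched in a few lines of code. This is a hypothetical illustration using NumPy, not a tool actually used by sound editors; pitch is changed here by simple resampling, which also alters duration, much as varying playback speed did in the analog era:

```python
import numpy as np

SR = 48_000  # sample rate in Hz (an assumption for this sketch)

def reverse(samples: np.ndarray) -> np.ndarray:
    """Play a sound backwards: the oldest processing trick."""
    return samples[::-1]

def change_pitch(samples: np.ndarray, factor: float) -> np.ndarray:
    """Shift pitch by resampling. factor > 1 raises the pitch (and
    shortens the sound); factor < 1 lowers it (and lengthens it)."""
    n_out = int(len(samples) / factor)
    old_idx = np.arange(len(samples))
    new_idx = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(new_idx, old_idx, samples)

def stack(*layers: np.ndarray) -> np.ndarray:
    """'Stack' several sources into one new effect by layering
    (summing) them, then normalizing to avoid clipping."""
    length = max(len(layer) for layer in layers)
    mix = np.zeros(length)
    for layer in layers:
        mix[: len(layer)] += layer
    peak = np.abs(mix).max()
    return mix / peak if peak > 0 else mix

# A one-second 220 Hz tone stands in for a recorded source sound.
t = np.linspace(0, 1, SR, endpoint=False)
tone = np.sin(2 * np.pi * 220 * t)

backwards = reverse(tone)                          # reversed source
octave_up = change_pitch(tone, 2.0)                # half as long, twice the pitch
layered = stack(tone, change_pitch(tone, 0.5))     # original plus octave-down layer
```

A real effects chain would add filtering, reverberation, and time-alignment on top of these primitives, but the principle is the same: start from something real, then transform and combine it until it sounds right.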

Music is not usually written until postproduction. The director, composer, and music editor hold a spotting session, running through the rough cut of the film and agreeing on where, and what kind of, music is needed. Then the music editor prepares a detailed list of "cues" timed to the split second, sets up the recording session if there is an orchestra, and makes any needed adjustments when the score is mixed with the other tracks.

The final combining of tracks is called "rerecording" on screen credits, but "the mix" or "the dub" by practitioners. (Many sound terms are regional. Practices also vary by region or project: from one to three rerecording mixers may preside at the console.) Basically, the mix combines the dialogue (and narration, if there is any), the effects, and the music. A final mix may combine hundreds of separate tracks. For manageability, groups of tracks are "premixed" so that like sounds are grouped and adjusted in preliminary relation to one another. Since dialogue takes precedence, it is mixed first. Music and effects, when added, must not compete with each other or with the dialogue. Sounds from disparate sources must be adjusted with tools like equalizers and filters (which manipulate specific frequencies) so that they match and flow seamlessly. Since the ratio of direct to reflected sound, along with volume, indicates how far we are from a sound's source, reverberation is an essential tool for placing a sound in a space. The rerecording mixer also distributes sounds to specific outputs, deciding, for instance, which sounds go to the surround speakers and which shift from one speaker to another. The rerecording mixer is both a master technician, who fine-tunes the adjustments to volume, duration, and tone quality begun in the premix, and an artist who makes thousands of aesthetic choices. The best rerecording mixers not only balance the various tracks but also subtly layer and orchestrate them, choosing which sounds to emphasize at a given moment to create a texture and pacing that have an emotional effect on the audience and support the narrative.
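Schematically, premixing and the dialogue-first balance can be illustrated in code. This is a hypothetical Python/NumPy sketch, with stems reduced to plain sample arrays and a crude level gate standing in for the mixer's judgment about when dialogue is present:

```python
import numpy as np

def premix(tracks, gains):
    """Fold a group of like tracks into one stem at preliminary levels."""
    return sum(g * t for g, t in zip(gains, tracks))

def final_mix(dialogue, effects, music, duck=0.3):
    """Combine the three stems. Dialogue takes precedence: wherever it
    is present, the effects-and-music 'bed' is attenuated (ducked) so
    that nothing competes with the voice."""
    speaking = np.abs(dialogue) > 0.01            # crude presence detector
    bed = effects + music
    bed = np.where(speaking, bed * duck, bed)     # duck the bed under dialogue
    mix = dialogue + bed
    peak = np.abs(mix).max()
    return mix / peak if peak > 1.0 else mix      # guard against clipping

# Toy four-sample stems: dialogue on samples 0 and 2 only.
dialogue = np.array([0.5, 0.0, 0.5, 0.0])
effects = premix([np.array([0.4, 0.4, 0.4, 0.4])], [0.5])  # one fx track at half gain
music = np.array([0.1, 0.1, 0.1, 0.1])
mix = final_mix(dialogue, effects, music)
```

In practice a mixer rides faders, equalizers, and reverb by ear rather than applying a fixed ducking ratio, but the structure is the same: premix like tracks into stems, set dialogue first, then fit everything else around it.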

Most likely the work of the various sound departments has been overseen by a supervising sound editor. Optimally (though rarely), sound is conceived, like production design, during preproduction, so that the film's sound is not an afterthought but an organic, integral part of the film's conception. Films that exploit the fullest expressive potential of sound may have been planned with a sound designer, a credit originated to suggest the conceptual importance of Walter Murch's contribution to Apocalypse Now (1979). The term now designates either someone with an overview of the sound, whose job can overlap that of a supervising sound editor, or someone who designs a specific type of sound, such as dinosaur steps.


