Replace `` with " in opening quotations

Lucas Cordiviola 2022-04-12 23:17:03 -03:00
parent 1ca5c25706
commit 19822c017f
89 changed files with 282 additions and 282 deletions

View File

@ -499,7 +499,7 @@ original version by: Nikos Drakos, CBLU, University of Leeds
<LI><A NAME="tex2html390"
HREF="node180.html">Narrow-band companding: noise suppression</A>
<LI><A NAME="tex2html391"
-HREF="node181.html">Timbre stamp (``vocoder")</A>
+HREF="node181.html">Timbre stamp ("vocoder")</A>
<LI><A NAME="tex2html392"
HREF="node182.html">Phase vocoder time bender</A>
</UL>

View File

@ -85,7 +85,7 @@ nominal amplitude <IMG
<DIV ALIGN="CENTER"><A NAME="fig01.04"></A><A NAME="1090"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 1.4:</STRONG>
-The relationship between ``MIDI" pitch and frequency in cycles per
+The relationship between "MIDI" pitch and frequency in cycles per
second (Hertz). The span of 24 MIDI values on the horizontal axis represents
two octaves, over which the frequency increases by a factor of four.</CAPTION>
<TR><TD><IMG

View File

@ -86,7 +86,7 @@ The phase-aligned formant (PAF) synthesis algorithm.</CAPTION>
Example F12.paf.pd (Figure <A HREF="#fig06.17">6.18</A>) is a realization of the PAF generator,
described in Section <A HREF="node96.html#sect6.paf">6.4</A>.
The control inputs specify the fundamental frequency, the center frequency, and
-the bandwidth, all in ``MIDI" units. The first steps taken in the realization
+the bandwidth, all in "MIDI" units. The first steps taken in the realization
are to divide center frequency by fundamental (to get the center frequency quotient)
and bandwidth by fundamental to get the index of modulation for the
waveshaper. The center frequency quotient is sampled-and-held so that it is
@ -95,7 +95,7 @@ only updated at periods of the fundamental.
<P>
The one oscillator (the <TT>phasor~</TT> object) runs at the fundamental
frequency. This is used both to control a <TT>samphold~</TT> object which
-synchronizes updates to the center frequency quotient (labeled ``C.F. relative
+synchronizes updates to the center frequency quotient (labeled "C.F. relative
to fundamental" in the figure), and to compute phases for both <TT>cos~</TT> objects which operate as shown earlier in Figure <A HREF="node100.html#fig06.16">6.17</A>.
<P>
@ -115,7 +115,7 @@ The amplitude of the half-sinusoid is then adjusted by an index of modulation
WIDTH="38" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img588.png"
ALT="${\omega_b}/\omega$">). The table
-(``bell-curve") holds an unnormalized Gaussian curve sampled
+("bell-curve") holds an unnormalized Gaussian curve sampled
from -4 to 4 over 200 points (25 points per unit), so the center of the table,
at point 100, corresponds to the central peak of the bell curve. Outside the
interval from -4 to 4 the Gaussian curve is negligibly small.
@ -145,31 +145,31 @@ Filling in the wavetable for Figure <A HREF="#fig06.17">6.18</A>.</CAPTION>
WIDTH="57" HEIGHT="41" ALIGN="MIDDLE" BORDER="0"
SRC="img623.png"
ALT="\fbox{ $\mathrm{until}$\ }"> :
-<A NAME="7054"></A>When the left, ``start" inlet is banged, output sequential bangs (with no
-elapsed time between them) iteratively, until the right, ``stop" inlet is
-banged. The stopping ``bang" message must originate somehow from the
-<TT>until</TT> object's outlet; otherwise, the outlet will send ``bang" messages
+<A NAME="7054"></A>When the left, "start" inlet is banged, output sequential bangs (with no
+elapsed time between them) iteratively, until the right, "stop" inlet is
+banged. The stopping "bang" message must originate somehow from the
+<TT>until</TT> object's outlet; otherwise, the outlet will send "bang" messages
forever, freezing out any other object which could break the loop.
<P>
As used here, a loop driven by an <TT>until</TT> object
counts from 0 to 199, inclusive. The loop count is maintained by the
-``<TT>f</TT>" and ``<TT>+ 1</TT>" objects, each of which feeds the other. But
-since the ``<TT>+ 1</TT>" object's output goes to the right inlet of the
-``<TT>f</TT>", its result (one greater) will only emerge from the
-``<TT>f</TT>" the next time it is banged by ``<TT>until</TT>". So each bang
-from ``<TT>until</TT>" increments the value by one.
+"<TT>f</TT>" and "<TT>+ 1</TT>" objects, each of which feeds the other. But
+since the "<TT>+ 1</TT>" object's output goes to the right inlet of the
+"<TT>f</TT>", its result (one greater) will only emerge from the
+"<TT>f</TT>" the next time it is banged by "<TT>until</TT>". So each bang
+from "<TT>until</TT>" increments the value by one.
<P>
-The order in which the loop is started matters: the upper ``<TT>t b b</TT>"
-object (short for ``trigger bang bang") must first send zero to the
-``<TT>f</TT>", thus initializing it, and then set the <TT>until</TT> object sending
+The order in which the loop is started matters: the upper "<TT>t b b</TT>"
+object (short for "trigger bang bang") must first send zero to the
+"<TT>f</TT>", thus initializing it, and then set the <TT>until</TT> object sending
bangs, incrementing the value, until stopped. To stop it when the value
reaches 199, a <TT>select</TT> object checks the value and, when it sees the
-match, bangs the ``stop" inlet of the <TT>until</TT> object.
+match, bangs the "stop" inlet of the <TT>until</TT> object.
<P>
-Meanwhile, for every number from 0 to 199 that comes out of the ``<TT>f</TT>"
+Meanwhile, for every number from 0 to 199 that comes out of the "<TT>f</TT>"
object, we create an ordered pair of messages to the <TT>tabwrite</TT> object.
First, at right, goes the index itself, from 0 to 199. Then for the left inlet,
the first <TT>expr</TT> object adjusts the index to range from -4 to 4 (it
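The loop this hunk describes (an until object driving an f / + 1 counter from 0 to 199, with each index remapped to the range -4 to 4) can be mirrored in ordinary code. A minimal Python sketch; the exact Gaussian form exp(-x*x) and the index mapping (i - 100)/25 are assumptions consistent with "25 points per unit", not read from the patch itself:

```python
import math

# Fill the 200-point "bell-curve" table with an unnormalized Gaussian
# sampled from -4 to 4 (25 points per unit), as the until-driven loop does.
table = [0.0] * 200
for i in range(200):           # the loop counts 0 to 199, inclusive
    x = (i - 100) / 25.0       # adjust the index to range from -4 to 4
    table[i] = math.exp(-x * x)
# Point 100 holds the central peak; outside -4..4 the curve is negligible.
```

As in the patch, the index must be produced before the value is written, mirroring the right-then-left message order into tabwrite.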

View File

@ -100,7 +100,7 @@ notes.
<DD>Echos: At time shifts between about 30 milliseconds and about a second,
the later copy of the signal can sound like an echo of the earlier one. An echo
may reduce the intelligibility of the signal (especially if it consists of
-speech), but usually won't change the overall ``shape" of melodies or
+speech), but usually won't change the overall "shape" of melodies or
phrases.
<P>

View File

@ -92,7 +92,7 @@ A convenient logarithmic scale for pitch is simply to
count the number of half-steps from a reference pitch--allowing fractions to
permit us to specify pitches which don't fall on a note of the Western scale.
The most commonly used logarithmic pitch scale is
-<A NAME="1100"></A>``MIDI pitch", in which the pitch 69 is assigned to a frequency of 440 cycles
+<A NAME="1100"></A>"MIDI pitch", in which the pitch 69 is assigned to a frequency of 440 cycles
per second--the A above middle C. To convert between a MIDI pitch <IMG
WIDTH="17" HEIGHT="13" ALIGN="BOTTOM" BORDER="0"
SRC="img111.png"
@ -153,7 +153,7 @@ second.
MIDI itself is an old hardware protocol which has unfortunately insinuated
itself into a great deal of software design. In hardware, MIDI allows only
integer pitches between 0 and 127. However, the underlying scale is well
-defined for any ``MIDI" number, even negative ones; for example a ``MIDI pitch"
+defined for any "MIDI" number, even negative ones; for example a "MIDI pitch"
of -4 is a decent rate of vibrato. The pitch scale cannot, however, describe
frequencies less than or equal to zero cycles per second. (For a clear
description of MIDI, its capabilities and limitations, see
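The pitch scale this hunk describes reduces to a pair of one-line conversions, anchored at pitch 69 = 440 Hz with 12 half-steps per octave. A sketch (the names mtof/ftom follow the usual convention; the scale is defined for any number, even negative ones):

```python
import math

def mtof(m):
    """MIDI pitch (possibly fractional or negative) to frequency in Hz."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def ftom(f):
    """Frequency in Hz (must be positive) back to MIDI pitch."""
    return 69.0 + 12.0 * math.log2(f / 440.0)
```

A pitch of -4 comes out near 6.5 Hz, the "decent rate of vibrato" mentioned above, while frequencies at or below zero have no pitch at all, as the text notes.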

View File

@ -104,7 +104,7 @@ delay lines in parallel.</CAPTION>
<P>
In order to work with power-conserving delay networks we will need an
-explicit definition of ``total average power".
+explicit definition of "total average power".
If there is only one signal (call it <IMG
WIDTH="31" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img80.png"
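The "total average power" named in this hunk can be written out directly. A sketch assuming the definition used in the book (each signal's average power is its mean square, and the total over several delay-network signals is the sum of those):

```python
# Average power of one signal, and total average power of several,
# assuming total = sum of the individual mean-square powers.
def average_power(sig):
    return sum(s * s for s in sig) / len(sig)

def total_average_power(signals):
    return sum(average_power(s) for s in signals)
```

A power-conserving network is then one whose outputs have the same total average power as its inputs.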
@ -542,7 +542,7 @@ Flat frequency response in recirculating networks: (a) in general,
using a rotation matrix <IMG
WIDTH="15" HEIGHT="14" ALIGN="BOTTOM" BORDER="0"
SRC="img36.png"
-ALT="$R$">; (b) the ``all-pass" configuration.</CAPTION>
+ALT="$R$">; (b) the "all-pass" configuration.</CAPTION>
<TR><TD><IMG
WIDTH="381" HEIGHT="287" BORDER="0"
SRC="img765.png"

View File

@ -80,7 +80,7 @@ To make this work in practice it is necessary to open the input of the
reverberator only for a short period of time, during which the input sound is
not varying too rapidly. If an infinite reverberator's input is left open for
too long, the sound will collect and quickly become an indecipherable mass. To
-``infinitely reverberate" a note of a live instrument, it is best to wait until
+"infinitely reverberate" a note of a live instrument, it is best to wait until
after the attack portion of the note and then allow perhaps 1/2 second of the
note's steady state to enter the reverberator. It is possible to build chords
from a monophonic instrument by repeatedly opening the input at different

View File

@ -137,7 +137,7 @@ a delay of <IMG
SRC="img783.png"
ALT="$x[n-1.5]$">.
We could do this using standard four-point interpolation, putting a cubic
-polynomial through the four ``known" points (0, x[n]), (1, x[n-1]), (2, x[n-2]),
+polynomial through the four "known" points (0, x[n]), (1, x[n-1]), (2, x[n-2]),
(3, x[n-3]), and then evaluating the polynomial at the point 1.5. Doing
this repeatedly for each value of <IMG
WIDTH="13" HEIGHT="13" ALIGN="BOTTOM" BORDER="0"
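The four-point scheme this hunk describes fits a cubic through (0, x[n]), (1, x[n-1]), (2, x[n-2]), (3, x[n-3]) and evaluates it at 1.5. A direct sketch in Lagrange form (illustrative; real implementations precompute the coefficients):

```python
# Evaluate the cubic through (0, y[0]) .. (3, y[3]) at position pos,
# using the Lagrange interpolation formula.
def lagrange4(y, pos):
    result = 0.0
    for i in range(4):
        term = y[i]
        for j in range(4):
            if j != i:
                term *= (pos - j) / (i - j)
        result += term
    return result

# For a delay of 1.5 samples, pass the last four input samples and pos = 1.5.
```

On linear data the cubic reproduces the line exactly, which is a quick sanity check of the formula.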

View File

@ -435,9 +435,9 @@ classic variant uses a single delay line, with no enveloping at
all. In this situation it is necessary to choose the point at
which the delay time jumps, and the point it jumps to, so that the output
stays continuous. For example, one could find a point where the output signal
-passes through zero (a ``zero crossing") and jump discontinuously to another one.
+passes through zero (a "zero crossing") and jump discontinuously to another one.
Using only one delay line has the advantage that the signal output sounds
-more ``present". A disadvantage is that, since
+more "present". A disadvantage is that, since
the delay time is a function of input signal value, the output is no longer
a linear function of the input, so non-periodic inputs can give rise to
artifacts such as difference tones.

View File

@ -108,7 +108,7 @@ inlet takes an audio signal and writes it continuously into the delay line.
WIDTH="86" HEIGHT="39" ALIGN="MIDDLE" BORDER="0"
SRC="img821.png"
ALT="\fbox{ \texttt{delread\~}}">:
-<A NAME="8425"></A>read from (or ``tap") a delay line. The first creation argument gives the name
+<A NAME="8425"></A>read from (or "tap") a delay line. The first creation argument gives the name
of the delay line (which should agree with the name of the corresponding
<TT>delwrite~</TT> object; this is how Pd knows which <TT>delwrite~</TT> to
associate with the <TT>delread~</TT> object). The

View File

@ -106,7 +106,7 @@ achievable delay is one sample.
Here the objects on the left side, from the top down to the
<TT>clip&nbsp;-0.2 0.2</TT> object,
form a waveshaping network; the index is set by the
-``timbre" control, and the waveshaping output varies between a near sinusoid
+"timbre" control, and the waveshaping output varies between a near sinusoid
and a bright, buzzy sound. The output is added to the output of the
<TT>vd~</TT> object. The sum is then high pass filtered (the <TT>hip~</TT> object
at lower left), multiplied by a
@ -121,10 +121,10 @@ that the signal cannot exceed 1 in absolute value.
The length of the delay is controlled by the signal input to the
<TT>vd~</TT> object. An oscillator with variable frequency and gain, in the
center of the figure, provides the delay time. The oscillator is added to
-one to make it nonnegative before multiplying it by the ``cycle depth" control,
+one to make it nonnegative before multiplying it by the "cycle depth" control,
which effectively sets the range of delay times. The minimum possible
delay time of 1.46 milliseconds is added so that the true range of delay times
-is between the minimum and the same plus twice the ``depth". The reason for this
+is between the minimum and the same plus twice the "depth". The reason for this
minimum delay time is taken up in the discussion of the next example.
<P>

View File

@ -82,7 +82,7 @@ the software package specifies the network, sometimes called a
which essentially corresponds to the synthesis algorithm to be used, and then
worries about how to control the various unit generators in time. In this
section, we'll use abstract block diagrams to describe patches, but in the
-``examples" section (Page <A HREF="node18.html#sect1.examples"><IMG ALIGN="BOTTOM" BORDER="1" ALT="[*]"
+"examples" section (Page <A HREF="node18.html#sect1.examples"><IMG ALIGN="BOTTOM" BORDER="1" ALT="[*]"
SRC="crossref.png"></A>), we'll choose a
specific implementation environment and show some of the software-dependent
details.
@ -158,13 +158,13 @@ Parts (c) and (d) show a more gently-varying possibility for <IMG
ALT="$y[n]$"> and the
result. Intuition suggests that the result shown in (b) won't sound like an
amplitude-varying sinusoid, but instead like a sinusoid interrupted by
-an audible ``pop" after which it continues more quietly. In general, for
+an audible "pop" after which it continues more quietly. In general, for
reasons that can't be explained in this chapter, amplitude control signals
<IMG
WIDTH="30" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img2.png"
ALT="$y[n]$"> which ramp smoothly from one value to another are less likely to give
-rise to parasitic results (such as that ``pop") than are abruptly changing
+rise to parasitic results (such as that "pop") than are abruptly changing
ones.
<P>
@ -173,7 +173,7 @@ sinusoids are the signals most sensitive to the parasitic effects of
quick amplitude change. So when you want to test an amplitude transition, if
it works for sinusoids it will probably work for other signals as well.
Second, depending on the signal whose amplitude you are changing, the amplitude
-control will need between 0 and 30 milliseconds of ``ramp" time--zero for the
+control will need between 0 and 30 milliseconds of "ramp" time--zero for the
most forgiving signals (such as white noise), and 30 for the least (such as a
sinusoid). All this also depends in a complicated way on listening levels and
the acoustic context.

View File

@ -86,8 +86,8 @@ tilde objects, and because of the connections, the object <TT>a~</TT> must
produce its output before either of <TT>b~</TT> or <TT>c~</TT> can run;
and
both of those in turn are used in the computation of <TT>d~</TT>. So the
-possible orderings of these four objects are ``a-b-c-d" and
-``a-c-b-d". These
+possible orderings of these four objects are "a-b-c-d" and
+"a-c-b-d". These
two orderings will have exactly the same result unless the computation of
<TT>b~</TT> and <TT>c~</TT> somehow affect each other's output (as
delay operations might, for example).
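Finding a compatible linear ordering like "a-b-c-d" or "a-c-b-d" is a topological sort of the connection graph. A sketch using the four objects described in this hunk (illustrative only; this is not how Pd's scheduler is actually implemented):

```python
# Connection graph from the text: a feeds b and c, both of which feed d.
edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def sorted_order(graph):
    """Depth-first topological sort; returns one valid execution order."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for succ in graph[node]:
            visit(succ)
        order.append(node)

    for node in graph:
        visit(node)
    return order[::-1]   # one of the two valid orders: a-b-c-d or a-c-b-d
```

For the cyclic network of part (b), no such ordering exists, which is why Pd rejects the patch.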
@ -98,7 +98,7 @@ delay operations might, for example).
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 7.26:</STRONG>
Order of execution of tilde objects in Pd: (a), an acyclic network.
-The objects may be executed in either the order ``a-b-c-d" or ``a-c-b-d". In
+The objects may be executed in either the order "a-b-c-d" or "a-c-b-d". In
part (b), there is a cycle, and there is thus no compatible linear ordering of
the objects because each one would need to be run before the other.</CAPTION>
<TR><TD><IMG
@ -253,7 +253,7 @@ a new object:
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 7.27:</STRONG>
A patch using block size control to lower the loop delay below
-the normal 64 samples: (a) the main patch; (b) the ``delay-writer" subpatch
+the normal 64 samples: (a) the main patch; (b) the "delay-writer" subpatch
with a <TT>block~</TT> object and a recirculating delay network.</CAPTION>
<TR><TD><IMG
WIDTH="558" HEIGHT="252" BORDER="0"

View File

@ -90,7 +90,7 @@ G05.execution.order.pd (Figure <A HREF="#fig07.28">7.28</A>).
Using subpatches to ensure that delay lines are written before they
are read in non-recirculating networks:
(a) the <TT>delwrite~</TT> and <TT>vd~</TT> objects might be executed in either
-the ``right" or the ``wrong" order; (b) the <TT>delwrite~</TT> object is inside the
+the "right" or the "wrong" order; (b) the <TT>delwrite~</TT> object is inside the
<TT>pd delay-writer</TT>
subpatch and the <TT>vd~</TT> object is inside the <TT>pd delay-reader</TT> one.
Because of the audio connection between the two subpatches, the order of
@ -116,8 +116,8 @@ the subpatch as a whole. So everything in the one subpatch happens before
anything in the second one.)
<P>
-In this example, the ``right"
-and the ``wrong" way to make the comb filter have audibly different results.
+In this example, the "right"
+and the "wrong" way to make the comb filter have audibly different results.
For delays less than 64 samples, the right hand side of the patch (using
subpatches) gives the correct result, but the left hand side can't produce
delays below the 64 sample block size.

View File

@ -78,7 +78,7 @@ Section <A HREF="node108.html#sect7.network">7.3</A>.
Using a variable, non-recirculating comb filter we
take out odd harmonics, leaving only the even ones, which
sound an octave higher. As before, the spectral envelope of the sound is
-roughly preserved by the operation, so we can avoid the ``chipmunk" effect we
+roughly preserved by the operation, so we can avoid the "chipmunk" effect we
would have got by using speed change to do the transposition.
<P>
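The cancellation behind the octave doubler in this hunk is easy to demonstrate numerically: adding a copy of a periodic signal delayed by half its period cancels the odd harmonics and keeps the even ones. A sketch (illustrative only; the patch itself tunes a comb filter from fiddle~'s pitch estimate):

```python
import math

period = 100                  # samples per fundamental period
half = period // 2            # half-period delay removes odd harmonics

# Test signal: harmonic 1 (odd) plus harmonic 2 (even).
x = [math.sin(2 * math.pi * n / period)
     + math.sin(2 * math.pi * 2 * n / period)
     for n in range(1000)]

# Non-recirculating comb: average the signal with its delayed copy.
y = [0.5 * (x[n] + x[n - half]) for n in range(half, 1000)]
# y now contains only the second harmonic, sounding an octave higher.
```

The odd harmonic arrives phase-inverted after the half-period delay and cancels, while the even harmonic arrives in phase and survives at full amplitude.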
@ -86,7 +86,7 @@ would have got by using speed change to do the transposition.
<DIV ALIGN="CENTER"><A NAME="fig07.29"></A><A NAME="8278"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 7.29:</STRONG>
-An ``octave doubler" uses pitch information (obtained using
+An "octave doubler" uses pitch information (obtained using
a <TT>fiddle~</TT> object) to tune a comb filter to remove the odd harmonics
in an incoming sound.</CAPTION>
<TR><TD><IMG

View File

@ -75,14 +75,14 @@ idea of a comb filter. Here we combine the input signal at four different
time shifts (instead of two, as in the original non-recirculating comb filter),
each at a different positive or negative gain. To do this, we insert the
input signal into a delay line and tap it at three different points; the
-fourth ``tap" is the original, un-delayed signal.
+fourth "tap" is the original, un-delayed signal.
<P>
<DIV ALIGN="CENTER"><A NAME="fig07.30"></A><A NAME="8287"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 7.30:</STRONG>
-A ``shaker", a four-tap comb filter with randomly varying gains
+A "shaker", a four-tap comb filter with randomly varying gains
on the taps.</CAPTION>
<TR><TD><IMG
WIDTH="591" HEIGHT="386" BORDER="0"

View File

@ -136,7 +136,7 @@ common to use more than four recirculating delays; one reverberator in the Pd
distribution uses sixteen. Finally, it is common to allow separate control of
the amplitudes of the early echos (heard directly) and that of the
recirculating signal; parameters such as these are thought to control
-sonic qualities described as ``presence", ``warmth", ``clarity", and so on.
+sonic qualities described as "presence", "warmth", "clarity", and so on.
<P>
<HR>

View File

@ -72,8 +72,8 @@ Pitch shifter</A>
Example G09.pitchshift.pd (Figure <A HREF="#fig07.33">7.33</A>) shows a realization of the pitch shifter
described in Section <A HREF="node115.html#sect7.pitchshift">7.9</A>. A delay line (defined and written
elsewhere in the patch) is read using two <TT>vd~</TT> objects. The delay
-times vary between a minimum delay (provided as the ``delay" control) and the
-minimum plus a window size (the ``window" control.)
+times vary between a minimum delay (provided as the "delay" control) and the
+minimum plus a window size (the "window" control.)
<P>
@ -112,7 +112,7 @@ t = {2 ^ {h/12}} = {e ^ {\log(2)/12 \cdot h}} \approx {e ^ {0.05776 h}}
</DIV>
<BR CLEAR="ALL">
<P></P>
-(called ``speed change" in the patch). The computation labeled ``tape
+(called "speed change" in the patch). The computation labeled "tape
head rotation speed" is the same as the formula for <IMG
WIDTH="13" HEIGHT="30" ALIGN="MIDDLE" BORDER="0"
SRC="img112.png"
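The transposition formula in this hunk, t = 2^(h/12) = e^(log(2)/12 · h) ≈ e^(0.05776 h), translates directly to code:

```python
import math

# Speed-change factor for a pitch shift of h half-steps:
# t = 2^(h/12), equivalently exp(log(2)/12 * h) ~= exp(0.05776 * h).
def speed_change(h):
    return 2.0 ** (h / 12.0)
```

Twelve half-steps give a factor of exactly 2 (one octave), and the constant 0.05776 quoted in the patch is log(2)/12 to four decimal places.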

View File

@ -103,11 +103,11 @@ the <IMG
One customarily marks each of the <IMG
WIDTH="21" HEIGHT="30" ALIGN="MIDDLE" BORDER="0"
SRC="img898.png"
-ALT="$Q_i$"> with an ``o" (calling it a ``zero")
+ALT="$Q_i$"> with an "o" (calling it a "zero")
and each of the <IMG
WIDTH="19" HEIGHT="30" ALIGN="MIDDLE" BORDER="0"
SRC="img899.png"
-ALT="$P_i$"> with an ``x" (a ``pole"); their names are borrowed
+ALT="$P_i$"> with an "x" (a "pole"); their names are borrowed
from the field of complex analysis. A plot showing the poles and zeroes
associated with a filter is unimaginatively called a <A NAME="10312"></A><I>pole-zero plot</I>.

View File

@ -107,7 +107,7 @@ the smallest <IMG
ALT="$\tau$"> if any at which a signal repeats is called the signal's
<A NAME="1175"></A><I>period</I>.
In discussing periods of digital audio signals, we quickly run into the
-difficulty of describing signals whose ``period" isn't an integer, so that the
+difficulty of describing signals whose "period" isn't an integer, so that the
equation above doesn't make sense. For now we'll effectively
ignore this difficulty by supposing that the signal <IMG
WIDTH="31" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"

View File

@ -78,7 +78,7 @@ Section <A HREF="node78.html#sect5.waveshaping">5.3</A> almost always contain a
This is inaudible, but, since it specifies electrical power that is sent
to your speakers, its presence reduces the level of loudness you can
reach without distortion. Another name for a constant signal component is
-<A NAME="10330"></A>``DC", meaning ``direct current".
+<A NAME="10330"></A>"DC", meaning "direct current".
<P>
An easy and practical way to remove the zero-frequency component from an audio
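The diff cuts this sentence off; presumably it continues by recommending a high-pass filter, the usual remedy. A common one-pole, one-zero DC-blocker sketch (the 0.995 coefficient is illustrative, not from the text):

```python
# DC blocker: y[n] = x[n] - x[n-1] + a*y[n-1].
# The zero at frequency 0 removes the constant component; the pole at a
# (just inside the unit circle) restores the rest of the spectrum.
def dc_block(x, a=0.995):
    y, x1, y1 = [], 0.0, 0.0
    for s in x:
        out = s - x1 + a * y1
        y.append(out)
        x1, y1 = s, out
    return y
```

Fed a constant signal, the output decays toward zero; audible frequencies pass nearly unchanged.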

View File

@ -225,7 +225,7 @@ where <IMG
ALT="$q$"> is the
<A NAME="10499"></A><I>quality</I> of the filter, defined as the center frequency divided by
bandwidth. Resonant filters are often specified in terms of the center
-frequency and ``q" in place of bandwidth.
+frequency and "q" in place of bandwidth.
<P>
<HR>
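The definition of "q" in this hunk is a one-liner worth writing out, since the center-frequency/q parameterization recurs in the filter objects below:

```python
# Quality "q" of a resonant filter: center frequency divided by bandwidth.
def quality(center_hz, bandwidth_hz):
    return center_hz / bandwidth_hz
```

So a resonance centered at 2000 Hz with a 100 Hz bandwidth has q = 20, and specifying center frequency and q fixes the bandwidth, and vice versa.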

View File

@ -109,7 +109,7 @@ gain is one at frequency 0.
SRC="img1017.png"
ALT="\fbox{ \texttt{bp\~}}">:
<A NAME="10844"></A>resonant filter. The middle inlet takes control messages to set the center
-frequency, and the right inlet to set ``q".
+frequency, and the right inlet to set "q".
<P>
<BR><!-- MATH

View File

@ -82,7 +82,7 @@ using the <TT>vcf~</TT> object, introduced here:
WIDTH="53" HEIGHT="39" ALIGN="MIDDLE" BORDER="0"
SRC="img1022.png"
ALT="\fbox{ \texttt{vcf\~}}">:
-<A NAME="10846"></A>a ``voltage controlled" band-pass filter,
+<A NAME="10846"></A>a "voltage controlled" band-pass filter,
similar to <TT>bp~</TT>, but with a signal inlet to control center frequency.
Both <TT>bp~</TT> and <TT>vcf~</TT> are one-pole resonant filters as
developed in Section <A HREF="node143.html#sect8.twopolebandpass">8.3.4</A>; <TT>bp~</TT> outputs only
@ -109,7 +109,7 @@ Example H04.filter.sweep.pd (Figure <A HREF="#fig08.29">8.29</A>) demonstrates u
<TT>phasor~</TT> object (at top) creates a sawtooth wave to filter. (This is
not especially good practice as we are not controlling the possibility of
foldover; a better sawtooth generator for this purpose will be developed in
-Chapter 10.) The second <TT>phasor~</TT> object (labeled ``LFO for sweep")
+Chapter 10.) The second <TT>phasor~</TT> object (labeled "LFO for sweep")
controls the time-varying center frequency. After adjusting to set the depth
and a base center frequency (given in MIDI units), the result is converted into
Hertz (using the <TT>tabread4~</TT> object) and passed to <TT>vcf~</TT> to set

View File

@ -92,7 +92,7 @@ top to bottom they are:
Message boxes, with a flag-shaped border, interpret the text as a message to
send whenever the box is
activated (by an incoming message or with a pointing device). The message in this
-case consists simply of the number ``21".
+case consists simply of the number "21".
<P>
</LI>
@ -104,10 +104,10 @@ when you load a patch. Object boxes may hold hundreds of different
classes of objects--including oscillators, envelope generators, and other
signal processing modules to be introduced later--depending on the text
inside. In this example, the box holds an adder. In most Pd patches, the
-majority of boxes are of type ``object". The first word typed into an object
+majority of boxes are of type "object". The first word typed into an object
box specifies its
<A NAME="1223"></A><I>class</I>,
-which in this case is just ``+". Any additional (blank-space-separated) words
+which in this case is just "+". Any additional (blank-space-separated) words
appearing in the box are called
<A NAME="1225"></A><A NAME="1226"></A><I>creation arguments</I>,
which specify the initial state of the object when it is created.
@ -131,7 +131,7 @@ by typing values in it.
</LI>
</UL>
In Figure
-<A HREF="#fig01.10">1.10</A> (part a) the message box, when clicked, sends the message ``21" to an
+<A HREF="#fig01.10">1.10</A> (part a) the message box, when clicked, sends the message "21" to an
object box which adds 13 to it. The lines connecting the boxes carry data
from one box to the next; outputs of boxes are on the bottom and inputs on top.
@ -167,8 +167,8 @@ object outputs messages, but the <TT>*~</TT> object
outputs a signal. The inputs of a given object may or may not accept
signals (but they always accept messages, even if only to convert them to
signals). As a convention, object boxes with signal inputs or outputs
-are all named with a trailing tilde (``<TT>~</TT>") as in ``<TT>*~</TT>"
-and ``<TT>osc~</TT>".
+are all named with a trailing tilde ("<TT>~</TT>") as in "<TT>*~</TT>"
+and "<TT>osc~</TT>".
<P>
<HR>

View File

@ -115,7 +115,7 @@ at 1001 Hertz?
</LI>
<LI>A one-pole complex filter is excited by an impulse to make a tone at 1000
Hertz, which decays 10 decibels in one second (at a sample rate of 44100
-Hertz). Where would you place the pole? What is the value of ``q"?
+Hertz). Where would you place the pole? What is the value of "q"?
<P>
</LI>

View File

@ -155,7 +155,7 @@ Finally, we will develop some standard applications such as the phase vocoder.
<LI><A NAME="tex2html3015"
HREF="node180.html">Narrow-band companding: noise suppression</A>
<LI><A NAME="tex2html3016"
-HREF="node181.html">Timbre stamp (``vocoder")</A>
+HREF="node181.html">Timbre stamp ("vocoder")</A>
<LI><A NAME="tex2html3017"
HREF="node182.html">Phase vocoder time bender</A>
</UL>

View File

@ -71,7 +71,7 @@ Properties of Fourier transforms</A>
<P>
In this section we will investigate what happens when we take the Fourier
-transform of a (complex) sinusoid. The simplest one is ``DC", the special
+transform of a (complex) sinusoid. The simplest one is "DC", the special
sinusoid of frequency zero. After we derive the Fourier transform of that, we
will develop some properties of Fourier transforms that allow us to apply the
result to any other sinusoid.

View File

@ -96,10 +96,10 @@ to find this out is just to run Pd on the relocated file and see what Pd
complains it can't find.
<P>
-There should be dozens of files in the ``examples" folder, including the
+There should be dozens of files in the "examples" folder, including the
examples themselves and the support files. The filenames of the examples
all begin with a letter (A for chapter 1, B for 2, etc.) and a number, as
-in ``A01.sinewave.pd".
+in "A01.sinewave.pd".
<P>
The example patches are also distributed with Pd, but beware that you may find

View File

@ -137,7 +137,7 @@ is:
Much ink has been spilled over the design of suitable window functions for
particular situations, but here we will consider the simplest one, named the
<A NAME="12504"></A><A NAME="12505"></A><I>Hann</I>
-window function (the name is sometimes corrupted to ``Hanning" in DSP circles).
+window function (the name is sometimes corrupted to "Hanning" in DSP circles).
The Hann window is:
<BR><P></P>
<DIV ALIGN="CENTER">
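The Hann formula itself falls outside this hunk; a sketch assuming the common form w[n] = 0.5 − 0.5·cos(2πn/N) over a block of N samples:

```python
import math

# Hann window of length N: zero at the block boundary, peaking at 1
# in the middle, so successive analysis windows can overlap smoothly.
N = 512
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]
```

The symmetry of the window about its center is what makes the overlapped windows of the analysis/resynthesis scheme sum to a constant.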

View File

@ -273,7 +273,7 @@ described in the following sections.)
<P>
Finally we reconstruct an output signal. To do this we apply the inverse of
-the Fourier transform (labeled ``iFT" in the figure). As shown in
+the Fourier transform (labeled "iFT" in the figure). As shown in
Section <A HREF="node166.html#sect9-IFT">9.1.2</A> this can be done by taking another Fourier transform,
normalizing, and flipping the result backwards. In case the reconstructed
window does not go smoothly to zero at its two ends, we apply the Hann
@ -289,7 +289,7 @@ of four and space each window <IMG
WIDTH="68" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img1154.png"
ALT="$H=N/4$"> samples past the previous one), we can
-reconstruct the original signal faithfully by omitting the ``modification"
+reconstruct the original signal faithfully by omitting the "modification"
step. This is because the iFT undoes the work of the <IMG
WIDTH="27" HEIGHT="14" ALIGN="BOTTOM" BORDER="0"
SRC="img1155.png"

View File

@ -86,8 +86,8 @@ Block diagram for narrow-band noise suppression by companding.</CAPTION>
A
<A NAME="12582"></A><I>compander</I>
is a tool that amplifies a signal with a variable gain, depending on the
-signal's measured amplitude. The term is a contraction of ``compressor" and
-``expander". A compressor's gain decreases as the input level increases, so
+signal's measured amplitude. The term is a contraction of "compressor" and
+"expander". A compressor's gain decreases as the input level increases, so
that the
<A NAME="12584"></A><I>dynamic range</I>,
that is, the overall variation in signal level, is reduced. An expander does
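The two gain behaviors this hunk contrasts can be sketched as simple curves (the thresholds and shapes below are illustrative assumptions, not taken from the text):

```python
# Compressor: gain falls as the measured amplitude rises above a threshold,
# reducing dynamic range.
def compressor_gain(amp, threshold=1.0):
    return 1.0 if amp <= threshold else threshold / amp

# Expander: gain falls as the measured amplitude drops below a threshold,
# increasing dynamic range (the limit case is a noise gate).
def expander_gain(amp, threshold=1.0):
    return 1.0 if amp >= threshold else amp / threshold
```

Applying such a gain independently in each narrow frequency band is what makes the companding "narrow-band", as in the noise-suppression patch below.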

View File

@ -73,7 +73,7 @@ Timbre stamping (classical vocoder)</A>
<DIV ALIGN="CENTER"><A NAME="fig09.09"></A><A NAME="12598"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 9.9:</STRONG>
-Block diagram for timbre stamping (AKA ``vocoding'').</CAPTION>
+Block diagram for timbre stamping (AKA "vocoding'').</CAPTION>
<TR><TD><IMG
WIDTH="257" HEIGHT="420" BORDER="0"
SRC="img1172.png"

View File

@ -80,7 +80,7 @@ Examples</A>
<LI><A NAME="tex2html3230"
HREF="node180.html">Narrow-band companding: noise suppression</A>
<LI><A NAME="tex2html3231"
-HREF="node181.html">Timbre stamp (``vocoder")</A>
+HREF="node181.html">Timbre stamp ("vocoder")</A>
<LI><A NAME="tex2html3232"
HREF="node182.html">Phase vocoder time bender</A>
</UL>

View File

@ -157,14 +157,14 @@ block of computation outputs the same first <IMG
ALT="$N$"> samples of the table.
<P>
-In this example, the table ``$0-hann" holds a Hann window function
+In this example, the table "$0-hann" holds a Hann window function
of length 512, in agreement with the specified block size. The signal
to be analyzed appears (from the parent patch) via the <TT>inlet~</TT> object.
The channel amplitudes (the output of the <TT>rfft~</TT> object) are reduced
to real-valued magnitudes: the real and imaginary parts are squared separately,
the two squares are added, and the result passed to the <TT>sqrt~</TT> object.
Finally the magnitude is written (controlled by a connection not shown in
-the figure) via <TT>tabwrite~</TT> to another table, ``$0-magnitude", for
+the figure) via <TT>tabwrite~</TT> to another table, "$0-magnitude", for
graphing.
<P>
@ -234,7 +234,7 @@ A modification is applied, however: each channel is multiplied by a
(positive real-valued) gain. The complex-valued amplitude for each channel is
scaled by separately multiplying the real and imaginary parts by the gain. The
gain (which depends on the channel) comes from another table, named
-``$0-gain". The result is a graphical equalization filter; by mousing in the
+"$0-gain". The result is a graphical equalization filter; by mousing in the
graphical window for this table, you can design gain-frequency curves.
<P>

View File

@ -52,7 +52,7 @@ original version by: Nikos Drakos, CBLU, University of Leeds
SRC="index.png"></A>
<BR>
<B> Next:</B> <A NAME="tex2html3260"
-HREF="node181.html">Timbre stamp (``vocoder")</A>
+HREF="node181.html">Timbre stamp ("vocoder")</A>
<B> Up:</B> <A NAME="tex2html3254"
HREF="node178.html">Examples</A>
<B> Previous:</B> <A NAME="tex2html3248"
@ -76,7 +76,7 @@ Narrow-band companding: noise suppression</A>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 9.16:</STRONG>
Noise suppression as an example of narrow-band companding: (a)
analysis and reconstruction of the signal; (b) computation of the
``mask".</CAPTION>
"mask".</CAPTION>
<TR><TD><IMG
WIDTH="663" HEIGHT="477" BORDER="0"
SRC="img1227.png"
@ -151,8 +151,8 @@ and otherwise replaced by zero.
<P>
The mask itself is the product of the measured average noise in each channel,
which is contained in the table ``$0-mask", multiplied by a value named
``mask-level". The average noise is measured in a subpatch
which is contained in the table "$0-mask", multiplied by a value named
"mask-level". The average noise is measured in a subpatch
(<TT>pd calculate-mask</TT>), whose contents are shown in part (b) of the
figure. To compute the mask we are using two new objects:
@ -186,7 +186,7 @@ the contents of a table, affecting up to the first <IMG
<P>
The power averaging process is begun by sending a time duration in milliseconds
to ``make-mask". The patch computes the equivalent number of blocks <IMG
to "make-mask". The patch computes the equivalent number of blocks <IMG
WIDTH="10" HEIGHT="14" ALIGN="BOTTOM" BORDER="0"
SRC="img21.png"
ALT="$b$">
@ -214,11 +214,11 @@ stops evolving.
<P>
To use this patch for classical noise suppression requires at least a few
seconds of recorded noise without the ``signal" present. This is played into
the patch, and its duration sent to ``make-mask", so that the
``$0-mask" table holds the average measured noise power for each channel.
seconds of recorded noise without the "signal" present. This is played into
the patch, and its duration sent to "make-mask", so that the
"$0-mask" table holds the average measured noise power for each channel.
Then, making the assumption that the noisy part of the signal rarely exceeds 10
times its average power (for example), ``mask-level" is set to 10, and the
times its average power (for example), "mask-level" is set to 10, and the
signal to be noise-suppressed is sent through part (a) of the patch. The noise
will be almost all gone, but those channels in which the signal exceeds 20
times the noise power will only be attenuated by 3dB, and higher-power channels
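This excerpt gives the companding rule only in outline. One reading consistent with the numbers above, a channel at 20 times the average noise power losing 3 dB when mask-level is 10, is to subtract the mask power from the channel power before converting back to a gain. The following is a sketch under that assumption; the names and the exact curve are illustrative, not a transcription of the patch:

```python
import math

def mask_gains(power, noise_avg, mask_level):
    # Per-channel companding gain: the mask is the averaged noise power
    # times mask_level.  Channels at or below the mask are zeroed; channels
    # well above it pass nearly unchanged (power reduced by the mask power).
    gains = []
    for p, n in zip(power, noise_avg):
        mask = n * mask_level
        if p <= mask:
            gains.append(0.0)
        else:
            gains.append(math.sqrt((p - mask) / p))
    return gains
```

With mask_level 10, a channel at 20 times the average noise power gets a gain of sqrt(1/2), i.e. the 3 dB attenuation mentioned above.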
@ -251,7 +251,7 @@ from any other one.)
SRC="index.png"></A>
<BR>
<B> Next:</B> <A NAME="tex2html3260"
HREF="node181.html">Timbre stamp (``vocoder")</A>
HREF="node181.html">Timbre stamp ("vocoder")</A>
<B> Up:</B> <A NAME="tex2html3254"
HREF="node178.html">Examples</A>
<B> Previous:</B> <A NAME="tex2html3248"

View File

@ -11,8 +11,8 @@ original version by: Nikos Drakos, CBLU, University of Leeds
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<TITLE>Timbre stamp (``vocoder")</TITLE>
<META NAME="description" CONTENT="Timbre stamp (``vocoder")">
<TITLE>Timbre stamp ("vocoder")</TITLE>
<META NAME="description" CONTENT="Timbre stamp (&quot;vocoder&quot;)">
<META NAME="keywords" CONTENT="book">
<META NAME="resource-type" CONTENT="document">
<META NAME="distribution" CONTENT="global">
@ -66,7 +66,7 @@ original version by: Nikos Drakos, CBLU, University of Leeds
<!--End of Navigation Panel-->
<H2><A NAME="SECTION001373000000000000000">
Timbre stamp (``vocoder")</A>
Timbre stamp ("vocoder")</A>
</H2>
<P>
@ -99,7 +99,7 @@ are <IMG
WIDTH="28" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img1237.png"
ALT="$c[k]$"> for the control source, we just
``whiten" the filter input, multiplying by <IMG
"whiten" the filter input, multiplying by <IMG
WIDTH="46" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img1238.png"
ALT="$1/f[k]$">, and then stamp the control
@ -113,7 +113,7 @@ done by limiting the whitening factor <IMG
SRC="img1238.png"
ALT="$1/f[k]$"> to a specified maximum value
using the <TT>clip~</TT> object. The limit is controlled by the
``squelch" parameter, which is squared and divided by 100 to map values
"squelch" parameter, which is squared and divided by 100 to map values
from 0 to 100 to a useful range.
<P>
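In terms of channel magnitudes, the timbre stamp amounts to whitening by a clipped 1/f[k] and then imposing the control source's magnitudes c[k]. A sketch under that reading (the patch actually works on complex amplitudes, and these names are mine):

```python
def timbre_stamp(f, c, squelch):
    # f: per-channel magnitudes of the filter input
    # c: per-channel magnitudes of the control source
    # The whitening factor 1/f[k] is clipped to a maximum set by the
    # squelch parameter, squared and divided by 100 as in the patch.
    limit = squelch * squelch / 100.0
    out = []
    for fk, ck in zip(f, c):
        whiten = min(1.0 / fk, limit) if fk > 0.0 else limit
        out.append(fk * whiten * ck)
    return out
```

When 1/f[k] is below the limit the filter input is fully whitened and the output magnitude is simply c[k]; weak (noisy) channels are held down by the squelch limit instead.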

View File

@ -55,7 +55,7 @@ original version by: Nikos Drakos, CBLU, University of Leeds
<B> Up:</B> <A NAME="tex2html3280"
HREF="node178.html">Examples</A>
<B> Previous:</B> <A NAME="tex2html3276"
HREF="node181.html">Timbre stamp (``vocoder")</A>
HREF="node181.html">Timbre stamp ("vocoder")</A>
&nbsp; <B> <A NAME="tex2html3282"
HREF="node4.html">Contents</A></B>
&nbsp; <B> <A NAME="tex2html3284"
@ -84,7 +84,7 @@ Phase vocoder for time stretching and contraction.</CAPTION>
<P>
The phase vocoder usually refers to the general technique of passing from
(complex-valued) channel amplitudes to pairs consisting of (real-valued)
magnitudes and phase precession rates (``frequencies"), and back, as
magnitudes and phase precession rates ("frequencies"), and back, as
described in Figure <A HREF="node175.html#fig09.11">9.11</A> (Section <A HREF="node175.html#sect9.phase">9.5</A>). In
Example I07.phase.vocoder.pd (Figure <A HREF="#fig09.18">9.18</A>), we use this technique with the
specific aim of time-stretching and/or time-contracting a recorded sound
@ -118,8 +118,8 @@ computation than a full-precision square root and reciprocal would.
<P>
The process starts with a sub-patch, <TT>pd read-windows</TT>, that outputs
two Hann-windowed blocks of the recorded sound, a ``back" one and a
``front" one 1/4 window further forward in the recording. The window
two Hann-windowed blocks of the recorded sound, a "back" one and a
"front" one 1/4 window further forward in the recording. The window
shown uses the two outputs of the sub-patch to guide the amplitude
and phase change of each channel of its own output.
@ -142,7 +142,7 @@ After normalizing <IMG
SRC="img1198.png"
ALT="$S[m-1,k]$">, its complex conjugate (the normalized inverse)
is multiplied by the windowed Fourier
transform of the ``back" window <IMG
transform of the "back" window <IMG
WIDTH="32" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img1199.png"
ALT="$T[k]$">, giving the product <IMG
@ -151,7 +151,7 @@ transform of the ``back" window <IMG
ALT="$R[k]$"> of
Page <A HREF="node176.html#sect9.phaserelationship"><IMG ALIGN="BOTTOM" BORDER="1" ALT="[*]"
SRC="crossref.png"></A>.
Next, depending on the value of the parameter ``lock", the computed value of
Next, depending on the value of the parameter "lock", the computed value of
<IMG
WIDTH="33" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img1206.png"
@ -163,9 +163,9 @@ is done using <TT>lrshift~</TT> objects, whose outputs are added into <IMG
WIDTH="33" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img1206.png"
ALT="$R[k]$"> if
``lock" is set to one, or otherwise not if it is zero.
"lock" is set to one, or otherwise not if it is zero.
The result is then normalized and multiplied by the Hann-windowed Fourier transform
of the ``front" window (<IMG
of the "front" window (<IMG
WIDTH="37" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img1196.png"
ALT="$T'[k]$">) to give <IMG
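The chain of operations summarized above can be sketched with Python's built-in complex numbers. This is only an outline under stated assumptions (the function names, the zero-magnitude fallback, and the sign conventions are mine, not the patch's); it is meant to show how conjugation and normalization carry each channel's phase forward:

```python
def normalize(z):
    # z / |z|, with a safe fallback when the magnitude is zero.
    m = abs(z)
    return z / m if m > 0.0 else complex(1.0, 0.0)

def pvoc_step(s_prev, t_back, t_front, lock=False):
    n = len(s_prev)
    # R[k]: the 'back' window transform T[k] times the conjugate of the
    # normalized previous output frame S[m-1,k].
    r = [t_back[k] * normalize(s_prev[k]).conjugate() for k in range(n)]
    if lock:
        # Phase locking: sum each channel with its immediate neighbors
        # (the role played by the lrshift~ objects) before normalizing.
        r = [r[k]
             + (r[k - 1] if k > 0 else 0.0)
             + (r[k + 1] if k < n - 1 else 0.0)
             for k in range(n)]
    # New frame: the 'front' window transform T'[k], rotated so that each
    # channel's phase precesses by phase(T'[k]) - phase(T[k]) per hop.
    return [t_front[k] * normalize(r[k]).conjugate() for k in range(n)]
```

With this convention the new frame keeps the magnitude of the front window's transform while its phase advances by the back-to-front phase difference, which is what allows the read point to move at a different rate than real time.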
@ -239,7 +239,7 @@ can be made using daisy-chained cross-fades.
<B> Up:</B> <A NAME="tex2html3280"
HREF="node178.html">Examples</A>
<B> Previous:</B> <A NAME="tex2html3276"
HREF="node181.html">Timbre stamp (``vocoder")</A>
HREF="node181.html">Timbre stamp ("vocoder")</A>
&nbsp; <B> <A NAME="tex2html3282"
HREF="node4.html">Contents</A></B>
&nbsp; <B> <A NAME="tex2html3284"

View File

@ -73,7 +73,7 @@ Constant amplitude scaler</A>
Example A01.sinewave.pd, shown in Figure <A HREF="#fig01.11">1.11</A>, contains essentially the
simplest possible patch that makes a sound,
with only three object boxes. (There are also comments, and two message
boxes to turn Pd's ``DSP" (audio) processing on and off.) The three object boxes
boxes to turn Pd's "DSP" (audio) processing on and off.) The three object boxes
are:
<P>
@ -144,20 +144,20 @@ consult the Pd documentation for details.
<P>
The two message boxes show a peculiarity in the way messages are parsed in
message boxes. Earlier in Figure <A HREF="node16.html#fig01.10">1.10</A> (part a), the message
consisted only of the number 21. When clicked, that box sent the message ``21"
consisted only of the number 21. When clicked, that box sent the message "21"
to its outlet and hence to any objects connected to it. In this current
example, the text of the message boxes starts with a semicolon. This is a
terminator between messages (so the first message is empty), after which the
next word is taken as the name of the recipient of the following message. Thus
the message here is ``dsp 1" (or ``dsp 0") and the message is to be sent, not
the message here is "dsp 1" (or "dsp 0") and the message is to be sent, not
to any connected objects--there aren't any anyway--but rather, to the object
named ``pd". This particular object is provided invisibly by the Pd program
named "pd". This particular object is provided invisibly by the Pd program
and you can send it various messages to control Pd's global state, in this case
turning audio processing on (``1") and off (``0").
turning audio processing on ("1") and off ("0").
<P>
Many more details about the control aspects of Pd, such as the above, are
explained in a different series of example patches (the ``control examples") in
explained in a different series of example patches (the "control examples") in
the Pd release, but they will only be touched on here as necessary to
demonstrate the audio signal processing techniques that are the subject of this
book.

View File

@ -98,7 +98,7 @@ jumps so that the result repeats from one period to the next.
Example J03.pulse.width.mod.pd (not shown) combines two sawtooth waves, of opposite sign, with
slightly different frequencies so that the relative phase changes
continuously. Their sum is a rectangle wave whose width varies in time. This
is known as pulse width modulation (``PWM").
is known as pulse width modulation ("PWM").
<P>
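The pulse width modulation idea above can be sketched numerically: two unit sawtooths at slightly different frequencies, combined with opposite signs, yield a two-valued (rectangle) waveform whose pulse width drifts with the relative phase. A sketch with illustrative names:

```python
def pwm(n, f1, f2, sr):
    # Sum a rising sawtooth at f1 with an opposite-sign sawtooth at a
    # slightly different frequency f2.  At any instant the difference of
    # the two phases (mod 1) takes one of two values, d or d - 1, so the
    # output is a rectangle wave whose width d changes slowly over time.
    out = []
    for i in range(n):
        saw1 = (i * f1 / sr) % 1.0
        saw2 = (i * f2 / sr) % 1.0
        out.append(saw1 - saw2)
    return out
```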

View File

@ -75,7 +75,7 @@ Strategies for band-limiting sawtooth waves</A>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 10.14:</STRONG>
Alternative techniques for making waveforms with corners: (a) a
triangle wave as the minimum of two line segments; (b) clipping a triangle wave to
make an ``envelope".</CAPTION>
make an "envelope".</CAPTION>
<TR><TD><IMG
WIDTH="617" HEIGHT="359" BORDER="0"
SRC="img1377.png"
@ -101,7 +101,7 @@ shape is used as in the previous example, and the falling shape differs only
in that its phase is set so that it falls to zero at a controllable point (not
necessarily at the end of the cycle as before). The <TT>clip~</TT> object prevents
it from rising above 1 (so that, if the intersection of the two segments is
higher than one, we get a horizontal ``sustain" segment), and also from falling
higher than one, we get a horizontal "sustain" segment), and also from falling
below zero, so that once the falling shape reaches zero, the output is zero for
the rest of the cycle.

View File

@ -96,8 +96,8 @@ A03.line.pd; (c) A05.output.subpatch.pd.</CAPTION>
WIDTH="77" HEIGHT="41" ALIGN="MIDDLE" BORDER="0"
SRC="img150.png"
ALT="\fbox{ $ \mathrm{dbtorms} $}"> : Decibels to linear
amplitude conversion. The ``RMS" is a misnomer; it should have been named
``dbtoamp",
amplitude conversion. The "RMS" is a misnomer; it should have been named
"dbtoamp",
since it really converts from decibels to any linear amplitude unit, be it
RMS, peak, or other. An input of 100 dB is normalized to an output of 1.
Values greater than 100 are fine (120 will give 10), but values less than or
@ -116,7 +116,7 @@ one way or another to avoid using it.)
<P>
The two number boxes are connected to the input and output of the
<TT>dbtorms</TT> object. The input functions as a control; ``mouse" on it
<TT>dbtorms</TT> object. The input functions as a control; "mouse" on it
(click and drag upward or downward) to change the amplitude. It has been set to range from
0 to 80; this is protection for your speakers and ears, and it's wise to build
such guardrails into your own patches.
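The dbtorms mapping described above (really decibels to any linear amplitude) is: 100 dB is unit gain and every 20 dB is a factor of 10, with non-positive inputs clamped to a true zero. A sketch, named dbtoamp after the text's suggestion:

```python
def dbtoamp(db):
    # 100 dB -> 1.0, 120 dB -> 10.0; inputs at or below 0 dB give 0,
    # so a fade to 0 dB really reaches silence.
    return 0.0 if db <= 0.0 else 10.0 ** ((db - 100.0) / 20.0)
```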

View File

@ -1046,7 +1046,7 @@ Index</A>
<DT><STRONG>in Pd</STRONG>
<DD><A HREF="node49.html#3723">Control operations in Pd</A>
</DL>
<DT><STRONG>quality (``q'')</STRONG>
<DT><STRONG>quality ("q")</STRONG>
<DD><A HREF="node149.html#10499">Impulse responses of recirculating</A>
<DT><STRONG>real part of a complex number</STRONG>
<DD><A HREF="node105.html#7764">Complex numbers</A>

View File

@ -120,8 +120,8 @@ the beginning and end values of a segment in <TT>line~</TT>'s output.
<P>
The treatment of <TT>line~</TT>'s right inlet is unusual among Pd objects in that
it forgets old values; a message with a single number such as ``0.1" is
always equivalent to the pair, ``0.1 0". Almost any other object will retain
it forgets old values; a message with a single number such as "0.1" is
always equivalent to the pair, "0.1 0". Almost any other object will retain
the previous value for the right inlet, instead of resetting it to zero.
<P>

View File

@ -81,10 +81,10 @@ have equal amplitudes.
<P>
The amplitude control in this example is taken care of by a new object called
<TT>output~</TT>. This isn't a built-in object of Pd, but is itself a Pd patch
which lives in a file, ``output.pd". (You can see the internals of
which lives in a file, "output.pd". (You can see the internals of
<TT>output~</TT> by opening the properties menu for the box and selecting
``open".) You get two controls, one for amplitude in dB (100 meaning ``unit
gain"), and a ``mute" button. Pd's audio processing is turned on automatically
"open".) You get two controls, one for amplitude in dB (100 meaning "unit
gain"), and a "mute" button. Pd's audio processing is turned on automatically
when you set the output level--this might not be the best behavior in
general, but it's appropriate for these example patches. The mechanism for
embedding one Pd patch as an object box inside another is discussed in Section

View File

@ -71,7 +71,7 @@ Conversion between frequency and pitch</A>
<P>
Example A06.frequency.pd&nbsp; (Figure <A HREF="#fig01.13">1.13</A>) shows Pd's object for converting
pitch to frequency units (<TT>mtof</TT>, meaning ``MIDI to frequency") and its
pitch to frequency units (<TT>mtof</TT>, meaning "MIDI to frequency") and its
inverse <TT>ftom</TT>. We also introduce two other object classes,
<TT>send</TT> and <TT>receive</TT>.
@ -105,7 +105,7 @@ Conversion between pitch and frequency in A06.frequency.pd.</CAPTION>
<A NAME="1405"></A><A NAME="1406"></A>convert MIDI pitch to frequency units according to the
Pitch/Frequency Conversion Formulas (Page <A HREF="node11.html#eq-pitchmidi"><IMG ALIGN="BOTTOM" BORDER="1" ALT="[*]"
SRC="crossref.png"></A>). Inputs
and outputs are messages (``tilde" equivalents of the two also exist,
and outputs are messages ("tilde" equivalents of the two also exist,
although like <TT>dbtorms~</TT> they're expensive in CPU time).
The <TT>ftom</TT> object's output is -1500
if the input is zero or negative; and likewise, if you give <TT>mtof</TT> -1500 or lower it outputs zero.
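The mtof/ftom pair implements the standard MIDI pitch/frequency conversion (A440 at MIDI note 69), with the clamping behavior just described. A sketch:

```python
import math

def mtof(m):
    # MIDI pitch to frequency; inputs of -1500 or lower give 0.
    return 0.0 if m <= -1500.0 else 440.0 * 2.0 ** ((m - 69.0) / 12.0)

def ftom(f):
    # Frequency to MIDI pitch; zero or negative input gives -1500.
    return -1500.0 if f <= 0.0 else 69.0 + 12.0 * math.log2(f / 440.0)
```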
@ -125,11 +125,11 @@ if the input is zero or negative; and likewise, if you give <TT>mtof</TT> -1500
SRC="img157.png"
ALT="\fbox{ $ \mathrm{r} $}">:
<A NAME="1407"></A><A NAME="1408"></A>Receive messages non-locally.
The <TT>receive</TT> object, which may be abbreviated as ``<TT>r</TT>",
The <TT>receive</TT> object, which may be abbreviated as "<TT>r</TT>",
waits for non-local messages to be sent by a <TT>send</TT> object (described below)
or
by a message box using redirection (the ``;" feature discussed in the
earlier example, A01.sinewave.pd). The argument (such as ``frequency" and ``pitch"
by a message box using redirection (the ";" feature discussed in the
earlier example, A01.sinewave.pd). The argument (such as "frequency" and "pitch"
in this example) is the name to which messages are sent. Multiple
<TT>receive</TT> objects may share the same name, in which case any message
sent to that name will go to all of them.
@ -148,7 +148,7 @@ sent to that name will go to all of them.
WIDTH="31" HEIGHT="33" ALIGN="MIDDLE" BORDER="0"
SRC="img159.png"
ALT="\fbox{ $\mathrm{s}$\ }">:
<A NAME="1409"></A><A NAME="1410"></A>The <TT>send</TT> object, which may be abbreviated as ``<TT>s</TT>", directs
<A NAME="1409"></A><A NAME="1410"></A>The <TT>send</TT> object, which may be abbreviated as "<TT>s</TT>", directs
messages to <TT>receive</TT> objects.
<P>
@ -156,11 +156,11 @@ Two new properties of number boxes are used here. Earlier we've used them
as controls or as displays; here, the two number boxes each function as both.
If a number box gets a number in its inlet, it not only displays the number
but also repeats the number to its output. However, a number box may also be sent
a ``set" message, such as ``set 55" for example. This would set the value
a "set" message, such as "set 55" for example. This would set the value
of the number box to 55 (and display it) but not cause the output that would
result from the simple ``55" message. In this case, numbers coming from the
two <TT>receive</TT> objects are formatted (using message boxes) to read ``set 55" instead
of just ``55", and so on. (The special word ``$1" is replaced by the
result from the simple "55" message. In this case, numbers coming from the
two <TT>receive</TT> objects are formatted (using message boxes) to read "set 55" instead
of just "55", and so on. (The special word "$1" is replaced by the
incoming number.) This is done because otherwise we would have an infinite
loop: frequency would change pitch which would change frequency and so on
forever, or at least until something broke.

View File

@ -89,7 +89,7 @@ oscillators, effectively turning them on and off.
<P>
Even when all four oscillators are combined (with the toggle switch in the
``1" position), the result fuses into a single tone,
"1" position), the result fuses into a single tone,
heard at the pitch of the leftmost oscillator. In effect this patch sums a
four-term Fourier series to generate a complex, periodic waveform.

View File

@ -77,7 +77,7 @@ a continuous stream at some sample rate. The sample rate isn't really a
quality of the audio signal, but rather it specifies how fast the individual
samples should flow into or out of the computer. But audio signals are at
bottom just sequences of numbers, and in practice there is no requirement that
they be ``played" sequentially. Another, complementary view is that
they be "played" sequentially. Another, complementary view is that
they can be stored in memory, and, later, they can be read back in any
order--forward, backward, back and forth, or totally at random. An
inexhaustible range of new possibilities opens up.
@ -224,7 +224,7 @@ x[ \lfloor y[n] \rflo...
<IMG
WIDTH="44" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img173.png"
ALT="$\lfloor y[n] \rfloor$"> means, ``the greatest integer not
ALT="$\lfloor y[n] \rfloor$"> means, "the greatest integer not
exceeding <IMG
WIDTH="30" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img2.png"
@ -246,7 +246,7 @@ same for <IMG
ALT="$y[1]$"> and <IMG
WIDTH="28" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img177.png"
ALT="$z[1]$"> and so on. The ``natural" range for the input <IMG
ALT="$z[1]$"> and so on. The "natural" range for the input <IMG
WIDTH="30" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img2.png"
ALT="$y[n]$">
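The lookup formula above, z[n] = x[floor(y[n])], translates directly; this non-interpolating version assumes every floor(y[n]) is a valid index into the table x:

```python
import math

def wavetable_lookup(x, y):
    # Non-interpolating wavetable lookup: z[n] = x[floor(y[n])].
    # x is the stored wavetable, y the (possibly fractional) index signal.
    return [x[math.floor(v)] for v in y]
```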

View File

@ -72,12 +72,12 @@ Sampling
</H1>
<P>
``Sampling"
"Sampling"
<A NAME="2203"></A>
is nothing more than recording a live signal into a wavetable, and then later
playing it out again. (In commercial samplers the entire wavetable is
usually called a ``sample" but to avoid confusion we'll only use the word
``sample" here to mean a single number in an audio signal.)
usually called a "sample" but to avoid confusion we'll only use the word
"sample" here to mean a single number in an audio signal.)
<P>
At its simplest, a sampler is simply a wavetable oscillator, as was shown in
@ -281,7 +281,7 @@ h[n] = 12 {{\log_2} \left \vert y[n] - y[n-1] \right \vert}
</DIV>
<BR CLEAR="ALL">
<P></P>
(Here the enclosing bars ``<IMG
(Here the enclosing bars "<IMG
WIDTH="7" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img202.png"
ALT="$\vert$">" mean absolute value.)
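The momentary-transposition formula quoted above, h[n] = 12 log2 |y[n] - y[n-1]|, gives the transposition in half-steps when the index y advances once per sample; an increment of 1 sample per sample means no transposition. The function name below is illustrative:

```python
import math

def momentary_transposition(y_prev, y_now):
    # Transposition in half-steps from the wavetable index increment:
    # h[n] = 12 * log2(|y[n] - y[n-1]|).
    return 12.0 * math.log2(abs(y_now - y_prev))
```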
@ -361,7 +361,7 @@ at the beginning of each new cycle.
<P>
It's well known that transposing a recording also transposes its timbre--this
is the ``chipmunk" effect. Not only are any periodicities (such as might
is the "chipmunk" effect. Not only are any periodicities (such as might
give rise to pitch) transposed, but so are the frequencies of
the overtones. Some timbres, notably those of vocal sounds, have characteristic
frequency ranges in which overtones are stronger than other nearby ones.
@ -378,7 +378,7 @@ wavetables periodically. In Section <A HREF="node27.html#sect2.oscillator">2.1<
repeated quickly enough that the repetition gives rise to a pitch, say between
30 and 4000 times per second, roughly the range of a piano. In the current
section we assumed a wavetable one second long, and in this case
``reasonable" transposition factors (less than four octaves up) would give rise
"reasonable" transposition factors (less than four octaves up) would give rise
to a rate of repetition below 30, usually much lower, and going down as low as
we wish.
@ -506,7 +506,7 @@ location as the segment's midpoint, we first subtract <IMG
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 2.5:</STRONG>
A simple looping sampler, as yet with no amplitude control.
There are inputs to control the frequency and the segment size and location.
The ``-" operation is included if we wish the segment location to be specified
The "-" operation is included if we wish the segment location to be specified
as the segment's midpoint; otherwise we specify the location of the left
end of the segment.</CAPTION>
<TR><TD><IMG

View File

@ -173,7 +173,7 @@ them. The relative phase is controlled by the parameter <IMG
WIDTH="11" HEIGHT="13" ALIGN="BOTTOM" BORDER="0"
SRC="img4.png"
ALT="$a$"> (which takes the
value 0.3 in the graphed signals). The ``wrap" operation computes the fractional
value 0.3 in the graphed signals). The "wrap" operation computes the fractional
part of its input.</CAPTION>
<TR><TD><IMG
WIDTH="603" HEIGHT="402" BORDER="0"

View File

@ -430,7 +430,7 @@ reciprocal <IMG
WIDTH="28" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img241.png"
ALT="$1/f$">--at least approximately, and the approximation is at least
fairly good if the waveform ``behaves well" at its ends.
fairly good if the waveform "behaves well" at its ends.
(As we'll see later, the waveform can always be forced to behave at least
reasonably well by enveloping it as in Figure <A HREF="node29.html#fig02.07">2.7</A>.)
@ -449,9 +449,9 @@ curve is both compressed to the left (the frequencies all drop) and amplified
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 2.10:</STRONG>
The Fourier series magnitudes for the waveforms shown in Figure
<A HREF="#fig02.09">2.9</A>. The horizontal axis is the harmonic number. We only ``hear"
<A HREF="#fig02.09">2.9</A>. The horizontal axis is the harmonic number. We only "hear"
the coefficients for integer harmonic numbers; the continuous curves are the
``ideal" contours.</CAPTION>
"ideal" contours.</CAPTION>
<TR><TD><IMG
WIDTH="474" HEIGHT="317" BORDER="0"
SRC="img242.png"
@ -467,11 +467,11 @@ interpolate between each pair of consecutive points of the 100 percent duty
cycle contour (the original one) with 99 new ones. Already in the figure the
50 percent duty cycle trace defines the curve with twice the resolution of
the original one. In the limit, as the duty cycle gets arbitrarily small, the
spectrum is filled in more and more densely; and the limit is the ``true"
spectrum is filled in more and more densely; and the limit is the "true"
spectrum of the waveform.
<P>
This ``true" spectrum is only audible at suitably low duty cycles, though. The
This "true" spectrum is only audible at suitably low duty cycles, though. The
200 percent duty cycle example actually misses the peak in the ideal
(continuous) spectrum because the peak falls below the first harmonic. In
general, higher duty cycles sample the ideal curve at lower resolutions.
@ -484,11 +484,11 @@ endlessly variable waveforms from recorded samples (Section
<A HREF="node28.html#sect2.sampling">2.2</A>), it is possible to generate all sorts of sounds.
For example, the block diagram of Figure <A HREF="node29.html#fig02.07">2.7</A> gives us a
way to grab and stretch timbres from a recorded wavetable. When the
``frequency" parameter <IMG
"frequency" parameter <IMG
WIDTH="13" HEIGHT="30" ALIGN="MIDDLE" BORDER="0"
SRC="img112.png"
ALT="$f$"> is high enough to be audible as a pitch, the
``size"
"size"
parameter <IMG
WIDTH="10" HEIGHT="13" ALIGN="BOTTOM" BORDER="0"
SRC="img208.png"

View File

@ -81,7 +81,7 @@ lookup.
To speak of error in table lookup, we must view the wavetable as a sampled
version of an underlying function. When we ask for a value of the
underlying function which lies between the points of the wavetable, the error
is the difference between the result of the wavetable lookup and the ``ideal"
is the difference between the result of the wavetable lookup and the "ideal"
value of the function at that point. The most revealing study of wavetable
lookup error assumes that the underlying function is a sinusoid (Page
<A HREF="node7.html#eq-realsinusoid"><IMG ALIGN="BOTTOM" BORDER="1" ALT="[*]"

View File

@ -71,9 +71,9 @@ Wavetable oscillator</A>
<P>
Example B01.wavetables.pd, shown in Figure <A HREF="#fig02.12">2.12</A>, implements a wavetable
oscillator, which plays back from a wavetable named ``table10". Two new Pd
oscillator, which plays back from a wavetable named "table10". Two new Pd
primitives are shown here. First is the wavetable itself, which appears at
right in the figure. You can ``mouse" on the wavetable to change its shape and
right in the figure. You can "mouse" on the wavetable to change its shape and
hear the sound change as a result. Not shown in the figure but demonstrated in
the patch is Pd's facility for automatically calculating wavetables with
specified partial amplitudes, which is often preferable to drawing waveforms by
@ -101,8 +101,8 @@ A wavetable oscillator: B01.wavetables.pd.</CAPTION>
WIDTH="89" HEIGHT="41" ALIGN="MIDDLE" BORDER="0"
SRC="img273.png"
ALT="\fbox{ $ \mathrm{tabosc4}\sim $}">:
<A NAME="2554"></A>a wavetable oscillator. The ``4" indicates that this class uses 4-point
(cubic) interpolation. In the example, the table's name, ``table10", is
<A NAME="2554"></A>a wavetable oscillator. The "4" indicates that this class uses 4-point
(cubic) interpolation. In the example, the table's name, "table10", is
specified as a creation argument to the <TT>tabosc4~</TT> object.
(You can also switch between wavetables dynamically by sending appropriate
messages to the object.)
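Four-point (cubic) interpolation fits a cubic through the two table points surrounding the desired location and one neighbor on each side. The following is a generic sketch of such a scheme, in the spirit of tabosc4~; it is not claimed to be Pd's exact polynomial:

```python
def interpolate4(a, b, c, d, frac):
    # Cubic 4-point interpolation between points b and c (frac in [0,1)),
    # using the outer neighbors a and d to shape the curve.
    cmb = c - b
    return b + frac * (cmb - (1.0 - frac)
                       * ((d - a - 3.0 * cmb) * frac
                          + (d + 2.0 * a - 3.0 * b)) / 6.0)
```

At frac = 0 this returns b exactly and at frac = 1 it returns c, so successive table segments join without discontinuity.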

View File

@ -117,12 +117,12 @@ you can send messages to select which table to use.
<A NAME="2556"></A>record an audio signal into a wavetable. In this example the
<TT>tabwrite~</TT> is used to display the output (although later
on it will be used for all sorts of other things). Whenever it receives a
``bang" message from the pushbutton icon above it, <TT>tabwrite~</TT> begins
"bang" message from the pushbutton icon above it, <TT>tabwrite~</TT> begins
writing successive samples of its input to the named table.
<P>
Example B03.tabread4.pd shows how to combine a <TT>phasor~</TT> and a <TT>tabread4~</TT> object to make a wavetable oscillator. The <TT>phasor~</TT>'s output ranges from
0 to 1 in value. In this case the input wavetable, named ``waveform12", is 131
0 to 1 in value. In this case the input wavetable, named "waveform12", is 131
elements long. The domain for the <TT>tabread4~</TT> object is thus from 1 to
129. To adjust the range of the <TT>phasor~</TT> accordingly, we multiply it by
the length of the domain (128) so that it reaches between 0 and 128, and then
@ -133,7 +133,7 @@ between the <TT>phasor~</TT> and the <TT>tabread4~</TT>.
<P>
With only these four boxes we would have essentially reinvented the
<TT>tabosc4~</TT> class. In this example, however, the multiplication
is not by a constant 128 but by a variable amount controlled by the ``squeeze"
is not by a constant 128 but by a variable amount controlled by the "squeeze"
parameter. The function of the four boxes at the right hand side of the patch
is to supply the <TT>*~</TT> object with values to scale the
<TT>phasor~</TT> by. This makes use of one more new object class:
@ -150,18 +150,18 @@ is to supply the <TT>*~</TT> object with values to scale the
number of arguments, their types (usually numbers) and their initial values.
The inlets (there will be as many as you specified creation arguments) update
the values of the message arguments, and, if the leftmost inlet is changed
(or just triggered with a ``bang" message), the message is output.
(or just triggered with a "bang" message), the message is output.
<A NAME="pdpack"></A>
<P>
In this patch the arguments are initially 0 and 50, but the number box will
update the value of the first argument, so that, as pictured, the most recent
message to leave the <TT>pack</TT> object was ``206 50". The effect of this
message to leave the <TT>pack</TT> object was "206 50". The effect of this
on the <TT>line~</TT> object below is to ramp to 206 in 50 milliseconds; in
general the output of the <TT>line~</TT> object is an audio signal that smoothly
follows the sporadically changing values of the number box labeled ``squeeze".
follows the sporadically changing values of the number box labeled "squeeze".
<P>
Finally, 128 is added to the ``squeeze" value; if ``squeeze" takes non-negative
Finally, 128 is added to the "squeeze" value; if "squeeze" takes non-negative
values (as the number box in this patch enforces), the range-setting multiplier
ranges the phasor by 128 or more. If the value is greater than 128, the effect
is that the rescaled phasor spends some fraction of its cycle stuck at the end

View File

@ -119,7 +119,7 @@ In Figure <A HREF="#fig02.15">2.15</A> (part a), a <TT>phasor~</TT> object suppl
indices into the wavetable (at right) and phases for a half-cosine-shaped
envelope function at left. These two are multiplied, and the product is
high-pass filtered and output. Reading the wavetable is straightforward; the
phasor is multiplied by a ``chunk size" parameter, added to 1, and used as an
phasor is multiplied by a "chunk size" parameter, added to 1, and used as an
index to <TT>tabread4~</TT>. The chunk size parameter is multiplied by
441 to convert it from hundredths of a second to samples. This corresponds
exactly to the block diagram shown in Figure <A HREF="node28.html#fig02.05">2.5</A>, with a segment
@ -141,10 +141,10 @@ function in the range (<IMG
of the waveform.
<P>
Part (b) of Figure <A HREF="#fig02.15">2.15</A> introduces a third parameter, the ``read
Part (b) of Figure <A HREF="#fig02.15">2.15</A> introduces a third parameter, the "read
point", which specifies where in the sample the loop is to start. (In part (a)
we always started at the beginning.) The necessary change is simple enough:
add the ``read point" control value, in samples,
add the "read point" control value, in samples,
to the wavetable index and proceed as before. To avoid discontinuities
in the index we smooth the read point value using
<TT>pack</TT> and <TT>line~</TT> objects, just as we did in

View File

@ -100,9 +100,9 @@ same, but with a phasor-controlled read point (B11.sampler.rockafella.pd).</CAPT
WIDTH="87" HEIGHT="41" ALIGN="MIDDLE" BORDER="0"
SRC="img286.png"
ALT="\fbox{ $\mathrm{loadbang}$\ }">:
<A NAME="2561"></A>output a ``bang" message on load. This is used in this patch to make sure the
<A NAME="2561"></A>output a "bang" message on load. This is used in this patch to make sure the
division of transposition by chunk size will have a valid transposition factor
in case ``chunk size" is moused on first.
in case "chunk size" is moused on first.
<P>
<BR><!-- MATH
@ -115,7 +115,7 @@ in case ``chunk size" is moused on first.
<A NAME="2562"></A>evaluate mathematical expressions. Variables appear as $f1, $f2, and so on,
corresponding to the object's inlets. Arithmetic operations are allowed,
with parentheses for grouping, and many library functions are supplied,
such as exponentiation, which shows up in this example as ``pow" (the
such as exponentiation, which shows up in this example as "pow" (the
power function).
<P>
@ -167,9 +167,9 @@ the <TT>throw~</TT> and <TT>catch~</TT> objects).
<P>
In the example, part of the wavetable reading machinery is duplicated, using
identical calculations of ``chunk-size-samples" (a message stream) and
``read-pt" (an audio signal smoothed as before). However, the ``phase"
audio signal, in the other copy, is replaced by ``phase2". The top part
identical calculations of "chunk-size-samples" (a message stream) and
"read-pt" (an audio signal smoothed as before). However, the "phase"
audio signal, in the other copy, is replaced by "phase2". The top part
of the figure shows the calculation of the two phase signals: the first one
as the output of a <TT>phasor~</TT> object, and the second by adding
0.5 and wrapping, thereby adding 0.5 cycles (<IMG


@ -101,7 +101,7 @@ samples and use seconds instead, converting to samples (and shifting by
one) only just before the <TT>tabread4~</TT> object.
The wavetable holds one second of sound, and we'll assume here that the
nominal chunk size will not exceed 0.1 second, so that we can safely let
the read point range from 0 to 0.9; the ``real" chunk size will vary, and
the read point range from 0 to 0.9; the "real" chunk size will vary, and
can become quite large, because of the moving read pointer.
<P>
@ -110,7 +110,7 @@ control sets the frequency of a phasor of amplitude 0.9, and therefore the
precession must be multiplied by 0.9 to set the frequency of the phasor (so
that, for a precession of one for instance, the amplitude and frequency of
the read point are both 0.9, so that the slope, equal to amplitude over
frequency, is one). The output of this is named ``read-pt" as before, and
frequency, is one). The output of this is named "read-pt" as before, and
is used by both copies of the wavetable reader.
<P>


@ -115,7 +115,7 @@ distinguishes it from the simpler <IMG
ALT="$f(t)=1$">. But intuition tells us that
the constant function is in the <I>spirit</I> of digital audio signals,
whereas the one that hides a secret between the samples isn't. A function
that is ``possible to sample" should be one for which we can use some reasonable
that is "possible to sample" should be one for which we can use some reasonable
interpolation scheme to deduce its values on non-integers from its values on
integers.
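Aliasing makes this concrete: a sinusoid sampled only at integers agrees exactly with its "folded" partner, so no interpolation scheme could tell them apart. A quick numeric check (standard library only; the 44100 Hz rate and 1000 Hz test frequency are our choices):

```python
import math

SR = 44100           # assumed sample rate
f = 1000.0           # a frequency below half the sample rate
alias = SR - f       # its folded partner above half the sample rate

def sample(freq, n):
    """Value of a cosine of the given frequency at integer sample n."""
    return math.cos(2 * math.pi * freq * n / SR)

# cos(2*pi*(SR-f)*n/SR) = cos(2*pi*n - 2*pi*f*n/SR) = cos(2*pi*f*n/SR)
# at every integer n, so the two frequencies produce identical samples.
diffs = [abs(sample(f, n) - sample(alias, n)) for n in range(100)]
```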
@ -228,7 +228,7 @@ integers at least, to one with frequency between 0 and <IMG
WIDTH="13" HEIGHT="13" ALIGN="BOTTOM" BORDER="0"
SRC="img41.png"
ALT="$\pi $">; you simply can't
tell the two apart. And since any conversion hardware should do the ``right"
tell the two apart. And since any conversion hardware should do the "right"
thing and reconstruct the lower-frequency sinusoid, any higher-frequency one
you try to synthesize will come out your speakers at the wrong
frequency--specifically, you will hear the unique frequency between 0 and <IMG


@ -94,7 +94,7 @@ reflect the result of the computation.
<P>
In a non-real-time system (such as Csound in its classical form),
this means that logical time proceeds from zero to the length of the
output soundfile. Each ``score card" has an associated logical time (the
output soundfile. Each "score card" has an associated logical time (the
time in the score), and is acted upon once the audio computation has reached
that time. So audio and control calculations (grinding out the samples and
handling note cards) are each handled in turn, all in increasing order of
@ -148,7 +148,7 @@ one. We then do all control calculations up to but not including logical
time 2, then the sample of index one, and so on. (Here we are adopting
certain conventions about labeling that could be chosen differently. For
instance, there is no fundamental reason control should be pictured as
coming ``before" audio computation but it is easier to think that way.)
coming "before" audio computation but it is easier to think that way.)
<P>
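The interleaving of control and audio computation described above can be sketched as a loop (a schematic model, not any particular program's scheduler; the event list is invented):

```python
# Control events, each tagged with a logical time in samples (made up).
events = {0: "note on", 2: "note off"}

log = []

def do_control(time):
    log.append(("control", time, events[time]))

def compute_sample(index):
    log.append(("audio", index))

# All control at logical time n runs before the sample of index n,
# following the convention adopted in the text.
for n in range(4):
    if n in events:
        do_control(n)
    compute_sample(n)
```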
Part (b) of the figure shows the situation if we wish to compute the audio


@ -190,7 +190,7 @@ and the <IMG
at those times.
<P>
A numeric control stream is roughly analogous to a ``MIDI controller", whose
A numeric control stream is roughly analogous to a "MIDI controller", whose
values change irregularly, for example when a physical control is moved by a
performer. Other control stream sources may have higher possible rates of
change and/or more precision. On the other hand, a time sequence might be a


@ -88,7 +88,7 @@ without audible artifacts; we probably can ramp it off in less time if the
current amplitude is low than if it is high. To do this we must confect a
message to the <TT>line~</TT> object to send it to zero in an amount of
time we'll calculate on the basis of its current output value. This will
require, first of all, that we ``sample" the <TT>line~</TT> object's
require, first of all, that we "sample" the <TT>line~</TT> object's
output (an audio signal) into a control stream.
<P>


@ -107,8 +107,8 @@ threshold levels; (c) debounced using dead periods.</CAPTION>
Figure <A HREF="#fig03.07">3.7</A> (part a) shows a simple realization of this idea.
We assume the signal input is as shown in the continuous graph. A horizontal
line shows the constant value of the threshold. The time sequence marked
``onsets" contains one event for each time the signal crosses the threshold
from below to above; the one marked ``turnoffs" marks crossings in the other
"onsets" contains one event for each time the signal crosses the threshold
from below to above; the one marked "turnoffs" marks crossings in the other
direction.
<P>
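A rough sketch of such a detector, using the pair of thresholds from part (b) of the figure (the function name and threshold values are ours, for illustration):

```python
def detect(samples, high=0.5, low=0.3):
    """Report threshold crossings using two thresholds (hysteresis):
    an onset when the signal rises above `high`, a turnoff when it
    later falls below `low`.  Returns (onsets, turnoffs) as indices."""
    onsets, turnoffs, armed = [], [], True
    for i, x in enumerate(samples):
        if armed and x > high:
            onsets.append(i)
            armed = False          # ignore further rises until a turnoff
        elif not armed and x < low:
            turnoffs.append(i)
            armed = True
    return onsets, turnoffs

sig = [0.0, 0.6, 0.4, 0.6, 0.2, 0.7, 0.0]
on, off = detect(sig)
```

Note that the dip to 0.4 between the two high samples does not produce a spurious turnoff, which is the point of using two separate thresholds.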


@ -134,8 +134,8 @@ sample and hold unit [<A
HREF="node202.html#r-strange72">Str95</A>, pp. 80-83]
[<A
HREF="node202.html#r-chamberlin80">Cha80</A>, p. 92]. This takes an incoming signal, picks out certain
instantaneous values from it, and ``freezes" those values for its output. The
particular values to pick out are selected by a secondary, ``trigger" input.
instantaneous values from it, and "freezes" those values for its output. The
particular values to pick out are selected by a secondary, "trigger" input.
At points in time specified by the trigger input a new, single value is taken
from the primary input and is output continuously until the next time point,
when it is replaced by a new value of the primary input.
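The same unit can be sketched in code, triggering on falling edges of the trigger signal (the test signals below are invented):

```python
def samphold(primary, trigger):
    """Sample-and-hold: whenever the trigger signal falls (its value
    is smaller than on the previous sample), freeze the current value
    of the primary input; otherwise repeat the last frozen value."""
    out, held, prev_trig = [], 0.0, float("inf")
    for x, t in zip(primary, trigger):
        if t < prev_trig:          # falling edge: take a new sample
            held = x
        prev_trig = t
        out.append(held)
    return out

# A rising ramp sampled by a sawtooth trigger of period two samples;
# the first sample always triggers, since prev_trig starts at infinity.
result = samphold([10, 11, 12, 13, 14, 15], [0, 1, 0, 1, 0, 1])
```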
@ -145,7 +145,7 @@ when it is replaced by a new value of the primary input.
<DIV ALIGN="CENTER"><A NAME="fig03.09"></A><A NAME="3665"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 3.9:</STRONG>
Sample and hold (``S/H"), using falling edges of the trigger signal.</CAPTION>
Sample and hold ("S/H"), using falling edges of the trigger signal.</CAPTION>
<TR><TD><IMG
WIDTH="535" HEIGHT="416" BORDER="0"
SRC="img334.png"


@ -85,7 +85,7 @@ carries numbers as messages.
<P>
Messages not containing data make up <I>time sequences</I>. So that you can
see messages with no data, in Pd they are given the (arbitrary) symbol ``bang".
see messages with no data, in Pd they are given the (arbitrary) symbol "bang".
<P>
@ -125,8 +125,8 @@ using two explicit delay objects:
SRC="img339.png"
ALT="\fbox{ $\mathrm{delay}$\ }">:
<A NAME="3843"></A>simple delay. You can specify the delay time in a creation argument or via
the right inlet. A ``bang" in the left inlet sets the delay, which then outputs
``bang" after the specified delay in milliseconds. The delay is <I>simple</I>
the right inlet. A "bang" in the left inlet sets the delay, which then outputs
"bang" after the specified delay in milliseconds. The delay is <I>simple</I>
in the sense that sending a bang to an already set delay resets it to a new
output time, canceling the previously scheduled one.
@ -188,7 +188,7 @@ otherwise.
SRC="img343.png"
ALT="\fbox{ $\mathrm{sel}$\ }">:
<A NAME="3846"></A>prune for specific numbers. Numeric messages coming in the left inlet produce
a ``bang" on the output only if they match a test value exactly. The test
a "bang" on the output only if they match a test value exactly. The test
value is set either by creation argument or from the right inlet.
<P>
@ -199,7 +199,7 @@ control streams implicitly in its connection mechanism, as illustrated by part
(d) of the figure. Most objects with more than one inlet synchronize all other
inlets to the leftmost one. So the <TT>float</TT> object shown in the figure
resynchronizes its right-hand-side inlet (which takes numbers) to its
left-hand-side one. Sending a ``bang" to the left inlet outputs the most
left-hand-side one. Sending a "bang" to the left inlet outputs the most
recent number <TT>float</TT> has received beforehand.
<P>


@ -81,7 +81,7 @@ theory and technique so usefully.
<P>
By far the most popular music and sound synthesis programs in use today are
block diagram compilers with graphical interfaces. These allow the composer to
design instruments by displaying the ``objects" of his instrument on a computer
design instruments by displaying the "objects" of his instrument on a computer
screen and drawing the connecting paths between the objects. The resulting
graphical display is very congenial to musicians. A naive user can design a
simple instrument instantly. He can rapidly learn to design complex

View File

@ -95,7 +95,7 @@ zero and back up.
<P>
Two other waveforms are provided to show the interesting effects of beating
between partials which, although they ``should" have been far apart, find
between partials which, although they "should" have been far apart, find
themselves neighbors through foldover. For instance, at 1423 Hertz, the second
harmonic is 2846 Hertz whereas the 33rd harmonic sounds at 1423*33-44100 = 2859
Hertz--a rude dissonance.
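The arithmetic behind this collision can be checked directly (44100 Hz sample rate, as in the text):

```python
SR = 44100

def folded(freq, sr=SR):
    """Frequency actually heard after foldover: reflect into [0, sr/2]."""
    f = freq % sr
    return sr - f if f > sr / 2 else f

f0 = 1423
second = folded(2 * f0)         # 2846 Hz, unaffected by foldover
thirty_third = folded(33 * f0)  # 1423*33 - 44100 = 2859 Hz
```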
@ -107,7 +107,7 @@ hear it. Example C02.sawtooth-foldover.pd (not pictured here) demonstrates this
(the <TT>phasor~</TT> object). For wavetables holding audio recordings,
interpolation error can create extra foldover. The effects of this can
vary widely; the sound is sometimes described as
``crunchy" or ``splattering", depending on the recording, the transposition,
"crunchy" or "splattering", depending on the recording, the transposition,
and the interpolation algorithm.
<P>


@ -98,7 +98,7 @@ numeric control stream into a signal inlet. In Pd, implicit conversions from
numeric control streams to audio streams are done in the fast-as-possible mode
shown in Figure <A HREF="node43.html#fig03.04">3.4</A> (part a). The <TT>line</TT> output becomes a
staircase signal with 50 steps per second. The result is commonly called
``zipper noise".
"zipper noise".
<P>
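The staircase can be modeled directly: at 44100 Hz with 50 control updates per second, each step lasts 882 samples (a sketch; the ramp endpoints are invented):

```python
SR = 44100
UPDATES_PER_SEC = 50
step_len = SR // UPDATES_PER_SEC   # 882 samples per control step

def staircase(start, end, seconds):
    """Amplitude ramp rendered as a control-rate staircase: the value
    changes only once per control period, producing "zipper noise"."""
    steps = int(seconds * UPDATES_PER_SEC)
    out = []
    for k in range(steps):
        value = start + (end - start) * k / steps
        out.extend([value] * step_len)
    return out

ramp = staircase(0.0, 1.0, 0.1)    # five audible steps in a tenth of a second
```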
Whereas the limitations of the <TT>line</TT> object for generating audio
@ -124,7 +124,7 @@ is provided for these situations:
WIDTH="69" HEIGHT="41" ALIGN="MIDDLE" BORDER="0"
SRC="img346.png"
ALT="\fbox{ $\mathrm{vline}\sim$}">:
<A NAME="3848"></A>exact line segment generator. This third member of the ``line" family
<A NAME="3848"></A>exact line segment generator. This third member of the "line" family
outputs an audio signal (like <TT>line~</TT>), but aligns the endpoints of the signal to
the desired time points, accurate to a fraction of a sample. (The accuracy
is limited only by the floating-point numerical format used by Pd.) Further,


@ -107,21 +107,21 @@ Non-looping sampler.</CAPTION>
The amplitude of the output of <TT>tabread4~</TT> is controlled by a
second <TT>vline~</TT> object, in order to prevent discontinuities
in the output in case a new event is started while the previous event is still
playing. The ``cutoff" <TT>vline~</TT> object ramps the output down to zero
playing. The "cutoff" <TT>vline~</TT> object ramps the output down to zero
(whether or not it is playing) so that, once the output is zero, the index
of the wavetable may be changed discontinuously.
<P>
In order to start a new ``note", first, the ``cutoff" <TT>vline~</TT> object is
In order to start a new "note", first, the "cutoff" <TT>vline~</TT> object is
ramped to zero; then, after a delay of 5 msec (at which point <TT>vline~</TT> has reached zero) the phase is reset. This is done with two messages: first,
the phase is set to 1 (with no time value so that it jumps to 1 with no
ramping). The value ``1" specifies the first readable point of the wavetable,
ramping). The value "1" specifies the first readable point of the wavetable,
since we are using 4-point interpolation. Second, in the same message box,
the phase is ramped to 441,000,000 over a time period of 10,000,000 msec. (In
Pd, large numbers are shown using exponential notation; these two appear as
4.41e+08 and 1e+07.) The quotient is 44.1 (in units per millisecond) giving
a transposition of one. The upper <TT>vline~</TT> object (which generates the
phase) receives these messages via the ``r phase" object above it.
phase) receives these messages via the "r phase" object above it.
<P>
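The transposition arithmetic in those messages can be verified in a few lines (assuming the 44.1 kHz sample rate the example uses):

```python
SR = 44100                    # samples per second

target_samples = 4.41e8       # ramp target, in wavetable samples
ramp_msec = 1e7               # ramp duration, in milliseconds

slope = target_samples / ramp_msec        # 44.1 wavetable samples per msec
samples_per_msec = SR / 1000              # real time advances 44.1 samples/msec
transposition = slope / samples_per_msec  # quotient of the two: 1, untransposed
```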
The example assumes that the wavetable is ramped smoothly to zero at either


@ -71,7 +71,7 @@ Analog-style sequencer</A>
<P>
Example C08.analog.sequencer.pd (Figure <A HREF="#fig03.15">3.15</A>) realizes the analog sequencer and envelope
generation described in Section <A HREF="node47.html#sect3.analog">3.7</A>. The ``sequence" table,
generation described in Section <A HREF="node47.html#sect3.analog">3.7</A>. The "sequence" table,
with nine elements, holds a sequence of frequencies. The <TT>phasor~</TT> object at top cycles through the sequence table at 0.6 Hertz. Non-interpolating
table lookup (<TT>tabread~</TT> instead of <TT>tabread4~</TT>) is
used to read the frequencies in discrete steps. (Such situations, in


@ -130,8 +130,8 @@ directly use hardware MIDI input or output.
SRC="img355.png"
ALT="\fbox{ $\mathrm{t}$}">:
<A NAME="3853"></A>copy a message to outlets in right to left order, with type conversion. The
creation arguments (``b" and ``f" in this example) specify two outlets, one
giving ``bang" messages, the other ``float" (i.e., numbers). One outlet
creation arguments ("b" and "f" in this example) specify two outlets, one
giving "bang" messages, the other "float" (i.e., numbers). One outlet
is created for each creation argument. The outputs appear in Pd's standard
right-to-left order.
@ -146,19 +146,19 @@ the input failed to match 0); this is divided by the maximum MIDI velocity of
<P>
However, when a note-off is received, it is only appropriate to stop the sound
if the note-off pitch actually matches the pitch the instrument is playing.
For example, suppose the messages received are ``60 127", ``72 127",
``60 0", and ``72 0". When the note-on at pitch 72 arrives the pitch should
change to 72, and then the ``60 0" message should be ignored, with the note
playing until the ``72 0" message.
For example, suppose the messages received are "60 127", "72 127",
"60 0", and "72 0". When the note-on at pitch 72 arrives the pitch should
change to 72, and then the "60 0" message should be ignored, with the note
playing until the "72 0" message.
<P>
To accomplish this, first we store the velocity in the upper <TT>float</TT> object. Second, when the pitch arrives, it too is stored (the lower
<TT>float</TT> object) and then the velocity is tested against zero (the
``bang" outlet of <TT>t b f</TT> recalls the velocity which is sent to
"bang" outlet of <TT>t b f</TT> recalls the velocity which is sent to
<TT>sel 0</TT>). If this is zero, the second step is to recall the pitch and
test it (the <TT>select</TT> object) against the most recently received
note-on pitch. Only if these are equal (so that ``bang" appears at the
left-hand-side outlet of <TT>select</TT>) does the message ``0 1000" go to the
note-on pitch. Only if these are equal (so that "bang" appears at the
left-hand-side outlet of <TT>select</TT>) does the message "0 1000" go to the
<TT>line~</TT> object.
<P>
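The note-off matching logic reads naturally as code (a schematic model of the patch's behavior, with invented names):

```python
class MonoVoice:
    """Monophonic pitch handler: a note-off only silences the voice if
    its pitch matches the most recently received note-on pitch."""
    def __init__(self):
        self.pitch = None
        self.playing = False

    def note(self, pitch, velocity):
        if velocity != 0:             # note-on: switch to the new pitch
            self.pitch = pitch
            self.playing = True
        elif pitch == self.pitch:     # note-off for the sounding pitch
            self.playing = False
        # note-offs for any other pitch are ignored

v = MonoVoice()
for pitch, vel in [(60, 127), (72, 127), (60, 0)]:
    v.note(pitch, vel)
# the "60 0" message was ignored: pitch 72 is still sounding
```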


@ -95,7 +95,7 @@ repeating a fixed waveform every N samples. What value of <IMG
ALT="$N$"> should we
choose, and how many cents (Page <A HREF="node11.html#eq-pitchmidi"><IMG ALIGN="BOTTOM" BORDER="1" ALT="[*]"
SRC="crossref.png"></A>) are we off from the
``true" middle C?
"true" middle C?
<P>
</LI>


@ -83,11 +83,11 @@ ordinary way to use one, but there are many other possible uses.
<P>
Envelope generators have come in many forms over the years, but the simplest
and the perennial favorite is the
<A NAME="4577"></A><A NAME="4578"></A><I>ADSR</I> envelope generator. ``ADSR" is an acronym for
``Attack, Decay, Sustain, Release", the four segments of the
<A NAME="4577"></A><A NAME="4578"></A><I>ADSR</I> envelope generator. "ADSR" is an acronym for
"Attack, Decay, Sustain, Release", the four segments of the
ADSR generator's output. The ADSR generator is turned on and off by a control
stream called a ``trigger". Triggering the ADSR generator ``on" sets off its
attack, decay, and sustain segments. Triggering it ``off" starts the
stream called a "trigger". Triggering the ADSR generator "on" sets off its
attack, decay, and sustain segments. Triggering it "off" starts the
release segment. Figure <A HREF="#fig04.01">4.1</A> shows the block
diagram representation of an ADSR envelope generator.
@ -113,9 +113,9 @@ the <I>attack</I> and <I>decay</I> parameters give the time duration of the
attack and decay segments. Fourth, a <I>sustain</I> parameter gives the level
of the sustain segment, as a fraction of the level parameter. Finally, the
<I>release</I> parameter gives the duration of the release segment. These five
values, together with the timing of the ``on" and ``off" triggers, fully
values, together with the timing of the "on" and "off" triggers, fully
determines the output of the ADSR generator. For example, the duration of the
sustain portion is equal to the time between ``on" and ``off" triggers, minus
sustain portion is equal to the time between "on" and "off" triggers, minus
the durations of the attack and decay segments.
<P>
@ -123,8 +123,8 @@ the durations of the attack and decay segments.
<DIV ALIGN="CENTER"><A NAME="fig04.02"></A><A NAME="4593"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 4.2:</STRONG>
ADSR envelope output: (a) with ``on" and ``off" triggers separated;
(b), (c) with early ``off" trigger; (d), (e) re-attacked.</CAPTION>
ADSR envelope output: (a) with "on" and "off" triggers separated;
(b), (c) with early "off" trigger; (d), (e) re-attacked.</CAPTION>
<TR><TD><IMG
WIDTH="332" HEIGHT="530" BORDER="0"
SRC="img357.png"
@ -135,23 +135,23 @@ ADSR envelope output: (a) with ``on" and ``off" triggers separated;
<P>
Figure <A HREF="#fig04.02">4.2</A> graphs some possible outputs of an ADSR
envelope generator. In
part (a) we assume that the ``on" and ``off" triggers are widely enough
separated that the sustain segment is reached before the ``off" trigger is
part (a) we assume that the "on" and "off" triggers are widely enough
separated that the sustain segment is reached before the "off" trigger is
received.
Parts (b) and (c) of Figure <A HREF="#fig04.02">4.2</A> show the result of following an
``on" trigger quickly by an ``off" one: (b) during the decay segment, and (c)
"on" trigger quickly by an "off" one: (b) during the decay segment, and (c)
even earlier, during the attack. The ADSR generator reacts to these situations
by canceling whatever remains of the attack and decay segments and continuing
straight to the release segment. Also, an ADSR generator may be retriggered
``on" before the release segment is finished or even during the attack, decay,
"on" before the release segment is finished or even during the attack, decay,
or sustain segments. Part (d) of the figure shows a reattack during the
sustain segment, and part (e), during the decay segment.
<P>
The classic application of an ADSR envelope is using a voltage-control keyboard
or sequencer to make musical notes on a synthesizer. Depressing and releasing
a key (for example) would generate ``on" and ``off" triggers. The ADSR
generator could then control the amplitude of synthesis so that ``notes" would
a key (for example) would generate "on" and "off" triggers. The ADSR
generator could then control the amplitude of synthesis so that "notes" would
start and stop with the keys. In addition to amplitude, the ADSR generator
can (and often is) used to control timbre, which can then be made to evolve
naturally over the course of each note.
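As a concrete, simplified model, the generator's output can be reduced to a list of line-segment breakpoints computed from the five parameters plus the trigger times; this sketch also handles the early-"off" cases of parts (b) and (c) of the figure by jumping straight to the release (parameter names are ours, not any particular tool's):

```python
def adsr_breakpoints(level, attack, decay, sustain, release,
                     on_time, off_time):
    """Return (time, value) breakpoints for one on/off trigger pair.
    `sustain` is a fraction of `level`; all times are in milliseconds.
    If the off trigger arrives before attack+decay finish, whatever
    remains of those segments is cancelled and the release starts at
    once from the current envelope value."""
    sus_level = sustain * level
    pts = [(on_time, 0.0)]
    held = off_time - on_time
    if held >= attack + decay:                 # full attack, decay, sustain
        pts += [(on_time + attack, level),
                (on_time + attack + decay, sus_level),
                (off_time, sus_level)]
    elif held >= attack:                       # cut off during the decay
        frac = (held - attack) / decay
        pts += [(on_time + attack, level),
                (off_time, level + frac * (sus_level - level))]
    else:                                      # cut off during the attack
        pts += [(off_time, level * held / attack)]
    pts.append((off_time + release, 0.0))
    return pts

pts = adsr_breakpoints(level=1.0, attack=10, decay=20, sustain=0.5,
                       release=30, on_time=0, off_time=100)
```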


@ -75,7 +75,7 @@ process, and analyze musical sounds, a practice which came into its modern form
in the years 1948-1952, but whose technological means and artistic uses have
undergone several revolutions since then. Nowadays most electronic music is
made using computers, and this book will focus exclusively on what used to be
called ``computer music", but which should really now be called ``electronic
called "computer music", but which should really now be called "electronic
music using a computer".
<P>
@ -117,7 +117,7 @@ appears in [<A
HREF="node202.html#r-strawn85">Str85</A>, pp. 1-68].
<P>
Although the ``level" of mathematics is not high, the mathematics itself is
Although the "level" of mathematics is not high, the mathematics itself is
sometimes quite challenging. All sorts of cool mathematics is in the reach of
any student of algebra or geometry. In the service of computer music, for
instance, we'll run into Bessel functions, Chebychev polynomials, the Central
@ -127,7 +127,7 @@ Limit Theorem, and, of course, Fourier analysis.
You don't need much background in music as it is taught in the West; in
particular, Western written music notation is not needed. Some elementary bits
of Western music theory are used, such as the tempered scale, the A-B-C system
of naming pitches, and terms like ``note" and ``chord". Also you should be
of naming pitches, and terms like "note" and "chord". Also you should be
familiar with terms of musical acoustics such as sinusoids, amplitude,
frequency, and the overtone series.


@ -103,7 +103,7 @@ having the opposite discontinuity (lower graph), which then decays smoothly.</CA
<P>
Figure <A HREF="#fig04.07">4.7</A> shows how the switch-and-ramp technique can be realized
in a block diagram. The box marked with ellipsis (``...") may hold any
in a block diagram. The box marked with ellipsis ("...") may hold any
synthesis algorithm, which we wish to interrupt discontinuously so that it
restarts from zero (as in, for example, part (a) of the previous figure). At
the same time that we trigger whatever control changes are necessary (exemplified by


@ -74,15 +74,15 @@ Polyphony
<P>
In music, the term
<A NAME="4665"></A><I>polyphony</I>
is usually used to mean ``more than one separate voice singing or playing at
is usually used to mean "more than one separate voice singing or playing at
different pitches one from another". When speaking of electronic musical
instruments we use the term to mean ``maintaining several copies of some
process in parallel." We usually call each copy a ``voice" in keeping with the
instruments we use the term to mean "maintaining several copies of some
process in parallel." We usually call each copy a "voice" in keeping with the
analogy, although the voices needn't be playing separately distinguishable
sounds.
<P>
In this language, a piano is a polyphonic instrument, with 88 ``voices". Each
In this language, a piano is a polyphonic instrument, with 88 "voices". Each
voice of the piano is normally capable of playing exactly one pitch. There
is never a question of which voice to use to play a note of a given pitch,
and no question, either, of playing several notes simultaneously at the

View File

@ -88,11 +88,11 @@ called the
<A NAME="4711"></A><I>parent</I>.
<P>
If you type ``pd" or ``pd my-name" into an object box, this creates a one-off
If you type "pd" or "pd my-name" into an object box, this creates a one-off
subpatch. The contents of the subpatch are saved as part of the parent patch,
in one file. If you make several copies of a subpatch you may change them
individually. On the other hand, you can invoke an abstraction by typing into
the box the name of a Pd patch saved to a file (without the ``.pd" extension).
the box the name of a Pd patch saved to a file (without the ".pd" extension).
In this situation Pd will read that file into the subpatch. In this way,
changes to the file propagate everywhere the abstraction is invoked.
@ -141,11 +141,11 @@ comes to the inlet of the box in the parent patch comes out of the
<P>
Pd provides an argument-passing mechanism so that you can parametrize different
invocations of an abstraction. If in an object box you type ``$1",
it is expanded to mean ``the first creation argument in my box on the
parent patch", and similarly for ``$2" and so on. The text in
invocations of an abstraction. If in an object box you type "$1",
it is expanded to mean "the first creation argument in my box on the
parent patch", and similarly for "$2" and so on. The text in
an object box is interpreted at the time the box is created, unlike the
text in a message box. In message boxes, the same ``$1" means ``the
text in a message box. In message boxes, the same "$1" means "the
first argument of the message I just received" and is interpreted whenever
a new message comes in.
@ -164,7 +164,7 @@ and 5.
Pd's abstraction mechanism: (a) invoking the abstraction,
<TT>plusminus</TT> with 5 as a creation argument; (b) the contents of the
file,
``plusminus.pd".</CAPTION>
"plusminus.pd".</CAPTION>
<TR><TD><IMG
WIDTH="353" HEIGHT="134" BORDER="0"
SRC="img378.png"
@ -174,10 +174,10 @@ file,
<P>
The <TT>plusminus</TT> object is not defined by Pd, but is rather defined
by the patch residing in the file named ``plusminus.pd". This patch is shown
by the patch residing in the file named "plusminus.pd". This patch is shown
in part (b) of the figure. The one <TT>inlet</TT> and
two <TT>outlet</TT> objects correspond to the inlets and outlets of
the <TT>plusminus</TT> object. The two ``$1" arguments (to the
the <TT>plusminus</TT> object. The two "$1" arguments (to the
<TT>+</TT> and <TT>-</TT> objects) are replaced by 5 (the creation argument of the
<TT>plusminus</TT> object).


@ -72,7 +72,7 @@ ADSR envelope generator</A>
<P>
Example D01.envelope.gen.pd (Figure <A HREF="#fig04.12">4.12</A>) shows how the <TT>line~</TT> object may
be used to generate an ADSR envelope to control a synthesis patch (only the
ADSR envelope is shown in the figure). The ``attack" button, when pressed, has
ADSR envelope is shown in the figure). The "attack" button, when pressed, has
two effects. The first (leftmost in the figure) is to set the <TT>line~</TT> object on its attack segment, with a target of 10 (the peak amplitude) over 200
msec (the attack time). Second, the attack button sets a <TT>delay 200</TT> object, so that after the attack segment is done, the decay segment can start.
The decay segment falls to a target of 1 (the sustain level) after another 2500
@ -93,10 +93,10 @@ ADSR envelope.</CAPTION>
</DIV>
<P>
The ``release" button sends the same <TT>line~</TT> object back to zero over
The "release" button sends the same <TT>line~</TT> object back to zero over
500 more milliseconds (the release time). Also, in case the
<TT>delay 200</TT> object happens to be set at the moment the ``release" button is pressed, a
``stop" message is sent to it. This prevents the ADSR generator from
<TT>delay 200</TT> object happens to be set at the moment the "release" button is pressed, a
"stop" message is sent to it. This prevents the ADSR generator from
launching its decay segment after launching its release segment.
<P>
@ -148,9 +148,9 @@ Inside the <TT>adsr</TT> abstraction.</CAPTION>
</DIV>
<P>
The attack segment goes to a target specified as ``$1" (the first
creation argument of the abstraction) over ``$2" milliseconds; these
values may be overwritten by sending numbers to the ``peak level" and ``attack"
The attack segment goes to a target specified as "$1" (the first
creation argument of the abstraction) over "$2" milliseconds; these
values may be overwritten by sending numbers to the "peak level" and "attack"
inlets. The release segment is similar, but simpler, since the target is
always zero. The hard part is the decay segment, which again must be set
off after a delay equal to the attack time (the <TT>del $2</TT> object).


@ -136,7 +136,7 @@ to implement the summing bus:
WIDTH="69" HEIGHT="39" ALIGN="MIDDLE" BORDER="0"
SRC="img386.png"
ALT="\fbox{ \texttt{catch\~}}">:
<A NAME="4908"></A>define and output a summing bus. The creation argument (``sum-bus" in this example)
<A NAME="4908"></A>define and output a summing bus. The creation argument ("sum-bus" in this example)
gives the summing bus a name so that <TT>throw~</TT> objects below can refer
to it. You may have as many summing busses (and hence <TT>catch~</TT> objects)
as you like but they must all have different names.
@ -153,8 +153,8 @@ as you like but they must all have different names.
use.
<P>
The control interface is crude: number boxes control the ``fundamental"
frequency of the bell and its duration. Sending a ``bang" message to
The control interface is crude: number boxes control the "fundamental"
frequency of the bell and its duration. Sending a "bang" message to
the <TT>s trigger</TT> object starts a note. (The note then decays over
the period of time controlled by the duration parameter; there is no
separate trigger to stop the note). There is no amplitude
@ -190,9 +190,9 @@ frequency and the relative frequency.
</LI>
</OL>
Inside the <TT>partial</TT> abstraction, the amplitude is simply taken
directly from the ``$1" argument (multiplying by 0.1 to adjust for
directly from the "$1" argument (multiplying by 0.1 to adjust for
the high individual amplitudes); the duration is calculated from the
<TT>r duration</TT> object, multiplying it by the ``$2" argument.
<TT>r duration</TT> object, multiplying it by the "$2" argument.
The frequency is computed as <IMG
WIDTH="48" HEIGHT="30" ALIGN="MIDDLE" BORDER="0"
SRC="img390.png"


@ -122,7 +122,7 @@ This has the advantage of being more explicit than the <TT>throw~</TT> /
problem.
<P>
The main job of the patch, though, is to distribute the ``note" messages to
The main job of the patch, though, is to distribute the "note" messages to
the <TT>sampvoice</TT> objects. To do this we must introduce some new Pd
objects:
@ -149,8 +149,8 @@ There is also an integer division object named <TT>div</TT> ; dividing 17 by
<A NAME="4912"></A>Polyphonic voice allocator. Creation arguments give the number of
voices in the bank and a flag (1 if voice stealing is needed, 0 if not).
The inlets are a numeric tag at left and a flag at right indicating whether
to start or stop a voice with the given tag (nonzero numbers meaning ``start"
and zero, ``stop"). The outputs are, at left, the voice number, the tag
to start or stop a voice with the given tag (nonzero numbers meaning "start"
and zero, "stop"). The outputs are, at left, the voice number, the tag
again at center, and the start/stop flag at right. In MIDI applications, the
tag can be pitch and the start/stop flag can be the note's velocity.
@ -163,13 +163,13 @@ tag can be pitch and the start/stop flag can be the note's velocity.
SRC="img398.png"
ALT="\fbox{ \texttt{makenote}}">:
<A NAME="4913"></A>Supply delayed note-off messages to match note-on messages. The inlets are
a tag and start/stop flag (``pitch" and ``velocity" in MIDI usage) and the
a tag and start/stop flag ("pitch" and "velocity" in MIDI usage) and the
desired duration in milliseconds. The tag/flag pair are repeated to
the two outlets as they are received; then, after the delay, the tag is
repeated with flag zero to stop the note after the desired duration.
<P>
The ``note" messages contain fields for pitch, amplitude, duration,
The "note" messages contain fields for pitch, amplitude, duration,
sample number, start location in the sample, rise time, and decay time. For
instance, the message,
<PRE>
@ -177,7 +177,7 @@ instance, the message,
</PRE>
(if received by the <TT>r note</TT> object)
means to play a note at pitch 60 (MIDI units), amplitude 90 dB, one second
long, from the wavetable named ``sample2", starting at a point 500 msec
long, from the wavetable named "sample2", starting at a point 500 msec
into the wavetable, with rise and decay times of 10 and 20 msec.
<P>
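As a quick illustration of the message layout, here is a hypothetical Python unpacking of the seven fields; the field names follow the prose above and are not part of any Pd object:

```python
# Hypothetical sketch: unpack the seven fields of a "note" message.
# Field names are taken from the description in the text.

def parse_note(msg):
    pitch, amp_db, dur_ms, sample_no, start_ms, rise_ms, decay_ms = msg
    return {
        "pitch": pitch,                     # MIDI units
        "amplitude": amp_db,                # dB
        "duration": dur_ms,                 # msec
        "sample": f"sample{sample_no}",     # wavetable name
        "start": start_ms,                  # msec into the wavetable
        "rise": rise_ms,                    # msec
        "decay": decay_ms,                  # msec
    }
```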
@ -192,7 +192,7 @@ a unique number corresponding to the note.
The next step is to use the <TT>poly</TT> object to determine which voice to play
which note. The <TT>poly</TT> object expects separate messages to start
and stop tasks (i.e., notes). So the tag and duration are first fed to the
<TT>makenote</TT> object, whose outputs include a flag (``velocity") at
<TT>makenote</TT> object, whose outputs include a flag ("velocity") at
right and the tag again at left. For each tag <TT>makenote</TT> receives, two pairs
of numbers are output, one to start the note, and another, after a delay
equal to the note duration, to stop it.
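The start/stop pairing that `makenote` provides can be modeled in a few lines. This is an illustrative sketch (invented function name, millisecond time stamps), not the actual object:

```python
import heapq

def makenote_schedule(events):
    """Sketch of "makenote"-style behavior: for each
    (time, tag, velocity, duration) note-on, emit the note-on
    immediately and a matching note-off (velocity zero) after the
    duration.  Returns all (time, tag, velocity) messages in
    time order."""
    out = []
    for time, tag, vel, dur in events:
        heapq.heappush(out, (time, tag, vel))        # note-on
        heapq.heappush(out, (time + dur, tag, 0))    # delayed note-off
    return [heapq.heappop(out) for _ in range(len(out))]
```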
@ -200,7 +200,7 @@ equal to the note duration, to stop it.
<P>
Having treated <TT>poly</TT> to this separated input, we now have to strip
the messages corresponding to the ends of notes, since we really only need
combined ``note" messages with
combined "note" messages with
duration fields. The <TT>stripnote</TT> object does this job. Finally, the
voice number we have calculated is prepended to the seven parameters we
started with (the <TT>pack</TT> object), so that the output of the
@ -208,11 +208,11 @@ started with (the <TT>pack</TT> object), so that the output of the
<PRE>
4 60 90 1000 2 500 10 20
</PRE>
where the ``4" is the voice number output by the <TT>poly</TT> object.
where the "4" is the voice number output by the <TT>poly</TT> object.
The voice number is used to route the message
to the desired voice using the <TT>route</TT> object. The appropriate
<TT>sampvoice</TT> object then gets the original list starting with
``60".
"60".
<P>
Inside the <TT>sampvoice</TT> object (Figure <A HREF="#fig04.21">4.21</A>), the message
@ -239,20 +239,20 @@ list generated by the <TT>pack</TT> object at the center of the voice patch.
<P>
We arbitrarily decide that the ramp will last ten thousand seconds (this is the
``1e+07" appearing in the message box sent to the wavetable index generator),
"1e+07" appearing in the message box sent to the wavetable index generator),
hoping that this is at least as long as any note we will play. The ending index
is the starting index plus the number of samples to ramp through. At a
transposition factor of one, we should move by 441,000,000 samples during those
10,000,000 milliseconds, or proportionally more or less depending on the
transposition factor. This transposition factor is computed by the <TT>mtof</TT> object, dividing by 261.62 (the frequency corresponding to MIDI note 60) so
that a specified ``pitch" of 60 results in a transposition factor of one.
that a specified "pitch" of 60 results in a transposition factor of one.
<P>
These and other parameters are combined in one message
via the <TT>pack</TT> object so that the following message boxes can
generate the needed control messages. The only novelty is
the <TT>makefilename</TT> object, which converts numbers such as ``2" to
symbols such as ``sample2" so that the <TT>tabread4~</TT> object's
the <TT>makefilename</TT> object, which converts numbers such as "2" to
symbols such as "sample2" so that the <TT>tabread4~</TT> object's
wavetable may be set.
<P>
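The ramp arithmetic above can be spelled out numerically. This sketch assumes the standard MIDI-to-frequency formula (which `mtof` implements) and a 44.1 kHz sample rate; the helper names are invented:

```python
def mtof(m):
    """MIDI pitch to frequency in Hz (standard formula)."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def ramp_end_index(start_index, pitch, sample_rate=44100, ramp_ms=1e7):
    """Ending wavetable index for the ten-thousand-second ramp:
    at transposition factor 1 (pitch 60) the ramp traverses
    44100 samples/sec * 10000 sec = 441,000,000 samples."""
    factor = mtof(pitch) / 261.62     # pitch 60 -> factor of (almost exactly) 1
    return start_index + factor * sample_rate * (ramp_ms / 1000.0)
```

Raising the pitch by an octave doubles the transposition factor, and with it the number of samples traversed.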

View File

@ -115,7 +115,7 @@ off any notes?
WIDTH="11" HEIGHT="13" ALIGN="BOTTOM" BORDER="0"
SRC="img262.png"
ALT="$1$">. While
a note is playing, a new note is started using the ``rampdown" voice-stealing
a note is playing, a new note is started using the "rampdown" voice-stealing
technique. What is the maximum output?
<P>

View File

@ -77,7 +77,7 @@ in electronic music, we return to describing audio synthesis and processing
techniques. So far we have seen additive and wavetable-based methods. In this
chapter we will introduce three so-called <I>modulation</I> techniques:
<I>amplitude modulation</I>, <I>frequency modulation</I>, and
<I>waveshaping</I>. The term ``modulation" refers loosely to any technique
<I>waveshaping</I>. The term "modulation" refers loosely to any technique
that systematically alters the shape of a waveform by bending its graph
vertically or horizontally. Modulation is widely used for building synthetic
sounds with various families of <I>spectra</I>, for which we must develop some

View File

@ -200,7 +200,7 @@ component of zero frequency, for which
ALT="$a$">--without dividing by two.
(Components of zero frequency are often called
<I>DC</I><A NAME="5614"></A>
components; ``DC" is historically an acronym for ``direct current").
components; "DC" is historically an acronym for "direct current").
These conventions for amplitudes in spectra will simplify the mathematics later
in this chapter; a deeper reason for them will become apparent in
Chapter <A HREF="node104.html#chapter-delay">7</A>.
@ -248,7 +248,7 @@ the signal's momentary behavior.
<P>
This way of viewing sounds is greatly oversimplified. The true behavior of
audible pitch and timbre has many aspects which can't be explained in terms of
this model. For instance, the timbral quality called ``roughness" is sometimes
this model. For instance, the timbral quality called "roughness" is sometimes
thought of as being reflected in rapid changes in the spectral envelope over
time. The simplified description developed here is useful nonetheless in
discussions about how to construct discrete or continuous spectra for a wide

View File

@ -192,7 +192,7 @@ of a sound, called
<A NAME="5641"></A><I>carrier signal</I>, which
is simply multiplied by the input. In this context the input is called the
<A NAME="5643"></A><I>modulating signal</I>.
The term ``ring modulation" is often used
The term "ring modulation" is often used
more generally to mean multiplying any two signals together, but here we'll
just consider using a sinusoidal carrier signal. (The technique of ring
modulation dates from the analog era [<A
@ -312,7 +312,7 @@ with <IMG
</DIV>
<P>
Parts (a) and (b) of the figure show ``general" cases where <IMG
Parts (a) and (b) of the figure show "general" cases where <IMG
WIDTH="13" HEIGHT="13" ALIGN="BOTTOM" BORDER="0"
SRC="img7.png"
ALT="$\alpha $"> and <IMG

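The sum-and-difference behavior of ring modulation follows from the product identity cos(a)cos(b) = [cos(a-b) + cos(a+b)]/2: each partial of the modulating signal splits into two sidebands. A minimal numeric check, assuming cosine partials (function names invented):

```python
import math

def ringmod(freq_mod, freq_car, sr=44100, n=64):
    """Multiply a cosine modulating signal by a cosine carrier."""
    return [math.cos(2 * math.pi * freq_mod * t / sr) *
            math.cos(2 * math.pi * freq_car * t / sr)
            for t in range(n)]

def sum_and_diff(freq_mod, freq_car, sr=44100, n=64):
    """The same signal written as half-amplitude partials at the
    difference and sum frequencies."""
    return [0.5 * math.cos(2 * math.pi * (freq_car - freq_mod) * t / sr) +
            0.5 * math.cos(2 * math.pi * (freq_car + freq_mod) * t / sr)
            for t in range(n)]
```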
View File

@ -365,7 +365,7 @@ increases; if there are <IMG
only <IMG
WIDTH="12" HEIGHT="14" ALIGN="BOTTOM" BORDER="0"
SRC="img58.png"
ALT="$k$"> ``straight" terms in the product, but there are <IMG
ALT="$k$"> "straight" terms in the product, but there are <IMG
WIDTH="75" HEIGHT="34" ALIGN="MIDDLE" BORDER="0"
SRC="img442.png"
ALT="$({k^2}-k)/2$">
@ -520,7 +520,7 @@ f(x+y)[n + \tau] = f(x+y)[n].
<BR CLEAR="ALL">
<P></P>
This has been experienced by every electric guitarist who has set the amplifier
to ``overdrive" and played the open B and high E strings together: the
to "overdrive" and played the open B and high E strings together: the
distortion product sometimes sounds at the pitch of the low E string, two
octaves below the high one.
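The arithmetic behind the guitar example can be checked directly, assuming standard-tuning string frequencies (these values are an assumption, not from the text):

```python
# Standard-tuning string frequencies in Hz (assumed values):
# open B ~246.94, high E ~329.63, low E ~82.41.
f_b, f_e_high, f_e_low = 246.94, 329.63, 82.41

diff = f_e_high - f_b        # difference tone produced by the distortion
two_oct_down = f_e_high / 4  # two octaves below the high E
```

The difference tone (about 82.7 Hz) lands within a third of a Hertz of the low E string, two octaves below the high E.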

View File

@ -99,7 +99,7 @@ Ring modulation of a complex tone by a sinusoid: (a) its realization;
In the signal generation portion of the patch (part (a) of the figure), we sum
the six partials and multiply the sum by the single, carrier oscillator.
(The six signals are summed implicitly by connecting them all to the same
inlet of the <TT>*~</TT> object.) The value of ``fundamental" at the top
inlet of the <TT>*~</TT> object.) The value of "fundamental" at the top
is computed to line up well with the spectral analysis, whose result is
shown in part (b) of the figure.

View File

@ -184,7 +184,7 @@ an inaudibly quiet one--is about 100 dB.
Amplitude is related in an inexact way to the perceived loudness of a sound.
In general, two signals with the same peak or RMS amplitude won't necessarily
have the same loudness at all. But amplifying a signal by 3 dB, say, will
fairly reliably make it sound about one ``step" louder. Much has been made of
fairly reliably make it sound about one "step" louder. Much has been made of
the supposedly logarithmic nature of human hearing (and other senses), which
may partially explain why decibels are such a useful scale of
amplitude [<A

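The decibel arithmetic here is standard: a gain of d dB multiplies amplitude by 10^(d/20), so 3 dB is an amplitude factor of about 1.41 (roughly a doubling of power), and a 100 dB dynamic range spans an amplitude ratio of 100,000. A small sketch:

```python
import math

def db_to_amplitude_ratio(db):
    """Linear amplitude factor corresponding to a gain in decibels."""
    return 10.0 ** (db / 20.0)

def amplitude_ratio_to_db(ratio):
    """Gain in decibels corresponding to a linear amplitude factor."""
    return 20.0 * math.log10(ratio)
```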
View File

@ -214,8 +214,8 @@ g(x) = e ^ {- x ^ 2}
<BR CLEAR="ALL">
<P></P>
Except for a missing normalization factor, this is a Gaussian distribution,
sometimes called a ``bell curve". The amplitudes of the harmonics are
given by Bessel ``I" type functions.
sometimes called a "bell curve". The amplitudes of the harmonics are
given by Bessel "I" type functions.
<P>
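One consequence of g(x) = e^{-x^2} being an even function can be verified numerically: driving it with a cos(θ) gives e^{-a²cos²θ}, which has period π in θ, so only even harmonics appear in the output. A brute-force Fourier check (a sketch for illustration, not the book's method):

```python
import math

def harmonic_amplitudes(a, n_harm=8, n=1024):
    """Fourier magnitudes of g(a*cos(theta)) with g(x) = exp(-x*x),
    computed by a direct discrete Fourier sum over one period."""
    samples = [math.exp(-(a * math.cos(2 * math.pi * k / n)) ** 2)
               for k in range(n)]
    amps = []
    for h in range(n_harm + 1):
        re = sum(s * math.cos(2 * math.pi * h * k / n)
                 for k, s in enumerate(samples)) / n
        im = sum(s * math.sin(2 * math.pi * h * k / n)
                 for k, s in enumerate(samples)) / n
        amps.append(math.hypot(re, im))
    return amps
```

The odd-numbered bins come out at floating-point noise level while the even ones carry the spectrum.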
Another fine choice is the (again unnormalized) Cauchy distribution:

View File

@ -81,7 +81,7 @@ carrier is an integer multiple of the fundamental frequency.
<P>
In the stretched wavetable approach we can accomplish this simply by sampling
a sinusoid and transposing it to the desired ``pitch". The transposed pitch
a sinusoid and transposing it to the desired "pitch". The transposed pitch
isn't heard as a periodicity since the wavetable itself is read periodically at
the fundamental frequency. Instead, the sinusoid is transposed as a spectral
envelope.
@ -124,7 +124,7 @@ formant center frequency to be <IMG
WIDTH="19" HEIGHT="29" ALIGN="MIDDLE" BORDER="0"
SRC="img586.png"
ALT="$\omega_b$">,
we set the ``stretch" parameter to the <I>center frequency quotient</I>
we set the "stretch" parameter to the <I>center frequency quotient</I>
defined as <!-- MATH
${\omega_c}/\omega$
-->
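The center frequency quotient itself is simple arithmetic once both controls are converted from MIDI units to Hz; a minimal sketch (function names invented, standard MIDI-to-frequency formula assumed):

```python
def mtof(m):
    """MIDI pitch to frequency in Hz (standard formula)."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def center_frequency_quotient(fundamental_midi, center_midi):
    """The "stretch" parameter for the stretched wavetable:
    center frequency divided by fundamental, both in Hz."""
    return mtof(center_midi) / mtof(fundamental_midi)
```

Setting the center frequency an octave above the fundamental gives a quotient of 2, placing the formant at the second harmonic.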

View File

@ -100,7 +100,7 @@ stretched wavetable lookup.</CAPTION>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 6.14:</STRONG>
Intermediate audio signals from Figure <A HREF="#fig06.12">6.13</A>: (a) the
result of multiplying the phasor by the ``index"; (b) the same, clipped to
result of multiplying the phasor by the "index"; (b) the same, clipped to
lie between -0.5 and 0.5; (c) the output.</CAPTION>
<TR><TD><IMG
WIDTH="349" HEIGHT="429" BORDER="0"