Audio system measurements are used to quantify audio system performance. These measurements are made for several purposes. Designers take measurements to specify the performance of a piece of equipment. Maintenance engineers make them to ensure equipment is still working to specification, or to ensure that the cumulative defects of an audio path are within acceptable limits. Audio system measurements often incorporate psychoacoustic principles so that the system is measured in a way that relates to human hearing.
Subjectively valid methods came to prominence in consumer audio in the UK and Europe in the 1970s, when the introduction of compact cassette tape, dbx and Dolby noise reduction techniques revealed the unsatisfactory nature of many basic engineering measurements. The specification of CCIR-468 weighted quasi-peak noise, and of weighted quasi-peak wow and flutter, became particularly widely used, and attempts were made to find more valid methods for distortion measurement.
Measurements based on psychoacoustics, such as the measurement of noise, often use a weighting filter. It is well established that human hearing is more sensitive to some frequencies than others, as demonstrated by equal-loudness contours, but it is not well appreciated that these contours vary depending on the type of sound. The measured curves for pure tones, for instance, are different from those for random noise. The ear also responds less well to short bursts, below 100 to 200 ms, than to continuous sounds,[1] such that a quasi-peak detector has been found to give the most representative results when noise contains clicks or bursts, as is often the case for noise in digital systems.[2] For these reasons, a set of subjectively valid measurement techniques has been devised and incorporated into BS, IEC, EBU and ITU standards. These methods of audio quality measurement are used by broadcast engineers throughout most of the world, as well as by some audio professionals, though the older A-weighting standard for continuous tones is still commonly used by others.[3]
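The shape of a weighting filter can be expressed analytically. As an illustrative sketch (the function name and choice of Python are ours, not part of any standard cited above), the widely used A-weighting gain can be evaluated from the standard analytic formula:

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting gain in dB at frequency f (Hz),
    from the standard analytic formula (IEC 61672 pole frequencies)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    # +2.00 dB offset normalises the curve to 0 dB at 1 kHz.
    return 20.0 * math.log10(ra) + 2.00

# A-weighting de-emphasises low and very high frequencies:
print(a_weighting_db(100.0))    # ≈ -19.1 dB
print(a_weighting_db(1000.0))   # ≈ 0.0 dB by definition
print(a_weighting_db(10000.0))  # ≈ -2.5 dB
```

The heavy attenuation at low frequencies, and the mild roll-off above 10 kHz, is one reason A-weighting can understate the annoyance of predominantly high-frequency noise relative to a 468-weighted measurement.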
No single measurement can assess audio quality. Instead, engineers use a series of measurements to analyze various types of degradation that can reduce fidelity. Thus, when testing an analogue tape machine it is necessary to test for wow and flutter and tape speed variations over longer periods, as well as for distortion and noise. When testing a digital system, testing for speed variations is normally considered unnecessary because of the accuracy of clocks in digital circuitry, but testing for aliasing and timing jitter is often desirable, as these have caused audible degradation in many systems.[citation needed]
Once subjectively valid methods have been shown to correlate well with listening tests over a wide range of conditions, such methods are generally adopted as preferred. Standard engineering methods are not always sufficient when comparing like with like. One CD player, for example, might have higher measured noise than another when measured with an RMS method, or even an A-weighted RMS method, yet sound quieter and measure lower when 468-weighting is used. This could be because it has more noise at high frequencies, or even at frequencies beyond 20 kHz, both of which are less important since human ears are less sensitive to them (see Noise shaping). This effect is the basis of Dolby B and the reason it was introduced. Cassette noise, which was predominantly high frequency and unavoidable given the small size and speed of the recorded track, could be made subjectively much less important. The noise sounded 10 dB quieter, but failed to measure much better unless 468-weighting was used rather than A-weighting.
Digital systems do not suffer from many of these effects at the signal level, even though the same physical processes occur in the circuitry, because the data being handled is symbolic. As long as each symbol survives the transfer between components and can be perfectly regenerated (e.g., by pulse shaping techniques), the data itself is perfectly maintained. The data is typically buffered in a memory and clocked out by a very precise crystal oscillator. The data usually does not degenerate as it passes through many stages, because each stage regenerates new symbols for transmission.
Digital systems have their own problems. Digitizing adds noise, which is measurable and depends on the audio bit depth of the system, regardless of other quality issues. Timing errors in sampling clocks (jitter) result in non-linear distortion (frequency modulation) of the signal. One quality measurement for a digital system, the bit error rate, relates to the probability of an error in transmission or reception. Other metrics on the quality of the system are defined by sample rate and bit depth. In general, digital systems are much less prone to error than analogue systems; however, nearly all digital systems have analogue inputs and/or outputs through which they interact with the analogue world. These analogue components can suffer analogue effects and potentially compromise the integrity of a well-designed digital system.
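The dependence of quantization noise on bit depth can be illustrated with the familiar 6.02N + 1.76 dB rule for a full-scale sine wave. This sketch (the function name is ours) assumes an ideal uniform quantizer with no dither:

```python
def quantization_snr_db(bits):
    """Theoretical signal-to-quantization-noise ratio, in dB, for a
    full-scale sine through an ideal uniform N-bit quantizer."""
    return 6.02 * bits + 1.76

# 16-bit audio (CD) gives roughly 98 dB; 24-bit roughly 146 dB.
for n in (16, 20, 24):
    print(f"{n}-bit: {quantization_snr_db(n):.2f} dB")
```

Each added bit of depth lowers the quantization noise floor by about 6 dB, which is why bit depth sets a hard lower bound on the noise a digital system can achieve independent of its analogue stages.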
Sequence testing uses a specific sequence of test signals, for frequency response, noise, distortion etc., generated and measured automatically to carry out a complete quality check on a piece of equipment or signal path. A single 32-second sequence was standardized by the EBU in 1985, incorporating 13 tones (40 Hz–15 kHz at −12 dB) for frequency response measurement, two tones for distortion (1024 Hz/60 Hz at +9 dB) plus crosstalk and compander tests. This sequence, which began with a 110-baud FSK signal for synchronizing purposes, also became CCITT standard O.33 in 1985.[12]
Lindos Electronics expanded the concept, retaining the FSK idea and inventing segmented sequence testing, which separated each test into a 'segment' starting with an identifying character transmitted as 110-baud FSK, so that segments could be regarded as 'building blocks' for a complete test suited to a particular situation. Regardless of the mix chosen, the FSK provides both identification and synchronization for each segment, so that sequence tests sent over networks and even satellite links are automatically responded to by measuring equipment. Thus TUND represents a sequence made up of four segments that test alignment level, frequency response, noise and distortion in less than a minute, with many other tests, such as wow and flutter, headroom, and crosstalk, also available as segments.[citation needed]
The Lindos sequence test system is now a de facto standard[citation needed] in broadcasting and many other areas of audio testing, with over 25 different segments recognized by Lindos test sets, and the EBU standard is no longer used.
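A tone-based test segment of the kind described above can be sketched in code. The sample rate, tone durations and tone list below are illustrative assumptions only, not the actual EBU or Lindos specification:

```python
import math

SAMPLE_RATE = 48000  # assumed sample rate for this sketch

def tone(freq_hz, level_db, duration_s):
    """Generate a sine tone at the given level in dB relative to full scale."""
    amp = 10 ** (level_db / 20.0)
    n = int(SAMPLE_RATE * duration_s)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# Hypothetical frequency-response segment: tones at -12 dB, in the
# spirit of the 13-tone EBU sequence (exact tone list not reproduced here).
freqs = [40, 100, 1000, 5000, 10000, 15000]
segment = []
for f in freqs:
    segment.extend(tone(f, -12.0, 0.5))
```

Measuring equipment receiving such a segment steps through the known tone list and records the level of each tone, yielding a frequency response in a few seconds.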
Many audio components are tested for performance using objective and quantifiable measurements, e.g., THD, dynamic range and frequency response. Some take the view that objective measurements are useful and often relate well to subjective performance, i.e., the sound quality as experienced by the listener.[13] Floyd Toole has extensively evaluated loudspeakers in acoustical engineering research.[14][15] In a peer-reviewed scientific journal, Toole has presented findings that subjects have a range of abilities to distinguish good loudspeakers from bad, and that blind listening tests are more reliable than sighted tests. He found that subjects can more accurately perceive differences in speaker quality during monaural playback through a single loudspeaker, whereas subjective perception of stereophonic sound is more influenced by room effects.[16] One of Toole's papers showed that objective measurements of loudspeaker performance match subjective evaluations in listening tests.[17]
Some argue that because human hearing and perception are not fully understood, listener experience should be valued above everything else. This is often encountered in the world of home audio publications.[18] The usefulness of blind listening tests and common objective performance measurements, e.g., THD, are questioned.[19] For instance, crossover distortion at a given THD is much more audible than clipping distortion at the same THD, since the harmonics produced are at higher frequencies. This does not imply that the defect is somehow unquantifiable or unmeasurable; just that a single THD number is inadequate to specify it and must be interpreted with care. Taking THD measurements at different output levels would expose whether the distortion is clipping (which increases with level) or crossover (which decreases with level).
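The point about interpreting THD can be made concrete: a single-number THD is computed from the harmonic amplitudes, and repeating the measurement at several levels distinguishes clipping from crossover distortion. The sketch below is illustrative only (single-bin DFT analysis, hard clipping as a stand-in for amplifier clipping; all names are ours):

```python
import math

def harmonic_amplitude(signal, sample_rate, freq):
    """Amplitude of the sinusoidal component at `freq`, via a
    single-bin DFT (assumes an integer number of cycles)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def thd(signal, sample_rate, fundamental, n_harmonics=9):
    """Root-sum-square of harmonic amplitudes over the fundamental."""
    a1 = harmonic_amplitude(signal, sample_rate, fundamental)
    power = sum(harmonic_amplitude(signal, sample_rate, k * fundamental) ** 2
                for k in range(2, n_harmonics + 2))
    return math.sqrt(power) / a1

# Hard-clipped 1 kHz sine: this kind of distortion grows with level.
sr, f0 = 48000, 1000
t = [math.sin(2 * math.pi * f0 * i / sr) for i in range(sr // 10)]  # 100 cycles
clipped = [max(-0.8, min(0.8, 1.2 * s)) for s in t]
print(f"THD of clipped sine: {thd(clipped, sr, f0):.1%}")
```

Running the same measurement at reduced drive level would show the clipped signal's THD falling toward zero, whereas a crossover-distorted signal's THD would rise as the level drops, which is exactly the distinction a single THD figure hides.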
Whichever view one takes, some measurements have been historically favoured. For example, THD combines a number of harmonics with equal weighting, even though research[citation needed] indicates that lower-order harmonics are harder to hear at the same level than higher-order ones. In addition, even-order harmonics are said to be generally harder to hear than odd-order ones. A number of formulas that attempt to correlate THD with actual audibility have been published; however, none has gained mainstream use.[citation needed]
The mass-market consumer magazine Stereophile promotes the claim that home audio enthusiasts prefer sighted tests to blind tests.[20][21]