
January 5th 05, 05:38 AM
posted to uk.rec.audio
Capacitor comparisons
Greetings to All.
First of all, thanks to all who have given suggestions,
ideas and feedback regarding the capacitor evaluation
project. It has been suggested to me that
the ABX method should be used.
Does anyone have experience of or comments upon
this method, and suggestions as to how it should
be carried out?
I have a panel of ten people, and plan to test them
separately, with three pieces of music:
1. Piano and female voice
2. String quartet
(These, in my experience, are the most revealing.)
3. A title known to the listener, on CD,
which he/she may bring along.
It now seems, to keep variables to a minimum, that
the two caps to be compared will be fitted as output
capacitors to the same pre-amplifier, and the comparison
made between the two outputs.
The tests will be done in a studio control room
environment, with signals under evaluation fed
to a studio console, the automation of which can be
used to select the source at a pre-determined
instant following time code. This will
allow repetition of the test with great accuracy.
Each member of the panel will be tested separately,
and will listen to each piece of music twice. On the first
run there will be no changes, on the second, the outputs will
be switched at a TC known only to the tester. The listener
will press a cue marker, which will capture the time code at
which he/she perceived a change. This can then be compared
with the TC of the real changes.
This will be repeated once for each member of the panel:
ten times per piece of music, thirty times in all.
We are looking for a 60% result. By comparing TC markers,
we can rule out spurious or faulty reactions.
Any comments/suggestions welcome.
Iain

January 5th 05, 06:08 AM
posted to uk.rec.audio
Capacitor comparisons
Iain M Churches wrote:
> (...) The listener will press a cue marker, which will capture the
> time code at which he/she perceived a change. This can then be
> compared with the TC of the real changes. (...) We are looking for
> a 60% result.
I am not sure what you would class as a success here: if they
find the change within 1 second, 0.5 seconds, 10 seconds?
I think for what you are doing ABX would be better.
And I still have doubts about the two outputs :-)
--
Nick
"Life has surface noise" - John Peel 1939-2004

January 5th 05, 06:39 AM
posted to uk.rec.audio
Capacitor comparisons
On Wed, 5 Jan 2005 08:38:13 +0200, "Iain M Churches"
wrote:
> (...) It has been suggested to me that the ABX method should be used.
> Does anyone have experience of or comments upon this method, and
> suggestions as to how it should be carried out? (...)
OK - comments.
First the capacitors. Go through many samples with a meter to make
sure they are as far as possible equal in value. Install them in a
switcher box so they can be changed without delay.
Check the whole system to make sure that when a change is made the
output level remains the same. Also make sure there are no switching
transients that could identify which is being used.
Put the whole thing in a separate room from the subjects. Identify the
point at which a change is made with some kind of signal light that
invites the subject to make his choice. Allow the subject to listen
for as long as he needs to make his choice.
At any point in the test, allow the subject to ask to hear either of
the capacitors identified, to verify impressions of difference.
The tester should determine the order of switching just before the
test with thirty coin tosses. He should write these down and follow
his list. The order should be a new random set for each subject.
Go for more than 60% - 75% would be more reasonable.
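For what it's worth, here is a minimal sketch (my own, not from the thread) of the binomial arithmetic behind the 60% versus 75% thresholds, assuming each of the 30 trials is an independent hit-or-miss judgement with a 50% chance of guessing right:

# Sketch: how likely is a score of at least `correct` out of 30 by guesswork alone?
from math import comb

def p_value(correct, trials=30):
    """One-sided probability of scoring at least `correct` by pure chance (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for score in (18, 21, 23):  # 60%, 70% and roughly 75% of 30 trials
    print(f"{score}/30 correct: chance probability = {p_value(score):.4f}")

On these assumptions, 18/30 (60%) comes out at a chance probability of roughly 0.18, whereas 23/30 comes out near 0.003, which is why the higher criterion is the safer bet.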
Do not let any subject meet the proctor at any point.
d
Pearce Consulting
http://www.pearce.uk.com

January 5th 05, 08:11 AM
posted to uk.rec.audio
Capacitor comparisons
In article , Iain M Churches
wrote:
> Greetings to All.
> Each member of the panel will be tested separately, and will listen to
> each piece of music twice. On the first run there will be no changes, on
> the second, the outputs will be switched at a TC known only to the
> tester. The listener will press a cue marker, which will capture the
> time code at which he/she perceived a change. This can then be compared
> with the TC of the real changes.
I would prefer the *listener* to be able to operate the ABX switch as and
when they choose. The critical point is that they should have no
information beyond what they then hear as to whether 'X' is 'A' or 'B'. I
suspect that this would allow the listener to detect smaller changes than
the protocol you describe. Given the variability of music I also suspect
that a switch at some moments would take longer to notice than at other
times. Hence this may have an effect on the statistics, meaning that a
larger number of tests would be required.
FWIW I am not really interested in "how long people take", but in their
ability to just detect (or not!). Hence introducing time as a factor is one
I would personally avoid as I fear it complicates the real issue.
However, provided there is no time pattern which the listener can predict
or deduce, your protocol seems OK.
I would also wish to have a lot of measured data on the performance of the
system to establish the level of any 'uncontrolled' effects which may
influence the results. Ideally, this would be in advance so that any
'contentious' points could be sorted out before actual listening tests.
Slainte,
Jim
--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

January 5th 05, 10:52 AM
posted to uk.rec.audio
Capacitor comparisons
In article , Iain M Churches
wrote:
"Don Pearce" wrote in message
...
First the capacitors. Go through many samples with a meter to make
sure they are as far as possible equal in value. Install them in a
switcher box so they can be changed without delay.
OK. The service technician at the studio has promised to select with a
bridge carefully matched samples from those we give him.
I would wish to know the details of the measurement. Frequencies used (if
sinewave), distortion levels, signal levels, measurement system
calibration, etc. Ideally, I'd wish to know values measured across the
audio band, and get a match to a tight level.
If you are using a single-frequency cap bridge, you may need to consider
the effects of series ESR, etc. Given the points made elsewhere about
the dc you may also find it useful to measure with dc applied, and perhaps
even before/after use to see if this has had some effect!
Afraid I am quasi-paranoid about such things as I've spent 20+ years
working on precision measurement systems for people like the NPL 8-
Data only becomes information when you know how it was produced. :-)
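To make "a match to a tight level" concrete in response terms, here is a rough sketch of the RC high-pass an output coupling cap forms with the load it drives; the 2.2 uF value and 10 kohm load are my own illustrative assumptions, not figures from Iain's pre-amplifier:

# Sketch: low-frequency effect of a coupling-cap value mismatch into an assumed load.
from math import pi, sqrt, log10

def hp_level_db(f_hz, c_farads, r_ohms):
    """Level (dB) at f_hz of the high-pass formed by the coupling cap and its load."""
    fc = 1.0 / (2 * pi * r_ohms * c_farads)          # -3 dB corner frequency
    ratio = f_hz / fc
    return 20 * log10(ratio / sqrt(1 + ratio ** 2))

load = 10e3                                           # assumed load impedance, ohms
for cap in (2.2e-6, 2.2e-6 * 1.05):                   # nominal sample vs one 5% high
    corner = 1.0 / (2 * pi * load * cap)
    print(f"C = {cap * 1e6:.2f} uF: corner {corner:.2f} Hz, "
          f"level at 20 Hz {hp_level_db(20.0, cap, load):+.3f} dB")

Run with these assumed values, a 5% capacitance difference shifts the 20 Hz level by only a few hundredths of a dB, so ESR and dielectric behaviour are arguably the more interesting things to log.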
>> Check the whole system to make sure that when a change is made the
>> output level remains the same.
> Yes of course ;-) We can see the output levels (sample and hold) on the
> console meter bridge.
I think I'd like to know things like the frequency and phase response with
each cap in circuit, again with a fair degree of precision.
> Do you think that we should let the subject know that a change had been
> made? (It would be simple to rig a cue light for this purpose.) I think
> it would be better if the subject were asked to detect the change
> without any visual warning of when and if it had taken place.
It may be better to give a signal when a change *may* have been made. Then
sometimes change, sometimes not, and sometimes change there-and-back
swiftly (in case of things like a 'click' or contact cleaning). However, as
indicated elsewhere, my preference is ABX as opposed to AB, with the
listener choosing as and when to switch.
>> At any point in the test, allow the subject to ask to hear either of
>> the capacitors identified, to verify impressions of difference.
> We planned to give the subject a cue button with which he could, on the
> fly, send a TC marker when he hears a change. This will allow us to roll
> back to the same point to recheck.
>> The tester should determine the order of switching just before the
>> test with thirty coin tosses. He should write these down and follow
>> his list.
> Thirty coin tosses? We shall be spending most of our time in the dimly
> lit control room on our hands and knees looking for 1 EUR coins :-))
Use a pseudo-random generator to give you a set of 'one time pads' of 1's
and 0's. Then choose a pad sheet just before each session, burn it after
the session, and use a different one next time. :-)
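A possible sketch of the pad idea, using Python's secrets module (a CSPRNG) so the sequence is not predictable the way a basic RND() call might be; the sheet size and the A/B labels are my own choices for illustration:

# Sketch: one fresh, unpredictable A/B switching order ("pad sheet") per subject.
import secrets

def make_pad(trials=30):
    """Return a random A/B order for one session, drawn from a CSPRNG."""
    return ["A" if secrets.randbits(1) else "B" for _ in range(trials)]

for subject in range(1, 11):   # one sheet per panel member; use once, then discard
    print(f"Subject {subject:2d}: {' '.join(make_pad())}")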
> We could use a random numbers generator to pick from three numbers, and
> feed these to the desk automation.
Erm... how are you planning to do the switching?
Slainte,
Jim
--
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Audio Misc http://www.st-and.demon.co.uk/AudioMisc/index.html
Armstrong Audio http://www.st-and.demon.co.uk/Audio/armstrong.html
Barbirolli Soc. http://www.st-and.demon.co.uk/JBSoc/JBSoc.html

January 5th 05, 05:09 PM
posted to uk.rec.audio
Capacitor comparisons
On Wed, 5 Jan 2005 11:37:50 +0200, "Iain M Churches"
wrote:
"Don Pearce" wrote in message
...
First the capacitors. Go through many samples with a meter to make
sure they are as far as possible equal in value. Install them in a
switcher box so they can be changed without delay.
OK. The service technician at the studio has promised to select with
a bridge carefully matched samples from those we give him.
Check the whole system to make sure that when a change is made the
output level remains the same.
Yes of course;.) We can see the output levels (sample and hold) on the
console meter bridge.
Also make sure there are no switching
transients that could identify which is being used.
We can guarantee this if we let the console do the switching of two
outputs, but can we guarantee it if we actually switch the capacitors
with 200V across them?
Yes - just arrange a bleed resistor to keep the voltage across the cap
at all times.
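One way to read Don's suggestion, as a back-of-envelope sketch: hold the unselected cap at its ~200 V operating point through a high-value resistor so that switching it in produces no charging thump. All component values below are my assumptions, not the actual circuit:

# Sketch: keep-charged resistor for the idle coupling cap (all values assumed).
cap = 2.2e-6            # coupling capacitor, farads
r_keep = 1e6            # resistor holding the idle cap at the 200 V node, ohms
r_load = 10e3           # input impedance of the following stage, ohms

tau = r_keep * cap      # time constant for the idle cap to settle back to 200 V
print(f"Idle cap re-settles with tau = {tau:.1f} s (about {5 * tau:.0f} s to ~1%)")

# Once a cap is selected, r_keep shunts the load; check the loading is negligible.
r_effective = r_keep * r_load / (r_keep + r_load)
print(f"Effective load with r_keep in place: {r_effective:.0f} ohm vs {r_load:.0f} ohm")

The point of the exercise is just that the resistor needs to be large enough not to load the circuit audibly, yet small enough that the idle cap is fully re-charged well before it is next switched in.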
>> Put the whole thing in a separate room from the subjects. Identify the
>> point at which a change is made with some kind of signal light that
>> invites the subject to make his choice. Allow the subject to listen
>> for as long as he needs to make his choice.
> The idea was to sit the subject in the studio (low lights :-) and run
> the test from the control room.
> Do you think that we should let the subject know that a change had been
> made? (It would be simple to rig a cue light for this purpose.) I think
> it would be better if the subject were asked to detect the change
> without any visual warning of when and if it had taken place.
Let the subject know the start of each trial - whether a change has
been made or not. It is too much of a strain for a subject to try and
tell the exact point - it is quite possible that the music playing at
that instant is not of a type which best highlights the change.
>> At any point in the test, allow the subject to ask to hear either of
>> the capacitors identified, to verify impressions of difference.
> We planned to give the subject a cue button with which he could, on
> the fly, send a TC marker when he hears a change. This will allow us to
> roll back to the same point to recheck.
>> The tester should determine the order of switching just before the
>> test with thirty coin tosses. He should write these down and follow
>> his list.
> Thirty coin tosses? We shall be spending most of our time in the dimly
> lit control room on our hands and knees looking for 1 EUR coins :-))
> We could use a random numbers generator to pick
> from three numbers, and feed these to the desk automation.
OK - do you have a random number generator? Hint - the RND() function
in a PC doesn't produce random numbers.
>> The order should be a new random set for each subject.
> Someone off group has said that every subject should listen to
> changes at exactly the same TC, and the beauty of our set-up
> is that we can replicate it exactly for each and every subject.
No - if the changes are the same for every subject, you will fail to
randomize the effects of small changes in musical detail that may feel
like capacitor changes to a subject.
>> Go for more than 60% - 75% would be more reasonable.
> Agreed. Olsen has used 60% in the past, as a "convincing"
> percentage.
>> Do not let any subject meet the proctor at any point.
> Agreed.
> Thanks,
> Iain
Pearce Consulting
http://www.pearce.uk.com

January 5th 05, 07:38 PM
posted to uk.rec.audio
Capacitor comparisons
"Jim Lesurf" wrote in message
...
In article , Iain M Churches
wrote:
"Don Pearce" wrote in message
...
First the capacitors. Go through many samples with a meter to make
sure they are as far as possible equal in value. Install them in a
switcher box so they can be changed without delay.
OK. The service technician at the studio has promised to select with a
bridge carefully matched samples from those we give him.
I would wish to know the details of the measurement. Frequencies used (if
sinewave), distortion levels, signal levels, measurement system
calibration, etc. Ideally, I'd wish to know values measured across the
audio band, and get a match to a tight level.
Just a moment. Not long ago, the general opinion seemed to be that we
would not be able to tell the difference when changing between two
caps of the same value and voltage working from different makers.
Now we are being asked to match them to a tight level :-))
If you are using a single-frequency cap bridge, you may need to consider
the effects of series ESR, etc. Given the points made elsewhere about
the dc you may also find it useful to measure with dc applied, and perhaps
even before/after use to see if this has had some effect!
The original claim was that a listening panel would be able to
differentiate between an industrial/commercial grade coupling
capacitor andJensen capacitor.
Afraid I am quasi-paranoid about such things as I've spent 20+ years
working on precision measurement systems for people like the NPL 8-
It shows:-) But I am sure it is a good thing, and that we can all sleep
more
safely in our beds:-)
I think I'd like to know things like the frequency and phase response with
each cap in circuit, again with a fair degree of precision.
Those are surely things that will have to be determined afterwards, to find
out why there is a difference in sound, if one can be established.
We could use a random numbers generator to pick from three numbers, and
feed these to the desk automation.
Erm... how are you planning to do the switching?
As mentioned earlier, The source material and the console both run to time
code
and so we can program the changes to TC. This way we will be able to repeat
the
experiment with great accuracy. It has been suggested that we should use
odd
bar counts (not many people will expect this) in multiples of fives, sevens
and
elevens, and switch within the bar:-)
Iain
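A toy sketch of turning those odd bar counts into time-code switch points for the desk automation; the tempo, meter, frame rate and zero start time are all my assumptions, purely for illustration:

# Sketch: convert bar positions (plus a random point within the bar) to timecode.
import random

BPM, BEATS_PER_BAR, FPS = 96, 4, 25     # assumed tempo, meter and frame rate

def bar_to_timecode(bar, offset_beats=0.0, start_seconds=0.0):
    """Timecode (hh:mm:ss:ff) of a point offset_beats into the given bar."""
    beats = (bar - 1) * BEATS_PER_BAR + offset_beats
    total_frames = int(round((start_seconds + beats * 60.0 / BPM) * FPS))
    h, rem = divmod(total_frames, 3600 * FPS)
    m, rem = divmod(rem, 60 * FPS)
    s, f = divmod(rem, FPS)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

for bar in (5, 7, 11, 21, 35):                    # odd multiples of 5, 7 and 11
    offset = random.uniform(0.0, BEATS_PER_BAR)   # switch somewhere within the bar
    print(f"bar {bar:2d} + {offset:.2f} beats -> {bar_to_timecode(bar, offset)}")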

January 5th 05, 09:45 PM
posted to uk.rec.audio
Capacitor comparisons
Iain M Churches wrote:
> (...)
> Each member of the panel will be tested separately,
> and will listen to each piece of music twice. On the first
> run there will be no changes, on the second, the outputs will
> be switched at a TC known only to the tester. The listener
> will press a cue marker, which will capture the time code at
> which he/she perceived a change. This can then be compared
> with the TC of the real changes.
> This will be repeated once for each member of the panel:
> ten times per piece of music, thirty times in all.
> We are looking for a 60% result. By comparing TC markers,
> we can rule out spurious or faulty reactions.
TBH, I would tend to present two musical intervals (I1 and I2),
consisting of the same musical passage played on the two bits of kit.
The listener would need to identify which interval contained (e.g.) cap
A. The listener would get accuracy feedback after each trial.
The one problem with this method is that it is less intuitive than
other methods (such as the one you describe). The advantage is that it
should be about the most sensitive method available for detecting
differences, and I suspect more sensitive than the method you've
suggested. Another advantage of the 2-interval method is that any noise
from the switching would occur in the interval between trials (and
could be muted?).
Your timecode response also has response time included in the... er...
response, so it's not going to be a spot-on accurate measure of
when the listener detected the switch. You should also discard (mark as
incorrect) any responses that occurred early, even if this was just a
matter of ms early. Unless your listeners have ears sensitive to the
future...
I would also include 'switch' trials where no switch actually occurred.
Ideally you would do far more than 30 trials, and use something like d'
as a measure of ability to carry out the discrimination. It can take a
while to learn to use accuracy feedback and thus to zero in on what
exactly to listen for. With 30 trials, you risk not picking up small -
but detectable - differences.
Steve.
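For the d' idea, a small sketch for the two-interval (2AFC) task Steve describes, using the standard unbiased-observer relation d' = sqrt(2) x z(proportion correct); the scores below are just examples, not data from any test:

# Sketch: sensitivity index d' from a two-interval forced-choice score.
from math import sqrt
from statistics import NormalDist

def d_prime_2afc(n_correct, n_trials):
    """d' for a 2AFC task, assuming an unbiased listener."""
    pc = n_correct / n_trials
    # Clamp perfect/zero scores so the z-transform stays finite (1/2N correction).
    pc = min(max(pc, 0.5 / n_trials), 1.0 - 0.5 / n_trials)
    return sqrt(2) * NormalDist().inv_cdf(pc)

for correct in (18, 23, 27):
    print(f"{correct}/30 correct -> d' = {d_prime_2afc(correct, 30):.2f}")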