Lossless Audio vs. MP3 and Accurate Speakers vs. Inaccurate Speakers

AcuDefTechGuy

Audioholic Jedi
So in these double-blind studies by Harman International (Sean Olive), high school students preferred lossless audio over lossy MP3 audio.

And these students also preferred the Infinity P362 over the Polk RTi10, Klipsch RF-35, and the $3,800 Martin Logan Vista!!!!! :eek:

http://seanolive.blogspot.com/2010/06/some-new-evidence-that-generation-y.html

I was not surprised that lossless audio sounded better than lossy audio.

I was not too surprised that the P362 sounded better than the Polk and Klipsch.

But I was surprised that they picked the P362 over the much more expensive Martin Logan Vista towers.

I've listened to the Martin Logan Vantage (more expensive than the Vista), and they sounded great to me.
 
Hocky

Full Audioholic
Chances are good that the setup of the MLs was wrong. They're pretty sensitive and can't just be placed where the other towers were sitting (which is how they test it, isn't it?).
 
jonnythan

Audioholic Ninja
I'm astonished - astonished - that a test run by a speaker company found their own speakers to be superior to another company's more expensive speakers.
 
Hocky

Full Audioholic
The Aerius is 20 years old... things have changed a bit since then. I've seen very few FR charts for the modern speakers, though.
 
GranteedEV

Audioholic Ninja
It's possible that the Martin Logans were too close to the wall.

In fact, that's probably my number one issue with the blind testing - distance from the wall is a constant. The truth is that different speakers are designed for different distances from the wall, and it shows in things like rearward radiation in bipoles/dipoles and even the lower midrange and upper bass in monopoles.

Anyway, I don't really know what to say regarding the ML measurements, but I will post this as it's relevant:

www.linkwitzlab.com/Editor-Stereophile-MG36.doc
 
tonmeister

Audioholic
AcuDefTechGuy said:
So in these double-blind studies by Harman International (Sean Olive), high school students preferred lossless audio over lossy MP3 audio.

And these students also preferred the Infinity P362 over the Polk RTi10, Klipsch RF-35, and the $3,800 Martin Logan Vista!!!!! :eek:

http://seanolive.blogspot.com/2010/06/some-new-evidence-that-generation-y.html

I was not surprised that lossless audio sounded better than lossy audio.

I was not too surprised that the P362 sounded better than the Polk and Klipsch.

But I was surprised that they picked the P362 over the much more expensive Martin Logan Vista towers.

I've listened to the Martin Logan Vantage (more expensive than the Vista), and they sounded great to me.
Look at slide 28 of the PDF, which shows the comprehensive anechoic frequency response measurements of the four loudspeakers, and you will see a clear relationship between the preference ratings and the measured performance of the loudspeakers. The most preferred speaker had the flattest on-axis response with the smoothest off-axis curves. The least preferred speaker had among the worst frequency response in terms of flatness and smoothness.

The listening test results can be predicted with 86% accuracy based on those measurements alone. See:
http://www.aes.org/e-lib/browse.cfm?elib=12794
http://www.aes.org/e-lib/browse.cfm?elib=12847
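
To make "predicted from the measurements" concrete, here is a very rough Python sketch of that kind of model: a few statistics computed from the anechoic curves (narrow-band deviation, smoothness, low-frequency extension) fed into a linear equation. The metric implementations and coefficients below are simplified approximations for illustration only, not the actual model code; see the papers for the exact definitions.

import numpy as np

# Simplified illustration of a measurement-based preference model.
# Coefficients are roughly those reported in the published model; the metric
# implementations here are pared down and not the exact published procedure.

def nbd(freq, spl, f_lo=100.0, f_hi=12000.0):
    # Narrow-band deviation: average absolute deviation (dB) from the local
    # mean inside each half-octave band between f_lo and f_hi.
    edges = f_lo * 2.0 ** np.arange(0.0, np.log2(f_hi / f_lo) + 0.5, 0.5)
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spl[(freq >= lo) & (freq < hi)]
        if band.size:
            devs.append(np.mean(np.abs(band - band.mean())))
    return float(np.mean(devs))

def smoothness(freq, spl, f_lo=100.0, f_hi=16000.0):
    # Smoothness: r^2 of a straight-line fit to the response (dB vs. log f).
    sel = (freq >= f_lo) & (freq <= f_hi)
    r = np.corrcoef(np.log10(freq[sel]), spl[sel])[0, 1]
    return float(r ** 2)

def lfx(freq, sound_power, listening_window):
    # Low-frequency extension: log10 of the frequency where the sound power
    # falls 6 dB below the mean listening-window level (300 Hz to 10 kHz).
    # Simplified: assumes the only region that far down is the bass rolloff.
    ref = listening_window[(freq >= 300) & (freq <= 10000)].mean()
    below = freq[sound_power <= ref - 6.0]
    return float(np.log10(below.max() if below.size else freq.min()))

def predicted_preference(nbd_on, nbd_pir, sm_pir, lfx_value):
    # Linear combination of the metrics (coefficients approximately as published).
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir + 1.31 * sm_pir - 4.31 * lfx_value

The 86% figure is the correlation reported in the papers between that kind of predicted rating and the trained listeners' actual ratings; the sketch above is only meant to show the shape of the calculation.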
 
GranteedEV

Audioholic Ninja
Sean, I'm huge on listening window and polar response measurements, but what response is considered flat with respect to baffle step losses? 6 dB of boost would give a flat response anechoically, and 0 dB of boost would give a flat response in a wall install, but what about in your testing room? Isn't that a big source of bias? How much BSC is 'correct' in that room, and is that universally representative and versatile enough?
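
Just to put rough numbers on what I mean (a back-of-envelope sketch; the 115/width rule of thumb and the 25 cm baffle are my own illustrative assumptions, not anything from your papers):

# Rough baffle-step arithmetic, purely for illustration.
# Rule of thumb: the 2*pi-to-4*pi transition is centred near 115 / (baffle width
# in metres). Below it, a free-standing speaker can need up to ~6 dB of shelf
# boost to measure flat anechoically, while a flush in-wall speaker needs ~0 dB.

def baffle_step_frequency_hz(baffle_width_m: float) -> float:
    return 115.0 / baffle_width_m

print(baffle_step_frequency_hz(0.25))  # ~460 Hz for a hypothetical 25 cm baffle

Somewhere between those two extremes is what measures flat in a given room, which is why I'm asking how much compensation counts as 'correct' in yours.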

Anyway, on the topic of dipoles and measurements, I'd be very interested if you guys would put together a NaO Note DIY speaker:

http://www.musicanddesign.com/NaO_NoteDetails.html

Give it the appropriate room from the wall (around 5-6 ft) and compare it to the various Harman speakers (granted, the drivers aren't the highest end you could go with), including the Salon.

The polars seem really good, yet are very different from a direct radiator.
 
AcuDefTechGuy

Audioholic Jedi
tonmeister said:
The most preferred speaker had the flattest on-axis response with the smoothest off-axis curves. The least preferred speaker had among the worst frequency response in terms of flatness and smoothness.
I guess I did not realize that MartinLogan speakers are inaccurate speakers as far as FR goes.

These speakers were measured in the anechoic chamber, so I assume speaker placement is removed from the equation.
 
tonmeister

Audioholic
AcuDefTechGuy said:
I guess I did not realize that MartinLogan speakers are inaccurate speakers as far as FR goes.

These speakers were measured in the anechoic chamber, so I assume speaker placement is removed from the equation.
Yes, those are anechoic measurements, so the effects of boundaries and/or room reflections are removed. What you see in the ML measurements are resonances that are in all of the curves. In the room, the speakers are positioned almost 4 feet from the back wall.
 
Alex2507

Audioholic Slumlord
AcuDefTechGuy said:
I guess I did not realize that MartinLogan speakers are inaccurate speakers as far as FR goes.
Not all ML speakers are created equally. Nor are they priced equally.
By the time you get to the Vantage line I don't care about graphs. They sound good.
 
AcuDefTechGuy

Audioholic Jedi
Alex2507 said:
Not all ML speakers are created equally. Nor are they priced equally.
By the time you get to the Vantage line I don't care about graphs. They sound good.
The ML Vantage speakers were the only ML speakers I have listened to. And yes, they sounded great to my ears when compared to the Dali Euphonia MS5 and Krell Resolution One towers in the same dealership.
 
Alex2507

Audioholic Slumlord
Not to completely jump track here, but a few years ago I got to listen to some lesser MLs that were easily bested by Def Tech 7002s. This thread has me itching for another pair of 360/2/3s. I don't know where I would put them, but boy oh boy would I like to compare them to my lightly modded pair, and then I'd like to do some more extensive mods - but not on my kitchen table. I want saws and routers to tear into the boxes ... and I'd like a budget. Of course I'd need time off from work to do this, but for me that's a certainty that unfortunately never coincides with a surplus of funds.

Hey Johnny, you think Sean fudged the research findings over at the HA lab?
Why don't you just come out and call him a liar?
 
GranteedEV

Audioholic Ninja
tonmeister said:
Yes, those are anechoic measurements, so the effects of boundaries and/or room reflections are removed. What you see in the ML measurements are resonances that are in all of the curves. In the room, the speakers are positioned almost 4 feet from the back wall.
Is an 8 ft path delay long enough for the reflected rear wave info to be 'late arriving'? AFAIK that's only about 7 ms.

To get to 10+ ms for the rear wave reflection to read as 'spaciousness', don't you want 12+ ft of extra path (6 ft from the wall)?
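
Quick sketch of the arithmetic I'm using (assuming roughly 1,125 ft/s for the speed of sound):

# Extra time of flight for the rear-wall bounce versus the direct sound:
# the reflection travels about 2 x (speaker-to-wall distance) further.
SPEED_OF_SOUND_FT_PER_MS = 1.125  # ~343 m/s expressed in feet per millisecond

def rear_reflection_delay_ms(distance_to_wall_ft: float) -> float:
    return 2.0 * distance_to_wall_ft / SPEED_OF_SOUND_FT_PER_MS

print(rear_reflection_delay_ms(4.0))  # ~7.1 ms at the ~4 ft used in the test
print(rear_reflection_delay_ms(6.0))  # ~10.7 ms at 6 ft, past the 10 ms mark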
 
jonnythan

Audioholic Ninja
Alex2507 said:
Hey Johnny, you think Sean fudged the research findings over at the HA lab?
Why don't you just come out and call him a liar?
Why would I call him a liar? I highly doubt he's lying.
 
jonnythan

Audioholic Ninja
Here is a quote from Gene on this very topic, which I find very useful:


"But, what always fascinates me is every company that swears to adhere to a strict DBT protocol always seems to win or at worst case tie the other speakers in their own internally run tests. They all seem to have perfected an allegedly bias free testing methodology they can NEVER lose. Best of all, they found a way to win (or at least tie) their internally run tests often using the cheapest parts and minimalist designs in their products.

"While I admire taking a scientific approach to testing and evaluating loudspeakers, I feel everyone must be careful in analyzing their testing procedures and results, else false conclusions can easily be reached. Biases always exist in tests, even when run blind. It is identifying these biases that helps better understand the results they produced.

"We have conducted our own blind tests in the past and to date, none of the brands that claim victory in their own DBT's even win first place. They are often very competitive for products pitted against others in their price class however."

I'm not saying any manufacturer's speakers are or are not better than any other's. I'm saying don't take Company X's word for it when they tell you they've done tests and their speakers at $xxx are better than Company Y's speakers at 10*$xxx.
 
3db

Audioholic Slumlord
jonnythan said:
Here is a quote from Gene on this very topic, which I find very useful:


"But, what always fascinates me is every company that swears to adhere to a strict DBT protocol always seems to win or at worst case tie the other speakers in their own internally run tests. They all seem to have perfected an allegedly bias free testing methodology they can NEVER lose. Best of all, they found a way to win (or at least tie) their internally run tests often using the cheapest parts and minimalist designs in their products.

"While I admire taking a scientific approach to testing and evaluating loudspeakers, I feel everyone must be careful in analyzing their testing procedures and results, else false conclusions can easily be reached. Biases always exist in tests, even when run blind. It is identifying these biases that helps better understand the results they produced.

"We have conducted our own blind tests in the past and to date, none of the brands that claim victory in their own DBT's even win first place. They are often very competitive for products pitted against others in their price class however."

I'm not saying any manufacturer's speakers are or are not better than any other's. I'm saying don't take Company X's word for it when they tell you they've done tests and their speakers at $xxx are better than Company Y's speakers at 10*$xxx.
Gene... perhaps you can clarify your statement posted above. I don't see the value of a company doing DBT UNLESS:

1. It is used to bring their house sound closer to the competition's while in the design phase.

2. There are several prototype models ready to go and the manufacturer wants to choose the one best model to move into production. PSB and Paradigm do this.

Aside from that, just doing DBT against the competition is not really meaningful, as one doesn't know the acoustic environment used, the target audience (any biased camps sitting in the audience, etc.), the equipment used, whether the speakers were positioned optimally in the room, etc.
 
Pyrrho

Audioholic Ninja
GranteedEV said:
It's possible that the Martin Logans were too close to the wall.

In fact, that's probably my number one issue with the blind testing - distance from the wall is a constant. The truth is that different speakers are designed for different distances from the wall, and it shows in things like rearward radiation in bipoles/dipoles and even the lower midrange and upper bass in monopoles.

Anyway, I don't really know what to say regarding the ML measurements, but I will post this as it's relevant:

www.linkwitzlab.com/Editor-Stereophile-MG36.doc
Yes, that is an important consideration. In the test, all the speakers were in the same position. You can bet the position was right for the manufacturer's speakers, but that might mean that all of the others were in the wrong position.

That said, although I love flat panel speakers, I have never liked Martin Logan speakers. To be sure, I have not heard all of them, or any top of the line model, but the ones I have heard did not sound good to me for the asking price (both in the distant past, and within the past couple of years with new models). And I have wanted to like them; it would be nice to believe that a speaker made in the middle of the U.S. is great, and it would be nice for some exotic speaker to sound magical. But they have never seemed right to me, no matter which ones I have heard, or where I have heard them. Of course, maybe they were all set up wrong, or I only happened to listen to poor models, but they cost plenty for what they were.


I would be much happier with a test done by someone without a vested interest in the results. And it would be nice to see how the particular other models were selected. Are they representative of their respective brands, or are they unusually bad for them? Also, what music was selected, and why was it selected rather than other music? Was it selected because the house brand was particularly good at reproducing such sounds, or was it selected based upon some other considerations? Were there other tests done with different models that turned out badly for the house brand, so that they decided to pick different models until they got the results they wanted? Did they run the same test over and over with different results, but then only report the set of tests that gave the desired results? There are too many unanswered questions to come to any firm conclusions about the tests.

It might be that the Infinity speakers are the best, but we have no way of determining that from this article.
 