I have no idea what your point is here. Do you have one? I never claimed to have an accurate 3-D model of soundfields and their perception. In fact, I simply pointed out that a single frequency response sweep at a single position isn't sufficient to capture such complexities. So I take it you're just agreeing with me.
It's a common excuse because it's correct. So try a level-matched, double-blinded DAC comparison and listen for soundstage differences. Get back to me with your results. I've done such comparisons. I'm guessing you haven't.
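For the level-matching step, the arithmetic is simple: the level difference between two outputs is 20·log10 of the ratio of their RMS voltages, and the usual practice is to match within roughly 0.1 dB before comparing. A minimal sketch (the 2.10 V / 2.00 V figures are hypothetical, just to show the scale of a "small" mismatch):

```python
import math

def level_diff_db(v_rms_a: float, v_rms_b: float) -> float:
    """Level difference in dB between two RMS voltage measurements."""
    return 20.0 * math.log10(v_rms_a / v_rms_b)

# Hypothetical example: 2.10 V vs 2.00 V at the DAC outputs
# is already about +0.42 dB -- well above the ~0.1 dB target.
print(round(level_diff_db(2.10, 2.00), 2))
```

A mismatch of a fraction of a dB is enough for the louder source to be reliably preferred, which is exactly why the level-matching step matters.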
I have no idea what your point is here either. Nobody claimed what you're arguing against. Having a model of human hearing that incorporates brain processing is entirely different from knowing the limits of human hearing.
Edit to add: I have no idea why you mention the 'bioengineering dudes' you know modeling the human spine. I don't recall the spine having much to do with hearing. Unless your ears are somehow located in a different place than mine, your auditory nerves don't go through the spine. They are cranial nerves. So I get the impression you're just bullsh!tting to try to impress us.
The point is, since no engineering team can currently prove to you with data that sitting in a glass box makes your music sound lousy, you should have no problem sitting in a glass box and cranking it up. If I cover up my glass with wood to make it sound better, that's just unproven audiophool koolaid, right? You should be very happy cranking it up in a glass box.
The point is, if I am about a dB off (without level matching) and listening at an 88 dB baseline (swings up to 100 dB)… and I can clearly point out that one DAC is a lousy flat blur while on the other I perceive clear instrument separation and precise placement in an apparent 3D soundstage, what does level matching accomplish, man? I heard one at an 87 dB baseline and the other at 88 dB, so be it. That's your excuse? A lack of level matching? I put 2 other non-audiophile musicians in a blindfold test and randomly switched back and forth. They were able to tell the difference like night and day each and every fcking time. You will level match to the exact dB and prove me a miracle? Or wait, do you have a coupla 10 dollar DACs and a 100 dollar speaker? If that's the case, everything would sound the same to you, one big flat blur.
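For what it's worth, the "every time" claim is easy to put a number on: in a two-choice blind trial, the chance of guessing all n trials correctly by pure luck is 0.5^n. A small sketch (the trial counts are hypothetical; the post above doesn't say how many switches were done):

```python
def p_all_correct_by_chance(n_trials: int) -> float:
    """Probability of guessing every one of n two-choice trials by luck."""
    return 0.5 ** n_trials

# Hypothetical trial counts, just to show how fast luck becomes implausible:
for n in (5, 10, 16):
    print(n, p_all_correct_by_chance(n))
```

At 10 trials the chance probability is already below 0.1%, which is why blind-test protocols care about the number of trials, not just the outcome.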
If I gathered every snippet of data from the manufacturer/designer for a specific speaker and threw a whole big pile of graphs, spreadsheets, spin-o-rama, whatever at you, and asked you to point to the exact snippet of data that shows why this speaker sounds a bit holographic, you couldn't do that, could you? You simply don't know why that phenomenon occurs, do you? Go read up on something stale/simplistic on ASR, come back, and let me know. I doubt many speaker designers out there even know why the speakers they make sound the way they do.
Are the human ear and a fcking mic equivalent as sensors? Are a DAQ and the human CNS equivalent? What you perceive as music is a very complex sensor's signals interpreted by your very complex brain, about which you and the scientific community have very little understanding. But, yep, a mic and a DAQ can prove everything for you, eh? Get outta here, dude. The more you know, the more you start to realize how little you know. You obviously haven't gotten there yet.
Since you are merely scratching the surface with your rudimentary instrumentation (a mic and a DAQ, whoop di doo), you dismiss everything you can't understand or model as 'listener bias' and try to look brilliant on a forum, I suppose.