mkossler

mkossler

Audioholic
Well, okay! It's an orthogonal topic in this thread, but, not for nothing, I happen to work at the place that has, since 1899, been the prime contractor for engineering, designing, and building the submarines folks have been referring to. As it happens, we have some pretty savvy acoustics engineers working here - in fact I'd have to say the best, bar none, in the world.

Based on what I have been exposed to (to be clear, my particular role is to establish the architecture for the software systems used to engineer, design, manage, and fabricate the nuclear subs we deliver to Uncle Sam), the capability to not just measure, but to predict and analyze the acoustic characteristics of the sub and its contents (and its environment) goes far beyond what any human system can detect or discern. What the human can do, though, is something that computers and other processing systems still struggle with: pattern recognition. That is what the sonar operator is there for - to recognize patterns and characteristics of the surrounding environment, and to interpret that information accurately (and most importantly, quickly!).

I will make a point to bring this discussion to the acoustics folks here at General Dynamics - Electric Boat, and see what they have to say on the subject. You can rest assured; there is no better authority on acoustic evaluation and analysis systems, anywhere.

Cheers,

Matty K.
 
mtrycrafts

mtrycrafts

Seriously, I have no life.
pikers said:
We can hear sounds lower in amplitude when a louder one is present. That is why we can hear the difference between SACD and CD.

Huh? Not sure I understand you here.

Who said you can hear the difference between SACD and CD?

pikers said:
The harmonics return. That's why some speakers and electronics are better than others - they resolve the things that "we can't hear" that much better. :D

HUH??? Masking. Your hearing is limited. Resolution is the smallest voltage difference a component can play back. And your ears have limits on noticing level differences, the JND (just-noticeable difference). Components are well below your JND limits.
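
To put rough numbers on the JND point (a sketch only - the voltages are made up for illustration, and the ~1 dB JND is just a commonly cited ballpark for level discrimination):

import math

# Illustrative example: two playback levels that differ by a 0.5% amplitude
# error, compared against an assumed ~1 dB just-noticeable difference (JND).
v_ref = 1.000    # reference level, volts (made-up number)
v_test = 1.005   # same signal with a 0.5% level error (made-up number)

level_diff_db = 20 * math.log10(v_test / v_ref)
jnd_db = 1.0     # assumed JND for level, in dB

print(f"Level difference: {level_diff_db:.3f} dB")            # ~0.043 dB
print(f"Below the assumed {jnd_db} dB JND: {level_diff_db < jnd_db}")

A half-percent level error works out to roughly 0.04 dB, well under the assumed JND, which is the sense in which component differences can sit below what level discrimination can notice.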
 
mtrycrafts

mtrycrafts

Seriously, I have no life.
MacManNM said:
Yet you think a digitizer can? You obviously haven't worked with this type of equipment collecting any real-world data. With any simultaneous signals, depending on the duration, frequency content, etc., the smaller one will be buried. Now, if there is a slight delay between the two, then depending on the dynamic range of the digitizer, the bit noise, and the front end, one may be able to resolve the signal temporally; but if they arrive at the same time, this is not possible. It would be possible with two units, assuming the difference in frequency was enough that one could split the signal into two separate paths and apply filtering and proper scaling of the test equipment - but that is not possible with the human ear.

I don't know what those digitizers can and cannot do. We were talking about hearing sensitivity and instrument sensitivity. Now we are talking digital equipment? On a sonar?
 
MacManNM

MacManNM

Banned
mtrycrafts said:
I don't know what those digitizers can and cannot do. We were talking about hearing sensitivity and instrument sensitivity. Now we are talking digital equipment? On a sonar?

This has nothing to do with sonar. All modern-day instruments that record any type of time-domain data have a digitizer in them. Since you don't know about them, how can you make claims about instrument sensitivity?
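
For a rough sense of what the bit depth of a digitizer implies (illustrative numbers only, not tied to any particular instrument): the ideal quantization noise of an N-bit converter sits about 6.02·N + 1.76 dB below a full-scale sine, which is what bounds how far down a simultaneous small signal can sit before it disappears into bit noise.

# Sketch of the ideal dynamic-range ceiling set by converter bit depth.
def ideal_dynamic_range_db(bits: int) -> float:
    # Ideal quantization SNR of an N-bit converter for a full-scale sine wave.
    return 6.02 * bits + 1.76

for bits in (12, 16, 24):
    print(f"{bits}-bit digitizer: ~{ideal_dynamic_range_db(bits):.0f} dB ideal dynamic range")

Real front ends add their own noise and distortion, so actual instruments land somewhat below these ideal figures.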
 
mtrycrafts

mtrycrafts

Seriously, I have no life.
mkossler said:
- in fact I'd have to say the best, bar none, in the world.

I, as a taxpayer, surely hope so :D

mkossler said:
What the human can do, though, is something that computers and other processing systems still struggle with: pattern recognition.

Yes, thank you for the clarification :)

mkossler said:
I will make a point to bring this discussion to the acoustics folks here at General Dynamics - Electric Boat, and see what they have to say on the subject. You can rest assured; there is no better authority on acoustic evaluation and analysis systems, anywhere.

Cheers,

Matty K.

Thanks. We are anxiously awaiting your feedback. :D
 
T

telva

Audiophyte
MacManNM said:
Discrimination is one of the strong points of the human audio system. That is why humans are used to listen in subs, and not computers.
Incorrect. Humans are used in submarines because they're still better than AI. Radar and sonar operators are trained to pick out and identify the subtle characteristics of radar and echo signatures, not because they can hear things the equipment can't. The equipment hears everything; it just isn't quite as good at identifying what it is. An analogy is playing a guitar to a computer - the computer's equipment can detect and record audio well beyond what any human can hear, but it isn't that good at saying, 'Hey, that's a guitar, a Spanish one if I'm not mistaken.'
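
A minimal sketch of that detection-versus-identification point (the tone here is synthetic, standing in for a real recording):

import numpy as np

# Synthetic stand-in for a recorded note: one second of a 440 Hz tone at 44.1 kHz.
fs = 44_100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t)

# Detection is the easy part for a machine: pick the strongest spectral peak.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
print(f"Strongest component: {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~440 Hz

# Deciding whether that 440 Hz came from a Spanish guitar, a piano, or a
# ship's pump is a classification problem - the part where trained listeners
# (and, increasingly, machine-learning models) do the hard work.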
 
