First, let me say that I do respect that some people can and do hear a difference and prefer fully lossless or uncompressed audio when given the choice. I still maintain that lossy codecs do a great job in the vast majority of cases and are still 'good enough' in the rest. I apologize if I sounded harsh with the uncalled-for 'audiophile' quip - that is usually not my style.
Ok, let's start with why simply comparing bit rates between PCM and lossy codecs is not enough to explain why lossy codecs sometimes sound noticeably inferior to some listeners.
In a nutshell, the bits that come out of the encoder are not simply a subset of the bits that went in. The encoder does not look at 1411 kilobits (1.4 megabits) in one second of audio and decide we can't hear bits 200-500 or bits 800,000 - 810,000 and thus they can be discarded.
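To see where that 1411 kbps figure comes from in the first place, here is the arithmetic for CD audio - it is nothing more than samples per second times bits per sample times channels:

```python
# The oft-quoted "1411 kbps" for CD audio is just straight multiplication.
SAMPLE_RATE = 44_100   # samples per second (CD standard)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

bits_per_second = SAMPLE_RATE * BIT_DEPTH * CHANNELS
print(bits_per_second)  # 1411200 bits/sec, i.e. ~1411 kbps
```

So the raw PCM rate is a physical property of the sampling format, while the encoder's output rate is a budget it works within - two very different kinds of numbers.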
PCM theory
As I think everyone knows, at least intuitively, PCM audio data is a sequence of numbers that are generated by sampling the analog audio thousands of times per second and calculating a value to represent the amplitude of the signal at each moment in time. This is what the analog to digital converter does.
The 'sampling rate' determines how many numbers there are per second and the 'bit depth' determines the range of those values. For CD, that means 44,100 samples per second and each value is a number between -32,768 and +32,767 (16 bits, but the numbers are signed because the waveform has both a positive and negative component). Basically, the ADC takes a snapshot of the analog signal every 1/44,100 seconds and computes its amplitude at that moment in time.
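The sampling-and-quantizing step above can be sketched in a few lines. This is an idealized ADC for a hypothetical 440 Hz test tone (the frequency and amplitude are just illustrative values, not anything from a real converter):

```python
import math

SAMPLE_RATE = 44_100   # CD sampling rate
FREQ = 440.0           # hypothetical test tone, Hz
AMPLITUDE = 0.8        # fraction of full scale

def sample_pcm(n: int) -> int:
    """The value an ideal ADC would produce for sample index n."""
    t = n / SAMPLE_RATE                          # time of this snapshot
    x = AMPLITUDE * math.sin(2 * math.pi * FREQ * t)
    # Scale to the signed 16-bit range and round, clamping at the rails.
    return max(-32_768, min(32_767, round(x * 32_767)))

samples = [sample_pcm(n) for n in range(SAMPLE_RATE)]  # one second of audio
```

Every sample is just a signed 16-bit number; the waveform's shape exists only in the sequence of those numbers.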
But our ears are analog, so for playback we need a DAC to convert those samples back into an analog signal. The sample values are converted to voltages and the DAC, along with a reconstruction filter that cuts off everything above (roughly) half the sampling frequency, re-creates the analog waveform.
So what does that have to do with a lossy codec? The codec can't determine what we can or can't hear based solely on the PCM samples - it must first transform them from the time domain into the frequency domain, because models of human hearing are defined in terms of frequencies. There are several algorithms for this: the Fast Fourier Transform (FFT), the Modified Discrete Cosine Transform (MDCT), and others.
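To make the time-to-frequency step concrete, here is a naive discrete Fourier transform applied to one small block of samples. (Real encoders use an FFT or MDCT, which compute the same kind of result far more efficiently; this is just the idea, with a toy 64-sample block.)

```python
import cmath
import math

def dft(block):
    """Naive DFT: time-domain samples -> complex frequency bins."""
    N = len(block)
    return [sum(block[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A test tone that completes exactly 4 cycles within the 64-sample block.
N = 64
tone = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]

spectrum = dft(tone)
# The energy lands in bin 4 - the tone's frequency - and (almost) nowhere else.
peak_bin = max(range(N // 2), key=lambda k: abs(spectrum[k]))
print(peak_bin)  # 4
```

Once the block is expressed as frequency bins like this, the encoder can reason about which frequencies matter perceptually - something it could never do from the raw sample values alone.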
It operates on blocks of samples. Using a filter bank, the blocks are further divided into "sub-bands" - little slivers of the block, if you will. Each sub-band is converted to the frequency domain and analyzed against the perceptual model that defines how the human ear hears. If the model says we would not hear that little sliver - for example because the preceding sliver was louder, or the following sliver is louder, and would therefore mask it - that sub-band (and all the samples from which it was created) is discarded. If it is deemed audible, then it must be included in the result.
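The keep-or-discard decision can be caricatured like this. The threshold values here are made up for illustration - real psychoacoustic models are far subtler, with frequency-dependent thresholds and masking curves rather than two constants:

```python
# Toy masking rule: a sub-band sliver survives only if its level exceeds
# both the threshold in quiet and a threshold raised by louder neighbours.
QUIET_THRESHOLD_DB = 10.0   # hypothetical threshold of hearing
MASKING_DROP_DB = 20.0      # hypothetical: a neighbour masks anything 20 dB below it

def audible(level_db: float, prev_db: float, next_db: float) -> bool:
    """Crude keep/discard decision for one sub-band sliver."""
    masker = max(prev_db, next_db)                        # loudest neighbour in time
    threshold = max(QUIET_THRESHOLD_DB, masker - MASKING_DROP_DB)
    return level_db > threshold

print(audible(40.0, 30.0, 35.0))  # True  -> nothing nearby is loud enough to mask it
print(audible(40.0, 75.0, 35.0))  # False -> masked by the loud preceding sliver
```

The point is that identical sub-band content can be kept in one context and thrown away in another, depending entirely on what surrounds it.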
All of the blocks (frames) that survive must be coded; i.e., turned back into bits and stored in an efficient manner. The target bit rate - e.g., 192 kbps - determines how many bits are available to code each second of audio, so the encoder is constantly calculating the most efficient use of those 192 kilobits to encode that second of audio. If it is variable bit rate, it will use more or fewer bits as each passage demands, instead of being stuck with a fixed budget as in constant bit rate.
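The per-frame budget at a constant bit rate follows directly from that: an MPEG-1 Layer III frame covers 1152 samples, so at 44.1 kHz and 192 kbps the encoder gets a fixed allowance of bits per frame to spend:

```python
# Bit budget per frame for constant-bit-rate MP3.
SAMPLE_RATE = 44_100
SAMPLES_PER_FRAME = 1152     # MPEG-1 Layer III frame size
TARGET_KBPS = 192

frames_per_second = SAMPLE_RATE / SAMPLES_PER_FRAME        # ~38.28 frames/sec
bits_per_frame = TARGET_KBPS * 1000 / frames_per_second    # ~5015 bits/frame

print(round(frames_per_second, 2), int(bits_per_frame))
```

Roughly five thousand bits per frame is the entire allowance for everything the model deemed audible in those 1152 samples - which is why the encoder has to spend them so carefully.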
So the bits that come out of the encoder are not identical to the ones that went in, and it is not fair to simply divide 1411 kbps by 192 kbps to determine how much was 'lost'. The MPEG-1 Layer III (MP3) algorithm has been studied and tested extensively over the years. If you look at spectral plots of the decoded output, you will see that it tracks the original fairly closely.
Sure, some things are lost (at 192 kbps, for example, encoders typically drop ALL frequencies above roughly 16 kHz, which most people can't hear anyway), but by modifying the block size and the number of sub-bands it can be tweaked to work better for different types of music. Research on improving the perceptual model might also help, but I think those efforts are largely dead as we've moved on to other algorithms like AAC and of course Dolby Digital and DTS.
That's the best I can do for now...