Well, I can't help but start with a question... how deep of a discussion would you like to have? Discussions of DAC trade-offs get into some math and implementation issues pretty quickly.
The 1-bit DACs you refer to were the first DACs using a design called bit-stream "delta-sigma" modulation (DSM). This was an improvement over the previous word-parallel, pulse width modulation devices, as the DSM converters exhibit greater linearity at low signal levels. The old PWM devices required extreme implementation precision to achieve what's called monotonicity and low-level linearity relative to the original analog signal, which is nothing more than a fancy way of saying that a -60.4dB analog signal converted to digital needs to re-emerge as a -60.4dB analog signal after conversion, not -68dB or -55dB. One way around the need for perfect precision in PWM devices was to increase the word length, to say 18 bits, and push a 16-bit data stream through them using only the most significant bits. This trick increased linearity, but in the end it's just easier to use the DSM technique, and that's what every modern DAC device I'm aware of uses. Is all that as clear as mud?
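If it helps, the core of a delta-sigma modulator is surprisingly small. Here's a toy first-order sketch in Python (everything here is illustrative, not from any real DAC's design): each input sample becomes a single +1/-1 bit, and the running average of the bitstream tracks the input level.

```python
def delta_sigma_1bit(samples):
    """Toy first-order delta-sigma modulator.

    Takes samples in [-1.0, 1.0] and returns a stream of +1/-1 bits
    whose running average approximates the input signal.
    """
    integrator = 0.0
    out = 1.0  # previous 1-bit output, as +1.0 or -1.0
    bits = []
    for x in samples:
        integrator += x - out              # accumulate the error vs. last output
        out = 1.0 if integrator >= 0 else -1.0  # quantize to a single bit
        bits.append(out)
    return bits

# Feed in a constant level of 0.5: the bitstream's average converges on 0.5,
# which is exactly the "density of ones encodes the amplitude" idea.
bits = delta_sigma_1bit([0.5] * 1000)
print(sum(bits) / len(bits))
```

The error feedback through the integrator is what pushes the quantization noise up to high frequencies, where the analog filter after the DAC can remove it.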
What is confusing is keeping the parallelism of the internal DAC architecture, which is measured in bits, separate from the word length used to describe the amplitude range of the bit stream. The old 18- or 20-bit PWM DACs only handled 16-bit/44.1kHz or 16-bit/48kHz data streams. New DACs, which are almost all variations on DSM, don't really talk about their internal architecture any longer. A 24-bit/192kHz DAC simply says it is capable of converting a data stream with up to 192K samples per second, each using up to a 24-bit-wide word to describe the amplitude of the signal at that instant.
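The word length is what sets the theoretical dynamic range, via the standard ideal-quantization formula 6.02N + 1.76 dB for an N-bit word. A quick back-of-the-envelope helper (names are mine, just for illustration):

```python
def dynamic_range_db(bits):
    """Ideal quantization SNR for an N-bit word: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

print(dynamic_range_db(16))  # ~98 dB for CD-style 16-bit words
print(dynamic_range_db(24))  # ~146 dB for 24-bit words
```

Real converters fall well short of the 24-bit figure due to analog noise floors, but the formula shows why the spec-sheet numbers scale the way they do.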
If you want the specifics we could discuss them, but oversampling factors into this too, because the higher the sampling frequency the easier it is to perform the filtering necessary to eliminate the quantization/conversion noise that is a natural result of the algorithms. Oversampling does not increase the amount of unique information in the data stream; it allows better noise filtering. Some people would argue that the majority of the audible differences between DACs are due to their filtering strategies. Filtering is another massive, mathematically interesting field if you want a deep discussion.
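The arithmetic behind the oversampling benefit is simple: plain oversampling spreads the fixed quantization noise power over a wider bandwidth, so the fraction left in the audio band shrinks by the oversampling ratio. A sketch of that relation (my own illustrative function, and it deliberately ignores noise shaping, which improves things much further):

```python
import math

def oversampling_snr_gain_db(osr):
    """In-band SNR gain from plain oversampling (no noise shaping).

    Quantization noise power is spread over OSR times the bandwidth,
    so in-band noise drops by a factor of OSR: 10*log10(OSR) in dB.
    That works out to ~3 dB per doubling of the sample rate.
    """
    return 10 * math.log10(osr)

print(oversampling_snr_gain_db(4))   # ~6 dB for 4x oversampling
print(oversampling_snr_gain_db(64))  # ~18 dB for the 64x rates DSM chips run at
```

This is why the digital filter before the DAC and the analog filter after it matter so much: the noise is still there, just pushed where the filters can deal with it.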
The newest DACs are multi-bit DSMs. Personally, I'm at the point where I think the analog amp sections of CD/SACD/DVD players have a whole lot more to do with sound quality than the DAC. Also, given how fast the technology has evolved, I would not consider used low-to-midrange digital equipment for a high-quality audio system. That's just me, but unless you're talking about something relatively high-end, like an old Sony 707ESD that you can now buy for $300, I'd just buy something new for $300. The old $2K Sony 707 may sound better than a new $300 Sony, but that's probably because its analog section has a better power supply and/or op-amps to pump 2V into a 10kohm load at the pre-amp or amp. I have seen some real bargains in used high-end digital, but unless you have a high-end audio system you probably won't hear the differences.
To answer your bottom-line question: IMO the newest DACs sound the best, but the sound of a CD player is probably determined more by the analog circuitry than the digital circuitry. Again, IMO.