Velodyne Digital Drive Plus 18 (DD18+) Subwoofer Review

Irvrobinson

Audioholic Spartan
Am I compensating for something?? :D
Do you have a lifted F350 4x4 pick-up with the diesel engine, quad shocks, roof lights, massive mud tires, a huge chrome tailpipe, and mud flaps with the silhouette of a naked girl on them? You never go off-road, and the only thing you haul in the bed is new subwoofer equipment? Then, yes, you are compensating. ;)

Ya know, I do think that subliminally that's why I keep the grille on the DD18+ but not on the Salon 2s. Leaving the driver exposed on the DD18+ is a little shocking to the uninitiated. :) I also find that when folks can see the driver they want to know how loud the bass can get, and when they can't see it they just enjoy the music.

I think the same "size counts" effect seems to extend to other things audio, like cables. I like neat cabling, so I go custom on all cables for length. My Blue Jeans Cable Belden balanced interconnects don't impress anyone, but I make my own power cords out of Carol Cable 12/3 SOOW with Marinco IECs and plugs, and I'm always amazed by how many people, well, guys, comment on how cool my power cables are. And when I ask why? Because they're big and thick, and the plugs are so big and serious-looking.

Audio equipment does seem to appeal to the male psyche.
 
FirstReflection

AV Rant Co-Host
Audio equipment does seem to appeal to the male psyche.
Which is why I can't understand why a lot of MEN buy Bose systems. Women? I get it. The smaller the better - they'd rather have no speakers or TV at all! But guys? No idea how that works :p
 
gene

Audioholics Master Chief
Administrator
This is what is so unfortunate. Given that AH's/Josh's results seem to correspond almost perfectly with SVS's and others' while Brent's do not (well, except apparently the VTF-15 on its 2nd or 3rd try, IIRC), if I were Brent I'd want to better understand why. I.e., I'm sure everyone could learn from each other about how they do these measurements, making the process that much more reliable and better for all readers.

If I were a professional reviewer, I know I'd be keenly interested in ensuring my reviews were comparable to my peers', and if they were well off, the onus would be on me to figure out why. The DD-15 just sticks out like a sore thumb, in my opinion, as the CEA numbers just don't seem realistically possible for any 15" driver in a sealed box of that size with 1,250 watts applied, let alone compared to your own DD-15 and DD-18 tests.

Instead now we have comments floating around of "CEA testing can result in drastically different measurements from one person to the next" to justify the differences.

Even though, heck, ignoring Ed's comments above, you could take Josh's PB13 measurements and compare them to Ilkka's from back in 2007 or 2008 and they'd be within a couple of dB across the board (save, IIRC, below 20 Hz), most of which is fully explainable by the fact that the PB13 Josh reviewed had the new amp, which added 1 to 2 dB by virtue of its additional 250 watts or so. Two different subs, several years apart!

:rolleyes:
Without distortion measurements disclosed, you can't accurately determine if the CEA numbers are valid. Just look at Brent's numbers for the SVS PB12-NSD. They aren't even close to SVS's, nor do they jibe with his own frequency response curve.
 
Ed Mullen

Manufacturer
I'm not talking about clipping the test rig. I'm talking about clipping the subwoofer input stages. ;)
My personal test rig can only output about 1200 mV before it clips and I always run it well below that point.

The input sensitivity of the amplifier is the amount of voltage required at the input stage to drive the amplifier to full power at max gain setting. Typical input sensitivity for a subwoofer plate amp is in the 150 mV to 400 mV range.

That is not the same as the input overload threshold, which is always far higher than the input sensitivity. Most subwoofer plate amps have an input stage overload threshold of around 2V, which is much higher than what almost any test rig or consumer AVR will output.

Our Sledge 800/1000 platforms have an input voltage switch with a high level setting (basically a 6 dB pad) that allows up to 4V before overloading the input stage.

So while it's certainly conceivable that the input stage could be overloaded during testing, in reality the possibility is extremely remote due to the generous overload voltage limits on subwoofer plate amps.

As a general SOP, we test powered subwoofers at/near max gain, where input sensitivity is the most favorable. We determine the input sensitivity of the amp and the overload threshold before testing it. Then we ensure the test rig is outputting enough voltage to drive the amp to full power, but not enough voltage to overload the input stage (which again is pretty darn hard to do anyway with most test rigs).
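Ed's SOP above boils down to a simple headroom check: the rig's clean output must exceed the amp's input sensitivity, but stay below the input overload threshold. A minimal sketch of that arithmetic, using the typical figures from the post (not measurements of any specific amp):

```python
import math

def drive_headroom(sensitivity_v, overload_v, rig_max_v):
    """Check whether a test rig can drive a plate amp to full power
    without overloading its input stage.

    sensitivity_v: input voltage needed for full power at max gain
    overload_v:    input-stage overload threshold
    rig_max_v:     rig's clean (unclipped) output ceiling
    """
    can_reach_full_power = rig_max_v >= sensitivity_v
    # headroom between full-power drive and input overload, in dB
    overload_margin_db = 20 * math.log10(overload_v / sensitivity_v)
    return can_reach_full_power, overload_margin_db

# Figures from the post: 400 mV worst-case sensitivity, ~2 V overload,
# ~1.2 V clean rig output
ok, margin = drive_headroom(sensitivity_v=0.4, overload_v=2.0, rig_max_v=1.2)
print(ok, round(margin, 1))  # True 14.0
```

With roughly 14 dB between full-power drive and input overload, you can see why Ed calls accidental input-stage clipping "extremely remote" for typical rigs.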
 
Ed Mullen

Manufacturer
You cannot do CEA measurements indoors with the same degree of accuracy as outdoor tests, regardless of what section 4.3 of the CEA-2010 standard suggests as a procedure to follow. For an indoor correction curve to be even remotely correct, you would have to apply a unique one for each sub being tested.
It would be extraordinarily difficult IMO to develop a set of valid CEA-2010 CFs for either indoor measurements, or outdoor measurements where a nearby reflective boundary is present.

IMO, simply generating a set of deltas between the true quasi-anechoic FR and the in-room (or compromised outdoor) FR, as per section 4.3 of the CEA-2010 standard, is not sufficient. It doesn't take into account how boundary reflections can also affect the harmonic distortion profile, which in turn will impact the CEA-2010 readings if the distortion harmonics generated at a given test frequency are coincident and interacting with the standing wave pattern created by the nearby reflective boundary.

It's best to simply test as far away from any reflective boundaries as possible and eliminate the variable of CFs from the equation.
 
pbc

Audioholic
Thanks for the input Ed.

BTW, I read a bit of Brent's article, and here is where he appears to discuss the use of a correction curve:

4. I performed a completely fresh calibration procedure, necessitated by the switch to measuring at 2 meters. (I’d done this procedure before, but at 1 meter.) To do this, I packed up a 15-inch sealed-box subwoofer (the one I built for my 4-2-1 subwoofer shootout), a Topping TP30 amp, my Clio FW speaker measurement rig and a couple of battery packs, and drove to a nearby soccer field very early on Sunday morning (the only time of the week when traffic noise from Highway 101, about 1.6 miles away, wasn’t audible). This allowed me to measure the frequency response of the subwoofer at 2 meters with no obstructions that would affect the measurement. I then drove home and performed the exact same measurement in my backyard, in the exact spot where I intended to do the measurements. I divided the park measurement by the backyard measurement to get a “correction curve,” which I entered into the CEA-2010 software to compensate for the frequency response anomalies caused by reflections from nearby objects, such as my house.
I was probably confused by his response at AVS where he says

For the record, all of my published CEA-2010 measurements were done outside, with a "chamber correction" curve added into the software to compensate for the acoustical effects of nearby objects (i.e. my house). (You can do CEA-2010 measurements without a correction curve, but you need an environment with at least 39 feet of clearance in every direction for valid results to 20 Hz, as well as a 50 dB noise floor, as well as AC power. Not an easy combination to find.) If the measurements are done properly, with a correction curve (this process is outlined in the CEA-2010 document and explained in greater depth in the article I linked to), it does not matter whether they were done indoors or outdoors. If it does, the technician is not conforming to the CEA-2010 standard, or to general good practices for that matter.

Before I started doing CEA-2010, I did my subwoofer output measurements indoors with a correction curve. I've seen statements on AVS Forum that I measured such-and-such subwoofer in-room, with the implication that room effects were included in the result. This is incorrect. In every subwoofer measurement I've ever done, I've eliminated the acoustical effects of the surroundings, either through close-miking or a correction curve. Doing in-room measurements of subwoofers without a correction curve does not yield meaningful results - which you can easily see if you use TrueRTA or equivalent, do a close-miked measurement of a sealed sub, then do an in-room measurement. The result will be radically different.
I assume he's saying that the CEA2010 verbiage states one can do a measurement indoors with some sort of correction curve, which you'd have to create yourself. I.e., drag some proxy sub outdoors to a place where you have no boundaries within 39 feet of you, measure appropriately, then drag that same sub to your measuring location and remeasure, and then you have a correction curve you can apply to other subs you measure in that exact same location.

I thought he was implying CEA came up with some standard set of correction curves which seemed difficult to say the least.

Though applying any correction curve based on some proxy sub (which could react completely differently to room boundaries than another sub... I think?) sounds fraught with potential for error. But he's simply saying that the CEA-2010 standard permits it, provided you can come up with the chamber correction curve in accordance with the rules therein.
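For what it's worth, the "divide the park measurement by the backyard measurement" step is just a per-frequency subtraction once everything is in dB. A rough sketch with invented numbers (and note this corrects frequency response only):

```python
# Correction-curve arithmetic as described above: measure a proxy sub in a
# clear field, measure it again at the test spot, and the per-frequency
# difference becomes the correction applied to every later measurement there.
# All levels below are invented for illustration.

freqs       = [20, 25, 31.5, 40, 50, 63]            # Hz, 1/3-octave centers
park_db     = [92.0, 95.0, 97.0, 98.0, 98.5, 99.0]  # quasi-anechoic reference
backyard_db = [95.0, 93.5, 99.0, 96.0, 100.0, 98.0] # same sub at the test spot

# "Dividing" two magnitude responses is a subtraction in dB
correction_db = [p - b for p, b in zip(park_db, backyard_db)]

# Apply to a later measurement of a different sub at the same spot
measured_db  = [100.0, 101.0, 103.0, 104.0, 104.5, 105.0]
corrected_db = [m + c for m, c in zip(measured_db, correction_db)]

print(correction_db[0])  # -3.0 (the backyard boosts 20 Hz, so correct it down)
print(corrected_db[0])   # 97.0
```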
 
gene

Audioholics Master Chief
Administrator
Thanks for the input Ed.

BTW, I read a bit of Brent's article, and here is where he appears to discuss the use of a correction curve:



I was probably confused by his response at AVS where he says



I assume he's saying that the CEA2010 verbiage states one can do a measurement indoors with some sort of correction curve, which you'd have to create yourself. I.e., drag some proxy sub outdoors to a place where you have no boundaries within 39 feet of you, measure appropriately, then drag that same sub to your measuring location and remeasure, and then you have a correction curve you can apply to other subs you measure in that exact same location.

I thought he was implying CEA came up with some standard set of correction curves which seemed difficult to say the least.

Though applying any correction curve based on some proxy sub (which could react completely differently to room boundaries than another sub... I think?) sounds fraught with potential for error. But he's simply saying that the CEA-2010 standard permits it, provided you can come up with the chamber correction curve in accordance with the rules therein.
PBC, re-read Ed's response a bit more carefully. You can correct for frequency response, but you can NOT correct for THD-limited output. The distortion harmonics can interact with the standing waves from a nearby boundary, which in turn can affect the CEA-2010 score, artificially inflating or reducing it depending on whether the affected harmonics are boosted or suppressed at the mic location. Hence CEA testing needs to be done outdoors and away from interfering surfaces.
 
Ed Mullen

Manufacturer
Thanks for the input Ed.

BTW, I read a bit of Brent's article, and here is where he appears to discuss the use of a correction curve:



I was probably confused by his response at AVS where he says



I assume he's saying that the CEA2010 verbiage states one can do a measurement indoors with some sort of correction curve, which you'd have to create yourself. I.e., drag some proxy sub outdoors to a place where you have no boundaries within 39 feet of you, measure appropriately, then drag that same sub to your measuring location and remeasure, and then you have a correction curve you can apply to other subs you measure in that exact same location.

I thought he was implying CEA came up with some standard set of correction curves which seemed difficult to say the least.

Though applying any correction curve based on some proxy sub (which could react completely differently to room boundaries than another sub... I think?) sounds fraught with potential for error. But he's simply saying that the CEA-2010 standard permits it, provided you can come up with the chamber correction curve in accordance with the rules therein.
Yes, basic FR can be corrected with a before/after set of CFs. But correcting distortion-limited output is a different animal. The CEA-2010 pass/fail threshold is based on stair-stepped limits for each distortion harmonic, and each subwoofer has its own unique harmonic distortion profile at each test frequency. So when a nearby boundary reflection acoustically interacts with the harmonic distortion profile at a given test frequency, it will affect the CEA-2010 pass/fail result for that frequency.
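To make the stair-step idea concrete, here is a hedged sketch of that pass/fail logic. The per-harmonic limits below are illustrative placeholders, not the actual values from the CEA-2010 document:

```python
# Sketch of a stair-stepped harmonic-distortion check. The per-harmonic
# limits here are ILLUSTRATIVE placeholders, not the real CEA-2010 values.

HARMONIC_LIMITS_DB = {2: -10.0, 3: -15.0, 4: -20.0, 5: -20.0}
DEFAULT_LIMIT_DB = -30.0  # placeholder for higher-order harmonics

def passes(fundamental_db, harmonics_db):
    """harmonics_db maps harmonic order -> measured level in dB SPL.
    A burst passes only if every harmonic sits below its stair-step
    limit relative to the fundamental."""
    for order, level in harmonics_db.items():
        limit = HARMONIC_LIMITS_DB.get(order, DEFAULT_LIMIT_DB)
        if level - fundamental_db > limit:
            return False
    return True

# A boundary reflection that boosts the 2nd harmonic by a few dB can flip
# the verdict without the fundamental changing at all:
clean   = {2: 95.0, 3: 88.0}   # harmonics of a 110 dB fundamental
boosted = {2: 101.0, 3: 88.0}  # 2nd harmonic lifted by a standing wave
print(passes(110.0, clean))    # True
print(passes(110.0, boosted))  # False (-9 dB breaches the -10 dB limit)
```

This is exactly the failure mode being described: the FR correction fixes the fundamental, but a reflection that lands on a harmonic changes the pass/fail outcome anyway.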
 
pbc

Audioholic
Hmmm .... I guess my question is: is there something in the CEA-2010 material, as Brent suggests, that permits one to create and apply some sort of correction curve or profile and use it to come up with proper CEA-2010 distortion-limited SPL results?

I.e., his assertion that "this process is outlined in the CEA-2010 document and explained in greater depth in the article I linked to".

If in fact it simply says to do as Brent mentions above (use some proxy sub to get a correction curve), then clearly, as you guys are saying, this doesn't correct for distortion-limited output.

In which case either Brent is misinterpreting whatever it is that the CEA-2010 process specifies with respect to creating some sort of compensation curve, or it only applies to coming up with a basic frequency response and he is incorrectly also using it to come up with the max CEA-2010 numbers.

Just seems odd given he's implied in his article that he's using a method approved by Keele. Well, implied he's using a method that was reviewed by Dr. Hsu, whose method was reviewed/approved by Keele anyhow. Who's on first again?

So I'm just curious what process is specified in the CEA-2010 document for applying correction curves and when it is "correct" to do so, if there is in fact something specified there. :confused:

Edit: Not trying to be a PIA btw, just genuinely curious as, again, would be great to have more consistently comparable results to AH's and Ilkka's and hence a much larger database for people to work from so we can get away from "subwoofer X shook my room like no other ... plunged to the depths of oblivion ... provided the cleanest bass in my room ..." blah blah blah stuff.
 
gene

Audioholics Master Chief
Administrator
Hmmm .... I guess my question is: is there something in the CEA-2010 material, as Brent suggests, that permits one to create and apply some sort of correction curve or profile and use it to come up with proper CEA-2010 distortion-limited SPL results?

I.e., his assertion that "this process is outlined in the CEA-2010 document and explained in greater depth in the article I linked to".

If in fact it simply says to do as Brent mentions above (use some proxy sub to get a correction curve), then clearly, as you guys are saying, this doesn't correct for distortion-limited output.

In which case either Brent is misinterpreting whatever it is that the CEA-2010 process specifies with respect to creating some sort of compensation curve, or it only applies to coming up with a basic frequency response and he is incorrectly also using it to come up with the max CEA-2010 numbers.

Just seems odd given he's implied in his article that he's using a method approved by Keele. Well, implied he's using a method that was reviewed by Dr. Hsu, whose method was reviewed/approved by Keele anyhow. Who's on first again?

So I'm just curious what process is specified in the CEA-2010 document for applying correction curves and when it is "correct" to do so, if there is in fact something specified there. :confused:

Edit: Not trying to be a PIA btw, just genuinely curious as, again, would be great to have more consistently comparable results to AH's and Ilkka's and hence a much larger database for people to work from so we can get away from "subwoofer X shook my room like no other ... plunged to the depths of oblivion ... provided the cleanest bass in my room ..." blah blah blah stuff.
Section 4.3 of CEA 2010 is hogwash. Nobody would dream of using it to justify doing max output tests indoors. You can't get reliable distortion results and hence accurate max output results. Brent doesn't publish distortion results which makes his erratic #s suspect.
 
Ed Mullen

Manufacturer
Hmmm .... I guess my question is: is there something in the CEA-2010 material, as Brent suggests, that permits one to create and apply some sort of correction curve or profile and use it to come up with proper CEA-2010 distortion-limited SPL results?

I.e., his assertion that "this process is outlined in the CEA-2010 document and explained in greater depth in the article I linked to".

If in fact it simply says to do as Brent mentions above (use some proxy sub to get a correction curve), then clearly, as you guys are saying, this doesn't correct for distortion-limited output.

In which case either Brent is misinterpreting whatever it is that the CEA-2010 process specifies with respect to creating some sort of compensation curve, or it only applies to coming up with a basic frequency response and he is incorrectly also using it to come up with the max CEA-2010 numbers.

Just seems odd given he's implied in his article that he's using a method approved by Keele. Well, implied he's using a method that was reviewed by Dr. Hsu, whose method was reviewed/approved by Keele anyhow. Who's on first again?

So I'm just curious what process is specified in the CEA-2010 document for applying correction curves and when it is "correct" to do so, if there is in fact something specified there. :confused:

Edit: Not trying to be a PIA btw, just genuinely curious as, again, would be great to have more consistently comparable results to AH's and Ilkka's and hence a much larger database for people to work from so we can get away from "subwoofer X shook my room like no other ... plunged to the depths of oblivion ... provided the cleanest bass in my room ..." blah blah blah stuff.
Keele's software does have the ability to enter 'chamber correction data'. In theory, the CEA-2010 data set for a subwoofer measured in a non-reflective environment should exactly match the CEA-2010 data set for the same subwoofer measured in a reflective environment - provided the chamber correction feature is working correctly. In my experience, the two data sets diverge enough that I would simply recommend avoiding reflective environments entirely; they add another variable and level of complexity, and the ability to compensate for them is an imperfect science.
 
Sputter

Junior Audioholic
Keele's software does have the ability to enter 'chamber correction data'. In theory, the CEA-2010 data set for a subwoofer measured in a non-reflective environment should exactly match the CEA-2010 data set for the same subwoofer measured in a reflective environment - provided the chamber correction feature is working correctly. In my experience, the two data sets diverge enough that I would simply recommend avoiding reflective environments entirely; they add another variable and level of complexity, and the ability to compensate for them is an imperfect science.
It would definitely be simpler. The added complexity just leaves a wider margin for error.
 

cjwhitehouse

Audiophyte
My personal test rig can only output about 1200 mV before it clips and I always run it well below that point.

The input sensitivity of the amplifier is the amount of voltage required at the input stage to drive the amplifier to full power at max gain setting. Typical input sensitivity for a subwoofer plate amp is in the 150 mV to 400 mV range.

That is not the same as the input overload threshold, which is always far higher than the input sensitivity. Most subwoofer plate amps have an input stage overload threshold of around 2V, which is much higher than what almost any test rig or consumer AVR will output.

Our Sledge 800/1000 platforms have an input voltage switch with a high level setting (basically a 6 dB pad) that allows up to 4V before overloading the input stage.

So while it's certainly conceivable that the input stage could be overloaded during testing, in reality the possibility is extremely remote due to the generous overload voltage limits on subwoofer plate amps.

As a general SOP, we test powered subwoofers at/near max gain, where input sensitivity is the most favorable. We determine the input sensitivity of the amp and the overload threshold before testing it. Then we ensure the test rig is outputting enough voltage to drive the amp to full power, but not enough voltage to overload the input stage (which again is pretty darn hard to do anyway with most test rigs).
Hi Ed,

Long time, no speak. :)

By way of comparison, my own test rig is capable of 1.943 V RMS at -10 dBV trim and 7.62 V RMS at +4 dBu trim via the Lynx 2B sound card. Having tested more Velodyne subs than most, and using them in my own home system, I am all too aware of the potential for overloading them inadvertently.

The gain structure on the Velodyne DD subs is rather high, particularly when driven via the XLR inputs, and the overload threshold is not very generous. By my own measurements, a form of clipping sets in around 1.5 V RMS. The odd thing is that this distortion starts to creep in below 20 Hz first. As you increase the drive level by a few dB, the distortion creeps up the frequency scale, maintaining a very definite descending slope with increasing frequency, which suggests to me this is not simple clipping but possibly some quirk in the DSP code at high amplitudes. I have a question about this open with Chris Hagen at Velodyne currently.

The upshot of this is that I am acutely aware of the potential for overload when testing these subs. The sloping nature of the distortion as it starts to appear makes it less easy to spot in swept sine tests as it simply becomes a contributing factor in the background. Spotting it with tone-burst testing would be even harder as it will typically just appear as rather high levels of odd-order distortion. Something for the reviewer/tester to beware of is all I am saying.

However, this issue can also affect the general user. By their nature these subs tend to end up in high-end systems, often partnered with equipment that generates signal levels more typical of the professional arena. For example, my own Theta Casablanca DACs are specified to deliver up to 20 V RMS via XLR. If you connect a DD sub directly to this and match levels, you typically end up with the DD volume set to something like 3 to compensate for the very high signal level from the processor. In this scenario it becomes rather easy to clip the subwoofer on loud material, which is a pity after spending all that money! A higher overload margin and lower gain on these subs would be a good idea, particularly via the XLR inputs.

My advice to anyone using these subs in such a high end system would be to ensure you don't set the DD volume lower than 15 and match the speaker levels by dropping the level of the subwoofer output in the processor setup menus only. If you currently have the DD volume set in single figures, then this overload issue may be a problem for you that you may not even be aware of.
 
Ricci

Bassaholic
Typically I have found that you will get a slight difference in maximum output before input overload, depending on the gain setting of the subwoofer. For example, with CEA-2010-type burst signals it is easy to overload the input stage while trying to get the subwoofer to produce every last 0.1 dB it can. This usually shows up as increasing distortion without much, if any, increase in output at the fundamental center frequency. If you keep pushing the level higher, the limiter will not allow the sub to go any louder, but the distortion will keep increasing and will jump as you overload the input. You may increase the level quite a bit to net only a few tenths of a decibel increase in output while increasing THD notably.

This begs the question: is it better to report the absolute maximum output, or a slightly lower output with less distortion? CEA-2010 allows a rather large amount of THD before the output is considered failing, and it is generally held that distortion in the bass range is relatively inoffensive. Also, the intent of the program is to allow a safe and easy way to assess maximum useful dynamic output potential in 1/3-octave bands while also looking at the THD profile. Since that is the case, it makes most sense to report whatever maximum output can be reached as long as the distortion comes in under the thresholds. I have found that subwoofers usually produce slightly higher burst output and the least distortion at maximum gain, which requires less input voltage.

However in practice this is not how you would want to operate a subwoofer at home as this maximizes system noise.

Ahhh gain structure...There is a surprising amount of final system performance tied to this and many are unaware of how much.
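Ricci's diminishing-returns observation can be put in rough numbers: once the limiter or input stage is the bottleneck, each extra dB of drive buys only a fraction of a dB of output. A sketch with invented sweep data (the 0.25 dB-per-dB cutoff is arbitrary, not part of any standard):

```python
# Illustration of the diminishing-returns behavior described above: past
# input overload, more drive buys almost no output at the fundamental,
# only more distortion. All data points are invented.

# (input level dB, output at fundamental dB SPL, THD %)
sweep = [
    (-6.0, 104.0,  4.0),
    (-4.0, 106.0,  6.0),
    (-2.0, 107.5, 10.0),
    ( 0.0, 108.0, 18.0),  # limiter engaged: +2 dB in, +0.5 dB out
    ( 2.0, 108.2, 30.0),  # input stage overloading: distortion jumps
]

def useful_max(sweep, min_return_db_per_db=0.25):
    """Return the output level reached just before gains fall below
    min_return_db_per_db of output per dB of extra input."""
    best = sweep[0][1]
    for (in0, out0, _), (in1, out1, _) in zip(sweep, sweep[1:]):
        if (out1 - out0) / (in1 - in0) < min_return_db_per_db:
            return best
        best = out1
    return best

print(useful_max(sweep))  # 108.0 (the last 2 dB of drive bought only 0.2 dB)
```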
 
Irvrobinson

Audioholic Spartan
The gain structure on the Velodyne DD subs is rather high, particularly when driven via the XLR inputs, and the overload threshold is not very generous. By my own measurements, a form of clipping sets in around 1.5V RMS.

[snip]

However, this issue can also affect the general user. By their nature these subs tend to end up in high-end systems, often partnered with equipment that generates signal levels more typical of the professional arena. For example, my own Theta Casablanca DACs are specified to deliver up to 20 V RMS via XLR. If you connect a DD sub directly to this and match levels, you typically end up with the DD volume set to something like 3 to compensate for the very high signal level from the processor. In this scenario it becomes rather easy to clip the subwoofer on loud material, which is a pity after spending all that money! A higher overload margin and lower gain on these subs would be a good idea, particularly via the XLR inputs.

My advice to anyone using these subs in such a high end system would be to ensure you don't set the DD volume lower than 15 and match the speaker levels by dropping the level of the subwoofer output in the processor setup menus only. If you currently have the DD volume set in single figures, then this overload issue may be a problem for you that you may not even be aware of.
Excellent information. I've always wondered why the owners of DD18+ and DD15+ I've talked with end up setting their sub's volume to the 8-15 range while Velodyne defaults the volume level to 30. For such a well-thought-out design this does seem to fall into the "What were they thinking?" category.

My amps supposedly reach full output at 1.44 V, so you would think they would make a good match for the DD18+, but to get adequate volume from my Benchmark HDR's pre-amp section via the XLR outputs I have to set the jumpers at -10 dB, which puts the estimated output level for normal listening at about 6 to 8 dBu, or ~1.5 V RMS, which makes no sense. Perhaps I need to pad down the input level to the DD18+ and turn up its volume from the current setting of 12. (This is a music system, so massive bass peaks a la movie special effects are rare.)
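For anyone checking that arithmetic, the dBu/volts conversion is straightforward (0.775 V RMS is the standard 0 dBu reference):

```python
import math

# dBu is referenced to 0.775 V RMS; dBV to 1.0 V RMS.
def dbu_to_vrms(dbu):
    return 0.775 * 10 ** (dbu / 20)

def vrms_to_dbu(v):
    return 20 * math.log10(v / 0.775)

# The 6-8 dBu normal-listening estimate above works out to:
print(round(dbu_to_vrms(6), 2))  # 1.55 V RMS
print(round(dbu_to_vrms(8), 2))  # 1.95 V RMS
# And an amp with 1.44 V input sensitivity corresponds to about:
print(round(vrms_to_dbu(1.44), 1))  # 5.4 dBu
```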
 
pbc

Audioholic
Ahhh gain structure...There is a surprising amount of final system performance tied to this and many are unaware of how much.
Maybe AH should write up a dummies' guide to setting gain structure? I recall starting a thread about it over at AVS and never coming to a conclusion (at one point an expensive o-scope was required, and then building some sort of attenuation circuit instead of the o-scope to use with my Mobile-Pre/REW setup to test for signal distortion, as a VMM couldn't do this). Mind you, that was probably more involved, as it included an outboard amp and a DCX.

Just got too complicated. :eek:
 
gp4Jesus

Audiophyte
Bag End subs

They make (or did make) both a 15" and an 18" with an F3 down to 8 Hz, for way less than 1/3 the Velodyne's price. And, from what I recall, more musical, accurate, and clean. BTW: powered, and plenty of it. I haven't read every post, but from the few I read, if I'm ever in the market for a money-no-object sub, I'll hunt down Bag End, buy two of theirs, and bank the rest.

Cheers tony
 
S

screen_x

Audiophyte
Fire risk?

I couldn't help but notice the white foam inside the cabinet (in the design section).

Isn't that too close to the amp when it's bolted/screwed on? If the amp runs hot, wouldn't the foam catch fire? Or is it made of fireproof material? :confused:
 
Irvrobinson

Audioholic Spartan
Hi Ed,

Long time, no speak. :)

By way of comparison, my own test rig is capable of 1.943 V RMS at -10 dBV trim and 7.62 V RMS at +4 dBu trim via the Lynx 2B sound card. Having tested more Velodyne subs than most, and using them in my own home system, I am all too aware of the potential for overloading them inadvertently.

The gain structure on the Velodyne DD subs is rather high, particularly when driven via the XLR inputs, and the overload threshold is not very generous. By my own measurements, a form of clipping sets in around 1.5 V RMS. The odd thing is that this distortion starts to creep in below 20 Hz first. As you increase the drive level by a few dB, the distortion creeps up the frequency scale, maintaining a very definite descending slope with increasing frequency, which suggests to me this is not simple clipping but possibly some quirk in the DSP code at high amplitudes. I have a question about this open with Chris Hagen at Velodyne currently.

The upshot of this is that I am acutely aware of the potential for overload when testing these subs. The sloping nature of the distortion as it starts to appear makes it less easy to spot in swept sine tests as it simply becomes a contributing factor in the background. Spotting it with tone-burst testing would be even harder as it will typically just appear as rather high levels of odd-order distortion. Something for the reviewer/tester to beware of is all I am saying.

However, this issue can also affect the general user. By their nature these subs tend to end up in high-end systems, often partnered with equipment that generates signal levels more typical of the professional arena. For example, my own Theta Casablanca DACs are specified to deliver up to 20 V RMS via XLR. If you connect a DD sub directly to this and match levels, you typically end up with the DD volume set to something like 3 to compensate for the very high signal level from the processor. In this scenario it becomes rather easy to clip the subwoofer on loud material, which is a pity after spending all that money! A higher overload margin and lower gain on these subs would be a good idea, particularly via the XLR inputs.

My advice to anyone using these subs in such a high end system would be to ensure you don't set the DD volume lower than 15 and match the speaker levels by dropping the level of the subwoofer output in the processor setup menus only. If you currently have the DD volume set in single figures, then this overload issue may be a problem for you that you may not even be aware of.
I've been meaning to update this thread for some time and just never got around to it.

To recap, for all Velodyne Digital Drive Plus sub owners, regardless of sub size, there are three sets of inputs on each sub. One set for speaker level inputs, one set for single-ended line level, and one set for XLR line level. The speaker-level inputs and the single-ended inputs have dedicated input level controls, so input stage overload is not a factor, but the XLR inputs do not have level controls.

For those of us driving the DD Plus subs with full-range balanced outputs from our pre-amps, input overload is a very real factor. As cjwhitehouse noted, many owners running XLR cables are using sub volume levels of "5" to "10", when Velodyne ships the sub with a default of "30". I called Velodyne and asked if there was an XLR input sensitivity fix, and they never responded.

Some time ago it occurred to me that the fix was simple - use XLR line-level attenuators ahead of the sub's XLR inputs. Fixed values (-10 dB, -20 dB, etc.) are available from Parts Express for about $10 each, and Audio-Technica makes some more expensive adjustable ones. I tried the -10 dB versions from Parts Express.

What a difference! I clearly remember that first listening session with the attenuators in line, and it was several months ago. Prior to the attenuators I was running the DD18 Plus's volume control at "8", and now I use "25". So what? Well, two things changed. First, the granularity of volume adjustment is much finer. Before, "7" was too little and "9" sounded bass-heavy; now I can fine-tune the bass level from "20" to "30" depending on the recording, and the subtlety of effect is very nice. Second, the sub's EQ is more effective and appears to be more accurate when I do in-room measurements. I don't know for sure that input overload was causing the problem, but the attenuators made a significant, easily audible and measurable improvement in the sub's usability and EQ performance.

As I said, most users won't care because they'll probably use the single-ended inputs. But if you are using the XLR inputs on Digital Drive Plus subs with full-range pre-amp outputs I highly recommend trying XLR attenuators.
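The effect of a fixed pad on the overload margin is easy to sanity-check. A sketch with illustrative numbers; the 1.5 V figure is cjwhitehouse's measured onset of clipping from earlier in the thread, not a published spec:

```python
# Effect of a fixed in-line XLR attenuator, as in the fix described above.
# The 1.5 V overload point is a figure measured by one owner, not a spec,
# and the 4 V source level is an invented "hot balanced preamp" example.

OVERLOAD_VRMS = 1.5

def after_pad(v_in, pad_db):
    """Voltage after a fixed attenuator of pad_db (e.g. 10 for a -10 dB pad)."""
    return v_in * 10 ** (-pad_db / 20)

source_peak = 4.0  # V RMS from a hot full-range balanced output
print(after_pad(source_peak, 0) > OVERLOAD_VRMS)   # True  (clips unpadded)
print(after_pad(source_peak, 10) > OVERLOAD_VRMS)  # False (~1.26 V after -10 dB)
```

Which matches the listening result: the pad keeps the input stage out of overload, and the sub's volume control lands back in its usable range.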
 