Getting there, but not very well designed. This kind of testing is OK for general subjective shoot-outs, but the question here requires a higher degree of control and skepticism - if you're asking *if* something exists, that calls for more rigorous methods.
There are no trials where a cord is tested against itself, for example: do similar "preferences" emerge on those trials? We don't know. Also, this really isn't very much data: each comparison gets only a single trial, which means there's no counterbalancing of presentation order (A then B vs. B then A), and we can't tell whether these alleged preferences are consistent from trial to trial.
Also, as the authors say: "Since no statistical tests of significance have been applied to the results it is not possible to demonstrate that one set of cables was found to be superior to any others in an objective sense." (Short answer: no, not statistically significant).
That said, those look like some pretty darn big differences in capacitance, and they say they measured frequency response but never report the results. Any change in frequency response in a wire matters a great deal, so it would be nice if they had mentioned it. These are things that can definitely affect sound, but they're *faults* pretty much by definition.
I wish folks would stick to simple A-B-X testing in cases like these; it's hard to mess up. If you don't understand experimental design and statistical testing, trying something more complicated just creates a mess.
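For anyone curious what "statistically significant" looks like for an A-B-X test: you just count correct identifications and ask how likely that score is by pure guessing (a coin flip per trial). Here's a minimal sketch using only the standard library; the 12-of-16 numbers are a made-up example, not results from the article discussed above.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of scoring
    at least `correct` out of `trials` by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical listener: 12 correct out of 16 A-B-X trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # -> p = 0.0384, below the conventional 0.05 threshold
```

Note how fast the evidence evaporates with fewer trials: 3 out of 4 correct gives p ≈ 0.31, which is why a single trial per comparison, as in the article, can't demonstrate anything.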