Class g (and h) are/were just methods to reduce heat in conventional a/b amps. They use the same class a/b output stages, but the power supplies are dual rail (or variable, in the case of 'class h'). As mentioned, such supplies are of necessity more complex, with much higher parts count and increased chance of failure.
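To make the distinction concrete, here is a minimal sketch of the two schemes. All voltages and thresholds are made-up illustrative numbers, not specs from either amp: a class g stage sits on a low rail and commutates to a high rail only on peaks, while a class h supply tracks the signal, holding the rail just above it.

```python
# Hypothetical rail voltages for illustration only.
LOW_RAIL_V = 35.0      # low-voltage rail, able to run continuously
HIGH_RAIL_V = 70.0     # high-voltage rail, engaged only on signal peaks
MARGIN_V = 2.0         # headroom kept between signal and active rail

def class_g_rail(signal_v: float) -> float:
    """Class g: hard switch (commutation) between two fixed rails."""
    if abs(signal_v) > LOW_RAIL_V - MARGIN_V:
        return HIGH_RAIL_V   # peak: commutate to the high rail
    return LOW_RAIL_V        # normal program level: stay on the low rail

def class_h_rail(signal_v: float) -> float:
    """Class h: variable rail that tracks the signal envelope."""
    return min(max(abs(signal_v) + MARGIN_V, LOW_RAIL_V), HIGH_RAIL_V)
```

Either way, the output devices see a smaller voltage drop most of the time, which is where the heat savings over a fixed-rail a/b stage come from.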
Also, it throws a wrench into amp comparisons when using continuous power as the metric. A class g amp's power supply will have a low-voltage rail that can run 24/7, and a high-voltage rail that can operate for a second or two. The standard 'continuous power' rating reflects the lower-voltage rail's limits, whereas with dynamic program material the class g amp might punch above what its continuous rating suggests. It's a ratings game that NAD, as one example, has historically engaged in, touting their IHF-rated power specs. I have no idea what Arcam specifies or what salesmen believe, but amp performance is all reducible to the numbers, if only that info were disclosed. In the real world, listening to music, I doubt there's a hill of beans' difference between those amps, maybe a dB or two of headroom?
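That "dB or two of headroom" is easy to put a number on: headroom in dB is just 10·log10(dynamic power / continuous power). The wattage figures below are made up purely to show the scale, not actual ratings for either amp:

```python
import math

continuous_w = 100.0   # hypothetical 'continuous' rating, limited by the low rail
dynamic_w = 200.0      # hypothetical short-term power from the high rail

# Headroom in dB between dynamic and continuous power.
headroom_db = 10 * math.log10(dynamic_w / continuous_w)
print(f"{headroom_db:.1f} dB")   # → 3.0 dB
```

So even doubling the available short-term power only buys about 3 dB, which is why a spec-sheet gap that looks large in watts can be modest in audible terms.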
Modern amp tech renders such approaches obsolete anyway, but the OP was asking about class g. No, if one can actually hear the rails commutate (the primary difference between the Arcam and the Monolith, and something that happens at power levels where speaker-induced distortion wildly dwarfs that from the electronics), they either have superhuman senses or delusions of such powers.
So, for almost twice the ducats, the Arcam might, might, approach the absolute power of the Monolith for dynamic signals. If both amps can deliver similar amounts of clean power, why go with the more expensive, more complex option? That's a $1400 difference that could go toward improvements that are actually audible.