Underwater image enhancement remains challenging in computer vision due to its complex and diverse degradation phenomena. Existing state-of-the-art methods mainly focus on spatial-domain modelling and achieve strong results, but they fall short in two key aspects: 1) underutilization of frequency-domain features, and 2) insufficient exploitation of the relationship between positive and negative samples during training. To overcome these limitations, we propose a wavelet-driven framework called WaveUIE. Specifically, our method decomposes images into high- and low-frequency components via the wavelet transform. To better mine the features of these two components, a high-frequency enhancement block (HFEBlock) is introduced to restore fine textures in the high-frequency component. In addition, we introduce a low-frequency enhancement block (LFEBlock) to effectively eliminate color casts and blur in the low-frequency features. Moreover, we design a dynamic frequency fusion block (DFFBlock), which orchestrates cross-frequency interactions by first calibrating high-frequency data with the enhanced low-frequency features, and then synthesising them into coherent visual outputs. Finally, we design a wavelet fine-grained contrastive policy (WFCPolicy) that decomposes images into four sub-bands and imposes contrastive constraints on the enhanced result (anchor) within the sub-band space. These constraints pull the anchor closer to the ground truth (positive samples) while pushing it away from the original degraded image (negative samples), thereby explicitly leveraging the degradation features present in different frequency sub-bands to improve enhancement performance. Experimental evaluations demonstrate state-of-the-art performance across multiple benchmarks, with significant improvements in both SSIM and UIQM metrics over existing methods.
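The two ideas the abstract combines, splitting an image into four wavelet sub-bands and applying a pull-toward-positive / push-from-negative constraint per sub-band, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: it uses a single-level Haar transform, a simple ratio-style contrastive term, and illustrative function names (`haar_dwt2`, `subband_contrastive_loss`) that do not come from the WaveUIE code.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform.

    Splits an H x W image (H, W even) into four sub-bands:
    LL (low-frequency approximation) and LH/HL/HH (high-frequency details).
    """
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def subband_contrastive_loss(enhanced, clean, degraded, eps=1e-8):
    """Ratio-style contrastive term computed per wavelet sub-band.

    Pulls the enhanced image (anchor) toward the ground truth (positive)
    and pushes it away from the degraded input (negative) in each of the
    four Haar sub-bands; a smaller value is better.
    """
    loss = 0.0
    for anc, pos, neg in zip(haar_dwt2(enhanced),
                             haar_dwt2(clean),
                             haar_dwt2(degraded)):
        d_pos = np.mean(np.abs(anc - pos))  # distance to positive sample
        d_neg = np.mean(np.abs(anc - neg))  # distance to negative sample
        loss += d_pos / (d_neg + eps)
    return loss
```

Computing the constraint in sub-band space rather than on raw pixels lets color-cast errors (concentrated in LL) and texture errors (concentrated in LH/HL/HH) be penalized separately, which is the motivation the abstract gives for the fine-grained policy.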
BibTeX Code Here