That definitely sounds like a sensible compromise.
I'm good with or without it; I've been mapping all these years without it just fine, so pretty much anything is an improvement.
Still just out of curiosity, I'm wondering:
1. Doesn't the renderer already perform real-time visibility calculations for all the brushes/entities in the 3D view?
2. Considering the above, is the following idea too crazy?
Suppose you have two framebuffers: one with the brushes/entities, and another with just the six faces of a skybox. It seems possible that, as you compose the final frame, you replace the sky-textured pixels in the first buffer with whatever was rendered in the second buffer. In other words, you layer the first buffer onto the second, treating the sky textures as transparent. The final effect would be the skybox painted "into" the sky brushes while still keeping a void area.
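Roughly what I mean, as a toy numpy sketch (the buffer names, sizes, and colors are all made up; a real renderer would do this on the GPU, e.g. with a stencil or alpha mask):

```python
import numpy as np

# Two hypothetical 4x4 "framebuffers": an RGB scene buffer, plus a boolean
# mask marking which pixels were drawn with the sky texture.
H, W = 4, 4
scene = np.zeros((H, W, 3), dtype=np.uint8)
scene[..., 0] = 200                    # world geometry: reddish
sky_mask = np.zeros((H, W), dtype=bool)
sky_mask[0, :] = True                  # top row was painted with the sky texture

skybox = np.zeros((H, W, 3), dtype=np.uint8)
skybox[..., 2] = 255                   # skybox buffer: pure blue

# Composite: wherever the scene shows the sky texture, take the skybox pixel.
final = np.where(sky_mask[..., None], skybox, scene)

print(final[0, 0])   # sky pixel -> skybox color
print(final[1, 0])   # world pixel -> scene color
```

The `np.where` line is the whole idea: the sky texture acts as a cut-out through which the second buffer shows.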
The skybox itself is pretty simple; after all, it doesn't require position calculations, only camera rotation and maybe the field of view.
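That "rotation only" part is easy to sketch: for the skybox pass you'd take the regular view matrix and zero out its translation, so the box always stays centered on the camera (the matrix layout here is an assumption; FOV would come in through the projection matrix):

```python
import numpy as np

# Hypothetical 4x4 view matrix: rotation in the upper-left 3x3,
# camera translation in the last column.
view = np.eye(4)
view[:3, 3] = [10.0, -5.0, 3.0]   # pretend the camera has moved

# Skybox pass: drop the translation so only orientation affects
# which part of the sky is visible.
sky_view = view.copy()
sky_view[:3, 3] = 0.0
```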
This is entirely theoretical and based on my extremely limited and outdated knowledge of graphics rendering, so I'm prepared to see this post bashed when I return.