But that's the point: shaders always operate on geometry ("run[s] for each pixel drawn of an object") and never directly on a bitmap buffer. How would you, for instance, categorize the light accumulation pass of a light pre-pass/deferred renderer, where you render bounding geometry (probably low-poly spheres) for your lights and project your G-Buffer content onto it to do the lighting calculations in screen space?
What you call a "post-processing effect" is nothing more than rendering a full-screen quad and projecting some buffer content onto it. But you can achieve the very same effect with arbitrary geometry, for instance bounding geometry, to mask certain parts of the scene.
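To make that concrete, here is a minimal GLSL sketch of such a "post-processing" pass: the vertex stage just emits a full-screen quad, and the fragment shader samples the previously rendered scene and applies a trivial grayscale effect. The names `uSceneColor` and `vUV` are hypothetical, not from the original post:

```glsl
#version 330 core
// "Post-processing" pass: runs once per covered pixel of the
// full-screen quad, exactly like any other geometry draw.
in vec2 vUV;                     // interpolated quad texture coordinates
out vec4 fragColor;
uniform sampler2D uSceneColor;   // color buffer from the finished scene pass

void main() {
    vec3 color = texture(uSceneColor, vUV).rgb;
    // Simple grayscale as a stand-in for any screen-space effect.
    float luma = dot(color, vec3(0.299, 0.587, 0.114));
    fragColor = vec4(vec3(luma), 1.0);
}
```

Swap the full-screen quad for light bounding volumes and the same shader structure becomes the deferred lighting pass described above.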
I'm not saying it is essentially wrong to speak of "pp-effects", but for a not-so-experienced graphics programmer this distinction can easily lead to wrong assumptions about how a described multi-pass technique actually works. ;)