I hate to be a buzzkill, but this task is way more complicated than you might think. For starters, you don't want to actually send bitmaps, as they are large and will clog up your network very, very fast. Instead, you probably want to live-encode the frames into a video stream and send that, since video codecs give you adaptive quality if needed, but can also cope with packet loss much, much better. For a quick and dirty start, anything keyframe-based will probably work, although h.264 is probably your best bet since there is plenty of documentation and code for it around.
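
To make that concrete, here's a minimal sketch of the encoder side, assuming a POSIX system and an ffmpeg build with libx264 on the PATH (the resolution, framerate and destination address are placeholders): spawn ffmpeg as a child process, feed it raw frames on stdin, and let it do the low-latency h.264 encoding and the streaming in one go.

```c
/* Sketch: launch ffmpeg reading raw BGRA frames from stdin and
 * streaming low-latency h.264 over UDP as MPEG-TS.
 * Assumes ffmpeg with libx264 is installed; dest is e.g. "10.0.0.2:5000". */
#include <stdio.h>

FILE *start_encoder(int w, int h, int fps, const char *dest)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd,
             "ffmpeg -f rawvideo -pix_fmt bgra -s %dx%d -r %d -i - "
             "-c:v libx264 -preset ultrafast -tune zerolatency "
             "-f mpegts udp://%s",
             w, h, fps, dest);
    /* Write each frame to the returned pipe with fwrite(). */
    return popen(cmd, "w");
}
```

The -preset ultrafast and -tune zerolatency switches trade compression ratio for latency, which is the trade you want for a live screen stream.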

That's not to say you get around grabbing your screen, you still have to do that, but you will probably want to dump the result directly into something like ffmpeg to get the video encoding going, then send the encoded frames over a socket and reassemble the video on the other end. And don't even bother converting the backbuffer to a BMP or anything like that: you want raw access to the pixels and should do as little as possible with them, to avoid the cost of shuffling the data between formats. So ideally your backbuffer is already in a pixel format that ffmpeg supports directly, and preferably you have a way to read it back asynchronously without blocking the rendering (so probably triple-buffer everything or so).
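
As a sketch of what that grab could look like with an OpenGL backbuffer (assuming GLEW and the encoder pipe from above; nothing here is tied to a particular engine): a small ring of pixel buffer objects lets glReadPixels return immediately while the actual GPU-to-CPU copy happens in the background, and you map the oldest buffer a couple of frames later to hand the raw pixels to ffmpeg.

```c
/* Sketch: asynchronous backbuffer readback via a ring of PBOs,
 * shipping raw BGRA pixels straight into the ffmpeg pipe. */
#include <GL/glew.h>
#include <stdio.h>

#define NUM_PBOS 3  /* triple buffer: read frame N while mapping N-2 */

static GLuint pbos[NUM_PBOS];
static int frame_index = 0;

void init_pbos(int w, int h)
{
    glGenBuffers(NUM_PBOS, pbos);
    for (int i = 0; i < NUM_PBOS; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void grab_frame(int w, int h, FILE *encoder)
{
    int cur  = frame_index % NUM_PBOS;
    int prev = (frame_index + 1) % NUM_PBOS;  /* oldest PBO in the ring */

    /* Kick off an async readback into the current PBO; with a pack
     * buffer bound, glReadPixels returns without waiting for the copy. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[cur]);
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    /* Map the oldest PBO; its copy finished two frames ago, so this
     * shouldn't stall. (The first couple of frames come out blank
     * until the ring is full.) */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[prev]);
    void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels) {
        fwrite(pixels, 1, (size_t)w * h * 4, encoder);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++frame_index;
}
```

BGRA is requested because that's usually the native backbuffer layout, so the driver doesn't have to swizzle anything and ffmpeg takes it as-is.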

The receiving side, on the other hand, is much simpler, although I'm not sure the Raspberry Pi has enough CPU power to do the video decoding in software. Probably not, but the Pi does have a hardware h.264 decoder (some other codecs, like MPEG-2, need a paid license key). Beyond that, you pretty much just wait for a keyframe to come along and then start assembling the video frame by frame.
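
For testing, the receiver can be as dumb as handing the stream to a player. Something like the sketch below works on a desktop; note that ffplay decodes in software, so on the Pi you'd swap in a player that uses the hardware decoder (e.g. omxplayer on the older boards). The port is a placeholder and has to match the sender's udp:// target.

```c
/* Sketch: receive the MPEG-TS/h.264 stream and play it with as
 * little buffering as possible. Assumes ffplay is on the PATH. */
#include <stdlib.h>

int main(void)
{
    return system("ffplay -fflags nobuffer -flags low_delay "
                  "udp://0.0.0.0:5000");
}
```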

