How are screenspace normals created, and is this step before or after using normal maps or bump maps?
They are created after the normal maps are applied. In deferred rendering, you write to the various buffers (diffuse, normal, depth, etc.) in fragment shaders, and by that point the normal maps have already been sampled.
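To make that ordering concrete, here is a minimal CPU-side sketch of what a deferred geometry pass does per fragment. Everything here is illustrative: names like `geometryPass` and `sampleNormalMap` are hypothetical stand-ins, and the real implementation is a fragment shader writing to multiple render targets.

```cpp
struct Vec3 { float x, y, z; };

struct GBufferTexel {
    Vec3  diffuse;  // albedo
    Vec3  normal;   // view-space normal, normal map already applied
    float depth;    // view-space depth
};

// Stand-ins for the texture lookups a real shader would perform.
Vec3 sampleDiffuseMap() { return {0.8f, 0.6f, 0.4f}; }
Vec3 sampleNormalMap()  { return {0.0f, 0.0f, 1.0f}; }  // tangent space, decoded

// Placeholder for the TBN perturbation shown in the next sketch.
Vec3 perturb(Vec3 viewNormal, Vec3 /*tangentNormal*/) { return viewNormal; }

GBufferTexel geometryPass(Vec3 interpolatedViewNormal, float viewDepth) {
    // The normal map is read first...
    Vec3 tangentNormal = sampleNormalMap();
    // ...used to perturb the interpolated view-space normal...
    Vec3 finalNormal = perturb(interpolatedViewNormal, tangentNormal);
    // ...and only then is anything written to the G-buffer.
    return { sampleDiffuseMap(), finalNormal, viewDepth };
}
```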
If it's done before using normal maps, how are normal maps going to be affected by the screenspace-ness of the normals,...
If you are referring to normal maps on models, they can stay the same. You can also compress them if you want, but that is independent of how you write to the normal buffer.
...and if after, how can you justify using a Model-View matrix on all fragments rather than on vertices? Isn't that a lot more calculation?
Your model's normals are transformed per vertex and interpolated during rasterization into per-fragment values. Normal maps are read per fragment, but those are just texture lookups, and the sampled values are applied to the camera-space normals produced by the earlier pipeline stages. A per-fragment model-view multiplication is therefore unnecessary.
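Here is a sketch of that per-fragment step, under the assumption that the vertex stage already transformed the normal, tangent, and bitangent into camera (view) space. The names (`perturbNormal`, `mapSample`) are mine, not from any particular engine. Note that the only per-fragment work is a texture lookup plus a 3×3 rotation built from interpolated vectors, not a model-view multiply.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// n, t, b: view-space normal, tangent, and bitangent interpolated from the
// vertex stage. mapSample: the tangent-space vector decoded from the normal
// map (already remapped from [0,1] texel values to [-1,1]).
Vec3 perturbNormal(Vec3 n, Vec3 t, Vec3 b, Vec3 mapSample) {
    // TBN * mapSample: rotate the tangent-space perturbation into view space.
    Vec3 out = {
        t.x * mapSample.x + b.x * mapSample.y + n.x * mapSample.z,
        t.y * mapSample.x + b.y * mapSample.y + n.y * mapSample.z,
        t.z * mapSample.x + b.z * mapSample.y + n.z * mapSample.z,
    };
    return normalize(out);  // this is what lands in the normal buffer
}
```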
Finally, how are they unpacked? I realize you can get the blue component using Pythagoras, but how are they returned to world space?
The idea behind 2-component versus 3-component normals is that a normal's magnitude carries no information (it's assumed to be a unit vector), and normals facing away from the camera generally aren't useful. With two components, you can model a hemisphere facing the camera. You unpack them by running the packing steps in reverse; see the sketch below.
Think of normals like positions. When you convert a position from world space (x, y, z) to screenspace (x, y, depth), the depth does not affect where on the screen the mesh appears. In the same way, convert the 3D normal to a vector in screenspace terms through view matrix multiplication.
Then you can discard the z-component (the one aligned with the camera) as long as your normals are unit vectors, because you can derive z from the other two components. To return to world space, rebuild z and apply the inverse of that rotation.
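Putting the pieces together, here is a hedged sketch of the full round trip: rotate the world-space normal into view space, store only x and y, then rebuild z with Pythagoras and rotate back. It assumes the view matrix contains no scaling (so its 3×3 rotation block is orthogonal and its inverse is just its transpose) and a convention where camera-facing normals have positive view-space z; `Mat3` and the function names are illustrative.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // row-major rotation block of the view matrix

Vec3 mul(const Mat3& r, Vec3 v) {
    return { r.m[0][0] * v.x + r.m[0][1] * v.y + r.m[0][2] * v.z,
             r.m[1][0] * v.x + r.m[1][1] * v.y + r.m[1][2] * v.z,
             r.m[2][0] * v.x + r.m[2][1] * v.y + r.m[2][2] * v.z };
}

Mat3 transpose(const Mat3& r) {
    Mat3 t;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t.m[i][j] = r.m[j][i];
    return t;
}

// Pack: rotate the unit world-space normal into view space, keep only x and y.
void packNormal(const Mat3& viewRot, Vec3 worldNormal, float outXY[2]) {
    Vec3 v = mul(viewRot, worldNormal);
    outXY[0] = v.x;
    outXY[1] = v.y;
}

// Unpack: rebuild z with Pythagoras (the stored normal is unit length and
// faces the camera, so the sign of z is known), then rotate back to world
// space with the transpose of the view rotation (its inverse, since it is
// a pure rotation).
Vec3 unpackNormal(const Mat3& viewRot, const float xy[2]) {
    float z2 = 1.0f - xy[0] * xy[0] - xy[1] * xy[1];
    Vec3 viewNormal = { xy[0], xy[1], std::sqrt(std::max(0.0f, z2)) };
    return mul(transpose(viewRot), viewNormal);
}
```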