object count -> rendering more than about 2k objects at once will already make your CPU the bottleneck, since every object typically costs its own draw call plus the driver overhead that comes with it (see the first sketch after this list)
vertex count -> more vertices means more data pushed to your graphics card, which can make you bandwidth limited
texture size -> basically the same as vertex count (more data to upload), plus more expensive sampling. On the other hand, mipmaps help a lot to keep the sampling cost down (see the second sketch after this list). Maybe this should rank before vertex count, I am not sure.
polygon count -> a bit more data has to be pushed to your graphics card, and the card has to draw more polygons, which can become a problem, especially in combination with complex vertex shaders.
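To illustrate the object count point, here is a minimal sketch of cutting per-object draw calls with instancing. This assumes OpenGL 3.1+, and the names (meshVao, indexCount, objects, setUniforms) are made up for the example; it is just one way to reduce the per-object CPU cost, not the only one.

    // Per-object draw calls keep the CPU busy issuing commands:
    //
    //   for (const Object& obj : objects) {        // ~2000 iterations
    //       setUniforms(obj);                      // hypothetical helper
    //       glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, 0);
    //   }
    //
    // Instancing issues one call for all copies; per-object data lives in a
    // buffer that the vertex shader reads via gl_InstanceID.
    glBindVertexArray(meshVao);
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0,
                            (GLsizei)objects.size());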
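And for the texture point, enabling mipmaps is cheap to set up (OpenGL 3.0+ shown here; diffuseTex is an assumed texture handle that has already been uploaded):

    // Generate the mipmap chain and sample with a mipmapping filter, so
    // distant pixels read from small, cache-friendly mip levels instead of
    // the full-size image.
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);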
In the end, it is hard to give them an order. If you try to render 10 million polygons (which most probably also means pushing a few million vertices to your graphics card), your rendering speed will most probably not be that great. But sampling an extremely high-resolution texture several times per pixel, or even several different ones, will have the same effect. The same goes for an extremely complex vertex shader (which could itself sample high-resolution textures) combined with a high vertex count.
It all also depends a lot on your hardware. Modern graphics cards basically don't care about the number of polygons to render (today, for example, I rendered a water surface of a bit over 1 million polygons at about 150 fps on my GTX 460); there the problem tends to be the amount of data, which is why people generate geometry on the fly on the graphics card (see the sketch right below). Older hardware, in contrast, is more easily limited by texture size and polygon count.
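By "creating geometry on the fly" I mean something along these lines (just a rough sketch, and only one option besides geometry shaders or tessellation): let the vertex shader derive a water grid from gl_VertexID, so no vertex data has to be uploaded at all. gridWidth, cellSize and time are made-up uniforms, the wave is a placeholder, and the projection is left out.

    // Vertex shader that builds grid positions on the GPU; the draw call
    // needs no vertex buffer, only an empty VAO bound.
    const char* waterVs = R"(
        #version 330 core
        uniform int   gridWidth;   // vertices per row
        uniform float cellSize;    // spacing in world units
        uniform float time;
        void main() {
            int x = gl_VertexID % gridWidth;
            int z = gl_VertexID / gridWidth;
            vec3 p = vec3(x * cellSize, 0.0, z * cellSize);
            p.y = sin(p.x * 0.5 + time) * 0.2;  // cheap placeholder wave
            gl_Position = vec4(p, 1.0);         // projection omitted
        }
    )";
    // Drawn as points here just to keep the index math out of the sketch;
    // a real water surface would emit triangles instead:
    //   glDrawArrays(GL_POINTS, 0, gridWidth * gridWidth);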
And again, shader complexity can cause everything to be slow as hell, too.
So in the end you should always check the number of objects being rendered first, and if your CPU is not the problem, go through all the other points. A rough way to measure which side is actually the limit is sketched below.
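One quick (and rough) way to check whether the CPU or the GPU is the limit is to time one frame on both sides, assuming OpenGL 3.3+ timer queries are available; renderScene is a placeholder for whatever issues your draw calls, and the GL headers are assumed to be included already.

    #include <chrono>
    #include <cstdio>

    GLuint gpuQuery;
    glGenQueries(1, &gpuQuery);

    auto cpuStart = std::chrono::high_resolution_clock::now();
    glBeginQuery(GL_TIME_ELAPSED, gpuQuery);

    renderScene();                              // assumed: issues all draw calls

    glEndQuery(GL_TIME_ELAPSED);
    auto cpuEnd = std::chrono::high_resolution_clock::now();

    GLuint64 gpuNs = 0;                         // this read waits for the GPU to finish
    glGetQueryObjectui64v(gpuQuery, GL_QUERY_RESULT, &gpuNs);

    double cpuMs = std::chrono::duration<double, std::milli>(cpuEnd - cpuStart).count();
    double gpuMs = gpuNs / 1.0e6;
    printf("CPU %.2f ms, GPU %.2f ms\n", cpuMs, gpuMs);

If the CPU time is much larger than the GPU time, you are most likely draw-call / object-count bound; if the GPU time dominates, look at vertex count, texture size and shader complexity instead.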