Okay, can somebody show me how the difference would actually translate into better or more accurate pictures?
Maybe I'm too slow, but how would a picture metered by a D70 differ from one done by a D3, for example?
A big advantage of an array with higher resolution is that it is easier to compare with the camera's scene database. The camera will say, for example, that because there's a bright horizontal stripe at the top, the scene is probably a landscape with a sky, the user probably wants to hold detail in that sky, and so it will weight that area more heavily when calculating exposure. Where I'd really like to see if it makes a difference is with a face against a bright background.
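To make that concrete, here's a toy sketch of the idea: detect a bright stripe at the top of the metering array and weight it more in the exposure average. The function name, the top-quarter split, and the 1.5x brightness threshold are all my own made-up illustration; Nikon's actual scene database and weighting are proprietary.

```python
import numpy as np

def weighted_exposure(meter, sky_weight=3.0):
    """Toy matrix-metering sketch: if the top rows of the metering
    array are much brighter than the rest, assume a sky and weight
    that region more so its highlights are held in the exposure.

    meter: 2D array of luminance values from the metering sensor.
    The split point and threshold are illustrative assumptions.
    """
    split = meter.shape[0] // 4                 # top quarter of the frame
    top, rest = meter[:split], meter[split:]
    weights = np.ones_like(meter, dtype=float)
    if top.mean() > 1.5 * rest.mean():          # "bright horizontal stripe"
        weights[:split] = sky_weight
    # The weighted average luminance is what drives the exposure choice.
    return (meter * weights).sum() / weights.sum()
```

With a bright top quarter, the weighted average comes out higher than a plain mean, so the camera exposes down and protects the sky; with a uniform scene it degenerates to an ordinary average.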
The coolest thing about the way Nikon does their metering, though, is how they tie it to the autofocus system. In 3D tracking AF, the camera looks at what colors/levels are under the selected AF point and then tracks the subject using the metering matrix and the AF array together. With the D90 it wasn't spectacular, because you had only 11 AF points to hand the subject off between, but with the 51 points of the high-end bodies it is amazing to watch the AF selection follow your subject across the frame. I sometimes used it to recompose portraits with the D300.
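The tracking idea can be sketched very roughly: store the colors under the starting AF point, then on each frame pick whichever AF point's surroundings best match that stored signature. This is just an illustrative mean-color match, not Nikon's actual algorithm, and all names here are my own.

```python
import numpy as np

def color_signature(frame, x, y, r=2):
    """Average color in a small patch around an AF point (a toy
    stand-in for the colors/levels sampled under the selected point)."""
    patch = frame[y - r : y + r + 1, x - r : x + r + 1]
    return patch.reshape(-1, frame.shape[2]).mean(axis=0)

def track(frame, signature, candidates):
    """Pick the candidate AF point whose surrounding colors best match
    the stored signature. `candidates` is a list of (x, y) AF points;
    with 51 of them instead of 11, the handoff between points gets
    much smoother, which is the point made above."""
    def dist(pt):
        return np.linalg.norm(color_signature(frame, *pt) - signature)
    return min(candidates, key=dist)
```

In use you'd grab the signature once when focus is acquired, then call `track` on each new frame to decide which AF point should take over.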
With the D3/D300, Nikon began doing face detection with the metering matrix when the camera was in auto AF selection, so that it would preferentially select faces when guessing which area the user would want to focus on. It was pretty smart with the D300, but I only used it a couple of times because I was a lot smarter—I always knew what I wanted to focus on. ;-)
We'll see if the higher-resolution metering array of the D7000 makes it even smarter at that. I probably still won't use it, but it is a selling point for people who never want to touch any controls besides the shutter and (occasionally) the mode dial.