I probably should have mentioned this in the original post, but I try to avoid the first approach you mentioned (saving the results of each command to a temporary variable, then using a foreach loop). While that code is more readable, the functionality is slightly different. When you're working with very large result sets, saving the entire set to a variable can consume an excessive amount of memory. Using the pipeline avoids this problem by streaming objects one at a time, so the whole list is never held in memory.
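To illustrate the difference, here is a minimal sketch of the two approaches (the `C:\Logs` path is just a placeholder for any command that produces a large result set):

```powershell
# Approach 1: store the entire result set in a variable, then loop.
# Every object is held in memory at once before the loop begins.
$files = Get-ChildItem -Path C:\Logs -Recurse
foreach ($file in $files) {
    $file.Length
}

# Approach 2: stream through the pipeline.
# Objects are emitted and processed one at a time; the full
# collection never exists in memory.
Get-ChildItem -Path C:\Logs -Recurse | ForEach-Object {
    $_.Length
}
```

Both snippets produce the same output, but the second keeps memory usage flat regardless of how many objects the command emits (and `ForEach-Object` works fine in PowerShell v2).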
I tend to write all of my code to be v2-compatible, out of habit. At least until my company gets around to upgrading the version of PowerShell on our 5,000 or so Windows Server 2008 servers.