No, the methods aren't the same. When I say that you project onto the positive L1 ball, what I really mean is that you find the point in the positive L1 ball at minimum Euclidean distance from your vector. A Moore-Penrose solution is effectively the minimum-Euclidean-distance projection onto an L2 ball. You can rescale it so that it falls inside an L1 ball of a given radius, but the result is not the optimal projection (measured via Euclidean distance) onto the L1 ball. This is actually a key point for promoting sparse solutions: the 'pointiness' of the L1 ball encourages solutions that fall along a small number of axes (a toy illustration of this: http://grapeot.me/image.axd?picture=2011/3/think-intuitive-sparsity.png).
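A minimal sketch of what I mean by the minimum-Euclidean-distance projection onto the positive L1 ball, using the standard sorting-based simplex projection (the function name and the radius parameter are mine, not from any particular library):

```python
import numpy as np

def project_positive_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {x : x >= 0, sum(x) <= radius}."""
    # Step 1: project onto the nonnegative orthant.
    w = np.maximum(v, 0.0)
    # If that already lies inside the L1 ball, we're done.
    if w.sum() <= radius:
        return w
    # Step 2: otherwise the projection lies on the simplex
    # {x >= 0, sum(x) = radius}; find the soft threshold theta by sorting.
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    # Largest k such that u[k-1] - (css[k-1] - radius) / k > 0.
    k = ks[u - (css - radius) / ks > 0][-1]
    theta = (css[k - 1] - radius) / k
    return np.maximum(w - theta, 0.0)

x = project_positive_l1_ball(np.array([1.0, 0.5, -0.2]), radius=1.0)
# Note that the smallest coordinates get thresholded exactly to zero,
# which is where the sparsity-promoting behavior comes from.
```

Rescaling a pseudoinverse solution by `radius / np.abs(f).sum()` would also land in the ball, but it shrinks every coordinate uniformly and never zeros any of them out.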
The convexity of the problem (and of the constraint set) means that there is a single global minimum of the squared error term, with no suboptimal local minima. It's possible that the optimal f is not unique (a trivial example is when C has two identical columns), but every optimal f produces the same squared error.
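A quick numerical illustration of the identical-columns case: the minimizer f is not unique, but the squared error at the optimum is. Here h is my stand-in name for the target vector, which the original discussion doesn't name:

```python
import numpy as np

# C with two identical columns: the least-squares problem
# min_f ||C f - h||^2 then has infinitely many minimizers.
C = np.array([[1.0, 1.0],
              [2.0, 2.0]])
h = np.array([1.0, 2.0])

# Two different weight vectors whose entries sum to 1;
# both reproduce h exactly because the columns are equal.
f1 = np.array([1.0, 0.0])
f2 = np.array([0.3, 0.7])

err1 = np.linalg.norm(C @ f1 - h) ** 2
err2 = np.linalg.norm(C @ f2 - h) ** 2
# f1 != f2, yet err1 == err2: distinct optima, identical squared error.
```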