Congestion control (CC) algorithms deal with scheduling packets going through a network, responding to varying network conditions (e.g. the number of TCP streams flowing simultaneously, or lost packets). This means that as you move away from an extremely simple network topology (e.g. a LAN with a single switch to which all the machines are connected), the choice of CC algorithm becomes increasingly important.
For a demonstration of the influence the choice of CC algorithm can have on a connection, see figure 4 here:
http://ee.lbl.gov/papers/sacks.pdf
FreeBSD's default "newreno" algorithm is reasonably good, which is why you are correct in saying that it shouldn't be changed by people who don't know what they are doing.
But ... for a demonstration of how changing from newreno to a slightly newer algorithm (vegas) can influence throughput, see here:
http://www2.ensc.sfu.ca/~ljilja/ENS.../bian_zhang.hilary/Hilary_and_Bian_Report.pdf
(you can just skip to the conclusion).
For more bleeding-edge algorithms, see CHD and CDG (pages 8 and 10) here:
http://www.ietf.org/proceedings/84/slides/slides-84-iccrg-2.pdf
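On a stock FreeBSD system, switching CC algorithms is just a module load plus a sysctl; a sketch of what I'd like to be able to do on FreeNAS (assuming the cc_vegas/cc_chd modules are present for the running kernel, which is exactly what's broken here):

```shell
# List the CC algorithms the kernel currently knows about
sysctl net.inet.tcp.cc.available

# Load an alternative algorithm's module (requires a matching kernel)
kldload cc_vegas    # or cc_chd, cc_htcp, ...

# Switch new TCP connections over to it
sysctl net.inet.tcp.cc.algorithm=vegas

# Check which algorithm is now in effect
sysctl net.inet.tcp.cc.algorithm
```

To make the change persistent one would normally add the module to /boot/loader.conf and the sysctl to /etc/sysctl.conf; this is a config fragment for a FreeBSD box, not something runnable elsewhere.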
Packet losses DO happen in non-trivial networks. What prompted me to write this feature request was that I actually have a network here which is pretty lossy due to its legacy origins and too complex topology, and have observed NFS stalls which result from such losses. I was hoping to test if changing CC algorithms would help me.
For more "typical" uses, modern CC algorithms will help if e.g. the servers are accessed over a wireless network, or networks which are rate-limited by a somewhat crude algorithm.
(And I can't just copy .ko's from other FreeBSD machines, because the FreeNAS kernel is a random point-in-time snapshot of 9-STABLE. I've tried modules from both 9.1-RELEASE and a recent 9.1-STABLE, and there's a symbol mismatch in both cases; this is one reason why basing production software off a non-RELEASE branch is a bad idea. Maybe there will soon be a FreeNAS 9.2 based on 9.2-RELEASE?)