Short answer: All modern, well-designed file systems resist fragmentation and do not need to be defragged unless the disk is used under extreme conditions (over 90% full for a long period of time).
Long answer: The fragmentation ratio is (# of fragmented files / # of files) * 100. What's a fragmented file? It's the opposite of a contiguous file, one stored in a single run of contiguous blocks. A non-fragmented disk has all of its files laid out next to each other; a highly fragmented disk has files scattered around in bits and pieces. Why is fragmentation bad? Because of the way a disk operates. The slowest operation you can do on a disk is a seek, which means moving the disk heads around. Each extra fragment costs one more seek to locate the next piece on the disk.
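Just to make the ratio concrete, here's a tiny sketch (the file names and fragment counts are made-up sample data, not from any real disk):

```python
# Fragmentation ratio: the share of files stored in more than one piece.
# Hypothetical sample data: file name -> number of fragments on disk.
fragments_per_file = {"a.txt": 1, "b.log": 3, "c.bin": 1, "d.iso": 5}

fragmented = sum(1 for n in fragments_per_file.values() if n > 1)
ratio = fragmented / len(fragments_per_file) * 100
print(f"fragmentation ratio: {ratio:.0f}%")  # 2 of 4 files -> 50%
```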
For example, consider a disk drive with 10Mbps sustained read throughput and a 10ms seek time. Reading a contiguous 1KB block takes slightly less than 11ms: one 10ms seek plus about 0.8ms to transfer 8192 bits at 10Mbps. Reading a 1KB file split into two fragments takes about 21ms on average, so the effective throughput has roughly halved. A file fragmented into three pieces takes about 31ms, and so forth.
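Here's that arithmetic as a quick sketch. It uses a deliberately simple model (one full seek per fragment plus transfer time, ignoring rotational latency and caching):

```python
SEEK_MS = 10.0
THROUGHPUT_BPS = 10_000_000  # 10 Mbps sustained read

def read_time_ms(size_bytes: int, fragments: int) -> float:
    """One seek per fragment, plus the time to transfer the bytes."""
    transfer_ms = size_bytes * 8 / THROUGHPUT_BPS * 1000
    return fragments * SEEK_MS + transfer_ms

for n in (1, 2, 3):
    print(f"{n} fragment(s): {read_time_ms(1024, n):.1f} ms")
# 1 fragment(s): 10.8 ms
# 2 fragment(s): 20.8 ms
# 3 fragment(s): 30.8 ms
```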
There are a million ways to design a file system. It turns out that the file systems that best resist fragmentation are the ones that intentionally fragment files when writing to the disk. It sounds like an oxymoron. At first, the best approach seems to be storing each file in one big contiguous chunk. But think about what happens when you delete a file: you're left with a hole the size of the deleted file. After a few deletions you end up with holes of various sizes scattered across the disk. Then, when the time comes to save another large file, you can't find a single hole that's the exact size of the new file, so you put one piece here and another piece there, causing fragmentation.
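A minimal sketch of the problem, using a toy disk where one character is one unit of space (this is an illustration of the idea, not any real file system's allocator):

```python
import re

# Toy disk: files A-D each stored as a single contiguous run; '.' = free.
disk = list("AAAABBBCCCCCDD")

# Delete files B and D: two holes of different sizes open up.
disk = ['.' if c in "BD" else c for c in disk]
print("".join(disk))  # AAAA...CCCCC..

# 5 units are free in total, but the largest single hole is only 3,
# so a new 5-unit file has to be split across both holes: fragmentation.
free_runs = [len(m) for m in re.findall(r"\.+", "".join(disk))]
print(free_runs)      # [3, 2]
```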
The idea is to intentionally fragment files into FIXED-size chunks, regardless of how large or small a file is. Then you have a lot of flexibility in laying out the chunks on the disk, and you won't have trouble finding right-sized holes to accommodate a new file.
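Here's the fix sketched the same way. The chunk size is one unit here for readability; real file systems use blocks of a few KB, and the block list lives in the file's metadata (this is a toy illustration, not real allocator code):

```python
# Same toy disk state as after the deletions above, but now files are
# stored as fixed-size blocks that can live anywhere on the disk.
disk = list("AAAA...CCCCC..")

def allocate(disk, name, n_blocks):
    """Grab any n free blocks -- no contiguous hole required."""
    blocks = [i for i, c in enumerate(disk) if c == '.'][:n_blocks]
    if len(blocks) < n_blocks:
        raise OSError("disk full")
    for i in blocks:
        disk[i] = name
    return blocks  # the block list goes into the file's metadata

print(allocate(disk, 'E', 5))  # [4, 5, 6, 12, 13]
print("".join(disk))           # AAAAEEECCCCCEE -- every free unit usable
```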
Anyway, all modern Unix file systems resist fragmentation this way, and no defragmentation tool is provided. I'm sure journaling file systems won't need defrag either.