Although I have studied some information on half-duplex and full-duplex operation, I am not entirely sure how the following figures work out; any help would be appreciated.
For this scenario, let's assume UTP cable is being used. So far in my studies, the information provided to me states that the following cable configurations/statistics are correct:
10 Mbps half-duplex operation = 2 of 8 wires used.
100 Mbps half-duplex operation = 4 of 8 wires used.
1 Gbps full-duplex operation = 8 of 8 wires used.
Now I know that going from half- to full-duplex allows a theoretical 2x line-bandwidth increase, but I do not understand how going from 2 wires in Ethernet to 4 wires in Fast Ethernet increases the speed ten-fold while maintaining half-duplex operation.
Not only that, but the increase from Fast Ethernet to Gigabit Ethernet is also ten-fold, yet it has been upgraded to full-duplex. Does this mean all 8 wires used at half-duplex would give 512 Mbps? If yes, then why is the increase not as great as the previous one, and if not, please explain.
I am sure I am probably missing something small yet fundamental about how these speeds are calculated, but I just don't see how you get a ten-fold increase by doubling the wires in use in one case and then a different increase in the next.
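To make my confusion concrete, here is the naive per-wire arithmetic I have been doing, written out as a short script. This is my assumed model (link speed = wires in use x per-wire rate, using the figures from my study notes above), not a claim about how Ethernet actually works:

```python
# Naive model I am assuming: link_speed = wires_in_use * per_wire_rate,
# so per_wire_rate = link_speed / wires_in_use.
# Figures taken from my study notes above.

links = {
    "10 Mbps Ethernet (half-duplex)":        {"speed_mbps": 10,   "wires": 2},
    "100 Mbps Fast Ethernet (half-duplex)":  {"speed_mbps": 100,  "wires": 4},
    "1 Gbps Gigabit Ethernet (full-duplex)": {"speed_mbps": 1000, "wires": 8},
}

for name, info in links.items():
    per_wire = info["speed_mbps"] / info["wires"]
    print(f"{name}: {per_wire} Mbps per wire")

# 10 / 2   -> 5 Mbps per wire
# 100 / 4  -> 25 Mbps per wire
# 1000 / 8 -> 125 Mbps per wire
```

Under this model the per-wire rate is not constant (5, then 25, then 125 Mbps per wire), which is exactly the inconsistency I don't understand.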
Thanks in advance, Sean.
May 7, 2008 9:08 PM