Information required on 10 Gigabit Ethernet cards

Tags:
Ethernet
Ethernet card
gigabit
Hello all, I need some information and data points on the following topics:
- What is a 10GbE card used for today?
- What do you see as the future for 10GbE?
- What do you think is an acceptable price point for this card (just a guesstimate)? Today it is very expensive compared to regular 1GbE cards - almost 10 times the price.
- Which applications, and where do you see this card being used?
- Which target markets (vertical as well as horizontal industries) - telco, server consolidation, partitioning, etc.?
- Do you think a 10GbE card can be used as a core I/O card on a server - perhaps on high-end servers (UNIX, Intel ...) - or do you think customers would continue to use a 1GbE card as core I/O?
- What is a core I/O card, and what do customers use it for in their datacenters?

If you can respond to any of these questions, if not all, it would be really appreciated. Any additional information or pointers on 10GbE would also be very helpful. Thanks a lot for spending some time on this and helping me get my answers.

Answer Wiki


Until you have a computer with no moving parts - no hard drives or tapes - you cannot push enough data through a 1 Gb card to use its full capabilities. The best servers can burst up to about 600 Mbit/s, and only a backup could sustain a transfer rate close to that. At this time, 10 Gb is for switch-to-switch links.
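The claim above can be sanity-checked with a quick back-of-the-envelope calculation. The figures below are illustrative assumptions (a 100 GB backup, the ~600 Mbit/s sustained rate quoted above), not measurements:

```python
# Back-of-the-envelope check: can a server's storage keep a 1 GbE link busy?

def seconds_to_transfer(size_gb, rate_mbit_per_s):
    """Time to move size_gb gigabytes (decimal GB) at a sustained rate in Mbit/s."""
    bits = size_gb * 8 * 1000**3              # GB -> bits
    return bits / (rate_mbit_per_s * 1e6)     # Mbit/s -> bit/s

link_1gbe = 1000     # 1 GbE line rate, Mbit/s
server_peak = 600    # sustained server output quoted above, Mbit/s

# A hypothetical 100 GB backup at each rate:
print(seconds_to_transfer(100, link_1gbe))   # 800.0 s at full line rate
print(seconds_to_transfer(100, server_peak)) # ~1333 s at 600 Mbit/s
```

At 600 Mbit/s the server fills only about 60% of a 1 GbE link, which is why the answer sees little point in a 10 GbE NIC for a single server of that era.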

Discuss This Question: 7  Replies

 
  • Larrythethird
    Until you have a computer with no moving parts - no hard drives or tapes - you cannot push enough data through a 1 Gb card to use its full capabilities. The best servers can burst up to about 600 Mbit/s, and only a backup could sustain a transfer rate close to that. At this time, 10 Gb is for switch-to-switch links.
    0 pointsBadges:
    report
  • PeterMac
    Can think of a few applications where it would be useful. Most Linux/Unix systems make far better use of memory than Windows, and repetitive memory/comms applications can easily outpace even a 10 Gb card. Remember Ethernet speed is bits per second; the motherboard bus (on newer processors) is 64 bits wide, so a 400 MHz FSB system moves roughly 25 Gbit/s - more than two of these cards. Web crawlers and spammers will love this card. Any congested network will also benefit from it: there will be lots of small messages moving around, and even if it makes little difference on an individual basis, the reduction in message times will free up a considerable amount of network time for other use. As to price, like any other commodity, whatever the market will bear. For some people even 10 times the price of a 1 Gb card will seem cheap; I doubt many would be prepared to pay even double at the moment, though.
    15 pointsBadges:
    report
  • Astronomer
    I see 10G as the next logical step in the network backbone, which will eventually filter down to servers and workstations. I don't see doing it server-to-server at this time unless you have unique needs. We currently have our main Windows servers connected to a 1G switch. From the backup statistics it seems clear our Silicon Mechanics "backup server" can handle anything our other servers throw at it. We normally sustain a little over 600 Mbit/s from our fastest servers. In a few years we should see servers pushing past 1G, but until then I don't see the point in investing in 10G to the server. On the other hand, your backbone may well need more than 1G already. Check your utilization and verify the need before spending the money. rt
    15 pointsBadges:
    report
  • HannahDrake
    Hi, Great questions. I sent them in to SearchDataCenter.com's networking expert Carrie Higbie, Global Network Applications Market Manager for The Siemon Company. Although she normally only answers questions that are sent in through our Ask The Experts site, she thought they were great questions, and answered them all! You'll find her answers here: http://searchdatacenter.techtarget.com/ateQuestionNResponse/0,289625,sid80_cid953424_tax301483,00.html Hope that helps. Hannah Drake Assistant Editor SearchDataCenter.com
    190 pointsBadges:
    report
  • Martin78
    All performance will increase until the micron-sized channel becomes the limiting factor. 10G is excellent for virtualized server clusters, where virtual servers on separate boxes exchange inputs/outputs as part of a larger system (applications and their respective database/indexing operations). Using 10G is more of a local, data-center-centric event, and unless you can afford an OC-192-speed WAN, it is impractical outside the data center "AT THIS TIME." (I remember when T1 circuits were the absolute in speed.) CPU and bus speeds will increase, and this will push the need for the upcoming 40G/100G switching on the horizon. As far as moving parts go, solid-state drives have none; the current limiting factor is becoming the conversion from optical to electrical and back.
    10 pointsBadges:
    report
  • petkoa
    Commodity HPC clusters would make use of 10Gb Ethernet if the 10Gb hardware turns out to be much cheaper than InfiniBand (and all chances are it will be). Petko
    3,120 pointsBadges:
    report
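PeterMac's bus-versus-NIC comparison above can be worked out explicitly. A minimal sketch, assuming one transfer per clock cycle on a 64-bit-wide, 400 MHz front-side bus (a simplification that ignores protocol and memory overhead):

```python
# Rough peak-bandwidth comparison: front-side bus vs. Ethernet NICs.

def bus_gbit_per_s(width_bits, clock_mhz):
    """Peak bus bandwidth in Gbit/s, assuming one transfer per cycle."""
    return width_bits * clock_mhz * 1e6 / 1e9

fsb = bus_gbit_per_s(64, 400)
print(fsb)        # 25.6 Gbit/s - over two 10 GbE cards' worth
print(fsb / 10)   # ratio to a single 10 GbE link
```

The point stands either way: even an early-2000s front-side bus could, in principle, source data faster than a 10 GbE link, so the bus is not the bottleneck - the storage subsystem is.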
