A pair of stacked 10Gb switches has been deployed as the campus network core. Aggregate links between building switches and the core have been established where sufficient campus fiber was available.
A couple of labs at both DPB and DGE have expressed interest in upgrading the network backbone to 10Gb. Greg Asner has indicated that network bandwidth to the computational clusters and storage infrastructure at Forsythe is one of his primary concerns; upgrading the network core to 10Gb should help remove it as a potential bottleneck for accessing data in Forsythe. David Ehrhardt has also expressed interest in upgrading the network core to 10Gb. Rapid access to data over the network is a key requirement for a well-functioning imaging lab, and he has expressed concern that network throughput could be limiting both the ability of instruments to push data to storage and the ability of image processing and analysis machines to retrieve acquired data.
- 10+ Gbps non-blocking throughput per port, 40+ Gbps intra-stack
- Active/active redundancy for high availability
- Sufficient connectivity for future expansion (16 free ports minimum)
The core should be upgraded to a stacked pair of 10Gb switches, and access switches within buildings should be upgraded with 10Gb uplinks. Inter-switch connections between the core and access switches should be configured as aggregate links spread across both switches in the stack. This provides 10Gb+ of bandwidth between the core and the other switches across campus, and also provides high availability: if one of the core switches fails, network service is maintained.
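As a sketch, one such aggregate link between the core stack and a building access switch might look like the following. The source does not name a switch vendor, so the syntax, interface names, and port-channel number are hypothetical Cisco IOS-style examples; the essential point is that the two member ports sit on different physical switches in the stack (stack members 1 and 2), so the bundle survives the loss of either core switch.

```
! Hypothetical Cisco IOS-style configuration on the core stack.
! One 10Gb member port on each stack member, bundled with LACP
! (channel-group ... mode active), so the aggregate link keeps
! running if either core switch fails.
interface TenGigabitEthernet1/0/1
 description Uplink to DPB access switch (stack member 1)
 channel-group 10 mode active
!
interface TenGigabitEthernet2/0/1
 description Uplink to DPB access switch (stack member 2)
 channel-group 10 mode active
!
interface Port-channel10
 description Aggregate link to DPB access switch
 switchport mode trunk
```

The access switch side would carry a matching two-port LACP bundle, giving 20Gb of aggregate capacity per uplink under normal operation and 10Gb during a core-switch failure.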