NetApp Snapshot copies are taken at the FlexVol level, so they cannot be directly leveraged within an OpenStack user context; when a Cinder user requests a snapshot, it is of a particular Cinder volume, not of the containing FlexVol volume. NetApp Snapshots remain available to OpenStack administrators for administrative backups, creating or modifying data protection policies, and so on. Because a Cinder volume is represented as either a file on an NFS export or as a LUN (in the case of iSCSI or Fibre Channel), Cinder snapshots can be created using FlexClone technology, which allows many thousands of Cinder snapshots to be taken of a single Cinder volume.
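To make the distinction concrete, here is a toy Python model (purely illustrative, not NetApp code): a FlexVol holds many Cinder volumes as files, a NetApp Snapshot captures the entire FlexVol, while a Cinder snapshot addresses just one volume, as a per-file FlexClone does.

```python
# Toy model (illustration only, not NetApp code): a FlexVol represented as a
# dict of Cinder-volume files, contrasting FlexVol-level vs. per-volume snapshots.
flexvol = {"vol-a": "data-a", "vol-b": "data-b"}

# A NetApp Snapshot copy is taken at the FlexVol level: it captures every
# Cinder volume in the FlexVol at once (an administrator-scoped operation).
netapp_snapshot = dict(flexvol)

# A Cinder snapshot targets a single Cinder volume; with FlexClone this is a
# space-efficient per-file clone rather than a FlexVol-wide copy.
def cinder_snapshot(vol, volume_name):
    return {volume_name: vol[volume_name]}

snap = cinder_snapshot(flexvol, "vol-a")
print(sorted(netapp_snapshot))  # whole FlexVol: ['vol-a', 'vol-b']
print(sorted(snap))             # single Cinder volume: ['vol-a']
```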
Each Cisco UCS chassis is equipped with a pair of Cisco UCS Fabric Extenders. The Cisco UCS Fabric Extender, also known as the IO Module (IOM), operates as a remote line card to the Fabric Interconnect. Two IOM models are available, each providing a different number of interfaces: the 2208XP and the 2204XP. The Cisco UCS 2208XP has eight 10 Gigabit Ethernet, FCoE-capable ports that connect the blade chassis to the fabric interconnect; the Cisco UCS 2204XP has four external ports with identical characteristics. All interfaces are 10 Gb Data Center Ethernet (DCE). Each Cisco UCS 2208XP also has thirty-two 10 Gigabit Ethernet ports connected through the midplane to the eight half-width slots in the chassis (4 per slot), while the 2204XP has 16 such ports (2 per slot). The IOM has network-facing interfaces (NIFs), which connect the IOM to the Fabric Interconnect, and host-facing interfaces (HIFs), which connect the IOM to the adapters on the blades.
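The port counts above imply a fixed host-to-fabric ratio per IOM. A small calculation (ours, not from the source) makes this explicit: with all uplinks cabled, both IOM models present a 4:1 oversubscription of host-facing to network-facing 10 Gb ports.

```python
# Per-IOM port counts taken from the text:
# model -> (network-facing ports to the FI, host-facing ports to blade slots)
IOM_MODELS = {
    "2208XP": (8, 32),  # 4 HIFs per half-width slot x 8 slots
    "2204XP": (4, 16),  # 2 HIFs per half-width slot x 8 slots
}

def oversubscription(model):
    """Ratio of host-facing to network-facing bandwidth (all ports 10 Gb)."""
    nif, hif = IOM_MODELS[model]
    return hif / nif

for model in IOM_MODELS:
    print(f"{model}: {oversubscription(model):.0f}:1")  # 4:1 for both models
```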
· Disk space for Swift object data, and optionally for the operating system itself, is supplied by the E-Series storage array.
· Reduced Swift node hardware requirements: internal drive requirements for storage nodes are reduced, and only operating system storage is required.
Finally, Red Hat has upgraded its popular Distributed Continuous Integration (DCI) program to automatically deliver actionable logs to its quality engineering teams, reducing the time it takes to identify and patch issues and to contribute fixes back to the upstream community.
In the SANtricity OS architecture, all data is accessed through individual LUNs, backed by DDPs, that are exposed to hosts over one of the following protocols: Fibre Channel Protocol (FCP), iSCSI, SAS, or InfiniBand. It is a NetApp best practice to have one DDP per host that communicates with the storage system. In this design, the OpenStack Object Storage service (Swift) is the exclusive consumer of the storage provided by the NetApp E5560.
In addition, the Cisco Nexus 9000 Series features virtual PortChannel (vPC) capabilities. vPC addresses aggregate bandwidth, link, and device resiliency. Port channeling is a link aggregation technique offering link fault tolerance and traffic distribution (load balancing) for improved aggregate bandwidth across member ports. vPC allows links that are physically connected to two different Cisco Nexus 9000 Series devices to appear as a single "logical" port channel to a third device, essentially offering device fault tolerance. As illustrated in Figure 23, link aggregation technologies play an important role, providing improved aggregate bandwidth and link resiliency across the solution stack. The NetApp storage controllers, Cisco Unified Computing System, and Cisco Nexus 9000 platforms support active port channeling using the 802.3ad standard Link Aggregation Control Protocol (LACP). The Cisco UCS Fabric Interconnects and NetApp FAS controllers benefit from the Cisco Nexus vPC abstraction, gaining link and device resiliency as well as full utilization of a non-blocking Ethernet fabric.
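As an illustrative sketch (not taken from the validated configuration; the domain ID, keepalive addresses, and port numbers are placeholders), a minimal NX-OS vPC setup on one of a pair of Nexus 9000 switches resembles the following:

```
feature lacp
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
  peer-gateway

! vPC peer link between the two Nexus switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! LACP port channel toward a UCS Fabric Interconnect, presented as one
! logical link even though its members terminate on both Nexus peers
interface port-channel 11
  switchport mode trunk
  vpc 11

interface Ethernet1/1
  channel-group 11 mode active
```

The same vPC number is configured on both Nexus peers so the downstream device sees a single 802.3ad port channel.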
Enabling jumbo frames allows the FlexPod environment to optimize throughput between devices while simultaneously reducing the consumption of CPU resources. In this validation effort, the FlexPod was configured to support jumbo frames with an MTU size of 9000. Cisco Unified Computing System and Nexus QoS system classes and policies deliver this functionality. FlexPod accommodates a myriad of traffic types (iSCSI, NFS, VM traffic, etc.) and is capable of absorbing traffic spikes and protecting against traffic loss.
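For illustration (a sketch, not the validated configuration; the policy-map name is a placeholder), the Nexus 9000 side of jumbo-frame enablement is a network-qos policy applied system-wide. A fabric MTU of 9216 leaves headroom above the host-facing MTU of 9000:

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216

system qos
  service-policy type network-qos jumbo
```

Hosts and UCS vNICs are then set to MTU 9000 end to end; a mismatch anywhere along the path causes fragmentation or silent drops of large frames.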
· Four HIC ports are used for Ethernet data across both controllers: Controller A HIC 1 Port 1, Controller A HIC 1 Port 3, Controller B HIC 1 Port 1, and Controller B HIC 1 Port 3. This design leverages ports that not only span both controllers but also different ASICs on the controllers themselves.
All of the resulting UCS blades provisioned with Red Hat Enterprise Linux 7.1 mount the appropriately designated NetApp FlexVol volumes on the FAS8040 at the highest protocol level possible for the instance volumes (NFS version 4.1, or Parallelized NFS [pNFS]). The NetApp Unified Driver for clustered Data ONTAP with NFS is a driver interface from OpenStack Block Storage (Cinder) to a Data ONTAP cluster system. This software provisions and manages OpenStack volumes on NFS exports provided by the Data ONTAP cluster. The driver does not require any additional management software to achieve the desired functionality, because it uses NetApp APIs to interact with the Data ONTAP cluster. In this Cisco Validated Design, we take advantage of the NetApp Unified Driver by selecting the NetApp driver backend through the Red Hat Enterprise Linux OpenStack Platform Installer; no configuration beyond selecting NetApp during the OpenStack deployment is required.
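For reference, a cinder.conf backend stanza for the NetApp Unified Driver with clustered Data ONTAP over NFS typically resembles the following sketch. The hostname, credentials, and SVM name are placeholders; in this design the Red Hat Enterprise Linux OpenStack Platform Installer generates the equivalent settings:

```
[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = admin
netapp_password = secret
netapp_vserver = openstack-svm
nfs_shares_config = /etc/cinder/nfs_shares
```

The file referenced by `nfs_shares_config` lists the NFS exports (one `host:/export` path per line) that back the Cinder volumes.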
· Ethernet ports e0e and e0g on each node are members of a multimode LACP interface group (a0a) for Ethernet data. This design leverages an interface group that has LIFs associated with it on 802.1Q VLAN-tagged interfaces to support NFS and iSCSI traffic.
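As an illustrative sketch (node names and VLAN IDs are placeholders, not taken from the validated configuration), the clustered Data ONTAP commands to build such an interface group and its tagged VLAN ports look like this:

```
network port ifgrp create -node node01 -ifgrp a0a -distr-func port -mode multimode_lacp
network port ifgrp add-port -node node01 -ifgrp a0a -port e0e
network port ifgrp add-port -node node01 -ifgrp a0a -port e0g
network port vlan create -node node01 -vlan-name a0a-3170
network port vlan create -node node01 -vlan-name a0a-3270
```

NFS and iSCSI LIFs are then created on the VLAN ports (for example, `a0a-3170`), so each traffic type rides its own 802.1Q tag over the same LACP bundle.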
Cisco UCS fabric failover is an important feature because it reduces the complexity of defining NIC teaming software for failover on the host. It does this transparently in the fabric based on the network property that is defined in the service profile.