The wait is finally over: after shipping the Release Candidate, VMware has made the final build of VMware vSphere 7 available as of April 2020.
Keep in mind that this is the final (GA) release and can be used in real production scenarios. VMware had published a preview build of this version this past fall, which is available here.
A large share of the changes in this release revolve around security, broader hardware compatibility, and faster hypervisor operation. vCenter has also changed; most notably, vCenter can now be upgraded directly from versions 6.5 and 6.7 to version 7.
Direct download links for you, dear readers, can be found at the end of this post.
The changes in the new release are as follows:
vCenter Server Upgrades
- vCenter Server supports upgrades from vCenter Server 6.5 and 6.7.
- Deployment of the vCenter Server appliance is supported on an ESXi host version 6.5 or later, or in the inventory of a vCenter Server instance version 6.5 or later.
- vCenter Server for Windows is no longer supported.
- vCenter Server for Windows 6.5 or 6.7 instances will be migrated to the vCenter Server appliance as part of the upgrade workflow.
- vCenter Server deployments with an external Platform Services Controller (PSC) are no longer supported and will automatically be converged to an embedded PSC deployment as part of the vCenter Server appliance upgrade and migration workflows.
vCenter Server Update Planner
- vCenter Server Update Planner provides native tooling to help with discovering, planning, and upgrading successfully.
- Receive notifications when an upgrade or update is available in the vSphere Client.
- Monitor VMware product interoperability against your current vCenter Server version. Run What-If workflows on how interoperability can change before an upgrade.
- Perform pre-update checks against the selected target vCenter Server version before you upgrade.
vCenter Server Configuration
- vCenter Server Profiles provide the capability to export the vCenter Server configuration from one vCenter Server and apply it to multiple vCenter Server instances consistently.
- vCenter Server now supports Dynamic DNS (DDNS) configuration. With DDNS enabled, vCenter Server will register itself with the DNS server and dynamically update the DNS records in case of IP address changes.
- You are now able to change the vCenter IP address / FQDN.
vCenter Server Certificate Management
- New UI and APIs have been introduced to manage vCenter Server certificates for both custom and VMCA-issued certificates. This functionality includes generating Certificate Signing Requests (CSRs), and replacing and renewing existing certificates.
vCenter Server Performance and Scale
- We continue to make scale and performance improvements for individual vCenter Server instances and across vCenter Server instances in [Enhanced] Linked Mode. Improvements cover various vSphere operations and reduced memory utilization.
- For Remote and Branch Office (ROBO) use cases, vCenter Server instances are supported in [Enhanced] Linked Mode.
- Up to 150ms of latency is supported between linked vCenter Server instances, plus an additional 100ms from browser to vCenter Server.
- 2500 hosts and 35k registered VMs (30k powered-on VMs) are supported per vCenter Server; with linked vCenter Servers, 15k hosts and 150k VMs are supported.
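To make the maximums above concrete, here is a small sketch that checks a planned inventory against them. The limits are the ones quoted in this post; always confirm against VMware's official configuration maximums before sizing a real deployment.

```python
# Configuration maximums as quoted above (vSphere 7, per this post).
VC_MAX_HOSTS = 2500        # hosts per vCenter Server
VC_MAX_REG_VMS = 35_000    # registered VMs per vCenter Server
VC_MAX_ON_VMS = 30_000     # powered-on VMs per vCenter Server
LINKED_MAX_HOSTS = 15_000  # hosts across linked vCenter Servers
LINKED_MAX_VMS = 150_000   # VMs across linked vCenter Servers

def fits_single_vc(hosts: int, registered_vms: int, powered_on_vms: int) -> bool:
    """True if a single vCenter Server can hold this inventory."""
    return (hosts <= VC_MAX_HOSTS
            and registered_vms <= VC_MAX_REG_VMS
            and powered_on_vms <= VC_MAX_ON_VMS)

def fits_linked_mode(total_hosts: int, total_vms: int) -> bool:
    """True if a linked-mode deployment can hold this inventory."""
    return total_hosts <= LINKED_MAX_HOSTS and total_vms <= LINKED_MAX_VMS

print(fits_single_vc(2000, 28_000, 25_000))  # True: fits in one vCenter Server
print(fits_single_vc(3000, 28_000, 25_000))  # False: too many hosts for one VC
print(fits_linked_mode(12_000, 120_000))     # True: within linked-mode limits
```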
VM Templates in Content Library
- Deployment of VM Templates directly from Content Library is now supported. This allows a vSphere Administrator to deploy a VM from a template in Content Library, make changes to the VM (e.g. apply a software patch), and then convert it back to a template, which creates a new version of that template in the library. In previous releases, it was required to first copy the VM Template out of Content Library, convert it to a VM, apply the changes, re-convert it back to a VM Template, and finally upload the new version back into Content Library.
- vMotion of encrypted VMs is now supported across vCenter Server instances (provided both vCenter Server instances are connected to the same KMS) as well as encrypting VMs during Cloning.
- Encrypted VM console screenshots will no longer be displayed in the VM summary page. This prevents potentially sensitive information from being exposed to unauthorized users.
vSphere Trust Authority
- Introducing a foundational technology aimed at securing the SDDC against malicious attacks by extending the trustworthiness of a Trusted Computing Base to the entire organization’s compute infrastructure using remote attestation technologies and controlled conditional access to advanced cryptographic capabilities.
- Provides an automated method of setting up and maintaining a secure infrastructure with integrity protection, and ensures that sensitive workloads only run on a hardware and software stack proven to be in verified good condition.
vSphere Client Search Improvements
- Additional filter capabilities have been introduced for the search results (e.g. filter results based on Alert Status). Object lists from search results (e.g. VM lists) can be filtered based on Tags or Custom Attributes associated with those objects.
Dynamic DirectPath I/O
- Introducing a new mechanism to consume PCI passthrough devices in a VM. vSphere will now automatically group devices by PCI vendor name and device name in a cluster and make them available to a VM. This allows DRS to select a host to place the VM on at power-on time.
- Dynamic DirectPath I/O also provides the ability to specify multiple devices in the selection. In such a case, DRS will select a host that satisfies one of the devices specified in the list.
- Also, introducing a new way to define PCIe passthrough devices for a VM. Instead of specific device selection, vSphere will auto-group devices by Vendor and Device Name, which allows DRS dynamic selection of available devices during VM power-on.
- This can be accessed through the standard “Add New PCIe device → Dynamic DirectPath IO” menu, right next to the legacy DirectPath IO model.
- A software-based watchdog timer is provided in this version to monitor VM responsiveness. This tool is often used by clustering applications to implement heartbeats and detect node failure scenarios.
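The watchdog pattern mentioned above is simple at heart: if heartbeats stop arriving within a deadline, the node is considered failed. The following is a generic Python sketch of that idea (illustrative only, not a VMware API):

```python
import time

class Watchdog:
    """Minimal heartbeat watchdog: if kick() is not called within
    `timeout` seconds, expired() reports a failure. Clustering stacks
    use the same idea (backed by a watchdog timer device) to detect
    hung nodes and trigger failover."""

    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_kick = time.monotonic()

    def kick(self) -> None:
        # Each heartbeat resets the deadline.
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_kick > self.timeout

wd = Watchdog(timeout=0.2)
wd.kick()
print(wd.expired())   # False: a heartbeat just arrived
time.sleep(0.3)
print(wd.expired())   # True: deadline missed, node considered failed
```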
Guest Customization Improvements
- New REST APIs for guest customization are now available:
  - Create, Update, Set, Delete spec
  - List, Get, Apply spec
  - Import/Export spec to/from JSON
- User-defined scripts in guest customization are now supported. vSphere Administrators can write their own scripts in their preferred programming language (e.g. sh, dash, bash, python) and specify the script to be run post-customization or pre-customization.
- Instant Clone now supports guest customization.
- New REST APIs for VMware Tools install, upgrade, and guest networking related operations are enabled.
- Support for mapping VM OS volume to corresponding VMDK files
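As a sketch of the user-defined customization scripts mentioned above, here is a minimal Python script. It assumes (as with the classic Linux customization scripts) that the phase is passed as the first argument, `precustomization` or `postcustomization`; the actions inside each branch are hypothetical placeholders.

```python
#!/usr/bin/env python3
# Sketch of a user-defined guest customization script. Assumption:
# vSphere invokes the script with the phase name as argv[1]
# ("precustomization" before, "postcustomization" after customization).
import sys

def run(phase: str) -> str:
    if phase == "precustomization":
        # e.g. stop services that would fight over network configuration
        return "stopped conflicting services"
    elif phase == "postcustomization":
        # e.g. register the freshly customized VM with monitoring
        return "registered VM with monitoring"
    return "unknown phase"

if __name__ == "__main__":
    phase = sys.argv[1] if len(sys.argv) > 1 else ""
    print(run(phase))
```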
VM Compatibility Version 17
- Native instruction support for the Intel Cascade Lake and AMD Rome processors and the following features:
- New Enhanced vMotion Compatibility (EVC) modes for Intel Cascade Lake and AMD Rome processors.
- Virtual SGX (vSGX): this functionality exposes Intel’s Secure Enclave technology. SGX provides run time encryption and protection of protected applications, such that in the event of a compromised guest operating system or hypervisor, the critical data of the application remains secured.
- Virtual Watchdog Timer (vWDT): this new virtual device implements a virtualized version of the watchdog timer (WDT) which can operate with the Microsoft Windows hardware watchdog timer driver (hardware WDT driver). vWDT is often used by clustering applications to determine node failure and initiate the transfer of control to the remaining nodes in the cluster.
- Partial vCPU pinning: this feature allows the VM to pin a subset of the vCPUs to individual cores for greater performance determinism. Note that this is an enhancement to the existing VM latency sensitivity setting, which pins all vCPUs in the VM to cores. Pinning a specified vCPU subset reduces the resource requirements while maintaining the needed performance.
- Added namespace support to improve the performance and scalability of paravirtualized RDMA connections.
- With vSphere 7.0, we are introducing NVM Express over Fibre Channel (NVMe-oFC) and NVM Express over RDMA Over Converged Ethernet v2 (NVMe-oRoCEv2) support.
- NVMe-oFC: a new way to access block storage leveraging the NVMe protocol over your existing Fibre Channel fabric. It allows keeping a single fabric for both SCSI and NVMe storage needs.
- NVMe-oRoCEv2: RDMA over Converged Ethernet (RoCE) is becoming an interesting technology for optimizing the I/O needs of performance-sensitive workloads. Efficient storage is a big challenge for such workloads, as they operate on large data sets. NVMe-oRoCEv2 is one such technology, providing ultra-efficient storage to workloads over your existing Ethernet fabric.
- NVMe-oF interoperability depends on Fibre Channel (FC) HBA or RDMA-capable NIC and storage array vendors. VMware is working with a broad ecosystem of vendors (HBA, NIC, and storage array) on interoperability. A specific interoperability matrix will be published on the VMware Compatibility Guide (VCG) upon vSphere GA. In the meantime, if you would like to try out this feature in your environment, please contact VMware to ensure the feature can run there.
- In addition, vSphere 7.0 introduces a new Pluggable Storage Architecture (PSA) stack for NVMe. The new PSA stack is the default option for NVMe storage (local and remote) and allows efficient NVMe protocol operation.
- Enhancements to PCIe storage device hot-plug support. vSphere now supports hot-add and hot-remove of local NVMe devices, if the server system supports it.
- vVol support for iSCSI Extensions for RDMA (iSER) is enabled.
New Device Support
- Support for USB 3.1, with eXtensible Host Controller Interface (xHCI) networking interoperability at 10G speeds
- Enabled support for Marvell’s 64G FC HBA. Please note this HBA has been tested in 32G mode only
- Enable support for on-chip NIC in Coffee Lake processor
- Enable Broadcom 100G NIC support with bnxtnet driver
- Support SolarFlare NICs with sfvmk driver
- Support Marvell’s RDMA NIC with the qedentv and qedrntv drivers. This update also introduces multiple-namespace support
- Enable Broadcom’s Tri-mode (SATA/SAS/NVMe) controller with lsi_mr3 and lsi_msgpt35 driver
VMkernel Hardware & IO Technologies
- Enabled multi-RSS (Receive Side Scaling) support for the nvmxnet3 driver to improve network stack performance
- Reboot requirement after configuring a PCI device for PCI passthrough to a VM is removed.
- Native Data Center Bridging (DCB) support in ESXi is enabled.
System Storage Enhancements
- ESXi System storage represents storage space used by the system for its operations e.g. software images, system configuration, cache, various generated output like logs, coredump etc.
- vSphere 7.0 overhauls and reformats the system storage layout to address future demands from vSphere and the other software components supported on the server system, such as vSAN, NSX, service VMs, etc.
- Due to this change, if you upgrade your system to vSphere 7.0, no rollback is possible, because the boot disk is reconfigured as part of the upgrade process.
- From vSphere 7.0 onwards, a minimum of 32GB of disk space is recommended for system storage. Please note, VMware also recommends using high-endurance media for the boot partition.
- For vSphere 7.0, we officially deprecate these CPUs in the installer:
- Intel Family 6, Model = 2C (Westmere-EP)
- Intel Family 6, Model = 2F (Westmere-EX)
- For vSphere 7.0, the installer officially warns about, but continues to support, the following CPUs. These CPUs are expected to be deprecated in the next major version of vSphere.
- Intel Family 6, Model = 2A (Sandy Bridge DT/EN, GA 2011)
- Intel Family 6, Model = 2D (Sandy Bridge EP, GA 2012)
- Intel Family 6, Model = 3A (Ivy Bridge DT/EN, GA 2012)
- AMD Family 0x15, Model = 01 (Bulldozer, GA 2012)
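The two lists above can be captured in a small helper that classifies a CPU by its family/model pair, the same identifiers the installer checks. The pairs are the ones quoted above; anything else is treated as unaffected.

```python
# (family, model) pairs from the vSphere 7.0 installer lists quoted above.
DEPRECATED = {
    (6, 0x2C),     # Intel Westmere-EP
    (6, 0x2F),     # Intel Westmere-EX
}
WARNED = {
    (6, 0x2A),     # Intel Sandy Bridge DT/EN
    (6, 0x2D),     # Intel Sandy Bridge EP
    (6, 0x3A),     # Intel Ivy Bridge DT/EN
    (0x15, 0x01),  # AMD Bulldozer
}

def cpu_status(family: int, model: int) -> str:
    """Classify a CPU the way the vSphere 7.0 installer does."""
    if (family, model) in DEPRECATED:
        return "deprecated in the installer"
    if (family, model) in WARNED:
        return "warned: still supported, deprecation expected next major release"
    return "ok"

print(cpu_status(6, 0x2C))     # deprecated in the installer
print(cpu_status(0x15, 0x01))  # warned: still supported, ...
```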
- Drivers using legacy VMKLinux modules are no longer supported.
- If your server hardware still has some older I/O devices that depend on the VMKLinux APIs, you must upgrade those devices before upgrading to this vSphere release.