Microsoft support and security updates for Internet Explorer 8, 9, and 10 end on January 12, 2016


Microsoft has announced that they will no longer provide security updates or technical support for older versions of Internet Explorer. Running an older version of Internet Explorer after January 12, 2016 may expose you to security risks.

The latest version of Internet Explorer will continue to follow the component policy, which means it follows the support lifecycle of the Windows operating system on which it is installed. Focusing support on the latest version of Internet Explorer for each supported Windows operating system is in line with industry standards.

Most customers are already using the latest version of Internet Explorer for their respective Windows operating system; however, there is still fragmentation across the install base, which poses problems for web developers and support staff. Microsoft recommends that customers upgrade to the latest version of Internet Explorer available in order to get increased performance, improved security, better backward compatibility, and support for the modern web technologies that power today’s websites and services.

Beginning January 12, 2016, only the most current version of Internet Explorer available for a supported operating system will receive technical support and security updates, as shown in the table below:

Windows Desktop Operating Systems        Internet Explorer Version
Windows Vista SP2                        Internet Explorer 9
Windows 7 SP1                            Internet Explorer 11
Windows 8.1 Update                       Internet Explorer 11


Windows Server Operating Systems         Internet Explorer Version
Windows Server 2008 SP2                  Internet Explorer 9
Windows Server 2008 IA64 (Itanium)       Internet Explorer 9
Windows Server 2008 R2 SP1               Internet Explorer 11
Windows Server 2008 R2 IA64 (Itanium)    Internet Explorer 11
Windows Server 2012                      Internet Explorer 10
Windows Server 2012 R2                   Internet Explorer 11


Windows Embedded Operating Systems               Internet Explorer Version
Windows Embedded for Point of Service (WEPOS)    Internet Explorer 7
Windows Embedded Standard 2009 (WES09)           Internet Explorer 8
Windows Embedded POSReady 2009                   Internet Explorer 8
Windows Embedded Standard 7                      Internet Explorer 11
Windows Embedded POSReady 7                      Internet Explorer 11
Windows Thin PC                                  Internet Explorer 8
Windows Embedded 8 Standard                      Internet Explorer 10
Windows Embedded 8.1 Industry Update             Internet Explorer 11


For customers running an older version of Internet Explorer, such as Internet Explorer 8 on Windows 7 Service Pack 1 (SP1), Microsoft recommends planning a migration to one of the supported operating system and browser combinations above by January 12, 2016.

Customers have until January 12, 2016 to upgrade their browsers, after which the previous versions of Internet Explorer will reach end of support. End of support means there will be no more security updates, non-security updates, free or paid assisted support options, or online technical content updates.


Migration from Cisco 1000v to VMware Virtual Distributed Switch (Part 1)

While working with an enterprise customer, I was tasked with migrating an entire production environment from the Cisco Nexus 1000v to a VMware Virtual Distributed Switch (VDS), then moving the VDS and the ESXi 5.1 hosts over to a freshly built vSphere 6.0 server. The customer is in the middle of an upgrade from vCenter 5.1 to 6.0, and most of the host upgrades will be done once the hosts are moved over to the new vCenter.

Goals:

  • Non-disruptive migration of networking for Virtual Machines (this is a live production environment)
  • Migrate away from the Cisco 1000v, to VDS
  • Migrate the VDS config from old 5.1 vCenter to new 6.0 vCenter
  • Touch-up naming of virtual machine networks/VLANs
  • Move Virtual Machines from the Virtual Distributed Switch (VDS)/Nexus 1000v to a Virtual Standard Switch (VSS)
  • Disconnect and remove the ESXi hosts from the old vCenter 5.1
  • Connect ESXi hosts to the new vCenter 6.0
  • Migrate VM networking from VSS to VDS

VMware vSphere 5.1 and later allow you to export, import, or restore Distributed Switch configurations from the vSphere Web Client. Since migrating the 1000v itself would be too convoluted, if not outright impossible, I will move everything over to a VDS on the existing 5.1 vCenter first. Once everything is up and running on the VDS, we can migrate the VDS configuration over to the new 6.0 vCenter server.
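
The export and restore can also be scripted with PowerCLI, which is handy for keeping a copy of the backup .zip outside of vCenter. A minimal sketch, assuming hypothetical vCenter, datacenter, and switch names:

  # On the old 5.1 vCenter: export the working VDS configuration to a backup .zip
  Connect-VIServer vcenter51.example.com
  Export-VDSwitch -VDSwitch (Get-VDSwitch -Name "DSwitch-Prod") -Destination "C:\Temp\DSwitch-Prod.zip"

  # Later, on the new 6.0 vCenter: recreate the switch from the backup file
  Connect-VIServer vcenter60.example.com
  New-VDSwitch -Name "DSwitch-Prod" -Location (Get-Datacenter -Name "DC01") -BackupPath "C:\Temp\DSwitch-Prod.zip"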


Unfortunately, we will also need to create a Virtual Standard Switch (VSS) configured with all the networks, with a matching configuration on each ESXi host, in order to actually migrate the ESXi hosts over to the new 6.0 vCenter. This will be automated with scripts, of course. We must migrate all virtual machine, VMkernel, and service console networking from the VDS to the VSS so that network connectivity is not lost when we remove the hosts from the VDS in order to disconnect them from the 5.1 vCenter and add them to the 6.0 vCenter.
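
The per-host VSS build is the easily scriptable part. A minimal PowerCLI sketch, assuming a hypothetical cluster name and port group/VLAN map; in reality you would generate the map from the existing VDS port groups:

  # Hypothetical map of port group names to VLAN IDs; match these to the VDS config
  $portGroups = @{ "VLAN100-Prod" = 100; "VLAN200-Dev" = 200 }

  foreach ($vmhost in Get-VMHost -Location "Cluster01") {
      # Create the standard switch on each host; an uplink gets attached later
      $vss = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1"
      foreach ($pg in $portGroups.GetEnumerator()) {
          New-VirtualPortGroup -VirtualSwitch $vss -Name $pg.Key -VLanId $pg.Value | Out-Null
      }
  }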

MIGRATION FROM 1000v to VDS

Summary:
The following steps will migrate the host and VM networking from the Cisco Nexus 1000v to a VMware Virtual Distributed Switch. This migration plan assumes that there are at least two dedicated uplinks for VM traffic. The purpose is to remove dependencies on the legacy 1000v and create a known working configuration of the VDS that will later be migrated to the new vCenter 6.0 server. The customer has decided against using the 1000v and wants it removed from their environments. We need to perform this migration before we can move the hosts to the 6.0 vCenter so that we have a working VDS configuration that we can later export to the new vCenter, giving us an immediately working VDS configuration there.

I performed the migration in two parts that I named “legs”, a reference to the uplinks (A + B) themselves. This is to ensure I can quickly and easily roll back the change if necessary. For the duration of the migration we will only be on one “leg” at a time (either the 1000v on uplink A, or the VDS on uplink B). This of course introduces a single point of failure, but the risk is acceptable since the change window is quite small and the chance of a switch or uplink failure during the change is low. Regardless, I will ensure that both the 1000v and the VDS are fully working at all times until all VM networking is migrated from the 1000v to the VDS, testing along the way to ensure there is no impact to VM networking. Once all VMs are moved off the 1000v, we can remove its uplink (which at that point should be completely unused) and add that uplink to the VDS, restoring our A+B uplink redundancy.

Pre Tasks:

  • *** Disable HA, DRS, and EVC on the Cluster ***
  • *** Storage DRS needs to be set to manual or disabled *** (see the PowerCLI sketch below)
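
A minimal PowerCLI sketch of these pre-tasks, assuming a hypothetical cluster name ("Cluster01"). EVC is easiest to change from the cluster settings in the client, though newer PowerCLI releases also expose an -EVCMode parameter on Set-Cluster:

  # Disable HA and DRS on the cluster for the duration of the migration
  Set-Cluster -Cluster (Get-Cluster -Name "Cluster01") -HAEnabled:$false -DrsEnabled:$false -Confirm:$false

  # Drop Storage DRS to manual on any datastore clusters
  Get-DatastoreCluster | Set-DatastoreCluster -SdrsAutomationLevel Manual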

LEG 1
The following will put the hosts into a split 1000v + VDS configuration: one leg on the 1000v, one leg on the VDS. This is necessary to allow proper configuration verification and full migration of VM networking. During this time VMs will only have one uplink on either side, which introduces a single point of failure; however, this lasts only for the duration of the migration, after which they move back to 2+ uplink paths.

1 – Place target host into maintenance mode
2 – Remove ONE host uplink to the 1000v
3 – Attach host to new VDS using the now available vmnic that used to be on the 1000v
4 – Remove host from maintenance mode
5 – Use Testing VM to verify networking is working (at your discretion)
6 – Repeat process for all hosts individually (a PowerCLI sketch of steps 1 through 4 follows below)
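
Steps 1 through 4 could be scripted roughly like this in PowerCLI. The cluster name, switch name, and vmnic below are hypothetical placeholders, and you would still pause for the step 5 testing between hosts; third-party switches like the 1000v can behave differently, so treat this as a sketch rather than a drop-in script:

  foreach ($vmhost in Get-VMHost -Location "Cluster01") {
      # 1 - Enter maintenance mode
      Set-VMHost -VMHost $vmhost -State Maintenance | Out-Null

      # 2 - Pull ONE uplink (vmnic1 here) off the 1000v
      $nic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
      Remove-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $nic -Confirm:$false

      # 3 - Attach the host to the new VDS using the freed vmnic
      $vds = Get-VDSwitch -Name "DSwitch-Prod"
      Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost
      Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic -Confirm:$false

      # 4 - Exit maintenance mode, then verify networking before the next host
      Set-VMHost -VMHost $vmhost -State Connected | Out-Null
  }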

LEG 2
The following will migrate the second leg into the uplink group of the first, restoring proper link redundancy on the VDS. At this point VM networking should be working on both the 1000v and the VDS. The following steps will fully disconnect the 1000v and migrate all VM networking to the VDS, which allows the removal of the 1000v while the VMs continue running without interruption.

1 – Using the ‘Migrate Virtual Machine Networking’ tool, migrate ALL VM networks as required from the 1000v to the VDS networks
2 – Repeat the Migrate Virtual Machine Networking process for each VM port group until all VMs are migrated
3 – ** At this point all VMs should be running from the VDS **
4 – Place target host into maintenance mode
5 – Remove host from the 1000v
6 – Add the vmnic that was on the 1000v to the VDS uplinks
7 – The 1000v Packet/Control (etc.) vmnics should now show as DOWN on the host
8 – Remove host from maintenance mode
9 – Repeat process for all remaining hosts (a PowerCLI sketch of the port group migration follows below)
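
The port group migration in steps 1 and 2 can also be done with PowerCLI instead of the Web Client tool. A sketch, assuming a hypothetical map of old 1000v port group names to their new VDS counterparts:

  # Hypothetical map: 1000v port group name -> new VDS port group name
  $map = @{ "n1kv-vlan100" = "VLAN100-Prod"; "n1kv-vlan200" = "VLAN200-Dev" }

  foreach ($entry in $map.GetEnumerator()) {
      $target = Get-VDPortgroup -Name $entry.Value
      # Re-point every VM network adapter on the old port group to the new one
      Get-VM | Get-NetworkAdapter |
          Where-Object { $_.NetworkName -eq $entry.Key } |
          Set-NetworkAdapter -Portgroup $target -Confirm:$false
  }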

Post Tasks:
– Re-enable HA, DRS, EVC, and Storage DRS as appropriate (see the sketch below)
– Export the working VDS configuration to the new v6.0 vCenter server
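
Mirroring the pre-tasks, the re-enablement is a couple of lines in PowerCLI (same hypothetical cluster name as before):

  # Re-enable HA and DRS, and put Storage DRS back to fully automated
  Set-Cluster -Cluster (Get-Cluster -Name "Cluster01") -HAEnabled:$true -DrsEnabled:$true -Confirm:$false
  Get-DatastoreCluster | Set-DatastoreCluster -SdrsAutomationLevel FullyAutomated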

We’re done!

This migration was completed successfully for the customer in all development, staging, and production VMware environments. During the migration we even took the time to clean up and standardize the network names of the VM port groups across the environments. I hope this guide is helpful if you find yourself in a similar situation.

Click here for Part 2, where we will migrate from the VDS to VSS networking so that we can move the hosts to the new vCenter server, then move back to the VDS on the new vCenter 6.0!

If you have any comments, questions, or suggestions please let me know in the comments section below!


Automatically reboot an ESXi host after PSOD


Anyone who has worked in a VMware environment for any length of time should be quite familiar with the purple diagnostic screen, or what we like to call the “purple screen of death”. Even VMware themselves internally reference the relevant setting as “BlueScreenTimeout”, so make no mistake about where it gets its name. This PSOD screen is what appears on the console when an ESXi host hits a fatal error and goes into an unresponsive state.

Note: The default and VMware-recommended setting is to leave the host in an unresponsive state with the purple diagnostic screen displayed on the console to aid in troubleshooting.

There are some exceptions to VMware’s recommendation, mainly for environments or situations where we simply don’t care what caused the PSOD or why; we just need the host rebooted and back online as soon as possible. In particular, if you are using remote syslog on the ESXi host (which you should be), the console screen itself is of trivial importance, and leaving it up just forces manual intervention to reboot the host from iLO/IPMI.

If appropriate for your environment, let’s set an ESXi host to automatically reboot after 120 seconds at the PSOD screen. There are three ways to do this: via SSH, or using the “Advanced Settings” window in either the vSphere client or the vSphere Web Client.

Using SSH:

  1. Connect to the ESXi host via SSH
  2. Run command:
    • esxcfg-advcfg -s 120 /Misc/BlueScreenTimeout

The value is in seconds before the reboot will occur. Change this as desired.

Using vSphere Client:

  1. Select the host you wish to configure
  2. Go to the Configuration tab, select Advanced Settings
  3. From the Advanced Settings window select “Misc”.
  4. Find the “Misc.BlueScreenTimeout” value.
  5. Enter desired auto reboot time, in seconds.
  6. Click OK to save, and rinse and repeat for other hosts.


Using vSphere Web Client (5.x+):

  1. Select the host you wish to configure
  2. Select the Manage Tab. Select “Advanced System Settings”.
  3. Scroll down (or use the filter) to find “Misc.BlueScreenTimeout”.
  4. Click the Edit button. Enter the timeout value, in seconds.

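
If you have more than a handful of hosts, PowerCLI can set this everywhere at once. A quick sketch, assuming an existing Connect-VIServer session to vCenter:

  # Set Misc.BlueScreenTimeout to 120 seconds on every host in the inventory
  Get-VMHost | ForEach-Object {
      Get-AdvancedSetting -Entity $_ -Name "Misc.BlueScreenTimeout" |
          Set-AdvancedSetting -Value 120 -Confirm:$false
  }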

Source: http://kb.vmware.com/kb/2042500

Storage Refresh 2016 – Time to Build! (Part 2)


Part 1: http://www.vskilled.com/2015/07/storage-refresh-2016-the-plan/

The hard drives arrived today from NCIX, and it’s now time to build everything out and finally increase the storage capacity in my home lab. I’ve made only minor changes to the original plan: I ended up shying away from the Seagate 8TB archive hard drives I had originally planned to buy strictly for backup purposes. Much like 3TB drives, I just don’t have any confidence in them long-term.

Device                  Current Drive Layout                   Current Capacity    Desired Drive Layout       Desired Capacity
NAS 1 (Thecus N5550)    5 x 2TB (RAID 5)                       7.21TB              5 x 4TB (RAID 5)           ~16TB
NAS 2 (Thecus N5550)    1x4TB, 2x2TB, 1x1TB, 1x640GB (JBOD)    8.9TB               5 x 2TB (RAID 5)           ~7.25TB
VMH01 (Dell C1100)      1 x 1TB                                1TB                 2 x 8TB, 2 x 2TB (JBOD)    ~20TB

The end result stays the same: I’m looking to end up with two large data/media storage pools with about 22TB of usable storage, a considerable increase from my existing data capacity of 7.21TB.

The challenge now is performing a safe and successful data migration to the new storage. Normally I use NAS2 as my backup/archive NAS. I am going to remove the drives from it, move some of them into VMH01 (the Dell C1100), and create a temporary datastore to back up all the data. That way I can safely create a backup of the data and still be semi-protected by RAID.

After some careful digging through documentation, I found that I would probably be able to swap the disks out of NAS2 and move them into NAS1 without losing any configuration or data. This is risky, however, so I will store a third copy of my data on a JBOD on “BackupSrv”. In this case the risk is worth the reward: if it pays off, I save myself from having to copy the data back from the BackupSrv JBOD, and in the worst case I still have the data on the original drives, so I can just swap them back if I need to roll back the change.

The Step-by-Step Action Plan:

  1. NAS2: Destroy JBOD, power-off, remove drives
  2. VMH01: Add 1x4TB, 1x2TB, 1x1TB. (Add to BackupSrv VM)
  3. BackupSrv: Create JBOD datastore for backup
  4. NAS2: Add 5x4TB NAS drives, build raid, re-configure NFS, rsync, ftp, etc
  5. NAS1: Full rsync backup to BackupSrv & NAS2, verify data
  6. Power-off both NAS1 and NAS2.
  7. Swap disks from NAS2 into NAS1, NAS1 into NAS2. Power on, cross fingers.
  8. Verify data and shares. It works!
  9. Data Migration Completed
  10. Cleanup: Reconfigure Rsync Backup Schedules
  11. Cleanup: Update Home Lab page, CMDB, Wiki
  12. Cleanup: Permissions on shares

* – Veeam Backup Repository moved temporarily to NAS1 (approx. 600GB)
* – NFS datastores + permissions will be lost during the RAID rebuild
* – Printer Scan-to-FTP setup


Let’s take a look at the storage now. Excellent! I now have a large 14.5TB RAID 5 share for media/data storage, another 7.26TB RAID 5 share for more data storage, and another 7TB of disks in JBOD for archive/backups. I also have an LSI MegaRAID SAS 9260-8i 8-port RAID card on the way so that I can properly JBOD the archive/backup drives and present the disks more cleanly to a backup VM.

Dashlane – Password Manager Review


First off, I am a technical user. Nothing annoys me more than a piece of software so dumbed down for the sake of “ease of use” that it lacks basic functionality. I have tried many password managers in the past but have just not been impressed by their reliability, security, features, etc. That is no longer the case since I discovered Dashlane. It’s user friendly enough for anyone to use, but it also has the technical options and features that make it usable day to day by a technical user or a systems administrator.

The best way to stay secure on the web today is to have a unique, secure password for each individual website or service that you use. That way, when one of your credentials is compromised, the fallout is limited to that login and not everywhere you use that same password. The problem: how is someone supposed to remember a unique and secure password for each website they visit? You don’t. You use a password manager. It will generate a unique password for you, store it securely, and automatically log in to the website when you visit it. This saves you both from having to remember the password and from having to fill in the login form. Maybe the time saving is trivial at best, but after, say, 500 logins, that adds up to a lot of form filling! Again, the main point here is security.


Installing open-vm-tools on CentOS


Just a quick post here today about installing open-vm-tools on CentOS. There’s no need to download and install separate epel-release files anymore, as it’s now in the CentOS extras repo directly.

To install it, first pull in epel-release with this command, then install open-vm-tools:

yum -y --enablerepo=extras install epel-release
yum -y install open-vm-tools

– extras is enabled by default, but --enablerepo caters for those who have disabled it.

Enjoy!