PowerShell Snippet to Remotely Disable IPv6

As we’ve been working more and more with Windows Server 2012 R2, we’ve run into issues stemming from IPv6 in the network stack. Two specific issues we’ve seen:

  • During DNS lookups, Windows Server 2012 R2 first issues a DNS lookup request over IPv6. Typically the lookup will fail as IPv6 is not widely deployed yet. In some scenarios, rather than also performing a request over IPv4, the DNS lookup just fails. This impacts everything that depends on DNS, from AD to System Center to Exchange.
  • In some scenarios, PowerShell cmdlets, specifically RDS-related cmdlets, will fail. Disabling IPv6 enables those cmdlets to successfully execute.

The following PowerShell snippet can be used to disable IPv6 on a remote system. It can easily be wrapped inside a function for batch operation.

$adminUser = "DOMAIN\USERNAME"
$adminPwd = "PASSWORD"
$compName = "REMOTECOMPUTERNAME"
 
$secPwd = ConvertTo-SecureString $adminPwd -AsPlainText -Force
$remoteCreds = New-Object System.Management.Automation.PSCredential ($adminUser, $secPwd)
$ServerSession = New-PSSession -ComputerName $compName -Authentication CredSSP -Credential $remoteCreds
 
Invoke-Command -Session $ServerSession -ScriptBlock {
    # Disable all IPv6 components via the DisabledComponents registry value
    $regPath = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters"
    New-ItemProperty -Path $regPath -Name "DisabledComponents" -Value "0xFFFFFFFF" -PropertyType "DWORD" -Force | Out-Null
    # The change only takes effect after a restart
    Restart-Computer -Force
}

Caveats: WinRM must be enabled, and the code restarts the remote computer to finish applying the operation.
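
As noted above, the snippet can easily be wrapped inside a function for batch operation. Here is a minimal sketch of that idea; the function name and parameter names are hypothetical, and CredSSP is assumed to be configured just as in the snippet above:

function Disable-RemoteIPv6 {
    param (
        [Parameter(Mandatory=$true)][string[]]$ComputerName,
        [Parameter(Mandatory=$true)][System.Management.Automation.PSCredential]$Credential
    )

    foreach ($name in $ComputerName) {
        # One session per target server
        $session = New-PSSession -ComputerName $name -Authentication CredSSP -Credential $Credential
        Invoke-Command -Session $session -ScriptBlock {
            $regPath = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters"
            New-ItemProperty -Path $regPath -Name "DisabledComponents" -Value "0xFFFFFFFF" -PropertyType "DWORD" -Force | Out-Null
            Restart-Computer -Force
        }
        Remove-PSSession $session
    }
}

# Example: Disable-RemoteIPv6 -ComputerName "SRV01","SRV02" -Credential (Get-Credential "DOMAIN\USERNAME")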

Removing ‘Unknown’ Storage Pool in Server 2012 / R2

I ran into an issue the other day where a Server 2012 clustered storage pool transitioned to an ‘Unknown’ status, and changed to read-only mode. I had difficulty removing the storage pool, and after a great deal of hair-pulling, figured out how to do so.

This scenario occurred when I’d provisioned 3 iSCSI virtual target disks across three different hosts, and joined them in a storage pool to a Scale-Out File Share cluster (two of them mirrored, the third just a small parity disk for the storage pool). During my testing, I managed to corrupt one of the iSCSI target disks, and it changed to a status of ‘Retired’. Ordinarily this would be fine, and I could just replace the disk. Unfortunately, I accidentally deleted the wrong iSCSI target, and removed the second disk in the mirror. Now BOTH disks were marked as ‘Retired’, and the storage pool changed to an operational status of ‘Unknown’, and set itself to Read-Only to prevent further corruption.

No big deal, I thought; I’d just delete the storage pool, as there was no data in it. Unfortunately, since the storage pool was read-only, I could not run the delete operation. I tried to right-click the pool and change it back to Read-Write, but that option was greyed out. I couldn’t remove the disks from the pool, and I couldn’t delete the associated virtual disks; as far as I could figure out, it was stuck in limbo. I tried deleting via PowerShell, but the Remove-StoragePool cmdlet has no ‘-Force’ option, and it failed due to the unhealthy state of the pool. I then tried the old trick of routing around PowerShell via direct WMI access to the MSFT_StoragePool class object and running its .Delete() and .DeleteObject() methods. Unfortunately, this also failed due to the storage pool state. Searching the Internet also yielded no results.

As a last-ditch effort, I tried going back to the top of the stack and working down. I removed the storage pool from the cluster via Failover Cluster Manager. I then noticed in Server Manager that read access had changed from the ‘FSC’ cluster object to ‘FSC1, FSC2’, the two computers in the scale-out file share cluster. I tried a right-click on the pool, and this time I was able to set Read-Write access. Once I’d set Read-Write access, I was able to delete the storage pool.

Long story short: if you’re having issues deleting a corrupt storage pool, ensure it’s removed from the cluster first, and then you should be able to change the access to Read-Write and delete the storage pool.
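
For reference, here is roughly what that final sequence looks like in PowerShell once the pool is out of the cluster; the pool’s friendly name here is just an example:

# Clear the read-only flag, then delete the pool
Set-StoragePool -FriendlyName "ClusterPool1" -IsReadOnly $false
Remove-StoragePool -FriendlyName "ClusterPool1"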

Converged Networking and Guest Clustering in Server 2012 / R2

As you’re probably aware if you’re reading this blog, Server 2012 introduced a few new features, two of which are Guest Clustering and Converged Networking. Converged Networking allows you to create a virtual switch, bind it to the NIC team, and then create vNICs for BOTH the VMs AND the host OS to use. Guest Clustering allows you to create failover clusters inside VMs running on top of a Hyper-V cluster.

One of the features of Hyper-V virtual switches is detection of spoofed MAC addresses: when a packet’s source MAC doesn’t match the vNIC’s assigned address, the switch drops it. This usually surfaces when creating NLBs on VMs in a Hyper-V environment. The NLB cluster endpoint has its own virtual MAC address, and MAC Address Spoofing must be turned on before the NLB functions correctly.

When combining Converged Networking with Guest Clustering, the converged switch drops packets coming to/from the cluster object, as its MAC address isn’t a ‘legitimate’ one assigned to a vNIC. This causes all sorts of weird issues, where pings work but cluster traffic doesn’t.

The resolution is to allow MAC Address Spoofing on the management OS vNICs. This has to be done via PowerShell, as there is no GUI for management OS vNICs:

Get-VMNetworkAdapter -ManagementOS | Set-VMNetworkAdapter -MacAddressSpoofing On

I banged my head against some weird networking issues with a guest cluster before figuring this issue out. I hope that my experience can help you avoid doing the same.

Update on Life

I’m afraid I’ve let this blog sit idle for far too long. I always promise myself that I’ll post more, but then life happens. :)

In late August I accepted an offer for a new opportunity, and began the new job early in September. My new employer is a Microsoft Partner, focusing solely on System Center. There were a few reasons for this move, a couple of them based on what I perceive happening in the IT market. I’d like to write out a couple of these perceptions.

The first reason for a change in focus was the perception that SharePoint is moving down the stack. The days of SharePoint as a massive product implementation are over, with Microsoft driving licensing deals down and shipping Office 365 (SharePoint Online) as a service for small and medium sized businesses. While SharePoint Online leaves much to be desired, it is swiftly catching up to feature parity and will soon exemplify the cloud-first approach. Considering our clients were all small business clients, this was not a market I wanted to be in long-term. I enjoy implementing backend technology to serve the business and user, and SharePoint Online is not a great fit for that interest.

The second reason I elected to pursue a System Center company is my deep interest in private and hybrid cloud scenarios. The cloud industry is in a state of substantial flux, and will be for the next couple of years. System Center is the glue that holds it all together and pulls it into something that is useful for enterprises and users. It’s the key to hybrid cloud for the next few years, no matter what happens in the cloud industry. This is *the* place to work for the next few years if you work in IT infrastructure. Since I’d already worked a fair bit implementing System Center to enable better SharePoint development, this was a natural evolution of my skills and experience.

The third reason I pursued Infront Consulting Group in particular was their track record. I found them by looking through the Impact Award winners for 2011 and 2012. Infront’s name came up a few times, and the fact that they won Partner of the Year for Private Cloud stood out to me. I looked them up on PinPoint, and they’ve got incredibly high reviews and feedback. They had a number of YouTube videos of staff explaining why they like working there. They also have the highest density of MVPs of any organization worldwide. They were also number 211 on Canada’s Profit 500, and they’d only been around since 2001, which meant they must be doing something right. I submitted a cover letter and resume within 2 hours of discovering they had a Senior Technical Consultant position open.

Since I’ve joined on here, I’ve been very pleased with my decision. There are several streams I can focus on, so I won’t get bored for a very long time. The on-boarding process has been nothing short of excellent. There is a very strong emphasis on mentorship and personal development here, which is somewhat surprising given that they are a consulting company. If there is bench time, that time is devoted to picking up new skills or auditing one of the classes another instructor might be delivering (every consultant has their MCT).

Of course, one of the other perks of working here is the ability to work with some very big brains. It’s a real privilege to be able to grow from the depth and wealth of the experience of these colleagues.

So, that’s that. Lots of things happening the last few months, but it’s all been very very worth it.

SharePoint 2013 Authentication Prompt for Anonymous PDF Files

Ran into an interesting problem today. On an anonymous, public SharePoint 2013 internet site, public documents of the Office or PDF type were prompting for authentication. The PDF authentication prompt in particular puzzled me; then I remembered that the April 2013 CU for SharePoint 2013 enabled WOPI support for PDF documents. This indicated to me that I should treat them as Office documents in my troubleshooting.

There is an old SharePoint issue with ‘OpenItems’ permission not being granted to documents in sites with anonymous access turned on at the root site level. Once I realized that the PDFs were being treated the same as Office documents, I realized there was a good chance this was the cause. Three PowerShell commands later, and the issue was resolved.

You can add the required ‘OpenItems’ permission to the site as follows:

(Open the SharePoint Management Shell as the Farm Administrator)

# Get the root web of the anonymous site
$Web = Get-SPWeb http://www.yoursite.com
# Add 'OpenItems' (alongside the default anonymous permissions) to the anonymous permission mask
$Web.AnonymousPermMask64 = "ViewListItems, ViewVersions, Open, ViewPages, UseClientIntegration, OpenItems"
$Web.Update()

Virtualization, Hyper-V, and the SMB market

I have been intending to put together a post for some time now on virtualization, and how it impacts the small business market, ever since I first obtained access to a pre-release version of Server 2012. However, I have been so busy that I have not had the time to put one together until now.

Virtualization has been viewed as some sort of higher-level infrastructure component by the small business market. The assumption is that it requires more expensive hardware and expertise to implement and maintain, and these are costs the average small business tends to avoid if possible. However, Windows Server 2012 has changed that paradigm, and I’d like to talk about how I see virtualization becoming an integral piece of small business infrastructure implementations, and some of the approaches I’ve taken.

Virtualization is important to small businesses because it allows for higher availability, and an abstraction of the server from the hardware (hardware being a significant expense in this market). If the hardware fails, the server can be spun up on another host while budget and/or hardware support issues are worked out. In addition, virtualization licensing lets the business get more out of the hardware it already owns, squeezing more capability out of the same physical box(es).

The primary perceived barrier is cost of entry. Licensing is perceived as being expensive, and hardware requirements for servers are perceived to be higher than necessary. In addition, some of the more desirable attributes of virtualization, like High Availability and Failover, have required expensive shared storage units.

Windows Server 2012 addresses the cost of entry in several ways. The first is licensing. A single license of Server 2012 Standard entitles the owner to two virtual machines on that host. This is essentially two workhorse servers for the price of one, plus the underlying host OS/hypervisor for free. Server 2012 Datacenter provides unlimited VM server OS licenses. Or perhaps a client doesn’t have Software Assurance and/or doesn’t want to purchase new licenses; Server 2012 Hyper-V Server is free, and provides the same virtualization capabilities as the Standard and Datacenter versions.

The second way that Windows Server 2012 addresses the cost of entry into the virtualization market for small businesses is a combination of increased performance on existing hardware (compared to Server 2008 R2) and the introduction of SMB 3.0 with application-tier share support. This means that with a Server 2012 SMB server, you can use an SMB share as a poor man’s version of a SAN or DAS shared storage unit. So for the cost of a few additional drives and/or a new server, you have a storage share suitable for High Availability & Failover Clustering, all for a few thousand dollars versus the $20,000-$50,000 for a SAN or DAS unit. This is a very affordable way for the small business to get clustered file storage, and be able to take advantage of HA & FC.
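
As a rough illustration of what that looks like on the file server side, here is a sketch using the Server 2012 SMB cmdlets; the share name, path, and Hyper-V host computer accounts are all examples, and in production you would typically put the share on a clustered Scale-Out File Server:

# Create a continuously available SMB 3.0 share and grant the Hyper-V hosts full access
New-SmbShare -Name "VMStore" -Path "D:\Shares\VMStore" -FullAccess "DOMAIN\HV-HOST1$", "DOMAIN\HV-HOST2$" -ContinuouslyAvailable $true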

All this is great: Server 2012 is cheaper licensing-wise, I can fit more virtual servers on one box, I can get a cheap file share and use it for High Availability and Failover Clustering. But where does one start taking advantage of these capabilities? I have a few thoughts and experiences on this.

A great place to start is to capture the client’s server into a VHD file using an excellent utility called Disk2VHD. This executable leverages the Windows VSS writers to snapshot the hard drive as is, and dump that snapshot into a VHD file. One thing to bear in mind is that you will essentially be booting this server up into ‘new hardware’. This means you will need to document the IP configuration of the server, because it will use DHCP on ‘new’ NICs.
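
A simple way to keep that IP information on hand is to dump it to a file somewhere off the machine before the capture. The UNC path below is just an example, and the Disk2VHD command line shown in the comment is from memory, so verify it against the Sysinternals documentation:

# Save the current network configuration before capturing the server
ipconfig /all | Out-File \\backupserver\migration\server01-ipconfig.txt

# Disk2VHD can also be run from the command line, e.g. capturing all volumes to one VHD:
#   disk2vhd.exe * \\backupserver\migration\server01.vhd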

Once the ‘image’ of the server in question is in your virtual hard drive, you can then enable the Hyper-V role on the server host (if you’re running Server 2008, 2008 R2, or 2012). You might want to give the host a new NIC IP configuration. Disconnect the host from the virtual network, then rename the server host and remove it from the domain (this leaves the original computer account intact in Active Directory). Then create a new virtual machine in Hyper-V Manager, using your newly captured VHD file. Boot up the VM and fix the IPs. Once you validate everything is working, uninstall unnecessary software from your new virtualization host. You now have the original server running in a Hyper-V virtual environment on the same hardware!
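
On Server 2012, the ‘create a new virtual machine’ step can also be done from PowerShell. A minimal sketch, with the VM name, memory size, VHD path, and switch name all invented for the example:

# Create a VM around the captured VHD and start it
New-VM -Name "SERVER01-P2V" -MemoryStartupBytes 4GB -VHDPath "D:\VHDs\server01.vhd" -SwitchName "External vSwitch"
Start-VM -Name "SERVER01-P2V"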

If a client is running an OS older than Server 2008, I recommend installing Server 2012 Hyper-V Server into a VHD file on the hard disk, and then proceeding through the steps I just outlined. This gives you essentially a ‘dual-boot’ configuration without wiping the disk. If worst comes to worst, you just boot back into the original installation on the disk, rather than native-booting into the VHD file that holds your new Hyper-V host OS.

Windows Server 2012 virtualization is a huge win for small businesses. Once a business gets a taste of the free virtualization and HA/failover features, it is hard to imagine going back to being without them.

SharePoint 2013 Search and Server 2012 Hyper-V

I’ve run into an issue with the SharePoint 2013 public beta after upgrading our Hyper-V cluster from Server 2008 R2 to Server 2012. It seems that SharePoint 2013, installed on Windows Server 2012 in a VM that is running on a Server 2012 host, will lock up if the Search Administration component is running. I believe there is some sort of CPU race condition occurring between Windows Explorer, the new Hyper-V integration components, and the Search Administration component. Disabling search crawls did not resolve the lockup issue, but pausing the Search Administration component appeared to resolve it.

To sum up, here is the scenario that I found to cause SP2013 VMs to lock up (tested on 3 different VMs):

  • Virtual Host: Server 2012
  • VM Operating System: Server 2012
  • SharePoint 2013 Public Beta
  • Search configured

Pausing the search administration service seems to resolve the lockups, but kind of defeats the purpose of having SharePoint search. This issue did not occur on a virtual host running Server 2008 R2, with Server 2012 as the guest OS, SP2013 installed, and search configured.
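
For what it’s worth, the pause can be scripted from the SharePoint 2013 Management Shell. This suspends the whole Search Service Application rather than just the admin component, so treat it as a sketch:

# Suspend (and later resume) the Search Service Application
$ssa = Get-SPEnterpriseSearchServiceApplication
Suspend-SPEnterpriseSearchServiceApplication -Identity $ssa
# Resume-SPEnterpriseSearchServiceApplication -Identity $ssa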

Windows Server 2008 R2, Hyper-V, and NLB

Just a quick note for those of you using Hyper-V and attempting to use NLB: you need to turn on MAC address spoofing on the VM’s virtual NIC. By default, Hyper-V blocks incoming traffic to the NLB virtual MAC address, because it does not match the MAC address assigned to the VM’s virtual NIC.
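
On Server 2008 R2 this is a checkbox in the VM’s network adapter settings; on Server 2012 and later the same setting can also be flipped from PowerShell (the VM name below is just an example):

# Enable MAC address spoofing on all of the VM's network adapters (Server 2012+ cmdlet)
Set-VMNetworkAdapter -VMName "NLB-Node1" -MacAddressSpoofing On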

Remote Desktop Disconnects Randomly

Update 3: As per a reply from JBAB on the Technet thread, the problem lies with the default RDP configuration on Server 2008 R2. I had a GPO that was enabling RDP, but when the SCEP client refreshed the policy, the GPO setting would temporarily drop out, falling back to whatever was set in the registry (in this case, ‘do not allow’). This can be resolved by setting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\fDenyTSConnections to 0. I pushed this registry change out via another GPO, and haven’t seen any problems since.
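
For reference, the equivalent change from an elevated PowerShell prompt looks like this (pushing it out via a GPO, as I did, accomplishes the same thing):

# Allow Remote Desktop connections at the registry level
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" -Name fDenyTSConnections -Value 0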

Update 2: I have started this Technet Forum thread here 1.

Update 1: disabling NIS didn’t fix it.

I deployed SCCM 2012 RTM to our environment last week, after having run the RC successfully for a while. Since then, there have been a number of dropped RDP sessions to our servers. They occur at random intervals, and there are no errors reported in the event logs.

On further investigation, I discovered that the disconnects were occurring at the instant the Forefront Endpoint Protection client updated the Default Antimalware Policy. I’ve turned off ‘Behavior Monitoring’ and ‘Protection Against Network-Based Exploits’ under the ‘Realtime Protection’ tab. Things appear to be stabilizing.

I suspect that it is the protection against network-based exploits feature (which uses the Network Inspection System) that is causing this. It’s caused me grief in the past with Forefront TMG, and doesn’t appear to be that much better in SCEP.

SCVMM 2012 RC Console Crashing Repeatedly

I encountered an issue with the SCVMM 2012 RC console crashing repeatedly. After further investigation, I discovered that it had previously been configured to point to an RC install of SCOM 2012. That install had since been replaced with an RTM version, which no longer had the VMM connection details configured on the SCOM server. Editing the hosts file on the VMM server to point the SCOM server’s name back at the VMM server itself allowed me to open the console and reconfigure the connection.
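
The hosts file workaround is just a one-line entry on the VMM server that maps the SCOM server’s name to the loopback address; something along these lines, where the SCOM server name is a made-up example:

# Run on the VMM server; points the stale SCOM server name back at the local machine
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "127.0.0.1`tscom01.contoso.local"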

SCCM 2012 Failed to Create Machine Certificate

If you, like me, have been attempting to get SCCM 2012 installed in your lab environment, you may have encountered the error ‘Failed to create machine certificate’, and been unable to proceed. In my case, I was attempting to install against a default install of SQL 2012. SQL 2012 defaults to creating local ‘Network Service’ accounts for each of the SQL service accounts. Changing the MSSQLSERVER service to run as a domain account resolved the error.

UI, Cloud, and Ecosystems

I’ve been absolutely fascinated over the last few months with the developing cohesion of the respective Apple and Microsoft ecosystems. On the one hand, Apple is attempting to unify their user experiences across their mobile and desktop platforms via cloud services and UI interaction models. On the other hand, Microsoft is unifying their user interfaces across mobile, desktop, gaming, online services, and server platforms, and unifying the settings in each device category via cloud services. I’d like to detail my perspective on the design path for each ecosystem.

Apple

With the introduction of iOS, Apple unveiled a brand new UI model. Many pundits have theorized over the last few years about Apple unifying the two platforms, similarly to how Microsoft has attempted to sell ‘Windows everywhere’.

While I’ve always admired Apple for the tightly integrated user experience inside OS X, and likewise inside iOS, the two platforms have seemed very distinct and separate over the last few years. Rightly so: they are two very different platforms, with very different interaction models.

Apple made its first real attempt to unify the available services across the platforms with MobileMe, a less-than-stellar ‘cloud’ service intended to unify communications content across the platforms, as well as provide hosted media sharing services. The service failed miserably, and Apple moved on.

Over the last year or so, with the introduction of iOS 5 and OS X Lion, Apple has positioned iCloud as a unified content service, bundled with any new Mac or iOS device. Developers can plug their OS X apps and iOS apps in, and expect the same content to be accessible (streamed or synced locally) on any device the user is signed into. There is also a limited cloud front end for some of Apple’s own apps, but I suspect that in a year or two we may see the iCloud front end open up to developers, and enable users to sign into a web portal to access their content.

In addition to unifying the accessible content across their platforms, Apple has also been unifying design elements across them. OS X Lion and OS X Mountain Lion borrow design elements heavily from iOS. However, while certain design elements are heavily borrowed, OS X remains oriented toward keyboard/trackpad usage, and iOS remains heavily oriented toward direct touch interaction. They look visually similar, enough to put a new user coming from the other platform at ease, while still maintaining their respective functional interaction optimizations. Add the user’s content being automatically available via iCloud, and a new user will feel right at home.

Microsoft

Microsoft has lagged behind in the mobile market over the last few years, having gone back to the drawing board after the success of iOS. With the introduction of Windows Phone 7 however, Microsoft unveiled an innovative new design language now called ‘Metro’, optimized for touch interaction.

Windows Phone 7, while not a success in market share, was well received and praised by critics for its UI. While very different from iOS, it was unique, very fluid, and felt natural very quickly. Unfortunately, it entered the smartphone market very late in the game, and was forced to compete with iOS and the various Android copies/competitors.

This past year, however, Microsoft began to implement the Metro design language across its platforms. The Xbox 360 saw the Kinect add-on and a firmware update, which changed the console UI to a virtual-touch UI model. Windows 8, client and server, were unveiled with dramatically changed UIs. Gone is the old Start menu, replaced with a very in-your-face fullscreen Metro Start menu. Leaked screenshots of Office 15 also appear to signal a shift inside the Office applications toward Metro. Microsoft has also been pushing developers very strongly to shift toward the Metro design language.

Microsoft has been positioning Windows with the Metro UI as a single operating system and UI across their devices. Using Windows Live ID, users have their desktop settings synced from desktop to desktop, Xbox to Xbox, tablet to tablet, smartphone to smartphone, and so on. Content can optionally be synced via Windows Live SkyDrive and is accessible through a web interface, while Office documents can be modified via Office Web Apps.

‘Windows everywhere’ means that a user will see the same UI across desktop and mobile, and have the same settings for their type of platform (gaming, mobile/desktop, smartphone), no matter where they sign in. This provides a consistent user interface across the Microsoft ecosystem. The issue with this approach, however, is that the Metro UI design is not really suited to keyboard/mouse, but to direct touch interaction. In addition, users accustomed to the interaction model from the last 10 years of Windows are now being forced to transition to a UI model that is not even tailored for their mode of interaction.

Summary

In summary, the two approaches are very similar, yet subtly and fundamentally different. Apple has opted for having the user’s content accessible to them everywhere, while interacting with that content through different, albeit similar, design interfaces on different device types. Microsoft, on the other hand, has opted for the route of universally consistent UI interaction, and consistent settings within the individual device types, while optionally making content accessible across devices.

Misconceptions Regarding Android’s ‘Open’ness

I’ve wanted to write a post for some time now regarding the ‘open’ness of Android. Every time an Android user tells me their device is better because it is open, I ask how that openness makes it a better OS than its competitors; no one has been able to show me.

The only people who tend to care about ‘open’ are the ones looking for a utilitarian benefit: the tinkerers/programmers who want to code functionality into something, and businesses looking to save money. Consequently, little care or thought is given to the user experience. Programmers by their nature generally have no interest in the user experience of their application. Fortunately, UI guidelines/requirements in a closed model force programmers to think about how their application is used, or how users want to use it. There is no such driving factor in an ‘open’ model, and consequently, programmers generally fall back to modelling their applications after UI/UX work done by others. There is also no real governance (by principle) of an ‘open’ model, and therefore little financial incentive to research and develop UI/UX. This is why ‘open’ will never lead in UI/UX development, and will always tend to copy the look and feel of other proprietary software on the market.

This is also why Open Source has done so well on the server side. There is almost no need for UI/UX, but the breadth of functionality available, and the ability to create new functionality, is very advantageous to businesses and users looking for low cost server functionality.

I’ve written three points regarding the openness of Android, along with supporting information.

‘Open’ does not mean what you think it means

  • Google gives early, priority access to select partners. 1 This is hardly in the spirit of ‘open’.
  • Google buys partners. Not only is this merely a way to get access to patents to use as a defense in litigation, it is also hardly fair to other device manufacturers.
  • Google takes an average of 100 days to open source Android code. 2 The point of the ‘open’ principle is to allow everyone to contribute to the same set of code.
  • Android is encumbered by patent lawsuits. More than 70% of Android OEMs have signed patent license agreements with Microsoft 3, and Samsung faces well-publicized patent lawsuits from Apple. Google steals hard work and ideas from other companies, makes them ‘open’ (not free), and considers itself justified. If you don’t like the patent rules, work to change the system, don’t abuse it. Play by the rules while working to change them.
  • Slavish copying of the iPhone by Android manufacturers. See here 4 and here. 5
  • Carriers can block versions of Android if they choose 6. This is one of the flaws (or features, depending on how you look at it) of the Android model. Every carrier can customize and distribute Android as they see fit. Unfortunately, this also means that they can choose not to distribute entire versions to their customers.
  • The idealisms of ‘open’ and ‘free’ are not enough to win. Linux zealots have been claiming for as long as I can remember that ‘this is the year of Linux’, that Open Source will triumph. Yet, the desktop market share of Linux has never gone much above 1% 7. Idealism is not enough. Just like communism, Open Source promises much in its ideology, but there are many practical matters in life that hinder reaching that ideal. Only the billions of dollars thrown at Android by Google have given it any headway whatsoever.
  • Developers live by the profit generated from their code. They will go where the money is. iOS generates 4 times as much return for developers as Android 8, so this leads to more investment in the platform, and better apps for the platform.

‘Open’ does not mean safer

  • Android has seen a rise of malware (37% increase last quarter, 1000 detected infections, doubled over the past year). 9 Almost all new mobile malware targets Android. Just because software might be ‘open’ does not mean that exploits are patched and gone.
  • CarrierIQ. Precisely because the Android distribution model allows carriers to install their own customizations/bloatware on devices before distributing, nefarious apps like CarrierIQ can be installed and customized to scrape all your data, including text messages and email. So the average customer gets a device that they believe is safer because it’s ‘open’, but the carrier may have already exploited that ‘open’ nature and implemented spyware.
  • Viruses are prevalent on Android. Because apps are not vetted, coders/hackers have free rein to distribute malicious apps. There was a 400% year-over-year increase in malware in May 2011, and another 472% increase in 2H 2011.10
  • I’ve heard arguments that Android has permissions that can be set on a per-app basis, and that this makes the device secure. This security model, however, has been broken using the very mechanism designed to protect it.11 It does not make your device secure.
  • Another excuse I hear frequently is that the user should make sure that they are installing legitimate apps. No, just no. Respecting a user means taking all that background gunk out of the picture and giving them peace of mind. They should not have to worry about whether the app is safe or not… that is up to the distributor. Users in general are not inclined toward technology, and just want something that works. You don’t ask to see your bus driver’s license every time you get on the bus because you trust the transit commission. Why should a user have to worry about whether the app they’re installing is safe if coming from a primary distributor?
  • I also hear the excuse that a user may need to sacrifice security for choice. Again, no. Microsoft and Apple have managed to bring the best of both worlds in a closed model, so this is merely an excuse for selling Android’s ‘open’ness with its security flaws.
  • I also hear that if users want security, they should only stick with ‘trustworthy’ sources. This violates the entire principle of ‘open’! A user should not have to go to ‘trustworthy’ sources at the expense of ‘open’, if you are selling to them on the principle of ‘open’!
  • A misconception I often hear is that viruses infect iOS and WP7, proven by the jailbreak toolkits. No. Exploits are not viruses, and viruses are not exploits. An exploit is a vulnerability, a virus is something malicious that takes advantage of the vulnerability. Android is the only major smartphone platform invaded by viruses, thanks to its ‘open’ model.
  • Carriers distribute updates infrequently. Typically, after 6 months, carriers/OEMs of Android phones no longer distribute updates.12 This means all those security vulnerabilities that have been discovered, are no longer patched. New security enhancements and features in new phones are not available on the old phones. This is because there is too much cost and no incentive to either the carrier or the OEM in the ‘open’ model to distribute updates to their users. Compare this to the iOS and WP7 platforms, where updates are mandatory on WP7, and updates are still being distributed for the latest OS to even 2.5 year old iPhone models.

‘Open’ does not mean better

  • As we saw above, ‘open’ systems will always lag behind ‘closed’ systems in areas of design and UI/UX, thanks to the very nature of those developing ‘open’ systems.
  • ‘Open’ systems will generally be significantly weaker in security, thanks to the principle of allowing anyone to distribute whatever they want. There is no real safeguard to prevent coders with malicious intent from distributing their wares to unsuspecting users.
  • As MG Siegler points out13, comparing an iOS device to an Android device is a bit like comparing a Mercedes to a Honda. Those who appreciate design and experience will get much more out of the Mercedes, but have difficulty telling someone who only appreciates functionality why.
  • Android has poor integration with enterprise services. No native IPsec VPN, and varying Exchange compatibility between OS versions. Thanks to carriers choosing not to deliver updates to their devices, the effort required to support Android in an enterprise deployment becomes astronomically larger in comparison to properly governed systems in a closed model.
  • There is no official support desk for Android. This is a huge barrier for many enterprises. Sure, there are many forums with coders and hackers to come up with fixes, but how many of them have experience in an enterprise setting, and would be able to resolve issues involving infrastructure beyond the device itself?
  • ‘First’ is irrelevant. Arguing that one OS or piece of UI was developed before a competitor is irrelevant when it comes to which is better. Stop sidetracking!
  • In general, Android apps are not as polished as iOS or WP7 apps, thanks to reasons I outlined previously. Low-quality apps from more sources is not ‘better choice’ than high-quality apps from a single source.
  • ‘More Choice’ does not necessarily attract a customer. Simple is often better, and when you look at the lineup of iOS phones (4 phones) vs the hundreds of phones from other vendors, a user will often pick from a simple, easy to understand lineup. A very interesting study on this here.14
  • Feature phones do not equal smartphones. Stripping down Android as a base OS for cheap/free phones that provide basic phone service with a few extra features increases market share. However, this increased market share does not make Android a better smartphone OS, as those devices are no longer smartphones. It merely speaks to Android’s flexibility.
  • Being able to install Flash because it’s ‘open’ does not make it better. Mobile Flash has proven to be a battery and performance killer on every platform. Installing a now-deprecated15 battery and performance killer does not make the platform better.
  • ‘Open’ software does not mean able to change your battery. This is something that is at the discretion of the manufacturer. Some will choose to make it user-serviceable, others will not. The only thing that really matters in this scenario is the cost and downtime to fix it.
  • ‘Open’ does not mean better quality of code. Firefox, for example, is incredibly bloated on the Mac OS and runs poorly. It has also hit the 32-bit limitation when compiling.16 Open does not mean better code or coding practices.

As we can see from the above points, the virtuous, ‘open’ nature of Android is really not so open or virtuous. Please don’t try to sell Android on the merits of being ‘open’.