
Tuesday 31 July 2012

Cracking MS-CHAPv2 with a 100% success rate

Why MS-CHAPv2?
The first obvious question is why we looked at MS-CHAPv2, given a lingering sense that the internet should already know better than to rely on it. Unfortunately, however, even as an aging protocol with some prevalent criticism, it's still used quite pervasively. It shows up most notably in PPTP VPNs, and is also used quite heavily in WPA2 Enterprise environments — often in cases where its mutual authentication properties are being relied upon. For the talk, we put together a list of the hundreds of VPN providers which depend on PPTP. This included some high profile examples such as iPredator, The Pirate Bay's VPN service, which is presumably designed to protect communication from state-level observation: 


We believe that MS-CHAPv2 remains so prevalent because previous examinations of the protocol's potential weaknesses have focused mostly on dictionary attacks. Combine this narrow focus with its extremely wide base of supported clients and default OS compatibility, and it's understandably very tempting to deploy as the user experience with the least amount of friction.

In their 1999 analysis of the protocol, for instance, Bruce Schneier and Mudge conclude "Microsoft has improved PPTP to correct the major security weaknesses described in [SM98]. However, the fundamental weakness of the authentication and encryption protocol is that it is only as secure as the password chosen by the user." [emphasis added] This, along with other writings, has led both service providers and users to conclude that they can use MS-CHAPv2 in the form of PPTP VPNs and mutually authenticating WPA2 Enterprise servers safely, if they choose good passphrases.
As an example, based on the analysis of the Schneier paper, Riseup.net, a security-focused VPN provider, went so far as to generate uniformly random 21-character passphrases for their users, without ever allowing the user the opportunity to choose their own, in order to ensure that they could deploy their PPTP VPN service safely. 



The Protocol 

Let's take a look at the protocol itself, in order to see what we're dealing with:
At first glance, one is struck by the unnecessary complexity of the protocol. It almost feels like the digital equivalent of hand-waving, as if throwing in one more hash, random nonce, or unusual digest construction will somehow dazzle any would-be adversaries into submission. The literal strings "Pad to make it do more than one iteration" and "Magic server to client signing constant" are particularly amusing.

If we look carefully, however, there is really only one unknown in the entire protocol — the MD4 hash of the user's passphrase, which is used to construct three separate DES keys. Every other element of the protocol is either sent in the clear, or can be easily derived from something sent in the clear: 

Given that everything else is known, we can try ignoring everything but the core unknown, and seeing if there are any possibilities available to us: 

We have an unknown password, an unknown MD4 hash of that password, a known plaintext, and a known ciphertext. Looking back at the larger scope, we can see that the MD4 hash of the user's password serves as a password-equivalent — meaning that the MD4 hash of the user's password is enough to authenticate as them, as well as to decrypt any of their traffic. So our objective is to recover the MD4 hash of the user's password.
Typically, given a packet capture, this is where a network adversary would attempt to employ a dictionary attack. Using a tool such as asleap, it's possible to rapidly attempt a series of password guesses offline. The attacker can simply calculate MD4(password_guess), split that hash up into three DES keys, encrypt the known plaintext three times, and see if the concatenated output from those DES operations matches the known ciphertext.
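As a rough sketch of that check (this is not asleap's code; it assumes the pycryptodome library, and the helper names, the captured 8-byte challenge, and the 24-byte response are illustrative):

from Crypto.Hash import MD4
from Crypto.Cipher import DES

def expand_des_key(key7):
    # Expand 7 bytes of key material into the 8-byte form DES expects
    # (the low bit of each byte is the parity bit, left as zero here).
    bits = int.from_bytes(key7, 'big')
    return bytes((((bits >> (49 - 7 * i)) & 0x7f) << 1) for i in range(8))

def check_guess(password_guess, challenge, response):
    # challenge: the 8-byte known plaintext from the handshake
    # response: the 24-byte MS-CHAPv2 response (three DES outputs)
    nt_hash = MD4.new(password_guess.encode('utf-16-le')).digest()   # 16 bytes
    padded = nt_hash + b'\x00' * 5                                   # 21 bytes -> three 7-byte keys
    out = b''
    for i in range(3):
        key = expand_des_key(padded[7 * i:7 * i + 7])
        out += DES.new(key, DES.MODE_ECB).encrypt(challenge)
    return out == response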

The problem with this approach is that it won't give the attacker a 100% success rate, and relies on the user's propensity for selecting a predictable password. In the case of the riseup.net PPTP VPN service, for instance, the attacker would need to attempt guesses across the full 96-character set for all 21 characters of the generated password. That's a total complexity of 96^21, slightly larger than 2^138, or what you could think of as a 138-bit key.
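As a quick arithmetic check of that figure (plain Python, nothing assumed beyond the numbers in the text):

import math
print(math.log2(96 ** 21))   # ~138.3, so a random 21-character password over 96 symbols is a ~138-bit key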

In a situation with an unbounded password length across a large character set, it would make more sense to brute force the output of the MD4 hash directly. But that's still 128 bits, making the total keyspace for a brute force approach on that value 2^128, which will likely be forever computationally infeasible.
Divide And Conquer
The hash we're after, however, is used as the key material for three DES operations. DES keys are 7 bytes long, so each DES operation uses a 7 byte chunk of the MD4 hash output. This gives us an opportunity for a classic divide and conquer attack. Instead of brute forcing the MD4 hash output directly (a complexity of 2^128), we can incrementally brute force 7 bytes of it at a time.

Since there are three DES operations, and each DES operation is completely independent of the others, that gives us an additive complexity of 2^56 + 2^56 + 2^56, a total keyspace of 2^57.59.
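The same kind of arithmetic check for the additive cost of three independent 2^56 searches:

import math
print(math.log2(3 * 2 ** 56))   # ~57.58: the three searches add, they don't multiply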

This is certainly better than 2^138 or 2^128, but still quite a large number. There's something wrong with our calculations though. We need three DES keys, each 7 bytes long, for a total of 21 bytes: 


Those keys are drawn from the output of MD4(password), though, which is only 16 bytes: 

We're missing five bytes of key material for the third DES key. Microsoft's solution was to simply pad those last five bytes out as zero, effectively making the third DES key two bytes long: 

Since the third DES key is only two bytes long, a keyspace of 2^16, we can immediately see the effectiveness of the divide-and-conquer approach by brute forcing the third key in a matter of seconds, giving us the last two bytes of the MD4 hash. We're left trying to find the remaining 14 bytes of the MD4 hash, but can divide-and-conquer those in two 7 byte chunks, for a total complexity of 2^57.
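A sketch of that first, cheap step, reusing the expand_des_key helper from the earlier sketch; ct3 is assumed to be the third 8-byte block of the captured 24-byte response:

from Crypto.Cipher import DES

def crack_third_key(challenge, ct3):
    # Brute force the effectively 2-byte third key; returns the last
    # two bytes of MD4(password), or None if no key matches.
    for k in range(2 ** 16):
        key7 = k.to_bytes(2, 'big') + b'\x00' * 5     # 2 real bytes + 5 zero padding bytes
        cipher = DES.new(expand_des_key(key7), DES.MODE_ECB)
        if cipher.encrypt(challenge) == ct3:
            return key7[:2]
    return None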

Again, still a big number, but considerably better. We're left with, essentially, this core problem: 

The next interesting thing about the remaining unknowns is that both of the remaining DES operations are over the same plaintext, only with different keys. The naive approach to cracking these DES operations would look like: 

...iterate over every key in the keyspace, and use each key to encrypt our known plaintext and compare it to our first known ciphertext. When we find a match, we start over and iterate through every key in the keyspace, encrypt our known plaintext, and compare it to our second known ciphertext.

The expensive part of these loops are the DES operations. But since it's the same plaintext for both loops, we can consolidate them into a single iteration through the keyspace, with one encrypt for each key, and two compares: 
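A sketch of that consolidated loop (illustrative only; a pure-Python walk of 2^56 keys is hopeless in practice, which is exactly why the FPGA hardware below matters). ct1 and ct2 are assumed to be the first two 8-byte blocks of the captured response, and expand_des_key is the helper from the earlier sketch:

from Crypto.Cipher import DES

def crack_both_keys(challenge, ct1, ct2):
    # One walk of the keyspace: one DES encryption per candidate, two compares.
    key1 = key2 = None
    for k in range(2 ** 56):
        key7 = k.to_bytes(7, 'big')
        enc = DES.new(expand_des_key(key7), DES.MODE_ECB).encrypt(challenge)
        if enc == ct1:
            key1 = key7               # bytes 0-6 of MD4(password)
        if enc == ct2:
            key2 = key7               # bytes 7-13 of MD4(password)
        if key1 is not None and key2 is not None:
            return key1, key2
    return key1, key2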

This brings us down to a total complexity of 2^56!

This means that, effectively, the security of MS-CHAPv2 can be reduced to the strength of a single DES encryption.
Cracking DES
At this point, a question of feasibility remains. In 1998, the EFF used ASICs to build Deep Crack, which cost $250,000 and took an average of 4.5 days to crack a key.
David Hulton's company, Pico Computing, specializes in building FPGA hardware for cryptography applications. They were able to build an FPGA box that implemented DES as a real pipeline, with one DES operation for each clock cycle. With 40 cores at 450 MHz, that's 18 billion keys/second per FPGA. With 48 FPGAs, the Pico Computing DES cracking box searches roughly 864 billion keys/second, giving us a worst case of ~23 hours for cracking a DES key, and an average case of about half a day. 
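Those figures line up; as a quick check of the arithmetic:

keys_per_second = 40 * 450e6 * 48              # 40 pipelined cores per FPGA, 450 MHz, 48 FPGAs
print(2 ** 56 / keys_per_second / 3600)        # ~23.2 hours worst case for a full 2^56 sweep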

With Pico Computing's DES cracking machine in hand, we can now crack any MS-CHAPv2 handshake in less than a day.

It wouldn't be a ton of fun if only David or I could crack MS-CHAPv2 handshakes, however. So we've integrated the DES cracking box with CloudCracker, in order to make David and his team's genius/skills/resources available to everyone.

We've published a tool called chapcrack, which will parse a network capture for any MS-CHAPv2 handshakes. For each handshake, it outputs the username, the known plaintext, and the two known ciphertexts, and it cracks the third DES key. It will also output a CloudCracker "token," which is an encoded format of the three parameters we need for our divide and conquer attack.

When this token is submitted to CloudCracker, the job is transmitted to Pico Computing's DES cracking box, and you receive your results in under a day.
What do you win?
At this point, you can plug the cracked MD4 hash CloudCracker gives you back into chapcrack, and it will decrypt the entire network capture (and all future captures for that user). Alternatively, you can use it to log in to the user's VPN service or WPA2 Enterprise RADIUS server.

We hope that by making this service available, we can effectively end the use of MS-CHAPv2 on the internet once and for all. And as always, submitting MS-CHAPv2 jobs to CloudCracker is available through the standard web interface as well as the API.
What Now?
1) All users and providers of PPTP VPN solutions should immediately start migrating to a different VPN protocol. PPTP traffic should be considered unencrypted.

2) Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 RADIUS servers should immediately start migrating to something else.

In many cases, larger enterprises have opted to use IPSEC-PSK over PPTP. While PPTP is now clearly broken, IPSEC-PSK is arguably worse than PPTP ever was for a dictionary-based attack vector. PPTP at least requires an attacker to obtain an active network capture in order to employ an offline dictionary attack, while IPSEC-PSK VPNs in aggressive mode will actually hand out hashes to any connecting attacker.

In terms of currently available solutions, deploying something securely requires some type of certificate validation. This leaves either an OpenVPN configuration, or IPSEC in certificate rather than PSK mode.

Monday 30 July 2012

Apache Web Server 2.4: Security Tips - Part 2


Other sources of dynamic content

Embedded scripting options which run as part of the server itself, such as mod_php, mod_perl, mod_tcl, and mod_python, run under the identity of the server itself (see the User directive), and therefore scripts executed by these engines potentially can access anything the server user can. Some scripting engines may provide restrictions, but it is better to be safe and assume not.

Dynamic content security

When setting up dynamic content, such as mod_php, mod_perl or mod_python, many security considerations fall outside the scope of httpd itself, and you need to consult the documentation for those modules. For example, PHP lets you set up Safe Mode, which is usually disabled by default. Another example is Suhosin, a PHP addon for more security. For more information about those, consult each project's documentation.
At the Apache level, a module named mod_security can be seen as an HTTP firewall and, provided you configure it finely enough, can help you enhance your dynamic content security.

Protecting System Settings

To run a really tight ship, you'll want to stop users from setting up .htaccess files which can override security features you've configured. Here's one way to do it.
In the server configuration file, put
<Directory />
    AllowOverride None
</Directory>
    
This prevents the use of .htaccess files in all directories apart from those specifically enabled.
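If you do need to let a particular tree use .htaccess files, you can re-enable a limited set of overrides just there. A minimal sketch (the path and the override types chosen are illustrative):
<Directory /usr/users/*/public_html>
    AllowOverride AuthConfig Indexes
</Directory>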

Protect Server Files by Default

One aspect of Apache which is occasionally misunderstood is the feature of default access. That is, unless you take steps to change it, if the server can find its way to a file through normal URL mapping rules, it can serve it to clients.
For instance, consider the following example:
# cd /; ln -s / public_html
Accessing http://localhost/~root/

This would allow clients to walk through the entire filesystem. To work around this, add the following block to your server's configuration:
<Directory />
    Order Deny,Allow
    Deny from all
</Directory>
    
This will forbid default access to filesystem locations. Add appropriate Directory blocks to allow access only in those areas you wish. For example,
<Directory /usr/users/*/public_html>
    Order Deny,Allow
    Allow from all
</Directory>
<Directory /usr/local/httpd>
    Order Deny,Allow
    Allow from all
</Directory>
    
Pay particular attention to the interactions of Location and Directory directives; for instance, even if <Directory /> denies access, a <Location /> directive might overturn it.
Also be wary of playing games with the UserDir directive; setting it to something like ./ would have the same effect, for root, as the first example above. We strongly recommend that you include the following line in your server configuration files:
UserDir disabled root

Watching Your Logs

To keep up to date with what is actually going on against your server you have to check the log files. Even though the log files only report what has already happened, they will give you some understanding of what attacks are being thrown against the server and allow you to check whether the necessary level of security is present.
A couple of examples:
grep -c "/jsp/source.jsp?/jsp/ /jsp/source.jsp??" access_log
grep "client denied" error_log | tail -n 10

The first example will list the number of attacks trying to exploit the Apache Tomcat Source.JSP Malformed Request Information Disclosure Vulnerability; the second example will list the last ten denied clients, for example:
[Thu Jul 11 17:18:39 2002] [error] [client foo.example.com] client denied by server configuration: /usr/local/apache/htdocs/.htpasswd
As you can see, the log files only report what already has happened, so if the client had been able to access the .htpasswd file you would have seen something similar to:
foo.example.com - - [12/Jul/2002:01:59:13 +0200] "GET /.htpasswd HTTP/1.1"
in your Access Log. This means you probably commented out the following in your server configuration file:
<Files ".ht*">
    Order allow,deny
    Deny from all
</Files>
    

Merging of configuration sections

The merging of configuration sections is complicated and sometimes directive specific. Always test your changes when creating dependencies on how directives are merged.
For modules that don't implement any merging logic, such as mod_access_compat, the behavior in later sections depends on whether the later section has any directives from the module. The configuration is inherited until a change is made, at which point the configuration is replaced and not merged.
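As a sketch of that replace-not-merge behavior with mod_access_compat (paths and addresses are illustrative):
<Directory /var/www>
    Order Allow,Deny
    Allow from all
    Deny from 203.0.113.0/24
</Directory>
<Directory /var/www/private>
    # Any mod_access_compat directive here replaces, rather than adds to,
    # the set inherited from /var/www: the Deny above no longer applies.
    Order Deny,Allow
    Deny from all
    Allow from 10.0.0.0/8
</Directory>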

Thursday 26 July 2012

Linux 3.5 Kernel: 5 Best New Features

Here are some of the best new features you'll find in Linux 3.5.

1. Metadata Checksums in Ext4
Playing a little bit of catch-up with filesystems such as ZFS and Btrfs, the Ext4 filesystem in Linux 3.5 has now gained the ability to store checksums for various metadata fields. So, “every time a metadata field is read, the checksum of the read data is compared with the stored checksums,” the Linux 3.5 changelog explains. “If they are different it means that the metadata is corrupted.” Because it's focused on internal metadata structures and not data, no significant performance cost is expected to be associated with this new feature under typical desktop and server workloads.

2. A User-Space Monitor
Uprobes, meanwhile, is a new performance monitor that's essentially equivalent to Kernel Dynamic Probes (Kprobes) but for the user-space side. Using it, performance probes can be placed in any user application memory address, where they will collect debugging and performance information nondisruptively and help identify any performance problems.

3. Better Android Compatibility
When Android code was merged into Linux earlier this year, there was some controversy over Android's "suspend blocker" functionality used for power management. The technology has been especially problematic because drivers in Android devices use the suspend blocker API, but the lack of such an API in Linux has made it impossible to merge them. Now, with Linux 3.5, similar functionality in the kernel called "autosleep and wake locks" should make it easier to merge drivers from Android devices.

4. A Weapon Against 'Bufferbloat'
Then there's the new queue management algorithm in Linux 3.5 called Codel (short for "controlled delay") that aims to battle “bufferbloat,” or the problem that arises when there's excessive buffering across an entire network path. With this new technology, in fact, bottleneck delays can be reduced “by several orders of magnitude,” according to the Codel project page.

5. Extended Seccomp Sandboxing
Back in 2005, Linux 2.6.12 gained support for seccomp, or “secure computing,” which is a sandboxing mechanism that enables a state in which only a very restricted set of system calls can be made. Now, with Linux 3.5, seccomp has been extended into “a filtering mechanism that allows processes to specify an arbitrary filter of system calls (expressed as a Berkeley Packet Filter program) that should be forbidden,” the changelog explains. “This can be used to implement different types of security mechanisms.” The Linux port of the Chromium Web browser, for example, supports this feature to run plugins in a sandbox.

Tuesday 24 July 2012

Apache Web Server 2.4: Security Tips - Part 1


Some hints and tips on security issues in setting up a web server. Some of the suggestions will be general, others specific to Apache.

Keep up to Date

The Apache HTTP Server has a good record for security and a developer community highly concerned about security issues. But it is inevitable that some problems -- small or large -- will be discovered in software after it is released. For this reason, it is crucial to keep aware of updates to the software. If you have obtained your version of the HTTP Server directly from Apache, we highly recommend you subscribe to the Apache HTTP Server Announcements List where you can keep informed of new releases and security updates. Similar services are available from most third-party distributors of Apache software.
Of course, most times that a web server is compromised, it is not because of problems in the HTTP Server code. Rather, it comes from problems in add-on code, CGI scripts, or the underlying Operating System. You must therefore stay aware of problems and updates with all the software on your system.

Denial of Service (DoS) attacks

All network servers can be subject to denial of service attacks that attempt to prevent responses to clients by tying up the resources of the server. It is not possible to prevent such attacks entirely, but you can do certain things to mitigate the problems that they create.
Often the most effective anti-DoS tool will be a firewall or other operating-system configurations. For example, most firewalls can be configured to restrict the number of simultaneous connections from any individual IP address or network, thus preventing a range of simple attacks. Of course this is no help against Distributed Denial of Service attacks (DDoS).
There are also certain Apache HTTP Server configuration settings that can help mitigate problems (a combined example follows the list):
  • The RequestReadTimeout directive allows you to limit the time a client may take to send the request.
  • The TimeOut directive should be lowered on sites that are subject to DoS attacks. Setting this to as low as a few seconds may be appropriate. However, as TimeOut is currently used for several different operations, setting it to a low value introduces problems with long-running CGI scripts.
  • The KeepAliveTimeout directive may also be lowered on sites that are subject to DoS attacks. Some sites even turn off keepalives completely via KeepAlive, which of course has other performance drawbacks.
  • The values of various timeout-related directives provided by other modules should be checked.
  • The directives LimitRequestBody, LimitRequestFields, LimitRequestFieldSize, LimitRequestLine, and LimitXMLRequestBody should be carefully configured to limit resource consumption triggered by client input.
  • On operating systems that support it, make sure that you use the AcceptFilter directive to offload part of the request processing to the operating system. This is active by default in Apache httpd, but may require reconfiguration of your kernel.
  • Tune the MaxRequestWorkers directive to allow the server to handle the maximum number of simultaneous connections without running out of resources. See also the performance tuning documentation.
  • The use of a threaded mpm may allow you to handle more simultaneous connections, thereby mitigating DoS attacks. Further, the event mpm uses asynchronous processing to avoid devoting a thread to each connection. Due to the nature of the OpenSSL library the event mpm is currently incompatible with mod_ssl and other input filters. In these cases it falls back to the behaviour of the worker mpm.
  • There are a number of third-party modules available through http://modules.apache.org/ that can restrict certain client behaviors and thereby mitigate DoS problems.
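As a minimal sketch of how a few of the directives above might be combined in httpd.conf (the values are illustrative, not recommendations, and mod_reqtimeout must be loaded for RequestReadTimeout):
# Limit how long a client may take to send the request (mod_reqtimeout)
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

# Lower the general and keep-alive timeouts
TimeOut 60
KeepAlive On
KeepAliveTimeout 5

# Bound the size of client input
LimitRequestBody 1048576
LimitRequestLine 8190
LimitRequestFields 100
LimitRequestFieldSize 8190

# Cap the number of simultaneous connections the server will handle
MaxRequestWorkers 250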

Permissions on ServerRoot Directories

In typical operation, Apache is started by the root user, and it switches to the user defined by the User directive to serve hits. As is the case with any command that root executes, you must take care that it is protected from modification by non-root users. Not only must the files themselves be writeable only by root, but so must the directories, and parents of all directories. For example, if you choose to place ServerRoot in /usr/local/apache then it is suggested that you create that directory as root, with commands like these:
mkdir /usr/local/apache
cd /usr/local/apache
mkdir bin conf logs
chown 0 . bin conf logs
chgrp 0 . bin conf logs
chmod 755 . bin conf logs

It is assumed that /, /usr, and /usr/local are only modifiable by root. When you install the httpd executable, you should ensure that it is similarly protected:
cp httpd /usr/local/apache/bin
chown 0 /usr/local/apache/bin/httpd
chgrp 0 /usr/local/apache/bin/httpd
chmod 511 /usr/local/apache/bin/httpd

You can create an htdocs subdirectory which is modifiable by other users -- since root never executes any files out of there, and shouldn't be creating files in there.
If you allow non-root users to modify any files that root either executes or writes on then you open your system to root compromises. For example, someone could replace the httpd binary so that the next time you start it, it will execute some arbitrary code. If the logs directory is writeable (by a non-root user), someone could replace a log file with a symlink to some other system file, and then root might overwrite that file with arbitrary data. If the log files themselves are writeable (by a non-root user), then someone may be able to overwrite the log itself with bogus data.

Server Side Includes

Server Side Includes (SSI) present a server administrator with several potential security risks.
The first risk is the increased load on the server. All SSI-enabled files have to be parsed by Apache, whether or not there are any SSI directives included within the files. While this load increase is minor, in a shared server environment it can become significant.
SSI files also pose the same risks that are associated with CGI scripts in general. Using the exec cmd element, SSI-enabled files can execute any CGI script or program under the permissions of the user and group Apache runs as, as configured in httpd.conf.
There are ways to enhance the security of SSI files while still taking advantage of the benefits they provide.
To isolate the damage a wayward SSI file can cause, a server administrator can enable suexec as described in the CGI in General section.
Enabling SSI for files with .html or .htm extensions can be dangerous. This is especially true in a shared, or high traffic, server environment. SSI-enabled files should have a separate extension, such as the conventional .shtml. This helps keep server load at a minimum and allows for easier management of risk.
Another solution is to disable the ability to run scripts and programs from SSI pages. To do this replace Includes with IncludesNOEXEC in the Options directive. Note that users may still use <!--#include virtual="..." --> to execute CGI scripts if these scripts are in directories designated by a ScriptAlias directive.
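A short sketch putting these suggestions together (the directory path is illustrative): SSI is enabled without exec, and only for files with the .shtml extension.
<Directory /usr/users/*/public_html>
    Options IncludesNOEXEC
</Directory>
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml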

CGI in General

First of all, you always have to remember that you must trust the writers of the CGI scripts/programs or your ability to spot potential security holes in CGI, whether they were deliberate or accidental. CGI scripts can run essentially arbitrary commands on your system with the permissions of the web server user and can therefore be extremely dangerous if they are not carefully checked.
All the CGI scripts will run as the same user, so they have potential to conflict (accidentally or deliberately) with other scripts e.g. User A hates User B, so he writes a script to trash User B's CGI database. One program which can be used to allow scripts to run as different users is suEXEC which is included with Apache as of 1.2 and is called from special hooks in the Apache server code. Another popular way of doing this is with CGIWrap.

Non Script Aliased CGI

Allowing users to execute CGI scripts in any directory should only be considered if:
  • You trust your users not to write scripts which will deliberately or accidentally expose your system to an attack.
  • You consider security at your site to be so feeble in other areas, as to make one more potential hole irrelevant.
  • You have no users, and nobody ever visits your server.

Script Aliased CGI

Limiting CGI to special directories gives the admin control over what goes into those directories. This is inevitably more secure than non script aliased CGI, but only if users with write access to the directories are trusted or the admin is willing to test each new CGI script/program for potential security holes.
Most sites choose this option over the non script aliased CGI approach.

Tuesday 17 July 2012

Microsoft IIS 7.5 .NET source code disclosure and authentication bypass

Affected Software:
Microsoft IIS/7.5 with PHP installed in a special configuration
(Tested with .NET 2.0 and .NET 4.0)
(tested on Windows 7)
The special configuration requires the "Path Type" of PHP to be set to
"Unspecified" in the Handler Mappings of IIS/7.5

Details:
The authentication bypass is the same as the previous vulnerabilities:
Requesting for example
http://<victimIIS75>/admin:$i30:$INDEX_ALLOCATION/admin.php will run
the PHP script without asking for proper credentials.

By appending /.php to an ASPX file (or any other file using the .NET
framework that is not blocked through the request filtering rules,
like misconfigured: .CS,.VB files)
IIS/7.5 responds with the full source code of the file and executes it
as PHP code. This means that by using an upload feature it might be
possible (under special circumstances) to execute arbitrary PHP code.
Example: Default.aspx/.php

Friday 13 July 2012

Application Support in Virtual Environments Policy

All the below listed Microsoft Server-focused application versions, as well as all later versions of those applications, are supported on Hyper-V and Server Virtualization Validation Program (SVVP) validated products, so long as the virtualization product supports the correct operating system version and platform architecture(s) required by a specific application.
This support is subject to the Product Life-cycle Policy for any specific application, as detailed at the included links for each application. For more information, visit the Microsoft Support Lifecycle page. In some cases, specific versions of Microsoft server software are required for support. These versions are noted in this article, and the supported versions may be updated as needed.
Microsoft Server-focused applications not listed below are not supported for operation in a virtual environment. If an application or version is not visible below, queries can be made at this alias svvpfb@microsoft.com. Please provide the specific and exact application product name, version, service pack or feature pack, and platform architecture [x86 or x64].
There may be application-specific links for special guidance on what virtualization limitations might exist. See the "Product-specific virtualization information" links for some of the Windows Server-focused applications below. Additionally, unless otherwise stated, the minimum processor, memory, disk space and other requirements are not modified by operation in a virtual environment. Refer to the respective Product Requirements page for each application for that information.
With regards to what Windows Server operating system versions and platform architectures are supported by a vendor's particular version of their virtualization product, see the vendor's web site for further information.
Any virtualization product listed in the catalog that supports the correct Windows Server operating system Version and Platform Architecture is supported for the applications listed below.
Note that not all software applications are good candidates for running in a virtualized environment. For example, if an application has specific hardware requirements, such as access to a physical PCI card, the applications cannot be supported in a virtual machine. This is true because virtual machines generally do not have access to underlying physical hardware.
The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products. Third parties are responsible for testing their software together with Microsoft software. Microsoft software may not work as intended in third-party virtualized environments or with any features that are not directly related to emulation of a physical system as a virtual machine (VM).
  • Application Virtualization (App-V) 4.5 and later (Product Requirements)
    • Note: support also includes Management Server, Publishing Server, Sequencer, Terminal Services Client, and Desktop Client
  • Azure (Product-specific virtualization information)
  • BizTalk Server 2009
    • Product-specific virtualization information, http://support.microsoft.com/kb/842301
  • BizTalk Server 2006 R2
    • Product-specific virtualization information, http://support.microsoft.com/kb/842301
  • BizTalk Server 2006
    • Product-specific virtualization information, http://support.microsoft.com/kb/842301
  • BizTalk Server 2004
    • Product-specific virtualization information, http://support.microsoft.com/kb/842301
  • Certificate Server
  • Commerce Server 2007 Service Pack 2 and later
  • Dynamics AX 2009 (both server and client) and later
  • Dynamics CRM 2011
  • Dynamics CRM 4.0 and later
  • Dynamics GP 10.0 and later
  • Dynamics NAV 2009 and later
  • Exchange Server 2007 Service Pack 1 and later
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/cc794548(EXCHG.80).aspx
  • Exchange Server 2010
  • Fast Search Server 2010 for Sharepoint
  • Forefront Client Security
  • Forefront Endpoint Protection 2010
  • Forefront Identity Manager 2010
  • Forefront Intelligent Application Gateway (IAG)
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/cc891502.aspx
  • Forefront Security for Exchange (FSE) Service Pack 1 (SP1) or higher
  • Forefront Security for SharePoint (FSP)Service Pack 2 (SP2) or higher
  • Forefront Threat Management Gateway 2010
  • Forefront Unified Access Gateway
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/cc891502.aspx
  • Host Integration Server 2006 and later
  • Host Integration Server 2009
  • Identity Integration Server
  • Identity Lifecycle Manager 2007 Feature Pack 1 (FP1) (w/ latest updates) and later
    • Product Requirements, http://technet.microsoft.com/en-us/library/cc720598(v=WS.10).aspx
  • Identity Lifecycle Manager
  • Internet Security and Acceleration (ISA) Server 2000
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/cc891502.aspx
  • Internet Security and Acceleration (ISA) Server 2004
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/cc891502.aspx
  • Internet Security and Acceleration (ISA) Server 2006
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/cc891502.aspx
  • Lync Server 2010 and later
    • Product-specific virtualization information, http://www.microsoft.com/en-us/download/details.aspx?id=22746
  • Office Communications Server 2007
  • Office Groove Server 2007 Service Pack 1 and later
    • Note: The virtual machine for Microsoft Office Groove Server Relay should be configured to use a physical hard disk, not a virtual hard disk.
  • Office PerformancePoint Server 2007 Service Pack 2 and later
  • Office Project Server 2007 Service Pack 1 and later
  • Office SharePoint Server 2007 Service Pack 1 and later
  • Office Web Apps
    • Product Requirements, http://office.microsoft.com/en-us/web-apps/
  • Opalis Integration Server 6.2.2 and later
    • Product-specific virtualization information, http://support.microsoft.com/default.aspx?scid=kb;en-US;2023123
  • Operations Manager 2005 Service Pack 1 (agents only)
    • Product Requirements, http://technet.microsoft.com/en-us/library/cc180251.aspx
    • Note that System Center Operations Manager 2007 or a later version is required to manage Windows Server 2008 and later versions.
  • Project Server 2010
  • Search Server 2008 and later
  • Search Server 2010
  • Sharepoint Foundation 2010
  • Sharepoint Server 2010
  • SQL Server 2000
    • Product-specific virtualization information, http://support.microsoft.com/?id=956893
  • SQL Server 2005
    • Product-specific virtualization information, http://support.microsoft.com/?id=956893
  • SQL Server 2008
    • Product-specific virtualization information, http://support.microsoft.com/?id=956893
  • SQL Server 2012
    • Product-specific virtualization information, http://support.microsoft.com/?id=956893
  • System Center Configuration Manager 2007 SP1 (both server and agents) and later
    • Product Requirements, http://technet.microsoft.com/en-us/library/bb680717.aspx
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/bb680717.aspx
  • System Center Data Protection Manager 2007
    • Note: Microsoft System Center Data Protection Manager 2007 is supported when it runs inside a virtual machine, but only if the DPM storage pool disks are made available directly to the DPM virtual machine as one of the following:
      • Pass-through disks
      • SCSI target disks
      • FC SAN target disks
    • You can also perform backups for the virtual machines either by using a DPM agent that is installed on the host computer or by installing the DPM agent into the virtual machine directly.
  • System Center Essentials 2007 Service Pack 1 and later
  • System Center Operations Manager 2007 (both server and agents) and later
    • Product-specific virtualization information, http://technet.microsoft.com/en-us/library/bb309428.aspx
  • System Center Service Manager 2010 and later
    • Product-specific virtualization information, http://technet.microsoft.com/library/ff460890.aspx
  • System Center Virtual Machine Manager 2008 (both server and agents) and later
  • Systems Management Server 2003 Service Pack 3 (agents only) and later
    • Product Requirements, http://technet.microsoft.com/en-us/library/cc179620.aspx
  • Visual Studio Team System 2008 and later
  • Visual Studio Team Foundation Server 2008
  • Windows Essential Business Server 2008 and later
  • Windows HPC Server 2008 and later
  • Windows 2000 Server
  • Windows Server 2003 Web Edition with Service Pack 2 and later
  • Windows Server 2003
  • Windows Server 2003 R2
  • Windows Server 2008
  • Windows Server 2008 R2
  • Windows Server Update Services 3.0 Service Pack 1 and later
  • Windows SharePoint Services 3.0 Service Pack 1 and later
  • Windows Small Business Server 2008 and later
  • Windows Web Server 2008
With regards to virtualization product functionality that operates without the knowledge or cooperation of the operating system or applications executing within the virtual machine such as, but not limited to; live migration of virtual machines, virtual machine clustering, memory ballooning, virtual machine fault tolerance, etc., these are outside the scope of the Server Virtualization Validation Program.
There are no industry standards to follow in implementing such features, and since 'by design' the operating system [whether that be Windows Server or some other OS] or application are not cognizant of such virtualization product functionality, there is no practical method of testing these virtualization product features using Microsoft-developed tests and tools. The SVVP program does not test these features or functions, and the virtualization product vendor is solely responsible for testing and supporting such features. However, unless otherwise stated in articles or documents for Windows Server operating systems or for Microsoft Server-focused applications, Microsoft does not preclude their use by customers. If a customer running a supported version of Windows Server on a validated virtualization solution experiences issues after using such third party virtualization features (such as live migration) that operate independently of Microsoft products, then the customer should contact the virtualization vendor for assistance in resolving the issue.
This policy and listing will be updated as new Microsoft Windows Server-focused applications are released. If an application or version is not visible above, queries can be made at this alias: svvpfb@microsoft.com. Please provide the specific and exact application product name, version, service pack or feature pack, and platform architecture [x86 or x64].