This article describes how to configure and create a Web farm environment for the Telligent Evolution platform.

Web farm environments

A Web farm, or Web server cluster, consists of two or more physical servers acting together as if they were one server to host one or more Web sites. Recent technologies such as Distributed File System (DFS) have gone a long way toward making Windows-based Web farms easier to deploy, configure, and manage.

There are many physical and logical configurations that can be used to create a load-balanced Web farm environment. They can use everything from load-balancing hardware such as F5 to file replication software such as Robocopy. In this section we will take a quick look at the technologies providing the quickest and easiest route to Windows-based load balancing in a Web farm environment.

If you are already familiar with Web farm environments and the necessary technologies then you can skip to Synchronize files across nodes to read the Telligent Evolution platform-specific portions.

Server operating system and Web server

We recommend that customers who want to create a Web farm environment for hosting the Telligent Evolution platform use Internet Information Services (IIS) 7.0 on Windows Server 2008.

    Either server operating system (Windows Server 2003 or Windows Server 2008) will run the Telligent Evolution platform well, but we recommend Windows Server 2008 with IIS 7.0.

    Load-balancing in a Windows-based Web farm

    Load-balancing is the process of taking many requests and distributing them evenly across any number of Web servers that are acting as one. While network load-balancing (NLB) is a built-in capability of Windows, we recommend hardware-based load-balancing, which uses dedicated network hardware to manage traffic distribution.

    An NLB solution should support Direct Server Return (DSR) to ensure maximum scalability. With DSR, the Web nodes return responses directly to the clients rather than routing them back through the load balancer. The NLB settings for DSR vary by product, so consult your solution's documentation.

    Shared configuration: Synchronize or duplicate configuration settings across nodes

    Managing a Web farm can be difficult when it comes to ensuring that all servers stay synchronized. The integrity of the Web farm depends on the concept that all of the servers are virtually identical in their configuration. This unity is required because it only takes one Web server misstep to cause the entire Web farm to malfunction. It can be very difficult to troubleshoot Web farm issues when the machines vary in configuration.

    If you intend to use IIS 6 you will need to use the Microsoft Web Deployment Tool (MSDeploy) and a variety of other techniques to synchronize settings among the Web servers.

    IIS 7 has shared configuration functionality built in that you can use to configure your Web farm.
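    Shared configuration is normally enabled through the Shared Configuration feature in IIS Manager, which writes a redirection.config file on each node. As a rough sketch, the resulting redirection looks like the following (the share path and account are placeholders; IIS encrypts the password attribute when the file is written through the UI):

    <configuration>
        <configurationRedirection enabled="true" path="\\[yourFileServer]\IISSharedConfig"
            userName="[yourDomain]\[yourServiceAccount]" password="[encryptedPassword]" />
    </configuration>

    With this in place, all nodes read applicationHost.config and administration.config from the shared location instead of their local inetsrv\config folders.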

    Synchronize files across nodes

    There are a variety of Telligent platform configurations that require files created by the Web server to be copied or synchronized across all of the Web server nodes in the Web farm. Here are a few example configurations or services that require some form of synchronization across the servers:

    • Search
    • Blog directories: When new blogs are created the directories need to be created on each node

    Ideally, all nodes should contain identical files, with the exception of configuration files. Windows Server provides a feature called DFS in both the 2003 and 2008 versions. We highly encourage those using Telligent Evolution in a Web farm environment to use DFS to keep file systems synchronized.

    DFS requires Active Directory to function, and there should be at least two domain controllers to ensure DFS functions reliably. Additionally, Active Directory should be at the highest functional level possible. At a minimum, the Windows Server 2003 R2 version of DFS should be used.

    With Windows Server 2003 R2, DFS underwent significant performance improvements. Older versions of DFS should not be used for maintaining file synchronization among Web nodes.

    Common Web farm configuration

    The following diagram depicts a common Web farm configuration for a solution supporting more than 3 million page views per day.

    In this configuration, there is only one database server. In larger scale solutions, or when redundancy is critical, it is possible to run clustered SQL servers.

    The application servers identified in both the Web and Search/Tasks layers run standard operating system/Web server components that meet Telligent Evolution's recommended requirements. These servers also have all of the Telligent Evolution application files, which include everything you would find in the Web folder of the upgrade or Web installation packages. In the example configuration above, the default search provider Solr is running on a dedicated server with the search indexing tasks hosted by Telligent Tasks Service.

    Telligent Evolution software configuration

    There are a number of Telligent platform-specific configuration issues that people commonly run into:

    Telligent Evolution tasks in a Web farm

    There are many tasks that can be run and configured in a single Telligent Evolution instance. Many of the tasks are maintenance jobs, such as performing routine operations inside a Telligent database to maintain data integrity. Tasks that hit the database, like SiteStatisticsUpdates, do not need to be executed by each node in the Web farm. Other tasks, however, like the AnonymousUsers task, are specific to each machine and need to be run by each node in the Web farm.

    Note: You can use an override file to disable tasks when running the task service for a load-balanced cluster.

    Below is a list of the available tasks:

    Tasks that need to be run on only one node


    Tasks that need to be run on all nodes

    The task configuration above assumes that all file stores (CFS, etc.) point to a single location (Amazon S3, file share, etc.). If some file stores will be on the individual nodes you must run all cleanup tasks on each individual node.

    A job can be disabled by changing the enabled property value to false in the communityserver.config file. However, we recommend that you use a communityserver_override.config file to override and define changes to communityserver.config. This helps minimize the changes you need to make when upgrading to future versions of Telligent Evolution. Here is an example of what an override entry might look like to disable the SiteStatisticsUpdates job:
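    A sketch of such an entry in communityserver_override.config follows; the xpath and the job type value are illustrative, so copy the exact path to the job element from your installed communityserver.config:

    <Overrides>
        <Override xpath="/CommunityServer/Jobs/Cron/jobs/job[@type='[yourSiteStatisticsUpdateJobType]']"
            mode="change" name="enabled" value="false" />
    </Overrides>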


    The value of the xpath attribute above should be on one line.

    Finally, to ensure that email is processed by only one server, set the disableEmail property in the core element to true on every node except the one that should process email.
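    Using the same override mechanism, such an entry could look like the following sketch (verify the element name and casing against your communityserver.config):

    <Override xpath="/CommunityServer/Core" mode="change" name="disableEmail" value="true" />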

    How should I store user contributed media?

    You should make any changes referenced in this section before you upgrade your Telligent Evolution or Telligent Enterprise site.

    Telligent Evolution and Telligent Enterprise make use of a Centralized File Storage (CFS) provider. CFS provides a great deal of flexibility to the developer or system administrator because it is relatively straightforward to configure Telligent Evolution and Telligent Enterprise to use any number of external file storage systems (Amazon's S3, a UNC path, etc.). Both products support the concept of an overriding configuration file, so rather than changing the default install you can create an override file, communityserver_override.config.

    Here is an example that overrides the PostAttachments CFS store to use a UNC path:
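    A sketch of what that override might look like; the xpath, attribute name, and UNC path are illustrative, so verify them against the PostAttachments fileStore entry in your installed communityserver.config:

    <Override xpath="/CommunityServer/CentralizedFileStorage/fileStore[@name='PostAttachments']"
        mode="change" name="basePath" value="\\[yourFileServer]\[yourShare]\filestorage\PostAttachments" />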


    The value of the xpath attribute should be on one line. Also, to properly override CFS you will need to provide an override value for each of the CFS entries (the above is one example).

    In this configuration, at least one of the nodes is accessing a resource (files) on another system (the one that hosts the UNC share). The security implications of cross-machine resource requests can be complicated: you can configure security parameters for your application in a number of ways, and each operating system uses a different default account to run the ASP.NET process. There is a helpful matrix showing the various security contexts in which your resource requests will leave the server.

    Because many of these options are environment-specific, we encourage you to read about the various configurations that allow for impersonation and delegation in the Telligent Evolution platform or any ASP.NET application. If you are switching from local file share to remote file share, you need to move all of the files and folders to the remote file share. You also need to review the file permissions once they are moved to the remote file share to ensure they are still accessible.

    Understanding how the underlying ASP.NET security model works is critical to understanding how you must configure remote network file requests from within your application. 

    In the event centralized storage is undesirable, local storage on the Web nodes can be used; available storage should be monitored closely. DFS can maintain file synchronization, but its health should also be monitored. To minimize the complexity of maintaining your community's Web nodes, the best practice is to keep the local paths identical across all nodes.

    Do I need to do anything with the authentication, authorization, roles or profile settings?

    There are two keys to keeping authentication, authorization, roles and profile settings working correctly in a Web farm environment:

    • Cookie names: Ensure that cookie names set in the web.config file are the same on each node in the Web farm.

    • Machine keys: Ensure that the validationKey and decryptionKey values are the same on all nodes in the Web farm.
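    For example, an explicit machineKey element in web.config, identical on every node, might look like the following (the key values are placeholders; generate your own random keys rather than relying on autogenerated, per-machine keys):

    <machineKey validationKey="[yourValidationKey]"
        decryptionKey="[yourDecryptionKey]"
        validation="SHA1" decryption="AES" />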

    Do I need to install Telligent Evolution on the Solr server?

    No. Solr runs independently of Telligent Evolution.

    Do I need to install Solr on all Web servers in the Web farm?

    No. Solr only needs to be installed on one server. You can run Solr on a server that also runs Telligent Evolution, or on a dedicated server. If you have a large number of documents (more than 1 million), you should consider allocating more RAM (2 GB or more) to Solr.

    All Web servers in the Web farm should be configured to point to the same Solr instance. Edit the host attribute of the Solr element in the communityserver.config file on all Web servers. Example:

    <Solr host="http://[yourSolrHost]:8080/solr" simpleSearchQueryType="dismax"
    enableFieldCollapsing="true" enableHighlighting="true">

    The Solr instance should not be accessible to the public and ideally should be reachable only by your Web servers (and administrators as necessary).

    Do I need to enable the search indexing tasks on all Web servers in the Web farm?

    No.  Indexing should be done in one of two ways:

    • Select one Web server to run the indexing tasks. The SearchDeleteQueue and SearchIndex tasks should be enabled on this server and removed from all other Web servers.

    • Run indexing using the Task Service.
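    If you take the first approach, the override pattern described earlier can disable the search tasks on the non-indexing nodes. A sketch follows; the xpaths and job type values are illustrative, so copy the exact job element paths from your communityserver.config:

    <Override xpath="/CommunityServer/Jobs/Cron/jobs/job[@type='[yourSearchIndexJobType]']"
        mode="change" name="enabled" value="false" />
    <Override xpath="/CommunityServer/Jobs/Cron/jobs/job[@type='[yourSearchDeleteQueueJobType]']"
        mode="change" name="enabled" value="false" />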

