When I joined Microsoft back in 2006 to help build out the Microsoft Managed Services offerings -- a precursor to the SharePoint Online platform within Office 365 -- I spent a good deal of time talking with customers and partners about improving the performance of SharePoint across multiple locations: from deployment and architectural best practices to content synchronization and WAN optimization.
I always loved the customer interactions, working directly with end users and administrators, and pushing that feedback back to the product teams. In my current role as a technology evangelist, I spend a good portion of my time with customers and partners, listening to their experiences, providing guidance where I can, and doing my best to surface questions and issues when I don't know the answer.
One of the most common issues that customers raise is optimizing their SharePoint environments. People want to get the most out of the investments they've already made. While many organizations are slowly making plans to move their data assets into the cloud as a way to reduce infrastructure costs, the reality is that the cloud is not yet a viable option for most of their intellectual property. As a result, they're looking for ways to improve performance, reduce storage costs, and implement stronger disaster recovery and high-availability solutions with existing on-premises infrastructure, or through hybrid solutions that will let them start taking advantage of cloud cost efficiencies.
Optimizing SharePoint for Global Deployments
In the #CollabTalk Tweet Jam held last month, we tackled the topic of geographically dispersed teams and SharePoint, touching on the areas of distributed management, performance, and synchronization -- all of which factor into the themes of SharePoint disaster recovery and high availability, among others.
When the panel was asked about the best ways to support high availability of their SharePoint environments (or even across SharePoint versions) in multiple geographies, the responses were fairly straightforward. Nick Kellett (@nickkellett), SharePoint MVP and CTO of Toronto-based consultancy StoneShare, focused on changes within SharePoint, such as a streamlined information architecture with consistent navigation and taxonomy, as well as consistent branding and user interface.
Other experts, like Michael Noel (@michaeltnoel), another SharePoint MVP and consultant with San Francisco-based Convergent Computing, and Bradley Geldenhuys (@bradgcoza), co-founder of GTconsult in South Africa, focused their comments on issues outside of SharePoint, such as SQL costs and other core infrastructure, but also provided guidance on areas where cloud or hybrid architectures (with both on-premises and cloud components) could deliver cost efficiencies.
Moving from high-availability to disaster recovery planning, some of the panelists began to offer up some of their real-world experiences, detailing past projects with SharePoint replication or synchronization at the application level or at the database level. A good portion of the Tweet Jam revolved around this topic (you can find a summary of the event here via Storify), and after the event I followed up with three of the panelists to expand on the topic of SharePoint replication.
SharePoint Content Replication
I first reached out to my colleague, Mark McGovern (@docpointmark), to get a vendor perspective on the problem areas replication addresses. I asked Mark about Microsoft's stance on SharePoint optimization in general, and content replication specifically, as bi-directional, high-fidelity content replication for fast, global access remains a critical need for many organizations.
Microsoft has always been very supportive of partner solutions to address gaps within their various platforms. While Microsoft may provide guidance around optimizing your SharePoint environments, they rarely come out and endorse a specific solution or category of products, knowing that there are multiple options out there.
Microsoft's guidance focuses on how to optimize your environment out of the box, and it really helps highlight where replication can provide tremendous value. To improve access for remote users, they recommend that you not necessarily set up multiple SharePoint farms, but instead use a central farm and do the following:
- Optimize your web pages
- Employ improved client tools like SkyDrive Pro, SharePoint Workspace 2010, and Office Web Apps Server
- Improve your network connections
- Employ Windows services like BranchCache
- Use WAN accelerators
Many clients understand that they have challenges in geographically dispersed and low-bandwidth environments, but oftentimes they do not realize there are solutions in the partner ecosystem. There are a lot of "a-ha" moments once they see and understand what replication can provide, such as immediate, live, bi-directional replication over low-bandwidth networks; synchronization of content across multiple farms, and even across multiple versions of SharePoint; and replication of live, in-process workflows.
I also reached out to Bradley Geldenhuys and Warren Marks (@markswazza) from GTconsult, who have extensive experience with the problems that replication and content synchronization solutions solve from a service point of view. Brad is the co-founder of SharePoint consultancy GTconsult and runs the company's Durban branch, while Warren runs the Johannesburg branch.
If you're unfamiliar with the South African market, finding a strong internet connection can be a challenge.
Warren shared some great perspectives on the customer scenarios they manage in the South African marketplace:
With cloud being the trend at the moment around the world, we have a number of options in this space for the South African market. One of the biggest challenges is WAN connectivity, and this challenge is twofold: first price, and second fiber/large-pipe availability. The choice for many corporates is to leverage one of the ISP MPLS networks. This allows for a single link from the client office into the MPLS cloud, which provides pretty effective connectivity.
Clients that leverage the MPLS connectivity will often also make use of hosting within the private cloud. All servers are housed within the ISP data center, and in turn in the MPLS cloud, which allows the servers to be central to all customer sites/network spokes. The challenge here still lies with the client being limited, from a bandwidth perspective, by the availability and size of the link connecting them into the MPLS cloud. The second limitation is the lack of onsite technical resources to manage any onsite systems that the client runs, SharePoint being a prime example.
In my opinion, the optimal SharePoint design in the above scenario would be to house a SharePoint farm for the group within the private cloud, with a third-party replication solution used to synchronize the site collection(s) relevant to each business entity to a local SharePoint server based at each of the entities. This allows local users to access documents and collaborate using a server on their local LAN. When a document is uploaded, it is replicated (either on a schedule or in real time, depending on the triggers that have been set up) to the main SharePoint server within the private cloud.
Brad expanded on Warren's comments, summarizing the problem space for most globally-dispersed SharePoint deployments:
Our customers rely on replication to manage SharePoint farms that are geographically dispersed because the solution is simply the best option. These solutions work phenomenally well in low-bandwidth situations to compress and replicate only changes, and the leading solutions move everything in a SharePoint environment, including any custom solutions and workflows.
In Africa, bandwidth is both extremely expensive and very poor. Many of our customers have branch locations or factories in rural areas with either site-to-site Wi-Fi connections or mediocre ADSL lines, which are ingredients for an incredibly frustrating user experience. Replication allows for quick, easy access regardless of location or internet connectivity.
Another great scenario is offsite disaster recovery. SharePoint is business-critical for a number of organizations. What would happen if the SAN or virtual environment were destroyed in a fire or building collapse? The simple answer to these risks is an offsite, replicated private-cloud environment; a simple switch of DNS will have all users back online in minutes. At GTconsult, we consider replication the first option in any high-availability scenario. It makes sense, and it is the best possible way to have up-to-the-minute data immediately available in a way that backups cannot match.
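The "switch of DNS" Brad describes can be as simple as repointing the farm's friendly hostname at the standby environment. As a minimal sketch, here is what that might look like as a BIND-style zone-file fragment -- the hostname and addresses are hypothetical, and the key idea is keeping the TTL low so the change propagates within minutes:

```
; Normal operation: the farm's hostname resolves to the primary farm.
; A low TTL (300 seconds) lets a failover take effect quickly.
sharepoint   300   IN   A   10.0.1.10    ; primary farm (example address)

; During a disaster-recovery event, replace the record so the same
; hostname resolves to the replicated standby farm instead:
; sharepoint 300   IN   A   10.9.1.10    ; standby farm (example address)
```

Because users connect by hostname rather than IP address, no client-side reconfiguration is needed; once resolvers pick up the new record, traffic flows to the replicated farm.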
Knowing the Options
For many organizations, moving to the cloud is a relatively easy decision: the cloud is a way to unify the tools that your teams use, to distribute content more easily, and to remove infrastructure costs and scaling issues that come with hosting and managing your own hardware.
But for other organizations, the cloud is not such a simple decision. There may be industry requirements around data storage, performance and bandwidth constraints, and data sovereignty concerns. While out-of-the-box options may be limited, most organizations are simply not aware of the breadth of solutions available through the partner ecosystem -- options which most national and regional service companies are well-trained to deliver.
The cloud may be our future, but that does not mean you cannot get more out of what you have in place today. As panelist Michael Herman (@mwherman2000) put it, "Where a group of users are on the end of slow, unreliable, intermittent network connections they need local access." As with all things SharePoint-related, it's best to begin by understanding all of your options. For many SharePoint environments with geographically distributed teams, and/or requirements for disaster recovery and high-availability solutions, replication may be the right path forward.
Editor's Note: Read more from Christian in A Holistic Approach to Social Collaboration