
ShiftHappens Video: Chris Hornbecker and Joe Brown On Cloud Adoption

Last week, our CEO Chris Hornbecker and Chief Strategy Officer, Joe Brown, had a chance to sit down with Brad Sams at AvePoint’s #ShiftHappens Conference in Washington DC.

Watch as they discuss how Xgility’s federal and corporate clients are adopting Microsoft cloud technologies and the surging popularity of Microsoft Teams.

Plus, Brad shares a bit of trivia about the origins of Microsoft Teams that you may not know.


Unlock the Power of Office 365 with Strategic Planning

It’s not uncommon for organizations to use Microsoft’s Office 365 for just email and SharePoint for document storage and/or their intranet. Yet, Office 365 has a whole suite of powerful collaboration and teamwork solutions and tools too – including SharePoint Online, OneDrive for Business, Microsoft Teams, and Yammer. But rolling out the tools without a strategic plan can often result in low adoption rates – and lead to frustration for both IT departments and end-users.

Sound familiar? Let’s look at how we got here and how strategic planning can reduce Office 365 confusion and transform it into a powerful platform with widespread end-user adoption.

Why the Chaos? We’ve Opened a Firehose

The great news is that the tools in Office 365 are user-friendly and respond to end-user needs and wants. BUT the sheer volume of new apps and the updates can be overwhelming for both IT and end-users.

It’s Not Because We Aren’t Trying

Most user adoption issues are not due to the absence of traditional implementation practices or methods. The tools are powerful, but users don’t know what to use and when. Without a strategy and plan for rollout and adoption, even IT has a hard time answering end-user questions.

Tame the Office 365 Chaos with Strategic Planning

Experience has taught our team that strategic planning and execution are key to any successful Office 365 implementation, rollout, and higher end-user adoption.

Our Director of Strategy & Advisory Services, JoAnna Battin, recommends the following steps as part of an Office 365 strategy to help drive widespread adoption:

  • Start by Understanding the “Why”
    Determine what you want to achieve when rolling out new Office 365 applications, tools, and solutions. Do you need to increase collaboration between workers in multiple locations? Do you need a centralized document management system? Do you need to improve a business process? Once you have determined specific goals then you can move onto the toolset selection.
  • Define How Each Application Will Be Used in the Organization
    Most issues users face can be resolved by using the right tool for the right purpose at the right time. When you define the purpose of each tool and expectations for its use, users don’t have to guess.
  • Define Governance for Each Tool & for the Tenant
    Put guardrails around the applications and use the tenant admin tools to govern their use.
  • Prepare Messaging & Story Planning – Then Communicate to Users
    Setting expectations for each application and communicating that message gives users the information they need to know which tool to use when.
  • Demonstrate & Encourage Using the Tools Together to Improve Business Processes
    Each tool is powerful by itself. Using several tools together to build a business process is where the transformation takes place.
  • Build Excitement
    Engage your key stakeholders and end-users while building excitement around new changes.

JoAnna also advises organizations to start small – don’t try to roll out all the applications at once. Create a strategic roadmap for all the Office 365 solutions you plan to implement, but stagger the rollout in stages so neither IT departments nor end-users are overwhelmed by the changes.

Do you need help unraveling the complexities and capabilities of the tools and applications within the Office 365 ecosystem including SharePoint Online, OneDrive for Business, Microsoft Teams, and Yammer? With vast experience under our belt and a passion for transformation, we are ready to help.

We take a strategic approach to Office 365 to connect your people and applications in a way that drives higher employee engagement, productivity, and innovation.

Ready to get started? Contact our team today »


What Forrester and Customers Say About Cloud Cost Optimization and Management

This is Part 3 of our 3-Part series on Best Practices for Cloud Management.

For those of you who aren’t familiar, the Forrester Wave is a rigorous vendor evaluation conducted over several months by the Forrester Research team. It’s meant to help enterprises put together a short list of vendors for them to evaluate.

In Q2 2018, Forrester named CloudHealth as one of the leaders among the top nine vendors that made the cut for Cloud Monitoring and Optimization. This is significant because our Cloud Optimization Platform Service is powered by CloudHealth.

Xgility’s Cloud Optimization Platform Service

Our Cloud Optimization Platform Service enables control and optimization of your cloud from every angle. The service is designed to enhance visibility and reporting into your public cloud spend and usage, as well as streamline billing. Additionally, we can create rules and automate tasks for powerful active policy management and set up guardrails for your environment using our platform to send alerts and notifications for noncompliant policies and assets. The service is offered in three levels aligned to your infrastructure and requirements.

Xgility's Cloud Optimization Platform Service Levels.

Subscriptions are determined by the percentage of your cloud spend, and supplemental hours are available for configuration, support, and customization.

On average, customers see a reduction in cloud costs of 20%-30%, but sometimes much more. For example, using Reserved Instances (RIs) in Azure can quickly yield up to a 72% discount on your virtual machine and SQL database costs compared to pay-as-you-go.

Check out this chart below comparing Azure reserved virtual machine instances and SQL Database reserved capacity:

Chart comparing Azure reserved virtual machine instances and SQL Database reserved capacity.

The real key to cloud cost optimization and management is to treat cost-saving tactics not as one-time activities but as ongoing best practices, automated to ensure they stay effective.

That’s why a single pane of glass that gives you visibility and actionable insight into the cost, security, performance, and compliance of your data center and public cloud infrastructure is the best way to achieve your goals. Xgility’s Cloud Optimization Platform Service does this by:

  • Consolidating all cloud infrastructure and data in one place
  • Identifying opportunities to optimize costs, utilization, performance, and security
  • Managing cloud sprawl
  • Establishing governance guidelines and rules with policies to manage your cloud as it scales
  • Aligning cost reports with business requirements

Real-life Examples of Customer Savings

Needless to say, putting in place an automated cloud cost management and optimization program can have a huge impact not only on your Azure spend but also on growing your business. As proof, here are some real-life examples of savings using the Azure cost-savings tips we provided in our infographic and previous blogs:

  • One large B2B SaaS company found that almost 60% of the instance hours they ran in the past 12 months were on older-generation instance types. Upgrading those instances to the latest generation saved them millions of dollars per year.
  • One large publishing company set a target for weekly hours that non-production systems should run – less than 80 hours per week – which is saving them thousands of dollars a month.
  • A mobile gaming company moved 600TB+ from S3 to S3 Infrequent Access, saving them more than $7k every month.

See Xgility’s Cloud Optimization Platform Service in Action – With Your Data!

The best way to start tackling your Azure spend and optimization challenges is to take advantage of a free 14-day trial of our Cloud Optimization Platform Service that uses real data from your environment to help you understand the cost-savings and optimization potential. The trial is securely configured and will provide you the best opportunity to experience true visibility and actionable insights for your Azure environment.

The road to cloud cost optimization and management is easier than you think!

Start your journey by requesting a free trial » or scheduling a demo with our cloud experts »


Best Practices for Optimizing Cloud Usage and Costs: Part 2

This is Part 2 of our 3-Part series on Best Practices for Cloud Management.

Our last blog post provided detailed insight on five tips for reducing your overall Azure spend. What’s clear is that gaining visibility across your entire organization’s cloud spend is imperative to making sure you get the most out of your cloud investment. Ongoing management, governance, and automation best practices will then help you achieve true cost savings: on average 20% to 30%, and in some cases much more.

Here are five additional tips to help you in your journey to cloud cost optimization and management.

Rightsize Disk Storage

With Disk Storage, the critical factors to consider are capacity, IOPS, and throughput. Removing unattached disks is one way to reduce the cost associated with Disk Storage.

Another approach is to evaluate which disks are over-provisioned and can be modified for potential cost savings.

Microsoft offers two types of storage for VMs: a standard storage performance tier, which can be purchased in three different levels of redundancy, and a premium storage performance tier, which is offered in three different sizes. The price difference between standard and premium disks can be as high as 3x, so pick the right storage for each workload.

Xgility Pro Tip: Premium storage is billed based on the total disk size, regardless of consumption. Keep a close eye on utilization of Premium storage to minimize wasted cost.

Rightsize SQL Databases

Like rightsizing your IaaS, you also need to rightsize your Azure platform-as-a-service (PaaS). Azure SQL Database is a PaaS offering that is used by many developers to manage their applications.

Evaluate how well your SQL Databases are being utilized in terms of their workloads. The key factors to take into consideration are Database Transaction Units (DTU), Database Size, and Capacity.

SQL Databases are purchased through a DTU-based model, which combines compute, memory, and IO resources. There are three service tiers: Basic, Standard, and Premium. The Basic tier is primarily used for development and testing. The Standard tier is suitable for applications with multiple real-time users. The Premium tier is for high-performance applications with many simultaneous requests. Prices differ across the three tiers and across the database sizes within them.

The key is to rightsize to the lowest cost SQL Database that meets your performance requirements.

Xgility Pro Tip: Normally you will not need to rightsize Basic SQL Databases because they are already competitively priced and mainly used for development or testing.

Stop and Start VMs on a Schedule – Park your VMs

Azure will bill for a VM as long as a VM is running. If a VM is in a stopped state, there is no charge. For VMs that are running 24/7, Microsoft will bill for 672 to 744 hours per VM, depending on the month. If a VM is turned off between 5pm and 9am on weekdays and stopped weekends and holidays, then total billable hours per month would range from 152 to 184 hours per VM. This would save 488 to 592 VM hours per month.
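The arithmetic above can be sanity-checked in a few lines of PowerShell (the 19-to-23 weekdays-per-month range is the only assumption):

```powershell
# Hours billed for an always-on VM in a 28-day vs. a 31-day month
$alwaysOn = @(28 * 24, 31 * 24)        # 672 and 744 hours

# Hours billed running 8 hours per weekday (9am-5pm), stopped otherwise;
# months contain roughly 19 to 23 weekdays
$parked = @(19 * 8, 23 * 8)            # 152 and 184 hours

# Monthly savings per VM
$alwaysOn[0] - $parked[1]              # 488 hours
$alwaysOn[1] - $parked[0]              # 592 hours
```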

This is a bit of an extreme example, since with flexible work weeks and global teams, VMs often aren’t powered down outside normal working hours. However, outside of production, you are still likely to find many VMs that do not need to run 24/7/365.

The most cost-efficient environments stop and start VMs based on a set schedule. Each cluster of VMs can be treated a different way.

Xgility Pro Tip: Set a target for weekly hours that non-production systems should run.

Buy Azure Reserved Virtual Machine Instances and Optimize

Purchasing Microsoft’s Azure Reserved Virtual Machine Instances (RIs) is an effective cost-saving technique.

Azure RIs allow you to make a 1- or 3-year upfront commitment to Microsoft to utilize specific virtual machine instance types for a discount on your compute costs and prioritized capacity. RIs can save you up to 72% compared to pay-as-you-go pricing.

Microsoft allows customers to modify reservations in the following ways:

  • Changing the Scope from Single Subscription to Shared, or vice versa.
  • Exchanging Azure RIs across any region and series.
  • Canceling your Azure RIs at any time for an adjusted refund.

Purchasing Azure RIs and continuously modifying them provides the most value. If an RI is idle or underutilized, modifying it lets the RI cover on-demand usage to a greater degree. This ensures that RIs are operating efficiently and that savings opportunities are being maximized.

Xgility Pro Tip: Microsoft allows you to achieve greater cost savings (up to 72%) by combining Azure RIs with the Azure Hybrid Benefit, which covers the cost of the Windows OS on up to two virtual machines per license.

Move Object Data to Lower-Cost Tiers

Microsoft offers several tiers of Storage at different price points and performance levels.

It’s best to move data between the tiers of storage depending on usage. There are two ways to adjust your Azure storage: by redundancy (how many copies are stored across how many locations), and by access tier (how often data is accessed).

Microsoft offers four redundancy options and three access tier options so you can create the right solution. For example, Cool Locally Redundant Storage (LRS) is ideal for long-term storage, backups, and disaster recovery content, while Cool Geographically Redundant Storage (GRS) is best suited for archival.

Hot tier pricing depends on your redundancy level and the amount of content stored, starting at $0.0184 per GB per month. The Cool tier is a flat $0.01 per GB per month, and the Archive tier is available at $0.002 per GB per month.

Xgility Pro Tip: Any objects residing in a Hot tier that are older than 30 days should be converted to a Cool tier.

In the last post of our blog series, we’ll explore some customer successes and what analysts are saying about the best tools to tackle cloud sprawl.

Love the cloud but not the cost?

Learn how our Cloud Optimization Platform Service can reduce your Azure costs by an average of 20%-30%. Schedule a demo with our cloud experts » or Request a free trial »

Source: CloudHealth, 10 Best Practices for Reducing Spend in Azure, 2018

Best Practices for Optimizing Cloud Usage and Costs: Part 1

This is Part 1 of our 3-Part series on Best Practices for Cloud Management.

The funny thing about developers is that once they get ahold of fancy new toys, there can be a tendency to play with them as much as possible. This may sound familiar if you are using Microsoft Azure or another public cloud for application development and have watched the costs go through the roof.

Luckily, that doesn’t have to be the case.

While you don’t want to clip your developers’ wings, striking a balance between freedom and responsibility and making ‘cost consciousness’ part of your company culture will go a long way toward reducing your Azure spend and better managing your cloud infrastructure.

The decision to go beyond cost and focus on optimization is also an important step in moving towards a true cloud cost management and optimization (CCMO) program.

Start by asking yourself some key questions:

  • How much is my infrastructure costing me?
  • How much money am I wasting on idle infrastructure?
  • What guardrails do I have in place?

Once you have made an honest evaluation of where you think you are, start implementing these best practices to put you on the road to immediate Azure cost savings.

Delete Unattached Disk Storage

It’s not uncommon to see thousands of dollars in unattached Disk Storage (Page Blobs) within Azure accounts.

That’s because when you delete a Virtual Machine, by default, any disks attached to the VM aren’t deleted. This feature helps prevent data loss from an unintentional VM deletion. However, after a VM is deleted, the Disk Storage remains active, and you continue to pay the full price of the disk.

Because of the dynamic nature of cloud computing, it’s easy for users to quickly spin up and spin down workloads, but that means the risk of leaving behind unattached storage is high. Check for unattached Disk Storage in your infrastructure to cut thousands of dollars from your monthly Azure bill.

Xgility Pro Tip: Delete your Disk Storage if it has been unattached for two weeks. It’s unlikely the same storage will be utilized again.

Delete Aged Snapshots

Many organizations use Snapshots on Blob and Disk Storage to create point-in-time recovery points in case of data loss or disaster. However, Snapshots can accumulate quickly if not closely monitored. Individual Snapshots are not costly, but the cost can grow quickly when several are provisioned.

Additionally, users can configure settings to automatically create subsequent snapshots without scheduling older snapshots for deletion. Monitoring Snapshot cost and usage per VM will help ensure Snapshots do not get out of control.

Xgility Pro Tip: Set a baseline number of snapshots that should be retained per object. Most of the time a recovery will occur from the most recent snapshot.

Terminate Zombie Assets

Zombie assets are infrastructure components that are running in your cloud environment but not being used. For example, they could be former VMs that have not been turned off. Zombie VMs may also occur when VMs fail during the launch process or because of script errors that fail to deprovision VMs. Additionally, zombie assets can also come in the form of idle Load Balancers that aren’t being used effectively, or an idle SQL Database.

Microsoft charges for these assets when they’re in a running state. They should be isolated, evaluated, and terminated if not needed. Take a backup of the asset before terminating or stopping it to ensure recovery if necessary.

Xgility Pro Tip: Identify VMs that have a Max CPU <5% over the past 30 days as a starting point for finding zombie assets.

Upgrade VMs to the Latest Generation

In 2014, Microsoft introduced the next generation of Azure deployment, called Azure Resource Manager (ARM), or sometimes v2. This update offers functionality like resource grouping, advanced tagging, role-based access control, and templates. While the prices for ARM and Azure Classic (Azure v1) are the same, the management improvements help save time.

For example, using ARM, you can easily batch deploy new VMs from a JSON template, rather than deploying them one at a time. You can tag assets so they are more easily viewable by Line of Business.

In addition, for some VM types there is the option to upgrade to the latest version. While the VM price points are the same, the performance improvements may enable you to run fewer VMs.

For example, upgrading a D-series VM gives you 35% faster processing and greater scalability for the same price.

Xgility Pro Tip: Migrate from Azure Classic to ARM for improved performance, additional features, and better manageability. Since both Classic and ARM assets can now be managed in the same console, you can migrate workloads at your own pace.

Rightsize VMs

Rightsizing an Infrastructure as a Service (IaaS) offering such as VMs is the cost reduction initiative with the potential for the biggest impact.

Over-provisioning a VM can lead to exponentially higher costs.  Without performance monitoring or cloud management tools, it’s hard to tell when assets are over- or under-provisioned.

Be sure to consider CPU, memory, disk, and network in/out utilization. Reviewing these trended metrics over time can help you reduce the size of a VM without affecting application performance. Because it’s common for VMs to be underutilized, you can reduce costs by ensuring that all VMs are the right size.

Xgility Pro Tip: Look for VMs that have an Avg CPU < 5% and Max CPU < 20% for 30 days as viable candidates for rightsizing or termination.
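As an illustration, the thresholds in the tip above can be applied to a utilization export with a short filter (the $vmMetrics collection and its Name/AvgCpu/MaxCpu properties are hypothetical, e.g. records exported from Azure Monitor to CSV):

```powershell
# Hypothetical 30-day utilization records:
# $vmMetrics = Import-Csv .\vm-utilization-30d.csv

# Flag VMs that averaged under 5% CPU and never exceeded 20%
$candidates = $vmMetrics | Where-Object {
    [double]$_.AvgCpu -lt 5 -and [double]$_.MaxCpu -lt 20
}
$candidates | Select-Object Name, AvgCpu, MaxCpu
```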

It’s important to remember these tips are not meant to be one-time activities but ongoing processes.

Read Part 2 of our blog series to learn five more tips that can help you save costs in Azure.


Need to Gain Control of Your Cloud Spend?

Learn how our Cloud Optimization Platform Service can reduce your Azure costs by an average of 20%-30%. Schedule a demo with our cloud experts » or Request a free trial »

Source: CloudHealth, 10 Best Practices for Reducing Spend in Azure, 2018


[Infographic] 10 Ways to Reduce Your Azure Spend

Microsoft Azure adoption is on the rise. But many organizations have learned the hard way that moving to the public cloud doesn’t always guarantee cost savings. In fact, many have seen cloud bills that are two to three times higher than expected.

Luckily, that doesn’t have to be the case. The first step to combatting rising Microsoft Azure costs is to gain visibility across your entire organization’s cloud spend. Then, use the 10 best practices outlined in our infographic –  including deleting unattached storage, deleting aged snapshots, and terminating zombie assets – to reduce your spend.

Download PDF Version

Infographic: 10 Ways to Reduce Your Azure Spend.


Cloud Management Made Simple

Learn how our Cloud Optimization Platform Service can reduce your Azure costs by an average of 20%-30%.
Request a free trial today!

Azure Automation: Infrastructure as Code

Automation is Key to Getting the Most Out of the Cloud

The cloud offers many benefits to companies that move their infrastructure there or develop applications using cloud-native services, but automation is the key to making the cloud efficient and cost-effective. We have to get into the “way back” machine for a second and return to the days of on-premises servers, storage, and network gear to get a full grasp of how important automation is. In the old days, IT was typically siloed into four or five teams, each with expertise in a different IT component: server, storage, network, application, and security. When a basic move, add, or change needed to be made, every team was not only consulted but also played a role in making the change happen. This made basic changes time-consuming and fairly complicated to orchestrate. Additionally, changes were typically done manually, because the siloed groups almost never created cross-functional tools or scripts. This drove up cost and slowed the pace of innovation.

OK, now let’s jump forward to modern times. The public cloud offers IT organizations a new way to manage change, and it is forcing IT professionals to evolve from builders into orchestrators. In the public cloud, IT professionals can perform cross-functional changes using one common, ubiquitous console with access to servers, storage, network, applications, and security. But that’s not even the best part: you can now use a common scripting and API interface to perform all of the orchestration, and you don’t need multiple subject-matter experts with compartmentalized product knowledge to participate. All of the cloud services you use can be treated as application code, which allows for direct programming and automation. Moves, adds, and changes are now developed in code and executed in seconds with a simple command. This accelerates an organization’s ability to change and allows it to innovate rapidly.

Let’s explore how you can do this with Azure services. I think you will find it enlightening.

Azure PowerShell

Azure provides a set of PowerShell cmdlets (AzureRM) that use the Azure Resource Manager model for managing, provisioning, and updating Azure resources. However, when executing these scripts, the user is required to log into Azure from PowerShell using the Connect-AzureRmAccount cmdlet, which provides authentication and authorization.

If you plan to automate Azure with PowerShell, the scripts should be executed under an Azure Active Directory (AAD) Service Principal rather than your own credentials.

Azure Service Principal

An Azure service principal is a security identity used by user-created apps, services, and automation tools to access specific Azure resources. Think of it as a ‘user identity’ (username and password or certificate) with a specific role, and tightly controlled permissions. A service principal should only need to do specific things, unlike a general user identity. It improves security if you only grant it the minimum permissions level needed to perform its management tasks.

Azure Automation

The following example creates a self-signed certificate, associates the certificate with an Azure Active Directory (AAD) Service Principal, and connects to Azure from PowerShell; the AAD Service Principal provides the authentication and authorization.

1. Use Connect-AzureRmAccount to log into Azure from PowerShell using an account that has Contributor rights.

2. Execute the following PowerShell commands to create a self-signed certificate; the certificate will be stored in the local (server) Windows Certificate Store.
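The certificate-creation commands appeared as a screenshot in the original post; a minimal sketch (the subject name and one-year validity are illustrative assumptions) might be:

```powershell
# Create a self-signed certificate in the local machine store
$Cert = New-SelfSignedCertificate -CertStoreLocation "Cert:\LocalMachine\My" `
    -Subject "CN=infrastructure" `
    -KeySpec KeyExchange `
    -NotAfter (Get-Date).AddYears(1)

# Base64-encode the public portion; this is the $KeyValue referenced in
# Technical Note #1 and used as -CertValue in Step #3
$KeyValue = [System.Convert]::ToBase64String($Cert.GetRawCertData())
```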

3. Execute the following PowerShell command to create an Azure Service Principal.
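The original post showed this command as a screenshot; a sketch consistent with Technical Note #1 (assuming the $KeyValue and $Cert variables from Step #2) might be:

```powershell
# Create the AAD Service Principal, attaching the certificate's public key;
# the display name "infrastructure" matches Technical Note #1
$sp = New-AzureRmADServicePrincipal -DisplayName "infrastructure" `
    -CertValue $KeyValue `
    -StartDate $Cert.NotBefore `
    -EndDate $Cert.NotAfter
```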

Technical Note #1
The PowerShell command in Step #2 creates the self-signed certificate along with its Key Value ($KeyValue); the $KeyValue is used as the -CertValue parameter when the Azure Service Principal is created in Step #3 and assigned the Display Name infrastructure.

The Azure Service Principal is created in Azure Active Directory in App registrations.

4. Execute the following PowerShell command to assign Contributor rights to the Azure Service Principal.
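The role-assignment command itself wasn't reproduced in text; a minimal sketch (assuming a $sp variable from Step #3 holds the new Service Principal) might be:

```powershell
# Azure AD replication can lag, so the principal may take a few seconds to appear
Start-Sleep -Seconds 20

# Grant the Service Principal Contributor rights at the subscription scope
New-AzureRmRoleAssignment -RoleDefinitionName "Contributor" `
    -ServicePrincipalName $sp.ApplicationId
```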

5. Execute the following PowerShell commands to retrieve the Azure Tenant Id and the Application Id (the Azure Service Principal) created in Step #3 and the Thumbprint of the Self-Signed Certificate created in Step #2.
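The retrieval commands were shown as a screenshot; a sketch consistent with Technical Note #2 (reusing a $Cert variable from Step #2 and a $sp variable from Step #3) might be:

```powershell
# Tenant Id of the current Azure context
$TenantId = (Get-AzureRmContext).Tenant.Id

# Application Id of the Service Principal created in Step #3
$ApplicationId = $sp.ApplicationId

# Thumbprint of the self-signed certificate created in Step #2
$Thumbprint = $Cert.Thumbprint
```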

Technical Note #2
The Tenant Id ($TenantId) is the Id associated with your Azure tenant, the Application Id ($ApplicationId) identifies the Azure Active Directory Service Principal that was created in Step #3, and the Thumbprint ($Thumbprint) belongs to the certificate created in Step #2.

Connecting to Azure using a Service Principal

6. Using the variables $TenantId, $ApplicationId and $Thumbprint from Step #5 as input parameters, execute the following PowerShell command (Connect-AzureRmAccount).
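The cmdlet invocation appeared as a screenshot; in sketch form:

```powershell
# Authenticate as the Service Principal using the certificate thumbprint
Connect-AzureRmAccount -ServicePrincipal `
    -TenantId $TenantId `
    -ApplicationId $ApplicationId `
    -CertificateThumbprint $Thumbprint
```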

When the Connect-AzureRmAccount cmdlet executes successfully, it displays the Azure context information: the Active Directory account (Account, i.e. the Service Principal), subscription name (SubscriptionName), subscription Id, Active Directory tenant (TenantId), and Azure environment (Environment).

Take it One Step Further

Now that the Service Principal has been created, let’s take it one step further and automate an Azure PowerShell script using a CI/CD environment.

The CI/CD environment will be Jenkins, which provides an automation server for Continuous Integration and Continuous Delivery.

Azure Network Infrastructure

An Azure Virtual Network has been provisioned that contains dev, stage, and prod subnets; an Azure Network Security Group is associated with each subnet.

The Jenkins server (xprovisioning) is a Windows Server 2016 Datacenter machine; Jenkins 2.138.2 and the AzureRM PowerShell cmdlets have been installed on it.

Jenkins Project

A Jenkins project titled Create-Update Network Security Group is a Freestyle project which is configured to execute PowerShell scripts.

The Jenkins project will leverage the Jenkins Credentials Binding plugin, which allows you to configure and inject credentials or text as environment variables that are stored as secrets in Jenkins.

The Jenkins project Build will execute the Azure PowerShell script Infrastructure_NSG_CreateUpdate.ps1.


The PowerShell script will be configured to read the Jenkins environment variables (secret text), connect to Azure from PowerShell using the Service Principal, execute a publicly available Microsoft script (on GitHub) that updates an Azure Network Security Group, and log out of Azure.
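The script itself isn’t reproduced in the post; a hypothetical sketch of Infrastructure_NSG_CreateUpdate.ps1, assuming the secret-text bindings are named TenantId, ApplicationId, and Thumbprint, might be:

```powershell
# Read the Jenkins secret-text environment variables (names are assumptions)
$TenantId      = $env:TenantId
$ApplicationId = $env:ApplicationId
$Thumbprint    = $env:Thumbprint

# Connect to Azure as the Service Principal
Connect-AzureRmAccount -ServicePrincipal -TenantId $TenantId `
    -ApplicationId $ApplicationId -CertificateThumbprint $Thumbprint

# Run the publicly available Microsoft script that updates the NSG
# (its parameters are omitted here; see the script on GitHub)
& .\CreateUpdateNSGFromCsv.ps1

# Log out so the build agent retains no Azure context
Disconnect-AzureRmAccount
```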

Jenkins Dashboard

The Jenkins Dashboard provides access to all the Jenkins projects that have been set up for the environment; click on the Create-Update Network Security Group project.

Jenkins Project Dashboard

The Jenkins Project Dashboard allows a user to modify, build, or delete the project. The following is the output of the successfully executed Jenkins project Create-Update Network Security Group.

Putting It All Together

An Azure Service Principal was created with a self-signed certificate, which provides authentication and authorization for connecting to Azure.

A Jenkins Freestyle project was created that executes the PowerShell script Infrastructure_NSG_CreateUpdate.ps1. The project also leveraged the Jenkins Credentials Binding plugin, which allowed the Tenant Id ($TenantId) and Application Id ($ApplicationId) created in Step #3 and the Thumbprint ($Thumbprint) created in Step #2 to be stored as secret text.

The PowerShell script Infrastructure_NSG_CreateUpdate.ps1 calls another PowerShell script, CreateUpdateNSGFromCsv.ps1, which updates the inbound security rules for the Azure Network Security Group prod-nsg associated with the prod-subnet.

A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. You can enable network security group flow logs to analyze network traffic to and from resources that have an associated network security group.

Network Security Group – prod-nsg

Before the execution of the PowerShell script, the network security group contained one custom inbound security rule: priority 100, name Allow_RDP, port 3389. This rule allowed Remote Desktop Protocol (RDP) access to virtual servers in the prod-subnet.

After the execution of the PowerShell script, additional inbound security rules were added, and the priority of the inbound security rule Allow_RDP was changed to Priority 200.

The additional inbound security rules allow inbound web traffic on ports 80 (HTTP) and 443 (HTTPS) and inbound SQL Server traffic on port 1433.
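In AzureRM terms, the rule changes described above might look like the following sketch (rule names, priorities, and the resource group name prod-rg are illustrative assumptions):

```powershell
$nsg = Get-AzureRmNetworkSecurityGroup -Name "prod-nsg" -ResourceGroupName "prod-rg"

# Inbound web traffic
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow_HTTP" -Priority 100 `
    -Direction Inbound -Access Allow -Protocol Tcp -SourcePortRange * `
    -SourceAddressPrefix * -DestinationAddressPrefix * -DestinationPortRange 80

$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow_HTTPS" -Priority 110 `
    -Direction Inbound -Access Allow -Protocol Tcp -SourcePortRange * `
    -SourceAddressPrefix * -DestinationAddressPrefix * -DestinationPortRange 443

# Inbound SQL Server traffic (1433 is the SQL Server default port)
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow_SQL" -Priority 120 `
    -Direction Inbound -Access Allow -Protocol Tcp -SourcePortRange * `
    -SourceAddressPrefix * -DestinationAddressPrefix * -DestinationPortRange 1433

# Persist all rule changes back to Azure
$nsg | Set-AzureRmNetworkSecurityGroup
```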

Six Myths About Moving to the Cloud: What You Really Need to Know About Moving to Office 365

Most organizations that choose to move to the cloud do so because they have decided they need it for business agility and want the cost savings that come with it.

If your organization is considering Microsoft Office 365 as your first step in moving applications to hosted solutions, you may have found similar inconsistencies in your research — making it difficult to separate fact from fiction.

A common misconception about Office 365, for example, is that it is simply a version of Office accessed by a browser.

To help in your migration to the cloud, here are six myth-busting facts about Office 365.

Myth 1: Office 365 is just Office tools in the cloud, and we can only use it online.

Fact: Office 365 is the Office you already know, plus productivity tools that will help you work more efficiently.

Whether at your desk or on the go, you have the tools to focus on the work that matters to your mission. And, since Office 365 lives in the cloud, these tools stay up to date, are simple to use and manage, and are ready to work when you are.

Myth 2: If our data moves to the cloud, our organization will no longer have control over our technology.

Fact: You still have total control over technology, but your IT department won’t have to worry about constant updates.

When you move to the cloud, time spent maintaining hardware and upgrading software is significantly reduced, eliminating those headaches. Now your IT team can focus on advancing your organization’s technology, rather than being a repair service. Plus, you will have more time to spend improving operations and launching agile initiatives.

Instead of spending more and more portions of your budget on servers for email storage and workloads, you can think strategically and support organizational managers in a much more agile fashion, quickly responding to their needs.

Myth 3: Corporate spies, cyber thieves, and governments will have access to my data if it is in the cloud.

Fact: It’s your data, not anyone else’s.

This is a top fear about the cloud among many organizations, but it is unfounded. Your IT team manages access, sets up rights and restrictions, and provides smartphone access and options. Further, your organization remains the sole owner. You retain the rights, title, and interest in the data stored in Office 365.
Visit the Microsoft Trust Center to learn how they help safeguard your data »

Myth 4: Cloud migration is too much for my organization to handle.

Fact: We’re here to help every step of the way.

When you start considering how to move petabytes of data to the cloud, it’s easy to see why some people think “going cloud” is too big a challenge for IT departments and staff alike. We’re not going to tell you it’s simple, but you really can get Office 365 up and running quickly.
Learn more about our Office 365 Strategic Services »

Myth 5: We have to learn all new tools to manage SharePoint Online.

Fact: SharePoint Online abstracts and maintains the infrastructure, without changing anything else.

All of your hard work learning how to manage SharePoint is not lost! SharePoint Online shares the same familiar administration and management tools, whether your deployment is in the cloud, on premises, or in a hybrid of the two. Although your customizations aren’t carried over to the cloud automatically, all the administrative controls remain the same.

When moving to SharePoint Online, you no longer need to manage the workload implementation; instead, your IT team can focus on its configuration. With a convenient, one-time, expert-led implementation, your IT team can reallocate the time it used to spend on implementation and concentrate on building strong, strategic tools for the organization. SharePoint Online simply abstracts the infrastructure, letting you focus on the solution.

Myth 6: I have to move everything to the cloud. It is an all-or-nothing scenario.

Fact: You can move to the cloud at your own pace or use a hybrid approach.

While some early cloud supporters advocated for moving your entire organization to the cloud all at once, this isn’t a process that needs to happen overnight. Most implementations start with a hybrid approach—moving a single workload, like email, and growing from there.

The hybrid cloud creates a consistent platform that spans data centers and the cloud, simplifying IT and delivering apps and data to users on virtually any device, anywhere. It gives you control to deliver the computing power and capabilities that your organization demands and to scale up or down as needed without wasting your onsite technology investments.

As many organizations move their productivity workloads to the cloud, the path for each workload is different, and the time it takes for those migrations varies. We can help you move workloads such as file sync-and-share or email first, and then help you figure out the right long-term plan for more.

Final Thoughts

Feel free to share this with colleagues who need help separating fact from fiction when it comes to Office 365 in the cloud. It’s good to be on the same page, and you’ll save time by not having to argue about these myths.

Are you considering moving to Microsoft Office 365? If so, we can help. Explore our solutions and services for details or connect with an Xgility expert today.

The Principle of Least Privilege Access in the Cloud

The principle of least privilege states that each component should allocate sufficient privileges to accomplish its specified functions, but no more. This limits the scope of the component’s actions, which has two desirable effects: the security impact of a failure, corruption or misuse of the component will have a minimized security impact; and the security analysis of the component will be simplified.

NIST Special Publication 800-160, Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems, November 2016.

The principle of least privilege (POLP) has long been a best practice for computer security.  In practical application, administrative users will use regular user accounts for routine activities, and use a separate, administrative login to perform administrative functions.  POLP is a component of many different compliance programs, including FISMA, PCI, CIS, etc.

In the Windows desktop, User Account Control (UAC) performs a POLP function – blocking or requesting access for administrative privileges as needed.


Using “Run As Administrator” also performs a POLP function – programs run with normal privileges unless explicitly requested to run with administrator privileges.
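For example, an administrator working in a normal PowerShell session can elevate a single command rather than running the whole session with administrator rights (restarting the print spooler here is just an illustration):

```powershell
# Launch a separate, elevated PowerShell process for one administrative task;
# the current session keeps running with normal user privileges.
Start-Process -FilePath 'powershell.exe' -Verb RunAs `
    -ArgumentList '-NoProfile -Command "Restart-Service -Name Spooler"'
```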

In the Windows Server environment, Microsoft has long recommended using separate administrator and regular user accounts for system administrators.  (From The Administrator Accounts Security Planning Guide, published in 1999 and no longer available from Microsoft):

Why You Should Not Log On To Your Computer as an Administrator

If you regularly log on to your computer as an administrator to perform common application-based tasks, you make the computer vulnerable to malicious software and other security risks because malicious software will run with the same privileges you used to log on. If you visit an Internet site or open an e-mail attachment, you can damage the computer because malicious code could be deployed that will download and execute on your computer.

If you log on as an administrator of a local computer, malicious code can, among other things, reformat your hard disk drive, delete your files, and create a new user account that has administrative privileges. If you log on as a member of the Domain Admins group, Enterprise Admins group, or Schema Admins group in the Active Directory® service, malicious code can create a new domain user account that has administrative access or put schema, configuration, or domain data at risk.

There are obstacles to adopting this practice of separate accounts for users of cloud services such as Office 365.  Creating separate accounts for administrators requires multiple subscriptions for a single user, for example.  Managing multiple accounts within a browser context can be confusing, leading to less safe use of administrator accounts.  Privileges for Office 365 are managed through users, roles, and groups within the Office 365 Admin portal.

Assigning privileges through the use of roles allows limitation of the privileges that are assigned to user accounts.  For example, a user account can be assigned the Reports Reader role so that the user can view the activity reports in Office 365 without being granted any other administrative privileges.  Other examples of roles available within Office 365 include Exchange administrator, SharePoint administrator, and Helpdesk administrator.
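As an illustration, a narrowly scoped role such as Reports Reader can be assigned from PowerShell with the MSOnline module; the user address below is hypothetical.

```powershell
# Grant only the Reports Reader role -- enough to view activity reports,
# but no other administrative privileges. The account name is hypothetical.
Connect-MsolService
Add-MsolRoleMember -RoleName 'Reports Reader' `
    -RoleMemberEmailAddress 'analyst@contoso.com'
```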

Organizations may find that they need a higher level of granularity and control of administrators.  To assist in managing the security of privileged accounts, Microsoft provides Azure Active Directory (AD) Privileged Identity Management (PIM).  This is available with Azure AD Premium P2 licenses.  Adding this application to the Azure portal provides a high level of protection for privileged accounts.  Azure PIM can secure privileged accounts by requiring Azure Multi-Factor Authentication (MFA), placing time limits on the granting of privileges (like “Just-in-Time Administration” in Windows Server 2016), detecting attacks, and allowing conditional access.  These controls reduce the attack surface and help maintain the principle of least privilege for Azure AD administrator accounts.

Cloud and Office 365 administration requires a paradigm shift when compared to Windows server administration.

When our team performs security assessments, we look for at least two and no more than five global (tenant) administrators.  Many customers make the mistake of giving the Power BI or Exchange administrator global admin rights when all their job requires is administrative rights to that specific workload.
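One quick way to check this during an assessment is to enumerate the members of the global administrator role with the MSOnline module, where that role appears under the name “Company Administrator”:

```powershell
# List global (tenant) administrators -- during an assessment we expect
# to see at least two and no more than five accounts in this output.
Connect-MsolService
$role = Get-MsolRole -RoleName 'Company Administrator'   # the global admin role
Get-MsolRoleMember -RoleObjectId $role.ObjectId |
    Select-Object DisplayName, EmailAddress
```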

If you have any questions or would like to schedule a security assessment for your organization, please contact us.

Related articles:

Office 365 vs. Your Information Security Program

Unraveling Office 365 Groups

Top Azure Consulting Companies

Below is a list of what I believe are the top Azure consulting companies in the DC metro area, including Maryland, Northern Virginia, and Washington D.C.  The ratings are based mostly on industry insider knowledge, including factors such as the satisfaction of known customers, consultant turnover, and experience with key 3rd party solutions from the Azure Marketplace.  Top companies do more than just migrate virtual machines to Azure; they offer services to transform applications using the power of the cloud.

The top Azure consulting companies are active speakers in local user groups, Microsoft Gold Partners, Cloud Solution Providers, and participate in Microsoft programs such as Azure Everywhere Assessments/Pilots/POC, Go Fast, and Software Assurance Planning Days.

Azure is new and rapidly evolving.  As Microsoft customers move from installing packaged software on servers to the cloud, new opportunities are created for partners, and trusted relationships with customers and vendors may change.  You should expect this list to be updated frequently.  The list below is in no particular order (so don’t email me to complain if your company is #9, we can still be friends).


Booz Allen Hamilton

These guys have a pretty large Microsoft practice focused primarily on the Federal government.  Their customers include most of the intelligence community and the Internal Revenue Service (IRS).  Dan Usher is as active in the Azure technical community as he is in the SharePoint community.

Accenture

These guys are large, national, and prefer to work on the biggest projects.  They have a large presence in the Federal government, including DHS, and the Fortune 1000.  Our team regularly sees Accenture consultants at local Azure user groups.

Xgility

Xgility started as a SharePoint consulting and application development company.  Over the last few years, we have recognized that the Microsoft cloud reduces the time to market and deploy technology solutions for our customers.  Recent projects include developing mobile field agent reporting apps, a proof of concept (POC) lab for a government customer, migrating virtual machines to Azure for a large trade association, and modernizing an ecommerce application for a major insurance company to take advantage of Platform as a Service.  Xgility is a Gold partner, CSP, GSA schedule holder, and a certified small business.

CSRA

CSRA traditionally has been a provider of general IT program management services.  In the past, they have hosted customers in their data center and implemented mostly private cloud (virtual server) environments.  Recently they have stepped up their commitment to Microsoft and to Azure specifically.  Our team has had the opportunity to work with them on past projects in the Federal government.

AIS

AIS, also known as Applied Information Sciences, grew out of small business status by performing mostly on government contracts.  While not as active in the user group community as some, they still have a good (mostly federal) customer base in the DC metro area.


Planet Technologies

Planet Technologies is a Microsoft partner headquartered in Montgomery County, MD.  They have a good presence in state and local government and also do federal work.  Patrick Curran, one of their consultants, is an active speaker in the Azure and SharePoint communities.



If you have implemented SaaS solutions like Office 365 and Salesforce, it may be time to evaluate moving the servers in your data center to a public cloud.  Have you worked with another Azure consulting company you really like?  Drop us a note.



Author:  Kurt Greening

Editor:  Alex Finkel