Darryl Mitchell

@darrmitchell

Lead Engineer @ NeoCloud working in cloud-based messaging and datacenter design/implementation

darrmit.me

5.7.700-749 Access denied, tenant has exceeded threshold

tl;dr: if you're getting this error, you've hit an automated threshold that Microsoft Support has to reset (or that expires on its own over time)

check your message trace logs for signs of abuse and go ahead and call support

I'm currently working on an ~800-user hybrid Exchange deployment, and due to some issues with the existing 2010 environment I decided to deploy an Exchange 2016 server to handle hybrid duties. Because the customer is switching from Mimecast inbound/outbound to EOP/ATP, I decided to route all messages from on-prem out through Office 365 immediately, rather than letting on-prem and Office 365 each route directly. Centralized routing reversed, basically.

Everything worked fine for 24 hours, but then I got frantic calls from the customer that they couldn't send or receive e-mail. Looking at the logs I saw rejected messages with the error code:

5.7.700-749 Access denied, tenant has exceeded threshold

I've done a ton of these deployments and never seen this error, but common sense told me that it was related to some sort of abuse prevention. I started running message traces and saw no indication that anyone had been phished. I double checked my connectors to make sure I hadn't accidentally created some sort of open relay on-prem that was abusing EOP as a smarthost. I couldn't find anything.
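
For anyone running the same check, here's roughly what I mean - a minimal sketch using Get-MessageTrace from Exchange Online PowerShell (results come back in pages, 5,000 max per page, so a very busy tenant needs paging logic):

# Pull the last 24 hours of message trace data
$messages = Get-MessageTrace -StartDate (Get-Date).AddDays(-1) -EndDate (Get-Date) -PageSize 5000

# Top senders by volume - a phished mailbox usually stands out here
$messages | Group-Object SenderAddress | Sort-Object Count -Descending | Select-Object -First 10 Count, Name

# Count of unique recipients - the metric that seems to trip the threshold
($messages | Select-Object -ExpandProperty RecipientAddress | Sort-Object -Unique).Count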

I called Microsoft and thankfully got a helpful engineer on the first try (not the norm, unfortunately). He immediately determined that, because we had gone from zero messages to over 1,000 unique recipients in a day, we had triggered an automated abuse threshold. He confirmed that suspicion via health checks on his side and used an internal script to reset the threshold, which restored mail flow within an hour or two.

The bad thing about this is that you're entirely at Microsoft's mercy. I was able to restore outbound mail flow for on-premises users by creating a new send connector, but the mailboxes in Office 365 could neither send nor receive, and even some inbound traffic to on-premises users was being rejected by EOP. If Microsoft had said they were going to wait 24 hours to fix the issue, I was prepared to offboard the few mailboxes we had already migrated back to on-prem, and even cut MX back over until we could figure out what was happening. Thankfully all of that was avoided.

So, pro tip: it may be better to scale your outbound traffic up slowly rather than going from zero to 1,000 unique recipients in a day - or to alert Microsoft in advance if you're planning to do it that way. And kudos to Microsoft for having Tier 1 support tools capable of fixing what was ultimately a simple false positive.

WinRM error 0x80338012 in Windows 10

While getting MFA working with my PowerShell connect script for Office 365, I ran into an issue where a WinRM command wasn't working on my machine. Apparently I had never set WinRM up before, so the command:

winrm get winrm/config/client/auth

was not working. I was getting an error:

WSManFault
Message = The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".

Error number:  -2144108526 0x80338012
The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".

I decided to try running "winrm quickconfig", and got an error:

WinRM is not set up to receive requests on this machine.
The following changes must be made:

Start the WinRM service.
Set the WinRM service type to delayed auto start.

Make these changes [y/n]? y

Error: One or more update steps could not be completed.

Could not change the WinRM service type: Access is denied.
Could not start the WinRM service: Access is denied.

I launched another PowerShell session, this time as administrator, and ran the same command with success. Now the result of winrm get winrm/config/client/auth looks much better:

Auth
    Basic = true
    Digest = true
    Kerberos = true
    Negotiate = true
    Certificate = true
    CredSSP = false
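
If you'd rather script the elevated step than open an admin window by hand, a one-liner like this works - it just relaunches PowerShell as administrator to run quickconfig non-interactively:

# Relaunch PowerShell elevated and run quickconfig without prompts
Start-Process powershell -Verb RunAs -ArgumentList 'winrm quickconfig -quiet'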

https://docs.microsoft.com/en-us/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell?view=exchange-ps

PowerShell Profile Syncing

I switch devices a lot - laptops, desktops, management VMs, etc. - and generally whatever device I'm on I'm using PowerShell extensively.

I predominantly used Mac/Linux from 2006-2017 and got used to certain tools - openssl, dig, whois, etc. - that I need in a shell, and Windows doesn't ship with them (even if it has some native "equivalents", like Resolve-DnsName). Ironically, I made the switch back to Windows specifically because, at the time, my most-used PowerShell module, MsOnline, wasn't showing any signs of making the jump to macOS. But then they switched to AzureAD. I digress...

It's easy enough to download these tools and create aliases in a PowerShell profile for them, but it makes switching between devices a pain. The solution I came up with is to store my PowerShell profile, downloaded utilities, and scripts in OneDrive (or Google Drive or Dropbox) and just point to that with a symlink on each machine.

By default, PowerShell stores its profile in the user's Documents folder, in a folder called "WindowsPowerShell". To begin, back up the existing profile by renaming it, which also frees up the name for the symlink (assuming you're already in ~/Documents/WindowsPowerShell):

mv .\Microsoft.PowerShell_profile.ps1 .\Microsoft.PowerShell_profile.ps1.old

Next, create a symlink in that location (~/Documents/WindowsPowerShell). Note that creating symlinks requires an elevated prompt unless Developer Mode is enabled:

New-Item -ItemType SymbolicLink -Target 'C:\Users\Darryl\OneDrive\Utilities\Microsoft.PowerShell_profile.ps1' -Name Microsoft.PowerShell_profile.ps1

Close and open PowerShell, and you should be able to use the profile that you symlinked to.
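
To confirm the symlink took, PowerShell 5+ exposes link info right on the file object:

# LinkType should read SymbolicLink and Target should point at the synced copy
Get-Item $PROFILE | Select-Object FullName, LinkType, Target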

One issue I've found is different machines can have different usernames - for example "darryl" vs. "darryl.mitchell". This means that statically setting paths inside the PowerShell profile won't work. Thankfully, there's a PowerShell environment variable for that: $env:USERPROFILE

So, when you're specifying aliases with paths (I have dig, whois, openssl, and some other functions/scripts) make sure to use that variable so you avoid issues with differing profile names. Example:

Set-Alias dig "$env:USERPROFILE\OneDrive\Utilities\bind9\dig.exe"
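
The rest of the profile follows the same pattern. A quick sketch - the tool paths and Scripts folder here are illustrative, so adjust them to however you lay out your synced folder:

# Aliases for Unix-style tools stored in the synced Utilities folder
Set-Alias whois "$env:USERPROFILE\OneDrive\Utilities\whois\whois.exe"
Set-Alias openssl "$env:USERPROFILE\OneDrive\Utilities\openssl\openssl.exe"

# Dot-source any helper scripts kept alongside the profile
Get-ChildItem "$env:USERPROFILE\OneDrive\Utilities\Scripts\*.ps1" | ForEach-Object { . $_.FullName }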

Using Microsoft Intune to push non-Windows Store apps

Mobile Device Management is quickly becoming a viable alternative to Group Policy in today's cloud-first world. What used to require a domain-joined machine with group policy can now be achieved with an MDM-enrolled machine and configuration or compliance policies.

Several things have made this possible: Microsoft overhauled Intune last year to make it part of the native Azure interface, recent Windows 10 builds shipped with an MDM agent built in, and Azure Active Directory join is taking the place of (or supplementing) legacy Active Directory.

I'll preface this by saying: Intune is powerful, and only getting more powerful by the day. It can configure endpoint encryption, Windows 10 updates, Office apps, LOB apps, and even run remote PowerShell scripts at this point. The latter piece REALLY unlocks a lot of potential, but I'd rather focus on what's natively possible today in the interface.

For this post, we'll focus on Google Chrome, which is fairly ubiquitous on corporate PCs today.

  1. To get started, you need to download the Google Chrome Enterprise bundle and unzip it.

  2. In the unzipped folder, go to the "Installers" folder. You'll see a "GoogleChromeStandaloneEnterprise" MSI file. You'll use this later when you upload the app to Intune.

  3. Open your Intune portal:

    • Go to "Mobile apps"
    • Then click "Apps"
    • Click "Add" at the top
  4. An "Add app" blade appears. Click the drop-down and select "Line-of-business" app at the very bottom.

    • Click "App package file" and select the "GoogleChromeStandaloneEnterprise" MSI file you downloaded earlier.
  5. Save and exit. Now, click the "App information" button.

    • Publisher: Google
    • Ignore app version: YES
    • Update anything else you want

  6. Click OK. The MSI file will begin uploading in the background.

Once the upload is complete, the only thing to do is decide how you want to assign the app. I assign Chrome to all devices, but you may have other apps that need to be restricted. For that, you can create device-specific groups in Azure AD to limit the scope.
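
If you'd rather script that last part, here's a sketch using New-AzureADMSGroup to create a dynamic device group - the display name and membership rule are illustrative, and depending on your module version this may require AzureADPreview:

# Dynamic group containing all Windows devices - scope app assignments to it
New-AzureADMSGroup -DisplayName "Intune - Windows Devices" `
    -MailEnabled $false -MailNickname "IntuneWindowsDevices" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(device.deviceOSType -eq "Windows")' `
    -MembershipRuleProcessingState "On"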

Working with Azure Network Security Groups

By default, when you deploy an Azure VM, a Network Security Group (NSG) is created with a set of default rules that allow vNet and Internet traffic and allow RDP from any source. This is fine for throwaway VMs and for getting immediate access to set things up, but it's not ideal for long-term production use.

In most of my customer use cases we're using Azure as a hybrid datacenter solution, so site-to-site connectivity is established. This makes it easy to narrow allowed traffic down to a specific subnet.

You can use PowerShell to quickly create a rule to do this. To get a list of the NSGs in your subscription, you can run this command:

Get-AzureRmNetworkSecurityGroup | select Name, ResourceGroupName

Once you have the desired NSG name and resource group, you can store the NSG object in a variable:

$nsg = Get-AzureRmNetworkSecurityGroup -Name "test-nsg" -ResourceGroupName "test-rg"

Now that you have the NSG stored in a variable, you can take a look at the default rules in effect for that NSG:

$nsg | select -Expand DefaultSecurityRules

One unique thing about NSGs is that, from a PowerShell perspective, they function sort of like a firewall/router: you "Add"/"Remove" rules and then commit the new ruleset using a "Set" command.

In the next command, we pipe in the NSG variable and add a new rule with a priority of 100 (the lowest number allowed, meaning it's evaluated first) that allows all traffic on any port from our on-premises subnet:

$nsg | Add-AzureRmNetworkSecurityRuleConfig `
-Name "LocalNetwork-AllowAll" `
-Description "Allows all traffic from local subnets" `
-Access Allow `
-Protocol * `
-Direction Inbound `
-Priority 100 `
-SourceAddressPrefix "10.0.0.0/24" `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange *

Now we commit that rule addition to the NSG:

$nsg | Set-AzureRmNetworkSecurityGroup

If you don't have a site-to-site tunnel, you could replace the SourceAddressPrefix with whatever public IP your traffic originates from (i.e. the address you see when you go to ipchicken.com).

Note that this is an "allow all" rule. It is literally allowing all traffic from that subnet into the Azure VM. Don't do this with SourceAddressPrefix "0.0.0.0/0" unless you want a compromised VM.

Let's say you did want to allow ICMP traffic publicly, but disallow all other TCP/UDP traffic. You could do that by creating explicit "deny" rules for both TCP and UDP with lower priority numbers (processed first) than an "any" rule that allows all remaining traffic (which would include ICMP). Example:

100 Deny TCP

101 Deny UDP

102 Allow Any
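
In PowerShell, that looks something like this - a sketch reusing the $nsg variable from above (the rule names are my own):

# Deny TCP and UDP from anywhere - lowest numbers, so evaluated first
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Deny-TCP" -Access Deny -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange *
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Deny-UDP" -Access Deny -Protocol Udp -Direction Inbound -Priority 101 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange *

# Allow whatever's left - effectively just ICMP after the denies above
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-Any" -Access Allow -Protocol * -Direction Inbound -Priority 102 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange *

# Commit the new ruleset
$nsg | Set-AzureRmNetworkSecurityGroup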

Server 2003 to Azure

I was tasked recently with migrating an entire datacenter off of VMware and on to Azure, and their production servers were predominantly Server 2003. Yes, seriously.

There is no documented process for migrating Server 2003 to Azure because Microsoft doesn't support running Server 2003 in Azure. But I was able to find some small tips here and there, and after many months (!) of testing I came up with a relatively foolproof process for getting Server 2003 VMs out to Azure. These are rough notes, so you'll need to fill in some gaps with your own Azure knowledge.

The first thing to know is that the Azure VM agent doesn't run on Server 2003, so you won't get any of the helpful reset-networking/reconfigure-RDP functionality you get with 2008 R2+ machines. That also means, since Azure lacks a console, you need to make absolutely certain your machine will be reachable over the network when it boots in Azure.

Import necessary PowerShell modules:

Import-Module AzureRM
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

Log in to your Azure account:

Login-AzureRmAccount

Log in to the 2003 VM and add a local admin account - this is critical because the domain will be unreachable when the VM boots in Hyper-V

Note: cached domain credentials can work, but only if you logged in recently

Download the VMDK from the VMware datastore - ensure you have enough space locally, as it will "expand" if it's thin-provisioned

Also note the -VhdType FixedHardDisk and -VhdFormat vhd parameters, as Azure only supports fixed VHDs - not dynamic disks or VHDX.

ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath .\Desktop\testserver.vmdk -DestinationLiteralPath 'C:\Users\Administrator\Desktop' -VhdType FixedHardDisk -VhdFormat vhd

Create new Hyper-V VM using converted VHD

  1. Launch Hyper-V
  2. New > Virtual Machine
  3. Set Name
  4. Generation 1
  5. Set RAM, Networking
  6. Use existing virtual disk - choose VHD (NOT VMDK)
  7. Finish and launch VM

Login to Hyper-V VM

  1. Install Hyper-V integration tools - gives mouse access
  2. Uninstall VMware tools
  3. Reboot
  4. Run Windows Updates
  5. Shut down - make sure it's a clean shutdown with no blue screen, because the "Add-AzureRmVhd" command you're about to run will check the filesystem for corruption

Set a resource group and destination for the uploaded VHD:

$rgname = "Test-RG"
$destination = "https://your.blob.core.windows.net/vhds/testserver.vhd"

Upload VHD:

Add-AzureRmVhd -ResourceGroupName $rgname -Destination $destination -LocalFilePath .\Desktop\testserver.vhd

Create global VM variables:

$location = "Central US"
$vmName = "myVM"
$osDiskName = 'myOsDisk'
$vnet = Get-AzureRmVirtualNetwork -Name "test-vnet" -ResourceGroupName $rgname

Create managed disk:

$osDisk = New-AzureRmDisk -DiskName $osDiskName -Disk (New-AzureRmDiskConfig -AccountType StandardLRS -Location $location -CreateOption Import -SourceUri $destination) -ResourceGroupName $rgname

Create security group and rule to allow all local traffic:

$nsgName = "myNsg"
$allowAllRule = New-AzureRmNetworkSecurityRuleConfig -Name "LocalNetwork-AllowAll" -Description "Allows all traffic from local subnets" -Access Allow -Protocol * -Direction Inbound -Priority 100 -SourceAddressPrefix "192.168.0.0/17" -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange *
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgname -Location $location -Name $nsgName -SecurityRules $allowAllRule

Create a network interface:
Make sure the subnet you want is actually $vnet.Subnets[0].Id - adjust the index if your vNet has multiple subnets

$nicName = "myNicName"
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgname -Location $location -SubnetId $vnet.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id

Create VM config:

$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize "Standard_A2"
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType StandardLRS -CreateOption Attach -Windows
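
Optionally, enable Boot Diagnostics on the config before deploying - handy given the console-less troubleshooting mentioned below. A sketch; the storage account name is illustrative and must already exist:

# Enable Boot Diagnostics so Azure captures console screenshots
$vm = Set-AzureRmVMBootDiagnostics -VM $vm -Enable -ResourceGroupName $rgname -StorageAccountName "mydiagstorage"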

Deploy VM config:

New-AzureRmVM -ResourceGroupName $rgname -Location $location -VM $vm

If your VM isn't reachable over the network within 5-10 minutes, you can check Boot Diagnostics in Azure to see where the boot process is - Windows logo screen, updating, etc.

Not an easy process by any stretch, but it brings together several hours of searching and notes, so it should point you in the right direction.

Simplenote Export

Simplenote provides an easy export, but the exported file names are actually Simplenote object identifiers of some kind. Not super useful.

Each exported note has its title stored as the first line of the file. You can use a simple PowerShell one-liner to recover your filenames.

  1. Download the .zip from Simplenote
  2. Extract it to a directory
  3. cd to the directory in PowerShell and run the following command:

ls | % { $name = (Get-Content $_.Name -First 1) + ".txt"; Rename-Item $_.Name -NewName $name }

As a bonus, if you want to strip the now-redundant title line that the export leaves at the top of each file, you can quickly remove it:

ls | % {(Get-Content $_.name | select-object -Skip 1) | Set-Content $_.name}

There may be a better way to do this - e.g. a single script or pipeline - but this will get the job done, and get it done quickly.