Microsoft Sentinel 101

Learning Microsoft Sentinel, one KQL error at a time

KQLCeption – use KQL to investigate Microsoft Sentinel — 24th Jan 2022

If you use a lot of cloud workloads, you will know it can be hard to track cost. Billing in the cloud can be volatile if you don’t keep on top of it, and bill shock is a real thing. While the large cloud providers offer granular billing information, it can still be difficult to track spend.

The unique thing about Sentinel is that it is a huge datastore of great information, which lets us write all kinds of queries against that data. We don’t need a third party cost management product; we have all the data ourselves. All we need to know is where to look.

It isn’t all about cost either. We can also detect changes to our data, such as finding new information that can be helpful, or detecting when expected data isn’t received.

Start by listing all your tables and their size over the last 30 days. Query adapted from this one.

union withsource=TableName1 *
| where TimeGenerated > ago(30d)
| summarize Entries = count(), Size = sum(_BilledSize) by TableName1, _IsBillable
| project ['Table Name'] = TableName1, ['Table Entries'] = Entries, ['Table Size'] = Size,
          ['Size per Entry'] = 1.0 * Size / Entries, ['IsBillable'] = _IsBillable
| order by ['Table Size'] desc

You will get an output of the table size for each table you have in your workspace. We can even see if it is free data or billable.

Now table size by itself may not have enough context for you. So to take it further, we can compare time periods. Say we want to view table size last week vs this week. We do that with the following query.

let lastweek=
union withsource=_TableName *
| where TimeGenerated > ago(14d) and TimeGenerated < ago(7d)
| summarize
    Entries = count(), Size = sum(_BilledSize) by Type
| project ['Table Name'] = Type, ['Last Week Table Size'] = Size, ['Last Week Table Entries'] = Entries, ['Last Week Size per Entry'] = 1.0 * Size / Entries
| order by ['Table Name']  desc;
let thisweek=
union withsource=_TableName *
| where TimeGenerated > ago(7d)
| summarize
    Entries = count(), Size = sum(_BilledSize) by Type
| project ['Table Name'] = Type, ['This Week Table Size'] = Size, ['This Week Table Entries'] = Entries, ['This Week Size per Entry'] = 1.0 * Size / Entries
| order by ['Table Name']  desc;
lastweek
| join kind=inner thisweek on ['Table Name']
| extend PercentageChange=todouble(['This Week Table Size']) * 100 / todouble(['Last Week Table Size'])
| project ['Table Name'], ['Last Week Table Size'], ['This Week Table Size'], PercentageChange
| sort by PercentageChange desc

We run the same query twice, over our two time periods, then join them together based on the name of the table. So we have our table, last week’s data size, then this week’s data size. Then, to make it even easier to read, we calculate the percentage change in size.

You could use this data and query to create an alert when tables increase or decrease in size. To reduce noise you can also filter on table size, not just percentage change, because a small table may increase in size by 500% and still be small. You could add the following to the query to achieve that.

| where ['This Week Table Size'] > 1000000 and PercentageChange > 110

Of course, it wouldn’t be KQL if you couldn’t visualize your log source data too. You could provide a summary of your top 15 log sources with the following.

union withsource=_TableName *
| where TimeGenerated > ago(30d)
| summarize LogCount=count() by Type
| sort by LogCount desc
| take 15
| render piechart with (title="Top 15 Log Sources")

You could go to an even higher level and look for new data sources, or tables not seen before. To find things that are new in our data, we use the join operator with a rightanti join. A rightanti join says: show me results from the second query (the right) that weren’t in the first (the left). The following query will return new tables from the last week, not seen in the prior 90 days.

union withsource=_TableName *
| where TimeGenerated > ago(90d) and TimeGenerated < ago(7d)
| distinct Type
| project-rename ['Table Name']=Type
| join kind=rightanti 
(
union withsource=_TableName *
| where TimeGenerated > ago(7d)
| distinct Type
| project-rename ['Table Name']=Type ) 
on ['Table Name']

Let’s have a closer look at that query to break it down; joining queries is one of the most challenging aspects of KQL to learn.

We run the first query (our left query), which finds all the table names from between 90 and 7 days ago. Then we choose our join type, in this case rightanti. Then we run the second query, which finds all the tables from the last 7 days. Finally, we choose the field to join the tables on, in this case Table Name. We tell KQL to only display items from the right (the second query) that don’t appear in the left (the first query). So: only show me table names that have appeared in the last 7 days that didn’t appear in the 90 days before. When we run it, we get our results.

We can flip this around too, and find tables that have stopped sending data in the last 7 days. Keep the same query and change the join type to leftanti: now we retrieve results from our first query that no longer appear in our second.

union withsource=_TableName *
| where TimeGenerated > ago(90d) and TimeGenerated < ago(7d)
| distinct Type
| project-rename ['Table Name']=Type
| join kind=leftanti  
(
union withsource=_TableName *
| where TimeGenerated > ago(7d)
| distinct Type
| project-rename ['Table Name']=Type ) 
on ['Table Name']

Logs not showing up? It could be expected if you have offboarded a resource, or you may need to investigate why data isn’t arriving. In fact, we can use KQL to calculate the last time a log arrived for each table in our workspace. We grab the most recent record using the max() aggregation function, then calculate how many days ago that was using datetime_diff().

union withsource=_TableName *
| where TimeGenerated > ago(90d)
| summarize ['Days Since Last Log Received']  = datetime_diff("day", now(), max(TimeGenerated)) by _TableName
| sort by ['Days Since Last Log Received'] asc 

Let’s go further. KQL has inbuilt forecasting ability. You can query historical data then have it forecast forward for you. This example looks at the prior 30 days, in 12 hour blocks. It then forecasts the next 7 days for you.

union withsource=_TableName *
| make-series ["Total Logs Received"]=count() on TimeGenerated from ago(30d) to now() + 7d step 12h
| extend ["Total Logs Forecast"] = series_decompose_forecast(['Total Logs Received'], toint(7d / 12h))
| render timechart 

It doesn’t need to be all about cost either. We can use similar queries to alert on new things we may otherwise miss. Take for instance the SecurityAlert table, where Microsoft security products like Defender or Azure AD Identity Protection write their alerts. Microsoft are always adding new detections, which are hard to keep on top of. We can use KQL to detect alerts that are new to our environment, that we have never seen before.

SecurityAlert
| where TimeGenerated > ago(180d) and TimeGenerated < ago(7d)
// Exclude alerts from Sentinel itself
| where ProviderName != "ASI Scheduled Alerts"
| distinct AlertName
| join kind=rightanti (
    SecurityAlert
    | where TimeGenerated > ago(7d)
    | where ProviderName != "ASI Scheduled Alerts"
    | summarize NewAlertCount=count() by AlertName, ProviderName, ProductName)
    on AlertName
| sort by NewAlertCount desc 

When we run this, any new alerts from the last week not seen prior are visible. To add some more context, we also count how many times we have had the alerts in the last week. We also bring back which product triggered the alert.

Microsoft and others add new detections so often it’s impossible to keep track of them all, so let KQL do the work for you. We can use similar queries across other data, such as OfficeActivity (your Office 365 audit traffic).

OfficeActivity
| where TimeGenerated > ago(180d) and TimeGenerated < ago(7d)
| distinct Operation
| join kind=rightanti (
    OfficeActivity
    | where TimeGenerated > ago(7d)
    | summarize NewOfficeOperations=count() by Operation, OfficeWorkload)
    on Operation
| sort by NewOfficeOperations desc 

For OfficeActivity we can bring back the Office workload so we know where to start looking.

Or Azure AD audit data.

AuditLogs
| where TimeGenerated > ago(180d) and TimeGenerated < ago(7d)
| distinct OperationName
| join kind=rightanti (
    AuditLogs
    | where TimeGenerated > ago(7d)
    | summarize NewAzureADAuditOperations=count() by OperationName, Category)
    on OperationName
| sort by NewAzureADAuditOperations desc 

For Azure AD audit data we can also return the category for some context.

I hope you have picked up some tricks on how to use KQL to provide insights into your data. You can query your own data the same way you would hunt threats: by looking for changes to log volume, or new data that could be interesting.

There are also some great workbooks provided by Microsoft and the community. These visualize a lot of similar queries for you. You should definitely check them out in your tenant.

Detecting privilege escalation with Azure AD service principals in Microsoft Sentinel — 4th Jan 2022

Defenders spend a lot of time worrying about the security of the user identities they manage: trying to stop phishing attempts, deploying MFA, restricting privilege, setting good passphrase policies and rolling out passwordless solutions. If you use Azure AD, there is another type of identity that is important to keep an eye on too – Azure AD service principals.

There is an overview of service principals here. Think about your regular user account: when you want to access Office 365, you have a user principal in Azure AD. You give that user access to SharePoint, Outlook and Teams, and when you sign in you get that access. Your applications are the same. They have a principal in Azure AD, called a service principal, which defines what your applications can access.

You won’t find a ‘create service principal’ button anywhere in the Azure AD portal, because there isn’t one. Yet you likely have plenty of service principals already in your tenant. So how do they get there? Well, in several ways.

So if we complete any of the following actions, we will end up with a service principal –

  1. Add an application registration – each time you register an application, for example to enable SSO for an application you are developing, or to integrate with Microsoft Graph, you will end up with both an application object and a service principal in your tenant.
  2. Install a third party OAuth application – if you install an app to your tenant, for instance an application in Microsoft Teams, a service principal is created for it.
  3. Install a template SAML application from the gallery – when you set up SSO with a third party SaaS product and deploy their gallery application to help, you get both an application object and a service principal in your tenant.
  4. Add a managed identity – each time you create a managed identity, you also create a service principal.

You may also have legacy service principals, created before the current app registration process existed.
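
You can also watch new service principals appear in your audit log. As a minimal sketch (assuming you ingest the AuditLogs table, which we cover later in this post), Azure AD records an ‘Add service principal’ operation each time one is created.

AuditLogs
// 'Add service principal' is logged whenever a service principal is created
| where OperationName == "Add service principal"
| extend ServicePrincipalName = tostring(TargetResources[0].displayName)
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| project TimeGenerated, ServicePrincipalName, Actor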

If you browse to Azure AD -> Enterprise applications, you can view them all. Are all these service principals a problem? Not at all, it is the way that Azure Active Directory works: it uses service principals to define access and permissions for applications. Service principals are in a lot of ways much more secure than the alternatives. Take a service principal for a managed identity – it can end the need for developers to use credentials. If you want an Azure virtual machine to access an Azure Key Vault, you can create a managed identity, which also creates a service principal in Azure AD. Then you assign the service principal access to your key vault. Your virtual machine identifies itself to the key vault, the key vault says ‘hey, I know this service principal has access to this key vault’ and gives it access. Much better than handling passwords and credentials in code.

In the case of a system assigned managed identity, the lifecycle of the service principal is also managed. If you create a managed identity for an Azure virtual machine then decommission the virtual machine, the service principal, and any access it has, is also removed.

Like any identity, we can grant service principals excess privilege. You could make a service account in on premises Active Directory a domain admin – you shouldn’t, but you can. Service principals are the same: we can assign all kinds of privilege in Azure AD and to Azure resources. So how can service principals get privilege, and what kind of privilege can they have? We can build on our visualization of how we created service principals, and add how they gain privilege.

So much like users, we can assign various access to service principals, such as –

  1. Assigned an Azure AD role – if we add them to roles such as Global Administrator or Application Administrator.
  2. Granted access to Microsoft Graph or another Microsoft API – if we add permissions like Directory.ReadWrite.All or Policy.ReadWrite.ConditionalAccess from Microsoft Graph, or other API access such as Defender ATP, Dynamics 365, or your own APIs.
  3. Granted Azure RBAC roles – if we add access such as owner rights to a subscription or contributor to a resource group.
  4. Given access to specific Azure workloads – such as being able to read secrets from an Azure Key Vault.

Service principals having privilege is not an issue in itself; in fact, they need privilege to do their jobs. If we want to SSO users to Azure AD, the service principal needs that access, and if we want to automate retrieving emails from a shared mailbox, we need to provide that access. But like users, we can assign incorrect or excessive privilege, which is then open to abuse. Explore the abuse of service principals in the following article from @DebugPrivilege, which shows how you can use the managed identity of a virtual machine to retrieve secrets from a key vault.

We can get visibility into any of these changes in Microsoft Sentinel. When we grant a service principal access to Azure AD or to Microsoft Graph, it is recorded in the Azure AD audit log, which we access via the AuditLogs table in Sentinel. For changes to Azure RBAC and specific Azure resources, we use the AzureActivity or AzureDiagnostics tables.

You can add Azure AD audit logs to your Sentinel instance via the Azure Active Directory connector under data connectors. This is a very useful table, but ingestion fees will apply.

For the sake of this blog, I have created a service principal called ‘Learn Sentinel’ using the app registration portal in Azure AD. We will now give privilege to that service principal, then detect the changes in Sentinel.

Adding Azure Active Directory Roles to a Service Principal

If we work through our list of how a service principal can gain privilege, we start with adding an Azure AD role. I have added the ‘Application Administrator’ role to my service principal using PowerShell, with the cmdlet below, where ObjectId is the Id of the role and RefObjectId is the object Id of the service principal. You can get the Ids of all the roles by first running Get-AzureADDirectoryRole.

Add-AzureADDirectoryRoleMember -ObjectId 67513fd7-cc60-456c-9cdd-c962c884fbdc -RefObjectId a0f399db-f358-429c-a743-735ab902fcbe

We track this activity under the action ‘Add member to role’ in our audit log, which is the same action you see when we add a regular user account to a role. There is a field, nested in the TargetResources data, that we can leverage to ensure our query only returns service principals –

If we complete our query, we can filter for only events where the type is “ServicePrincipal”

AuditLogs
| where OperationName == "Add member to role"
| extend ServicePrincipalType = tostring(TargetResources[0].type)
| extend ServicePrincipalObjectId = tostring(TargetResources[0].id)
| extend RoleAdded = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| extend ServicePrincipalName = tostring(TargetResources[0].displayName)
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ActorIPAddress = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| where ServicePrincipalType == "ServicePrincipal"
| project TimeGenerated, OperationName, RoleAdded, ServicePrincipalName, ServicePrincipalObjectId, Actor, ActorIPAddress

If we run our query we see the activity with the details we need. When the event occurred, what role, to which service principal, and who did it.

Everyone uses Azure AD in different ways, but this should not be a common event in most tenants, especially with high privilege roles such as Application, Privileged Authentication or Global Administrator. You should alert on any of these events. To see how the Application Administrator role can be abused, check out this blog post from @_wald0, which shows how you can leverage that role to escalate privilege.

Adding Microsoft Graph (or other API) access to a Service Principal

If you create service principals for integration with other Microsoft services like Azure AD or Office 365, you will need to add access to make them work. It is common for third party applications, or those you develop in house, to request access. It is important to only grant the access required.

For this example I have added

  • Policy.ReadWrite.ConditionalAccess (ability to read & write conditional access policies)
  • User.Read.All (read users full profiles)

to our same service principal.

When we add Microsoft Graph access to an app, the Azure AD Audit Log tracks the event as “Add app role assignment to service principal”. We can parse out the relevant information we want in our query to return the specifics. You can use this as the completed query to find these events, including the user that did it.

AuditLogs
| where OperationName == "Add app role assignment to service principal"
| extend AppRoleAdded = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| extend ActorIPAddress = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ServicePrincipalObjectId = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[3].newValue)))
| extend ServicePrincipalName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[4].newValue)))
| project TimeGenerated, OperationName, AppRoleAdded, ServicePrincipalName, ServicePrincipalObjectId, Actor, ActorIPAddress

When we run our query we see the events; even though I added both permissions together, we get two events.

Depending on how often you create service principals in your tenant, and who can grant access, I would alert on all these events to ensure that service principals are not granted excessive privilege. This query also covers other Microsoft APIs such as Dynamics or Defender, as well as your own APIs that you protect with Azure AD.

Adding Azure access to a Service Principal

We can grant service principals access to high level management scopes in Azure, such as subscriptions or resource groups. For instance, say you had an asset management system that you used to track your assets in Azure. It could use Azure AD for authentication and authorization: you would create a service principal for your asset management system, then give it read access to your subscriptions, and the application could then view all your assets in those subscriptions. We track these kinds of access changes in the AzureActivity log. This is a free table, so you should definitely ingest it.

For this example I have added our service principal as a contributor on a subscription and a reader on a resource group.

The AzureActivity log can be quite verbose, and the structure of the logs changes often. For permission changes we are after the OperationNameValue of “MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE”. When we look at the structure of some of the logs, we can see that we can filter on service principals, as opposed to users being granted access.

We can use this query to search for all events where a service principal was given access.

AzureActivity
| where OperationNameValue == "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE"
| extend ServicePrincipalObjectId = tostring(parse_json(tostring(parse_json(tostring(Properties_d.requestbody)).Properties)).PrincipalId)
| extend ServicePrincipalType = tostring(parse_json(tostring(parse_json(tostring(Properties_d.requestbody)).Properties)).PrincipalType)
| extend Scope = tostring(parse_json(tostring(parse_json(tostring(Properties_d.requestbody)).Properties)).Scope)
| extend RoleAdded = tostring(parse_json(tostring(parse_json(tostring(parse_json(Properties).requestbody)).Properties)).RoleDefinitionId)
| extend Actor = tostring(Properties_d.caller)
| where ServicePrincipalType == "ServicePrincipal"
| project TimeGenerated, RoleAdded, Scope, ServicePrincipalObjectId, Actor

We see our two events: the first when I added the service principal to the subscription, the second to the resource group. You can see the target under ‘Scope’.

You will notice a couple of things. The name of the role assigned (in this example, contributor and reader) isn’t returned; instead we see the role id (the final section of the RoleAdded field). You can find the list of mappings here. We are also only returned the object id of our service principal, not the friendly name. Unfortunately the friendly name isn’t contained in the logs, but this still alerts us to investigate.
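
If you want the friendly role name in the output, one option is to map the well known built-in role ids yourself with a small lookup table. A minimal sketch, seeded with the documented ids for Owner, Contributor and Reader (extend the datatable with any other roles you care about):

let RoleNames = datatable (RoleId: string, RoleName: string)
[
    "8e3af657-a8ff-443c-a75c-2fe8c4bcb635", "Owner",
    "b24988ac-6180-42a0-ab88-20f7382dd24c", "Contributor",
    "acdd72a7-3385-48ef-bd42-f606fba81ae7", "Reader"
];
AzureActivity
| where OperationNameValue == "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE"
| extend RoleAdded = tostring(parse_json(tostring(parse_json(tostring(Properties_d.requestbody)).Properties)).RoleDefinitionId)
// the role id is the final segment of the RoleDefinitionId path
| extend RoleId = tostring(split(RoleAdded, "/")[-1])
| lookup kind=leftouter RoleNames on RoleId
| project TimeGenerated, RoleId, RoleName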

When you assign access to a subscription or resource group, you may notice you have an option: either a user, group or service principal, or a managed identity.

The above query will find events for both service principals and managed identities, so you won’t need a separate one for managed identities.

Adding Azure workload access to a Service Principal

We can also grant our service principals access to specific Azure workloads, such as being able to read or write secrets in an Azure Key Vault. We will use that as our example below: I have given our service principal the ability to read and list secrets from a key vault.

We track this in the AzureDiagnostics table for Azure Key Vault. We can use the following query to track key vault changes.

AzureDiagnostics
| where ResourceType == "VAULTS"
| where OperationName == "VaultPatch"
| where ResultType == "Success"
| project-rename ServicePrincipalAdded=addedAccessPolicy_ObjectId_g,
    Actor=identity_claim_http_schemas_xmlsoap_org_ws_2005_05_identity_claims_name_s,
    AddedKeyPolicy = addedAccessPolicy_Permissions_keys_s,
    AddedSecretPolicy = addedAccessPolicy_Permissions_secrets_s,
    AddedCertPolicy = addedAccessPolicy_Permissions_certificates_s
| where isnotempty(AddedKeyPolicy) or isnotempty(AddedSecretPolicy) or isnotempty(AddedCertPolicy)
| project TimeGenerated, KeyVaultName=Resource, ServicePrincipalAdded, Actor, IPAddressofActor=CallerIPAddress, AddedSecretPolicy, AddedKeyPolicy, AddedCertPolicy

We find the service principal Id that we added, the key vault permissions added, the name of the vault and who did it.

We could add a service principal to many other Azure resources; Azure Storage, Key Vault and SQL are a few, and similar events should be available for them all.

Azure AD Service Principal Sign In Data

As well as audit data to track access changes, we can also view the sign in information for service principals and managed identities. Microsoft Sentinel logs these two types of sign ins in two separate tables: for regular service principals we query AADServicePrincipalSignInLogs, and for managed identity sign in data we look in AADManagedIdentitySignInLogs. You can enable both logs in the Azure Active Directory data connector. These should be low volume compared to regular sign in data, but fees will apply.

Service principal sign in logs aren’t as detailed as your regular user sign in data. These types of sign ins are non interactive and are instead accessing resources protected by Azure AD, so there are no fields for things like multifactor authentication. This makes the data easy to make sense of. If we look at a sign in for our test service principal, you will see the information you have available to you.

AADServicePrincipalSignInLogs
| project TimeGenerated, ResultType, IPAddress, ServicePrincipalName, ServicePrincipalId, ServicePrincipalCredentialKeyId, AppId, ResourceDisplayName, ResourceIdentity

We can see we get some great information. There are other fields available but for the sake of brevity I will only show a few.

We get a ResultType, much like a regular user sign in (0 = success), the IP address, the name of the service principal, then the Ids of pretty much everything, even the resource the service principal was accessing. We can summarize our data to see patterns for all our service principals, for instance by listing all the IP addresses each service principal has signed in from in the last month.


AADServicePrincipalSignInLogs
| where TimeGenerated > ago(30d)
| where ResultType == "0"
| summarize IPAddresses=make_set(IPAddress) by ServicePrincipalName, AppId

Conditional Access for workload identities was recently released for Azure AD. If your service principals sign in from the same IP addresses each time, then enforce that with conditional access. That way, if client secrets or certificates are lost and an attacker signs in from a new IP address, we will block it, much like conditional access for users. The above query will give you your baseline of IP addresses to start building policies.
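
Once a policy is in place you can also watch it work. As a minimal sketch (assuming the standard 53003 ‘blocked by Conditional Access’ error code, which applies to workload identity sign ins as well), this summarizes which service principals are being blocked and from where:

AADServicePrincipalSignInLogs
| where TimeGenerated > ago(7d)
// ResultType 53003 = access blocked by Conditional Access
| where ResultType == "53003"
| summarize BlockedSignIns=count(), IPAddresses=make_set(IPAddress) by ServicePrincipalName
| sort by BlockedSignIns desc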

We can also summarize the resources that each service principal has accessed. If you have service principals that can access many resources, such as Microsoft Graph, the Windows Defender ATP API and the Azure Service Management API, those service principals likely have a larger blast radius if compromised –

AADServicePrincipalSignInLogs
| where TimeGenerated > ago(30d)
| where ResultType == "0"
| summarize ResourcesAccessed=make_set(ResourceDisplayName) by ServicePrincipalName

We can use similar detection patterns for service principals as we do for users, for instance detecting when they sign in from a new IP address not seen for that service principal. This query alerts when a service principal signs in from a new IP address in the last week compared to the prior 180 days.

let timeframe = 180d;
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(timeframe) and TimeGenerated < ago(7d)
| distinct AppId, IPAddress
| join kind=rightanti
    (
    AADServicePrincipalSignInLogs
    | where TimeGenerated > ago(7d)
    | project TimeGenerated, AppId, IPAddress, ResultType, ServicePrincipalName
    )
    on AppId, IPAddress
| where ResultType == "0"
| distinct ServicePrincipalName, AppId, IPAddress

For managed identities we get a cut down version of the service principal sign in data. For instance, we don’t get IP address information, because managed identities are used ‘internally’ within Azure. But we can still track them in similar ways, for example by summarizing all the resources each managed identity accesses, such as Azure Key Vault, Azure Storage or Azure SQL. The higher the count, the higher the blast radius.

AADManagedIdentitySignInLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize ResourcesAccessed=make_set(ResourceDisplayName) by ServicePrincipalName

We can also detect when a managed identity accesses a new resource that it hadn’t accessed before. This query will return any managed identities that access resources they hadn’t in the prior 60 days. For example, if you have a managed identity that previously only accessed Azure Storage, then accesses an Azure Key Vault, this would find that event.


AADManagedIdentitySignInLogs
| where TimeGenerated > ago(60d) and TimeGenerated < ago(1d)
| where ResultType == "0"
| distinct ServicePrincipalId, ResourceIdentity
| join kind=rightanti (
    AADManagedIdentitySignInLogs
    | where TimeGenerated > ago(1d)
    | where ResultType == "0"
    )
    on ServicePrincipalId, ResourceIdentity
| distinct ServicePrincipalId, ServicePrincipalName, ResourceIdentity, ResourceDisplayName

Prevention, always better than detection.

As with anything, preventing issues is better than detecting them. The nature of service principals, though, is that they are always going to have some privilege; it is about reducing risk in your environment through least privilege.

  • Get to know your Azure AD roles and Microsoft Graph permissions, and assign only what you need. Avoid using roles like Global Administrator and Application Administrator. Limit permissions such as Directory.Read.All and Directory.ReadWrite.All, which are high privilege and should rarely be required. Azure AD roles can also be scoped to reduce privilege to only what is required.
  • Alert when service principals are assigned roles in Azure AD or granted access to Microsoft Graph using the queries above. Investigate whether the permissions are appropriate to the workload.
  • Make sure that any access granted to Azure management scopes or workloads is fit for purpose. Owner, contributor and user access administrator are all very high privilege.
  • Leverage Azure AD Conditional Access for workload identities. If your service principals sign in from a known set of IP addresses, then enforce that in policy.
  • Don’t be afraid to push back on third parties or internal developers about the privilege required to make their application work. The Azure AD and Microsoft Graph documentation is easy to read and understand and the permissions are very granular.

Microsoft Sentinel and the power of functions — 16th Dec 2021

Functions in Microsoft Sentinel are an overlooked and underappreciated feature in my experience. There is no specific Sentinel guidance provided by Microsoft on how to use them; however, they are covered more broadly under the Azure Monitor section of the Microsoft docs site. In general terms, they allow us to save queries to our Sentinel workspace, then invoke them by a simple name. Imagine you have written a really great query that looks for Active Directory group changes in Security Event logs, and your query also parses the data to make it look tidy and readable. Instead of having to re-use that same query over and over, you can save it to your workspace as a function, then simply refer to it when needed.

We are going to use the example of Active Directory group membership changes to show how you can speed up your queries using functions. First up we need to write our query. For anyone who has spent time looking at Security Event logs, the data can be a little inconsistent, and we sometimes need to parse the information we want from a large string of text. We also get a lot of information we may not care about. Perhaps we only care about what time the event occurred, who was added or removed from the group, the group name, and who made the change.

To do that we can use the following query –

SecurityEvent
| project TimeGenerated, EventID, AccountType, MemberName, SubjectUserName, TargetUserName, Activity, MemberSid
| where EventID in (4728,4729,4732,4733,4756,4757)
| where AccountType == "User"
| parse MemberName with * 'CN=' UserAdded ',OU=' *
| project TimeGenerated, UserWhoAdded=SubjectUserName, UserAdded, GroupName=TargetUserName, Activity

Now we get a nice sanitized output that is easy to read showing the data we care about.

Now we are happy with our query, let’s save it as a function so we can refer to it easily. Above your query you can see ‘Save’; if you click on that you will see an option for ‘Save as function’.

Choose a name for your function; what you call it here is what you will use to invoke it. You also choose a Legacy category to help categorize your functions.

So for this example we will call it ‘ADGroupChanges’, it will show you the code it is saving underneath, then hit Save. Give it a couple of minutes to become available to your workspace. When you start to type the name in your query window, you will notice it will now list the function as available to you. The little 3d rectangle icon highlights it as a function.

You can run ADGroupChanges on its own with no other input, and it will simply run the saved code for you and retrieve all the Active Directory group changes. Where you get real power from functions, though, is that you can continue to use your normal Kusto skills and operators against the function. You aren’t bound by only what is referenced in the function code, so you can do things like limit your query to the last hour.

ADGroupChanges
| where TimeGenerated > ago(1h)

This applies our function code then only retrieves the last hour of results. You can include all your great filtering operators like has and in. The below will search for changes in the last hour and also where the name of the group has “Sentinel” in it.

ADGroupChanges
| where TimeGenerated > ago(1h)
| where GroupName has "Sentinel"

Or if you are looking for actions from a particular admin you can search on the UserWhoAdded field.

ADGroupChanges
| where TimeGenerated > ago(1h)
| where UserWhoAdded has "admin123"

Of course you can do combinations of any of these. Such as finding any groups that admin123 added testuser123 to in the last 24 hours.

ADGroupChanges
| where TimeGenerated > ago(24h)
| where UserWhoAdded has "admin123" and UserAdded has "testuser123"

If you ever want to check what query sits under the function, just browse to ‘Functions’ on your workspace; they are listed under ‘Workspace functions’.

If you hover over your function name, a pop up will appear; just select ‘Load the function code’ and it will load into your query window for you.

If you want to update your function, just edit your query and save it again with the same name. That is what we are going to do now, by adding some more information to our query from the IdentityInfo table. Our Security Event log contains really only the basics of what we want to know, but maybe we want to enrich that with some better identity information. If we update our query to the below, where we join our UserAdded field to our IdentityInfo table, we can retrieve information from both, such as department, manager and location details.

SecurityEvent
| project TimeGenerated, EventID, AccountType, MemberName, SubjectUserName, TargetUserName, Activity, MemberSid
| where EventID in (4728,4729,4732,4733,4756,4757)
| where AccountType == "User"
| parse MemberName with * 'CN=' UserAdded ',OU=' *
| project TimeGenerated, UserWhoAdded=SubjectUserName, UserAdded, UserAddedSid=MemberSid, GroupName=TargetUserName, Activity
| join kind=inner(
IdentityInfo
| where TimeGenerated > ago (21d)
| summarize arg_max(TimeGenerated, *) by AccountUPN)
on $left.UserAdded==$right.AccountName
| project TimeGenerated, UserWhoAdded, UserWhoAddedUPN=AccountUPN, GroupName, Activity, UserAdded, EmployeeId, City, Manager, Department

Save over your function with that new code. Now we have the information from our Security Event table showing when changes occurred, plus our great identity information from our IdentityInfo table. We can use that to write some queries that wouldn’t otherwise be available to us because our Security Event logs simply don’t contain that information.

Looking for group changes from users within a particular department? We can query on that.

ADGroupChanges
| where TimeGenerated > ago(24h)
| where Department contains "IT Service Desk"

You can combine your queries to take information from both the tables we joined in our function. Say you are interested in querying for changes to a particular group name where the users are in a particular location; we can now do that having built our function.

ADGroupChanges
| where TimeGenerated > ago(250m)
| where GroupName == "testgroup123" and City contains "Seattle"

This will find changes to ‘testgroup123’ where the user added is from Seattle. Under the hood we look up group name from our Security Event table and City from our IdentityInfo table.

Functions are also really useful if you are trying to get other team members up to speed with Sentinel or KQL. Instead of needing them to do the parsing or data clean up in their own queries, you can build a function for them and have them stick to easier to understand operators like contains or has as a stepping stone to building out their skill set. KQL is a really intuitive language, but it can still be daunting to people who haven’t seen it before. Functions are also a great way to save yourself time: if you have spent ages building really great queries and parsers, save them for future use rather than trying to remember what you did previously. The ‘raw’ data will always be there if you want to go back and look at it; the function is just doing the hard work for us.

If you want a few more examples I have listed them here – they include a function to retrieve Azure Key Vault access changes, one to find all your domain controllers via various log sources, and a function to join identity info with both sign in logs and risk events.

Using Logic Apps and Microsoft Sentinel to alert on expiring Azure AD Secrets — 1st Dec 2021

Azure AD app registrations are at the heart of the Microsoft identity platform, and Microsoft recommends you rotate their secrets often. However, there is currently no native way to alert on secrets that are due to expire. An expired secret means the application will no longer authenticate, so you may have systems that fail when the secret expires. A number of people have come up with solutions, such as using Power Automate or deploying an app that you can configure notifications with. Another option is to use Logic Apps, KQL and Microsoft Sentinel to build a very low cost and lightweight solution.

We will need two Logic Apps for our automation. The first will query Microsoft Graph to retrieve information (including password expiry dates) for all our applications, then push that data into a custom table in Sentinel. Next we will write a Kusto query to find apps with secrets due to expire shortly. Finally, we will build a second Logic App to run our query on a schedule and email a table of the apps that need new secrets.

Both our Logic Apps are really simple, the first looks as follows –

The first part of this app connects to Microsoft Graph to retrieve a token for re-use. You can set your Logic App to run on a recurrence for whatever schedule makes sense; for my example I am running it daily, so we get updated application data into Sentinel each day. We will need the ClientID, TenantID and Secret for an Azure AD app registration with enough permission to retrieve application information using the ‘Get application’ action. From the documentation we can see that Application.Read.All is the lowest privilege access we will need to give our app registration.

Our first 3 actions just retrieve those credentials from Azure Key Vault. If you don’t use Azure Key Vault, you can add them as variables (or however you handle secrets management). Then we call MS Graph to retrieve a token.

Put your TenantID in the URI, and the ClientID and Secret in the body respectively. Then parse the JSON response using the following schema.

{
    "properties": {
        "access_token": {
            "type": "string"
        },
        "expires_in": {
            "type": "string"
        },
        "expires_on": {
            "type": "string"
        },
        "ext_expires_in": {
            "type": "string"
        },
        "not_before": {
            "type": "string"
        },
        "resource": {
            "type": "string"
        },
        "token_type": {
            "type": "string"
        }
    },
    "type": "object"
}

Now we have our token, we can re-use it against the applications endpoint on Microsoft Graph to retrieve the details of all our applications. Use an HTTP action again, this time with the GET method.

We connect to https://graph.microsoft.com/v1.0/applications?$select=displayName,appId,passwordCredentials

In this example we are just retrieving the display name of our app, the application id and the password credentials. The applications endpoint has much more detail though, if you want to include other data to enrich your logs.

One key note here is dealing with data paging in Microsoft Graph. MS Graph will only return a certain amount of data at a time, along with a link to retrieve the next set, and so on until you have retrieved all the results – and with Azure AD apps you are almost certainly going to have too many to retrieve at once. Thankfully, Logic Apps can deal with this natively: on your ‘Retrieve App Details’ action, click the three dots in the top right, choose settings, then enable Pagination so that it knows to loop through until all the data is retrieved.

Then we parse the response once more. If you are just using display name, app id and password credentials like I am, the schema for your JSON is.

{
    "properties": {
        "value": {
            "items": {
                "properties": {
                    "appId": {
                        "type": "string"
                    },
                    "displayName": {
                        "type": "string"
                    },
                    "passwordCredentials": {
                        "items": {
                            "properties": {
                                "customKeyIdentifier": {},
                                "displayName": {
                                    "type": [
                                        "string",
                                        "null"
                                    ]
                                },
                                "endDateTime": {
                                    "type": [
                                        "string",
                                        "null"
                                    ]
                                },
                                "hint": {
                                    "type": [
                                        "string",
                                        "null"
                                    ]
                                },
                                "keyId": {
                                    "type": [
                                        "string",
                                        "null"
                                    ]
                                },
                                "secretText": {},
                                "startDateTime": {
                                    "type": [
                                        "string",
                                        "null"
                                    ]
                                }
                            },
                            "required": [
                                "customKeyIdentifier",
                                "displayName",
                                "endDateTime",
                                "hint",
                                "keyId",
                                "secretText",
                                "startDateTime"
                            ],
                            "type": [
                                "object",
                                "null",
                                "array"
                            ]
                        },
                        "type": "array"
                    }
                },
                "required": [
                    "displayName",
                    "appId",
                    "passwordCredentials"
                ],
                "type": "object"
            },
            "type": "array"
        }
    },
    "type": "object"
}

The last step of our first Logic App is to send the data to Microsoft Sentinel using the Azure Log Analytics Data Collector connector. Take each value from your JSON and send it to Sentinel; because you will have lots of apps, it will loop through each one. For this example the logs will be sent to the AzureADApps_CL table.

You can then query your AzureADApps_CL table like you would any data, and you should see a list of your application display names, their app ids and any password credentials. If you are writing to that table for the first time, give it 20 or so minutes to appear. Now we need some KQL to find which secrets are expiring. If you followed this example along, you can use the following query –

AzureADApps_CL
| where TimeGenerated > ago(7d)
| extend AppDisplayName = tostring(displayName_s)
| extend AppId = tostring(appId_g)
| summarize arg_max (TimeGenerated, *) by AppDisplayName
| extend Credentials = todynamic(passwordCredentials_s)
| project AppDisplayName, AppId, Credentials
| mv-expand Credentials
| extend ExpiryDate = todatetime(Credentials.endDateTime)
| project AppDisplayName, AppId, Credentials, ExpiryDate
| where ExpiryDate between (now() .. now() + 30d)
| extend SecretName = tostring(Credentials.displayName)
| extend PasswordEndDate = format_datetime(ExpiryDate, 'dd-MM-yyyy [HH:mm:ss tt]')
| project AppDisplayName, AppId, SecretName, PasswordEndDate
| sort by AppDisplayName desc 

You should see an output of any applications that have a secret expiring in the next 30 days (if you have any).

Now we have our data, the second Logic App is simple. You just need it to run on whatever schedule you like (say weekly), run the query against Sentinel for you (using the Azure Monitor Logs connector), build a simple HTML table and email it to whoever wants to know.

If you use the same Kusto query that I have then the schema for your Parse JSON action is.

{
    "properties": {
        "value": {
            "items": {
                "properties": {
                    "AppDisplayName": {
                        "type": "string"
                    },
                    "AppId": {
                        "type": "string"
                    },
                    "PasswordEndDate": {
                        "type": "string"
                    },
                    "SecretName": {
                        "type": [
                            "string",
                            "null"
                        ]
                    }
                },
                "required": [
                    "AppDisplayName",
                    "AppId",
                    "SecretName",
                    "PasswordEndDate"
                ],
                "type": "object"
            },
            "type": "array"
        }
    },
    "type": "object"
}

Now that you have that data in Microsoft Sentinel you could also run other queries against it, such as seeing how many apps you have created or removed each week, or if applications have expired secrets and no one has requested a new one; they may be inactive and can be deleted.
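
For example, to spot applications that are new this week you can use a rightanti join against this custom table. A minimal sketch, assuming the AzureADApps_CL schema created above and daily collection:

AzureADApps_CL
| where TimeGenerated > ago(14d) and TimeGenerated < ago(7d)
| extend AppId = tostring(appId_g)
| distinct AppId
| join kind=rightanti (
    AzureADApps_CL
    | where TimeGenerated > ago(7d)
    | extend AppId = tostring(appId_g), AppDisplayName = tostring(displayName_s)
    | distinct AppId, AppDisplayName)
    on AppId

Any apps returned were logged in the last 7 days but not in the week before, so they are either newly registered or newly collected.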

Detecting multistage attacks in Microsoft Sentinel — 25th Nov 2021

For defenders, it would be amazing if every threat we faced was a single event or action we could detect: we would know that if x happened, we need to do y, and the threat was detected and prevented. Unfortunately, not every threat we face is a single event; it may be the combination of several low priority events that on their own don’t raise alarms, but when combined are an indicator of more malicious activity. For instance, you probably receive a lot of identity alerts that are considered low risk, such as users accessing via a new device or a new location; most are likely benign. If that same user then accessed SharePoint from a location not seen before, that may increase the risk level, and if that user then suddenly started downloading a lot of data, that may be really serious.

That pattern follows the MITRE ATT&CK framework where we may see initial access, followed by discovery then exfiltration. Thankfully we can build our own queries to hunt for these kinds of attacks. Microsoft also provide multistage protection via their fusion detections in Microsoft Sentinel.

We can send all kinds of data to Microsoft Sentinel: logs from on premises domain controllers or servers, Azure AD telemetry, logs from our endpoint devices, and whatever else you think is valuable. Microsoft Sentinel and the Kusto Query Language give us the ability to look for attacks that may span different sources. There are several ways to join datasets in KQL; this blog is going to focus on just the join operator. At its most basic, join allows us to combine data from different tables based on something that matches between them.

For instance, take our Azure AD sign in data, which is sent to the SigninLogs table, and our Office 365 audit logs, which are sent to the OfficeActivity table. We have various options for where we may find a match between these two tables, such as usernames and IP addresses. So we could join the two tables based on a username, and match Azure AD sign in data with Office 365 activity data belonging to the same user. Maybe a user signed into Azure AD from a location not previously seen for them, so we would be interested in what actions were taken in Office 365 after that sign in event.

When we join data in Microsoft Sentinel we have a lot of options. To keep things straightforward for this post, we are just going to use ‘inner’ joins, where we look for matches between multiple tables and return the combined data. Using our Azure AD and Office 365 example, after completing an inner join we would see the data from both tables available to us: location, conditional access results or user agent from the Azure AD table, and actions such as downloading files from OneDrive or inviting users to Teams from the Office 365 table. There are other types of joins, referenced in the documentation, but we will explore those in a future post. Learning to join tables was one of the things that confused me the most initially in KQL, but it provides immense value.
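
To make that concrete, here is a minimal sketch of that Azure AD to Office 365 join, using the standard SigninLogs and OfficeActivity columns. Note the username lives in UserPrincipalName on one side and UserId on the other, something we will come back to later in this post:

SigninLogs
| where TimeGenerated > ago(1d)
// usernames can differ in case between the two tables, so normalize before joining
| extend UserPrincipalName = tolower(UserPrincipalName)
| project SigninTime=TimeGenerated, UserPrincipalName, Location, IPAddress
| join kind=inner (
    OfficeActivity
    | where TimeGenerated > ago(1d)
    | extend UserId = tolower(UserId)
    | project OfficeTime=TimeGenerated, UserId, Operation, OfficeWorkload)
    on $left.UserPrincipalName == $right.UserId
| project SigninTime, OfficeTime, UserPrincipalName, Location, IPAddress, Operation, OfficeWorkload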

If we start with something simple, we can join our Azure AD sign in logs to our Azure AD Risk Events (held in the AADUserRiskEvents table), if we build a simple query and tell KQL to join the tables together, you will see it automatically tells us where there is a match in data.

The TimeGenerated, CorrelationId and UserPrincipalName fields exist in both tables. If we join on our CorrelationId, we can then see we get options to fill in our query from both tables.

Where the same column exists on both sides you will see it automatically renames one, seen with ‘CorrelationId1’. We can then finish our query with data from both tables.

SigninLogs
| project TimeGenerated, UserPrincipalName, AppDisplayName, ResultType, CorrelationId
| join kind=inner
(AADUserRiskEvents)
on CorrelationId
| project TimeGenerated, UserPrincipalName, CorrelationId, ResultType, DetectionTimingType, RiskState, RiskLevel

We get the TimeGenerated, UserPrincipalName, ResultType from Azure AD sign in data, and the DetectionTimingType, RiskState and RiskLevel from AADUserRiskEvents, and we use the CorrelationId to join them together.

We can use these basics as a foundation to start adding some more logic to our queries. In this next example we are looking for AADUserRiskEvents, and this time joining to our Azure AD Audit table (where Azure AD changes are tracked) looking for events where the same user who flagged a risk event also changed MFA details within a short time frame.

let starttime = 45d;
let timeframe = 4h;
AADUserRiskEvents
| where TimeGenerated > ago(starttime)
| where RiskDetail != "aiConfirmedSigninSafe"
| project RiskTime=TimeGenerated, UserPrincipalName, RiskEventType, RiskLevel, Source
| join kind=inner (
    AuditLogs
    | where OperationName in ("User registered security info", "User deleted security info")
    | where Result == "success"
    | extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName)
    | project SecurityInfoTime=TimeGenerated, OperationName, UserPrincipalName, Result, ResultReason)
    on UserPrincipalName
| project RiskTime, SecurityInfoTime, UserPrincipalName, RiskEventType, RiskLevel, Source, OperationName, ResultReason
| where (SecurityInfoTime - RiskTime) between (0min .. timeframe)

This query is a little more complex, but it follows the same pattern. First we set a couple of time variables: we are going to look back through 45 days of data, and we want to set a time frame of four hours between our events. If a risk event is triggered but the MFA event doesn’t occur for two weeks, the two are less likely to be linked than events happening close together. Next, we look up our AADUserRiskEvents, exclude anything that Microsoft dismisses as safe, and take the details we want to use in our second query: the UserPrincipalName, RiskEventType, RiskLevel and Source. We also take the TimeGenerated, but to make things simpler to understand we rename it to RiskTime, so that it is easy to distinguish later on.

Then to finish our query, we again inner join, this time to our AuditLogs table, looking for MFA registration or deletion events, and we join the tables together based on UserPrincipalName; that way we know the same user who flagged the risk event also changed MFA details. We rename the time of the second event to SecurityInfoTime to make our data easy to read. Finally, to add our time logic, we calculate the time between the two separate events and alert only when that time is less than four hours.

We can re-use this same pattern across all kinds of data, this query follows basically the exact same format, except we are looking for a risk event followed by access to an Azure management interface. If a user flagged a risk event, then within four hours signed into Azure, we would be alerted.

let starttime = 45d;
let timeframe = 4h;
let applications = dynamic(["Azure Active Directory PowerShell", "Microsoft Azure PowerShell", "Graph Explorer", "ACOM Azure Website"]);
AADUserRiskEvents
| where TimeGenerated > ago(starttime)
| where RiskDetail != "aiConfirmedSigninSafe"
| project RiskTime=TimeGenerated, UserPrincipalName, RiskEventType, RiskLevel, Source
| join kind=inner (
    SigninLogs
    | where AppDisplayName in (applications)
    | where ResultType == "0")
    on UserPrincipalName
| project-rename AzureSigninTime=TimeGenerated
| extend TimeDelta = AzureSigninTime - RiskTime
| project RiskTime, AzureSigninTime, TimeDelta, UserPrincipalName, RiskEventType, RiskLevel, Source
| where (AzureSigninTime - RiskTime) between (0min .. timeframe)

We can even have KQL calculate the time between two events so you can easily see the gap between them. You do this by simply extending a new column and having it calculate it for you (| extend TimeDelta = AzureSigninTime - RiskTime).

You can extend these queries across any data that makes sense, so we can again take a risk event, but this time join it to our Office 365 activity logs to find a list of files that a user has downloaded shortly after flagging that risk event.

let starttime = 45d;
let timeframe = 4h;
AADUserRiskEvents
| where TimeGenerated > ago(starttime)
| where RiskDetail != "aiConfirmedSigninSafe"
| project RiskTime=TimeGenerated, UserPrincipalName, RiskEventType, RiskLevel, Source
| join kind=inner (
    OfficeActivity
    | where Operation in ("FileSyncDownloadedFull", "FileDownloaded"))
    on $left.UserPrincipalName == $right.UserId
| project DownloadTime=TimeGenerated, OfficeObjectId, RiskTime, UserId
| where (DownloadTime - RiskTime) between (0min .. timeframe)
| summarize RiskyDownloads=make_set(OfficeObjectId) by UserId
| where array_length(RiskyDownloads) > 10

We use much the same query structure, but there are two things to note here. First, AADUserRiskEvents and OfficeActivity store username data in differently named columns, so we need to manually tell Microsoft Sentinel how to join them, which we do with “on $left.UserPrincipalName == $right.UserId”. We are telling KQL that the UserPrincipalName from our first table (AADUserRiskEvents) is the same as the UserId in our second table (OfficeActivity). Data coming in from different vendors, and even from Microsoft themselves, is wildly inconsistent, so you will need to provide the brain power to link them together. Second, in this example we also summarize the list of downloads the risky user made, and only alert when it is greater than 10 unique files.

These kinds of multistage queries don’t need to be limited to users or identity type events; you can use the same structure to query device data, or anything else that is relevant to you.

let timeframe = 48h;
SecurityAlert
| where ProviderName == "MDATP"
| project AlertTime=TimeGenerated, DeviceName=CompromisedEntity, AlertName
| join kind=inner (
DeviceLogonEvents
| where LogonType in ("Interactive", "RemoteInteractive")
| where ActionType == "LogonSuccess"
| where InitiatingProcessCommandLine == "lsass.exe"
| project TimeGenerated, LogonType, ActionType, InitiatingProcessCommandLine, IsLocalAdmin, AccountName, DeviceName
) on DeviceName
| where (AlertTime - TimeGenerated) between (0min .. timeframe)
| summarize arg_max(TimeGenerated, *) by DeviceName
| project LogonTime=TimeGenerated, AlertTime, AlertName, DeviceName, AccountName, IsLocalAdmin

In this last example, we take an alert from Microsoft Defender for Endpoint, then use that first event to circle back to our DeviceLogonEvents table, which tracks logon events on Windows devices. From there we can track down the most recent user to sign in to that device, and also determine whether they are a local administrator.

Keep an eye on your Azure AD guests with Microsoft Sentinel — 4th Nov 2021

Keep an eye on your Azure AD guests with Microsoft Sentinel

Azure AD External Identities (previously Azure AD B2B) is a fantastic way to collaborate with partners, customers or other people external to your company. Previously you may have needed to onboard an Active Directory account for each user, which came with a lot of inherent privilege, or you used different authentication methods for each of your applications and ended up juggling credentials for all these different systems. By leveraging Azure AD External Identities you start to wrestle back some of that control and, importantly, get really strong visibility into what these guests are doing.

You invite a guest to your tenant by sending them an email from within the Azure Active Directory portal (or directly inviting them in an app like Teams), they go through the process of accepting and then you have a user account for them in your tenant – easy!

If the user you invite to your tenant belongs to a domain that is also an Azure AD tenant, they can use their own credentials from that tenant to access resources in your tenant. If it’s a personal address like gmail.com, then the user will be prompted to sign up for a Microsoft account, or to use a one-time passcode if you have configured that option.

If you browse through your Azure AD environment and already have guests, you can filter to just guest accounts. If you don’t have guests, invite your personal email and you can check out the process.

You will notice that they have a unique UserPrincipalName format: if your guest’s email address is test123@gmail.com, then the guest object in your directory has the UserPrincipalName of test123_gmail.com#EXT#@YOURTENANT.onmicrosoft.com. This makes sense if you think about the concept of a guest account: it could belong to many different tenants, so it needs a unique UPN in yours. You can also see a few more details by clicking through to a guest account, such as whether an invite has been accepted or not; a guest who hasn’t accepted is still an object in your directory, they just can’t access any resources yet.
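As an aside, you can reverse that formatting in KQL if you ever need a guest’s original email address back. This is just a sketch, assuming the standard localpart_domain#EXT#@tenant format shown above.

SigninLogs
| where TimeGenerated > ago(30d)
| where UserPrincipalName contains "#ext#"
| extend LowerUPN = tolower(UserPrincipalName)
// the last underscore before #ext# is where the original @ sign was
| extend OriginalEmail = strcat(
    extract(@"^(.+)_([^_]+)#ext#", 1, LowerUPN),
    "@",
    extract(@"^(.+)_([^_]+)#ext#", 2, LowerUPN))
| distinct UserPrincipalName, OriginalEmail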

And if you click the view more arrow, you can see the source of the account.

You can see the difference between a user coming in from another Azure AD tenant vs a personal account.
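You can approximate that same split in the logs. SigninLogs carries a ResourceTenantId (your tenant) and a HomeTenantId (the tenant the account actually lives in), so comparing them hints at where a guest is coming from. This is a rough sketch: it assumes HomeTenantId is populated in your workspace, and the GUID used to spot personal accounts is the well-known Microsoft consumer tenant id, so double check both before relying on it.

SigninLogs
| where TimeGenerated > ago(30d)
| where UserType == "Guest"
| extend GuestSource = case(
    HomeTenantId == ResourceTenantId, "Member of this tenant",
    HomeTenantId == "9188040d-6c67-4c5b-b112-36a304b66dad", "Personal Microsoft account", // well-known consumer tenant id
    "External Azure AD tenant")
| summarize Signins=count() by UserPrincipalName, GuestSource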

It is really easy to invite guest accounts and then kind of forget about them, or not treat them with the same scrutiny or governance you would a regular account. They also have a tendency to grow in number very quickly, especially if you allow your staff to send invites themselves, via Teams or any other method.

Remember though, these accounts all have some access to your tenant: potentially data in Teams, OneDrive or SharePoint, and likely an app or two that you have granted access to, or more worryingly, apps that you haven’t specifically blocked them from accessing. Guests can even be granted Azure AD roles, or be given access to Azure resources via Azure RBAC.

Thankfully in Microsoft (no longer Azure!) Sentinel, all the signals we get from sign-in data, audit logs or Office 365 logs don’t discriminate between members and guests (apart from some personal information that is hidden for guests, such as device names), which makes it a really great platform to get insight into what your guests are up to (or what they are no longer up to).

Invites sent and redeemed are collected in the AuditLogs table, so if you want to quickly visualize how many invites you are sending vs those being redeemed you can.

//Visualizes the number of guest invites sent vs those redeemed
let timerange=180d;
let timeframe=7d;
AuditLogs
| where TimeGenerated > ago (timerange)
| where OperationName in ("Redeem external user invite", "Invite external user")
| summarize
    InvitesSent=countif(OperationName == "Invite external user"),
    InvitesRedeemed=countif(OperationName == "Redeem external user invite")
    by bin(TimeGenerated, timeframe)
| render columnchart
    with (
    title="Guest Invites Sent v Guest Invites Redeemed",
    ytitle="Invites",
    kind=unstacked)

You can look for users that have been invited, but have not yet redeemed their invite. Guest invites never expire, so if a user hasn’t accepted after a couple of months it may be worth removing the invite until a time they genuinely require it. In this query we exclude invites sent in the last month, as those people may have simply not got around to redeeming their invite yet.

//Lists guests who have been invited but not yet redeemed their invites. Excludes newly invited guests (last 30 days).
let timerange=180d;
let timeframe=30d;
AuditLogs
| where TimeGenerated between (ago(timerange) .. ago(timeframe)) 
| where OperationName == "Invite external user"
| extend GuestUPN = tolower(tostring(TargetResources[0].userPrincipalName))
| project TimeGenerated, GuestUPN
| join kind=leftanti (
    AuditLogs
    | where TimeGenerated > ago (timerange)
    | where OperationName == "Redeem external user invite"
    | where CorrelationId != "00000000-0000-0000-0000-000000000000"
    | extend d = tolower(tostring(TargetResources[0].displayName))
    | parse d with * "upn: " GuestUPN "," *
    | project TimeGenerated, GuestUPN)
    on GuestUPN
| distinct GuestUPN

For those users that have accepted and are actively accessing applications, we can see what they are accessing just like a regular user. You could break down all your apps and have a look at the split between guests and members for each application.

//Creates a list of your applications and summarizes successful signins by members vs guests
let timerange=30d;
SigninLogs
| where TimeGenerated > ago(timerange)
| project TimeGenerated, UserType, ResultType, AppDisplayName
| where ResultType == "0"
| summarize
    MemberSignins=countif(UserType == "Member"),
    GuestSignins=countif(UserType == "Guest")
    by AppDisplayName
| sort by AppDisplayName  

You can quickly see which users haven’t signed in over the last month, having signed in successfully in the preceding 6 months.

let timerange=180d;
let timeframe=30d;
SigninLogs
| where TimeGenerated > ago(timerange)
| where UserType == "Guest" or UserPrincipalName contains "#ext#"
| where ResultType == "0"
| summarize arg_max(TimeGenerated, *) by UserPrincipalName
| join kind=leftanti
    (
    SigninLogs
    | where TimeGenerated > ago(timeframe)
    | where UserType == "Guest" or UserPrincipalName contains "#ext#"
    | where ResultType == "0"
    | summarize arg_max(TimeGenerated, *) by UserPrincipalName
    )
    on UserPrincipalName
| project UserPrincipalName

Or you could even summarize all your guests (who have signed in at least once) into the month they last accessed your tenant. You could then bulk disable/delete anything over 3 months or whatever your lifecycle policy is.

//Month by month breakdown of when your Azure AD guests last signed in
SigninLogs
| where TimeGenerated > ago (360d)
| where UserType == "Guest" or UserPrincipalName contains "#ext#"
| where ResultType == "0"
| summarize arg_max(TimeGenerated, *) by UserPrincipalName
| project TimeGenerated, UserPrincipalName
| summarize InactiveUsers=make_set(UserPrincipalName) by Month=startofmonth(TimeGenerated)

You could look at guest accounts that are trying to access your applications but are being denied because they aren’t assigned a role; this could potentially be some reconnaissance occurring in your environment.

SigninLogs
| where TimeGenerated > ago(30d)
| where UserType == "Guest"
| where ResultType == "50105"
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, Location, UserAgent

We can leverage the IdentityInfo table to find any guests that have been assigned Azure AD roles. If your security controls for guests are weaker than those for your member accounts, this is something you definitely want to avoid.

IdentityInfo
| where TimeGenerated > ago(21d)
| summarize arg_max(TimeGenerated, *) by AccountUPN
| where UserType == "Guest"
| where AssignedRoles != "[]" 
| where isnotempty(AssignedRoles)
| project AccountUPN, AssignedRoles, AccountObjectId

We can use our IdentityInfo table again to grab a list of all our guests, then join it to our OfficeActivity table to summarize download activity by each of your guests.

//Summarize the total count and the list of files downloaded by guests in your Office 365 tenant
let timeframe=30d;
IdentityInfo
| where TimeGenerated > ago(21d)
| where UserType == "Guest"
| summarize arg_max(TimeGenerated, *) by AccountUPN
| project UserId=tolower(AccountUPN)
| join kind=inner (
    OfficeActivity
    | where TimeGenerated > ago(timeframe)
    | where Operation in ("FileSyncDownloadedFull", "FileDownloaded")
    )
    on UserId
| summarize DownloadCount=count(), DownloadList=make_set(OfficeObjectId) by UserId

If you wanted to summarize which domains are downloading the most data from Office 365 then you can slightly alter the above query (thanks to Alex Verboon for this suggestion).

//Summarize the total count of files downloaded by each guest domain in your tenant
let timeframe=30d;
IdentityInfo
| where TimeGenerated > ago(21d)
| where UserType == "Guest"
| summarize arg_max(TimeGenerated, *) by AccountUPN, MailAddress
| project UserId=tolower(AccountUPN), MailAddress
| join kind=inner (
    OfficeActivity
    | where TimeGenerated > ago(timeframe)
    | where Operation in ("FileSyncDownloadedFull", "FileDownloaded")
    )
    on UserId
| extend username = tostring(split(UserId,"#")[0])
| parse MailAddress with * "@" userdomain 
| summarize count() by userdomain

You can find guests who were added to a Team then instantly started downloading data from your Office 365 tenant.

// Finds guest accounts who were added to a Team and then downloaded documents straight away. 
// starttime = how far back to look, timeframe = how soon after being added to the Team we look for downloads
let starttime = 7d;
let timeframe = 2h;
let operations = dynamic(["FileSyncDownloadedFull", "FileDownloaded"]);
OfficeActivity
| where TimeGenerated > ago(starttime)
| where OfficeWorkload == "MicrosoftTeams" 
| where Operation == "MemberAdded"
| extend UserAdded = tostring(parse_json(Members)[0].UPN)
| where UserAdded contains ("#EXT#")
| project TimeAdded=TimeGenerated, UserId=tolower(UserAdded)
| join kind=inner
    (
    OfficeActivity
    | where Operation in (operations)
    )
    on UserId
| project DownloadTime=TimeGenerated, TimeAdded, SourceFileName, UserId
| where (DownloadTime - TimeAdded) between (0min .. timeframe)

I think the key takeaway is that basically all the threat hunting queries you write for your standard accounts are most likely relevant to guests, and in some cases even more relevant. While having guests in your tenant grants you some control and visibility, a guest is still an account not entirely under your management. The accounts could have poor passwords, or be shared amongst people, or, if coming from another Azure AD tenancy, could have poor lifecycle management, i.e. the person may have left the other company but their account is still active.

As always, prevention is better than detection, and depending on your licensing tier there are some great tools available to govern these accounts.

You can configure guest access restrictions in the Azure Active Directory portal. Keep in mind when configuring these options the flow on effect to other apps, such as Teams. In that same portal you can configure who is allowed to send guest invites, I would particularly recommend you disallow guests inviting other guests. You can also restrict or allow specific domains that invites can be sent to.
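If you do leave guest-to-guest invites enabled, or just want to check whether it is already happening, a sketch like the below should surface it. The InitiatedBy parsing is an assumption based on the usual shape of the invite event, so verify it against your own AuditLogs.

AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Invite external user"
// an inviter whose own UPN contains #ext# is themselves a guest
| extend Inviter = tostring(InitiatedBy.user.userPrincipalName)
| where Inviter contains "#ext#"
| project TimeGenerated, Inviter, InvitedGuest = tostring(TargetResources[0].userPrincipalName)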

On your enterprise applications, make sure you have assignment required set to Yes.

This is crucial in my opinion, because it allows Azure AD to be the first ‘gate’ to accessing your applications. The access control in your various applications is going to vary wildly: some may need an account set up on the application itself to let people in, some may auto-create an account on first sign-in, and some may have no access control at all and simply admit anyone who arrives with an Azure AD sign-in. If this is set to No and your applications don’t perform their own access control or RBAC, then there is a good chance your guests will be allowed in, as they come through as authenticated from Azure AD much like a member account.

If you are an Azure AD P2 customer, then you have access to Access Reviews, which is an already great and constantly improving offering that lets you automate a lot of the lifecycle of your accounts, including guests. You can also look at leveraging Entitlement Management which can facilitate granting guests the access they require and nothing more.

If you have Azure AD P1 or P2, use Azure AD Conditional Access, you can target policies specifically at guest accounts from within the console.

You can enforce MFA on your guest accounts like you would all other users – if you enforce MFA on an application for guests, the first time they access it they will be redirected to the MFA registration page. You can also explicitly block guests from particular applications using conditional access.
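One way to sanity check that those policies are actually biting is the AuthenticationRequirement field on successful guest sign-ins, which records whether a sign-in was single factor or multi factor. A rough sketch:

SigninLogs
| where TimeGenerated > ago(30d)
| where UserType == "Guest"
| where ResultType == "0"
| summarize SigninCount=count() by AppDisplayName, AuthenticationRequirement
| sort by SigninCount desc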

Also, unrelated: I recently kicked off a #365daysofkql challenge on my Twitter, where I share a query a day for a year. We are nearly one month in, so if you want to follow along, feel free.