Microsoft Sentinel 101

Learning Microsoft Sentinel, one KQL error at a time

CrowdStrike Falcon, Defender for Endpoint and Azure Sentinel. — 19th Aug 2021

Remember when antivirus software was the cause of every problem on devices? Workstation running slow? Disable AV. Server running slow? Add a heap of exclusions. Third party app not working? More exclusions. The thought of running multiple antivirus products on an endpoint was outrageous, and basically every vendor told you explicitly not to do it. Thankfully times change; due to a combination of smarter endpoint security products, more powerful computers and a willingness from Microsoft to work alongside other vendors, that is no longer the case. Defender for Endpoint now happily sits in ‘passive mode’ behind other products like CrowdStrike Falcon, while still sending great data and integrating into apps like Cloud App Security, and you can connect M365 to Sentinel with a native connector.

So if you are paying for a non-Microsoft product like CrowdStrike or Carbon Black, you probably don’t want to send all the data from those products to Azure Sentinel as well, because a) you are already paying your endpoint security vendor for that privilege, b) that product may be managed by the vendor themselves or a partner, and c) even if you manage it yourself, the quality of the native tooling in those products is part of the reason you pay the money for it, and it doesn’t make a lot of sense to lift every event out of there into Sentinel and reinvent the wheel.

What we can do though is send some low volume, but high quality, data into Sentinel to jump-start further investigations or automations based on the other data we have in there – the logs from Defender for Endpoint in passive mode, the SecurityAlert table from things like Azure Security Center or Defender for ID, Azure AD sign in logs and so on. So for CrowdStrike, in this example, we are just going to send a webhook to Sentinel each time a detection is found, then ingest it into a custom table using a simple Logic App so we can expand our hunting. Hopefully you don’t get too many detections, so this data will cost basically nothing.

On the Azure Sentinel side we first create a new Logic App with the ‘When a HTTP request is received’ trigger; once you save it you will be given your webhook URL. Grab that address, then head over to CrowdStrike and create your notification workflow, which is a simple process outlined here.

For the actions, we are just going to call our webhook and send the following data on each new detection.
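
If you are unsure exactly what will arrive, a representative body (the values here are purely illustrative; the field names match the parse schema further down) looks something like this –

{
    "data": {
        "detections.severity": "high",
        "detections.tactic": "Defense Evasion",
        "detections.technique": "PowerShell",
        "detections.url": "https://falcon.crowdstrike.com/activity/detections/detail/<detection-id>",
        "detections.user_name": "jsmith",
        "devices.domain": "YOURDOMAIN",
        "devices.hostname": "WORKSTATION01"
    },
    "meta": {
        "event_reference_url": "https://falcon.crowdstrike.com/workflow/<workflow-id>",
        "timestamp": 1629350400,
        "trigger_name": "Send detections to Sentinel",
        "workflow_id": "<workflow-id>"
    }
}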

Now each time a detection is created in CrowdStrike Falcon, it will send the data to our Logic App. The last part is to configure the Logic App to push that data to Azure Sentinel, which we do with three quick actions. First, we parse the inbound JSON from CrowdStrike; if you are using the same data as me then the schema for this is –

{
    "properties": {
        "data": {
            "properties": {
                "detections.severity": {
                    "type": "string"
                },
                "detections.tactic": {
                    "type": "string"
                },
                "detections.technique": {
                    "type": "string"
                },
                "detections.url": {
                    "type": "string"
                },
                "detections.user_name": {
                    "type": "string"
                },
                "devices.domain": {
                    "type": [
                        "string",
                        "null"
                    ]
                },
                "devices.hostname": {
                    "type": "string"
                }
            },
            "type": "object"
        },
        "meta": {
            "properties": {
                "event_reference_url": {
                    "type": "string"
                },
                "timestamp": {
                    "type": "integer"
                },
                "trigger_name": {
                    "type": "string"
                },
                "workflow_id": {
                    "type": "string"
                }
            },
            "type": "object"
        }
    },
    "type": "object"
}

Then we compose a new JSON payload where we change the column headers to something a little easier to read, and send that data to Sentinel using the ‘Send data’ action – so our entire ingestion playbook is just four steps.
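
As a sketch, assuming the Parse JSON step above is named Parse_JSON (rename the references if yours differs), the Compose action just remaps the fields – the _s suffixes you see in later queries are appended by Log Analytics at ingestion –

{
    "Hostname": "@{body('Parse_JSON')?['data']?['devices.hostname']}",
    "AlertSeverity": "@{body('Parse_JSON')?['data']?['detections.severity']}",
    "Technique": "@{body('Parse_JSON')?['data']?['detections.technique']}",
    "Username": "@{body('Parse_JSON')?['data']?['detections.user_name']}",
    "AlertLink": "@{body('Parse_JSON')?['data']?['detections.url']}"
}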

You can create test detections by following the CrowdStrike support article. You should see alerts start to flow into your CrowdStrikeAlerts_CL table.

Once you have some data in there you can start visualizing trends, such as what types of techniques are being seen –

CrowdStrikeAlerts_CL
| summarize count() by Technique_s
| render piechart

The real value of getting these detections into Sentinel is leveraging all the other data already in there, and then automating response. One of the simplest things is to take the username from your alert and join it to your IdentityInfo table (powered by UEBA) to find out some more information about the user. Your Azure AD identity information is highly likely to be of greater quality than almost anywhere else. So grab your alert, then join it to your identity table, grabbing the most recent record for the user –

CrowdStrikeAlerts_CL
| project Hostname_s, AlertSeverity_s, Technique_s, Username_s, AlertLink_s
| join kind=inner
(
IdentityInfo
| where TimeGenerated > ago(21d)
| summarize arg_max(TimeGenerated, *) by AccountName
)
on $left.Username_s == $right.AccountName
| project Hostname_s, AlertSeverity_s, Technique_s, Username_s, AccountUPN, Country, EmployeeId, Manager, AlertLink_s

Now on our alerts we get not only the info from CrowdStrike but the information from our IdentityInfo table, so where the user is located, their UPN, manager and whatever else we want.

We can use the DeviceLogonEvents table from Defender to find out if the user is a local admin on that device. You may want to prioritize those detections, because there is a greater chance of damage being done and lateral movement when the user is an admin –

let csalert=
CrowdStrikeAlerts_CL
| where TimeGenerated > ago (1d)
| project HostName=Hostname_s, AccountName=Username_s, Technique_s, AlertSeverity_s;
DeviceLogonEvents
| where TimeGenerated > ago (1d)
| join kind=inner csalert on AccountName, $left.DeviceName == $right.HostName
| where LogonType == "Interactive"
| where InitiatingProcessFileName == "lsass.exe"
| summarize arg_max(TimeGenerated, *) by DeviceName
| project TimeGenerated, DeviceName, IsLocalAdmin

If a user is flagged using suspicious PowerShell, we can grab the alert, then find any PowerShell events in a 30 minute window (15 minutes either side of the alert). When you are joining different tables you just need to check how each table references your device names; you may need to trim or adjust the naming so they match up. You can use the tolower() function to drop everything to lower case, and something like trim_end(@".yourdomain.com", DeviceName) to remove a trailing domain name in order to match (there is a short sketch of this after the next query).

let csalert=
CrowdStrikeAlerts_CL
| where TimeGenerated > ago(1d)
| extend AlertTime = TimeGenerated
| where Technique_s == "PowerShell"
| project AlertTime, Hostname_s, AlertSeverity_s, Technique_s, Username_s;
DeviceProcessEvents
| where TimeGenerated > ago(1d)
| join kind=inner csalert on $left.DeviceName == $right.Hostname_s
| where InitiatingProcessFileName contains "powershell"
| where TimeGenerated between ((AlertTime - 15min) .. (AlertTime + 15min))
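
As a quick sketch of that normalization (yourdomain.com is a placeholder, and note trim_end takes a regex, so the dots are escaped), you would add something like this to the Defender side before the join –

DeviceProcessEvents
| extend DeviceName = tolower(trim_end(@"\.yourdomain\.com", DeviceName))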

We can look up the device which flagged a CrowdStrike detection and see if it has been flagged elsewhere in the SecurityAlert table, maybe by Defender for ID or another product you have. Again, just check the structure of your various tables – the naming may not be exactly the same, but you can use trim and the other string functions to line them up.

let csalert=
CrowdStrikeAlerts_CL
| where TimeGenerated > ago(4d)
| project Hostname_s, AlertSeverity_s, Username_s
| join kind=inner (
IdentityInfo
| where TimeGenerated > ago(21d)
| summarize arg_max(TimeGenerated, *) by AccountName)
on $left.Username_s == $right.AccountName
| project Hostname_s, AlertSeverity_s, Username_s, AccountUPN;
SecurityAlert
| where TimeGenerated > ago (7d)
| join kind=inner csalert on $left.CompromisedEntity == $right.Hostname_s

And the same for user alerts, possibly from your identity products like Azure AD Identity Protection or Cloud App Security. We can use our identity table to make sense of the different types of usernames these products use – CrowdStrike or your AV may use the samAccountName, whereas Cloud App Security uses the userPrincipalName, for instance.

let csalert=
CrowdStrikeAlerts_CL
| where TimeGenerated > ago(4d)
| project Hostname_s, AlertSeverity_s, Username_s
| join kind=inner (
IdentityInfo
| where TimeGenerated > ago(21d)
| summarize arg_max(TimeGenerated, *) by AccountName)
on $left.Username_s == $right.AccountName
| project Hostname_s, AlertSeverity_s, Username_s, AccountUPN;
SecurityAlert
| where TimeGenerated > ago (7d)
| join kind=inner csalert on $left.CompromisedEntity == $right.AccountUPN

It’s great being alerted to things and having information available to investigate, but sometimes an alert is of a high enough priority that you want to respond to it automatically. For CrowdStrike, the team have built a few playbooks we can leverage, which are located here. The three we are interested in are CrowdStrike_base, which handles authentication to their API, CrowdStrike_Enrichment_GetDeviceInformation, which retrieves host information about a device, and finally CrowdStrike_ContainHost, which will network contain a device for us. This playbook works by retrieving the hostname from the Sentinel entity mapping, searching CrowdStrike for a matching asset and containing it. Deploy the base playbook first, because the other two depend on it to access the API. You will also need an API key from your CrowdStrike tenant with enough privilege.

Once deployed you can either require someone to run the playbook manually or automate it entirely. For alerts that come in from CrowdStrike or other AV products there is a good chance you already have rules set up to determine the response to detections. However, we can use the same playbook to contain devices that we find when hunting through log data that CrowdStrike doesn’t see. For instance, Defender for ID is going to be hunting for different threats than an endpoint security product – CrowdStrike may not generally care about domain recon, or may not detect pass the hash type activity, but Defender for ID definitely will. If we want to network contain based on domain recon flagged by Defender for ID, we parse out the entities from the alert, then trigger our playbook based on that. We want to exclude our domain controllers from the entities, because they are the target of the attack and we don’t want to contain those – but we do want to contain the endpoint initiating the behaviour.

SecurityAlert
| where ProviderName contains "Azure Advanced Threat Protection"
| where AlertName contains "reconnaissance"
| extend EntitiesDynamicArray = parse_json(Entities)
| mv-expand EntitiesDynamicArray
| extend EntityType = tostring(EntitiesDynamicArray.Type), EntityAddress = tostring(EntitiesDynamicArray.Address), EntityHostName = tostring(EntitiesDynamicArray.HostName)
| extend HostName = iif(EntityType == 'host', EntityHostName, '')
| where HostName !contains "ADDC" and isnotempty(HostName)
| distinct HostName, AlertName, VendorOriginalId, ProviderName

You can also grab identity alerts, such as ‘Mass Download’, look up your DeviceLogonEvents table to find the machine most recently used by the person who triggered the alert, then isolate the host based off that. Our SecurityAlert table uses the userPrincipalName and our DeviceLogonEvents uses the old style username, so we again use our IdentityInfo table to piece them together.

let alert=
SecurityAlert
| where AlertName has "Mass Download"
| project CompromisedEntity
| join kind=inner 
(
IdentityInfo
| where TimeGenerated > ago (21d)
| summarize arg_max (TimeGenerated, *) by AccountUPN
)
on $left.CompromisedEntity == $right.AccountUPN
| project CompromisedEntity, AccountUPN, AccountName;
DeviceLogonEvents
| where TimeGenerated > ago (1d)
| join kind=inner alert on AccountName
| where LogonType == "Interactive"
| where InitiatingProcessFileName == "lsass.exe"
| summarize arg_max(TimeGenerated, *) by DeviceName
| project DeviceName, CompromisedEntity, AccountName

Most identity driven alerts from Cloud App Security or Azure AD Identity Protection won’t actually have the device name listed, so we leverage our other data to go and find it. Now that we have the device name our user last logged onto for our ‘Mass Download’ events, we can isolate the machine, or at the very least investigate further. Of course the device we found may not necessarily be the one that flagged the alert – but you may want to play it safe and contain it anyway, while also responding to the identity side of the alert.

Detecting anomalies unique to your environment with Azure Sentinel — 13th Aug 2021

One of the lesser known and more interesting operators you can use with KQL is series_decompose_anomalies. When you first read the Microsoft article it is a little intimidating to be honest, but thankfully there is a community post here that explains it quite well. Essentially, we can use series_decompose_anomalies to look for anomalies in time series data, and we can use the various aggregation functions in KQL to turn our log data into time series data. The structure of these queries is all similar: we create some parameters to use in our query, build our time series data, then look for anomalies within it. Finally, we make some sense of those anomalies by applying them to our raw log data, and optionally visualize them. Easy!

Let’s use Azure AD sign in logs as a first example; there is a good chance you have plenty of data in your tenant, and the logs come with plenty of information. We will try and find some anomalies in the volume of a few error codes. Start by creating some parameters to use throughout the query.

let starttime = 7d;
let timeframe = 1h;
let resultcodes = dynamic(["50126","53003","50105"]);

So we are going to look at the last 7 days of data, break it down into one hour blocks and look for 3 particular error codes – 50126 (wrong username and password), 53003 (access blocked by conditional access) and 50105 (user signed in correctly but doesn’t have access to the resource). So let’s run the query to look for those, then build a time series dataset from the results by making lists of the hourly counts per user.

let starttime = 7d;
let timeframe = 1h;
let resultcodes = dynamic(["50126","53003","50105"]);
SigninLogs
| where TimeGenerated > ago(starttime)
| where ResultType in (resultcodes)
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, Location
| order by TimeGenerated
| summarize Events=count() by UserPrincipalName, bin(TimeGenerated, timeframe)
| summarize EventCount=make_list(Events), TimeGenerated=make_list(TimeGenerated) by UserPrincipalName

You should be left with three columns: the UserPrincipalName, a list of the one hour time blocks, and a list of how many events occurred in each block.

Now we are going to use our series_decompose_anomalies operator to find anomalies in the data set.

let starttime = 7d;
let timeframe = 1h;
let resultcodes = dynamic(["50126","53003","50105"]);
SigninLogs
| where TimeGenerated > ago(starttime)
| where ResultType in (resultcodes)
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, Location
| order by TimeGenerated
| summarize Events=count() by UserPrincipalName, bin(TimeGenerated, timeframe)
| summarize EventCount=make_list(Events), TimeGenerated=make_list(TimeGenerated) by UserPrincipalName
| extend outliers=series_decompose_anomalies(EventCount)

We can see that we get some hits of 1 (more events than expected), -1 (fewer events than expected) and lots of 0 (as expected).

The anomalies are retained in a single series, so we use the mv-expand operator to expand each outlier onto its own row; in this case we are only interested in rows where outliers == 1 (more events than expected).

let starttime = 7d;
let timeframe = 1h;
let resultcodes = dynamic(["50126","53003","50105"]);
SigninLogs
| where TimeGenerated > ago(starttime)
| where ResultType in (resultcodes)
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, Location
| make-series totalevents = count() on TimeGenerated from ago(starttime) step timeframe by ResultType
| extend outliers=series_decompose_anomalies(totalevents)
| mv-expand TimeGenerated, totalevents, outliers
| where outliers == 1

Which will give us an output showing which hour had the increase, how many events occurred in that hour, and the result code involved.

Now the key is making some sense of this data. To do that we are going to take the results of our query, store them in a variable, then run them back through our sign in data to pull out information that is useful. So we call our first query ‘outlierusers’, and we are only interested in grabbing each username once – we know the account has been flagged by our query, so we use the distinct operator to retrieve it a single time.

let starttime = 7d;
let timeframe = 1h;
let resultcodes = dynamic(["50126","53003","50105"]);
let outlierusers=
SigninLogs
| where TimeGenerated > ago(starttime)
| where ResultType in (resultcodes)
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, Location
| order by TimeGenerated
| summarize Events=count() by UserPrincipalName, bin(TimeGenerated, timeframe)
| summarize EventCount=make_list(Events), TimeGenerated=make_list(TimeGenerated) by UserPrincipalName
| extend outliers=series_decompose_anomalies(EventCount)
| mv-expand TimeGenerated, EventCount, outliers
| where outliers == 1
| distinct UserPrincipalName;

Then we use our first query as an input to our second via | where UserPrincipalName in (outlierusers), and get a visualization of our outlier users. You can either keep the same time frame for the second part of your query, or make it different – you could look at 7 days of data to detect your anomalies, then hunt just the last day for your more detailed information. In this example we will keep them the same: 7 days in 1 hour blocks.

let starttime = 7d;
let timeframe = 1h;
let resultcodes = dynamic(["50126","53003","50105"]);
let outlierusers=
SigninLogs
| where TimeGenerated > ago(starttime)
| where ResultType in (resultcodes)
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, Location
| order by TimeGenerated
| summarize Events=count() by UserPrincipalName, bin(TimeGenerated, timeframe)
| summarize EventCount=make_list(Events), TimeGenerated=make_list(TimeGenerated) by UserPrincipalName
| extend outliers=series_decompose_anomalies(EventCount)
| mv-expand TimeGenerated, EventCount, outliers
| where outliers == 1
| distinct UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(starttime)
| where UserPrincipalName in (outlierusers)
| where ResultType != 0
| summarize LogonCount=count() by UserPrincipalName, bin(TimeGenerated, timeframe)
| render timechart 

So we end up with a time chart showing the users, and the hour blocks where the anomaly detection occurred.

So to recap, for each query we want to:

  • Set parameters
  • Build a time series
  • Detect anomalies
  • Apply that to a broader data set to enrich your alerting
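
Put together as a generic skeleton – MyTable_CL and EntityColumn are placeholders to swap for your own table and column – it looks like this:

// 1. Set parameters
let starttime = 7d;
let timeframe = 1h;
// 2. Build a time series per entity
let outlierentities=
MyTable_CL
| where TimeGenerated > ago(starttime)
| summarize Events=count() by EntityColumn, bin(TimeGenerated, timeframe)
| summarize EventCount=make_list(Events), TimeGenerated=make_list(TimeGenerated) by EntityColumn
// 3. Detect anomalies and keep the entities that spiked
| extend outliers=series_decompose_anomalies(EventCount)
| mv-expand TimeGenerated, EventCount, outliers
| where outliers == 1
| distinct EntityColumn;
// 4. Apply that back to the broader data set
MyTable_CL
| where TimeGenerated > ago(starttime)
| where EntityColumn in (outlierentities)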

For another example, let’s search the OfficeActivity table for download events, hunt for the anomalies, then use that data to track down each user’s last logged on machine and retrieve all USB file copy events.

let starttime = 7d;
let timeframe = 30m;
let operations = dynamic(["FileSyncDownloadedFull","FileDownloaded"]);
let outlierusers=
OfficeActivity
| where TimeGenerated > ago(starttime)
| where Operation in (operations)
| extend UserPrincipalName = UserId
| project TimeGenerated, UserPrincipalName
| order by TimeGenerated
| summarize Events=count() by UserPrincipalName, bin(TimeGenerated, timeframe)
| summarize EventCount=make_list(Events), TimeGenerated=make_list(TimeGenerated) by UserPrincipalName
| extend outliers=series_decompose_anomalies(EventCount)
| mv-expand TimeGenerated, EventCount, outliers
| where outliers == 1
| distinct UserPrincipalName;
let id=
IdentityInfo
| where AccountUPN in (outlierusers)
| where TimeGenerated > ago (21d)
| summarize arg_max(TimeGenerated, *) by AccountName
| extend LoggedOnUser = AccountName
| project LoggedOnUser, AccountUPN, JobTitle, EmployeeId, Country, City
| join kind=inner (
DeviceInfo
| where TimeGenerated > ago (21d)
| summarize arg_max(TimeGenerated, *) by DeviceName
| extend LoggedOnUser = tostring(LoggedOnUsers[0].UserName)
) on LoggedOnUser
| project LoggedOnUser, AccountUPN, JobTitle, Country, DeviceName, EmployeeId;
DeviceEvents
| where TimeGenerated > ago(7d)
| join kind=inner id on DeviceName
| where ActionType == "UsbDriveMounted"
| extend DriveLetter = tostring(todynamic(AdditionalFields).DriveLetter)
| join kind=inner (DeviceFileEvents
| where TimeGenerated > ago(7d)
| extend FileCopyTime = TimeGenerated
| where ActionType == "FileCreated"
| parse FolderPath with DriveLetter '\\' *
| extend DriveLetter = tostring(DriveLetter)
) on DeviceId, DriveLetter
| extend FileCopied = FileName1
| distinct DeviceName, DriveLetter, FileCopied, LoggedOnUser, AccountUPN, JobTitle, EmployeeId, Country

You will be returned a list of USB file creation activities for each user who had a higher than expected number of Office download actions.

Want to check whether you have had a sharp increase in syslog activity from certain machines?

let starttime = 5d;
let timeframe = 30m;
let Computers=Syslog
| where TimeGenerated >= ago(starttime)
| summarize EventCount=count() by Computer, bin(TimeGenerated, timeframe)
| where EventCount > 1500
| order by TimeGenerated
| summarize EventCount=make_list(EventCount), TimeGenerated=make_list(TimeGenerated) by Computer
| extend outliers=series_decompose_anomalies(EventCount, 2)
| mv-expand TimeGenerated, EventCount, outliers
| where outliers == 1
| distinct Computer
;
Syslog
| where TimeGenerated >= ago(starttime)
| where Computer in (Computers)
| summarize EventCount=count() by Computer, bin(TimeGenerated, timeframe)
| render timechart 

In this query we have also increased the detection threshold from the default 1.5 to 2 with | extend outliers=series_decompose_anomalies(EventCount, 2), and we have excluded machines generating fewer than 1500 events per 30 minutes with | where EventCount > 1500 – maybe we don’t care about an anomaly until it goes over that threshold. That is where you will need to combine the smarts of Azure Sentinel and KQL with your knowledge of your environment; what Sentinel thinks is strange may be normal to you. So spend some time making sure the first three steps are sound – your parameters, your time series, and what you consider anomalous in your specific environment.

There are a heap of great queries on the official GitHub here and I have started to upload any useful queries to my own.

Streaming Azure AD risk events to Azure Sentinel — 5th Aug 2021

Microsoft recently added the ability to stream risk events from Azure AD Identity Protection into Azure Sentinel – check out the guidance here. You can add the data in the Azure AD -> Diagnostic Settings page, and once enabled you will see data stream into two new tables:

  • AADUserRiskEvents – this is the data you would see in Azure AD Identity Protection if you went and viewed the risk detections or risky sign-in reports
  • AADRiskyUsers – this is the data from the Risky Users blade in Azure AD Identity Protection, but streamed as log data, so it will include when users are remediated.

This is a really welcome addition, because there has always been some overlap in where detections are found – Azure AD Identity Protection will find some things, Microsoft Cloud App Security will find its own, there is some crossover, and you may not be licensed for everything. Also, having the data in Sentinel means you can query it against other log sources more unique to your environment, and visualize the types of risk events you are seeing. Keep in mind this data will only start populating once you enable it; any risk events prior to that won’t be resent to Azure Sentinel.

AADUserRiskEvents
| where isnotempty(RiskEventType)
| summarize count() by RiskEventType
| render piechart

You can see some of the overlap here – you get both unlikelyTravel and mcasImpossibleTravel. You can also have a look at where the data is coming from –

AADUserRiskEvents
| where isnotempty(RiskEventType)
| summarize count() by RiskEventType, Source

If you look at an AADUserRiskEvents event in detail, you will see a column called DetectionTimingType, which tells us whether the detection is realtime (on sign in) or offline.

AADUserRiskEvents
| where isnotempty(DetectionTimingType)
| summarize count() by DetectionTimingType, RiskEventType, Source

So we get some realtime alerts and some offline alerts, from a number of sources. At the end of the day more data is always useful, even if users will trigger multiple alerts when you are licensed for both systems. Anyone that has spent time looking at Azure AD sign in data will also know there are risk items in those logs too, so how do we match up the data from a sign in to the data in our new AADUserRiskEvents table? Thankfully, when a sign in occurs that flags a risk event, the same correlation id is registered in both tables, so we can join between them and extract some really great data from both – sign in data has all the information about what the user was accessing, conditional access rules, what client was used and so on, and then we also get the data from our risk events.

let signin=
SigninLogs
| where TimeGenerated > ago(24h)
| where RiskEventTypes_V2 != "[]";
AADUserRiskEvents
| where TimeGenerated > ago(24h)
| join kind=inner signin on CorrelationId

When a user signs in with no risk, the RiskEventTypes_V2 column is unfortunately not actually empty – it is just [] – so we exclude those, then join on the correlation id to our risk events, and you will get the data from both. We can even extend the columns and calculate the time delta between the sign in event and the risk event; for realtime detections that is obviously going to be quick, but for offline detections you can find out how long it took for the risk to be flagged.

let signin=
SigninLogs
| where TimeGenerated > ago(24h)
| extend SigninTime = TimeGenerated
| where RiskEventTypes_V2 != "[]";
AADUserRiskEvents
| where TimeGenerated > ago(24h)
| extend RiskTime = TimeGenerated
| join kind=inner signin on CorrelationId
| extend TimeDelta = abs(SigninTime - RiskTime)
| project UserPrincipalName, AppDisplayName, DetectionTimingType, SigninTime, RiskTime, TimeDelta, RiskLevelDuringSignIn, Source, RiskEventType

When looking at these risk events, you may notice a column called RiskDetail, and occasionally you will see aiConfirmedSigninSafe. This is basically Microsoft flagging the risk event as safe based on some kind of signals they are seeing. They won’t tell you what is in the secret sauce that confirms it is safe, but we can guess it is a combination of properties they have previously seen for that user – maybe an IP address, location or user agent. So we can probably exclude those from things we are worried about. Maybe you also only care about realtime detections considered medium or high, so we filter out offline detections and low risk events.

let signin=
SigninLogs
| where TimeGenerated > ago(24h)
| where RiskLevelDuringSignIn in ('high','medium')
| extend SigninTime = TimeGenerated
| where RiskEventTypes_V2 != "[]";
AADUserRiskEvents
| where TimeGenerated > ago(24h)
| extend RiskTime = TimeGenerated
| where DetectionTimingType == "realtime"
| where RiskDetail !has "aiConfirmedSigninSafe"
| join kind=inner signin on CorrelationId
| extend TimeDelta = abs(SigninTime - RiskTime)
| project UserPrincipalName, AppDisplayName, DetectionTimingType, SigninTime, RiskTime, TimeDelta, RiskLevelDuringSignIn, Source, RiskEventType, RiskDetail

You can visualize these events per day if you want an idea of whether you are seeing increases at all. Keep in mind this table is relatively new, so you won’t have a lot of historical data to work with, and again the data won’t appear at all until you enable the diagnostic setting. But over time it will help you create a baseline of what is normal in your environment.

let signin=
SigninLogs
| where RiskLevelDuringSignIn in ('high','medium')
| extend SigninTime = TimeGenerated
| where RiskEventTypes_V2 != "[]";
AADUserRiskEvents
| extend RiskTime = TimeGenerated
| where DetectionTimingType == "realtime"
| where RiskDetail !has "aiConfirmedSigninSafe"
| join kind=inner signin on CorrelationId
| extend TimeDelta = abs(SigninTime - RiskTime)
| summarize count() by bin(TimeGenerated, 1d), RiskEventType
| render columnchart  

If you have Azure Sentinel UEBA enabled, you can even enrich your queries with that data, which includes things like City, Country, Assigned Azure AD roles, group membership etc.

let id=
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountUPN;
let signin=
SigninLogs
| where TimeGenerated > ago (14d)
| where RiskLevelDuringSignIn in ('high','medium')
| join kind=inner id on $left.UserPrincipalName == $right.AccountUPN
| extend SigninTime = TimeGenerated
| where RiskEventTypes_V2 != "[]";
AADUserRiskEvents
| where TimeGenerated > ago (14d)
| extend RiskTime = TimeGenerated
| where DetectionTimingType == "realtime"
| where RiskDetail !has "aiConfirmedSigninSafe"
| join kind=inner signin on CorrelationId
| extend TimeDelta = abs(SigninTime - RiskTime)
| project SigninTime, UserPrincipalName, RiskTime, TimeDelta, RiskEventTypes, RiskLevelDuringSignIn, City, Country, EmployeeId, AssignedRoles

You can then filter on only the alerts where the users have an assigned Azure AD role.

let id=
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountUPN;
let signin=
SigninLogs
| where TimeGenerated > ago (14d)
| where RiskLevelDuringSignIn in ('high','medium')
| join kind=inner id on $left.UserPrincipalName == $right.AccountUPN
| extend SigninTime = TimeGenerated
| where RiskEventTypes_V2 != "[]";
AADUserRiskEvents
| where TimeGenerated > ago (14d)
| extend RiskTime = TimeGenerated
| where DetectionTimingType == "realtime"
| where RiskDetail !has "aiConfirmedSigninSafe"
| join kind=inner signin on CorrelationId
| where tostring(AssignedRoles) != "[]"
| extend TimeDelta = abs(SigninTime - RiskTime)
| project SigninTime, UserPrincipalName, RiskTime, TimeDelta, RiskEventTypes, RiskLevelDuringSignIn, City, Country, EmployeeId, AssignedRoles

This combination of attributes – a realtime detection at medium or high risk, not confirmed safe by Microsoft, on a user with an Azure AD role assigned – may warrant a faster response from you or your team.

Supercharge your queries with Azure Sentinel UEBA’s IdentityInfo table — 29th Jul 2021

For those that use Sentinel, hopefully you have turned on User and Entity Behaviour Analytics; the cost is fairly negligible and it’s what drives the entity and investigation experiences in Sentinel. There are plenty of articles and blogs around that cover how to use those, so instead I wanted to give you some really great examples of leveraging the same information to make your investigations and rules even better.

When you turn on UEBA you end up with four new tables:

  • BehaviorAnalytics – this tracks things like logons or group changes, but goes beyond that and measures whether the event is uncommon
  • UserAccessAnalytics – tracks user access, such as group membership, but also maintains information such as when the access was first granted
  • PeerAccessAnalytics – maintains a list of a user’s closest peers, which helps to evaluate potential blast radius
  • IdentityInfo – maintains a table of identity information for users, from both on premise and cloud

We can access those like any other table, even when not using the entity or investigation pages, so let’s have a look at a few examples of using that data to make meaningful queries. The IdentityInfo table is a combination of Azure AD and on premise AD data and it is a godsend – especially for those of us who still have a large on premise footprint, because previously you had to ingest a lot of this data yourself. Have a read of the Tech Community post here which has the details of this table. We essentially turn our identity data into log data, which is great for threat hunting. You just need to make sure you write your queries to account for multiple entries per user, for example by using the take or arg_max operators.
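
For example, to flatten IdentityInfo down to just the latest record per user before joining it to anything else –

IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountUPN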

Have a system that likes to report user alerts using SIDs instead of usernames? Here we look for lockout events, grab the SID of the account and then join to the IdentityInfo table, where we get information that is actually useful to us. Remember that IdentityInfo is a log table and will have multiple entries per user, so just retrieve the latest record –

let alert=
SecurityEvent
| where EventID == 4740
| extend AccountSID = TargetSid
| project AccountSID, Activity;
IdentityInfo
| join kind=inner alert on AccountSID
| summarize arg_max(TimeGenerated, *) by AccountSID
| project AccountName, Activity, AccountSID, AccountDisplayName, JobTitle, Phone, IsAccountEnabled, AccountUPN

Do you grant access to an admin server for your IT staff and want to audit that it’s actually being used? This query will find the enabled members of the group “ADMINSERVER01 - RDP Access”, then query for successful RDP logons to that server. We use the rightanti join in Kusto, so the output will be users who have access but haven’t connected in 30 days.

let users=
IdentityInfo
| where TimeGenerated > ago (7d)
| where GroupMembership has "ADMINSERVER01 - RDP Access"
| extend OnPremAccount = AccountName
| where IsAccountEnabled == true
| distinct OnPremAccount, AccountUPN, EmployeeId, IsAccountEnabled;
SecurityEvent
| where TimeGenerated > ago (30d)
| where EventID == 4624
| where LogonType == 10
| where Computer has "ADMINSERVER01"
| sort by TimeGenerated desc 
| extend OnPremAccount = trim_start(@"DOMAIN\\", Account)
| summarize arg_max (TimeGenerated, *) by OnPremAccount
| join kind=rightanti users on OnPremAccount
| project OnPremAccount, AccountUPN, IsAccountEnabled

Have an application that uses Azure AD for SSO, but where access control is granted via on premise AD groups? You can do a similar join to the SigninLogs data.

let users=
IdentityInfo
| where TimeGenerated > ago (7d)
| where GroupMembership has "Business App Access"
| extend UserPrincipalName = AccountUPN
| distinct UserPrincipalName, EmployeeId, IsAccountEnabled;
SigninLogs
| where TimeGenerated > ago (30d)
| where AppDisplayName contains "Business App"
| where ResultType == "0"
| sort by TimeGenerated desc
| summarize arg_max(TimeGenerated, AppDisplayName) by UserPrincipalName
| join kind=rightanti users on UserPrincipalName
| project UserPrincipalName, EmployeeId, IsAccountEnabled

Again, this will show you who has access but hasn’t authenticated via Azure AD in 30 days. Access reviews in Azure AD can help you with this too, but that is a P2 feature you may not have, and it won’t be able to change on premise AD group membership.

You could query the IdentityInfo table for users with certain privileged Azure AD roles and correlate with Cloud App Security alerts to prioritize them higher.

let PrivilegedRoles = dynamic(["Global Administrator","Security Administrator","Teams Administrator"]);
let PrivilegedIdentities = 
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountObjectId
| mv-expand AssignedRoles
| where AssignedRoles in~ (PrivilegedRoles)
| summarize AssignedRoles=make_set(AssignedRoles) by AccountObjectId, AccountSID, AccountUPN, AccountDisplayName, JobTitle, Department;
SecurityAlert
| where TimeGenerated > ago (7d)
| where ProviderName has "MCAS"
| project TimeGenerated, CompromisedEntity, AlertName, AlertSeverity
| join kind=inner PrivilegedIdentities on $left.CompromisedEntity == $right.AccountUPN
| project TimeGenerated, AccountDisplayName, AccountObjectId, AccountSID, AccountUPN, AlertSeverity, AlertName, AssignedRoles

And finally, we can use the same logic to find users with privileged roles and detect any Azure AD Conditional Access failures for them –

let PrivilegedRoles = dynamic(["Global Administrator","Security Administrator","Teams Administrator","Exchange Administrator"]);
let PrivilegedIdentities = 
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountObjectId
| mv-expand AssignedRoles
| where AssignedRoles in~ (PrivilegedRoles)
| summarize AssignedRoles=make_set(AssignedRoles) by AccountObjectId, AccountSID, AccountUPN, AccountDisplayName, JobTitle, Department;
SigninLogs
| where TimeGenerated > ago (30d)
| where ResultType == "53003"
| join kind=inner PrivilegedIdentities on $left.UserPrincipalName == $right.AccountUPN
| project TimeGenerated, AccountDisplayName, AccountObjectId, AccountUPN, AppDisplayName, IPAddress

Remember that once you join your IdentityInfo table to whichever other data source, you can include fields from both in your queries – so on premise SIDs or ObjectIDs, as well as items from your SigninLogs or SecurityAlert tables like alert names or conditional access failures.

Enforce PIM compliance with Azure Sentinel and Playbooks — 26th Jul 2021

Azure AD Privileged Identity Management is a really fantastic tool that lets you provide governance around access to Azure AD roles and Azure resources, by providing just in time access, step up authentication, approvals and a lot of great reporting. For those with Azure AD P2 licensing, you should roll it out ASAP. There are plenty of guides on deploying PIM, so I won’t go back over those, but will instead focus on how we can leverage Azure Sentinel to make sure the rules are being followed in your environment.

PIM actions are logged to the AuditLogs table; you can find any associated operations by searching for PIM –

AuditLogs
| where OperationName contains "PIM"
| summarize count() by OperationName

If you have had PIM enabled for a while you will see a lot of different activities. I won’t list them all here, but you will see each time someone activates a role, when they are assigned to roles, when new roles are onboarded and so on. Most of the items will just be business as usual activity, useful for auditing but nothing we need to alert on or respond to. One big gap of PIM is that users can still be assigned roles directly – so instead of having just in time access to a role, or requiring an MFA challenge to activate it, they are permanently assigned to the role. This may not be an issue for some roles like Message Center Reader, but you definitely want to avoid it for highly privileged roles like Global Administrator, Exchange Administrator, Security Administrator and whichever else you deem high risk. It could be an admin trying to get around policy, or something more sinister.

Thankfully we get an operation each time this happens, ready to act on. We can query the AuditLogs for these events, then retrieve the information about who was added to which role, and who did it, in case we want to follow up with them. For this example I added our test user to the Power Platform Administrator role outside of PIM.

AuditLogs
| where OperationName startswith "Add member to role outside of PIM"
| extend AADRoleDisplayName = tostring(TargetResources[0].displayName)
| extend AADRoleId = tostring(AdditionalDetails[0].value)
| extend AADUserAdded = tostring(TargetResources[2].displayName)
| extend AADObjectId = tostring(TargetResources[2].id)
| extend UserWhoAdded = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| project TimeGenerated, OperationName, AADRoleDisplayName, AADRoleId, AADUserAdded, AADObjectId, UserWhoAdded

If you don’t want to automatically remediate all your roles, you could put the ones you want to target into a Watchlist and complete a lookup on that first. The role names and role ids are the same for all Azure AD tenants, so you can get them here. Create a Watchlist, in this example called PrivilegedAADRoles, with the names and ids of the ones you wish to monitor and remediate, then just query on assignments to the roles in your Watchlist.

Now we can include being in that Watchlist as part of the logic we will use when we write our query. Keep in mind you will still get logs for any assignments outside of PIM, we are just limiting the scope here for our remediation.

let AADRoles = (_GetWatchlist("PrivilegedAADRoles") | project AADRoleId);
AuditLogs
| where OperationName startswith "Add member to role outside of PIM"
| extend AADRoleDisplayName = tostring(TargetResources[0].displayName)
| extend AADRoleId = tostring(AdditionalDetails[0].value)
| extend AADUserAdded = tostring(TargetResources[2].displayName)
| extend AADObjectId = tostring(TargetResources[2].id)
| extend UserWhoAdded = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| where AADRoleId in (AADRoles)
| project OperationName, AADRoleDisplayName, AADRoleId, AADUserAdded, AADObjectId, UserWhoAdded

Now to address these automatically. First, let’s create the playbook that will automatically remove any users who were assigned outside of PIM; you can call it whatever makes sense for you. Much like the example here, if you want to do your secrets management in Azure Key Vault, assign the new Logic App rights to read secrets. The service principal you use for this automation will need to be able to manage the membership of Global Administrators, so it will need to be a Global Administrator itself – make sure you keep its credentials safe.

We want the trigger of our playbook to be ‘When Azure Sentinel incident creation rule was triggered’. The first thing we are going to do is create a couple of variables: one for the role id that was changed, and one for the AAD object id of the user who was added. We will map these through using entity mapping when we create our analytics rule in Sentinel – which we will circle back and create once our playbook is built. Let’s retrieve the entities from our incident; for this example we will map the RoleId to the hostname entity, and the object id for the user to AADUserID.

Then we grab our AADUserId and RoleId from the entities and append them to our variables ready to re-use them.

Next we use the Key Vault connector to grab our client id, tenant id and client secret from Key Vault, then we POST to Azure AD to retrieve an access token for MS Graph, which we re-use as authorization to remove the user.
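
As a sketch, that HTTP action is a client credentials request to the Azure AD v1.0 token endpoint (which matches the response schema below – the angle bracket placeholders are the values retrieved from Key Vault) –

POST https://login.microsoftonline.com/<tenant-id>/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>&resource=https://graph.microsoft.com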

We will need to parse the JSON response so we can re-use the token as authorization; the schema is –

{
    "properties": {
        "access_token": {
            "type": "string"
        },
        "expires_in": {
            "type": "string"
        },
        "expires_on": {
            "type": "string"
        },
        "ext_expires_in": {
            "type": "string"
        },
        "not_before": {
            "type": "string"
        },
        "resource": {
            "type": "string"
        },
        "token_type": {
            "type": "string"
        }
    },
    "type": "object"
}

Now that we have a token with access to remove the user who was added outside of PIM, we POST back to MS Graph using the ‘Remove directory role member’ call outlined here. As a precautionary step we also revoke the user’s sessions, so if they had a session open with the added privilege it is now logged out. You can also add some kind of notification here – maybe raise an incident in Service Now, or email the user telling them their role has been removed and pointing them at your PIM policies.
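
Sketched as raw calls, the two Graph v1.0 requests look like the below – the ids come from our playbook variables, and each request carries an Authorization: Bearer header with the token from the previous step –

DELETE https://graph.microsoft.com/v1.0/directoryRoles/roleTemplateId=<role-id>/members/<user-object-id>/$ref

POST https://graph.microsoft.com/v1.0/users/<user-object-id>/revokeSignInSessions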

To round out the solution, we create our analytics rule in Sentinel. This is one I would run as often as possible, because you want to revoke that access ASAP – run it every 5 minutes, looking at the last 5 minutes of data, then complete the entity mapping outlined below to match our playbook entities.

When we get to Automated response, create a new incident automation rule that runs the playbook we just built. Then activate the analytics rule.

Now you can give it a test if you want: add someone to a role outside of PIM, and within ~10 minutes (to allow the AuditLogs to stream to Azure Sentinel, then your analytics rule to fire) they should be removed and logged back out of Azure AD.
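
If you want to eyeball the end to end flow in the logs afterwards, something like this shows the add and the automated removal side by side (a rough check – inspect a sample event first, as the TargetResources layout differs between operations) –

AuditLogs
| where OperationName startswith "Add member to role outside of PIM" or OperationName startswith "Remove member from role"
| extend Role = tostring(TargetResources[0].displayName)
| project TimeGenerated, OperationName, Role, Identity
| sort by TimeGenerated desc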

Monitoring OAuth Applications with Azure Sentinel — 20th Jul 2021

For those of us who use Azure AD as our identity provider, you are probably struggling with OAuth app sprawl. On one hand it is great that your single sign on and identity is centralized in one place, but that means a whole lot of applications to monitor. When we first started leveraging Azure AD the concept of OAuth applications was really foreign to me, and though not a perfect comparison, I used to like to think of them as the cloud equivalent of on premise AD service accounts. You create an Azure AD app / service principal, then you grant it access – and that access can be pretty insignificant, or extremely privileged. Much like an on premise AD service account, it could be anything from read access to one folder on a file server, to local admin on every server (hope you have your eyes on that one!).

The deeper you are in the Microsoft ecosystem, the more apps you will see appear in your tenant. Users want the Survey Monkey app for Teams? Have an app. Users want to use Trello or Miro in Teams or SharePoint? Have another app. The sheer number of applications can be overwhelming, and on top of that you have any applications you are developing in house. The permissions these applications hold can be delegated permissions or application permissions (or a combination of both), and there is a massive difference between the two. A delegated permission grant is bound by what the user accessing the app can also access. So if you have an application called ‘My Custom Application’ and you grant it delegated Mail.ReadWrite permissions, then ‘My Custom Application’ can only access the same mail items the signed-in user can (their own mailbox, perhaps some mailboxes they have been given specific access to). An application permission means the app can access everything under that scope: grant ‘My Custom Application’ application Mail.ReadWrite permissions and it has read and write access to every mailbox in your tenant – big difference!

Consent phishing has become a real issue, as Microsoft has posted about. Users are getting better at knowing they shouldn’t enter their username and password into untrusted sites, so instead attackers may send an email that looks like it’s from Microsoft, or appears to come from internal, and have the user consent to an application. That application is then registered in your tenant, and the attackers can use it to access the user’s data, start looking around, find other contacts or info, and away they go. For those that have Cloud App Security, it has some great OAuth controls, and you can also configure if and how users can add apps to Azure AD with user settings and admin consent.

Regardless of your policy on those settings, if you have the AuditLogs from Azure AD flowing to Azure Sentinel (you can add them via the Data Connectors tab) then we can also see all of this activity in there. When you or one of your Azure AD admins creates an application under Azure AD -> App Registrations, three events will trigger in Sentinel. If you create one yourself, search the AuditLogs table for your account to see the output –

AuditLogs
| where InitiatedBy has "youraccount@yourdomain.com"
| sort by TimeGenerated desc

First an application is added, then an owner (the person who created the app) is added to the app, then a service principal is created. If we look at the ‘Add application’ log, under the TargetResources field we can see the name and id of the application created.

This id corresponds to the object id in the Azure AD -> App Registrations portal

We also have a service principal created, and if we again check the TargetResources under ‘Add service principal’ we can see the id displayed.

This second id corresponds to the object id on the Azure AD -> Enterprise Applications portal

Confused about applications vs service principals vs object ids vs application ids? I think everyone has been at some point; thankfully Microsoft have detailed the relationships here for you. By default, when an application is created it only has the delegated User.Read permission – the application can sign users in and read their profile, and that’s it. Now let’s add some permissions to our app –

I have added delegated Sites.ReadWrite.All and Mail.ReadWrite – remember these are delegated permissions, so the application can sign users in, see their profile, and read and write any SharePoint sites or mailboxes that the person who signed in can. ‘Admin consent required = no’ means you don’t require a global admin to consent to this permission for it to work; however, each user who signs in will be presented a consent prompt. If an admin does consent, then users won’t be prompted individually. Now, here is where things get a bit weird: when you add your permissions, you will see two entries in your logs.

Now, we have added Sites.ReadWrite.All, so let’s make sure that is what we are seeing in the logs.

Weird? No results. What I have found is that when you first add permissions to an app, if no one has yet consented (either a user logging on and consenting for themselves, or an admin consenting for the tenant), the permissions are only stored as EntitlementIds.

Old value = 1 EntitlementId, new value = 3 EntitlementIds, because we went from User.Read to User.Read, Sites.ReadWrite.All and Mail.ReadWrite. For delegated permissions there is no real way to map the ids to names unfortunately.

Let’s now consent to these permissions as an admin and have a look at what we can see. Push the big ‘Grant admin consent’ button on your permissions page, then query on actions done by ourselves again, and we see three new entries.

So we have granted the delegated permissions from above, we added the app role assignment to my user account (so if you go to Azure AD -> Enterprise Apps -> Sentinel Test, you will now be assigned to the app and can sign into it), and finally, because we are an admin in this example, we also consented to the app for the tenant. If we dig down into the ‘Add delegated permission grant’ item, now we can see –

Now we’re talking. So we are stuck with EntitlementIds just until someone consents – which is still a pain, but we can work with it; until either a user consents for themselves or an admin consents for everyone, no one has accessed the app anyway. Now we can have a look at what delegated permissions have been added to apps in the last week using the ‘Add delegated permission grant’ operation.

AuditLogs
| where Category == "ApplicationManagement"
| where OperationName has "Add delegated permission grant"
| extend UpdatedPermissions = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].newValue))) 
| extend AppId = tostring(TargetResources[1].id)
| project TimeGenerated, UpdatedPermissions, OperationName, AppId

Now we can see the list of delegated permissions recently added to our applications. Delegated permissions are the lesser of the two, but still shouldn’t be ignored – if an attacker tricks a user into adding an application with a lot of delegated permissions, they can definitely start hunting around SharePoint, or email, or a lot of other places, and start working their way through your environment. You may be especially concerned with any granted permissions containing .All, which means access not just to one person’s info, but to anything that user can access. Much like a ‘Domain User’ from on premise AD can see a lot of your domain, a member of your Azure AD tenant can see a lot of info about your tenant too. We can hunt for any updated permissions that have All in them, and also parse out the user who added the permissions – depending on your policies on app registration this could be end users or IT admin staff.

AuditLogs
| where Category == "ApplicationManagement"
| where OperationName has "Add delegated permission grant"
| extend UpdatedPermissions = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].newValue)))
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| where UpdatedPermissions contains "All"
| project TimeGenerated, OperationName, UpdatedPermissions, User

Let’s remove all the permissions and go back to User.Read only (or just create a new app for testing and delete the old one). This time let’s add some extremely high permissions: application Directory.ReadWrite.All – read and write everything in Azure AD, not bound by user permission – and the same for mail with application Mail.ReadWrite – read and write all mailboxes in the tenant.

Now if we query our AuditLogs, we will find the same issue occurs with application permissions – until they are consented to, they only appear as EntitlementIds.

Good news though! For application permissions you can query MS Graph to find the information needed to map ids to names. You could store that data in a custom table, or a CSV, to query against. Search your tenant for the MS Graph application (00000003-0000-0000-c000-000000000000), then click on it and grab your ObjectID – yours will be different to mine.

Then query MS Graph at https://graph.microsoft.com/v1.0/serviceprincipals/yourobjectidhere?$select=appRoles and you will get an output like the below. These ids are the same for all tenants, so now we have the ids, plus the friendly names and even a description.
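
Trimmed down, that appRoles output looks something like this – the Directory.ReadWrite.All id shown is the well known value, but verify everything against your own output –

{
    "appRoles": [
        {
            "allowedMemberTypes": [ "Application" ],
            "description": "Allows the app to read and write data in your organization's directory, such as users and groups, without a signed-in user.",
            "displayName": "Read and write directory data",
            "id": "19dbc75e-c2e2-444c-a770-ec69d8559fc7",
            "isEnabled": true,
            "value": "Directory.ReadWrite.All"
        }
    ]
}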

I won’t rehash adding info from MS Graph to a custom table here; myself and heaps of others have covered that. In my case I have added all my ids, names and descriptions to a custom AADPermissions_CL table so we can query it. Now we can query the audit logs and join them to our custom table of permissions and ids to enrich our alerting – if we get any hits on this query then we know application permissions have been added to an app (but not yet consented to).

let entitlement=
AuditLogs
| where OperationName has "Update application"
| extend AppName = tostring(TargetResources[0].displayName)
| extend AppID = tostring(TargetResources[0].id)
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend Ids_ = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].newValue))[0].RequiredAppPermissions)
| extend EntitlementId_ = extract_all(@"([\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12})(\b|/)", dynamic([1]), Ids_)
| extend EntitlementIds = translate('["]', '', tostring(EntitlementId_))
| extend idsSplit = split(EntitlementIds, ",")
| mv-expand idsSplit
| extend idsSplit_s = tostring(idsSplit);
AADPermissions_CL
| join kind=inner entitlement on $left.PermissionID_g==$right.idsSplit_s
| project TimeGenerated, AppName, AppID, PermissionID_g, PermissionName_s, PermissionDescription_s, User
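
If you would rather not maintain a custom table for just a handful of permissions, an alternative sketch is to inline the ids you care about with the datatable operator and join on that instead – the ids below are the well known MS Graph application permission ids, but verify them with the serviceprincipals query above –

let AADPermissions = datatable(PermissionID: string, PermissionName: string)
[
    "19dbc75e-c2e2-444c-a770-ec69d8559fc7", "Directory.ReadWrite.All",
    "e2a3a72e-5f79-4c64-b1b1-878b674786c9", "Mail.ReadWrite"
];
// re-use the 'entitlement' let statement from the query above, then:
AADPermissions
| join kind=inner entitlement on $left.PermissionID == $right.idsSplit_s
| project TimeGenerated, AppName, AppID, PermissionID, PermissionName, User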

If you go ahead and consent to the permissions and recheck the AuditLogs, you can see we get two hits for ‘Add app role assignment to service principal’.

And if we dig on down through the JSON we can see the permission added

Now we can look back and see what application permissions have been added to our apps recently –

AuditLogs
| where OperationName has "Add app role assignment to service principal"
| extend UpdatedPermission = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| extend AppName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[4].newValue)))
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend AppId = tostring(TargetResources[1].id)
| project TimeGenerated, OperationName, UpdatedPermission, AppName, AppId, User

At this point you could query on particular permission sets, maybe looking for where UpdatedPermission has “ReadWrite” or anything with “All”. We can also combine the two and see all the delegated and application permissions added to an app with the following query, joining the two sub-queries on the id of the application –

let DelegatedPermission=
AuditLogs
| where OperationName has "Add delegated permission grant"
| extend AddedDelegatedPermission = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].newValue)))
| extend AppId = tostring(TargetResources[1].id)
| project TimeGenerated, AddedDelegatedPermission, AppId;
AuditLogs
| where OperationName has "Add app role assignment to service principal"
| extend AddedApplicationPermission = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| extend AppName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[4].newValue)))
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend AppId = tostring(TargetResources[1].id)
| join kind=inner DelegatedPermission on AppId
| project TimeGenerated, AppName, AddedApplicationPermission, AddedDelegatedPermission, AppId

Now that we know what events we are after, we can really start hunting. How about an app that had application permissions added and then removed within 10 minutes, maybe someone trying to cover their tracks? We can find who added and removed the permission, which permissions were involved, and calculate the time between the two events.

let PermissionAddedAlert=
AuditLogs
| where OperationName has "Add app role assignment to service principal"
| extend UserWhoAdded = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend PermissionAdded = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| extend AppId = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[5].newValue)))
| extend TimeAdded = TimeGenerated
| project UserWhoAdded, PermissionAdded, AppId, TimeAdded;
let PermissionRemovedAlert=
AuditLogs
| where OperationName has "Remove app role assignment from service principal"
| extend UserWhoRemoved = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend PermissionRemoved = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].oldValue)))
| extend AppId = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[5].newValue)))
| extend TimeRemoved = TimeGenerated
| project UserWhoRemoved, PermissionRemoved, AppId, TimeRemoved;
PermissionAddedAlert
| join kind=inner PermissionRemovedAlert on AppId
| where abs(datetime_diff('minute', TimeAdded, TimeRemoved)) <=10
| extend TimeDiff = TimeAdded - TimeRemoved
| project TimeAdded, UserWhoAdded, PermissionAdded, AppId, TimeRemoved, UserWhoRemoved, PermissionRemoved, TimeDiff

Wondering how those third party apps like Survey Monkey, or a thousand other random Teams or OAuth apps, work? Thankfully the hunting is very similar. For a multi-tenant app you won’t have an application object (under the Azure AD -> App Registrations portal), because that app lives in the developer’s tenant, but you will have a service principal (Azure AD -> Enterprise Applications portal). When you or a user consents to a third party application you will still get AuditLogs entries.

AuditLogs
| where OperationName contains "Consent to application"
| extend Consent = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[4].newValue)))
| parse Consent with * "Scope:" PermissionsConsentedto ']' *
| extend AdminConsent = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].newValue)))
| extend AppDisplayName = tostring(TargetResources[0].displayName)
| extend AppType = tostring(TargetResources[0].type)
| extend AppId = tostring(TargetResources[0].id)
| project TimeGenerated, AdminConsent, AppDisplayName, AppType, AppId, PermissionsConsentedto

It will show us whether it was an admin who consented (AdminConsent = True) or a user (AdminConsent = False). Both our own apps and third party apps show up under ‘Consent to application’ log items.
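Building on that same query, a sketch that surfaces only user-consented grants containing higher-risk scopes – the string matches are assumptions you would tune for your tenant:

AuditLogs
| where OperationName contains "Consent to application"
| extend Consent = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[4].newValue)))
| parse Consent with * "Scope:" PermissionsConsentedto ']' *
| extend AdminConsent = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].newValue)))
| where AdminConsent =~ "False"
| where PermissionsConsentedto contains "ReadWrite" or PermissionsConsentedto contains ".All"
| project TimeGenerated, AppDisplayName = tostring(TargetResources[0].displayName), PermissionsConsentedto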

It is important to remember that Azure Sentinel is event driven; if you only recently enabled Azure AD Audit Logs sending to Sentinel, you may have a lot of applications already consented to with a heap of permissions, similar to how on premise service accounts creep up in privilege over time. There are a lot of great tools out there that can audit your existing posture and help you clean up: Cloud App Security can visualize all your apps, their permissions and how common they are if you are licensed for it, or you can use PowerShell/MS Graph to run a report. Then, once you are at a known place, put good practices in place to reduce risk and be alerted –

Don’t let your users register applications. Azure AD -> User Settings -> ‘Users can register applications’ – set to No.

Configure consent settings – Azure AD -> Enterprise Applications -> Consent and permissions. Either set ‘Do not allow user consent’ or ‘Allow user consent for apps from verified publishers, for selected permissions (Recommended)’. The latter lets users consent to applications classified as low risk, which by default are User.Read, openid, profile and offline_access. You can add to or remove from this list of pre-approved permissions.

Configure admin consent and the appropriate workflow for IT staff to review requests above the approved permissions.

Configure Azure Sentinel to fire alerts for new applications being created, permissions added to them and consent granted to applications – that will cover you for both internal and third party apps.

Practice least privilege for your applications, like you would for on premise service accounts. If an internal team or vendor asked for a service account with Domain Admin access you would hopefully question them; do the same for application access. Do you really need Directory.ReadWrite.All, or do you just need to read and write particular users or groups? Want to access an O365 mailbox via OAuth? Then don’t give access to all mailboxes – limit access with a scoping policy, and do the same for SharePoint.

Cloud App Security? Azure AD Identity Protection? Help! — 16th Jul 2021

Cloud App Security? Azure AD Identity Protection? Help!

If you are an Azure AD P2 tenant, or have E5 licensing, there is a chance you have had a look at these products; the way they integrate (or don’t integrate) with each other and Azure Sentinel is sometimes a little unclear and known to change. They are meant to take the noise from data sources like Azure AD sign in logs or Office activity logs, make some sense of it all and direct the alerts to you, which is great. However, sometimes even the alerts left over can be noisy. In Cloud App Security you can definitely tune these alerts, which is helpful – for instance, you can change ‘impossible travel’ alerts to only fire on successful logons, not successful and failed – but I personally like getting as much data as I can into Sentinel and working with it in there.

The downside is that sending everything to Sentinel may mean a lot of alerts, even after Cloud App Security and Identity Protection have done their thing. Depending on the size of your environment it may still be overwhelming; say in a month you get 1430 alerts (using the test data below) for various identity issues.

You could just take the stance that for any of these you sign the person out or force a password reset, but that could result in a heap of false positives and frustrated users, while not treating the more serious cases with more urgency.

When you connect Azure AD Identity Protection and Cloud App Security to Azure Sentinel, the alerts show up in the SecurityAlert table with the ProviderNames of IPC and MCAS respectively. MCAS also alerts on a lot of other things, but we will focus on identity issues for now. When we look at the descriptions for these alerts from Identity Protection, they are all much the same, something similar to “This risk event type considers past sign-in properties (e.g. device, location, network) to determine sign-ins with unfamiliar properties. The system stores properties of previous locations used by a user, and considers these “familiar”. The risk event is triggered when the sign-in occurs with properties not already in the list of familiar properties. The system has an initial learning period of 30 days, during which it does not flag any new detections…”. MCAS will give you a little more info, but we really need to do the hunting ourselves.

To help us make sense of all these alerts, I thought we could get the details (IPv4 addresses and UserPrincipalName for this example) from our SecurityAlert, then replay that data through the Azure AD SigninLogs table and see if we can find some key alerts

let IPs=
SecurityAlert
| project TimeGenerated, Status, AlertName,CompromisedEntity,ExtendedProperties, ProviderName
| where TimeGenerated > ago (1h)
| where ProviderName in ('MCAS', 'IPC')
| where AlertName in ('Impossible travel activity','Multiple failed login attempts','Unfamiliar sign-in properties','Anonymous IP address','Atypical travel')
| where Status contains "New"
| extend Properties = tostring(parse_json(ExtendedProperties))
| extend UserPrincipalName = CompromisedEntity
| extend ipv4Addresses = extract_all(@"(([\d]{1,3}\.){3}[\d]{1,3})", dynamic([1]), Properties)
| extend ipv4Add = translate('["]','',tostring(ipv4Addresses))
| extend ipv4Split =split(ipv4Add , ",")
| mv-expand ipv4Split
| extend ipv4Split_s = tostring(ipv4Split);
SigninLogs
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName, ResultType, UserAgent, Location
| where TimeGenerated > ago(3d)
| where IPAddress !startswith "1.1.1."
| where ResultType == 0 or ResultType == 50158
| join kind=inner IPs on UserPrincipalName ,$left.IPAddress==$right.ipv4Split_s
| summarize AgentCount = count()by UserPrincipalName, UserAgent
| where AgentCount == 1

We get our SecurityAlerts over whatever period you want to look through, parse out the IPs and UserPrincipalName, then use the mv-expand operator to make a new row for each IP/UPN combination and look that data up against our SigninLogs table. To add some more intelligence, we exclude known trusted IP addresses (1.1.1.0/24 in the above example; you can whitelist these in MCAS too, of course) and only keep sign-ins that were successful (ResultType == 0) or successful and then sent to a third party security challenge, such as third party MFA (ResultType == 50158). We join on UserPrincipalName where we have a match on one of the IPs taken from the SecurityAlert event. Lastly we count the UserAgents used by each user and surface those that are new to that user (count == 1).

So: get the alerts, grab the IP addresses and user, then use that data to look for successful sign ins from non trusted networks on a user agent that is new to that user over the last 3 days. In my test environment full of fake data we go from 1430 alerts to 11.
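If you want to see where that volume comes from before tuning, a quick breakdown by provider and alert name helps – the 30 day lookback here is arbitrary:

SecurityAlert
| where TimeGenerated > ago(30d)
| where ProviderName in ('MCAS', 'IPC')
| summarize AlertCount = count() by ProviderName, AlertName
| order by AlertCount desc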

I am not suggesting you just ignore the other 1419 alerts of course, but maybe these ones you prioritize higher or have a different response to.

Use MS Graph and Sentinel to create dynamic Watchlists — 14th Jul 2021

Use MS Graph and Sentinel to create dynamic Watchlists

This post is a follow up to my post about enriching Sentinel via MS Graph here, and a response to the community post here – how do we create dynamic Watchlists of high value groups and their members? There are a couple of ways to do this: you can use either the Azure Sentinel Logic App connector or the Watchlist API. For this example we will use the Logic App. Let’s start with some test users and groups; in this example test users 1 and 2 are in privileged group 1, and test user 2 is also in privileged group 2.

First let’s retrieve the group ids from MS Graph; we have to do this because Azure AD group names are not unique, but the object ids are. You will need an Azure AD app registration with sufficient privilege to read the directory – Directory.Read.All is more than enough, but depending on what other tasks your app is doing, it may be too much. As always, least privilege! Grab the client id, the tenant id and a secret to protect your app. How you do your secrets management is up to you; I use an Azure Key Vault because Logic Apps has a native connector to it, which uses an Azure AD managed identity to authenticate itself.

First, create our playbook/Logic App and set the trigger to recurrence, since we will probably want to run this job every few hours, daily, or whatever suits you. If you are using an Azure Key Vault, give the Logic App a managed identity under the identity tab.

Then give your Logic App’s managed identity the ability to list and read secrets on your Key Vault by adding an access policy.

Next we need to call MS Graph to get the group ids of our privileged groups; we can’t use the Azure AD Logic App connector yet because that requires the object ids, and we want something more dynamic that will pick up new groups for us automatically. Use the Key Vault connector to get the client id, tenant id and secret, connecting with the managed identity.

Next we POST to Azure AD to get an access token for MS Graph
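Under the covers this is just a client credentials grant; the HTTP action ends up sending something like the sketch below. This assumes the v1.0 token endpoint, which matches the response fields in the schema that follows:

POST https://login.microsoftonline.com/yourtenantid/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=yourclientid&client_secret=yoursecret&resource=https://graph.microsoft.com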

Add the secrets from your Key Vault: the tenantid goes into the URI, then the clientid and secret into the body. We want to parse the JSON response to get our token for re-use. The schema for the response is

{
    "properties": {
        "access_token": {
            "type": "string"
        },
        "expires_in": {
            "type": "string"
        },
        "expires_on": {
            "type": "string"
        },
        "ext_expires_in": {
            "type": "string"
        },
        "not_before": {
            "type": "string"
        },
        "resource": {
            "type": "string"
        },
        "token_type": {
            "type": "string"
        }
    },
    "type": "object"
}

Now we use that token to call MS Graph to get our object ids. The Logic App HTTP action is picky about URI syntax, so sometimes you just have to put your URL into a variable and feed that into the HTTP action. Our query will search for any groups starting with ‘Sentinel Priv’, but you could search on whatever makes sense for you.
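The resulting URI ends up looking something like this – the ‘Sentinel Priv’ prefix is just our example:

https://graph.microsoft.com/v1.0/groups?$filter=startswith(displayName,'Sentinel Priv')&$select=displayName,id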

When building these Logic Apps, testing a run and checking the outputs is often valuable to make sure everything is working; if we trigger our app now we can see the output we are expecting

{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#groups(displayName,id)",
  "value": [
    {
      "displayName": "Sentinel Privileged Group 1",
      "id": "54bb0603-7d02-40a1-874d-2dc26010c511"
    },
    {
      "displayName": "Sentinel Privileged Group 2",
      "id": "e5efe4a4-51e0-4ed7-96b5-9d77ffb7ab74"
    }
  ]
}

We will need to leverage the ids from this response, so parse the JSON using the following schema

{
    "properties": {
        "@@odata.context": {
            "type": "string"
        },
        "value": {
            "items": {
                "properties": {
                    "displayName": {
                        "type": "string"
                    },
                    "id": {
                        "type": "string"
                    }
                },
                "required": [
                    "displayName",
                    "id"
                ],
                "type": "object"
            },
            "type": "array"
        }
    },
    "type": "object"
}

Next we need to manually create the Watchlist in Sentinel that we want to update. For this example we have created the PrivilegedUsersGroups watchlist using a little sample data
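For reference, the sample data might look like the rows below; the column names are an assumption here, chosen to match what the Logic App will insert later (UPN, group name and group id):

UserPrincipalName,GroupName,GroupId
testuser1@yourdomain.com,Sentinel Privileged Group 1,54bb0603-7d02-40a1-874d-2dc26010c511
testuser2@yourdomain.com,Sentinel Privileged Group 2,e5efe4a4-51e0-4ed7-96b5-9d77ffb7ab74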

Finally we iterate through the group ids we retrieved from MS Graph, create a JSON payload for each member and use the ‘Add a new watchlist item’ Logic App action – you will need to add your Sentinel workspace details here, of course.

When we run our Logic App, it should now insert the members into our watchlist, including their UPN, the group name and the group id.

We can clean up the test data if we want; now we can query events related to those users and groups via our watchlist

let PrivilegedUsersGroups = (_GetWatchlist('PrivilegedUsersGroups') | project GroupName);
SecurityEvent
| where EventID in (4728, 4729, 4732, 4756, 4757)
| where TargetUserName in (PrivilegedUsersGroups)
| project Activity, TargetUserName
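The same watchlist works against cloud logs too. A sketch that flags failed Azure AD sign-ins for watchlisted users, with column names assumed from the rows we inserted:

let PrivilegedUsers = (_GetWatchlist('PrivilegedUsersGroups') | project UserPrincipalName);
SigninLogs
| where UserPrincipalName in (PrivilegedUsers)
| where ResultType != 0
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, ResultType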

The one downside to using Watchlists this way is that the Logic App cannot currently remove items, so each run will add the same members again. It isn’t the end of the world though; you can just query the watchlist on its latest updated time – if your Logic App runs every four hours, then just query the last four hours of items.

_GetWatchlist('PrivilegedUsersGroups') | where LastUpdatedTimeUTC > ago (4h)
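Alternatively, if you only want the most recent copy of each row regardless of your schedule, something like this should work, assuming the watchlist SearchKey is set to the UPN:

_GetWatchlist('PrivilegedUsersGroups')
| summarize arg_max(LastUpdatedTimeUTC, *) by SearchKey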

Join, lookup and union your way to unified identity in Sentinel — 8th Jul 2021

Join, lookup and union your way to unified identity in Sentinel

One of the ongoing challenges I have had when trying to detect credential compromise or other identity issues is that identities across products and environments are often not uniform. For companies born in the cloud with no legacy on premise footprint, maybe this is less of an issue. But for those of us who still have a large on premise environment, possibly with many forests and domains, it becomes a headache. How do we tie together the legacy applications that love using SamAccountName as a logon identifier, or the cloud application that syncs a legacy attribute for its username, with our modern email address logon? You may even be dealing with multiple iterations of naming standards for some attributes.

Let’s take SamAccountName as an example; that is the ‘bobsmith’ part of bobsmith@yourdomain.com. By default this is synced to Azure AD as the onPremisesSamAccountName attribute, however it is not exposed in the Azure AD PowerShell module and it isn’t revealed in the Azure AD sign-in logs. It is available via MS Graph, but we can’t access MS Graph when hunting in Sentinel directly. So how do we get that data into Sentinel, then make sense of it?
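As an aside, you can see the attribute with a Graph call like this one:

https://graph.microsoft.com/v1.0/users?$select=userPrincipalName,onPremisesSamAccountName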

Enter this awesome post about enriching Sentinel with Azure AD attributes. I won’t rehash the post here, but in summary: you poll Azure AD with PowerShell and send the data to a custom table to look up. Our problem is that some of the data we want isn’t available in the Azure AD PowerShell module, so we could either get the data from MS Graph and send it via the ingestion API, or, in my case, use the same logic that post outlines but run it on premise, because we know the Active Directory module will surface everything we need. If we hunt through the PowerShell that makes up the solution, instead of connecting to Azure AD like they do –

Connect-AzureAD -AadAccessToken $aadToken -AccountId $context.Account.Id -TenantId $context.tenant.id
$Users = Get-AzureADUser -All $True

We are instead going to connect to on-premise AD and choose what attributes we want to flow to Sentinel

Get-ADUser -Filter {Enabled -eq $TRUE} -SearchBase 'OU=CorporateUsers,DC=YourDomain,DC=COM' -SearchScope 2 -Properties * |
    Select-Object UserPrincipalName, SamAccountName, EmployeeID, Country, Office, EmailAddress, WhenCreated, ProxyAddresses

Maybe we want to filter out disabled users, only search a particular OU (and those under it), and bring back some specific fields unique to your environment. The key in terms of identity is having SamAccountName and UserPrincipalName in the same table, using AD as our source; but maybe your application uses EmployeeID in its logs, so bring that up too. Let’s check out the UserDetails_CL table we sent our data to –

We can see that our test users have different formats for their SamAccountName fields (maybe they lived through a few naming standards) and different EmployeeID lengths. Now that we have the data, we can use KQL to find what we need. Let’s say we have an alert that triggers when someone fails to logon more than 3 times in 5 minutes, and it looks like our test account has flagged that alert.
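That base alert is essentially the first half of the query we are about to build; as a standalone sketch (Event ID 4771 is a Kerberos pre-authentication failure):

SecurityEvent
| where TimeGenerated > ago (5m)
| where EventID == "4771"
| summarize count() by TargetAccount
| where count_ > 3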

Unfortunately it’s impossible to really tell who this is, whether it’s threatening, or where to go from here; there is a good chance this is just noise. So let’s expand our query to bring in the UserDetails_CL table full of our info from on premise AD and see who this is. We use the KQL let operator to assign results to a variable for re-use.

let alert=
SecurityEvent
| where TimeGenerated > ago (5m)
| where EventID == "4771"
| summarize count()by TargetAccount
| where count_ > 3
| extend SamAccountName = TargetAccount
| project SamAccountName;
let userdetails=
UserDetails_CL
| where TimeGenerated > ago(24h)
| extend SamAccountName = SamAccountName_s
| extend EmployeeID = EmployeeID_s
| extend UserPrincipalName = UserPrincipalName_s
| project SamAccountName, EmployeeID, UserPrincipalName;
alert
| lookup kind=leftouter userdetails on SamAccountName

So first we run our alert query, then our UserDetails_CL custom table query. If you are going to keep this table up to date and run your PowerShell nightly, query the last 24 hours of records so you get the most current data. Then finally we combine the two queries; there are plenty of ways in KQL to aggregate data across tables – union, join, lookup. I like using lookup in this case because we are going to join on top of this query next.

Now we have a bit more information about this user, in particular their UserPrincipalName, which is used in many other places, like Azure AD. We can then join our output to another query, this time looking at Azure AD logs by ‘replaying’ that UserPrincipalName forward.

let alert=
SecurityEvent
| where TimeGenerated > ago (5m)
| where EventID == "4771"
| summarize count()by TargetAccount
| where count_ > 3
| extend SamAccountName = TargetAccount
| project SamAccountName;
let userdetails=
UserDetails_CL
| where TimeGenerated > ago(24h)
| extend SamAccountName = SamAccountName_s
| extend EmployeeID = EmployeeID_s
| extend UserPrincipalName = UserPrincipalName_s
| project SamAccountName, EmployeeID, UserPrincipalName;
alert
| lookup kind=leftouter userdetails on SamAccountName
| join kind=inner 
(SigninLogs
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, IPAddress, Location, UserAgent) on UserPrincipalName

So we have looked up the SecurityEvent data from on-premise, flagged an account that failed to logon more than 3 times in 5 minutes, looked up their current AD details using our custom table we ingested, then joined that data to the Azure AD logs using their UserPrincipalName.

We can see the same user has connected to Exchange Online PowerShell and we get the collated identity information for the event.

You can use the same logic to bind any tables together. Using third party MFA that comes in via Syslog or CEF?

let MFA = Syslog_CL
| extend MFAOutcome  = extract("outcome=(.*?) duser", 1, SyslogMessage_s)
| extend SamAccountName = extract("duser=(.*?) cs2", 1, SyslogMessage_s)
| extend MFAMethod = extract("cs2=(.*?) cs3", 1, SyslogMessage_s)
| extend MFAApplication = extract("cs3=(.*?) ca", 1, SyslogMessage_s)
| extend MFAIPaddr = extract("src=(([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.(([0-9]{1,3})))",1,SyslogMessage_s) 
| extend MFATime = TimeGenerated
| where MFAOutcome == "FAILURE"
| project MFATime, SamAccountName, MFAOutcome, MFAMethod, MFAApplication, MFAIPaddr;
let UserInfo = 
UserDetails_CL
| extend UserPrincipalName = UserPrincipalName_s
| extend SamAccountName = SamAccountName_s;
MFA
| lookup kind=leftouter UserInfo on SamAccountName
| project MFATime, MFAOutcome, MFAMethod, MFAIPaddr, SamAccountName_s, UserPrincipalName
| join kind=inner 
(SigninLogs
| where ResultType == "50158"
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, IPAddress, Location, UserAgent) on UserPrincipalName
| where MFATime between ((TimeGenerated-timespan(10min)).. (TimeGenerated+timespan(10min))) and IPAddress != MFAIPaddr

In this example the third party MFA uses SamAccountName as an identifier and the logs come into a Syslog table. We parse out the relevant details – MFA outcome (pass/fail), SamAccountName, MFA method (push, phone call, text etc), IP address and time – then keep only MFA failures. We look up our SamAccountNames in the UserDetails_CL table to get the UserPrincipalNames. Then we query Azure AD for any logon events that triggered an MFA request (ResultType == 50158). Finally we add logic so we are only interested in MFA and logon events within 10 minutes of each other where the IP address that logged onto Azure AD is different to the MFA IP address, which could suggest an account has been compromised and the owner of the account denied the MFA prompt from a different location.

Using Sentinel to automatically respond to identity alerts — 6th Jul 2021

Using Sentinel to automatically respond to identity alerts

Revoking a user’s sessions in Azure AD is a fantastic way to automatically respond to identity alerts like impossible travel or unfamiliar sign-in properties; it becomes an even stronger response the greater your MFA coverage is and the more apps you use Azure AD to authenticate. However, automating that response for legitimate actions, like a user on a VPN or with a new device, can lead you into a cycle where their sessions are revoked, they sign back in, trigger the alert again, have their sessions revoked, and so on.

We can use a Logic App and a custom table to put some intelligence behind this and effectively whitelist a user for a given period. You could possibly achieve the same result with a Sentinel watchlist, but it is currently difficult to remove users from watchlists, though I suspect that functionality will come soon. For this example we have called the table RevokedUsers_CL.

First, let’s choose a Sentinel analytics rule to trigger this on. For this example we will use impossible travel; you can write your queries and do your entity mapping to suit your environment, but a straightforward example is below

SecurityAlert
| where AlertName == 'Impossible travel activity'
| where Status contains "New"
| parse Entities with * 'AadUserId": "' aadid_ '",' *
| extend ep_ = parse_json(ExtendedProperties)
| project aadid_, CompromisedEntity

Then we map the Account entity, with AADUserID mapped to aadid_ and Name mapped to CompromisedEntity.

Now we can build our Logic App to automatically revoke the sessions for users that are flagged for impossible travel. We create a new app, set the trigger as ‘When Azure Sentinel incident creation rule was triggered’, then get the entities we mapped earlier and create a couple of variables for use later.

Then we take the entities from the incident and append them to our variables; because each hit on our analytics rule triggers a separate alert and incident, these will only ever be a single user.

Now we are going to run a query against our custom table to see if the user exists and has had their sessions revoked recently.

RevokedUsers_CL
| where TimeGenerated > ago(3d)
| where AADUserID_g =~ "@{variables('AADUserID')}"
| extend UserPrincipalName = UserPrincipalName_s
| extend AADUserID = AADUserID_g
| project TimeGenerated, UserPrincipalName, AADUserID

For this example we will look through the last 3 days of logs in our custom table, and just tidy the columns up a bit using the extend operator. If the person’s Azure AD Object ID appears in that result, we know they have had their sessions revoked in the last 3 days and we will leave them alone; if it comes back empty, we will revoke their sessions. You could make it 24 hours, a week, or whatever suits your environment.

Next we parse the response from Sentinel, because we will need to run a condition over it in the next step

We add a condition to our Logic App to check if the response is empty by checking the length of the result

length(body('Parse_JSON_response')?['value'])

If it equals 0 then the user hasn’t had their sessions revoked in the last 3 days (because they weren’t found in our query), so we perform the true action; if there is a response then we perform the false action.

So for true, we get their details from Azure AD and revoke their refresh tokens (using an account with sufficient permissions), then we compose a small JSON payload to send back into our custom table to keep it up to date, sending it with the Log Analytics Data Collector. We can also send the user an email saying ‘Hi, we noticed strange activity so we have logged you out’ and add a comment to the Sentinel incident; those steps are obviously optional. If the result is false and they have already been logged out in the last 3 days (because our query returned data), we just add a comment to the incident saying ‘this user has already been logged out recently’.
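The payload itself can be tiny – a sketch is below. The Data Collector API types the columns on ingestion, which is why they show up in our earlier query with the _s (string) and _g (GUID) suffixes:

{
  "UserPrincipalName": "someuser@yourdomain.com",
  "AADUserID": "00000000-0000-0000-0000-000000000000"
}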

The last step is to create an automation rule that will run this Logic App for us automatically when an incident is created. Edit the analytics rule and select the automated response option.

Add a new incident automation

Give your automation rule a name and then select the Logic App you just created

Now when your analytics rule fires, the user will be automatically logged out of Azure AD and their details put into your rolling whitelist.