One of the more difficult things to learn in KQL (apart from joining tables together) is how to deal with multi-value sets of data. If you work with particular types of data, such as Azure AD sign-in data or Security Alert data, you will see lots of these data sets. There is no avoiding them. What do I mean by multi-value? We are talking about a set of data that is a JSON array containing multiple objects. Those objects may even have further nested arrays. It can quickly get out of hand and become difficult to make sense of.
Let's look at an example: Azure AD Conditional Access policies. Each time you sign into Azure AD, it evaluates all the policies in your tenant. Some will apply and perhaps enforce MFA; some may not be applied because you aren't in scope of the policy. Others may be in report-only mode or be disabled entirely. At the end of your sign in, the logs will show the outcome of all your policies. The data looks a bit like this:
We get this massive JSON array, then within that we get an object for each policy, showing the relevant outcome. In this example we have 6 policies, starting from 0. Within each object, you may have further arrays, such as the ‘enforcedGrantControls’. This is because a policy may have multiple controls, such as requiring MFA and a compliant device.
You can have a look in your own tenant simply enough by looking at just the Conditional Access data.
SigninLogs
| take 10
| project ConditionalAccessPolicies
Where multi-value data can get tricky is that the order of the data, or the location of a particular item, can change. If we again take our Conditional Access data, it can change order depending on the outcome of the sign in. Any policies that succeeded, such as a user completing MFA, or that failed, such as a user failing MFA, are moved to the top of the JSON array.
So, when I successfully complete MFA on an Azure management web site, the ‘CA006: Require multi-factor authentication for Azure management’ (as seen above) policy will go to the top of the array. When I sign into something other than Azure, that policy will be ‘notApplied’ and be in a different location within the array.
Why is this a problem? KQL is very specific, so if we want to run an alert when someone fails ‘CA006: Require multi-factor authentication for Azure management’ we need to make sure our query is accurate. If we right-click on our policy and do the built-in ‘Include’ functionality in Sentinel:
It gives us the following query:
SigninLogs
| project ConditionalAccessPolicies
| where ConditionalAccessPolicies[3].displayName == "CA006: Require multi-factor authentication for Azure management"
We can see in this query that we are looking for when ConditionalAccessPolicies[3].displayName == "CA006: Require multi-factor authentication for Azure management". The [3] indicates that we are looking for the 4th object in the array (we start counting at 0). So, what happens when someone fails this policy? It will move up the JSON array into position 0, and our query won't catch it.
So how do we deal with these kinds of data? I present to you, mv-expand and mv-apply.
mv-expand
mv-expand, or multi-value expand, at its most basic, takes a dynamic array of data and expands it out to multiple rows. When we use mv-expand, KQL expands out the dynamic data and simply duplicates any non-dynamic data, leaving us with multiple rows to use in our queries.
mv-expand is essentially the opposite of summarize functions such as make_list and make_set. With those we create arrays; with mv-expand we reverse that and expand arrays back out into rows.
As an example, let’s find the sign-in data for my account. In the last 24 hours, I have had 3 sign-ins into this tenant.
Within each of those, as above, we have a heap of policies that are evaluated.
I have cut the screenshot off for the sake of brevity, but I can tell you that in this tenant 22 policies are evaluated on each sign in. Now to see what mv-expand does, we add that to our query.
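As a rough sketch, that query looks something like this (swap the placeholder account name for your own):
SigninLogs
| where TimeGenerated > ago(1d)
| where UserPrincipalName == "reprise99@learnsentinel.com"
| project TimeGenerated, Location, UserPrincipalName, ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies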
If we run our query, we will see each policy will be expanded out to a new record. The timestamp, location and username are simply duplicated, because they are not dynamic data. In my tenant, I get 22 records per sign in, one for each policy.
If we look at a particular record, we can see the Conditional Access policy is no longer positional within a larger array, because we have a separate record for each entry.
Now, say we are interested in our same "CA006: Require multi-factor authentication for Azure management" policy, and any events for it. We again do our right-click 'Include' Sentinel magic.
We will get the following query:
SigninLogs
| project TimeGenerated,UserPrincipalName, Location, ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies
| where ConditionalAccessPolicies.displayName == "CA006: Require multi-factor authentication for Azure management"
This time our query no longer has the positional [3] we saw previously. We have expanded our data out and made it more consistent to query on. So, this time if we run our query, we will get a hit for every time the policy name is “CA006: Require multi-factor authentication for Azure management”, regardless of where in the JSON array it is. When we run that query, we get 3 results, as we would expect. One policy hit per sign in for the day.
Once you have expanded your data out, you can then create your hunting rules knowing the data is in a consistent location. So, returning to our original use case, if we want to find out where this particular policy is failing, this is our final query:
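A sketch of that final query, simply building on the mv-expand query above and filtering for failures:
SigninLogs
| project TimeGenerated, UserPrincipalName, Location, ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies
| where ConditionalAccessPolicies.displayName == "CA006: Require multi-factor authentication for Azure management"
| where ConditionalAccessPolicies.result == "failure"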
So, we have used mv-expand to ensure our data is consistent, and then looked for failures on that particular policy.
And we can see, we have hits on that new hunting query.
mv-apply
mv-apply, or multi-value apply, builds on mv-expand by allowing you to create a sub-query and then return the results. So, what does that actually mean? mv-apply actually runs mv-expand initially, but gives us more freedom to create an additional query before returning the results. mv-expand is kind of like a hammer; we just expand everything out and deal with it later. mv-apply gives us the ability to filter and query the expanded data before returning it.
The syntax for mv-apply can be a little tricky to start with. To make things easy, let’s use our same data once again. Say we are interested in the Conditional Access stats for any policy that references ‘Azure’ or ‘legacy’ (for legacy auth), or any policy that has failures associated with it.
We could do an mv-expand as seen earlier, or we can use mv-apply to create that query during the expand.
SigninLogs
| project TimeGenerated,UserPrincipalName, Location, ConditionalAccessPolicies
| mv-apply ConditionalAccessPolicies on
(
where ConditionalAccessPolicies.displayName has_any ("Azure","legacy") or ConditionalAccessPolicies.result == "failure"
| extend CADisplayName=tostring(ConditionalAccessPolicies.displayName)
| extend CAResult=tostring(ConditionalAccessPolicies.result)
)
| summarize count() by CADisplayName, CAResult
So, for mv-apply, we start with mv-apply ConditionalAccessPolicies on. After that we create our sub-query, which is defined within the ( and ) that follow. Interestingly, and quite unusually for KQL, the first line of the sub-query does not require a | to precede it. Subsequent lines within the sub-query do require it, as usual.
In this query we are looking for any policy names with 'Azure' or 'legacy' in them, or where the result is a failure. Then our query says that if there is a match on any of those conditions, extend the display name and result out to new columns. Finally, we summarize our data to provide some stats.
We are returned only the stats for matches of our sub-query. Either where the policy name has ‘Azure’ or ‘legacy’ or where the result is a failure.
Think of mv-apply as the equivalent of a loop statement through your expanded data. As it runs through each loop or row of data, it applies your query to each row.
It is important to remember the order of operations when using mv-apply. If you summarize data inside the mv-apply 'loop', it will look very different to when you do it after the mv-apply has finished, because within the 'loop' it summarizes the expanded data for each record, rather than across the whole result set.
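As a rough illustration (the column names here are just made up for the example), this sketch summarizes inside the 'loop' to get a failure count per individual sign in, then summarizes again afterwards to get a total per user:
SigninLogs
| project TimeGenerated, UserPrincipalName, ConditionalAccessPolicies
| mv-apply ConditionalAccessPolicies on
(
// this summarize runs once per sign in record, across its expanded policies
summarize ['Failures This Sign In']=countif(ConditionalAccessPolicies.result == "failure")
)
// this summarize runs once, across the whole result set
| summarize ['Total Failures']=sum(['Failures This Sign In']) by UserPrincipalName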
mv-apply is particularly valuable when dealing with JSON arrays that have additional arrays within them. You can mv-apply multiple times to get to the data you are interested in, and on each loop you can filter your query. Using a different data set, we can see an example of this. In the Azure AD audit logs, there is very often a quite generic event called 'Update user'. This can be triggered by numerous things: name or licensing changes, email address updates, changes to MFA details and so on.
In a lot of Azure AD audit logs, the interesting data is held in the ‘targetResources’ field. However, beneath that is a field called ‘modifiedProperties’. The modifiedProperties field has the detail of what actually changed on the user.
AuditLogs
| where TimeGenerated > ago(90d)
| where TargetResources has "PhoneNumber"
| where OperationName has "Update user"
| where TargetResources has "StrongAuthenticationMethod"
| extend InitiatedBy = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName)
| extend targetResources=parse_json(TargetResources)
| mv-apply tr = targetResources on (
extend targetResource = tr.displayName
| mv-apply mp = tr.modifiedProperties on (
where mp.displayName == "StrongAuthenticationUserDetails"
| extend NewValue = tostring(mp.newValue)
))
| project TimeGenerated, NewValue, UserPrincipalName,InitiatedBy
| mv-expand todynamic(NewValue)
| mv-expand NewValue.[0]
| extend AlternativePhoneNumber = tostring(NewValue.AlternativePhoneNumber)
| extend Email = tostring(NewValue.Email)
| extend PhoneNumber = tostring(NewValue.PhoneNumber)
| extend VoiceOnlyPhoneNumber = tostring(NewValue.VoiceOnlyPhoneNumber)
| project TimeGenerated, UserPrincipalName, InitiatedBy,PhoneNumber, AlternativePhoneNumber, VoiceOnlyPhoneNumber, Email
| where isnotempty(PhoneNumber)
| summarize ['Count of Users']=dcount(UserPrincipalName), ['List of Users']=make_set(UserPrincipalName) by PhoneNumber
| sort by ['Count of Users'] desc
In this example, we use mv-apply to find where the displayName of the modifiedProperties is ‘StrongAuthenticationUserDetails’. This indicates a change to MFA details, perhaps a new phone number has been registered. This particular query then looks for when it is indeed a phone number change. It then summarizes the number of users registered to the same phone number. This query is looking for Threat Actors that are registering the same MFA number to multiple users.
By using a 'double' mv-apply, we filter out all the 'Update user' events that we aren't interested in, and focus down on the 'StrongAuthenticationUserDetails' events. We don't get updates to, say, licensing, which would otherwise be captured in the broader 'Update user' event.
Summary
mv-apply and mv-expand are just a couple of the ways to extract dynamic data in KQL. There are others, such as the bag_unpack plugin, and even functions for other data types, such as parse_xml. I find myself constantly coming back to mv-expand and mv-apply, mostly because of the ubiquity of JSON in security products.
I am a very visual person. When looking at data I love to look at the trend of that data and see if it tells a story. If you are using Sentinel, Log Analytics or Azure Data Explorer this can be particularly important. Those platforms can handle an immense amount of data and making sense of it can be overwhelming. Luckily KQL arms us with a lot of different ways to turn our data into visualizations. There are some subtle differences between the capabilities of Azure Monitor (which drives Sentinel) and Azure Data Explorer with visualizations. Check out the guidance here for Azure Data Explorer and here for Azure Monitor (Log Analytics and Sentinel).
In order to produce any kind of visualization in KQL, first we need to summarize our data. I like to separate visualizations into two main types. First, we have non-time visualizations; that is, when we just want to produce a count of something and then display it. Secondly, we have time-based visualizations, where we want to visualize our data over a time period and see how it changes over time.
Regular Visualizations
For all these examples, we will use our Azure Active Directory sign in logs. That data is full of great information you may want to visualize. Applications, MFA methods, location data, all those and more are stored in our sign in data, and we can use any of them (or combinations of them) as a base for our visuals.
Let’s start simply. We can see the most popular applications in our tenant by doing a simple count. The following query will do that for you.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AppDisplayName
Running that, you will see the output is a table of data.
To turn that into a visualization, we use our render operator. Now you can also do this by clicking in the UI itself on ‘Chart’ and then choosing our options. That isn’t fun though, we want to learn how to do it ourselves. It is also simple; we just need to tell KQL what type of visual we want. To build a standard pie chart we just add one more line to our query.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AppDisplayName
| render piechart
With these kinds of counts, in order to make them display a little 'cleaner', I would recommend ordering them first. That way they will display consistently. Depending on how much data you have, you may also want to limit the results; otherwise you may just have too many results in there and it becomes hard to read. We can achieve both of those by adding a single line to our query. We tell KQL to take the top 10 results by the count, then again render our pie chart.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AppDisplayName
| top 10 by Count
| render piechart
In my opinion this cut down version is much easier to make sense of. If you don’t want to use a pie chart, you can use bar or column charts too. Depending on your data, you may want to pick one over the other. I personally find column and bar charts are a little easier to understand than pie charts. They are also better at giving context because they show the scale a little better.
We can see the same query visualized as a bar chart.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AppDisplayName
| where isnotempty( AppDisplayName)
| top 10 by Count
| render barchart
And a column chart.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AppDisplayName
| where isnotempty( AppDisplayName)
| top 10 by Count
| render columnchart
In these examples we can really see the difference between our top 2 results and everything else. Another example you may find useful in your Azure AD logs is the breakdown of single vs multi-factor authentication requirements on sign in.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AuthenticationRequirement
| render piechart
We can dig even deeper and look at types of MFA being used.
SigninLogs
| where TimeGenerated > ago(150d)
| where AuthenticationRequirement == "multiFactorAuthentication"
| mv-expand todynamic(AuthenticationDetails)
| extend ['Authentication Method'] = tostring(AuthenticationDetails.authenticationMethod)
| where ['Authentication Method'] !in ("Password","Previously satisfied")
| summarize Count=count() by ['Authentication Method']
| where isnotempty(['Authentication Method'])
| sort by Count desc
| render piechart
This would show you the breakdown of the different MFA types in your tenant such as mobile app notification, text message, phone call, Windows Hello for Business etc.
You can even see the different locations accessing your Azure AD tenant.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by Location
| top 10 by Count
| render columnchart
Have a play with the different types to see which works best for you. Pie charts can be better when you have fewer types of results. For instance, the single vs multifactor authentication pie chart really tells you a great story about MFA coverage in this tenant.
Time series visualizations
Time series visualizations build on that same work, though this time we need to add a time bucket, or bin, to our data summation. Which makes sense if you think about it: to tell KQL to visualize something over a time period, we need to put our data into bins of time, so that when it displays we can see how it looks across that period. We can choose what size we want those bins to be. Maybe we want to look at 30 days of data in total, then break that into one-day bins. Let's start basic again, and just look at total successful sign ins to our tenant per day over the last 30 days.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by bin(TimeGenerated, 1d)
| render timechart
We can see in our query we are looking back 30 days, then we are putting our results into 1-day bins of time. Our resulting visual shows how that data looks over the period.
If you wanted to reduce the time period of each bin, you can. You could reduce the bin time down to say 4 hours, or 30 minutes. Here is the same data, but with 4-hour time bins instead of 1 day.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by bin(TimeGenerated, 4h)
| render timechart
There are no more or fewer sign ins to our tenant; we are just sampling the data more frequently to build our visual.
You can even use more advanced data aggregation and summation before your visual. You can for instance take a count of both the total sign ins and the distinct user sign ins and visualize both together. We then query both over the same 4-hour time period and can see both on the same visual.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count(), ['Distinct Count']=dcount(UserPrincipalName) by bin(TimeGenerated, 4h)
| render timechart
Now we can see total sign ins in blue and distinct user sign ins in orange, over the same time period. Another great example is using countif(). For instance, we can maybe look at sign ins from the US vs those not in the US. We do a countif where the location equals US and again for when it doesn’t equal the US.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize US=countif(Location == "US"), NonUS=countif(Location != "US") by bin(TimeGenerated, 4h)
| render timechart
Again, we see the breakdown in our visual.
We can still use bar charts with time-series data as well. This can be a really great way to show changes or anomalies in your data. Take for instance our single factor vs multifactor query again. This time though we will group the data into 1-day bins. Then render them as a column chart.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AuthenticationRequirement, bin(TimeGenerated, 1d)
| render columnchart
The default behaviour with column charts is for KQL to ‘stack’ the columns into a single column per time period (1-day in this example). We can tell KQL to unstack them though so that we get a column for each result, on every day.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by AuthenticationRequirement, bin(TimeGenerated, 1d)
| render columnchart with (kind=unstacked)
We see the same data, just presented differently. Unstacked column charts can have a really great visual impact.
Summarize vs make-series
For time queries, KQL has a second, similar operator called make-series. When we use summarize and there are no results in a particular time period, there is simply no record for that time 'bin'. This can have the effect of your visuals appearing a little 'smoother', because the chart simply joins up the points where you did have results. If you use make-series, we can tell KQL to default to 0 when there are no hits, making the visual more 'accurate'. Take for example our regular successful Azure AD sign ins. Our summarize query and visual looks like this.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize Count=count() by bin(TimeGenerated, 4h)
| render timechart
This is the equivalent query with make-series. We tell KQL to default to 0 and to use a time increment (or step) of 4 hours.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| make-series Count=count() default=0 on TimeGenerated step 4h
| render timechart
And our resulting visual.
You can see the difference in those times where there were no sign ins. From September 24th to 26th the line sits at zero on the make-series version. On the summarize version it moves between the two points where there were results.
Make-series has some additional capability, allowing true time-series analysis. For example, you can overlay a trend line over your series. Looking at conditional access failures to our tenant may be interesting, so we can search for error code 53003 and then overlay a trend on our visual.
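As a sketch, the query behind that visual looks something like the following. Error code 53003 is the Azure AD code for a sign in blocked by Conditional Access, and series_fit_2lines() is one way to generate the trend line:
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == "53003"
| make-series Count=count() default=0 on TimeGenerated step 1d
| extend (RSquare, SplitIdx, Variance, RVariance, TrendLine)=series_fit_2lines(Count)
| project TimeGenerated, Count, TrendLine
| render timechart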
Now we can see our orange trend as well as the actual count. Trend information may be interesting for things like MFA and passwordless usage or tracking usage of a particular application. Simply write your query first like normal, then apply the trend line over the top.
Cleaning up your visuals
To make your visuals really shine you can even tidy up things like axis names and titles within your query. Using our last example, we can add new axis titles and a title to the whole visual. We call our y-axis “Failure Count”, the x-axis “Day” and the whole visual “Conditional access failures with trend over time”.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == "53003"
| make-series Count=count() default=0 on TimeGenerated step 1d
| extend (RSquare, SplitIdx, Variance, RVariance, TrendLine)=series_fit_2lines(Count)
| project TimeGenerated, Count, TrendLine
| render timechart with (xtitle="Day", ytitle="Failure Count", title="Conditional access failures with trend over time")
You can also adjust the y-axis scale. For example, if you ran our previous query and saw the results, you may have thought 'these are pretty low, but if someone was to look at this visual, they will see the peaks and worry', even though the peaks themselves are relatively low. To counter that you can extend the y-axis by setting a manual maximum. For instance, let's set it to 400.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == "53003"
| make-series Count=count() default=0 on TimeGenerated step 1d
| extend (RSquare, SplitIdx, Variance, RVariance, TrendLine)=series_fit_2lines(Count)
| project TimeGenerated, Count, TrendLine
| render timechart with (xtitle="Day", ytitle="Failure Count", title="Conditional access failures with trend over time",ymax=400)
We can see that our scale changes and maybe it better tells our story.
Within your environment there are so many great datasets to draw inspiration from to visualize. You can use them to search for anomalies from a threat hunting point of view. They can also be good for showing progress in projects. If you’re enabling MFA then check your logs and visualize how that journey is going.
Have a read of the guidance for the render operator. There are additional visualization options available. Also be aware of the capability differences between the platforms (Azure Data Explorer vs Azure Monitor) and the client you are using; the full Kusto desktop client has additional capabilities over the web apps.
If you follow my Twitter or GitHub account, you know that I recently completed a #365daysofKQL challenge, where I shared a hunting query each day for a year. To round out that challenge, I wanted to share what I have learnt over the year. Like any activity, the more you practice, the better you become at it. At about day 200, I went back to a lot of queries and re-wrote them with things I had picked up, to make them easier to read and more efficient. Some people also asked if I was ever short of ideas. I never had writer's block or struggled to come up with ideas. I am a naturally curious person, so looking through data sets is interesting to me. On top of that, there is always a new threat or a new vulnerability around. Threat actors come up with new tactics and you can then try to find those. Then you can take those queries and apply them to other data sets. And vendors, especially Microsoft, are always adding new data in. There is always something new to look at.
I have also learned that KQL is a very repeatable language. You can build 'styles' of queries and then re-use them on different logs. Looking for the first time something happened, or for something happening at a weird time of day, becomes a query pattern. Sure, the content of the data you are looking at may change, but the structure of the query remains the same.
So without further ado, what I have learnt writing 365 queries.
Use your own account and privilege to generate alerts
If you follow InfoSec news, there is always a new activity you may want to alert on. As these new threats are uncovered, hopefully you don’t find them in your environment. But you want to be ready. I find it valuable to look at the details and attempt to create those logs and then go find them. From there you can tidy your query up so it is accurate. You don’t want to run malicious code or do anything that will cause an outage. You can certainly simulate the adversary though. Take for instance consent phishing. You don’t want to actually install a malicious app. You can register an app manually though. You could then query your Azure AD audit logs to find that event. Start really broadly with just seeing what you have done with your account.
AuditLogs
| where InitiatedBy contains "youruseraccount"
You will see an event 'Add service principal'; that is what we are after. In the Azure AD audit log, this is an 'OperationName'. So we can then tune our query. We know we want any 'Add service principal' events. We can also look through and see where our username and IP are, so we can extend those to new columns. For our actual query we don't want to filter on our own user account, so take that out.
AuditLogs
| where OperationName == "Add service principal"
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend IPAddress = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| project TimeGenerated, OperationName, Actor, IPAddress
Now we have a query that detects each time someone adds a service principal. If someone is consent phished, this will tell us. Then we can investigate. We can then delete our test service principal out to clean up our tenant.
Look for low volume events
One of the best ways to find interesting events, is to find those that are low volume. While not always malicious they are generally worth investigating. Using our Azure AD audit log example, it is simple to find low volume events.
AuditLogs
| summarize Count=count() by OperationName, LoggedByService
| sort by Count asc
This will return the count of all the operations in Azure AD for you, listing those with the fewest hits first. It will also return which service in Azure AD triggered each one. Your data will look different to mine, but as an example you may see something like the following.
Now you look at this list and you can see if any interest you. Maybe you want to know each time an Azure AD Conditional Policy is updated. We can see that event. Or when a BitLocker key is read. You can then take those operations and start building your queries out.
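For instance, a sketch of a query for Conditional Access policy changes might look like the following. The operation names here are the ones I would expect to see, but check the output of the query above for the exact wording in your tenant:
AuditLogs
| where OperationName in ("Add conditional access policy", "Update conditional access policy", "Delete conditional access policy")
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ['Policy Name'] = tostring(TargetResources[0].displayName)
| project TimeGenerated, OperationName, Actor, ['Policy Name']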
You can do the same on other data sources, like Office 365.
OfficeActivity
| summarize Count=count() by Operation, OfficeWorkload
| sort by Count asc
The data is a little different, we have Operation instead of OperationName. And we have OfficeWorkload instead of LoggedByService. But the pattern is the same. This time we are returned low count events from the Office 365 audit log.
Look for the first time something occurs and new events
This is a pattern I love using. We can look at new events in our environment that we haven't previously seen. I am sure that, like me, you struggle to keep track of new alerts or new log sources in your environment. Let KQL do it for you. These queries are simple and easily re-usable. Again, let's use our Azure AD audit log as an example.
let existingoperations=
AuditLogs
| where TimeGenerated > ago(180d) and TimeGenerated < ago(7d)
| distinct OperationName;
AuditLogs
| where TimeGenerated > ago(7d)
| summarize Count=count() by OperationName, Category
| where OperationName !in (existingoperations)
| sort by Count desc
First we declare a variable called 'existingoperations' that queries our audit log for events between 180 and 7 days ago and returns each distinct OperationName. That becomes our list of operations that have already occurred.
We then re-query the audit log again, this time just looking at the last week. We take a count of all the operations. Then we exclude the ones we already knew about from our first query. Anything remaining is new to our environment. Have a look through the list and see if anything is interesting to you. If it is, then you can write your specific query.
Look for when things stop occurring
The opposite of new events occurring is events that stop occurring. One of the most common use cases for this kind of query is telling you when a device is no longer sending logs. To keep on top of detections we need to make sure devices are still sending their logs.
SecurityEvent
| where TimeGenerated > ago (1d)
| summarize ['Last Record Received'] = datetime_diff("minute", now(), max(TimeGenerated)) by Computer
| project Computer, ['Last Record Received']
| where ['Last Record Received'] >= 60
| order by ['Last Record Received'] desc
This query will find any device that hasn't sent a security event log in over 60 minutes in the last day. Maybe the machine is offline, or there are network issues? Worth checking out either way.
We can use that same concept to find all kinds of things. How about user accounts no longer signing in? That is also something that is no longer occurring. This time it isn't really an 'alert', but it is a great way to clean up user accounts.
SigninLogs
| where TimeGenerated > ago (365d)
| where ResultType == 0
| where isnotempty(UserType)
| summarize arg_max(TimeGenerated, *) by UserPrincipalName
| where TimeGenerated < ago(60d)
| summarize
['Inactive Account List']=make_set(UserPrincipalName),
['Count of Inactive Accounts']=dcount(UserPrincipalName)
by UserType, Month=startofmonth(TimeGenerated)
| sort by Month desc, UserType asc
We can find all our user accounts, both members and guests, that haven’t signed in for more than 60 days. We can also retrieve the last month they last accessed our tenant.
Look for when things occur at strange times
KQL is amazing at dealing with time data. We can include logic in our queries to detect only during certain times, or on certain days, or a combination of both. An event that happens over a weekend or outside of working hours perhaps requires a faster response. A couple of good examples of this are Azure AD Privileged Identity Management activations and adding a service principal to Azure AD. Monday to Friday, during business hours, these activities are pretty normal. Outside of that though? We can tell KQL to focus on those times.
let Saturday = time(6.00:00:00);
let Sunday = time(0.00:00:00);
AuditLogs
// extend LocalTime to your time zone
| extend LocalTime=TimeGenerated + 5h
| where LocalTime > ago(7d)
// Change the hours of the day to suit your company, i.e. this would find activations outside of 6am to 6pm, or on weekends
| where hourofday(LocalTime) !between (6 .. 18) or dayofweek(LocalTime) in (Saturday, Sunday)
| where OperationName == "Add member to role completed (PIM activation)"
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ['Azure AD Role Name'] = tostring(TargetResources[0].displayName)
| project LocalTime, User, ['Azure AD Role Name'], ['Activation Reason']=ResultReason
This query searches for PIM activations on weekends or between 6pm and 6am during the week. You can then re-use that same logic to detect on other things during those times.
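For example, here is a sketch of the same after-hours logic applied to the 'Add service principal' event we built a query for earlier (adjust the time zone offset and hours as before):
let Saturday = time(6.00:00:00);
let Sunday = time(0.00:00:00);
AuditLogs
| extend LocalTime=TimeGenerated + 5h
| where LocalTime > ago(7d)
| where hourofday(LocalTime) !between (6 .. 18) or dayofweek(LocalTime) in (Saturday, Sunday)
| where OperationName == "Add service principal"
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| project LocalTime, OperationName, Actor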
Summarize to make sense of large data sets
I have written about data summation previously. If you send data to Sentinel chances are you will have a lot of it. Even a small Azure AD tenant generates a lot of data. 150 devices in Defender is a lot of logs. Summarizing data in KQL is both easy and useful. Maybe you are interested in what your users are doing when they connect to other tenants. Each log entry on its own probably isn’t exciting. If you allow that activity then it isn’t really a detection. You wouldn’t generate an alert each time someone accessed another tenant. You may be interested in other tenants more broadly though.
SigninLogs
| where TimeGenerated > ago(30d)
| where UserType == "Guest"
| where AADTenantId == HomeTenantId
| where ResourceTenantId != AADTenantId
| summarize
['Count of Applications']=dcount(AppDisplayName),
['List of Applications']=make_set(AppDisplayName),
['Count of Users']=dcount(UserPrincipalName),
['List of Users']=make_set(UserPrincipalName)
by ResourceTenantId
| sort by ['Count of Users'] desc
This query groups on ResourceTenantId, which is the Id of the tenant your users are accessing. For each tenant, it returns which applications and which users are accessing it, along with counts of both. Maybe you see in that data that there is one tenant your users are accessing way more than any other. It may be worth investigating why, or adding additional controls to that tenant via cross-tenant settings.
Another good example, we can use Defender for Endpoint logs for all kinds of great info. Take for example LDAP and LDAPS traffic. Hopefully you want to migrate to LDAPS, which is more secure. If you look at each LDAP event to see what’s in your environment, it will be overwhelming. Chances are you will get thousands of results a day.
DeviceNetworkEvents
| where ActionType == "InboundConnectionAccepted"
| where LocalPort in (389, 636, 3269)
| summarize
['Count of Inbound LDAP Connections']=countif(LocalPort == 389),
['Count of Distinct Inbound LDAP Connections']=dcountif(RemoteIP, LocalPort == 389),
['List of Inbound LDAP Connections']=make_set_if(RemoteIP, LocalPort == 389),
['Count of Inbound LDAPS Connections']=countif(LocalPort in ("636", "3269")),
['Count of Distinct Inbound LDAPS Connections']=dcountif(RemoteIP, LocalPort in ("636", "3269")),
['List of Inbound LDAPS Connections']=make_set_if(RemoteIP, LocalPort in ("636", "3269"))
by DeviceName
| sort by ['Count of Distinct Inbound LDAP Connections'] desc
This query looks at all those connections, and summarizes it down so it’s easier to read. For each device on our network we summarize those connections. For each we get the total count of connections, a count of distinct endpoints and the list of endpoints. Maybe we have thousands and thousands of events per day. When we run this query though, it is really just a handful of noisy machines. Suddenly that LDAPS migration isn’t so daunting.
Change your data summary to change context
Once you have written your queries that summarize your data, you can then change the context easily. You can basically re-use your work and see something different in the same data. Take these two queries.
DeviceNetworkEvents
| where TimeGenerated > ago(30d)
| where ActionType == "ConnectionSuccess"
| where RemotePort == "3389"
//Exclude Defender for Identity that uses an initial RDP connection to map your network
| where InitiatingProcessCommandLine != "\"Microsoft.Tri.Sensor.exe\""
| summarize
['RDP Outbound Connection Count']=count(),
['RDP Distinct Outbound Endpoint Count']=dcount(RemoteIP),
['RDP Outbound Endpoints']=make_set(RemoteIP)
by DeviceName
| sort by ['RDP Distinct Outbound Endpoint Count'] desc
This first query finds which devices in your environment connect to the most other endpoints via RDP. These devices are a target for lateral movement as they have more credentials stored on them.
DeviceLogonEvents
| where TimeGenerated > ago(30d)
| project DeviceName, ActionType, LogonType, AdditionalFields, InitiatingProcessCommandLine, AccountName, IsLocalAdmin
| where ActionType == "LogonSuccess"
| where LogonType == "Interactive"
| where AdditionalFields.IsLocalLogon == true
| where InitiatingProcessCommandLine == "lsass.exe"
| summarize
['Local Admin Count']=dcountif(DeviceName, IsLocalAdmin == true),
['Local Admins']=make_set_if(DeviceName, IsLocalAdmin == true)
by AccountName
| sort by ['Local Admin Count'] desc
This second query looks at logon events from your devices and finds the users that have accessed the most devices as a local admin, which shows us which accounts are targets for lateral movement.
So two very similar queries. Both provide information about lateral movement targets. However, we change our summary target so we get unique context in the results.
Try to write queries looking for behavior rather than static IOCs
This is another topic I have written about before. We want to, where possible, create queries based on behavior rather than specific IOCs. While IOCs are useful in threat hunting, they are likely to change quickly.
Say for example you read a report about a new threat. It says in there that the threat actor used certutil.exe to connect to 10.10.10.10.
Easy, we will catch if someone uses certutil.exe to connect to 10.10.10.10.
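A sketch of that IOC-based query against Defender for Endpoint data might look like this (the IP is of course just the example from the fictional report):
DeviceNetworkEvents
| where InitiatingProcessFileName =~ "certutil.exe"
| where RemoteIP == "10.10.10.10"
| project TimeGenerated, DeviceName, InitiatingProcessCommandLine, RemoteIP, RemoteUrl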
What if the IP changes though? Now the malicious server is on 10.20.20.20. Our query no longer will catch it. So instead go a little broader, and catch the behavior.
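A sketch of the behavioral version, using the RemoteIPType field to catch connections to any public endpoint:
DeviceNetworkEvents
| where InitiatingProcessFileName =~ "certutil.exe"
| where RemoteIPType == "Public"
| project TimeGenerated, DeviceName, InitiatingProcessCommandLine, RemoteIP, RemoteUrl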
The query now detects any usage of certutil.exe connecting to any public endpoint. I would suspect this is very rare behavior in most environments. Now it is irrelevant what the IP is, we will catch it.
Use your data to uplift your security posture
Not every query you write needs to be about threat detection. Of course we want to catch attackers, but we can also use the same data to provide amazing insights about security posture. Take for instance Azure Active Directory sign in logs. We can detect when someone signs in from a suspicious country, but just as useful is all the other data contained in those logs. We get visibility into conditional access policies, legacy authentication, MFA events, and device and location information.
Legacy authentication is always in the news. There is no way to put MFA in front of it, so it is the first door attackers knock on. We can use our sign in data to see just how big a legacy authentication problem we have.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| where ClientAppUsed !in ("Mobile Apps and Desktop clients", "Browser")
| where isnotempty(ClientAppUsed)
| evaluate pivot(ClientAppUsed, count(), UserPrincipalName)
This query finds any apps that make up legacy authentication, that is, those that aren't a modern app or a browser. Then it creates an easy-to-read pivot table showing each user that has connected with legacy authentication, with a count per app. Maybe you have 25,000 legacy authentication connections in a month, which seems impossible to address. When you look at it closer though, it may just be a few dozen users.
Similarly, you could try to improve your MFA posture.
SigninLogs
| where TimeGenerated > ago(30d)
//You can exclude guests if you want, they may be harder to move to more secure methods, comment out the below line to include all users
| where UserType == "Member"
| mv-expand todynamic(AuthenticationDetails)
| extend ['Authentication Method'] = tostring(AuthenticationDetails.authenticationMethod)
| where ['Authentication Method'] !in ("Previously satisfied", "Password", "Other")
| where isnotempty(['Authentication Method'])
| summarize
['Count of distinct MFA Methods']=dcount(['Authentication Method']),
['List of MFA Methods']=make_set(['Authentication Method'])
by UserPrincipalName
//Find users with only one method found and it is text message
| where ['Count of distinct MFA Methods'] == 1 and ['List of MFA Methods'] has "text"
This example looks at each user that has used MFA to your Azure AD tenant. For each, it creates a set of different MFA methods used. For example, maybe they have used a push notification, a phone call and a text. They would have 3 methods in their set of methods. Now we add a final bit of logic. We find out where a user only has a single method, and that method is text. We can take this list and do some education with those users. Maybe show them how much easier a push notification is.
Use your data to help your users have a better experience
If you have onboarded data to Sentinel, or use Advanced Hunting, you can use that data to help your users out. While we aren't measuring the performance of computers or anything like that, we can still get insights into where users may be struggling.
Take for example Azure AD self service password reset. When a user goes through that workflow they can get stuck in a few spots, and we can find it. Each attempt at SSPR is linked by the same Correlation Id in Azure AD. So we can use that Id to make a list of actions that occurred during that attempt.
AuditLogs
| where LoggedByService == "Self-service Password Management"
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ['User IP Address'] = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| sort by TimeGenerated asc
| summarize ['SSPR Actions']=make_list(ResultReason) by CorrelationId, User, ['User IP Address']
If you have a look, you will see things like user submitted new password, maybe the password wasn’t strong enough. Hopefully a successful password reset at the end. Now if we want to help our users out we can dig into that data. For instance, we can see when a user tries to SSPR but doesn’t have an authentication method listed. We could reach out to them and help them get onboarded.
AuditLogs
| where LoggedByService == "Self-service Password Management"
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ['User IP Address'] = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| sort by TimeGenerated asc
| summarize ['SSPR Actions']=make_list(ResultReason) by CorrelationId, User, ['User IP Address']
| where ['SSPR Actions'] has "User's account has insufficient authentication methods defined. Add authentication info to resolve this"
| sort by User desc
If a user puts in a password that doesn't pass complexity requirements we can see that too. We could query when the same user has tried 3 or more times to come up with a new password and been rejected. We all understand how frustrating that can be. They would definitely appreciate some help, and you could maybe even use it as a chance to move them to Windows Hello for Business, or passwordless. If you support those, of course.
AuditLogs
| where LoggedByService == "Self-service Password Management"
| extend User = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ['User IP Address'] = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| sort by TimeGenerated asc
| summarize ['SSPR Actions']=make_list_if(ResultReason, ResultReason has "User submitted a new password") by CorrelationId, User, ['User IP Address']
| where array_length(['SSPR Actions']) >= 3
| sort by User desc
Consistent data is easy to read data
One of the hardest things about writing a query is just knowing where to look for those logs. The second hardest thing is dealing with data inconsistencies. If you have log data from many vendors, the data will be completely different. Maybe one firewall calls a reject a ‘deny’, another calls it ‘denied’, then your last firewall calls it ‘block’. They are the same in terms of what the firewall did. You have to account for the data differences though. If you don’t, you may miss results.
You can rename columns or even extend your own whenever you want. You can do that to unify your data, or just to make it easier to read.
Say you have two pretend firewalls, one is a CheckPoint and one a Cisco. Maybe the CheckPoint shows the result as a column called ‘result’. The Cisco however uses ‘Outcome’.
You can simply rename one of them.
CheckPointLogs_CL
| project-rename Outcome=result
In our CheckPoint logs we have just told KQL to rename the 'result' field to 'Outcome'.
You can even do this as part of a ‘project’ at the end of your query if you want.
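For example, something like this, where the source column names are made up for the sake of the example:
CheckPointLogs_CL
| project ['Source IP']=src_ip, ['Destination IP']=dst_ip, Port=port, Outcome=result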
We have renamed our fake columns to Source IP, Destination IP, Port, Outcome.
If we do the same for our Cisco logs, then our queries will be so much easier to write. Especially if you are joining between different data sets. They will also be much easier to read both for you and anyone else using them.
Be careful of case sensitivity
Remember that a number of string operators in KQL are case sensitive. There is a really useful table here that outlines the different combinations. Using a double equals sign in a query, such as UserPrincipalName == "reprise99@learnsentinel.com", is efficient. Remember though, that if my UserPrincipalName was reprise99@learnSentinel.com with a capital S, it wouldn't return that result. It is a balancing act between efficiency and accuracy. If you are unsure about the consistency of your data, then stick with case insensitive operators. For example, UserPrincipalName =~ "reprise99@learnsentinel.com" would return results regardless of case.
This is also true for a not equals operator. != is case sensitive, and !~ is not.
You also have the ability to use either tolower() or toupper() to force a string to be one or the other.
This can help you make your results more consistent.
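A quick sketch of that, forcing the column to lower case before doing a case sensitive match:
SigninLogs
| extend UserPrincipalName = tolower(UserPrincipalName)
| where UserPrincipalName == "reprise99@learnsentinel.com"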
Use functions to save you time
If you follow my Twitter you know that I write a lot of functions. They are an amazing timesaver in KQL. Say you have written a really great query that tidies data up, or one that combines a few data sources for you. Save it as a function for next time.
My favourite functions are the ones that unify different data sources covering similar operations. Take adding or removing users from groups in Active Directory and Azure Active Directory. You may be interested in events from both environments. Unfortunately, the data structure is completely different. Active Directory events come in via the SecurityEvent table, whereas Azure Active Directory events are logged to the AuditLogs table.
This function I wrote combines the two and unifies the data, so you can search for 'add' events and it will bring back when users were added to groups in either environment. When you deploy this function you can easily create queries such as:
GroupChanges
| where GroupName =~ "Sentinel Test Group"
It will find groups named ‘Sentinel Test Group’ in either AD or AAD. It will return you who was added or removed, who did it and which environment the group belongs to. The actual KQL under the hood does all the hard work for you.
let aaduseradded=
AuditLogs
| where OperationName == "Add member to group"
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend Target = tostring(TargetResources[0].userPrincipalName)
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| extend GroupID = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].newValue)))
| where isnotempty(Actor) and isnotempty(Target)
| extend Environment = strcat("Azure Active Directory")
| extend Action = strcat("Add")
| project TimeGenerated, Action, Actor, Target, GroupName, GroupID, Environment;
let aaduserremoved=
AuditLogs
| where OperationName == "Remove member from group"
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].oldValue)))
| extend GroupID = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].oldValue)))
| extend Target = tostring(TargetResources[0].userPrincipalName)
| where isnotempty(Actor) and isnotempty(Target)
| extend Action = strcat("Remove")
| extend Environment = strcat("Azure Active Directory")
| project TimeGenerated, Action, Actor, Target, GroupName, GroupID, Environment;
let adchanges=
SecurityEvent
| project TimeGenerated, EventID, AccountType, MemberName, SubjectUserName, TargetUserName,TargetSid
| where AccountType == "User"
| where EventID in (4728, 4729, 4732, 4733, 4756, 4757)
| parse MemberName with * 'CN=' Target ',OU=' *
| extend Action = case(EventID in ("4728", "4756", "4732"), strcat("Add"),
EventID in ("4729", "4757", "4733"), strcat("Remove"), "unknown")
| extend Environment = strcat("Active Directory")
| project
TimeGenerated,
Action,
Actor=SubjectUserName,
Target,
GroupName=TargetUserName,
GroupID =TargetSid,
Environment;
union aaduseradded, aaduserremoved, adchanges
It may look complex, but it isn't. We are just taking data that isn't consistent and tidying it up. In AD, when we add a user to a group, the group name is actually stored as 'TargetUserName', which isn't very intuitive, so we rename it to GroupName, and we do the same for Azure AD. The Actor and Target are named differently in AD and AAD, so we just rename them too. Then we add a new column for environment.
KQL isn’t just for Microsoft Sentinel
Not everyone has the budget to use Microsoft Sentinel, and I appreciate that. If you have access to Advanced Hunting you have access to an amazing amount of info there too. Especially if you have an Azure AD P2 license. The following data is available for you, at no additional cost to your existing Defender and Azure AD licensing.
Device events – such as network or logon events.
Email events – emails received or sent, attachment and URL info.
Defender for Cloud Apps – all the logs from DCA and any connected apps.
Alerts – all the alert info from other Defender products.
Defender for Identity – if you use Defender for Identity, all that info is there.
Azure AD Sign In Logs – if you have Azure AD P2 you get all the logon data. For both users and service principals.
The data structure between Sentinel and Advanced Hunting isn’t an exact match, but it is pretty close. Definitely get in there and have a look.
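For example, Advanced Hunting uses Timestamp rather than TimeGenerated, but otherwise a query like this sketch should feel very familiar (the exact DeliveryAction values will depend on your data):
EmailEvents
| where Timestamp > ago(7d)
| summarize Count=count() by DeliveryAction
| sort by Count desc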
Visualize for impact
A picture is worth a thousand words. With all this data in your tenant you can use visualizations for all kinds of things. You can look for anomalies or try to find strange attack patterns. Of course they are good to report up to executives too. Executive summaries showing total emails blocked, or credential attacks stopped, always play well. When building visualizations, I want them to explain the data with no context needed. They should be straightforward and easy to understand.
A couple of examples I really think are valuable. The first shows you successful self service password reset and account unlock events. SSPR is such a great time saver for your helpdesk. It is also often more secure than a traditional password reset as the helpdesk can’t be socially engineered. It is also a great visualization to report upward. It is a time saver, and therefore money saver for your helpdesk, and it’s more secure. Big tick.
AuditLogs
| where TimeGenerated > ago (180d)
| where OperationName in ("Reset password (self-service)", "Unlock user account (self-service)")
| summarize
['Password Reset']=countif(OperationName == "Reset password (self-service)" and ResultDescription == "Successfully completed reset."),
['Account Unlock']=countif(OperationName == "Unlock user account (self-service)" and ResultDescription == "Success")
by startofweek(TimeGenerated)
| render timechart
with (
ytitle="Count",
xtitle="Day",
title="Self Service Password Resets and Account Unlocks over time")
With KQL we can even rename our axes and add a title within the query, then copy and paste the picture. Send it to your boss, show him how amazing you are. Get a pay increase.
And a similar query, showing password vs passwordless sign ins into your tenant. Maybe your boss has heard of passwordless, or zero trust. Show him how you are tracking to help drive change.
SigninLogs
| where TimeGenerated > ago (180d)
| mv-expand todynamic(AuthenticationDetails)
| project TimeGenerated, AuthenticationDetails
| extend AuthMethod = tostring(AuthenticationDetails.authenticationMethod)
| summarize
Passwordless=countif(AuthMethod in ("Windows Hello for Business", "Passwordless phone sign-in", "FIDO2 security key", "X.509 Certificate")),
Password=countif(AuthMethod == "Password")
by bin(TimeGenerated, 1d)
| render timechart with (title="Passwordless vs Password Authentication", ytitle="Count")
Don’t be afraid of making mistakes or writing ‘bad’ queries
For normal logs in Sentinel, there is no cost to run a query. For Advanced Hunting, there is no cost to query. Your licensing and ingestion fees give you the right to try as much as you want. If you can’t find what you are looking for, then start broadly. You can search across all your data easily.
search "reprise99"
It may take a while, but you will get hits. Then find out what tables they are in. Then narrow down your query. I think of writing queries like a funnel. Start broad, then get more specific until you are happy with it.
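One way to do that narrowing is to summarize on the $table column that the search operator returns, which tells you which tables your hits live in:
search "reprise99"
| summarize Count=count() by $table
| sort by Count desc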
In my day to day work and putting together 365 queries to share, I have run just under 60,000 queries in Sentinel itself. Probably another 10,000 or more in Advanced Hunting. A lot of them would have caused errors initially. That is how you will learn! Like anything, practice makes progress.
As I transition to a new role I will keep sharing KQL and other security resources that are hopefully helpful to people. The feedback I have had from everyone has been amazing. So I appreciate you reading and following along.
This was how many queries I ran per day this year!
If you have spent any time in Azure Active Directory, chances are you have stumbled across Azure AD Conditional Access. It is at the very center of Microsoft Zero Trust. At its most basic, it evaluates every sign in to your Azure AD tenant. It takes the different signals that form that sign in: the location a user is coming from, the health of a device, the roles a user has or the groups they are in, even which application is being used to sign in. Once it has all that telemetry, it decides not only whether you are allowed into the tenant, it also dictates the controls required for access. You may have to complete MFA, or your device may need to be compliant. You can block sign ins from particular locations, or require specific applications to be allowed in. When I first looked at Conditional Access I thought of it as a 'firewall for identity'. While that is somewhat true, it undersells the power of Conditional Access, which can make decisions based on a lot more than a traditional firewall can.
Before we go hunting through our data, let’s take a step back. To make sense of that data, here are a couple of key points about Conditional Access.
Many policies can apply to a sign in, and the controls for those policies will be added together. For instance, if you have two policies that control access to Exchange Online, the first requiring MFA and the second device compliance, then the policies are added together. The user must satisfy both MFA and have a compliant device.
Each individual policy can have many controls within it, such as MFA and requiring an approved application. They are evaluated in the following order.
A block policy overrides any allow policy, regardless of controls. If one policy says allow with MFA and one says block, the sign in is blocked.
These are important to note because when we look through our data, we will see multiple policies per sign in. To make this data easier to read, we are going to use the mv-expand operator. The guidance says it "Expands multi-value dynamic arrays or property bags into multiple records". Well, what does that mean? Let's look at an example using the KQL playground. This is a demo environment anyone can access. If you log on there, we can look at one sign in event.
SigninLogs
| where CorrelationId == "cadd2fee-a8b0-4daf-9ac8-cc3ae8ebe15b"
| project ConditionalAccessPolicies
We can see many policies were evaluated; the large JSON structure lists them all, from position 0 to position 11, so 12 policies in total. The problem when hunting this data is that the position of policies can change. If ‘Block Access Julianl’, seen at position 10, is triggered, it would move up higher in the list. So we need to make our data consistent before hunting it. Let’s use our mv-expand operator on the same sign in.
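Reusing the same correlation ID filter from above, the query looks something like this:
SigninLogs
| where CorrelationId == "cadd2fee-a8b0-4daf-9ac8-cc3ae8ebe15b"
| project ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies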
Our mv-expand operator has expanded each of the policies into its own row. We went from one row, with our 12 policy outcomes in one JSON field, to 12 rows, with one outcome each. We don’t need to worry about the location within a JSON array now. We can query our data knowing it is consistent.
For each policy, we will have one of three outcomes
Success – the controls were met. For instance, a user passed MFA on a policy requiring MFA.
Failure – the controls failed. For instance, a user failed MFA on a policy requiring MFA.
Not applied – the policy was not applied to this sign in. For instance, you had a policy requiring MFA for SharePoint. But this sign in was for Service Now, so it didn’t apply.
If you have policies in report only mode you may see those too. Report only mode lets you test policies before deploying them. So the policy will be evaluated, but none of the controls enforced. You will see these events as reportOnlySuccess, reportOnlyFailure and reportOnlyNotApplied.
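If you want to check on just your report only policies, you can reuse the same mv-expand pattern and filter on those results; a quick sketch:
SigninLogs
| where TimeGenerated > ago(30d)
| project TimeGenerated, ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies
| extend CAResult = tostring(ConditionalAccessPolicies.result)
| extend CAPolicyName = tostring(ConditionalAccessPolicies.displayName)
| where CAResult startswith "reportOnly"
| summarize ['Report Only Outcomes']=count() by CAPolicyName, CAResult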
User Sign In Insights
Now that we have the basics sorted, we can query our data. The more users and more policies you have, the more data to evaluate. If you were interested in just seeing some statistics for your policies, we can do that. You can use the evaluate operator to build a table showing all the outcomes.
//Create a pivot table showing all conditional access policy outcomes over the last 30 days
SigninLogs
| where TimeGenerated > ago(30d)
| extend CA = parse_json(ConditionalAccessPolicies)
| mv-expand bagexpansion=array CA
| evaluate bag_unpack(CA)
| extend
['CA Outcome']=tostring(column_ifexists('result', "")),
['CA Policy Name'] = column_ifexists('displayName', "")
| evaluate pivot(['CA Outcome'], count(), ['CA Policy Name'])
These are the same 12 policies we saw earlier. We now have a useful table showing the usage of each.
Using the mv-expand operator further, we can really dig in. This query looks for the users failing the most distinct policies. Is that user compromised, with attackers trying to find a hole in your policies?
//Find which users are failing the most Conditional Access policies, retrieve the total failure count, distinct policy count and the names of the failed policies
SigninLogs
| where TimeGenerated > ago (30d)
| project TimeGenerated, ConditionalAccessPolicies, UserPrincipalName
| mv-expand ConditionalAccessPolicies
| extend CAResult = tostring(ConditionalAccessPolicies.result)
| extend CAPolicyName = tostring(ConditionalAccessPolicies.displayName)
| where CAResult == "failure"
| summarize
['Total Conditional Access Failures']=count(),
['Distinct Policy Failure Count']=dcount(CAPolicyName),
['Policy Names']=make_set(CAPolicyName)
by UserPrincipalName
| sort by ['Distinct Policy Failure Count'] desc
One query I really love running is the following. It hunts through all sign in data, and returns policies that are not in use.
//Find Azure AD conditional access policies that have no hits for 'success' or 'failure' over the last month
//Check that these policies are configured correctly or still required
SigninLogs
| where TimeGenerated > ago (30d)
| project TimeGenerated, ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies
| extend CAResult = tostring(ConditionalAccessPolicies.result)
| extend ['Conditional Access Policy Name'] = tostring(ConditionalAccessPolicies.displayName)
| summarize ['Conditional Access Result']=make_set(CAResult) by ['Conditional Access Policy Name']
| where ['Conditional Access Result'] !has "success"
and ['Conditional Access Result'] !has "failure"
and ['Conditional Access Result'] !has "unknownFutureValue"
| sort by ['Conditional Access Policy Name'] asc
This query uses the summarize operator to build a set of all the outcomes for each policy. We create a set of all the outcomes for that policy – success, not applied, failure. Then we exclude any policy that has a success or a failure. If we see a success or failure event, then the policy is in use. If all we see is ‘not Applied’ then no sign ins have triggered that policy. Maybe the settings aren’t right, or you have excluded too many people?
We can even use some of the more advanced operators to look for anomalies in our data. The series_decompose_anomalies function lets us hunt through time series data, and from that data it flags anything it believes is an anomaly.
//Detect anomalies in the amount of conditional access failures by users in your tenant, then visualize those conditional access failures
//Startdate and enddate = which period of data to look at, i.e. from 21 days ago until 1 day ago.
let startdate=21d;
let enddate=1d;
//Timeframe = time period to break the data up into, i.e 1 hour blocks.
let timeframe=1h;
//Sensitivity = the lower the number the more sensitive the anomaly detection is, i.e it will find more anomalies, default is 1.5
let sensitivity=2;
//Threshold = set this to tune out low count anomalies, i.e when total failures for a user doubles from 1 to 2
let threshold=5;
let outlierusers=
SigninLogs
| where TimeGenerated between (startofday(ago(startdate))..startofday(ago(enddate)))
| where ResultType == "53003"
| project TimeGenerated, ResultType, UserPrincipalName
| make-series CAFailureCount=count() on TimeGenerated from startofday(ago(startdate)) to startofday(ago(enddate)) step timeframe by UserPrincipalName
| extend outliers=series_decompose_anomalies(CAFailureCount, sensitivity)
| mv-expand TimeGenerated, CAFailureCount, outliers
| where outliers == 1 and CAFailureCount > threshold
| distinct UserPrincipalName;
//Optionally visualize the anomalies
SigninLogs
| where TimeGenerated between (startofday(ago(startdate))..startofday(ago(enddate)))
| where ResultType == "53003"
| project TimeGenerated, ResultType, UserPrincipalName
| where UserPrincipalName in (outlierusers)
| summarize CAFailures=count() by UserPrincipalName, bin(TimeGenerated, timeframe)
| render timechart with (ytitle="Failure Count",title="Anomalous Conditional Access Failures")
I am not sure I would want to alert on every Conditional Access failure. You are likely to have a lot of them. But what about users failing Conditional Access to multiple applications in a short time period? This query finds any users that get blocked by Conditional Access from 5 or more unique applications within an hour.
SigninLogs
| where TimeGenerated > ago (1d)
| project TimeGenerated, ConditionalAccessPolicies, UserPrincipalName, AppDisplayName
| mv-expand ConditionalAccessPolicies
| extend CAResult = tostring(ConditionalAccessPolicies.result)
| extend CAPolicyName = tostring(ConditionalAccessPolicies.displayName)
| where CAResult == "failure"
| summarize
['List of Failed Application']=make_set(AppDisplayName),
['Count of Failed Application']=dcount(AppDisplayName)
by UserPrincipalName, bin(TimeGenerated, 1h)
| where ['Count of Failed Application'] >= 5
Audit Insights
The second key part of Conditional Access monitoring is auditing changes. Much like a firewall, changes to Conditional Access policies should be alerted on. Accidental or malicious changes to your policies can decrease your security posture significantly. Any changes to policies are held in the Azure Active Directory audit log table.
Events are logged under three different categories.
Add conditional access policy
Update conditional access policy
Delete conditional access policy
A simple query will return any of these actions in your environment.
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName in ("Update conditional access policy", "Add conditional access policy", "Delete conditional access policy")
You will notice one thing straight away. It is difficult to work out what has actually changed. Most of the items are stored as GUIDs buried in JSON. It is hard to tell the old setting from the new. I wouldn’t even bother trying to make sense of it. Instead let’s update our query to this.
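A version of that query, parsing the policy name and Id out of TargetResources the same way we do later in this article, could look like this:
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName in ("Update conditional access policy", "Add conditional access policy", "Delete conditional access policy")
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ['Policy Name'] = tostring(TargetResources[0].displayName)
| extend ['Policy Id'] = tostring(TargetResources[0].id)
| project TimeGenerated, OperationName, Actor, ['Policy Name'], ['Policy Id']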
Now we are returned the name of our policy, and its Id. Then we can jump into the Azure portal and see the current settings. This is where your knowledge of your environment is key. If you know the ‘Sentinel 101 Test’ policy requires MFA for all sign ins, and someone has changed the policy, you need to investigate.
We can add some more logic to our queries. For instance, we could alert on changes made by people who have never made a change before. Has an admin been compromised? Or has someone not familiar with Conditional Access been asked to make a change?
//Detects users who add, delete or update a Azure AD Conditional Access policy for the first time.
//First find users who have previously made CA policy changes, this example looks back 90 days
let knownusers=
AuditLogs
| where TimeGenerated > ago(90d) and TimeGenerated < ago(1d)
| where OperationName in ("Update conditional access policy", "Add conditional access policy", "Delete conditional access policy")
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| distinct Actor;
//Find new events from users not in the known user list
AuditLogs
| where TimeGenerated > ago(1d)
| where OperationName in ("Update conditional access policy", "Add conditional access policy", "Delete conditional access policy")
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend ['Policy Name'] = tostring(TargetResources[0].displayName)
| extend ['Policy Id'] = tostring(TargetResources[0].id)
| where Actor !in (knownusers)
| project TimeGenerated, Actor, ['Policy Name'], ['Policy Id']
We can even look for actions at certain times of the day, or particular days. This query looks for changes after hours or on weekends.
//Detect changes to Azure AD Conditional Access policies on weekends or outside of business hours
let Saturday = time(6.00:00:00);
let Sunday = time(0.00:00:00);
AuditLogs
| where OperationName in ("Update conditional access policy", "Add conditional access policy", "Delete conditional access policy")
// extend LocalTime to your time zone
| extend LocalTime=TimeGenerated + 5h
// Change hours of the day to suit your company, i.e. this would find changes made between 6pm and 6am
| where dayofweek(LocalTime) in (Saturday, Sunday) or hourofday(LocalTime) !between (6 .. 18)
| extend ['Conditional Access Policy Name'] = tostring(TargetResources[0].displayName)
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| project LocalTime,
OperationName,
['Conditional Access Policy Name'],
Actor
| sort by LocalTime desc
Managing Exclusions
Like any rules or policies in your environment, there is a chance you will need exclusions. Conditional Access policies are very granular in what you can include or exclude. You can exclude on locations, or OS types, or particular users. It is important to alert on these exclusions, and ensure they are fit for purpose. For this example I have excluded a particular group from this policy.
We can see that an ‘Update conditional access policy’ event was triggered. Again, the raw data is hard to read, so jump into the portal and check out what has been configured. Now, one very important note here. If you add a group exclusion to a policy, it will trigger an event you can track. However, if I then add users to that group, it won’t trigger a policy change event. This is because the policy itself hasn’t changed, just the membership of the group. From your point of view you will need visibility of both events. If your policy is changed you would want to know. If 500 users were added to the group, you would also want to know. So we can query group addition events with the below query.
AuditLogs
| where OperationName == "Add member to group"
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend Target = tostring(TargetResources[0].userPrincipalName)
| extend GroupName = tostring(parse_json(tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].newValue)))
| where GroupName has "Conditional Access Exclusion"
| project TimeGenerated, Actor, Target, GroupName
When you are creating exclusions, you want to limit those exclusions down as much as possible. We always talk about the theory of ‘least privilege’. With exclusions, I like to think of them as ‘least exclusion’. If you have a workload that needs excluding, then can we exclude a particular location, or IP, or device? This is a better security stance than a blanket exclusion of a whole policy.
You can often use two policies to achieve the best outcome. Think of the example of Exchange Online, you want to enforce MFA for everyone. But you have a service account that does some automation, and it keeps failing MFA. It signs on from one particular IP address. If you exclude it from your main policy then it is a blanket exclusion. Instead build two policies.
Policy 1 – Require MFA for Exchange Online
Includes all users, excludes your service account
Includes all locations
Includes Exchange Online
Control is require MFA
Policy 2 – Exclude MFA for Exchange Online
Includes only your service account
Includes all locations, excludes a single IP address
Includes Exchange Online
Control is require MFA
As mentioned at the outset, Conditional Access policies are combined. So this combined set of two policies achieves what we want. Our service account is only excluded from MFA from our single IP address. Let’s say the credentials for that account are compromised and the attacker tries to sign in from another location. When they sign into Exchange Online they will be prompted for MFA.
If we had only one policy we wouldn’t get the same control. If we had a single policy and excluded our service account, then it would be excluded from all locations. If we had a single policy and excluded the IP address, then all users would be excluded from that IP. So we need to build two policies to achieve the best outcome.
Of course we want to balance single exclusions with the overhead of managing many policies. The more policies you have, the harder it is to work out the effect of changes. Microsoft provides a ‘what-if’ tool for Conditional Access. It will let you build a ‘fake’ sign in and tell you which policies are applied.
Recommendations
Learning to drive and audit Conditional Access is key to securing Azure AD. Having built a lot of policies over the years, here are some of my tips.
Never, ever lock yourself out of the Azure portal! You get a UI warning if it believes you may be doing this. Support will be able to get you back in, but it will take time. Exclude your own account as you build policies.
Create broad policies that cover the most use cases. If your standard security stance is require MFA to access SSO apps then build one policy. Apply that policy to as many apps and users as possible. There is no need to build an individual policy for each app.
When you create exclusions, use the principle of ‘least exclusion’. When you are building an exclusion, think about the flow-on effect. Will it decrease security for other users or workloads? Use multiple policies where practical to keep your security tight.
Audit any policy changes. Find the policy that was changed and review it in the Azure portal.
Use the ‘what-if’ tool to help you build policies. Remember that multiple policies are combined, and controls within a single policy have an order of operations.
Blocks override any allows!
Try not to keep ‘report only’ policies in report only mode too long. Once you are happy, then enable the policy. Report only should only be there to validate your policy logic.
If you use group exclusions, then monitor the membership of those groups. Users being added to a group that is excluded from a policy won’t trigger a policy change event. Keep on top of how many people are excluded. Once someone is in a group they tend to stay there forever. If an exclusion is temporary, make sure they are removed.
This article is presented as part of the #AzureSpringClean event. The idea of #AzureSpringClean is to promote well managed Azure environments. This article will focus on Azure Active Directory and how we can leverage KQL to keep things neat and tidy.
Much like on-premises Active Directory, Azure Active Directory has a tendency to grow quickly. You have new users or guests being onboarded all the time. You are configuring single sign on to apps. You may create service principals for all kinds of integrations. And again, much like on-premises Active Directory, it is in our best interest to keep on top of all these objects. If users have left the business, or we have decommissioned applications, then we want to clean up all those artefacts.
Microsoft provide tools to help automate some of these tasks – entitlement management and access reviews. Entitlement management lets you manage identity and access at scale. You can build access packages. These access packages can contain all the access a particular role needs. You then overlay just in time access and approval workflows on top.
Access reviews are pretty self explanatory. They let you easily manage group memberships, application and role access. You can schedule access reviews to make sure people only keep the appropriate access.
So if Microsoft provide these tools, why should we dig into the data ourselves? Good question. You may not be licensed for them to start with; they are both Azure AD P2 features. You also may have use cases that fall outside the capability of those products. Using KQL and the raw data, we can find all kinds of trends in our Azure AD tenant.
First things first though, we will need that data in a workspace! You can choose which Log Analytics workspace to send it to from the Azure Active Directory -> Diagnostic settings tab. If you use Microsoft Sentinel, you can achieve the same via the Azure Active Directory data connector.
You can pick and choose what you like. This article is going to cover these three items –
SignInLogs – all your normal sign ins to Azure AD.
AuditLogs – all the administrative activities in your tenant, like guest invites and redemptions.
ServicePrincipalSignInLogs – sign ins for your Service Principals.
Two things to note: you need Azure AD P1 to export this data, and there are Log Analytics ingestion costs.
Let’s look at seven areas of Azure Active Directory –
Users and Guests
Service Principals
Enterprise Applications
Privileged Access
MFA and Passwordless
Legacy Auth
Conditional Access
And for each, we will write some example queries looking for interesting trends. Hopefully in your tenant they can provide some useful information. The more historical data you have, the more useful your trends will be of course. But even just having a few weeks’ worth of data is valuable.
To make things even easier, for most of these queries I have used the Log Analytics demo environment. You may not yet have a workspace of your own, but you still want to test the queries out. The demo environment is free to use for anyone. Some of the data types aren’t available in there, but I have tried to use it as much as possible.
You can access the demo tenant here. You just need to login with any Microsoft account – personal or work, and away you go.
Users and Guests
User lifecycle management can be hard work! Using Azure AD guests can add to that complexity. Guests likely work for other companies or partners. You don’t manage them fully in the way you would your own staff.
Let’s start by finding when our users last signed in. Maybe you want to know when users haven’t signed in for more than 45 days. We can even retrieve our user type at the same time. You could start by disabling these accounts.
SigninLogs
| where TimeGenerated > ago(365d)
| where ResultType == "0"
| summarize arg_max(TimeGenerated, *) by UserPrincipalName
| project TimeGenerated, UserPrincipalName, UserType, ['Days Since Last Logon']=datetime_diff("day", now(),TimeGenerated)
| where ['Days Since Last Logon'] >= 45 | sort by ['Days Since Last Logon'] desc
We use a really useful operator in this query called datetime_diff. It lets us calculate the time between two events in a way that is easier for us to read. So in this example, we calculate the difference between the last sign in and now in days. UTC time can be hard to read, so let KQL do the heavy lifting for you.
We can even visualize the trend of our last sign ins. In this example we look at when our inbound guests last signed in. Inbound guests are those from other tenants connecting to yours. To do this, we summarize our data twice. First we get the last sign in date for each guest. Then we group that data into each month.
SigninLogs
| where TimeGenerated > ago (360d)
| where UserType == "Guest"
| where AADTenantId != HomeTenantId and HomeTenantId != ResourceTenantId
| where ResultType == 0
| summarize arg_max(TimeGenerated, *) by UserPrincipalName
| project TimeGenerated, UserPrincipalName
| summarize ['Count of Last Signin']=count() by startofmonth(TimeGenerated)
| render columnchart with (title="Guest inactivity per month")
Another interesting thing with Azure AD guests is that invites never expire. So once you invite a guest the pending invite will be there forever. You can use KQL to find invites that have been sent but not redeemed.
let timerange=180d;
let timeframe=30d;
AuditLogs
| where TimeGenerated between (ago(timerange) .. ago(timeframe))
| where OperationName == "Invite external user"
| extend GuestUPN = tolower(tostring(TargetResources[0].userPrincipalName))
| summarize arg_max(TimeGenerated, *) by GuestUPN
| project TimeGenerated, GuestUPN
| join kind=leftanti (
AuditLogs
| where TimeGenerated > ago (timerange)
| where OperationName == "Redeem external user invite"
| where CorrelationId <> "00000000-0000-0000-0000-000000000000"
| extend d = tolower(tostring(TargetResources[0].displayName))
| parse d with * "upn: " GuestUPN "," *
| project TimeGenerated, GuestUPN)
on GuestUPN
| project TimeGenerated, GuestUPN, ['Days Since Invite Sent']=datetime_diff("day", now(), TimeGenerated)
For this we join two queries – guest invites and guest redemptions. Then search for when there isn’t a redemption. We then re-use our datetime_diff to work out how many days since the invite was sent. For this query we also exclude invites sent in the last 30 days. Those guests may just not have gotten around to redeeming their invites yet. Once a user has been invited, the user object already exists in your tenant. It just sits there idle until they redeem the invite. If they haven’t accepted the invite in 45 days, then it is probably best to delete the user objects.
Service Principals
The great thing about KQL is once we write a query we like, we can easily re-use it. Service principals are central to everything in Azure AD; they control what your applications can access. Much like users, we may no longer be using some of our service principals. Perhaps that application has been decommissioned, or the integration that was in use has been retired. Much like users, if they are no longer in use, we should remove them.
Let’s re-use our inactive user query, and this time look for inactive service principals.
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(365d)
| where ResultType == "0"
| summarize arg_max(TimeGenerated, *) by AppId
| project TimeGenerated, ServicePrincipalName, ['Days Since Last Logon']=datetime_diff("day", now(),TimeGenerated)
| where ['Days Since Last Logon'] >= 45 | sort by ['Days Since Last Logon'] desc
Have a look through the list and see which can be deleted.
Service principals can fail to sign in for many reasons, much like regular users. With regular users though we get an easy to read description that can help us out. With service principals, we unfortunately just get an error code. Using the case operator we can add our own friendly descriptions to help us out. We just say, when our result code is this, then provide us an easy to read description.
AADServicePrincipalSignInLogs
| where ResultType != "0"
| extend ErrorDescription = case (
ResultType == "7000215", strcat("Invalid client secret is provided"),
ResultType == "7000222", strcat("The provided client secret keys are expired"),
ResultType == "700027", strcat("Client assertion failed signature validation"),
ResultType == "700024", strcat("Client assertion is not within its valid time range"),
ResultType == "70021", strcat("No matching federated identity record found for presented assertion"),
ResultType == "500011", strcat("The resource principal named {name} was not found in the tenant named {tenant}"),
ResultType == "700082", strcat("The refresh token has expired due to inactivity"),
ResultType == "90025", strcat("Request processing has exceeded gateway allowance"),
ResultType == "500341", strcat("The user account {identifier} has been deleted from the {tenant} directory"),
ResultType == "100007", strcat("AAD Regional ONLY supports auth either for MSIs OR for requests from MSAL using SN+I for 1P apps or 3P apps in Microsoft infrastructure tenants"),
ResultType == "1100000", strcat("Non-retryable error has occurred"),
ResultType == "90033", strcat("A transient error has occurred. Please try again"),
ResultType == "53003",strcat("Access has been blocked by Conditional Access policies. The access policy does not allow token issuance."),
"Unknown"
)
| project TimeGenerated, ServicePrincipalName, ServicePrincipalId, ErrorDescription, ResultType, IPAddress
You may be particularly interested in signins with expired or invalid secrets. Are the service principals still in use? Perhaps you can remove them. Or you may be interested where conditional access blocks a service principal sign-in.
Have the credentials for that service principal leaked? It may be worth investigating and rotating credentials if required.
Enterprise Applications
If an application has had no sign in activity for a long time, that could be a sign of a couple of things. Firstly, you may have retired that application. If that is the case, then you should delete the enterprise application from your tenant.
Secondly, it may mean that people are bypassing SSO to access the application. For example, you may use a product like Confluence. You may have enabled SSO to it, but users still have the ability to sign on using ‘local’ credentials. Maybe users do that because it is more convenient to bypass conditional access. For those applications you know are still in use, but you aren’t seeing any activity you should investigate. If the applications have the ability to prevent the use of local credentials then you should enable that. Perhaps you have the ability to set the password for local accounts, you could set them to something random the users don’t know to enforce SSO.
If those technical controls don’t exist, you may need to try softer controls. You should try to get buy-in from the application owners or users and explain the risks of local credentials. A good point to highlight is that when a user leaves an organization their account is disabled. When that happens, they lose access to any SSO enforced applications. In applications that use local credentials, the lifecycle of accounts is likely poorly managed. Application owners usually don’t want ex-employees still having access to data, so that may help enforce good behaviour.
We can find apps that have had no sign ins in the last 30 days easily.
SigninLogs
| where TimeGenerated > ago (365d)
| where ResultType == 0
| summarize arg_max(TimeGenerated, *) by AppId
| project
AppDisplayName,
['Last Logon Time']=TimeGenerated,
['Days Since Last Logon']=datetime_diff("day", now(), TimeGenerated)
| where ['Days Since Last Logon'] > 30 | sort by ['Days Since Last Logon'] desc
Maybe you are interested in application usage more generally. We can bring back some stats for each of your applications. Perhaps you want to see total sign ins to each vs distinct sign ins. Some applications may be very noisy with their sign in data. But when you look at distinct users, they aren’t as busy as you thought.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize ['Total Signins']=count(), ['Distinct User Signins']=dcount(UserPrincipalName) by AppDisplayName | sort by ['Distinct User Signins'] desc
You may also be interested in the breakdown of guests vs members for each application. Maybe guests are accessing something they aren’t meant to. If you notice that, you can put a group in front of that app to control access.
For this query we use the dcountif operator. Which returns a distinct count of a column where something is true. So for this example, we return a distinct user count where the UserType is a member. Then again for guests.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize ['Distinct Member Signins']=dcountif(UserPrincipalName, UserType == "Member"), ['Distinct Guest Signins']=dcountif(UserPrincipalName, UserType == "Guest") by AppDisplayName | sort by ['Distinct Guest Signins']
Use your knowledge of your environment to make sense of the results. If you have lots of guests accessing something you didn’t expect, then investigate.
Privileged Access
As always, your privileged users deserve extra scrutiny. You can detect when a user accesses particular Azure applications for the first time. This query looks back 90 days, then detects users who access one of these applications for the first time today.
//Detects users who have accessed Azure AD Management interfaces who have not accessed in the previous timeframe
let timeframe = startofday(ago(90d));
let applications = dynamic(["Azure Active Directory PowerShell", "Microsoft Azure PowerShell", "Graph Explorer", "ACOM Azure Website"]);
SigninLogs
| where TimeGenerated > timeframe and TimeGenerated < startofday(now())
| where AppDisplayName in (applications)
| project UserPrincipalName, AppDisplayName
| join kind=rightanti
(
SigninLogs
| where TimeGenerated > startofday(now())
| where AppDisplayName in (applications)
)
on UserPrincipalName, AppDisplayName
| where ResultType == 0
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName, IPAddress, Location, UserAgent
You could expand the list to include privileged applications specific to your environment too.
If you use Azure AD Privileged Identity Management (PIM) you can keep an eye on those actions too. For example, we can find users who haven’t elevated to a role for over 30 days. If you have users with privileged roles but they aren’t actively using them then they should be removed. This query also returns you the role which they last activated.
AuditLogs
| where TimeGenerated > ago (365d)
| project TimeGenerated, OperationName, Result, TargetResources, InitiatedBy
| where OperationName == "Add member to role completed (PIM activation)"
| where Result == "success"
| extend ['Last Role Activated'] = tostring(TargetResources[0].displayName)
| extend Actor = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| summarize arg_max(TimeGenerated, *) by Actor
| project Actor, ['Last Role Activated'], ['Last Activation Time']=TimeGenerated, ['Days Since Last Activation']=datetime_diff("day", now(), TimeGenerated)
| where ['Days Since Last Activation'] >= 30
| sort by ['Days Since Last Activation'] desc
One of the biggest strengths of KQL is manipulating time. We can use that capability to add some logic to our queries. For example, we can find PIM elevation events that are outside of business hours.
let timerange=30d;
AuditLogs
// extend LocalTime to your time zone
| extend LocalTime=TimeGenerated + 5h
| where LocalTime > ago(timerange)
// Change hours of the day to suit your company, i.e this would find activations between 6pm and 6am
| where hourofday(LocalTime) !between (6 .. 18)
| where OperationName == "Add member to role completed (PIM activation)"
| extend RoleName = tostring(TargetResources[0].displayName)
| project LocalTime, OperationName, Identity, RoleName, ActivationReason=ResultReason
If this is unexpected behaviour for you then it’s worth looking at. Maybe an account has been compromised. Or it could be a sign of malicious insider activity from your admins.
MFA & Passwordless
In a perfect world we would have MFA on everything. That may not be the reality in your tenant. In fact, it’s not the reality in many tenants. For whatever reason your MFA coverage may be patchy. You could be on a roadmap to deploying it, or trying to onboard applications to SSO.
Our sign on logs provide great insight to single factor vs multi factor connections. We can summarize and visualize that data in different ways to track your MFA progress. If you want to just look across your tenant as a whole we can do that of course.
SigninLogs
| where TimeGenerated > ago (30d)
| summarize ['Single Factor Authentication']=countif(AuthenticationRequirement == "singleFactorAuthentication"), ['Multi Factor Authentication']=countif(AuthenticationRequirement == "multiFactorAuthentication") by bin(TimeGenerated, 1d)
| render timechart with (ytitle="Count", title="Single vs Multifactor Authentication last 30 days")
There is some work to be done in the demo tenant!
You can even build a table out of all your applications. From that we can count the percentage of sign ins that are covered by MFA. This may give you some direction to enabling MFA.
let timerange=30d;
SigninLogs
| where TimeGenerated > ago(timerange)
| where ResultType == 0
| summarize
TotalCount=count(),
MFACount=countif(AuthenticationRequirement == "multiFactorAuthentication"),
nonMFACount=countif(AuthenticationRequirement == "singleFactorAuthentication")
by AppDisplayName
| project AppDisplayName, TotalCount, MFACount, nonMFACount, MFAPercentage=(todouble(MFACount) * 100 / todouble(TotalCount))
| sort by MFAPercentage desc
If that much data is too overwhelming, why not start with your most popular applications? Here we use the same logic, but first calculate our top 20 applications.
let top20apps=
SigninLogs
| where TimeGenerated > ago (30d)
| summarize UserCount=dcount(UserPrincipalName) by AppDisplayName
| sort by UserCount desc
| take 20
| project AppDisplayName;
//Use that list to calculate the percentage of signins to those apps that are covered by MFA
SigninLogs
| where TimeGenerated > ago (30d)
| where AppDisplayName in (top20apps)
| summarize TotalCount=count(),
MFACount=countif(AuthenticationRequirement == "multiFactorAuthentication"),
nonMFACount=countif(AuthenticationRequirement == "singleFactorAuthentication")
by AppDisplayName
| project AppDisplayName, TotalCount, MFACount, nonMFACount, MFAPercentage=(todouble(MFACount) * 100 / todouble(TotalCount))
| sort by MFAPercentage asc
Passwordless technology has been around for a little while now, but it is only now starting to hit the mainstream. Azure AD provides lots of different options: FIDO2 keys, Windows Hello for Business, phone sign in and more. You can track password vs passwordless sign ins to your tenant.
let timerange=180d;
SigninLogs
| project TimeGenerated, AuthenticationDetails
| where TimeGenerated > ago (timerange)
| extend AuthMethod = tostring(parse_json(AuthenticationDetails)[0].authenticationMethod)
| where AuthMethod != "Previously satisfied"
| summarize
Password=countif(AuthMethod == "Password"),
Passwordless=countif(AuthMethod in ("FIDO2 security key", "Passwordless phone sign-in", "Windows Hello for Business", "Mobile app notification","X.509 Certificate"))
by startofweek(TimeGenerated)
| render timechart with ( xtitle="Week", ytitle="Signin Count", title="Password vs Passwordless signins per week")
Passwordless needs a little more love in the demo tenant for sure!
You could even go one better, and track each type of passwordless technology. Then you can see which is the most favored.
let timerange=180d;
SigninLogs
| project TimeGenerated, AuthenticationDetails
| where TimeGenerated > ago (timerange)
| extend AuthMethod = tostring(parse_json(AuthenticationDetails)[0].authenticationMethod)
| where AuthMethod in ("FIDO2 security key", "Passwordless phone sign-in", "Windows Hello for Business", "Mobile app notification","X.509 Certificate")
| summarize ['Passwordless Method']=count() by AuthMethod, startofweek(TimeGenerated)
| render timechart with ( xtitle="Week", ytitle="Signin Count", title="Passwordless methods per week")
Legacy Authentication
There is no better managed Azure AD tenant than one where legacy auth is completely disabled. Legacy auth is a security issue because it isn’t MFA aware. If one of your users is compromised, the attacker could bypass MFA policies by using legacy clients such as IMAP or ActiveSync. The only conditional access rules that work for legacy auth are allow or block. Because conditional access defaults to allow, unless you explicitly block legacy auth, those connections will be allowed.
Microsoft are looking to retire legacy auth in Exchange Online on October 1st, 2022, which is fantastic. However, legacy auth can be used for non-Exchange Online workloads. We can use our sign in log data to track exactly where legacy auth is used. That way we can not only be ready for October 1, but maybe we can retire it from our tenant way before then. Win win!
The Azure AD sign in logs contain useful information about what app is being used during a legacy sign in. Let’s start there and look at all the various legacy client apps. We can also retrieve the users for each.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| where ClientAppUsed in ("Exchange ActiveSync", "Exchange Web Services", "AutoDiscover", "Unknown", "POP3", "IMAP4", "Other clients", "Authenticated SMTP", "MAPI Over HTTP", "Offline Address Book")
| summarize ['Count of legacy auth attempts'] = count() by ClientAppUsed, UserPrincipalName
| sort by ClientAppUsed asc, ['Count of legacy auth attempts'] desc
This will show us each client app, such as IMAP or ActiveSync, and list the most active users for each. That will give you good direction to start migrating users and applications to modern auth.
If you want to visualize how you are going with disabling legacy auth, we can do that too. We can even compare that to how many legacy auth connections are blocked.
SigninLogs
| where TimeGenerated > ago(180d)
| where ResultType in ("0", "53003")
| where ClientAppUsed in ("Exchange ActiveSync", "Exchange Web Services", "AutoDiscover", "Unknown", "POP3", "IMAP4", "Other clients", "Authenticated SMTP", "MAPI Over HTTP", "Offline Address Book")
| summarize
['Legacy Auth Users Allowed']=dcountif(UserPrincipalName, ResultType == 0),
['Legacy Auth Users Blocked']=dcountif(UserPrincipalName, ResultType == 53003)
by bin(TimeGenerated, 1d)
| render timechart with (ytitle="Count",title="Legacy auth distinct users allowed vs blocked by Conditional Access")
Hopefully your allowed connections are on the decrease.
You may be wondering why blocks aren’t increasing at the same time. That is easily explained. For instance, say you migrate from ActiveSync to using the Outlook app on your phone. Once you make that change, there simply won’t be any legacy auth connections to block anymore. Visualizing the blocks however provides a nice baseline. If you see a sudden spike, then something out there is still trying to connect and you should investigate.
Conditional Access
Azure AD conditional access is key to security for your Azure AD tenant. It decides who is allowed in, or isn’t. It also defines the rules people must follow to be allowed in. For instance, requiring multi factor authentication. Azure AD conditional access evaluates every sign into your tenant, and decides if they are approved to enter. The detail for conditional access evaluation is held within every sign in event. If we look at a sign in event from the demo environment, we can see what this data looks like.
So Azure AD evaluated this sign in. It determined the policy ‘MeganB MCAS Proxy’ was in scope for this sign in. So it then enforced the sign in to go through Defender for Cloud Apps (previously Cloud App Security). You can imagine if you have lots of policies, and lots of sign ins, this is a huge amount of data. We can summarize this data in lots of ways. Like any security control, we should regularly review it to confirm the control is in use. We can find any policies that have had no success events (user allowed in) and no failure events (user blocked).
SigninLogs
| where TimeGenerated > ago (30d)
| project TimeGenerated, ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies
| extend CAResult = tostring(ConditionalAccessPolicies.result)
| extend CAPolicyName = tostring(ConditionalAccessPolicies.displayName)
| summarize CAResults=make_set(CAResult) by CAPolicyName
| where CAResults !has "success" and CAResults !has "failure"
In this test tenant we can see several policies returned by this query.
Some of these policies are not enabled. Some are in report only mode. Or they are simply not applying to any sign ins. You should review this list to make sure it is what you expect. If you are seeing lots of ‘notApplied’ results, make sure you have configured your policies properly.
If you wanted to focus on conditional access failures specifically, you can do that too. This query will find any policy with failures, then return the reason for the failure. This can be simply informational for you to show they are working as intended. Or if you are getting excessive failures maybe your policy needs tuning.
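A sketch of that kind of query, reusing the same mv-expand pattern and assuming the ResultDescription field holds the failure detail you care about, might look like this:
SigninLogs
| where TimeGenerated > ago (30d)
| project TimeGenerated, ResultDescription, ConditionalAccessPolicies
| mv-expand ConditionalAccessPolicies
| extend CAResult = tostring(ConditionalAccessPolicies.result)
| extend CAPolicyName = tostring(ConditionalAccessPolicies.displayName)
| where CAResult == "failure"
| summarize ['Failure Count']=count() by CAPolicyName, ResultDescription
| sort by ['Failure Count'] desc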
You could visualize the same failures if you wanted to look at any trends or spikes.
let start = now(-90d);
let end = now();
let timeframe= 12h;
SigninLogs
| project TimeGenerated, ResultType, ConditionalAccessPolicies
| where ResultType == 53003
| extend FailedPolicy = tostring(ConditionalAccessPolicies[0].displayName)
| make-series FailureCount = count() default=0 on TimeGenerated in range(start,end, timeframe) by FailedPolicy
| render timechart
Summary
I have provided you with a few examples of different queries to help manage your tenant. KQL gives you the power to manipulate your data in so many ways. Have a think about what is important to you. You can then hopefully use the above examples as a starting point to find what you need.
If you are licensed for the Microsoft provided tools then definitely use them. However if there are gaps, don’t be scared of looking at the data yourself. KQL is powerful and easy to use.
There are also a number of provided workbooks in your Azure AD tenant you can use too. You can find them under ‘workbooks’ in Azure AD. They cover some queries similar to this, and plenty more.
You need to combine the Microsoft tools and your own knowledge to effectively manage your directory.
The InfoSec community is amazing at providing insight into ransomware and malware attacks. There are so many fantastic contributors who share indicators of compromise (IOCs) and all kinds of other data. Community members and vendors publish detailed articles on various attacks that have occurred.
Usually these reports contain two different things. Indicators of compromise (IOCs) and tactics, techniques and procedures (TTPs). What is the difference?
Indicators of compromise – evidence that an attack has occurred. This could be a malicious IP address or domain, or the hashes of files. These indicators are often shared throughout the community. You can hunt for IOCs on places like VirusTotal.
Tactics, techniques and procedures – describe the behaviour of how an attack occurred. These read more like a story of the attack. They are the ‘why’, the ‘what’ and the ‘how’ of an attack. Initial access was via phishing. Then reconnaissance. Then execution was via exploiting a scheduled task on a machine. These are also known as attack or kill chains. The idea being if you detected the attack earlier in the chain, the damage could have been prevented.
Using a threat intelligence source which provides IOCs is a key part of sound defence. If you detect known malicious files or domains in your environment then you need to react. There is, however, a delay between an attack occurring and these IOCs being available. Due to privacy, legal requirements or dozens of other reasons, some IOCs may never be public. They can also change: new malicious domains or IPs can come online, and file hashes can change. That doesn’t make IOCs any less valuable. IOCs are still crucial and important in detection.
We just need to pair our IOC detection with TTP/kill chain detection to increase our defence. These kind of detections look for behaviour rather than specific IOCs. We want to try and detect suspicious activities, so that we can be alerted on potential attacks with no known IOCs. Hopefully these detections also occur earlier in the attack timeline and we are alerted before damage is done.
Take for example the Trojan.Killdisk / HermeticWiper malware that has recently been documented. There are a couple of great write ups about the attack timeline. Symantec released this post which provides great insight. And Senior Microsoft Security Researcher Thomas Roccia (who you should absolutely follow) put together this really useful infographic. It visualizes the progression of the attack in a way that is easy to understand and follow, covering both indicators and TTPs.
This article won’t focus on IOC detection, there are so many great resources for that. Instead we will work through the infographic and Symantec attack chain post. For each step in the chain, we will try to come up with a behavioural detection. Not one that focuses on any specific IOC, but to catch the activity itself. Using event logs and data taken from Microsoft Defender for Endpoint, we can generate some valuable alert rules.
From Thomas’ infographic we can see some early reconnaissance and defence evasion.
The attacker enumerated which privileges the account had. We can find these events with the following query.
DeviceProcessEvents
| where FileName == "whoami.exe" and ProcessCommandLine contains "priv"
| project TimeGenerated, DeviceName, InitiatingProcessAccountName, FileName, InitiatingProcessCommandLine, ProcessCommandLine
We get a hit for someone looking at the privileges of the logged on account. This activity should not be occurring often in your environment outside of security staff.
The attacker then disabled the volume shadow copy service (VSS), to prevent restoration. When services are disabled they trigger Event ID 7040 in your system logs.
The following query searches for the specific service disabled in this case. You could easily remove the ‘ServiceName == “Volume Shadow Copy”’ line, which would return all disabled services. That may be an unusual enough event in your environment that you want to know about it.
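This is a sketch of that query, assuming your Windows System event log is being collected into the Event table and that the standard 7040 message text is intact:
Event
| where EventLog == "System" and EventID == 7040
// 7040 messages read like "The start type of the Volume Shadow Copy service was changed from demand start to disabled"
| extend ServiceName = extract(@"start type of the (.+) service was changed", 1, RenderedDescription)
| where RenderedDescription has "disabled"
| where ServiceName == "Volume Shadow Copy"
| project TimeGenerated, Computer, ServiceName, RenderedDescription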
If we switch over to the Symantec article we can continue the timeline. So post compromise of a vulnerable Exchange server, the first activity noted is.
The decoded PowerShell was used to download a JPEG file from an internal server, on the victim’s network.
The article states they have decoded the PowerShell to make it readable for us, which means it was encoded during the attack. Maybe our first rule could be searching for PowerShell that has been encoded? We can achieve that. Start with a broad query: look for PowerShell and anything with an -enc or -encodedcommand switch.
DeviceProcessEvents
| where ProcessCommandLine contains "powershell" or InitiatingProcessCommandLine contains "powershell"
| where ProcessCommandLine contains "-enc" or ProcessCommandLine contains "-encodedcommand" or InitiatingProcessCommandLine contains "-enc" or InitiatingProcessCommandLine contains "-encodedcommand"
If we wanted to use some more advanced operators, we could extract the encoded string, then attempt to decode it within our query. Query modified from this post.
DeviceProcessEvents
| where ProcessCommandLine contains "powershell" or InitiatingProcessCommandLine contains "powershell"
| where ProcessCommandLine contains "-enc" or ProcessCommandLine contains "-encodedcommand" or InitiatingProcessCommandLine contains "-enc" or InitiatingProcessCommandLine contains "-encodedcommand"
| extend EncodedCommand = extract(@'\s+([A-Za-z0-9+/]{20}\S+$)', 1, ProcessCommandLine)
| where EncodedCommand != ""
| extend DecodedCommand = base64_decode_tostring(EncodedCommand)
| where DecodedCommand != ""
| project TimeGenerated, DeviceName, InitiatingProcessAccountName, InitiatingProcessCommandLine, ProcessCommandLine, EncodedCommand, DecodedCommand
We can see a result where I encoded a PowerShell command to create a local account on this device.
We use regex to extract the encoded string. Then we use the base64_decode_tostring operator to decode it for us. This second query only returns results when the string can be decoded. So have a look at both queries and see the results in your environment.
This is a great example of hunting IOCs vs TTPs. We aren’t hunting for specific PowerShell commands. We are hunting for the behaviour of encoded PowerShell.
The next step was –
A minute later, the attackers created a scheduled task to execute a suspicious ‘postgresql.exe’ file, weekly on a Wednesday, specifically at 11:05 local-time. The attackers then ran this scheduled task to execute the task.
Attackers may lack the privilege to launch an executable under system, but they may have the privilege to update or create a scheduled task running under a different user context. They could change an existing task from a non-malicious to a malicious executable. In this example they have created a scheduled task with a malicious executable. Scheduled task creation is a specific event in Defender, so we can track those. We can also track changes and deletions of scheduled tasks.
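Using the ScheduledTaskCreated action type we also rely on later in this post, a simple version of that detection might be:
DeviceEvents
| where ActionType == "ScheduledTaskCreated"
| extend ScheduledTaskName = tostring(AdditionalFields.TaskName)
| project TimeGenerated, DeviceName, InitiatingProcessAccountName, InitiatingProcessCommandLine, ScheduledTaskName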
There is a good chance you get significant false positives with this query. If you read on we will try to tackle that at the end.
Following from the scheduled task creation and execution, Symantec notes that next –
Beginning on February 22, Symantec observed the file ‘postgresql.exe’ being executed and used to perform the following
Execute certutil to check connectivity to trustsecpro[.]com and whatismyip[.]com
Execute a PowerShell command to download another JPEG file from a compromised web server – confluence[.]novus[.]ua
So the attackers leveraged certutil.exe to check internet connectivity. Certutil can be used to do this, and even download files. We can use our DeviceNetworkEvents table to find this kind of event.
We search for DeviceNetworkEvents where the initiating process command line includes certutil. We can also filter on only connections where the Remote IP is public if you have legitimate internal use.
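That query might look something like the below, keeping only connections to public IP addresses:
DeviceNetworkEvents
| where InitiatingProcessCommandLine contains "certutil"
| where RemoteIPType == "Public"
| project TimeGenerated, DeviceName, InitiatingProcessAccountName, InitiatingProcessCommandLine, RemoteIP, RemoteUrl, RemotePort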
We can see where I used certutil to download GhostPack from GitHub. I even attempted to obfuscate the command line, but we still found it. This is another great example of searching for TTPs. We don’t hunt for certutil.exe connecting to a specific IOC, but anytime it connects to the internet.
The next activity was credential dumping –
Following this activity, PowerShell was used to dump credentials from the compromised machine
There are many ways to dump credentials from a machine; many are outlined here. We can detect procdump usage or comsvcs.dll exploitation. For comsvcs –
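A sketch that looks for the classic rundll32 comsvcs.dll MiniDump invocation (the exact command line strings here are assumptions, tune them to what you see in your environment):
DeviceProcessEvents
| where ProcessCommandLine has "comsvcs.dll" and ProcessCommandLine has "MiniDump"
| project TimeGenerated, DeviceName, InitiatingProcessAccountName, InitiatingProcessCommandLine, ProcessCommandLine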
We can see that as part of running these scripts, the execution policy was changed. PowerShell execution bypass activity can be found easily enough.
DeviceProcessEvents
| where TimeGenerated > ago(1h)
| project InitiatingProcessAccountName, InitiatingProcessCommandLine
| where InitiatingProcessCommandLine has_all ("powershell","bypass")
This is another one that is going to be high volume. Let’s try and tackle that now.
With any queries that are relying on behaviour there is a chance for false positives. With false positives comes alert fatigue. We don’t want a legitimate alert buried in a mountain of noise. Hopefully the above queries don’t have any false positives in your environment. Unfortunately, that is not likely to be true. The nature of these attack techniques is they leverage tools that are used legitimately. We can try to tune these alerts down by whitelisting particular servers or commands. We don’t want to whitelist the server that is compromised.
Instead, we could look at adding some more intelligence to our queries. To do that we can try to add a baseline to our environment. Then we alert when something new occurs.
We build these types of queries by using an anti join in KQL. Anti joins can be a little confusing, so let’s try to visualize them from a security point of view.
First, think of a regular (or inner) join in KQL. We take two queries or tables and join them together on a field (or fields) that exist in both tables. Maybe you have firewall data and Active Directory data. Both have IP address information so you can join them together. Have a read here for an introduction to inner joins. We can visualize an inner join like this.
So for a regular (or inner) join, we write two queries, then match them on something that is the same in both. Maybe an IP address, or a username. Once we join we can retrieve information back from both tables.
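As a rough sketch of the shape, with a hypothetical FirewallLogs table joined to our sign in data on IP address:
// FirewallLogs and its columns are hypothetical, purely to show the shape of an inner join
FirewallLogs
| project SourceIP, DestinationPort
| join kind=inner (
    SigninLogs
    | project UserPrincipalName, IPAddress
    ) on $left.SourceIP == $right.IPAddress
| project UserPrincipalName, SourceIP, DestinationPort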
When we expand on this, we can do anti-joins. Let’s visualize a leftanti join.
So we can again write two queries, join them on a matching field. But this time, we only return data from the first (left) query. A rightanti join is the opposite.
For rightanti joins we run our two queries. We match on our data. But this time we only return results that exist in the second (or right) query.
With joins in KQL, you don’t need to join between two different data sets, which can be confusing to grasp at first. You can join the same table to itself, with different query options. So we can query the DeviceEvents table for one set of data, query the DeviceEvents table again with different parameters, then join them in different ways. When joining the same table together I think of it like this –
Use a leftanti join when you want to detect when something stops happening.
Use a rightanti join when you want to detect when something happens for the first time.
Now let’s see how we apply these joins to our detection rules.
Scheduled task creation is a good one to use as an example. Chances are you have legitimate software on your devices that create tasks. We will use our rightanti join to add some intelligence to our query.
Let’s look at the following query.
DeviceEvents
| where TimeGenerated > ago(30d) and TimeGenerated < ago(1h)
| where ActionType == "ScheduledTaskCreated"
| extend ScheduledTaskName = tostring(AdditionalFields.TaskName)
| distinct ScheduledTaskName
| join kind=rightanti
(DeviceEvents
| where TimeGenerated > ago(1h)
| where ActionType == "ScheduledTaskCreated"
| extend ScheduledTaskName = tostring(AdditionalFields.TaskName)
| project TimeGenerated, DeviceName, ScheduledTaskName, InitiatingProcessAccountName)
on ScheduledTaskName
| project TimeGenerated, DeviceName, InitiatingProcessAccountName, ScheduledTaskName
Our first (or left) query looks at our DeviceEvents. We go back between 30 days ago and one hour ago. From that data, all we care about are the names of all the scheduled tasks that have been created. So we use the distinct operator. That first query becomes our baseline for our environment.
Next we select our join type. Kind = rightanti. We join back to the same table, DeviceEvents. This time though, we are only interested in the last hour of data. We retrieve the TimeGenerated, DeviceName, InitiatingProcessAccountName and ScheduledTaskName.
Then we tell KQL what field we want to join on. We want to join on ScheduledTaskName. Then return only data that is new in the last hour.
So to recap. First find all the scheduled tasks created between 30 days and an hour ago. Then find me all the scheduled tasks created in the last hour. Finally, only retrieve tasks that are new to our environment in the last hour. That is how we do a rightanti join.
Another example is PowerShell commands that change the execution policy to bypass. You probably see plenty of these in your environment.
DeviceProcessEvents
| where TimeGenerated > ago(30d) and TimeGenerated < ago(1h)
| project InitiatingProcessAccountName, InitiatingProcessCommandLine
| where InitiatingProcessCommandLine has_all ("powershell","bypass")
| distinct InitiatingProcessAccountName, InitiatingProcessCommandLine
| join kind=rightanti (
DeviceProcessEvents
| where TimeGenerated > ago(1h)
| project
TimeGenerated,
DeviceName,
InitiatingProcessAccountName,
InitiatingProcessCommandLine
| where InitiatingProcessAccountName !in ("system","local service","network service")
| where InitiatingProcessCommandLine has_all ("powershell","bypass")
)
on InitiatingProcessAccountName, InitiatingProcessCommandLine
This query is nearly the same as the previous one. We look back between 30 days and one hour ago, but this time we query for executed commands that contain both ‘powershell’ and ‘bypass’. We also retrieve both the distinct commands and the accounts that executed them.
Then choose our rightanti join again. Run the same query once more for the last hour. We join on both our fields. Then return what is new to our environment in the last hour. For this query, the combination of command line and account needs to be unique.
For this particular example I excluded processes initiated by system, local service or network service, so it will only find events run under named user accounts. This is just an example though, and it is easy enough to include those accounts as well.
In summary.
These queries aren’t meant to be perfect hunting queries for all malware attack paths. They may well be useful detections in your environment though. The idea is to help you think about TTP detections.
When you read malware and ransomware reports you should look at both IOCs and TTPs.
Detect on the IOCs. If you use Sentinel you can use Microsoft provided threat intelligence, and you can also include your own feeds. Information is available here. There are many ready-to-go rules that leverage that data which you can simply enable.
For TTPs, have a read of the report and try to come up with queries that detect that behaviour. Then have a look at how common that activity is for you. The example above of using certutil.exe to download files is a good one. That may be extremely rare in your environment, so your hunting query doesn’t need to list the specific IOCs for that action. You can just alert any time certutil.exe connects to the internet.
Tools like PowerShell are used both maliciously and legitimately. Try to write queries that detect changes or anomalies in those events. Apply your knowledge of your environment to try and filter the noise without filtering out genuine alerts.
All the queries in this post that use Device* tables should also work in Advanced Hunting. You will just need to change TimeGenerated to Timestamp.
Defenders are often looking for a single event within their logs. Evidence of malware or a user clicking on a phishing link? Whatever it may be. Sometimes though you may be looking for a series of events, or perhaps trends in your data. Maybe a quick increase in a certain type of activity. Or several actions within a specific period. Take for example RDP connections. Maybe a user connecting to a single device via RDP doesn’t bother you. What if they connect to 5 different ones in the space of 15 minutes though? That is more likely cause for concern.
If you send a lot of data to Sentinel, or even use Microsoft 365 Advanced Hunting, you will end up with a lot of information to work with. Thankfully, KQL is amazing at summarizing data. There is actually a whole section of the official documentation devoted to aggregation, although looking at the list it can be pretty daunting.
The great thing about aggregation with KQL in Log Analytics is that you can re-apply the same logic over and over. Once you learn the building blocks, they apply to nearly every data set you have.
So let’s take some examples and work through what they do for us. To keep things simple, we will use the SecurityAlert table for all our examples. This is the table that Microsoft security products write alerts to. It is also a free table!
count() and dcount()
As you would expect count() and dcount() (distinct count) can count for you. Let’s start simple. Our first query looks at our SecurityAlert table over the last 24 hours. We create a new column called AlertCount with the total. Easy.
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize AlertCount=count()
To build on that, you can count by a particular column within the table. We do that by telling KQL to count ‘by’ the AlertName.
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize AlertCount=count() by AlertName
This time we are returned a count of each different alert we have had in the last 24 hours.
You can count many columns at the same time, by separating them with a comma. So we can add the ProductName into our query.
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize AlertCount=count() by AlertName, ProductName
We get the same AlertCount, but also the product that generated the alert.
For counting distinct values we use dcount(). Again, start simply. Let’s count all the distinct alerts in the last 24 hours.
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize DistinctAlerts=dcount(AlertName)
So in our very first query we had 203 total alerts. By using dcount, we can see we only have 41 distinct alert names. That is normal, you will be getting multiples of a lot of alerts.
We can include ProductName into a dcount query too.
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize DistinctAlerts=dcount(AlertName) by ProductName
We are returned the distinct alerts by each product.
To build on both of these further, we can also count or dcount based on a time period. For example you may be interested in the same queries, but broken down by day.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize AlertCount=count() by bin(TimeGenerated, 1d)
So let’s change our very first query. First, we look back 7 days instead of 1. Then we will put our results into ‘bins’ of 1 day. To do that we add ‘by bin(TimeGenerated, 1d)’. We are saying, return 7 days of data, but put it into groups of 1 day.
If we include our AlertName, we can still do the same.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize AlertCount=count() by AlertName, bin(TimeGenerated, 1d)
We see our different alerts placed into 1 day time periods. Of course, we can do the same for dcount.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize AlertCount=dcount(AlertName) by bin(TimeGenerated, 1d)
Our query returns distinct alert names per 1 day blocks.
countif() and dcountif()
The next natural step is to look at countif() and dcountif(). The guidance for these states “Returns a count with the predicate of the group”. Well, what does that mean? It’s simpler than it seems. It means that it will return a count or dcount when something is true. Let’s use our same SecurityAlert table.
In this query, we look for all Security Alerts in the last 24 hours. But we want to only count them when they are high severity. So we include a countif statement.
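A minimal version of that query looks something like this:
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize HighSeverityAlerts=countif(AlertSeverity == "High")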
When we ran this query originally we had 203 results. But filtering for high severity alerts, we drop down to 17. You can include multiple arguments to your query.
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize HighSeverityAlerts=countif(AlertSeverity == "High" and ProductName == "Azure Active Directory Identity Protection")
This includes only high severity alerts generated by Azure AD Identity Protection. You can countif multiple items in the same query too.
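For instance, something like the following counts high severity alerts and MS ATP alerts in one pass:
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize HighSeverityAlerts=countif(AlertSeverity == "High"), MDATPAlerts=countif(ProductName == "Microsoft Defender Advanced Threat Protection")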
This query returns two counts. One for high severity alerts and the second for anything generated by MS ATP. You can break these down into time periods too, like a standard count. Using the same logic as earlier.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize HighSeverityAlerts=countif(AlertSeverity == "High") by bin(TimeGenerated, 1d)
We see high severity alerts per day over the last week.
dcountif works exactly as you would expect too. It returns a distinct count where the statement is true.
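For example, something like:
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize DistinctAlerts=dcountif(AlertName, AlertSeverity == "High")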
This will return a distinct count of alert names where the alert severity is high. And once again, we can break it down by time bucket.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize DistinctAlerts=dcountif(AlertName, AlertSeverity == "High") by bin(TimeGenerated, 1d)
arg_max() and arg_min()
arg_max and arg_min are a couple of very simple but powerful functions. They return the extremes of your query. Let’s use our same example query to show you what I mean.
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize arg_max(TimeGenerated, *)
If you run this query, you will be returned a single row. It will be the latest alert to trigger. If you had 500 alerts in the last day, it will still only return the latest.
arg_max tells us to retrieve the maximum value. In the brackets we select TimeGenerated as the field we want to maximize. Then our * indicates return all the data for that row. If we switch it to arg_min, we would get the oldest record.
We can use arg_max and arg_min against particular columns.
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize arg_max(TimeGenerated, *) by AlertName
This time we will be returned a row for each alert name. We tell KQL to bring back the latest record by Alert. So if you had the same alert trigger 5 times, you would just get the latest record.
These are a couple of really useful functions. You can use them to calculate when certain things last happened. If you look up sign in data and use arg_max, you can see when a user last signed in. Or if you were querying device network information, retrieving the latest record would return the most up to date details.
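As a quick sketch of that sign in example (the columns chosen are just illustrative), something like this returns the most recent sign in record for each user over the last 30 days:
SigninLogs
| where TimeGenerated > ago(30d)
| summarize arg_max(TimeGenerated, *) by UserPrincipalName
| project UserPrincipalName, TimeGenerated, AppDisplayName, IPAddress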
You can use your time buckets with these functions too.
SecurityAlert
| where TimeGenerated > ago(30d)
| summarize arg_max(TimeGenerated, *) by AlertName, bin(TimeGenerated, 7d)
With this query we look back 30 days. Then for each 7 day period, we return the latest record of each alert name.
make_list() and make_set()
make_list does basically what you would think it does: it makes a list of the data you choose. We can make a list of all our alert names.
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize AlertList=make_list(AlertName)
What is the difference between make_list and make_set? make_set will only return distinct values of your query. So in the output above you may see ‘Unfamiliar sign-in properties’ twice. If you run –
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize AlertList=make_set(AlertName)
Then each alert name would only appear once.
Much like our other aggregation functions, we can build lists and sets by another field.
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize AlertList=make_list(AlertName) by AlertSeverity
This time we have a list of alert names by their severity.
Using make_set in the same query would return distinct alert names per severity. It won’t shock you if you are still reading to know that we can make lists and sets per time period too.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize AlertList=make_set(AlertName) by bin(TimeGenerated, 1d)
This query gives us a set of alert names per day over the last 7 days. We can also combine a field and a time period in the same query.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize AlertList=make_set(AlertName) by AlertSeverity, bin(TimeGenerated, 1d)
make_list_if() and make_set_if()
make_list_if() and make_set_if() are the natural next step to this. They create lists or sets based on a statement being true.
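For example, to make a list of only the high severity alert names from the last day, something like this would do it:
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize HighSeverityAlerts=make_list_if(AlertName, AlertSeverity == "High")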
They support multiple parameters and time blocks too, of course.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize AlertList=make_set_if(AlertName,AlertSeverity == "Medium" and ProductName == "Microsoft Defender Advanced Threat Protection") by bin(TimeGenerated, 1d)
This query creates a set of alert names for us per day. But it only returns results where the severity is medium and the alert is from MS ATP.
Visualizing your aggregations
Now for the really great next step. Once you have summarized your data you can very easily build great visualizations with it. First, let’s summarize our alerts by their severity.
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize Alerts=count() by AlertSeverity
Easy, that returns us a summarized set of data.
Now to visualize that in a piechart, we just add one simple line.
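The full query then looks something like this, with the render line added:
SecurityAlert
| where TimeGenerated > ago(1d)
| summarize Alerts=count() by AlertSeverity
| render piechart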
KQL will try and guess axis titles and things for you, but you can adjust them yourself. This time we unstack our columns, and rename the axis and title.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize Alerts=count() by AlertSeverity, bin(TimeGenerated, 1d)
| render columnchart with (kind=unstacked, xtitle="Day", ytitle="Alert Count", title="Alert severity per day")
Aggregation Examples
Interested in aggregating different types of data? A couple are listed below.
This example looks for potential RDP reconnaissance activity.
DeviceNetworkEvents
| where TimeGenerated > ago(7d)
| where ActionType == "ConnectionSuccess"
| where RemotePort == 3389
// Exclude Defender for Identity which uses RDP to map your network
| where InitiatingProcessFileName != "Microsoft.Tri.Sensor.exe"
| summarize RDPConnections=make_set(RemoteIP) by bin(TimeGenerated, 20m), DeviceName
| where array_length(RDPConnections) >= 10
We can break this query down by applying what we have learned. First we write our query to look for the data we are interested in. In this case, we look at 7 days of data for successful connections on port 3389. Then we use one of our summarize functions: we make a set of the remote IP addresses each device is connecting to, and place that data into 20 minute bins. We know that each remote IP address will be distinct, because we made a set, not a list. Then we look for devices that have connected to 10 or more IPs in a 20 minute period.
Have you started your passwordless journey? You can visualize your journey using data aggregation.
SigninLogs
| project TimeGenerated, AuthenticationDetails
| where TimeGenerated > ago (90d)
| extend AuthMethod = tostring(parse_json(AuthenticationDetails)[0].authenticationMethod)
| where AuthMethod != "Previously satisfied"
| summarize
Password=countif(AuthMethod == "Password"),
Passwordless=countif(AuthMethod in ("FIDO2 security key", "Passwordless phone sign-in", "Windows Hello for Business"))
by bin(TimeGenerated, 7d)
| render columnchart
with (
kind=unstacked,
xtitle="Week",
ytitle="Signin Count",
title="Password vs Passwordless signins per week")
We look back at the last 90 days of Azure AD sign in data. Then use our countif() operator to split out password vs passwordless logins. Then we put that into 7 day buckets, so you can see the trend. Finally, build a nice visualization to present to your boss to ask for money to deploy passwordless. Easy.
For people that use a lot of cloud workloads, you would know it can be hard to track cost. Billing in the cloud can be volatile if you don’t keep on top of it, and bill shock is a real thing. While large cloud providers can provide granular billing information, it can still be difficult to track spend.
The unique thing about Sentinel is that it is a huge datastore of great information. That lets us write all kinds of queries against that data. We don’t need a third party cost management product, we have all the data ourselves. All we need to know is where to look.
It isn’t all about cost either. We can also detect changes to our data, such as finding new information that could be helpful, or detecting when data isn’t received.
Start by listing all your tables and the size of them over the last 30 days. Query adapted from this one.
union withsource=TableName1 *
| where TimeGenerated > ago(30d)
| summarize Entries = count(), Size = sum(_BilledSize) by TableName1, _IsBillable
| project ['Table Name'] = TableName1, ['Table Entries'] = Entries, ['Table Size'] = Size,
['Size per Entry'] = 1.0 * Size / Entries, ['IsBillable'] = _IsBillable
| order by ['Table Size'] desc
You will get an output of the table size for each table you have in your workspace. We can even see if it is free data or billable.
Now table size by itself may not have enough context for you. So to take it further, we can compare time periods. Say we want to view table size last week vs this week. We do that with the following query.
let lastweek=
union withsource=_TableName *
| where TimeGenerated > ago(14d) and TimeGenerated < ago(7d)
| summarize
Entries = count(), Size = sum(_BilledSize) by Type
| project ['Table Name'] = Type, ['Last Week Table Size'] = Size, ['Last Week Table Entries'] = Entries, ['Last Week Size per Entry'] = 1.0 * Size / Entries
| order by ['Table Name'] desc;
let thisweek=
union withsource=_TableName *
| where TimeGenerated > ago(7d)
| summarize
Entries = count(), Size = sum(_BilledSize) by Type
| project ['Table Name'] = Type, ['This Week Table Size'] = Size, ['This Week Table Entries'] = Entries, ['This Week Size per Entry'] = 1.0 * Size / Entries
| order by ['Table Name'] desc;
lastweek
| join kind=inner thisweek on ['Table Name']
| extend PercentageChange=todouble(['This Week Table Size']) * 100 / todouble(['Last Week Table Size'])
| project ['Table Name'], ['Last Week Table Size'], ['This Week Table Size'], PercentageChange
| sort by PercentageChange desc
We run the same query twice, over our two time periods, then join them together based on the name of the table. So we have our table, last week’s data size, then this week’s data size. Then, to make it even easier to read, we calculate this week’s size as a percentage of last week’s.
You could use this data and query to create an alert when tables increase or decrease in size. To reduce noise you can filter on table size as well as the percentage change, because a small table may increase in size by 500% and still be small. With the calculation above, no change equals 100, so you could add something like the following to only return large tables that have grown by more than 10%.
| where ['This Week Table Size'] > 1000000 and PercentageChange > 110
Of course, it wouldn’t be KQL if you couldn’t visualize your log source data too. You could provide a summary of your top 15 log sources with.
union withsource=_TableName *
| where TimeGenerated > ago(30d)
| summarize LogCount=count() by Type
| sort by LogCount desc
| take 15
| render piechart with (title="Top 15 Log Sources")
You could go to an even higher level, and look for new data sources or tables not seen before. To find things that are new in our data, we use the join operator, using a rightanti join. Rightanti joins say, show me results from the second query (the right) that weren’t in the first (the left). The following query will return new tables from the last week, not seen for the prior 90 days.
union withsource=_TableName *
| where TimeGenerated > ago(90d) and TimeGenerated < ago(7d)
| distinct Type
| project-rename ['Table Name']=Type
| join kind=rightanti
(
union withsource=_TableName *
| where TimeGenerated > ago(7d)
| distinct Type
| project-rename ['Table Name']=Type )
on ['Table Name']
Let’s have a closer look at that query to break it down, because joining queries is one of the more challenging aspects of KQL to learn.
We run the first query (our left query), which finds all the table names from between 90 and 7 days ago. Then we choose our join type, in this case rightanti. Then we run the second query, which finds all the tables from the last 7 days. Then finally we choose what field we want to join the table on, in this case, Table Name. We tell KQL to only display items from the right (the second query), that don’t appear in the left (first query). So only show me table names that have appeared in the last 7 days, that didn’t appear in the 90 days before. When we run it, we get our results.
We can flip this around too and find tables that have stopped sending data in the last 7 days. Keep the same query and change the join type to leftanti. Now we retrieve results from our first query that no longer appear in our second.
union withsource=_TableName *
| where TimeGenerated > ago(90d) and TimeGenerated < ago(7d)
| distinct Type
| project-rename ['Table Name']=Type
| join kind=leftanti
(
union withsource=_TableName *
| where TimeGenerated > ago(7d)
| distinct Type
| project-rename ['Table Name']=Type )
on ['Table Name']
Logs not showing up? It could be expected if you have offboarded a resource. Or you may need to investigate why data isn’t arriving. In fact, we can use KQL to calculate the last time a log arrived for each table in our workspace. We grab the most recent record using the max() operator. Then we calculate how many days ago that was using datetime_diff.
union withsource=_TableName *
| where TimeGenerated > ago(90d)
| summarize ['Days Since Last Log Received'] = datetime_diff("day", now(), max(TimeGenerated)) by _TableName
| sort by ['Days Since Last Log Received'] asc
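If you wanted to alert on tables that have gone quiet, a simple option (the one day threshold is just an example) is to add a filter to the end of that query.
| where ['Days Since Last Log Received'] >= 1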
Let’s go further. KQL has inbuilt forecasting ability. You can query historical data then have it forecast forward for you. This example looks at the prior 30 days, in 12 hour blocks. It then forecasts the next 7 days for you.
union withsource=_TableName *
| make-series ["Total Logs Received"]=count() on TimeGenerated from ago(30d) to now() + 7d step 12h
| extend ["Total Logs Forecast"] = series_decompose_forecast(['Total Logs Received'], toint(7d / 12h))
| render timechart
It doesn’t need to be all about cost either. We can use similar queries to alert on new things we may otherwise miss. Take for instance the SecurityAlert table, which Microsoft security products like Defender or Azure AD Identity Protection write alerts to. Microsoft are always adding new detections, which are hard to keep on top of. We can use KQL to detect alerts that are new to our environment and that we have never seen before.
SecurityAlert
| where TimeGenerated > ago(180d) and TimeGenerated < ago(7d)
// Exclude alerts from Sentinel itself
| where ProviderName != "ASI Scheduled Alerts"
| distinct AlertName
| join kind=rightanti (
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProviderName != "ASI Scheduled Alerts"
| summarize NewAlertCount=count()by AlertName, ProviderName, ProductName)
on AlertName
| sort by NewAlertCount desc
When we run this, any new alerts from the last week not seen prior are visible. To add some more context, we also count how many times we have had the alerts in the last week. We also bring back which product triggered the alert.
Microsoft and others add new detections so often that it’s impossible to keep track of them all, so let KQL do the work for you. We can use similar queries across other data, such as OfficeActivity (your Office 365 audit traffic).
OfficeActivity
| where TimeGenerated > ago(180d) and TimeGenerated < ago(7d)
| distinct Operation
| join kind=rightanti (
OfficeActivity
| where TimeGenerated > ago(7d)
| summarize NewOfficeOperations=count()by Operation, OfficeWorkload)
on Operation
| sort by NewOfficeOperations desc
For OfficeActivity we can bring back the Office workload so we know where to start looking.
Or Azure AD audit data.
AuditLogs
| where TimeGenerated > ago(180d) and TimeGenerated < ago(7d)
| distinct OperationName
| join kind=rightanti (
AuditLogs
| where TimeGenerated > ago(7d)
| summarize NewAzureADAuditOperations=count()by OperationName, Category)
on OperationName
| sort by NewAzureADAuditOperations desc
For Azure AD audit data we can also return the category for some context.
I hope you have picked up some tricks on how to use KQL to provide insights into your data. You can query your own data the same way you would hunt threats. By looking for changes to log volume, or new data that could be interesting.
There are also some great workbooks provided by Microsoft and the community. These visualize a lot of similar queries for you. You should definitely check them out in your tenant.
Functions in Microsoft Sentinel are an overlooked and underappreciated feature in my experience. There is no specific Sentinel guidance from Microsoft on how to use them, however they are covered more broadly under the Azure Monitor section of the Microsoft docs site. In general terms, they allow us to save queries to our Sentinel workspace, then invoke them by a simple name. So imagine you have written a really great query that looks for Active Directory group changes from Security Event logs, and your query also parses the data to make it look tidy and readable. Instead of having to re-use that same query over and over, you can save it to your workspace as a function, and then simply refer to it when needed.
We are going to use the example of Active Directory group membership changes to show how you can speed up your queries using functions. So first up we need to write our query. For anyone that has spent time looking at Security Event logs, the data can be a little inconsistent, and we sometimes need to parse the information we want from a large string of text. We also get a lot of information we just may not care about. Perhaps we only care about what time the event occurred, who was added or removed from the group, the group name, and who made the change.
To do that we can use the following query –
SecurityEvent
| project TimeGenerated, EventID, AccountType, MemberName, SubjectUserName, TargetUserName, Activity, MemberSid
| where EventID in (4728,4729,4732,4733,4756,4757)
| where AccountType == "User"
| parse MemberName with * 'CN=' UserAdded ',OU=' *
| project TimeGenerated, UserWhoAdded=SubjectUserName, UserAdded, GroupName=TargetUserName, Activity
Now we get a nice sanitized output that is easy to read showing the data we care about.
Now we are happy with our query, let’s save it as a function so we can refer to it easily. Above your query you can see ‘Save’; if you click on that you will see an option for ‘Save as function’.
Choose a name for your function; what you call it here is what you will use to invoke it. You also choose a Legacy category to help categorize your functions.
So for this example we will call it ‘ADGroupChanges’, it will show you the code it is saving underneath, then hit Save. Give it a couple of minutes to become available to your workspace. When you start to type the name in your query window, you will notice it will now list the function as available to you. The little 3d rectangle icon highlights it as a function.
You can run ADGroupChanges on its own with no other input and it will simply run the saved code for you, retrieving all the Active Directory group changes. Where you get real power from functions though is that you can continue to use your normal Kusto skills and operators against the function. You aren’t bound by only what is referenced in the function code, so you can do things like limit your query to the last hour.
ADGroupChanges
| where TimeGenerated > ago(1h)
This applies our function code then only retrieves the last hour of results. You can include all your great filtering operators like has and in. The below will search for changes in the last hour and also where the name of the group has “Sentinel” in it.
ADGroupChanges
| where TimeGenerated > ago(1h)
| where GroupName has "Sentinel"
Or if you are looking for actions from a particular admin you can search on the UserWhoAdded field.
ADGroupChanges
| where TimeGenerated > ago(1h)
| where UserWhoAdded has "admin123"
Of course you can do combinations of any of these. Such as finding any groups that admin123 added testuser123 to in the last 24 hours.
ADGroupChanges
| where TimeGenerated > ago(24h)
| where UserWhoAdded has "admin123" and UserAdded has "testuser123"
If you ever want to check out what the query is under the function, just browse to ‘Functions’ on your workspace and they are listed under ‘Workspace functions’.
If you hover over your function name, you will get a pop up appear, just select ‘Load the function code’ and it will load it into your query window for you.
If you want to update your function, just edit your query then save it again with the same name. That is what we are going to do now, by adding some more information to our query from our IdentityInfo table. Our Security Event log contains really only the basics of what we want to know, but maybe we want to enrich that with some better identity information. So if we update our query to the below, where we join our UserAdded field to our IdentityInfo table, we can then retrieve information from both, such as department, manager, and location details.
SecurityEvent
| project TimeGenerated, EventID, AccountType, MemberName, SubjectUserName, TargetUserName, Activity, MemberSid
| where EventID in (4728,4729,4732,4733,4756,4757)
| where AccountType == "User"
| parse MemberName with * 'CN=' UserAdded ',OU=' *
| project TimeGenerated, UserWhoAdded=SubjectUserName, UserAdded, UserAddedSid=MemberSid, GroupName=TargetUserName, Activity
| join kind=inner(
IdentityInfo
| where TimeGenerated > ago (21d)
| summarize arg_max(TimeGenerated, *) by AccountUPN)
on $left.UserAdded==$right.AccountName
| project TimeGenerated, UserWhoAdded, UserWhoAddedUPN=AccountUPN, GroupName, Activity, UserAdded, EmployeeId, City, Manager, Department
Save over your function with that new code. Now we have the information from our Security Event table showing when changes occurred, plus our great identity information from our IdentityInfo table. We can use that to write some queries that wouldn’t otherwise be available to us because our Security Event logs simply don’t contain that information.
Looking for group changes from users within a particular department? We can query on that.
ADGroupChanges
| where TimeGenerated > ago(24h)
| where Department contains "IT Service Desk"
You can combine information from both of the tables we joined in our function. Say you are interested in changes to a particular group name where the users are in a particular location; having built our function, we can now do that.
ADGroupChanges
| where TimeGenerated > ago(250m)
| where GroupName == "testgroup123" and City contains "Seattle"
This will find changes to ‘testgroup123’ where the user added is from Seattle. Under the hood we look up group name from our Security Event table and City from our IdentityInfo table.
Functions are also really useful if you are trying to get other team members up to speed with Sentinel or KQL. Instead of needing them to do the parsing or data clean up in their own queries, you can build a function for them and have them stick to easier to understand operators like contains or has as a stepping stone to building out their skill set. KQL is a really intuitive language, but it can still be daunting to people who haven’t seen it before. Functions are also a great way to save time yourself; if you have spent ages building really great queries and parsers, save them for future use rather than trying to remember what you did previously. The ‘raw’ data will always be there if you want to go back and look at it, the function is just doing the hard work for us.
If you want a few more examples I have listed them here – they include a function to retrieve Azure Key Vault access changes, to find all your domain controllers via various log sources, and a function to join identity info with both sign in logs and risk events.
Azure AD app registrations are at the heart of the Microsoft Identity Platform, and Microsoft recommend you rotate secrets on them often. However, there is currently no native way to alert on secrets that are due to expire. An expired secret means the application will no longer authenticate, so you may have systems that fail when the secret expires. A number of people have come up with solutions, such as using Power Automate or deploying an app that you can configure notifications with. Another option is using Logic Apps, KQL and Microsoft Sentinel to build a very low cost and lightweight solution.
We will need two Logic Apps for our automation. The first will query Microsoft Graph to retrieve information (including password expiry dates) for all our applications, then push that data into a custom table in Sentinel. Next we will write a Kusto query to find apps with secrets due to expire shortly. Finally we will build a second Logic App to run our query on a schedule for us, and email a table of the apps that need new secrets.
Both our Logic Apps are really simple, the first looks as follows –
The first part of this app is to connect to the Microsoft Graph to retrieve a token for re-use. You can set your Logic App to run on a recurrence for whatever schedule makes sense, for my example I am running it daily, so we will get updated application data into Sentinel each day. Then we will need the ClientID, TenantID and Secret for an Azure AD App Registration with enough permission to retrieve application information using the ‘Get application‘ action. From the documentation we can see that Application.Read.All is the lowest privilege access we will need to give our app registration.
Our first 3 actions just retrieve those credentials from Azure Key Vault; if you don’t use Azure Key Vault you can add them as variables (or however you handle secrets management). Then we call MS Graph to retrieve a token.
Put your TenantID in the URI and the ClientID and Secret in the body respectively. Then parse the JSON response using the following schema.
Now we have our token, we can re-use it to access the applications endpoint on Microsoft Graph to retrieve the details for all our applications. So use a HTTP action again, and this time use a GET action.
In this example we are just retrieving the displayname of our app, the application id and the password credentials. The applications endpoint has much more detail though if you wanted to include other data to enrich your logs.
One key note here is dealing with data paging in Microsoft Graph. MS Graph will only return a certain amount of data at a time, along with a link to retrieve the next set, and so on until you have retrieved all the results – and with Azure AD apps you are almost certainly going to have too many to retrieve at once. Thankfully, Logic Apps can deal with this natively: on your ‘Retrieve App Details’ action, click the three dots in the top right and choose settings, then enable Pagination so that it knows to loop through until all the data is retrieved.
Then we parse the response once more. If you are just using displayName, appId and passwordCredentials like I am, then the schema for your JSON is as follows.
Then the last step of our first Logic App is to send the data to Microsoft Sentinel using the Azure Log Analytics Data Collector connector. Take each value from your JSON and send it to Sentinel; because you will have lots of apps, it will loop through each one. For this example the logs will be sent to the AzureADApps_CL table.
You can then query your AzureADApps_CL table like you would any data and you should see a list of your application display names, their app ids and any password credentials. If you are writing to that table for the first time, just give it 20 or so minutes to appear. So now we need some KQL to find which ones are expiring. If you followed this example along you can use the following query –
AzureADApps_CL
| where TimeGenerated > ago(7d)
| extend AppDisplayName = tostring(displayName_s)
| extend AppId = tostring(appId_g)
| summarize arg_max (TimeGenerated, *) by AppDisplayName
| extend Credentials = todynamic(passwordCredentials_s)
| project AppDisplayName, AppId, Credentials
| mv-expand Credentials
| extend x = todatetime(Credentials.endDateTime)
| project AppDisplayName, AppId, Credentials, x
| where x between (now()..ago(-30d))
| extend SecretName = tostring(Credentials.displayName)
| extend PasswordEndDate = format_datetime(x, 'dd-MM-yyyy [HH:mm:ss tt]')
| project AppDisplayName, AppId, SecretName, PasswordEndDate
| sort by AppDisplayName desc
You should see an output of any applications that have a secret expiring in the next 30 days.
Now we have our data, the second Logic App is simple. You just need it to run on whatever schedule you like (say weekly), run the query against Sentinel for you (using the Azure Monitor Logs connector), build a simple HTML table, and email it to whoever wants to know.
If you use the same Kusto query that I have, then the schema for your Parse JSON action is as follows.
Now that you have that data in Microsoft Sentinel you could also run other queries against it, such as seeing how many apps you have created or removed each week, or if applications have expired secrets and no one has requested a new one; they may be inactive and can be deleted.
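For example, because the first Logic App pushes a snapshot of every app registration in each day, a quick sketch like the following (using the same custom field names as the query above) will chart how the number of app registrations in your tenant is trending week to week:
AzureADApps_CL
| where TimeGenerated > ago(90d)
| summarize DistinctApps=dcount(tostring(appId_g)) by bin(TimeGenerated, 7d)
| render columnchart with (title="Azure AD app registrations per week")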