Shadow Trackers

20 Years and now what?

2/3/2022


Anton Chuvakin has been working with SIEM technology for a long time.  Twenty years, in fact.  I've followed him online for many years, listened to his talks, and gained wisdom from his insights.  While I don't always agree with him, I respect the breadth of his knowledge, experience, and contributions to the industry.  Which is why, when he posted his 20 Years of SIEM blog, I couldn't figure out why reading it made me angry.

But then it hit me:  I was frustrated because he clearly pointed out that we are dealing with many of the same problems in security that the SIEM was supposed to fix.

But I also think that realization is not as bad as it sounds.  What do I mean by that?

First, look at what he says were some of the problems back then that the first SIEM vendors claimed to have solved:

        - Data Collection
        - Event Correlation
        - Alert Overload
        - Log Standardization

Fast forward to today and ask yourself, have any of these issues been resolved by the current tools? 

I say: Nope.

These are the same complaints I still hear from customers, see on mailing lists (yes, those all still exist), and read on Twitter, Slack, and Discord. Sometimes it seems like for all of our new technologies, we haven't made any progress at all. Indeed, I hear that we in infosec are "failing" at security at least once per conference.

Now, I know what some of you are thinking.  You are a vendor who believes your tool has solved these problems, or you are part of a SOC that has conquered them, and you are calling me a fear monger or thinking I don't know what I'm talking about.  But hear me out: it's not that I think these problems can't be solved today, it's that I think people underestimate the amount of resources (people, processes, and technology) it takes to solve them.  Too often, vendors market that their tool can be installed in the morning and, by the afternoon, 95% of the problems above are fixed with minimal interaction or manpower.  This misleading marketing has been with us since the SIEM first debuted and has accompanied pretty much every tool released since: UBA, SOAR, Threat Intelligence Platforms, EDR, MDR, and now XDR, to name a few.  And I named those few for a reason: the goal of each of these tools is to help security teams make sense of all the data, reduce alert overload, and increase automated correlation.

hmmmm...... Sound familiar?

So what does it say that the industry, years later and with all these tools, is STILL struggling with the same issues? 

Actually, I think two good things. 
(Ha! You thought I was going super negative didn't you?)

First, although we are facing the same issues, they are much more complex versions of the issues we faced 20 years ago.  Look at it this way: if we used today's tools in the same environments we were in 20 years ago, life in the SOC would have been a piece of cake.  Imagine how fast we would have detected and blocked the ILOVEYOU virus, the Code Red worm, or the SQL Slammer worm with a fully integrated SIEM, TIP, HIPS/AV, SOAR, and EDR.  We would have been home by 5pm every day.

Second, we need to face the reality that solving these issues with technology isn't going to be what makes us succeed.  Improving people and processes along with our technology upgrades is what makes a SOC successful.  Organizations that create a plan that continually invests in and builds up people, process, and technology see continual progress in overcoming these perennial issues. 

So, while I'm grateful for the advances that technology vendors and independent contributors have made over the years, I am part of the chorus that thinks we need to work harder in infosec to improve our processes and increase the knowledge and skills of our people.  Some of that is on us individually to improve ourselves, and some of that is on companies to find ways to pay for training and to be open to reviewing and changing inefficient processes.

And hopefully in the next 20 years, while we may be facing some of those same issues, we will be more prepared and capable of finally conquering them.


Some quick thoughts on EDR

1/6/2022

In the early part of January 2022, some tweets appeared from @likethecoins regarding a research study on the effectiveness of Endpoint Detection and Response (EDR) tools.

Here's the TL;DR

A group of Greek researchers analyzed multiple EDRs against four common multistage attacks.  They evaluated each EDR's ability to detect and block each attack.  Only two successfully alerted on and blocked all four attacks (SentinelOne, with test features, and FortiEDR).  The researchers are still following up with some vendors because they mistakenly tested an EPP product instead of the EDR product, or tested the EDR without the full set of available features.

The most notable product missing from their testing?  FireEye HX.  I have not seen any note as to why it was not included.

The great thing is that their research methods are transparent, and those methods, tools, and configurations are published in the original paper so others can duplicate the study if desired.

[Image: updated result table, with the key underneath]
Some thoughts:

1.  The EDR tools do not perform as people expect them to; that is, they do not detect and block badness as most defenders define badness.  

2.  They lean toward the permissive because they don't want to impact operations.  So an activity that is bad 99% of the time, because normal users do not do those things, is often not flagged or blocked or (sometimes) even logged.  This is an activity that would (should) be alerted on and investigated were it in a SIEM, and it would have a low FP rate.  But as @likethecoins says:

There are many legit applications that perform actions that look malicious, but are actually benign. The process of determining what behaviors are malicious or not takes careful analysis and often depends on the specific environment.

But this leads to a false assumption that EDRs provide MORE protection than they actually do.
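To make that concrete, here's the kind of low-FP behavioral alert I mean.  This is a minimal Splunk sketch, not anything from the study: the index and sourcetype names are placeholders, and it assumes Sysmon process-creation events.  An Office application spawning a shell is the classic "bad 99% of the time" behavior:

index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
    ParentImage IN ("*\\winword.exe", "*\\excel.exe", "*\\powerpnt.exe")
    Image IN ("*\\cmd.exe", "*\\powershell.exe")
| stats count values(CommandLine) as commands by host ParentImage Image

In most environments a search like this fires rarely, and almost every hit is worth a look, which is exactly the kind of detection people assume their EDR is already doing.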

3.  One important issue found by the researchers is that many EDRs do not log activity, or do not keep the logs for long.  This means that if malicious activity is detected via other means (firewall, IDS, NDR, SIEM), the EDR might not have a record of it, and thus the timeline tools within the EDR won't help the analysts.

4.  To be most effective, the tool (and the people using the tool) must 'know' the environment.  But the ability to create custom rules and alerts varies by EDR, and that ability is critical to customizing the EDR to the environment it is in.

5.  The study emphasizes that tools do not and cannot replace people.

Articles with follow-up and updated information:

https://www.scmagazine.com/analysis/endpoint-security/edr-study-to-undergo-re-tests-after-misclassification-error-with-selected-systems

https://therecord.media/state-of-the-art-edrs-are-not-perfect-fail-to-detect-common-attacks/

Coffee or Die

2/16/2021

Some years ago there was a bit of a debate among some NOVA Hackers about the best way to make coffee.  At the time I liked coffee, but I didn't know much about how it could or should be made, except maybe drop the pod into the machine and hit start, or place an order at Starbucks.  So I volunteered to do some research and present a talk at one of the NOVA Hackers meetings.  At that time, select talks were being recorded by one of our members, Brett Thorson, as permitted by the speaker.  I allowed this talk to be recorded.
This talk completely changed how I viewed, bought, and brewed coffee.  The research I did for this talk taught me that most coffee is old and stale when bought from the store or made at most shops and restaurants (including Starbucks!).  Also, it is usually not made correctly.  The beans may not be ground correctly, the temperature of the water is wrong, or the amount of time the grounds are immersed in the water is too short or too long.  Furthermore, most home coffee brewers do not properly heat the water or immerse the grounds for you.  The result is coffee that is too weak, or too strong, or too bitter, or too bland.  It would wake you up, but drinking it would not be an enjoyable experience.
Now some of you may be thinking the same thought as one of my friends, who says: "The only bad coffee is no coffee."  But I learned that the extra effort that goes into making a really good cup of joe is not as time consuming as you may think.  And once you have done it a few times, it will be a smooth part of your routine.  Too many people are settling for the equivalent of heating up frozen chicken tenders when they could be making and eating cordon bleu.
After doing this talk, I started buying only beans that were freshly roasted (examples: 1, 2, 3, 4).  I got rid of our Keurig.  I started using the burr grinder that my wife bought.  And now I make coffee either with a french press (lots of choices; I suggest stainless steel) or with a pour over.  I have not bought a good coffee brewer, but I know that I cannot get away with the $24.99 Amazon special.  All of this adds about five (5) minutes to my morning routine, but it has made my mornings so much more enjoyable.

So here is the video from December 2013.  I hope you enjoy and learn something.

NOTES:
1.  During this talk, I forgot to mention that the type of water is important.  My suggestion is to use filtered water.
2.  The group was pretty rowdy that night.  Don't let the interactions between me and the audience distract you.

Revisit Frequently (pt1)?

4/11/2020

So a while back I wrote a blog post about integrating Mark Baggett's freq_server into Splunk (Here).  I then referenced it a few times and told some people about it, but I didn't think about it much.  

Then, while prepping to teach 555, I decided to build it into my class demonstrations.  So I opened up the blog, followed the instructions, ran my search.... and it didn't work.  Badly.  I'll cut the melodrama short, but suffice it to say that after a frustrating day of troubleshooting, I got it working again.  Below is the updated transforms.conf file you need to use if you have Splunk version 8.

[freqserver]
# external lookup that hands the domain field to Mark Baggett's freq.py
external_cmd = freq.py domain
external_type = python
# Splunk 8 ships both Python 2 and 3; pin the older freq.py to python2
python.version = python2
fields_list = domain, frequency

I still need to update this for Mark's new version of freq_server.  So watch this space for another blog post in a couple of days.
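In the meantime, here is a quick way to exercise the lookup once the transform is in place.  This is a minimal sketch: the index and sourcetype are placeholders for wherever your DNS logs live, and the "frequency < 5" cutoff is just a starting point to tune against your own data:

index=dns sourcetype=dns
| lookup freqserver domain OUTPUT frequency
| eval frequency = tonumber(frequency)
| where frequency < 5
| stats count by domain frequency
| sort frequency

Lower scores mean more random-looking domains, so the top of that sorted list is where DGA candidates should bubble up.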


My Own Private Index (sort of)

12/16/2019

A couple of months ago I was asked if I could create a private index in Splunk (<8.0) that only certain users could see.  Heck, I said, that's easy.  Configuring roles for that is well documented.  Then I was told that those users also needed to upload data into that index.  Whoa.... now THAT presented a little bit of a conundrum.  I didn't want to give those users admin access to my Splunk just so they could upload data, but there didn't seem to be a clear built-in role for 'data upload'.  I did some Google research and found several posts on answers.splunk.com that gave me some direction, but none worked quite the way I wanted.  Piecing together the information from all the posts I had read, I came up with a set of permissions that create a role that can search, report, alert, etc. (user permissions) and also upload data.  So let me describe how to set this up in your environment.

First I created the following authorize.conf file in $SPLUNK_HOME/etc/system/local:
# keep the built-in user and power roles from searching the private index
[role_user]
srchIndexesDefault = *
srchMaxTime = 8640000
srchFilter = index!=<index>

[role_power]
schedule_rtsearch = disabled
srchIndexesDefault = *
srchFilter = index!=<index>
srchMaxTime = 8640000

# the new role: user-level search rights plus the capabilities needed to upload data
[role_see_index]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
# edit_monitor, edit_tcp, input_file, edit_sourcetypes, and edit_upload_and_index
# are the capabilities that let this role add data without being an admin
edit_monitor = enabled
edit_tcp = enabled
#srchFilter =
#importRoles = power;user
indexes_edit = enabled
srchIndexesAllowed = main;<index>
srchIndexesDefault = main;<index>
srchMaxTime = 0
use_file_operator = enabled
change_own_password = enabled
edit_search_schedule_window = enabled
get_metadata = enabled
get_typeahead = enabled
input_file = enabled
list_inputs = enabled
output_file = enabled
request_remote_tok = enabled
rest_apps_view = enabled
rest_properties_get = enabled
rest_properties_set = enabled
search = enabled
accelerate_search = enabled
pattern_detect = enabled
list_metrics_catalog = enabled
export_results_is_visible = enabled
run_collect = enabled
run_mcollect = enabled
edit_input_defaults = enabled
edit_modinput_web_ping = enabled
edit_sourcetypes = enabled
edit_upload_and_index = enabled
list_settings = enabled

NOTE: Replace anywhere you see <index> with the name of your index (e.g. my_private_index).
ALSO NOTE: I commented out the user and power user inheritance.  If you don't, this role will also inherit the restriction on the very index you want it to view!

Then I created an index in my Splunk instance.  I didn't do anything special to create it.  While I used a standalone Splunk server for this walkthrough, at work I have an index cluster.
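If you prefer config files to the web UI, a minimal indexes.conf stanza along these lines does the same thing on a standalone server (the index name is my example; the paths are the defaults):

[my_private_index]
homePath   = $SPLUNK_DB/my_private_index/db
coldPath   = $SPLUNK_DB/my_private_index/colddb
thawedPath = $SPLUNK_DB/my_private_index/thaweddb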

After I created the index, I created two users, Bob and Alice.  Bob I left as a regular user, and Alice I gave the new role I created above.

[Screenshot: the new users Bob and Alice with their assigned roles]
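For reference, the users can also be created from the CLI.  A sketch, assuming the role name see_index from the authorize.conf above (the passwords are placeholders):

./splunk add user bob -password <bob_password> -role user -auth admin:<admin_password>
./splunk add user alice -password <alice_password> -role see_index -auth admin:<admin_password>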

Now, when I logged in as Alice, I could see all the data in my index.

[Screenshot: Alice's search returning events from the private index]

But Bob cannot see anything in that index, even if he searches specifically for it:

[Screenshots: Bob's searches of the private index returning no results]

Alice can also add data to that index:

[Screenshot: Alice uploading data into the private index]

There are some caveats with regard to risks to the integrity and confidentiality of the data in the private index:
  • Alice can add data to any index she has permissions to.  Therefore, you either need to train and trust the individuals with that role or restrict that role to only the private indexes (see the sketch after this list).
    • This means that these individuals are a risk to the integrity of all the other data you are ingesting into your SIEM, because if they choose the wrong index with a wrong mouse click, their data is now in your Windows logs or firewall logs or whatever.
  • Administrators can see the private indexes.  I'm not aware of a way to block them out.  If anyone knows of one, let me know.
    • This means there is a risk to the confidentiality of the data in the private index.  If that data is TRULY meant to be eyes-only for users in the new role you created, this method is not what you should use, unless admins have been given (preferably documented) permission to have access to that data.
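If you go the restriction route, one way to narrow the role is to drop main from its searchable indexes in authorize.conf, along these lines.  Note that this is a search restriction; I have not verified that it limits which indexes the role can choose as an upload destination, so treat it as a partial mitigation:

[role_see_index]
srchIndexesAllowed = <index>
srchIndexesDefault = <index>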

I hope this helps some folks who have a unique situation.  Let me know any suggested improvements or comments.  