File Alteration Reporting Tool

The code to FART can be downloaded here: fart.txt. You will want to rename it to remove the .txt extension when you install it.

File Integrity Monitoring

You want to know if the wrong files on your server get altered. For example, if someone replaces your standard "ls" or "ping" commands with something malicious, you would like to see if that happened. Sometimes those alterations are expected, for example, after a system update or upgrade. Sometimes (hopefully rarely), they aren't, and indicate someone was messing with your computer.

There are a lot of file integrity solutions out there, and they are all a pain in the butt to work with. My goal with fart was to make a file integrity solution that sucks less than others, but with reasonable expectations.

People want something that will say "THIS FILE WAS MODIFIED BY A HOSTILE PERSON!" and stay totally silent when the changes are not an issue. The problem is very complicated. For example, I run FART against my webserver. I would like to know if someone has attacked my webserver and defaced it, so I don't ignore changes within my webserver space. But I also make intentional changes to my webserver that are not an issue. I'd like a solution that would ignore my own changes, but report someone breaking into my webserver as me and making changes to those same pages.

Bad news: I don't know how to do that, and I don't think that is possible. Almost every computer system has a lot of files that are expected to be altered often, others that may be changed periodically for good reasons, but could be altered improperly or for hostile reasons (for example, the /etc/hosts file). Figuring out the difference between a hostile change and a routine change is not something a computer is going to Just Do any time soon.

I contend that any file integrity checker is going to have to be closely monitored by humans, at least until artificial intelligence has reached a point where computers administer themselves (and then, THAT will have to be closely monitored by humans!). The goal of this system is to present useful information about file changes to administrators so they can figure out what to ignore and what to look at more closely.

History

My employer's compliance team asked me to come up with a way to "ensure file integrity". Honestly, we all looked at it as a "compliance annoyance" -- something that the law required us to be able to point to (and yes, we were in a business that was closely regulated), but not something that anyone expected to really provide useful info. But...boxes must be checked! You said we need it, we make it happen. Cost of doing business, all that rot.

Put bluntly and jumping ahead a bit, this project was an absolute goldmine of new information about the systems we managed. It just rocked. We (thankfully) never found an intruder, but we found things that disturbed us a few times, and we learned a lot about what our software vendor did to our systems, what our administrators did without communicating to the rest of the team, and lots of other things that were unexpected payback on this project. At least for me, this went from busy-work to "oh wow, this is cool" very quickly.

I spent a lot of time trying to figure out how to get a file checksum out of every interesting file on a system, and how to figure out what files were interesting vs. files that we expected changes on. It was going to be slow to run and unpleasant to set up.

Suddenly, I had an epiphany. We already had a list of all the files on our systems that were altered in the last 24 hours -- the logs from the IBS backups! Every file we tell it to back up that changes is backed up and logged. Want to know what files were altered? There it is!

That reduces the problem to coming up with a way to identify files that were expected to change vs. those that were NOT expected to change. Well, if you want to look for something in a file in the Unix world, the answer is usually "grep"; it's just a matter of formulating your question. grep on most modern Unixes accepts a -f option, which reads a list of patterns from a file, allowing a lot of individual searches to be made at once. grep also supports the -v option, which inverts the match -- showing all the lines that DON'T match any of the patterns in the file. And there's the start of the answer: build a filter file of things we expect to see changes in, and thus should ignore, then we can just do a:

$ grep -v -f system.fart /bu/z-logs/system-2022-08-10
and we get a list of everything that changed where we didn't expect changes!

That made it look easy -- it didn't even need to run as root! Of course, it wasn't that easy, because it never is.

The first "problem" is that IBS puts other things into its log files, but all of that is at the top and bottom of the file, so it's easy to strip out.

The rsync logs are not prefixed with a /; they are relative to the source and destination. So a change to the log file /var/log/maillog will show up as var/log/maillog. There are a couple of problems here: the maintainers will be thinking the exclusions should have a leading slash, and you need a way to indicate that certain exclusions SHOULDN'T be anchored at the root (i.e., if you have a file that is expected to be recreated in a few different places, you might not want a rule for each possible location, but rather one generic rule, like man/mandoc.db$, which will pick up both usr/X11R6/man/mandoc.db and usr/local/man/mandoc.db). So, leading slashes are replaced with a ^ to indicate the beginning of a line.
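For example, that transformation could be done with a simple sed command (a sketch of the idea; the file name and exact command are my illustration, not necessarily what FART does internally):

$ sed -e 's|^/|^|' combined.fart

This turns a rule like /var/log/maillog into ^var/log/maillog, anchoring it to the start of the slash-less log lines, while a rule without a leading slash, like man/mandoc.db$, is left free to match anywhere in the path.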

Another issue is that grep filter files are just collections of raw regular expressions -- comments and blank lines, which make the files manageable and readable to humans, need to be stripped out. It also became clear there was some merit to having multiple filter files for FART -- a FART-system-wide filter, a per-host filter, and perhaps some "on-demand" filters to broaden the list of files to ignore after system or application updates (normally you want to know if system and application binaries are altered, but you don't want unexpected changes lost when you have massive numbers of expected changes). In the environment that I developed FART on, we had about 100 systems that were mostly the same, but there were some odd-ball cases.
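That preparation step can be sketched with standard tools (assuming "#" starts a comment; the file names here are hypothetical):

$ cat GLOBAL.fart myhost.fart upgrade-day.fart | grep -v '^#' | grep -v '^$' > /tmp/rules.fart

The first grep throws away comment lines, the second throws away blank lines, and what's left is a file of raw regular expressions that grep -v -f can use directly.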

One thing we discovered on our system is that our application software re-created a lot of files every night, including potentially sensitive files, like /etc/ssh/sshd_config (yes, applications should not be touching sshd_config; however, this one did, and we weren't going to be able to fix it). Our application rewrote that file every night -- same contents, same size, but a different date -- so it got backed up, and if it got backed up, it showed up on the FART report. So this required adding code to actually compare the flagged files and see if they really were different, or just regenerated or even just touched. So yes, FART has to be run as root in order to verify that files actually changed, and it has to be run on the system where the IBS backups are stored (running it on another system would have been nice).
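That verification can be as simple as a cmp between the flagged file in the newest backup and the same file in the previous one. A minimal sketch, assuming a hypothetical backup layout with current/ and previous/ trees under /bu:

$ cmp -s /bu/myhost/current/etc/ssh/sshd_config /bu/myhost/previous/etc/ssh/sshd_config && echo "regenerated, not changed"

cmp -s is silent and just sets the exit code, which makes it easy to script over every flagged file.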

So, in short, all FART does is combine some "filter" files, strip out the comments and blank lines, then run a reverse grep of those rules against the log files. Anything that survives the filter is checked for binary changes between the backup the report covers and the preceding backup. If there are no changes, the line is tossed; otherwise, it is printed, optionally with a command line you can cut and paste to run a diff between the two files, or even the diff output itself.

Using FART

As indicated above, I made no attempt to make an "everything is great"/"you've been hacked!" tool. There's just too much grey area not to keep a human brain in the loop. And it's really easy for one human brain to look at a FART report, wave their hands, and reflexively say "nothing to report" without actually looking.

So, the process we put in place was to have two people out of our team of six charged with looking it over. Anyone could, and all were encouraged to, look at the reports and respond to them, but the two people each week were REQUIRED to. The idea was that both would look at the report individually, comment on anything that looked scary, and investigate or ask for help investigating. This worked (in my opinion) fairly well. We had one person who pretty much waved his hands and said all was good no matter what was on it, but when he was on the check list, the rest of us knew to keep an eye on it, too. Having two people creates a bit of a competition to find something in spite of the monotony of checking the thing every day. The two people we chose were the person on call and the person who was on call previously. Responses didn't need to be sent out the day the report came through, but every report should get two responses. Plus, if someone did something that would cause something to pop up in the FART report, they were encouraged to fess up and take credit for the noise in the report.

The filter files also have to be managed carefully. After our original setup, we had a rule that any permanent alteration to the filter rules needed a tracking ticket, and the ticket number had to be in the filter file (another reason that comments need to be supported in that file). For temporary changes (e.g., the day after a service pack install or other upgrade), we could add temporary rules to say, "Ignore the app program binary directory today", which we would comment and note in our daily reply.

Installing FART

The code to FART can be downloaded here: fart.txt. You will want to rename it to remove the .txt extension when you install it. Copy it to the same directory you have the IBS script installed in, probably either /opt/bin or /usr/local/sbin, and make it executable.
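Assuming you pick /usr/local/sbin, the installation might look like this (as root):

# cp fart.txt /usr/local/sbin/fart
# chmod 755 /usr/local/sbin/fart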

Before going further, you need to alter your IBS filter files: things you previously excluded from backup because they were binaries you would reinstall with the OS should now be included, so that things like most of /usr are backed up every time. You then want to run a backup cycle so that those files are backed up before you start tuning your FART filters.

FART filter files are stored in the /etc/ibs/ directory. There is a global default file, GLOBAL.fart, and a per-host file, (hostname).fart. These two are combined, along with any temporary filters given on the command line, for every IBS log file FART is run against.

You will want to capture the output from FART and mail it to the people administering the systems. In a formal office, I had it run as a root crontab task, and the console output was mailed to the administrators directly from cron; at home, I have FART drop its output into a file in a tmp directory, and that file is mailed to me from a non-root user, as I have root's e-mail directed to a different folder than I want the FART output going to.
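As a sketch, the office setup could be a root crontab entry like this (the schedule, install path, and MAILTO address are all examples, and the exact fart invocation will depend on your setup):

MAILTO=sysadmins@example.com
0 6 * * * /usr/local/sbin/fart

cron mails the job's console output to the MAILTO address, so no extra mail handling is needed.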

Comments on individual directories

In general, don't expect perfect FART filter files on the first pass. If they look perfect on the first pass, you have most likely screwed them up horribly by blocking too much. Your goal is to find out what is happening on your system.

Crafting your FART filter files

If you have a very consistent environment you are backing up with this IBS server, it would make sense to have MOST of your filter rules in the GLOBAL.fart filter file. In fact, in the environment that I developed FART in originally, we didn't use machine-specific filter files at all; that version of FART didn't support them. There were a few filter rules that were only for specific systems, but for the most part, the per-machine rules were not missed. In most environments, that won't be the case.

However, you may want to run FART with different filters specified on the command line -- for example, the AIX team would get the output from fart -f aix.fart aix.list, the Linux team would get the output from fart -f linux.fart linux.list, etc. Or maybe you will implement an "include" function in the FART filter processing logic, so you can add a Linux base FART filter to the Linux hosts' filter files, etc. Or webservers vs. database engines. Whatever works in your environment.

Be liberal with comments. You will forget why you put things in. Not the end of the world -- remove 'em and see what starts showing up in the report again.

If you wish to exclude everything in a directory, that would be with an entry like:

/tmp
If you wish to exclude a particular file in /tmp, you could do that with:
/tmp/onefile$
The "$" says "after this should be nothing" -- standard regular expression. Without the "$", someone could create a directory or a file called /tmp/onefileyoumissedthis and FART would ignore it

Be as selective as you can -- don't discard the entire /var/cron directory because there's a log file in it -- discard the log files, keep monitoring the crontabs themselves.

Be aware that filtering out /tmp rather than /tmp/ will filter out every line that starts with "/tmp" -- even things that might be a bit terrifying, like a new directory called "/tmpmalware". For this reason, give careful thought to every line -- should it include a trailing "/"? Should it include a trailing "$"? Should it include NEITHER, because sometimes ambiguity is useful? (For example, I have my crontab logs blocked with /var/cron/log to catch not only the file "log" but also its rotated older versions: log.1.gz, log.2.gz, etc.)
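Pulling all of that together, a filter file might look something like this (a hypothetical sample; the ticket numbers and paths are made up):

# T-1234: application rewrites its own logs nightly
/var/log/app/
# T-1301: cron log and its rotated copies (log, log.1.gz, etc.)
/var/cron/log
# mandoc.db is regenerated in several man trees; deliberately not anchored
man/mandoc.db$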
 


Page Copyright 2022, Nick Holland, Holland Consulting.