Educause Security Discussion mailing list archives

Re: File Integrity Monitoring - PCI


From: Benjamin Stein <bgstein () UCDAVIS EDU>
Date: Sat, 21 Oct 2017 00:02:48 +0000

We use Tripwire in our unit and work to focus the reporting to something manageable in 4-6 hours a week (for roughly 
100 servers and devices).


In general this means that we configure it to detect a broad range of changes on Windows and Linux servers and also 
have it checking some device configs.  The default rule sets from Tripwire, which specify which files or folders and 
which attributes (hashes, permissions, content, streams...) to monitor on each OS version, are good and easily 
tuned.  Much of the value of this product is in these rule sets.  With a variety of systems and services it is difficult 
to entirely filter out noise using reference systems and authorized change windows.  As a small unit we capture a huge 
number of changes but flag just a few things for review.  All the changes are available in the database for review and 
reporting, and you configure when the data gets filtered out.


This has met our requirements (so far, knock on wood).


Ben

________________________________
From: The EDUCAUSE Security Constituent Group Listserv <SECURITY () LISTSERV EDUCAUSE EDU> on behalf of Kevin Wilcox 
<wilcoxkm () APPSTATE EDU>
Sent: Friday, October 20, 2017 4:16:56 PM
To: SECURITY () LISTSERV EDUCAUSE EDU
Subject: Re: [SECURITY] File Integrity Monitoring - PCI

On 18 October 2017 at 13:58, Justin Harwood <Justin.Harwood () cpcc edu> wrote:

"Can anyone recommend what you are doing at your college as it relates to File Integrity Monitoring // PCI - 
products/processes/things that worked/didn't work, etc.?  Would be interested in seeing your thoughts on a manageable 
solution."

Justin - I think my question is really, "what are you trying to accomplish?"

For example, you could say, "I just want to know if files change and
check the box".
- okay, that's cool, it's an honest response. If you're on a budget,
OSSEC is fantastic. Upside: you'll know when files change. Downside:
you won't know who did it. Arguable either way: there is no
distinction between you changing a file as part of an update and an
attacker changing one (also arguable: if it happens outside a
maintenance window, are your admins the aggressors?). Does it make you
compliant? Ask your QSA, but it probably meets the requirement.
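For reference, OSSEC drives its file monitoring from a syscheck block in ossec.conf. A minimal sketch follows; the directories, frequency and file path are illustrative assumptions, not a recommendation:

```shell
# Hypothetical syscheck fragment for /var/ossec/etc/ossec.conf
# (written to /tmp here just to illustrate; paths/values are assumptions).
cat > /tmp/ossec-syscheck.conf <<'EOF'
<syscheck>
  <!-- scan every 12 hours (value is in seconds) -->
  <frequency>43200</frequency>
  <!-- check_all = hashes, size, owner, perms; report_changes diffs text files -->
  <directories check_all="yes" report_changes="yes">/etc,/usr/bin,/usr/sbin</directories>
  <!-- cut obvious noise -->
  <ignore>/etc/mtab</ignore>
</syscheck>
EOF
cat /tmp/ossec-syscheck.conf
```

report_changes is what makes the alerts readable: for text files the alert includes a diff, not just "hash changed".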

You could say, "I want to know if files change and who did it".
- okay, even better. You can turn on filesystem auditing through Group
Policy or with something like auditd, and collect it via Windows
Event Forwarding, a Splunk agent or winlogbeat; if you need to, send
it to a syslog cluster with something like nxlog. If they're Windows
systems, enable PowerShell logging with at least script block logging.
Go straight for transcription logging if you can get it. Alert on the
data with your SIEM, log agg solution, Tripwire, OSSEC, etc. Upside:
you know when files change and who did it. Downside: it's somewhat
chatty and a little disk intensive. Does it make you compliant? Ask
your QSA, but it probably meets the requirement.
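On the Linux side, the auditd piece of that can be sketched with a few watch rules. The watched paths and key names below are assumptions; tune them to your environment:

```shell
# Sketch of auditd watch rules for file-change attribution.
# Drop a file like this into /etc/audit/rules.d/ and load it with
# augenrules --load or auditctl -R (both require root).
# -w watches a path, -p wa = write + attribute changes, -k tags events
# so you can pull them back out with ausearch -k <key>.
cat > /tmp/fim.rules <<'EOF'
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/ssh/sshd_config -p wa -k sshd
EOF
cat /tmp/fim.rules
```

Attribution comes from the audit records themselves: ausearch -k identity --interpret shows the auid (the user who originally logged in) alongside the uid that made the change, which is the "who did it" OSSEC alone can't give you.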

Or you could say, "PCI compliance is important but I want to know
what's happening in that ecosystem. If something changes then it needs
to be corrected ASAP."
- This is my personal favourite and it's the direction I'm pushing,
well, pretty much everyone - to the point I'm trying to make it the
_baseline_ for _all servers_ and for all systems in regulated
environments. File audit logging + sysmon on Windows with full
process/network accounting or auditd on Linux with full execve/socket
accounting, forward it with WEF/Splunk/winlogbeat/filebeat/nxlog to
your SIEM or log agg solution of choice. Use a configuration manager
like puppet, chef, etc., that on <x> delta ensures <y resources> are
configured a certain way. Upside: you know every process that runs,
its parent PID, the network connections it makes, files that are
opened, the whole bit. Potential malware infection? No problem, track
it down (at least to the point it kills your telemetry, which should
be enough to alert you to Shut It Down). Potential bad actor? You know
what they did and when (again, until they kill your telemetry). As
long as the configuration agent is running, even config changes get
corrected. Use the config manager to ensure your telemetry is running.
Downside: it's a little chatty, takes some work and some storage. If a
node stops checking in, go check on it immediately. Does it make you
compliant? I'd like to meet the QSA who seriously wants to argue that
it fails to meet requirements. Don't stop at PCI, do this across your
entire server environment. Do it for your sys-admins, net-admins and
security folks. Do it for anyone doing research. Seriously, push it to
all managed systems.
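As a sketch of the "full execve/socket accounting" idea on Linux (the arch flag and key names are assumptions; 32-bit syscalls need matching b32 rules, and expect real volume on busy hosts):

```shell
# Hypothetical auditd rules for process/network telemetry.
# -a always,exit logs on syscall exit; -F arch=b64 scopes to 64-bit calls.
cat > /tmp/telemetry.rules <<'EOF'
-a always,exit -F arch=b64 -S execve -k proc-exec
-a always,exit -F arch=b64 -S connect -k net-connect
EOF
cat /tmp/telemetry.rules
```

From there /var/log/audit/audit.log is just another file to ship with filebeat/nxlog to your SIEM or log agg solution, and your config manager is what makes sure auditd and the shipper stay running.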

Because most folks are in Windows environments,
https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon.

Don't sell it as PCI compliance. Sell it as PCI compliance plus
troubleshooting plus incident response. FIM is just a fancy way of
saying, "know your system".

With a bit of luck there will be at least one "telemetry for
compliance and incident response" presentation and a full-day seminar
on log aggregation with the Elastic stack (including alerting) at SPC
in 2018 *fingers crossed*.

kmw
