
A flurry of updates!

Pretty much all my software has been updated.

The biggest change is switching to comma separators by default in all of the command line tools. The option to export to TSV is still there via the '--cs false' switch/value.

Other changes include adding a warning to the command line tools if they are run without admin rights.

Timeline Explorer was updated with new 3rd party controls and nuget packages, as well as support for importing the comma-separated files produced by my software. TLE was also tweaked to better handle file types not natively supported (i.e. random CSV files or Excel files).

Registry Explorer export to Excel format was tweaked to allow the timestamps to be treated as timestamps in Excel once opened. Prior to this, the timestamps were being treated as strings.

ShellBags Explorer had its controls updated as well. SBECmd was tweaked to be able to walk a directory like c:\users and find all the Registry hives. Prior to this, SBECmd would error out if it ran into a directory it was not allowed to go in. This shouldn't happen any more.



Both http://ericzimmerman.github.io/ and chocolatey have been updated

Enjoy!




Introducing MFTECmd!

MFTECmd (code name "Solved problem"😃) is a command line MFT parser built around my MFT project, found here. I wrote this program for a lot of reasons, including getting to know NTFS better, fixing deficiencies in other parsers, providing the community a pure C# implementation of an MFT parser, and so on. In short, I felt I could provide a means for people to learn about and leverage the information inside the MFT in ways that other tools don't. I also have plans to write a GUI based MFT viewer which will allow for an entirely different kind of interaction (think Technical details view in Registry Explorer).

My design goals for the project include:
  1. Parse all the data
  2. Expose all data when needed
  3. Balance the details extracted so as to benefit an examiner, but not overwhelm them
  4. Cross validate my findings against other MFT parsers (i.e. accuracy)
  5. Be fast!

During the development of this project I found and reported bugs for several other projects that deal with the MFT, some of which have been around for over a decade. Why do I bring this up? Because fresh eyes are never a bad thing. Strive to trust, but verify!

I would like to thank the people that tested things for me over the course of a week or so. A special shout out goes to MikePilkington who went above and beyond in helping out. It is much appreciated!


MFTECmd is pretty simple to use:



After supplying an MFT to parse via -f, one or both of the --csv and/or --de switches are required. --csv will dump out a CSV file containing information from the MFT across all available records. --de will dump all details about a given MFT entry/sequence number.
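For example, parsing an extracted $MFT and writing the CSV results to a folder might look like this (the paths are hypothetical; --csv takes the directory to save results to):

MFTECmd.exe -f "C:\Temp\$MFT" --csv C:\Temp\out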

When comparing the results of MFT parsing tools, it is important to compare apples to apples. In other words, only rely on what is in the $MFT file itself when looking at results. Things like extended or non-resident attributes are not available with just the $MFT and as such, the results will look different when parsing the file system in a full disk image vs. just the $MFT.

The other thing to be aware of is that you will NOT see an entry per FILE record found. In some cases, entries are extension records for other records. In these cases, the extension record is automatically pulled into the base record and the base record is processed as a whole. Doing it the other way around causes a lot of problems and makes things look strange, in that some entries would not appear to have a STANDARD_INFO attribute when in reality they do, etc.

This tool strives to present data to an examiner like you would expect to see as it relates to looking at a file system (vs a purely technical dump of the decoded bytes). You can of course always see all the entry details by using the corresponding switch to inspect any record you want (--de).

Exporting to CSV

First, let's look at exporting to CSV.



The raw output looks like this:


Things to notice here include the sub-second precision on the timestamps and the variety of Boolean properties, etc.

The other critical difference from other tools is that MFTECmd creates a line in the CSV for every FILE_NAME attribute (OK, almost; short names are not included by default). Tools that do not take this approach do not get hard links correct (among other things). This approach also makes the output much less "horizontal" (i.e. tons of columns to scroll through).

While this data can be pulled into Excel by double clicking it (or via Data | Import. NOTE: when importing via this method, Excel messes up any embedded CRLFs in things like ADSs, etc.), it looks best when viewed in Timeline Explorer.

If we drop the file into TLE, we get this:


Because of the amount of data available in the MFT, a balance has to be sought when exporting information. The following data is exported to CSV. Some of these fields are self explanatory and do not include any elaboration.

EntryNumber
SequenceNumber
InUse: Whether the record is in use or free
ParentEntryNumber
ParentSequenceNumber
ParentPath: Full path to the parent directory (NOT the absolute path to the file itself)
FileName:
Extension: For non-directories, the file extension, if any.
FileSize: The size of the file, in bytes. For an ADS, it is the size of the ADS
ReferenceCount: This is NOT the value stored in the MFT record, as it is usually not correct at all. Rather, this number is calculated by looking at all non-DOS FILE_NAME records and finding the total number of unique parent MFT references that exist (i.e. hard links)
ReparseTarget: Where a reparse point redirects to
IsDirectory: True if this entry is for a directory, false for a file
HasAds: True if this entry has one or more ADSs
IsAds: True if the details being displayed correspond to an ADS. While an ADS technically doesn't have any created/modified timestamps of its own, the corresponding FILE_NAME's details are used. This may change in the future and the timestamps will not be shown for ADSs
SI<FN: True if the STANDARD_INFO created or last modified is less than the corresponding FILE_NAME time
uSecZeros: True if STANDARD_INFO created, modified, or last access has 0s for sub-second precision
Copied: True if STANDARD_INFO modified < STANDARD_INFO created time
SiFlags: Things like "hidden" or "system", etc.
NameType: DOS, Windows, Posix, etc.
Created0x10: STANDARD_INFO created timestamp
Created0x30: FILE_NAME created timestamp
LastModified0x10
LastModified0x30
LastRecordChange0x10
LastRecordChange0x30
LastAccess0x10
LastAccess0x30
UpdateSequenceNumber
LogfileSequenceNumber
SecurityId: Offset pointing into $Secure
ObjectIdFileDroid
LoggedUtilStream
ZoneIdContents: For ADSs with a name of "Zone.Identifier", the contents of the ADS are extracted and saved to this column. This allows you to see the Zone ID and in some cases, the origin of where a particular file came from (URL will be included in the ADS).

Note that the Created timestamps are next to each other vs having all the STANDARD_INFO timestamps next to each other. This serves several purposes, but the biggest one would be the ability to quickly compare the 0x10 attribute to the 0x30 attribute. If you do not like this arrangement, simply drag the columns however you like in TLE and that order will be persisted. Some of the lesser used columns above are also hidden by default in TLE.

Let's take a closer look at some of these columns.

In the example below, notice how many of the Created0x30 columns are blank. This is because those timestamps are EXACTLY the same as the Created0x10 timestamp. When the 0x30 matches the corresponding timestamp from the 0x10 column, the 0x30 value is left blank. This provides several benefits including a much smaller file size, but the primary benefit is a lot less noise to have to wade through when looking for and examining entries where the two timestamps do NOT match.


When looking at the contents of the Zone Id Contents column, it is often helpful to see the contents of the entire ADS. Since the ADS contains embedded CRLFs, this becomes difficult to represent, but if you hover over the value, a tool-tip is displayed showing the entire contents. Of course, if you filtered in the column header for ZoneID=4 for example, only rows containing that value would be displayed.

While I could change the \r\n found in the ADS, this would be changing data and I am very hesitant to do that. I will most likely try to come up with a different way to display this data. You can also select the relevant cells and then use CTRL-C to copy the contents of the Zone Id Contents cells to the clipboard.



Because TLE allows you to group by columns, you can do things like this too:


Here we grouped by the HasAds column and expanded the group where this condition is true. The entries in this group are the files that have ADSs attached to them. The same could be done for the IsAds column as well (in addition to Copied, Timestomped, etc.)

Grouping by different columns has all kinds of potential uses. You can even group by more than one column at once (IsAds, then by Extension, for example).

To illustrate another awesome way to view data in TLE, check out what happens when we group by Zone Id Contents column:



While it can be difficult to see the full contents of the column in the normal view due to the CRLFs present, when we creatively group, things just fall out for us to find! =)

Getting entry details

The --de switch dumps all the details about a given entry and sequence number. Let's look at an example.


--de is generally used to inspect all the details after reviewing the CSV output. --de accepts both hex and decimal notation as well.
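For instance, the same (made up) record could be requested either way, assuming the entry and sequence numbers are given as entry-seq:

MFTECmd.exe -f "C:\Temp\$MFT" --de 624-5
MFTECmd.exe -f "C:\Temp\$MFT" --de 0x270-0x5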

In the above output, you can see all the details related to the entry including information from the header and all of the attributes. Notice that the second DATA attribute has a Name (Zone.Identifier) which means this is an ADS. You can also see the CRLFs in the DATA bytes (0D-0A).

Wrapping up

When dealing with free entries, the parser will attempt to locate a directory to associate the free file or directory to based on the FILE_NAME parent MFT reference info. In the cases where this data is no longer available however, the file will be placed in a ".\PathUnknown\Directory with ID 0x0000<entry #>-<seq #>" parent directory. Some examples look like this:




In my testing (and others' as well), MFTECmd is at least 5-10 times faster than anything else out there. As the saying goes however, speed is fine, but accuracy is final. In this case though, MFTECmd is both accurate AND fast.

If you run into an issue, try the --vl switch to narrow down where the issue lies and PLEASE consider sharing either the MFT itself or at least the FILE record so I can fix the issue.

You can get MFTECmd from the usual place and a Chocolatey package has been submitted as well.



MFTECmd v0.2.6.0 released

This version adds a lot of polish to the --de output and adds several new options as well.

Changelog:
  • body file output (NOTE: INDEX_ROOT entries are not included (yet? maybe never))
  • Remove msg about -d switch in -f switch
  • Added --dd and --do switches
  • Added auto decoding of resident data to ASCII (1252) and Unicode when using --de
  • Cleaned up output of --de so it's easier to read
  • Added --cs option to allow for tab delimited results

This version includes body file export support via the --body and --bdl switches. --body expects a path to save body file data to, and --bdl is the single character drive letter that should be used in body file output for the file path. You can use --csv and --body at the same time as well.
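A body file export for a C volume might look something like this (paths hypothetical):

MFTECmd.exe -f "C:\Temp\$MFT" --body C:\Temp\out --bdl c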

The --dd and --do switches allow for exporting a FILE record from an MFT file based on the offset to the FILE record. --dd is the path to save the results to, and --do is the offset (in decimal or hex) to the record to save. This option is useful if MFTECmd has an issue parsing something and you want to share the record causing the problem, etc. In the case of an error, use the --vl option to show the offset to the record leading up to the crash, then feed that offset into --do to recover the record.
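A hypothetical example, using a made-up offset:

MFTECmd.exe -f "C:\Temp\$MFT" --dd C:\Temp\out --do 0x16A2C00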

In several places, I added automatic decoding of resident data to both Unicode and ASCII (1252 code page) strings. Here is an example:


 This same decoding happens for extended attributes that are resident as well.

In general, the --de command output has been cleaned up significantly, including aligning timestamps and formatting entry and sequence numbers to allow for copy/pasting (useful for following parent MFT references, etc.).


Finally, the --cs option allows for using tab delimiters.

Get it in the usual places (chocolatey is delayed for some reason)!
 

Introducing VSCMount

Nothing crazy here, just a simple way to mount Volume Shadow Copies from the command line without having to do much of anything except provide the drive letter to where the VSCs are and where you want the VSCs to be mounted to.

The first requirement is having a source drive that has VSCs on it. This can be something like a fixed disk in your own system, a write blocked hard drive, or an image file mounted with Arsenal Image Mounter (as it emulates a physical disk, which we need in order to get to VSCs).

When using AIM, be sure to use the "Write temporary" option as seen below.




With a disk mounted, we can now use VSCMount. Usage is very simple:



Let's run through two scenarios. First, let's mount all the VSCs on the C drive.
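A command along the following lines does it; note that the switch names for the source drive and mount point shown here (--dl and --mp) are assumptions on my part, so check the built-in help:

VSCMount.exe --dl C --mp C:\VssMount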



Notice that the actual folder used when mounting the VSCs reflects the base directory provided along with the source for the VSCs. This is to allow you to mount multiple VSCs from multiple drives at the same time and know where they came from, etc.



The individual VSCs can be accessed by drilling down into any of the directories.

If I were to mount VSCs from the J drive (an E01 mounted via AIM), it would look like this:



which results in this:



Notice, however, that I still have access to the ones we mounted initially:



The --ud option will add the VSC creation timestamp to the directory names inside the mount point, like this:


This makes it a lot easier to drill down into things based on a pivot point from a timeline, etc.


To clean up, simply delete any of the directories inside the mount points or delete the directory containing all the symbolic links.

Enjoy and please let me know if there is anything else you want VSCMount to do for you.

You can get it in the usual place.















Everything gets an update, Sept 2018 edition

All of my software has been updated (well, almost all). Here is a list of what's changed

General

  • nuget package updates
  • 3rd party control updates
  • Moving away from LibZ to Fody.Costura (this makes all my software work on FIPS enabled machines. For the 5 of you that enabled FIPS, enjoy!)
  • Better error messages (which means easier fixes)

The programs listed below had additional changes beyond what is listed above.

IT IS RECOMMENDED TO UPDATE EVERYTHING REGARDLESS

WxTCmd

  • Handle situation where one of the tables does not exist

Hasher

  • Add skin support
  • BIG update from a 3rd party control standpoint

Timeline Explorer

  • Add CTRL-Up (return to top row), CTRL-Down (go to last row), CTRL-Left (go to leftmost column), and CTRL-Right (go to rightmost column) shortcut keys per this request
  • Added file support for an internal Microsoft format and KAPE

ShellBags Explorer

  • New GUIDs
  • Several new ShellItem types
  • Improved handling of zip content shellitem types
  • Added Summary tab which contains the most important information about the selected shellitem
  • Made the Details tab look better
  • The selected value in the grid will be shown in bold
  • Lots of fixes for fringe cases
  • You can now load dirty Registry hives by holding SHIFT while selecting or dragging and dropping. You will still be warned about the hive being dirty (and you should feel dirty for doing it), but SBE will load them just in case you do not have the LOG files

AppCompatCacheParser, AmcacheParser, and RECmd

  • Fixed an issue when looking for transaction logs when only a filename for a hive was specified (vs. a relative or absolute path)

Registry Explorer

  • Lots of fixes and tweaks
  • You can now bypass the wizard for replaying transaction LOGs for dirty Registry hives by holding SHIFT while selecting or dragging and dropping. You will still be warned about the hive being dirty (you are still dirty), but Registry Explorer will load them just in case you do not have the LOG files. In this situation, the file's bytes are updated in memory only and a new, updated hive is not saved to disk. To load the dirty hive as well, select it without SHIFT, then say no to replaying the logs, and yes to loading dirty hives

bstrings

  • Can open files in raw mode if needed (i.e. bypasses locked files). This happens automatically

Both chocolatey and the download page have been updated.

MFTECmd 0.3.6.0 released


MFTECmd 0.3.6.0 is now available.

Changes include:

- Added support for $Boot, $SDS, and $J files ($LogFile is coming soon)
- Changed the output format for body file to 1252 vs UTF8 because log2timeline
- Added --blf to write LF vs CRLF because log2timeline
- Added --ds option to dump FULL security details including all ACE records, etc.
- Misc fixes and tweaks

Let's take a look at the new stuff.

First, we see the new switches here:



Notice there is still only a single switch to pass in files. MFTECmd will determine the kind of file being passed in and act accordingly. Let's take a look at the new files it can parse.

First, $Boot, which looks like this:


In this case, we just passed in the file by itself. If you wanted to get the details in a file, just add --csv and supply a directory and MFTECmd will write out the details for you.

As far as the information that's displayed, you can see things such as sector size, cluster size, where the MFT starts, how big FILE and Index records are (they aren't ALWAYS 1,024 bytes in size!), as well as the volume serial number in a few different formats. Finally, the signature is shown.

$Boot is the only file type that does not require --csv, due to the small amount of information that is contained in the file.
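For reference, dumping the boot sector details can be as simple as this (path hypothetical):

MFTECmd.exe -f "C:\Temp\$Boot"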

The $SDS alternate data stream contains all of the security information associated with files. It contains information about the owner and group, along with all of the granular permissions.


The resulting CSV looks like this:


The other columns look like this:



There is a significant amount of information available in SDS records. The CSV file was designed to strike a balance between showing the most relevant and useful details vs. information overload from trying to write out deeply nested data in a horizontal fashion.

The CSV pares down the details by showing the total number of ACE types present and the UNIQUE types, because who wants to look at the same string 8 times?

Never fear though, because you can get the full details about a security record via the --ds switch. To look at a security record, simply supply the Id of the record in either decimal or hex form.
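Assuming the $SDS stream has been extracted to C:\Temp, that would look like this:

MFTECmd.exe -f "C:\Temp\$SDS" --ds 1579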

For example, Id 1579 looks like this:



Notice that you get the owner SID as well as the group SID, followed by all of the ACE records for the discretionary and system ACLs.

This pairs nicely when looking at an MFT processed by MFTECmd because the CSV file generated will contain a Security Id column (hidden by default). This value can then be dropped into MFTECmd with the corresponding $SDS file to view the owner of the file and all the other details.

Here we have the MFT details filtered for ntuser.dat. Notice the full path of the file includes the username, but let's verify that with the information in the $SDS file.



Notice that the entry number is 490501 and the sequence number is 9. Let's get the details for that MFT record:



Notice that the Security Id is displayed in the STANDARD_INFO attribute and has a value of 0x11FE. This converts to 4606 in decimal.



Which, amazingly, is the same exact number that we saw in the $MFT CSV output from above.

Using MFTECmd again, we can dump the details for that file, using either the decimal or hex notation. Since we did a decimal one already, let's do this one in hex.



Notice here we see the hex and decimal values for the security id, along with the owner.

Let's verify that SID to see who it is using X-Ways:



Nice!

Finally, $J parsing dumps all the details about changes to files (but not the changes themselves!) such as create, delete, adding to, and so on.



The resulting CSV looks like this:



Here you can see the file names, entry and sequence number for the file and parent, when the activity happened, and finally, the reasons for the update to show up in the log (extend, close, etc).

You can of course now pivot into MFTECmd to see the full details of any of the entries. Here is an example from further down the list:



The same could be done for the parent entry and sequence number as well.

I will be adding $LogFile support for the next release. Get the update here.

Enjoy!

Registry Explorer and RECmd 1.2.0.0 released!


This release sees changes in several different places. Let's start with the main Registry parser.

New in this release is the ability to expand a path with wildcards to all matching paths. We will see this in use when we talk about RECmd and what's new there.

Also, the parser, when recovering deleted values, checks to see if the non-resident value data record has been reallocated to something else. We will see what this looks like when we talk about Registry Explorer changes below.

Let's take a closer look at both tools

Registry Explorer

The general changelog looks like:

NEW: Updated controls and nuget packages
NEW: Display deleted values with red gradient in values list
NEW: Display deleted values with non-resident data with purple gradient when the data record that value points to has been reallocated to another cell somewhere else.
FIX: Handle fringe errors related to save paths, loading bookmarks, etc

The biggest changes in this release came about from work Dave Cowen was doing as it related to having Maxim Suhanov on the Forensic Lunch. Maxim showed that in certain situations the list that tracks values in a key can still hold a reference to a deleted value. Registry Explorer now checks for this and displays the recovered value differently than the active ones.

When this happens, you also get a new key icon. Both the new icon and an example of what this will look like are shown below.



The other situation that I mentioned above was when a recovered value that has non-resident data points to a data record that has been reused elsewhere. This situation is shown using a purple gradient, like this:



Note that the value has both the "deleted" and "data record reallocated" booleans set to true here. While some tools will not show you any data at all when this happens, Registry Explorer shows you what data the value points to so YOU get to make the decision if it is relevant. Also unlike other tools, Registry Explorer is actually available for use by anyone, for free, forever, and is open source. =)

The Technical Details view has also been updated to add the "Data record allocated" condition.

The Legend has been updated to show this new information as well.



RECmd


Finally, let's talk about RECmd. In short, RECmd has essentially been completely rewritten, adding support for plugins and an all-new batch mode.

The new options for RECmd look like this:



This brings RECmd in line with my other software with common switches, recursion, and so on. Note that, like my other Registry based command line tools, there is an --nl switch which disables transaction log support if you do not have them (or just do not want to use them).

The searching stuff has not been changed (the "s*" options), nor has the other find related stuff (base64 and size).

Note there are now --debug and --trace switches. The --debug switch can help with troubleshooting and seeing a bit of what is going on under the hood. --trace, on the other hand, is a firehose of super granular information that can help diagnose problems. I would stick with --debug!

And now, batch mode!


The BIG change is the --bn switch. This is used for "batch mode" which we will see here in a moment.


Before we get into that though, let's look at where RECmd.exe lives now. It has been moved to the same directory as RegistryExplorer.exe because the two programs share the same plugins now.


If you have ever used a plugin in Registry Explorer you have seen how nice they can make the analytical process. While Registry Explorer allows for exporting plugin data out to Excel, this can be tedious when doing the same thing over and over (just ask Dave Cowen!).

But exactly how are plugins leveraged in RECmd? The answer is via batch mode!

Batch mode is essentially a way to automate RECmd to parse hives, search for keys, and export to a common format.

Let's take a closer look at how we define a batch file. A batch file uses YAML configuration to define the rules. Here is an example:


So what is going on here?

The specification for a batch file is as follows (a minimal example is shown below):

Header
• Description: A general description of what this batch file is going to find
• Author: Name of who made this batch file (can be more too, like contact info)
• Version: A version number that should be incremented as changes happen
• Id: A unique (across all other batch files) GUID that identifies this batch file
• Keys: A list of things to look for

Keys collection
Each entry consists of:
• Description: A user-friendly description of what this key will find. Can be anything from the key name to a friendlier description of what it means, etc.
• HiveType: The type of hive this entry corresponds to. Valid choices are: NTUSER, SAM, SECURITY, SOFTWARE, SYSTEM, USRCLASS, COMPONENTS, BCD, DRIVERS, AMCACHE, SYSCACHE
• KeyPath: The path to the key to look for
• ValueName: OPTIONAL value that, when present, is looked for under KeyPath
• Recursive: Whether to process KeyPath recursively or not
• Comment: Like Description in that you can add various things here that end up in the CSV

HiveType determines which kind of hive the entry corresponds to. This saves time in that RECmd won't search a SOFTWARE hive for keys that won't ever exist there (because they are NTUSER specific, for example).

For the KeyPath, wildcards are supported. For example, "ControlSet00*\Services" would expand to "ControlSet001\Services" and "ControlSet002\Services", assuming there were two ControlSet keys of course. This can be extended as much as you like, so this works fine too:

SOFTWARE\Microsoft\Office\*\*\User MRU\*

RECmd would see those wildcards and determine which keys actually exist. These keys are then processed and extracted out to CSV. See the sample files for more examples, but it really is that easy!
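Putting the pieces above together, a minimal batch file might look like the sketch below. The GUID and key paths are purely illustrative, and the per-key Category field (which surfaces in the CSV output, as discussed later) is included as well; see the included sample .reb files for real-world examples.

Description: Example batch file
Author: Jane Examiner
Version: 1
Id: 11111111-2222-3333-4444-555555555555
Keys:
    -
        Description: UserAssist
        HiveType: NTUSER
        Category: Execution
        KeyPath: Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist
        Recursive: true
        Comment: Evidence of execution
    -
        Description: Services
        HiveType: SYSTEM
        Category: System Info
        KeyPath: ControlSet00*\Services
        Recursive: false
        Comment: Installed services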

This not only saves you time, but will actually find results you couldn't have known existed without first searching to find all possible key paths, updating the config, and so on. It is a HUGE time saver!


RECmd comes with several example batch files that you can use, including a really comprehensive one from Mike Cary (RECmd_Batch_MC.reb). Mike's batch file contains over 40 key/value pairs and includes examples of wild card usage, recursion, and so on.

With a batch file created, you then tell RECmd where to find it. Here is an example:
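A run against a directory of hives might look something like this (paths hypothetical, using the included BatchExample batch file):

RECmd.exe --bn BatchExample.reb -d C:\Temp\Hives --csv C:\Temp\Out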



Now in this particular example, only NTUSER hives existed in the -d directory, but because we can have any type of hive defined in a batch file (SYSTEM or SOFTWARE, etc.), if we pointed it to a different location, say, an E01 file mounted via Arsenal Image Mounter, it might look like this:



As RECmd runs in batch mode, several things are happening. First, every key and value that is found is added to a "normalized" CSV file. What this means is that you will be able to look at keys and values across hives in a consistent manner using a tool like Timeline Explorer. The challenge here is finding a balance between too little and too much data to put in such a normalized file.

As an example, one of these files might look like what we see below. The next two images are from the same file, but split across two images because of the overall width.




First, we see that we get the full path to the hive where the data came from.

The next few fields come from the batch config. Hive Type lets you quickly group by a particular hive. Description lets you document things as needed. Category allows you to group different keys and values into the same category (UserAssist and AppCompatCache being "Execution" related for example).

Next, we get the Key Path and Value Name. In the second screenshot, we continue with Value Type. Value Type is going to reflect the actual value type (if it's a straight up key/value) or "(plugin)", which tells us that a plugin was used to populate those entries.

Next, we see the Value Data, Value Data2, and Value Data 3 columns. These are the "normalized" columns that come from each plugin. Now some plugins have 10 things of interest, some have less. This is where the balance comes into play. Each plugin takes the data from each of its rows and "maps" it into one of these three fields. This might include paths, run count, focus time, execution times, and so on. In those cases, you will see the description in the data itself (as can be seen above for the UserAssist plugin).

Continuing along, we then have the Comment from the batch config, whether the key was processed recursively, and if the key is deleted. Next is the key's last write timestamp, and finally, when a plugin is used, the full path to the plugin details file.

To make this last part easier to understand, let's take a step back.

When RECmd runs in batch mode, several files will get generated in the --csv directory. Here is an example:



Now in this particular example, we see data from TWO executions of RECmd in batch mode. Let's break them down.

Let's do the green box first. Notice all of the files begin with the same timestamp. This lets you group things by the overall RECmd run. The first file in the list then references "RECmd" and "Batch" along with the batch file that was used for the run (BatchExample). This file is the normalized view we just discussed.

Notice also, however, that we have four files for TypedURLs and one for UserAssist. Notice how the key name comes right after the timestamp, again to group them together. Finally, after the key name, we see the full path to the Registry hive where this data came from.

Looking at the second one, in red, it is very similar, but we see the summary along with two TypedURLs and three UserAssist keys, along with the filename where that data came from. Depending on your batch file you may end up with dozens of plugin CSV files being generated.

Going back to our normalized CSV output, as you scan through the data, you will more than likely have what you need in the normalized view to determine what is going on. Recall though that some plugins might have 10 fields, whereas you may only get 3-4 of them in the normalized view.

What if you wanted to drill down into the details, exactly as the plugin generated them? This is where the "Plugin Detail File" comes into play. If you open this file in Timeline Explorer, you will see exactly the same data that you would if you loaded the hive in Registry Explorer, went to the same key, and reviewed the plugin generated data.

Here is an example of what one of the TypedURLs CSVs might look like:



By giving you both a normalized view AND quick and easy access to the details, you can get to the answers that much faster.

Consider this capability from a research standpoint. If you are always looking at, say, the syscache.hve file over and over after making changes, a simple batch config that hits the key in the hive (which therefore calls the Syscache plugin) will not only generate the summary but also extract out the exact, full details that the plugin generates, right to a CSV! This file can then be ingested or massaged as needed.

The use cases for this kind of thing are limitless in that I do not have to have anything to do with your use of RECmd. You can create as many batch files as you like. You can make new plugins (or ask me to. I will be happy to help), and the plugin DLLs can go right into the Plugins folder. This lets Registry Explorer and RECmd benefit from the plugin.

RECmd and batch mode opens up a massive amount of possibilities from an automated analysis perspective in addition to the more common use case of just looking at a lot of Registry keys in a single place.

With that said, RECmd really shines because of its standardization of the output, which lets you use tools like Timeline Explorer to filter, the ability to pivot to plugin data, and so on.

I really hope you get a lot of good answers in your cases from these changes.

If anything is unclear, please feel free to reach out and as always, if you run into any issues, please let me know and I will get them fixed as soon as possible.

Enjoy! You can get the updates at the usual place.









Locked file support added to AmcacheParser, AppCompatCacheParser, MFTECmd, ShellBags Explorer (and SBECmd), and Registry Explorer (and RECmd)

So what does this mean for you?

More access to more data, more faster!

What does it allow you to do? Automate more and leverage these tools for more proactive threat hunting because they now all run on live systems exactly the same as they do in the post-artifact collection world.

Up until today, the expectation was for my programs to not have to deal with files being "in use" or open in other applications. This was generally OK, but it became more and more painful when doing research and during live response tasks.

Let's see some examples of what things look like with the new versions.

Here we see what things looked like in the past when running the tools as a non-administrator:



But now, when executed as an admin, we see something much different (ignore the version # here as I had not updated it yet):


Want to dump Amcache.hve every 30 minutes on all your running computers and push the CSVs to a central location for processing and stacking? OK!
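One way to do that is a scheduled task running AmcacheParser under the SYSTEM account. The task name, tool path, and collection share below are hypothetical; -f and --csv are the parser's switches for the hive and output directory:

schtasks /Create /TN AmcacheDump /SC MINUTE /MO 30 /RU SYSTEM /TR "C:\Tools\AmcacheParser.exe -f C:\Windows\AppCompat\Programs\Amcache.hve --csv \\server\collect"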




The GUI side of things was not left out either.

ShellBags Explorer has always allowed you to load the live Registry (based on the currently logged in user), but the downside of this, until now, was that the last write timestamps were not available (because the .Net Registry class does not expose them).

When loading an offline hive though, you did get the timestamps, because it is using my parser under the hood.

So with this version of SBE, if you load the active Registry without Admin rights, you will see what you always saw in that the first and last interacted timestamps are not available. You will also get a warning as seen below.



However, if you run SBE as an administrator and then load the live Registry (like your own, or even better, on a bad guy's machine), you now get the first and last interacted with timestamps!



This is fully automatic and SBE takes into account and processes any transaction logs as well.


Finally, we have Registry Explorer. If you load a hive that is in use here, Registry Explorer will handle it by opening the hive and replaying any transaction logs that are needed and then displaying it in the interface. There is no wizard displayed to select logs or save the updated hive, etc.

Here is an example of an in-use Amcache and NTUSER.DAT hive loaded and ready to go!



Notice also (shown in the lower right) that the Status messages will reflect the operations Registry Explorer took as well.

Not to be left out, RECmd has also been improved in that when searching an entire disk or mounted image for Registry hives, there are far fewer false positives. RECmd also handles locked files without issue, as seen below.




Enjoy the improvements and please let me know if you run into any issues or if there are any programs I missed updating to support locked files.










Introducing KAPE!

(From the manual, which is included, and you should read...)

What is KAPE?

Kroll Artifact Parser and Extractor (KAPE) is primarily a triage program that will target a device or storage location, find the most forensically relevant artifacts (based on your needs), and parse them within a few minutes. Because of its speed, KAPE allows investigators to find and prioritize the more critical systems to their case. Additionally, KAPE can be used to collect the most critical artifacts prior to the start of the imaging process. While the imaging completes, the data generated by KAPE can be reviewed for leads, building timelines, etc.

How KAPE works

KAPE serves two primary functions: 1) collect files and 2) process collected files with one or more programs. By itself, KAPE does not do anything in relation to either of these functions; rather, they are achieved by reading configuration files on the fly and, based on the contents of these files, collecting and processing files. This makes KAPE very extensible in adding or extending functionality.

KAPE uses the concepts of targets and modules to do its work. KAPE comes with a range of default targets and modules for most common operations needed in most forensic exams. These can also be used as examples to follow to make new targets and modules.

At a high level, KAPE works by adding file masks to a queue. This queue is then used to find and copy out files from a source location. For files that are locked by the operating system, a second pass takes place that bypasses the locking. At the end of the process, KAPE will make a copy and preserve metadata about all available files from a source location into a given directory.

The second (optional) stage of processing is to run one or more programs against the collected data. This works by either targeting specific file names or directories. Various programs are run against the files and the output from the programs is then saved in directories named after a category, such as EvidenceOfExecution, BrowserHistory, AccountUsage, and so on.

By grouping things by category, examiners of all levels have the means to discover relevant information regardless of the individual artifact that a piece of information came from. In other words, it is no longer necessary for an examiner to know to process prefetch, shimcache, amcache, userassist, and so on as it relates to evidence of execution artifacts. By thinking categorically and grouping output in the same way, a wider range of artifacts can be leveraged for any given requirement.


Moving on...

There is, of course, a lot more detail in the manual (it's good, read it!), but let's look at some usage scenarios and how to make it do something next.

KAPE can be used on a live system or against a dead box system in the form of a write-blocked hard drive or a mounted E01. In every case, usage is the same. KAPE handles in-use files and volume shadow copies as well, making it very thorough in its approach to finding and collecting data.

The takeaway here is that KAPE wants a drive letter, directory, or UNC path as its source of data. If you can point KAPE at a path, it will do its thing.

So, in the case of a live system, we would simply make KAPE available by connecting some kind of external storage for example. From there we can target any drive letter or directory for collection.

For a dead box system, once Windows recognizes the write-blocked device or you have an E01 mounted (use Arsenal Image Mounter, NOT FTK Imager) and it has been assigned a drive letter, you are ready to begin.

At this point, we are assuming you have a target drive letter available. Several examples will be shown at some point below, for both a live system as well as a mounted image (again, use Arsenal Image Mounter. FTK Imager does not expose volume shadow copies).


Targets

Targets are responsible for defining the files and directories for KAPE to copy. The full specification for a target and its properties is in the manual, but it is pretty straightforward.

Targets (and modules) are written using YAML. This is an easy to use and understand format and many examples are included in KAPE.

A target for file system artifacts looks like this:


This is a simple example in that we have just specified the full paths for each file we are interested in. In some cases, like $MFT and $SDS, we have to use an additional property, AlwaysAddToQueue, because Windows will, on a running system, lie and say the file does not exist (via normal means anyway).

Let's look at another example:



Again we see paths specified to artifacts, but what is different here is the use of wildcards. We have no way of knowing beforehand all of the profiles on a computer, so the wildcard is automatically expanded by KAPE to find all the profiles that exist, then go get all the lnk files that exist (using a wildcard again, because we have no idea what the names will be).

Another interesting property is IsDirectory along with Recursive. These allow you to give a base directory and have all files/folders under it copied, such as jump lists in the first example.
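As a rough sketch of how these pieces fit together, a target might look like the following. The header fields and the Name/Path/FileMask property names are my assumptions based on the included targets, so treat the exact spelling as illustrative only:

Description: Example target
Author: Jane Examiner
Version: 1.0
Id: 22222222-3333-4444-5555-666666666666
Targets:
    -
        Name: LNK files
        Category: FileKnowledge
        Path: C:\Users\*\AppData\Roaming\Microsoft\Windows\Recent\
        FileMask: '*.lnk'
    -
        Name: $MFT
        Category: FileSystem
        Path: C:\
        FileMask: $MFT
        AlwaysAddToQueue: true
    -
        Name: Jump lists
        Category: FileKnowledge
        Path: C:\Users\*\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations\
        IsDirectory: true
        Recursive: true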


With targets defined (and KAPE comes with close to two dozen of them, ranging from filesystem to Registry to Event logs to Recycle bin, Outlook, and more), the target you are interested in is passed into KAPE and KAPE takes care of the following:

  1. Expanding the target file to all matching files
  2. Attempting to copy each file using "regular" means
  3. Deferring the copy if a file is in use
  4. At the end of regular copying, processing all deferred files using raw disk reads to get a copy of the file
  5. Recreating any directory structure and applying full resolution timestamps from the original directory
  6. Copying the files to the target destination folder and reapplying full resolution timestamps from the source file
  7. SHA-1 hashing the file
  8. Recording all of this in log files
This all happens in seconds depending on what you are targeting.

Targets inside targets inside targets (inception!)

Targets can reference other targets too, but what does this mean? 

Before we get into that, let's talk about how you should design targets and how they have been designed so far.

Targets should be specific in that they focus on a certain kind of file or files. For example, a target should be granular in that it only looks for event logs. Another target only looks for Registry hives. Yet another just looks for Chrome profiles, and so on.

Why do it this way? By keeping things granular and specific, YOU can choose to only target what YOU want. If you only want Chrome and Firefox information, you can just run those targets. 

But what about the situation where you do not know what browser is in use or you want to target the file system, Registry hives, and jump lists? 

This is where the concept of a compound target comes into play. Let's look at one:



So what is going on here? Notice that, rather than file paths, wildcards, and so on, we are referencing other target files!

When KAPE runs, it will automatically expand each of the targets above using what is in that target file. In this case, the details from InternetExplorer.tkape, Chrome.tkape, and Firefox.tkape will be expanded and each of the files located and copied! If that particular browser is not installed, that data would simply not be found.

With this in mind, you can see how powerful and flexible this approach is in that YOU get to decide what to collect and when to collect it. If you just want Registry hives and event logs, use those two targets in a new target called HivesAndEventLogs.tkape, then use that target.

There is also a special target, !All, that simply locates all other targets and runs them all. While this works, it will not be as fast as using a more specific set of targets for collection.

Modules

Put simply, modules run programs. More specifically, they run a SINGLE program. This is important to understand as modules are written for a single purpose.

Let's look at an example. In this case, a module for PECmd is shown below:


As with targets, the full spec for a module is outlined in the manual, but it is pretty simple.

The Processors group contains one or more entries for PECmd. In this case, there are three, because PECmd knows how to export data in three different formats. Looking in the header, you can see the ExportFormat is set to 'csv', which means the first processor in the list would be used (the one with the --csv switch).

Variable names

The values surrounded by % are variables that KAPE will replace at runtime. All the available variables are specified in the manual as well as in all the included modules for reference and examples, but it is pretty straightforward. 

  • %sourceDirectory% will be replaced with the value of --msource.
  • %destinationDirectory% will be replaced with --mdest plus the category from the module (ProgramExecution in the case of PECmd)
This allows KAPE to work regardless of source or destination directories, drive letters, UNC paths, etc.
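To make that concrete, here is a hedged sketch of what a PECmd-style module could look like. Property names beyond those mentioned above (Executable in particular) are my assumptions, and the command lines are illustrative only:

Description: PECmd example module
Category: ProgramExecution
Author: Jane Examiner
Version: 1.0
Id: 33333333-4444-5555-6666-777777777777
ExportFormat: csv
Processors:
    -
        Executable: PECmd.exe
        CommandLine: -d %sourceDirectory% --csv %destinationDirectory%
        ExportFormat: csv
    -
        Executable: PECmd.exe
        CommandLine: -d %sourceDirectory% --json %destinationDirectory%
        ExportFormat: json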

Handling redirection

Here is another example of a module. It is somewhat similar, but notice there is an additional property under the processor, ExportFile.


The command line is pretty specific here to just process a single file. The point here is not the commandline, but the presence of ExportFile.

ExportFile is used when a program does not know how to save its output to a file directly and relies on command line redirection (via > for example) to save results. Because of this limitation with the program, you must use the ExportFile property to specify where to save the output from the program.

NOTE: YOU CANNOT USE REDIRECTION on the CommandLine! Do not try to do something like this in the CommandLine property:

D:\temp\HP._lnk > hpout.txt

as it WILL NOT WORK.

The takeaway here is that you are covered in both situations: programs that can write directly to a file, like PECmd does, and programs that rely on redirection, where you just have to use ExportFile to capture the output. You can name the value for the ExportFile property any file name you like.
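A processor entry for such a tool might look like the hypothetical sketch below (the tool name, its switches, and the Executable property name are made up for illustration); note that the CommandLine itself contains no redirection:

    -
        Executable: SomeTool.exe
        CommandLine: %sourceDirectory%\HP._lnk
        ExportFormat: txt
        ExportFile: hpout.txt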


Running KAPE

KAPE requires administrator rights, so the first thing to do is open up an administrative level command prompt or PowerShell window. From there, running KAPE by itself shows us all available options.

Before we see it in use, let's take a second to look at the options:



While there are quite a few options, things are generally broken down into two main categories: Targets and Modules.

Target options start with 't' and module options start with 'm' and are grouped together as shown above.

For targets, --tsource, --tdest, and --target are all required.

For modules, --msource, --mdest, and --module are required. There is an exception to this rule however that we will see later. When using target and module options together, you can omit --msource and KAPE will assign the value for --tdest to --msource automatically.

Seeing available targets and modules

Notice there are two sets of switches that list targets and modules:

--tlist and --tdetail
--mlist and --mdetail

The 'list' commands will dump out the target name and other information about each target or module. Here we see --tlist in action:


If we add --tdetail to the command, we get this:


Notice that all of the path information inside each target is also shown when using --tdetail.

--mlist and --mdetail work in the same way, except they show available modules and their details.

Target source option

The --tsource switch tells KAPE where to start looking for files. This can be a hard drive, external drive, network share, F-Response mapped remote disk, UNC path, mounted E01, and so on. As long as it can be referenced using a path notation supported by Windows, it will work.

Target destination options

The --tdest switch tells KAPE where to create copies of the directories and files it locates. This is the simplest use case.

KAPE can also, however, place copies of the files it finds inside of either a VHD or VHDX (preferred) container by using either the --vhd or --vhdx switches. In either case, you must also supply a base name for the container. This base name will be used in naming the container that KAPE creates. In other words, this is NOT the full name of the container created, but rather, only part of it. 

Using this option would look like this:

kape.exe --tsource c --tdest c:\temp\tout --target evidenceofexecution --vhdx MyBaseNameExample

This would result in all of the files found being copied into a VHDX container (located under C:\temp\tout) named:

2019-02-13T172926_evidenceofexecution_MyBaseNameExample.vhdx

There are several things to notice here. One is the timestamp at the front. The second is the name of the target is included in the file name (evidenceofexecution). Finally, we see the base name before the extension.

Because we used a VHDX container, this allows us to simply double-click the container to mount it in Windows. Doing this results in a new drive letter showing up, like this:


(To make things a bit easier to see, I am only showing some of the Prefetch files that exist in the VHDX in order to illustrate the layout. There are dozens more prefetch files in the actual VHDX.)

The image above is from Directory Opus (the best file manager for Windows there is!). What the image shows, however, is the directories and files that exist in the VHDX in grouped format, which makes it easier to see what is going on than using File Explorer.

First, notice the drive label shows the date the container was made. Looking right, we see a list of files and directories in the VHDX container. The full path has been recreated from C on down because we told KAPE to process the C drive via --tsource.

In addition to the files themselves, notice there are also two CopyLog files, one a text file, and one a CSV file. These contain full details about what KAPE did:





Notice that both include the source path, destination path, source SHA-1, and the timestamps from the source file. The CSV also adds details related to how long it took to copy files and whether the file was locked or not (DeferredCopy column).

All of the files themselves, as mentioned before, have their full timestamps applied to them. Looking at a prefetch file inside the container, the properties look like this:



Notice the timestamps are from December and not Feb 13, which is when the container was created.

The VHD option works exactly the same way, except you end up with a VHD container.

Note: the first time you mount a container in Windows, it has to be done in read-write mode! Once it is initially mounted and unmounted, you can use PowerShell to mount the container as read-only, but it MUST be done r-w the first time or Windows will not recognize the file system. This does nothing to the data inside the container, just the VHDX file itself.
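One way to handle the subsequent read-only mounts is the built-in Mount-DiskImage cmdlet, using the container from the earlier example:

Mount-DiskImage -ImagePath "C:\temp\tout\2019-02-13T172926_evidenceofexecution_MyBaseNameExample.vhdx" -Access ReadOnly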


To unmount the container, right-click on the new drive letter and choose Eject.



In the images above, notice that the directory in the root of the container is the C drive. If we used the option to process volume shadow copies, we would end up with a different set of directories. If we ran this command (the same command as before, we just added --vss to the end):

kape.exe --tsource c --tdest c:\temp\tout --target evidenceofexecution --vhdx MyBaseNameExample --vss

We would see the following when we mounted the VHDX:



Notice we have three additional top-level directories, one for each of the VSCs that KAPE processed and found matching files. When KAPE ran, it looked like this:




Here we can see several neat things. One is that KAPE, just by adding the --vss switch, located and mounted all the shadow copies on the C drive. It then walked the C drive and each VSC, locating files along the way. It found 1,039 files in total, but only copied out 829 because of duplicated SHA-1 hash values.  THIS ENTIRE OPERATION TOOK 7.4243 SECONDS.

From here, we can see the VHDX file being created and then zipped (which makes it significantly smaller for transport).

All told, KAPE found, deduplicated, and forensically copied 829 files, placed them in a VHDX container, then zipped it in 10.7241 seconds.

The other thing to notice in the output above is there were several files that were locked. This is not an issue, however, as you can see the deferred files were copied at the end without any errors.

You can use the --zv switch to disable zipping of the container by adding --zv false to the command line.

Finally, the VHD(x) files generated by KAPE can be dropped into tools like X-Ways Forensics for immediate, targeted analysis!


Other target options

The --tflush option tells KAPE to delete the directory (if it exists) specified by --tdest before writing anything to it. This ensures that there are NO other files or directories in --tdest prior to KAPE placing files there (always a good thing).

Module source option

The --msource switch tells KAPE where to start looking for files for processing. This can be a hard drive, external drive, network share, F-Response mapped remote disk, UNC path, mounted E01, and so on, just like we saw with --tsource.

This does not have to be a directory that came from the target options, which means you can use KAPE to run live response on a running system or against a mounted E01 without using the Target options if you wanted to.

Module destination option

The --mdest switch tells KAPE where to instruct processors to save files to. Recall that modules run a program against files. The resulting output (a csv, json, or html file for example) would be saved to a directory underneath that specified by --mdest.

Other module options

The --mflush option tells KAPE to delete the directory (if it exists) specified by --mdest before writing anything to it. This ensures that there are NO other files or directories in --mdest prior to modules placing files there.

The --mef switch allows you to override the default processor as specified in a module configuration. For example, if you wanted json output from the PECmd module, you can use --mef json and KAPE will select the appropriate processor from the available processors defined in the module.

Consider the following command:

kape.exe --tsource c --tdest c:\temp\tout --target evidenceofexecution --tflush --mdest C:\Temp\mout --module PECmd --mflush

This would look like:


This is similar to what we saw with the target options, but look at what happens after the copy completes. KAPE starts processing the data with the PECmd module. This would end up with the following data being created in --mdest:


Since Prefetch is related to evidence of execution, the PECmd module has a category of ProgramExecution specified in the module file. If we also ran other modules that looked at evidence of execution artifacts, like appcompatcache or amcache, those results would end up in this same directory since those modules also use this same category.

The two CSVs shown above can be opened and analyzed in Timeline Explorer or any other program of your choice.

Notice, in this case, it took a massive 2.6467 seconds to find, copy and process all the prefetch files and get them ready for analysis.

If we add on the --vss option, it ends up looking like this:


And in this case, KAPE found and processed several hundred more prefetch files from VSCs, copied them out, and ran PECmd against them in 5.8383 seconds.

Other useful options

The --debug and --trace switches can be used when writing your own targets and modules as well as for things like progress indicators over slower links. 

--debug adds information about files being found, copied, and so on.



--trace adds even more details to the output including what KAPE expanded targets and modules to, etc.




To see just how much KAPE is doing, run KAPE a few times with both --debug and --trace enabled, then review the ConsoleLog file as compared to when neither of the switches was used.


Combining Targets and Modules

As we saw above, KAPE can use Target options or Module options. One does not rely on the other. However, both options can be used at the same time. This essentially allows you to build your own "collection and processing chains" that can do whatever you want them to do.

For example, say you want to collect prefetch, Registry hives, and jump lists, then run PECmd, RECmd, JLECmd, and Plaso to generate a supertimeline. You can accomplish this very easily by building a Target that pulls the necessary files, then building a Module that calls the appropriate modules to run the aforementioned programs.

The command line might look like this:

kape.exe --tsource c: --tdest L:\collect --target QuickTimeline --mdest L:\output --module QuickTimeline

So what is going on here? Where did the "QuickTimeline" target and module come from? Quite simply, you create it! Remember that target and module configuration files are just YAML, so using your favorite text editor, make a copy of an existing target (like the WebBrowsers one), then update the new file to point to the other targets you want, like this:
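For reference, a compound target of this sort might look roughly like the following (a sketch only; copy the header properties from the target you started with, and treat the names below as illustrative):

Description: QuickTimeline collection
Author: Your name here
Version: 1.0
Targets:
    -
        Name: Prefetch
        Category: FileFolderAccess
        Path: Prefetch.tkape
    -
        Name: RegistryHives
        Category: Registry
        Path: RegistryHives.tkape
    -
        Name: JumpLists
        Category: FileFolderAccess
        Path: JumpLists.tkape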



Starting with one of the included module files, our new module would be handled in exactly the same way:




With the target and module in place, we would just run KAPE as shown in the example above and KAPE does the rest!

KAPE will first look for and copy all files based on the Target file, copy them all to L:\collect, then call each processor against the files in L:\collect. The output from each program will be saved to L:\output which will contain directories for each category. The CSVs in these directories can be loaded into Timeline Explorer and analyzed, all within a few seconds!

Running this command might look like this:



A lot of this we have seen before, but notice that, in my case, I do not have the binary in the right place for the plaso module to work properly. KAPE tells us this is the case, but it does not prevent KAPE from functioning properly.

The other thing to notice here is that several other processors were found and executed. 

In this case, --mdest would look like this:




Other use cases

KAPE has special options, such as %d, that can be used on the command line for target and module destination paths, like this:

kape.exe --tsource c: --tdest L:\collect%d --target EvidenceOfExecution --mdest L:\output%d --module PECmd

First, note that we did NOT specify --msource. When --msource is not given on the command line, it is inherited from the value of --tdest. You can see why this would be necessary when using %d, because you would not know the name of the folder to use for --msource beforehand. =)

So what is this one doing? When KAPE runs, it replaces %d with a timestamp in the form YYYYMMddHHmmss, so what we would really end up with is:

L:\Collect20190213113605

and

L:\Output20190213113605

Notice the timestamps match on each folder as well.

Using this approach, you can use a scheduled task to automatically copy and process any files you want over any interval you want (perhaps dumping prefetch every hour to a root directory, with KAPE handling the timestamped folder names), as sketched below.
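A minimal sketch of such a scheduled task (the task name, paths, and schedule are placeholders; depending on how you create the task, the % character may need escaping):

schtasks /create /tn "KAPE Prefetch" /sc hourly /ru SYSTEM /tr "C:\Tools\KAPE\kape.exe --tsource C: --target Prefetch --tdest D:\Collect%d"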

If you have watched Dave Cowen's test kitchen and seen him manually locating and extracting files to different names, consider the ability to use KAPE and a Syscache target to automatically collect relevant Registry hives and other files every 15 minutes (or on demand, by repeating the command), then comparing the contents of each to find the trigger when updates happen.

This process can also be used to automatically find and package VHDX files of evidence over time by writing to a read-only Google Drive or Dropbox share, and so on. In other words, you can create exemplar data sets of Registry hives, prefetch, file system data, and so on into VHDX containers for people to test their tools against, validate tools, and so on.

Another situation is needing to share Registry hives with someone. You can use the RegistryHives target along with --vhdx and in a few seconds you have a nice package to send off to whoever needs it.
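Something like this would do it (a sketch; paths are placeholders, and the value after --vhdx is used as the base name for the resulting container):

kape.exe --tsource C: --target RegistryHives --tdest C:\Temp\tout --vhdx RegistryHives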

The use cases are unlimited and this is really just scratching the surface on what is possible.


And last, but not least, making it even easier to use.

KAPE is a command line tool at heart, and it is not difficult to use once you see the basics. 

With that said, KAPE has a secondary helper program, gkape, that wraps the command line version and makes it easier to use and get familiar with. 

The main interface looks like this:


As options are enabled, other sections open up. Here we see how things change when the target and module options are checked and a few of the required properties (outlined in red) are populated:



As options are populated, the command line is built at the bottom. Continuing to pick the required options, we would end up with this:



When a valid command line is built, the Execute and Copy command buttons are enabled.

Clicking Execute will run KAPE in a new window:



And all other aspects of KAPE work exactly the same way. Going to one or both of --tdest and --mdest would show you all of the files KAPE collected and processed.

KAPE is now available for free to everyone!


Finally, there is a public GitHub repository, located at https://github.com/EricZimmerman/KapeFiles that you can do pull requests into if you write useful targets and modules and want to share them with the community (PLEASE DO!)


You can get KAPE here. I hope you find it a useful tool for your toolbox!

KAPE v0.8.1.0 released!


TL;DR:

Use the same URL you were emailed to download the update!

Changes in 0.8.1.0:


  • Add support for UNC paths for --tsource and --tdest
  • Better detection when out of storage space on destination
  • Add check when --mdest and --tdest are the same and disallow it
  • Warn when --msource != --tdest
  • Clarify EULA section 1.3 as it relates to usage

Let's explore some of these changes, shall we?

UNC path support


First, let's write to a UNC path:


Then, read back from it:



New checks added

For some reason, several people were doing some interesting things with the various source and destination paths.

You almost always want --msource to be the same as --tdest, so that KAPE can process all the files it just found.

Note that --msource is NOT expected to be the path to KAPE's modules directory. KAPE already knows where those are. --msource is where you want KAPE to look for files to process. =)

When KAPE detects --msource not being equal to --tdest it will warn you, as shown below:





Because this is such a common scenario, you do not even NEED to specify --msource at all when using target and module options at once, so this command would work just fine:

.\kape.exe --tsource C --target evidenceofexecution --tdest C:\Temp\tout --tflush --module LECmd  --mdest C:\Temp\tout\



which looks like this (note the message about setting --msource):




Another interesting observation was people wanting to set --tdest to be the same as --mdest while using the flush options.

This essentially was telling KAPE to collect to a directory, then delete that directory prior to processing the collected files. Of course, this would not work since all the files were just deleted!

Now KAPE detects this and warns you before exiting.



GUI tweaks

gkape was also updated to make things a little nicer, including updated path selection dialog boxes and some of the safety checks described above.

The dialog boxes allow you to type a path vs. selecting them via the mouse:



Here we see the warning about setting the destination directories the same. KAPE clears Module destination after the OK button is clicked.



Here we see KAPE warning about setting Module source incorrectly:



Here KAPE warns when Module source is not the same as Target destination, in response to clicking OK above:



The other change in the GUI is that the command line will only have double quotes around strings when they contain spaces. This makes things a bit tidier.

EULA clarifications

The initial version of the EULA was too restrictive in who can use KAPE. Section 1.3 has been updated, including removing the language about people using KAPE as it relates to professional services. 

If you have any questions on this, please email kape@kroll.com and we can get them answered ASAP.


PLEASE UPDATE ASAP

Again, you can get the update right now by clicking the same link you used to initially download KAPE.

If this is your first time seeing this, you can download KAPE here.


Thanks!




KAPE v0.8.2.0 released!

Changes in this release include:

  • Change ConsoleLog from being file based to memory based. ConsoleLog is saved to --tdest and/or --mdest as necessary
  • Remove --dcl option since ConsoleLog is in memory now
  • Added --sync switch to automatically update Targets and Modules from the KapeFiles GitHub repository
  • Add --overwrite along with --sync to overwrite any local targets and modules
  • In the ConsoleLog, remove extra line feeds and only show first letter of log level
  • gkape updated to allow for editing and creating new targets and modules, including validation
  • Added ability to specify multiple targets and modules on the command line (--target filesystem,eventlogs for example)
  • Add Progress information to Title bar of Console or PowerShell window
  • gkape interface overhauled
  • Added PowerShell script for automatic updates of the main KAPE package
  • Add --mvars switch which allows passing in key:value pairs to modules
  • Polish and tweaks

A deeper look at some of the bigger changes

Multiple targets and modules can be passed into KAPE at once

KAPE's --target and --module switches now both accept a comma-delimited list of targets and modules to run. When using more than one target or module, they should be passed in without any spaces between the names, like this:

--target Amcache,EventLogs,FileSystem
--module AmcacheParser,AppCompatCacheParser,MFTECmd

These options allow for quickly running more than one thing at a time without the need to make a compound target or module. Of course, making a compound file is the recommended practice, especially for targets and modules you plan to run on a regular basis as it is easier to choose a single, compound target or module to run vs. specifying them all on the command line.

Automatic updates of the core package

New in this version is the inclusion of a PowerShell script, Get-KAPEUpdate.ps1, that compares the local version of KAPE with what is available online. If an updated version exists, it is downloaded and the local installation of KAPE is updated with the new version. 



When a new version is posted online, it will include all available Target and Module configurations from the public KAPE GitHub repository, found here.

The next option we will look at was written to assist in keeping KAPE up to date with changes to targets and modules between releases.

Automatic syncing with GitHub repo

To help keep targets and modules up to date, a new option, --sync, has been added to KAPE. Using --sync will update all target and module configurations with what is available in the GitHub repository. The default is to overwrite all existing files with whatever is on the server, but this can be disabled with another switch, --sow. Since --sow is true by default (sow stands for 'sync overwrite'), if you want to disable overwriting, you would use --sync --sow false.
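In practice, the two forms look like this (run from your KAPE directory):

kape.exe --sync
kape.exe --sync --sow false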

Here is an example of what this might look like. Notice that KAPE reports both updates and changes to existing configurations. When KAPE compares existing configurations, the SHA-1 is used to determine if the local files are the same as the remote files.


If overwriting is disabled via --sow false, any updated configurations would be reported as having an update available but skipped since overwriting is disabled.

Using this option allows you to stay current with the most up to date target and module configurations. For modules, it is of course still necessary for you to place the proper binaries where they belong before using the modules.

Other KAPE changes

Another new switch, --mvars, was added in this release. This switch allows for passing key:value parameters into KAPE which can then be used as variables in module files. For example, this command:

--mvars foo:bar,name:Eric,Level:Over9000

would result in the following variables being available in module files: 

%foo%
%name%
%Level%

Each of these would then be replaced with the value portion of the key:value pair, so if a module's command line referenced something like:

-f %sourceFile% -r %foo% -n %name% -L %Level%

KAPE would replace the variables at run time before executing the command, like this:

-f C:\some\path\toFile.txt -r bar -n Eric -L Over9000

This allows you to pass things in like computer names, IP addresses, or anything else you need without me having to add explicit support for every variable.

Keys and values should not contain spaces.

gkape interface updated

gkape has been completely overhauled to make using KAPE much more streamlined. Here is what the new interface looks like:


Usage is generally the same, but all of the new KAPE command line options are also available in gkape. As the gkape window is resized, the grids for Targets and Modules expand, allowing you more space to see the details related to configurations.

All of the source and destination boxes now allow free-form typing. Each of these also remembers the previously entered items, which allows selecting from a list on subsequent gkape runs. To remove an item, click the X to the left of the value you want to remove in the drop down.

For Targets and Modules, a grid is now displayed that shows the Name and Description (and Category, for modules) of each configuration. The checkbox to the left lets you select more than one target or module to run. The grids also allow for filtering and grouping, so finding all ProgramExecution modules becomes very easy for example.

On the modules side, adding key:value pairs is supported via the Variables section at the bottom of the Module options section. To remove an item, click it, then press the DELETE key.

A button to Sync with GitHub was added to the bottom of the interface.

Visual tweaks

When running via the command line or via gkape, KAPE now shows overall progress related to it copying files in the Title bar of the command window:




KAPE can be updated from the original link you received when you signed up. To get KAPE for the first time, go here! Enjoy!


Introducing EvtxECmd!!

I am happy to announce the first beta version of my Windows Event Log (evtx) parser. We will be talking about the command line version today, but I have plans for a GUI as well.

Let's start with a look at the options:


Most of the options are like all my other programs in that you can process a single file or a directory (recursively) using -f or -d.

With either -f or -d, the event log(s) will be parsed and information about the file(s) displayed, like this:


The metrics about event IDs and how many of each ID was seen are shown by default, but can be suppressed using the --met false option.

The next set of options, --csv, --json, and --xml can be used one at a time or in conjunction with one another. Overriding the default filename is also possible using the associated option (--csvf for example).
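For example, to generate CSV output with a custom filename (a sketch; paths are placeholders):

EvtxECmd.exe -f C:\Windows\System32\winevt\Logs\Security.evtx --csv C:\Temp\evtx_out --csvf Security.csv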

Also note in the screenshot above that the file was in use and EvtxECmd dealt with this cleanly. This means you can use EvtxECmd against dead box images, KAPE collections, or a running system and get the same results.

Before looking at the export options a bit closer, let's talk about two options that let you include or exclude event IDs from the output:

  • inc: Given a list of event IDs, include only those event IDs in the output (all others are excluded)
  • exc: Given a list of event IDs, exclude those event IDs from the output

Both options expect a list of integers, separated by a comma, with no spaces, like this:

--inc 4624,4625,5140

Export formats

CSV, XML, and json exports are all available and can be generated at the same time by including the corresponding options. In other words, you do not have to do CSV, then repeat for XML, and so on.

The XML option will format the event data in exactly the same way as Windows does, with one exception: the namespace is removed. This saves disk space and makes querying the data easier. Here is an example of what the XML data may look like:


The XML export differs from that of other tools in that every attempt has been made to adhere to the standard defined by Microsoft. This means you will not see attributes when they are of the NULL type, and so on. For most people, this is an irrelevant detail, but it is important to be as technically correct as possible.

A word about CSV (and json)

The CSV export format in EvtxECmd normalizes the event record into standard fields from the common area of the XML payload, such as Computer, Channel, EventID, Level, TimeCreated, and so on. The fields inside the <System> element are generally always the same and these properties are extracted from the XML into data for the CSV.

The custom data for any given event (usually) lives in EventData, and this is what makes event logs difficult to deal with, because every single event ID may have an entirely different payload. Historically, tools have gotten around this by generating different CSV files for different event IDs, which again leads to analytical issues in that you may need to load dozens of files to see what is happening. When this kind of thing is hard coded into a tool, you are also limited from looking at certain events until a vendor has gotten around to including that event in their program (if they deem it worth doing in the first place).

This however, is NOT the case with EvtxECmd. All event records are normalized across all event types and across all event log file types! 

Heresy you say! How is this possible? Well, the solution is to use a map to convert the customized data into several standardized fields in the CSV (and json) data.

The standardized fields that are available include:
  • UserName: user and/or domain info as found in various event IDs
  • RemoteHost: IP address and/or host name (or both!) information found in event IDs
  • ExecutableInfo: used for things like process command line, scheduled task, info from service install, etc. 
  • PayloadData1-6: Six fields to put whatever you see fit into
These fields can certainly be expanded upon as people find common ones that can be used across different event IDs. What I want to avoid, however, is having 100 columns that are not populated in most cases. If you come up with a good one, please let me know!

With these fields in mind, let's look at a map in detail.

A map from here to there

Map files are used to convert the EventData (the unique part of an event) to a more standardized format. Map files are specific to a certain type of event log, such as Security, Application, etc.

Because different event logs may reuse event IDs, maps need to be specific to a certain kind of log. This specificity is done by using a unique identifier for a given event log, the Channel element. We will see more about this in a moment.

Once you know what event log and event ID you want to make a map for, the first thing to do is dump the log's records to XML, using EvtxECmd.exe as follows:

EvtxECmd.exe -f <your eventlog> --xml c:\temp\xml

When the command finishes, open the generated xml file in c:\temp\xml and find your event ID of interest. Let's say it's from the Security log and it's event ID 4624. It might look like this:



Just about everything in the <System> element is normalized by default, but if you want to include anything from there you can do so using the techniques we will see below.

In most cases, the data in the <EventData> block is what you want to process. This is where xpath queries come into play.

So let's take a look at a map to make things a bit more clear.

In the example map below, there are four header properties that describe the map: who wrote it, what it's for, the Channel, and the event ID the map corresponds to.

The Channel and EventId properties are what make a map unique, not the name of the file. As long as the map ends with '.map' it will be processed.

The Channel is a useful identifier for a given log type. It can be seen in the <Channel> element ("Security" in the example above).

The Maps collection contains configurations for how to look for data in an event's EventData and extract particular properties into variables. These variables are then combined and mapped to the event record's first class properties.

For example, consider the first map, for Username, below.

The PropertyValue defines the pattern that will be used to build the final value that will be assigned to the Username field in the CSV. Variables in patterns are surrounded by % on both sides, so we see two variables defined: %domain% and %user%.

In the map entry's Values collection, we actually populate these variables by giving each value a name (domain in the first case) and an xpath query that will be used to set the value for the variable ("/Event/EventData/Data[@Name=\"SubjectDomainName\"]" in the first case).

When a map is processed, each map entry has its Values items processed so the variables are populated with data. Then the PropertyValue is updated and the variables are replaced with the actual values. This final PropertyValue is then updated in the event record which then ends up in the CSV/json, etc.

It is that simple! Be sure to surround things in double quotes and/or escape quotes as in the examples. When in doubt, test your map against real data!
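Pulling those pieces together, a single map might look roughly like this (a sketch based on the description above; the maps shipped in the evtx repository are the authoritative reference for the exact format):

Author: Your name here
Description: Security 4624 events
EventId: 4624
Channel: Security
Maps:
  -
    Property: UserName
    PropertyValue: "%domain%\\%user%"
    Values:
      -
        Name: domain
        Value: "/Event/EventData/Data[@Name=\"SubjectDomainName\"]"
      -
        Name: user
        Value: "/Event/EventData/Data[@Name=\"SubjectUserName\"]"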


To see what this looks like in practice, consider the Security_4624.map, shown below:



Notice that for each item in the Maps collection, the Property points to the field to update in the CSV with the custom data (Username or RemoteHost for example).


Map files are processed in alphabetical order. This means you can create your own alternative maps to the default by doing the following:

1. make a copy of the map you want to modify
2. name it the same as the map you are interested in, but prepend 1_ to the front of the filename.
3. edit the new map to meet your needs

Example:

Security_4624.map is copied and renamed to:

1_Security_4624.map

Edit 1_Security_4624.map and make your changes

When the maps are loaded, since 1_Security_4624.map comes before Security_4624.map, only the one with your changes will be loaded.

This also allows you to update default maps without having your customizations blown away every time there is an update.

json is special

When exporting to json via --json, the data will be exactly what is found in the CSV output, except presented in json format. One record is displayed per line. This allows you to use maps to populate data and then ingest the data in CSV or json format.

But what if you want ALL the details in json? This is where the --fj (for full json) switch comes into play. When this switch is used with --json, the full XML payload is converted to json and this is what is saved. The catch here is that YOU are now responsible for mapping all of the data from EventData into other properties, etc.

Here is the normal json output (pretty printed):



And here is the same record using --fj (also pretty printed):



The payoff

The maps are the key to making this process work. When loading the CSV data into Timeline Explorer (direct support coming in the next version), we get this:


Notice how we can see all of the 4624 and 4672 entries along with data pulled from their <EventData> element?

This allows you to see ALL events in line with ALL other events, regardless of WHERE the logs came from and what the payload is (assuming you have mapped the data). Of course filtering and all that still works, but you are no longer bound to someone else's notion of what the best way to do things is.

As we saw earlier, you get to make or update maps to suit your needs, your cases, your workflow, your idea of what is best. I have made, and will continue to make, map files and include them here, and I would love to have your contributions to make this an even easier process for the community.

To make your own maps, fork my evtx repo, look at the README and example maps in the evtx\Maps directory, then simply copy an existing map, rename it, update it to match the event ID/Channel you are interested in, TEST IT, then do a PR into the main repository.

Wrapping up

In one of the next releases of EvtxECmd, I will include a --sync option that will download all of the maps from the evtx repository to make it even easier to use and stay up to date.


Of course, EvtxECmd can be used with a module in KAPE as well, making the collection and processing of event logs to CSV a process that takes just a few seconds!!
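Assuming an EvtxECmd module exists in your Modules folder (a sketch; the target and module names are illustrative and the paths are placeholders), an end-to-end run might look something like this:

kape.exe --tsource C: --target EventLogs --tdest C:\Temp\tout --module EvtxECmd --mdest C:\Temp\mout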



That's it for now! This has been a fun project and I think the uses here are limitless.

You can get EvtxECmd from the usual place, or just run Get-ZimmermanTools and it will be discovered and downloaded for you.




KAPE 0.8.6.1 released

Changes in this release include:

- When using transfer options, transfer module output to destination when --zm true is used. This pushes the output from modules as a zip file to the destination server. You can still optionally transfer target collection too
- For batch mode, add --ul switch. This stands for "Use linear" and when set on an entry (it should be the first one ideally), KAPE will run each instance from _kape.cli one at a time, vs spawning all at once. Useful for fine grained control over batch mode
- In gkape, remember selected targets and modules when viewing a config via double click. This makes it possible to examine configurations without having to reselect everything previously selected
- Change --mvars separator to ^ since comma was often used in variable definitions. Also tweaked how variables containing : are treated (they just work now vs. being truncated)
- When KAPE updates a module's output file to avoid overwriting an existing file, report the name of the new output file to the Console so it's possible to know which input file corresponds to which output file
- Fix rare issue with module processing when standard out and standard error get written to concurrently
- Change redirecting StandardError to output file in modules to writing it to the Console. This prevents programs that mix normal output on StdErr from messing up output files
- Added 'Append' property (optional) to Module's processor section. If true, data is appended to the value for ExportFile. If append is false, a new, unique filename is generated to prevent files from being overwritten
- Standardize all timestamps used in log files, file names, etc. to correspond to the same timestamp (when KAPE was executed) vs. when files get created. This makes it easier to group related things together
- Added AWS S3 transfer support via --s3* switches
- Added Azure Storage transfer (SAS Uri) via --asu switch
- Updated gkape for newest features

Highlights

The documentation has been updated to include all of these features.

--zm output transfer

In previous versions, when using the --zm option to zip module output, the resulting zip file was not transferred using SFTP. This version sends the resulting zip file using any of the transfer options (SFTP, S3, or Azure Storage). Depending on how KAPE is called, you do not even have to have the files collected via targets transferred if you do not need them. Just omit the --vhdx | --vhd | --zip switch and include the --zm switch and KAPE will act accordingly. Should you want target and module output, include both.

Batch mode improvements

By default, batch mode spins up one instance of kape.exe per line in the batch file. The --ul switch prevents this, and when it is found, tells the master instance of kape.exe to only execute one instance of kape.exe at a time. This allows for much finer grained control. This switch should be part of the first command line in the batch file so it takes effect early on.
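As a rough sketch, a _kape.cli using --ul might contain one full set of KAPE arguments per line, something like this (paths and targets are placeholders; see the KAPE documentation for the authoritative batch file format):

--tsource C: --target EventLogs --tdest D:\Collect\EventLogs --ul
--tsource C: --target RegistryHives --tdest D:\Collect\RegistryHives
--tsource C: --target Prefetch --tdest D:\Collect\Prefetch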

Module output tweaks

Some command line programs do not keep track of their input file. Take RegRipper for example. Each ntuser.dat hive found would be processed by a module and result in a file named NTUSER.TXT being generated. On the second ntuser hive processed, a file named NTUSER.TXT already exists, so KAPE changes the output filename to NTUSER_1.TXT, and so on.

In the example below, we see the same phenomenon, but in this version, KAPE now reports to the console the name of the output file so it can be matched up with the corresponding input file.


Another change in this version is redirecting STANDARD_ERROR to the Console vs. the output file. This prevents contention when writing to the same output file. In the example above, the STANDARD_ERROR text is shown in yellow and has a giant red arrow pointing to it.

In another tweak related to this kind of thing, a new property, Append, was added to modules. This allows you to control whether or not output ends up in an incrementing file (NTUSER.TXT, NTUSER_1.TXT, NTUSER_2.TXT and so on), or if all output is appended to the first NTUSER.TXT.
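As a rough sketch (property names per the description above; the module files shipped with KAPE are the authoritative reference), a processor entry using Append might look like this:

Processors:
    -
        Executable: rip.exe
        CommandLine: -r %sourceFile% -f ntuser
        ExportFormat: txt
        ExportFile: NTUSER.TXT
        Append: true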

AWS S3 and Azure Storage transfer options added

Switches have been added to allow for S3 and Azure Storage transfers. You can use these switches together or independently, so if you wanted to, you could transfer things to SFTP, S3, and Azure, all at the same time.

Here is an example for S3:


And Azure Storage:
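For Azure Storage, the value passed to --asu is the SAS URI for the destination; a sketch with placeholders throughout:

kape.exe --tsource C: --target RegistryHives --tdest C:\Temp\tout --vhdx hostname --asu "https://<storageaccount>.blob.core.windows.net/<container>?<sas token>"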



Note that for ALL transfer options, at least WRITE permission is required or KAPE will complain and stop before doing anything. KAPE attempts to create a text file on the destination. Once this is created, KAPE tries to delete the text file. If write only permissions exist, the delete fails, and KAPE logs it (seen above in the Azure example).

Thanks to Matt and Troy for the idea and test buckets/storage to build this functionality!

Use the Get-KAPEUpdate.ps1 PowerShell script to update

KAPE 0.8.7.0 released!

Changelog

- Refactored --sync command to allow for and respect subdirectories in Targets and Modules. --sync will reorganize things based on the KapeFiles repo. Configs not in the KapeFiles repo end up under the !Local directory
- Overhauled Targets and modules organization. Compound targets and modules DO NOT need to be updated to new locations. KAPE will locate the base configs as needed on the fly
- With the new config organization, KAPE can now pull all configs under a directory specified in --target or --module. In this way, directories act like a compound config
- tlist and mlist now expect a path to look for configs. Use . to start. All configs in the provided path are displayed as well as subdirectories
- Added Folder column in gkape in Targets and Modules grids. Grouping by this column makes it easy to see what is in various folders
- Tweaks to transfer setting validation to ensure destination is writable
- Removed --sow switch
- When in SFTP server mode, display the KAPE switches needed to connect to the SFTP server for each defined user. This makes it as easy as copy/paste to tell KAPE to push to SFTP server
- Add --sftpu switch, which defaults to TRUE, that determines whether to display SFTP server user passwords when in SFTP server mode
- Added FollowReparsePoint and FollowSymbolicLinks to Target definition. These are optional and should be used on an as needed basis. The default for both is false if they are not present. This is the behavior KAPE has always had up to this point. Setting to true will follow the reparse or sym link which some programs use (Box, OneDrive, etc)
- Other minor tweaks and nuget updates


The highlights

New layout for Targets and Modules

Both the Targets and Modules directories have been reorganized. This will make it easier going forward to find and add new things in the correct place. The new organization looks like this (partial output):


There are several things to be aware of due to this change.

  • You can now use the names of the directories in --target and --module switches. This makes directories serve as a type of auto-generated compound configuration. This means something like this:
          --target WebBrowsers

          is functionally the same as:

          --target Browsers

          assuming WebBrowsers.tkape contains references to each of the tkape files underneath
          the Browsers folder.
  • You DO NOT need to specify the directory name before the config you want, both on the command line AND in compound targets and modules. KAPE will find and resolve everything automagically, so using KAPE has not changed at all in this respect.
  • Read the previous point again.
  • Any Targets and Modules NOT from the KapeFiles repository will be moved to the !Local folder when --sync'ing
  • Target names and module names must be unique. In other words, do not have a Target named 'MyTarget.tkape' in more than one folder or KAPE will inform you of this.
  • The --tlist and --mlist switches now expect a path after them, like '--tlist .' for example. This will display all the configurations in the specified location, followed by a list of any directories found, like this:



Other changes

For SFTP server mode, the default is to show all the configured users' connection switches (except IP since you have to pick one), like this:


This makes it easy to get up and running quickly with almost no typing at all! Disable this option via '--sftpu false'

Finally, this version also includes updated EZ Tools and new configurations for cloud based storage (Thanks Chad Tilbury!). Along with the new cloud storage stuff are new properties to handle reparse points and symbolic links on a case by case basis. Be sure you test and validate things when using these properties!!

Enjoy and please let me know if you have any issues!!!




KAPE 0.9.2.0 released!


ANNOUNCEMENT!

KAPE has been nominated for a 4Cast award for non-commercial software of the year! Please take 18.5 seconds to vote for KAPE! =)

END ANNOUNCEMENT!


NOTE: If you get strange errors when updating or using --sync, use the --debug option to see which file is causing the issue. For most, simply deleting the offending file will fix things. Worst case, just delete your local KAPE install and redownload.

Changelog

  • REMOVE IsDirectory from Target definitions. Any existing targets not part of the official repo will need to be adjusted
  • In Target definitions, Path is now ALWAYS assumed to be a directory. This means it should NOT contain wildcards like *.pf. These should be moved to the FileMask property. All official targets have been updated to reflect this. FileMask is still optional. If it is not specified, * is assumed, which will match all files in Path
  • In Target definitions, Recursive is optional. If missing, it is assumed to be false. Existing targets with Recursive: false set cleaned up (property deleted)
  • Swept existing targets for empty comments and deleted them
  • Cleaned up Path properties in Targets (Paths should end with \ by convention. This is not required, but makes it more obvious as to what the path contains)
  • Added ability to reference subdirectories under Targets in Target definitions. Example: To pull in all targets under Targets\Antivirus, use Path: Antivirus\*
  • Allow regex in Target FileMask spec. Example: FileMask: regex:(2019|DSC|Log).+\.(jpg|txt) tells KAPE to use the regex to match against *complete* filenames. KAPE will add \A to the beginning of the regex and \z to the end, to ensure the entire filename is matched.
  • Because of the change above, it is also now possible to do things in non-regex based FileMasks. Example: FileMask: 'Foo*.VHD'. Prior to this change, only *.VHD was possible. 
  • Added WaitTimeout to module definition as an optional property. When present, and greater than 0, signifies the number of minutes KAPE should wait for a module to finish. If this is exceeded, KAPE will stop waiting and move on.
  • Updated nuget packages
  • Updated targets

Target definition changes

This version cleans up a lot of things related to target files. Specifically, the IsDirectory property has been removed. This means that Path is always expected to be a directory now.

Here is an example of the old format:


Vs the same Target in the new format:



If FileMask is omitted, it is assumed to be *, which will match everything under Path.
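For reference, a new-format target entry might look roughly like this (a sketch; the targets in the KapeFiles repository are the authoritative examples):

Targets:
    -
        Name: Prefetch
        Category: FileFolderAccess
        Path: C:\Windows\Prefetch\
        FileMask: '*.pf'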

For 0.9.2.0, I reviewed every existing target and did the following:
  1. Remove IsDirectory
  2. Update Path to be only a directory if it contained a file mask
  3. Moved the file mask to the FileMask property
  4. Removed Recursive: false from all targets (since it is the default)
  5. Deleted empty comments
By convention, the Path property should end with a \ to keep things consistent, but this is not mandatory (I do feel it makes it easier to understand what is going on however).

Also new in this version is much improved FileMask capabilities. In fact, you can now use full blown Regular Expressions as well as more traditional file masks, like *.jpg or Foo*bar.txt.

This means that, for all existing targets, nothing needs to be changed as the old way still works. If you want to do regex matching against the ENTIRE filename, prefix the Filemask with regex:, like this:

FileMask: regex:(2019|DSC|Log).+\.(jpg|txt)

This allows for pretty much unlimited flexibility when looking for files, especially when wanting to walk an entire file system looking for certain extensions. By adding a single entry in regex format, a single pass of the file system will happen, vs. one pass per file mask. How much time you gain here is of course a matter of several other factors, but it's nice to have the option!

Finally, for compound targets, you can now reference a directory under the Targets folder, should you wish to dynamically include all target files under that directory. Example:


This tells KAPE to look for any tkape files under the Targets\Antivirus folder and include them in the compound target. This has been possible for a long time via the command line, using the name of the directory in the --targets option, but this makes it possible to specify them in target files.

Module definition changes

Troy Larson asked for the ability to have KAPE wait a predetermined amount of time for a module, vs. letting a runaway module run indefinitely.

To meet this requirement, an optional WaitTimeout value was added to the module header, like this:
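A sketch of such a module header (abbreviated, Processors section omitted; start from a copy of an existing module, and note that WaitTimeout is expressed in minutes):

Description: AppWithTimeout example
Category: Example
Author: Your name here
Version: 1.0
WaitTimeout: 1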


This value is specified as the number of minutes to wait. In the above example, AppWithTimeout will sleep for 5 minutes, but KAPE will only wait around for 1 minute for it to finish. When KAPE is run with this module, the following happens:


If no timeout is specified, KAPE will wait forever for a module to finish

