Monday, June 22, 2015

SQLite Deleted Data Parser Update - Leave no "Leaf" unturned

One of the things I love about open source is that people have the ability to update and share code.  Adrian Long, aka @Cheeky4n6Monkey, did just that. Based upon some research, he located additional deleted data that can be harvested from re-purposed SQLite pages - specifically the Leaf
Table B-Tree page type. He updated my code on GitHub and BAM!! just like that, the SQLite Deleted Data parser now recovers this information.

He has detailed all the specifics and technical goodies in a blog post, so I won't go into detail here. It involved a lot of work and I would like to extend a huge thank you to Adrian for taking the time to update the code and for sharing his research.

You can download the most recent version on my GitHub page. I've also updated the command line and GUI versions to support the changes.

Tuesday, June 9, 2015

Does it make sense?

Through all my high school and college math classes, my teachers always taught me to step back after a problem was completed and ask if the answer made sense.  What did this mean?  It meant don't just punch numbers into the calculator, write the answer, and move on. It meant step back, review the problem, consider all the known information and ask, "Does the answer I came up with make sense?"

Take for instance the Pythagorean Theorem. Just by looking at the picture, I can see that C should be longer than A or B. If my answer for C was smaller than A or B, I would know to recheck my work.

Although the above example is relatively simple, this little trick also applied to more complicated math problems, and many times it helped me catch incorrect answers.

But what does this have to do with DFIR?  I believe the same principle can be applied to investigations. When looking at data, sometimes stepping back and asking myself the question, "Does what I am looking at make sense?" has helped me locate issues I may not have otherwise caught.

I have had at least a couple of DFIR situations where using this method paid off.

You've got Mail...
I was working a case where a user had Mozilla Thunderbird for an email client. I parsed the email with some typical forensic tools and began reviewing emails.

While reviewing the output, I noticed it seemed pretty sparse, even though the size of the profile folder was several gigs. This is where I stepped back and asked myself, does this make sense? It was a large profile, yet contained very few emails.  This led to my research and eventual blog post on the Thunderbird MBOXRD file format. Many of the programs were just parsing out the MBOX format and not the MBOXRD format, and thus, missing lots of emails. Had I just accepted the final output of the program, I would have missed a lot of data.
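For background on the two formats: in mboxrd, any body line matching ">*From " has one ">" prepended when the mailbox is written, so a reader has to strip exactly one ">" back off. A minimal sketch of that unquoting step (my own illustration, not code from any particular tool):

```python
import re

def unescape_mboxrd_line(line):
    # mboxrd quoting: lines matching /^>*From / had one '>' prepended when
    # the mailbox was written; strip exactly one to restore the original.
    if re.match(r">+From ", line):
        return line[1:]
    return line
```

A tool that only understands plain mbox skips this step and can misinterpret message bodies and boundaries.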

All your files belong to us...
Many times I will triage a system either while waiting for an image to complete, or as an alternate to taking an image. This is especially useful when dealing with remote systems that need to be looked at quickly. Exporting out the MFT and other files such as the Event Logs and Registry files results in a much smaller data set than a complete image. These artifacts can then be used to create a mini-timeline so analysis can begin immediately (see Harlan's post here for more details on creating mini-timelines).
To parse the MFT file into timeline format, I use a tool called Analyze MFT to produce a bodyfile. Once the MFT is in bodyfile format, I use one of Harlan Carvey's tools to convert it into TLN format and add it into the timeline.
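As a rough sketch of that conversion step: a TSK 3.x bodyfile line is pipe-delimited (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, times in epoch seconds), while a TLN line is Time|Source|System|User|Description. Something like the following (the hostname placeholder and grouping logic are my own choices for illustration, not Harlan's implementation):

```python
def bodyfile_to_tln(line, host="HOSTNAME", user=""):
    # TSK 3.x bodyfile fields:
    # MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
    fields = line.rstrip("\n").split("|")
    name = fields[1]
    atime, mtime, ctime, crtime = (int(x) for x in fields[7:11])
    # Emit one TLN line per unique timestamp, tagged with its MACB letters
    grouped = {}
    for letter, ts in (("M", mtime), ("A", atime), ("C", ctime), ("B", crtime)):
        grouped.setdefault(ts, []).append(letter)
    return ["%d|FILE|%s|%s|%s %s" % (ts, host, user, "".join(letters), name)
            for ts, letters in sorted(grouped.items())]
```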

While working a case, I created timelines using the above method for several dozen computers. After the timelines were created, I grepped out various malware references from these timelines. While reviewing the results, I noticed many of the malware files had a file size of zero. Weird. I took a closer look and noticed ALL the malware files contained a file size of zero. Hmmm.. what did that mean? What are the chances that ALL of those files would have a zero file size??? Since the full disk images were not available, validating this information against the actual files was not an option. But I stepped back and asked myself, given what I knew about my case and how the malware behaved.. does that make sense?

So I decided to "check my work" and do some testing with Analyze MFT. I created a virtual machine  with Windows XP and exported out the MFT.  I parsed the MFT with Analyze MFT and began looking at the results for files with a zero file size.

I noticed right away that all the prefetch files had a file size of zero, which is odd.  I was able to verify that the prefetch files sizes were in fact NOT zero by using other tools to parse the MFT, as well as looking at the prefetch files themselves in the VM. My testing confirmed that Analyze MFT was incorrectly reporting a file size of zero for some files.
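This kind of sanity check is easy to script against a parser's output. For example, a few lines of Python (my own quick check, not part of any tool) can flag bodyfile entries where a prefetch file reports a zero size - something that should essentially never be true of a real .pf file:

```python
def suspicious_zero_sizes(bodyfile_lines, ext=".pf"):
    # Field 6 (0-indexed) of a bodyfile line is the file size; a prefetch
    # file is never legitimately zero bytes, so flag any zero-size match.
    flagged = []
    for line in bodyfile_lines:
        fields = line.split("|")
        if len(fields) > 6 and fields[1].lower().endswith(ext) and fields[6] == "0":
            flagged.append(fields[1])
    return flagged
```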

After the testing I reached out to David Kovar, the author of Analyze MFT, to discuss the issue. I also submitted a bug on the GitHub page.

If I had not "checked my work" and assumed that the file size of zero meant the files were empty, it could have led to an incorrect "answer".

So thanks to those teachers that ground the "does it make sense" check into my head, as it has proved to be a valuable tip that has helped me numerous times (more so than the Pythagorean Theorem...)

Tuesday, April 7, 2015

Dealing with compressed vmdk files

Whenever I get vmdk files, I take a deep breath and wonder what issues might pop up with them. I recently received some vmdk files, and when I viewed one of them in FTK Imager (and some other mainstream forensic tools), it showed up as the dreaded "unrecognized file system".

To verify that I had not received some corrupted files, I used VMware's disk utility to check the partitions in the vmdk file. This tool showed two volumes, so it appeared the vmdk file was not corrupted:

When I tried to mount the vmdk file using vmware-mount, the drive mounted, but was not accessible. A review of the documentation, specifically the limitations section, pointed out that the utility cannot mount compressed vmdk files:

You cannot mount a virtual disk if any of its files are encrypted, compressed, or have read-only permissions. Change these attributes before mounting the virtual disk

Bummer. It appeared I had some compressed vmdk files.

So after some Googling and research, I found a couple different ways to deal with these compressed vmdk files - at least until they are supported by the mainstream forensic tools. The first way involves converting the vmdk file, and the second way is by mounting it in Linux.

Which method I choose ultimately depends on my end goals. If I want to bring the file into one of the mainstream forensics tools, converting it into another format may work best. If I want to save disk space and time, or do some batch processing, mounting it in Linux may be ideal.

One of the first things I do when I get an image is create a mini-timeline using fls and some of Harlan's tools. Mounting the image in Linux enables me to run these tools without the additional overhead of converting the file first.

Method 1: "Convert"

The first method is to "convert" the vmdk file.
I'm using "quotes" because my preferred method is to "convert" it right back to the vmdk file format, which in essence, decompresses it.

The vmdk file format is usually much smaller than the raw/dd image and appears to take less time to "convert".

I used the free VBoxManage.exe that comes with VirtualBox. This is a command line tool located under C:\Program Files\Oracle\VirtualBox. This tool gives you the option to convert the compressed vmdk (or a regular vmdk) into several formats: VHD, VDI, VMDK and Raw. The syntax is:

VBoxManage.exe clonehd "C:\path\to\compressed.vmdk" "C:\path\to\decompressed.vmdk" --format VMDK

It gives you a nice status indicator during the process:

Now the file is in a format that can be worked with like any other normal vmdk file.

Method 2: Mount in Linux

This is the method that I prefer when dealing with LOTS of vmdk files. This method uses Virtual Box Fuse, and does not require you to decompress/convert the vmdk first.

I had a case involving over 100 of these compressed files. Imagine the overhead time involved with converting 100+ vmdk files before you can even begin to work with them. This way, I was able to write a script to mount each one in Linux, run fls to create a bodyfile, throw some of Harlan's parsers into the mix, and create 100+ mini-timelines pretty quickly.

There is some initial setup involved but once that's done, it's relatively quick and easy to access the compressed vmdk file.

I'll run through how to install Virtual Box Fuse, how to get the compressed vmdk file mounted, and then how to run fls on it.

1) Install VirtualBox:

sudo apt-get install virtualbox-qt

2) Install Virtual Box Fuse. It is no longer in the app repository, so you will need to download and install the .deb file - don't worry, it's pretty easy, no compiling required :)

Download the .deb from Launchpad under "Published Versions". Once you have it downloaded, install it by typing:

sudo dpkg -i --force-depends virtualbox-fuse_4.1.18-dfsg-1ubuntu1_amd64.deb

Note - this version is not compatible with VirtualBox v4.2. At the time of this writing, when I installed VirtualBox on my Ubuntu distro, it was version 4.1 and worked just fine. If you have a newer version of VirtualBox, it will still work - you just unpack the .deb file and run the binary without installing it. See the bottom of the thread here for more details.

3) Mount the compressed VMDK file read-only

vdfuse -r -t VMDK -f /mnt/evidence/compressed.vmdk /mnt/vmdk

This will create a device called "EntireDisk" and Partition1, Partition2, etc. under /mnt/vmdk

(even though I got this fuse error - everything seems to work just fine)

At this point, you can use fls to generate a bodyfile. fls is included in the Sleuth Kit, and is installed on SIFT by default. You may need to specify the offset for your partition. Run mmls to grab this:

Now that we have the offsets, we can run fls to generate a bodyfile:

fls -r -o 2048 /mnt/vmdk/EntireDisk -m C: >> /home/sansforensics/win7-bodyfile
fls -r -o 206848 /mnt/vmdk/EntireDisk -m C: >> /home/sansforensics/win7-bodyfile

Next, if you want access to the files/folders etc., you will need to mount the EntireDisk image as an ntfs mount for each partition. This assumes a Windows system - if not, adjust the filesystem type accordingly:

Mount Partition 1, Offset 2048:

Mount Partition 2, Offset 206848:

There are multiple ways to deal with this compressed format, such as using the VMware or VirtualBox GUI to import/export the compressed file... these are just a couple of examples. I tend to prefer command line options so I can script out batch files if necessary.

Wednesday, March 4, 2015

USN Journal: Where have you been all my life

One of the goals of IR engagements is to locate the initial infection vector and/or patient zero. In order to determine this, timeline analysis becomes critical, as does determining when the malware was created and/or executed on a system.

This file create time may become extremely critical if you're dealing with multiple or even hundreds of systems and trying to determine when and where the malware first made its way into the environment.

But what happens when the malware has already been remediated by a Systems Administrator, deleted by an attacker, or removed because new AV signatures were pushed out?

Many of the residual artifacts demonstrate execution, however, it seems very few actually document when the file was created on the system. This is where the USN Journal recently helped me on a case. The USN Journal is by no means new, but I just wanted to talk about a case study and share my experience with it, as I feel it's an often overlooked artifact.

For purposes of demonstrative data, I downloaded and infected a Windows 7 VM with malware. This malware came from a phishing email that contained a zip file; the zip file contained a payload, voice.exe. For more details on this malware sample, check out
So let's run through some typical artifacts that demonstrate execution, along with the available timestamps, and see what they do and don't tell us...

The MFT contains the filesystem information - Modified, Accessed and Created dates, file size etc. However, a deleted file's MFT record may be overwritten. If you're lucky, your deleted malware file will still have an entry in the MFT - however, in my case this was not to be.

The ShimCache
I won't go into too much detail here as Mandiant has a great white paper on this artifact. Basically, on most systems this artifact contains information on files that have been executed including path, file size and last modified date. I parsed this registry key with RegRipper, and located an entry for the test malware, voice.exe:

ModTime: Wed Jan 28 15:28:46 2015 Z

So what does this tell me? That voice.exe was in the Downloads path, was executed, and has a last modified date of 01/28/2015 - <sigh> no create date </sigh>.
The User Assist is another awesome key... it displays the last time a file was executed, along with a run count. Once again, using RegRipper to parse this I located an entry for the test malware:

Mon Feb 23 02:33:34 2015 Z C:\Users\user1\Downloads\voice#5734223\voice.exe (2)*

By looking at this artifact, I can see that the file was executed twice - once on February 23rd, however, I don't know when the first time was. It could have been minutes, hours or days earlier. It still does very little to let me know when the file was created on the system, although I do know it should be sometime before this time stamp.

Prefetch File
This is a great artifact that can show execution and even multiple times of execution. But what if the system is a Server, where prefetching may not be enabled?  In my case, prefetching was enabled, but there was no prefetch file for the malware in question - at least that is what I thought until I checked the USN Journal. And, once again, it does not contain information related to when the file was created on the system.

Ok, so I've reviewed a couple of typical artifacts that demonstrated that the malware executed (for more artifacts related to execution, check out this blog post by Mandiant, "Did it Execute?"). With timeline analysis, I may even get an idea of when this file was most likely introduced to the system - however, a definitive create date would be nice to have. This brings me to... the USN Journal.

USN Journal
There are a couple of tools I use to parse the USN Journal. A quick, easy to use script is available from Harlan Carvey's GitHub page.

Parsing the USN Journal and looking for the malware in question, I see some interesting entries for voice.exe, highlighted in red below:

Sweeet! I now have a File_Create timestamp for voice.exe I can use in my timeline. I can also see that voice.exe was deleted relatively quickly, ~30 seconds after it was created. This deletion occurred at about the same time the prefetch file for it was created, which might be an indication that the malware deleted itself upon execution.
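For context on what such a parser is reading: every USN record carries a Reason bitmask, and events like file creation and deletion correspond to documented flag values. A minimal decoder sketch (flag values taken from Microsoft's USN_RECORD documentation):

```python
# A few Reason flag values from Microsoft's USN_RECORD documentation
USN_REASONS = {
    0x00000001: "DATA_OVERWRITE",
    0x00000002: "DATA_EXTEND",
    0x00000100: "FILE_CREATE",
    0x00000200: "FILE_DELETE",
    0x00001000: "RENAME_OLD_NAME",
    0x00002000: "RENAME_NEW_NAME",
    0x80000000: "CLOSE",
}

def decode_reason(mask):
    # Return the names of all reason bits set in the mask, lowest bit first
    return [name for bit, name in sorted(USN_REASONS.items()) if mask & bit]
```

So a record with Reason 0x80000100 decodes to FILE_CREATE plus CLOSE - a file created and then closed.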

It's also interesting to note that around the same time the prefetch file was created for voice.exe, a file called testmem.exe was created and executed (highlighted in yellow)..hmmmm.

Time to dig deeper. For a little more detail on the USN Journal, there is the TriForce tool. This tool processes three files: $MFT, $J and $Logfile. It then cross references these three files to build out some additional relationships. In my experience, this tool takes a bit longer to run. As you can see by the output below, I now have full file paths that may help add a little more context:

That testmem.exe just became all the more suspicious due to its location - a temp folder.

By reviewing the USN Journal file, I was able to establish a create date for the malware. This create date gave me an additional pivot point to work with, which led to some additional findings - a suspicious file, testmem.exe. (For more on timeline pivoting, check out Harlan's post here). Not only did the create date help locate additional artifacts, it can also help me home in on which systems may be patient zero. Malware may arrive on a system long before it's executed.
Just because it's not there doesn't mean it didn't exist. For the case I was working, I did not have any existing prefetch files for the malware. However, when I parsed the USN Journal, I saw the prefetch file for the malware get created and deleted within the span of 40 minutes. I also saw some additional temporary files get created and deleted, some of which were not in the MFT.

Alas, as sad as it is, my relationship with the USN Journal does have some shortcomings (and it's not my fault). Since it is a log file, it does "roll over". The USN Journal can be limited in the amount of data that it holds - sometimes it seems all I see in it are Windows Update files. If a system is heavily used, and the file was deleted months ago, it may no longer be in the USN Journal. All is not lost though; rumor has it that there may be some USN Journal files located in the Volume Shadow Copies, so there may still be hope. Also, David Cowen points out that the log file is not circular (as I once thought), and just frees the old pages to disk:

"After talking to Troy Larson though I now understand that this behavior is due to the fact that the journal is not circular but rather pages are allocated and deallocated as the journal grows"

This means that you can carve USN Journals records! Check out his blog post here for more information.

Happy Hunting!

Additional reading on the USN Journal

*The run count number was modified from 1 to 2 on this output to illustrate a point.

Tuesday, October 7, 2014

Timestomp MFT Shenanigans

I was working a case a while back and I came across some malware that had time stomping capabilities. There have been numerous posts written on how to use the MFT as a means to determine if time stomping has occurred, so I won't go into too much detail here.

Time Stomping

Time Stomping is an Anti-Forensics technique. Many times, knowing when malware arrived on a system is a question that needs to be answered. If the timestamps of the malware have been changed, i.e., time stomped, this can make it difficult to identify a suspicious file as well as answer the question, "When?"

Basically there are two "sets" of timestamps that are tracked in the MFT. These two "sets" are the $STANDARD_INFORMATION and $FILE_NAME attributes. Each tracks four timestamps - Modified, Accessed, Changed (MFT entry modified) and Born. Or if you prefer - Last Written, Accessed, Entry Modified and Created (To-mato, Ta-mato). The $STANDARD_INFORMATION timestamps are the ones normally viewed in Windows Explorer as well as most forensic tools.

Most time stomping tools only change the  $STANDARD_INFORMATION set. This means that by using tools that display both the $STANDARD_INFORMATION and $FILE_NAME attributes, you can compare the two sets to determine if a file may have been time stomped.  If the $STANDARD_INFORMATION predates the $FILE_NAME, it is a possible red flag (example to follow).
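Expressed as code, the comparison rule is simple. A sketch (keeping in mind this is a heuristic - legitimate operations can also produce mismatches):

```python
from datetime import datetime

def possible_timestomp(si_created, fn_created):
    # $FILE_NAME timestamps are maintained by the kernel and are rarely
    # touched by user-mode tools, so an $SI created date EARLIER than the
    # $FN created date is a red flag worth investigating - not proof.
    return si_created < fn_created

si = datetime(2010, 1, 1, 7, 8, 15)    # $STANDARD_INFORMATION born date
fn = datetime(2014, 10, 2, 5, 41, 56)  # $FILE_NAME born date
print(possible_timestomp(si, fn))      # True -> investigate further
```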

In my particular case, by reviewing the suspicious file's $STANDARD_INFORMATION and $FILE_NAME attributes, it was relatively easy to see that there was a mismatch, and thus, combined with other indicators, that time stomping had occurred. Below is an example of what a typical malware time stomped file looked like. As you can see, the $STANDARD_INFORMATION highlighted in red predates the $FILE_NAME dates (test data was used for demonstrative purposes)

System A \test\malware.exe

M: Fri Jan  1 07:08:15 2010 Z
A: Tue Oct  7 06:19:23 2014 Z
C: Tue Oct  7 06:19:23 2014 Z
B: Fri Jan  1 07:08:15 2010 Z

M: Thu Oct  2 05:41:56 2014 Z
A: Thu Oct  2 05:41:56 2014 Z
C: Thu Oct  2 05:41:56 2014 Z
B: Thu Oct  2 05:41:56 2014 Z

However, on a couple of systems there were a few outliers where the time stomped malware $STANDARD_INFORMATION and $FILE_NAME modified and born dates matched:

System B \test\malware.exe

M: Fri Jan  1 07:08:15 2010 Z
A: Tue Oct  7 06:19:23 2014 Z
C: Tue Oct  7 06:19:23 2014 Z
B: Fri Jan  1 07:08:15 2010 Z

M: Fri Jan  1 07:08:15 2010 Z
A: Thu Oct  2 05:41:56 2014 Z
C: Thu Oct  2 05:41:56 2014 Z
B: Fri Jan  1 07:08:15 2010 Z

Due to other indicators, it was pretty clear that these files were time stomped; however, I was curious to know what may have caused these dates to match, while all the others did not. In effect, it appeared that the Modified and Born dates were time stomped in both the $SI and $FN timestamps, however this was not the MO in all the other instances.

Luckily, I remembered a blog post written by Harlan Carvey where he ran various file system operations and reported the MFT and USN change journal output for these tests. I remembered that during one of his tests, some dates had been copied from the $STANDARD_INFORMATION into the $FILE_NAME attributes. A quick review of his blog post revealed that this had occurred during a rename operation. Below is a quote from Harlan's post:

"I honestly have no idea why the last accessed (A) and creation (B) dates from the $STANDARD_INFORMATION attribute would be copied into the corresponding time stamps of the $FILE_NAME attribute for a rename operation"

In my particular case it was not the accessed (A) and creation (B) dates that appeared to have been copied, but the modified (M) and creation (B) dates. Shoot.. not the same results as Harlan's test... but his system was Windows 7, and the system I was examining was Windows XP. Because my system was different, I decided to follow the procedure Harlan used and do some testing on a Windows XP system to see what happened when I did a file rename.


My test system was Windows XP Pro SP3 in a VirtualBox VM. I used FTK Imager to load up the vmdk file after each test and export out the MFT. I then parsed the MFT with Harlan Carvey's mft.exe.

First, I created "New Text Document.txt" under My Documents. As expected, all the timestamps in both the $STANDARD_INFORMATION and $FILE_NAME were the same:

12591      FILE Seq: 1    Link: 2    0x38 4     Flags: 1  
.\Documents and Settings\Mari\My Documents\New Text Document.txt
    M: Thu Oct  2 23:22:05 2014 Z
    A: Thu Oct  2 23:22:05 2014 Z
    C: Thu Oct  2 23:22:05 2014 Z
    B: Thu Oct  2 23:22:05 2014 Z
  FN: NEWTEX~1.TXT  Parent Ref: 10469  Parent Seq: 1
    M: Thu Oct  2 23:22:05 2014 Z
    A: Thu Oct  2 23:22:05 2014 Z
    C: Thu Oct  2 23:22:05 2014 Z
    B: Thu Oct  2 23:22:05 2014 Z
  FN: New Text Document.txt  Parent Ref: 10469  Parent Seq: 1
    M: Thu Oct  2 23:22:05 2014 Z
    A: Thu Oct  2 23:22:05 2014 Z
    C: Thu Oct  2 23:22:05 2014 Z
    B: Thu Oct  2 23:22:05 2014 Z

Next, I used the program SetMACE to change the $STANDARD_INFORMATION timestamps to "2010:07:29:03:30:45:789:1234" . As expected, the $STANDARD_INFORMATION changed, while the $FILE_NAME stayed the same. Once again,  this is common to see in files that have been time stomped:

12591      FILE Seq: 1    Link: 2    0x38 4     Flags: 1  
.\Documents and Settings\Mari\My Documents\New Text Document.txt
    M: Wed Jul 29 03:30:45 2010 Z
    A: Wed Jul 29 03:30:45 2010 Z
    C: Wed Jul 29 03:30:45 2010 Z
    B: Wed Jul 29 03:30:45 2010 Z

  FN: NEWTEX~1.TXT  Parent Ref: 10469  Parent Seq: 1
    M: Thu Oct  2 23:22:05 2014 Z
    A: Thu Oct  2 23:22:05 2014 Z
    C: Thu Oct  2 23:22:05 2014 Z
    B: Thu Oct  2 23:22:05 2014 Z
  FN: New Text Document.txt  Parent Ref: 10469  Parent Seq: 1
    M: Thu Oct  2 23:22:05 2014 Z
    A: Thu Oct  2 23:22:05 2014 Z
    C: Thu Oct  2 23:22:05 2014 Z
    B: Thu Oct  2 23:22:05 2014 Z

Next, I used the rename command via the command prompt to rename the file from "New Text Document.txt" to "Renamed Text Document.txt" (I know - creative naming). The interesting thing here is, unlike the Windows 7 test where two dates were copied over, all four dates were copied from the original file's $STANDARD_INFORMATION into the $FILE_NAME:

12591      FILE Seq: 1    Link: 2    0x38 6     Flags: 1  
.\Documents and Settings\Mari\My Documents\Renamed Text Document.txt
    M: Wed Jul 29 03:30:45 2010 Z
    A: Wed Jul 29 03:30:45 2010 Z
    C: Thu Oct  2 23:38:36 2014 Z
    B: Wed Jul 29 03:30:45 2010 Z
  FN: RENAME~1.TXT  Parent Ref: 10469  Parent Seq: 1
    M: Wed Jul 29 03:30:45 2010 Z
    A: Wed Jul 29 03:30:45 2010 Z
    C: Wed Jul 29 03:30:45 2010 Z
    B: Wed Jul 29 03:30:45 2010 Z

  FN: Renamed Text Document.txt  Parent Ref: 10469  Parent Seq: 1
    M: Wed Jul 29 03:30:45 2010 Z
    A: Wed Jul 29 03:30:45 2010 Z
    C: Wed Jul 29 03:30:45 2010 Z
    B: Wed Jul 29 03:30:45 2010 Z

Based upon my testing, a rename could have caused the 2010 dates to be the same in both the $SI and $FN attributes in my outliers. This scenario makes sense "in the wild"... the malware is dropped on the system, time stomped, then renamed to a file name that is less conspicuous. This sequence of events on a Windows XP system may make it difficult to use MFT analysis alone to identify time stomping.

So what if you run across a file where you suspect this may be the case? On Windows XP you could check the restore point change.log files. These files track changes such as file creations and renames. Once again, Mr. HC has a perl script that parses these change.log files. If you see a file creation and a rename, you can use the restore point as a guideline for when the file was created and renamed on the system.

You could also parse the USN change journal to see if and when the suspected file had been created and renamed. Tools such as TriForce or Harlan's tools do a great job.

If the change.log file and journal file do not go back far enough, checking the compile date of the suspicious file with a program like CFF Explorer may also help shed some light. If a program has a compile date years after its born date... red flag.
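If you'd rather script that check than open a GUI, the compile timestamp is easy to read straight out of the PE header. A minimal sketch (assumes a well-formed PE; real-world samples can need more validation):

```python
import struct
from datetime import datetime, timezone

def pe_compile_time(data):
    # e_lfanew, the offset of the PE header, lives at 0x3C in the DOS header
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    # TimeDateStamp sits 8 bytes past the PE signature (after Machine and
    # NumberOfSections in the COFF header); it's a 32-bit Unix timestamp
    ts = struct.unpack_from("<I", data, pe_offset + 8)[0]
    return datetime.fromtimestamp(ts, tz=timezone.utc)
```

Compare the returned date against the $SI/$FN born dates: a compile date later than the born date is the red flag described above.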

I don't think anything I've found is ground breaking or new. In fact, the Forensics Wiki entry on timestomp demonstrates this behavior with time stomping and moved files, but I thought I would share anyway.

Happy hunting, and may the odds be ever in your favor...

Tuesday, September 2, 2014

SQLite Deleted Data Parser - GUI Added

Last year I wrote a Python script to parse deleted data from SQLite Databases (original post here).
Every once in a while, I get emails asking for help on how to use the SQLite Parser from users who are not that familiar with using Python or command line tools in general.

As an everyday user of command line tools and Python, I forget the little things that may challenge these users (we were all there at one point in time!). This includes things like quotes around file paths, which direction the slashes go, and how to execute a python script if Python is not in your environment variable.

So, to that end, I have created a Windows GUI for the SQLite Parser to make the process a little less painful.

The GUI is pretty self explanatory:
  • Choose the path to the SQLite database
  • Choose the file to save the results to
  • Select Formatted or Raw output

This means there are now three flavors of the SQLParser available:
  • the original Python script
  • sqlparse_CLI.exe - Windows command line tool
  • sqlparse_GUI.exe - Windows GUI tool
All three files are available for download on my GitHub page.

Coming soon... a blog post/tutorial on how to use python scripts :-)

Monday, July 21, 2014

Safari and iPhone Internet History Parser

Back in June, I had the opportunity to speak at the SANS DFIR Summit.  One of the great things about this conference was the ability to meet and socialize with all the attendees and presenters. While I was there, I had a chance to catch up with Sarah Edwards who teaches the Mac 518 class for SANS.

I'm always looking for new projects to work on, and she suggested a script to parse Safari Internet History. So the 4th of July long weekend rolled around and I had some spare time to devote to a project. In between the fireworks and a couple of Netflix shows (OK, maybe 10 shows), I put together a python script that parses out several plist files related to Safari Internet History: History.plist, Bookmarks.plist, TopSites.plist and Downloads.plist.

Since the iPhone also uses Safari, I decided to expand the script to parse some iPhone Safari artifacts: History.plist, Bookmarks.db and RecentSearches.plist. I imagine the iPad also contains Safari Internet History, but I did not have one at my disposal to test. If you want to send one to me for testing, I would be happy to take it off your hands :-).

In this post I'll run through each of the artifacts I located and explain how to use the script to parse out the files.

Plist Files: A love/hate relationship

First, a little background on plist files. Plist files are awesome because they can contain all sorts of information such as Internet History, Recent Docs, Network IDs, etc. There are free tools for both Windows and OS X that will allow you to view the data stored in a plist file. For Windows, you can use plist Editor. If you have a Mac, a free plist editor is included in Apple's XCode Developer Tools, which can be downloaded through the App Store.

However, plist files also stink because while the plist format is standardized, it's entirely up to the programmer to store whatever they want, in whatever format they want.

A (frustrating) example of this is date information. In the Safari History.plist file the date is defined as a "String", and is stored in Mac Absolute time. Mac Absolute time is the number of seconds since January 1, 2001. Below is an example of this from a Safari History.plist file viewed in the XCode plist editor:

History.plist file in XCode plist editor
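Converting that value is a one-liner once you know the epoch. A sketch (assuming the stored value is UTC):

```python
from datetime import datetime, timedelta, timezone

# Mac Absolute time counts seconds from 2001-01-01 00:00:00 UTC
MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def mac_absolute_to_datetime(seconds):
    # History.plist stores the value as a string, so accept str or float
    return MAC_EPOCH + timedelta(seconds=float(seconds))
```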
In the Safari Bookmarks.plist file, the date is stored in a field defined as "Date". The date is stored in a more standard format:

Bookmarks.plist file in XCode plist editor
This means that each plist file needs to be reviewed manually to determine what format the data is in, and how it's stored, before it can be parsed.
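For that manual review, Python's plistlib gives a quick first look before any parsing code gets written. A small sketch that reports each top-level key and the Python type of its value (the key names you'll see vary per file):

```python
import plistlib

def describe_plist(path):
    # plistlib.load handles both binary and XML plists transparently
    with open(path, "rb") as f:
        data = plistlib.load(f)
    return {key: type(value).__name__ for key, value in data.items()}
```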

So, moving on to the artifacts...

Where's the beef?

On Mac OS X, the Safari Internet History is located under the folder /Users/%USERNAME%/Library/Safari. As I mentioned before, I located four plist files in this folder containing Internet History: History.plist, Bookmarks.plist, TopSites.plist and Downloads.plist. I've written the script to read either an individual file, or the entire folder at once.

(If you're wondering about the Safari cookie files, I already wrote a separate tool to support these, which can be found on my downloads page.)

History.plist
This file contains the last visited date, URL, page title and visit count. To run the parser over this file and get a tsv file, use the following syntax: --history -f history.plist -o history-results.tsv

The Top Sites feature of Safari identifies 12 Top Sites based upon how often and how recently the sites were visited. There are several ways to view the Top Sites in Safari, such as starting a new tab or selecting Menu>View>Top Sites. Small thumbnails of each Top Site are displayed. The user has the option to Pin or Delete a site from the Top Sites: pinning a site keeps it in the list, while deleting removes it. The list can be increased to hold up to 24 sites.

The thumbnails for the webpage previews for Safari can be found under /Users/%Username%/Library/Caches/ Below is how the Top Sites appear to a user (this may vary depending on the browser version):

The TopSites.plist file contains the Page Title and URL. It also stores values to indicate whether a site is Pinned or Built-in. Built-in sites are pre-populated sites such as iCloud or the Apple website.

TopSites that have been deleted are tracked in the TopSites.plist as "BannedURLStrings".

To parse the TopSites.plist file, use the following syntax: --topsites -f TopSites.plist -o topsite-results.tsv
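The extraction itself is straightforward once the plist is loaded. The key names below ("TopSites", "TopSiteTitle", "TopSiteURLString", "BannedURLStrings") are based on my test data and may differ between Safari versions; the sample dictionary is made up for illustration:

```python
# Sketch of pulling site entries and banned (deleted) URLs
# from an already-loaded TopSites.plist dictionary.
def extract_top_sites(plist):
    sites = [(s.get("TopSiteTitle"), s.get("TopSiteURLString"))
             for s in plist.get("TopSites", [])]
    banned = plist.get("BannedURLStrings", [])  # deleted Top Sites
    return sites, banned

sample = {
    "TopSites": [{"TopSiteTitle": "Apple",
                  "TopSiteURLString": "http://www.apple.com/"}],
    "BannedURLStrings": ["http://example.com/"],
}
sites, banned = extract_top_sites(sample)
print(sites, banned)
```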

Downloads are stored in the Downloads.plist file. When a file is downloaded, an entry is made containing the following: 1) Download URL; 2) File name, including the path where it was downloaded to; 3) Size of the file; 4) Number of bytes downloaded so far. The user may clear this list at any time by selecting "Clear" from the Downloads dialog box:

To parse the Downloads.plist file, use the following syntax: --downloads -f Downloads.plist -o download-results.tsv
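One nice side effect of having both the total size and the bytes-downloaded-so-far is that interrupted downloads stand out. A sketch of flattening the download entries; the key names are assumptions based on my test data and may vary between versions, and the sample dictionary is made up:

```python
# Sketch of flattening an already-loaded Downloads.plist dictionary.
def extract_downloads(plist):
    rows = []
    for e in plist.get("DownloadHistory", []):
        total = e.get("DownloadEntryProgressTotalToLoad", 0)
        so_far = e.get("DownloadEntryProgressBytesSoFar", 0)
        rows.append({
            "url": e.get("DownloadEntryURL"),
            "path": e.get("DownloadEntryPath"),
            "total_bytes": total,
            "bytes_so_far": so_far,
            # comparing the two counters flags interrupted downloads
            "complete": so_far >= total,
        })
    return rows

sample = {"DownloadHistory": [{
    "DownloadEntryURL": "http://example.com/file.zip",
    "DownloadEntryPath": "~/Downloads/file.zip",
    "DownloadEntryProgressTotalToLoad": 1000,
    "DownloadEntryProgressBytesSoFar": 1000}]}
print(extract_downloads(sample))
```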

Safari tracks three different types of bookmarks in the Bookmarks.plist file: Favorites, Bookmarks and the Reading List.

The Bookmarks Bar (aka Favorites) is located at the top of the browser:

The Favorites are also displayed on the side bar:

Bookmark Menu
A folder titled "Bookmark Menu" is created by default when a user creates bookmarks. It contains a hierarchical structure of bookmarks and folders - these are shown in the red box below:

The user may add folders, as demonstrated with the "test bookmarks" folder below:

Reading List
The Reading List is another type of bookmark. According to Safari documentation, "Reading List helps you save webpages and links for you to read later, even when you are not connected to the internet". These items show up when the user selects the Reading List icon:

Safari downloads and stores information such as cached pages related to the Reading List under /Users/%USERNAME%/Library/Safari/ReadingListArchives. I didn't spend too much time researching this as my parser is focused on the Bookmarks.plist file, but keep it in mind as it may turn up some interesting stuff.

All three types of bookmarks (Favorites, Bookmarks and Reading Lists) are stored in the Bookmarks.plist file.

The Bookmarks.plist file tracks the Page Title and URL for the Favorites and the Bookmarks; the Reading List entries, however, contain a little more information. Each Reading List entry also contains a date added, date last fetched, fetch result, and preview text. There are also a couple of boolean entries, Added Locally and Archived on Disk.

Out of all the plist files mentioned so far, I think this one looks the most confusing in the plist editor programs.  The parent/child relationships of the folders and sub folders can get pretty messy:

To parse the Bookmarks.plist file, use the following syntax: --bookmarks -f Bookmarks.plist -o bookmark-results.tsv

The Safari Parser will output this into a spreadsheet with the folder structure rebuilt, which is hopefully more intuitive than viewing it in the plist editor:
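Rebuilding that folder structure boils down to a recursive walk: folder nodes hold a Children array, and the actual bookmarks are leaf nodes. The key names below ("WebBookmarkType", "Children", "URIDictionary", "URLString", "Title") are based on my test data and may vary between Safari versions; the sample tree is made up for illustration:

```python
# Sketch of recursively flattening a loaded Bookmarks.plist tree.
def walk_bookmarks(node, folders=()):
    if node.get("WebBookmarkType") == "WebBookmarkTypeLeaf":
        # A leaf is an actual bookmark: record its folder path, title, URL
        title = node.get("URIDictionary", {}).get("title")
        return [("/".join(folders), title, node.get("URLString"))]
    # Otherwise it's a folder (WebBookmarkTypeList): recurse into children
    name = node.get("Title")
    child_folders = folders + (name,) if name else folders
    results = []
    for child in node.get("Children", []):
        results.extend(walk_bookmarks(child, child_folders))
    return results

sample = {"WebBookmarkType": "WebBookmarkTypeList", "Children": [
    {"WebBookmarkType": "WebBookmarkTypeList", "Title": "BookmarksBar",
     "Children": [
         {"WebBookmarkType": "WebBookmarkTypeLeaf",
          "URLString": "http://example.com",
          "URIDictionary": {"title": "Example"}}]}]}
print(walk_bookmarks(sample))
```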

All Four One and One for All
Instead of parsing each file individually, all four files can be parsed by pointing Safari Parser to a folder containing all four files.  This means you can export out the /Users/%Username%/Library/Safari folder and point the script at it. You could also mount the image and point it to the mounted folder. To parse the folder, use the following syntax: -d /Users/maridegrazia/Library/Safari -o /Cases/InternetHistory/Reports

This will create four tsv files with results from each of the above Internet History Files.

iPhone Internet History

Safari is also installed on the iPhone, so I figured while I was at it I might as well expand the script to handle the iPhone Internet History files. I had some test data lying around, and I was able to locate three files of interest: History.plist, Bookmarks.db and RecentSearches.plist.

While my test data came from an iPhone extraction, these types of files are also located in an iTunes backup on a computer. This means that even if you don't have access to the phone, you could still get the Internet History. Check in the user's folder under \AppData\Roaming\Apple Computer\MobileSync\Backup, then use a tool like iphonebackupbrowser to browse the backups and export out the files:

The location of the History.plist file may vary depending on the model of the iPhone. Check \private\var\mobile\Library\Safari or \data\mobile\Library\Safari for this file.

Luckily, the History.plist file has the same format as the OS X version, so using the script to parse the iPhone History.plist file works the same: --history -f history.plist -o history-results.tsv

Like the History.plist file, the location of the Bookmarks.db file may vary depending on the model of the iPhone; check \private\var\mobile\Library\Safari or \data\mobile\Library\Safari. On an iPhone, this file is stored as a SQLite database rather than the plist format used on OS X. In the test data I had, I did not see any entries for the Reading List. To parse the iPhone Bookmarks.db file, use the following syntax: --iPhonebookmarks -f bookmarks.db -o bookmark-results.tsv
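Since it's SQLite, Python's built-in sqlite3 module can query it directly. The table and column names below (a "bookmarks" table with "title" and "url" columns) are assumptions based on my test data and may differ between iOS versions; for illustration the sample database is built in memory, but in a real case you would point sqlite3.connect() at the exported bookmarks.db:

```python
import sqlite3

# Build a stand-in database in memory (hypothetical schema);
# replace ":memory:" with the path to an exported bookmarks.db.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, title TEXT, url TEXT)")
conn.execute("INSERT INTO bookmarks (title, url) VALUES (?, ?)",
             ("Example", "http://example.com"))

# Pull the bookmark titles and URLs
rows = conn.execute("SELECT title, url FROM bookmarks WHERE url IS NOT NULL").fetchall()
for title, url in rows:
    print(title, url)
```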

Recent Searches
I located a RecentSearches.plist file under the cache folder. The location of this file may vary depending on the model of the iPhone; check \private\var\mobile\Library\Caches\Safari or \data\mobile\Library\Caches\Safari. This file contained a list of recent searches, about 20 or so. Use the following syntax to parse this file: --iPhonerecentsearches -f recentsearches.plist -o recentsearches-results.tsv
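This one is about as simple as plist parsing gets. In my test data the searches sat in a plain list under a "RecentSearches" key; that key name is an assumption and may differ between iOS versions, and the sample dictionary is made up:

```python
# Sketch of reading an already-loaded RecentSearches.plist dictionary.
def extract_recent_searches(plist):
    # "RecentSearches" key observed in test data; may vary by iOS version
    return list(plist.get("RecentSearches", []))

sample = {"RecentSearches": ["forensics", "sqlite parser"]}
print(extract_recent_searches(sample))
```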

You can also point the script to a directory with all three files and parse them at once: -d /Users/maridegrazia/iPhoneFiles -o /Cases/InternetHistory/Reports

The Script

The Safari Parser can be downloaded here. It requires the biplist library, which is super easy to install (directions below). However, I've also included a compiled .exe file for Windows if you don't want to hassle with installing the library. A thank you to Harlan Carvey for suggesting PyInstaller to compile Windows binaries for Python - it worked like a charm.

To install biplist in Linux just type the following:

sudo easy_install biplist

For Windows, if you don't already have it installed, you'll need to grab the easy_install utility, which is included in the Python setuptools package. The setup tools will place easy_install.exe into your Python directory in the Scripts folder. Change into this directory and run:

easy_install.exe biplist

Remember to look at the plist files manually to verify your results. I don't have access to every past or future version of Safari or iOS. As always, just shoot me an email or tweet if you need some modifications made.

References and Tools

Safari Parser (my script to parse the Safari Internet History)
Safari 5.1 (OS X Lion): View and customize Top Sites
Plist Editor (free plist editor for Windows)
XCode (includes free Plist Editor for OS X)
iphonebackupbrowser ( free iTunes backup browser)