
Week 3 Write Up OSU CS 373

Maxwell Evdemon

 

All information learned comes from the Malware Defense lecture by Craig Schmugar of McAfee. These lectures are provided by the OSU CS 373 Defense Against the Dark Arts class.

 

 

Malware Defenses Lesson 1 – Attack Vectors, Malware Defense, Yara [1]

 

I learned about the attack chain and its four key tenets. The first contact happens after the malware is created and prepared for release into the world. At first contact, malware can be released or transferred through email, instant messaging, malicious advertising, poisoned search results, watering holes, and physical access. I learned that poisoned search results are when malware creators build websites mimicking popular trends, leading the user to download malware instead. A watering hole is the idea of planting malware in a place where many people go, a popular spot, to spread the infection as widely as possible. Physical access is the idea of distributing the malware through some physical means, such as USB drives.

The next tenet is the execution of the code. This step of the attack can be done through autorun, which runs code automatically on startup and is useful for immediately executing code from things such as USB drives or programs. It can also be done through exploitation, or through deception: a user thinks they are getting one thing, yet they get the malware. Another situation I learned about, likely the simplest, is the user running some executable that does not provide the promised service but rather the malware. I came to the conclusion that something promising one thing yet delivering malware is probably the most common case.

The third step is establishing a presence within the system. The most common thing malware will do is try to blend in with other programs and hide, whether in the hosts directory or somewhere else on the computer. Something I learned is that malware can actually change date and time stamps, so while those are useful for finding malware, they are not foolproof. Sometimes the malware will carry some sort of signature to try to pass itself off as an official driver or piece of software. I also learned that malware can hide with bootkits and rootkits: bootkits modify the master boot record, while rootkits attempt to hide using the operating system's own mechanisms. A big part of this step is the idea of persistence; these malware programs want to survive a shutdown as well. This can be done through registry keys, autorun, and other forms of startup programs.

The goal of most malware is to steal information, whether by parsing passwords, looking through logs, or stealing documents and emails. This stealing of or searching for information is the fourth step of the attack chain, the malicious activity.

The overall goal is to stop these attacks. For the first contact step, we can stop spam through anti-spam tools, and a firewall or network IPS can help prevent users from reaching websites that may host malware. I learned about epoxy (physically sealing ports) and how it can help guard against physical-media first contact, along with disk encryption. One of the best ways to prevent first contact is to use multiple layers of defense.

For blocking and preventing exploits during local execution, one can use client-side filtering to stop any form of spam, and network IPS and content filtering to prevent web or network issues. Forms of secondary authentication are also noted as a useful way to prevent local execution breaches. For host protection, one can use host IPS, antivirus, and whitelisting to help prevent the local execution step.

Moving on to establish-presence prevention, protecting the host and network are the key points, as I assume that once the malware has access to the host it can change whatever it wants. Since the malware is establishing a presence, it will also be going through the network to download any tools it needs to steal information, which is why network monitoring is important. Since the malware wants to survive for as long as possible, it will also tend to go after the antimalware software itself to prevent it from working properly.

The last step/tenet is the malicious activity. I learned that data loss prevention is used to see what data is going out and coming in, which is useful for monitoring information, as it can usually catch the malware at work. Tools like botnet prevention and anti-keylogging software can also be used to keep information from being stolen. Since at this step the malware is installed and possibly hidden, most of the prevention is about stopping the loss of data and monitoring what is being sent, as the malware is already installed and trying to take that information. Below is a picture representing these four tenets, taken from the Malware Defense PowerPoint lecture.

[1]

Another popular way to look at malware defense is to picture the user at the center of it all, with each layer providing some form of protection for that user. Going from the outside in, the outermost layer has protection such as firewalls or network intrusion prevention tools, a step in is message/web reputation, and the innermost layers contain the host firewalls and host IPS. This goes back to the point that having multiple layers of protection is one of the safer ways to defend against malware attacks. One of the most important things a user can do to protect their data from malware is to use a backup system and back up all of their data. Another issue I learned about is how most people, companies, and tools have many ways to protect their data but sometimes don't use them for one reason or another. I also learned, when working with a layered malware defense system, how important each layer is and how they depend on one another, as each has its own form of data collection that can be used to understand a threat better. Another important factor is that the malware threat is constantly evolving, which can be an issue for some anti-malware products.

I learned about the hierarchy of a management server, or ePO, where nodes can be managed; that server connects to a point product, which in turn connects to the scanner core. Each scanner core has something it is scanning for, and all are generally connected to some sort of host. The scanners connected to these hosts control when to scan and what to scan for, and they interface with the engine, which consumes the content. Then I learned about the basic features of anti-malware products, which typically include scripts and heuristics. I learned there are different types of scanning, in particular registry/cookie scanning, traditional file scanning, cloud-based scanning, and system memory-based scanning. I learned that the true threat of a cookie, malware-wise, is that it can be used to track a user and what sites they access. When anti-malware talks about decomposition, it typically refers to unpacking zipped files. Configuration is how the anti-malware software is set up to search the computer; this could involve skipping whitelisted files, or what sensitivity the program uses when scanning cloud memory, but the configuration is generally up to the companies. I would assume that whitelisted files and lower-sensitivity scans might miss some actual threats in certain situations, while increasing the sensitivity may flag things that are not actually threats.

I also learned more about Yara. Yara is an open language used for pattern matching when scanning within files or memory. Scans have rules associated with them, which determine how Yara looks for possible threats. Yara has keywords used for formatting: strings are typically written within quotes and can carry modifiers such as nocase, which makes the match case insensitive, or wide, which matches the two-byte (UTF-16) form of a string. Byte patterns can also be searched for in the form of hexadecimal strings, with the language accepting ? or ?? as a wildcard when the hexadecimal string is not entirely known. I learned that a run of wildcards in Yara is known as a jump and is written like [3-4], meaning 3 to 4 bytes are treated as wildcards.
Yara also has conditions, such as boolean operations with and, or, and not, plus other relational and arithmetic operators. Below is a sample rule from the Malware Defense PowerPoint showing how Yara works.

 

[1]
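The slide itself does not reproduce here, but as a stand-in, here is a minimal sketch of a rule in the same spirit, compiled through the yara-python bindings (an assumption on my part; the lecture used the Yara editor and yara32 instead). Every string and byte pattern below is invented for illustration, not taken from the slide.

```python
import yara

# A minimal illustrative rule: a case-insensitive text string, a wide
# (UTF-16) string, and a hex pattern with ?? wildcards and a [3-4] jump,
# combined with a boolean condition. All values here are placeholders.
RULE_SOURCE = r'''
rule example_malware
{
    strings:
        $a = "evil_marker" nocase        // matches regardless of case
        $b = "installer" wide            // matches the two-byte (UTF-16) form
        $c = { 6A 40 ?? 00 [3-4] FF E0 } // ?? wildcard byte, [3-4] jump
    condition:
        ($a and $b) or $c
}
'''

rules = yara.compile(source=RULE_SOURCE)
for match in rules.match("sample.bin"):  # scan a file on disk
    print(match.rule, match.strings)
```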

 

Getting into the Yara labs, I learned how to use the Yara editor tool and how to write Yara rules. The Yara editor comes equipped with a rules browser that lets the user look through any written rules. It also has a malware browser, which means you have to know the malware beforehand. You can inspect files directly in the Yara editor tool, which can also help with automatic rule generation; however, I learned that these generated rules are generally not well made. You can execute all the rules against all the malware samples at one time. When writing a rule, look for some strange commonality between all the malware files and then use that as the basis for the rule. Also use other tools like FileInsight for string extraction, which can help find these commonalities between the malware samples. It is important to keep track of your string types, especially when using arithmetic, as you may run into issues. Always try to reduce the number of strings and conditions within these rules, as it will keep things cleaner and take up less space.

The command-line tool yara32 can be used to scan the Windows directory to check whether the rule you wrote is too generic; if it is, it may flag Windows files, which is a problem when they are clearly not a threat. FileInsight should be the starting point for looking through the samples. "Strings All" in FileInsight will extract every string and show them in the order they appear in the binary, while "Strings" will only show strings greater than ten characters and sort them. Scanning the System32 directory can be used to find the files that are "clean," as in found in Windows normally, which helps in writing rules based on the malware strings that are clearly malicious. When scanning System32 the tool will not tell you anything if it detects no matches, but it will report the file name if it does detect a match.

Finding some way to minimize the Yara signature so that it stays specific is one of the main goals when writing good Yara rules. One thing that can be done with Yara is changing string rules to be substrings rather than complete matches, which can be important in situations where files are compressed. The challenge when using Yara is trying to think about how malware can change from one sample to another, as there will be things that typically change and things that remain the same. I learned that thinking of malware in this way helps in writing better rules that can detect strings in more flexible or changing malware. Using -s on the command line will print the matching strings rather than just the rule names. Yara requires any encoded parts to be converted to hexadecimal before jumps are available for use.
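As a rough equivalent of the lab's yara32 scan of the Windows directory, here is a sketch using yara-python (again an assumption; the lab used the command-line tool). The rules file name is a placeholder, and printing match.strings is roughly what the -s switch shows.

```python
import os
import yara

# Hypothetical rules file; the name is a placeholder.
rules = yara.compile(filepath="myrules.yar")

# Walk a directory the way the lab scans System32 with yara32, reporting
# each file that matches and which strings matched (roughly the -s output).
for root, _dirs, files in os.walk(r"C:\Windows\System32"):
    for name in files:
        path = os.path.join(root, name)
        try:
            for match in rules.match(path):
                print(path, match.rule, match.strings)
        except yara.Error:
            # unreadable or locked files are common in system directories
            pass
```

If a rule written for the malware samples starts matching files under System32, that is the sign it is too generic and needs tightening.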

 

Malware Defenses Lesson 2 – Cuckoo tool, Automated Analysis [1]

 

I learned that typically over half a million new malware binaries are found every day; because of that huge daily influx, automated detection becomes necessary as the malware constantly changes. This is why I assume Yara can be so useful, since it can find similarities between new malware binaries and help figure out what the malware does faster, especially with automated testing. I learned about signing keys and how they can be used to give a piece of code authenticity, but also how signed code can be turned to the benefit of malware as well. I learned that 64-bit Windows won't actually load a driver if it is not signed in some way; this can be bypassed if the system is in test mode, however. I learned that the process of signature creation and that of malware analysis are generally connected. Automated analysis is crucial, and I learned that automated signatures account for 99% of the content. Automation is a good way to get signatures that cover a broad amount of malware, but I assume they lack some details, and handwritten signatures still tend to be the best for coverage. I learned that detections per byte is a common tracking metric when looking into signatures. Since malware is constantly changing, machine learning has become common, along with signatures being less focused on files.

The advantages of automated detection include covering a larger amount of malware code than humans can, consistency (they only do what they are programmed to do), and performance, since unlike humans they can constantly be running and checking malware. However, that last advantage can also be a disadvantage if the automated process is not able to understand the malware code or lacks the complexity to decide whether something is an actual threat, as it can only do what it is programmed to do. This means automated testing can miss malware; I learned that most malware can actually be virtual-machine aware. I also learned that these automated testing systems are potentially in danger of denial-of-service attacks, which can be used to degrade and bypass some of that testing. One of the biggest issues that comes with automated detection is differentiating between actual threats and code that acts similar but is not malware, as one of the major goals of anti-malware work is to not affect any non-malware code.

I learned about Cuckoo, an automated malware analysis tool that is used to understand what malware does in an isolated environment. The Cuckoo host is typically responsible for managing the analysis and generating reports. As for the results Cuckoo produces, they include traces of the Win32 API calls made by every process spawned by the malware. It collects all files created, deleted, or downloaded by the malware process during its runtime. Cuckoo will take memory dumps of the malware process and save them so they can be analyzed further later. The program records a network trace and stores it in PCAP format, along with screenshots of the desktop taken during the runtime of the malware. Cuckoo will lastly take a full memory dump from the machine as well. I learned that Python should be installed for Cuckoo to run properly, and that the virtual machine should be hardened as well. Another way Cuckoo results can be used is with static analysis of strings. In the behavioral analysis section of the Cuckoo output, one can see timestamps with process IDs and other information on registry keys and files that have been modified. Using all these behavior events, one should be able to decide how much of a threat a piece of malware is, along with what it actually does. An example of Cuckoo output, taken from the Malware Defense PowerPoint, is shown below.

 

[1]
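The lectures drove Cuckoo directly, but for reference, a submission can also be scripted against Cuckoo's REST API service. This is a sketch assuming that service is running on its default port; the sample path is a placeholder.

```python
import requests

CUCKOO_API = "http://localhost:8090"  # assumes Cuckoo's API service, default port

# Submit a sample for analysis; the file name is a placeholder.
with open("suspect.exe", "rb") as sample:
    resp = requests.post(f"{CUCKOO_API}/tasks/create/file",
                         files={"file": ("suspect.exe", sample)})
task_id = resp.json()["task_id"]

# Later, once the analysis has finished, fetch the JSON report, which holds
# the API-call traces, dropped files, and network capture info described above.
report = requests.get(f"{CUCKOO_API}/tasks/report/{task_id}").json()
print(report.get("info", {}))
```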

 

When looking at malware, I learned that for Windows executables one typically starts with a Windows XP machine. I learned Cuckoo can handle many different file formats, such as executables, PDFs, Microsoft Office documents, URLs, HTML files, scripts like PHP or VB, ZIP files, Java JAR files, and many other formats. However, as mentioned above about executables not running on some operating systems, the VM still needs to be able to run the malware for Cuckoo to handle it properly. Getting into the lab and the Cuckoo demonstration, I learned how to use Cuckoo in a hands-on scenario. Looking at the Cuckoo logs, they should show all the events that happened in order, but they can be sorted in other ways as well. Something I personally noticed was that the logs also show whether each call was a success or failure. In these logs you can see what the malware is specifically doing along with its target files, whether that is looking for some file, opening and modifying files, or creating processes. I learned that every time a create-process API call is made, Cuckoo intercepts it and creates the process itself before letting it run; this is done so Cuckoo always knows the process IDs and can keep track of them. I learned the goals of malware analysis are to discover whether there are any threats on a machine, isolate that particular threat, classify what the threat can do and then remediate it, set up a way to defend against such threats in the future, and finally describe that threat. The last part I assume is important because documenting these threats allows users to understand and be prepared for them in the future.
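To make the log structure concrete, here is a sketch that walks the behavioral section of a Cuckoo report.json and prints each process with its API calls and their success/failure flags. The field names (behavior, processes, pid, calls, api, status) are assumptions based on the Cuckoo 2.x report layout and may differ in other versions.

```python
import json

# Walk the behavioral section of a Cuckoo report and print each process
# with its API calls and whether they succeeded -- the success/failure
# flags noted above.
with open("report.json") as f:
    report = json.load(f)

for proc in report.get("behavior", {}).get("processes", []):
    print(f'PID {proc["pid"]}: {proc["process_name"]}')
    for call in proc.get("calls", []):
        outcome = "success" if call.get("status") else "failure"
        print(f'  {call["api"]} -> {outcome}')
```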

 

Lab – Determine Malware

Starting into the blog post for the Malware Defense lab. When looking at the unknown piece of code with hash 068D5B62254DC582F3697847C16710B7, I found it trying to get into the registry keys and even trying to reach an outside source through the FakeNet program (though I realize that might just be the Python for Cuckoo), and the Cuckoo logs showed it had created two processes on the machine. I also found that the program was trying to do something with prints.exe, and that it was working with a file called EXA, looking for it and creating it if not found. It also seems to try to share access among some admin files on the system. I saw that it did something to the local AppData for the admin user; upon going there I found prints.exe, which is most likely trying to masquerade as print.exe, another Windows file. That executable was also run as part of the program's execution. I found ntshruis2.dll in that AppData folder as well. When looking at the logs for prints.exe, I could see things I recognize as red flags: it is still editing that EXA file and Borland/Delphi/locales, it wants so much access to the registry keys, and it creates and deletes a file called deleteme.bat in the temp folder. All of this together shows me it is creating and running executables on the computer while attempting to disguise itself as something else, creating files, and trying to modify large chunks of the registry. This is clearly malware.

When looking at the next potential malware, hash 00670F2B9631D0F97C7CFC6C764DD9D9, I noticed that it created a few other processes, four in total, and just like the previous one it was looking around at registry keys; oddly enough, it also looked for the analyzer through a query. Something else I noticed is that I was getting an error as if the analyzer was still writing to one of the log files, and I was not able to remove that file. This was a red flag, along with the fact that the bad file was not removed from the desktop after the analysis process, as if it was interrupted or stopped for some reason. Then I noticed that it had added an Internet Explorer shortcut to the desktop which targets "http://www.3392.cn?99_20190715", clearly not the typical Internet Explorer target. I also found it trying to run something called qusula.exe in the logs. Through some research into what this means, I believe this to be some sort of redirecting adware, so I can confirm that this is in fact a virus. A quick reload of a snapshot also shows that that redirect is not normal.

Looking at the next potential malware, A1874F714F7A15399B9FAE968180B303, the first thing I immediately noticed is that it is extremely similar to the first code sample, as it also deals with the EXA file and the file A. In fact, it also seems to use prints.exe, with cmd running the same number of processes as the first file. I can't seem to find many differences at all. I went to the local admin temp data and found all of the same things as with the first file. This leads me to believe that these files might actually be the same virus, just modified in a way to make them seem different. The fact that this one also pokes around the registry and tries to get access is suspicious as well. Since it was so similar to the first one, I looked into it more and found that the legitimate file is ntshrui.dll, which leads me to believe that, just as print.exe is being impersonated by prints.exe, ntshrui.dll is being impersonated by ntshruis2.dll, both with the intent of literally hiding in plain sight. The fact that these files are in temp is also suspicious, as these files should typically be in the System32 folder. I still think this may be the same as the first piece of malware, but I confirmed that print.exe and ntshrui.dll are still fine and present; that leads me to believe they will no longer be used on the computer, and that it will now point to those imposter files in temp instead. This imposter-type behavior, the creation of files, and the constant attempts to access the registry lead me to believe that this is definitely malware as well.

I can confirm the next piece of potential malware, hash 4844FD851088A11E240CFE5B54096209, is not malware: from the Cuckoo output file it is in fact LADS, a tool that shows the alternate data streams (ADS) of files. I was able to find this out by looking online for the text found within the logs themselves, which led me to LADS, and it all matched up. The fact that it does not try to create any child processes is a good sign as well, along with it not trying to replace any current system files.

 

 

Blog Article for Lab –

 

Maxwell Evdemon 7/14/19

The following piece of code, with the hash 00670F2B9631D0F97C7CFC6C764DD9D9, is a piece of malware that modifies the computer. Specifically, it modifies the Internet Explorer shortcut's redirection, so it does not in fact take you to the Internet Explorer home page but rather to the following URL: "http://www.3392.cn?99_20190715". I believe that this may be similar to other adware redirects, and it may constantly redirect you to this page. As for checking whether you are afflicted by this virus, the simplest way would be to open the properties of the Internet Explorer shortcut. The following picture shows what the redirect will look like.

 

 

Notice its target is clearly not the normal place Windows will go. Another thing to keep in mind is that the program will also make sure that a new Internet Explorer shortcut appears on the desktop. Next, the picture below shows what the target should look like.

 

Notice that the Comment field is blank and the "Start in" field is different in the infected Internet Explorer shortcut when compared to the original. While this is the simplest way to see whether the adware has run on the computer at least once, the malware in general also modifies files in ways that are much harder to see, such as creating and deleting files. For this I have created a Yara signature that will scan your computer for a detection, if you wish to make sure this adware has not infected your computer. This Yara signature is shown below, and it should not get any detections unless the virus is on your computer.
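The screenshot of the signature itself does not reproduce here. As a stand-in, here is a hypothetical rule of the same general shape, keyed on the two indicators noted during analysis (the redirect URL and qusula.exe); it is an illustrative sketch, not the actual rule from the lab, shown compiled through yara-python.

```python
import yara

# Hypothetical stand-in for the lab's signature: keyed on strings observed
# during analysis (the redirect URL and the dropped executable name).
# The real rule may have used different or additional strings.
SIGNATURE = r'''
rule ie_redirect_adware
{
    strings:
        $url = "http://www.3392.cn" nocase
        $exe = "qusula.exe" nocase
    condition:
        any of them
}
'''

matches = yara.compile(source=SIGNATURE).match("suspect_sample.bin")
print("infected" if matches else "no detection")
```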

 

 

While the Yara rule/signature itself is a basic code signature, it will find and detect this particular virus, as seen below, as it will match with the virus's hash value. I found that running it on System32 shows no matches when the virus is not present, so it is safe to run on the computer and will not result in any matches other than this specific virus. The fact that the malware has a range of unique strings for the Yara rule to key on means there should hopefully never be an overlap. If the scan finishes without a single detection, that proves you are not affected by this virus. This virus is, I believe, adware in form; it will be obvious if you truly have it, as the Internet Explorer redirect will be messed up and send you to some website, but on the chance that it may be lurking in your computer, this Yara rule will detect it. I believe the primary goal of this malware is to manipulate the registry to get to the Windows error logs and then Internet Explorer itself to implement the adware, and from there to get more access to Windows System32, as it does create files within that folder along with the registry changes. Overall, I recommend the user be wary of malware such as this: at surface level it seems to only change how Internet Explorer works, but it clearly has motives of changing how your computer functions through the System32 folder, in particular through the admin user.

Sources:

[1] C. Schmugar, Class Lecture, "Malware Defense," College of Engineering, Oregon State University, Corvallis, Jan. 2014.

 

Maxwell Evdemon

 

All information learned comes from the Advanced Forensics lectures 1 and 2 by Christiaan Beek from McAfee/Intel, and was provided by the OSU CS 373 class.

 

Advanced Forensics 1 – Volatility, Forensic computation analysis, Yara signatures [1]

Analyzing memory is popular, especially looking through memory dumps. Forensic investigations happen for a number of reasons, some being fraud, intellectual property theft, hacking intrusions and data breaches, and inappropriate uses of the internet. They can also stem from child exploitation on the internet, and from e-discovery in civil and criminal litigation. I learned that forensic computing is typically the process of identifying, analyzing, and preserving some form of digital evidence that is seen as admissible. It is the process of finding data and using it to resolve some issue, I assume usually to incriminate some group that is breaking the law. It generally has three steps: gathering the information, analyzing that information, and compiling that information and reporting it. I also learned that there are three categories of forensic computing: live forensics, post-mortem forensics based on memory and disk, and network-based forensics. I learned that a forensics investigation is different from malware analysis; the goal is to find evidence of what happened on that system, rather than to judge the issue at hand.

People are also finding ways to get around using computers; I learned that game console networks can be used to discuss things, which I would assume makes it harder to connect people to a crime during an investigation. Another thing I learned is that this can be a dangerous field at times, as this type of forensics can go hand in hand with police work. Some computers' memory can be erased upon powering down or being unplugged from the network. As for reporting the evidence, since these cases are typically handled in courts due to the nature of the crimes, it is important to provide a report in a manner the court will understand. When working in forensics, the rules one should typically follow are: always minimize data loss (as I assume losing any form of data means one might lose critically important evidence), always record the work/evidence so nothing is lost, analyze all data found no matter how insignificant some of it might seem, and report the findings. Time is considered the most important thing during recording, both the actual time you work at and the system time, which helps greatly when building the timeline of a forensic investigation. I learned that a forensic computing team typically has a writer to record all events; if not, it was highly recommended to write everything down physically on paper. I also learned that this field extends past the more basic investigation machines like computers and into stranger things like smart TVs, GPS data, and PLC controllers, where investigations can be quite difficult as tools do not generally exist for such unusual platforms.

Evidence is something that can prove or refute a fact. I learned that evidence, in terms of forensic computing, is typically found in the network, operating systems, databases and other applications, peripherals, USBs/CDs/removable hard drives/media, and from people themselves. Getting familiar with operating systems is a must for forensic computing, and which operating system you may be investigating will typically depend on what part of the world you are in. Triage in forensic analysis is proving the same thing in multiple different ways, which I assume is about trustworthiness, as multiple sources showing the same results should be trustworthy. One challenge investigators face is the hard disk and the sheer amount of data it can contain; this alone can take a great amount of time. Whitelisting helps during these investigations due to the size of these disks, as the investigator no longer has to look through every file.

The evidence should be kept the same at all times so there is no tampering with the data itself. This can be done by creating bit-image copies and some sort of cryptographic hash of all data. To ensure that the data has not been changed, you can compare that cryptographic hash/checksum against the original to confirm they are the same, and one can also lock the original disk away in some access-limited area; I assume this way the data cannot be tampered with or accidentally edited. (A small sketch of this hash check appears after the figure below.) I also learned that SSDs can perform a sort of maintenance that ends up changing their data and thus invalidating the hash copy, which is another reason the safest route for investigations is to write it all down. I also learned that SSDs have their own tools and programs that are used for looking through them. When data from a hard disk is being looked through, a write blocker is used to prevent any interference, acting as a form of one-way reader for the investigator. Something interesting: a forensic investigator cannot look through personal emails without approval from a judge, even though they could contain important information for an investigation; I believe this is likely due to privacy laws that protect a person's information.

The incident response process is the set of steps taken when responding to some issue. First the incident occurs, then the initial response happens along with a strategy or plan to deal with the incident; next, data is collected, then analyzed and documented; from there, depending on the situation, there will be either legal or administrative action. During that entire incident response process the team will also try to fix or contain the incident, which is part of the overarching evaluation process. Most large companies have some sort of response team, but smaller companies will need to bring in experts to help resolve the incident. The picture below is from the PowerPoint lecture.

 

[1]
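Returning to the point above about hash verification: here is a minimal sketch of how an acquisition hash can be checked against a working copy. The paths are placeholders; a real workflow would also record the digest in the case notes at acquisition time.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a (potentially huge) disk image and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hash the original evidence once at acquisition time, then re-hash the
# working copy before analysis; matching digests show the copy is untampered.
original = sha256_of("/evidence/disk_original.img")   # placeholder paths
working = sha256_of("/evidence/disk_copy.img")
assert original == working, "working copy no longer matches the original!"
```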

 

When comparing evidence to the APT (advanced persistent threat) process, a forensics investigator can map evidence to certain steps of that process; I assume mapping the data this way allows for easier understanding of the incident during evaluation. Firmware or ISP logs relate to the reconnaissance step. Evidence found in email gateway logs, proxy logs, the internet history file, and Java IDX files can all be traced to the delivery step of the APT. Evidence found in the Windows event logs or crash dump files can be related to the exploitation step. Any evidence found in memory dumps, registry keys, or prefetch files can be mapped to the installation step. Memory dumps, firewall logs, IPS logs, proxy logs, and netflow can all be related to the command-and-control step of an APT. Memory dumps, registry keys, prefetch files, remote tools, and netflow can all be mapped to the actions-and-objectives step of the APT. You may notice some overlaps in the mapping; I assume this is because some of these pieces of evidence can be used for either or both of those steps, but having the mapping of the evidence should still help in the evaluation process. I learned about the investigation cycle and the steps that are a part of it: verification, system description, and evidence acquisition. Those three feed into the actual cycle, which is timeline analysis, media analysis, string/byte searching, data recovery, and reporting analysis. Below is a picture from the Advanced Forensics lecture 1 slides showing the cycle.

 

[1]

I learned about Locard's principle, the idea that every action leaves some form of evidence. This is true for computing forensics as well, particularly with network cables and USBs: anything put into the system will change it. Anything that gets contaminated in the files cannot be changed back to its original form; this is important, as many investigations have to be dropped the second the evidence is changed. One of the first steps is to disconnect the network cable, as the attacker could still have remote access to the machine. I also learned about an important concept for investigative forensics, the order of volatility (RFC 3227): the idea that you should always look at and collect the more volatile information first. Volatile data will typically disappear upon shutting down the computer, which is why it is so important to get the most volatile data first, as it is all temporary in essence. The order is usually as follows: first the system memory, then temporary file systems, the process table and network connections (the process information that can be dumped), the network routing information and ARP cache, the forensic acquisition of the disks, the remote logging and monitoring data, the physical configuration and network topology, and finally the backups of the data. After acquiring the volatile data, move on to acquire any non-volatile data such as timestamps, event logs, web application logs, and, if possible, the registry. After that, look for any local files that may serve as evidence; this includes any unknown executable files, tools that the attacker looks to have used, and any other files that may relate to the incident case.

Starting into lab 1 of this lecture, something important I learned was to never install forensics tools on the suspect computer, as that counts as tampering with the evidence. Never store the memory dumps on the suspect machine; always store them on some external storage device. This can also be done through a network share to get the evidence off in a one-way path. I also learned about FTK Imager and how to use it. FTK Imager has a capture-memory feature that lets you pick a destination path and then starts dumping memory to the path that has been selected. I also learned that FTK can be used to copy protected files this way. FTK Imager seems to be used primarily to copy files and data, for what I assume is looking through data for analysis without actually tampering with it. FTK has a read mode which allows the user to look at the data before a snapshot, which helps as it does not actually tamper with the data. The master file table cannot normally be copied, but FTK Imager can be used to copy and export it, which is critical for a forensic investigation. FTK can also create a physical disk image, in the formats E01 (which is for EnCase), raw (which is compatible with Linux and various tools), and SMART, all selectable as the destination format in FTK Imager. This process cannot change anything in the file, but it may change something in memory, which cannot be avoided. While taking the image, one can give it labels or descriptions, and the image fragment size has to be set based on how big an image you are taking; that size should always be slightly bigger than the disk you are imaging, or you run the risk of creating split files.

I learned that physical memory in this field is considered to be the RAM of the computer, the short-term memory. This short-term memory will quickly disappear once the power connected to it is cut. The short-term memory is an important dump for a forensics investigator, as I learned the dumps can reveal anything hidden on the suspect computer. As for what can be obtained from this physical memory, I learned it includes: all processes running at the time of the snapshot, all loaded modules and dynamic link libraries including any injected malware, all running device drivers and potential rootkits, all open files for each process, all the registry keys the processes use, the open network sockets for each process along with their IP address and port information, any decrypted versions of otherwise encrypted data, the contents of any open windows, keystrokes recently made on the machine, email attachments and other file transfers, any and all cryptographic key material, hard drive encryption keys that are not normally obtainable, WEP and WPA wireless keys that may otherwise be unknown, and finally any usernames and passwords used on that computer. With all that information, it makes sense why RAM and physical memory are so important to dump and analyze in any forensic investigation. Below is a picture of an example memory dump, from the Advanced Forensics 1 lecture.

 

[1]

 

 

I learned that physical memory is divided into something called pages; these pages contain memory that is in turn mapped onto physical memory. It is important to note that the same page of physical memory can be mapped at several memory address locations, and the memory on these pages does not necessarily get overwritten when the memory itself is freed. I also learned how processes running on Windows are assigned memory: each process has 4 GiB of address space, divided into 2 GiB for the application and 2 GiB for the system. I also learned about some of the ways to analyze memory dumps: looking for printable strings in the data, reconstructing the internal data structures, or searching for static signatures that reveal the kernel data structures. Looking at strings is what was done before the modern volatility tools we have today, which is why it is considered the original way to look through memory. I learned that Volatility started as a way to read through Windows memory dumps. Yara is a tool to create signatures; malware is matched against a signature, and through Volatility one can look for these signatures in memory to identify malware quickly. I assume this was done as a way to save time when using Volatility, as the investigator can identify certain pieces of malware much more easily.

Getting into lab 2, I learned more about Volatility specifically: it is a free tool in Python, where users can write and create their own plugins. There are also plenty of plugins already made and available, for example malfind, which is used to detect hidden or injected code in memory. When working across operating systems it is always good to have the Volatility cheat sheet up, which details some of the useful commands for working with Volatility. On Windows the tool is volatility.exe and on Linux it is vol.py, with the same syntax overall. The syntax is as follows: -f "name of the memory dump" "plugin name". The imageinfo plugin is useful for looking up the operating system version. Volatility is primarily used to analyze memory dumps, but it can create memory dumps as well. In the case of having multiple suggested profiles you pick only one, and that is done with volatility -f "name of memory dump" --profile="name of profile". For Volatility, -h is the help option, and psscan will look in the memory dump for the processes that were running on the system at some point. It will show the process IDs along with the parent process IDs, which I assume can then be used to keep track of processes like the Process Explorer tool does, except this time from memory. The netscan and deskscan commands are for looking at network and desktop activity respectively. The getsids command shows which user rights the programs that were running had, which in turn I assume can be a great piece of evidence during forensic computing investigations. [1]
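To tie the syntax together, here is a small sketch driving the plugins above from Python. The dump name and profile are placeholders; in practice you would run imageinfo first (without a profile) and pick one of the profiles it suggests.

```python
import subprocess

DUMP = "memdump.raw"        # placeholder memory dump name
PROFILE = "Win7SP1x86"      # placeholder; one of the profiles imageinfo suggests

def vol(plugin, *extra):
    """Run a Volatility plugin using the -f / --profile syntax from the lab."""
    cmd = ["vol.py", "-f", DUMP, f"--profile={PROFILE}", plugin, *extra]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(vol("psscan"))    # processes (and parent PIDs) found in the dump
print(vol("netscan"))   # network activity
print(vol("getsids"))   # user rights (SIDs) of the running programs
```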

 

Advanced Forensics 2 – In-depth look at more tools [2]

 

I learned that one of the most important things, other than memory, to keep in mind during a forensic investigation is the registry, as many things are recorded within it. I learned that even external hard drives and USBs are recorded within the registry, as it shows what was plugged directly into the computer. RegRipper, in tandem with FTK Imager, can be used to search for the information you want. RegMon can be used to show the registry in real time, which I assume would be useful for keeping track of the registry to make sure nothing changes when looking through malware. The default tool in Windows is regedit, and it can be used to browse the registry files. I learned that there are five folders at the top of the registry hierarchy, referred to as hives. Out of those five folders, the two that contain most of the registry information forensic investigators are interested in are HKEY_USERS (HKU) and HKEY_LOCAL_MACHINE (HKLM). The other three hives are generally considered shortcuts that reach into branches of those two. I also learned about the structure of the Windows registry: each of the five hives mentioned above is composed of keys, and any keys contained by those keys are considered subkeys. For these keys, values are the names of specific items held within a key. These will usually relate to the operating system or to an application. Below is a picture that represents this; I got this picture from the Advanced Forensics 2 PowerPoint presentation.

 

[2]

Looking at the registry, always inspect autorun, as it is one of the biggest places to find malware; I learned that is because malware wants to survive the machine being restarted. Autorun allows the virus to begin running again as soon as the machine restarts. You can find wireless access points through the geolocation data in the registry; this way you can see if a certain MAC address you are investigating connected somewhere before. URLs are also stored within the Internet Explorer section of the registry, which can be useful for seeing where the computer has browsed. I believe this shows just how important the registry is during forensic analysis, as it helps surface information an investigator can use as evidence. When looking at timeline analysis, the $MFT, which stands for master file table, is typically used. From the $MFT you can see when files were created and deleted; it normally cannot be accessed, but using FTK Imager you can get its contents. The information from the $MFT can then be used to create a timeline analysis, and this process can also be done with RegRipper.
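As a quick illustration of inspecting autorun, here is a sketch using Python's standard winreg module (Windows-only, and my own choice of tool, not one from the lecture) to list the classic HKLM Run key; HKCU has a matching key worth checking too.

```python
import winreg

# Enumerate the classic autorun location the lecture flags as one of the
# biggest places to find malware. HKCU\...\Run is the per-user equivalent.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
    value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
    for i in range(value_count):
        name, command, _type = winreg.EnumValue(key, i)
        print(f"{name}: {command}")  # anything unexpected here is worth a look
```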

Getting into the first lab of this presentation, about the $MFT, I learned about timeliner and mftparser. Timeliner is invoked with --output=body and redirected to an output file; in that output file you will see an overview of processes that happened over the network, along with Unix times. Using mftparser, the format is the same as timeliner, redirecting to an output file. I learned that what happens during this process is that it goes into the memory dump, finds the master file table information, and then writes that information into the output text file, which can then be used for forensic analysis. Timeliner generally produces less information in its output than mftparser, and displays its times as MAC times. I also learned that even if you don't have the disk but have a memory dump, you can still run disk-oriented commands like the master file table parser. Other things a forensic investigator should be interested in are the page files and index.dat files, which I learned happen to be the Internet Explorer URL history files. Also look at the Windows event logs, application and configuration files, prefetch files, and anything that may show signs of being tampered with by malware. The crash dumps of Windows can also contain parts of malware files, which can be useful for reconstruction if necessary. Look through any antivirus log files as well, as they may show information about the malware. I learned that the Prefetch folder in Windows will contain entries for the last 128 programs that have been run on the computer, which I assume would be useful for seeing whether parts of the system have been accessed recently. Prefetch will also show the location and drivers utilized by a tool, which is why the Prefetch folder is high on the volatility list, as it is constantly changing. I also learned that you can look through restore points to find possible malware as well, and that the malware will still be detected as long as it is part of those restore points. Other key files an investigator should look at are the hibernation file, any crash-dump files, LNK files, and shellbags.

Most data recovery uses a concept known as data carving. For data recovery, one needs to identify when files were deleted, look for file fragments and possibly unrecoverable data, recover any pertinent files including images or emails, and then describe how the data recovery process was carried out. When a file is written to a hard disk, its data sits at some position, and each of these positions has a start and end flag marking the data. I learned that when something is deleted, it is in fact not gone; rather, the flags that mark where the data is are removed. The only way the data is actually gone is if it is overwritten or the disk is hard-wiped entirely. On SD cards for phones, even after a reset to defaults you can still find information on them, so I assume SD cards are a great way of finding out information, wanted or unwanted. I learned that each file header will have some identification, and the footer contains bytes that mark the end of the file.

I learned about PhotoRec, a carving program that can be used to look for specific things, in this case photos. PhotoRec looks for specific signatures while running and is able to carve out anything with a matching signature; this is done by looking from a specified header to the footer, which together form that signature. PhotoRec will only find complete files, but other data carving tools can look for parts of files. I assume this tool is very useful for investigations, as it can help track down evidence on a suspect computer or hard drive that had been deleted. I learned that sometimes the data is spread throughout a hard drive and needs to be put back together. SleuthKit is a tool used to look at a lower level of the data and is generally used for manually carving data, rather than the automated approach of PhotoRec. This moves us to the next lab of this lecture. It is important to remember that PhotoRec assumes the file is actually readable. In this lab I learned how to use PhotoRec: it starts by pointing the tool at some card that has data on it and also at some output location. In the options, you can choose which file types you specifically want from the disk drive; left alone, it defaults to simply carving all matching files from the drive. The output screen of the tool displays how many files have been carved. [2]
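To make the header-to-footer idea concrete, here is a toy illustration of the carving technique PhotoRec automates, scanning a raw image for JPEG start (FF D8 FF) and end (FF D9) markers. The image path is a placeholder, and real carvers handle fragmentation and many more formats.

```python
# Toy header/footer carving: find JPEG start and end markers in a raw image
# and carve out the bytes between them.
HEADER, FOOTER = b"\xff\xd8\xff", b"\xff\xd9"

with open("sdcard.img", "rb") as f:  # placeholder disk/SD-card image
    data = f.read()

count, pos = 0, 0
while (start := data.find(HEADER, pos)) != -1:
    end = data.find(FOOTER, start)
    if end == -1:
        break  # header with no footer: an incomplete file, skipped here
    with open(f"carved_{count}.jpg", "wb") as out:
        out.write(data[start:end + len(FOOTER)])
    count, pos = count + 1, end + len(FOOTER)

print(f"carved {count} candidate JPEGs")
```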

 

Sources:

[1] C. Beek, Class Lecture, "Advanced Forensics 1," College of Engineering, Oregon State University, Corvallis, Jan. 2015.

[2] C. Beek, Class Lecture, "Advanced Forensics 2," College of Engineering, Oregon State University, Corvallis, Jan. 2015.

 

 

 

 

 

 
