General

  • Target

    Applied Incident Response (Steve Anson) (z-lib.org).epub

  • Size

    38.6MB

  • Sample

    220907-nw9h6acaa2

  • MD5

    2dd660158c81bafacd4002328e1ec2dd

  • SHA1

    18e959a5f58a0cda59a3d9958adf2c1a237a5806

  • SHA256

    4654817ffdd9dbd995fa2b83359486541caa417c76e0e95cbd6ec7b910e3007d

  • SHA512

    7b78aa71975e5511a8d6aa307faa74ede9e454a230b991a9823834369e83af1c896f1cfcbe0359231f43a5661aa11c4410bf8efe24e8f8d7874f09e533855a68

  • SSDEEP

    786432:i9r+wvGysCB2r0wtqe5EhrUZZDYeh73J3hhl1uyw4uzDrm1Z28PR/s0:iJ+wf4r0sQhjw5d1uyw4uzdsRf
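The fixed digests above can be recomputed to confirm that a local copy matches the analyzed sample. A minimal Python sketch using only the standard library (the file path passed in is whatever local name your copy has; SSDEEP is a fuzzy hash and requires a separate library, so it is not covered here):

```python
import hashlib

def file_digests(path, algorithms=("md5", "sha1", "sha256", "sha512")):
    """Read the file once, feeding the same chunks to every hash algorithm."""
    hashers = {name: hashlib.new(name) for name in algorithms}
    with open(path, "rb") as handle:
        # 1 MiB chunks keep memory flat even for large samples.
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            for hasher in hashers.values():
                hasher.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}
```

Digests that match the values listed above confirm the local copy is the same file; any mismatch indicates a different or corrupted sample.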

Score
10/10

Malware Config

Extracted

Family

ryuk

Ransom Note
[Extracted text: index of Applied Incident Response, entries Symbols through L — WQL operators; account and event log analysis (Kerberos, RDP, BloodHound); adversary emulation (Atomic Red Team, Caldera, MITRE ATT&CK); memory collection and analysis (DumpIt, AFF4, Rekall, Volatility); disk imaging and forensics (FTK Imager, Paladin, Arsenal Image Mounter, Axiom, EnCase, X‐Ways); registry, Prefetch, SRUM, and time stamp analysis; malware analysis (Cuckoo, FLARE VM, REMnux, Ghidra, FLOSS); event log auditing and PowerShell; Kerberos attacks (golden/silver tickets, Kerberoasting, pass‐the‐ticket); lateral movement (PsExec, RDP, WinRM, WMI, SMB pass‐the‐hash); preventive controls and logging. Monospace identifiers (cmdlet, plug‐in, and registry key names) were lost in extraction.]

Extracted

Family

ryuk

Ransom Note
CHAPTER 1: The Threat Landscape

Before we delve into the details of incident response, it is worth understanding the motivations and methods of various threat actors. Gone are the days when organizations could hope to live in obscurity on the Internet, believing that the data they held was not worth the time and resources for an attacker to exploit. The unfortunate reality is that all organizations are subject to being swept up in the large number of organized, wide‐scale attack campaigns. Nation‐states seek to acquire intelligence, position themselves within supply chains, or maintain target profiles for future activity. Organized crime groups seek to make money through fraud, ransom, extortion, or other means. No system is too small to be a viable target. Understanding the motivations and methods of attackers helps network defenders prepare for and respond to the inevitable IT security incident.

Attacker Motivations

Attackers may be motivated by many factors, and as an incident responder you'll rarely know the motivation at the beginning of an incident and may never determine the true motivation behind an attack. Attribution of an attack is difficult at best and often impossible. Although threat intelligence provides vital clues by cataloging the tactics, techniques, procedures, and tools of various threat actor groups, the very fact that these pieces of intelligence exist creates the real possibility of false flags, counterintelligence, and disinformation being used by attackers to obscure their origins and point blame in another direction. Attributing each attack to a specific group may not be possible, but understanding the general motivations of attackers can help incident responders predict attacker behavior, counter offensive operations, and conduct a more successful incident response. Broadly speaking, the most common motivations for an attacker are intelligence (espionage), financial gain, or disruption.
Attackers try to access information to benefit from that information financially or otherwise, or they seek to do damage to information systems and the people or facilities that rely on those systems. We'll explore various motives for cyberattacks in order to better understand the mindset of your potential adversaries.

Intellectual Property Theft

Most organizations rely on some information to differentiate them from their competitors. This information can take many forms, including secret recipes, proprietary technologies, or any other knowledge that provides an advantage to the organization. Whenever information is of value, it makes an excellent target for cyberattacks. Theft of intellectual property can be an end unto itself if the attacker, such as a nation‐state or industry competitor, is able to directly apply this knowledge to its benefit. Alternatively, the attacker may sell this information or extort money from the victim to refrain from distributing the information once it is in their possession.

Supply Chain Attack

Most organizations rely on a network of partners, including suppliers and customers, to achieve their stated objectives. With so much interconnectivity, attackers have found that it is often easier to go after the supply chain of the ultimate target rather than attack the target systems head on. For example, attacking a software company to embed malicious code into products that are then used by other organizations provides an effective mechanism to distribute the attacker's malware in a way that makes it appear to come from a trusted source. The NotPetya attack compromised a legitimate accounting software company, used the software's update feature to push data‐destroying malware to customer systems, and reportedly caused more than $10 billion in damages. Another way to attack the supply chain is to attack the operational technology systems of manufacturing facilities, which could result in the creation of parts that are out of specification.
When those parts are then shipped to military or other sensitive industries, they can cause catastrophic failures.

Financial Fraud

One of the earliest motivations for organized cyberattacks, financial fraud is still a common motivator of threat actors today, and many different approaches can be taken to achieve direct financial gain. Theft of credit card information, phishing of online banking credentials, and compromise of banking systems, including ATM and SWIFT consoles, are all examples of methods that continue to be used successfully to line the pockets of attackers. Although user awareness and increased bank responsiveness have made these types of attacks more difficult than in previous years, financial fraud continues to be a common motivation of threat actors.

Extortion

We briefly mentioned extortion in our discussion of intellectual property theft, but the category of extortion is much broader. Any information that can be harmful or embarrassing to a potential victim is a suitable candidate for an extortion scheme. Common examples include the use of personal or intimate pictures, often obtained through remote access Trojans or duplicitous online interactions, to extort money from victims in schemes frequently referred to as “sextortion.” Additionally, damage or the threat of damage to information systems can be used to extort money from victims, as is done in ransomware attacks and with distributed denial‐of‐service (DDoS) attacks against online businesses. When faced with the catastrophic financial loss associated with being taken offline or being denied access to business‐critical information, many victims choose to pay the attackers rather than suffer the effects of the attack.

Espionage

Whether done to benefit a nation or a company, espionage is an increasingly common motivation for cyberattacks.
The information targeted may be intellectual property as previously discussed, or it may be broader types of information that can provide a competitive or strategic advantage to the attacker. Nation‐states routinely engage in cyber‐espionage against one another, maintaining target profiles of critical systems around the globe that can be leveraged for information or potentially attacked to cause disruption if needed. Companies, with or without the support of nation‐state actors, continue to use cyber‐exploitation as a mechanism to obtain details related to proprietary technologies, manufacturing methods, customer data, or other information that allows them to compete more effectively within the marketplace. Insider threats, such as disgruntled employees, often steal internal information with the intent of selling it to competitors or using it to gain an advantage when seeking new employment.

Power

As militaries increasingly move into the cyber domain, the ability to leverage cyber power in conjunction with kinetic or physical warfare is an important strategy for nation‐states. The ability to disrupt communications and other critical infrastructure through cyber network attacks rather than prolonged bombing or other military activity has the advantages of being more efficient and reducing collateral damage. Additionally, the threat of being able to cause catastrophic damage to critical infrastructure, such as electric grids, which would cause civil unrest and economic harm to a nation, is seen as a potential deterrent to overt hostilities. As more countries stand up military cyber units, the risk of these attacks becomes increasingly present. As Estonia, Ukraine, and others can attest, these types of attacks are not theoretical and can be very damaging.

Hacktivism

Many groups view attacks on information systems as a legitimate means of protest, similar to marches or sit‐ins.
Defacement of websites to express political views, DDoS attacks to take organizations offline, and cyberattacks designed to locate and publicize information to incriminate those perceived to have committed objectionable acts are all methods used by individuals or groups seeking to draw attention to specific causes. Whether or not an individual agrees with the right to use cyberattacks as a means of protest, the impact of these types of attacks is undeniable, and they continue to be a threat against which organizations must defend.

Revenge

Sometimes an attacker's motivation is as simple as wishing to do harm to an individual or organization. Disgruntled employees, former employees, dissatisfied customers, citizens of other nations, or former acquaintances all have the potential to feel as if they have been wronged by a group and to seek retribution through cyberattacks. Many times, the attacker will have inside knowledge of processes or systems used by the victim organization that can be used to increase the effectiveness of such an attack. Open‐source information will often be available through social media or other outlets where the attacker has expressed his or her dissatisfaction with the organization in advance of or after an attack, with some attackers publicly claiming responsibility so that the victim will know the reason for and source of the attack.

Attack Methods

Cyber attackers employ a multitude of methods; we'll cover some of the general categories here and discuss specific techniques throughout the remaining chapters. Many of these categories overlap, but having a basic understanding of these methods will help incident responders recognize and deter attacks.

DoS and DDoS

Denial‐of‐service (DoS) attacks seek to make a service unavailable for its intended purpose. These attacks can occur by crashing or otherwise disabling a service or by exhausting the resources necessary for the service to function.
Examples of DoS attacks are malformed packet exploits that cause a service to crash or an attacker filling the system disk with data until the system no longer has enough storage space to function. One of the most common resources to exhaust is network bandwidth. Volumetric network floods send a large amount of data to a single host or service with the intent of exceeding the available bandwidth to that service. If all the bandwidth is consumed with nonsense traffic, legitimate traffic is unable to reach the service and the service is unable to send replies to legitimate clients. To ensure that an adequate amount of bandwidth is consumed, these types of attacks are normally distributed across multiple systems all attacking a single victim and are therefore called distributed denial‐of‐service (DDoS) attacks.

An example of such an attack is the memcached DDoS attack used against GitHub, which took advantage of publicly exposed memcached servers. Memcached is intended to allow other servers, such as those that generate dynamic web pages, to store data on a memcached server and be able to access it again quickly. When publicly exposed over the User Datagram Protocol (UDP), the service enables an attacker to store a large amount of data on the memcached server and spoof requests for that data as if they came from the intended victim. The result is that the memcached server responds to each forged request by sending a large amount of data toward the victim, even though the attacker needs to send only a small amount of data to generate the forged request. This concept of amplifying the attacker's bandwidth by bouncing it off a server that will respond with a larger payload than was sent is called an amplification attack. The amplification ratio for memcached was particularly high, resulting in the largest DDoS attacks by volume to date.
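The economics of an amplification attack reduce to simple arithmetic: the amplification factor is the number of bytes reflected toward the victim for each byte the attacker sends to the reflector. A minimal sketch, using illustrative byte counts that are assumptions for demonstration rather than measured traffic sizes:

```python
# Hedged sketch of reflection/amplification arithmetic.
# The byte counts below are illustrative assumptions, not measurements.

def amplification_factor(request_bytes, response_bytes):
    """Bytes delivered to the victim per byte the attacker sends."""
    return response_bytes / request_bytes

# A tiny spoofed request can elicit a very large cached payload.
spoofed_request_bytes = 15          # assumed minimal UDP request payload
reflected_response_bytes = 750_000  # assumed large cached value

factor = amplification_factor(spoofed_request_bytes, reflected_response_bytes)
print(f"amplification: {factor:,.0f}x")
```

With these assumed sizes the factor works out to 50,000x, which illustrates why a modest amount of attacker bandwidth, bounced off misconfigured reflectors, can produce floods orders of magnitude larger than the attacker's own uplink.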
Fortunately, since memcached replies originate from UDP port 11211 by default, filtering of the malicious traffic by an upstream anti‐DDoS solution was simplified. The misconfigured servers that allowed these initial attacks to achieve such high bandwidth are also being properly configured to disallow UDP and/or be protected by firewalls from Internet access.

DDoS attacks rely on the fact that they are able to send more data than the victim's Internet service provider (ISP) link is able to support. As a result, there is very little the victim can do to mitigate such attacks within their network. Although an edge router or firewall could be configured to block incoming floods, the link to the organization's ISP would still be saturated and legitimate traffic would still be unable to pass. Mitigation of DDoS attacks is generally provided by ISPs or a dedicated anti‐DDoS provider that can identify and filter the malicious traffic upstream, or through a cloud service where far more transmission capacity exists. We won't talk a great deal about incident response to DDoS attacks in this book, since most mitigation will occur upstream. With online “booters” or “stressers” commonly advertised on the clear net and dark web for nominal fees, all organizations that rely on the Internet for their business operations should have anti‐DDoS mitigation partners identified and countermeasures in place.

Worms

Worms are a general class of malware characterized by the fact that they are self‐replicating. Old‐school examples include the LoveBug, Code Red, and SQL Slammer worms that caused extensive damage to global systems in the early 2000s. Worms generally target a specific vulnerability (or vulnerabilities), scan for systems that are susceptible to that vulnerability, exploit the vulnerable system, replicate their code to that system, and begin scanning anew for other victims to infect. Because of their automated nature, worms can spread across the globe in a matter of minutes.
The WannaCry ransomware is another example of a worm; it used the EternalBlue exploit for Windows operating systems to propagate and deliver its encryption payload, reportedly infecting more than 250,000 systems across 115 countries and causing billions of dollars in damage. Detection of worms is generally not difficult. A large‐scale attack will prompt global IT panic, sending national computer emergency response teams (CERTs) into overdrive, with researchers providing frequent updates to the IT security community on the nature of the attack. From an incident response perspective, the challenge is to adequately contain impacted systems, identify the mechanism by which the worm is spreading, and prevent infection of other systems in a very short amount of time.

Ransomware

Ransomware refers to a category of malware that seeks to encrypt the victim's data with a key known only to the attackers. To receive the key needed to decrypt and therefore recover the impacted data, victims are asked to pay a fee to the ransomware authors. In exchange for the fee, victims are told that they will receive their unique key and be able to decrypt and recover all the impacted data. To encourage payment from as many victims as possible, some ransomware campaigns even provide helpdesk support for victims who are having issues making payments (usually through cryptocurrency) or decrypting the files after the key has been provided. Of course, there is no guarantee that a payment made through cryptocurrency, which cannot be rescinded once made, will result in the encryption key being provided. For this reason, as well as to discourage these types of attacks in general, IT security practitioners generally advise against paying a ransom. Nonetheless, many organizations that are not adequately prepared and that do not have sufficient disaster recovery plans in place feel they have little choice but to make these payments despite the lack of guarantees.
Ransomware has been a significant threat since at least the mid‐2000s. The CryptoLocker ransomware appeared in 2013 and has led to several variants since then. The WannaCry worm, mentioned earlier, did significant damage in 2017. Since then, more targeted ransomware attacks have struck cities including Atlanta, Baltimore, and 23 separate cities in Texas that were targeted in the same campaign. Similar examples of attacks targeting medical and enterprise environments have also occurred in recent years. The GandCrab ransomware targeted a variety of organizations, including IT support companies, in order to use their remote support tools to infect more victims. Targeted attacks continue to be a common strategy for financially motivated attack groups using ransomware such as SamSam, Sodinokibi, and others. Smaller organizations that are perceived as having less ro
URLs

https://pages.nist.gov/800-63-3/sp800-63b.html

Extracted

Ransom Note
CHAPTER 2 Incident Readiness

Armies train for war during times of peace, and before an imminent conflict, troops harden and fortify their position to provide an advantage in the battle to come. Incident responders know that all networks are potential targets for cyber threat actors. The modern reality is one of when, not if, a network will be attacked. We must therefore prepare ourselves, our network, and our battle plans to maximize our chances of success when the adversary comes. This chapter will look at ways to prepare your people, processes, and technology to support effective incident response and contribute to the cyber resiliency of your environment.

Preparing Your Process

In Chapter 1, “The Threat Landscape,” we explored some of the techniques employed by modern adversaries. Threat actors have significantly increased their capabilities and focus on launching cyberattacks. The result of this shift is that traditional, passive approaches to network defense are no longer effective. Perimeter‐based defenses, where we hide in our castles and fortify the walls, are no longer applicable to the modern threat. As network perimeters disappear, cloud technologies are embraced, networks operate with zero trust for other systems, and preventive security controls fail to stop the threat, we must embrace a new approach to secure our environments. That approach is referred to as cyber resiliency. The U.S. National Institute of Standards and Technology (NIST) released Special Publication 800‐160 Vol. 2, titled “Developing Cyber Resilient Systems: A Systems Security Engineering Approach,” in November 2019. 
Section D.1 of this document defines cyber resiliency as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that include cyber resources.” The concept is predicated on the belief that preventing every cyberattack is impossible and that eventually an adversary will breach even the most secure network and maintain a presence within the environment. Recognizing that reality and shifting from a purely preventive security posture to one of prevention, detection, and response is vital for the security of every network system. You can download a copy of the NIST publication here: https://csrc.nist.gov/publications/detail/sp/800-160/vol-2/final Prevention, detection, and response represents a cycle of activities that are necessary to adequately defend any cyber environment. The preventive controls that have been the foundation of information security for decades continue to be critically important. You should place as many barriers as possible between your critical information assets and adversaries who would seek to exploit them. However, you must also recognize that these preventive controls will eventually fail to stop an adversary from gaining a foothold within your environment. When that occurs, your ability to defend your network is dependent on your detective controls. You must detect the actions of the adversary within your environment and understand the malicious nature of those actions to mount an effective response. This incident response process should seek to contain the adversary, eradicate changes to your environment made by the adversary, remove the adversary from your environment, and restore normal operations. However, in addition to dealing with the immediate threat, this incident response process must be used to learn more about your adversary and your defenses. 
It must identify preventive and detective controls that worked and that did not work, assess your visibility over your environment, and suggest improvements to your network defense. Together, prevention, detection, and response form a never‐ending cycle where preventive controls are used to frustrate adversary activity, detective controls alert you when the adversary breaches the network, and incident response eliminates the current threat and provides recommendations for improving network defenses. Another way to explain this process of actively defending your networks is the Active Cyber Defense Cycle put forward by Robert M. Lee in his paper, “The Sliding Scale of Cyber Security,” available here: www.sans.org/reading-room/whitepapers/ActiveDefense/sliding-scale-cyber-security-36240 This model is summarized in Figure 2.1.

Figure 2.1: The active cyber defense cycle

Notice that incident response is one component of a larger, active defense process. To be effective, incident responders must coordinate and cooperate with the security monitoring team as well as the operations teams that control and configure the various systems in the network environment. Effective cybersecurity begins with good IT housekeeping, including understanding the assets that comprise the network. Security monitoring operations detect potential threats and escalate them for further incident response. The incident response team, supported by additional technical resources as necessary, should identify the scope of the incident and develop a plan to remediate the impact of the adversary. This plan should be communicated to and coordinated with the various operations teams that control the environment to contain, eradicate, and recover from the adversary action. Additional information about potential cyber threats faced by the organization, in the form of threat intelligence, should be consumed and used to improve the preventive and detective controls throughout the network. 
In large organizations, each of these functions may occupy an entire team. In other organizations, the people performing each of the roles outlined may overlap. Incident response may be done by a full‐time team or performed by ad hoc teams made up of different people pulled together each time an incident is declared. Regardless of how you operationalize the concept, each role is important to mounting an active, and effective, cyber defense. This book, of course, focuses on the incident response portion of the cycle but discusses the other related functions as well. We cover network security monitoring in detail in Chapter 7, “Network Security Monitoring,” and provide information on sources of threat intelligence in this chapter and in Chapter 14, “Proactive Activities.” This chapter will also look at detective technologies from the perspective of the data they provide to assist in the incident response process. Chapter 13, “Continuous Improvement,” explores some preventive controls that are of particularly high value in thwarting adversary activity. You will find that this chapter and Chapter 13, despite being on opposite sides of the book, are highly interconnected. This illustrates the fact that incident response bridges the reactive and proactive aspects of network defense, completing the cycle by using lessons learned from each incident to improve our preparation and defenses for the next one. There are multiple models on which to draw when developing an incident response process. The team at NIST has released Special Publication 800‐61 Rev. 2, the “Computer Security Incident Handling Guide,” to provide an overview of the incident response process. You can download this document here: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf Their model (shown in Figure 2.2) consists of four primary phases: Preparation; Detection and Analysis; Containment, Eradication, and Recovery; and Post‐Incident Activity. 
The relationship between these four phases is represented in section 3 of that document and is shown in Figure 2.2.

Figure 2.2: The NIST incident response life cycle (Source: “Computer Security Incident Handling Guide”; Paul Cichonski, Tom Millar, Tim Grance, and Karen Scarfone; National Institute of Standards and Technology; 2012)

There are two cycles within the incident response process. First, Detection and Analysis provides information that is used during Containment, Eradication, and Recovery, and information gained while performing Containment, Eradication, and Recovery is fed back to improve Detection and Analysis capabilities. Second, lessons learned through Post‐Incident Activity feed information back to improve Preparation. This idea of incident response as an ongoing cycle rather than a short‐term mission is a critical concept for modern network defense. Your incident response process is something that should be used regularly in your environment, not something that should be placed in a container marked “Break Glass in Case of Emergency.” Another popular model for the incident response process is the PICERL model, named after the first letter of each phase. The phases of this model are Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. As you can see, this model is very similar to the NIST model, differing in the terms used to describe similar activities and in where the arbitrary lines between different phases are drawn. In truth, the activities of one phase typically blur into the activities of the next, requiring that phases be repeated as new information is discovered about the incident. An incident frequently begins with the identification or detection of a single anomaly. This anomaly will be analyzed to determine whether it is malicious in nature. Analysis of the anomaly may lead to additional information that can be used to detect or identify other potentially suspicious behavior within the environment. 
Containment steps may be taken immediately, or additional information may first be gathered to understand the scope of the incident before any mitigation steps are taken. Eventually, a coordinated effort to eradicate the influence of the adversary will be undertaken, and recovery of normal operations will begin. Whichever model you choose as a basis for your incident response process, it is important that you document the process clearly and train all appropriate staff in its implementation. The incident response process within your organization should outline the roles, responsibilities, and authorities of each member of the organization who will be involved in incident response. The process should be understood not only by those who will perform incident response but also by the leadership of related units with which the incident response team will need to interact. This helps ensure that each team is aware of its role in the active defense of the organization. You may decide to provide security monitoring teams and operations teams with playbooks related to triage and reporting of different types of possible incidents to help ensure a smooth transition from detection to response, and you will definitely want incident responders to coordinate frequently with business continuity and disaster recovery planning teams and technical operations teams to ensure that incident mitigation efforts can be coordinated in a timely manner. Your incident response team must have clearly defined authorities and technical capabilities to access the systems involved in an incident. Incident response often occurs outside of normal business hours under time‐sensitive circumstances, so addressing these concerns as you go is a recipe for disaster. Preparing for the inevitable situations where team members will need to spend money on supplies, gain administrative access to impacted systems, and communicate with key stakeholders on short notice will help ensure a smooth response. 
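The phase models just described can be sketched as a tiny state machine. This is purely illustrative: the phase names come from the PICERL model above, but the transition map is our own simplification of the feedback loops described in the text (new findings push you back to Identification, and Lessons Learned feeds the next cycle's Preparation).

```python
from enum import Enum


class Phase(Enum):
    """The six PICERL incident response phases, in their nominal order."""
    PREPARATION = 1
    IDENTIFICATION = 2
    CONTAINMENT = 3
    ERADICATION = 4
    RECOVERY = 5
    LESSONS_LEARNED = 6


# Allowed transitions: mostly forward, plus the feedback loops the text
# describes. New information discovered during containment, eradication,
# or recovery sends the team back to identification/analysis, and
# lessons learned feeds preparation for the next incident.
TRANSITIONS = {
    Phase.PREPARATION: {Phase.IDENTIFICATION},
    Phase.IDENTIFICATION: {Phase.CONTAINMENT},
    Phase.CONTAINMENT: {Phase.ERADICATION, Phase.IDENTIFICATION},
    Phase.ERADICATION: {Phase.RECOVERY, Phase.IDENTIFICATION},
    Phase.RECOVERY: {Phase.LESSONS_LEARNED, Phase.IDENTIFICATION},
    Phase.LESSONS_LEARNED: {Phase.PREPARATION},  # a cycle, not a one-off
}


def can_move(current: Phase, nxt: Phase) -> bool:
    """Return True if this simplified model permits the transition."""
    return nxt in TRANSITIONS[current]
```

The point of the loops back to IDENTIFICATION is exactly the one made above: phases blur into one another and are repeated as new information about the incident is discovered.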
Legal and insurance policy issues must also be considered when developing your incident response process documents. Depending on the nature of the incident and the potential loss that may be involved, the terms of your insurance policy may dictate certain actions be taken—or not taken. Contractual obligations with third parties, including nondisclosure agreements, often require notification if information is taken without authorization. Regulatory notification requirements may also exist for compromise of personally identifiable information or other customer data. Ensure that you involve your legal team during any significant incident to ensure that all actions taken conform to applicable legal requirements. Your legal team should also be involved in the decision of when it is appropriate to notify law enforcement, partners, or customers regarding a potential incident. Your response plans should outline how communication will be handled both vertically in your chain of command and horizontally to other organizational units and third parties. A significant amount of time might be needed to explain the incident, provide current updates, and address the legitimate concerns of those who are impacted. If not anticipated, handling these interactions will consume so much time that they can impact your ability to mount an effective response. Plan for this inevitable outcome and assign appropriate staff to handle these communication needs. The person or persons tasked with this should be able to provide technical guidance, but also be able to handle crisis management tasks as these types of notifications are rarely well received and may spark a whole new set of issues that will need to be addressed. It is also important that you consider the incident response process a part of your daily operations to ensure that your staff remain familiar with and proficient in the steps necessary to mitigate malicious activity. 
You can find a wealth of sample documentation for incident response methodologies provided by CERT Societe Generale at https://github.com/certsocietegenerale/IRM. You can also find sample playbooks at www.incidentresponse.com/playbooks. However you decide to structure your incident response procedures, we encourage you to never forget that each incident will be unique. Playbooks and similar guides based on general categories of incidents can be useful tools to help direct your response; however, realize that the nature of an incident is often not understood until well into its investigation. Frequently, an incident begins with a simple anomaly and a need to understand it in more detail. View your incident response procedures as high‐level guidance to help you analyze and determine the next logical action to take each step along the way, providing an overall roadmap for how to successfully resolve an incident. You can think of this like playing with Lego bricks. The incident response procedures provide the high‐level guidance of what the finished product should look like, but in order to build anything, you need to have a set of bricks that can be assembled in a variety of ways. Your incident response procedures will provide the guidance, and the rest of this book will provide you with a variety of bricks, discrete technical skills that you can combine and assemble as needed to achieve an effective incident response.

THANKS, MIKE! Michael Murr is an experienced incident handler, researcher, and developer who has worked in a variety of sensitive environments. The coauthor of the SANS “SEC504: Hacker Techniques, Exploits, and Incident Handling” class, Mike possesses a wealth of information regarding incident response. We appreciate him taking the time to review this chapter and make suggestions to ensure that we covered the most important and current topics.

Preparing Your People

Incident responders must be jacks of all IT trades and masters of many. 
It is a challenging task that requires dedication and commitment to ongoing education to keep up with the ever‐evolving threatscape. Technical training in a variety of fields is required to be an effective incident responder. Working your way through this book will provide a solid basis, and our companion website, www.AppliedIncidentResponse.com, provides dozens of links to free, online training resources to continue honing your skills. In addition to technical training, your team must be trained in your incident response policies and procedures. Since training on policy might not be the most exciting activity, consider using tabletop exercises as a means of not only understanding the applicable process, but also evaluating ways to improve that process. Engaging in mock incidents is a great way to highlight potential gaps in defenses, visibility, training, and other aspects of your network security with minimal cost. A successful tabletop exercise
URLs

https://csrc.nist.gov/publications/detail/sp/800-160/vol-2/final

https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf

https://github.com/certsocietegenerale/IRM

https://bestpractical.com/rtir

https://github.com/certsocietegenerale/FIR

https://thehive-project.org

https://otx.alienvault.com

https://apps.nsa.gov/iaarchive/library/reports/spotting-the-adversary-with-windows-event-log-monitoring.cfm

https://github.com/BinaryDefense/artillery

https://github.com/cowrie/cowrie

https://github.com/mayhemiclabs/weblabyrinth

https://github.com/threatstream/mhn

https://isc.sans.edu/diary/Detecting+Mimikatz+Use+On+Your+Network/19311

https://github.com/EmpireProject/Empire/blob/master/data/module_source/management/New-HoneyHash.ps1

https://canarytokens.org

CHAPTER 5 Acquiring Memory

Attackers and defenders are in a constant cat‐and‐mouse game, with defenders coming up with new ways to detect attacks and attackers evolving new methods to evade that detection. One of the current battlegrounds for this game is volatile system memory. As antivirus and other endpoint defenses improved their ability to detect threats on disk, attackers simply moved to so‐called fileless malware, performing malicious acts using existing system binaries or injecting malicious code directly into the memory of existing processes. Although many techniques are used to obfuscate malicious code in transit and at rest on nonvolatile storage (such as system disks), code that executes must be fed in a non‐obfuscated way into the processor. Since the processor uses memory as its storage space, analysis of random access memory (RAM) is a critical component in the incident response process. In this chapter, we'll look at ways to access and capture system memory from both local and remote systems. We'll delve more deeply into the analysis of memory in Chapter 9, “Memory Analysis.”

Order of Volatility

One of the core tenets of digital forensics is that, to the greatest extent possible, you should preserve the digital evidence in an unaltered state. We want all interaction with systems involved in investigations to be methodically performed to minimize any changes that we cause to the system and the data it contains. Digital storage can be categorized as either volatile or nonvolatile. Volatile storage requires a constant flow of electricity to maintain its state. The most obvious example of this type of storage is RAM. Nonvolatile storage, on the other hand, does not require a constant flow of electricity to retain its data. System disks, whether solid‐state or spinning platters, are examples of nonvolatile storage. When you collect evidence from a system, collect the most volatile data first to ensure that it is in as pristine a state as possible. 
We refer to this concept as the “order of volatility”; it should guide your evidence collection efforts during an incident response. Locard's exchange principle is a concept of forensic science that, in its broadest sense, states that as a person interacts with a crime scene, they both bring something to the scene and take something away from it. This concept is why law enforcement will seal off the area around a crime scene, detail the comings and goings of any personnel entering the crime scene, and use protective equipment to minimize the exchange between the investigators and the evidence that they seek to collect. The principle applies equally to digital investigations. As you interact with a system, you should always be cognizant of any changes that the interactions may cause and their potential impact on the value of the data on the system as evidence. This desire to preserve evidence led to the prevalence of dead‐box forensics, where the power was immediately removed from the system to protect the data stored on nonvolatile storage from any type of change. A verifiable, bit‐for‐bit image of the data contained on each drive would be collected through a forensic imaging process, and that image would be analyzed using a separate forensic analysis workstation. Since RAM is critically important to incident investigations, this approach needs to be reconsidered for incident response. The process of removing power from the system deletes the very evidence in RAM that we wish to preserve. This is a classic example of attackers modifying their techniques based on the processes followed by incident responders. Knowing that many investigators' first move would be to unplug a system led many attackers to begin relying more on RAM to store their malicious code. Despite the fact that the code would not be persistent across reboots, the stealthy advantage of existing only in RAM provided the attackers with sufficient advantage to justify the loss of persistence. 
When files were stored on disk, they were often stored in an encrypted or encoded format to thwart offline analysis. System RAM is in a constant state of flux. Even when a user is taking no action at the keyboard or mouse, modern operating systems are conducting many different functions under the hood. Network communications frequently occur, Address Resolution Protocol (ARP) caches update, security scans are performed, system optimization routines run, remote users take actions on the system, and so forth. All this activity means that even if the incident responder puts their hands behind their back, does not touch the keyboard, and merely observes the system, changes are occurring. Because RAM is involved in the ongoing operation of the system, getting a completely pristine, bit‐for‐bit forensic image of RAM as it exists at an exact moment in time, as you'll see later in this chapter, is not a viable objective. However, you do want to minimize the changes that you cause to the system during the incident response process. Data on disk, even after it is deleted, may still be recoverable from unallocated space with forensic techniques. Any actions that you take that cause the system to write to disk may overwrite potentially valuable evidence from unallocated space. Similarly, simply unplugging a system would cause you to lose valuable information from the volatile memory of the system. The collection process must therefore consider the realities of the technologies with which you are working to obtain the evidence necessary for your investigation, while minimizing the changes to that evidence caused by your actions. In this chapter, we'll cover ways to capture the contents of system RAM to preserve them for later analysis. RAM is extremely volatile, so this should be the first evidence that you collect once you determine that a system is impacted by an incident. 
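The order-of-volatility principle lends itself to a simple sketch: rank each evidence source by how quickly it decays, then collect in ascending rank order. The sources and numeric ranks below are our own illustrative choices based on the discussion above, not a canonical list.

```python
# Illustrative ranking: lower rank = more volatile = collect first.
# RAM decays the moment power is lost; network state churns constantly;
# swap and disk persist across reboots; backups are the most stable.
EVIDENCE_SOURCES = [
    ("disk image", 4),
    ("RAM contents", 1),
    ("page file / swap space", 3),
    ("network connections and ARP cache", 2),
    ("archived backups", 5),
]


def collection_order(sources):
    """Sort evidence sources so the most volatile are collected first."""
    return [name for name, rank in sorted(sources, key=lambda s: s[1])]


print(collection_order(EVIDENCE_SOURCES)[0])  # RAM contents
```

Whatever ranking your team settles on, the mechanical point stands: the collection plan is a sort by volatility, with RAM at the front of the queue.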
Keep in mind, however, that since RAM is in constant flux, your analysis tools may not always be able to parse the captured data correctly. To ensure that you have access to the information you need, after you collect RAM, you should still interrogate the system at the command line using the techniques discussed in Chapter 4, “Remote Triage Tools,” in order to collect evidence of activity that is occurring on the system. You can also leverage tools like osquery and Velociraptor (both mentioned in Chapter 4) to obtain additional information about activity on the system before taking it offline for disk imaging or containment. Although you must always be cognizant of any changes made to the systems under investigation, you must also conduct your incident response in an efficient and effective manner. The scope of modern incidents and the vast amount of data stored on each potentially involved device means that it is no longer feasible for full disk imaging and analysis to be done on every system. Remote triage to identify systems of interest, capture and analysis of volatile memory, and targeted acquisition and analysis of data on disk is an appropriate and effective strategy when responding to an incident. All actions you take should be thoroughly understood and documented during the incident response process so that the reasonableness of each action can be clearly understood by anyone who may later review your actions, including senior management or court officers. You should always be guided by forensics best practices, but you must apply them in a manner that accounts for the realities of the technologies involved and the need to understand and resolve incidents in a timely manner. Hence the name of this book: Applied Incident Response. 
Local Memory Collection

Collection of data stored in the RAM of a local system is often conducted by attaching an external storage device to the system and running a RAM acquisition utility to copy the contents of the system memory to that external media. Note that we use the word copy rather than image intentionally in this case. Because RAM is constantly in flux while the system is in operation, and because the act of copying several gigabytes of data to removable media takes time, before the copy operation has been completed, the data copied from RAM pages early in the process may have changed by the time the last pages of memory have been written to the external media (sometimes referred to as “RAM smear”). Although the term “RAM image” is sometimes used to refer to the contents of memory that have been dumped to a file for analysis, the term is a misnomer. Whereas nonvolatile disks can be powered off to preserve their data, attached to a forensics write blocker to avoid any changes being made during the imaging process, and a bit‐for‐bit identical image produced, the same cannot be done with RAM. Additionally, the tool used to make the copy from RAM must itself be loaded into RAM to execute, causing changes to the data as it does so. For these reasons, we'll avoid the verb image when discussing the process of collecting data from RAM and will instead use terms like acquire, copy, or dump. To minimize the changes that occur within RAM during the RAM acquisition process, you'll want to select media to which you can write as quickly as possible. External USB solid‐state drives tend to provide the fastest write times. USB thumb drives are another media choice that is often used during incident response. When purchasing external media for the purposes of collecting RAM, research the write speeds of the device and select the fastest devices that fit within your budget. Note that the marketing behind USB devices can be a bit confusing. 
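Why "copy" rather than "image"? A toy simulation makes the point: if the source keeps changing while you read it page by page, the resulting dump matches neither the starting nor the final state of memory, which is exactly the RAM smear described above. This is purely illustrative Python, not an acquisition tool, and the "mutation" step stands in for ordinary OS activity during the dump.

```python
import hashlib


def acquire(memory: bytearray, page_size: int = 4) -> bytes:
    """Copy 'memory' page by page while the (simulated) live system
    keeps writing to pages we have already copied -- i.e., RAM smear."""
    dump = bytearray()
    for offset in range(0, len(memory), page_size):
        dump += memory[offset:offset + page_size]
        # Simulate ongoing OS activity: a page we already copied changes,
        # so the finished dump no longer matches live memory.
        memory[0] = (memory[0] + 1) % 256
    return bytes(dump)


live_ram = bytearray(b"ABCDEFGHIJKLMNOP")
dump = acquire(live_ram)

# The dump is internally consistent, but its hash does not match the
# final state of live memory -- there is no single "true" image to
# verify against, unlike a write-blocked disk.
print(hashlib.sha256(dump).hexdigest()
      == hashlib.sha256(bytes(live_ram)).hexdigest())  # False
```

A second acquisition pass would hash differently again, which is why RAM dumps are documented with the time and tool of acquisition rather than verified bit-for-bit against the source.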
USB 3.0 is the same as USB 3.1 Generation 1, with both providing a maximum theoretical transfer speed of 5 Gbps. USB 3.1 Generation 2 devices are currently less common but are capable of theoretical transfer speeds up to 10 Gbps. USB type A and type C merely describe the physical connection of the device and do not indicate anything relating to the transfer speed. Additionally, the chips contained within a device will also heavily impact the write speed that the device can achieve. Higher‐end devices that use faster NAND memory chips will frequently advertise their maximum write speeds to differentiate themselves from the competition and explain their higher cost. Write speed can translate into significant differences in practical application. For example, dumping the RAM on a laptop with 32 GB of RAM took us 30 minutes with a mid‐priced USB 3.1 Generation 1 (also known as USB 3.0) thumb drive, but it took only 10 minutes using a USB 3.1 Generation 2 solid‐state drive. Using faster media not only saves time, but it also reduces the amount of change to the data in RAM that occurs while the dump is being made. When selecting media, ensure that you have adequate storage capacity on each device in order to receive not only a copy of the active RAM, but also files associated with memory, such as the page file or swap space and even files from disk that are currently being referenced by processes in memory, so that you have space to dump all relevant information for that system onto the same storage media.

Preparing Storage Media

Once you have acquired a suitable external storage device, you'll need to prepare it for use. To avoid the possibility of cross‐contamination from other incidents or malware infection from general‐purpose use, you must forensically wipe your storage media prior to using it in an incident. 
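The back-of-the-envelope arithmetic behind those timings is simple. The sustained write speeds below are assumed round numbers consistent with the example above (roughly 18 MB/s for a mid-range thumb drive, 55 MB/s for an external SSD), not vendor specifications; real sustained throughput is far below the 5 or 10 Gbps theoretical bus speeds.

```python
def dump_minutes(ram_gb: float, write_mb_per_s: float) -> float:
    """Minutes needed to write ram_gb gigabytes of RAM to external
    media at a given sustained write speed (MB/s), ignoring overhead."""
    total_mb = ram_gb * 1024          # GB -> MB
    seconds = total_mb / write_mb_per_s
    return seconds / 60


# Assumed sustained speeds, chosen to match the example in the text:
print(round(dump_minutes(32, 18)))    # ~30 minutes on a thumb drive
print(round(dump_minutes(32, 55)))    # ~10 minutes on an external SSD
```

Running the numbers before an engagement tells you how long a system will sit in a changing state while its RAM is dumped, which is a useful input when prioritizing which hosts to acquire first.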
Forensically wiping simply means overwriting all bits on the device with a known pattern, typically all zeros, to ensure that any data previously stored on the device is no longer present. Although you could use a Windows full format to delete logical data in the current volume of the removable device, you may not be deleting all data that exists outside the confines of that volume. It is therefore preferable to use a tool designed specifically to wipe all the data from the storage media. One open‐source possibility is the Paladin Linux toolkit made available for free by the folks at www.sumuri.com. A continuation of the previous Raptor project, Paladin provides several useful forensics tools in a bootable Linux distribution, including tools for wiping media, imaging storage devices, and conducting forensic analysis. Paladin can be downloaded from the Sumuri website at no cost. Although Sumuri lists a suggested price on the site, you can adjust that price to zero rather than make a voluntary contribution at the time of download. The downloadable Paladin manual, also on the Sumuri website, provides instructions for creating a bootable USB drive from the Paladin ISO, which is a forensically sound version of Ubuntu. Once Paladin is set up on your bootable USB, you boot your analysis system (not the target of your incident response) into the Paladin Linux distribution, connect the media to be wiped, and open the Paladin Toolbox. When Paladin boots, it mounts any internal media as read only so that no changes are made to the host system. If you prefer, you can boot the ISO file in a virtual machine, but for cleaning previously used USB devices, we prefer booting our computer directly into Paladin to reduce the chance of accidentally exposing our host OS to potential malware from past incidents. 
Once the Paladin Toolbox is open, select Disk Manager from the menu on the left side, highlight the device (not the volume/partition) that you wish to wipe, and click the Wipe button on the right of the Toolbox window. You'll be prompted to ensure that you wish to continue (please make sure you have selected the right device to avoid wiping your system drive or other critical data). Once you confirm that you are sure you selected the right device, Paladin asks if you would like to verify the wipe, which will read back the data on the media to ensure that all data was indeed wiped correctly. Once you make your selection (it's always best to choose to verify), Paladin will mount the selected device as read/write and proceed to wipe the media (see Figure 5.1). Under the hood, Paladin uses a wiping utility developed by the U.S. Department of Defense to perform the wipe operation. You can follow detailed information about the process in the Wipe and Verify tabs at the bottom of the toolbox, or you can monitor the overall progress of the wipe and verify process (which can take some time depending on the capacity and transfer speed of your device) in the Task Logs tab near the bottom of the Toolbox window. When the process has completed, you can use the Format button located to the left of the Wipe button to add a filesystem to your device. Depending on your intended target system, choose a suitable filesystem, but avoid FAT since its 4 GB file size limitation may impede your ability to collect RAM if your collection tool dumps the contents of RAM to a single file. For Windows systems, NTFS or exFAT works fine, and for *nix systems, Ext4 or exFAT is often suitable. HFS+ is also available for macOS systems. Alternatively, simply plug the wiped USB into your analysis computer and use the native operating system to format the media to avoid any compatibility issues that sometimes occur when formatting a drive on Paladin for use with other operating systems. 
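The wipe-and-verify steps Paladin performs can be illustrated with a small sketch that operates on an ordinary file standing in for removable media. This is only a conceptual outline: real wiping tools write to the raw device, handle I/O errors, and often support multiple overwrite patterns.

```python
import os
import tempfile

CHUNK = 1024 * 1024  # write/read in 1 MiB chunks


def wipe(path: str) -> None:
    """Overwrite every byte of the target with zeros (a 'forensic wipe')."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining:
            n = min(CHUNK, remaining)
            f.write(b"\x00" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # make sure the zeros actually hit the media


def verify_wiped(path: str) -> bool:
    """Read the target back and confirm every byte is zero."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            if chunk.count(0) != len(chunk):
                return False
    return True


# Demonstrate on a small temp file standing in for a used USB drive.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(3 * 1024 * 1024))  # "previous case data"
    target = tmp.name
wipe(target)
print(verify_wiped(target))  # True
os.unlink(target)
```

The verify pass is the part worth internalizing: a wipe you have not read back is an assumption, not a fact, which is why choosing Paladin's verify option is recommended above.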
Figure 5.1: Paladin Toolbox being used to wipe removable media

The final step in preparing your media is to copy your RAM acquisition utility of choice onto the prepared drive. There are several different utilities that you can use, including Magnet RAM Capture from Magnet Forensics, DumpIt from Comae Technologies, Live RAM Capturer from Belkasoft, and FTK Imager Lite from AccessData, among others. We'll use the open-source and extremely capable pmem utilities (WinPmem, LinPmem, and OSXPmem), originally part of the Rekall project, for most of our examples in this chapter. You can find them for free on GitHub, as discussed in the next section.

The Collection Process

Rekall is a fork of the Volatility project, the primary difference being that the team behind Rekall (a team at Google) wanted to expand the capabilities of the Volatility project to include live memory analysis. As systems, particularly servers, continue to use increasingly large amounts of RAM, the issues involved in collecting entire RAM dumps—and the potential for memory corruption to occur from the time the collect
URLs

https://github.com/google/rekall/releases

https://github.com/Velocidex/c-aff4/tree/master/tools/pmem

https://github.com/Velocidex/c-aff4/releases

https://docs.microsoft.com/en-us/sysinternals/downloads

https://docs.microsoft.com/en-us/powershell/scripting/learn/remoting/ps-remoting-second-hop

https://digital-forensics.sans.org/media/rekall-memory-forensics-cheatsheet.pdf

https://digital-forensics.sans.org/media/Poster_Memory_Forensics.pdf

https://github.com/google/rekall

CHAPTER 6
Disk Imaging

Years ago, when a system was suspected to have been compromised, incident handlers would run a few command-line utilities to extract basic information from memory, then power off the system, remove the hard drive, and capture a forensic image of the system for analysis. This was done routinely across many systems when performing incident response. Although the approach to incident response has changed since then, capturing a forensically sound image of an impacted system is still an important skill for an incident responder to have. We may not make a full-disk image of as many systems as in the past, instead relying on memory forensics, remote triage, and other updated techniques, but there are times when a full forensic image and subsequent analysis is the most appropriate step for an incident responder to take. Frequently, this analysis will be done on systems during the early stages of the incident to better understand the tactics, techniques, and procedures (TTPs) of the adversary, to identify potential indicators of compromise that can be used to locate other impacted systems, and to preserve evidence of the incident should it be needed for future legal action.

Protecting the Integrity of Evidence

Preserving the integrity of the evidence is the cornerstone of digital forensic imaging. The imaging process is not a simple copy of data from one device to another, but rather a scientifically verifiable activity that captures all the data contained on one device into an image file that can be verified as a true and accurate representation of the original at any time. To achieve this level of accuracy, we rely on two fundamental tools: write blockers and hash algorithms. A write blocker is a hardware device or software program that allows data to be read from a piece of digital media without allowing any writes, or changes, to be made to that piece of media. 
A hardware write blocker is typically employed by connecting the media to be imaged to a separate forensic analysis workstation. Software write blockers are often used by booting the target system into a separate, forensically sound, operating system that can implement write blocking through software (such as by mounting the original media as read only). In both cases, the original operating system is not functioning at the time of the imaging and no changes are being made to the storage media while the images are collected. One‐way hash functions verify that the forensic image is an exact, bit‐for‐bit duplicate of the original. A one‐way hash algorithm is a mathematical function that accepts a stream of data as input, performs mathematical calculations on the input, and produces a fixed‐size output that we call a hash value. As long as the same input is provided to the same hash algorithm, the resulting hash value will always be the same. If the input changes by so much as one bit, the hash value resulting from the hash algorithm will be completely different. As our imaging tool reads the data from the original storage device, it calculates the hash value of the data stream as it is read. This data stream is then written to an image file on a separate piece of storage media. After the image file is created, the contents of the image file are read back and the hash value of the data in the image file is calculated. If the hash value of the data read from the original media is the same as the hash value of the data read from the image file, then the image is a 100 percent accurate duplication of the original. Forensic analysts leverage hash values in other ways as well. If the hash values of two files are the same, then by extension, their contents are also the same. If the hash values of two different files are different, then their contents are different. We can therefore use hash values to identify duplicate copies of data files on storage media being examined. 
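The duplicate-identification idea can be sketched with standard command-line hashing tools. The directory names and file contents below are invented for the example; the point is that matching hash values identify matching content regardless of filename:

```shell
# Reference copy and "evidence" media (hypothetical example directories)
mkdir -p known evidence
echo "standard system library content" > known/library.dll
cp known/library.dll evidence/renamed_copy.bin   # same content, different name
echo "unique user document" > evidence/notes.txt

# Build a hash set from the known files
sha256sum known/* | awk '{print $1}' > known_hashes.txt

# Hash every file on the evidence media and flag any content matching a
# known hash -- renamed_copy.bin is flagged, notes.txt is not
sha256sum evidence/* | grep -F -f known_hashes.txt
```

The same pattern works in reverse: hash a known piece of stolen intellectual property, then search the evidence hashes for that one value.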
We can also compile or obtain sets of hash values of known files, such as default installation files for Windows or Office products, to identify files that are not likely to be of investigative interest during forensic examination (the National Institute of Standards and Technology offers such hash sets for free at www.nist.gov/itl/ssd/software-quality-group/nsrl-download). Similarly, if we want to know whether a specific file is located on a piece of media (such as determining if a piece of stolen intellectual property is present), we can first calculate the hash value of the file in question and then scan for any other file with a matching hash value on that media. If we find a matching hash, that confirms the file is located on that media. Many different hash algorithms can be used. Some of the more common ones encountered in digital forensics are Message‐Digest Algorithm 5 (MD5), Secure Hash Algorithm 1 (SHA‐1), and SHA‐256. Each of these algorithms ideally produces a unique hash value for each data input. In recent years, MD5 and SHA‐1 have been found to be susceptible to forced collision vulnerabilities that can generate two separate data inputs that produce the same hash value. Although this vulnerability has been of significance for technologies such as digital certificates, its significance in the digital forensics community has been less severe. Hash values in digital forensics verify the accuracy of a forensic image and identify specific files that may be present on storage media. It is highly unlikely that random copy errors would generate colliding hash values; however, to safeguard against any potential risk, you should calculate hash values using two separate algorithms. There is currently no known vulnerability that could intentionally generate collisions across both MD5 and SHA‐1 for the same piece of data. 
By confirming that both the MD5 and SHA‐1 hashes are identical for the source data and the image, you can prove with certainty that the image is an exact duplicate of the original. The process of capturing a forensic image from storage media while the system itself is not running is sometimes called “dead‐box forensics.” However, we do not always have the luxury of turning a system off in order to collect an image of its data. For example, if we believe that a critical server has been impacted by an incident, turning the server off may produce a denial‐of‐service condition that is unacceptable to the business. Furthermore, taking a system off‐line may raise suspicions with the adversary that their presence has been detected. In such circumstances, we can perform a live forensic image of the system while it is still running. As we mentioned with acquiring memory, any time you are attempting to make a copy of data that is potentially changing, the best you are able to do is to verify that the data in the image is an accurate and true representation of each sector of the disk at the time it was read. The process to achieve an accurate image is similar to what we see with a dead‐box image. An imaging tool reads the data from the original media, calculates the hash value of the data as it is read, and then writes that same data to an image file. Once the image file is completed, the tool reads back the data from the image file and verifies that the hash value of the original matches the hash value of the image. It is important to note that if you were to then generate the hash value of the original drive again, it would not likely match the hash value of the image, since the original undergoes continuous changes while the system is running. The image is an accurate representation of the original system at the time the image was made, but that time has already passed by the time the image is completed. 
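The acquire-and-verify loop just described can be sketched with dd and the standard hashing utilities. This is an illustration of the principle rather than a substitute for a proper imaging tool; source.img is a practice file standing in for a device node:

```shell
# Practice "source media" (stands in for something like /dev/sdb)
dd if=/dev/urandom of=source.img bs=1M count=5 2>/dev/null

# Acquire: a single read pass feeds both the raw image file and the MD5
# hasher via tee; SHA-1 is computed in a second pass for simplicity
md5_src=$(dd if=source.img bs=1M 2>/dev/null | tee disk.raw | md5sum | awk '{print $1}')
sha1_src=$(sha1sum source.img | awk '{print $1}')

# Verify: read the completed image back and recompute both hashes
md5_img=$(md5sum disk.raw | awk '{print $1}')
sha1_img=$(sha1sum disk.raw | awk '{print $1}')

[ "$md5_src" = "$md5_img" ] && [ "$sha1_src" = "$sha1_img" ] && echo "image verified"
```

On a live system, the source hashes would reflect the disk only at the moment each sector was read, which is exactly the limitation described above.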
Here's another factor to consider when making a live system image: each file on an operating system has a series of time stamps that record things such as when the file was created, modified, or accessed. We will examine time stamps in detail in Chapter 11, “Disk Forensics,” but for now, note that it is important when making a forensic image to not modify these time stamps. The original values can be used to facilitate analysis. If we were to simply use the operating system to make a logical copy of data, the time stamps associated with each file might be modified as we access each one in order to copy it to another piece of media. Instead, as we saw with capturing data from RAM, we must use tools that access the data on disk without going through the standard operating system mechanisms, so that file time stamps and other metadata are not modified. We can make different types of forensic images. We can duplicate some data (such as a single partition or volume) or all of the data on the original source media, and we can store that data in a number of different image formats. A complete image of all data on a piece of storage media is referred to as a physical image since it captures all the data stored on that physical device, from the first bit to the last. When we capture a subset of the data, such as a single partition, we refer to that as a logical image since it captures only the data on a logical subsection of the device. When writing the image file to storage media, we can use several different formats. The most basic is a raw, or dd, format, as has been created by the *nix dd utility for decades. In a raw image, the zeros and ones read from the source media are simply written into a file (or set of files) exactly as they were found on the source. Although this creates an accurate image, the resulting image files are not compressed and do not contain any metadata. 
Other formats, such as the Expert Witness Format (EWF or E01 format), emerged to allow the image data to be stored in a compressed format. These formats can also store metadata, such as the hash value of the original source media, periodic checksums to identify which areas are corrupt in the event of an issue with the image, case information, and similar administrative data that can be useful during a forensic analysis. This helps ensure that the image is not modified at any time during evidence handling or analysis. Other formats, including the Advanced Forensics File Format 4 (AFF4), which is an open-source forensic format (refer to www.aff4.org), and the EnCase Evidence File Format Version 2, an enhancement to the EWF format created by Guidance Software (now a part of OpenText), support encryption and other improvements. Files in the EnCase Evidence File Format Version 2 use the Ex01 extension. Not all tools support all formats, so you should plan your imaging to support the tools that you will use to conduct your analysis. The raw (dd) and EWF (E01) formats have the broadest support among most tools and are generally a safe choice if you are not sure in advance which format will best meet your needs. In addition to imaging entire physical drives or logical volumes, we can also encapsulate individual logical files and/or folders into an image file for preservation. In such cases, individual hash values would be calculated for each file so that the accuracy of the file duplication could be verified later. This type of imaging is discussed in the “Live Imaging” section later in this chapter. Regardless of the type of image that will be made, remember the order of volatility for collection of evidence as we discussed in Chapter 5, “Acquiring Memory.” The most volatile data should be collected first. 
Therefore, before imaging nonvolatile storage media, you should first collect a RAM dump, then interrogate the system through command-line queries, and finally move on to the collection of data from disk or solid-state media.

Dead-Box Imaging

When circumstances permit, the preferred method for taking a digital forensic image is with the target system not operating. This method allows us to control the imaging process and ensure that no changes are made to the original storage media (however, solid-state media can pose some challenges as discussed later in this section). This type of image is typically made by removing the storage media from the original system and connecting it to a separate forensic workstation through a hardware write blocker. Alternatively, this type of image can also be made by booting the original system into an alternate operating system, such as a forensically sound operating system provided by the forensic analyst. Examples include Paladin (available at www.sumuri.com) or the SIFT workstation (digital-forensics.sans.org/community/downloads). Dead-box imaging, where the original device's operating system is not being executed and changes to the original media are blocked, should be used whenever possible. In many circumstances, dead-box imaging is simply not viable. For example, some servers cannot be shut down due to operational reasons. At other times, you may come across whole disk encryption. If you do not possess the necessary key, or your forensic analysis software is not compatible with the encryption method being used, then imaging the running system while the encrypted volume is mounted may be your only option to gain access to the data, since shutting the system down would remove the keys necessary to decrypt and access the information stored on the encrypted volume. 
WIPED STORAGE MEDIA

As we discussed earlier, whenever you make a forensic image, you should store the resulting image files on a separate piece of storage media that has been forensically wiped and formatted. This is done to ensure that there is no potential cross-contamination from one case to another, and it is a best practice for handling digital evidence. Many different forensics tools, such as EnCase Forensic and Sumuri's Paladin, are capable of overwriting media in preparation for use.

Before we proceed, you need to understand a bit about the differences between spinning-platter and solid-state media. Solid-state drives provide faster, more reliable, nonvolatile storage for digital devices of all types. Along with these improvements, however, have come some challenges for digital forensic examiners. Solid-state drives are a fundamentally different technology from the magnetic spinning platters of the past, and many of the assumptions of digital forensics that were based on spinning-platter technology can no longer be taken for granted. For example, with spinning disks, data would frequently be recoverable for years after it had been deleted, since moving the read/write head back to each sector to overwrite data when it was no longer needed would be inefficient. As a result, old data could remain on the platter for a long period of time. With solid-state media, before new data can be written to a previously used area, that area must first be reset. This process is handled by the TRIM command, which is acted on by the controller of the solid-state device itself. Once data is marked for deletion in the filesystem, the solid-state device's firmware determines a reset schedule for that area of the media and prepares it for reuse. Once this preparation for reuse finishes, the data is no longer recoverable through forensic techniques. Be aware that a TRIM operation can occur without further interaction with the computer. 
The act of applying power to the solid-state drive can initiate a TRIM operation, even if that device is connected to a hardware write blocker. The TRIM operation then overwrites otherwise potentially recoverable, deleted data. Different makes and models of solid-state devices implement the TRIM command in different ways. Different operating systems and filesystems add to the complexity. Fully addressing this challenge is still an active area of research in digital forensics. Another aspect of solid-state technology that complicates digital forensics is wear leveling. Each area of the solid-state device can be overwritten only so many times before it is no longer reliable. If a file changes frequently, causing the area of the solid-state media where it is stored to be overwritten more frequently than other areas, the solid-state device controller will move that data to another area of the device. This is done in order to keep any one area from being used more than the rest of the device and helps ensure the overall longevity
URLs

https://accessdata.com/product-download

https://sumuri.com/software/paladin

https://marketing.accessdata.com/ftkimagerlite3.1.1

https://github.com/ArsenalRecon/Arsenal-Image-Mounter

CHAPTER 7
Network Security Monitoring

Many of the events that we have discussed so far are recorded on endpoints within your network; however, valuable information can also be found on the network itself. Network security monitoring (NSM) techniques are used to monitor network communications for security-relevant events. For maximum effect, we recommend combining full-packet capture with logging of network activity. One of the most robust open-source solutions to address NSM is Security Onion. This Linux distribution combines a multitude of different open-source projects into an expandable solution that rivals any commercial NSM product available. Although this chapter is focused on network activity, we will also explore the Elastic Stack and ways to integrate host-based data to provide enhanced visibility across your network.

Security Onion

The Security Onion project, started by Doug Burks in 2008, has evolved into a leading, open-source NSM platform. Since 2014, the project has been supported by both community volunteers and the team at Security Onion Solutions (https://securityonionsolutions.com), who offer commercial support services and online training courses for the tool. Security Onion integrates several powerful open-source projects to provide visibility into network traffic as well as host-based indicators of compromise. We will examine the architecture for deployment of Security Onion in an enterprise, examine each of the major tools integrated into the platform, and look at options for expanding the capability of Security Onion even further.

Architecture

Security Onion is a robust tool that collects, stores, and processes a vast amount of data from the network. To do so, it is best configured across multiple different pieces of hardware, each optimized to perform a specific function. The ideal deployment architecture is shown in Figure 7.1, which was drawn from https://securityonion.net/docs/Elastic-Architecture. 
Figure 7.1: The recommended Security Onion architecture
Source: Doug Burks, “Security Onion ‐ Distributed Deployment.” Created by Security Onion Solutions.

In Figure 7.1, the forward nodes (referred to as sensors in earlier versions of Security Onion) are placed throughout the network to provide the packet sniffing capability on which the entire system depends. Each forward node should consist of at least two network interfaces: one used for management, and one that is placed in monitor mode in order to capture traffic from the network. When traffic is captured, different types of processing are applied to that data. Full-packet capture is performed on the forward node, using Berkeley Packet Filters (BPFs) to minimize traffic as needed before storing the data in a local packet capture (pcap) file. In addition to full-packet capture, the open-source project Zeek (formerly called Bro and with many of its internal files and directories still bearing its legacy name) generates a series of logs to describe the network activity that is sniffed. These text-based logs require much less storage space than full-packet capture, allowing longer-term storage of metadata relating to network activity and facilitating rapid searching, as you will see later in this chapter. The data sniffed by the forward node is also processed against intrusion detection system (IDS) rule sets (using either Snort or Suricata). Once generated, the logs from the forward node are forwarded to the master node, by means of Syslog-NG and an IDS agent. The pcap files remain local to the forward node but can be queried remotely as necessary. The master node is responsible for coordinating activities throughout the entire Security Onion system. This includes receiving information from each forward node, hosting interfaces through which analysts may connect to the system, generating alerts in response to defined events, and coordinating searches across the other nodes as necessary. 
Although the master node stores some of the log data that it receives, much of that data can be offloaded to storage nodes. This allows the solution to scale as needed. Storage nodes receive log information from the master node, store that data, and use Elasticsearch to create indices for rapid searching. We will look at each of these software components throughout this chapter. The hardware requirements for each node, as well as alternate architectures to meet a variety of use case scenarios, are available at the official Security Onion online documentation site, located at https://securityonion.net/docs. A print copy of the documentation is also available through Amazon (www.amazon.com/dp/179779762X), with proceeds going to the Rural Technology Fund. A key consideration with any NSM solution is to ensure adequate and proper placement of the sensors used to sniff the traffic. Placing sensors only at your perimeter does not provide adequate visibility into network operations. Forward nodes, or sensors, are connected to network taps or span ports with at least one of their network interfaces placed in monitor mode to facilitate ingestion of all packets crossing that point on the network. It is important to place sensors on internal segments to gain visibility into communications between internal hosts for the detection of lateral movement, activity from malicious insiders, and many other security threats. As we discussed in Chapter 2, “Incident Readiness,” incorporating segmentation into your network architecture creates chokepoints through which data must travel. These chokepoints provide opportunities for not only preventive controls like firewalls, but also detective controls such as the placement of an NSM sensor. Sensor placement decisions should also account for network address translation and proxy activity that occurs on your network. 
If malicious traffic is detected by a sensor, it is best for the sensor to have visibility into the IP address of each endpoint to the communication rather than recording an IP address of an intermediary acting on behalf of another system. For example, assume a client makes a DNS request for a known‐malicious domain, providing a network indicator of compromise. If you are monitoring traffic only at the edge of your network as it leaves your environment and heads to the Internet, then the NSM sensor monitoring at that location will report that your DNS server made the request to the known‐malicious site in response to a recursive query issued by the client. The NSM sensor data will provide the IP address of your DNS server, not the IP address of the internal host that initiated the communication, so you will need to use other data sources to determine the source of the problem inside your environment. The same situation is true with web proxies, network address translation (NAT) servers, and any other device that relays a request on behalf of another system. Placement of NSM sensors should therefore take your network architecture into account, and enough sensors should be placed throughout the environment to provide adequate visibility across all segments as needed. Keep in mind that you may be able to leverage NetFlow or IPFIX (IP Flow Information Export) data generated by network appliances to augment dedicated NSM sensors in your environment and increase overall network visibility. You can also ingest logs from DNS servers, proxy servers, and email servers to provide additional data points for analysis and help fill any gaps that may exist in your sensor deployment. Another consideration for NSM is encrypted traffic. As more network communication is encrypted by default, insights gleaned through traffic monitoring may be restricted. 
One approach to this problem is to introduce TLS/SSL (Transport Layer Security/Secure Sockets Layer) decryption devices, which terminate encrypted connections between clients and the decryption device, and then reinitiate new encrypted sessions outbound to the intended recipient to allow for man-in-the-middle monitoring of the communication in transit. These systems have technical, privacy, and legal considerations that must be addressed should you choose to use them. Although that may be suitable in some situations, in many cases decrypting network traffic may hurt overall security more than help it. Carefully evaluate the security implications of any such solution under consideration, taking into account how you will protect user privacy, maintain accountability, and comply with any legal constraints that may be present in your jurisdiction. In addition to the policy considerations, many TLS/SSL inspection boxes use less secure encryption algorithms to reduce the load on the system when initiating the outbound connection to the endpoint. Carefully evaluate any solution to ensure that you are not introducing cryptographic weakness into your communications as a result. Security Onion, and most other NSM sensors, can restrict the traffic being captured using BPFs. By doing so, you can avoid capturing large amounts of encrypted communications that would take up storage, reduce retention time for packet captures of non-encrypted communication, and put additional load on your sensors. Deployment of each sensor involves proper filtering of the packets captured and tuning of any IDS rules being applied. 
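As an illustration of this kind of filtering, a BPF expression such as the following could exclude a trusted backup server and bulk TLS traffic from a storage VLAN from full-packet capture. The hosts and networks are hypothetical; the syntax is the standard BPF used by tcpdump and by Security Onion's per-sensor filter configuration:

```
not host 10.0.5.20 and not (net 10.0.80.0/24 and port 443)
```

Any such filter should be documented, since traffic it excludes will be invisible to later analysis.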
The specifics will be unique to each environment, and you can find additional guidance to apply to your situation here: https://github.com/security-onion-solutions/security-onion/wiki/PostInstallation

As we will discuss later in this chapter, there is much that can be done to identify malicious traffic even when it is using TLS/SSL by examining the non-encrypted portions of the communication. Security Onion also uses host-based data sources in order to provide additional visibility to help overcome the challenges presented by encrypted network communications.

Tools

Security Onion includes many tools, so we will focus on the most critical components. Keep in mind that none of these tools is unique to the Security Onion distribution. Each can be downloaded separately from its open-source project site and implemented independently of Security Onion if desired. To evaluate Security Onion in a simple manner, the ISO installation routine offers an Evaluation Mode that installs all required services on one system, which can be a virtual machine. To experiment with the various tools as we discuss them in this chapter, you can download the ISO from: https://github.com/Security-Onion-Solutions/security-onion/blob/master/Verify_ISO.md

Security Onion provides flexibility as to which tools are implemented during a production system installation and allows users to choose specific components to meet their needs. For example, you can choose between using Snort or Suricata as the IDS for your Security Onion installation. Snort and Suricata each have their strengths and weaknesses, which you should evaluate before making a decision in a production environment; however, if you choose the Evaluation Mode during the installation process, Snort is installed by default. For this reason, we will focus on Snort in this discussion, but realize that Suricata is a powerful IDS and may offer advantages in some cases. You can learn more about Suricata at https://suricata-ids.org. 
PCAP PLAYBACK AND ANALYSIS

In addition to being useful for continuous monitoring of a live network, Security Onion is also very effective at network forensic analysis of historical packet captures (pcap files) and comes with utilities to replay previously recorded pcap files. One such tool, tcpreplay, replays the contents of a pcap file as if it were being detected in real time by your sniffing interface. Using tcpreplay, the time stamps recorded for the traffic being replayed will be the current time, since the sniffing interface will consider it to be a live communication happening as it is replayed. It is often desirable to ingest the contents of the pcap file into Security Onion for analysis while retaining the original time stamps from when the traffic was first recorded. To accomplish this, Security Onion provides the so-import-pcap utility. This tool should be used only on a stand-alone analysis workstation and not on a production Security Onion deployment, since it stops services and makes changes to the system. An analyst can quickly spin up a virtual machine configured with a complete Security Onion installation, use so-import-pcap to import a previously captured pcap, and leverage all the benefits of Security Onion to analyze that network communication. Security Onion comes with sample pcap files inside its /opt/samples directory, as explained in more detail at https://securityonion.net/docs/pcaps. These files can be automatically replayed in the Evaluation Mode installation of Security Onion with the sudo so-replay command. This approach provides a convenient way to experiment with Security Onion's features. You can also find loads of sample pcap files and traffic analysis exercises (with solutions) to continue practicing with these techniques at www.malware-traffic-analysis.net.

Snort, Sguil, and Squert

Snort is a long-standing staple of open-source IDS and is freely available from www.snort.org. Like any IDS, Snort relies on a series of rules that describe known malicious behavior. 
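To give a sense of what such a rule looks like, the following is an invented example in Snort 2 rule syntax; the message, content string, and SID are ours and do not correspond to a real threat:

```
alert tcp $HOME_NET any -> $EXTERNAL_NET 80 (msg:"EXAMPLE Suspicious downloader User-Agent"; \
    flow:to_server,established; content:"User-Agent|3a| EvilDownloader"; http_header; \
    classtype:trojan-activity; sid:1000001; rev:1;)
```

The header before the parentheses defines the action, protocol, source, destination, and direction; the options inside the parentheses define what to match and how the alert is labeled. Local rules conventionally use SIDs of 1,000,000 and above to avoid colliding with distributed rule sets.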
Snort can be configured to passively alert when a signature match is detected (the default behavior within Security Onion), or it can be set up as an inline intrusion prevention system (IPS). Snort supports a variety of feeds to provide up-to-date signatures to detect threats. The community rules are freely available to anyone, and the Registered rule set is available for free with registration. The Subscription rule set is available only with a commercial agreement. With Security Onion, rules are configured to automatically update each day using the freely available community rule set. Additional rule sets can be configured as desired. An instance of Snort runs on each forward node within the Security Onion architecture. Each forward node sends the logs generated by Snort to the master node, where they are stored in a companion open-source product known as Sguil (https://bammv.github.io/sguil/index.html). Sguil stores data in its server component (sguild) and provides access to that data through its associated Sguil GUI client (sguil.tk). In addition to data from Snort, host-based data from Open Source HIDS SECurity (OSSEC) alerts (an open-source, host-based IDS and file integrity checker available from www.ossec.net) and/or other optional data sources are collected and displayed. All alerts received by Sguil are placed into a real-time queue where they await examination and categorization by a security analyst. As with any IDS, rule sets must be properly tuned to your environment to surface high-value alerts while reducing false positives or unnecessary alerts that can add an overwhelming amount of data to the queue, preventing analysts from having time to sort through all the events being reported. Figure 7.2 shows the RealTime Events queue of the Sguil client interface.

Figure 7.2: The RealTime Events queue of the Sguil interface

In Figure 7.2, you can see that the real-time queue contains many alerts waiting to be categorized. 
The alerts come from any configured data source, in this case OSSEC host‐based IDS agents and the Snort Emerging Threats (ET) rule set that is being used to monitor the communication sniffed by Snort on sensor so‐ens34‐1. For each alert, we are given basic information such as the date and time, the source and destination IP addresses and ports, the protocol number used in the communication, and the message configured for the specific alert rule that was triggered. Each alert is also assigned an Alert ID as it is ingested into the Sguil database. Below the real‐time event queue on the right side, you can see the details of the Snort rule for the event that is highlighted (in this case, an alert that traffic matching the behavior of the Tibs/Harnig Downloader has been detected between an internal and external host). Within the Snort alert itself, we can see there is a reference URL for additional information regarding the specific threat. Below the alert, we see a small sample of the packet capture that Snort believes is malicious. To move an event out of the real‐time qu
URLs

https://securityonionsolutions.com

https://securityonion.net/docs/Elastic-Architecture

https://securityonion.net/docs

https://github.com/security-onion-solutions/security-onion/wiki/PostInstallation

https://github.com/Security-Onion-Solutions/security-onion/blob/master/Verify_ISO.md

https://suricata-ids.org

https://securityonion.net/docs/pcaps

https://bammv.github.io/sguil/index.html

https://docs.zeek.org/en/stable/script-reference/log-files.html

https://docs.zeek.org/en/stable

https://docs.zeek.org/en/stable/scripts/base/protocols/dns/main.bro.html#type-DNS::Info

http://doc.emergingthreats.net/2025431"

https://lucene.apache.org

https://securityonion.readthedocs.io/en/latest/freqserver.html

https://securityonion.readthedocs.io/en/latest/domainstats.html

https://github.com/salesforce/ja3

CHAPTER 8 Event Log Analysis Microsoft Windows provides detailed auditing capabilities that have improved with each new operating system version. The event logging service can generate a vast amount of information about account logons, file and system access, changes to system configurations, process tracking, and much more. These logs can be stored locally, or they can leverage Windows Event Forwarding (WEF) to store event logs on a remote Windows system. Microsoft provides access to event log data through the built‐in Event Viewer application and through PowerShell cmdlets that allow for queries leveraging PowerShell Remoting across the network. Event logs can also be centralized to a third‐party security information and event management (SIEM) solution for aggregation and analysis. With proper tuning and log retention, event logs can be an extremely powerful tool for incident responders. Understanding Event Logs An event is an observable activity that occurs on the system. The Windows event logging service can record five different types of event record: Error, Warning, Information, Success Audit, and Failure Audit. All of these have a defined set of data that is recorded for each event, as well as additional, event‐specific details that may be recorded depending on the type of event. Each event can be recorded in an event log record. Event log records are written to event log files by event log sources (programs capable of writing to the event logs). Modern Windows systems have a variety of event logs to which event log sources may write event log records. All Windows systems have the primary Windows logs: Application, Security, and System. Additional Applications and Services Logs are used for specific purposes on each Windows system as well. The built‐in Event Viewer utility provides an easy way to look at event logs on your local system. Figure 8.1 shows the Windows Logs and the Applications and Services Logs on a Windows Server 2019 domain controller. 
Figure 8.1 : Event Viewer showing default logs on Windows Server 2019 Under Windows Logs shown in the left pane of Figure 8.1, you can see the default Windows event logs. From an incident response perspective, the Security log is one of the most useful sources of log information. The System log records events related to the operating system and device drivers and is helpful for system administration and troubleshooting, but some System event log records may also be of use during incident response. The Application log can be written to by various applications on the system, so its contents will vary depending on the applications installed and the associated audit settings that are configured. The Setup log is populated during initial operating system installation. Forwarded Events is the default location to receive events forwarded from other systems using log subscriptions. For remote logging, a remote system running the Windows Event Collector service subscribes to receive event logs produced by other systems. The types of logs to be collected can be specified at a granular level, and transport occurs over HTTPS on port 5986 using WinRM. Group Policy Objects (GPOs) can be used to configure the remote logging facilities on each computer. In addition to the Application log that is located under the Windows Logs category, Windows provides an Applications and Services Logs category. As shown in Figure 8.1, several additional default logs appear under this category on the example domain controller. The logs located here are geared toward specific event types and may therefore retain events for a longer period of time before reaching their maximum size. This is because fewer events are recorded in each log compared to the more general‐purpose Security event log (by default, once an event log reaches its maximum size, the oldest events are deleted as new events are recorded). 
For this reason, these Applications and Services Logs can be useful to incident responders when trying to determine information about events that happened in the past, particularly if the more active Windows Logs have already rolled over due to size restrictions. Remember, however, that backup copies and volume shadow copies (discussed in Chapter 11, “Disk Forensics”) of log files may exist even when their maximum size is reached and events begin to be deleted. Deleted event log records may also be recoverable forensically from unallocated disk space. You should plan your log retention strategy to ensure that critical security events are available for a sufficient period of time. Your strategy should include defining the retention for both locally stored logs as well as the retention period of logs once they are aggregated to a SIEM solution or similar central log storage. The optimal retention time for each will be specific to your environment, but consider that the average dwell time of an attacker before being detected is currently months, not days, when making that decision. Regulatory requirements may also exist that require retention of certain logs for even longer periods of time. The right pane shown in Figure 8.1 provides access to several actions you can take to interact with the event logs. Filter Current Log allows you to search on criteria such as time, event level (Critical, Warning, etc.), event sources, event IDs, keywords, and others, as shown in Figure 8.2. Note that the User field may not behave as you would hope. Often it contains N/A for events, depending on the event source generating the event. You'll learn specific techniques to locate logon, account logon, object access, and other events based on the associated user account as we proceed through this chapter. 
Figure 8.2 : The filter options in Event Viewer Once you have filtered on the types of events of interest to your particular query, you can then specify a search term and use the Find feature in the Actions menu to look for specific events, such as a user account name. If you find yourself using a filter setting repeatedly, you can save it as a custom view for quick access in the future by choosing Create Custom View from the Action menu. This will save your settings under the Custom Views section of the Event Viewer, as seen in Figure 8.1, so that you can simply click that custom view to apply the same filter in the future. Once filtered, you can export the matching events to a separate file by right‐clicking the applicable custom view and choosing the option entitled Save All Events in Custom View As. Each of the five types of events mentioned at the beginning of this chapter follows the same format and uses the same fields to store its data. The standard fields can be seen at the bottom of Figure 8.3. In addition, there is an event‐specific data area (sometimes called the event description) that contains information for each event type. In many cases, this event‐specific data area will be where the most useful information for your investigation is stored. Figure 8.3 shows an interactive logon event. Figure 8.3 : A successful interactive logon event The fields shared by all event types are shown at the bottom of the figure. They include the items listed in Table 8.1. Table 8.1 : Default event log fields As shown in Figure 8.3, the event‐specific data area is displayed at the top of the record entry. As will be the case with many event records, the information of most value for this entry is contained within this event‐specific data area. Note that the User field at the bottom of Figure 8.3 is listed as N/A, and the actual information regarding the account name used to log on to the system is found in the event‐specific data area, under the New Logon section. 
In this case, we used the user account jlemburg, which is a domain account in the COMPANY domain. Event log entries are categorized by a series of different event identifiers (IDs). Each event ID records a specific type of entry for a specific event source. Learning the event IDs associated with important events, and the idiosyncrasies of the different event‐specific data areas for each of those event IDs, is an important skill for incident handlers. EDITING EVENT LOGS Event log files are not easy to manipulate, but it can be done. The Shadow Brokers released a tool called EventLogEdit that can unlink selected event log records so that they do not appear when queried through normal means, but the logs may still be recoverable forensically. Each event record entry is assigned a sequential EventRecordID when it is recorded (shown in Figure 8.4 in a moment), and malicious manipulation of event log files may result in gaps that can be detected in these record entries. Additional details can be found here: https://blog.fox-it.com/2017/12/08/detection-and-recovery-of-nsas-covered-up-tracks Retention of logs in secure, centralized locations, such as a SIEM solution, can help mitigate any risk of log tampering. The data for each event log record is stored as binary XML within an event log file. Event log files end in the extension .evtx (the older .evt extension was used for a previous, binary data format in older Windows systems). These EVTX files are mostly located in the C:\Windows\System32\winevt\Logs directory. You can view the XML associated with an event log record by clicking the Details tab near the top of the event log record (shown in Figure 8.3 and Figure 8.4) and choosing XML View. The information for the default fields is stored within the System element, followed by the event‐specific data area information in the EventData element, as shown in Figure 8.4. 
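Once a record is exported as XML, that System/EventData layout can be parsed directly. The following stdlib-only sketch pulls the default fields and the event-specific data area into Python structures; the sample record is invented for illustration, though the element names and namespace follow the Windows event schema:

```python
import xml.etree.ElementTree as ET

# Trimmed-down XML in the shape Event Viewer's XML View displays for a
# Security event; the values here are invented for illustration.
record_xml = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <EventID>4624</EventID>
    <EventRecordID>52113</EventRecordID>
  </System>
  <EventData>
    <Data Name="TargetUserName">jlemburg</Data>
    <Data Name="LogonType">2</Data>
  </EventData>
</Event>"""

ns = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}
root = ET.fromstring(record_xml)
event_id = root.findtext("e:System/e:EventID", namespaces=ns)
# Fold the event-specific data area into a name -> value dictionary.
event_data = {
    d.get("Name"): d.text for d in root.findall("e:EventData/e:Data", ns)
}
print(event_id, event_data["TargetUserName"])  # → 4624 jlemburg
```

Treating the Data elements as a dictionary keyed on their Name attribute is what makes the event-specific area easy to query at scale, which is the same convenience PowerShell's event cmdlets provide.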
Figure 8.4 : The XML representation of an event log record Since the event‐specific data area, which is found in the EventData element of the XML representation of the event log record, often contains information of value, you will see later in this chapter how to leverage PowerShell to access this data more efficiently. Event log records are recorded in accordance with the Windows audit policy settings, which are set in the Group Policy Management Editor under Computer Configuration ➪ Policies ➪ Windows Settings ➪ Security Settings ➪ Advanced Audit Policy Configuration ➪ Audit Policies. You can find baseline audit policy recommendations from Microsoft here: https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations These policy settings control which systems record which events and other aspects of Windows auditing. As discussed in Chapter 2, “Incident Readiness,” ensuring that you have adequate logging enabled, with sufficient retention of those logs, is an important step to get right before an incident is detected. The U.S. National Security Agency publishes a useful guide for configuring and using event log data to detect adversary activity. The guide is freely available here: https://apps.nsa.gov/iaarchive/library/reports/spotting-the-adversary-with-windows-event-log-monitoring.cfm For the remainder of this chapter, we assume that the associated audit policy is enabled to record the event logs discussed, rather than reiterating the importance of this first step in every paragraph that follows. Since audit policies control what is and is not logged, attackers may seek to modify these policies to reduce the evidence that they leave behind. Fortunately, changes to audit policy are themselves logged in an event log record with Event ID 4719 (System audit policy was changed). The Audit Policy Change section lists the specific changes that were made to the audit policy. 
The Subject section of the event description may show the account that made the change, but often (such as when the change is made through Group Policy) this section simply reports the name of the local system. It is also worth noting that regardless of the settings in the audit policy, if the Security event log is cleared, Event ID 1102 will be recorded as the first entry in the new, blank log (although an attacker may use tools such as Mimikatz to prevent this event from being generated). You can tell the name of the user account that cleared the log in the details of the event record entry. A similar event, with Event ID 104, is generated in the System log if it is cleared. FOR FURTHER STUDY… In this chapter, we provide actionable information on using Windows event logs to reconstruct adversary activity within a network. We will provide so many event IDs and specific indicators that can be used to detect and respond to malicious actors that you may find yourself a bit overwhelmed. Because log analysis requires a working knowledge of many event IDs, we also provide a PDF version of much of this information at this book's website, www.AppliedIncidentResponse.com , so that you have an easily searchable reference of the many event IDs described. In Mastering Windows Network Forensics and Investigation (Sybex, 2012), we dedicated four chapters to log analysis. Those seeking additional information will find a wealth of detail in that previous work that still applies today. We will cover additional details about Kerberos in Chapter 12, “Lateral Movement Analysis,” when we explore attacks such as pass‐the‐ticket, golden tickets, and others. Additional information about specific event IDs can be found on Randy Franklin Smith's Security Log Encyclopedia at www.ultimatewindowssecurity.com/securitylog/encyclopedia/default.aspx . 
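Unlinked or cleared records of the kind described above can show up as discontinuities in the sequential EventRecordID values mentioned earlier. A minimal, illustrative gap check (the record IDs below are invented):

```python
def find_record_gaps(record_ids):
    """Return (first_missing, next_present) pairs where IDs are skipped."""
    gaps = []
    ids = sorted(record_ids)
    for prev, curr in zip(ids, ids[1:]):
        if curr != prev + 1:
            gaps.append((prev + 1, curr))
    return gaps

# Hypothetical IDs pulled from a Security log; 1003-1005 are missing.
ids = [1000, 1001, 1002, 1006, 1007]
print(find_record_gaps(ids))  # → [(1003, 1006)]
```

A gap is not proof of tampering on its own (logs roll over, and exports can be partial), but unexplained holes in an otherwise contiguous range are worth investigating alongside Event IDs 1102 and 104.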
Account‐Related Events For authenticated access to be granted to a Windows resource, an authentication authority must verify that the credentials provided are valid, and then a system provides access to the desired resource. These two actions may be performed by the same system or by different systems. For example, when a user attempts to log on with a domain account to a domain workstation, the authentication authority that confirms whether the provided credential is valid is a domain controller, but the system providing the access to the desired resource is the client workstation. Similarly, if a local account is used to access a stand‐alone computer, that computer both verifies that the authentication is valid and grants the access to the desired resource. When the authentication is approved, Windows records an account logon event in the Security event log of the system that verifies the credential. When a system provides access to a resource based on the results of that authentication, a logon event is recorded in the Security event log of the system providing the access. For example, if a domain user requests access to a file server called FS1, the associated account logon event will be recorded on a domain controller within the network, and the logon event will be recorded on the FS1 server. Similarly, in the case of a local user account, the same computer both approves the authentication request and permits the access to the system. Therefore, when a local user account is used to log on interactively to a stand‐alone workstation, both the account logon event and the logon event will be recorded on that workstation. You may find it helpful to think of account logon events as authentication events and logon events as records of actual logon activity. The default protocol used within a Windows domain for authentication is Kerberos; however, older protocols such as NT LAN Manager version 2 (NTLMv2) may also be used during normal system activity. 
Kerberos by its very nature requires a hostname in order to complete its authentication process. When users reference a remote system by IP address, for example, NTLMv2 is the authentication protocol that will be used. For NTLMv2, the NT hash of the user's password acts as a shared secret between the authentication authority (such as a domain controller or the local security authority subsystem in the case of a local account) and the client seeking to gain access. When the user enters the account password, the local system will calculate the corresponding NT hash for that password and use it to encrypt a challenge that is sent by the remote system to be accessed. The authentication authority can then use its copy of the shared secret (the NT hash of the user's password) to encrypt the same challenge and verify that the response sent by the client seeking access is based on the correct password, thereby proving that the user is authorized to use that account
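The challenge–response flow just described can be modeled in a few lines. The sketch below is a deliberately simplified toy: it uses HMAC-SHA256 purely as a stand-in for NTLMv2's actual HMAC-MD5 computation (and SHA-256 as a stand-in for the MD4-based NT hash), and the password is invented. The point it illustrates is that both sides derive the same response from the password-derived shared secret, so the password itself never crosses the wire:

```python
import hashlib
import hmac
import os

def nt_style_hash(password: str) -> bytes:
    # Stand-in for the real NT hash (which is MD4 over the UTF-16LE password).
    return hashlib.sha256(password.encode("utf-16-le")).digest()

def compute_response(secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for NTLMv2's HMAC-MD5 response computation.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# Server side: issue a random challenge.
challenge = os.urandom(8)

# Client side: derive the response from the password-derived secret.
client_response = compute_response(nt_style_hash("Winter2019!"), challenge)

# Authority side: recompute from its stored copy of the shared secret.
expected = compute_response(nt_style_hash("Winter2019!"), challenge)
print(hmac.compare_digest(client_response, expected))  # → True
```

Because the NT hash is the shared secret, anyone who obtains the hash can compute valid responses without knowing the password, which is exactly why pass-the-hash attacks work against NTLM authentication.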
URLs

https://blog.fox-it.com/2017/12/08/detection-and-recovery-of-nsas-covered-up-tracks

https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations

https://apps.nsa.gov/iaarchive/library/reports/spotting-the-adversary-with-windows-event-log-monitoring.cfm

https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4768

https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4776

https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc787567(v=ws.10

https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4625

https://github.com/JPCERTCC/LogonTracer

https://github.com/BloodHoundAD/BloodHound

https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/apply-a-basic-audit-policy-on-a-file-or-folder

https://github.com/SpiderLabs/Responder

https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/using-event-viewer-with-applocker

https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/troubleshoot-windows-defender-antivirus

https://docs.microsoft.com/en-us/windows/security/threat-protection/microsoft-defender-atp/exploit-protection

https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon

https://github.com/SwiftOnSecurity/sysmon-config

https://github.com/olafhartong/sysmon-modular

https://github.com/JPCERTCC/SysmonSearch

CHAPTER 10 Malware Analysis Modern adversaries often prefer to “live off the land,” using native tools already found on compromised systems, rather than risk deploying malware that might be detected by endpoint and network controls. This does not mean, however, that malware is no longer relevant. Many attack campaigns still use customized or even commodity malware to great effect. While malware analysis is a deeply specialized field, this chapter will give you effective steps you can use to identify and understand malware in support of incident response. Online Analysis Services Network defenders tend to categorize malware based on its function and/or the adversary campaign in which it is used. Malware categories include droppers, downloaders, ransomware, cryptominers, remote access tools/Trojans (RATs), viruses, worms, spyware, bots, adware … and the list goes on. From an incident response perspective, understanding the behavior of suspected malware and identifying which systems may be affected are key issues that need to be addressed. There are many online services that offer free analysis of malware samples and provide automated reports regarding the behavior of the sample. They also maintain databases compiled from thousands of other samples analyzed, threat intelligence and reputation feeds, antivirus signatures, and other sources of data to provide context around the behaviors and indicators observed in the sample. For example, if the malware communicates with a particular URL, the online service may group samples that all communicate with the same URL, query threat intelligence feeds to determine whether the URL is associated with known threat actors, query reputation services to check if it is already listed as a suspected malicious site, and so on. 
Examples of these services are VirusTotal at www.virustotal.com, the Malware Configuration and Payload Extraction (CAPE) online service offered by Contextis at https://cape.contextis.com, and Joe Sandbox at www.joesandbox.com. Each of these services allows the submission of suspected malware in various ways, such as directly uploading the suspicious executable or providing a URL where the sample is located. Each service maintains data from previously submitted samples that allows them to categorize multiple samples into different malware families. The services also use the data to perform threat intelligence analytics based on commonalities in the code, function, and network indicators associated with each sample. The data can be queried by submitting filenames, IP addresses, domain names, or hash values of executable files to discover if previously submitted samples match the data in the query. Figure 10.1 shows the submission sample page for www.joesandbox.com. Figure 10.1 : Joe Sandbox sample submission page Incident responders can hash a suspected malicious sample and search these online databases based on the hash value alone, without having to submit the sample to the online provider. The hash submitted can be a traditional hash, such as an MD5 hash of the sample file, or a fuzzy hash, generated with a tool such as ssdeep (freely available at https://github.com/ssdeep-project/ssdeep). Malware authors frequently make minor changes to their code in order to change its associated hash value or signature. Fuzzy hashing uses a technique called context triggered piecewise hashing to hash the file in segments rather than as one whole. This allows more flexibility when performing comparisons, since a change to one section of the file still leaves all the other sections of the file identical. With fuzzy hashes, you can compare two files to determine how similar they may be, as opposed to whether or not they are exactly the same. 
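ssdeep's context-triggered piecewise hashing is more sophisticated than this, but the core idea, hash a file in segments and compare segment-wise rather than all-or-nothing, can be sketched with fixed-size chunks (a deliberate simplification for illustration, not ssdeep's actual algorithm, which sizes chunks based on content):

```python
import hashlib

def piecewise_hashes(data: bytes, chunk_size: int = 64):
    """Hash fixed-size chunks; real ssdeep picks chunk boundaries by content."""
    return [
        hashlib.sha256(data[i : i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

def similarity(a: bytes, b: bytes) -> float:
    """Fraction of aligned chunks whose hashes match."""
    ha, hb = piecewise_hashes(a), piecewise_hashes(b)
    if not ha or not hb:
        return 0.0
    matches = sum(1 for x, y in zip(ha, hb) if x == y)
    return matches / max(len(ha), len(hb))

original = b"A" * 640
variant = b"A" * 576 + b"B" * 64   # only the last chunk changed
print(similarity(original, variant))  # → 0.9
```

A single flipped byte changes a traditional MD5 or SHA-256 completely, but here it perturbs only one chunk, so the two samples still score as 90 percent similar. Content-triggered boundaries (ssdeep's refinement) keep this property even when bytes are inserted or deleted, which would misalign fixed-size chunks.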
Two files that are 99 percent the same are likely to be variants of the same malware. Online malware analysis sites often allow a search using either traditional hash values or fuzzy hashes. Remember that operations security in incident response is an important consideration. Malicious actors can also search malware databases for evidence that their customized malware has been detected and submitted to one of the services. At the same time, incident handlers should not reinvent the wheel, exhaustively analyzing every commodity malware sample detected in their environment. You will need to let the specific details of each incident guide your actions and decide if you will use an online analysis service or do the analysis yourself. Often, searching online services for hash values, filenames, IP addresses, or other network indicators associated with the detected malware sample can quickly determine that a malware sample is part of a well‐known malware family and provide actionable intelligence on effective remediation steps. These services can also provide valuable information related to similar samples, detection mechanisms, behavior of the malware based on the site's analysis, additional reports from other members of the community, and other details of the malware that may help address the current incident. In addition to the dedicated malware analysis sites, you can also search for the associated hash values or other indicators on a general search engine like Google. If other analysts have posted results of analysis conducted on the same or similar malware, you may be able to benefit from their published results. However, if the sample is considered too sensitive to use third‐party systems, you can perform your own analysis internally. Here are the primary methods to do so: Static Analysis The executable file is examined without running the associated code. 
Dynamic Analysis The code is executed in a controlled environment to observe its behavior and analyze its data in memory. Reverse Engineering The executable is disassembled or decompiled to understand its function programmatically. We discuss each of these techniques in this chapter. Static Analysis One of the most obvious initial steps to take when analyzing a new malware sample is to scan it with one or more antivirus tools to see if the vendor has already identified the sample in question. Having some dedicated virtual machines, each installed with a different vendor's antimalware product, gives you an opportunity to scan the sample against multiple vendor signature collections without publicly posting the sample to a site like VirusTotal that may share it broadly with the antimalware community (keep in mind that the vendors for the antivirus products you use might have access to the sample scanned, depending on the products and their configuration). Another commonly employed static analysis technique is to examine the malware file for strings that may indicate modules that it loads (and therefore possibly give an indication of its functionality), IP addresses, URLs, domain names, registry keys, filenames and locations, or other information that may be able to be extracted from the binary executable. Malware authors are aware that this type of analysis is common and may employ obfuscation techniques to conceal relevant data. The FireEye Labs Obfuscated String Solver (FLOSS) tool is designed to help address this challenge. The tool is freely available at https://github.com/fireeye/flare-floss and is included in the FLARE VM, which we discuss later in this chapter. A command‐line utility, FLOSS parses data from a suspected malicious file in a variety of ways. It will extract any plaintext strings (ASCII and UTF‐16 encodings) that are present, but it also uses heuristic analysis to deobfuscate strings encoded with different encoding techniques. 
The output of the tool displays the strings extracted, grouped by type of string, as seen in the following abbreviated example:

FLOSS static ASCII strings
LoadLibraryA
FLOSS static UTF‐16 strings
FLOSS decoded 4 strings
FLOSS extracted 81 stack strings

By searching through the resulting strings, you can start to understand the types of functionalities that the malware may have (network capabilities, file‐writing capabilities, deletion capabilities) based on modules or functions that are mentioned, as well as potential locations of artifacts such as filenames or registry keys. You can also search the strings found inside the binary on online malware analysis platforms or threat intelligence platforms such as MISP (Malware Information Sharing Platform, https://misp-project.org). These platforms may offer useful information about the malware family, attack campaign, or threat actor associated with those indicators. Be aware, however, that advanced adversaries have been known to intentionally place URLs, IP addresses, hostnames, and other strings inside of binaries that have no purpose for the malware and then monitor for interaction with those devices as a way of knowing when someone is analyzing their binary. As discussed in Chapter 2, “Incident Readiness,” carefully weigh any actions that may be detected by the adversary. Aside from searching for strings within the executable file, another commonly employed approach for malware analysis is searching the file using YARA rules (YARA stands for Yet Another Recursive Acronym, or Yet Another Ridiculous Acronym). YARA rules are a way to describe and search for string or binary patterns within a data set. They are most often applied to malware analysis, providing a simple, text‐based format to describe specific elements of a file that indicate it may be malicious. 
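A rough approximation of the static-string portion of FLOSS output shown above (or of the classic strings utility) is to find runs of printable ASCII and of UTF-16LE ASCII-range text in a binary. This simplified sketch does that with regular expressions; FLOSS layers decoding heuristics for obfuscated and stack strings on top of this, and the sample blob here is invented:

```python
import re

# Four or more consecutive printable ASCII bytes.
ASCII_RUN = re.compile(rb"[\x20-\x7e]{4,}")
# UTF-16LE ASCII-range text: printable byte followed by NUL, repeated.
WIDE_RUN = re.compile(rb"(?:[\x20-\x7e]\x00){4,}")

def extract_strings(blob: bytes):
    ascii_strings = [m.group().decode("ascii") for m in ASCII_RUN.finditer(blob)]
    wide_strings = [
        m.group().decode("utf-16-le") for m in WIDE_RUN.finditer(blob)
    ]
    return ascii_strings, wide_strings

# Invented binary fragment: an API name stored as ASCII, a URL scheme as UTF-16LE.
blob = b"\x00\x01LoadLibraryA\x00\xffh\x00t\x00t\x00p\x00:\x00\x02"
ascii_s, wide_s = extract_strings(blob)
print(ascii_s, wide_s)  # → ['LoadLibraryA'] ['http:']
```

Recovering an API name like LoadLibraryA hints that the sample resolves imports at runtime, which is exactly the kind of lead the full FLOSS output gives an analyst.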
Each YARA rule can be customized to detect a specific sample of malware while being generic enough that it will detect similar variants even if their hash values are different. YARA rules define a series of patterns (called “strings” whether they consist of text, binary data, or even regular expressions) and one or more conditions that must exist for a match to be made. This enables rules to be constructed with flexibility and logic in the condition assessment. For example, a rule may match on a file if it has three out of six possible indicators, but not match if it only has two. Similarly, different patterns can be given different severity so that the presence of a high‐severity indicator is given more weight than that of a low‐severity indicator in determining whether a match is found. The author of the rule has the flexibility to structure this logic in whatever way is appropriate. A very simple YARA rule from the project's documentation (online at https://yara.readthedocs.io) is as follows:

rule ExampleRule
{
    /* This is a simple example rule
       from the YARA documentation. */
    strings:
        $my_text_string = "text here"
        $my_hex_string = { E2 34 A1 C8 23 FB }

    condition:
        $my_text_string or $my_hex_string
}

In this example, the rule begins with two lines of comments sandwiched between the /* and */ symbols. The rule is given the name “ExampleRule” and two strings are defined. The term “string” is used for any recurring sequence whether that data is encoded ASCII text, Unicode, a raw series of hexadecimal numbers representing binary data, or a regular expression. The condition section provides requirements for a successful match for this rule. Here you can include previously defined strings that constitute a successful match for this rule. In this case, a set of data is a match to this rule if either the text "text here" (which was assigned the variable name $my_text_string) or the hexadecimal sequence E2 34 A1 C8 23 FB (which was assigned the variable name $my_hex_string) is present anywhere within the data. A text string is assumed to be a case‐sensitive, ASCII‐encoded string unless otherwise specified with a modifier after the string definition. The nocase modifier is used to indicate that the preceding string is to be interpreted as case insensitive. 
The wide modifier is used to specify that the text is to be interpreted as two‐byte Unicode values (UTF‐16); however, YARA does not fully support the entire spectrum of UTF‐16 encoding and focuses on characters that are also included in the ASCII character set. If you want to search for other UTF‐16 characters, explicitly state the hexadecimal string. You can also use the ascii modifier in addition to the wide modifier to search for the text in both of the encoding options. When a string definition combines the nocase, ascii, and wide modifiers, only that single string must be present for the condition to be met and for the rule to match: one instance of the string in upper‐, lower‐, or mixed case, and encoded in ASCII or UTF‐16, is enough for the condition to be true and the rule to match. Strings can also employ wildcards or define regular expressions to increase their versatility. Another commonly encountered string modifier is fullword. This modifier specifies that a match only occurs if the string is bounded by non‐alphanumeric characters. For example, the string "test" defined with the fullword modifier would match the data www.test.com but not match on testdata.com. YARA conditions are evaluated as a Boolean statement and often reference the strings defined. The condition is evaluated, and if the result is a Boolean true, the rule is a match for the data being assessed. Conditions can also include properties of the file such as its size, which can be used to increase efficiency when comparing YARA rules against large sets of files by only searching files that meet specified size requirements, such as being less than 1 MB. Conditions can also set specific criteria such as the number of times a pattern must appear, define different sets of patterns with different thresholds for a match to occur, define the position in the file in which a pattern must appear, and even reference other YARA rules. The syntax for YARA rules allows for a wide range of options to define the criteria for a match. 
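The fullword boundary behavior can be mimicked with a regular expression that requires each side of the match to be a non-alphanumeric character or the edge of the data. This is an illustration of the semantics, not YARA's implementation:

```python
import re

def fullword_match(needle: str, haystack: str) -> bool:
    # YARA's fullword: the match must be bounded by non-alphanumerics
    # (or the start/end of the data), enforced here with lookarounds.
    pattern = r"(?<![0-9A-Za-z])" + re.escape(needle) + r"(?![0-9A-Za-z])"
    return re.search(pattern, haystack) is not None

print(fullword_match("test", "www.test.com"))   # → True
print(fullword_match("test", "testdata.com"))   # → False
```

In www.test.com the candidate match is bounded by dots, so it qualifies; in testdata.com the match runs straight into the letter d, so it does not.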
Carefully structuring conditions can be important and allows faster queries when searching large numbers of files. For example, evaluating file size first reduces the number of files that must be examined by more time‐consuming checks such as string searches. Now that you have a basic understanding of the syntax for YARA rules, how can you use them? The YARA project provides a command‐line tool to compare a target (file, folder, or process memory) against one or more YARA rules and report the results. The tool is available at https://github.com/VirusTotal/yara. The most basic syntax is to issue the yara command (yara64.exe or yara32.exe on Windows), followed by the location of the rule or rules and ending with the location of the file(s) to analyze. Each line of output lists the name of a matching rule followed by the path to the file that matched it; if nothing is printed, no matches were detected. The command‐line tool does not natively support extracting files from archives (such as zip files) to evaluate the data contained within them; however, this can be accomplished by using the yextend tool, available from https://github.com/BayshoreNetworks/yextend. You don't necessarily need to generate your own YARA rules for them to be useful in incident response and threat hunting activities. Many publicly available sets of YARA rules have already been defined to identify known families of malware or individual malware samples. The Yara Rules project (https://github.com/Yara-Rules/rules) seeks to be a community repository where YARA rules can be shared by security researchers and practitioners. Rules are broken down into different categories, including things like exploit kits, malicious documents, malware, packers (tools used to compress and obfuscate executables to conceal their purpose and thwart analysis), web shells, and more.
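A minimal invocation looks like the following transcript (the rule file, folder, and matched file names are hypothetical):

```
C:\> yara64.exe keybase.yar C:\Cases\evidence
KeyBase C:\Cases\evidence\invoice.exe
```

Here the rule named KeyBase matched the file invoice.exe in the scanned folder; any additional matches would appear as further rule/path lines.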
The Awesome YARA project similarly provides a curated list of additional YARA rule sets at https://github.com/InQuest/awesome-yara. Many threat intelligence services, both commercial and open source, release YARA rules to describe specific indicators of compromise associated with threat actor activity. One example is a YARA rule to identify the KeyBase keylogger, written by Bart Blaze and submitted to the YARA Rules project. It follows a common structure for a YARA rule: it first defines its strings in two sets. The condition begins by requiring that the unsigned 16‐bit integer value 0x5A4D (the little‐endian representation of the ASCII characters MZ, the file signature for a Windows executable) occur at the beginning of the file. If the file starts with anything else, the rest of the rule does not need to be evaluated. In addition to the executable file signature at the beginning, for the condition to be true, five of the strings in one of the sets must be present.
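The overall shape of such a rule can be sketched as follows. The string values here are placeholders, not Bart Blaze's actual KeyBase indicators; only the condition structure (executable signature check first, then a five‐of threshold over a string set) reflects the rule described above:

```yara
rule KeyBase_sketch
{
    strings:
        $s1 = "placeholder_indicator_1"
        $s2 = "placeholder_indicator_2"
        $s3 = "placeholder_indicator_3"
        $s4 = "placeholder_indicator_4"
        $s5 = "placeholder_indicator_5"
        $s6 = "placeholder_indicator_6"

    condition:
        // MZ signature at offset 0, then at least five of the $s set
        uint16(0) == 0x5A4D and 5 of ($s*)
}
```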
URLs

https://cape.contextis.com

https://github.com/ssdeep-project/ssdeep

https://github.com/fireeye/flare-floss

https://misp-project.org

https://yara.readthedocs.io

https://github.com/VirusTotal/yara

https://github.com/BayshoreNetworks/yextend

https://github.com/Yara-Rules/rules

https://github.com/InQuest/awesome-yara

https://github.com/Neo23x0/Loki

https://github.com/Neo23x0/yarGen

https://github.com/fireeye/flare-vm

https://remnux.org

https://developer.microsoft.com/en-us/microsoft-edge/tools/vms

https://chocolatey.org

https://docs.microsoft.com/en-us/sysinternals/downloads/procmon

https://github.com/fireeye/flare-fakenet-ng

https://cuckoosandbox.org

https://cuckoo.readthedocs.io/en/latest/introduction/what

https://cuckoo.sh/docs

Extracted

Ransom Note
CHAPTER 12 Lateral Movement Analysis Lateral movement is the act of the adversary moving from one system to another inside your environment to expand their influence and access throughout the network. This is an area where the adversary will often spend a lot of time and during which we have a good opportunity to detect and respond to their attack; however, doing so requires bringing together the various skills we have discussed up to this point. In this chapter, we will explore many of the most common ways attackers use to move laterally in your environment and highlight ways that we may be able to detect and respond to that activity. Server Message Block Good old Server Message Block (SMB), that ancient protocol used by Windows and *nix systems to enable easy file sharing (and so much more), is designed to allow users to have ready access to the data that they need to do their jobs, no matter where it may be located on the network. Unfortunately, with SMB traffic being extremely common and the tools easy to execute courtesy of Windows pass‐through‐authentication, SMB is also a key attack vector for adversaries. We will start with a broad discussion about SMB and then look at some specific attack vectors that rely on SMB under the hood, such as PsExec and scheduled task abuse, in later sections. SMB MITIGATIONS Back in the days of Windows NT and 2000, it was quite common to see environments where domains were deployed with built‐in local administrator accounts, with relative identifier (RID) 500, active on every client, each of which had the same password set during original installation. As a result, SMB was commonly used by attackers by simply compromising one local host, stealing the local admin credential, and reusing that credential to access all the machines in the environment remotely. As a countermeasure to this type of attack, Microsoft took several steps. 
They released the Local Administrator Password Solution (LAPS), which allows Active Directory to assign and manage complex passwords for local administrator accounts on clients and member servers. Microsoft also disabled the ability of local accounts that were members of the local Administrators group (except for the default RID 500 user) to remotely access the system with administrator permissions; however, this restriction does not apply to domain accounts. For Windows 10 clients, the default Administrator account (with RID 500) is usually disabled by default. This means that the remaining local administrator accounts cannot be used to remotely administer local systems, nor can they attach to the default administrative shares to access the data on the systems. If the RID 500 account is enabled, it can still access the local system remotely. Similarly, domain accounts with administrative permissions can be used to access systems remotely across the network by default. There are several registry and GPO settings that can impact this behavior within a Windows environment. Additional details can be found at www.harmj0y.net/blog/redteaming/pass-the-hash-is-dead-long-live-localaccounttokenfilterpolicy. One of the easiest ways for an attacker to leverage SMB to access a remote system is to use the net commands, such as net use, to access storage on a remote system. Fortunately, this type of communication uses standard Windows authentication and therefore will leave event log records showing the associated account logon and logon activity. Recall that when a domain controller authenticates access to a remote system, the account logon events will be located on one or more of the domain controllers. The logon events will occur on the system that was accessed as well as on the system being used by the intruder. Let's look at a few examples. Recall some of the key account logon and logon event IDs from Chapter 8, “Event Log Analysis,” summarized in Table 12.1 and Table 12.2.
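One setting worth auditing during an investigation is the LocalAccountTokenFilterPolicy registry value discussed in that post: when set to 1, it removes the remote token filter and re‐enables remote administrative use of local accounts, a change attackers sometimes make to ease lateral movement. It can be checked from an elevated command prompt:

```
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy
```

If the value is absent, the default filtering behavior applies.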
Table 12.1 : Account logon events Table 12.2 : Logon events Let's start with a simple baseline example, where a user sits at Client1 and logs on, in a domain environment. The initial login to Client1 would generate the following series of event log records on the domain controllers and on Client1, as shown in Figure 12.1. Figure 12.1 : Standard account logon and logon event IDs As you can see, several different events are logged on the domain controller, starting with the request for the ticket‐granting ticket (Event ID 4768). A service ticket is then requested for the client where the user is currently sitting (Event ID 4769 for the computer named Client1), since that system is to be accessed for the logon to complete. We then see a service ticket request (Event ID 4769) for the domain controller itself, since the client will need to log on and use the services of the domain controller in order to complete its request. We also see a service ticket request for the Kerberos ticket‐granting ticket (krbtgt) service, the service responsible for authentication and issuance of the associated tickets. Next, we see a remote logon (Event ID 4624 with a logon type of 3) to the domain controller itself, which is required for processing the authentication on the domain controller. Finally, we see Event ID 4634, recording the end of the remote logon session that started with Event ID 4624. You can correlate these two events by their identical logon ID number. On the client itself, we only see Event ID 4624 with a logon type of 2, indicating an interactive logon. When the user eventually logs off, an Event ID 4647 or 4634 should be generated, but depending on how the logoff occurs this may not always be recorded. Let's now assume that the same user, who is already interactively logged on to Client1, then requests access to a remote file hosted on Server1. 
In addition to the logs listed in Table 12.1 and Table 12.2, you will find new entries as a result of the remote file access (Figure 12.2). Figure 12.2: Remote file access account logon and logon event records On the domain controller will be a new Event ID 4769 recording the request for a service ticket for the remote server (Server1). Since the domain controller will need to be accessed to provide this service, many of the event IDs recorded in Figure 12.1 may also be repeated at the time of this request. The server being accessed (Server1 in this example) will record an Event ID 4624 with a logon type of 3, indicating a remote logon. There should also be an associated Event ID 4634, with the same logon ID, as the session is terminated. Note that access to a remote file share may cause multiple, short‐duration connections to the system hosting the share. This is normal and does not have any direct correlation to the number of files accessed or the length of time during which contents may have been viewed. Recall from Chapter 8 that if object auditing is turned on and configured for the shares and/or files accessed, additional log entries may be made on Server1. For example, on the system being accessed, Event ID 5140 (a network share object was accessed) will appear when a shared folder or other shared object is accessed. The event entry provides the account name and source address of the account that accessed the object. The client system initiating the access may show evidence of the connections in the user's NTUSER.DAT registry hive under the Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2 key. Finally, on the client being used to initiate the access (Client1), we may find Event ID 4648 (use of explicit credentials), depending on the way the remote system was accessed. For example, if using the Windows Explorer GUI to input a universal naming convention (UNC) path to a remote resource, the process hosting the Netlogon service makes the request on behalf of the user.
Therefore, Event ID 4648 will be recorded showing that the SYSTEM Security ID explicitly used the user's credential: the Netlogon service makes the request on behalf of the user and uses the user's credential (via an impersonation token), instead of the security token under which the Netlogon service itself runs (shown in Figure 12.3). You will also see Event ID 4648 recorded (or possibly an Event ID 4624 with a logon type of 9) when a user launches a program with the runas command or provides explicit credentials to a command such as net use. If a user issues a command such as net use * \\server1\C$ to mount a remote share, that would not result in the generation of Event ID 4648 (since the user's current credential is used as part of pass‐through authentication). On the other hand, if the user explicitly provided an alternate credential with the command net use * \\server1\C$ /user:administrator, then that would result in an Event ID 4648. We will explore additional examples later in this chapter. Figure 12.3: Event ID 4648 showing explicit credential use Looking for unusually large numbers of systems that have been accessed remotely by a single account (indicated by Event ID 4624 with a logon type of 3) can be a good indicator that an account has been compromised and is being used maliciously for lateral movement. The LogonTracer tool mentioned in Chapter 8 can help with this type of analysis by showing statistics related to the activity of accounts and hosts, as well as by providing a graph representation of activity to help identify anomalies. Once an account has been identified as compromised, or at least suspicious, the -FilterHashtable parameter of PowerShell's Get-WinEvent cmdlet (discussed in Chapter 8) can be used to home in on event log records related to that account throughout the enterprise to better quantify the extent of malicious activity and to identify additional systems that may contain evidence of adversary actions.
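A query of that kind might be sketched as follows. The account name jsmith and computer name SERVER1 are hypothetical, and the property indices assume the standard layout of the 4624 event template (index 5 is TargetUserName, index 8 is LogonType):

```powershell
# Sketch: count network (type 3) logons by a suspect account on one host
Get-WinEvent -ComputerName SERVER1 -FilterHashtable @{LogName='Security'; Id=4624} |
    Where-Object { $_.Properties[8].Value -eq 3 -and $_.Properties[5].Value -eq 'jsmith' } |
    Measure-Object
```

Running the same query across many hosts and comparing the counts helps surface the unusually broad access patterns described above.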
If workstations in your environment do not usually have administrator accounts logging into them (most of the time, users have only domain user accounts), scanning your workstations for Event ID 4672 (special privileges assigned to new logon) may help identify privileged accounts that are being used maliciously. Additionally, looking for Event ID 4776 (NTLM authentication requests) on clients and member servers may provide evidence of attempts to use local accounts to bypass domain authentication. In addition to evidence of lateral movement using authenticated SMB access that may be found in the event logs, network indicators can be very valuable here. Zeek, mentioned in Chapter 7, “Network Security Monitoring,” parses SMB communications from the wire and records details of those communications across several different Zeek log files, including smb_cmd.log, smb_files.log, and smb_mapping.log. If event log data does not exist to track SMB activity, consider leveraging network security monitoring to help locate malicious use of SMB. Another valuable potential indicator of malicious activity is to look for logon attempts from systems that are not members of your domain. Such attempts may indicate that an attacker has gained access to your network (such as through a Wi‐Fi access point or lack of perimeter controls) and is using stolen credentials to move laterally throughout the environment. ADDITIONAL RESOURCES Detection of lateral movement is a critically important aspect of modern network defense, and several resources are available to help. This book's website, www.AppliedIncidentResponse.com, has a “Lateral Movement Analyst Reference” PDF that contains a great deal of material that you can keep handy for quickly locating the information you need. In addition, Rob Lee and Mike Pilkington created an outstanding incident response and threat hunting poster that contains many of the artifacts discussed here and more.
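When Zeek logs are available, share‐mapping activity can be summarized quickly with the zeek-cut utility that ships with Zeek (the log path here is hypothetical):

```
zeek-cut ts id.orig_h id.resp_h path < smb_mapping.log | sort | uniq -c | sort -rn
```

Sorting by count highlights clients that mapped an unusual number of shares, a pattern consistent with the lateral movement activity described above.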
It is an essential reference for any incident responder or threat hunter and can be downloaded from: https://digital-forensics.sans.org/media/SANS_Poster_2018_Hunt_Evil_FINAL.pdf You can also find tons of useful information in the following guides: From the European Computer Emergency Response Team: https://cert.europa.eu/static/WhitePapers/CERT-EU_SWP_17-002_Lateral_Movements.pdf From the Japan Computer Emergency Response Team Coordination Center: www.jpcert.or.jp/english/pub/sr/20170612ac-ir_research_en.pdf From the U.S. National Security Agency: https://apps.nsa.gov/iaarchive/library/reports/spotting-the-adversary-with-windows-event-log-monitoring.cfm Pass‐the‐Hash Attacks There are some specific attack vectors that you may encounter when analyzing suspicious SMB activity. We have already mentioned pass‐the‐hash attacks, where an adversary steals the NT hash representation of an account's password and uses it to complete authentication to remote systems. The hash itself can be stolen from a local Security Account Manager (SAM) file, Active Directory, or memory when the user is interactively logged on to a system, or by sniffing a challenge‐response exchange from the wire and cracking it offline. Once in possession of the password hash, tools such as Mimikatz and Metasploit can use the hash to complete remote NTLMv2 authentication to other systems where the credential is valid. We also spoke briefly in Chapter 8, “Event Log Analysis,” about SMB relay attacks, where the attacker assumes a man‐in‐the‐middle position to capture a challenge‐response authentication attempt and redirect the attempt from the intended destination to a destination of the attacker's choosing. In each of these attacks, Windows authentication still takes place and the associated event log entries would still exist. 
Because a pass‐the‐hash attack uses credentials stolen from another user account, looking for event IDs showing use of explicit credentials (Event ID 4648 or Event ID 4624 with logon type 9) may help confirm suspicions that pass‐the‐hash attacks are in use. If you suspect a system is being used for these types of attacks, you can also look for forensic artifacts of tools that could be used to execute a pass‐the‐hash attack on the host. You can also use Sysmon logging (if you have it enabled) to detect the malicious access to the LSASS process that occurs when placing the stolen credential into memory of the host as a precursor to a pass‐the‐hash attack. Such access will register as a Sysmon Event ID 10, showing the attack tool accessing the LSASS process. Since pass‐the‐hash attacks use NTLM for authentication, looking for Event ID 4776 (NTLM authentication attempt) for the system being accessed (on the domain controller in the case of a domain account, or on the system being accessed in the case of a local user account) may help highlight suspicious activity. On the system being accessed, Event ID 4624 with an authentication package of NTLM may also help identify malicious activity. Keep in mind, however, that NTLM authentication does occur under normal circumstances in a domain (for example, whenever a system is accessed via IP address instead of by its computer name) so the presence of NTLM authentication alone is not evidence of a problem. EVIDENCE OF EXECUTION Throughout this chapter, remember that you can use the forensic indicators of program execution discussed in Chapter 11, “Disk Forensics,” to help identify lateral movement. Analysis of AmCache, BAM/DAM, ShimCache, prefetch, RecentApps, UserAssist, and more can be used to determine if, or even when, a program has been executed on a system. Use of attacker tools can be detected in this manner, but so can living‐off‐the‐land techniques. 
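A Sysmon configuration fragment to capture that LSASS access telemetry might look like the following. This is a sketch against the Sysmon rule schema (adjust the schemaversion to your installed Sysmon, and expect to add exclusions for legitimate tools such as antivirus that routinely touch LSASS):

```xml
<Sysmon schemaversion="4.22">
  <EventFiltering>
    <RuleGroup name="" groupRelation="or">
      <!-- Event ID 10: ProcessAccess - log processes opening handles to LSASS -->
      <ProcessAccess onmatch="include">
        <TargetImage condition="is">C:\Windows\system32\lsass.exe</TargetImage>
      </ProcessAccess>
    </RuleGroup>
  </EventFiltering>
</Sysmon>
```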
Many of your regular users might not use PowerShell, WMIC, and similar built‐in tools on their workstations, so use of those programs on some systems could in and of itself be suspicious. If these tools are launched by 32‐bit malware, the 32‐bit versions of those tools (located in the %SystemRoot%\SysWOW64 folder) will be used, which may be another indicator of malicious activity. Other host or support executables that may be relevant to detecting lateral movement include:

    • wmiprvse.exe: the WMI Provider Host used to run WMI commands

    • wsmprovhost.exe: the host process for PowerShell Remoting activity

    • winrshost.exe: the host process for Windows Remote Shell on the destination machine (winrs.exe is used to launch the command on the originating machine)

    • rdpclip.exe: used to support clipboard and other functionality for Remote Desktop Protocol sessions on the accessed system

    • wscript.exe: the Windows Script Host (WSH) process for scripts using a graphical user interface

    • cscript.exe: the WSH process for scripts using a command‐line interface
URLs

https://digital-forensics.sans.org/media/SANS_Poster_2018_Hunt_Evil_FINAL.pdf

https://cert.europa.eu/static/WhitePapers/CERT-EU_SWP_17-002_Lateral_Movements.pdf

https://apps.nsa.gov/iaarchive/library/reports/spotting-the-adversary-with-windows-event-log-monitoring.cfm

https://blog.stealthbits.com/how-to-detect-pass-the-hash-attacks

https://digital-forensics.sans.org/blog/2014/11/24/kerberos-in-the-crosshairs-golden-tickets-silver-tickets-mitm-more

https://youtu.be/lJQn06QLwEw

https://github.com/gentilkiwi/mimikatz

https://github.com/GhostPack/Rubeus

https://blog.stealthbits.com/detect‐pass‐the‐ticket‐attacks

https://blog.stealthbits.com/how-to-detect-overpass-the-hash-attacks

https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/manage/ad-forest-recovery-resetting-the-krbtgt-password

https://attack.stealthbits.com/how-dcshadow-persistence-attack-works

https://blog.stealthbits.com/extracting-user-password-data-with-mimikatz-dcsync

https://youtu.be/HHJWfG9b0-E

https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview

https://github.com/malcomvetter/CSExec

https://github.com/kavika13/RemCom

https://github.com/inguardians/ServifyThis

https://github.com/fireeye/flare-wmi/tree/master/python-cim

https://in.security/an-intro-into-abusing-and-identifying-wmi-event-subscriptions-for-persistence

Targets

    • Target

      Applied Incident Response (Steve Anson) (z-lib.org).epub

    • Size

      38.6MB

    • MD5

      2dd660158c81bafacd4002328e1ec2dd

    • SHA1

      18e959a5f58a0cda59a3d9958adf2c1a237a5806

    • SHA256

      4654817ffdd9dbd995fa2b83359486541caa417c76e0e95cbd6ec7b910e3007d

    • SHA512

      7b78aa71975e5511a8d6aa307faa74ede9e454a230b991a9823834369e83af1c896f1cfcbe0359231f43a5661aa11c4410bf8efe24e8f8d7874f09e533855a68

    • SSDEEP

      786432:i9r+wvGysCB2r0wtqe5EhrUZZDYeh73J3hhl1uyw4uzDrm1Z28PR/s0:iJ+wf4r0sQhjw5d1uyw4uzdsRf

    Score
    1/10
    • Target

      OPS/b01.xhtml

    • Size

      134KB

    • MD5

      c47f168492dbebb22389630fc9b096b1

    • SHA1

      a4b613091cf0d254798fb7005227eb07f5f50bb0

    • SHA256

      683135aa4dd7f5e13334a021827dde8909e47429d93305ab2661fed45c594795

    • SHA512

      a0b52a479a9cf0479bc3ae29c841dd06b77eb9b03b188731e6383c704762db3c9927354055396a6981f94f65516c5cca294026a23d4203a99efc61ea4718ea1e

    • SSDEEP

      1536:nsZeN/OgBKt/kWatAupOugouhmsSBcNnK2oVuaPp:J5OyHOu3uRSBcNnK2oVuaPp

    Score
    1/10
    • Target

      OPS/c01.xhtml

    • Size

      53KB

    • MD5

      cf67623f694f98a21a21c6bcac077b30

    • SHA1

      528bce8c7312665068c8202603ff1f2b0b3cfa20

    • SHA256

      a752c8c0ee902fdfa85df3ea4ad972d770271e07f509f054b5f8523d95590b16

    • SHA512

      690f13b525826e573407127d1076bf41a633aca928e03d3c5651ad86900292ef0bc3985c8a5981ac42c9b1fa56c4fc4dc1252d22a400cf557e47cfd0dbb31a12

    • SSDEEP

      768:BSL9pY5DUhUn1QDChF03m0OJzcNxZgwbge2PItxbz1v8/JrJPJvWKgZpiNyAj:35DWUn1QDByGDx3t8/JVxvWKr

    Score
    1/10
    • Target

      OPS/c02.xhtml

    • Size

      76KB

    • MD5

      0c971aa10fbfa258b2e829a8dabc769c

    • SHA1

      19c6d65ba52177d1842806852f15c108fe0f60b6

    • SHA256

      d4b133a906952855149b885d0f7c4e46fe2b50e7a34c3ca6edebf659c70f40e5

    • SHA512

      46cf1fe54250ab9bae0e7f0edf51662fd2e10caccf5cd6b8fcebc35b4214fed220d2e2f45012ac4499f1461fe2a7e09cc6e6df948c7fb7cbe9a2f37d13e3af47

    • SSDEEP

      768:Yhsjqtj929opQsj0wQWnmcyaHun8F1VoJQFD1r82WcsrDq+8LvboW6+t4rUWOtxR:bNqmcyaHunQFD1r7KEz6z8PYSR4BpAN

    Score
    1/10
    • Target

      OPS/c03.xhtml

    • Size

      60KB

    • MD5

      5b89cef01fe2a84e3b1dd9a8fba43a96

    • SHA1

      929255ba834c96a607b2617f4c72defdcc06cfaf

    • SHA256

      23d9fea3de21764871ec79b36e8b4ebfe71c25321b8576273f71a555eafe961b

    • SHA512

      9c0e671a5c19a0c5c8c5b45d2719bc849f876a573e85e29cec354dca9c65608d10d857d118463e89929bfb246adc993151779ebeb59ff916467c8e263f56df83

    • SSDEEP

      768:9rK+Hp39awVsHFb3X24GQ221uvOU4fJ4vGJDnvmAqTUcAYIfbTmXYzOffrXy+Kbp:hAHuS4SrqTUtGz+DQG+UZW98p

    Score
    1/10
    • Target

      OPS/c04.xhtml

    • Size

      104KB

    • MD5

      c9012a826f4151ef5d3431f42009bc95

    • SHA1

      7ba19c2459d02a07d4658167b076d8eb3bf29e8a

    • SHA256

      182f364c0ddb3398241d92387bca3059f4e766a6ab16fe4a13bd5bdae09504d3

    • SHA512

      1080ff9fc19e992ffa7c7751915af85753a3e9b60b4a04156e52374e8ba19d5714bdd9060efbd73fb7d23a963581c333e205c32ac1419b6bd586b8a8baf61662

    • SSDEEP

      1536:2HzV9BEqlUna//+vWjUw5ULWh851xa3H1N9aYi:CoanLKpxa34

    Score
    1/10
    • Target

      OPS/c05.xhtml

    • Size

      78KB

    • MD5

      ba80b169dac3ce331c63218c8bf7069c

    • SHA1

      4ed6e3317efa65cd4102b2b457d2f0999e4ac370

    • SHA256

      bf70c857743e6c035d9911098de45459f9f5bc6a04f270b65fb1b52bd2002469

    • SHA512

      69eb6779e37fd5f1994973f0877076007d83eebbb3d7aa299dfcfe4cc18458336b27fc7ddf186044ec61c04845e22d6dbd56db45bbfafdef964ab5afb0e3832f

    • SSDEEP

      768:rnAbu56+4Vr2/A1dfv22Y2+Yjbo4HDddjj/WjGn+DZIiuJ0pd10GMTkRxNNPaKNB:C+AM2+YXpBwPPDj2kOR9aStBevOw

    Score
    1/10
    • Target

      OPS/c06.xhtml

    • Size

      69KB

    • MD5

      ee278961b667645c4fcdfc1004cba24c

    • SHA1

      654b7f609d194e2e92cf08c14f11ba7491f4d78a

    • SHA256

      225edb1bf14d94ac14581583e97117ddfa62f82c2ba10da9dc1e117462267189

    • SHA512

      c7d08806a4246116498ea8b8f9a2a0191231888c1b617883a1e24fa6ccc87a94a2f1c214f8b7cf63024676d0b492c3b069f8fdccb45f273d5b829548afd708ff

    • SSDEEP

      1536:bozHlN5iCajiCAQTWylAqLtQO3ylkfKKC1chSqW2j4/:boL0SVD5qLtQOClCKKmchSqW2m

    Score
    1/10
    • Target

      OPS/c07.xhtml

    • Size

      106KB

    • MD5

      2f3eb2136a810f051844a12c0b06eb66

    • SHA1

      74e813ff8c7e12eb343219c39fc9dfc57ebdfaa3

    • SHA256

      099a45379f80a23f3549d6a9c19170b46bd5f0cf407a1f4c01aa7f4840541eaf

    • SHA512

      3b3861730c25329a85cd450699ae30a70dbe8d70ce5261b54e3bc1460d1709d3c96298b71b84bafedc6796049ee0d8a0c758bbdd3f2e59128be1216a42f06a17

    • SSDEEP

      1536:I7PALHXMPdcLhuzOgq5a6C14e6HlWRN+Dismln:mPAL3MPdcL4Oa4e6HlW+ismln

    Score
    1/10
    • Target

      OPS/c08.xhtml

    • Size

      108KB

    • MD5

      35d820a82806a9cf947f62bd532c7f0d

    • SHA1

      37b1c98c784b6d7409714ec792481e3979fc9cfa

    • SHA256

      edeefdfa018344b420d01fc9cca2d18f618a43acc121044269c5f3f74960ee1e

    • SHA512

      56416d8e32130ac410cfb9d827b6e00cd49121054bc8559645dfe54c94a9ea10060b0feebf73c706ea22fa9ee57b4cdddc0e7214b51273933f579584278f6e10

    • SSDEEP

      768:QwArQpLahvWFj0+r9WgWLB5jax1Lu5Bf/cxmP2Srs0ELXs0e96qy4nMCv65EnBo0:zYOmW++JiQxjRXQS4vqC0+6opIgRT

    Score
    1/10
    • Target

      OPS/c09.xhtml

    • Size

      118KB

    • MD5

      9ee4e9a1820dbbb81c3fd8d031d24a46

    • SHA1

      a4f2d1375d06aac3185a11e7f70d2cee6ad34c2b

    • SHA256

      1ba79d7b44151376a3092b2e71a1e04bffeaa4de4d502b280ffa7c62b1dd214a

    • SHA512

      078cedf3db59d57d14d4cf0092db2a9c17947c9e77bbb300799661ddef3d83a42a0b86aae3c509902276d6f28acd05d2d84136c655c299752cb5c473ebf3cadb

    • SSDEEP

      1536:MZR2oIRkiuv0rAudh4EfI56ZODU1JAmAwsKGKmjClgVpeM:NPR8v0rAAZI56Zqb2mDpJ

    Score
    1/10
    • Target

      OPS/c10.xhtml

    • Size

      84KB

    • MD5

      d975156624d0a30165a93b2d11220d49

    • SHA1

      55fa291309b5f1c04b29a668f057a01382b57aa7

    • SHA256

      eec46c62e15ea57be6f77c2fd1a998c9bd396cba5d8e9b2ce965154dd81b9797

    • SHA512

      bbd0bd95c3e7dcf4625464551b5c18eee3cf5605a7e4b94e013504a9e7e98ffbd697dbefe6a4e302a1599f58170458f6f9d21375dd6b3c5732351ca43f4d17fc

    • SSDEEP

      1536:uJwNqUQRLDm3qCOzYyPtLCXc3O0CIm9oaPV5q64L:u2NtQRvm3qCWYyVLf3uIm9oaPVE64L

    Score
    1/10
    • Target

      OPS/c11.xhtml

    • Size

      96KB

    • MD5

      fe7aaaa190db6dcb6909ad5b39ee3ece

    • SHA1

      fe2daea424de0140242d5acf5fc5476ad57fe04a

    • SHA256

      400ee60c3f6c1e6d00bf725f4bde6342ba9c3983dfe7931db66bf6ab44393210

    • SHA512

      cedc53663eddf34ded70e0af00d6a5601d657b73bd865c1549369122a8c6301f61f76e9413013aeb4d6edc6990bb8fbc365fc676f31eb04b5ad933cb1d34f6ee

    • SSDEEP

      1536:pHSGRC4XZ7AfxjZLgjTVSbVmVncoy0mAHBb:pHBU+yZOc0VnpRmAHBb

    Score
    1/10
    • Target

      OPS/c12.xhtml

    • Size

      102KB

    • MD5

      f94a640757e4873d19e8b3b6d2e05d63

    • SHA1

      3684ffc8e66dd6b5b8ecc97e500053d94188aa16

    • SHA256

      ac220cb98bb0581e43bab7829cc9fbe3a46e94ad6718c2731e1649af39b9a7b4

    • SHA512

      aef55cb51927bc1d209cbb87b1555c834aaabc2401ab580ee1cdca9113bb0523e5df7614113e55441d387d69f149755f4f396c4ac1f5b95ff332914f866d4432

    • SSDEEP

      1536:FTh1Hqn5de7d0H6wbKzMUR5H4yN9dY2EWlMWxh:dh1Hq5A7AbQX4yG2NVh

    Score
    1/10
    • Target

      OPS/c13.xhtml

    • Size

      50KB

    • MD5

      e07d02072c77fbfd4e9bf37d601bde1a

    • SHA1

      bb948d800d3369da73aab8ee8cbe44d5448c717d

    • SHA256

      68eba3f5fa8a890682a0b47d5da7086cdb6dd74c53b239265e249acf4dca4993

    • SHA512

      650cea1f4525b7d81e0cb2c68ef3122fae62376fb4718ea52adbfcf21bab6ef549cb2b22947f7813b22d13f5d013c15f7fd2120365d1b7548aaab33a512df6d9

    • SSDEEP

      768:hdMU7oniG7fuEkWQKQGFZOIfNoN0yg8RAX5a6lw5nueW5vGIje:IpSiFxy5+Ja6l61I6

    Score
    1/10
    • Target

      OPS/c14.xhtml

    • Size

      48KB

    • MD5

      c900e324885f1f5327b9cf3a53483ebc

    • SHA1

      fb3219af50dc94ec4e4300f9c0a96fa7fb18d6c7

    • SHA256

      275060c5eeccc64eb5bf6148c9ca59ce20f6cd4fce315ad1915a4ebccb384fe0

    • SHA512

      8524d21ea19bf6cebae8effe4fd4a95ba3daf9ba1a2a2c226f0574751a4be599c6eea2db1b95ead07daf17e7304093bc98fe06c97434eb34aa52bcde4986c13e

    • SSDEEP

      768:LxTK0H+Kz+LtMR5BTZTb7b5RE5INckZKQJmKaXun1HYN+Jmjs8:tTbfNpSyNZKX4YB

    Score
    1/10

MITRE ATT&CK Enterprise v6

Tasks