General

  • Target

    Practical Social Engineering - Joe Gray.epub

  • Size

    8.1MB

  • MD5

    fba6717bd9dd95c0faec1675205d3591

  • SHA1

    060e82a9dab410429b1db6f6a258c5b78492bc01

  • SHA256

    d29f193806c0ff443ce1719b4a126854ec7174151aa2f3136cb1009b921ef80b

  • SHA512

    fa67866350b7dee8278008d72fac3c33dc7defaba25807bafff26b2953992f41410af1824df8f893afca55ad87ff532f87895098446d357987a4b4920eab36e9

  • SSDEEP

    196608:XmUoRsEQ7H19fFvFezRVPnwHV5KxhG0ep/Wb81+ZCgU:Xm3RsEQ7H19fFvFerP6m1Wf1Ei
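
The digests listed above can be reproduced with any standard hashing tool. As a sketch, Python's stdlib `hashlib` computes all four in a single pass over the file (the path below is the sample's filename and is assumed to be present locally):

```python
import hashlib

def digest_file(path):
    """Compute the MD5, SHA1, SHA256, and SHA512 digests of a file
    in a single pass, reading in 1 MiB chunks."""
    algos = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256", "sha512")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for h in algos.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in algos.items()}

# Example (hypothetical local copy of the sample):
# digest_file("Practical Social Engineering - Joe Gray.epub")["sha256"]
```

SSDEEP is a fuzzy (context-triggered piecewise) hash and is not in the standard library; it requires a separate tool such as the `ssdeep` utility.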

Score
10/10

Malware Config

Extracted

Ransom Note
2 Ethical Considerations in Social Engineering

Unlike network and web application penetration testing, the impact of social engineering can extend beyond the confines of a laptop or server. When you’re interacting with real people, you have to take special precautions to avoid hurting them. You must also make sure you abide by the laws in your area, as well as in the locations of any people or businesses you’ll be targeting. While there may not be a legal precedent that directs you to collect OSINT in a specific way—or restricts you from collecting OSINT at all—some laws, like the European Union (EU) General Data Protection Regulation (GDPR), place specific liabilities and repercussions on you for the data that you collect and dictate how you must protect it. This chapter outlines guidelines for conducting social engineering and collecting OSINT legally and ethically.

Ethical Social Engineering

Let’s start by talking about the social engineering attack itself. In a social engineering engagement, you must be sensitive to how a target will feel as a result of your actions. This can be tricky: you have to show that a company is vulnerable, usually because its employees lack proper training or processes to follow, without victimizing or villainizing the person who revealed those vulnerabilities to you.

One way to protect people is to keep them relatively anonymous to your client. For example, instead of reporting that Ed in Accounting clicked a link in a phishing email, say that someone in either Accounting or Finance fell victim to a phishing attack. In doing so, consider the size of the organization and the ability of peers to guess the identity of the victim from the details you give.
If you’re working at a small company—say, No Starch Press, the publisher of this book—you might avoid saying that Bill Pollock, the company’s founder, had posted too much permissive information publicly to Facebook, and opt instead to state that a manager lacked privacy and access controls on social media.

The actual bad guys likely won’t adhere to similar boundaries. In penetration testing, however, we shouldn’t copy everything the bad guys do. If we did, we’d be using denial-of-service attacks (attacks against networks and systems that keep legitimate users and services from being able to access them) against penetration testing clients; doxing clients by publicly releasing their personal information, such as their address, email address, and phone number; and deploying ransomware (malicious software that requires victims to pay a ransom in order to unlock their data). Here are some tips for protecting people in your social engineering engagements.

Establishing Boundaries

The following should go without saying: if people ask you to stop talking to them, or if they end conversations, you should stop. Also, although you can view a target’s public posts on social media and build a profile on them, you should never do the following:

  • Target their personal accounts (this includes connecting with them)

  • Target them outside of work

Imagine that someone continuously asked you work questions when you were at home. You wouldn’t like it, would you? Acceptable use of social media for collecting OSINT includes looking for public data about work, mentions of specific software or technologies, or occurrences of a routine username.

Understanding Legal Considerations

When it comes to performing social engineering, there are two main legal considerations: spoofing and recording. Beyond these issues, one best practice for avoiding legal trouble is to ensure that you’re targeting assets owned by your client, rather than any bring-your-own-device (BYOD) systems owned by employees.
Some states, like Tennessee, have laws that make spoofing phone numbers illegal. If you’re spoofing as part of an adversary emulation that is authorized by contract with your client company, and if you’re targeting company-owned assets only, you are generally in the clear.

When it comes to recording a call, some states require two-party consent, meaning both you and the victim must consent, while others require single-party consent, meaning it’s enough for you to consent. Whether a company can serve as the second party of consent for recording its employees on company-owned devices is a legal gray area. Table 2-1 lists two-party states. If asked to record calls, refer to your legal counsel for further clarification in your specific location.

Table 2-1: States That Require Both Parties to Consent to a Phone Call Recording

  Two-party states    Notes
  Nevada              Per Lane v. Allstate, you should treat Nevada as a two-party state.

Understanding Service Considerations

You might also run into trouble if you violate a service’s terms of use. In 2019, Mike Felch at Black Hills Information Security published a pair of blog posts about selecting the software services to use when phishing. Entitled “How to Purge Google and Start Over,” parts 1 and 2, these posts discuss his experience using G Suite (the Google productivity platform now called Google Workspace) as both a target and an attack tool. Felch explains how he compromised credentials and used CredSniper to bypass multifactor authentication.

That’s where the story takes an interesting turn. He was detected by both the client’s Security Operations Center (SOC) and Google’s SOC. As a result, Google not only disabled the account he was using, but also (presumably through the use of some OSINT and its own detection algorithms) started to lock him and his wife out of other, unrelated accounts for Google services that they used.
The moral of the story: before the engagement, coordinate with any other providers the client may use, to ensure you don’t get locked out of everything, including, as in Mike’s case, your thermostat.

Debriefing After the Engagement

After performing social engineering operations, it’s important to debrief the organization and the targeted employees. Debriefing involves making victims aware, in a broad sense, of the techniques you used and the information you gathered. You don’t have to tell the entire organization that Jane in Finance uses her husband John’s name as a password, or that Madison is having problems with her uncle. Keep the report you give your clients anonymous and leave out specifics; simply tell them that you found some employees using their spouses’ names as passwords, or that you easily discovered information about their personal relationships.

One way to navigate this ethical issue is to maintain a list of those who fell victim to the engagement and how they failed the assessment, while still redacting their names from the report. If your point of contact at the organization asks for that information, you might provide names so long as the company agrees not to terminate the employees. This negotiation can sometimes be a clause in contracts between social engineers and their clients. If the company is failing to train its employees, it’s not fair to fire them for security missteps. On the other hand, your report should name the people who stopped the engagement from succeeding. These people took actions to protect the organization, and they should be recognized and rewarded.

From an organizational perspective, management should let employees know that the company itself did not go snooping on them. Instead, it should be clear that the company paid someone else to snoop, then filtered the data down to only the information relevant to the business, to keep the employees’ personal lives private.
Furthermore, the organization should use the report you provide, along with recommendations and example scenarios, to train employees to be more secure. When giving presentations at conferences like DerbyCon, Hacker Halted, and various Security BSides events, I follow the same rules as I do in reporting. You never know whether one of the people who fell victim to the attack is in the room, so avoid publicly shaming people. Praise in public, and reprimand in private. Inspire people to be more vigilant and to report issues to the appropriate people.

Case Study: Social Engineering Taken Too Far

In 2012, while pregnant with Prince George of Cambridge, Duchess Kate Middleton was hospitalized for extreme morning sickness. The public and the media soon found out, and at 5:30 AM, a pair of DJs at an Australian radio show called the hospital, posing as the Queen of England and Prince Charles. The hosts mimicked their accents and requested an update on Middleton. The nurse working reception answered the phone. Believing the call was legitimate, she put them through to Middleton’s personal nurse, who provided various details of her condition. The DJs recorded the call and played it on the air.

The program got international attention. Before the hospital could take any action, the nurse who answered the phone was found dead of an apparent suicide. Prince William and Duchess Kate released a statement expressing their deep sadness about the incident and offering condolences to those close to the nurse.

This is an example of social engineering gone too far. Pranks are pranks, but at some point during the call, the pranksters should have revealed themselves. They also shouldn’t have broadcast the stunt to a vast audience. The radio show seems to have been canceled, and the show’s and hosts’ Twitter accounts seem to have been deleted. The hosts issued a formal public apology—all too late after an avoidable tragedy.
While this action was more of a tasteless prank than an attack, the incident fits the APA definition of manipulation, because the DJs were not acting with the victim’s best interests in mind. Had they not broadcast the call, their action might have been closer to influence, though the best solution would have been not to make the call in the first place.

Ethical OSINT Collection

Now that I’ve established the legal and ethical boundaries for social engineering, we should do the same for OSINT. Many of the same considerations come into play, but the stakes are generally lower: while the information you find through OSINT gathering could affect the well-being of your targets, you’re not interacting with them directly. Still, this doesn’t mean you should collect all the data out there on every target.

Protecting Data

You should assess how long to retain any data you collect, how to destroy the data, what value to assign to the data, what the outcome of losing the data would be, and how someone might attempt to compromise the data.

Digital forensics and law enforcement often rely on the concept of the chain of custody when dealing with data. The chain of custody seeks to preserve any evidence collected in a secure state, from the time of collection to disposal. This requires keeping all evidence in a secure and controlled location, such as an evidence locker, as you may have seen in police shows on TV. A person accessing the evidence has to demonstrate a legitimate need and sign the evidence out, then back in, for accountability.

Digitally, enforcing a chain of custody is a little harder, but it’s possible if you take certain precautions. The first is practicing good security hygiene, which we’ll discuss next. For each investigation, you need a dedicated virtual machine that you will use exclusively for that engagement. The machine needs to be encrypted with a strong password.
Once you’re finished with the investigation, determine the retention requirements. Store the files that make up the virtual machine on a disk. A CD or DVD may be big enough, or you may need a bigger drive, such as a USB thumb drive or external hard drive. As an additional layer of security, you could encrypt the drive itself and securely store it, disconnected from any computers, behind some sort of physical access control, such as a lock and key.

Digital hygiene is nothing more than the consistent application of security best practices. Have a form of malware protection on your devices, and don’t reuse passwords (and use strong passwords). You should also set up a password manager and multifactor authentication at every opportunity. This is but the tip of the iceberg, but these steps can help ensure that no one can call the integrity of your data into question, especially if the OSINT you’re collecting is for litigation.

To assign value to data, consider what damage could be done to the company or person with it. I never collect social security numbers, but if I did, I would assign them a very high value. If I collect a name or an email along with a password, I assign them the highest level possible. Finding this information indicates that an organization or employee has suffered a breach, and you should exercise due care. That being said, if the organization can demonstrate that the user in question is technically prohibited from using the password in question, you might reduce the finding to low or informational severity. A password without a person tied to it will also have a lower value, although you could use it in a password-spraying attack on the company. (In password spraying, an adversary uses a single password in an attempt to brute-force numerous accounts, such as trying a default password across all observed accounts.)
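
The password-spraying idea described above can be sketched as a loop that tries one password across many accounts, rather than many passwords against one account. The `try_login` function here is a stand-in for an authorized test harness, not a real authentication call, and the directory contents are made up:

```python
def password_spray(usernames, password, try_login):
    """Try a single password against many accounts (the inverse of
    classic brute force, which tries many passwords on one account).
    One attempt per account also tends to stay under lockout thresholds."""
    hits = []
    for user in usernames:
        if try_login(user, password):
            hits.append(user)
    return hits

# Hypothetical stub standing in for an authorized test target:
# only 'jane' uses the sprayed password.
fake_directory = {"jane": "Winter2020!", "ed": "hunter2", "madison": "s3cret"}
check = lambda u, p: fake_directory.get(u) == p
print(password_spray(fake_directory, "Winter2020!", check))  # ['jane']
```
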
In summary, protect your sensitive data by minimizing access to the system on which it’s stored, keeping it patched and up to date, disabling unnecessary services, employing strong passwords, and using multifactor authentication when you can. Encrypt the data whenever possible as well. Even if someone compromises the data, it will be worthless to them if they can’t break the encryption.

Following Laws and Regulations

This section covers potential legal considerations for collecting OSINT. While the GDPR is the main law affecting OSINT, other countries, states, and jurisdictions have enacted similar laws concerning the loss of personal information as a result of a data breach. Collecting OSINT is not a data breach in itself, but because no legal precedent has yet established how GDPR and similar laws apply to OSINT, you should treat these laws as relevant to your activities.

As of May 25, 2018, GDPR regulates what you can do with data belonging to citizens of the EU. The regulation aims to protect citizens and residents of the EU regarding the collection and use of their data. In essence, it empowers EU citizens and residents, as consumers, to take agency over the data that is collected from them and about them. After GDPR passed in 2016, businesses were given two years to become compliant; May 25, 2018 was the date by which all companies, globally, had to be in compliance. A company that violates GDPR can face fines of up to 4 percent of its global annual revenue. This should provide an incentive to protect any information gathered about EU citizens (in the EU and abroad) and people visiting the EU.

GDPR’s main impact on social engineering and OSINT is that it gives people the ability to limit others’ collection of their personal information (PI) and sensitive personal information (SPI), which, in turn, reduces their OSINT attack surface.
Additionally, it creates penalties for companies that collect and store PI and SPI belonging to EU citizens if that data is breached and made publicly accessible. Another important provision in GDPR is the right to be forgotten. This provision allows private citizens to query the information a data owner or data processor holds on them, and to request timely removal of their PI or SPI.

If you are in law enforcement (federal, state, municipal, or otherwise) or a licensed private investigator, specific codes of ethics and legal precedents direct the parameters by which you can gather and use OSINT. Review any applicable laws or consult legal counsel before engaging in any OSINT gathering operations.

The American Civil Liberties Union (ACLU) published a document in 2012 warning about the slippery slope associated with using big data and other techniques, including OSINT, to attempt to identify potential criminals before they act. The ACLU discussed the practice of gathering mass data fr

Extracted

Ransom Note
11 Technical Email Controls

So far, we’ve performed phishing attacks and learned how to train users to notice them. We’ve also discussed how to respond when people fall victim to social engineering despite our training. This chapter covers the implementation of technical email controls to help provide a safety net for the organization and remove some of this burden from the user. In addition, we’ll discuss email appliances and services that can filter and manage emails. But before we get into those, let’s look at the actual standards associated with the technical side of email controls.

Standards

As email has evolved, so have the technologies to protect it. And as those technologies have evolved, so have the attack patterns, becoming, as with anything in the information security field, a continuous game of cat and mouse. Over time, security professionals have proposed, debated, and approved a variety of standards. When it comes to securing email, there are three major ones: DomainKeys Identified Mail (DKIM), Sender Policy Framework (SPF), and Domain-based Message Authentication, Reporting, and Conformance (DMARC). We’ll discuss each of these in this section.

What do these three standards do? A common misconception is that they protect your emails from incoming phishing or spoofing attempts. To some degree, they do, but it’s more accurate to describe them as protecting your reputation: if you send an email with these standards implemented, and the recipient domain is configured to check the associated records, the recipient can detect attempts at spoofing your domain. While this may seem counterintuitive and unproductive, follow along through the remainder of this chapter to see how this might help you.

In short, SPF checks whether a host or IP address is in the sender’s list, DKIM sends a digital signature, and DMARC implements both SPF and DKIM, in addition to checking alignment. DMARC also establishes reporting. SPF is considered the lowest of the security standards.
The caveat is that recipients must have their mail servers configured to check for guidance from the sender regarding the standards and then actually perform the actions.

“From” Fields

To grasp how these standards work, you need to understand the various types of From fields in an email. In addition to a Reply-To field, emails have From and MailFrom fields. The From field, also called 5322.From, displays the sender. The MailFrom field, or 5321.MailFrom, identifies the actual service that sent the email. For example, if I sent emails using MailChimp, my email address would be in the 5322.From field, and MailChimp’s server and address would be in the 5321.MailFrom field. The numbers attached to these fields come from the RFCs in which they were defined.

Here’s another easy way to think about it: the 5321.MailFrom field is the equivalent of a return address on an envelope mailed using the postal service, while the 5322.From field is the equivalent of a return address at the top of a letter contained within the envelope.

Now let’s cover these three standards in chronological order, beginning with DKIM.

DomainKeys Identified Mail

DKIM became an internet standard in 2011. It seeks to authenticate emails and prevent spoofing by requiring senders to cryptographically sign parts of the email, including the 5322.From field. Seeing as an attacker probably won’t have access to the private key used to digitally sign the field, email recipients can rapidly identify spoofing attempts. The DKIM header, a field included in the email message, specifies where to get the public key that can verify the signature. The public key is stored in a DNS TXT record located using the DNS domain (d=) and selector (s=) tags found in the email message. The DKIM public key is the only part of the framework viewable to the general population, but finding it hinges on knowing the selector, which you can do only if you received an email from the domain (or manage to brute-force it).
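
The envelope-versus-header distinction above can be seen with Python's stdlib email parser. The Return-Path header, when a receiving server adds it, records the 5321.MailFrom (envelope) value, while the From header is the 5322.From field a mail client displays; the addresses here are made up:

```python
from email import message_from_string

# A minimal received message; the receiving server has stamped
# Return-Path with the envelope sender (5321.MailFrom).
raw = """\
Return-Path: <bounce@mail.bulkmailer-example.com>
From: Joe <joe@example.com>
To: target@example.org
Subject: Hello

Body text.
"""

msg = message_from_string(raw)
print(msg["From"])         # 5322.From: what the recipient sees in their client
print(msg["Return-Path"])  # reflects 5321.MailFrom: the sending service
```

A mismatch between these two fields is normal for mailing-list and bulk-mail services, which is exactly why DMARC checks their alignment rather than simply rejecting mismatches.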
The DKIM process is as follows. First, you compose an email. As the email is sent, the private key associated with your DKIM entry creates two digital signatures that prove the authenticity of the email: one for the DKIM header itself, and one for the body of the email. Each email has a unique pair of signatures. The signatures are placed in the header and sent along with the email. Once the email is received, and if the recipient mail server has DKIM configured, the server verifies the message’s authenticity by using the public key published in the sender’s DNS records. If the public key successfully verifies the signatures, the email is authentic and wasn’t altered.

This said, DKIM isn’t often used for authentication. Instead, we mostly use it to verify integrity, and for something called DMARC alignment, discussed in “Domain-Based Message Authentication, Reporting, and Conformance” later in this chapter.

One of the shortcomings of DKIM is that it’s effective only if both the sender and recipient implement it. Furthermore, even if your organization implements DKIM internally, it can protect your users only from external actors spoofing internal employees, which is good for your reputation but does little to achieve security otherwise. After all, actors might spoof a trusted third party. And as mentioned earlier, the recipient must have their mail servers configured to check the DKIM authentication, which is typically accomplished by implementing DMARC. In the absence of DMARC, authentication failures are still passed to the recipient.

DKIM was first introduced in RFC 6376. Later, RFC 8301 amended it with the following specification regarding the type of encryption DKIM could use:

  Two algorithms are defined by this specification at this time: rsa-sha1 and rsa-sha256. Signers MUST sign using rsa-sha256. Verifiers MUST be able to verify using rsa-sha256. rsa-sha1 MUST NOT be used for signing or verifying.
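
The body-signature half of the process above can be illustrated with stdlib primitives. This sketch computes only the body hash (the bh= tag of a DKIM-Signature header) under "simple" canonicalization, where trailing empty lines are reduced to a single CRLF; the RSA signing of the selected headers is omitted, since it requires the domain's private key:

```python
import base64
import hashlib

def dkim_body_hash(body: bytes) -> str:
    """Compute the bh= value for a message body under 'simple' body
    canonicalization: drop trailing empty lines, end with one CRLF,
    then SHA-256 and base64-encode (per the rsa-sha256 algorithm)."""
    canonical = body.rstrip(b"\r\n") + b"\r\n"
    return base64.b64encode(hashlib.sha256(canonical).digest()).decode()

# A real signer would then sign the chosen headers plus this hash with
# the private key, and the verifier would fetch the public key from DNS.
print(dkim_body_hash(b"Hello, world!\r\n\r\n"))
```

Note how bodies differing only in trailing blank lines produce the same hash, which is the point of canonicalization: benign transit changes should not break the signature.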
In 2018, another RFC dealing with DKIM was released: RFC 8463 added a new signing algorithm, Ed25519-SHA256, which uses SHA-256 and the Edwards-curve Digital Signature Algorithm (EdDSA) in place of an RSA key.

For DKIM to be effective, you have to configure it not only in your DNS server but on the mail server as well. Otherwise, it acts as a deterrent at best. Let’s walk through configuring DKIM on a domain hosted through Google Workspace. Other mail servers have similar features. Regular Gmail uses Google’s default DKIM keys, as do domains hosted in Workspace that do not have DKIM configured. You cannot set up your own DKIM for a Gmail account hosted at gmail.com, but you can for a domain using Workspace. According to Google’s support documents, if a user doesn’t set up their own DKIM public key, Google will use a default one.

Let’s set up our own private key. First, navigate to your Workspace administrator’s console as a Super Admin. Once you’re in the console, click Authenticate email, as shown in Figure 11-1.

Figure 11-1: Selecting the Authenticate email option

You should now see the DKIM authentication option and be prompted to select a domain to configure DKIM to support (Figure 11-2).

Figure 11-2: Beginning the DKIM configuration

Once you select Generate New Record, you will need to select a key length and the selector (Figure 11-3). Note that some hosting providers and DNS platforms do not support 2,048-bit key lengths. Per Google, if this is the case, fall back to 1,024-bit keys.

Figure 11-3: Generating the DKIM record and RSA key

From here, select the domain as appropriate and click Generate New Record. This will create the key (censored in Figure 11-4). Open a new window to copy and paste this into DNS. Once this is complete, click Start Authenticating.

Figure 11-4: DKIM record in Google Workspace

After this stage, enter the cPanel, a common domain management tool used by many hosting providers.
The cPanel should include a DNS Zone Editor, with a box that allows you to enter your public key into a TXT record (Figure 11-5).

Figure 11-5: Adding a DNS TXT record

Note that these panels might limit you to 255 characters: too short for the 2,048-bit key recommended by industry standards. (When this happened to me, I contacted support and asked them to manually enter the information on my behalf, which they reluctantly did.) Once you save the key, propagating the record could take up to 48 hours. You’ll need to click Start Authentication on the dashboard to verify it after propagation is complete. Propagation typically takes 24–48 hours, but sometimes as long as 72 hours, depending on the infrastructure and provider.

Here’s another important consideration, discussed further in the next section: you must validate that your hosting and DNS provider supports concatenated DNS entries before using anything above a 1,024-bit RSA key. Certain providers impose limits on the number of characters that can be entered into a single DNS entry. If the provider does not support concatenation, your DMARC implementation will fail alignment, as DNS will interpret the value as two unrelated TXT entries and fail to accomplish its purpose.

For setting up DKIM on other email providers, like Exchange, Office 365, and Sendmail, you can find links to several tutorials at http://email-security.seosint.xyz/.

Shortcomings of DKIM

The encryption used in DKIM has at times included vulnerabilities. Until 2018, DKIM allowed the use of the SHA-1 algorithm for signing and verification. Yet the security community has known SHA-1 to be insecure since 2010, before the DKIM standard was even created. Researchers at CWI Amsterdam and Google have since successfully performed a collision attack on the algorithm, at which point most parties in the cryptography and security communities deprecated it.
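
The 255-character limit mentioned above comes from the DNS wire format: a TXT record is made of character-strings of at most 255 bytes each, and concatenation-aware resolvers rejoin several quoted strings in one record into a single value. A small helper for splitting a long DKIM value into such chunks (the key material below is a made-up placeholder, not a real key):

```python
def split_txt_value(value: str, limit: int = 255) -> str:
    """Split a long TXT value into quoted chunks of at most `limit`
    characters, in the multi-string form that a concatenation-aware
    DNS provider will rejoin into one logical value."""
    chunks = [value[i:i + limit] for i in range(0, len(value), limit)]
    return " ".join(f'"{c}"' for c in chunks)

# Placeholder DKIM record value; real 2,048-bit keys are ~400 base64 chars.
record = "v=DKIM1; k=rsa; p=" + "A" * 400
print(split_txt_value(record))
```

If the provider treats these chunks as separate, unrelated TXT records instead of rejoining them, verification of the key (and hence DMARC alignment) will fail, which is why the text recommends checking concatenation support before choosing a 2,048-bit key.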
The collision attack allowed the parties to take two files that didn’t match and produce the same hash from them, making it appear that they matched. All major web browser vendors announced they would stop accepting SHA-1 certificates in 2017. It’s true that creating a collision at the precise location within the DKIM signing process would still require a lot of computational power, so only sophisticated and well-funded organizations, such as nation-states or large tech companies, could perform such an attack. After all, Google was one of the two parties to produce the SHA-1 collision (and it’s unlikely that Google will be attempting to send unauthorized emails to your organization). But if you have the autonomy to do so, use the more secure SHA-256.

Secondly, vulnerabilities exist in RSA, which is used as the public-key infrastructure of the DKIM standard. As I mentioned earlier, Google’s DKIM tool supports two RSA key lengths: 1,024-bit and 2,048-bit. The 2,048-bit length is the current industry minimum standard. There is significant debate as to whether RSA is secure, given mathematical, computational, and cryptographic advances since RSA’s introduction. Several academics and researchers have claimed to be able to crack RSA or reduce the RSA cryptosystem. Reducing the cryptosystem is a method of weakening its strength by identifying the large prime numbers used and factoring them. Using 1,024-bit RSA is certainly a vulnerability on paper, while using 2,048-bit RSA is discouraged but not prohibited. Pragmatically, without massive computational resources or access to quantum computing facilities, neither 1,024- nor 2,048-bit RSA can be broken in less than two million years on a single system.

Later versions of DKIM added Ed25519-SHA256 as an accepted algorithm, although it has not been widely adopted. The final weakness in DKIM is not a vulnerability, but rather a shortcoming.
DKIM is excellent to implement, and it can protect an organization’s reputation—but only if the recipient’s mail server is configured to check the DKIM signature and take action against emails claiming to come from a domain with DKIM enabled; otherwise, your organization’s reputation can still be damaged.

Sender Policy Framework

Like DKIM, the Sender Policy Framework (SPF) seeks to prevent spoofing using DNS TXT records. In these TXT records, SPF defines the hosts, domains, and IP addresses allowed to send email from within a mail environment or on behalf of a domain. While some sources describe SPF as authenticating the sender, it’s more appropriate to describe the framework as validating it: if configured to do so, the recipient checks the sender information from the 5322 and 5321 fields against the senders authorized in the SPF record. If the record is configured to hard fail, a non-matching email is rejected; if it’s configured to soft fail, the email is allowed through.

To see how this works, imagine that someone spoofs an email from a domain. The recipient checks the SPF record and observes that the sending domain is configured to hard fail and that the sender isn’t listed in the record. In that case, the email will fail to reach its destination. If there hadn’t been an SPF record, or if the policy had been neutral or set to soft fail, the email would have succeeded.

Since SPF does not require cryptography, SPF and DKIM are complementary, not competitors. SPF is logic based, as it compares incoming values to a list: the host, domain, or IP address is either in the record or it isn’t. DKIM employs both logic and cryptography in the form of digital signatures. You can read more about SPF in RFC 7208, which introduced it in 2014.

Let’s implement SPF in Google Workspace.
Begin by determining any service providers, such as Google or Outlook, and the associated domains allowed to send email on behalf of your organization. (You might specify those domains in the MX record.) If you’re running an internal mail server, like Exchange, also determine the network blocks authorized to email on behalf of the organization. Then, for these domains and IP addresses, choose a policy for various situations:

  • Pass (+) Allows all email to pass through (not recommended, unless for brief troubleshooting)

  • Neutral (?) Essentially means no policy

  • Soft fail (~) Somewhere between fail and neutral; generally these emails are accepted but tagged

  • Hard fail (-) Rejects the email

As backups, you might configure something like +all (not recommended, as it would allow all mail), mx (allows emails from the hosts listed in the MX record; not recommended if using cloud email like Google or Office 365), or include:nostarch.com (which would allow emails from nostarch.com).

Once you have this information, you’re ready to create the record. To start, navigate to the DNS editor for your hosting provider and create a new TXT record. Alternatively, edit any existing TXT records that have v=spf1 in the body, as shown here:

  ;; global options: +cmd
  ;; Got answer:
  ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6907
  ;; flags: qr rd ra; QUERY: 1, ANSWER: 15, AUTHORITY: 0, ADDITIONAL: 1
  ;; OPT PSEUDOSECTION:
  ; EDNS: version: 0, flags:; udp: 65494
  ;; QUESTION SECTION:
  ; walmart.com. IN TXT
  ;; ANSWER SECTION:
  walmart.com. 300 IN TXT "v=spf1 include:_netblocks.walmart.com include:_smartcomm.walmart.com include:_vspf1.walmart.com include:_vspf2.walmart.com include:_vspf3.walmart.com ip4:161.170.248.0/24 ip4:161.170.244.0/24 ip4:161.170.241.16/30 ip4:161.170.245.0/24 ip4:161.170.249.0/24" " ~all"
  ;; Query time: 127 msec
  ;; SERVER: 127.0.0.53#53(127.0.0.53)
  ;; WHEN: Tue Sep 08 05:42:49 UTC 2020
  ;; MSG SIZE rcvd: 1502

Set the time-to-live (TTL) value to the default of 14,400.
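
The logic-based check described above can be sketched with the stdlib ipaddress module: parse the ip4 mechanisms out of a record like the one in the dig output and test whether the connecting IP matches. This is only a sketch of the matching logic; a full SPF evaluator must also resolve include:, a, and mx mechanisms recursively, which requires live DNS lookups. The second test IP is from a documentation range, not a real sender:

```python
import ipaddress

def check_ip4(record: str, sender_ip: str):
    """Return ('pass', mechanism) if sender_ip matches an ip4: mechanism,
    otherwise ('fail', qualifier) using the record's trailing 'all' term
    ('~all' means soft fail, '-all' means hard fail)."""
    ip = ipaddress.ip_address(sender_ip)
    verdict = None
    for term in record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:]):
                return ("pass", term)
        elif term.endswith("all"):
            verdict = term  # remember the catchall qualifier
    return ("fail", verdict)

record = "v=spf1 ip4:161.170.248.0/24 ip4:161.170.244.0/24 ~all"
print(check_ip4(record, "161.170.248.17"))  # ('pass', 'ip4:161.170.248.0/24')
print(check_ip4(record, "203.0.113.9"))     # ('fail', '~all')
```
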
The TTL value is the time DNS recursive resolvers will cache our SPF record before pulling down a fresh copy (in case it has changed). Some assets, like critical services and load balancers, operate best with a very small TTL. Assets that should not change frequently or that have redundancy built in (such as MX records) should have larger TTL values; this helps combat techniques like fast flux and dynamic DNS records, commonly used in sophisticated phishing campaigns and attacks against social media sites. Then name the TXT record after the organization's domain. For the actual text, enter v=spf1, followed by the mechanisms and the policy, as discussed earlier. To define these mechanisms, you'll need to know the five typ
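An SPF record body is just a sequence of mechanism terms, each optionally prefixed by a qualifier. As a rough illustration (not from the original text), this sketch groups the terms of a record like walmart.com's by mechanism type:

```python
# Illustrative sketch: split an SPF record body into its mechanism
# types. The mechanism name is the part before the first ":", after
# stripping any leading qualifier (+, -, ~, ?).
def mechanism_types(record):
    types = {}
    for term in record.split()[1:]:                # drop the v=spf1 tag
        name = term.lstrip("+-~?").split(":")[0]   # e.g. "include", "ip4", "all"
        types.setdefault(name, []).append(term)
    return types

# Abbreviated version of the walmart.com record from the dig output:
record = ("v=spf1 include:_netblocks.walmart.com "
          "include:_smartcomm.walmart.com ip4:161.170.248.0/24 ~all")
for name, terms in mechanism_types(record).items():
    print(name, terms)
```

Running this groups the record into its include mechanisms, its ip4 netblocks, and the closing ~all policy, which is the structure you'll be entering into the TXT record.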
URLs

http://email-security.seosint.xyz/

https://toolbox.googleapps.com/apps/checkmx/

Signatures

Files

  • Practical Social Engineering - Joe Gray.epub
    .zip
  • META-INF/calibre_bookmarks.txt
  • META-INF/container.xml
    .xml
  • OPS/NSTemplate_v1.css
  • OPS/b01.xhtml
    .html
  • OPS/b02.xhtml
    .html
  • OPS/b03.xhtml
    .html
  • OPS/b04.xhtml
    .html
  • OPS/b05.xhtml
    .html
  • OPS/b06.xhtml
    .html
  • OPS/c01.xhtml
    .html
  • OPS/c02.xhtml
    .html
  • OPS/c03.xhtml
    .html
  • OPS/c04.xhtml
    .html
  • OPS/c05.xhtml
    .html
  • OPS/c06.xhtml
    .html
  • OPS/c07.xhtml
    .html
  • OPS/c08.xhtml
    .html
  • OPS/c09.xhtml
    .html
  • OPS/c10.xhtml
    .html
  • OPS/c11.xhtml
    .html
  • OPS/c12.xhtml
    .html
  • OPS/content.opf
  • OPS/cover.xhtml
    .html
  • OPS/f01.xhtml
    .html
  • OPS/f02.xhtml
    .html
  • OPS/f03.xhtml
    .html
  • OPS/f04.xhtml
    .html
  • OPS/f05.xhtml
    .html
  • OPS/f06.xhtml
    .html
  • OPS/image_fi/500983c03/f03001.png
    .png
  • OPS/image_fi/500983c03/f03002.png
    .png
  • OPS/image_fi/500983c04/f04001.png
    .png
  • OPS/image_fi/500983c04/f04002.png
    .png
  • OPS/image_fi/500983c04/f04003.png
    .png
  • OPS/image_fi/500983c04/f04004.png
    .png
  • OPS/image_fi/500983c04/f04005.png
    .png
  • OPS/image_fi/500983c04/f04006.png
    .png
  • OPS/image_fi/500983c04/f04007.png
    .png
  • OPS/image_fi/500983c04/f04008.png
    .png
  • OPS/image_fi/500983c04/f04009.png
    .png
  • OPS/image_fi/500983c04/f04010.png
    .png
  • OPS/image_fi/500983c05/f05001.png
    .png
  • OPS/image_fi/500983c05/f05002.png
    .png
  • OPS/image_fi/500983c05/f05003.png
    .png
  • OPS/image_fi/500983c05/f05004.png
    .png
  • OPS/image_fi/500983c05/f05005.png
    .png
  • OPS/image_fi/500983c05/f05006.png
    .png
  • OPS/image_fi/500983c05/f05007.png
    .png
  • OPS/image_fi/500983c05/f05008.png
    .png
  • OPS/image_fi/500983c05/f05009.png
    .png
  • OPS/image_fi/500983c05/f05010.png
    .png
  • OPS/image_fi/500983c05/f05011.png
    .png
  • OPS/image_fi/500983c05/f05012.png
    .png
  • OPS/image_fi/500983c05/f05013.png
    .png
  • OPS/image_fi/500983c05/f05014.png
    .png
  • OPS/image_fi/500983c05/f05015.png
    .png
  • OPS/image_fi/500983c05/f05016.png
    .png
  • OPS/image_fi/500983c05/f05017.png
    .png
  • OPS/image_fi/500983c06/f06001.png
    .png
  • OPS/image_fi/500983c06/f06002.png
    .png
  • OPS/image_fi/500983c06/f06003.png
    .png
  • OPS/image_fi/500983c07/f07001.png
    .png
  • OPS/image_fi/500983c07/f07002.png
    .png
  • OPS/image_fi/500983c07/f07003.png
    .png
  • OPS/image_fi/500983c07/f07004.png
    .png
  • OPS/image_fi/500983c07/f07005.png
    .png
  • OPS/image_fi/500983c07/f07006.png
    .png
  • OPS/image_fi/500983c07/f07007.png
    .png
  • OPS/image_fi/500983c07/f07008.png
    .png
  • OPS/image_fi/500983c07/f07009.png
    .png
  • OPS/image_fi/500983c07/f07010.png
    .png
  • OPS/image_fi/500983c08/f08001.png
    .png
  • OPS/image_fi/500983c08/f08002.png
    .png
  • OPS/image_fi/500983c08/f08003.png
    .png
  • OPS/image_fi/500983c08/f08004.png
    .png
  • OPS/image_fi/500983c08/f08005.png
    .png
  • OPS/image_fi/500983c08/f08006_new.png
    .png
  • OPS/image_fi/500983c08/f08007.png
    .png
  • OPS/image_fi/500983c10/f10001.png
    .png
  • OPS/image_fi/500983c11/f11001.png
    .png
  • OPS/image_fi/500983c11/f11002.png
    .png
  • OPS/image_fi/500983c11/f11003.png
    .png
  • OPS/image_fi/500983c11/f11004.png
    .png
  • OPS/image_fi/500983c11/f11005.png
    .png
  • OPS/image_fi/500983c12/f12001.png
    .png
  • OPS/image_fi/500983c12/f12002.png
    .png
  • OPS/image_fi/500983c12/f12003.png
    .png
  • OPS/image_fi/500983c12/f12004.png
    .png
  • OPS/image_fi/500983c12/f12005.png
    .png
  • OPS/image_fi/500983c12/f12006.png
    .png
  • OPS/image_fi/500983c12/f12007.png
    .png
  • OPS/image_fi/500983c12/f12008.png
    .png
  • OPS/image_fi/500983c12/f12009.png
    .png
  • OPS/image_fi/500983c12/f12010.png
    .png
  • OPS/image_fi/500983c12/f12011.png
    .png
  • OPS/image_fi/500983c12/f12012.png
    .png
  • OPS/image_fi/500983c12/f12013_updated.png
    .png
  • OPS/image_fi/500983c12/f12014.png
    .png
  • OPS/image_fi/500983c12/f12015.png
    .png
  • OPS/image_fi/500983c12/f12016.png
    .png
  • OPS/image_fi/500983c12/f12017.png
    .png
  • OPS/image_fi/500983c12/f12018_updated.png
    .png
  • OPS/image_fi/book_art/NSAnnotations-Mono.otf
  • OPS/image_fi/book_art/NSAnnotations500-Mono.otf
  • OPS/image_fi/book_art/chapterart.png
    .png
  • OPS/image_fi/book_art/cover.png
    .png
  • OPS/image_fi/book_art/nsp_logo_black_no-text.png
    .png
  • OPS/image_fi/book_art/nsp_logo_black_rk.png
    .png
  • OPS/p01.xhtml
    .html
  • OPS/p02.xhtml
    .html
  • OPS/p03.xhtml
    .html
  • OPS/toc.ncx
    .xml
  • OPS/toc.xhtml
    .html
  • mimetype