
CS445: Cyber Threat Intelligence Cheat Sheet (DRAFT) by thtrqhtrqnrnyrhyraththtrh

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Chapter 1: Intro to CTI

What is Intelligence? HUMINT, GEOINT, MASINT, SIGINT, OSINT (focus)
Intelligence lifecycle: Operational environment -> Data collected -> Data is processed and exploited to obtain information -> Information is analysed and utilised -> Intelligence
Analysis: 1. Requires analysts to immerse themselves in ambiguous situations. Data/information may not be useful, so analysts generate hypotheses to determine possible answers; each hypothesis is then tested against evidence. 2. Analytical judgements should follow a process of searching for, sorting, structuring and evaluating data/information. Even if there is not enough time or data, a decision should still be made.
Forensic process: Systematic investigation used to uncover what happened during an incident (like a cyberattack) by examining the evidence. The goal is to gather facts that are defensible, repeatable, and understandable.
Defensibility: Your conclusions must be backed by evidence.
Repeatability: Another investigator should be able to follow your process and reach the same conclusion.
Understandability: Your findings must be clear and easy to explain to others, including non-technical people (like executives or law enforcement).
Everyone views issues in different ways. Perception should be active rather than passive (don't passively accept data; actively interpret it).
Don't let your own views cloud your analysis, since critical situations are ambiguous situations.
WannaCry: Ransomware worm that exploited a vulnerability in the Windows OS. Infected ~300k machines. Adversary attributed to North Korea.
Adversary intent is one of the hardest questions to crack in cyber security. Understanding actor intent helps structure defences.
What is CTI? Gathering, processing, and analysing information about potential and active cyber threats. The goal is to help organisations make better security decisions by staying ahead of criminals.
Info vs Intel: 1. Info: raw, unfiltered feed; not actionable. 2. Intel: processed, sorted information; actionable.
Why use CTI? Prevent, mitigate, and solve threats. Make correct decisions to: 1. Prevent significant losses. 2. Keep ourselves safe. 3. Protect the sovereignty of our society.
Assets: Anything valuable that needs protection.
Vulnerability: Weakness that can be exploited.
Threat: Something that can exploit a vulnerability to harm an asset.
Risk: Likelihood and impact of a threat exploiting a vulnerability.
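A minimal sketch of the "likelihood x impact" heuristic behind the risk definition above; the 1-5 scales and the level thresholds are assumptions for illustration, not part of the course notes.

```python
# Toy risk scoring: likelihood x impact on assumed 1-5 scales.
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale."""
    return likelihood * impact

def risk_level(score: int) -> str:
    # Thresholds are illustrative assumptions, not a standard.
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: an easily exploited vulnerability (likelihood 4) on a critical
# asset (impact 5) scores 20 -> "high" risk.
print(risk_level(risk_score(4, 5)))
```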
Threat actors: 1. Nation states: the big 4 (Russia, China, North Korea, Iran). 2. Hacktivists: Individuals or groups with political motivations. 3. Cyber criminals: Attackers seeking financial gain.
Zero-day vulnerability: A vulnerability that has not yet been publicly disclosed or patched.
Advantages of intelligence-led security: 1. Mitigate risk. 2. Help make better decisions. 3. Prioritise resources. 4. Ensure value of operations. 5. Sync between intel and core business.
Understand true risk -> Inform business and develop risk mitigation -> Build proactive and reactive strategies -> Demand right budgets + drive right investments.
Types of CTI: 1. Strategic, 2. Operational, 3. Tactical
Strategic intelligence: Focused on high-level trends and adversarial motives; leverages this understanding to engage in strategic security and business decision making. Stakeholders: C-suite, executive board, strategic intel. (Who/why questions)
Tactical intelligence: Focused on performing malware analysis and feeding behavioural threat indicators into defensive cybersecurity systems. Stakeholders: SOC analyst, SIEM, firewall, IDS. (What questions)
Operational intelligence: Focused on understanding adversarial capabilities, infrastructure and TTPs, and leveraging that understanding to conduct more targeted and prioritised cybersecurity operations. Stakeholders: 1. Threat hunter, 2. SOC analyst, 3. Incident response, 4. Vulnerability management. (How/where questions)
TTP: 1. Tactics: Describe what an adversary is trying to accomplish, aka the tactical objective. 2. Techniques: Represent how the threat actor achieves the tactical objective. 3. Procedures: Analysis of the procedures used by an adversary can help understand what the adversary is looking for within the target infrastructure.
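The tactic -> technique -> procedure hierarchy can be sketched as a small data structure. The example below uses MITRE ATT&CK-style naming; the specific entries are illustrative assumptions, not an official mapping.

```python
# Illustrative tactic -> technique -> procedure record (ATT&CK-style names).
ttp_example = {
    "tactic": "Initial Access",          # what the adversary is trying to accomplish
    "technique": "Phishing (T1566)",     # how the tactical objective is achieved
    "procedure": (                       # the concrete, observed implementation
        "Spearphishing email carrying a malicious attachment sent to finance staff"
    ),
}

for level, value in ttp_example.items():
    print(f"{level:>9}: {value}")
```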
Models to convey cyber activity: 1. Mandiant Attack Lifecycle (to be covered in detail). 2. MITRE ATT&CK: Framework that maps out tactics & techniques used by attackers. 3. Diamond Model of Intrusion Analysis. 4. Pyramid of Pain.
Diamond Model of Intrusion Analysis: Framework used in cybersecurity to help analysts understand cyberattacks by identifying the key components of an intrusion and the relationships between them. 1. Adversary – The attacker (e.g., hacker group). 2. Victim – The target (e.g., company, person, or system). 3. Capability – The tools or methods the attacker used (e.g., malware, phishing). 4. Infrastructure – The resources used to carry out the attack (e.g., IP addresses, domains).
Pyramid of Pain: Ranks how hard it is for an attacker to change each type of indicator. Bottom is hash values, since tiny changes in a file produce a completely different hash output. Top is TTPs, since attackers' core methods are difficult to change quickly.
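A quick demonstration of why hashes sit at the bottom of the pyramid: changing a single byte of a file yields an entirely different digest, so hash-based indicators are trivial for an attacker to invalidate.

```python
import hashlib

original = b"malicious payload v1"
modified = b"malicious payload v2"   # one-byte change

# The two digests share no obvious relationship despite near-identical inputs.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())
```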
Estimative language to convey uncertainty: 1. High confidence: certain (100%), highly likely, likely (~75%). 2. Medium confidence: even chance / may (~50%). 3. Low confidence: unlikely, highly unlikely (~25%), impossible (0%).
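As a rough aide-memoire, the estimative terms above can be grouped by confidence level. Exact probability bands vary between organisations, so treat this lookup as an assumed, illustrative mapping rather than a standard.

```python
# Assumed grouping of estimative terms by confidence level (illustrative only).
ESTIMATIVE_LANGUAGE = {
    "high confidence":   ["certain", "highly likely", "likely"],
    "medium confidence": ["even chance", "may"],
    "low confidence":    ["unlikely", "highly unlikely", "impossible"],
}

def confidence_for(term: str) -> str:
    for level, terms in ESTIMATIVE_LANGUAGE.items():
        if term.lower() in terms:
            return level
    return "unknown term"

print(confidence_for("highly likely"))   # -> high confidence
```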

Chapter 2: CTI Ops

Cyber espionage: A means to gather sensitive or classified data, trade secrets or other forms of intellectual property that can be used by a threat actor for an advantage.
Financial crime: Illegal activities whose primary goal is to make money.
Hacktivism: An individual or group that utilises hacking techniques to promote a political or social agenda.
Information operations: Coordinated actions taken to influence, disrupt or exploit an adversary's decision-making process.
Analyst Tradecraft: 1. Intelligence Analysis: Like detectives piecing together clues, CTI analysts use reasoning to figure out what happened and why. 2. Technology Expertise: Analysts need to understand hardware and software engineering, systems integration, networks and protocols, exploits and vulnerabilities to spot issues.
Challenge of attribution and response: When attempting to find out who is behind an attack, incident responders typically assess both the indicators of compromise (IoCs) and the tactics, techniques and procedures (TTPs) observed during the attack. IoCs are a good place to start, but attacker infrastructure such as IP addresses and domains can be easily spoofed or regenerated, which obfuscates the attacker's real identity.
2 types of thinking: System 1: intuition; fast; permits quick judgement; how we perceive the world around us. System 2: analytical; deliberate; slow thinking process; activated when we do something that does not come naturally and requires thinking through.
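A minimal sketch of IoC matching against observed events; the field names, indicator values and log entries below are hypothetical. As noted above, IP and domain indicators are cheap to rotate, so a match is a lead, not an attribution.

```python
# Hypothetical indicator lists (documentation-range IPs) and sample events.
IOC_IPS = {"203.0.113.7", "198.51.100.23"}
IOC_DOMAINS = {"news.myworldnews.com"}

events = [
    {"src": "10.0.0.5", "dst_ip": "203.0.113.7", "dst_domain": ""},
    {"src": "10.0.0.9", "dst_ip": "192.0.2.44", "dst_domain": "example.org"},
]

# Flag any event whose destination matches a known indicator.
for event in events:
    if event["dst_ip"] in IOC_IPS or event["dst_domain"] in IOC_DOMAINS:
        print(f"ALERT: {event['src']} contacted a known indicator")
```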
Cognitive bias in CTI: Cognitive biases are mental shortcuts that sometimes lead us astray. Think of them as illusions for the brain.
5 most common analytical traps: 1. Failing to consider multiple hypotheses or explanations. 2. Ignoring inconsistencies. 3. Rejecting evidence that does not support the hypothesis. 4. Insufficient resources to capture key evidence. 5. Improperly projecting past experience.
Failure to consider visibility: A form of failing to consider multiple hypotheses or explanations. Different organisations have different views of the threat landscape: your environment, your country, your industry. Example: a suspicious email with an unknown backdoor sent to the CFO looks targeted, but if the same activity is hitting customers of European banks, it is more likely regionally focused cyber crime.
Mixing facts with assessments: Results in a failure to cope with evidence of uncertain accuracy. Example: the Team Wombat domain news.myworldnews.com resolved to the same IP address as mail.mediacorp.com (fact); concluding that mail.mediacorp.com is attributable to Team Wombat is a possible misinterpretation (assessment).
Failing to properly vet sources: Threat intelligence lives and dies on the quality of its inputs: garbage in, garbage out. Many organisations start their threat intelligence program by signing up for a series of open source threat feeds without a proper vetting process in place, which can result in a flood of alerts that are difficult to trust or differentiate.
Failure to account for human action: In computer operations we deal with data, but it is easy to forget that there is a person behind the keyboard. Our minds naturally want to sort and categorise information to make sense of the environment, but are not always comfortable with grey areas.
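The fact/assessment split can be made concrete with a quick resolution check: the shared IP is the fact, and any attribution drawn from it is the assessment. The domains below are the example names from the notes and may not actually resolve.

```python
import socket

def resolve(domain: str) -> str | None:
    """Return the A record for a domain, or None if it does not resolve."""
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None

a = resolve("news.myworldnews.com")
b = resolve("mail.mediacorp.com")

if a and b and a == b:
    # FACT: both names resolve to the same IP address.
    # ASSESSMENT (needs corroboration): shared hosting, CDNs and sinkholes all
    # produce the same observation without any attribution link.
    print(f"Shared IP observed: {a} - treat attribution as an assessment")
```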
Common Biases: 1. Confirmation Bias: Seeing what you expect to see, like ignoring evidence against your belief. 2. Ambiguity Effect: Avoiding decisions because of incomplete information. 3. Bandwagon Effect: Believing something just because everyone else does.
Impact on Cybersecurity: Bias can cause analysts to misjudge situations, like assuming an attack on multiple targets must be highly organized without verifying the evidence.
Bias is inherent, and even awareness of biases is not enough to neutralise them. What to do? Heuer says that when presented with an outcome, we should ask ourselves the following questions: 1. If the opposite outcome had occurred, would I be surprised? 2. If this report had told me the opposite, would I believe it? 3. If the opposite outcome had occurred, would it have been predictable given the information that was available?
Structured Analytical Techniques: Frameworks to ensure logical and unbiased analysis. Pros: 1. Promote collaboration and clarity. 2. Show the reasoning process for conclusions, making them more transparent.
Intelligence lifecycle: 1. Planning and requirements, 2. Collection, 3. Analysis, 4. Production, 5. Dissemination and feedback
Planning and requirements: Stakeholders defined; business needs and information concerns.
Collection: From information sources; raw internal and external data; open source, commercial and sensitive.
Analysis: Collation and aggregation via a threat intel platform or analyst best practices.
Production: Estimative language, challenge analysis.
Dissemination and feedback: Role-based intelligence reporting; feedback loop firmly established.
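A minimal sketch of the five lifecycle stages chained as a pipeline; the function bodies are placeholders standing in for real feeds, threat intel platforms and analyst review.

```python
def plan(requirements: list[str]) -> list[str]:
    return requirements                                    # stakeholder-defined needs

def collect(requirements: list[str]) -> list[dict]:
    return [{"source": "osint_feed", "raw": "..."}]        # raw internal/external data

def analyse(raw: list[dict]) -> list[dict]:
    return [r for r in raw if r.get("raw")]                # collate, aggregate, vet

def produce(findings: list[dict]) -> str:
    return "Assessment: activity is LIKELY criminal ..."   # estimative language

def disseminate(report: str) -> None:
    print(report)                                          # role-based reporting + feedback

disseminate(produce(analyse(collect(plan(["Which actors target our sector?"])))))
```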
Refer to case study for more details.
Diamond Model: Connects the dots between attackers, victims, tools, and infrastructure. Four Elements: 1. Adversary: The attacker or group. 2. Infrastructure: Tools and assets like servers used in the attack. 3. Capability: The methods or techniques used (e.g., malware). 4. Victim: The target.
Considerations for the diamond model: 1. Timestamp: Date and time the intrusion event occurred. 2. Result: Outcome of the intrusion: success, failure or unknown. 3. Direction: How the event moved through the network or host (e.g. victim-to-infrastructure, adversary-to-infrastructure, bidirectional). 4. Methodology: Category of event (port scan, spear phishing). 5. Resources: Elements required for the intrusion (e.g. particular software, knowledge, funds, facilities, access rights). 6. Socio-political: Relationship between adversary and victim. 7. Technology: Tech involved in adversary capabilities and use of infrastructure.
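One way to capture a single Diamond Model event with the four vertices plus the considerations above is a simple record type; all field values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DiamondEvent:
    adversary: str
    infrastructure: str
    capability: str
    victim: str
    timestamp: str          # when the intrusion event occurred
    result: str             # success / failure / unknown
    direction: str          # e.g. adversary-to-victim, victim-to-infrastructure
    methodology: str        # e.g. port scan, spear phishing
    resources: str          # software, knowledge, funds, facilities, access
    socio_political: str    # adversary-victim relationship
    technology: str         # tech behind capability and infrastructure

event = DiamondEvent(
    adversary="Unknown intrusion set",
    infrastructure="203.0.113.7 (C2 server)",
    capability="Spear-phishing email with backdoor",
    victim="Finance department workstation",
    timestamp="2024-03-01T09:12:00Z",
    result="success",
    direction="adversary-to-victim",
    methodology="spear phishing",
    resources="phishing kit, leaked email list",
    socio_political="financially motivated targeting of a bank",
    technology="SMTP, HTTP-based C2",
)
print(event.methodology, event.result)
```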
Example: LAPSUS$ used social engineering to breach companies like Okta and Microsoft, demonstrating how attackers exploit human and technical weaknesses. Refer to case study for more details.
Cyber Kill Chain (7 stages): 1. Reconnaissance: Spying on the target to find weaknesses. 2. Weaponization: Creating tools like malicious emails or files. 3. Delivery: Sending the malicious tool to the target. 4. Exploitation: Activating the tool to break in. 5. Installation: Planting backdoors for ongoing access. 6. Command and Control (C2): Controlling infected machines remotely. 7. Actions on Objectives: Achieving the attacker's goal, like stealing data or causing disruption.
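A small sketch mapping observed activity to kill chain stages; the stage order follows the seven phases above, while the sample observations are invented.

```python
from enum import IntEnum

class KillChain(IntEnum):
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Hypothetical detections tagged with the stage they correspond to.
observations = {
    "port scan from external host": KillChain.RECONNAISSANCE,
    "phishing email with macro document": KillChain.DELIVERY,
    "beaconing to suspicious domain": KillChain.COMMAND_AND_CONTROL,
}

# Earlier-stage detections leave more room to break the chain before impact.
for activity, stage in sorted(observations.items(), key=lambda kv: kv[1]):
    print(f"Stage {stage.value} ({stage.name}): {activity}")
```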