Threat Modeling

Threat Modeling is a fundamental practice for building security into a system from its earliest design phases. Its goal is to analyze a system's potential risks and vulnerabilities in a structured way, so that defenses can be planned before problems materialize.

What is Threat Modeling?

Threat Modeling answers key security questions in software development:

  1. What are we building? Identify key assets (data, systems, processes) and understand how system components interact.
  2. What could go wrong? Identify potential threats that could exploit vulnerabilities or weaknesses in the design.
  3. What are we doing to mitigate it? Design and implement security controls to prevent, detect or respond to threats.
  4. Have we done enough? Validate that the controls in place are adequate and assess residual risk.

Threat Modeling is an iterative process: as the system evolves, the model should be revisited and updated to reflect the changes.

Advantages of Threat Modeling

  1. Risk reduction: Identifying vulnerabilities in the early stages of development saves time and money compared to mitigating them in production.
  2. Proactive defenses: Allows you to think about how to mitigate threats before they occur, rather than reacting to incidents.
  3. Improves collaboration: Encourages more effective communication between developers, security teams and stakeholders.
  4. Regulatory compliance: Supports compliance with standards such as ISO 27001, GDPR or NIST.
  5. Development optimization: Facilitates a more robust design from the beginning, reducing code rework.

Methodologies in Threat Modeling

Methodologies are structured guidelines for carrying out Threat Modeling. Each has a different focus and may be more suitable depending on the needs of the project. The following are the methodologies used in the Threat Dragon tool:

STRIDE

STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) is a classic methodology designed by Microsoft that classifies threats into six main categories. It is oriented to protect the functionality and data of the systems.

  1. Spoofing:

    • Threat: An attacker impersonates another legitimate identity on the system.
    • Example: Unauthorized access through stolen credentials.
    • Typical controls: Strong authentication (MFA), identity management and access policies.
  2. Tampering:

    • Threat: Malicious alteration of data or processes in the system.
    • Example: Modification of a configuration file to disable security.
    • Typical controls: Digital signatures, data integrity and validation mechanisms.
  3. Repudiation:

    • Threat: A user denies actions they performed in the system.
    • Example: A user denies having sent an email.
    • Typical controls: Audit logging and non-repudiation using techniques such as signed logs.
  4. Information Disclosure:

    • Threat: Unauthorized exposure of sensitive information.
    • Example: Confidential data sent in plain text.
    • Typical controls: Encryption in transit and at rest, and privacy policies.
  5. Denial of Service:

    • Threat: Interruption of legitimate access to the system.
    • Example: DDoS attack that renders a web server unusable.
    • Typical controls: Scalability, load balancing and DDoS attack mitigation.
  6. Elevation of Privilege:

    • Threat: An attacker obtains more privileges than they should have.
    • Example: A user with basic privileges gaining administrator access.
    • Typical controls: Role separation, privilege-based access control.
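
The "digital signatures and integrity" controls listed under Tampering can be sketched in a few lines. This is a minimal, illustrative example using Python's standard `hmac` module; the key and file contents are hypothetical, and a real deployment would fetch the key from a secrets manager:

```python
import hmac
import hashlib

# Hypothetical shared secret; in practice, load it from a secrets manager.
SECRET_KEY = b"example-key-do-not-use-in-production"

def sign(data: bytes) -> str:
    """Produce an HMAC-SHA256 tag so later tampering can be detected."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Constant-time comparison to avoid leaking information via timing."""
    return hmac.compare_digest(sign(data), tag)

config = b"debug=false\nauth=required\n"
tag = sign(config)

assert verify(config, tag)                          # untouched file passes
assert not verify(b"debug=true\nauth=off\n", tag)   # tampered file is rejected
```

Unlike a plain checksum, the HMAC tag cannot be recomputed by an attacker who modifies the file, because it depends on the secret key.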

LINDDUN

LINDDUN (Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, Non-compliance) is a methodology designed to address privacy-related threats in systems. It helps identify vulnerabilities that may impact users' privacy rights.

  1. Linkability:

    • Threat: Linking data that should remain separate.
    • Example: Assigning anonymous user records to a specific identity.
    • Typical controls: Data separation and anonymization techniques.
  2. Identifiability:

    • Threat: Identifying an individual from anonymous data.
    • Example: Reconstructing a user's identity from metadata.
    • Typical controls: Pseudonymization and metadata reduction.
  3. Non-repudiation:

    • Threat: Denial of actions related to personal data.
    • Example: A user denies having consented to the use of their data.
    • Typical controls: Consent audits and clear records.
  4. Detectability:

    • Threat: Possibility of identifying the presence of a subject in the system.
    • Example: Identifying users by their access patterns.
    • Typical controls: Access management and anonymous pattern analysis.
  5. Disclosure of Information:

    • Threat: Leakage of sensitive data.
    • Example: Personal data exposed due to an error in the database configuration.
    • Typical controls: Encryption and access policies.
  6. Unawareness:

    • Threat: Users do not understand how their data is used.
    • Example: Not informing users about the processing of their data.
    • Typical controls: Transparency and clear policies.
  7. Non-compliance:

    • Threat: Non-compliance with regulations or standards.
    • Example: Storing data without complying with GDPR.
    • Typical controls: Regular assessments and regulatory compliance.
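
The pseudonymization control listed under Identifiability can be sketched as follows. This is an illustrative example only; the pepper handling and record fields are assumptions, not a production design. Note the trade-off: a stable pseudonym keeps records linkable within the system, which may itself raise a Linkability concern:

```python
import hashlib
import secrets

# Hypothetical per-deployment secret ("pepper"). Keeping it secret prevents
# re-identification by hashing candidate identifiers, a pitfall of naive hashing.
PEPPER = secrets.token_bytes(16)

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a stable pseudonym."""
    return hashlib.sha256(PEPPER + user_id.encode()).hexdigest()[:16]

record = {"user": "alice@example.com", "page": "/billing", "ms": 183}
safe_record = {**record, "user": pseudonymize(record["user"])}

# The pseudonym is stable, but the original address no longer appears.
assert safe_record["user"] == pseudonymize("alice@example.com")
assert "@" not in safe_record["user"]
```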

CIA

The CIA (Confidentiality, Integrity, Availability) methodology focuses on the three fundamental principles of security.

  1. Confidentiality:

    • Ensure that only authorized persons can access the information.
    • Typical controls: Encryption, authentication and access control.
  2. Integrity:

    • Ensure that data is not altered in an unauthorized manner.
    • Typical controls: Hashing, data validation and immutable records.
  3. Availability:

    • Ensure that systems and data are accessible to legitimate users.
    • Typical controls: High availability, load balancing and recovery plans.
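
As a minimal illustration of the confidentiality principle, a deny-by-default, role-based access check might look like the sketch below. The roles and actions are hypothetical:

```python
# Hypothetical role-to-permission mapping for a role-based access check.
ROLES = {
    "admin": {"read", "write", "delete"},
    "analyst": {"read"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLES.get(role, set())

assert authorize("admin", "delete")
assert not authorize("analyst", "delete")
assert not authorize("guest", "read")   # unknown role is denied
```

The deny-by-default structure matters: access is granted only when a rule explicitly allows it, so configuration gaps fail closed rather than open.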

DIE

DIE (Distributed, Immutable, Ephemeral) is a modern methodology adapted to cloud-based architectures.

  1. Distributed:

    • Systems designed to operate without a single point of failure.
    • Typical controls: Replicas and distributed architectures.
  2. Immutable:

    • Systems do not change after deployment.
    • Typical controls: Immutable containers and deployments.
  3. Ephemeral:

    • The systems have short useful lives and are destroyed after use.
    • Typical controls: Autoscaling and resource rotation.
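
The "Immutable" principle can be illustrated in miniature with a frozen configuration object: once built, it cannot be patched in place, so any change means rolling out a replacement. The `Deployment` class and its fields are hypothetical:

```python
from dataclasses import dataclass, replace, FrozenInstanceError

# Hypothetical deployment descriptor: frozen=True rejects in-place mutation.
@dataclass(frozen=True)
class Deployment:
    image: str
    replicas: int

current = Deployment(image="app:1.4.2", replicas=3)

try:
    current.replicas = 5  # in-place change is rejected at runtime
except FrozenInstanceError:
    pass

# The only way to "change" the deployment is to build a replacement.
upgraded = replace(current, image="app:1.4.3")
assert current.image == "app:1.4.2"
assert upgraded.image == "app:1.4.3"
```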

PLOT4ai

PLOT4ai (Practical Library Of Threats 4 Artificial Intelligence) is a methodology designed to address privacy and ethics challenges in AI systems.

  1. Privacy:

    • Protect the data used by the models.
    • Typical controls: Anonymization and federated learning techniques.
  2. Liveness:

    • Keep systems up to date.
    • Typical controls: Continuous retraining of models.
  3. Ownership:

    • Define rights over data and models.
    • Typical controls: Clear contracts and audits.
  4. Transparency:

    • Ensure that model decisions are understandable to users.
    • Typical controls: Interpretable models and explainability.
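
A privacy control such as anonymization before training might be sketched as a simple redaction pass over free-text records. The regular expressions below are illustrative only, not an exhaustive identifier detector:

```python
import re

# Hypothetical patterns for direct identifiers in free text; real pipelines
# would use a much broader set of detectors before training an AI model.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

sample = "Contact alice@example.com or +34 600 123 456 for details."
assert redact(sample) == "Contact [EMAIL] or [PHONE] for details."
```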