Contents

  Threads
  Race Conditions
  Locking
  Dead Locks
  Live Locks
  Starvation
  Affinity
Threads - Race Conditions

Now we have created a thread, and it is off running a function. Since most machines have multiple CPUs, and most CPUs now have multiple cores, you can correctly say that your function and the function the other thread is running are truly running in parallel. Even in the old days of single-CPU, single-core machines, this assumption could be adopted (for our purposes) because of the way the operating system time-sliced between threads.

Now that we have this scenario of two or more functions running in parallel, we get to the heart of multi-threading programming, namely resource sharing.

The easiest way to explain this is to think about a web page hit counter. Each user comes in on a separate thread, and you want to increment your counter. Let's assume it is currently zero. A first attempt would be something like

    int g_counter = 0;
    
    void IncrementCounter()
    {
        g_counter++;
    }

At first glance you might think that there is nothing wrong, but this is where all the multi-threaded problems stem from. The 'trick' is to zoom in. Let's have a look at the assembly code that gets generated for this function.

1    ; g_counter++;
2    
3    mov     eax, DWORD PTR _g_counter
4    inc     eax
5    mov     DWORD PTR _g_counter, eax

Now let's assume that we have two threads calling this function concurrently, in this order:

  • g_counter starts at zero
  • Thread A calls IncrementCounter()
  • Thread A gets to line 3 in the assembly code
  • Thread A reads zero into EAX from g_counter on line 3
  • The OS preempts Thread A
At this stage

  • Thread A has zero stored in its EAX register
  • Thread A is waiting for the OS to schedule it
  • g_counter is still zero
Now,

  • The OS Schedules Thread B to run
  • Thread B calls IncrementCounter()
  • Thread B gets to line 3 in the assembly code
  • Thread B reads zero into EAX from g_counter on line 3, as g_counter hasn't yet changed
  • Thread B increments EAX on line 4
  • EAX is now 1
  • Thread B writes 1 to g_counter on line 5
Finally,

  • The OS Schedules Thread A to continue
  • Thread A increments EAX on line 4
  • EAX is now 1
  • Thread A writes 1 to g_counter on line 5
Hang on, that's not right. The counter should be 2, but it is 1: one of the two increments was lost. And this is on one CPU! On machines with more than one CPU there is the added complication of cache coherency: each CPU writes variables back to main memory from its own cache over a shared memory bus, in a non-deterministic order. We'll leave that for later, although it is important to know.

The scenario above is known as a Race Condition, or simply a Race: n threads accessing m shared resources in a non-deterministic order, with undesired consequences.


©2008 DebugInspector. All rights reserved