Go Parallel with .NET 4.0 Parallel Extensions

We are in the multi-core era, and our applications are expected to use these cores effectively.  Put simply, going parallel means partitioning the work into smaller pieces that are executed on the available processors in the target system.  Up to .NET 3.5, "parallel" meant working directly with the ThreadPool and Thread classes; there was no component that would partition and schedule work as work items across the cores for us.  A parallel framework from Intel has been available for C++/MC++, but it is not clear how easy it is to use.

Even though threads are the basis of parallelism, from a developer's perspective what parallel programming requires is not the thread but the unit of work, named a "Task".

.NET 4.0 Parallel Extensions 

A Task represents a work item being performed independently alongside other work items.  This is the approach taken by the experts in Microsoft's Parallel Computing division, who released the parallel extensions (Task Parallel Library and Parallel LINQ) in .NET 4.0 beta 1.  As with Microsoft's other frameworks, developers need not worry about the internals: the partitioning and scheduling of the work items are taken care of by the parallel extensions.

Task and Thread

Let us first understand the relationship between task and thread.  See the following figure.

Thread and Task Relationship

Typically a task resides in a thread.  A task may contain one or more child tasks, which do not necessarily reside in the parent task's thread.  In the figure above, child task M2 of task M resides in Thread B.
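As a minimal sketch of this relationship (my own illustration, not part of the article's sample code), a parent task can create an attached child task; the scheduler is free to run the child on a different thread:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TaskDemo
{
    static void Main()
    {
        // Parent task M; the scheduler decides which thread runs it.
        Task parent = Task.Factory.StartNew(() =>
        {
            Console.WriteLine("Parent task on thread {0}",
                Thread.CurrentThread.ManagedThreadId);

            // Child task M2; AttachedToParent ties its completion to the
            // parent's, but it may still run on a different thread.
            Task.Factory.StartNew(() =>
            {
                Console.WriteLine("Child task on thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            }, TaskCreationOptions.AttachedToParent);
        });

        parent.Wait(); // waits for the parent and its attached child
    }
}
```

`Wait()` on the parent does not return until the attached child has also completed, which is what makes the parent/child relationship more than a naming convention.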

Task Parallel Library (TPL)

This library provides an API for task-based parallel programming under the System.Threading and System.Threading.Tasks namespaces.  The partitioned tasks are automatically scheduled onto the available processors by the task scheduler, which sits on top of the ThreadPool.  The work-stealing queue algorithm in the task scheduler makes life easier; I'll explain this in the next post.

Scheduling tasks onto processors is runtime behaviour of the task scheduler, so a program 'scales up' on a target machine with fewer or more cores without being recompiled.

There are two types of parallelism you can express using TPL:

  • Data Parallelism
  • Task Parallelism

Data Parallelism

It is very common to perform a set of actions against each element of a collection using for and foreach loops.  Parallel.For() and Parallel.ForEach() let you make such collection-processing actions parallel.  The following figure depicts this.

 Data Parallelism
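Before moving to the customer-order sample, here is a small sketch of my own (not from the article's downloadable source) showing Parallel.For, which partitions an index range the same way Parallel.ForEach partitions a collection:

```csharp
using System;
using System.Threading.Tasks;

class DataParallelDemo
{
    static void Main()
    {
        int[] squares = new int[10];

        // The loop body runs once per index; iterations are partitioned
        // across the available processors by the task scheduler.
        Parallel.For(0, squares.Length, i =>
        {
            squares[i] = i * i;
        });

        // Iteration order varies between runs, but each slot is written
        // exactly once, so the result is deterministic.
        Console.WriteLine(String.Join(", ", squares));
    }
}
```

Note that each iteration writes to a distinct array slot; if the iterations shared mutable state, you would need synchronization.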

Let us see this with a typical customer-order collection.  The following code shows the Customer and Order declarations.

public class Customer
{
    public string Name;
    public string City;
    public Order[] Orders;

    public override string ToString()
    {
        return String.Format("Name: {0} - City: {1}",
            Name, City);
    }
}

public class Order
{
    public int OrderId;
    public int Quantity;

    public override string ToString()
    {
        return String.Format("Order Id: {0} - Quantity: {1}",
            OrderId, Quantity);
    }
}

Let us create an in-memory collection for this, as in the code below.

var customers = new Customer[]
{
    new Customer{Name="Hussain", City="Vizak", Orders =
        new Order[]
        {
            new Order{OrderId=1, Quantity=10},
            new Order{OrderId=10, Quantity=5}
        }},
    new Customer{Name="Abdul", City="Chennai", Orders =
        new Order[]
        {
            new Order{OrderId=7, Quantity=2},
            new Order{OrderId=10, Quantity=4}
        }},
    new Customer{Name="Daniel", City="Texas", Orders =
        new Order[]
        {
            new Order{OrderId=12, Quantity=3},
            new Order{OrderId=7, Quantity=1},
            new Order{OrderId=10, Quantity=1}
        }}
};
Let us write simple parallel code that iterates through each customer and its orders.

Parallel.ForEach(customers,
    customer =>
    {
        Console.WriteLine("**** {0} ****", customer.ToString());
        Parallel.ForEach(customer.Orders,
            order =>
            {
                Console.WriteLine("\t{0}", order.ToString());
            });
    });

In the above code, printing the customer detail is one task and printing the order details for a given customer is its child.  I've used a simple overloaded version of ForEach:

public static ParallelLoopResult ForEach<TSource>(IEnumerable<TSource> source, Action<TSource> body);

The task scheduler partitions the customers and orders, and schedules them onto the available processors.

Task Parallelism

There are cases where multiple distinct actions are to be performed concurrently on the same or different sources.  The following figure depicts this.  Note that the task count need not equal the number of actions.

Task Parallelism

Parallel.Invoke() is used for this.  See the following code.

Parallel.Invoke(() =>
    {
        Console.WriteLine("Task 1.  Getting total quantity ordered for OrderId 10");
        var orderId10Sum =
            (from c in customers
             from o in c.Orders
             where o.OrderId == 10
             select o.Quantity).Sum();
        Console.WriteLine("Quantity: {0}", orderId10Sum);
    }, () =>
    {
        Console.WriteLine("Task 2.  Getting number of customers and their city ordered OrderId 7");
        var items =
            from c in customers
            from o in c.Orders
            where o.OrderId == 7
            select c.City;
        Console.WriteLine("Count: {0}\nFrom following City:", items.Count());

        foreach (string city in items)
            Console.WriteLine("\t{0}", city);
    });
I've used the following overloaded version of Parallel.Invoke().

public static void Invoke(params Action[] actions);

I've specified two different actions, both acting on customers.

The source code for this article is available at https://udooz.net/file-drive/doc_details/8-net-40-tlp-demo-1.html.

In the next post, I'll explain more about tasks, the Parallel LINQ portion, and the task scheduler.
