COMP 322: Fundamentals of Parallel Programming (Spring 2023)

 

Instructor: Mackale Joyner, DH 2071

Co-Instructor: Zoran Budimlić, DH 3134

Head TA: Srdan Milakovic

Graduate TAs: Jonathan Sharman, 2063

TAs: Mohamed Abead, Chase Hartsell, Taha Hasan, Harrison Huang, Jerry Jiang, Jasmine Lee, Michelle Lee, Hung Nguyen, Quang Nguyen, Ryan Ramos, Oscar Reynozo, Delaney Schultz, Tina Wen, Raiyan Zannat, Kailin Zhang

Undergraduate TAs: Liam Bonnage, Harrison Brown, Mustafa El-Gamal, Krishna Goel, Ryan Green, Ryan Han, Rishu Harpavat, Namanh Kapur, Tian Lan, Tam Le, Will LeVine, Eva Ma, Hamza Nauman, Rutvik Patel, Aryan Sefidi, Jeemin Sim, Tory Songyang, Jiaqi Wang, Erik Yamada, Yifan Yang

Admin Assistant: Annepha Hurlock, annepha@rice.edu, DH 3122, 713-348-5186

Piazza site:

https://piazza.com/rice/classspring2022/jmwfpr1i85n7l4comp322 (Piazza is the preferred medium for all course communications, but you can also send email to comp322-staff at rice dot edu if needed)

Cross-listing:

ELEC 323

Lecture location:

Herring Hall 100

Lecture times:

MWF 1:00pm - 1:50pm

Lab locations:

Herring Hall 100

Lab times:

Mon 3:00pm - 3:50pm; Thu 4:00pm - 4:50pm

Course Syllabus

A summary PDF file containing the course syllabus can be found here.  Much of the syllabus information is also included below on this course web site, along with some additional details that are not included in the syllabus.

...

The desired learning outcomes fall into three major areas (course modules):

1) Parallelism: functional programming, Java streams, creation and coordination of parallelism (async, finish), abstract performance metrics (work, critical paths), Amdahl's Law, weak vs. strong scaling, data races and determinism, data race avoidance (immutability, futures, accumulators, dataflow), deadlock avoidance, abstract vs. real performance (granularity, scalability), collective & point-to-point synchronization (phasers, barriers), parallel algorithms, systolic algorithms. (A short Amdahl's Law speedup sketch follows this list.)

...

3) Locality & Distribution: memory hierarchies, locality, cache affinity, data movement, message-passing (MPI), communication overheads (bandwidth, latency), MapReduce, accelerators, GPGPUs, CUDA, OpenCL., MapReduce

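The MapReduce entry in the Locality & Distribution module, and the Java streams material in the Parallelism module, can be previewed with nothing more than the JDK. The following sketch is illustrative only (it is not a course example or homework problem): it expresses the classic word-count MapReduce computation with java.util.stream, mapping lines to words and then reducing by key with a grouping collector.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Illustrative sketch only: the word-count "hello world" of MapReduce,
    // expressed with Java streams rather than a Hadoop/Spark framework.
    public class WordCount {
        public static void main(String[] args) {
            List<String> lines = List.of("to be or not to be", "to parallelize or not");

            Map<String, Long> counts = lines.parallelStream()                      // map phase may run in parallel
                    .flatMap(line -> Arrays.stream(line.split("\\s+")))            // map: line -> words
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting())); // reduce: count per key

            counts.forEach((word, n) -> System.out.println(word + " -> " + n));
        }
    }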
To achieve these learning outcomes, each class period will include time for both instructor lectures and in-class exercises based on assigned reading and videos.  The lab exercises will be used to help students gain hands-on programming experience with the concepts introduced in the lectures.

To ensure that students gain a strong knowledge of parallel programming foundations, the classes and homework will place equal emphasis on both theory and practice. The programming component of the course will mostly use the Habanero-Java Library (HJ-lib), a pedagogic extension to the Java language developed in the Habanero Extreme Scale Software Research project at Rice University.  The course will also introduce you to real-world parallel programming models including Java Concurrency, MapReduce, MPI, OpenCL and CUDA. An important goal is that, at the end of COMP 322, you should feel comfortable programming in any parallel language for which you are familiar with the underlying sequential language (Java or C). Any parallel programming primitives that you encounter in the future should be easily recognizable based on the fundamentals studied in COMP 322.
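For readers who want an early taste of future tasks, here is a minimal sketch written against standard java.util.concurrent rather than the HJ-lib API used in the assignments (so the classes and calls below are not the constructs you will use in the homework): two halves of an array sum are evaluated asynchronously and the results are combined, mirroring the async/finish and future constructs studied in Module 1.

    import java.util.Arrays;
    import java.util.concurrent.CompletableFuture;

    // Minimal sketch with standard Java futures (not the HJ-lib API used in class):
    // evaluate two halves of an array sum asynchronously, then combine the results.
    public class FutureSum {
        static long sum(int[] a, int lo, int hi) {
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }

        public static void main(String[] args) throws Exception {
            int[] a = new int[1_000_000];
            Arrays.fill(a, 1);

            // "async" analogue: each half runs as an asynchronous task
            CompletableFuture<Long> left  = CompletableFuture.supplyAsync(() -> sum(a, 0, a.length / 2));
            CompletableFuture<Long> right = CompletableFuture.supplyAsync(() -> sum(a, a.length / 2, a.length));

            // "finish"/future-get analogue: block until both results are available, then combine
            long total = left.get() + right.get();
            System.out.println("total = " + total);   // prints 1000000
        }
    }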

...

There are no required textbooks for the class. Instead, lecture handouts are provided for each module as follows.  You are expected to read the relevant sections in each lecture handout before coming to the lecture.  We will also provide a number of references in the slides and handouts. The links to the latest versions of the lecture handouts are included below:

  • Module 1 handout (Parallelism)
  • Module 2 handout (Concurrency)

There is no lecture handout for Module 3 (Distribution and Locality).  The instructors will refer you to optional resources to supplement the lecture slides and videos.

There are also a few optional textbooks that we will draw from during the course.  You are encouraged to get copies of any or all of these books.  They will serve as useful references both during and after this course:


 

 

Finally, here are some additional resources that may be helpful for you:

Lecture Schedule

 


Week | Day | Date (2023) | Lecture | Assigned Reading / Videos (see Canvas site for video links) | Worksheets / Slides | Work Assigned / Due | Worksheet Solutions

1 | Mon | Jan 09 | Lecture 1: Introduction | | worksheet1 / lec1-slides | | WS1-solution
  | Wed | Jan 11 | Lecture 2: Functional Programming | GList.java | worksheet2 / lec02-slides | Homework 1 assigned | WS2-solution
  | Fri | Jan 13 | Lecture 3: Higher order functions | | worksheet3 / lec3-slides | | WS3-solution
2 | Mon | Jan 16 | No class: MLK | | | |
  | Wed | Jan 18 | Lecture 4: Lazy Computation | LazyList.java, Lazy.java | worksheet4 / lec4-slides | | WS4-solution
  | Fri | Jan 20 | Lecture 5: Java Streams | | worksheet5 / lec5-slides | | WS5-solution
3 | Mon | Jan 23 | Lecture 6: Map Reduce with Java Streams | Module 1: Section 2.4; Topic 2.4 Lecture and Demonstration | worksheet6 / lec6-slides | | WS6-solution
  | Wed | Jan 25 | Lecture 7: Futures | Module 1: Section 2.1; Topic 2.1 Lecture and Demonstration | worksheet7 / lec7-slides | | WS7-solution
  | Fri | Jan 27 | Lecture 8: Computation Graphs, Ideal Parallelism | Module 1: Sections 1.2, 1.3; Topic 1.2 and 1.3 Lectures and Demonstrations | worksheet8 / lec8-slides | Quiz for Unit 1 due | WS8-solution
4 | Mon | Jan 30 | Lecture 9: Async, Finish, Data-Driven Tasks | Module 1: Sections 1.1, 4.5; Topic 1.1 and 4.5 Lectures and Demonstrations | worksheet9 / lec9-slides | | WS9-solution
  | Wed | Feb 01 | Lecture 10: Event-based programming model | | worksheet10 / lec10-slides | Homework 1 due | WS10-solution
  | Fri | Feb 03 | Lecture 11: GUI programming as an example of event-based programming; futures/callbacks in GUI programming | | worksheet11 / lec11-slides | Homework 2 assigned | WS11-solution
5 | Mon | Feb 06 | Lecture 12: Scheduling/executing computation graphs. Abstract performance metrics | Module 1: Section 1.4; Topic 1.4 Lecture and Demonstration | worksheet12 / lec12-slides | Quiz for Unit 2 due | WS12-solution
  | Wed | Feb 08 | Lecture 13: Parallel Speedup, Critical Path, Amdahl's Law | Module 1: Section 1.5; Topic 1.5 Lecture and Demonstration | worksheet13 / lec13-slides | | WS13-solution
  | Fri | Feb 10 | No class: Spring Recess | | | |
6 | Mon | Feb 13 | Lecture 14: Accumulation and reduction. Finish accumulators | Module 1: Section 2.3; Topic 2.3 Lecture and Demonstration | worksheet14 / lec14-slides | Quiz for Unit 3 due | WS14-solution
  | Wed | Feb 15 | Lecture 15: Recursive Task Parallelism | | worksheet15 / lec15-slides | | WS15-solution
  | Fri | Feb 17 | Lecture 16: Data Races, Functional & Structural Determinism | Module 1: Sections 2.5, 2.6; Topic 2.5 and 2.6 Lectures and Demonstrations | worksheet16 / lec16-slides | Homework 3 assigned; Homework 2 due | WS16-solution
7 | Mon | Feb 20 | Lecture 17: Midterm Review | | lec17-slides | |
  | Wed | Feb 22 | Lecture 18: Limitations of Functional parallelism. Abstract vs. real performance. Cutoff Strategy | | worksheet18 / lec18-slides | Homework 3, Checkpoint-1 due | WS18-solution
  | Fri | Feb 24 | Lecture 19: Fork/Join programming model. OS Threads. Scheduler Pattern | Topic 2.7 and 2.8 Lectures and Demonstrations | worksheet19 / lec19-slides | | WS19-solution
8 | Mon | Feb 27 | Lecture 20: Confinement & Monitor Pattern. Critical sections. Global lock (start of Module 2) | Module 2: Sections 5.1, 5.2, 5.3, 5.4, 5.6; Topic 5.1, 5.2, 5.3, 5.4, 5.6 Lectures and Demonstrations | worksheet20 / lec20-slides | | WS20-solution
  | Wed | Mar 01 | Lecture 21: Atomic variables, Synchronized statements | Module 2: Sections 5.4, 7.2; Topic 5.4 Lecture and Demonstration, Topic 7.2 Lecture | worksheet21 / lec21-slides | Quiz for Unit 4 due | WS21-solution
  | Fri | Mar 03 | Lecture 22: Parallel Spanning Tree, other graph algorithms | | worksheet22 / lec22-slides | | WS22-solution
9 | Mon | Mar 06 | Lecture 23: Java Threads and Locks | Module 2: Sections 7.1, 7.3; Topic 7.1 and 7.3 Lectures | worksheet23 / lec23-slides | Homework 3, Checkpoint-2 due | WS23-solution
  | Wed | Mar 08 | Lecture 24: Java Locks - Soundness and progress guarantees | Module 2: Section 7.5; Topic 7.5 Lecture | worksheet24 / lec24-slides | Quiz for Unit 5 due | WS24-solution
  | Fri | Mar 10 | Lecture 25: Dining Philosophers Problem | Module 2: Section 7.6; Topic 7.6 Lecture | worksheet25 / lec25-slides | | WS25-solution
- | Mon | Mar 13 | No class: Spring Break | | | |
- | Wed | Mar 15 | No class: Spring Break | | | |
- | Fri | Mar 17 | No class: Spring Break | | | Homework 4 assigned (includes one intermediate checkpoint); Homework 3 (all) due |
10 | Mon | Mar 20 | Lecture 26: N-Body problem, applications and implementations | | worksheet26 / lec26-slides | | WS26-solution
   | Wed | Mar 22 | Lecture 27: Read-Write Locks, Linearizability of Concurrent Objects | Module 2: Sections 7.3, 7.4; Topic 7.3 and 7.4 Lectures | worksheet27 / lec27-slides | Quiz for Unit 6 due | WS27-solution
   | Fri | Mar 24 | Lecture 28: Message-Passing programming model with Actors | Module 2: Sections 6.1, 6.2; Topic 6.1 and 6.2 Lectures and Demonstrations | worksheet28 / lec28-slides | | WS28-solution
11 | Mon | Mar 27 | Lecture 29: Active Object Pattern. Combining Actors with task parallelism | Module 2: Sections 6.3, 6.4; Topic 6.3 and 6.4 Lectures and Demonstrations | worksheet29 / lec29-slides | | WS29-solution
   | Wed | Mar 29 | Lecture 30: Task Affinity and locality. Memory hierarchy | | worksheet30 / lec30-slides | | WS30-solution
   | Fri | Mar 31 | Lecture 31: Data-Parallel Programming model. Loop-Level Parallelism, Loop Chunking | Module 1: Sections 3.1, 3.2, 3.3; Topic 3.1, 3.2, 3.3 Lectures and Demonstrations | worksheet31 / lec31-slides | Quiz for Unit 7 due | WS31-solution
12 | Mon | Apr 03 | Lecture 32: Barrier Synchronization with Phasers | Module 1: Section 3.4; Topic 3.4 Lecture and Demonstration | worksheet32 / lec32-slides | Homework 4, Checkpoint-1 due | WS32-solution
   | Wed | Apr 05 | Lecture 33: Stencil computation. Point-to-point Synchronization with Phasers | Module 1: Sections 4.2, 4.3; Topic 4.2 and 4.3 Lectures and Demonstrations | worksheet33 / lec33-slides | Quiz for Unit 8 due | WS33-solution
   | Fri | Apr 07 | Lecture 34: Fuzzy Barriers with Phasers | Module 1: Section 4.1; Topic 4.1 Lecture and Demonstration | worksheet34 / lec34-slides | | WS34-solution
13 | Mon | Apr 10 | Lecture 35: Eureka-style Speculative Task Parallelism | | worksheet35 / lec35-slides | Homework 4 (all) due | WS35-solution
   | Wed | Apr 12 | Lecture 36: Scan Pattern. Parallel Prefix Sum | | worksheet36 / lec36-slides | Quiz for Unit 9 due | WS36-solution
   | Fri | Apr 14 | Lecture 37: Parallel Prefix Sum applications | | worksheet37 / lec37-slides | |
14 | Mon | Apr 17 | Lecture 38: Overview of other models and frameworks | | worksheet38 / lec38-slides | |
   | Wed | Apr 19 | Lecture 39: Course Review (Lectures 19-38) | | lec39-slides | |
   | Fri | Apr 21 | Lecture 40: Course Review (Lectures 19-38) | | lec40-slides | |

Lab Schedule

Lab # | Date (2023) | Topic | Handouts | Code Examples

1 | Jan 10 | Infrastructure setup | lab1-handout |
2 | Jan 17 | Functional Programming | lab2-handout |
3 | Jan 24 | Java Streams | lab3-handout |
4 | Jan 31 | Futures | lab4-handout |
5 | Feb 07 | Data-Driven Tasks | lab5-handout |
6 | Feb 14 | Async / Finish | lab6-handout |
- | Feb 21 | No lab this week (Midterm) | |
7 | Feb 28 | Recursive Task Cutoff Strategy | lab7-handout |
8 | Mar 07 | Java Threads | lab8-handout |
- | Mar 14 | No lab this week (Spring Break) | |
9 | Mar 21 | Concurrent Lists | lab9-handout |
10 | Mar 28 | Actors | lab10-handout |
11 | Apr 04 | Loop Parallelism | lab11-handout |
- | Apr 11 | No lab this week | |
- | Apr 18 | No lab this week | |

Grading, Honor Code Policy, Processes and Procedures

Grading will be based on your performance on four homework assignments (weighted 40% in all), two exams (weighted 40% in all), weekly lab exercises (weighted 10% in all), online quizzes (weighted 5% in all), and in-class worksheets (weighted 5% in all).

The purpose of the homework is to give you practice in solving problems that deepen your understanding of concepts introduced in class. Homework is due on the dates and times specified in the course schedule.  No late submissions (other than those using slip days mentioned below) will be accepted.

The slip day policy for COMP 322 is similar to that of COMP 321. All students will be given 3 slip days to use throughout the semester. When you use a slip day, you will receive up to 24 additional hours to complete the assignment. You may use these slip days in any way you see fit (3 days on one assignment, 1 day each on 3 assignments, etc.). Slip days will be automatically tracked through the Autograder; more details are available later in this document and in the README.md file. Other than slip days, no extensions will be given unless there are exceptional circumstances (such as severe sickness, not because you have too much other work). Such extensions must be requested and approved by the instructor (via e-mail, phone, or in person) before the due date for the assignment. Last minute requests are likely to be denied.

Labs must be checked off by a TA by the following Monday at 11:59pm.

Worksheets should be completed in class for full credit.  For partial credit, a worksheet can be turned in by the deadline listed in Canvas, so that solutions to the worksheets can be discussed in the next class.

You will be expected to follow the Honor Code in all homework and exams.  The following policies will apply to different work products in the course:

  • In-class worksheets: You are free to discuss all aspects of in-class worksheets with your other classmates, the teaching assistants and the professor during the class. You can work in a group and write down the solution that you obtained as a group. If you work on the worksheet outside of class (e.g., due to an absence), then it must be entirely your individual effort, without discussion with any other students.  If you use any material from external sources, you must provide proper attribution.
  • Weekly lab assignments: You are free to discuss all aspects of lab assignments with your other classmates, the teaching assistants and the professor during the lab.  However, all code and reports that you submit are expected to be the result of your individual effort. If you work on the lab outside of class (e.g., due to an absence), then it must be entirely your individual effort, without discussion with any other students.  If you use any material from external sources, you must provide proper attribution (as shown here).
  • Homework: All submitted homework is expected to be the result of your individual effort. You are free to discuss course material and approaches to problems with your other classmates, the teaching assistants and the professor, but you should never misrepresent someone else’s work as your own. If you use any material from external sources, you must provide proper attribution.
  • Quizzes: Each online quiz will be an open-notes individual test.  The student may consult their course materials and notes when taking the quizzes, but may not consult any other external sources.
  • Exams: Each exam will be a closed-book, closed-notes, and closed-computer individual written test, which must be completed within a specified time limit.  No class notes or external materials may be consulted when taking the exams.

...