10,11c10 < \lecture{notes6}{Partially Ordered Sets; Digraphs and Graphs} < {September 26, 2000} --- > \lecture{notes6}{Graphs, Trees, Tours, and Paths}{September 26, 2000} 13,36c12 < \noindent < Topics for today: < \begin{itemize} < \item < Partially ordered sets (continued) < \item < Digraphs < \item < Graphs < \item < Graph coloring < \item < Graph connectivity < \end{itemize} < < % [[[Shlomi: I think that the lecture will finish graph coloring and < % start the connectivity part. But not finish it. Perhaps just finish < % section 4.1 and give some examples or a nice application. Before < % graphs and after posets, we should say something about digraphs. < % I'm inserting material below.]]] < < \reading{Rosen Section 6.6; Chapter 7} < < \section{Partially Ordered Sets} --- > First, more on relations: 38,41c14 < This section contains some more applications and properties for < partially ordered sets. < < \subsection{Parallel Task Scheduling} --- > \section{Parallel Task Scheduling} 44c17 < order represents precedence constraints, topological sorting provides us with --- > order gives precedence constraints, topological sorting provides us with 58c31 < This leads to the question: \\ --- > So: \\ 66,71c39,43 < Example: Clothes (from last lecture) \\ < We could do all the minimal elements first (left sock, right sock, underwear, < shirt), remove them and repeat. \\ < (Need lots of hands, or maybe servants.) \\ < Do pants and sweater next, and then left shoe, right shoe, and belt, and < finally jacket. \\ --- > On the clothes example, we could do all the minimal elements first (Lsock, > Rsock, underwear, shirt), remove them and repeat. \\ > Need lots of hands, or maybe dressing servants. \\ > (So do pants and sweater next, and then Lshoe, Rshoe, and belt, and > finally jacket.) \\ 87d58 < %[[[Shlomi: They haven't seen Pigeonhole yet.
Just say this informally.]]] 120c91 < Parallel time = length of longest chain --- > parallel time = length of longest chain 151,154d121 < \begin{defin} < An {\bf antichain\/} is a set of incomparable elements. < \end{defin} < 159a127,130 > \begin{defin} > An {\bf antichain\/} is a set of incomparable elements. > \end{defin} > 173c144 < We can use this theorem to prove a famous result about posets: --- > We can use this to prove a famous result about posets: 210,212d180 < %[[[Shlomi: The following is a very nice application. Should we do < %it in tutorial?]]] < 296,297c264,265 < E.g., if the database is totally sorted, we can compare with middle < elements, then middle of halves, etc. \\ --- > E.g., if db is totally sorted, can compare with middle elements, then > middle of halves, etc. \\ 299c267 < Approximately $\log{n}$ comparisons needed. \\ --- > Approximately log n comparisons needed. \\ 301c269 < E.g., if database totally unsorted, then in the worst case, have to compare --- > E.g., if db totally unsorted, then in the worst case, have to compare 303c271 < This is because each comparison with an element in the database gives no --- > This is because each comparison with an element in the db gives no 305c273 < others in the database. \\ --- > others in the db. \\ 335c303 < One last use of partially ordered sets. \\ --- > One last use of p.o. \\ 337c305 < where a poset is used as a mathematical model for a real computational --- > where a p.o. is used as a mathematical model for a real computational 359,360c327 < Basic theorem: Any topological sort of this partial order < describes a possible --- > Basic theorem: Any topological sort of this p.o. 
describes a possible 364c331 < Example of what one might do with this partial order: \\ --- > Example of what one might do with this p.o.: \\ 368,369c335 < consistent with the partial order: when one step precedes another in < the partial order, --- > consistent with the p.o.: when one step precedes another in the p.o., 373c339 < time order, still consistent with the partial order.\\ --- > time order, still consistent with the p.o.\\ 386,485d351 < This notion of logical time was invented by Lamport. < This year, the original (late 1970s) paper containing this idea won an < award for the most influential paper ever, in distributed computing < theory. < < % 3:00 < < \section{Digraphs} < < % [[[Shlomi: We should include a short section on digraphs. I have < % gone through past notes and Rosen, trying to collect everything we would < % like them to know about digraphs.]]] < < \subsection{Basic definitions and notation} < < We have actually already defined digraphs: \\ < A digraph (directed graph) is just the same as a binary relation on a < finite set. \\ < People don't usually think of it that way, though; when people refer < to a digraph, they are generally thinking of a particular < representation of a binary relation in terms of dots and arrows. < < Figure~\ref{fig:digraph} depicts a directed graph. < \begin{figure}[htbp] < \centerline{\psfig{figure=/a/class/6.042/spring00/archive/lectures/figures/digraph.eps,height=1.5in}} < \caption{\em This is a picture of a directed graph or digraph.} < \label{fig:digraph} < \end{figure} < < Recall that a digraph allows self-loops. < < Notation: < $G = (V,E)$, where $V$ is the set of {\em vertices\/} and $E$ is the < set of {\em directed edges\/}. < Vertices are sometimes called {\em nodes\/}. < Edges are sometimes called {\em arcs}. 
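The layer-by-layer scheduling idea above (do all minimal elements in parallel, remove them, repeat) and the fact that parallel time equals the length of the longest chain can be sketched in Python. This is an illustration of the notes' dressing example; the function name and the edge-pair encoding of the precedence order are our own, not from the notes:

```python
# Layered parallel scheduling for a finite poset, per the notes:
# repeatedly take every minimal element of what remains and run that
# whole batch in parallel.  The number of batches equals the length of
# the longest chain (the "parallel time" of the notes).
# Assumes the precedence relation is acyclic (a genuine partial order).
def parallel_schedule(tasks, precedes):
    """precedes: set of pairs (a, b) meaning a must finish before b starts."""
    remaining = set(tasks)
    batches = []
    while remaining:
        # minimal = no still-unscheduled predecessor
        minimal = {t for t in remaining
                   if not any(a in remaining and b == t
                              for (a, b) in precedes)}
        batches.append(minimal)
        remaining -= minimal
    return batches

# The dressing example from the notes:
tasks = ["Lsock", "Rsock", "underwear", "shirt", "pants",
         "sweater", "Lshoe", "Rshoe", "belt", "jacket"]
precedes = {("Lsock", "Lshoe"), ("Rsock", "Rshoe"),
            ("underwear", "pants"), ("shirt", "sweater"),
            ("pants", "Lshoe"), ("pants", "Rshoe"), ("pants", "belt"),
            ("belt", "jacket"), ("sweater", "jacket")}
batches = parallel_schedule(tasks, precedes)
# 4 batches: {Lsock, Rsock, underwear, shirt}, {pants, sweater},
# {Lshoe, Rshoe, belt}, {jacket}
```

With enough "hands," the ten tasks finish in four rounds, matching the longest chain underwear-pants-belt-jacket.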
< < A little different from the usual relation notation: < For a relation $R$ on a finite set $A$, we would write < $G = (V,E)$, where $V = A$, the underlying set, and < $E = R$, the relation, that is the set of ordered pairs $(x,y)$ such < that $x R y$. < < The graphical notation motivates the study of other properties, < especially, properties involving paths and connectivity (see below). < < Can also represent digraph by adjacency matrix. < < Examples of digraphs we have already seen: < \begin{enumerate} < \item < Tournament digraphs (define as special case of digraphs). < \item < Precedence digraphs (e.g., for dressing, or prerequisites). < \item < The digraphs we studied when we related matrix multiplication to path < composition. < \end{enumerate} < < Some (other) applications: < \begin{description} < \item[Computer network] < A computer network can be modeled nicely as a digraph. < In this instance, the set of vertices $V$ represents the set of computers in < the network. An edge $(u, v)$ represents a direct < communication link from the computer corresponding to $u$ to the < computer corresponding to $v$. < < \item[Airline Connections] < Here the vertices are airports and edges are flight paths. < We could indicate the direction that planes fly < along each flight path by using a directed graph. We could use < weights to convey even more information. For example, $w(i, j)$ might < be the distance between airports $i$ and $j$, or the flying time < between them, or even the air fare. < < \item[Precedence Constraints] Suppose you have a set of jobs to < complete, but some must be completed before others are begun. (For < example, Attila the Hun advised that you should always pillage {\em < before} you burn.) Here the vertices are jobs to be done. Directed < edges indicate constraints; there is a directed edge from job $u$ to < job $v$ if job $u$ must be done before job $v$ is begun. < < \item[Program Flowchart] Each vertex represents a step of computation. 
< < Directed edges between vertices indicate control flow. < \end{description} < < Variants of digraph definition: \\ < Could add {\em weights\/} to the edges, representing some kind of cost. < E.g., time for airline trip. < Could also use other kinds of {\em labels\/}, representing kinds of < relationships between vertices. < < \subsection{More definitions} < < % p. 446 Rosen, bottom < < If $(x,y)$ is an edge of $G$ (element of the edge set $E$ of $G$), < then we call $x$ the {\em initial vertex\/} and $y$ the {\em terminal < vertex\/} of the edge. 487,490c353 < The {\em in-degree\/} of $x$ is defined to be the number of edges with < $x$ as terminal vertex. \\ < The {\em out-degree\/} of $x$ is defined to be the number of edges < with $x$ as initial vertex. --- > \section{Graphs} 492c355,384 < Examples --- > Just like DAGs are a nice way to think about partial orders, graphs in > general are a nice way to think about relations. They give a picture > of the relation, which can be much more revealing than a list of > tuples. They motivate the study of other properties. > > \subsection{Definitions} > > We'll start with a bunch of definitions. Many are inherited from the > definition of relations. Others are slightly modified for > convenience. Yet others are new. > > \subsection{Simple Graphs} > > A {\em graph} is a pair of sets $(V, E)$. The elements of $V$ are > called {\em vertices}. The elements of $E$ are called {\em edges}. > Each edge is a pair of distinct vertices. > > We are going to concentrate on {\em undirected} graphs. These > correspond to symmetric relations. If edge $(u,v)$ is present, edge > $(v,u)$ is also present. Therefore, it simplifies matters to think of > these two edges as one and the same.
This faintly violates the notion > of a Cartesian pair (where order matters) and really picky > mathematicians denote such an undirected edge by the unordered {\em > set} $\set{u,v}$, but this rapidly fills up your paper with hard to > read $\set{}$ symbols. > > Graphs are also sometimes called {\em networks}. Vertices are also > sometimes called {\em nodes}. Edges are sometimes called {\em arcs}. > Graphs can be nicely represented with a diagram of dots and lines as > shown in Figure~\ref{fig:graph} 494,673d385 < Theorem: < Sum of the indegrees of all nodes in a graph < = sum of outdegrees of all nodes in the graph < = number of edges < < Proof: < Induction on number of edges. < < \subsection{Paths and connectivity} < < \subsubsection{Paths, simple paths, and circuits} < < Path in a digraph: < Same as path in the corresponding binary relation. < Sequence of nodes $a_0,a_1,...,a_n$ such that each consecutive pair < $(a_i,a_{i+1})$ form an edge of the digraph. < Allow a path of length $0$, which is just a single vertex $a_0$. < < %[[[Shlomi: I prefer the definition of path in terms of vertices, not < %edges. Also, we should allow the length $0$ path. OK? Please < %change for the relations lecture.]]] < < Examples < < Path is {\em simple\/} if it does not repeat any vertices, except < possibly that the first vertex may be the same as the last. < < Examples < < This definition is different from the one used in Rosen: < He says no repeated edges. < And he insists that paths have to have at least two vertices (one edge). < %[[[The one I've used is, I think, reasonably standard. Needs < %checking. It seems much neater to me.]]] < This definition is consistent, for example, with CLR, the algorithms < textbook (6.046). < < Theorem: < If $|V| = n$, and there is a path from $x$ to $y$ in $G$, then there < is a path from $x$ to $y$ of length at most $n-1$. < < Examples < < {\em Circuit\/} (a.k.a. {\em cycle\/}) is a path that begins and ends < at the same vertex. 
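The degree theorem quoted in the deleted section above (sum of the in-degrees = sum of the out-degrees = number of edges) can be sanity-checked mechanically. A small Python sketch, with our own function name and edge-list encoding:

```python
# Each directed edge (x, y) contributes exactly one to out-degree(x)
# and exactly one to in-degree(y), so summing either kind of degree
# over all vertices counts every edge once -- the theorem above.
def degree_sums(edges):
    indeg, outdeg = {}, {}
    for (x, y) in edges:
        outdeg[x] = outdeg.get(x, 0) + 1
        indeg[y] = indeg.get(y, 0) + 1
    return sum(indeg.values()), sum(outdeg.values())

# five edges, including a self-loop (digraphs allow them)
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "a"), ("c", "c")]
in_sum, out_sum = degree_sums(edges)
# in_sum == out_sum == len(edges) == 5
```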
< < Examples < < \subsubsection{Paths and relational operations} < < Main results, (mainly review from relations, restated in digraph notation): < Let $G = (V,E)$. < \begin{itemize} < \item < There is a path of length $k$ from $x$ to $y$ in $G$ if and < only if $(x,y) \in E^k$. < ($E$ is the relation corresponding to the edges.) < \item < There is a path from $x$ to $y$ in $G$ if and only < if $(x,y)$ is in the reflexive transitive closure of $E$. < (In this case, we say that $x$ is {\em connected to\/} $y$ in $G$.) < \end{itemize} < < Recall/review how the adjacency matrix representation can be used to calculate < the needed products and take the needed closures. < < Can also use this representation to calculate the number of paths < between all pairs of vertices. < Do Rosen p. 472, Theorem 2 and its proof? < % [[[Or do this for HW?]]] < < Examples < < \subsubsection{Euler circuits} < < Euler circuit in digraph $G$: < A circuit in $G$ that traverses each edge of $G$ exactly once. < < Examples < < Theorem: < Suppose $G$ is a weakly connected digraph (define). < Then $G$ has an Euler circuit if and only if the in-degree and < out-degree of each vertex are equal. < < Proof: Can leave this for HW or tutorial, or do in class. < < [[[This theorem is derived from Rosen Section 7.5, exercise 22 and 34. < Proof needs working out. Ideas: < < ``only if'': (sketch) For any vertex $x$, appears some number < $k$ of times in the circuit. Each time, has one edge in and one edge out. < No repeats, and exhausts all the edges. < < < ``if'': Assume $G$ is a weakly connected digraph such that each < node's degrees are equal. < Give a procedure. It seems to me that basically the same argument < that is used for undirected graphs also works here. ??? < A little more detail: < < If edge set is empty, then weak connectivity implies that there can < be only one node. That node by itself constitutes a trivial (0 < edge) Euler circuit. So assume that the edge set is nonempty. 
< < In this case, we can show that there exists some circuit: < Show this by starting with any edge, and tracing from node to node < until we get back to the node we started with (argument as in the < argument for the undirected case). < < Now choose a maximum length circuit with no repeating edges, in $G$; < call it $C$. < If $C$ includes all the edges, we're done. So assume that it < doesn't. < < Let $H$ be the subgraph of $G$ consisting of all the edges of $G$ < that aren't in $C$, and all the nodes of $G$ that are incident on < these edges. < Claim that some node in $H$ is also in $C$. < For if not, $G$ is not weakly connected. Check me here... < Choose such a node, $x$. < < Now choose any edge in $H$ that is incident on $x$ (it doesn't matter < if it's ingoing or outgoing to $x$. < Starting with that edge, construct a cycle in $H$, following the < same procedure as above. Eventually, it has to get back to its < starting point. < < Now we have two cycles with disjoint edge sets and a common vertex, < $x$. < Can splice them together to get a larger cycle than $C$. < Contradicts choice of $C$ as maximum. ]]] < < [[[If we do this in lecture, then we could ask about Euler PATHS for HW.]]] < < \subsubsection{Strong connectivity} < < Digraph $G$ is {\em strongly connected\/} provided that every vertex < is connected to every other vertex, that is, there is a path from < every vertex to every other vertex. < < Examples of digraphs that are and aren't strongly connected. < < % 3:15 ? Depends how much you cover. < < \section{Undirected Graphs} < < Some relations, in particular, those that are symmetric but not < reflexive, have nice representations using a different kind of < graph, called an {\em undirected graph\/}, or just a {\em graph\/}. < < \subsection{Simple graphs} < < Start with a symmetric, irreflexive relation. < In a symmetric relation, if edge $(x,y)$ is present, edge $(y,x)$ is < also present. < In an irreflexive relation, no edge $(x,x)$ is present. 
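The construction above, turning a symmetric, irreflexive relation into an undirected graph, merges each pair of mirror-image edges $(x,y)$, $(y,x)$ into one unordered edge. A sketch of that collapse, using Python's `frozenset` for the unordered-set notation mentioned in the text (the helper name and encoding are ours):

```python
# A symmetric, irreflexive relation, given as ordered pairs, collapses
# into undirected edges: (x, y) and its mirror (y, x) become the
# single unordered edge {x, y}.
def undirected_edges(relation):
    assert all((y, x) in relation for (x, y) in relation), "not symmetric"
    assert all(x != y for (x, y) in relation), "not irreflexive"
    return {frozenset(pair) for pair in relation}

R = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")}
E = undirected_edges(R)
# four ordered pairs collapse into the two undirected edges {a,b}, {b,c}
```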
< < A digraph representing such a relation has the following properties: < \begin{enumerate} < \item < No self-loops. < \item < Edges come in pairs: wherever there is an edge from $x$ to $y$, there < is another edge from $y$ to $x$. < \end{enumerate} < Because edges come in pairs, it simplifies matters to think of these < two edges as one ``undirected edge''. < An undirected graph is defined formally to be a structure that < consists of vertices and undirected edges: < < Definition: < An undirected {\em graph\/}, also known as just a {\em graph\/}, < is a pair of sets $(V, E)$. < The elements of $V$ are called {\em vertices}. < The elements of $E$ are called {\em edges}. < Each edge is an unordered pair of distinct vertices. < < Graphs can be nicely represented with a diagram of dots and lines (not < arrows), as shown in Figure~\ref{fig:graph} 683,702c395,402 < As noted in the definition, each edge $(u,v) \in E$ is a pair of < distinct vertices $u, v \in V$. < Because it's an unordered pair, we really shouldn't be writing it < using Cartesian pair notation (because order matters in that < notation). < Really picky mathematicians denote such an unordered edge by the < unordered {\em set} $\set{u,v}$. < But this rapidly fills up your paper with hard to read $\set{}$ symbols. < So most people are a little casual here and use the ordered notation < even though it's an unordered pair. < < Edge $(u, v)$ is said to be {\em incident\/} to vertices $u$ and $v$. < Vertices $u$ and $v$ are said to be {\em adjacent} or {\em neighbors}. < Phrases like, ``an edge joins $u$ and $v$'' and ``the edge between $u$ < and $v$'' are common. < < Example: < Computer networks are often modeled nicely as undirected graphs, < because most communication links are bidirectional. < The set of vertices $V$ represents the set of computers in --- > As noted in the definition, each edge $(u, v) \in E$ is a pair of > distinct vertices $u, v \in V$. 
Edge $(u, v)$ is said to be {\em > incident} to vertices $u$ and $v$. Vertices $u$ and $v$ are said to > be {\em adjacent} or {\em neighbors}. Phrases like, ``an edge joins > $u$ and $v$'' and ``the edge between > $u$ and $v$'' are common. > > A computer network can be modeled nicely as a graph. In this > instance, the set of vertices $V$ represents the set of computers in 705a406 > \forstudents{ 708,709c409,411 < The definition in the preceding section only describes {\em simple < undirected graphs}. There are many ways to complicate matters. --- > There are actually many variants on the definition of a graph. The > definition in the preceding section really only describes {\em simple > undirected graphs}. There are many ways to complicate matters. 711c413 < \subsubsection{Multigraphs} --- > \subsubsection*{Multigraphs} 717,718c419 < \subsubsection{Pseudographs} < %[[[Rosen uses the term ``pseodograph'' for this.]]] --- > \subsubsection*{Self-Loops} 720,723c421,423 < In a simple graph or multigraph, each edge is a pair of distinct < vertices. A {\em self-loop} is an edge that connects a vertex to itself. < A {\em pseudograph\/} allows multiple edges and self-loops. < Figure~\ref{fig:multigraph} depicts a pseudograph. --- > In a simple graph, each edge is a pair of distinct vertices. A {\em > self-loop} is an edge that connects a vertex to itself. > Figure~\ref{fig:multigraph} depicts a multigraph with self-loops. 733c433,447 < \subsubsection{Weighted Graphs} --- > \subsubsection*{Directed Graphs} > > In a {\em directed graph} or {\em digraph}, edges are regarded not as > lines, but as arrows. (The technical difference is that in a simple > graph each edge is an unordered pair of vertices, whereas in a > directed graph each edge is an ordered pair.) > Figure~\ref{fig:digraph} depicts a directed graph.
> > \begin{figure}[htbp] > \centerline{\psfig{figure=/a/class/6.042/spring00/archive/lectures/figures/digraph.eps,height=1.5in}} > \caption{\em This is a picture of a directed graph or digraph.} > \label{fig:digraph} > \end{figure} > > \subsubsection*{Weighted Graphs} 738,739c452,454 < between vertices $u$ and $v$, then often $w(u, v)$ is taken to be a < default value, like infinity or zero.) --- > between vertices $u$ and $v$, then often $w(u, v)$ is infinity or > zero.) > } 741,746c456 < Recall that an ordinary graph can be represented nicely by an < {\em adjacency matrix}. < Because the underlying relation for an undirected graph is symmetric, < such a matrix is symmetric around the main diagonal. < Also, because the underlying relation is irreflexive, the main < diagonal consists of all $0$s. --- > \subsection{Graphs in the Real World} 748,750c458,475 < Some variants of simple graphs can also be described well by matrices. < For example, a weighted graph is naturally represented by a matrix < where $w(i, j)$ is the entry at row $i$ and column $j$. --- > There are many real-world phenomena other than computer networks that > can be described nicely by graphs. Here are some examples: > > \begin{description} > > \item[Airline Connections] Here the vertices are airports and edges > are flight paths. We could indicate the direction that planes fly > along each flight path by using a directed graph. We could use > weights to convey even more information. For example, $w(i, j)$ might > be the distance between airports $i$ and $j$, or the flying time > between them, or even the air fare. > > \item[Precedence Constraints] Suppose you have a set of jobs to > complete, but some must be completed before others are begun. (For > example, Attila the Hun advised that you should always pillage {\em > before} you burn.) Here the vertices are jobs to be done.
Directed > edges indicate constraints; there is a directed edge from job $u$ to > job $v$ if job $u$ must be done before job $v$ is begun. 752c477,482 < \subsection{Standard Types of Graphs} --- > \item[Program Flowchart] Each vertex represents a step of computation. > Directed edges between vertices indicate control flow. > > \end{description} > > \subsection{Standard Graphs} 768d497 < \subsection{More definitons} 770c499,510 < \subsubsection{Subgraphs} --- > \subsection{Adjacency Matrices} > > We defined a graph in terms of a set of vertices and a set of edges. > In a computer it is often represented by an {\em adjacency > matrix}---another name for the matrix representation of the > underlying relation. > > Some variants of simple graphs can also be described well by matrices. > For example, a weighted graph is naturally associated with a matrix > where $w(i, j)$ is the entry at row $i$ and column $j$. > > \subsection{Subgraphs} 777c517,518 < \subsubsection{Vertex degree} --- > > \subsection{Vertex Degree} 806,807d546 < % 3:30 < 891,892d629 < % 3:45 < 1171a909,910 > >