All Possible Sortings of a DAG

  • MHB
  • Thread starter evinda
In summary, the conversation discusses finding all possible topological sortings in a DAG. The first person shares a program they found but doesn't fully understand, and the second person suggests using an adjacency matrix to find the sortings. They then discuss their own algorithm, which involves repeatedly selecting the first node with all incoming dependencies satisfied and adding it to an output list. The conversation continues with clarifications and improvements to the algorithm, ultimately resulting in a revised version that uses a set of starting nodes and an output set to process the nodes.
  • #36
evinda said:
The numbers represent the discovery and finishing times. The arrows show from which node we came to the current one.

What are "discovery and finishing times"? (Wondering)
evinda said:
I applied only DFS, starting with the nodes that have no incoming edges. (Tmi)

Now I got the following topological sortings:

[m] {e, b, a, d, c}, {b, e, a, d, c}, {b, e, a, c, d}, {e, b, a, c, d}, {e, a, c, b, d}, {e, a, b, d, c}[/m]

But I haven't found the topological sorting [m] {e, a, b, c, d}[/m]. How do we get it? (Thinking)
To find it, should we start from node [m]d[/m], or not?

It's a variant of [m]{e, a, b, d, c}[/m] where only the last 2 nodes are swapped around, which has no impact on the topological ordering. How is it that you did not find it? (Wondering)
 
  • #37
I like Serena said:
What are "discovery and finishing times"? (Wondering)

The DFS procedure takes as input a graph $G$, and outputs its predecessor subgraph in the form of a depth-first forest. In addition, it assigns two timestamps to each vertex: discovery and finishing time.
The algorithm initializes each vertex to “white” to indicate that it has not been discovered yet, and sets each vertex’s parent to null. The procedure begins by selecting one vertex u from the graph, setting its color to “grey” to indicate that the vertex is now discovered (but not finished) and assigning it discovery time 0.
For each vertex v in u’s adjacency list Adj[u] that is still marked “white”, DFS-Visit is called recursively, assigning to each vertex the appropriate discovery time d[v] (the time variable is incremented at each step).
If no white descendant of v exists, then v becomes black, is assigned the appropriate finishing time, and the algorithm returns to the exploration of v’s predecessor.
If all of u’s descendants are black, u becomes black as well. If no other white vertices remain in the graph, the algorithm terminates; otherwise a new “source” vertex is selected from the remaining white vertices and the procedure continues as before.
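For reference, here is a minimal C sketch of the DFS procedure described above (colors, parents, and the two timestamps). It is only an illustration: the graph is assumed to sit in an adjacency matrix, as elsewhere in this thread, and the names V, col, parent, d, f and time_ctr are mine.

Code:
    #include <stdio.h>

    #define V 3                          /* number of vertices (illustrative) */

    enum color { WHITE, GREY, BLACK };

    int A[V][V];                         /* adjacency matrix: A[u][v] == 1 means edge u -> v      */
    enum color col[V];                   /* WHITE = undiscovered, GREY = discovered, BLACK = done */
    int parent[V];                       /* predecessor in the depth-first forest, -1 = none      */
    int d[V], f[V];                      /* discovery and finishing times                         */
    int time_ctr = 0;                    /* global time counter                                   */

    void dfs_visit(int u)
    {
        col[u] = GREY;                   /* u is discovered but not yet finished */
        d[u] = time_ctr++;               /* discovery time                       */
        for (int v = 0; v < V; v++) {
            if (A[u][v] == 1 && col[v] == WHITE) {
                parent[v] = u;
                dfs_visit(v);            /* finish v's subtree before finishing u */
            }
        }
        col[u] = BLACK;                  /* every descendant of u is finished */
        f[u] = time_ctr++;               /* finishing time                    */
    }

    int main(void)
    {
        A[0][1] = 1;                     /* tiny example graph: 0 -> 1, 0 -> 2 (illustrative) */
        A[0][2] = 1;
        for (int u = 0; u < V; u++) {
            col[u] = WHITE;
            parent[u] = -1;
        }
        for (int u = 0; u < V; u++)      /* pick each remaining white vertex as a new source */
            if (col[u] == WHITE)
                dfs_visit(u);
        for (int u = 0; u < V; u++)
            printf("vertex %d: d = %d, f = %d\n", u, d[u], f[u]);
        return 0;
    }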
I like Serena said:
It's a variant of [m]{e, a, b, d, c}[/m] where only the last 2 nodes are swapped around, which has no impact on the topological ordering. How is it that you did not find it? (Wondering)
I don't think we get it by applying every possible DFS. Or have I maybe done something wrong? :confused:
 
  • #38

Attachments

  • 11148979_640135589464454_212831154_n.jpg
  • #39
So even if we want to find a topological sort, can we visit at each step any node, even if it has incoming edges?
Because in this case, we also get the output {e, a, b, d, c}... :confused:
 
  • #40
evinda said:
The DFS procedure takes as input a graph $G$, and outputs its predecessor subgraph in the form of a depth-first forest. In addition, it assigns two timestamps to each vertex: discovery and finishing time.
...

Ah! That explains it. (Mmm)
evinda said:
I don't think we get it by applying every possible DFS. Or have I maybe done something wrong? :confused:

Your DFS algorithm appears to finish each subtree before moving on to the next subtree.
However, that is not a requirement for a topological ordering.
So you're missing out on some topological orderings. (Thinking)
evinda said:
So even if we want to find a topological sort, can we visit at each step any node, even if it has incoming edges?
Because in this case, we also get the output {e, a, b, d, c}... :confused:

Yes, we can visit any node in each step.
And yes, that node can have incoming edges, as long as all its predecessors are already in the current topological ordering. (Wasntme)
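In code, "all its predecessors are already in the current topological ordering" is just a check on that node's column of the adjacency matrix. A small sketch (the names V, A, placed and ready are mine):

Code:
    #define V 5          /* number of vertices (illustrative) */

    int A[V][V];         /* A[u][v] == 1 means there is an edge u -> v       */
    int placed[V];       /* placed[u] == 1 once u is in the current ordering */

    /* A node v may be visited next iff every predecessor of v
       has already been added to the (partial) topological ordering. */
    int ready(int v)
    {
        for (int u = 0; u < V; u++)
            if (A[u][v] == 1 && placed[u] == 0)
                return 0;
        return 1;
    }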
 
  • #41
Code:
    S: set of starting nodes (nodes with no incoming edges)
    L: output list (the ordering built so far)
    A: adjacency matrix

    Recurse()

    Recurse(){
         if S empty
            output L
            return
         // Save the current state so that each choice below can be undone
         // (oldA, oldS, oldL are local to this call)
         oldA = A
         oldS = S
         oldL = L
         // For each possible choice
         for each node q in S
             // Execute choice
             delete q from S and add it to L
             Alg(q)
             // Recurse
             Recurse()
             // Undo choice
             S = oldS
             L = oldL
             A = oldA
    }

    // Remove q's outgoing edges; every node that is left without
    // incoming edges becomes a new starting node
    Alg(q){
         for i=1 to |V| {
              if (A(q,i)==1){
                   A(q,i)=0
                   k=0
                   for j=1 to |V| {
                        if (A(j,i)==0){
                            k=k+1
                        }
                   }
                   if (k==|V|){
                       add i into S
                   }
              }
         }
    }
How can we find the complexity of the algorithm?

I am a little confused because we have a function that calls itself recursively in a for-loop. :confused:
 
  • #42
If we want to have all the topological sortings in an array, do we have to do the following? (Thinking)
I tried to implement the algorithm in C.

Code:
    S: array of starting nodes (a slot is set to NULL once its node has been placed)
    L: output array (the ordering built so far)
    A: adjacency matrix
    T: array that collects every complete ordering

    p = 0;        // number of complete orderings stored in T
    n = 0;        // number of nodes currently placed in L
    Recurse();

    Recurse(){
        // Base case: every slot of S is used up, i.e. every node has been placed
        k = 0;
        for (i = 0; i < length(S); i++){
            if (S[i] == NULL){
                k++;
            }
        }
        if (k == length(S)){
            for (j = 0; j < V; j++){     // V = number of vertices
                T[p][j] = L[j];
            }
            p++;
            return;                      // output L and backtrack
        }
        // Save the state; oldA, oldS, oldL (and the length of S) must be
        // local to this call, otherwise deeper recursive calls overwrite them
        for (i = 0; i < V; i++){
            for (j = 0; j < V; j++){
                oldA[i][j] = A[i][j];
            }
        }
        for (i = 0; i < length(S); i++){
            oldS[i] = S[i];
        }
        for (i = 0; i < length(L); i++){
            oldL[i] = L[i];
        }
        // For each possible choice
        for (i = 0; i < length(S); i++){
            if (S[i] == NULL){           // this slot was already consumed
                continue;
            }
            // Execute choice: append S[i] to L and remove it from S
            q = S[i];
            L[n++] = q;
            S[i] = NULL;
            Alg(q);
            // Recurse
            Recurse();
            // Undo choice (note the loop indices differ from i, so i is not clobbered)
            n--;
            for (a = 0; a < length(oldS); a++){
                S[a] = oldS[a];
            }
            // also restore the old length of S here, since Alg(q) may have appended to it
            for (a = 0; a < length(oldL); a++){
                L[a] = oldL[a];
            }
            for (a = 0; a < V; a++){
                for (b = 0; b < V; b++){
                    A[a][b] = oldA[a][b];
                }
            }
        }
    }

    // Remove q's outgoing edges; a node whose last incoming edge
    // disappears becomes a new starting node and is appended to S
    Alg(q){
        for (i = 0; i < V; i++){
            if (A[q][i] == 1){
                A[q][i] = 0;
                k = 0;
                for (j = 0; j < V; j++){
                    if (A[j][i] == 0){
                        k++;
                    }
                }
                if (k == V){             // column i is all zeros: no incoming edges left
                    S[length(S)++] = i;
                }
            }
        }
    }
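For comparison, here is a minimal, compilable C sketch of the same backtracking idea. It is not the code above made exact: instead of copying and restoring A, S and L at every level, it leaves the matrix untouched and only marks which nodes have been placed, so the undo step is a single assignment. The 4-node example DAG and all names (V, A, placed, order, ready, recurse) are illustrative.

Code:
    #include <stdio.h>

    #define V 4                          /* number of vertices (illustrative) */

    /* example DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3 (illustrative) */
    int A[V][V] = {
        {0, 1, 1, 0},
        {0, 0, 0, 1},
        {0, 0, 0, 1},
        {0, 0, 0, 0},
    };

    int placed[V];                       /* placed[v] == 1 once v is in the ordering */
    int order[V];                        /* the ordering built so far                */

    /* v may be placed next iff all of its predecessors are already placed */
    int ready(int v)
    {
        for (int u = 0; u < V; u++)
            if (A[u][v] == 1 && placed[u] == 0)
                return 0;
        return 1;
    }

    /* try every still-unplaced, ready vertex at position pos of the ordering */
    void recurse(int pos)
    {
        if (pos == V) {                  /* all vertices placed: print one complete ordering */
            for (int i = 0; i < V; i++)
                printf("%d ", order[i]);
            printf("\n");
            return;
        }
        for (int v = 0; v < V; v++) {
            if (!placed[v] && ready(v)) {
                placed[v] = 1;           /* execute choice */
                order[pos] = v;
                recurse(pos + 1);        /* recurse        */
                placed[v] = 0;           /* undo choice    */
            }
        }
    }

    int main(void)
    {
        recurse(0);                      /* prints 0 1 2 3 and 0 2 1 3 */
        return 0;
    }
The choice, recurse, undo pattern is the same as above; marking nodes instead of editing the matrix just avoids the oldA/oldS/oldL copies.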
 
  • #43
evinda said:
How can we find the complexity of the algorithm?

I am a little confused because we have a function that calls itself recursively in a for-loop. :confused:

Suppose [m]Recurse()[/m] is called with an $S$ of size $n$.
Then the for-loop is executed $n$ times and the Recurse() inside the for-loop is called with an $S$ of size $n-1$.
So:
$$T(n) = n \times (T(\colorbox{lightgray}{Alg(q)}) + T(n-1) + constant) + constant$$
(Thinking)
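A rough way to read this (with my simplification that, for a fixed graph, Alg(q) costs $O(|V|^2)$ because of its two nested loops, so it can be treated as a constant $c$):
$$T(n) = n\bigl(c + T(n-1) + c'\bigr) + c'' \;\ge\; n\,T(n-1) \;\Rightarrow\; T(n) \ge n!\;T(0).$$
So the running time grows at least like $n!$ in the number of available choices, which is unavoidable here: a DAG with no edges on $n$ vertices already has $n!$ distinct topological orderings, and the algorithm has to output all of them.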
 
  • #44
I like Serena said:
Suppose [m]Recurse()[/m] is called with an $S$ of size $n$.
Then the for-loop is executed $n$ times and the Recurse() inside the for-loop is called with an $S$ of size $n-1$.
So:
$$T(n) = n \times (T(\colorbox{lightgray}{Alg(q)}) + T(n-1) + constant) + constant$$
(Thinking)

How can we find the complexity?
I calculated it in Wolfram and I got the following:

https://www.wolframalpha.com/input/?i=T%28n%29%3Dn%2AT%28n-1%29%2Bn%5E3%2C+T%280%29%3D1

So isn't it possible to solve the recurrence relation?
 
  • #45
Couldn't you use the "natural" divide and conquer algorithm derived from the properties of topological orderings (i.e. pick any vertex, find all vertices less than, all vertices greater than, distribute incomparable vertices and recurse on the left and right vertex sets) or is the backtracking algorithm more efficient?​
 
  • #46
Bacterius said:
Couldn't you use the "natural" divide and conquer algorithm derived from the properties of topological orderings (i.e. pick any vertex, find all vertices less than, all vertices greater than, distribute incomparable vertices and recurse on the left and right vertex sets) or is the backtracking algorithm more efficient?​

I haven't heard of this algorithm. Could you explain further how it works? Can we find all the possible topological sortings that way? What would be its time complexity?
 
  • #47
In general, what is the best way to find all the possible topological sortings in at most quadratic time? (Thinking)
 
