Karnaugh Maps: An Essential Tool for Understanding Complex Digital Logic Designs

  • Thread starter Kenneth Mann
In summary, the Karnaugh Map is a graphical method for simplifying digital logic design. It was first described in a paper by M. Karnaugh in 1953 and has since been refined and extended by others. The map is made up of adjacent cells that represent logical states and can handle up to eight variables. The map is designed so that only one variable can change across each axis, making it easier to minimize Boolean equations. The layout of the map is important and can be chosen based on the direction of changing variables.
  • #36
35. SPOS Use Example

If we look back to the example of insertion # 22, we get a case in which use of the SPOS might be slightly preferable to the SSOP, and which indicates that it is prudent to check out both alternatives. In Figure 37 this map was marked and then solved for the SSOP implementation, for which the circuitry is shown in figure 38. Now we shall solve this same map for the SPOS implementation; that grouping is shown in figure 65, along with the resulting SPOS derivation of:

W = (A' + C')(A + B)

The resulting circuits for the SPOS are then shown in figure 66, beneath those for the original SSOP implementation. The two functional depictions show a pretty much equivalent set; however, the two "practical" implementations seem to favor the SPOS case. This implementation apparently allows us to do without a couple of inverters, though this may not be the advantage in reality that it initially appears to be. It all depends on the surrounding circumstances. For example, if the signals come from flip-flops, the inverted signals are probably already available along with the uninverted ones. Also, the available selection of gate types is a consideration. Both alternatives should be examined, and then the choice made.
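Since both alternatives should be examined, a quick exhaustive check is a convenient way to confirm that two candidate forms really compute the same function. The sketch below is a minimal Python check; the sum-of-products form W = A'B + AC' is my own expansion for illustration, since the SSOP from Figure 37 appears only in the attachment.

```python
from itertools import product

def spos_w(a, b, c):
    # W = (A' + C')(A + B), the SPOS grouping from figure 65
    return (not a or not c) and (a or b)

def sop_w(a, b, c):
    # W = A'B + AC', an equivalent sum-of-products expansion
    # (my own expansion for illustration; the actual SSOP is in figure 37)
    return (not a and b) or (a and not c)

# Exhaustively compare the two forms over all 8 input combinations
for a, b, c in product([0, 1], repeat=3):
    assert bool(spos_w(a, b, c)) == bool(sop_w(a, b, c))
print("SPOS and SOP forms agree on all 8 input combinations")
```

For three variables this is only eight cases, and even a ten-variable map is only 1024, so an exhaustive check stays cheap.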

KM
 

Attachments

  • Figure 65.jpg
  • Figure 66.jpg
  • #37
36. SPOS Decode For BCD Counter

Here we look at another example; in this case, the decode logic for the BCD counter derived in insertion # 29. We will derive the SPOS logic needed to drive each flip-flop, and compare each to its SSOP counterpart. The grouping for each case is shown in Figure 67.

1) In the first grouping, to derive the SPOS logic for "Jb", we circle all the "empty" spaces (plus a couple of "don't cares"), which are all in this case enclosed in a green circle. This grouping, which gives us the logic for [Jb'], then yields the equation:

Jb' = A', therefore
Jb = A

This is identical to what we got for the SSOP case (which we could have anticipated here, because the result yields trivial logic), so this logic stays the same.

2) In the second grouping, to derive the SPOS logic for "Jc", we circle all the "empty" spaces (plus some "don't cares"), in this case enclosed with two circles, one green and one red. This grouping, which gives us the logic for [Jc'], then yields the equation:

Jc' = A' + B', therefore
Jc = (A' + B')'
= AB

Again, it is identical (because again the result reduced to something simpler than a full SPOS or SSOP), so again, the logic is the same.

3) In the third grouping, to derive the SPOS logic for "Jd", we circle all the "empty" spaces (plus some "don't cares"), in this case enclosed in three circles, one green, one red and one dark blue. This grouping, which gives us the logic for [Jd'], then yields the equation:

Jd' = A' + BC' + B'D', therefore we get
Jd = (A' + BC' + B'D')'
= A(BC')'(B'D')'
= A(B' + C)(B + D)

This time, we do get a full SPOS representation, so we compare it to the SSOP case (Jd = AD + ABC), and the result appears to be pretty much a "wash" (at least gatewise).
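Both steps here can be checked by brute force. The sketch below is a minimal Python check; the assumption that A is the counter's least-significant bit (so the state value is 8D + 4C + 2B + A) is mine, inferred from the Jb = A toggle, not stated in the figures. It verifies the De Morgan conversion exactly, and shows that the SPOS and SSOP forms for Jd agree on every valid BCD state (0 through 9).

```python
from itertools import product

def jd_spos(a, b, c, d):
    # Jd = A(B' + C)(B + D), from the De Morgan conversion above
    return a and (not b or c) and (b or d)

def jd_ssop(a, b, c, d):
    # Jd = AD + ABC, the SSOP result quoted for comparison
    return (a and d) or (a and b and c)

for a, b, c, d in product([0, 1], repeat=4):
    # The De Morgan step (A' + BC' + B'D')' = A(B' + C)(B + D) holds everywhere
    jd_complement = not a or (b and not c) or (not b and not d)
    assert bool(not jd_complement) == bool(jd_spos(a, b, c, d))
    # Assumption: A is the least-significant bit, so the state value is
    # 8D + 4C + 2B + A; valid BCD states are 0-9, the rest are "don't cares"
    # where the two forms are free to differ
    if 8 * d + 4 * c + 2 * b + a <= 9:
        assert bool(jd_spos(a, b, c, d)) == bool(jd_ssop(a, b, c, d))
print("forms agree on all valid BCD states")
```

Under that bit-weight assumption the two forms differ only on one unused state (A = B = D = 1, C = 0, state 11), which is exactly the freedom the don't-care cells allow.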

4) In the fourth grouping, to derive the SPOS logic for "Co", we circle all the "empty" spaces (plus some "don't cares"), in this case enclosed with two circles, one red and one green. This grouping, which gives us the logic for [Co'], then yields the equation:

Co' = D' + A', therefore
Co = (A' + D')'
= AD

Again, we get a trivial case, which is the same as for the SSOP. It should be noted (again), however, that the results will not always be identical, as they were in three of the four cases here. It is still advisable to check both ways.

KM
 

Attachments

  • Figure 67.jpg
  • #38
37. SPOS For The 3 X 3 Multiplier

One final example of a SPOS case will be used to illustrate a more complex example. In this case, we deal with the "3 X 3 Multiplier" that we derived back in insertions # 25 and # 26. The maps for the variables from that example, which were shown in figures 45 and 46, are again shown here in figures 68, 69 and 70 - - - but this time being solved for SPOS, so that here the "zeros" (or empty spaces) are grouped.

In figure 68 we seek the SPOS for "P", so here we see that we can group the "zeros" as is shown with the green circled grouping and the red circled grouping. From this we get:

P' = A' + D', therefore
P = (A' + D')'
= AD
and this is the same (trivial) case as with the SSOP.

Also, in figure 68, we seek the SPOS for "R". Here we can get the following five groupings:
Red: D'E'
Green: A'B'
Orange: B'E'
Purple Hatched: ABDE
Black: A'BD'E
Here, we can see that there are five terms, and fourteen instances of the variables; thus it appears reasonable in this case that use of the SPOS will probably not give an advantage over the SSOP. (We could work it out, but there doesn't appear to be an advantage to that.)
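These term-and-literal counts are a handy rough cost measure, and they can be tallied mechanically. A minimal Python helper (the term strings are transcribed from the groupings listed above):

```python
def cost(terms):
    """Count the terms and total literal instances in a grouping list.

    Terms are written as in the text (e.g. "A'BD'E"): each letter is one
    literal; an apostrophe marks complementation and adds no literal.
    """
    literals = sum(ch.isalpha() for term in terms for ch in term)
    return len(terms), literals

# The groupings listed above for R'
r_terms = ["D'E'", "A'B'", "B'E'", "ABDE", "A'BD'E"]
print(cost(r_terms))  # → (5, 14)
```

Gate-input counts would weight the comparison more precisely, but term and literal counts are usually enough to decide whether working out the full SPOS is worthwhile.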

In figure 69, we first seek the SPOS for "S". Here, the attempt shown has eleven groupings:
Green: D'E'F'
Red: A'B'C'
Green Hatched: A'BC'E'
Red Hatched: B'D'EF'
Orange: A'B'D'E
Orange Hatched: A'BD'E'
Black Hatched: A'B'D'E'
Black: ACDF
Purple Hatched: AC'DF'
Purple: A'BCDE
Blue: ABD'EF
Here, there are more terms (11 to 9) and more variable iterations (42 to 36), so there's probably not going to be much advantage here, if any, so we don't go on.

When we seek the SPOS for "T", we have the following eleven groupings (two are not circled):
Black Hatched: B'D'E'
Purple Hatched: A'B'E'
Orange: C'D'F'
Black: A'C'F'
Red: E'F'
Red Hatched: AC'EF
Green: B'C'
Green Hatched: BCDF'
Gray: ACDEF (cells 61 & 63)
(uncircled) ABCDF (cells 47 & 63)
(uncircled) A'BCD'EF (cell 54)
Again, there doesn't appear to be much advantage to going on.

In figure 70, we seek the SPOS for the functions "V" and "X". Here again, once the terms are determined, there appears to be little advantage to using the SPOS.

KM
 

Attachments

  • Figure 68.jpg
  • Figure 69.jpg
  • Figure 70.jpg
  • #39
Finally - - -

At this point, this series of insertions on Karnaugh Mapping is ended. I hope that these have been of use and will serve to show some of the power and flexibility of the ordered reflective maps demonstrated in this series.

There are no more discussions intended on this subject; however, if interesting examples are found from time to time, these may be added. I hope that anyone who comes up with interesting applications will also do so. We might start a collection of applications, either in this or another string. After all, this is an applied subject.

I am considering other possible strings as well, if any can be sufficiently developed. Presently the only one I have worked out at all is on the subject of "state machines", and I'm not sure that it would be applicable here. Last year, I wrote up quite a bit on 'computer design and organization', but then the machine that I was using turned into an inert object and I haven't had time or money yet to repair it, so until I do, that one will remain "shelved".

Also, any comments, additions and corrections would be appreciated. (I am aware that figure 64 is incorrect. There are also a couple of errors in that insertion.) Any constructive input is welcome.

KM
 
  • #40
May I know how I can find the next state from the KV map, when the KV map for its first condition is given?
 
  • #41
teng125 said:
May I know how I can find the next state from the KV map, when the KV map for its first condition is given?


What are you looking for? Your question is not clear. In general, a K-Map only organizes the minterms of a Boolean equation, and allows us to combine them for minimization purposes.

Are you looking to derive State conditions for State Machines? (The K-Map can be adapted to become State Tables, but this is another subject altogether, and requires a derivation of its own - - and a lot of explanation - - more than I care to go into at this time.) If so, you would then need to state your conditions a lot more completely.

KM
 
  • #42
Combinatorial Circuit Example - - - Trinary System Adder Element
1) D and C are the "variables" for the addend.
2) B and A are the "variables" for the augend.
3) E is the variable for the carry in.
4) Y and X are the functions for the sum.
5) Z is the function for the carry out.

What is the use of the F variable? Please describe it.
 
  • #43
atharsani said:
Combinatorial Circuit Example - - - Trinary System Adder Element
1) D and C are the "variables" for the addend.
2) B and A are the "variables" for the augend.
3) E is the variable for the carry in.
4) Y and X are the functions for the sum.
5) Z is the function for the carry out.

What is the use of the F variable? Please describe it.

It is unused; it is not needed. Five variables are needed in this case, but maps are usually drawn with an even number of variables, so the uppermost variable is ignored, and that part of the truth table and the map can be left blank.


KM
 
  • #44
Dear Sir,
The map of Y is not given correctly, because in the truth table the output of Y at 26 (011010) is 1, but in the map the 1 is at 27. Please check it.
 
  • #45
atharsani said:
Dear Sir,
The map of Y is not given correctly, because in the truth table the output of Y at 26 (011010) is 1, but in the map the 1 is at 27. Please check it.

You are correct!

Both the "1" in 26 and the "Don't Care" in 27 got shifted to the left by one. The result, when they are put in the correct place is that the term AC'E becomes BDE + AC'D'E.
(Note that all the F' values should have left out. After all, F isn't used. The fact that they are all F' should have told me that.) The final result, then is:

X = AC'D'E' + A'B'CE' + BDE' + AC'DE + BCE + A'B'C'D'E
Y = BC'D'E' + ACE' + A'B'DE' + BDE + AC'D'E + A'B'CE
Z = BC + AC'D + BD + DE + A'BE + ACE

Thanks for the correction. I was very pleasantly surprised to find someone interested enough to understand the subject well, and who went through it closely enough to find those errors.

KM
 
  • #46
thanks , your tutorial is great!
 
  • #47
I have been hoping that someone would initiate a new string - - expanding the methods used in this string to illustrate a series on "State Machines". It is a simple step. With very little modification, the tables and maps can be extended to them.

KM
 
  • #48
Thanks a lot for your tutorial.

I have a question! How am I supposed to draw/build a ten variable Karnaugh Map?
 
  • #49
Rainier9 said:
Thanks a lot for your tutorial.

I have a question! How am I supposed to draw/build a ten variable Karnaugh Map?

Six variables horizontally, and four vertically. This would probably take at least an 11" X 17" sheet and a lot of work (both the drawing and the use). I have done it, so I know. Sorry it took this long.
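The axis orderings for such a map follow the reflected ("ordered reflective") Gray code used throughout this series, so that only one variable changes between adjacent rows or columns. A minimal Python sketch of generating those axis labels for the six-by-four split described above:

```python
def gray_code(n):
    """Reflected Gray code labels for an n-variable K-map axis.

    Built by the reflect-and-prefix construction, so each successive label
    (including the wrap-around from last back to first) differs from its
    neighbor in exactly one bit - the adjacency the map's axes require.
    """
    if n == 0:
        return [""]
    prev = gray_code(n - 1)
    return ["0" + g for g in prev] + ["1" + g for g in reversed(prev)]

# Axis labels for a ten-variable map split six by four, as described above
rows, cols = gray_code(4), gray_code(6)
print(len(rows), len(cols))  # → 16 64
```

Drawing the resulting 16 x 64 grid by hand is the hard part; generating the labels is not.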

KM
 
  • #50
Thanks for answering! I actually got a better way to 'build' my circuit.
 
  • #51
Sad to say but ...

Karnaugh maps became an obsolete intellectual curiosity when vacuum tubes and relays were replaced by dense semiconductor logic in the late 60's. Hardware is cheap and minimization is not usually worth the effort except in certain cases such as a cell for some memories that must be repeated many many times.

If I am wrong about this, I would love to be educated.
What practical use are they today ?

I remember having much fun learning them in Engineering school in the 60's.

You should put out a book / pamphlet with your articulation here.

Very nice
Thank you
 
  • #52
paulfr said:
Hardware is cheap and minimization is not usually worth the effort

I'm no expert on the digital design of today, but I think your argument that logic minimization is no longer worthwhile is wrong, paulfr.
In many cases the minimization may be performed in software, in a step called logic synthesis.
 
  • #53
Point well taken.

When I said "not worth the effort", I meant by hand, yes.

Of course design automation SW will implement a minimization step in a way specified with constraints called out by the designer: min power, min silicon real estate, maximum speed, etc.
 
  • #54
Kenneth Mann said:
Six variables horizontally, and four vertically. This would take probably at least an 11" X 17" sheet and a lot of work (both the drawing and the use). I have done it so I know. Sorry it took this long.

KM

I hope that you understand that in the real world, minimization is not done by Karnaugh maps.
 
  • #55
paulfr said:
Sad to say but ...

Karnaugh maps became an obsolete intellectual curiosity when vacuum tubes and relays were replaced by dense semiconductor logic in the late 60's. Hardware is cheap and minimization is not usually worth the effort except in certain cases such as a cell for some memories that must be repeated many many times.

If I am wrong about this, I would love to be educated.
What practical use are they today ?

I remember having much fun learning them in Engineering school in the 60's.

You should put out a book / pamphlet with your articulation here.

Very nice
Thank you

Yours are wonderful observations, but I couldn't disagree more. First, if you've followed these insertions, minimization itself is not the main thrust. The main use is to define logic relations that accomplish the desired tasks without design blow-ups. Little attempt was made to determine whether the end products were minimum or not - - though this is desired. The interest was simply to get the job done with the least extended effort and the greatest understanding of the end product.

Today, the design effort is often devoted to very large, dense packages, and the essential ingredient is understanding what is going on within those packages. We use Hardware Description Languages to help us get through the difficult and complex accounting involved in designing those packages. These allow us several shortcut approaches, but in the end they are simply alternative means of laying out the real-estate and defining the interrelations needed to accomplish our logic schematics, and as such, are totally interchangeable if we desired to do so. We specify our preferred constructs and approaches and, with luck, the software lays out the real-estate using the logic substitutions that are pre-defined in the languages - - according to how the language builders determined their preferred approaches.

It is important, however, that the designer still understand what goes into a logic design - - and not simply minimization, but what makes it work. The design aids won't do the designing for you; they simply make it more convenient. Without an understanding of the underlying relationships, as can be worked out by mapping (and other approaches to symbolic logic), complex designs become much more difficult. You wind up with a few days or weeks of design lay-out, followed by months or even years of trouble-shooting.

KM
 
