A | B | C | X |
---|---|---|---|
0 | 0 | 0 | 0 |
0 | 0 | 1 | 0 |
0 | 1 | 0 | 0 |
0 | 1 | 1 | 1 |
1 | 0 | 0 | 0 |
1 | 0 | 1 | 1 |
1 | 1 | 0 | 1 |
1 | 1 | 1 | 0 |
A logic gate implements an algorithm that defines its output with respect to its inputs; this behaviour can also be expressed by a truth table. The table above describes an arbitrary algorithm:
If exactly two of the inputs are 1, the output is 1.
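As a sketch, such a truth-table algorithm can be expressed directly in code; the function name here is our own invention:

```javascript
// Hypothetical helper: 1 when exactly two of the three 1-bit inputs are 1,
// matching the truth table above (note that 1,1,1 gives 0).
function exactlyTwoOnes(a, b, c) {
  return a + b + c === 2 ? 1 : 0;
}

console.log(exactlyTwoOnes(0, 1, 1)); // 1
console.log(exactlyTwoOnes(1, 1, 1)); // 0
```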
Suppose that we need to develop a logic circuit for a car indicator with three sequentially illuminating lights. We might choose to use a sequence of states, through which we iterate, in order to achieve this; here X0 is light 1, X1 is light 2 and X2 is light 3.
If we suppose that there is an unmarked input, a counter, that causes our circuit to change its state in progressive order, our lights will have the desired behaviour.
Both the operators OR and AND are types of logic gates.
A | B | X0 | X1 | X2 |
---|---|---|---|---|
0 | 0 | 0 | 0 | 0 |
0 | 1 | 1 | 0 | 0 |
1 | 0 | 1 | 1 | 0 |
1 | 1 | 1 | 1 | 1 |
The X0 column is set whenever there is an input on either A or B, making it effectively an OR gate. X1 is even simpler: it is 1 whenever A is 1. X2 is set to 1 whenever both A and B are 1.
A | X |
---|---|
0 | 1 |
1 | 0 |
There is another type of gate that we could use here, called the inverter, sometimes known as the NOT gate. The inverter transforms the input signal to its inverse state: a 0 into a 1, or a 1 into a 0. On a circuit diagram the inversion is depicted by a small circle next to the element on the circuit's path.
A | B | C | X |
---|---|---|---|
0 | 0 | 0 | 0 |
0 | 0 | 1 | 0 |
0 | 1 | 0 | 0 |
0 | 1 | 1 | 0 |
1 | 0 | 0 | 0 |
1 | 0 | 1 | 0 |
1 | 1 | 0 | 0 |
1 | 1 | 1 | 1 |
The AND gate outputs a logic 1 only if all inputs equal 1; it is analogous to an array of switches in series in an electronic circuit.
Mathematicians often use the wedge symbol ∧ to represent AND, as in X = A∧B; the wedge is a logic symbol. In many electrical engineering diagrams, however, the product or dot is used instead: X = A·B. It is easy to see why we might think of this as a multiplication: looking at the digits in our truth table, the only time we get a 1 from our gate is when no input contributes a factor of zero, exactly as with multiplication. We will see later how this symbol links well with the one used for OR, which we will examine next.
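A quick sketch of the multiplication analogy in JavaScript (our choice of language for examples): for single-bit values, the product reproduces the AND truth table.

```javascript
// For 1-bit values, AND behaves exactly like multiplication:
const and = (a, b) => a * b;

for (const a of [0, 1]) {
  for (const b of [0, 1]) {
    console.assert((a & b) === and(a, b)); // bitwise AND equals the product
  }
}
```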
A | B | C | X |
---|---|---|---|
0 | 0 | 0 | 0 |
0 | 0 | 1 | 1 |
0 | 1 | 0 | 1 |
0 | 1 | 1 | 1 |
1 | 0 | 0 | 1 |
1 | 0 | 1 | 1 |
1 | 1 | 0 | 1 |
1 | 1 | 1 | 1 |
The OR gate outputs a 1 if any of its inputs is 1; it is analogous to an array of switches in parallel in an electrical circuit.
In mathematics a small ∨ is typically used to denote OR, as in X = A∨B. The ∨ represents the maximum function, which makes sense for our boolean logic: for each row, if any input is 1, the output takes its maximum value. However, we will not be using it. Later on, when designing our logic, we will be using 'sum of products' expressions, and occasionally a 'product of sums'; the term 'sum' refers to the OR logic, so you will frequently see the + sign used to represent the OR operation. We will use the + sign for OR: X = A+B.
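The maximum-function view can be sketched in code; for single bits, `Math.max` reproduces the OR truth table:

```javascript
// For 1-bit values, OR is the maximum function:
const or = (a, b) => Math.max(a, b);

for (const a of [0, 1]) {
  for (const b of [0, 1]) {
    console.assert((a | b) === or(a, b)); // bitwise OR equals the maximum
  }
}
```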
A | B | X |
---|---|---|
0 | 0 | 0 |
0 | 1 | 1 |
1 | 0 | 1 |
1 | 1 | 0 |
The exclusive-OR (XOR) gate has several definitions, not all of which mutually agree. Starting with the mathematical one: output a logic 1 if exactly one input is a logic 1. From this we can understand why it is known as 'exclusive': exactly one input must be at logic 1.
A common use of XOR is in parity bits, used to check for data corruption, which requires a different definition: output a logic 1 if the number of 1's at the inputs is odd, and a 0 if it is even, so that a parity bit may be set depending upon the result of the XOR operation on the input. Our two-input truth table also fits this definition, but what happens if there are three inputs?
A | B | C | X |
---|---|---|---|
0 | 0 | 0 | 0 |
0 | 0 | 1 | 1 |
0 | 1 | 0 | 1 |
0 | 1 | 1 | 0 |
1 | 0 | 0 | 1 |
1 | 0 | 1 | 0 |
1 | 1 | 0 | 0 |
1 | 1 | 1 | 1 |
Our three-input exclusive-OR matches the two-input gate in its first rows, but the result differs in the latter half of the table. Using basic two-input XOR gates, we can obtain the result shown in this truth table by cascading one XOR gate directly into another, as shown in the following diagram.
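The cascade can also be checked in code: feeding the output of one XOR into another gives exactly the odd-parity behaviour of the table above.

```javascript
const xor = (a, b) => a ^ b;

// Cascading one XOR gate into another gives the three-input table above:
const xor3 = (a, b, c) => xor(xor(a, b), c);

for (const a of [0, 1]) for (const b of [0, 1]) for (const c of [0, 1]) {
  // The cascade outputs 1 exactly when the number of 1 inputs is odd.
  console.assert(xor3(a, b, c) === (a + b + c) % 2);
}
```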
The symbol that we will use for the XOR gate is very similar to the one we use for the OR gate, except that we put a circle around the + sign, like so: X = A⊕B.
Now that we have a set of gates for effecting logical operations, we are able to combine them into circuits to achieve specific goals of combinational logic; we can use this to create specialised truth tables for specific needs or requirements. In effect we have already created one combinational logic circuit, when we coupled together two XOR gates to generate the exclusive-OR output as defined by our second XOR definition.
To demonstrate this principle, imagine a classroom; in this classroom we have a door and some windows, with seating for students. On the door is a 'door open' sensor, a simple contact switch that detects the open or closed state of the door. There are also motion detectors to detect any motion within the room, and finally there are 'glass break' detectors on the windows. Connected to these sensors is an alarm system that we need to set up with some logic to decide when to sound an alarm. We are going to put together the states of these sensors in the following sentence:
"I want the alarm to sound if motion is detected, or if the door
is opened, or if glass is broken."
At first glance we might consider an OR gate; however, this would result in the alarm sounding throughout the day as people move into and about the classroom. To resolve this issue most alarm systems have what is known as an 'armed signal', which allows the alarm to sound only when the system is armed. We now have a new way of saying our sentence: I want the alarm to go off when the system is armed and either motion is detected, or the door is opened, or a window is broken. To construct this circuit we will need to add an AND gate to the logic of our previous design.
Our resulting logic is: A·(D+M+G)
We might also want our alarm to sound whenever a window is broken, even if the system is not armed.
Alarm = A·(D+M)+G
For our combinational logic circuit design we must consider the order of precedence of our logical operations: start with the logic within the brackets, then deal with the products, and after that the sums. First parentheses, then products, then sums; or, in logical parlance: first parentheses, then ANDs, then ORs.
Next we are going to derive the truth table which represents our alarm system circuit, as designed in the previous section. Throughout this process we will once again be generating the boolean algebraic expression that describes the same design. Upon finishing we will have three different ways to represent our final design: the circuit, the boolean algebraic expression and the truth table.
A | D | M | G | D+M | A·(D+M) | A·(D+M)+G |
---|---|---|---|---|---|---|
0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0 | 0 | 1 | 0 | 0 | 1 |
0 | 0 | 1 | 0 | 1 | 0 | 0 |
0 | 0 | 1 | 1 | 1 | 0 | 1 |
0 | 1 | 0 | 0 | 1 | 0 | 0 |
0 | 1 | 0 | 1 | 1 | 0 | 1 |
0 | 1 | 1 | 0 | 1 | 0 | 0 |
0 | 1 | 1 | 1 | 1 | 0 | 1 |
1 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 0 | 0 | 1 | 0 | 0 | 1 |
1 | 0 | 1 | 0 | 1 | 1 | 1 |
1 | 0 | 1 | 1 | 1 | 1 | 1 |
1 | 1 | 0 | 0 | 1 | 1 | 1 |
1 | 1 | 0 | 1 | 1 | 1 | 1 |
1 | 1 | 1 | 0 | 1 | 1 | 1 |
1 | 1 | 1 | 1 | 1 | 1 | 1 |
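The expression Alarm = A·(D+M)+G can be enumerated programmatically; this sketch generates all sixteen input rows, using `&` for AND and `|` for OR on 1-bit values:

```javascript
// Alarm = A·(D+M)+G: armed AND (door OR motion), OR glass break.
const alarm = (A, D, M, G) => (A & (D | M)) | G;

const rows = [];
for (const A of [0, 1])
  for (const D of [0, 1])
    for (const M of [0, 1])
      for (const G of [0, 1])
        rows.push([A, D, M, G, alarm(A, D, M, G)]);

console.log(rows.length); // 16
```

Note that with G = 1 the alarm sounds even when A = 0, matching the sentence about broken windows above.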
A | B | A+B | ¬(A+B) |
---|---|---|---|
0 | 0 | 0 | 1 |
0 | 1 | 1 | 0 |
1 | 0 | 1 | 0 |
1 | 1 | 1 | 0 |
So far we have not needed to use an inverter in any of our circuit logic. To represent an inverted value in a boolean algebraic expression we put a bar above that value; here we will write ¬A for the inverse of A. In a way the inverter acts like parentheses: anything beneath the bar needs to have its output computed before it is flipped, or inverted. We might, for example, add an inverter to the output of an OR gate, the expression for which would be: ¬(A+B).
We have just seen how we can derive a schematic, a truth table and a boolean expression from the same design. To consolidate this process we are going to do it a few times, so as to gain some familiarity.
We can use the same technique here that we use when evaluating expressions in maths. Some may already be familiar with 'PEMDAS', an acronym for the order of operations, or precedence, when evaluating algebraic expressions; it reminds us of the order in which the parts of an expression are to be solved if we are to work through them correctly:
Parentheses, Exponents, Multiplication, Division, Addition, Subtraction.
Though we don't have quite so many operations in our boolean expressions, we still have some to consider. We do the parentheses first; an interesting thing about the negation is that everything underneath the bar is considered to be in parentheses, so ¬(A+B) means the whole sum (A+B) is evaluated before being inverted. Then, after the inverses, we do the ANDs, which are like multiplication, and then the ORs, like addition.
We might liken this to a simplified algebra; however, ordinary algebra is usually performed in base ten upon an infinite set of numbers, whilst we are dealing with a rather more restrained set, base two, in which we use only the digits 1 and 0.
Consider the expression: X = ¬A·¬C+B·C
If we are to design a schematic from this expression, firstly we should extract our variables A, B and C, placing them to the left in a column. We can then begin by expressing the product of ¬A and ¬C as two inverted values, which we can then AND together. We also need to split off the C signal before it is inverted, in order to AND it with the B value; finally the two products pass through an OR to give us our final result X.
A | B | C | ¬A | ¬C | ¬A·¬C | B·C | ¬A·¬C+B·C |
---|---|---|---|---|---|---|---|
0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 |
0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 |
1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
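A small sketch checking the expression against its table; we invert a bit with `1 - x`:

```javascript
const not = (x) => 1 - x; // invert a 1-bit value

// X = ¬A·¬C + B·C
const f = (A, B, C) => (not(A) & not(C)) | (B & C);

console.log(f(0, 0, 0)); // 1  (the ¬A·¬C term)
console.log(f(0, 1, 1)); // 1  (the B·C term)
console.log(f(1, 0, 0)); // 0
```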
Often, when considering logic in an electrical circuit, positive voltage is taken to be a logic 1 and negative voltage, or its absence, a logic 0. In some circuits, however, it is the low level that does the work; this brings us to the notion of an 'active low' circuit or output. What this means is that the active state is a logic 0, whereas the inactive state is a logic 1, even though we intuitively tend to think of 1 as the active agent.
A | B | A·B | ¬(A·B) |
---|---|---|---|
0 | 0 | 0 | 1 |
0 | 1 | 0 | 1 |
1 | 0 | 0 | 1 |
1 | 1 | 1 | 0 |
If we consider the AND gate, there also exists an active low version of this gate, which in effect translates to a negated AND; the truth table above denotes its behaviour in its final column.
This gate is so common in its use that it has its own name: the NAND gate. The next diagram shows what this circuit looks like.
We have previously seen that it is the circle in the diagram of the inverter that denotes the inversion; with this in mind it is easy to understand the more common abbreviated diagram of the NAND.
A | B | A·¬B | ¬(A·¬B) |
---|---|---|---|
0 | 0 | 0 | 1 |
0 | 1 | 0 | 1 |
1 | 0 | 1 | 0 |
1 | 1 | 0 | 1 |
A NAND gate can be used to identify unique conditions. Suppose that we would like a circuit that outputs a logic 1 when A is 1 and B is not 1, and in no other case. This can be very simply depicted in a schematic by adding another circle, for the inversion of B.
Supposing a situation in which we have four inputs and we want to detect one specific unique condition, for example: X = A·B·C·D. Such a circuit is known as a decoder, because it decodes the required unique condition.
A | B | ¬A | ¬A·B | A+¬A·B | A+B |
---|---|---|---|---|---|
0 | 0 | 1 | 0 | 0 | 0 |
0 | 1 | 1 | 1 | 1 | 1 |
1 | 0 | 0 | 0 | 1 | 1 |
1 | 1 | 0 | 0 | 1 | 1 |
There are some useful tools for the simplification of boolean algebraic expressions. Take this example of how boolean expressions can be manipulated and reformatted whilst still essentially saying the same thing.
Consider: X = A+¬A·B, and note that A+¬A·B = A+B; our expression has exactly the same output as an OR gate.
As an aside, this demonstrates nicely how much simpler it is to prove something with boolean algebra than algebra using the real numbers.
Although the outputs of these two different circuits are the same, the latter is far simpler and much faster than the former. The first circuit's gates have five inputs and three outputs in total, where the second has only two inputs and one output, and every gate incurs a slight delay, so the second circuit is faster. As well as this, every gate takes some power to run; that energy is emitted as heat whilst the circuit is working, so the first circuit requires more power and generates more heat. Thirdly, the first circuit requires more transistors than the second, and as such more physical space. Finally, due to all of this, the simpler circuit also costs less.
Our optimal circuit:
It is worth pointing out here that this phenomenon also translates into code!
gate | with self | result | with inverse | result | with 1 | result | with 0 | result |
---|---|---|---|---|---|---|---|---|
AND | A·A | A | A·¬A | 0 | A·1 | A | A·0 | 0 |
OR | A+A | A | A+¬A | 1 | A+1 | 1 | A+0 | A |
XOR | A⊕A | 0 | A⊕¬A | 1 | A⊕1 | ¬A | A⊕0 | A |
You are most likely already familiar with the notion of a mathematical identity: anything multiplied by zero equals zero, anything multiplied by one equals itself, anything plus zero equals itself. All of these are identities, and in boolean algebra identities work phenomenally well; whenever the values 0 or 1 appear, the identities can be very useful.
We are going to examine, for AND, OR and XOR, the result of each operation upon a value combined with itself, with its inverse, with one, and finally with zero.
Similar to multiplication, anything that is AND'd with one equals itself, and anything AND'd with zero equals zero.
A | A·A | |
---|---|---|
0 | 0·0 | 0 |
1 | 1·1 | 1 |
A | A·¬A |
---|---|---|
0 | 0·1 | 0 |
1 | 1·0 | 0 |
A | A·1 | |
---|---|---|
0 | 0·1 | 0 |
1 | 1·1 | 1 |
A | A·0 | |
---|---|---|
0 | 0·0 | 0 |
1 | 1·0 | 0 |
A | A+A | |
---|---|---|
0 | 0+0 | 0 |
1 | 1+1 | 1 |
A | A+¬A |
---|---|---|
0 | 0+1 | 1 |
1 | 1+0 | 1 |
A | A+1 | |
---|---|---|
0 | 0+1 | 1 |
1 | 1+1 | 1 |
A | A+0 | |
---|---|---|
0 | 0+0 | 0 |
1 | 1+0 | 1 |
A | A⊕A | A⊕¬A | A⊕1 | A⊕0 |
---|---|---|---|---|
0 | 0 | 1 | 1 | 0 |
1 | 0 | 1 | 0 | 1 |
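All of the identities above can be checked exhaustively; this sketch uses the bitwise operators `&`, `|` and `^` on 1-bit values:

```javascript
for (const A of [0, 1]) {
  const nA = 1 - A; // ¬A
  console.assert((A & A) === A);   // A·A = A
  console.assert((A & nA) === 0);  // A·¬A = 0
  console.assert((A & 1) === A);   // A·1 = A
  console.assert((A & 0) === 0);   // A·0 = 0
  console.assert((A | A) === A);   // A+A = A
  console.assert((A | nA) === 1);  // A+¬A = 1
  console.assert((A | 1) === 1);   // A+1 = 1
  console.assert((A | 0) === A);   // A+0 = A
  console.assert((A ^ A) === 0);   // A⊕A = 0
  console.assert((A ^ nA) === 1);  // A⊕¬A = 1
  console.assert((A ^ 1) === nA);  // A⊕1 = ¬A
  console.assert((A ^ 0) === A);   // A⊕0 = A
}
```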
When using assembly language, resetting a register to 0 with MOV is a two-step process: first fetch the MOV instruction, then fetch the immediate value #0.

MOV A,#0

The logical XOR instruction is faster when initialising to zero, as it has only one step: XOR A with A, giving zero.

XOR A,A
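The same identity is visible in a higher-level language; any value XOR'd with itself is zero:

```javascript
let a = 0b10110101;
a ^= a; // A⊕A = 0, whatever value a held before
console.log(a); // 0
```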
We have just seen that any time something is combined with itself, it simplifies; combined with its inverse, it simplifies; combined with a zero or a one, it simplifies. This seems a little strange: why would we ever have an element combined with itself, its inverse, or a constant?
It turns out that these simplifications are not obvious at first glance; we are going to apply some principles to boolean expressions, during which these identities will emerge as red flags showing that simplification is possible.
The commutative law:

A+B = B+A
A·B = B·A
A | B | A+B | B+A |
---|---|---|---|
0 | 0 | 0 | 0 |
0 | 1 | 1 | 1 |
1 | 0 | 1 | 1 |
1 | 1 | 1 | 1 |
The associative law:

A+(B+C) = (A+B)+C
A·(B·C) = (A·B)·C
A | B | C | B+C | A·(B+C) | A·B | A·C | A·B+A·C |
---|---|---|---|---|---|---|---|
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 |
1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
The table above demonstrates the distributive law, A·(B+C) = A·B+A·C. Applying these laws together with our boolean identities, we can often reduce an expression considerably.
A common mistake when first starting out simplifying boolean expressions is to assume that ¬(A·B) = ¬A·¬B, but it does not; the inversion does not distribute like that.
¬(A·B) ≠ ¬A·¬B
Our next truth table looks very much like that of an OR gate, except that its output is inverted.
A | B | ¬(A·B) |
---|---|---|
0 | 0 | 1 |
0 | 1 | 1 |
1 | 0 | 1 |
1 | 1 | 0 |
A | B | ¬A | ¬B | ¬A+¬B |
---|---|---|---|---|
0 | 0 | 1 | 1 | 1 |
0 | 1 | 1 | 0 | 1 |
1 | 0 | 0 | 1 | 1 |
1 | 1 | 0 | 0 | 0 |
We have just demonstrated De Morgan's theorem, which states: ¬(A·B) = ¬A+¬B.
Does the same thing happen to a NOR gate?
A | B | ¬(A+B) |
---|---|---|
0 | 0 | 1 |
0 | 1 | 0 |
1 | 0 | 0 |
1 | 1 | 0 |
A | B | ¬A | ¬B | ¬A·¬B |
---|---|---|---|---|
0 | 0 | 1 | 1 | 1 |
0 | 1 | 1 | 0 | 0 |
1 | 0 | 0 | 1 | 0 |
1 | 1 | 0 | 0 | 0 |
Yes it does!
The same is true for physical logic gates in a circuit.
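Both forms of De Morgan's theorem can be checked exhaustively over the four input combinations:

```javascript
const not = (x) => 1 - x;

for (const A of [0, 1]) {
  for (const B of [0, 1]) {
    // ¬(A·B) = ¬A+¬B  (the NAND case)
    console.assert(not(A & B) === (not(A) | not(B)));
    // ¬(A+B) = ¬A·¬B  (the NOR case)
    console.assert(not(A | B) === (not(A) & not(B)));
  }
}
```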
All of these principles (the commutative law, the associative law and the distributive law, in both forms) in combination with De Morgan's theorem, also in both forms, allow us to manipulate expressions until one of the boolean identities appears and we can simplify. Those three red flags are: something combined with itself, something combined with its inverse, and something combined with a constant.
Let's try a couple of examples of the application of these principles.
Proof:
A+¬A·B = A+B
Considering that: A+B·C = (A+B)·(A+C)
A+¬A·B = (A+¬A)·(A+B) = 1·(A+B) = A+B
Our next expression, at the outset, has no obvious red flags; we need to shift it around a little to see what might emerge. However, despite appearances, we can't use the distributive law directly: the bar must be dealt with first.
¬(A+B+C)·B
In order to use the distributive law we need to do something about the inverter that follows our OR gate. Applying De Morgan's law we can move it to the inputs, if we switch the gate over to an AND while inverting all of its inputs.
(¬A·¬B·¬C)·B
The associative law says that the parentheses can be moved back and forth and the AND can be combined across multiple elements in any order we would like, and the commutative law says that we can swap elements back and forth.
So we can swap ¬B and B without altering the result; then, using the associative law, we can associate those two together. Now we have a red flag!
¬A·¬C·(B·¬B)
¬A·¬C·0
0
¬(A+B+C)·B = 0
(A+B)·(B+¬A)
Remembering the acronym FOIL: we AND the First terms together, OR'd with the Outside terms AND'd together, OR'd with the Inside terms AND'd together, OR'd with the Last terms AND'd together.
A·B + A·¬A + B·B + B·¬A

There are a couple of ways that we can progress from here, and this is one of the nice things about boolean algebra: so long as we follow the properties correctly, it does not make any difference which route we take; the result is always the same.
A·B+B+B·¬A = B+(A·B+B·¬A)
B+(A·B+B·¬A)
B+B·(A+¬A)
B+B·1
B+B
B
A·B+B+B·¬A = B+B·A+B·¬A
B+B·A+B·¬A
B+B·A
B
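Reading the inverted term explicitly, the expression (A+B)·(B+¬A) can be checked exhaustively; as both routes showed, it equals B for every input:

```javascript
const not = (x) => 1 - x;
const f = (A, B) => (A | B) & (B | not(A)); // (A+B)·(B+¬A)

for (const A of [0, 1]) {
  for (const B of [0, 1]) {
    console.assert(f(A, B) === B); // the whole product reduces to B
  }
}
```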
A·B·¬C+B·¬C·D+¬(A+¬B+C)
A·B·¬C+B·¬C·D+¬A·B·¬C
A·B·¬C+¬A·B·¬C+B·¬C·D
B·¬C·(A+D+¬A)
B·¬C·((A+¬A)+D)
B·¬C·(1+D)
B·¬C·1
B·¬C
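Taking the expression as A·B·¬C + B·¬C·D + ¬(A+¬B+C), with the inversions written explicitly, the reduction to B·¬C can be verified over all sixteen inputs:

```javascript
const not = (x) => 1 - x;

// A·B·¬C + B·¬C·D + ¬(A+¬B+C)
const f = (A, B, C, D) =>
  (A & B & not(C)) | (B & not(C) & D) | not(A | not(B) | C);

for (const A of [0, 1]) for (const B of [0, 1])
  for (const C of [0, 1]) for (const D of [0, 1])
    console.assert(f(A, B, C, D) === (B & not(C))); // reduces to B·¬C
```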
It is worth considering the effects of this simplification: our result needs only one two-input AND gate and one inverter, whereas our initial expression required three AND gates, each with three inputs, two inverters and one three-input OR gate. The result is far more efficient!
Some more examples:
A·B·C+A·B·¬C+¬A·B·C+¬A·B·¬C
B·(A·C+A·¬C+¬A·C+¬A·¬C)
B·(C·(A+¬A)+¬C·(A+¬A))
B·(C·1+¬C·1)
B·(C+¬C)
B·1
B
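With the inversions written explicitly, the whole sum of four products can be checked to reduce to B alone:

```javascript
const not = (x) => 1 - x;

// A·B·C + A·B·¬C + ¬A·B·C + ¬A·B·¬C: every combination of A and C, AND'd with B
const f = (A, B, C) =>
  (A & B & C) | (A & B & not(C)) | (not(A) & B & C) | (not(A) & B & not(C));

for (const A of [0, 1]) for (const B of [0, 1]) for (const C of [0, 1])
  console.assert(f(A, B, C) === B); // the whole expression reduces to B
```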
Examining the factored form above, we come to recognise the pattern (A·C+A·¬C+¬A·C+¬A·¬C): all possible combinations of A and C appear, so the bracketed sum can only ever be a logic 1.
A·C+A·D+A·¬C·¬D+A·B
A·(C+D+¬C·¬D+B)
There are two different ways to proceed from here:
Considering that A+¬A·B = A+B:
A·(C+D+¬C·¬D+B)
A·(C+D+¬C+B)
A·(C+¬C+D+B)
A·(1+D+B)
A
Considering De Morgan's law, ¬(A+B) = ¬A·¬B:
A·(C+D+¬C·¬D+B)
A·(C+D+¬(C+D)+B)
A·((C+D)+¬(C+D)+B)
A·(1+B)
A
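Reading the lost inversions as ¬C·¬D, the starting expression can be checked to reduce to A, as both routes concluded:

```javascript
const not = (x) => 1 - x;

// A·C + A·D + A·¬C·¬D + A·B
const f = (A, B, C, D) =>
  (A & C) | (A & D) | (A & not(C) & not(D)) | (A & B);

for (const A of [0, 1]) for (const B of [0, 1])
  for (const C of [0, 1]) for (const D of [0, 1])
    console.assert(f(A, B, C, D) === A); // reduces to A alone
```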
Up till now we have been looking at logic that has no memory; the next step is for us to use memory within our circuit. The use of memory in our circuit results in the creation of a state machine.
A state machine is essentially logic with memory: a machine that can not only process things but can remember where it has been; it has a history of the states that it has gone through.
Whenever we write code we are using many different state machines, the simplest of which is a for loop: as we iterate over the loop, the state is often remembered by the variable i. Within a CPU there is also a memory location whose purpose is to remember what the current instruction is at any moment during the execution of a program. Depending on the architecture it may have a different name (some architectures call it a program counter, others an instruction pointer) but it is the same thing: which instruction are we currently executing? That, together with all the values in our program, is state; our code is really a state machine.
The state machine that we will be designing is far simpler than this, but the principle is exactly the same. We will have only very small variables, maybe two or three bits. If we have only two bits, how many possible combinations of 1's and 0's are there? There are four, so a system with a two-bit variable can take on four states.
The first thing we will do is make a system-level design, a block diagram for our system: what are the inputs to our system, and what are the outputs that we expect from it?
Once we have a good idea of the system level we can make a 'state diagram'. A state diagram is what is known as a directed graph: we start at an initial state and follow lines known as 'edges', which take us to different states, known as 'vertices'. We have a web of vertices connected by edges.
The third step is a simple one, almost trivial, but it has a great effect upon how our logic turns out: we assign numbers to our states. Supposing we have a state machine with five states; to design logic for it, we have to give each of these states a number. We start counting at zero and count through to four, and by assigning numbers to states we give values to our logic. The assignment of these numbers is arbitrary, yet it does affect the final form of our logic.
Once we have finished assigning numbers we use them to create truth tables: using the numbers assigned to the different states, and the way they are linked in the state diagram, we build up our truth tables.
We then use these truth tables to design the logic in an efficient way, using Karnaugh maps.
Imagine that we have an intersection with a very busy north-south road as the main road, but there is also a side road, and we have to let those cars out of the side street. That side street is the east-west path. We will leave our traffic light green, letting the north-south traffic flow, until we detect that somebody has appeared on the east-west road.
There are several ways that we can detect traffic on this route. One of the older methods is to place an inductance loop underneath the road, which uses magnetic induction to determine when a large piece of metal is above it. This method is somewhat dated and offers no ability to estimate how many cars are waiting, nor to see whether a car is approaching. More modern systems use cameras, mounted on booms above the traffic lights, which use edge detection and other AI techniques to detect when a car is present, offering a far more detailed set of outputs and consequent logical uses. Whichever type of detector we use, we will have something on the east-west path that identifies when a car is approaching.
This detector is not going to be shown as an input in our system; that would make the diagram very complicated, and for our first design we want to keep things simple.
When a car is detected on the E.W approach, a timer starts and the N.S lights turn yellow. Once the timer has finished, the E.W lights switch to green, the N.S to red, and the timer is started again. Once that timer is done, the E.W path goes from green to yellow, and again the timer starts. Once this timer expires, E.W turns to red and N.S to green, putting us back in the initial state of the system.
To start our state diagram, which will consist of a graph of vertices and edges, we first need to consider our initial starting state. One way to envisage this is to imagine that the power is cut and then reconnected: in which state do we want our system to be? In reality this type of system would usually start in a state of flashing red in both directions, to mark that it has just started up, but we will keep it simple and start with the N.S route green and the E.W route red.
This starting vertex contains the output of our system when it is in that state: both the N.S output value and the E.W output value from our system diagram appear within the vertex.
Drawing our graph, we see that each vertex has the same number of possible paths to the next; this is called the 'out-degree' in graph theory. In this case it is two, which corresponds to the number of possible states of our timer.
Our final state, with the greatest number, tells us how many bits we are going to need: as many bits as are required to write that number in binary, in this case only two, since 3 = 11 in binary digits.
Naming the least significant bit S₀ and the most significant bit S₁, we describe the output of the state machine in our first truth table.
S₁ | S₀ | R/n.s | Y/n.s | G/n.s | R/e.w | Y/e.w | G/e.w |
---|---|---|---|---|---|---|---|
0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
light | logic |
---|---|
R/ns | S₁ |
Y/ns | ¬S₁·S₀
G/ns | ¬S₁·¬S₀
R/ew | ¬S₁
Y/ew | S₁·S₀
G/ew | S₁·¬S₀
From this truth table we have derived the logic required for each individual light, as shown above.
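A sketch of the per-light logic in code, with the inverted state bits written as ¬S₁ and ¬S₀; the field names are our own:

```javascript
// Drive the six lamps from the two state bits S1 (msb) and S0 (lsb).
function lights(S1, S0) {
  const n1 = 1 - S1, n0 = 1 - S0; // ¬S1, ¬S0
  return {
    Rns: S1,  Yns: n1 & S0, Gns: n1 & n0,
    Rew: n1,  Yew: S1 & S0, Gew: S1 & n0,
  };
}

console.log(lights(0, 0)); // state 0: N.S green, E.W red
```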
S₁ | S₀ | T | S₁′ | S₀′ |
---|---|---|---|---|
0 | 0 | 0 | 0 | 0 |
0 | 0 | 1 | 0 | 1 |
0 | 1 | 0 | 1 | 0 |
0 | 1 | 1 | 0 | 1 |
1 | 0 | 0 | 1 | 1 |
1 | 0 | 1 | 1 | 0 |
1 | 1 | 0 | 0 | 0 |
1 | 1 | 1 | 1 | 1 |
This truth table describes the next state for every edge. It depends upon three bits: S₁ and S₀ define which state we are currently in, and the state of the timer T provides the third bit. Observing our previously constructed state diagram, if we count the number of edges in the graph we find eight, the same as our number of rows.
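The next-state table can be sketched directly as a lookup, indexed by the three bits:

```javascript
// Index by the three bits (S1, S0, T) packed as S1*4 + S0*2 + T;
// each entry is the next state [S1', S0'] from the table above.
const nextState = [
  [0, 0], // 000
  [0, 1], // 001
  [1, 0], // 010
  [0, 1], // 011
  [1, 1], // 100
  [1, 0], // 101
  [0, 0], // 110
  [1, 1], // 111
];

const step = (S1, S0, T) => nextState[S1 * 4 + S0 * 2 + T];

console.log(step(0, 1, 0)); // [ 1, 0 ]  -  state 1 advances to state 2
```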
Our two truth tables have now effectively digitised our entire state diagram; everything but the reset behaviour is recorded and contained in these two tables.
One of the keys to this system is that it has a clock. For storage we will use flip-flops in the schematic; the clock keeps ticking, and every time it does, the system checks whether it should go to the next state.
Our memory of the values stored in S₁ and S₀ is maintained within the flip-flops. In the schematic we retrieve the current state from the Q outputs, while the D inputs receive the state that will be stored next. This is the heart of our system: the memory for S₁, S₀ and the next-state values S₁′, S₀′.
This is a Moore machine, which is to say that the output relies only upon the state.
We are going to require a Karnaugh map for both S₁′ and S₀′; these maps are laid out with their rows and columns in Gray-code order.
Once we have these maps, we can draw some rectangles and make 'sum of products' expressions from them.
S₁′ (rows S₁S₀, columns T)
S₁S₀ | T=0 | T=1 |
---|---|---|
00 | 0 | 0 |
01 | 1 | 0 |
11 | 0 | 1 |
10 | 1 | 1 |
S₀′ (rows S₁S₀, columns T)
S₁S₀ | T=0 | T=1 |
---|---|---|
00 | 0 | 1 |
01 | 0 | 1 |
11 | 0 | 1 |
10 | 1 | 0 |
Both our maps have three rectangles, and in each map one rectangle has only one cell, which means the product generated for that cell contains all three inputs. The other two rectangles in each map are of size two; because each encompasses two cells, one input drops out of the product for those rectangles.
Our derived expressions are based upon three inputs: T and the current state bits S₁ and S₀. We can now add them to the schematic.
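One possible reading of the rectangles (our own derivation, so treat it as a sketch) gives S₁′ = S₁·¬S₀ + S₁·T + ¬S₁·S₀·¬T and S₀′ = ¬S₁·T + S₀·T + S₁·¬S₀·¬T; both can be checked against the next-state table:

```javascript
const not = (x) => 1 - x;

// Sum-of-products read off the two Karnaugh maps (our reading)
const s1next = (S1, S0, T) =>
  (S1 & not(S0)) | (S1 & T) | (not(S1) & S0 & not(T));
const s0next = (S1, S0, T) =>
  (not(S1) & T) | (S0 & T) | (S1 & not(S0) & not(T));

// Next-state table, rows ordered S1 S0 T = 000 ... 111
const table = [[0,0],[0,1],[1,0],[0,1],[1,1],[1,0],[0,0],[1,1]];

table.forEach(([a, b], i) => {
  const S1 = (i >> 2) & 1, S0 = (i >> 1) & 1, T = i & 1;
  console.assert(s1next(S1, S0, T) === a && s0next(S1, S0, T) === b);
});
```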
We will now write a finite state machine in code, obtaining the required behaviour within a computer program. To this end we will express a coin-toss machine. In the following table, → marks the starting state and * marks the final (accepting) state or states.
(H∨T)*TT
Current | H | T |
---|---|---|
→a | a | b |
b | a | c |
*c | a | c |
We need to extract a few formal characteristics. We have Q, our set of states. We have an initial state, of which there can be only one. We also need to represent the final, or accepting, states, of which there are one or more, though always a subset of Q. And we have our input alphabet.
Q = {a, b, c}
Initial State = a
Final States ⊆ Q = {c}
Input Alphabet = {H, T}
Finally we have a transition function δ, which takes the current state q and an element i from the input alphabet.
δ(q,i) = q'
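Before the full program, δ for the coin-toss machine can be sketched as a simple lookup taken from its transition table:

```javascript
// δ for the coin-toss machine, taken from its transition table
const delta = {
  a: { H: "a", T: "b" },
  b: { H: "a", T: "c" },
  c: { H: "a", T: "c" },
};

const next = (q, i) => delta[q][i];

// Run the word "HTT" from the initial state a:
let q = "a";
for (const i of "HTT") q = next(q, i);
console.log(q); // "c", an accepting state
```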
How are we going to represent the states? We need a way to identify the initial state, we need to identify the subset of states that are our final states, and we need to create the transition function defined by our transition table. There are a couple of ways to do this. We could simply number the states and use an int to represent each one, but then we would have to write functions that check the bounds of those ints to verify the states exist, and it is not very descriptive. We could instead define the states as an object, a set of key-value pairs, but this would become a little complicated; we will opt for something in between.
We will use an array to represent both the states and the accepting states.
```javascript
const state = ["a", "b", "c"]; // indices: 0, 1, 2
const acceptingState = [state[2]];
```
Our input alphabet can be formed with a simple string.
```javascript
const alpha = "HT";
```
```html
<!DOCTYPE html>
<html>
<body>
<h2>JS FSM Detecting RegEx (H v T)*TT</h2>
<script>
const State = [
  "Ends with heads",
  "Ends with 1 tails",
  "Ends with 2 tails",
  "Error"];
const initialState = State[0];
const acceptingStates = [State[2]];
const inputAlphabet = "HT";
const transitionTable = {
  0: [State[0], State[1]],
  1: [State[0], State[2]],
  2: [State[0], State[2]],
  3: [State[3], State[3]]
};
const errorState = State[3];

function returnNextState(state, input) {
  return transitionTable[State.indexOf(state)][inputAlphabet.indexOf(input)];
}

let currentState = initialState;
const inputSequence = "HTTHHTTTTH";
for (let i = 0; i < inputSequence.length; i++) {
  let currentInput = inputSequence.charAt(i);
  document.write("<br>Current state: <strong>" + currentState +
    "</strong> - input: <strong>" + currentInput + "</strong>");
  if (!inputAlphabet.includes(currentInput)) {
    currentState = errorState;
  } else {
    currentState = returnNextState(currentState, currentInput);
  }
  document.write(" --> Next State: <strong>" + currentState + "</strong>");
  if (acceptingStates.includes(currentState)) {
    document.write(" (accepting state)");
  }
}
document.write("<br>Ending state: " + currentState);
if (acceptingStates.includes(currentState)) {
  document.write(" (accepting state)");
}
</script>
</body>
</html>
```
In order to catch any errors we add an error state to our array of states and set the current state to it whenever an invalid input is encountered.
Our state machine from the previous section is generic; to demonstrate this, we will transform it into another machine that recognises a different regular expression.
To summarise, we need to represent the same formal elements in our code: the set of states, the initial state, the accepting states, the input alphabet and the transition function. We will adapt our code to fit the following machine:
01*(1∨0), which matches strings such as: 011, 010, 0111, 0110, 01111, 01110, 011111, 011110
Current | 0 | 1 |
---|---|---|
→q0 | q1 | q4 |
q1 | q2 | q3 |
*q2 | q4 | q4 |
*q3 | q2 | q3 |
q4 | q4 | q4 |
This time we need five states, where our previous machine required only three. We will use the transition table of this state machine to define our transition function.
By changing the state array, the accepting states and the transition table, we can reuse our state machine code in a generic way.
```html
<!DOCTYPE html>
<html>
<body>
<h2>JS FSM Detecting RegEx 01*(0 v 1)</h2>
<script>
const State = [
  "No characters received", // index 0
  "Rcvd initial 0",         // index 1
  "Rcvd 01*0",              // index 2
  "Rcvd 01*1",              // index 3
  "Doesn't match RegEx",    // index 4
  "Error"];                 // index 5
const initialState = State[0];
const acceptingStates = [State[2], State[3]];
const errorState = State[5];
const inputAlphabet = "01";
const transitionTable = {
  0: [State[1], State[4]],
  1: [State[2], State[3]],
  2: [State[4], State[4]],
  3: [State[2], State[3]],
  4: [State[4], State[4]],
  5: [State[5], State[5]]
};

function returnNextState(state, input) {
  return transitionTable[State.indexOf(state)][inputAlphabet.indexOf(input)];
}

let currentState = initialState;
const inputSequence = "0111111111110";
for (let i = 0; i < inputSequence.length; i++) {
  let currentInput = inputSequence.charAt(i);
  document.write("<br>Current state: <strong>" + currentState +
    "</strong> - input: <strong>" + currentInput + "</strong>");
  if (!inputAlphabet.includes(currentInput)) {
    currentState = errorState;
  } else {
    currentState = returnNextState(currentState, currentInput);
  }
  document.write(" --> Next State: <strong>" + currentState + "</strong>");
  if (acceptingStates.includes(currentState)) {
    document.write(" (accepting state)");
  }
}
document.write("<br>Ending state: " + currentState);
if (acceptingStates.includes(currentState)) {
  document.write(" (accepting state)");
}
</script>
</body>
</html>
```