Changing circuitry of analog computer *During* simulations?

In summary, analog computers can change their circuitry and component values during simulations, making them more versatile and potentially advantageous for certain problems. This can be achieved through manual rewiring or with the help of software tools. However, care must be taken to avoid self-excitation and undefined conditions during these changes. An example of this capability is the use of multiturn potentiometers controlled by servomotors in early hybrid computers. This feature may be useful for real-time simulations of complex biological organs.
  • #36
Kirana Kumara P said:
Still, we can expect the analog computer to be faster than a digital computer only if changing the connections (or switching) does not take a significant amount of time.
False. For the same bandwidth technology, a digital processor will produce results more than 100 times faster than any analogue computer, with or without changes of parameters. A digital signal never has to settle to a fixed value, it need only be clear of the transition threshold. An analogue circuit requires more time and a lower noise environment. It is unlikely that the errors of an analogue computer at speed will be much less than 0.4%, equivalent to 8 bits. We can do a great many 16 bit digital computations, (with less than 0.004% error), in the time it takes one analogue signal to settle.

Kirana Kumara P said:
One of the methods of solving the above problem could be to make use of the finite element method (although it may be possible to solve the differential equations directly by building a suitable analog computer, without making use of the finite element method).
A finite array of analogue computer elements is still FEM.

Kirana Kumara P said:
Of course, the answer to the above question may be problem dependent.
You are correct, there are many problem dependent possibilities. Without specifications, or a set of equations, anything is possible. It seems like you are trying faithfully to maintain a belief in analogue computers, while all the evidence suggests they have been extinct for quite some time.
I no longer expect you to present specifications and equations, as to do that would threaten your faithfully held belief.
 
  • #37
For the most part, the passive components in an analog simulation would remain constant, but hypothetically there can be nonlinear components, components with initial conditions, or components whose values depend on another term (feedback). For example, a light bulb changes its resistance as it heats up, a capacitor can carry an initial DC charge, and cell energy uptake depends on available oxygen and other factors. Active nonlinear and time-dependent circuits can also be developed, but I have no idea how one would implement that for a biological system. (A toy digital rendering of the light-bulb example is sketched at the end of this post.)

As far as I know, the analog computer is best at solving partial differential equations, not necessarily at simulating systems. If you can write a differential equation for this system, then you need to look at each term for nonlinearity and time dependence and endeavor to build a circuit that emulates that function. You will have to limit what you simulate; biological systems have many feedback terms, and depending on the result and resolution you want, it could be overwhelming.
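
As a toy illustration of the light-bulb example above, here is a minimal digital rendering of a state-dependent resistance. Every constant here is invented for illustration; it is a sketch of the idea, not a physical model.

```python
# Toy model of the light-bulb example: filament resistance rises with
# temperature, and temperature depends on the power dissipated (feedback).
# All constants are invented for illustration.
V = 12.0                  # supply voltage
R0, alpha = 10.0, 0.05    # cold resistance and temperature coefficient
C_th, k_loss = 2.0, 0.5   # thermal mass and cooling rate
T, dt = 0.0, 0.001        # temperature rise above ambient, time step (s)

for _ in range(20000):               # 20 simulated seconds
    R = R0 * (1.0 + alpha * T)       # nonlinear, state-dependent "component"
    P = V * V / R                    # electrical power heating the filament
    T += dt * (P - k_loss * T) / C_th
print(R, T)   # settles near R ~ 18 ohms, T ~ 16 for these constants
```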
 
  • #38
anorlunda said:
We cannot possibly answer that without knowing how long a time you consider significant. Let's say that a relay switches in 10 milliseconds. Is that significant?

Your description of the problem does not describe any dynamics at all. It is the dynamics (i.e. range of interest in the frequency domain) that determine the needs of switching. Indeed, your description sounds like the problem may be static, with no dynamics at all.

If you have a nonlinear 3D problem, the required granularity is also a critical parameter. If you represented it with finite elements, how many elements would you need?

The quality of answers you receive here depends strongly on the quality of the question description. You are asking for design advice. In engineering, requirements specifications always precede design.

Ten milliseconds is insignificant for me (and hence okay). This is because I have about 30 milliseconds to complete one individual simulation and to change the connections/parameters once. If changing the connections/parameters can be completed in 10 milliseconds, I still have 20 milliseconds for the actual (individual) simulation. Since the very idea of going for an analog computer is to make the simulations run faster, it should be possible for the individual simulations to complete within the remaining 20 milliseconds (otherwise there is no point in going for an analog computer). I wish to know whether 10 milliseconds is a realistic estimate of the time needed to change the connections/parameters (at the very least, I wish to know that a 10 millisecond figure is not impossible for certain problems; these "certain problems" may or may not be the problem I have in mind).

Yes, one can assume that I am interested in solving a static problem. But I am interested in solving a series of static problems (30 static problems) within one second. I am not interested in the dynamics that may come into the picture when one switches from one static problem to the next. Let us assume that the system is designed well, so that it is practically quite stable (something close to a critically damped system, where the oscillations before reaching the steady state are minimal). From a practical point of view, even if the unwanted dynamics and oscillations happen to be unavoidable, it may be reasonable to assume that the oscillations would not usually take more than 10 milliseconds to settle down. Hence it may be reasonable to assume that I can complete one individual (static) simulation within 30 milliseconds (including the time required to change the connections/parameters once), if changing the parameters/connections takes only about 10 milliseconds.

I would prefer using a minimum of about 500 elements. There is no harm in using more elements. But I want an individual simulation (plus the time required to change the connections/parameters once) to be completed within 30 milliseconds.
 
  • #39
Kirana Kumara P said:
I would prefer using a minimum of about 500 elements. There is no harm in using more elements. But I want an individual simulation (plus the time required to change the connections/parameters once) to be completed within 30 milliseconds.

It was about 45 years ago that we quit using analog circuit analogies to solve equations, because digital solutions became so much better. Therefore, I think it is likely that the others in this thread who say digital solutions would be faster are right, and you are wrong. However, we don't have enough details about your problem to say that conclusively. Therefore, I'll take you at your word that your analog circuits can do in 30 ms what supercomputers are not able to achieve. So what then?

It sounds like the speed of switching components between solutions is not the limiting factor. You can get solid-state relays that switch one resistor out and a different resistor in even faster than 10 ms.

But with 500 or more nodes, you are talking about thousands, or tens of thousands of analog components. The time to design and fabricate them will be very long. You will likely need a digital computer anyhow to decide on the parameter changes and to issue commands to the thousands of relays. Heat dissipation will become a problem for this monster machine that will fill a whole room.

Most of all, reliability could make it all impractical. In my early days, we used analog computers in a project. We had a technician who showed up early every working day to replace the vacuum tubes that had failed the previous day. That took an hour each day, and we only had 10 amplifiers to worry about. Even with modern components, the MTBF of this machine might be less than an hour.

Switching speed aside, I am highly skeptical that your analog computer dream will be practical. If you asked me to collaborate with you on the project, I would flee the scene. It would be a career killer to go back 45 years in technology to solve a problem.

My best advice is that you should re-verify your understanding about what modern digital computers can accomplish in a short time. It is likely that you got it wrong the first time.
 
  • #40
The topology of integrators and summing amplifier circuits used for the solution of differential equations in analogue computers is the same as was used in analogue state variable filters. Those filters have now been replaced by very low power digital filters throughout signal processing technology.
https://en.wikipedia.org/wiki/State_variable_filter
https://en.wikipedia.org/wiki/Digital_filter

It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array
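
To make the correspondence concrete, here is a minimal sketch (not from the thread) of a Chamberlin-form digital state-variable filter; the two accumulations are the discrete counterparts of the two analog integrators in the loop:

```python
import math

def state_variable_filter(samples, fc, q, fs):
    """Chamberlin digital state-variable filter: two discrete integrators in
    a feedback loop, mirroring the analog-computer topology. Returns the
    low-pass output."""
    f = 2.0 * math.sin(math.pi * fc / fs)   # per-sample integrator gain
    low = band = 0.0
    out = []
    for x in samples:
        high = x - low - band / q           # summing node
        band += f * high                    # first integrator
        low += f * band                     # second integrator
        out.append(low)
    return out

# example: low-pass a unit step (fs = 48 kHz, fc = 1 kHz, Q = 0.707)
y = state_variable_filter([1.0] * 200, 1000.0, 0.707, 48000.0)
print(y[-1])   # settles toward 1.0
```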
 
  • #41
Baluncore said:
False. For the same bandwidth technology, a digital processor will produce results more than 100 times faster than any analogue computer, with or without changes of parameters. A digital signal never has to settle to a fixed value, it need only be clear of the transition threshold. An analogue circuit requires more time and a lower noise environment. It is unlikely that the errors of an analogue computer at speed will be much less than 0.4%, equivalent to 8 bits. We can do a great many 16 bit digital computations, (with less than 0.004% error), in the time it takes one analogue signal to settle.

A finite array of analogue computer elements is still FEM.

You are correct, there are many problem dependent possibilities. Without specifications, or a set of equations, anything is possible. It seems like you are trying faithfully to maintain a belief in analogue computers, while all the evidence suggests they have been extinct for quite some time.
I no longer expect you to present specifications and equations, as to do that would threaten your faithfully held belief.

When one uses the word "simulation", it can be one of these:

1) Simulation of physics using a digital computer
2) Simulation of physics using an analog computer
3) Simulation of the analog computer of point 2) above, on a digital computer

One may note that points 1) to 3) are not one and the same (although the results obtained using the three methods are expected to be practically the same). When I talk about analog simulations, it always refers to 2) above (not 3) above). I am interested in simulating certain physics, not a certain analog computer. How one represents the physics depends on whether one uses a digital computer or an analog computer.

To give a simple example, if someone wants to find the circumference of a circle using a digital computer, he can just use the well-known formula that calculates the circumference from the radius. An analog computer, on the other hand, is something like actually drawing a circle with the specified radius on the ground and then measuring the circumference with a thread. Point 3) above is like drawing the circle on a computer screen using an equation that describes the circle, and then "measuring the circumference" by counting the number of pixels on the circumference and knowing the distance between individual pixels. Hence it is not fair to simply compare the speed of digital computers to analog computers, because the problems to be solved are themselves different (although the physics to be simulated is the same, the analog model is different from the digital model).

Hence I do not understand your point that a digital simulation should be at least 100 times faster than the corresponding analog simulation. I believe that the success of analog computers depends on the possibility of finding a good electrical analogy. Of course, when it comes to problems like the simulation of biological organs, this may turn out to be a very complicated task and the risk of failure may be high, but that is what makes the problem interesting, challenging, and important. And no one would take up the challenge (and the risk) if there were no possibility of success.

These three points are still of concern to me (thanks to many of the replies above, which helped me to get insight into the following points):
1) Time consumed for actively changing the connections and/or parameters
2) Time required for the oscillations to settle
3) Unpredictable and uncontrolled variation in parameters because of heating and drift over time

Constructing arrays of analog computing elements resembles the Finite Difference Method more than the Finite Element Method. The Finite Difference Method carries out simulations by replacing differential equations with difference equations.

There is a reason for not presenting the nonlinear differential equations. The available literature does not give the final form (which is required for our purpose) of these differential equations. This is like first defining a variable "a" as a function of the variables "b", "c", "d", then defining a variable "e" as a function of "a", then defining a variable "f" as a function of "e", and then writing down the nonlinear differential equation in terms of "f". One may need to use a digital computer to get the final form of the differential equations. Moreover, some of the parameters (coefficients) in the differential equations will not be known beforehand. For example, the coefficients may depend on the position of the mouse pointer on the screen of a digital computer; the mouse pointer would be actively controlled by a human user (nobody can predict beforehand how the user is going to move the pointer); hence the digital computer would note the position of the mouse pointer and then calculate the coefficients for the differential equations. The digital computer can then issue commands (at appropriate times) to the analog computer to change its connections/parameters. A skeletal sketch of such a loop is given below.
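
Here is that skeletal sketch; every function in it is a hypothetical placeholder, not a real GUI or hardware API:

```python
import time

def read_pointer():
    """Hypothetical stand-in for querying the mouse position from the GUI."""
    return (0.4, 0.7)

def coefficients_from_pointer(x, y):
    """Hypothetical mapping from pointer position to equation coefficients."""
    return {"k1": 10.0 * x, "k2": 5.0 * y}

def send_to_analog(coeffs):
    """Hypothetical stand-in for commanding the analog computer, e.g. by
    driving the DACs or relays that set coefficient potentiometers."""
    pass

frame = 1.0 / 30.0                # ~33 ms per simulation, as discussed
for _ in range(30):               # one second's worth of updates
    t0 = time.monotonic()
    send_to_analog(coefficients_from_pointer(*read_pointer()))
    # ... the analog computer settles and its outputs are read back here ...
    time.sleep(max(0.0, frame - (time.monotonic() - t0)))
```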

If I am going to use the Finite Element Method, I can tell the form of the final set of equations to be solved using an analog computer; that is, as I have already said, I need to solve just a set of simultaneous nonlinear algebraic equations (not nonlinear differential equations). As noted in the previous paragraph, I can get the numerical values of the coefficients only at run-time. As mentioned already, I have about 5000 equations in the set of simultaneous nonlinear algebraic equations (but if that number is too large, I am okay with 1000 equations as well).

Finally, speed is more important to me than accuracy. I am okay with about 5% error.
 
  • #42
Baluncore said:
It would seem sensible to replace all the analogue computer nodes with the appropriate circuit code for implementation as digital filters. That way, speed and reliability will increase, while at the same time there will be a reduction in power and calibration time. Those functions can be quickly implemented and revised in FPGAs.
https://en.wikipedia.org/wiki/Field-programmable_gate_array

While using FPGAs, is it possible to change the connections and parameters *During* simulations?
 
  • #43
anorlunda said:
It was about 45 years ago that we quit using analog circuit analogies to solve equations, because digital solutions became so much better. Therefore, I think it is likely that the others in this thread who say digital solutions would be faster are right, and you are wrong. However, we don't have enough details about your problem to say that conclusively. Therefore, I'll take you at your word that your analog circuits can do in 30 ms what supercomputers are not able to achieve. So what then?

It sounds like the speed of switching components between solutions is not the limiting factor. You can get solid-state relays that switch one resistor out and a different resistor in even faster than 10 ms.

But with 500 or more nodes, you are talking about thousands, or tens of thousands of analog components. The time to design and fabricate them will be very long. You will likely need a digital computer anyhow to decide on the parameter changes and to issue commands to the thousands of relays. Heat dissipation will become a problem for this monster machine that will fill a whole room.

Most of all, reliability could make it all impractical. In my early days, we used analog computers in a project. We had a technician who showed up early every working day to replace the vacuum tubes that had failed the previous day. That took an hour each day, and we only had 10 amplifiers to worry about. Even with modern components, the MTBF of this machine might be less than an hour.

Switching speed aside, I am highly skeptical that your analog computer dream will be practical. If you asked me to collaborate with you on the project, I would flee the scene. It would be a career killer to go back 45 years in technology to solve a problem.

My best advice is that you should re-verify your understanding about what modern digital computers can accomplish in a short time. It is likely that you got it wrong the first time.

The relevant literature indicates that nobody has yet been successful in solving the problem using present-day digital computers (the major part of the problem involves solving a set of 5000 nonlinear simultaneous algebraic equations thirty times a second). I am clear about this.

A modern supercomputer (which is a digital computer) may complete within one second a simulation that takes one hour on a desktop computer. However, it may not be able to complete within 30 milliseconds a simulation that takes one minute on a desktop computer. This is because of the time required for communication between the processors of a supercomputer (a rough illustration of this cost is sketched below). This can be avoided only if at least one of the following holds good in the future:
1) We will have individual digital processors which are very fast (like 100 GHz processors)
2) The time required for communication between the processors in a supercomputer is reduced to a great extent.
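
As a rough illustration of that communication cost (the numbers below are assumptions for the sake of argument, not measurements):

```python
# Rough, illustrative budget for an iterative solve spread across a cluster.
iterations = 1000        # assumed solver iterations per 30 ms frame
sync_latency = 5e-6      # assumed interconnect latency per global sync (5 us)
comm_ms = iterations * sync_latency * 1e3
print(f"{comm_ms:.1f} ms of the 30 ms frame spent on communication alone")
```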

Coming to analog computers, earlier I was worried about:
1) switching speed
2) time required for the signals to settle
3) unwanted change in the parameters because of heating and drift over time

From your reply above, I have to add the following important point to the above list:
1) reliability

Coming to practical difficulties of analog computing, the following points need to be noted also:
1) time, effort, and expertise required to design and fabricate
2) difficulty in finding collaborators
3) space requirement and cooling arrangement

I have still not lost all hope in the analog computer, since I am ready to sacrifice a bit of accuracy as a trade-off for speed (even about 5% error may be okay for me). Assuming that switching speed is not a major concern, this might take care of the time required for the signals to settle, and of unwanted changes in the parameters because of heating and drift over time. Reliability comes into the picture only after the computer is built. However, realizing the analog computer would likely be a long and tedious task, and success is not guaranteed. If successful, one may be able to say that the analog computer has done something that even a modern supercomputer has not been able to achieve so far.
 
  • #44
Baluncore said:
A finite array of analogue computer elements is still FEM.

Yes, it could be FEM (or it could be some other technique, e.g., the Finite Difference Method).

One of my earlier replies could imply that I do not agree with the statement "a finite array of analog computer elements is still FEM". Hence I am writing this reply to clarify that I do not contradict the statement. But it need not be FEM alone (in fact, the Finite Difference Method bears more resemblance).
 
  • #45
I wish to ask a new question here.

Suppose we have an analog circuit (true hardware, not a digital computer simulation of hardware). If we give it some inputs, the circuit delivers the output (result) within a certain time interval (the solution time). Let us call this "analog simulation".

Next, let us assume that we would simulate the same circuit on a digital computer. Here we are not bothered about the physics behind the construction of the analog circuit, and we will not consider the analytical solution for the problem in hand. In fact, we will not even bother about the original problem. We will just concentrate on the analog circuit and simulate the analog circuit on a digital computer. Let us call this "digital simulation of analog circuit".

I wish to know which of the two simulations is faster. Or does whether the "analog simulation" or the "digital simulation of the analog circuit" is faster depend on the problem (circuit) at hand?

One may note that both of the above simulations are different from a purely "digital simulation", which does not require any analog circuit at all (real or virtual).
 
  • #46
Kirana Kumara P said:
I wish to know which of the two simulations is faster. Or does whether the "analog simulation" or the "digital simulation of the analog circuit" is faster depend on the problem (circuit) at hand?
An analogue processor solves the problem as a continuous function. The digital processor takes discrete steps to reach the solution. To satisfy the Nyquist-Shannon sampling theorem, the rate of samples taken by the digital processor will be more than twice the highest significant harmonic component in the signal. That same rule applies to digital filters. It can be shown that if the sampling theorem is obeyed, the discrete digital and the continuous analogue are solving the same system of equations. Fourier would agree.
I expect a digital processor to produce results about 100 times faster than an analogue processor.

Kirana Kumara P said:
One may note that both of the above simulations are different from a purely "digital simulation", which does not require any analog circuit at all (real or virtual).
There should be no difference. Both digital algorithms will be an analogue of the organ being simulated. If they are different, one must be a worse approximation than the other.

You are worrying about irrelevant things. Digital processors will now always beat analogue processors. Your faith in an array of analogue processors looks like an example of “Steampunk”.
 
  • #47
Baluncore said:
An analogue processor solves the problem as a continuous function. The digital processor takes discrete steps to reach the solution. To satisfy the Nyquist-Shannon sampling theorem, the rate of samples taken by the digital processor will be more than twice the highest significant harmonic component in the signal. That same rule applies to digital filters. It can be shown that if the sampling theorem is obeyed, the discrete digital and the continuous analogue are solving the same system of equations. Fourier would agree.
I expect a digital processor to produce results about 100 times faster than an analogue processor.
You should not use the Nyquist theorem for real-time processes. Nyquist says that a sampling rate of twice the frequency would allow accurate phase and gain results given an infinite sample to analyse. (There are more complicated versions that apply to a finite sample length.) That is not relevant to real-time simulations. In reality, you would want at least 20 samples per cycle to get real-time results with marginally satisfactory phase and gain errors. It is easy to estimate what the phase and gain errors can be for any sampled signal. For high frequency real-time processes, I believe that analog circuits still have the advantage when applied to a complicated signal network. (But I have never worked with high-frequency systems, so I don't know what issues might create problems in analog simulations.)
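
As one way to put numbers on those errors, here is a sketch assuming a zero-order-hold model of the sampling (one common way to bound the gain and phase errors):

```python
import math

def zoh_errors(samples_per_cycle):
    """Gain droop and phase lag of a zero-order hold at a given sampling
    density: |H| = sinc(f/fs), and the half-sample delay costs 180*(f/fs)
    degrees of phase at the signal frequency."""
    r = 1.0 / samples_per_cycle
    gain = math.sin(math.pi * r) / (math.pi * r)
    phase_lag_deg = 180.0 * r
    return gain, phase_lag_deg

for n in (2.5, 10, 20):
    g, p = zoh_errors(n)
    print(f"{n:>4} samples/cycle: gain {g:.3f}, phase lag {p:.1f} deg")
# 20 samples/cycle gives ~0.4% droop and ~9 degrees of lag;
# near the Nyquist limit the errors are far worse.
```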
 
  • #48
FactChecker said:
You should not use the Nyquist theorem for real-time processes. Nyquist says that a sampling rate of twice the frequency would allow accurate phase and gain results given an infinite sample to analyse. (There are more complicated versions that apply to a finite sample length.) That is not relevant to real-time simulations. In reality, you would want at least 20 samples per cycle to get real-time results with marginally satisfactory phase and gain errors. It is easy to estimate what the phase and gain errors can be for any sampled signal.
I did not refer to “samples per cycle” but to twice the highest significant harmonic component in the signal. The frequency components in the simulation are very low, which is why views every 30 msec make an acceptably smooth image. A simulation over time is a continuous process; extracting a snapshot of the state every 30 msec does not make it an isolated fixed short-length sample.
The OP has referred to 5% error in post #43, “(even about 5% error may be okay for me)”.

FactChecker said:
(There are more complicated versions that apply to a finite sample length.)
Can you please give me a reference to such a version?
 
  • #49
Baluncore said:
I did not refer to “samples per cycle” but to twice the highest significant harmonic component in the signal.
Yes. We are both talking about the same thing.

Baluncore said:
The frequency components in the simulation are very low, which is why views every 30 msec make an acceptably smooth image. A simulation over time is a continuous process; extracting a snapshot of the state every 30 msec does not make it an isolated fixed short-length sample.
But it is still quite different from analyzing a long series. The rule of thumb of a minimum of 20 samples per cycle is a good starting point.

Baluncore said:
Can you please give me a reference to such a version?
Sorry, I do not have a reference at this time. I ran into one long ago, but I don't know where it is now.

ADDITIONAL NOTE (CORRECTION?): The rule of thumb of a minimum of 20 samples per cycle was relevant to control-law design, where the problem is to respond fast enough and accurately enough to control a process through feedback. That might be different from the problem of simulating a system without trying to control it. I don't know about that situation.
 
  • #50
Baluncore said:
There should be no difference. Both digital algorithms will be an analogue of the organ being simulated. If they are different, one must be a worse approximation than the other.

A digital solution methodology that employs a worse approximation could be slower than one that employs a better approximation. Coming back to the problem of finding the circumference of a circle using a digital computer: it may be faster to calculate the more accurate solution (2*pi*radius) than to calculate the circumference by drawing a circle of the given radius on the computer screen using a circle-generation algorithm and then "measuring" the circumference by counting the pixels on the circumference and the distances between them.
 
  • #51
Coming back to the problem of simulating biological organs in real time, let me give a few more details about the problem I have in mind. The biological organs I am thinking of are the liver and the kidney. Let us assume that we have the geometry of the organs in hand (CAD files, say). I am not bothered about the inner details of the organs; they may be assumed to be homogeneous and isotropic. When subjected to specified displacements at certain points on the surface, the entire surface of the organ undergoes deformation. Large deformations are allowed. The material behaviour is nonlinear, and the material may be assumed to be hyperelastic; the hyperelastic material properties are known. Over the course of time, the geometry of the organ may change, e.g., because of cutting. The mass of the organ may be ignored; dynamics and inertia effects may also be ignored.

My problem is to find the displacement of the entire surface of the organ when the displacements at only a few points on the surface are known; this computation should be completed within 30 milliseconds (or, we should be able to complete about 30 such computations within a second). There can be a slight change in the geometry (because of cutting, say) and in the boundary conditions after one computation finishes and before the next one starts.

The literature is clear that, as of now, nobody has been successful in solving the above problem on a digital computer with reasonably good granularity, i.e., granularity that is usable for practical purposes (of course, many have made simplifying assumptions and thus claimed a solution to the problem).
 
  • #52
Kirana Kumara P said:
A digital solution methodology that employs a worse approximation could be slower than one that employs a better approximation.
In which case only a fool would consider using the worst and slowest of the available solutions.
There are a huge number of bad and slow solutions. Any fixation on those failures would represent a lack of, or a misapplication of, intelligence.

Kirana Kumara P said:
Coming back to the problem of finding the circumference of a circle using a digital computer: it may be faster to calculate the more accurate solution (2*pi*radius) than to calculate the circumference by drawing a circle of the given radius on the computer screen using a circle-generation algorithm and then "measuring" the circumference by counting the pixels on the circumference and the distances between them.
That is not really a good example or demonstration of anything. Also, the number of pixels around a circle is not π times the diameter in pixels. I believe it is actually closer to 2√2 times the diameter of the circle in pixels.
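
A quick numerical check of that claim (a sketch using a simple octant-mirrored rasterization of the outline):

```python
import math

def circle_pixels(r):
    """Rasterize the outline of a circle of integer radius r by computing
    one octant and mirroring it eight ways; returns the distinct pixels."""
    pixels = set()
    for x in range(0, math.ceil(r / math.sqrt(2)) + 1):
        y = round(math.sqrt(r * r - x * x))
        for a, b in ((x, y), (y, x)):
            pixels |= {(a, b), (-a, b), (a, -b), (-a, -b)}
    return pixels

for r in (100, 1000):
    n = len(circle_pixels(r))
    print(r, n / (2 * r))   # ratio approaches 2*sqrt(2) ~ 2.83, not pi
```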
 
  • #53
Baluncore said:
In which case only a fool would consider using the worst and slowest of the available solutions.
There are a huge number of bad and slow solutions. Any fixation on those failures would represent a lack of, or a misapplication of, intelligence.

That is not really a good example or demonstration of anything. Also, the number of pixels around a circle is not π times the diameter in pixels. I believe it is actually closer to 2√2 times the diameter of the circle in pixels.

While solving real-world problems (represented in the form of some equations, say), it may not be possible to find a perfect electrical analogy (one may have to use some approximately analogous circuit). Now there is a difference between obtaining solutions by simulating this approximately analogous circuit on a digital computer, obtaining solutions from the approximate analog circuit itself (the circuit as hardware), and obtaining solutions by directly solving the equations on a digital computer without making use of any analogy (or any circuit).

With reference to the digital simulation of analog circuits, I would not use the formula (pi*diameter). I assume that the circumference may be approximated by a polygon, where the length of each side is the distance between the centers of the two pixels that form the ends of that side. (A sketch of this estimate is given below.)
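
A self-contained sketch of that polygon estimate, using the same simple octant-mirrored rasterization as in the earlier sketch and ordering the pixel centers by angle:

```python
import math

r = 1000
# rasterize one octant of the circle outline and mirror it eight ways
pix = set()
for x in range(0, math.ceil(r / math.sqrt(2)) + 1):
    y = round(math.sqrt(r * r - x * x))
    for a, b in ((x, y), (y, x)):
        pix |= {(a, b), (-a, b), (a, -b), (-a, -b)}

# polygon through the pixel centers, ordered by angle around the center
pts = sorted(pix, key=lambda p: math.atan2(p[1], p[0]))
length = sum(math.hypot(q[0] - p[0], q[1] - p[1])
             for p, q in zip(pts, pts[1:] + pts[:1]))
print(length / (2 * math.pi * r))   # ~1.05: the estimate runs about 5% high
```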
 
  • #54
It may be useful to have an analog computer that can solve a set of simultaneous nonlinear algebraic equations, even if it does nothing else. As a first step, an analog computer that can solve a set of simultaneous linear algebraic equations may also be useful. It is also okay if the total number of equations in the set cannot be altered.

The above analog computer (which may be thought of as a module) may be coupled to a digital computer. When the digital computer comes across a set of equations, it can command the analog computer to solve it.

But nobody would buy the module if it fails frequently. Moreover, the module has value only if one can prove that using it speeds up the computations (when compared to solutions carried out on digital processors alone).
 
  • #55
I am extremely grateful to each and every one who has replied to my questions. Physics Forum has already given me more than what I expected to get (both in terms of quality and quantity).
 
  • #56
Kirana Kumara P said:
Now there is a difference between obtaining solutions by simulating this approximately analogous circuit on a digital computer, obtaining solutions from the approximate analog circuit itself (the circuit as hardware), and obtaining solutions by directly solving the equations on a digital computer without making use of any analogy (or any circuit).
It is all very well hypothesising that there are a huge number of possible solutions and that some are better than others. If it can be done with analogue components then it can be done faster digitally. If it can't be done with analogue components then there is no issue, digital will be the only way to solve it.

Kirana Kumara P said:
I assume that the circumference may be approximated by a polygon, where the length of each side is the distance between the centers of the two pixels that form the ends of that side.
As I wrote, it is not a good example. Where pixels are arranged in a rectangular matrix, it is the Manhattan distance not the linear distance that decides the number of pixels needed to draw the side of a polygon.

Kirana Kumara P said:
It may be useful to have an analog computer that can solve a set of simultaneous nonlinear algebraic equations, even if it does nothing else. As a first step, an analog computer that can solve a set of simultaneous linear algebraic equations may also be useful. It is also okay if the total number of equations in the set cannot be altered.
Solving sets of simultaneous linear algebraic equations is now done efficiently using digital computers. Going back to analogue computers would be a waste of time and resources.

Any technique “may be useful". Solving real problems is actually better than "may be useful".
 
  • #57
Baluncore said:
If it can be done with analogue components then it can be done faster digitally. If it can't be done with analogue components then there is no issue, digital will be the only way to solve it.
What can we do if it can't be done digitally? I hope analog computing might come to the rescue.

Baluncore said:
As I wrote, it is not a good example. Where pixels are arranged in a rectangular matrix, it is the Manhattan distance not the linear distance that decides the number of pixels needed to draw the side of a polygon.
I gave the example just to explain the concept; finer details are not important.

Baluncore said:
Solving sets of simultaneous linear algebraic equations is now done efficiently using digital computers. Going back to analogue computers would be a waste of time and resources.
But not as efficiently as I want it to be (there is a difference between efficiency and wall-clock time).
 
  • #58
Kirana Kumara P said:
What can we do if it can't be done digitally? I hope analog computing might come to the rescue.
You have got it backwards. Analogue computing was once the “maiden in distress”, who died. Digital computing is still the “knight in shining armour”.

Kirana Kumara P said:
But not as efficiently as I want it to be (there is a difference between efficiency and wall-clock time).
Everyone wants better algorithms.

You are thinking in circles trying to justify your misplaced faith in analogue computers. Go ahead, write a representative set of equations, design an analogue processor to solve them, then build a prototype and test it.
 
  • #59
There has been interest in large-scale analog chips to perform functions that are not practical on digital computers (e.g., http://www.news.gatech.edu/2016/03/02/configurable-analog-chip-computes-1000-times-less-power-digital). One area of possible application is in neural networks. I have seen problems in the last 5 years that cannot realistically be done on digital computers even in non-real-time batch mode.

As others have said, it has always been possible to include real-time switching in analog simulations, so that the circuit changes as a simulation runs. That can be done in real time, in nanoseconds. However, it would be necessary to have all the options programmed ahead of time and simply switch between signal paths. Variable gains and parameters are always possible. It is easy to have variable gains that zero out one signal path and turn another path on.
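
For instance, here is a minimal sketch of path selection by complementary gains (the two transfer functions are invented placeholders):

```python
def blended_output(x, select):
    """select = 0.0 routes path A, select = 1.0 routes path B; intermediate
    values crossfade, which is how variable gains 'switch' signal paths."""
    path_a = 2.0 * x      # hypothetical transfer function A
    path_b = x * x        # hypothetical transfer function B
    return (1.0 - select) * path_a + select * path_b

print(blended_output(3.0, 0.0))   # 6.0: path A only
print(blended_output(3.0, 1.0))   # 9.0: path B only
```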

Although a lot is possible with old-fashioned analog computers, they have great practical disadvantages compared to digital computers. I would not consider them unless there were no possibility of solving the problem digitally. It sounds like you have already tried digital computers on your problem and cannot do it in real time. I am not really familiar with the current work on large-scale analog chips; apparently they require very little power (and therefore little cooling).
 
  • #60
I think that this thread has been hindered by differing definitions of analog computer.

Many of us think of a general purpose machine based on operational amplifiers. We arrange feedback around those amplifiers to represent our equations. The amplifiers have limitations including settling time.

A broader view of the term includes electric circuits whose equations are direct analogs of the system under study. Their behavior is what it is, instantaneously (except for near-light-speed propagation). As I said in #18, a simple resistor is an analog computer useful for solving Ohm's Law. The challenge is to find a nonlinear circuit which really is analogous. There have been many analog/hybrid attempts to do that with neuron analogies.

The OP has not demonstrated here that he understands circuits well enough to understand the difference.
 
  • #61
Baluncore said:
Everyone wants better algorithms.

But in my case, speedier simulation is not only desirable; more importantly, it is a necessity. I have no problem going to vacuum tubes, or to hydraulic or pneumatic analog computers; I have no problem even going back a thousand years and building a mechanical analog computer, if it can deliver faster simulations (not just faster when compared to any other type of computer; it should solve my problem within the allowed time duration).
 
  • #62
My electrical analog computer has the following characteristics:
1) It is described by an electrical circuit
2) The circuit represents the electrical analogy for the original physics (or problem, or system) to be simulated

The individual electronic/electrical components of the above analog computer (filters, amplifiers, switches, resistors, capacitors, etc.) can be either analog or digital. Even if some of these components are digital, I still call the circuit an analog computer.

Based on the above definition of an analog computer, my "common sense" says (of course, "common sense" can go wrong at times) that the analog computer will be the fastest, followed by the purely digital simulation. The digital simulation of the analog circuit might be the slowest. I "feel" that this would hold true at least for many problems (it may or may not be true for all problems; i.e., there is a possibility that which of the simulations is faster depends on the problem taken up).

The reason I feel that the digital simulation of the analog circuit can be slower than the purely digital simulation is that the purely digital simulation does not require any circuit (and does not require any electrical analogy). Although both digital simulations could be perfectly analogous, they may use different algorithms for their respective solutions (since how the original problem is digitally represented/defined/explained could be different).

Now, if some theory says that the digital simulation of an analog computer is faster than the analog computer itself, the theory may assume that the same accuracy is expected of the results from both simulations. But if some amount of error is allowed in the simulations, the digital simulation may be calculating more accurate results even when that is not required (and that may be what makes it slower than the analog computer, even while the theory predicts otherwise).
 
  • #63
Just some unrealistic thoughts below (although I am a mechanical engineer and do not know much about electrical circuits, I know that the following are quite unrealistic, at least for the time being).

Let us assume that someone designs an analog computer which has tens of thousands of electrical components. If the entire analog circuit can then be placed (fabricated) on a very small piece of material (like VLSI, or a modern Intel processor), the analog computer is likely to be very fast (fast simulations, fast switching). Other advantages: reliability, less heat generated, less space required, lower power consumption.

Going a step further, someone who designs such an analog computer should be able to get it fabricated (on a very small chip, as mentioned above) by outsourcing the fabrication work to some company (e.g., Intel). Ordering (fabricating) just one piece (one analog computer) should also be possible.
 
  • #64
Kirana Kumara P said:
Just some unrealistic thoughts below (although I am a mechanical engineer and do not know much about electrical circuits, I know that the following are quite unrealistic, at least for the time being).

Let us assume that someone designs an analog computer which has tens of thousands of electrical components. If the entire analog circuit can then be placed (fabricated) on a very small piece of material (like VLSI, or a modern Intel processor), the analog computer is likely to be very fast (fast simulations, fast switching). Other advantages: reliability, less heat generated, less space required, lower power consumption.

Going a step further, someone who designs such an analog computer should be able to get it fabricated (on a very small chip, as mentioned above) by outsourcing the fabrication work to some company (e.g., Intel). Ordering (fabricating) just one piece (one analog computer) should also be possible.

There is so much wrong with this I don't even know where to begin.

Tens of thousands of electrical components for your integrated analog computer is too small an estimate. A VLSI analog computer like you're describing would more likely be millions of components to simulate (an op amp alone is thousands of components once parasitics have been extracted). You could simulate it, of course, but it won't be fast, since simulation time increases as the square of the number of components, to first order.

Sure, you could do it in a VLSI chip, but what do you mean by "a modern Intel processor"? Intel's fabrication process is *highly* optimized for digital, and their internal analog designers have to jump through outrageous hoops just to get anything analog (like a clock generator) to work.

Anyway, you aren't going to be outsourcing anything to Intel. They don't operate a foundry service. You would need something like MOSIS to broker space on a wafer run for you. I doubt you can afford a full-wafer engineering run. You'll get 40 to 100 parts depending on the process you use. Even using an older process would set you back a few tens of thousands of dollars in fabrication cost alone. You could buy an HPC cluster for that price.

The real sticking point, though, will be how do you propose to design the analog computer? What tools will you use? Public domain tools, quite frankly, suck and professional tools are really expensive. I mean REALLY EXPENSIVE.

Also, did you know it is hard to make different op amp circuits (for example, integrators) on the same wafer act in a similar way? How do you propose to deal with that? I'm guessing you don't know that's a problem.

While I think your concept of an analog computer doing fast simulations for specific problems is sound in principle, I think you are way, way out of your depth. Simulate with software and be done with it. Start with MATLAB or SciPy and migrate to C if it's too slow. That is my recommendation.
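
A minimal SciPy starting point of the kind suggested above, using scipy.optimize.fsolve on a small made-up nonlinear system (the residual here is a stand-in for the assembled FEM equations, not the real physics):

```python
import numpy as np
from scipy.optimize import fsolve

def residual(u):
    # stand-in for the assembled nonlinear equations K(u) u = f;
    # here each unknown satisfies u + 0.1 u^3 = target
    return u + 0.1 * u**3 - np.linspace(0.0, 1.0, u.size)

u0 = np.zeros(500)                 # initial guess, one unknown per node
u = fsolve(residual, u0)           # Newton-type (MINPACK hybrd) solve
print(np.abs(residual(u)).max())   # residual should be near machine precision
```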
 
  • #65
There already are VLSI analog chips that are user-programmable. They are called Field Programmable Analog Arrays (FPAA). They can be used similarly to Field Programmable Gate Arrays (FPGA). See http://www.anadigm.com/fpaa.asp , http://www.okikatechnologies.com/ , and https://en.wikipedia.org/wiki/Field-programmable_analog_array .

Programming either FPAAs or FPGAs requires some help in scaling all the signals, since the calculations are fixed-point. MATLAB has a tool to turn a floating-point signal diagram into an FPGA fixed-point diagram, and a similar thing would be useful for FPAAs. In fact, as far as I know, it might apply directly.
 
  • #66
In all fairness, Field Programmable Analog Arrays universally suck. There is a reason they are only offered by tiny companies and no one buys them.

For the OP's application, anyway, an FPAA would be a total Charlie-Foxtrot since they are invariably switched-capacitor internally and so the OP would have to deal with sampled-data effects on top of solving the desired equations. The whole point of going analog here is speed... and FPAAs just aren't going to get it done.
 
  • #67
analogdesign said:
In all fairness, Field Programmable Analog Arrays universally suck. There is a reason they are only offered by tiny companies and no one buys them.

For the OP's application, anyway, an FPAA would be a total Charlie-Foxtrot since they are invariably switched-capacitor internally and so the OP would have to deal with sampled-data effects on top of solving the desired equations. The whole point of going analog here is speed... and FPAAs just aren't going to get it done.
I am not familiar with them, but one spec sheet that I looked at said it had a signal bandwidth of up to 2 MHz. From that, I assumed that any sampling effects are minimal except at very high frequencies. (http://www.anadigm.com/_doc/DS231000-U001.pdf )
 
  • #68
The sampled data effect is the need for good quality anti-aliasing filtering. That means you're adding op amps on your front end even if you are eliminating them by using this part you linked to.

Also, based on the OP's speed requirements, 2 MHz just isn't going to cut it.
 
  • #69
analogdesign said:
The sampled data effect is the need for good quality anti-aliasing filtering. That means you're adding op amps on your front end even if you are eliminating them by using this part you linked to.

Also, based on the OP's speed requirements, 2 MHz just isn't going to cut it.
If the OP is looking at something with mode frequencies over 2 MHz, then it is no wonder that digital computers are not solving it. But he was talking about running at a 30 millisecond frame, with a desire for a 1 millisecond frame. So I assume that he is dealing with frequencies way below 2 MHz ... more on the order of well under 500 Hz.
 
  • #70
The OP said at one point that he or she wants the calculations done in 1 ms. Assuming 0.1% settling (that's about 10 bits) and a feedback factor of 0.1 (who knows how much gain would be needed?), the required GBW for a single settling event would be -ln(0.001)/(2*3.14*0.1*1 ms) ≈ 11 kHz. But if many amplifier stages must settle in sequence within that 1 ms (say a thousand of them, leaving roughly 1 µs each), the required GBW climbs to about 11 MHz. That's right at the limit of the analog circuits on the part you linked to. Who knows? Maybe it would work.
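
The settling arithmetic from this post, written out as a sketch (a single-pole amplifier model is assumed):

```python
import math

def required_gbw(settling_error, feedback_factor, settle_time_s):
    """Single-pole model: closed-loop bandwidth = beta * GBW, so the time
    constant is 1/(2*pi*beta*GBW) and settling to within epsilon takes
    -ln(epsilon) time constants."""
    return -math.log(settling_error) / (
        2.0 * math.pi * feedback_factor * settle_time_s)

print(required_gbw(0.001, 0.1, 1e-3))   # ~1.1e4 Hz if a single settle gets 1 ms
print(required_gbw(0.001, 0.1, 1e-6))   # ~1.1e7 Hz if each stage gets only 1 us
```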
 