Understanding Grids and Units in Computer Graphics and Physics Simulations

In summary: Wow. Those are very broad questions. It would take a whole textbook and a semester's study to answer all of that. But maybe I can answer one part of your query. What I mean is that you can simply change the scale from metres to centimetres and save yourself some computing power and, effectively, some time as well; i.e., if the units affect the performance they are somewhat relevant, but otherwise not at all. Consider simulating a violin string (one dimension makes it easier). I might divide the string into N equal sub-lengths, then simulate each of them as a
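To make that discretization idea concrete, here is a minimal sketch in C++ of a plucked string split into N equal segments and stepped forward with an explicit finite-difference scheme. The length, wave speed, time step, and the pluck itself are made-up values, not anything taken from the thread:
Code:
#include <cstdio>
#include <vector>

int main() {
    const int N = 100;                 // number of sub-lengths along the string
    const double L = 0.33;             // string length in metres (assumed)
    const double c = 200.0;            // wave speed in m/s (assumed)
    const double dx = L / N;
    const double dt = 0.5 * dx / c;    // time step chosen small enough to stay stable
    const double r2 = (c * dt / dx) * (c * dt / dx);

    std::vector<double> prev(N + 1, 0.0), curr(N + 1, 0.0), next(N + 1, 0.0);
    curr[N / 2] = 0.001;               // "pluck": a small bump in the middle
    prev = curr;                       // start from rest

    for (int step = 0; step < 1000; ++step) {
        for (int i = 1; i < N; ++i)    // end points stay fixed at zero
            next[i] = 2.0 * curr[i] - prev[i]
                      + r2 * (curr[i + 1] - 2.0 * curr[i] + curr[i - 1]);
        prev = curr;
        curr = next;
    }
    std::printf("midpoint displacement after 1000 steps: %g m\n", curr[N / 2]);
    return 0;
}
Note that nothing in the update loop cares whether dx is expressed in metres or centimetres; only the stored numbers change, which is the sense in which the choice of units is (mostly) irrelevant to the simulation itself.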
  • #36
Aufbauwerk 2045 said:
Taking the second part of your question first, the image of the model appears to be rendered instantly (hopefully at least) because the calculations are done very quickly. Behind the scenes, the program is recalculating the new projection of the 3D coordinates of the object onto the 2D pixel coordinates. Then the new 2D pixel coordinates are used to draw the next frame. So if, for example, we have a stationary camera, and the object is a cube, and the cube is rotating, then the 3D coordinates of the cube's vertices are changing, and those new vertices need to be transformed into pixel coordinates. What happens during the actual drawing is that we have triangles (think of two triangles per cube face) which are drawn and filled in with the appropriate color or texture. Each triangle is defined by three vertices. We use triangles because the GPU is good at drawing triangles very quickly. For more details, I really suggest finding a good book on 3D real time graphics. You may also be interested in a book on Physics for Game Programmers. Not that you are necessarily a game programmer, I just mention that because that is a good place to find this information. As for spacing between points, if you mean mapping real-world spacing into a 3D coordinate system to begin with, that's somewhat arbitrary. I could represent a real-world object using many different coordinates, depending on my choice.

That's about all I can say on this topic. I need to get back to work now! Best wishes. :)
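To make that projection step concrete, here is a minimal sketch of taking one rotating-cube vertex from 3D coordinates to pixel coordinates. The focal length, screen size, rotation angle, and camera distance are made-up values, and a real renderer would do this with 4x4 matrices on the GPU, for every vertex of every triangle:
Code:
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

int main() {
    const double focal = 500.0;            // assumed focal length, in pixels
    const int width = 800, height = 600;   // assumed screen size
    const double angle = 0.3;              // current rotation of the cube (radians)

    Vec3 v = {1.0, 1.0, 1.0};              // one corner of the cube

    // 3D step: rotate the vertex about the y axis, then push it 5 units
    // in front of the camera so it has positive depth.
    double xr =  v.x * std::cos(angle) + v.z * std::sin(angle);
    double zr = -v.x * std::sin(angle) + v.z * std::cos(angle) + 5.0;

    // 2D step: perspective divide (points farther away land nearer the
    // centre of the screen), then shift into pixel coordinates.
    double px = width  / 2.0 + focal * xr  / zr;
    double py = height / 2.0 - focal * v.y / zr;

    std::printf("vertex (1,1,1) lands at pixel (%.1f, %.1f)\n", px, py);
    return 0;
}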

This, combined with many of the other explanations, has solved my conundrum, or so I think. I believe I had been thinking, and had said, something similar: that the instruction set for a 3D model contains the information for creating the model for display on a 2D screen in terms of pixels (such as the points and how they connect to form surfaces), where the pixels work in conjunction with the translation properties of the model. Essentially, even a 3D model is not actually 3D but a 2D image; it is just so well defined that its instruction set includes the construction information and the connectivity of all the points from each and every perspective/angle, but only on a 2D plane, i.e. for display of the side, or whatever part of the model it is, that is facing the camera, or the screen.

It is like how movie sets used to be made. The computer is like the set dresser and we are like the director. When we want the front facing us, they only make the front of it, but from and according to our given instructions they prepare a plan for the entire set without creating it. When we switch from orthographic to perspective and view it at an angle, they immediately build from the plan they had (which, in a way, is given to the computer by us when we make a model; the plan is the instruction, it seems) and, say, prepare half of the front, the entire side, and maybe a little bit of the top or the bottom, and show that to us. And the computer, or the set dresser, does it so quickly (since we have already given them the instructions) that we may be (or in this case, I was) led to believe that there is construction of an actual 3D model (although I do not think that I was, since my query was always about the space).

But the difference between the set and the computer-generated image is that the image is flat, and so is the model. The space for the computer is the pixel; that is the ground they build the set on. And the pixels also act as the paint; we simply pick the colour. So, a 3D model is a collection of instruction sets of 2D images conveying information about how something presumably looks when looked at from different angles. This is such a relief!
Edit - I was thinking about what I have written and a thought struck me about 3D printing. Those things come from 3D models. I had somewhat conclusively decided that they are 2D information from all angles. A sculptor is capable of making sculptures just by looking at something or someone from different angles. Is that the principle on which 3D printing works? Is it applicable? If not, my entire understanding would now turn out to be a disappointment.

I would like to apologize to Sir @russ_watters because I think I dismissed your analogy simply due to my incompetence and my misinterpretation of it. You drew a cube on paper, and as soon as I asked to look at the other side, you immediately drew the other side. Like I said, not the brightest student, but I like to think it helps me tread carefully. And if I may, I would also like to blame studying 'ray-tracing' for this, because ray-tracing (speaking from my limited knowledge) makes it sound like there is a 3D space, a 3D object, and light. This has also raised another question: what is light in computers? It has to be pixels. So how can there be simulations of illumination in computers?
 
Last edited:
  • #37
Raj Harsh said:
Co-ordinates in computers are like me telling you where to plot the points on a numbered graph and instructing you how to connect those points; I know this much. But what I do not understand, or need confirmation on (I had a thought, which is mentioned in my previous post), is how the spacing between the points is defined and how the image of the model is instantly rendered on screen if I rotate it or change the camera angle.
I was one of the original developers on ComputerVision's CADDS-3 system, precursor to their CADDS-5 system.

In general, you have (in the computer) more than points. You also have ways of forming the polygons (or other 2D surface shapes) that are anchored by those points.

Here's another example:
Code:
units = "mm"

splineA.pt1.x,y,z = 100,0,15
splineA.pt2.x,y,z = 100,20,25
splineA.pt3.x,y,z = 100,40,18
splineA.pt4.x,y,z = 100,60,18
splineA.pt5.x,y,z = 100,80,20
splineA.pt6.x,y,z = 100,100,28

splineB.pt1.x,y,z = -100,0,24
splineB.pt2.x,y,z = -100,20,22
splineB.pt3.x,y,z = -100,40,32
splineB.pt4.x,y,z = -100,60,10
splineB.pt5.x,y,z = -100,80,15
splineB.pt6.x,y,z = -100,100,30

ruledsurface.edge1 = splineA
ruledsurface.edge2 = splineB
ruledsurface.ystart = 0
ruledsurface.yend = 100
So what we are doing here is:
1) defining our units of measure (millimeters)
2) fitting a cubic spline "splineA" to a series of points (a way of specifying a smooth curve)
3) fitting another cubic spline "splineB" to another series of points
4) defining a surface which is constructed by sliding a rule (straight-edge) along both splines simultaneously. So, with the data provided, the ruler will run from y=0 to y=100 and each ruling will stay parallel to the x/z plane.

We now have a smooth, ripply 3D surface defined by 12 3D points - plus a lot of context information. It's that context information, specifying "spline" or "ruled surface", that allows general-purpose software to reconstruct exactly what you want.
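As a rough illustration of what that reconstruction looks like, the sketch below evaluates one point on such a ruled surface. To keep it short it interpolates the listed control points linearly rather than fitting true cubic splines, so it shows the idea of "slide a straight-edge between two curves", not how CADDS actually evaluated the surface:
Code:
#include <cstdio>

struct Pt { double x, y, z; };

// Linear interpolation between two points.
Pt lerp(Pt a, Pt b, double t) {
    return {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
}

int main() {
    // The same control points as above (units: mm).
    Pt splineA[6] = {{100,0,15},{100,20,25},{100,40,18},{100,60,18},{100,80,20},{100,100,28}};
    Pt splineB[6] = {{-100,0,24},{-100,20,22},{-100,40,32},{-100,60,10},{-100,80,15},{-100,100,30}};

    double u = 0.35;                     // position along the splines, 0..1 (kept below 1.0 here)
    double t = 0.5;                      // position along the ruling line, 0..1
    int seg = static_cast<int>(u * 5);   // which of the 5 segments we are in
    double f = u * 5 - seg;              // fraction within that segment

    Pt a = lerp(splineA[seg], splineA[seg + 1], f);  // point on curve A
    Pt b = lerp(splineB[seg], splineB[seg + 1], f);  // point on curve B
    Pt p = lerp(a, b, t);                // slide the straight-edge between the curves

    std::printf("surface point: (%.1f, %.1f, %.1f) mm\n", p.x, p.y, p.z);
    return 0;
}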

Once you have that surface, you understand how you could generate a 2D picture that can be displayed or printed to paper.

But, depending on the application, that may not be what is most important...

Let's say we specify a CNC milling machine (things like its command structure, units of measure, bed dimensions, tool home locations, etc.), and a tool (cross-section/shape, coolant requirements, etc.), and now we want to generate a list of instructions to that machine that will cut that surface from a block of aluminum.
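Very roughly, those instructions come out as something like G-code: position the tool, feed it along the surface, repeat. The toy sketch below prints a single pass along the y = 40 mm ruling of the surface above. The codes used (G21 for millimetre units, G0 for rapid moves, G1 for cutting moves) are common ones, but every controller has its own dialect, and a real tool path would also have to account for cutter radius, stepover, clearance heights, and so on:
Code:
#include <cstdio>

// Emit a toy tool path: one pass along the ruling at y = 40 mm, sampled at a
// few x positions, with z blended between the two spline heights at y = 40.
int main() {
    const double zA = 18.0;   // splineA height at y = 40 (x = +100), from the data above
    const double zB = 32.0;   // splineB height at y = 40 (x = -100)

    std::printf("G21             ; units are millimetres\n");
    std::printf("G0 Z50.0        ; rapid up to a safe height\n");
    std::printf("G0 X100.0 Y40.0 ; rapid over the start of the pass\n");
    for (int i = 0; i <= 4; ++i) {
        double t = i / 4.0;               // 0 at x = +100, 1 at x = -100
        double x = 100.0 - 200.0 * t;
        double z = zA + (zB - zA) * t;    // straight ruling between the splines
        std::printf("G1 X%.1f Y40.0 Z%.1f F300 ; cutting move\n", x, z);
    }
    std::printf("G0 Z50.0        ; retract\n");
    return 0;
}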

So you have one data set that describes the geometry you are creating and other data sets describing camera set-ups and CNC machines.

That said, there may be times when all you have is a point cloud. In those cases, you will need to discover the surfaces (little triangles) before you can do the kind of rendering (display, CNC machining, whatever) we have been discussing. For a dense cloud, creating those triangles is often easy: the points hook up to the other points that are closest to them. So for each point, find its closest neighbors in each of, say, 6 directions, then work out the surfaces based on whatever heuristic method works for you.
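Here is a brute-force sketch of the "find each point's closest neighbours" step. Real tools use spatial structures such as k-d trees or grids so the search does not cost O(n^2) per point, and the heuristic that turns neighbours into triangles is glossed over entirely:
Code:
#include <algorithm>
#include <cstdio>
#include <vector>

struct Pt { double x, y, z; };

// Return the indices of the k points in the cloud nearest to point i,
// by sorting all other points by squared distance (brute force).
std::vector<int> nearest(const std::vector<Pt>& cloud, int i, int k) {
    std::vector<int> idx;
    for (int j = 0; j < static_cast<int>(cloud.size()); ++j)
        if (j != i) idx.push_back(j);

    auto d2 = [&](int j) {
        double dx = cloud[j].x - cloud[i].x;
        double dy = cloud[j].y - cloud[i].y;
        double dz = cloud[j].z - cloud[i].z;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(idx.begin(), idx.end(), [&](int a, int b) { return d2(a) < d2(b); });
    if (static_cast<int>(idx.size()) > k) idx.resize(k);
    return idx;
}

int main() {
    // A tiny made-up cloud: four corners of a square plus a point above it.
    std::vector<Pt> cloud = {{0,0,0},{1,0,0},{0,1,0},{1,1,0},{0.5,0.5,1}};
    for (int n : nearest(cloud, 4, 3))
        std::printf("neighbour of point 4: %d\n", n);
    return 0;
}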
 
  • Like
Likes Raj Harsh
  • #38
Mark44 said:
The pixels aren't created -- they are built into the monitor.
By pixel, I meant the display information for a point, which is usually represented by the physical form of a pixel. I was somewhat referring to the digital form, because even vector art is ultimately represented by pixels, the physical type, in display units.

Mark44 said:
Just to be clear, a surface is not actually created, but rather, memory is modified so that when it is displayed on a monitor, the resulting image appears to be a surface. Of course, if you want to see the image, you need a monitor.
That is basically what I asked for confirmation on, and I have said it again in my previous post.
Mark44 said:
This doesn't make a lot of sense to me. On the one hand, there are the points that form the framework of the object, and these have nothing to do with pixels. There are other concepts, such as pixel shaders, meshes, textures and others, that are used to transform the framework of an object to what you see on a monitor (the pixels).
I am well aware of shaders and meshes, as I studied them when I was learning digital modelling. I used the word framework/armature as an indicator of the points, but also of the entire frame; the frame of a cube is an empty cube with only its borders. If I give you, say, a plank, you can connect the top corner to the bottom corner. So in this case, I think my question was about how the computer knows how to create the surface when it only works with points. I know I did say that computers store the information about how those points connect as well, but my question was actually aimed at tessellation. In computers, models contain the information about the points, and then tessellation takes care of the rest by simply reading the information on how the points are connected and what surface needs to be created, and by illuminating the display unit with the desired and required light source, to the shade defined by the modeller, in accordance with the environment and its behaviour and reaction to light.

I think this has been rendered moot, since I have arrived at the conclusion that it is not actually 3D but 2D. But I still think it should be hard for computers to work out the connection of the points on their own if the instructions are not clear. Connecting in a linear fashion: connect the top-left point to the top-right, then the top-right to the bottom-right, then the bottom-right to the bottom-left, and then back again to the top-left. But this is where it can get tricky. It is supposed to connect the front two points to the bottom two in the front, and the top points in the back only to the bottom points on the back side, but the computer usually (most often, although I have seen this merge function, as well as others, fail at times) knows by itself what it needs to connect to round out the model. It can make use of deletion of overlap and follow the same linear path from all the sides. It does not follow a linear progression model, but it handles tessellation very well in most cases. That is what I wanted to learn about.
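For what it is worth, a modelling program normally does not have to guess: the connectivity is stored explicitly alongside the points as an indexed mesh, and tessellation/rendering just walks that list. A minimal sketch (the particular vertex order and triangle winding here are only illustrative):
Code:
#include <cstdio>

// A cube stored as an indexed mesh: 8 shared corner positions plus a list of
// triangles, each triangle being three indices into the vertex array. The
// connectivity ships with the model; the renderer never has to work it out.
int main() {
    float vertices[8][3] = {
        {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1},   // back face corners
        {-1,-1, 1}, {1,-1, 1}, {1,1, 1}, {-1,1, 1}    // front face corners
    };
    int triangles[12][3] = {
        {0,1,2},{0,2,3},  {4,6,5},{4,7,6},   // back, front
        {0,4,5},{0,5,1},  {3,2,6},{3,6,7},   // bottom, top
        {0,3,7},{0,7,4},  {1,5,6},{1,6,2}    // left, right
    };
    for (int t = 0; t < 12; ++t)
        std::printf("triangle %2d connects points %d, %d and %d\n",
                    t, triangles[t][0], triangles[t][1], triangles[t][2]);
    std::printf("first corner of triangle 0 is at (%.0f, %.0f, %.0f)\n",
                vertices[triangles[0][0]][0],
                vertices[triangles[0][0]][1],
                vertices[triangles[0][0]][2]);
    return 0;
}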
 
  • #39
If you really want to see what's going on, learn to program it yourself. I can recommend the excellent series "Games With Go", which is currently appearing on Twitch and is archived on YouTube. https://gameswithgo.org/

It goes all the way from simple text adventures to full 3D graphics.

--hendrik
 
Last edited:
  • #40
Hendrik Boom said:
If you really want to see what's going on, learn to program it yourself. I can recommend the excellent series "Games With Go", which is currently appearing on Twitch and is archived on YouTube. https://gameswithgo.org/

It goes all the way from simple text adventures to full 3D graphics.

--hendrik
Thank you for the suggestion. I will look into it.
 
  • #41
Raj Harsh said:
Thank you for the suggestion. I will look into it.
He's an entertaining lecturer even though he just sits there, programs, and talks about what he's doing. You don't have to pay the ten dollars or so for access to his github archive of all the code; you can just type everything in off the video, though I've found myself wondering sometimes what kind of brackets he's using.

He uses Visual Studio on Windows; I use Emacs and command line in Linux. I can assist with setup if you use Linux too.

Above all, have fun.
 
  • #42
Hendrik Boom said:
He's an entertaining lecturer even though he just sits there, programs, and talks about what he's doing. You don't have to pay the ten dollars or so for access to his github archive of all the code; you can just type everything in off the video, though I've found myself wondering sometimes what kind of brackets he's using.

He uses Visual Studio on Windows; I use Emacs and command line in Linux. I can assist with setup if you use Linux too.

Above all, have fun.

Thank you very much, but I use Windows and I am more familiar with Visual Studio's interface (toolbars etc.), so I will stick with that. I have yet to start my studies in graphics programming. I am also wondering which API would be good to begin with. OpenGL was the mainstream API when I was growing up, but then came DirectX, and now everything runs on DirectX; there is also another library called Vulkan on the rise. I have tried searching for books on computer graphics programming, but all of them make use of one API or the other. The knowledge may be transferable, but it would be better to focus on one.
 
  • #43
Raj Harsh said:
Thank you very much, but I use Windows and I am more familiar with Visual Studio's interface (toolbars etc.), so I will stick with that. I have yet to start my studies in graphics programming. I am also wondering which API would be good to begin with. OpenGL was the mainstream API when I was growing up, but then came DirectX, and now everything runs on DirectX; there is also another library called Vulkan on the rise. I have tried searching for books on computer graphics programming, but all of them make use of one API or the other. The knowledge may be transferable, but it would be better to focus on one.

I've used DirectX and OpenGL. I prefer OpenGL, but I can't recommend which one to use. I suppose it depends on what exactly you are doing and who are the intended users. Of course I don't know anything about the future of either API.

One reason I like the Angell book I mentioned is that he concentrates on the basic concepts and mathematics of computer graphics, and he does not lock his explanations to a single API. On the contrary, he uses a small set of graphics primitives which can be implemented by whichever API you want to use. He gives a few examples of implementing the primitives. I think that is a good approach for learning, because you can focus on the math and not need to worry about mastering the huge DirectX or OpenGL API at the same time.

Since people are discussing development environments, the one I've settled on for my private efforts is the following. I use Code::Blocks as my IDE. I use MinGW for compiling. I develop on my Windows 10 machine because right now it's all I have. Setting up a Linux system is on my to do list. I prefer Linux to Windows for development purposes, but again everyone's needs are different.

I've also used Visual C++. What can I say? I don't want to seem like a MS-basher, but I will say that some people really need to read Wirth's essay "A Plea for Lean Software" and take its message to heart. I would say that about everyone, not just MS.

Here are links to what I like to use.

http://www.mingw.org

http://www.codeblocks.org

As long as I'm recommending stuff, I like this simple text editor for when I don't need to use Code::Blocks.

https://www.notetab.com

For document preparation, I use LaTeX.

https://www.texstudio.org
 
  • Like
Likes Raj Harsh
  • #44
Aufbauwerk 2045 said:
I've used DirectX and OpenGL. I prefer OpenGL, but I can't recommend which one to use. I suppose it depends on what exactly you are doing and who are the intended users. Of course I don't know anything about the future of either API.

One reason I like the Angell book I mentioned is that he concentrates on the basic concepts and mathematics of computer graphics, and he does not lock his explanations to a single API. On the contrary, he uses a small set of graphics primitives which can be implemented by whichever API you want to use. He gives a few examples of implementing the primitives. I think that is a good approach for learning, because you can focus on the math and not need to worry about mastering the huge DirectX or OpenGL API at the same time.

Since people are discussing development environments, the one I've settled on for my private efforts is the following. I use Code::Blocks as my IDE. I use MinGW for compiling. I develop on my Windows 10 machine because right now it's all I have. Setting up a Linux system is on my to do list. I prefer Linux to Windows for development purposes, but again everyone's needs are different.

I've also used Visual C++. What can I say? I don't want to seem like a MS-basher, but I will say that some people really need to read Wirth's essay "A Plea for Lean Software" and take its message to heart. I would say that about everyone, not just MS.

Here are links to what I like to use.

http://www.mingw.org

http://www.codeblocks.org

As long as I'm recommending stuff, I like this simple text editor for when I don't need to use Code::Blocks.

https://www.notetab.com

For document preparation, I use LaTeX.

https://www.texstudio.org

I wonder what leads to the requirement for, and development of, a new API. Could they not modify/update the old one to meet today's requirements and standards? I have seen more OpenGL code than DirectX code, and the library is very neat. I am seesaw-battling, but I think I will first learn a little OpenGL, as it is older than DirectX, and then move to Microsoft's API because it is much more widely used today, or study both to keep my options open. Their functionality is more or less the same. I think I am most likely to go towards game development, which is why I learned modelling and a little bit of VFX, and I do not want that to go to waste.

I could not find the exact book you mentioned, but I have found other books by the author, one with a similar title: High Resolution Computer Graphics Using C. I will get it. Hopefully I will finish it, because many of my books end up on the shelf after a few days, collecting dust.

Thank you for the recommendations. I have chosen Windows because I run many applications on my computer, and video games too, and Linux is not as suitable for that. I may buy a new system with Ubuntu, but not now. And, as the saying goes, to each their own. I do not have any issues with Visual C++, which is only available as part of Visual Studio now, I think. I do not like that Microsoft's offline installer is also an online installer, which you download in order to download the setup. I currently do not care much about other things; my perspective may change once I step out of the learning phase. I have used Code::Blocks too, though I do not use it currently, but I do like that it is lightweight.

Coincidentally, I read an excerpt from that not too long ago. I only very recently updated to Windows 10, because I did not like the idea of being aware of Microsoft's data collection from my system and, despite that, assenting to it simply to use their product. I can turn off certain settings, but we have to accept their terms. There is also Cortana, which I never use; it is basically bloatware and should have been optional. I think Cortana is a customised version of Nuance's Dragon, from the NaturallySpeaking series. But what upset me the most is that Microsoft's old and simple Photo Viewer was nowhere to be seen, and when you install an application and choose to make it the default for certain extensions, it does not work. Seeing people complain about it on forums is what led me to that excerpt, which said that there has not been much advancement in the functionality of software, but that aesthetics have massively increased the size and the prices. And I do agree with it; it is something I have thought of too. Apple and Google both release updates for their mobile OS frequently, but the changes are very minor: they change the menus and the arrangement, but most of the features stay the same. And the design today is not very good. People are keen on making everything look futuristic, which to them means sharp, crisp and a little dull. There was a time when the design was well rounded, quite literally, as the edges were smoothed out; the colour palette was big, and they tried to add depth to each design, whereas today it is flat. Take Windows, for example. The Aero design is much better than the current design of Windows 8 and Windows 10, so the change was definitely not for the better, and it was unnecessary. Windows 10's design is even flatter. Regardless of whether people agree or not, the current design is not much different from the Windows 98 or Classic Windows layout, which is better because it can help one save memory; many people switch to it on their workstations.
 
  • Like
Likes Aufbauwerk 2045
  • #45
Raj Harsh said:
I wonder what leads to the requirement for, and development of, a new API. Could they not modify/update the old one to meet today's requirements and standards? I have seen more OpenGL code than DirectX code, and the library is very neat. I am seesaw-battling, but I think I will first learn a little OpenGL, as it is older than DirectX, and then move to Microsoft's API because it is much more widely used today, or study both to keep my options open. Their functionality is more or less the same. I think I am most likely to go towards game development, which is why I learned modelling and a little bit of VFX, and I do not want that to go to waste.

I could not find the exact book you mentioned, but I have found other books by the author, one with a similar title: High Resolution Computer Graphics Using C. I will get it. Hopefully I will finish it, because many of my books end up on the shelf after a few days, collecting dust.

Thank you for the recommendations. I have chosen Windows because I run many applications on my computer, and video games too, and Linux is not as suitable for that. I may buy a new system with Ubuntu, but not now. And, as the saying goes, to each their own. I do not have any issues with Visual C++, which is only available as part of Visual Studio now, I think. I do not like that Microsoft's offline installer is also an online installer, which you download in order to download the setup. I currently do not care much about other things; my perspective may change once I step out of the learning phase. I have used Code::Blocks too, though I do not use it currently, but I do like that it is lightweight.

Coincidentally, I read an excerpt from that not too long ago. I only very recently updated to Windows 10, because I did not like the idea of being aware of Microsoft's data collection from my system and, despite that, assenting to it simply to use their product. I can turn off certain settings, but we have to accept their terms. There is also Cortana, which I never use; it is basically bloatware and should have been optional. I think Cortana is a customised version of Nuance's Dragon, from the NaturallySpeaking series. But what upset me the most is that Microsoft's old and simple Photo Viewer was nowhere to be seen, and when you install an application and choose to make it the default for certain extensions, it does not work. Seeing people complain about it on forums is what led me to that excerpt, which said that there has not been much advancement in the functionality of software, but that aesthetics have massively increased the size and the prices. And I do agree with it; it is something I have thought of too. Apple and Google both release updates for their mobile OS frequently, but the changes are very minor: they change the menus and the arrangement, but most of the features stay the same. And the design today is not very good. People are keen on making everything look futuristic, which to them means sharp, crisp and a little dull. There was a time when the design was well rounded, quite literally, as the edges were smoothed out; the colour palette was big, and they tried to add depth to each design, whereas today it is flat. Take Windows, for example. The Aero design is much better than the current design of Windows 8 and Windows 10, so the change was definitely not for the better, and it was unnecessary. Windows 10's design is even flatter. Regardless of whether people agree or not, the current design is not much different from the Windows 98 or Classic Windows layout, which is better because it can help one save memory; many people switch to it on their workstations.

Sorry, I did not quote the exact title. I haven't used the book for a while. Here are the correct details.

Ian O. Angell, High-resolution Computer Graphics in C. New York: John Wiley & Sons. (Halsted Press).

ISBN: 0-470-21634-4

I think Windows was improving for a few years, but I really did not like Vista, and I strongly dislike Windows 10.

For an alternative, which it seems few people are using, there is Oberon, designed by Wirth. At least he shows how to design good software.
 
  • #46
Raj Harsh said:
I wonder what leads to the requirement for, and development of, a new API. Could they not modify/update the old one to meet today's requirements and standards?

Sure, but at some point that gets harder, messier, and involves ugly compromises, and the best way forward becomes a clean break. One could argue that this was done with OpenGL 2.0 (OpenGL 1.4 and earlier are very different). OpenGL is old: it has been more than 20 years since the release of OpenGL 1.1 and 14 years since OpenGL 2.0, and a lot has happened with CPUs/GPUs in those years. So a few years ago the people behind OpenGL (the Khronos Group) released an updated API that again broke with the old one. They could have called it OpenGL 5.0 (originally it was the "Next Generation OpenGL Initiative", or "OpenGL Next"), but they decided to name it Vulkan.
 
  • Like
Likes Aufbauwerk 2045
  • #47
Vulkan is quite new, and one needs very strong reasons to use it; otherwise, OpenGL is still just fine.

By the way, you may want to look over a couple of projects of mine, described here:

https://compphys.go.ro/Newtonian-gravity/
https://compphys.go.ro/event-driven-molecular-dynamics/

Both contain very similar OpenGL code (in fact, I simply carried the code from the first project over to the second one); the difference is that in the second project I added instancing. Of course, the GL shader programs are different between the two, but a lot of code is common.
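For anyone wondering what "added instancing" amounts to in OpenGL terms, the gist is sketched below. This is not code from those repositories: it is a fragment that assumes a current OpenGL 3.3+ context, a bound VAO for the particle mesh, and a shader that reads a per-instance transform, and the names transforms, program, vertexCount, instanceBuffer and uploadModelMatrix are placeholders:
Code:
// Without instancing: one draw call per particle (slow when there are many).
for (std::size_t i = 0; i < transforms.size(); ++i) {
    uploadModelMatrix(program, transforms[i]);    // hypothetical uniform-upload helper
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}

// With instancing: upload every per-particle transform once, then issue a
// single draw call; each instance picks its own matrix via an instanced
// vertex attribute (set up elsewhere with glVertexAttribDivisor) or gl_InstanceID.
glBindBuffer(GL_ARRAY_BUFFER, instanceBuffer);
glBufferData(GL_ARRAY_BUFFER,
             transforms.size() * sizeof(glm::mat4),
             transforms.data(), GL_DYNAMIC_DRAW);
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount,
                      static_cast<GLsizei>(transforms.size()));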

The projects are open source and available on GitHub. Hopefully they can help. You should be able to compile them with the latest Visual Studio.
 
  • #49
Raj Harsh said:
Thank you very much, but I use Windows and I am more familiar with Visual Studio's interface (toolbars etc.), so I will stick with that. I have yet to start my studies in graphics programming. I am also wondering which API would be good to begin with. OpenGL was the mainstream API when I was growing up, but then came DirectX, and now everything runs on DirectX; there is also another library called Vulkan on the rise. I have tried searching for books on computer graphics programming, but all of them make use of one API or the other. The knowledge may be transferable, but it would be better to focus on one.
OpenGL exists on a variety of platforms; DirectX is Microsoft only. So if you or your users might ever want to run your code on, say, Linux, you'd want to be using OpenGL.

Vulkan is new. I'm not familiar with it, and I don't know how widely available it is.
 