Combinist: Unlimited Detail Object Loading

  • Thread starter Aquariouse
  • Start date
In summary, our computer programs load only what is in our line of sight, or what is within the range of our sensors.
  • #1
Aquariouse
I came across an interesting debate over whether one could load endless amounts of three-dimensional graphics with atomic precision. As a combinist, I will attempt to explain my theoretical views on the situation. To date, any three-dimensional graphics interface (excluding hypothetical attempts) loads your full three-dimensional area. After the program loads, all objects in range appear on your screen for you to view. Traditionally this method limits the amount of detail in an object, because the more detail a three-dimensional object has, the bigger the file size and the more your program has to load. What our computer program loads is based on which objects are in range of the user, and on their level of detail.

Observe your computer, or any object in three-dimensional space: what proof do you have that the "back" of that object exists, or that the particles and laws of physics at the points you cannot observe are not having a party without your notice? We, as observers, do not know whether such activity is occurring, because our brain loads only the two-dimensional, flat view of the three-dimensional scene it detects. Our brain, having adapted to three-dimensional space, uses a complex system of equations to infer the objects that are out of view. Nearly every time an image is displayed, our brain processes a near-constant amount of data: the photons reflected from the atoms within your field of view. Ultimately your brain loads a "bitmap" of near-constant size over and over again, using a series of complex equations.

Why buy a 4x4x4 block of metal, which costs more, over a 4x4 sheet, which costs less, when you are only going to use the front face of the object? Loading a full three-dimensional object onto a two-dimensional viewing plane adds excessive, fluctuating data that could easily overflow a PC's memory. The human brain, a viewer of three-dimensional space, only displays the two-dimensional projection of the three-dimensional objects it sees, so why load more than you will ever use?

A program loading a single bitmap which it constantly modifies would require little to no user-end graphics processing, and would be easily scalable. The program would first need to gather information, using a system of equations, about all the two-dimensional parts (derived from the three-dimensional parts) visible in the program's "line of sight" for every pixel on your screen. Using a few more equations, one could calculate which pixels need to be shifted and which need to be completely overwritten. This way the program would load a single "bitmap" file of a set size, based on the user's screen ratio, and constantly edit it without changing the size of the file, only the placement of the data. A minimal sketch of this idea follows.
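Here is a minimal sketch of that idea, assuming a toy scene of spheres and a crude per-pixel ray cast; the scene data, resolution, and helper names are my own illustration, not anything from a real engine. The point it tries to show is that the framebuffer never grows, no matter how complicated the scene behind each pixel is.

```python
# Minimal sketch: a fixed-size framebuffer filled by casting one ray per pixel.
# The scene (a few spheres) and all names here are illustrative assumptions.
import math

WIDTH, HEIGHT = 80, 60                      # fixed "bitmap" size, set once
framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

# Scene: list of (center, radius, colour) spheres.
spheres = [((0.0, 0.0, 5.0), 1.0, (255, 0, 0)),
           ((1.5, 0.5, 7.0), 1.0, (0, 255, 0))]

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render():
    """Overwrite the same fixed-size framebuffer every frame."""
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Map the pixel to a viewing direction (a crude pinhole camera).
            dx = (x / WIDTH - 0.5) * (WIDTH / HEIGHT)
            dy = 0.5 - y / HEIGHT
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            direction = (dx / length, dy / length, 1.0 / length)
            colour, nearest = (20, 20, 40), float("inf")   # background
            for center, radius, sphere_colour in spheres:
                t = ray_sphere((0.0, 0.0, 0.0), direction, center, radius)
                if t is not None and t < nearest:
                    nearest, colour = t, sphere_colour
            framebuffer[y][x] = colour    # data moves, but the file size never changes

render()
```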

Simplicity happens to be the key?
 
  • #2
Hey Aquariouse and welcome to the forums.

For the problem you are describing, the answer is both yes and no.

There is support for what is known as parametric surfaces. These surfaces are mathematical objects like Bezier and Spline curves/surfaces that have an infinite amount of geometric detail.
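As a rough illustration of what "infinite geometric detail" means here: a cubic Bezier curve is defined by just four control points, yet it can be evaluated at any parameter value you like. The control points and sample counts below are arbitrary assumptions.

```python
# Sketch: a cubic Bezier curve stores only 4 control points, yet can be
# sampled at any resolution, so the geometry has no fixed detail level.
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate the cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Example: sample the same four control points at whatever density you need.
control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
coarse = [bezier_point(*control, t=i / 10) for i in range(11)]      # 11 samples
fine   = [bezier_point(*control, t=i / 1000) for i in range(1001)]  # 1001 samples
```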

The hardware is then able to render these objects with an amazing amount of accuracy, and as hardware gets better the speed/accuracy trade-off will improve a lot.

Having said this it is important to understand the nature of the graphics pipeline and its limitations.

When you actually see the results of a game or a rendering application (like when you develop movies with 3DSMax or similar on a PC or set of PCs) you are usually seeing things that go through the standard rendering pipeline that is used by many common graphics cards.

Graphics cards render things by using a vertex and a fragment pipeline. The vertex pipeline deals with point data and the fragment pipeline deals with pixels.

Graphics cards render triangles with texture maps (think of a bitmap). Basically, in the fragment pipeline you have a set of texture maps, and things are done mostly by performing "pixel" operations in parallel.

This model can be executed in parallel, which is exactly what is done in practice, so as graphics cards add more processors and increase their speed, you get amazing jumps in performance and graphics capability.
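To give a feel for what "pixel operations in parallel" means, here is a toy fragment-style loop that samples a texture per pixel; each pixel's result depends only on its own inputs, which is why the work parallelizes so well. The texture, resolution, and names are assumptions for illustration, not any real card's API.

```python
# Sketch: a fragment-style pass where every pixel is computed independently,
# so the same work could be spread across many processors in parallel.
TEX_W, TEX_H = 4, 4
# A tiny checkerboard "texture map" (grayscale values 0-255).
texture = [[255 if (u + v) % 2 == 0 else 0 for u in range(TEX_W)] for v in range(TEX_H)]

def sample_texture(u, v):
    """Nearest-neighbour lookup with u, v in [0, 1), like a point-sampled texel fetch."""
    x = min(int(u * TEX_W), TEX_W - 1)
    y = min(int(v * TEX_H), TEX_H - 1)
    return texture[y][x]

def shade_pixel(x, y, width, height):
    """The 'fragment program': depends only on this pixel's own coordinates."""
    u, v = x / width, y / height
    return sample_texture(u, v)

WIDTH, HEIGHT = 16, 16
# On a GPU, every call below would run on its own shader core at the same time.
image = [[shade_pixel(x, y, WIDTH, HEIGHT) for x in range(WIDTH)] for y in range(HEIGHT)]
```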

But of course the downside is that these work in triangles. The point of this, in the context of your question, is that if you want to add more detail, you need more triangles or you need to use "tricks" via your texture maps. The texture-map tricks give the "illusion" of more detail, but they are constrained by the number of effects that can be done and don't actually represent the underlying "geometry" data.

Also, if the texture is small enough, you will see what happens when you try to view objects up close (i.e. you see pixelation and other similar artifacts). One way to counter this is to use larger textures (which is done), but then you have issues with memory (graphics cards have limited memory) as well as performance issues from transferring data to the graphics card every time you need to render, if memory pressure forces you to do so.
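As a quick back-of-the-envelope illustration of why larger textures hit memory limits, here is the arithmetic for an uncompressed RGBA texture; the sizes are arbitrary examples.

```python
# Sketch: memory cost of uncompressed RGBA (4 bytes per texel) textures.
def texture_bytes(width, height, bytes_per_texel=4):
    return width * height * bytes_per_texel

for side in (1024, 4096, 16384):
    mb = texture_bytes(side, side) / (1024 * 1024)
    print(f"{side} x {side} RGBA texture: {mb:.0f} MiB")
# 1024x1024 -> 4 MiB, 4096x4096 -> 64 MiB, 16384x16384 -> 1024 MiB,
# so a handful of very large textures can exhaust a card's memory.
```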

Also, in the context of your question, there are actually a lot of 'tricks' employed in rendering to avoid extra computation wherever possible. But the important thing is to be aware of how rendering is done on specific architectures. Once you understand this, you will see the limitations soon enough.

There are other architectures that are used (including ones that do ray-tracing), but the simple vertex/fragment pipeline model is the most popular because it is well understood, has very good performance, and leaves room for implementing all kinds of cool things without having to change the underlying architecture (so people don't have to keep redesigning it).
 
  • #3
Euclideon is a company claiming to have 'unlimited detail' rendering. Is it possible in real time?

Here are some videos they have posted on YouTube:



Better-quality 3D simulations and games are always wanted, but are these guys fake or not? :D

Is it possible to have a 3D environment that someone can navigate in real time, where they can fly from, say, Earth to Mars without the loading screens that are currently used?

Freelancer, a good space game published by Microsoft, had you flying around in spaceships and navigating to other star systems, but it still used loading screens.
http://en.wikipedia.org/wiki/Freelancer_(video_game)

There is also the issue of coordinate limits. If you store positions as integers, going past the maximum value wraps around, so when you go far enough positive you end up negative again. If you store them as floats, the precision gets worse the farther you get from the origin, so movement and collisions become jittery. Even Minecraft has these limits in mind. When you are working with distances, you hit these overflow and precision problems sooner the closer you are to the edge, so you more or less have to limit your playing field to a cube that is quite a bit smaller than what the number format can theoretically represent.
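A small way to see the floating-point side of this: 32-bit floats can only represent values on a grid whose spacing grows with magnitude, so far from the origin you can no longer take small steps. The distances below are arbitrary examples.

```python
# Sketch: single-precision floats lose the ability to represent small steps
# far from the origin, which is one reason huge seamless worlds are hard.
import struct

def to_float32(x):
    """Round a Python float to the nearest representable 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

for position in (1.0, 1000.0, 1000000.0, 100000000.0):
    stepped = to_float32(to_float32(position) + 0.001)   # try to move by 0.001
    print(position, "->", stepped - to_float32(position))
# Near the origin the 0.001 step survives; millions of units out it is rounded
# away entirely, so an object that far from the origin cannot move in small steps.
```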

Portal 2 and a few other games design their maps with rooms stacked on top of each other to make use of as much space as possible, and also because they designed a whole complex. Load into some maps and noclip out and you can see this. Even the boss level had a box just for the space part.
http://photos-d.ak.fbcdn.net/hphotos-ak-snc6/247198_1602092912753_1849005320_1063111_4777788_a.jpg
Duke Nukem Forever also made use of this: if you are on the first level of the demo and you noclip through the ground far enough, you get to the mansion. Not only is the mansion preloaded while you are playing the intro to the game, it is also buried deep "underground" beneath the playing field of the demo area. Smart use of 3D space, though the rain will also fall inside the mansion if you noclip down.

To have a game that can span out as much as you want, it seems you still have to put limits on the playing field. You cannot go on forever in any direction. Even polygon-based 3D graphics use levels of detail, so things are more detailed when you are close to them than when you are far away. So what's up with Euclideon's claims? Real or fake? Notch and John Carmack also had things to say about their claims.
 
  • #4
Hey LiamMitchell and welcome to the forums.

Thanks for that post and the links, I really appreciated it.
 
  • #5
Thanks chiro,

What are your thoughts on the claims by Euclideon?
I think it's marketing hype if it is not an outright hoax, because computers are limited by resources.

With "real time" games we try to get everything working with short cuts to provide the user with a gaming experience rather than them lagging with all the calculations between steps.
 
  • #6
Thanks for the great criticism, guys; it is always appreciated. Though the initial reply to my thread appears to have misunderstood my theory. Take out your camera and set it to capture photos, not record. Look around your room and you will notice that whether you view complex objects or simple objects, there is not a single change in loading speed. How does your camera allow you to view "unlimited detail" objects? The answer is simply that your camera, like the human eye, never loads the object; it only processes the photons it receives into a static buffer of constant size. Your camera then writes a "2D" file based on the photons processed, and this 2D file has a constant size, just like the buffer. Since you are loading the exact same size "2D file" every time, your camera can view a highly detailed object at the same speed as a low-detail object. Simply take this concept and use advanced algorithms to trace imaginary "virtual photons" through an object file.
 
  • #7
LiamMitchell said:
Thanks chiro,

What are your thoughts on the claims by Euclideon?
I think it's marketing hype if it is not an outright hoax, because computers are limited by resources.

With "real time" games we try to get everything working with short cuts to provide the user with a gaming experience rather than them lagging with all the calculations between steps.

I don't want to make a hasty comment saying that it's a hoax.

The thing is that, by what he is explaining, he is using a completely different paradigm for rendering. From what he is saying, it actually makes more sense to use a 'search algorithm' to get the data than to just throw all of the data at the GPU, which is basically what goes on in polygon rendering.

In fact the search algorithm makes a lot of sense, because you can reuse it for things like collision detection, and if you use the right data structures, animation is going to be a breeze.

You have to remember that while computers have limitations, you have to view these limitations in the context of the limitations of the actual hardware and not in the context of a computing model or set of algorithms.

To think about this practically, consider that you, along with millions and millions of other people, are able to search Google's database of websites in under a second. Also remember that the algorithms they use are designed for scaling, so that as the number of users goes up and the amount of data to search goes up, the system doesn't fall over like you would expect in a more classical design.

This is the kind of analogy I see with this model of rendering.

The biggest thing that would really give away the secrets is the data structures used. He hints that he is using 'rendering atoms', but the big thing would be how they are generated, how they are represented, and how the search algorithm makes use of these structures.

From past experience, I have a feeling that what he is doing is using a spatial classification system to represent the 'atoms' and then, using that structure, running a search algorithm to get the actual data when it comes time to render. By optimizing the classification structures and the algorithms for rendering applications, you get something which is able to render scenes with lots of detail without having to touch large amounts of data.
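To make the "spatial classification plus search" idea concrete, here is a toy sparse octree that stores point "atoms" and answers a lookup by descending only the occupied branches, so the cost grows with tree depth rather than with the total number of atoms. The structure, names, and parameters are my own guesses for illustration, not Euclideon's actual format.

```python
# Sketch: a sparse octree over the unit cube [0,1)^3 storing point "atoms".
class OctreeNode:
    def __init__(self):
        self.children = {}   # maps child index 0-7 -> OctreeNode
        self.atom = None     # a stored (x, y, z, colour) at leaf level

MAX_DEPTH = 8

def child_index(x, y, z, cx, cy, cz):
    """Which octant of the cell centred at (cx, cy, cz) contains the point."""
    return (x >= cx) | ((y >= cy) << 1) | ((z >= cz) << 2)

def insert(node, x, y, z, colour, cx=0.5, cy=0.5, cz=0.5, half=0.25, depth=0):
    if depth == MAX_DEPTH:
        node.atom = (x, y, z, colour)
        return
    i = child_index(x, y, z, cx, cy, cz)
    if i not in node.children:
        node.children[i] = OctreeNode()
    insert(node.children[i], x, y, z, colour,
           cx + (half if x >= cx else -half),
           cy + (half if y >= cy else -half),
           cz + (half if z >= cz else -half),
           half / 2, depth + 1)

def find(node, x, y, z, cx=0.5, cy=0.5, cz=0.5, half=0.25, depth=0):
    """Search: walk down occupied branches only; never scans all atoms."""
    if depth == MAX_DEPTH or not node.children:
        return node.atom
    i = child_index(x, y, z, cx, cy, cz)
    if i not in node.children:
        return None
    return find(node.children[i], x, y, z,
                cx + (half if x >= cx else -half),
                cy + (half if y >= cy else -half),
                cz + (half if z >= cz else -half),
                half / 2, depth + 1)

root = OctreeNode()
insert(root, 0.3, 0.7, 0.2, colour=(200, 120, 40))
print(find(root, 0.3, 0.7, 0.2))   # returns the stored atom in ~MAX_DEPTH steps
```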

We actually do this kind of thing with games, but nowhere near as efficiently as we could with a method like this.

Usually the spatial classification systems are used to cull areas of the scene that are not visible and then only things within the scene that are 'visible' are thrown towards the card.

The difference between this and what Euclideon seem to be doing is that a) the spatial system is a lot more optimal, b) it is based on a different rendering primitive, and c) it is based on searching and not just 'checking'.

By 'checking' I mean that in many spatial classification systems, things are tested one by one. For example, in BSP partitioning you have to check which side of a splitting plane an object is on for every such test, in order to find which part of the scene to cull or render.
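For reference, the per-object 'check' in a BSP-style scheme is just a signed-distance test against the splitting plane, something like the sketch below (the plane and points are made-up examples).

```python
# Sketch: the basic BSP "check" - classify a point against a splitting plane
# given as (normal, d), with the plane defined by dot(normal, p) + d = 0.
def classify(point, normal, d, epsilon=1e-6):
    s = sum(n * p for n, p in zip(normal, point)) + d
    if s > epsilon:
        return "front"
    if s < -epsilon:
        return "back"
    return "on"

plane_normal, plane_d = (1.0, 0.0, 0.0), -5.0              # the plane x = 5
print(classify((7.0, 2.0, 1.0), plane_normal, plane_d))    # front
print(classify((3.0, 2.0, 1.0), plane_normal, plane_d))    # back
# Every object in the scene needs such a check per splitting plane, which is
# the per-object cost that a search-based structure tries to avoid.
```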

The big question that I have again relates to the data structures, namely the amount of memory needed to represent these things. This would be a very interesting thing to find out.

After looking at the videos I am inclined to believe that they are not pulling people's legs and do have what they claim to have, and although they are not giving away their techniques, the hints they have dropped tell me a lot about what they are probably doing (which is what I commented on above).
 
  • #8
Aquariouse said:
Thanks for the great criticism, guys; it is always appreciated. Though the initial reply to my thread appears to have misunderstood my theory. Take out your camera and set it to capture photos, not record. Look around your room and you will notice that whether you view complex objects or simple objects, there is not a single change in loading speed. How does your camera allow you to view "unlimited detail" objects? The answer is simply that your camera, like the human eye, never loads the object; it only processes the photons it receives into a static buffer of constant size. Your camera then writes a "2D" file based on the photons processed, and this 2D file has a constant size, just like the buffer. Since you are loading the exact same size "2D file" every time, your camera can view a highly detailed object at the same speed as a low-detail object. Simply take this concept and use advanced algorithms to trace imaginary "virtual photons" through an object file.

You have to realize that both the representation and model of processing have really really important implications for the scope of what can and can't be done as well as the tradeoffs.

For example, consider the difference between a waveform (an analytic description) and a bitmap representation of an image. The waveform has essentially infinite resolution while the bitmap does not. This means that even with an extremely high-resolution bitmap, at some point you would end up seeing blocks with colour discontinuities, like you see when you blow up an image too far. With a waveform you wouldn't get this.
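A rough sketch of that contrast, using a 1D "image": an analytic function can be evaluated at any coordinate, while a sampled array has to repeat its nearest stored value once you zoom past its resolution. Everything here is a made-up example.

```python
# Sketch: analytic ("waveform") description vs a fixed-resolution sampling.
import math

def analytic_intensity(x):
    """Defined for every real x - effectively infinite resolution."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * x)

SAMPLES = 8
bitmap = [analytic_intensity(i / SAMPLES) for i in range(SAMPLES)]

def bitmap_intensity(x):
    """Nearest-sample lookup: zooming in just repeats stored values (blockiness)."""
    return bitmap[min(int(x * SAMPLES), SAMPLES - 1)]

for x in (0.50, 0.51, 0.52):          # three nearby points, "zoomed in"
    print(x, analytic_intensity(x), bitmap_intensity(x))
# The analytic values keep changing smoothly; the bitmap returns the same
# stored sample for all three points, which is the colour discontinuity you
# see when an image is blown up too far.
```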

Also, the processing model plays an important role. If your model acted more or less like a set of 'computational filters' running in parallel and computing things more or less instantly, that is going to be very different from a conventional computation that doesn't scale well when you have lots of things to process.

The way you represent information is really critical, both for how well and how quickly things can be done, and for the kind of results you get in terms of resolution and quality.
 
  • #9
Nice informative posts :)

They must store basic colour information per "atom", plus a type, that can be used to generate the colour it should be on the screen. They only generate the outside surface, so it is still based somewhat on polygons. The renderer would then take the colours of each part and interpolate them, or something like that, to create the colour that should go in the pixel on our display.

Although I am at a loss as to how to store the data in memory and cycle through it.
I think something like the point data is stored in a cube, and there is a cube of those cubes.
You start in the centre cube, and when you reach its edge and move into another cube, it could unload the cubes that are further away and reuse their space for extra detail.

So the world would be split up into a bunch of files that can be loaded in for their data, roughly like the sketch below.
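A minimal sketch of that "cube of cubes" idea: the world is divided into fixed-size chunks keyed by grid coordinates, and only the chunks within some radius of the viewer stay loaded. The chunk size, radius, and file naming are arbitrary assumptions.

```python
# Sketch: keep only the chunks ("cubes") near the viewer loaded in memory;
# far chunks are dropped and their space reused when the viewer moves on.
CHUNK_SIZE = 64          # world units per chunk edge (arbitrary)
LOAD_RADIUS = 2          # keep a (2*R+1)^3 block of chunks around the viewer

loaded_chunks = {}       # (cx, cy, cz) -> chunk data

def chunk_of(position):
    return tuple(int(p // CHUNK_SIZE) for p in position)

def load_chunk(key):
    # Placeholder: a real system would read something like "chunk_x_y_z.dat" from disk.
    return {"key": key, "atoms": []}

def update_loaded(viewer_position):
    cx, cy, cz = chunk_of(viewer_position)
    wanted = {(cx + dx, cy + dy, cz + dz)
              for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
              for dy in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
              for dz in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}
    for key in list(loaded_chunks):          # unload chunks that fell out of range
        if key not in wanted:
            del loaded_chunks[key]
    for key in wanted:                        # load chunks that came into range
        if key not in loaded_chunks:
            loaded_chunks[key] = load_chunk(key)

update_loaded((10.0, 5.0, 300.0))
print(len(loaded_chunks))    # (2*2+1)^3 = 125 chunks resident at any time
```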

As for what colour should go on the screen, I think it's a matter of starting from the pixel on the display and going forward from that spot along a 3D path; every atom/3D pixel passed through would be blended into the display pixel's colour, and this line could go on for a certain distance (something like the sketch below).
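Here is a tiny sketch of that kind of ray march: step along the ray, look up whatever atom the current point contains, and blend it front-to-back until the pixel is opaque or the ray runs out. The lookup function, step size, and opacity values are assumed for illustration.

```python
# Sketch: front-to-back blending of "atoms" along a ray from one screen pixel.
def march_pixel(origin, direction, sample_atom, step=0.1, max_distance=50.0):
    """sample_atom(point) -> (r, g, b, opacity) or None if the point is empty."""
    colour = [0.0, 0.0, 0.0]
    transparency = 1.0                      # how much light still gets through
    distance = 0.0
    while distance < max_distance and transparency > 0.01:
        point = tuple(o + direction[i] * distance for i, o in enumerate(origin))
        atom = sample_atom(point)
        if atom is not None:
            r, g, b, opacity = atom
            for c, value in enumerate((r, g, b)):
                colour[c] += transparency * opacity * value
            transparency *= (1.0 - opacity)
        distance += step
    return tuple(colour)

# Usage with a dummy scene: a fully opaque red wall at z >= 5.
def dummy_scene(point):
    return (1.0, 0.0, 0.0, 1.0) if point[2] >= 5.0 else None

print(march_pixel((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), dummy_scene))
# -> roughly (1.0, 0.0, 0.0): the ray hits the wall and stops accumulating.
```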

As for animation, an object would have its own point data for each frame, similar to a GIF only in 3D, of course. The object's point cloud would be merged with the world cube every time it changes, I suppose.

It should be possible to create objects that modify their own data each step, or every certain number of steps, and then tell the world they have changed so that they can be re-merged.

But that seems like quite a lot of data to handle...

I'll have to sit down and look at it some day :)
 
  • #10
I have a feeling that they may employ more innovative data structures.

That is going to be the whole key to what they are doing: even if you did not know the content of the routines themselves, if you got a peek at the actual data structures I'm pretty sure you could figure out the routines pretty quickly.

Also, the thing to really take note of is not just the data structures that represent the actual geometry (the 'atom' data, as they call it), but also the other data structures required for spatial classification and for 'searching'. This is the real key to the technique: it's all of the other support structures that the routine needs to work, and not just the isolated idea of using 'atoms' per se.
 

Related to Combinist: Unlimited Detail Object Loading

1. What is Combinist: Unlimited Detail Object Loading?

Combinist: Unlimited Detail Object Loading is a software technology developed by Euclideon, a company based in Australia. It is a new method of loading 3D objects in real-time without the use of traditional polygonal meshes.

2. How does Combinist work?

Combinist uses a process called voxelization, where objects are broken down into tiny voxels (3D pixels) and stored in a database. These voxels are then rendered at different levels of detail depending on the distance from the viewer, resulting in highly detailed and realistic objects.
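As a rough illustration of "levels of detail depending on the distance from the viewer", one simple scheme is to pick how deep to descend into a voxel hierarchy from the viewer's distance; the thresholds and level numbers below are arbitrary assumptions, not any specific product's behaviour.

```python
# Sketch: choose how deep to descend into a voxel hierarchy based on distance,
# so far-away objects are drawn from coarser levels. Thresholds are arbitrary.
def lod_for_distance(distance, finest_level=10, coarsest_level=2, base=10.0):
    """Roughly: each doubling of distance beyond 'base' drops one level of detail."""
    level = finest_level
    while distance > base and level > coarsest_level:
        distance /= 2.0
        level -= 1
    return level

for d in (5.0, 40.0, 640.0):
    print(d, "->", "hierarchy level", lod_for_distance(d))
# 5 -> level 10 (full detail), 40 -> level 8, 640 -> level 4: the same object
# costs less to render the farther away it is.
```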

3. What are the benefits of using Combinist?

Combinist allows for the rendering of unlimited detail in 3D objects, without the limitation of traditional polygonal meshes. This results in highly realistic and detailed graphics, without the need for high-end hardware. It also allows for faster loading times and smoother gameplay.

4. Is Combinist compatible with all platforms?

Yes, Combinist is compatible with all platforms, including PC, gaming consoles, and mobile devices. It can be integrated into any game engine and is not limited to a specific operating system.

5. What are the potential applications of Combinist?

Combinist has the potential to revolutionize the gaming industry, as it allows for incredibly detailed and immersive game worlds. It can also be used in other fields such as architecture, medicine, and virtual reality, where highly detailed 3D graphics are essential.
