I was reading an interview with the lead developer of Unreal 3 (http://gamesdomain.yahoo.com/preview/28441), while drooling over the screenshots, and they started talking about the future of graphics in games, and it got me thinking...
Basically, the theory is that in about 20 years' time we will be able to achieve perfect photo-realistic graphics which can't be bettered (or at least, if they were, you wouldn't be able to notice any difference). So what happens from there? Throughout the history of gaming, games have strived to achieve the best graphics possible, and then suddenly all the graphics will be perfect to the extent that they can't be pushed any further!
What do you think will happen then? I think graphics will take a whole new direction; maybe it will become more like art and we'll have abstract and impressionist games. Maybe we'll go back to retro '80s games!
Damn! That looks fuckin amazing. Always been a fan of the Unreal games.
As for your question, I think computer graphics have their limits as far as realism goes, and the technology currently available can already produce images where it's almost impossible to tell the real from the computer-generated. Of course we're yet to see this in games, but the way I see it, as the technology increases in power, graphics will become more detailed and more realistic overall.
I don't see it going into any weird abstract phases in particular. Perhaps 3D glasses and VR gear will make a comeback for good. I'd say gamers will want to be more involved in the game once graphics reach their limit, and VR-style setups will be commonplace. It's just a matter of time.
MUGGUS
Come and annoy me more at
www.muggus69.tk STOUT ANGER!!!
That's what I've been thinking, Tigs. People say that it's going to happen so quickly... but how can you get all that power into a tiny little space so fast? It took long enough for arcade games to have graphics that people could understand without someone explaining them or reading the cabinet.
Well, technology is moving faster every day, so within 20 years I can see computers running at speeds 5x faster than a Pentium 4 at the price of a Pentium 2, and video cards running 3D games that don't even use polygon faces, because by then they'll have discovered a way to make graphics even better than this at top speed, even on the smallest of video cards. I mean, just 30 years ago hardly anyone had a computer, or even a TV for that matter, and now we have all these new things being made every day. BTW, nice! I got Unreal Tournament 2004 not long ago and I gotta say, I am a HUGE fan of the game.
I also don't think people will feel they've reached the end of the graphics chain, because once it gets as good as it gets, not many people will enjoy that; it may be TOO realistic. In turn we will be using cartoon-like graphics, especially in TV shows. As you can see if you watch certain stations, they are already starting to do this: shows such as Jimmy Neutron that are all 3D, and shows like Futurama that take what was extremely hard to draw, things like 3D spaceships, and work them into 3D, keeping the classic hand-drawn look while adding that 3D touch that people nowadays long for.
Sure, CGI may look photorealistic, but unless they perfect animation, you'll always have a nagging feeling that something's a little off.
@awesomeanimator: One word: miniaturization. The Nintendo DS now packs graphical power equal to or greater than the N64's into a tiny space. Ditto for the PSP. Personal computers today are more powerful than the room-filling vacuum-tube supercomputers of yesterday.
"Omg. Where did they get the idea to not use army guys? Are they taking drugs?" --Tim Schafer on originality in videogames
Well, even when we have perfect graphics, it will still be a matter of time before we have perfect programming. There are still a lot of features missing from today's games before the software side can be called perfect.
The speed of processors is meant to double every 18 months, so in 20 years I make that about 24.5 terahertz for the average processor in 2024. But I don't think that will hold, because processors today are already approaching the physical limit on size. If they miniaturise processors much more, they become so small that quantum effects start messing with them and they stop working.
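To show roughly where a figure like that comes from, here's a quick back-of-the-envelope Python sketch. The 2.4 GHz starting clock is my own assumption for a typical 2004 CPU, not a figure from the thread:

# Rough projection of the "doubling every 18 months" rule quoted above.
# The 2.4 GHz starting point is an assumed typical 2004 clock speed.
base_ghz = 2.4
years = 20
doublings = years * 12 / 18            # one doubling per 18 months
projected_ghz = base_ghz * 2 ** doublings
print(f"{doublings:.1f} doublings -> about {projected_ghz / 1000:.1f} THz")
# prints roughly: 13.3 doublings -> about 24.8 THz, the same ballpark as the 24.5 THz above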
"...what if a "detailed" hand with 5 fingers is catching a bottle but the fingers pass right through it? Is this still realistic? Rather than to show each meticulous and tiny detail of a finger, it is more important to make the end action look more credible by working on the movement and functionality of the arms and the hand in relation to the object." - Shigeru Miyamoto
I think this shows the relative unimportance of photo-realism, because the game will only seem actually realistic if everything happens in a completely realistic way, which will require significant improvements in the basic physics programming of most games.
Talking purely graphically though, there can still be improvements even once photo-realism is achieved, for example in the number of objects on screen, the complexity of their animation, and so on.
More important for me is further development in AI, as well as the degree of interaction with your surroundings that is available.
It's not a case of perfect raytracing or anything like that, otherwise photorealism could only be achieved using a perfect simulation of reality, which would take as much storage as the scene itself would in the physical world... and that's just ridiculous. There will always be shortcuts, and as far as I can tell, the upper end of CG has already reached a 'better than photorealism' state. The trick now is to drop the quality and make everything dirty enough to seem real.
I play a game to escape from reality. I don't want to play a game that's really realistic. I want to see a cool, different graphic style that isn't too realistic.
Fine Garbage since 2003.
CURRENT PROJECT:
-Paying off a massive amount of debt in college loans.
-Working in television.
Well, as others have pointed out, we have already achieved photo-realistic pre-rendered graphics (look at most modern movies which use CGI). One thing that has not been perfected, though, is the animation of humans. We can create photo-realistic CGI humans, but the animation is not flawless, and since we are so used to seeing other humans, any small flaw will be recognized. That's why CGI actors have not replaced real actors yet (although I promise you that eventually all movies and TV shows will be totally CGI). It is much easier to fool people with animation of other living creatures, because we are not around those creatures all the time and won't notice flaws (which is why movies often use large amounts of CGI for non-human animals).
As for games, I am sure eventually we will be able to achieve perfect photo realistic graphics and animation in real time, probably within 20 years. It will even get to the point where you can have as many moving objects as you want in a scene with no drop in FPS at all. When this happens, the focus will shift towards creating a good "mood" in a level or scene, because the graphics can't get any better. For example, in movies you try to set up the lighting and the set to achieve a certain mood; the focus of game graphics will probably move towards that.
99 percent chance that the above post is 100 percent correct.
One thing that has always evolved in games (among other things) is the graphics. Every new generation of graphics engines delivers an even more mind-blowing experience. Just look at the anticipation for Doom 3 and HL2.
The evolution of graphics in games is closely linked to, and driven by, the evolution of hardware. It is clear that we will see improved graphics in the future. But I think we will eventually reach a level after which further improvement will not matter much and will not be worth the effort, and I think that point will be reached some time before we have absolutely 100% true photorealistic graphics. There are three main reasons:
1. Reasonableness. At some point it will not really be reasonable to improve the graphics any further; the time and cost involved will no longer be outweighed by the income. Developers are already talking today about how much the graphics can still evolve on the current generation of hardware, and about a possible shift in games towards developing other aspects.
2. We play to be entertained. This means we are not really interested in games being too realistic; reality imposes too many limitations. A game should only be as realistic as it can be without ruining the fun.
3. Robot builders have already run into the third reason. If they build their robots to be too human-like, people tend to be disgusted by them. It turns out our brain automatically imposes an expression on things that do not have one (like animals, or robots that do not look too human), but if something looks very human-like we instead try to interpret the expression we expect to see. The problem is that these robots often have a "wrong" expression on their face, and/or they do not follow the rules we expect humans to follow (in terms of expression). As a result, robot builders have gone back to creating simpler-looking robots. The same problem exists when developers create computer games.
In terms of hardware development we are beginning to reach the limits of the current technology. Moore's law says something like: "The speed of computers will double every 48 months." This has held very well up until now. However, we are truly beginning to reach the limits of what we can do with silicon-based CPUs. Intel has recently moved to 90 nanometre (0.09 micron) technology and expects to reach 65 nanometre in the near future (a few years), but that will also be about the last step in shrinking the transistor, which is the heart of all electronics today. Shrinking the transistor much further is not possible, as the amount of "stray electricity" (current leaking from one transistor to another) becomes so high that it would be impossible to get anything usable through the CPU.
When the transistor cannot be shrunk any more, we also hit the limit in frequency. Even Intel has finally realized that more speed cannot be achieved just by ramping up the frequency, which has always been Intel's approach to speed increases. AMD realized long ago that speed could, and would have to, be reached by other means. So while Intel made their CPUs less efficient per clock in order to ramp up the frequency (true for the P4 family with a few exceptions), AMD has made their CPUs more efficient. That is why the AMD top model is still around 2.5GHz while the P4 is up around 3.4GHz. But Intel is finally acknowledging this too and is about to shift to a number-based rating system (instead of rating by frequency). To increase speed further, the near future will bring us CPUs with two or more cores built into the same socket.
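Just to illustrate why two cores don't simply mean double the speed, here's a small Python sketch of Amdahl's law (the 80% parallel fraction is an assumed figure picked purely as an example, not anything from a real workload):

# Amdahl's law: only the parallel fraction p of the work speeds up with more cores.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

for cores in (1, 2, 4, 8):
    # assume 80% of the work parallelises cleanly (an assumption for illustration)
    print(cores, "cores ->", round(amdahl_speedup(0.8, cores), 2), "x speedup")
# 2 cores give about 1.67x, and even 8 cores only about 3.33x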
However, all these improvements cannot change the fact that we are reaching the limits imposed by silicon. And we do not have a new technology around the corner. Companies like Intel, AMD and IBM, as well as universities, are researching new technologies, but we will not see anything fundamentally new for the next 10 years, so the future is quite unknown.
Yeah, it'll be a long time before photo-realism can be achieved. Probably about two decades or so. Heck, about a decade ago, 1 GB was considered massive. A decade and a half ago, even 20 MB was a lot. I know; I lived with computers back in those days and was so proud of my 1GB HDD.
Oh well, if no one beats me to it, I'm gonna make the first VR console, coming with one of those gun thingies and a sensory glove. The main problem would be handling the turning around and preventing the headgear from spoiling your eyes. But then again, monitors already spoil your eyes as it is.
Disclaimer: Any sarcasm in my posts will not be mentioned as that would ruin the purpose. It is assumed that the reader is intelligent enough to tell the difference between what is sarcasm and what is not.
Cybermaze: http://www.intel.com/research/silicon/mooreslaw.htm (I think the theory originally said it was every 18 months).
Anyway, I don't think we are going to have photo-realistic graphics anytime soon. Keep in mind that animation is going to be the most difficult part, even though there is technology that can capture actual human movement (motion capture). Still, you also have to ask how much it costs and how long it will take. Companies can't afford to take several years on a game.
On a final note: realism depends a lot on its environment. Sure, we could create a realistic human that just stands around, but how about having that human also run, walk, jump, catch, fall down, and swim, all in a pool with lots of mirrors and other people?
Not in the conventional manner, but we still have a few fractions of a micron to go before we hit a relativistic limit. Once chips can't get any smaller, we innovate. Imagine stacking a bunch of 1GHz processors on top of one another and treating the stack as a single chipset. That's the direction we'll be going in the near future.
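Something like this, in other words: a minimal Python sketch of splitting one job across several workers. The prime-counting task, the worker count, and the names (count_primes, N_WORKERS) are all placeholders of mine, just to show the "many slower chips working together" idea:

# Split one job across N worker processes and combine the results.
from concurrent.futures import ProcessPoolExecutor
import math, time

def count_primes(bounds):
    lo, hi = bounds
    count = 0
    for n in range(lo, hi):
        if n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    N_WORKERS = 4                      # stand-in for "four stacked 1GHz chips"
    LIMIT = 200_000
    step = LIMIT // N_WORKERS
    chunks = [(i * step, (i + 1) * step) for i in range(N_WORKERS)]

    start = time.time()
    with ProcessPoolExecutor(max_workers=N_WORKERS) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"{total} primes below {LIMIT} in {time.time() - start:.2f}s "
          f"using {N_WORKERS} workers")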
My chemistry's a bit rusty, but there's a form (allotrope?) of carbon called Buckminsterfullerene. Apparently it would act as an ideal replacement for silicon and allow for the creation of next-generation hardware. All this is from my old chemistry teacher, so I don't have any first-hand knowledge. It's all Greek to me, but for your clicking pleasure: http://www.google.com/search?q=buckminsterfullerene
Animation ain't much of a problem. Just look at how much 3DS Max has improved at it over just half a decade. The only problem would be getting the polygons/vertices to flow right according to what hits them, which wouldn't take animators too long to figure out, considering how much they've done already. Things are already easy enough with motion capture and the like.
Silicon's pretty poor... and the main reason it's used in such high quantities is coz it's uber-cheap. But since carbon is easier to get, it'll be a matter of time before Buckminsterfullerene replaces it. Or maybe those quantum computers will come out sooner than we'd expect, which is usually the case with overfunded R&D these days.
I'd try to join in the quantum computer research stuff, but I suck at chemistry, so I'm stuck with typical electrical engineering.
I tend to agree with Tigs; current technology can only be pushed so far. It's only a matter of (probably a long) time before we find ourselves using quantum computers (http://www.cs.caltech.edu/~westside/quantum-intro.html), since they are almost infinitely more powerful than what we have today.
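For a sense of why people say "almost infinitely more powerful": just describing the state of n qubits classically takes 2^n amplitudes, which blows up very quickly. A tiny Python illustration (it only counts states, nothing more):

# Number of amplitudes needed to describe n qubits classically.
for n in (10, 30, 50):
    print(f"{n} qubits -> 2**{n} = {2 ** n:,} amplitudes")
# 50 qubits already need over a quadrillion amplitudes to write down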
There's already talk of using DNA instead of a hard disk. It all sounds a bit odd, but DNA can store (according to IBM) around 3 terabytes per chromosome.
As for games, the graphical advancements have been getting smaller in recent years; it's all a matter of time (as Cybermaze pointed out) before it grinds to a halt and you just won't be able to tell the difference in graphical quality.
I don't think that's possible. I tried to pass a current through a flower once and it burst into flames - how are they going to turn that into a microchip?
But it is true that both DNA and atoms themselves are active research subjects. Using electrons as a way to store data in atoms, it is possible to create computers with enormous power that take up very little space. The real challenge is controlling the atoms. But just imagine if I had a piece of metal and every atom in it functioned like a transistor. I think it was some Swedish scientists who managed to use atoms as RAM, storing and retrieving data. The other good thing about atoms is that we can create more complex computers: while transistors only support 2 states (on/off, i.e. binary), a computer built from atoms could possibly take 4 states (0, 1, 2, 3), meaning it can handle more data in one pass than a binary computer.
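To put that 4-states point in numbers (just an illustration of mine, not anything from the research mentioned): each 4-level element carries two bits, so n of them cover 4^n = 2^(2n) states, twice as many bits as the same number of binary transistors.

# Compare n two-level elements (bits) with n four-level elements.
import math

for n in (1, 4, 8):
    binary_states = 2 ** n
    quaternary_states = 4 ** n
    print(f"{n} elements: {binary_states} binary states vs "
          f"{quaternary_states} four-level states "
          f"(= {math.log2(quaternary_states):.0f} bits of information)")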
The real problem is that all these technologies are some years away... maybe as much as 10-20 years. In the meantime we are quite limited. Once silicon transistors cannot be made any smaller, we will hit the ceiling, which means something around 65 nanometres and 6GHz (if it is even useful to go that high). After that, the only real alternative is to place more CPU cores on the same die. A while ago I saw the estimated requirements for Windows Longhorn; it suggested a 4GHz dual-core CPU (two CPU cores on the same die) and at least 2GB of RAM.