GoldenEye 007 Nintendo 64 Community - Nintendo 64 Games Discussion
Sogun General


Joined: 15 Dec 2010 Posts: 661 Location: Valencia, Spain
Posted: Tue Nov 05, 2013 6:01 am Post subject: Discussing N64 specs (and some PS1 comparison)
Hi guys.
Sorry for the long message, I'm going to talk about a lot of stuff and I have done my best to make it as short as possible, hehe. I also have little or no knowledge about some of these things, so forgive my ignorance and help me to understand them.
I've been reading a lot about N64 and PS1 specs lately. If you look at numbers only, the N64 is clearly superior. Every component is bigger or faster, some of them even by two or three times. But when you look at the games, is the difference that big? Of course N64 games look a lot better thanks to smooth filtering and perspective correction, but when it comes to polygon count, texture quality and sound quality, it's the PSX that has the upper hand. Why is that? Is it because of the CD against the cartridge? Or because most of the N64's raw power is used on special effects (antialiasing, trilinear filtering, perspective correction, Z-buffer...) that the PS1 isn't capable of?
I'm sure most of you have heard about the Turbo3D microcode, which is supposed to render 500,000-600,000 polygons per second, around twice the PS1's capability with similar quality. But we never got to see it in any game or demo.
Well, we all know that the N64 architecture comes with some bottlenecks. The best known are the latency of the RAM and the small texture cache. I'm not a hardware or software expert (in fact I'm quite ignorant in those aspects, it's all like magic to me, hehe), so I'll suppose that the N64 can render a lot of polygons per second but the RAM can't keep up, and that's why we have those horrible slowdowns (not that the PS1 was free of them, heh).
About the texture cache size I can speak a little longer, since I have some experience dealing with it while doing GE/PD maps. It's 4 KB, and that's the maximum size your texture can be. In order to use trilinear filtering (bilinear filtering plus mipmap interpolation) the cache splits into two 2 KB halves to handle the two mipmap levels being interpolated. That means that any texture bigger than 2 KB will look pixelated from a distance (like the white bricks of the Super Mario 64 castle), although bilinear filtering will still be applied once the texture's size on screen exceeds its native resolution. There seems to be a way to bypass the texture cache and use bigger textures using the CPU instead of the RCP, but from what I know this method can only be used on still images (like menus or some sky backgrounds).
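To put numbers on those limits, here's a rough sketch (the 4 KB cache and 2 KB mipmapping figures are the ones quoted above; the helper names are just mine, and palette data is not counted):

```python
# Sketch: does a texture fit the N64's 4 KB texture cache, and can it
# still be trilinear-filtered (base level within one 2 KB half)?
TMEM_BYTES = 4096
TRILINEAR_HALF = TMEM_BYTES // 2  # 2 KB per mip bank

def texture_bytes(width, height, bits_per_texel):
    """Raw size of a texture in bytes (palette data not counted)."""
    return width * height * bits_per_texel // 8

def fits_tmem(width, height, bits_per_texel):
    return texture_bytes(width, height, bits_per_texel) <= TMEM_BYTES

def can_mipmap(width, height, bits_per_texel):
    # The base level must fit in one 2 KB half of the cache.
    return texture_bytes(width, height, bits_per_texel) <= TRILINEAR_HALF

print(texture_bytes(32, 32, 16))  # 2048 bytes: fits, and can mipmap
print(texture_bytes(64, 32, 16))  # 4096 bytes: fits, but no mipmaps
print(can_mipmap(32, 32, 16), can_mipmap(64, 32, 16))
```

So a 32x32 16-bit texture sits exactly on the mipmapping budget, while anything over 2 KB gets bilinear only, matching the "pixelated from a distance" behavior described above.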
And now comes the tricky part that I don't understand. Do you know how big the PS1 texture cache is? It's just 2 KB! So how can the PS1's textures be better if they should be the same size and unfiltered? At first I thought the PS1 used its superior polygon count to apply lots of small textures that make up a bigger one (for example, like Super Mario 64 does with the paintings, or GE with the big photos for levels and characters), but the PS1 really does use textures bigger than the N64's. The reason for this is something called a "texture page" (ack, I can't find the accurate term); a lot of textures are stored in "pages" of 256x256 pixels and later split into 32x32 parts to fit the cache. I think with this you can fill a polygon with more than one texture, no need for extra tris. I don't know if the N64 is able to do it. Apparently it is, but there weren't tools in the devkits for this :S
Another thing about textures on PS1 that I wasn't able to confirm is that they can't repeat within the same polygon as a pattern (UV just goes from 0 to 1), so you have to waste a lot of polygons in order to fill a surface with a pattern texture. The N64 can do patterns up to 2048x2048 pixels (64 repetitions of a 32x32 texture in each direction, or 32 repetitions of 64x64, etc.); you'll have to subdivide the surface if you want to repeat the pattern further than that. Smaller polygons on PS1 also help hide the lack of perspective correction. N64 games can save a lot of polygons, and that's why big open areas work better than on PS1.
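The repetition counts above follow directly from that 2048-texel footprint; a quick sketch (the 2048 figure is the one from the post):

```python
# Tiling-limit sketch: a pattern can repeat up to a 2048x2048 texel
# footprint per polygon before the surface must be subdivided.
MAX_TILE_EXTENT = 2048  # texels, per axis

def max_repeats(tex_w, tex_h):
    """How many times a texture can repeat along each axis in one polygon."""
    return MAX_TILE_EXTENT // tex_w, MAX_TILE_EXTENT // tex_h

print(max_repeats(32, 32))  # (64, 64)
print(max_repeats(64, 64))  # (32, 32)
```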
So the special effects the N64 can do save a lot of tris, but the PS1 is capable of displaying more polygons per second. If you google a little you'll find the PS1 renders 360,000 polygons per second while the N64, depending on the source, goes from 100,000 to 160,000. Actually, you'd better not trust these numbers, since no one has checked them. We all know about games that look ugly but don't perform any smoother than good-looking games. Theory is one thing and the actual results are another. What about animations, sounds, AI, collision data, lighting, resolution...? Don't those affect poly count somehow? I bet they do. So it's not as simple as comparing images or videos, even with games of the same type. Perhaps those numbers refer to optimal conditions, but no one has stated "this game runs (or peaks) at 276,489 tris per second; this one at 302,450; and that other one at 121,004". Don't trust the numbers; play the games and judge for yourself, hehe.
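For what those per-second marketing figures would mean in practice, here's the per-frame arithmetic (the inputs are the unverified numbers quoted above, and the frame rates are just examples):

```python
# Rough per-frame budgets implied by the quoted per-second figures.
def tris_per_frame(tris_per_second, fps):
    return tris_per_second // fps

print(tris_per_frame(360_000, 30))  # PS1 figure: 12000 tris/frame
print(tris_per_frame(160_000, 30))  # upper N64 figure: 5333 tris/frame
print(tris_per_frame(100_000, 20))  # lower N64 figure at 20 fps: 5000
```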
Going back to RAM, the PS1 has 2 MB of main RAM, 1 MB of VRAM and 512 KB for sound (a total of 3.5 MB), while the N64 uses 4 MB of RDRAM for everything (8 MB with the Expansion Pak). The N64 has a small lead there, but since you can split that RAM however you want, you can make better use of it. For example, a game that only uses 1.5 MB of main RAM on PSX wastes 0.5 MB, while on N64 the same game would free that extra 0.5 MB for other things. At least that's how I think it works, heh.
Unfortunately not all is as good as it sounds. The graphics techniques the N64 uses eat a lot of RAM, the Z-buffer at least. So that lead is smaller.
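The trade-off can be sketched with numbers (the pool sizes are the ones above; the Z-buffer and framebuffer sizes are my assumption of a 320x240 16-bit setup, double-buffered for color):

```python
# Memory-budget sketch: PS1's fixed pools vs the N64's unified RDRAM
# after subtracting an assumed Z-buffer and double-buffered framebuffer.
MB = 1024 * 1024
ps1_pools = {"main": 2 * MB, "vram": 1 * MB, "sound": 512 * 1024}
n64_unified = 4 * MB

zbuffer = 320 * 240 * 2           # 16-bit depth at 320x240
framebuffers = 2 * 320 * 240 * 2  # two 16-bit color buffers

print(sum(ps1_pools.values()))               # 3.5 MB total, but partitioned
print(n64_unified - zbuffer - framebuffers)  # what's left to split freely
```

Even with those buffers carved out, the remainder is still a single pool the game can divide however it likes, which is the advantage described above.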
You can read on Wikipedia that most N64 games weren't poly-count limited (meaning the framerate issues would be due to lots of tris on screen) but fillrate limited. I already said that I have very little knowledge of the technical aspects, so what I understand is that the CPU would be able to push more tris (if the CPU is actually the one responsible for that) if the RCP didn't slack off with the other graphics work. And the increase of RAM only worsened the problem.
Nintendo was very mean at that time, keeping the best devkits and documentation for their own developer teams. It wasn't until the late N64 years that Nintendo allowed some third parties to develop their own microcodes. Heh, custom microcodes are like the magic words that would unveil the N64's hidden power, and in some sense that was true. Microcodes make the RCP work the way you want, and the microcodes provided by Silicon Graphics weren't that good, since they weren't game developers. Nintendo's refusal to let other companies develop their own microcodes worked against the N64, but it made games by Nintendo and Rare look a lot better than those of the other third parties. Truth be told, developing your own microcode was HARD as hell, since it was almost all trial and error. Rare created their own microcodes (I think for lighting effects), but the most interesting are the ones developed by Boss Games and Factor 5. These two companies discarded Z-buffering because of the big impact it had on fillrate and general performance. You can work around the lack of Z-buffering by creating tables for triangle render order, as PS1 games did, but I have read that Z-buffering is needed for trilinear filtering to work. However, if you look at the games that didn't use Z-buffering (World Driver Championship, Stunt Racer 64 and Indiana Jones and the Infernal Machine), you don't notice anything lacking; they look like ordinary N64 games but have high polygon counts and play smoothly (well, WDC has noticeable pop-in).
World Driver Championship -> http://www.youtube.com/watch?v=UaH0ObX6GeI
Indiana Jones and the Infernal Machine -> http://www.youtube.com/watch?v=MEQBXn8_Nrc
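Those "tables for triangle render order" are essentially a painter's algorithm: sort triangles back-to-front and just draw in that order instead of testing depth per pixel. A minimal sketch (the average-depth heuristic is a common choice, not necessarily what those games used):

```python
# Painter's-algorithm sketch: depth-sort triangles instead of Z-buffering.
# A triangle is three (x, y, z) vertices; larger z means farther away here.
def depth_key(tri):
    # Average depth of the vertices. This heuristic can mis-order
    # interpenetrating triangles, which is why games avoided or split them.
    return sum(v[2] for v in tri) / 3.0

def sorted_draw_order(triangles):
    # Farthest first, so nearer triangles paint over farther ones.
    return sorted(triangles, key=depth_key, reverse=True)

near = [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 1.0)]
far = [(0, 0, 9.0), (1, 0, 9.0), (0, 1, 9.0)]
print(sorted_draw_order([near, far])[0] is far)  # True: far drawn first
```

The saving is exactly what's described above: no per-pixel depth reads and writes, at the cost of sorting and of having to split geometry where the ordering would be ambiguous.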
In fact, Indiana Jones and the Infernal Machine has to be the most technologically advanced game on the N64, since it streams data directly from the cartridge. Factor 5 said in an interview that they weren't limited by the RAM anymore, but by the cartridge size (256 Mbits, or 32 MB; the biggest N64 games were 512 Mbits, or 64 MB). I have yet to play the game, but it looks promising, hehe. Streaming is possible from CDs too; I think the PS1 did it in the Soul Reaver games.
Oh, talking about resolutions, there's something I want to know. The standard resolution for most N64 and PS1 games is 240p (320x240), and other games use 480i (640x480). But there are some other resolutions, like 512x384. Are those resolutions higher than 240 vertical lines interlaced? How do they scale on screen so there are no artifacts?
So I have written about the two major N64 flaws. The texture issue seems to be the fault of a lack of tools, not the hardware itself, while the low polygon counts can be dealt with through microcodes. I haven't written about sound yet.
The PS1 can use audio CD tracks for its games, although most of its famous games use MIDI-sampled music. The N64 can use MP3. A bitrate of 320 kbps seems to be CD quality, but I doubt the N64 is powerful enough to decode it (or to do it without harming performance too much). CD sound quality is 16-bit PCM encoding at a 44.1 kHz sampling rate per channel, while the N64 is capable of 16-bit PCM encoding at a 48.0 kHz sampling rate per channel, so it's actually higher than CD quality. Each channel uses about 1% of the CPU. Jet Force Gemini has probably the best music on the N64 and it's MIDI-sampled. Chrono Cross has one of the best PS1 soundtracks and it's MIDI-sampled too, and the number of channels doesn't differ from JFG, if you can trust the MIDIs I have found on the net, hehe.
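The data rates behind that comparison are easy to work out, and they also show why raw CD-quality audio is a problem on a cartridge (the sample rates and bit depth are the figures above; stereo is my assumption):

```python
# Data-rate arithmetic for uncompressed PCM audio.
def pcm_bytes_per_second(sample_rate_hz, bits, channels):
    return sample_rate_hz * bits // 8 * channels

cd = pcm_bytes_per_second(44_100, 16, 2)
n64_max = pcm_bytes_per_second(48_000, 16, 2)
print(cd)       # 176400 bytes/s for CD-quality stereo
print(n64_max)  # 192000 bytes/s at the N64's 48 kHz ceiling
print(cd * 60 // (1024 * 1024))  # whole MB per minute of CD audio: 10
```

At roughly 10 MB per minute, a single song in raw CD quality would eat a big chunk of even a 64 MB cartridge, which is why sampled MIDI was the practical choice.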
Music and sound in a videogame aren't only about quality; you have to interact with them too. Rare changed the instruments on the fly in some of their games depending on the situation, the music in Zelda changes depending on how far you are from an enemy, some instruments can fade out, etc. MIDI samples are the best way to do this, and you can use MP3 for more orchestral themes in menus or intros (although I'm not sure how good that would sound).
Audio quality isn't an issue in N64 compared to PS1. The problem is quantity because of the cartridge vs CD.
As for FMVs (today called CGs), the N64 is very capable. You have these demos made by Marshall (creator of the 64Drive).
http://www.youtube.com/watch?v=hEhw0UuqIBs
http://www.youtube.com/watch?v=mmEMmZ6FUSg
As with music, although this time even worse, the problem is the small cartridge capacity compared to CDs (that Dirt2 intro is like half of Super Mario 64!). RE2's videos were compressed a lot because they had to fit 2 CDs of data (FMVs, 2D backgrounds, lots of dubbed dialogue... more than 1000 MB) into a 512 Mbit cartridge (64 MB).
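As numbers, the squeeze described above looks like this (1000 MB is the "more than 1000 MB" figure from the post, so the real ratio was even higher):

```python
# The RE2 port's compression target, roughly.
def mbits_to_mbytes(mbits):
    return mbits // 8

cart_mb = mbits_to_mbytes(512)   # 64 MB cartridge
cd_data_mb = 1000                # "more than 1000 MB" of CD data
print(cart_mb)                   # 64
print(cd_data_mb / cart_mb)      # ~15.6:1 overall squeeze
```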
I remember reading in magazines when the N64 was released that the maximum cartridge size would be 512 Mbits (and there were a few games that made use of that), but I wonder if that's really the limit for N64 cartridges, or at least I want to know why they can't go bigger than that.
Heh, if we could combine the best of Nintendo, Rare and Factor 5 in a 512 Mbit cartridge, I wonder what kind of masterpiece would come from that. Imagine a Majora's Mask kind of game with Rare's refinement and Factor 5's technology. Maybe Dinosaur Planet was the closest thing to that before it became Star Fox Adventures.
I think we were yet to see some amazing things on the N64, and we didn't because of Nintendo's stubbornness and the lack of a bigger storage system (be it CDs or ultra-cartridges, hehe). Not having a CD drive was a bad commercial decision, but it has little to do with the games. If the most important third parties left Nintendo (only Konami and Acclaim released a decent number of games), it wasn't only because the N64 lacked a CD drive, but also because of obscure Nintendo practices like high royalties and keeping the best tools for themselves. In fact, I think the decision to use cartridges was more about controlling game production than preventing piracy or having no/short loading times.
Heh, I really hope someone is able to read and digest this long text. Thanks for reading; share your thoughts and correct my mistakes.
zoinkity 007


Joined: 24 Nov 2005 Posts: 1730
Posted: Tue Nov 05, 2013 3:28 pm Post subject:
I'm not going to putz over too much of this, but there's a few things off.
Firstly, the N64 does not have one huge lump of ram like many articles would make it appear. You're muddling up general rdram and the RSP's own ram buffer. The texture bottleneck occurs in the RSP and has nothing to do with rdram.
Tri generation is done by running code in the RSP, which then dumps the result into rdram. An entire scene will usually be done at one time and thrown into one of the video buffers, and it is in fact manipulating data from rdram to do so.
Quote: | a lot of textures are stored in "pages" of 256x256 pixels and later split into 32x32 parts to fit the cache. I think with this you can fill a polygon with more than one texture, no need for extra tris. I don't know if the N64 is able to do it. |
This is effectively what later microcodes automated. They split surfaces into smaller regions that did fit in memory. If you dump images from post-Zelda titles you'll see a lot of color ones that are 64x64 and above.
The zbuffer--the single biggest murderer of framerate--has no correlation with rdram usage. The problem with it has more to do with efficiency and algorithms, and replacing its use with a different codebase doing the same sort of things is ideal.
The RSP is good at what it does, but it was designed to be inexpensive. The chief concern was die size, and the thing barely made it in time. There's an interesting writeup about its development circulating the web.
You can run code directly off the cart at any time except within the exception handler, though it does operate at a slower rate. There are also some games that directly read data from it without using DMA. The access rates aren't really that much slower, and it's certainly faster than CD transfer rates. The biggest reason carts were used was simply that access rates and times were believed to be more important than storage.
As an interesting point, the vast majority of a game's ROM will be dedicated to audio, followed by images, followed by other resources. Only a few MB of space is actually dedicated to code. This effectively didn't grow, but resource demands did.
Marshallh's video demos use a format from the devkits: MPQ. MPQV is a video version of the standalone MPQ image format, which is effectively a JPEG derivative. It's most effective on animated or CG images, and it was used commercially. The best example, and the only one outside Japanese-exclusive titles I know of, was the video in Pokémon Puzzle League.
Cart sizes can be significantly larger, and these can also be remapped. You can double rdram. You probably aren't aware of it, but you can access the entire filesystem of a devcart from any given ROM. If you shove a 16GB memory card in a devunit, you have 16GB of ROM available. The loader program is actively manipulating that space. _________________ (\_/) Beware
(O.o) ze
(> <) Hoppentruppen! |
Kerr Avon 007

Joined: 26 Oct 2006 Posts: 924
Posted: Wed Nov 06, 2013 9:41 am Post subject:
System specs don't mean too much; it mostly comes down to programming skill.
One thing I never understood about the N64 is why it didn't allow for system link; it would have been great if you could link up two or more N64s for multiplayer games. The PlayStation of course allowed this, and it would have been great for things like GoldenEye, Perfect Dark, and Super Smash Bros.
Back when we were still waiting for N64 games to be released, I did wonder if Perfect Dark would come with a connector on the cartridge that would allow it to be connected to another Perfect Dark cartridge, and that way allow two N64s to be networked together. Sadly, of course, that wish didn't come true, but how great would it have been? Being able to use two TVs and two N64s, so that up to four players could each have half a screen, or two players could each have a TV screen to themselves (not that I mind the 'cheating' of people looking at each other's area of the same screen, it's part of the fun and prevents most camping, but it still would have been great to have the two N64s option).
Who knows, maybe this would have allowed up to eight humans to play PD! It would certainly have made co-op and counter-op modes run at better frame-rates, as (when playing with two human players) each N64 would only have to render a single full-screen display.
Sogun General


Joined: 15 Dec 2010 Posts: 661 Location: Valencia, Spain
Posted: Wed Nov 06, 2013 10:20 am Post subject:
Thanks Zoinkity!
Seems that lots of things are done by the RCP. What does the CPU do, then? I have read that in some games it manages sounds, although that's something the RCP is supposed to do too.
About textures: a 64x64 4-bit color texture fits the cache, but it won't do mipmaps because it's slightly bigger than 2 KB (PD has some textures like that). It would be another matter if it were a 64x64 8-bit color texture, though.
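For reference, the raw texel sizes for those cases work out like this (palette bytes aren't counted here; the raw data of a 64x64 4-bit texture is exactly 2 KB, so presumably it's the palette on top that breaks the mipmapping budget, which is my assumption):

```python
# Raw texel sizes for the formats discussed above (no palette included).
def texture_bytes(width, height, bits_per_texel):
    return width * height * bits_per_texel // 8

print(texture_bytes(64, 64, 4))   # 2048: exactly the 2 KB mip half
print(texture_bytes(64, 64, 8))   # 4096: fills the whole 4 KB cache
print(texture_bytes(64, 64, 16))  # 8192: doesn't fit at all
```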
By "post-Zelda titles" do you mean starting from 1999? Is there any game you know of that has something like a 64x128 color texture used as a pattern (not in menus or 2D backgrounds)?
Regarding streaming from cartridges: from what I understand, usually you transfer the data needed for one area (geometry, models, textures, animations, sounds, music, collision data, etc.) from the cartridge to RAM, so you are limited to the 4/8 MB. If you need more than that, then you have to unload RAM and load new stuff. The unload-load process is normally done behind a black screen (transitioning between areas or going back to menus). With streaming you are still limited by the RAM in terms of what you can show on screen, but you can unload what you don't need and load new things as you need them, with no transitions at all.
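The stream-while-playing idea can be sketched as a double-buffered loader: while the game uses one RAM buffer, the next chunk is copied from the cartridge into the other. All the names and sizes below are invented for illustration, with a byte string standing in for the ROM:

```python
# Double-buffered streaming sketch (illustrative only; names are made up).
CHUNK = 4  # pretend chunk size in bytes

def stream_chunks(cartridge, chunk_size=CHUNK):
    """Yield successive chunks, alternating between two RAM buffers."""
    buffers = [bytearray(chunk_size), bytearray(chunk_size)]
    for i, offset in enumerate(range(0, len(cartridge), chunk_size)):
        buf = buffers[i % 2]             # the buffer not currently in use
        chunk = cartridge[offset:offset + chunk_size]
        buf[:len(chunk)] = chunk         # the "DMA" from ROM into RAM
        yield bytes(buf[:len(chunk)])

rom = b"ABCDEFGHIJ"
print(list(stream_chunks(rom)))  # [b'ABCD', b'EFGH', b'IJ']
```

Only two chunk-sized buffers ever live in "RAM" at once, no matter how big the "ROM" is, which is the whole point of streaming.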
Now, with your comment about bigger cartridges, I understand you can even surpass the RAM limit, using cartridge storage as if it were RAM. That means you can show more things at once than the N64 normally could (if the RCP is powerful enough, of course, hehe). I think Neo Geo games did something like that.
Aside from Indiana Jones what other games use streaming from cartridge? Or is streaming more common than I thought? What makes Indiana Jones special?
Kerr Avon,
I think they didn't allow system link because you need 4 plugs for the thing to work, hehe. Two consoles and two TV's not too far from each other.
Originally GE planned to have that feature, but they didn't seem too serious about it. Probably Nintendo thought having 4 controller ports was better.
And about the 4 controller ports... some PS1 games had simultaneous 8-player multi (I think most of them are sports games) using two multitaps. It would be cool if something like that were possible on the N64. Saturn Bomberman also has 8-player multi.
mistamontiel 007


Joined: 17 Apr 2011 Posts: 849 Location: Miami, FL, CUBA
Posted: Wed Nov 06, 2013 1:13 pm Post subject:
Kerr, ha, don't remind me
In order to splitscreen Destruction Derby 1, you need two PS1s and two copies of DD1. lol.
zoinkity 007


Joined: 24 Nov 2005 Posts: 1730
Posted: Wed Nov 06, 2013 4:02 pm Post subject:
The main CPU runs code. The RSP is another processor with a similar architecture that runs code very quickly without error correction, with the addition of vector mathematics. Just like with dual processors in PCs, you offload stuff to one, and when finished it returns a flag to say it's done.
In other words: the main CPU runs the game code and offloads things onto the RSP. Its use is optional. Typical tasks are video and audio processing, and it can usually do 4-8 different things (like drawing a frame and processing audio) each VSync, all while the CPU does its own thing. Yes, that means you could do triple buffering.
Load/store is done all the time, not just when you aren't displaying anything. The standard library would usually load things into memory for you, but that isn't to say some games didn't read small things like strings and text directly. It wasn't that common though, in part because the restriction isn't much of one and the DMA rate makes it more practical if you're ever going to reuse something. Video was a good case: it's more efficient to read it in blocks of 1-4 KB than to read long stretches from ROM. That said, if you're using some aggressive memory management it's easier to just read from ROM.
Want an off-the-wall example? GE's ramrom tool read an uploaded file directly off ROM. Technically, because dev units use EEPROM or similar, you're also writing to it while recording. Likewise, the GameShark also uses direct execution from its ROM to avoid overwriting the interrupt handler.
Online multiplayer was and wasn't a reality. There was a modem used with the 64DD, but only turn-based and singular update was implemented (a golf game, mahjong game, artist series, and anything else with randnet support). You try to do accurate realtime multiplayer across a dialup modem.
They did have emulators with downloadable games planned. The official emus packaged in (predominantly Japanese-exclusive) games are evidence of this.
Dragonsbrethren Hacker


Joined: 23 Mar 2007 Posts: 3058
Posted: Wed Nov 06, 2013 5:02 pm Post subject:
zoinkity wrote: | You try to do accurate realtime multiplayer across a dialup modem. |
Doom, Quake(world), and pretty much every PC FPS after them in the '90s?
zoinkity 007


Joined: 24 Nov 2005 Posts: 1730
Posted: Fri Nov 08, 2013 3:40 pm Post subject:
The issue with consoles is latency. To remain cost effective they went with slow data transfer rates but with very low latency.
A PC can use additional memory to manage data correction and keep several fallback states. Games that don't handle this well (the Homeworld series, especially HW2, comes to mind) run the risk of desyncing. Consoles at that point didn't have that kind of memory in reserve, so you had to ensure your data was as accurate as possible. The focus on latency, though, means you're looking at the higher-end equipment of the time, which drives up cost. So, to compensate, they fell back to lower data transfer rates.
I should say, the problem hadn't really been solved since the XBAND period.
Programmers didn't usually deal with textures at the low level. High-level functions would load and draw them, and later tools didn't have the texturing constraints that necessitated certain companies writing their own tools.
An unusual example of large textures would be the objects in the Custom Robo series. Parts and characters in particular use a single texture source image, containing within it all the stuff that will be mapped to the model. This is very similar to the single-image texture approach used in modern PC games.
Outside of directly setting pixels in the video buffer, there is no such thing as 2D drawing. At low level, the N64 draws 3D.
Sogun General


Joined: 15 Dec 2010 Posts: 661 Location: Valencia, Spain
Posted: Fri Nov 15, 2013 6:10 am Post subject:
I have more questions about the N64 capabilities.
Theoretically, if you disable some of the N64's special graphics effects (as seen with Z-buffering) the performance of the game will improve. The only games I know of that disabled Z-buffering used a custom microcode to do it, but what about the other graphics techniques (antialiasing, perspective correction, etc.)?
We know that trilinear mipmap interpolation (trilinear filtering) is disabled if you use textures bigger than 2 KB. Also, Rare disabled it on some small textures in PD (although they still get bilinear filtering).
So is there any advantage to disabling all texture filters? Games like Turok and Extreme-G use a cheat to disable filters and transparencies on textures, but I didn't notice any framerate improvement.
Anyway, games look horrible with no filters (TWINE or Destruction Derby 64), but with perspective correction they still look better than PS1 games.
Antialiasing: some games seem to disable this feature. From my experience, if you intersect two polygons and don't split them where they cross (like a plane of water going through the landscape), that intersection won't have any antialiasing.
But Rush 2049 and Rogue Squadron seem to disable antialiasing on all edges, and other games seem to disable antialiasing in hi-res mode as well.
What kind of antialiasing does the N64 do? Do you need a custom microcode in order to disable it? Is there much gain in performance by doing so? (I guess so, judging by how PC settings behave.)
Perspective correction: by no means would I disable this one, but I wonder if, as with Z-buffering, there is another way to do it. It seems Crash Team Racing on the PS1 has some kind of perspective correction, or at least its textures don't warp as badly as in other PS1 games. I have only played that game twice, in multi, more than a decade ago, so I can only judge by YouTube videos, and most of them are likely to use emulation. CTR looks amazing, perhaps the best looking game on the PS1. Back then I wasn't impressed, perhaps because I only played multi.
Does anyone know how they managed to do some sort of perspective correction?
On another note. I was wondering if someone knows about an emulator that can do frame buffer captures in order to know about some native resolutions of games (I know some PS1 emus can do that).
Trevor posted a PD PAL/NTSC low/hi-res comparison a while back, although I think he didn't take the images.
http://fgfc.dyndns-free.com/PD%20FrameBufferRes%20Comparison.bmp
And speaking of framebuffers: Gonetz, the author of the Glide64 plugin, is working on a new OpenGL-based plugin called GLideN64 and has started a blog about it. He has written some very good articles.
http://gliden64.blogspot.com.es/
Dragonsbrethren Hacker


Joined: 23 Mar 2007 Posts: 3058
Posted: Mon Nov 18, 2013 4:22 am Post subject:
Glide64 can give you native-res shots if you set the aspect ratio to "original" in the emulation settings.
zoinkity 007


Joined: 24 Nov 2005 Posts: 1730
Posted: Mon Nov 18, 2013 6:43 pm Post subject:
Antialiasing is set by a bitflag in VI Status. Some visual effects are applied when processing microcode, but others are set at a hardware level. Other hardware settings include the video output mode, bitdepth, and frame buffering mode.
Z-buffering doesn't need to be used. You can set it with flags in any microcode, but it's usually set by default. It avoids a whole stack of calculations, and I'd wager the games that disabled it but have low framerates did so because they had low framerates.
It's difficult to code around it without replacing it with something just as bad, and the best case is to try to avoid cases where you need it. There's one game that did a blue-screen kind of thing (if by blue you mean Tzeentch purple), applying anything that would require alpha correction after doing everything that didn't. It replaced the nasty stuff from buffer 2 with buffer 1, but that also meant they did rampant copy/paste. Even with DMA it's questionable if there was an advantage.
Avoiding any filtering also avoids the calculations involved in that filtering.
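To show what those calculations are, here's a minimal sketch of one bilinear sample versus a nearest-neighbor fetch; real hardware does this in fixed point per pixel, so this is purely illustrative:

```python
# Cost of bilinear filtering per sample, sketched: a weighted blend of
# four texels versus a single nearest-neighbor fetch.
def bilinear(texels, u, v):
    """texels is a 2x2 grid of values; u, v in [0, 1] within that cell."""
    (a, b), (c, d) = texels
    top = a * (1 - u) + b * u       # blend along the top row
    bottom = c * (1 - u) + d * u    # blend along the bottom row
    return top * (1 - v) + bottom * v

def nearest(texels, u, v):
    # One fetch, no blending arithmetic at all.
    return texels[round(v)][round(u)]

cell = ((0.0, 1.0), (1.0, 2.0))
print(bilinear(cell, 0.6, 0.6))  # ~1.2: all four texels contribute
print(nearest(cell, 0.6, 0.6))   # 2.0: single texel, zero blending
```

Trilinear then does this twice (once per mip level) plus one more blend between the two results, which is why skipping filters skips real work.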
Anything hardware can do software can do slower, including perspective correction.
Some games with "hi-res" settings don't actually throw the game into hi-res mode (a hardware setting), and to be honest it isn't clear what the setting does. I'm looking at you, Eurocom. That was a fun surprise when writing the GS screencapture tool.