Sorry @Dealer_Gaming, @Optimus_Code, @LeviathanGamer2, @CJGrannell, @RedGamingTech, I want to make one thing totally CLEAR: I am NOT a fan of any one company. I liked my NES, Sega Mega Drive, Xbox and PS. But we should probably talk outside of Twitter, because replies are delayed. (1/x)
Yet I will try, because I want to clear something up. If we want to discuss things, we have to agree on a common baseline, otherwise we'll all be talking about different things. First thing: Big O (Landau notation). In computing this can be calculated for three aspects, each on its own or all together, afaik. (2/x)
We can use this to get a rough (not fully measured) idea of how our ALGORITHM will behave. An algorithm = a SET of instructions. Those sets of instructions come from different classes: APIs on different levels. For example: high-level API (DX12 vs GNMX), low-level API (?/GNM). (3/x)
From here on I assume we agree on those basics, otherwise my chain of arguments won't be valid. All of you are very welcome to call out BS in this thread; I love it when people point out BS I'm producing, it's the only way I can get better FAST. (4/x)
1st: calculation time, the time it takes to solve a problem via an algorithm. This can be reduced, and the solution thereby made more efficient, by using a different algorithm. Like Bubblesort vs. Quicksort (easy example). So you put 1000 random numbers into both algorithms, and both will... (5/x)
...have the same outcome (hopefully). But the Big O for those thousand numbers is different. n = amount of numbers. I'm not googling, just trying to remember the Big Os for both, maybe I'm wrong now: Bubblesort = O(n²), Quicksort = O(n · log n) on average. Quicksort is vastly... (6/x)
...more efficient in the number of cycles it needs. BECAUSE we assume both run on the SAME machine, we can assume Quicksort finishes within some timespan x. Both are comparable, because of the same system calls (and algorithms), same RAM (storage), etc. (7/x)
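A minimal sketch of that comparison in plain Python (counting comparisons instead of wall-clock time; the counting scheme is my own illustration, not a benchmark):

```python
import random

def bubble_sort(a):
    # O(n^2): repeatedly swap adjacent out-of-order pairs.
    a, ops = list(a), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            ops += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, ops

def quick_sort(a):
    # O(n log n) on average: partition around a pivot, recurse.
    if len(a) <= 1:
        return list(a), 0
    pivot = a[len(a) // 2]
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    ls, l_ops = quick_sort(left)
    rs, r_ops = quick_sort(right)
    return ls + mid + rs, l_ops + r_ops + len(a)  # ~n comparisons per level

nums = [random.randint(0, 10_000) for _ in range(1000)]
b_res, b_ops = bubble_sort(nums)
q_res, q_ops = quick_sort(nums)
assert b_res == q_res  # same outcome...
print(f"Bubblesort: {b_ops} ops, Quicksort: {q_ops} ops")  # ...very different work
```

On the same machine, fewer operations translates directly into less time, which is the whole point of the comparison.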
2nd: storage, the amount of storage used by that ALGORITHM to solve the given problem. So I could say: I will load every single picture from a given folder into RAM (storage) and find the picture I'm seeking by name. But I could also say: I will only load their names... (8/x)
...into RAM, search that list, and only after that load the needed file via its pointer from my file table. I assume (depending on the size of the images) it should normally work more efficiently in the second scenario. Therefore the second Big O (for storage) should be better. (9/x)
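A rough sketch of those two strategies (plain Python; the folder and file names are made up for illustration):

```python
from pathlib import Path

def find_loading_everything(folder, target):
    # Wasteful: every image's bytes go into RAM; space grows with total data size.
    loaded = {p.name: p.read_bytes() for p in Path(folder).iterdir()}
    return loaded.get(target)

def find_by_name_first(folder, target):
    # Frugal: only the names (strings) live in RAM; read the one matching file last.
    for p in Path(folder).iterdir():
        if p.name == target:
            return p.read_bytes()
    return None

# picture = find_by_name_first("pictures", "sunset_042.jpg")  # hypothetical names
```

Both return the same picture; the second keeps only strings in RAM until the final read, which is the better storage Big O described above.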
3rd: BW (bandwidth). This is a given factor, and maybe the only fixed thing that isn't debatable or changeable in a computing system. We cannot change it, that's just physics: if you have bigger water pipes, and more of them, you can push more water through them at the same pressure. (10/x)
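The pipe itself is fixed, but a tiny worked example shows why moving less data is equivalent to widening it (numbers are purely illustrative, not any console's specs):

```python
data_gb = 8.0   # data we need to move (illustrative)
bw_gbps = 4.0   # fixed physical bandwidth, "the pipe" (illustrative)

print(f"Raw: {data_gb / bw_gbps:.1f} s")  # 2.0 s

# We can't widen the pipe, but we can shrink the water:
# 2:1 compression halves the bytes on the wire, so it halves the time,
# i.e. the EFFECTIVE bandwidth doubles.
print(f"With 2:1 compression: {data_gb / 2 / bw_gbps:.1f} s")  # 1.0 s
```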
That's why I said @Zalker87 and @Optimus_Code were only partly right: explicitly, only if you consider BW alone. But to get a holistic view of the problem, you have to consider ALL three parts. That's where @Optimus_Code and I have a disagreement so far; I hope to clarify it. (11/x)
I've stated numerous times that both go entirely DIFFERENT ways, so their philosophies differ. This time it SEEMS to me that MS has worked more closely with AMD than SONY did (classically it was the other way around). That's where I agree with @CJGrannell that SONY was caught off guard by MS. (12/x)
Now to the more important thing in this much-too-long thread: to me, MS HAS different algorithms, whether they're baked into silicon (Sampler Feedback), get a whole dedicated HW block (the decompression block), or live on the SW side. MS has gone for more efficient SSD... (13/x)
...space usage + tries to be efficient with PHYSICAL BW by reducing the need for it, with algorithms (sets of instructions) baked into HW (that's right @Optimus_Code, 100%), but in SW too, and that differs from SONY's: DirectStorage, a custom VRAM solution, BCPack, ML upscaling (via DirectML), VRS (API = SW), DX12U vs GNM. (14/x)
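To make the Sampler Feedback idea concrete, here is a purely conceptual sketch (all names are mine, this is NOT the DirectX API): the GPU reports which texture tiles it actually sampled, and only those get streamed into RAM.

```python
resident = {}  # (texture_id, mip_level) -> tile bytes currently in RAM

def stream_frame(sampled, load_tile):
    # 'sampled' = set of (texture_id, mip_level) keys the GPU touched last frame.
    wanted, have = set(sampled), set(resident)
    for key in wanted - have:
        resident[key] = load_tile(key)  # SSD -> RAM only on demand
    for key in have - wanted:
        del resident[key]               # evict what's no longer sampled
```

Instead of keeping whole mip chains resident "just in case", RAM and SSD bandwidth are only spent on what is actually visible; that's the "reducing the need for BW" argument in code form.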
So they will try to use lossless and lossy compression (was it zlib (lossless) and BCPack (lossy)?), decompress it with the DCB (reducing SSD I/O needs), reduce the amount of needless data in RAM (SF), and try to make a pooled portion of the SSD available (DirectStorage + SFS). (15/x)
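The lossless half of that is easy to demo with Python's built-in zlib (DEFLATE); BCPack isn't publicly available to test, so this only shows the principle:

```python
import zlib

payload = b"texture block " * 4096        # stand-in for asset data

packed = zlib.compress(payload, level=9)  # what would sit on the SSD
unpacked = zlib.decompress(packed)        # what a HW decompression block undoes

assert unpacked == payload                # lossless: bit-exact round trip
print(f"{len(payload)} B -> {len(packed)} B "
      f"({len(payload) / len(packed):.0f}:1, that much SSD I/O saved)")
```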
Therefore it's critical that the CPU and GPU are more powerful than Sony's. Sony has superior BW. Kraken is awesome too. But from what I've heard, BCPack should be better than Kraken + BCx (1-7). Sony hasn't confirmed any FORM of SFS. Maybe they have one. (16/x)
With that BW, they don't NEED to be as efficient in BW usage as MS. Because: "we need more iron" (sorry for that one :) ). SONY needs power in the graphics card too, just not as much as MS. Therefore faster caches are more important to them. (17/x)
Because there's simply more that has to happen on the MS GPU + CPU. Which one is better? Well, a great British guy I know would say: well, errr... (@RedGamingTech). And on this we should all agree: we need to know more. Me, from my "taste" in doing things... (18/x)
...I think MS has done a "better" job, because we have to consider the sheer size limitations of that storage. That's where @LeviathanGamer2 is absolutely spot on, in MY EYES (which doesn't make it a fact). I frankly do not know... (19/x)
...what the most critical part of that equation, the DEVELOPERS, will do with these two absolutely awesome machines. I was frankly baffled at how differently they went this time, despite both being x86. What will produce better games? System + API + DEV, so I don't know for now. (20/20)