
AMD unveils world's most powerful desktop CPUs


sman



AMD unveils world's most powerful desktop CPUs

https://www.zdnet.com/article/amd-unveils-worlds-most-powerful-desktop-cpus/

Need CPU power? AMD has you covered with new 24-core and 32-core AMD Ryzen Threadripper processors.

In the never-ending war between the chip giants, AMD has fired a salvo by unveiling what are the world's most powerful desktop processors: the new 24-core AMD Ryzen Threadripper 3960X and 32-core AMD Ryzen Threadripper 3970X.

 

These 3rd-generation Ryzen Threadripper processors are built on AMD's 7-nanometer "Zen 2" core architecture, and both chips feature 88 PCIe 4.0 lanes with extraordinary power efficiency.

On the performance front, AMD claims that the new 32-core Ryzen Threadripper 3970X offers up to 90 percent faster performance than the competition.

And you don't have to wait months for these processors to land, as they both will be available starting Tuesday, November 19.
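For a rough sense of what 88 PCIe 4.0 lanes means in raw throughput, here's a quick back-of-the-envelope calculation (not from the article; the 16 GT/s transfer rate and 128b/130b encoding are the published PCIe 4.0 spec values):

```python
# Rough idea of what 88 PCIe 4.0 lanes means in raw bandwidth (per direction).
# PCIe 4.0 runs at 16 GT/s per lane with 128b/130b line encoding.

GT_PER_LANE = 16          # gigatransfers per second per lane
ENCODING = 128 / 130      # usable payload fraction after line encoding

def pcie4_gbytes_per_sec(lanes):
    # bits/s per lane -> bytes/s, multiplied by the lane count
    return GT_PER_LANE * 1e9 * ENCODING / 8 * lanes / 1e9

print(f"1 lane         : ~{pcie4_gbytes_per_sec(1):.2f} GB/s")   # ~1.97 GB/s
print(f"x16 slot (GPU) : ~{pcie4_gbytes_per_sec(16):.1f} GB/s")  # ~31.5 GB/s
print(f"88 lanes total : ~{pcie4_gbytes_per_sec(88):.0f} GB/s")  # ~173 GB/s
```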

 


I'm far more anxious to see Zen 3 and Zen 4 based on the leaks and rumors that have come out so far: massive IPC gains, rumors of 3~4 SMT threads per core rather than the usual 2 that HT/SMT-enabled chips have had since the beginning, and huge potential increases in both single-threaded and multi-threaded application performance.  The next few years for PC hardware should be very interesting indeed, and AMD isn't the only one pushing things forward either.  The latest leaks from Intel indicate they have big things in the works, including 3D-stacked chips with high-speed RAM cache integrated on-die (most likely some form of HBM, similar to the VRAM used on AMD's Vega GPUs and NVIDIA's Volta AI GPUs).

Of course, Intel is also coming out with its Xe discrete GPUs over the next few years, so we might finally have some decent competition for NVIDIA (assuming Intel can do better than AMD has on the high end over the past several years).  AMD's new Navi/RDNA GPU architecture looks really good so far too, and I believe that if they release a big-die GPU soon, it should have no trouble trouncing the mighty RTX 2080 Ti, but only time will tell.  Talk from engineers inside the Radeon team indicates they have been working on a GPU they refer to internally as 'the NVIDIA Killer', so hopefully that's not just idle chatter and false hype, because NVIDIA has driven GPU prices up to absurd levels since Turing launched, thanks to the lack of competition from AMD and NVIDIA's monstrous inventory issues following last year's GPU mining boom crash that Jensen ended up basically lying to his investors about.

Also, if AMD starts applying some of the tech from the semi-custom side of its business to the PC, we might see seriously powerful hardware in thin-and-light form factors never dreamt of before.  Think of a lighter version of the 'PS5/Xbox Scarlett' silicon inside a netbook with a 1080p/1440p screen, capable of full AAA gaming at ultra settings but with the battery life of a regular 15" laptop in a 14" or smaller chassis.  Obviously power usage would go up a lot during gaming, but idle efficiency should be good enough for on-the-road use for work or school without the battery dying too quickly; then you come home, plug it into power, and game on it all night without worrying about it overheating, thanks to the efficiency of the powerful APU inside.

EPYC looks sick too: over twice the cores of Intel's highest-end chip, for something like a third of the cost, and with far better power and thermal efficiency.  It's kind of nuts, but unless Intel gets its act together real soon, we might actually see AMD dominate the server market for the first time in history, and considering that market accounts for a huge share of Intel's income, that could really hurt team blue.  With all of the recent security vulnerabilities being discovered in Intel chips, many IT admins and big companies are already planning to switch to AMD next year, and when a company makes a change like that, it doesn't just flip-flop back to its old provider as soon as the problems are cleared up.  Any major corporation that switches to AMD is likely to keep using and upgrading that hardware for at least 5 years or more, meaning guaranteed sales for AMD not only now but continuing well into the future, regardless of what Intel does.

And of course ARM is starting to push its way into the mobile PC market, another space that has been largely dominated by Intel since time immemorial.  With even Microsoft looking to develop Windows on ARM, the world of x86 computing, especially in the mobile space, might look very different in a few years.


AMD Zen 3 CPU Architecture Rumored To Provide Significant IPC Uplift Over Zen 2


Summed up, we are looking at a new cache hierarchy, faster clock speeds, and more than an 8 percent IPC uplift when Zen 3 arrives. Good stuff, if all of that comes true.

https://hothardware.com/news/amd-zen-3-cpu-architecture-8-ipc

 


2 hours ago, AdvancedSetup said:

ARM is going to end up stomping AMD and Intel eventually I think unless something stops them on their journey

Yep, I saw a Coreteks video all about this.  If their deal with Microsoft works out and Windows gets ported to a new architecture, it's only a matter of time before makers of x86/x64 CPUs become obsolete.  I think this is also one of the reasons AMD is investing so much into its semi-custom business (like the consoles and whatnot), and also why Intel has been pursuing mobile with its low-powered chips.

Of course, NVIDIA is likely sweating more than anyone, as I'm certain they're aware their days are numbered in the PC space once integrated GPUs and APUs become powerful enough to push 1080p AAA games at max settings, and eventually 1440p and then 4K.  There has been a lot of stagnation in the iGPU/APU space for the past 10 or so years, but with AMD pushing things as they have, Intel is finally improving its integrated graphics performance and investing serious research and money into graphics, with its Xe series planned to start releasing over the next couple of years.  That advanced graphics technology is going to trickle down into the integrated graphics on Intel's future CPUs, and AMD is going to have to put more than a handful of CUs (Compute Units) and shaders (the 'cores' in GPUs used to render graphics in games and other 3D applications) into its APUs to keep up, especially since it hopes to compete with Intel in the mobile space for thin-and-light laptops; an area Intel has dominated thanks to its basic integrated graphics and low-powered CPU solutions, which use a lot less power than a CPU plus a discrete GPU.

There are also rumors that ARM and some other vendors plan to get into the GPU market, so it will no longer be just AMD/Radeon and NVIDIA.  With Intel also entering the space in the next few years with its Xe GPUs, graphics technology should start advancing faster than it has in decades, much like what AMD's Ryzen chips have done to accelerate the development of CPUs.

I think in 5 short years PC hardware is going to look and work completely differently than it does now, in ways that will be a massive departure from how things have been for the past couple of decades.  The lines between CPU and GPU are getting blurred, and the lines between volatile storage (RAM) and non-volatile storage (HDDs/SSDs) are also blurring with technologies like M.2/NVMe, Intel's proprietary Optane, and projects under development by the likes of Samsung that push past the limitations of existing storage solutions.  A day may come when there is no real difference between system RAM and storage thanks to increases in bandwidth, and the cache in CPUs may eventually be replaced by technology such as HBM2 memory, which offers much higher speed and bandwidth than existing DDR4 memory.  It's already used on GPUs, and I think it's only a matter of time before it starts getting integrated into CPUs; we've already seen leaks and roadmaps from both Intel and AMD showing 2.5D and 3D stacked chips containing CPU cores, integrated memory, integrated graphics and more, to finally eliminate many of the communication bottlenecks that currently limit PC performance.
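To put rough numbers on that bandwidth gap, here's a back-of-the-envelope sketch; the transfer rates are typical published figures (DDR4-3200 and a 2 Gbps-per-pin HBM2 stack), not anything measured on the specific parts discussed above:

```python
# Back-of-the-envelope peak-bandwidth comparison (illustrative numbers only;
# real-world figures vary with clocks, timings and controller efficiency).

def ddr4_bandwidth_gbps(transfer_rate_mts, channels=2, bus_width_bits=64):
    """Peak DDR4 bandwidth in GB/s: transfers/sec * bytes per transfer * channels."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

def hbm2_bandwidth_gbps(pin_rate_gbps=2.0, stack_width_bits=1024, stacks=1):
    """Peak HBM2 bandwidth in GB/s: per-pin rate * interface width in bytes * stacks."""
    return pin_rate_gbps * (stack_width_bits / 8) * stacks

print(f"DDR4-3200, dual channel : {ddr4_bandwidth_gbps(3200):6.1f} GB/s")  # ~51.2 GB/s
print(f"HBM2, one 1024-bit stack: {hbm2_bandwidth_gbps():6.1f} GB/s")      # ~256 GB/s
```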

For anyone interested, here are a few channels worth checking out:

https://www.youtube.com/channel/UCX_t3BvnQtS5IHzto_y7tbw/videos
https://www.youtube.com/user/adoredtv/videos
https://www.youtube.com/channel/UCa5uMMs0cVg9opJt_Kw3HLA/videos
https://www.youtube.com/channel/UCRPdsCVuH53rcbTcEkuY4uQ/videos


The first one (quantum computing) will likely never replace traditional binary systems due to the massive difference between how functions and operations work within a quantum computer and a standard binary (1s and 0s) device.  Quantum computers are suited to complex problems with many possible answers that traditional computers are not capable of calculating, but they are no good for standard types of software where the developer expects specific output from the code and commands fed into the system.  So quantum will remain a separate tool suited to specific purposes: things like advanced AI, complex algorithmic solutions, and complex models for physics, environmental data analysis and other problems that cannot easily be quantified or calculated using traditional computing.

As for the latter two, yes, both are viable possibilities, but each is much farther down the road than the next 5 years.  Optical is probably at least 20~30 years out, if not more, and DNA computing is mostly in the experimental/hypothetical phase at this point, so it will likely take much longer for that technology to see the light of day; even then it may end up like quantum, relegated to specialized, complex scientific calculations and applications.

I wish we were closer to optical, because it shows the most promise for boosting the sheer speed of traditional computing, and it would have the added benefit of virtually eliminating the heat concerns that plague devices today and greatly limit how much hardware you can fit inside a small form factor device.  Since photons don't dissipate heat through electrical resistance the way electrons moving through copper and silicon do, components that use light rather than voltages to process data could run nice and cool instead of cranking out heat in proportion to the power being pumped into them, the way existing components do.  Optical also bypasses the limitations of copper and silicon transistors, potentially relieving us of the impending physical limits of transistor shrinks; essentially the end of Moore's Law, which should arrive within the next 5~10 years at our current rate of node shrinks.  Silicon transistors likely cannot shrink much below 3 nanometers while still preventing electrons from 'jumping' the gate, and that leakage across such tiny features would lead to constant errors and inaccurate calculations, since the CPU could no longer reliably determine whether a transistor is on or off (1 or 0).  They are working on a new type of gate that shows some promise to resolve this, but I'm not confident they'll be able to shrink it much further than maybe one or two nodes before it too runs into the limits of physics.

That said, they may find some new material besides silicon that resists leakage more robustly at smaller sizes, buying at least a few more nodes beyond the physical limits of silicon.  But I'm hopeful that by the time such materials become a necessity, we will already have moved most components to optical; especially the RAM, storage, the CPU (or at least its cache), and data interfaces like PCIe that connect devices to the system, particularly the GPU.  The more you eliminate the bottlenecks between those components and the CPU cores, the faster the device will perform for all tasks, including gaming; an area I am particularly enthusiastic about.

Here's a good video that takes a look at the past and possible future of computing/transistor technologies: 

 


By the way, with regards to the immediate future, take a look at Foveros, a 3D stacking technology that Intel has been working on.  It isn't exactly optical computing, but it still has the potential to scale well beyond existing CPU, memory throughput and GPU technologies by shortening the paths between components, stacking them atop one another in a single package.  It allows integration of multiple types of 'chiplets': standard x86/x64 cores (like those found in desktop CPUs today), smaller ARM chiplets for specialized low-power workloads, GPU cores for graphics and additional compute power, and high-speed HBM memory that essentially extends the CPU's and GPU's cache/memory subsystems and their associated throughput and bandwidth well beyond what exists in systems today.  Integrate a decent amount of NAND on a high-bandwidth bus and you've essentially got an entire system within a single package no larger than a standard CPU.  It has the potential to make devices like powerful tablets and NUCs capable of resource-intensive tasks, such as high-resolution/high-framerate AAA gaming, virtual reality, and professional productivity work (4K/8K video editing and encoding, 3D modelling, CAD/engineering, etc.).  And they could likely build smaller, lower-powered versions that fit in even smaller devices like cell phones, assuming they are efficient enough not to need a massive amount of power and cooling.


good info @exile360 tks.. 

On DNA computing and where it stands, you can find more info at https://interestingengineering.com/what-is-dna-computing-how-does-it-work-and-why-its-such-a-big-deal

Quote

As demonstrated with Adleman’s paper, the major advantage of DNA computing over classical computing—and even quantum computing to an extent—is that it can perform countless calculations in parallel. This idea of parallel computing isn’t new and has been mimicked in classical computing for decades.

When you run two applications on a computer at the same time, they aren’t actually running concurrently; at any given time, only one instruction is being carried out. So if you are listening to music and shopping online using a browser, the computer is actually using something called context switching to give the appearance of concurrency.

It runs an instruction for one program, saves the state of that program after the instruction is carried out, and removes the program from active memory. It then loads up the previously saved state of the second program, runs its next instruction, saves its new state, and then unloads it from active memory. It then reloads the first program to carry out its next instruction and so on.

By making millions of incremental steps a second across different programs, the appearance of concurrency is achieved, but nothing is ever actually being run in parallel. DNA computing can actually carry out these millions of operations at the same time.

Unquote
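
Not from the article, just a toy Python sketch of the context-switching idea described in the quote above: each "program" is advanced one instruction at a time and its state is saved between turns, so the concurrency is only apparent:

```python
# Toy illustration of context switching (not real OS scheduling): two "programs"
# are advanced one instruction at a time, in turns, so nothing runs in parallel.

def program(name, instructions):
    for i in range(1, instructions + 1):
        # Each yield is one "instruction"; the generator's paused frame is the saved state.
        yield f"{name}: instruction {i}"

def scheduler(programs):
    # Round-robin: run one instruction, "save" the program, move on to the next one.
    while programs:
        prog = programs.pop(0)
        try:
            print(next(prog))      # execute exactly one instruction
            programs.append(prog)  # put it back in the queue, state preserved
        except StopIteration:
            pass                   # program finished, drop it

scheduler([program("music player", 3), program("web browser", 3)])
```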

 

Quote

DNA computing then is best thought of as a complement of quantum computing, so that when paired together and driven by a classical computer acting as a Singleton-style manager, the kinds of dramatic increases in computational power that people are hoping to see in the future actually become realistically possible.

Unquote

Quote

just this month, computer scientists at the University of California at Davis and Caltech have synthesized DNA molecules that can self-assemble into structures by essentially running their own program using six-bit inputs.

Microsoft even has a programming language for DNA computing that can help make DNA computing practical once the technology of bio-processors progresses to the point that it can run more sophisticated algorithms. In fact, Microsoft is planning on introducing DNA computing to its cloud services by 2020 and actively developing a DNA data storage to integrate into its cloud services.

It is likely that these advances will be realized much quicker than advances in quantum computing. Quantum computing requires sophisticated machinery, superconductors, and extremely cold conditions to keep qubits stable enough to do any actually useful computational tasks, and unless we develop a material that can act as a superconductor at room temperature, they won’t be making their way into our computers anytime soon.

DNA computing, meanwhile, uses DNA that we have become expert at manipulating to the point of replacing a single gene of a DNA strand through CRISPR. The materials needed to synthesize DNA molecules are cheap and readily available and remain stable at room temperature and beyond. What DNA Computing is potentially able to achieve given DNA’s resiliency and biological parallelism represents an essential step towards the future of computing.

Unquote

 

 

  • Root Admin

@sman

Can you read your post easily? As I've said in your other topic, since you've been posting more lately, please do not copy directly from a web page if it can be avoided. The fonts, links, etc. often come through in the wrong fonts for the forum. I was actually on another forum recently where they ban users who keep posting odd fonts. I'm not saying we're going to do anything like that, but please be aware that most people find it difficult to read when you use too many different fonts, colors, etc., so please try to keep it to a minimum. When copying from a web page, paste it into Notepad first, then copy it from Notepad so that all the oddball web formatting is removed for you.

 


 

Thanks

 


Now I find the forum is completely in 'Dark mode'; has it been switched over?

And on the web content preview: when I created new topics with only a link, I was advised to give some 'preview' of the contents, and that is what prompted me to include web content previews with my posts, which is the crux of the problem now. So is it OK to avoid the content preview and go with links only? tks.

