After the U.S. government imposed crippling sanctions against select Chinese high-tech and supercomputer companies through 2019 and 2020, firms like Huawei had to halt chip development; it is impossible to build competitive processors without access to leading-edge nodes. But Jiangnan Computing Lab, which develops Sunway processors, and the National Supercomputing Center in Wuxi kept building new supercomputers and recently even submitted results from their latest machine for the Association for Computing Machinery’s Gordon Bell Prize.
The new Sunway supercomputer built by the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) features approximately 19.2 million cores across 49,230 nodes, reports Supercomputing.org. To put that number into context, Frontier, the world’s highest-performing supercomputer, uses 9,472 nodes and consumes 21 MW of power. Meanwhile, the National Supercomputing Center in Wuxi does not disclose the power consumption of its latest system.
Interestingly, the new supercomputer seems to be based on the already-known 390-core Sunway processors that derive from the Sunway SW26010 CPU and have been around since 2021. The new system therefore increases the number of processors, but not their architectural efficiency, so its power consumption is likely to be gargantuan. Meanwhile, the actual performance of the machine is unknown, since scaling out has its limits even in the supercomputer world.
The National Supercomputing Center in Wuxi has not disclosed performance numbers for its new supercomputer, and it is hard to make any estimates at this point. We call it ‘exascale’ because its predecessor, the Sunway Oceanlite from 2021, was estimated to offer compute performance of around 1 ExaFLOPS.
Meanwhile, the engineers revealed the workload they used the machine for. Apparently, the group created a new large eddy simulation code to address compressible flows in turbomachinery. They applied it to a NASA grand challenge problem, using an advanced unstructured solver on a high-pressure turbine cascade with 1.69 billion mesh elements and 865 billion degrees of freedom (variables).
Given how complex the simulation is, it is likely that the machine is indeed quite powerful. Meanwhile, there is no word on whether the simulation was conducted at FP64 precision, or whether precision was sacrificed for the sake of performance.
Right now at Amazon, users can find the AMD Ryzen 7 5800X3D CPU for its lowest price to date. This processor has been priced around $320 lately but right now is marked down to just $269.
We reviewed the AMD Ryzen 7 5800X3D when it first debuted and recognized it at the time as one of the fastest processors for its price. Today’s discount only sweetens the deal. Our biggest complaints were its lack of overclocking support and integrated graphics. It offers great performance and, at this price, is well worth a look if you’re on an AM4 motherboard.
The AMD Ryzen 7 5800X3D has 8 cores and a total of 16 threads. It has a base speed of 3.4 GHz but can reach 4.5 GHz with Max Boost enabled. According to the official specifications from AMD, the Ryzen 7 5800X3D has 96MB of L3 cache.
Users can expect PCIe 4.0 support as well as the ability to install up to 128GB of DDR4-3200. This processor does not come with a stock cooler, and you’ll be better off using a liquid cooler to get the most out of it performance-wise.
Visit the AMD Ryzen 7 5800X3D product page at Amazon for purchase options.
Intel recently disclosed Downfall, a security vulnerability that affects multiple generations of Intel processors, some of which used to be the best CPUs on the market. The chipmaker has rolled out an updated microcode with a fix for the flaw. However, it has raised some alarm, since there is a claimed potential performance impact of up to 50% in AVX2 and AVX-512 workloads that use the Gather instructions.
As a quick recap, Downfall (CVE-2022-40982) is associated with a memory optimization feature inside Intel processors. Downfall exploits the Gather instructions, which speed up fetches of data scattered across different places in memory. Gather inadvertently exposes internal hardware registers to software, allowing it to tap into data held by other software. Downfall affects Intel mainstream and server processors spanning the Skylake through Rocket Lake microarchitectures. Therefore, you’re likely affected unless you own one of Intel’s more recent processors, such as Alder Lake, Raptor Lake, or Sapphire Rapids. Intel has put up an extensive list of all the affected chips.
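At the heart of Downfall are the vector gather instructions, which load many scattered memory elements into a single register at once. A minimal scalar sketch of that behavior (the function and names here are ours, purely for illustration; the real instructions process all lanes in hardware, and it is during that hardware step that stale internal buffer data can leak):

```c
#include <stddef.h>

/* Scalar illustration of what an AVX2/AVX-512 gather instruction does:
 * collect values scattered across memory at per-lane indices into one
 * contiguous destination. Hardware does every lane in one instruction;
 * Downfall observes leftover internal buffer contents during that step. */
void scalar_gather(int *dst, const int *base, const int *idx, size_t lanes)
{
    for (size_t i = 0; i < lanes; i++)
        dst[i] = base[idx[i]];  /* one indexed load per lane */
}
```

Because the single-instruction form is so much faster than a scalar loop like this, gathers are common in vectorized AI, HPC, and media code, which is why the mitigation's overhead shows up in exactly those workloads.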
The main concern is how the mitigation will affect the performance of Intel processors. Leading Linux publication Phoronix has evaluated the impact of the Downfall mitigations on Linux. The news outlet tested a pair of Xeon Platinum 8380 (Ice Lake) processors, a Xeon Gold 6226R (Cascade Lake) chip, and a Core i7-1165G7 (Tiger Lake) part. Phoronix utilized diverse real-world software packages that form part of the Intel oneAPI software.
The two Xeon Platinum 8380 chips were around 6% slower in OpenVKL 1.3.1. With OSPRay 2.12, Phoronix recorded performance hits of up to 34%. The mitigations also caused significant decreases in AI and scientific workloads, such as Neural Magic DeepSparse 1.5, Tencent NCNN, and QMCPACK, with reductions of up to 17%.
The Xeon Gold 6226R benchmark results revealed similar performance deterioration. The Cascade Lake chip lost up to 33% in OSPRay 2.12 and up to 20% in Neural Magic DeepSparse 1.5.
As for the Core i7-1165G7, Phoronix only ran three benchmarks on it, but they were enough to show the performance degradation from the Downfall mitigations. For example, the Core i7-1165G7 delivered 11% lower performance in OpenVKL 1.3.1. In OSPRay 2.12, the mitigations shaved between 19% and 39% off the Core i7-1165G7’s performance.
The good news from Phoronix’s initial set of results is that the performance decrease from the Downfall mitigation was lower than Intel’s forecasted 50% overhead. However, the bad news is that the performance penalty is still significant. AVX instructions aren’t limited to AI or HPC workload tests, either. You can find them in other workloads, such as video encoding or transcoding. Logically, it would be interesting to see which workloads are negatively impacted by the Downfall mitigations. From Phoronix’s preliminary tests, HPC workloads are the most affected.
The microcode update isn’t mandatory. If you want to turn off the mitigation, Intel offers an opt-out mechanism to restore your processor’s performance in vectorization-heavy workloads. Then there is the debate on the complexity of successfully carrying out a Downfall attack. The exploit sounds like a difficult feat overall, but ultimately, it depends on whether you value your security more than performance or vice versa.
The prospects of LK-99, the controversial substance that its discoverers claim is a room-temperature superconductor, living up to its hype are dimming. In recent days, many scientists have synthesized the substance and published studies showing that it does not have superconductive properties, at least in the form they tested. However, two papers published to the pre-print scientific repository Arxiv posit that LK-99 could, maybe, prove to be a superconductor if only some doping were applied.
Doping LK-99 is exactly what it sounds like: you take something that wasn’t in the original recipe (in this case, for lead-apatite) and you put it inside the system to improve its performance. In this case, the new results, penned by Liang Si et al and Korotin et al, find reason to believe that doping LK-99 by inserting extraneous atoms (which weren’t supposed to exist in the original system) might result in the claimed superconductivity. We say “might” because they haven’t actually created and doped the substance.
Although initial analysis from other teams found that LK-99 already had the flat ‘Fermi’ bands electrons need in order to cross unimpeded and frictionlessly (as they would have to for the zero-resistance claim to be true), others have theorized that it needs some help getting there. There is still work to do to even understand LK-99: achieving the exact required configuration of particles and quantum particles (quanta) is hard.
There are two kinds of LK-99 researchers right now: the experimentalists (who have attempted to synthesize LK-99 and physically verify its properties) and the software simulation wizards (they’re all still quantum scientists, just to be clear). Those who have actually attempted to synthesize LK-99 and then analyze the samples have found no real proof of superconductivity. Some have shown the substance magnetically levitating, and researchers at Southeast University in Nanjing claim that they measured zero resistance at -163 degrees Celsius.
The computer models are more bullish, positing that there are scenarios where this could become a superconductor. None of them prove so conclusively – they’re just simulations, after all. And these simulations too have to make assumptions based on the available LK-99 data.
Confirming this gap between theory and practice, there are also new results on the experimental front. Andrew McCalip’s work on LK-99 synthesis resulted in a small LK-99 fragment, which was analyzed with the help of the USC Materials Consortium. The conclusion is what we’d expect, as we’ve covered here at Tom’s Hardware: the biggest issue stems from the experimental synthesis process.
Andrew McCalip explains how, even using 99.99%-purity precursors (precursors being the initial chemical agents made to react with one another to generate the final compound), there were still enough impurities that the resulting LK-99 included micrograms of simple iron (Fe). Iron being a ferromagnetic material, its presence (even in trace amounts) was enough to induce magnetic responses in the LK-99 samples they cooked. When you’re dealing with the quantum realm, even a stray, unwelcome subatomic particle can throw a wrench into the works.
There’s a chance that this is what the end of the LK-99 road looks like: a field of failed replications finding solace in the presence of other elements that might be the cause of LK-99’s behavior. But there’s also a chance that this isn’t the end of the line, and that increased scrutiny and improvements to the synthesis process (perhaps paired with some additional data from Korea) will bring theory and practice closer together.
Published results looking at LK-99 will only increase in the foreseeable future, as different teams across the world finish their replication and simulation attempts. It remains to be seen how history remembers LK-99; perhaps it’s just the latest overhyped disappointment. Or maybe doping will change the game. However it ends, it was a fun run where material science inspired the public’s imagination.
As stocks of the many Raspberry Pi models steadily improve, we’re now coming to the end of a difficult period for the Raspberry Pi community and the world at large. So how bad was it, and how is it improving? To find out, we contacted RPiLocator, a service that sprang up to help fans locate precious Raspberry Pis. Thanks to the analytics data provided by RPiLocator, we can see the highs and lows of the last year.
Using the number of restocks as a proxy for product availability, we can see that Raspberry Pis as a whole were 271% more available in July 2023 than in July 2022 (394 restocks versus 106). And among particular models, the Pi Zero 2 W grew the most, going from 1 to 56 restocks year-over-year.
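For those checking the math, the headline availability figure follows directly from the restock counts above; this tiny helper (our own illustration, not part of RPiLocator) reproduces it with the same integer rounding the text uses:

```c
/* Percentage increase in restocks between two periods:
 * (new - old) * 100 / old, in integer math.
 * July 2022 -> July 2023 overall: (394 - 106) * 100 / 106 = 271. */
int percent_increase(int old_count, int new_count)
{
    return (new_count - old_count) * 100 / old_count;
}
```

So percent_increase(106, 394) gives the 271% figure for all models combined, and percent_increase(1, 56) for the Zero 2 W shows why it is the biggest grower by far.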
We have chosen to look back at May, June and July 2022 and compare those months to the same months in 2023, to see how things have changed for the better after million-unit months were predicted. We also take a look at the holiday season of December 2022, when Raspberry Pi CEO Eben Upton announced that more units would hit the sales channels.
Some caveats about our data: we’re measuring restocks, not sales or units produced, and the data covers all of the official resellers across the world, so stock may have been available in only one or two countries (and not necessarily the U.S. or UK). We can also only measure “weeks in stock”, not hours or days, so a model counted as in stock was available somewhere for some portion of a week, which could have been less than a day. And because Raspberry Pis now remain in stock for longer, fewer restocks are required.
Two boards that have remained in stock throughout have been the Raspberry Pi Pico and Raspberry Pi Pico W. These boards may not have gigahertz CPUs and oodles of RAM, but for microcontroller projects they are cheap, easy to source and very easy to use.
Restock Data at a Glance

Model                          | Total Restocks | Weeks in Stock | Weeks out of Stock
Raspberry Pi 4                 | 918            | 76             | 2
Raspberry Pi Compute Module 4  | 388            | 70             | 8
Raspberry Pi 3                 | 294            | 55             | 23
Raspberry Pi Zero 2 W          | 210            | 52             | 26
Raspberry Pi Zero W            | 236            | 29             | 49
Raspberry Pi Compute Module 3  | 11             | 7              | 71
While we don’t know how many Raspberry Pi units were sold, we do know how many restocks were made in a 78 week period between February 6 2022 and July 30 2023 across all of the global official resellers tracked by RPiLocator.
The Raspberry Pi 4 is the clear winner, being in stock for 76 out of 78 weeks, and receiving 918 restocks across all of the official resellers. How many units does that translate to? We don’t know, but a restock will not be in single digits.
In second place, and quite a surprise, is the Raspberry Pi Compute Module 4. The CM4 was in stock for 70 out of 78 weeks and received 388 restocks; the Raspberry Pi 4 saw almost 2.4 times as many restocks. The reason for this is most likely the specialized nature of the CM4, which is used more in embedded and industrial applications than the full-size Pi 4, meaning more CM4 units go directly to industrial customers for use in their products.
The Raspberry Pi Zero 2 W was released in late 2021, but you would be forgiven for thinking that it was a dream. Stock of the Zero 2 W has been low to none for quite some time. In the same 78 week period, the Zero 2 W was restocked 210 times, and was in stock for 52 weeks. Remember that this is across a global reseller network. There were no restocks between 21 August 2022 and 18 September 2022, a long five week period.
The older Raspberry Pi Zero W (in stock 29/78 weeks, 236 restocks) board was not restocked between February 6 2022 and January 8 2023, a 48 week period! This doesn’t exclusively mean that the Zero W was out of stock, it could also mean that demand was low and that no further stock was required. But given the global supply shortage, and the eagerness of the community, the former is the most likely scenario.
Raspberry Pi also produced Raspberry Pi 3 and Compute Module 3 units. “Why?” we hear you ask? Well if you don’t need the power of Pi 4, or the Zero form factor, then an older Raspberry Pi will do the job just fine. The Raspberry Pi 3 was the third most in stock board, in stock for 55 out of 78 weeks and seeing 294 global restocks. The last Raspberry Pi 3 restock was 23 July 2023.
Bringing up the rear, in last place, is the Compute Module 3, which was out of stock for an incredible 71 out of 78 weeks and saw just 11 restocks. There has not been a restock since May 21 2023. The Compute Module 3 was the last Compute Module to use the SODIMM form factor; the Compute Module 4 moved to two rows of connectors that attach to breakout points on the Compute Module IO board.
Holiday Cheer
(Image credit: RPiLocator)
December 2022 Restock Data

Model                          | 12-4-2022 | 12-11-2022 | 12-18-2022 | 12-25-2022 | 1-1-2023
Raspberry Pi 4                 | 7         | 10         | 14         | 9          | 6
Raspberry Pi Compute Module 4  | 2         | 4          | 13         | 1          | 6
Raspberry Pi 3                 | 12        | 12         | 15         | 2          | 7
Raspberry Pi Zero 2 W          | 4         | 4          | 4          | 0          | 2
Raspberry Pi Zero W            | 0         | 0          | 0          | 0          | 0
Raspberry Pi Compute Module 3  | 0         | 0          | 0          | 0          | 0
In a December 12 blog post, Upton announced that extra units would be passed on to resellers for single unit sales. “As a thank-you to our army of very patient enthusiast customers in the run-up to the holiday season this year, we’ve been able to set aside a little over a hundred thousand units, split across Zero W, 3A+ and the 2GB and 4GB variants of Raspberry Pi 4, for single-unit sales.”
These units hit resellers between December 18 and the end of 2022. On December 18 2022, we saw 14 Raspberry Pi 4 restocks, versus 10 the week previous. The Compute Module 4 saw a jump from 4 restocks to 13. On December 25, restocks were lower, returning to pre-announcement levels, and January 1 2023 saw restocks drop further.
After this blip, things returned to normal (for a post-pandemic supply chain) for the first half of the year. But June 2023 is where we started to see an uptick in Raspberry Pi availability. Raspberry Pi CEO Eben Upton released a statement via a Raspberry Pi Community Events newsletter stating that, from July 2023, we could expect to see a million units per month.
For Q1 of 2023, there were 800,000 Raspberry Pi units made (we don’t have a breakdown per model). That is three months of production and the worst it has been since 2015, which comes down to stock being pulled forward for the holiday period in late December. In May we saw 612,000 units and in June 788,000, with actual production starting to match Upton’s predictions.
Looking Back: Comparing July 2022 to July 2023
(Image credit: RPiLocator)
Model                          | May 2022 | May 2023 | June 2022 | June 2023 | July 2022 | July 2023
Raspberry Pi 4                 | 68       | 53       | 46        | 92        | 53        | 238
Raspberry Pi Compute Module 4  | 34       | 8        | 16        | 9         | 50        | 31
Raspberry Pi 3                 | 9        | 8        | 1         | 17        | 2         | 36
Raspberry Pi Zero 2 W          | 12       | 6        | 9         | 6         | 1         | 56
Raspberry Pi Zero W            | 0        | 46       | 0         | 49        | 0         | 33
Raspberry Pi Compute Module 3  | 0        | 2        | 3         | 0         | 0         | 0
A year ago, the global chip supply shortage cast a shadow over many aspects of our lives. The Raspberry Pi initially weathered the early days of the shortage, but late 2021 saw a gradual decrease in supply. It wasn’t a constant downtrend, more of a roller coaster that saw the occasional glut of stock hit resellers, only to be eagerly purchased.
Since Upton announced that million-unit months were ahead, we have seen a steady increase in the number of restocks, which means more Raspberry Pis are flooding into the sales channels. How does this period in 2023 compare to 2022?
In May 2022 we saw 68 global restocks of the Raspberry Pi 4, compared to just 53 in May 2023. In fact, restocks of most Raspberry Pi models were down in May 2023 compared to May 2022; the only ones to buck the trend were the Zero W (0 versus 46 in May 2023) and the Compute Module 3 (0 versus 2 in May 2023).
In June 2022 there were 46 restocks of the Raspberry Pi 4 and 16 for the Compute Module 4. By June 2023 the number of Pi 4 restocks had doubled to 92, while the CM4 received just nine. The big surprise of the month is 49 Raspberry Pi Zero W restocks in June 2023 versus zero in June 2022.
July 2023 is where we see the biggest number of restocks. Leading the charge is the Raspberry Pi 4 with 238 restocks, versus just 53 in July 2022. The next biggest restock was for the Raspberry Pi Zero 2 W, seeing 56 restocks in July 2023 versus one in July 2022. More Raspberry Pi 3 (36 in 2023, 2 in 2022) and Raspberry Pi Zero W (33 in 2023, 0 in 2022) units provide a plentiful, lower-cost and adequate platform for most projects. The Raspberry Pi Compute Module 4 is hanging in with 31 restocks in 2023, down from a mighty 50 in 2022. Evidently a large chunk of CM4 stock is going to industrial customers.
The Last Sixty Days
The last 60 days have seen the Raspberry Pi 4 be the clear winner for restocks at 330. In second place is the Raspberry Pi Zero W with 82, and third is the Raspberry Pi Zero 2 W with 62. Things are looking much better as resellers are clearly getting more product.
Last 60 Days Restocks per Model

Model                          | Last 60 Days Restocks
Raspberry Pi 4                 | 330
Raspberry Pi Compute Module 4  | 40
Raspberry Pi 3                 | 53
Raspberry Pi Zero 2 W          | 62
Raspberry Pi Zero W            | 82
Raspberry Pi Compute Module 3  | 0
RPiLocator has a record of the top five resellers per Raspberry Pi board, and we can see that the usual suspects (Adafruit, The Pi Hut and Pimoroni) are not in the top spot for any of the boards. Adafruit came second for Raspberry Pi 4 restocks, with 32 restocks in a 60-day period. UK reseller Pimoroni came fifth with 16.
Top Resellers by Board (number of restocks in the past 60 days)

Board                 | Reseller            | Restocks in 60 Days | Location
Raspberry Pi 4        | PiShop              | 48                  | United States
Compute Module 4      | BerryBase / Pi-Shop | 10 / 10             | Germany / Switzerland
Raspberry Pi Zero 2 W | PiShop              | 12                  | South Africa
Raspberry Pi Zero W   | MC Hobby            | 27                  | Belgium
Raspberry Pi 3        | MC Hobby            | 11                  | Belgium
As per Upton’s predictions, it seems that 2023 is looking much brighter for our favorite single board computer.
While Intel has revealed that its Lunar Lake processor, due in 2024, will be based on a brand-new microarchitecture that will deliver unprecedented performance-per-watt efficiency, it has not disclosed much else about the CPU. Yet the latest update of Intel’s PerfMon software (discovered by @InstLatX64) fills in some gaps.
(Image credit: @InstLatX64/Twitter)
As it turns out, Intel’s Lunar Lake will pack Lion Cove high-performance cores as well as Skymont energy-efficient cores. The software does not disclose core count or peculiarities of the Lion Cove and the Skymont microarchitectures, but it clearly reveals that the upcoming Lunar Lake processor will indeed use all-new microarchitectures and, at least based on Intel’s claims, will deliver unprecedented performance efficiency.
CPU          | P-Core       | E-Core
Alder Lake   | Golden Cove  | Gracemont
Raptor Lake  | Raptor Cove  | Gracemont
Meteor Lake  | Redwood Cove | Crestmont
Arrow Lake   | ?            | Crestmont
Lunar Lake   | Lion Cove    | Skymont
Intel’s Lunar Lake will inherit the disaggregated multi-chiplet design of the Meteor Lake and Arrow Lake client CPUs, but based on the schematic images Intel has published, it is expected to take the approach further and split things up a bit more. Lunar Lake will also use the company’s 18A (1.8nm-class) fabrication process for its compute tile, which will enable Intel to pack in more RibbonFET transistors and improve performance thanks to backside power delivery.
For now, it is hard to estimate the performance advantages enabled by the new Lion Cove and Skymont microarchitectures, as well as the Intel 18A process technology, over Arrow Lake CPUs, but Intel itself has set pretty high expectations for its 2024 part.
Although AMD’s Zen 1 architecture is immune to the recent ‘Inception’ vulnerability affecting modern Zen 3 and Zen 4 CPUs, another vulnerability has been found that affects Zen 1 CPUs specifically. According to a report by Phoronix, the new flaw can leak potentially sensitive data when the CPU performs an integer division by zero under Linux.
According to commentary made by AMD Linux developer Borislav Petkov, the bug specifically leaves “stale quotient data” after a Zen 1 CPU divides an integer calculation by 0 in certain circumstances. The fix involves doing a “dummy division 0/1 before returning from the #DE exception handler in order to avoid any leaks of potentially sensitive data.”
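In outline, the mitigation is exactly what the quote describes: perform one throwaway division so the divider unit’s stale quotient state is overwritten before control returns to user code. A conceptual C sketch (the actual kernel fix is a few lines of x86 inline assembly gated on an errata flag; this portable version is only illustrative):

```c
/* Conceptual sketch of the Zen 1 divide-by-zero mitigation: after the
 * #DE exception handler runs, perform a dummy 0/1 division so the
 * divider's stale quotient data is overwritten before another task
 * can observe it. 'volatile' keeps the compiler from optimizing the
 * otherwise-useless division away. */
unsigned int clear_divider(void)
{
    volatile unsigned int dividend = 0;
    volatile unsigned int divisor = 1;
    return dividend / divisor;  /* 0/1 is harmless and always yields 0 */
}
```

The key point is that the dummy division is cheap and side-effect free, which is why it can run on every return from the exception handler without a measurable performance cost.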
Thankfully, the Linux community has already issued a fix. Petkov authored a Linux kernel patch to work around the vulnerability. The patch was merged into the Linux 6.5 kernel today and is set to be back-ported to all stable Linux kernel releases as well, so if you are running an OS with an older Linux kernel, you won’t have to upgrade to the latest kernel version to get the fix.
Interestingly, there is also a workaround that doesn’t involve kernel updates: the vulnerability can be neutralized by disabling Symmetric Multi-Threading (SMT) on Zen 1 CPUs. This method obviously has its drawbacks, since it forfeits the performance benefits SMT offers on AMD CPUs, and in many cases it also disables sleep mode due to an architectural limitation. But it is a good way to temporarily patch the vulnerability for Linux users waiting for the kernel fix to reach their specific OS.
This issue is reminiscent of a similar problem on Intel’s Skylake chips from several years ago, where disabling Intel’s HyperThreading technology patched some security holes on those chips.
Thankfully the issue appears to be Linux-specific and does not affect Windows operating systems. Plus the vulnerability is already being actively patched for Linux users. However, the same cannot be said of the two other vulnerabilities affecting modern AMD CPUs and Intel CPUs, Inception and Downfall, right now.
Samsung is at the Flash Memory Summit in California, showing off its latest wares, announcing breakthrough technologies, and discussing some incredible advances. Samsung is often the source of the biggest news stories at these events, and it hasn’t disappointed, announcing both a 256TB SSD and its PBSSD architecture, designed for petabyte-scale solutions.
The Samsung presentation at FMS 2023 was all about “How Memory Innovations Shape Products and Services”. And, you guessed it, everything was being framed in the context of being reimagined for “the AI era.” Never fear, as Samsung is here to develop the latest technologies to cope with the “exponential growth of data and its many applications,” attendees were told.
(Image credit: Samsung)
PM1743 With PCIe 5.0
Samsung started by highlighting its PM1743 server SSD, which was first unveiled at FMS last year. It comes in capacities of up to 15.36 TB and has been enhanced with a PCIe 5.0 interface. The interface revamp means the new drive is capable of “achieving twice the power efficiency of its predecessor,” says Samsung. The first users of the new PM1743 have earmarked it for generative AI workloads, such as ChatGPT, according to Samsung.
The development of the PM9D3a server SSD, in a standard 2.5-inch form factor with PCIe 5.0, is also complete, said the tech giant. With its 8-channel controller, the drive will be up to 2.3 times faster than its predecessor, the company claimed. The new PM9D3a will first be available in capacities up to 15.36 TB, with models up to 30.72 TB following in H1 2024.
256TB SSDs
In the quest for maximum data storage within the power and volume limits of a single-server rack, Samsung has created a 256TB SSD. It says that the new devices wield QLC NAND and the highest level of integration density to create these devices.
In Samsung’s tests, the 256TB SSD consumed about one-seventh the power of a stack of eight 32TB SSDs (reminder: 8 × 32 = 256).
(Image credit: Samsung)
Petabyte Scale PBSSD
Last but certainly not least, Samsung showcased its latest PBSSD architecture at the Summit. It says that this takes the form of a “petabyte-scale ultra-high capacity solution that provides high scalability by varying the capacity depending on the application.”
With such a great capacity in a single device, Samsung and partners like Meta are aiming to make PBSSDs multi-user friendly. It says the platform made use of a technology called Flexible Data Placement (FDP), which has been ratified in NVMe. FDP is a way of optimizing data placement for increased predictability and performance in real-world hyperscale workloads. The software behind FDP is completely open source.
We hope some of this super-capacious flash memory tech is going to trickle down to consumers. Shoppers still find it difficult to find readily available PC SSDs in sizes larger than 4TB.
Do you miss the days when Microsoft PDAs roamed the Earth, or the more recent Windows Phone era? Then Japan’s Gloture might be able to tempt you with its new NanoPC (h/t PC Watch). This new product is a highly compact PC with a full-cover 5.5-inch touchscreen on one side, built in battery, and Windows 11 Pro.
Compared to smartphones in 2023, the NanoPC doesn’t look very ergonomic with large bezels, sharp-looking edges, and a thick build. On the positive side, you get Windows 11 in your pocket, and this device bristles with connectivity options and ports including Gigabit Ethernet.
(Image credit: Gloture)
With our description above, and the various images shared in this article, some of you might be thinking ‘wow’, but we are sure another sizable contingent will have responded with a ‘why?’ So, let’s look into the specs:
Gloture NanoPC

Screen    | 5.5-inch, 1,280 x 720 pixels multi-touch display
Processor | Intel Celeron J4125 ‘Gemini Lake’ 4C / 4T up to 2.70 GHz, UHD Graphics 600
Memory    | 8GB LPDDR4 RAM and 128GB eMMC storage
Ports     | USB 3.0 x4, USB Type-C power, Gigabit Ethernet, Mini HDMI 2.0 x2, microSD card slot, 3.5mm audio
Wireless  | Wi-Fi 6, Bluetooth 5.2, no SIM
Battery   | 2,500 mAh
Physical  | 142 x 91.2 x 17.76 mm
OS        | Windows 11 Pro
Price     | 84,700 Japanese Yen ($590)
(Image credit: Gloture)
One of the standout features of the Gloture NanoPC is how much connectivity it packs in. In the promotional imagery for the NanoPC, Gloture shows the device used as a pocket desktop with two extra-large screens attached. Users get a decent array of (mostly) full-sized, industry-standard ports in a world where smartphone makers are starting to tease devices with fewer and fewer ports.
(Image credit: Gloture)
We feel that the Gloture NanoPC, in this iteration, isn’t looking to tread on the toes of phone makers, due to some major design decisions. Firstly, this new pocketable PC doesn’t have a SIM card slot (or eSIM). Moreover, it has a built-in mic, but no camera of any kind. Lastly, even on such a small device with a 10 W Celeron and a small 5.5-inch screen, we don’t think 2,500 mAh will provide more than a couple of hours of use away from a wall socket.
For a few years now, gaming laptops have been some of the most intriguing PCs around. They’ve gotten thinner and lighter, naturally — but they’ve also become vastly more powerful and efficient, making them suitable for both work and play. They’ve adopted some bold innovations, like rotating hinges and near desktop-like customizability. Gaming laptops are where PC makers can get adventurous.
If you’re a professional in the market for a beefy new computer, and you like to play a few rounds of Apex Legends on occasion, it may make more sense to go for a gaming notebook instead of an Apple MacBook Pro workstation or the like. You’ll still get plenty of power for video encoding and 3D rendering, plus you may end up paying less. We’ll help you figure out which is the best gaming laptop for you, from budget options like the Dell G15 to premium notebooks like the Razer Blade 15 and everything in between.
What’s your budget?
Your laptop buying journey starts and ends with the amount of money you’re willing to spend. No surprise there. The good news: There are plenty of options for gamers of every budget. In particular, we’re seeing some great entry-level PC gaming choices under $1,000, like Dell’s G15 lineup. A cheap gaming laptop in this price range will definitely feel a bit flimsier than pricier models, and they’ll likely skimp on RAM, storage and overall power. But most cheaper laptops should be able to handle the majority of video games running at 1080p at 60 frames per second, which is the bare minimum you’d want from any system.
Things get interesting when you start looking at the best gaming laptops in the mid-range space, with prices at $1,000 and higher. At that point, you’ll start finding PCs like the ASUS ROG Zephyrus G14, one of our favorite gaming notebooks. In general, you can look forward to far better build quality than budget laptops (metal cases!), improved graphics power and enough RAM and storage space to handle the most demanding games. These are the gaming machines we’d recommend for most people, as they’ll keep you gaming and working for years before you need to worry about an upgrade.
If you’re willing to spend around $1,800 or more, you can start considering more premium options like Razer’s Blade. Expect impeccably polished cases, the fastest hardware on the market, and ridiculously thin designs. The sky’s the limit here: Alienware’s uber customizable Area 51m is an enormous beast that can cost up to $4,700. Few people need a machine that high-end, but if you’re a gamer with extra cash to burn, it may be worth taking a close look at some of these pricier systems.
What kind of CPU and GPU do you want?
The answer to this question used to be relatively simple: Just get an Intel chip with an NVIDIA GPU. But over the last few years AMD has stepped up its game with its Ryzen notebook processors, which are better suited for juggling multiple tasks at once (like streaming to Twitch while blasting fools in Fortnite). Intel responded with its impressive 12th and 13th-gen chips, but it’s nice to have decent AMD Ryzen alternatives available, especially since they’re often cheaper than comparable Intel models.
When it comes to video cards, though, AMD is still catching up. Its Radeon RX 6000M GPUs have been fantastic performers in notebooks like ASUS’s ROG Strix G15, but they lag behind NVIDIA when it comes to newer features like ray tracing. (We’re still waiting to test AMD’s new Radeon 7000 series mobile graphics.) At the very least, a Radeon-powered notebook can approach the general gaming performance of NVIDIA’s RTX 3070 and 3080 GPUs.
If you want to future-proof your purchase, or you’re just eager to see how much better ray tracing can make your games look, you’re probably better off with an NVIDIA video card. They’re in far more systems, and it’s clear that they have better optimized ray tracing technology. NVIDIA GeForce RTX GPUs also feature the company’s DLSS technology, which uses AI to upscale games to higher resolutions. That’ll let you play a game like Destiny 2 in 4K with faster frame rates. That’s useful if you’re trying to take advantage of a high refresh rate monitor.
You’ll still find plenty of laptops with NVIDIA’s older RTX 30-series GPUs these days, and they’ll still give you tremendous performance. But to be safe, it’s probably worth opting for the newer RTX 40-series systems, since they support the newer DLSS 3 technology and offer a wealth of performance upgrades. (If you’re looking out for the best deals, you can probably find some killer RTX 3070 laptops out there.) The entry-level RTX 4050 is a solid start, but we’d suggest going for a 4060 or 4070 if you’re aiming to maximize your framerates on faster screens. The RTX 4080 and RTX 4090 are both incredibly powerful, but they typically make systems far too expensive for most users.
It’s worth noting that NVIDIA’s mobile GPUs aren’t directly comparable to its more powerful desktop hardware. PC makers can also dial voltages and power limits up or down to fit a GPU’s thermals to a thinner case, which changes gaming performance. Basically, don’t be surprised if you see notebooks that perform very differently, even if they’re all equipped with the same GPU.
What kind of screen do you want?
Screen size is a good place to start when judging gaming notebooks. In general, 15-inch laptops will be the best balance of immersion and portability, while larger 17-inch models are heftier, but naturally give you more screen real estate. There are some 13-inch gaming notebooks, like the Razer Blade Stealth, but paradoxically you’ll often end up paying more for those than slightly larger 15-inch options. We’re also seeing plenty of 14-inch options, like the Zephyrus G14 and Blade 14, which are generally beefier than 13-inch laptops while still being relatively portable.
But these days, there is plenty to consider beyond screen size. For one: refresh rates. Most monitors refresh their screens vertically 60 times per second, or at 60Hz, a standard in use since black-and-white NTSC TVs. But over the past few years, displays have evolved considerably. Now, 120Hz 1080p screens are the bare minimum you’d want in any gaming notebook, and there are faster 144Hz, 240Hz and even 360Hz panels. All of this is in service of one thing: making everything on your display look as smooth as possible.
For games, higher refresh rates also help eliminate screen tearing and other artifacts that could get in the way of your frag fest. And for everything else, they simply lead to a better viewing experience. Even scrolling a web page on a 120Hz or faster monitor is starkly different from a 60Hz screen: instead of a jittery wall of text and pictures, everything moves seamlessly, as if you’re flipping through a glossy magazine. Going beyond 120Hz makes gameplay look even more responsive, which can give some players a slight competitive advantage.
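The practical difference between refresh rates is easiest to grasp as frame time, the gap between consecutive screen updates (1,000 ms divided by the refresh rate). As a quick back-of-the-envelope sketch, using only the refresh rates mentioned above:

```python
# Frame time shrinks as refresh rate climbs: time (ms) = 1000 / Hz
for hz in (60, 120, 144, 240, 360):
    print(f"{hz}Hz -> {1000 / hz:.1f} ms per frame")
# 60Hz -> 16.7 ms, 120Hz -> 8.3 ms, 144Hz -> 6.9 ms,
# 240Hz -> 4.2 ms, 360Hz -> 2.8 ms
```

Cutting frame time roughly in half from 60Hz to 120Hz is why that jump is so noticeable, while each step beyond it buys progressively smaller gains.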
Steve Dent/Cunghoctin
Not to make things more complicated, but you should also keep an eye out for NVIDIA’s G-SYNC and AMD’s FreeSync. They’re both adaptive sync technologies that can match your screen’s refresh rate with the framerate of your game. That also helps to reduce screen tearing and make gameplay smoother. Consider them nice bonuses on top of a high refresh rate monitor; they’re not necessary, but they can still offer a slight visual improvement.
One more thing: Most of these suggestions are related to LCD screens, not OLEDs. While OLED makes a phenomenal choice for TVs, it’s a bit more complicated when it comes to gaming laptops. They’re mostly limited to 60Hz, though some models offer 90Hz. Still, you won’t see the smoothness of a 120Hz or 144Hz screen. OLEDs also typically come as 4K or 3.5K panels – you’ll need a ton of GPU power to run games natively at that resolution. They look incredible, with the best black levels and contrast on the market, but we think most gamers would be better off with an LCD.
A few other takeaways:
Get at least 16GB of RAM. And if you’re planning to do a ton of multitasking while streaming, 32GB is worth considering.
Storage is still a huge concern. These days, I’d recommend aiming for a 1TB M.2 SSD, which should be enough space to juggle a few large titles like Destiny 2. (If you can afford the jump to a 2TB SSD though, just do it.) Some laptops also have room for standard SATA drives, which are far cheaper than M.2 drives and can hold more data.
Get your hands on a system before you buy it. I’d recommend snagging the best gaming laptop for you from a retailer with a simple return policy, like Amazon or Best Buy. If you don’t like it, you can always ship it back easily.
Don’t forget about accessories! For the best performance, you’ll need a good mouse, keyboard and headphones.
Best overall: ASUS ROG Zephyrus G14
If you can’t tell by now, we really like the Zephyrus G14. It’s shockingly compact, at just 3.5 pounds, and features AMD’s new Ryzen chips paired with its Radeon 6000M graphics (we’d recommend the Ryzen 9 model with an RX 6700M for $1,400). While its 14-inch screen is a bit smaller than our other recommendations, it looks great and features a fast 144Hz refresh rate. We also like its retro-future design (some configurations have tiny LEDs on the rear panel for extra flair). While the G14 has jumped in price since it debuted, it’s still one of the best gaming notebooks around, especially since ASUS has finally added a built-in webcam.
Read our Full Review of ASUS ROG Zephyrus G14
Best budget: Dell G15
We’ve been fans of Dell’s G5 line ever since it first appeared a few years ago. Now dubbed the G15, it starts at under $1,000 and, while not the most powerful gaming laptop, it features all of the latest hardware, like Intel’s 13th-generation CPUs and NVIDIA’s RTX 40-series cards. (You can also find AMD Ryzen chips in some models.) This budget-friendly gaming laptop is a bit heavy, weighing over five pounds, but it’s a solid notebook otherwise. And you can even bring it into mid-range gaming territory if you spec up to the RTX 4060.
Best premium gaming laptop: Razer Blade 15
Razer continues to do a stellar job of delivering bleeding-edge hardware in a sleek package that would make Mac users jealous. The Blade 15 has just about everything you’d want for great gaming, including NVIDIA’s RTX 4080, Intel’s 13th-gen CPUs and speedy quad-HD screens. Our recommendation? Consider the model with a Quad HD 165Hz screen and an RTX 4060 GPU for $2,500. You can easily save some cash by going for a cheaper notebook, but they won’t feel nearly as polished as the Blade.
Read our Full Review of Razer Blade 15
Another good option: Acer Predator Triton 500 SE
While we’ve seen some wilder concepts from Acer, like its 360-degree hinge-equipped Triton 900, the Triton 500 is a more affordable, bread-and-butter option. This year, it’s bumped up to a 16-inch display, giving you a more immersive gaming experience. It’s relatively thin, weighs just over five pounds, and can be equipped with Intel’s 11th-gen CPUs and NVIDIA’s RTX 30-series GPUs. Acer’s build quality is as sturdy as ever, and it has most of the standard features you’d need in a gaming notebook.
Read our Full Review of Acer Predator Triton 500 SE Gaming Laptop
Best large gaming laptop: Alienware m18
Alienware’s m18 is its biggest gaming laptop ever, and it packs in just about everything we’d want, including a really big screen. It can be equipped with Intel’s and AMD’s fastest CPUs, as well as NVIDIA’s fastest GPUs (including the 4090). Its base configuration with an RTX 4060 is also surprisingly affordable for an 18-inch laptop, starting at $2,100. We’ve always liked Alienware’s m-series gaming laptops, but this year they’re more refined, with better cooling and a slightly sleeker design. You can also opt for Cherry MX mechanical keys, which deliver a desktop-like gaming and typing experience.
Best with a dual screen: ASUS ROG Zephyrus Duo 16
You know if you actually need a dual-screen laptop: Maybe a single 17-inch screen isn’t enough, or you want a mobile setup that’s closer to a multi-monitor desktop. If that’s the case, the Zephyrus Duo 16 is the best laptop for you. It’s powerful, and its extra 14-inch screen can easily let you multitask while gaming or dutifully working. It also has all of the latest hardware you’d want, like AMD’s new Ryzen chips and NVIDIA’s RTX 4000 GPUs. Sure, it’s nowhere near portable, but a true multitasker won’t mind.
Read our Full Review of ASUS ROG Zephyrus Duo 16 Gaming Laptop
The worst rivals for any chip designer and PC maker are not its direct competitors, but rather the devices their potential customers already own. To make those customers buy something new, vendors need to advance their products at a rapid pace so that a new PC offers a radically better experience than a three-year-old computer. This is apparently what Intel and Lenovo are doing in their joint co-engineering lab in Shanghai.
Intel and Lenovo’s Advanced System Innovation Lab serves as a breeding ground where engineers from both companies combine their skills to build next-generation laptops that provide strong performance, elegance, features, and user experience, reports DigiTimes.
Lenovo and Intel have a long history of working together on multiple innovative products, including the ThinkPad X series as well as the ThinkPad X Fold series. To build such systems, Intel and Lenovo not only need to overcome hardware-related challenges such as performance, power management, and thermals, but also software-related issues. In addition, Intel’s dedicated teams work closely with Lenovo in other co-engineering labs located in Zizhu and Pudong.
“We share a long and illustrious history of deep engineering collaboration with Lenovo,” said Zheng Jiong (ZJ), senior director of client customer engineering for Intel China’s client computing group (CCG). “We work together very well and are thankful for the innovation support Lenovo has given us through joint labs like these.”
A notable achievement of the Advanced System Innovation Lab in Shanghai is the development of an OLED display driver that can run two OLED screens instead of one, which opens the door to a number of potentially interesting use cases.
“This work was critical to the development of our platform,” said Zhijian Mo, director of platform design and development in Lenovo’s intelligent devices group.
Furthermore, both companies joined forces with DRAM makers to enhance LPDDR5 memory data transfer rates.
While Lenovo remains a key partner, Intel also teams up with other global PC OEMs and software vendors. Their collective goal is to break technological barriers, identify core issues, and engineer enhanced PC solutions. Several advancements in CPU, power, thermal management, and other PC parts have emerged from this cooperative approach.
Looking ahead, as work on the Meteor Lake platform nears completion, plans for the Lunar Lake platform are already in motion and expected to be ready in 2024. To make Lunar Lake-based PCs radically better than systems in use today, the two companies are again collaborating across multiple fronts, but this is probably something the two companies would prefer not to discuss in detail for now.
“It is a very special project that involves detailed co-engineering efforts between both our teams,” said Mo.
As Elon Musk awaits his doctor’s permission to fight Mark Zuckerberg in an on-again / off-again cage match, the two billionaires’ companies continue to spar in the consumer adoption arena. Meta forged ahead with Threads’ aggressive update schedule today as it tries to challenge Musk’s erratic X (rebranded Twitter). Zuckerberg announced today that the young platform now supports sharing posts to Instagram DMs, custom alt text for photos and videos and a new mention button.
The Send on Instagram option lets users publish their Threads posts directly to Instagram DMs via the Send button. The feature could be seen as one of the “retention-driving hooks” Meta cooked up to ensure “people who are on the Instagram app can see important Threads,” as Reuters reported Chief Product Officer Chris Cox said in a recent company meeting. The strategy is allegedly tied to reports of falling engagement after the fledgling platform added over 100 million users in less than a week. Zuckerberg reportedly described the decline as “normal” and expected retention to grow as Meta continues to flesh out the social channel, which launched in early July.
Meta
Meanwhile, the custom alt-text option is an accessibility feature allowing Threads users to add (or edit existing auto-generated) alt text for photos and videos before uploading. The new mention button makes it easier to tag profiles in your Threads posts. Finally, the platform is making it easier to verify your identity on fediverse platforms like Mastodon. “You can now add your Threads profile link on supported platforms to verify your identity,” Instagram head Adam Mosseri posted today.
These are only the latest additions Meta has rolled out in the past few weeks. It announced earlier this month that a much-needed web version is on the way, offering desktop use for the first time, along with a proper search function. Other post-launch additions include a chronological feed and the ability to sort your following list and view your liked posts.
While Wi-Fi 7 is right around the corner, many are still getting acclimated to Wi-Fi 6E. We’ve already tested several Wi-Fi 6E routers, with the latest coming courtesy of MSI.
MSI’s RadiX AXE6600 is a compelling entry into the wireless gaming router field with its array of six antennas and heavy use of MSI’s Mystic Light to liven things up stylistically. MSI also enhances the RadiX AXE6600 with a 2.5Gbps WAN/LAN port (one of the four remaining 1 Gbps ports can also be used for WAN if you desire).
Available for a reasonable $256, the RadiX AXE6600 delivers good looks and a comprehensive web GUI, and it performed well in our benchmarking suite. That makes it one of the best gaming routers available and also one of the best routers overall.
(Image credit: Tom’s Hardware)
Looking at the router, the feature that stands out most is the antenna array, with six poseable elements along its sides and back panel. Each antenna features an LED for syncing with MSI’s Mystic Light. There’s one additional LED located in the MSI logo on top of the router.
Interestingly, while you can enable Mystic Light from the web interface (or using the smartphone app), you cannot adjust the colors using these methods. Instead, you must install the MSI Center app on a Windows PC to change the RGB settings.
A status panel near the top rear of the router also highlights internet connectivity, power status, and activity from the four gigabit ethernet ports. There are also buttons for Wi-Fi and WPS, along with an “LED Off” button which, understandably, turns off the LEDs on the status panel.
(Image credit: Tom’s Hardware)
Moving around to the rear of the router, you’ll find a 2.5 GbE WAN port, four GbE LAN ports, a USB 3.0 port, and a power button. There’s also a plethora of vents along the front, sides, and bottom of the RadiX AXE6600 to keep the router cool.
MSI RadiX AXE6600 Wi-Fi 6E Gaming Router Specifications
Wi-Fi Standard: Wi-Fi 6E
Wi-Fi Bands:
2.4-GHz AX: 2×2 (Tx/Rx), 1024/256-QAM, 20/40MHz, up to 574 Mbps
5-GHz-L AX: 2×2 (Tx/Rx), 1024/256-QAM, 20/40/80MHz, up to 1,201 Mbps
6-GHz-H AX: 4×4 (Tx/Rx), 1024/256-QAM, 20/40/80/160MHz, up to 4,804 Mbps
CPU: 1.8GHz quad-core
Memory: 256MB flash, 512MB DDR4 RAM
Ports: 1x 2.5 Gigabit WAN/LAN, 1x Gigabit WAN/LAN, 3x Gigabit LAN, 1x USB 3.0
MSI RadiX AXE6600 Setup
Although you can set up the RadiX AXE6600 using a smartphone, I used the tried-and-true web browser method. After navigating to the http://msirouter.login address, I could log in using the credentials printed on the bottom of the router. Then, I was greeted with a vibrant, easy-to-navigate user interface with large buttons and clear graphics.
Thankfully, the router already had the latest firmware installed. Updating the firmware is a simple affair, though: just navigate to Advanced → Administration → Firmware Upgrade. You can manually upgrade using a firmware package downloaded directly from MSI, or use an online tool that automatically checks for new updates and installs the firmware.
(Image credit: Tom’s Hardware)
When you first log in to the router, you’re greeted with the Dashboard, which presents information on hardware resources consumed and currently connected devices. You’ll also find presets to change the router’s QoS (Quality of Service) settings. The four that most users will likely enable at some point include AI Auto, Gaming, Streaming, and WFH.
There’s also a dedicated Game Center subsection where you can enable Game Boost mode. If you select the Traditional QoS mode, you’re also given the option to prioritize MSI devices. Game Center is also where you’ll find controls to enable the Mystic Light function.
(Image credit: Tom’s Hardware)
Like most Wi-Fi 6E routers, the RadiX AXE6600 ships with three wireless bands enabled (2.4GHz, 5GHz, 6GHz), each with its own SSID. In this case, the router defaulted to MSI_2G_FE, MSI_5G_FE, and MSI_6G_FE as the SSIDs, but you will want to set your own.
However, the RadiX AXE6600 can also use Smart Connect, which sets a single SSID for the router. Whether your device maxes out at Wi-Fi 5, Wi-Fi 6 or Wi-Fi 6E, it will connect to the router using the same SSID, with the client choosing the appropriate band automatically. Keep in mind, though, that Smart Connect can sometimes diminish network performance.
(Image credit: Tom’s Hardware)
The Radix AXE6600 offers basic parental controls, including the ability to filter for Adult content, gambling, violence, drugs/firearms, malicious content and games. Each is individually selectable, and you can specify the MAC address to which the settings apply.
MSI RadiX AXE6600 Performance
We conducted multiple tests in a single-family, two-story home with a 500 Mbps connection, using a laptop with an AMD RZ616 160 MHz network adapter as the client. Another PC, attached via Ethernet, functioned as the server to receive traffic. We used iPerf to test throughput and ping to test latency. Four sets of tests were conducted on the 2.4-GHz, 5-GHz and 6-GHz bands:
Near uncongested: Testing laptop approximately 6 feet away from the router, no substantial traffic being carried across other devices
Far uncongested: Testing laptop approximately 25 feet away from the router, no substantial traffic being carried across other devices
Near congested: Testing laptop approximately 6 feet away from the router with videos streaming on four devices throughout the house
Far congested: Testing laptop approximately 25 feet away from the router with videos streaming on four devices throughout the house
The RadiX AXE6600 put up solid numbers on the 6-GHz band, as witnessed by our iPerf test scores. The RadiX AXE6600 can hit a theoretical 4,804 Mbps on the 6-GHz channel, but real-world iPerf numbers for every router we’ve tested are in the 150 to 550 Mbps range.
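One way to put those iPerf figures in context is to compare them against the band’s theoretical PHY rate. The numbers below come from our tests, and the calculation itself is just illustrative:

```python
# Share of the 6-GHz band's theoretical 4,804 Mbps PHY rate
# actually delivered in our 6-foot uncongested iPerf test
phy_rate_mbps = 4804      # theoretical max, 4x4 at 160MHz
measured_mbps = 532       # RadiX AXE6600, 6 feet, uncongested
efficiency = measured_mbps / phy_rate_mbps * 100
print(f"{efficiency:.1f}% of theoretical")  # 11.1% of theoretical
```

Low-teens efficiency sounds poor, but protocol overhead, client limitations (a 2×2 client adapter can only use half of a 4×4 router’s streams) and real-world interference make gaps this large normal for Wi-Fi.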
The RadiX AXE6600 managed 532 Mbps in the 6-foot iPerf test, far surpassing the next-closest competitor, the Netgear Nighthawk RAXE300. The results weren’t nearly as impressive 25 feet away, where the RadiX AXE6600 managed just 283 Mbps. Congested performance at 6-GHz was also strong, reaching 500 Mbps at 6 feet. At 25 feet, however, throughput fell to just under 200 Mbps, putting it well behind the other Wi-Fi 6E competition.
We witnessed a similar Dr. Jekyll and Mr. Hyde performance at 5-GHz, with the 6-foot numbers coming in at a healthy 486 Mbps, well over 100 Mbps faster than the next-closest competitor. The tables turned at 25 feet, with the RadiX AXE6600 coming in at just 149 Mbps, below the average of the other assembled routers. With congested traffic, the RadiX AXE6600 came in last place at both 6 feet and 25 feet.
At 2.4-GHz, the RadiX AXE6600 had some of the weakest throughput numbers of any router we tested. However, it still had very low latency and its speeds were more than good enough for the kinds of devices most people would have on a 2.4-GHz band (smart home devices, old tablets, etc).
The 2.4-GHz uncongested iPerf numbers reached 128 Mbps at 6 feet and 44.5 Mbps at 25 feet, both of which are quite low. The router slipped into last place among its competitors on the 2.4-GHz band when the network was congested, managing just 73.6 Mbps at 6 feet compared to 101.2 Mbps for the nearest competitor and 151.4 Mbps for the first-place Asus RT-AXE7800.
One thing that was consistent through all the tests, however, was the ping. Across 6-GHz, 5-GHz and 2.4-GHz channels with congested or uncongested traffic, the RadiX AXE6600 maintained speedy pings of 1 to 4 ms. Latency matters more than throughput when you’re playing games because most of the game data is actually on your PC, but your movements and commands need to reach the server quickly.
Bottom Line
The MSI RadiX AXE6600 offers strong performance on the 6-GHz band, a wealth of features, and striking looks at an attractive price point. On the other hand, 5-GHz performance was inconsistent, and 2.4-GHz performance put it at the back of the pack compared to other assembled Wi-Fi 6E routers. The RadiX AXE6600 has an MSRP of $349.99; however, retailers like Amazon offer the RadiX AXE6600 with a $90 discount, taking it to just $256 shipped.
(Image credit: Tom’s Hardware)
If you have Wi-Fi 6E devices on your network, the RadiX AXE6600 is by far the best-performing router that we’ve tested. The numbers that the router was able to consistently deliver on the 6-GHz band far outpaced the competition at 6 feet and were in the same ballpark at 25 feet. 5-GHz performance was a bit more sporadic (although it took a considerable lead in the 6-foot iPerf test). 2.4-GHz performance, while on the lower side, is more than adequate for the older, slower devices that typically use that band.
We also can’t ignore that MSI has crafted an easy-to-use UI for configuration changes. There are plenty of presets for those who want AI to take over QoS duties, or you can get down in the weeds and make granular adjustments on your own. And for those who aren’t enamored with the flashiness of Mystic Light, it can be turned off.
If you don’t have any Wi-Fi 6E devices on your network or don’t plan on enlisting any soon, there are plenty of cheaper Wi-Fi 6 routers out there to consider, including the budget-minded Asus RT-AX1800S. But if you want blazing-fast Wi-Fi 6E performance, the RadiX AXE6600 is tough to beat.
Just when we thought dual-cores were dead, Intel is rumored to be bringing back the “obsolete” core configuration for another generation. A tweet by leaker chi11eddog alleges Intel is developing a new 14th Gen entry-level dual-core chip that will feature one of Intel’s three hybrid CPU architectures: Alder Lake, Raptor Lake, or Raptor Lake Refresh.
According to chi11eddog, this new chip will be the spiritual successor to the Pentium Gold G7400. Since Intel has retired the Celeron and Pentium sub-brands altogether, the new chip will be dubbed the “Intel 300,” similar to other entry-level chips (like the “Intel Processor N200”).
The chip’s reported specifications include a 3.9GHz base frequency (no turbo clock), 6MB of L3 cache, and a core configuration consisting of two P-cores with four threads. Power consumption is rated at 46W. The leaker did not reveal any specifications for the iGPU, but for obvious reasons, we expect this CPU to sport one of Intel’s lower-end configurations, like its UHD 730 graphics unit.
“14th Gen ‘Intel 300’ processor will be out in Q3 2023. Specs: 2 cores (2P+0E)/4 threads, 6MB L3 cache, P-core base frequency 3.9GHz, 46W. ‘Intel 300’, the new naming convention, is the successor to Pentium Gold G7400.” (chi11eddog on Twitter, August 8, 2023)
Compared to Intel’s other entry-level CPUs like the Core i3 series, the “Intel 300” will be the cheapest and least powerful entry in Intel’s desktop lineup. The chip features two fewer cores than the Core i3-13100 (Intel’s lowest-end i3 right now) and a much slower effective clock speed thanks to the lack of Turbo Boost technology. As previously stated, the Intel 300 will also effectively be the replacement for Intel’s previous-generation desktop Celeron and Pentium processors, which also sported dual-core configurations.
Because of its dual-core design, we don’t expect this chip to be great for anything beyond casual web browsing, office work, and video streaming. Even with the addition of Hyper-Threading, the physical limits of dual-core designs make them sub-optimal for gaming, especially in modern game engines that can take advantage of six or more CPU cores.
Chi11eddog believes the CPU will be arriving as soon as this quarter, so we shouldn’t have to wait long for this new entry-level chip to arrive on store shelves — assuming it exists. With its 14th Gen moniker, the new dual-core will allegedly arrive in conjunction with Intel’s 14th Gen Raptor Lake Refresh lineup which is also expected to launch very soon.
Users looking to expand their storage without spending more than they have to should take a look at this offer from Amazon on the Intel 670p 1TB SSD. This SSD has been going for around $39 lately, but today it is marked down to one of its best prices yet: just $34, or around 3.4 cents per GB.
This is not the fastest SSD on the market, and it’s probably not ideal for anyone looking for a high-speed gaming drive, but it’s plenty fast for anyone looking for some extra storage. We reviewed the Intel 670p SSD when it debuted in 2021 and appreciated its performance, which has held up over the years.
This offer applies only to the 1TB model, but other capacities are available in this line as well, including 500GB and 2TB editions. All of the drives in this line use the M.2 2280 form factor and a PCIe 3.0 x4 interface along with a Silicon Motion SM2265 controller. The Intel 670p is built with Intel’s 144-layer QLC memory. This particular capacity can reach sequential read/write speeds as high as 3,500/2,500 MB/s. Again, that might not be the fastest on the market, but it’s not bad for a drive in this price range.
Users also have the option of 256-bit AES encryption for added security. The drive is backed by a limited five-year manufacturer’s warranty from Intel, which ends early should the drive exceed its 370 TBW endurance rating. It’s also covered by Amazon’s 30-day return policy.
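Two quick figures help frame the value here. These are rough, illustrative calculations based only on the $34 price, the 1TB capacity and the 370 TBW warranty ceiling mentioned above:

```python
# Cost per gigabyte, plus the average daily writes the 370 TBW
# endurance rating allows across the five-year warranty
price_usd = 34.0
capacity_gb = 1000            # marketed capacity of the 1TB model
endurance_tbw = 370           # warranty endurance ceiling
warranty_years = 5

cents_per_gb = price_usd / capacity_gb * 100
gb_writes_per_day = endurance_tbw * 1000 / (warranty_years * 365)
print(f"{cents_per_gb:.1f} cents per GB")             # 3.4 cents per GB
print(f"{gb_writes_per_day:.0f} GB written per day")  # 203 GB per day
```

In other words, you could rewrite roughly a fifth of the drive every day for five years before hitting the endurance limit, which is far more than typical consumer workloads.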
Visit the Intel 670p 1TB M.2 2280 SSD product page at Amazon for more information and purchase options.