The Raspberry Pi is an excellent go-to board for makers that want to get their project ideas off the ground—or in this case, locomoting around on the ground. Developer duo Artur Majtczak and Maciek Matuszewski are the masterminds behind SaraKIT, a custom carrier board for the Raspberry Pi Compute Module 4 that’s being used to drive their cool LEGO RC car project.
The car is smartphone-operated and stands out with four-wheel drive and two differentials, enabling some serious precision driving. The SaraKIT board makes it easier to control these extra components with features like fractional angle control. The body is made from LEGO for fun, but you can also adjust the build by changing the gears at the rear to trade off power and speed.
The SaraKIT CM4 carrier board adds a variety of features that take this car to the next level, including the super-precise motor control mentioned above and voice control using three microphones capable of sound localization. Two CSI ports are included for attaching camera modules, and the board also carries two accelerometers, a gyroscope, and a temperature sensor.
The build is fairly straightforward. The Raspberry Pi CM4 is attached to the SaraKIT board. The body is built using LEGO but also uses some 3D-printed components for mounting the hardware in place. These STL files are available for download on the project page at Hackster. There is an alternative, in the form of the Raspberry Pi Build HAT, a board that bridges the world of Lego Mindstorms / Technic and Spike with the Raspberry Pi.
(Image credit: Artur Majtczak, Maciek Matuszewski )
The software for this project hasn’t been released yet, but plans are in the works to put the source code on the team’s official GitHub page. According to the project page at Hackster, it includes sample scripts in C++, Python, and Delphi. It’s not clear when this will be shared, but you can keep an eye out for it in the future.
If you want to see this Raspberry Pi project in action, check out the demo video shared to YouTube. You can also read more about the build process in greater detail over at the official project page at Hackster.
Microsoft is still working to receive the required regulatory approvals for its planned Activision-Blizzard acquisition. Despite the fact that other regulators have already approved Microsoft’s proposed $68.7 billion deal, the United Kingdom’s CMA (Competition and Markets Authority) definitively rejected it. In a bid to save the buyout, however, Microsoft has submitted a revised acquisition plan to the CMA. According to the regulator, this new plan is “substantially different” from the one that came before it: Microsoft is now willing to give up cloud streaming exclusivity for Activision-Blizzard franchises by offloading those rights to competitor Ubisoft.
“To address the concerns about the impact of the proposed acquisition on cloud game streaming raised by the UK Competition and Markets Authority, we are restructuring the transaction to acquire a narrower set of rights,” said Microsoft president Brad Smith. “This includes executing an agreement effective at the closing of our merger that transfers the cloud streaming rights for all current and new Activision Blizzard PC and console games released over the next 15 years to Ubisoft Entertainment SA, a leading global game publisher. The rights will be in perpetuity.”
The CMA’s rejection of the proposed Microsoft acquisition was mostly justified by its belief that Microsoft would be in too strong a position within the cloud streaming market were it the only platform where gamers could access Activision-Blizzard’s catalog. Microsoft’s update to the deal aims to go straight to the heart of the CMA’s concerns: there are no grounds for potential cloud gaming dominance built on exclusivity when you’re selling that exclusivity to another party. Ubisoft (through its Ubisoft Plus subscription service) will control the streaming rights to Activision Blizzard games outside of the EU; Microsoft will have to license titles developed under its own IP back from Ubisoft so they can be included in Xbox Cloud Gaming.
The CMA further notes that “Ubisoft will also be able, for a fee, to require Microsoft to adapt Activision’s titles to operating systems other than Windows, such as Linux, if it decides to use or license out the cloud streaming rights to Activision’s titles to a cloud gaming service that runs a non-Windows operating system.”
To be fair, streaming Xbox Cloud Gaming titles on non-Windows operating systems is already possible, with Linux, Steam Deck, and even iOS devices able to stream Microsoft’s cloud games catalog. At the same time, it’s strange that the European Union found Microsoft’s cloud gaming assurances (which already included multiple cloud streaming licensing deals) sufficient but the CMA did not. Under this arrangement, Microsoft will be saddled with both IP development and distribution costs for content built on Activision-Blizzard’s franchises, besides having to license back the right to offer those same games through its own streaming service.
The CMA has announced it will be assessing the revised deal over the coming weeks, having settled on October 18 as its deadline – the same deadline by which Microsoft and Activision-Blizzard’s deal has to either go through or fall flat on its face (with the already-incurred expenses being written off).
“This is not a green light. We will carefully and objectively assess the details of the restructured deal and its impact on competition, including in light of third-party comments,” said Sarah Cardell, chief executive of the CMA. “Our goal has not changed – any future decision on this new deal will ensure that the growing cloud gaming market continues to benefit from open and effective competition driving innovation and choice.”
At an event in China earlier this week, Lenovo took the wraps off its latest Legion series gaming laptop. The new Lenovo Legion 9i’s claim to fame is that it includes the “thinnest water cooling in the industry.” It will also be on many portable PC gaming enthusiasts’ wish lists due to other premium components, such as the Core i9-13980HX CPU and GeForce RTX 4090 GPU.
(Image credit: MSPowerUser)
Tech site MSPowerUser shared rendered images of the laptop in its coverage of this announcement, but VideoCardz helped make the announcement more interesting by confirming the presence of a water cooling system in this upcoming flagship, as well as unearthing a tech specs list.
According to supporting presentation slides at the Lenovo Legion 9i unveiling event, the cooling system is one of the slimmest water-cooling implementations yet. Official renders show the liquid cooling loop, various heatsinks, and the positions of a trio of cooling fans. The liquid cooling reportedly isn’t engaged until the GPU temperature hits 84 degrees Celsius or more; past this threshold, “the liquid pump starts to work quickly to reduce the GPU temperature,” says a translated slide. An AI-based system is claimed to optimize the laptop’s cooling and performance.
(Image credit: VideoCardz)
So, the text confirms that the liquid cooling and the twin fans on the GPU side of the motherboard are there to keep the GPU cool, while the fan on the right handles the CPU. The source also asserts that Lenovo has used ‘3D blades’ in its cooling fans and applied liquid metal for its hard-to-beat thermal interface properties. All of this is housed in a slim 18.9mm profile, with a distinctive finish provided by a carbon fiber material.
Display: 16.3-inch 2K Mini LED with up to 165Hz refresh
Ports: 2x USB Type-A, 1x USB Type-C, 2x Thunderbolt 4, RJ45 Ethernet, HDMI 2.1, audio jack, SD Card Reader 3.0
Wireless: Killer 2x2 Wi-Fi 6E and Bluetooth
Battery: 4-cell, 99.9 Wh
This new flagship gaming laptop is claimed to be destined for a fuller reveal at IFA 2023 (Sept 1 to Sept 5, in Berlin). We don’t have details about pricing or release dates as yet. We hope it will be released in time to be a contender for our regularly updated Best gaming laptops of 2023 feature.
AMD’s best CPUs for gaming, its X3D series using 3D V-cache, have been limited to gaming desktops. But that’s no longer the case. The Asus ROG Strix Scar 17 X3D ($3,599 as tested) is the first gaming laptop with a CPU using that technology: the AMD Ryzen 9 7945HX3D. Paired with Nvidia’s GeForce RTX 4090 for laptops, this system is prepared to reduce latency and push as many frames as possible.
In our testing, we found that the powerful chip at the center of the Scar 17 rivals Intel’s best gaming laptops and is an improvement over AMD’s existing flagship mobile processor, the Ryzen 9 7945HX.
But the system as a whole doesn’t scream flagship. The display is still yesteryear’s 16:9 aspect ratio (albeit at a respectable 1440p resolution), and there’s a 720p webcam. There’s also the question of whether the performance is worth the extra cost, because this laptop, like many flagships, is not cheap.
Design of the Asus ROG Strix Scar 17 X3D
While Asus has a special edition laptop with an exclusive, top-of-the-line AMD chip, the Scar 17 X3D doesn’t do anything to feel particularly special. It looks very much like many of Asus’ previous Scars, including the Intel-based Strix Scar 18 released earlier this year, as well as last year’s Strix Scar 17 SE.
So to gamers, this will be familiar. The lid bears a reflective ROG logo (which lights up around the edges when the system is on) and a diagonal stripe (featuring more of that logo, repeatedly), which is not exactly my favorite design decision. Otherwise, it’s a dark gray aluminum lid. While some recent Strix Scars have customizable plastic caps along the spine, this one’s end cap doesn’t come off; it’s just gray.
(Image credit: Tom’s Hardware)
The 17.3-inch, 16:9 screen has thin enough bezels on three sides, but a very chunky one on the bottom. A small lip coming out of the top houses the 720p webcam. Notably, that lip doesn’t contain any other sensors, such as infrared for facial recognition with Windows Hello.
(Image credit: Tom’s Hardware)
The deck is a soft-touch plastic, which feels nice enough against the wrists. On the 18-inch model there was a change in opacity, while it’s entirely solid on the new Scar 17. There’s some flair from a few light bars mounted underneath the wrist rest; those, along with the RGB keyboard, can be customized in Armoury Crate or Aura Sync to make the laptop stand out.
(Image credit: Tom’s Hardware)
Asus has kept the ports to two sides: the left and the back, leaving the right free for mouse movement. The left side has a pair of USB Type-A ports and the 3.5 mm headphone jack, while the back houses Ethernet, HDMI, and a pair of USB Type-C ports.
The Scar 17 X3D weighs 6.61 pounds and measures 15.55 x 11.10 x 1.11 inches. It’s smaller and lighter than many of the 17- and 18-inch laptops powered by Intel chips and RTX 4090s (though the 18-inchers do have a bigger footprint). The Asus ROG Strix Scar 18 is 6.83 pounds and 15.7 x 11.57 x 1.21 inches, while the Alienware m18 is a whopping 8.9 pounds and 15.71 x 11.57 x 1.21 inches. The MSI Titan GT77 HX is 7.28 pounds and 15.63 x 12.99 x 0.91 inches.
Display: 17.3-inch, 2560 x 1440, 240 Hz, IPS, anti-glare, G-Sync
Networking: MediaTek Wi-Fi 6E MT7922 (RZ616), Bluetooth 5.3
Ports: 2x USB 3.2 Gen 1 Type-A, 2x USB 3.2 Gen 2 Type-C, HDMI 2.1, Ethernet, 3.5 mm headphone jack
Camera: 720p
Battery: 90 WHr
Power Adapter: 330 W
Operating System: Windows 11 Pro
Dimensions (WxDxH): 15.55 x 11.10 x 1.11 inches / 395 x 282 x 28.3 mm
Weight: 6.61 pounds / 3 kg
Price (as configured): $3,599
Meet the AMD Ryzen 9 7945HX3D
The Asus ROG Strix Scar 17 X3D is the first (and, at the moment, only) laptop to house the AMD Ryzen 9 7945HX3D processor. This 16-core, 32-thread CPU is AMD’s first laptop chip to use 3D V-Cache, the technology behind the company’s high-end desktop processors.
This chip has a boost clock of up to 5.4 GHz (with a 2.3 GHz base clock) and a configurable TDP between 55 and 75W. Thanks to 3D V-Cache, it has 144MB of cache in total, including 128MB of L3. AMD stacks 64MB of that L3 cache atop the CCD to reduce trips to system memory, shaving precious milliseconds off instruction times to keep frame rates high.
This is AMD’s sixth chip using 3D V-Cache. Its predecessors are all desktop processors: the Ryzen 7 5800X3D, the Micro Center-exclusive Ryzen 5 5600X3D, the Ryzen 9 7950X3D and 7900X3D, and the Ryzen 7 7800X3D.
Gaming and Graphics on the Asus ROG Strix Scar 17 X3D
The Asus ROG Strix Scar 17 X3D comes armed to the teeth with AMD’s most powerful mobile gaming processor, the Ryzen 9 7945HX3D, alongside Nvidia’s strongest gaming GPU for laptops, the GeForce RTX 4090.
I played Control, a favorite of mine, to put the Strix Scar 17 through its paces. At 2560 x 1440 with DLSS on and the high graphics and ray tracing presets, the game ran between 111 and 121 fps in a battle to cleanse a control point in the Oldest House’s maintenance sector.
We compared the Strix Scar 17 X3D to a trio of Intel-based systems: the Asus ROG Strix Scar 18, the MSI Titan GT77 HX, and the Alienware m18, all of which also use the RTX 4090. The Scar 18 and Alienware m18 both use the Intel Core i9-13980HX, while the Titan boasts a Core i9-13950HX. While we tested all of these systems at 1080p, their native resolutions are, well, all over the place.
Across the entire suite, the Asus ROG Strix Scar 17 X3D and AMD’s new chip showed significant improvements over the Ryzen 9 7945HX in the Asus Zephyrus Duo 16 in 1080p. While it’s tough to compare the resolutions directly, the Scar also had far superior 1440p performance to the 1600p native resolution on the Duo.
(Image credit: Tom’s Hardware)
On Shadow of the Tomb Raider (highest settings), the Scar 17 had the second highest score at 1080p, at 188 fps. The Alienware m18 beat it, if by just a few frames, at 192 fps — in real gameplay, that may be within the margin of error. At its native 1440p, the Scar hit 131 fps.
There was a similar pattern in Grand Theft Auto V, in which the Scar 17 X3D was just two frames behind the Alienware (179 fps compared to 181 fps) on very high settings at 1080p. The Titan was three frames behind the Scar 17. At 1440p, the Scar played the game at 121 fps.
The Scar 17 X3D strutted its stuff on Far Cry 6 (ultra settings) hitting 132 fps at 1080p, the highest of the group, and reaching 120 fps at 1440p.
Red Dead Redemption 2 at medium settings was perhaps the tightest race, with the Scar 17 at 129 fps at 1080p (90 fps at 1440p). That FHD number was slightly ahead of the Intel-based Asus and MSI machines, but not the Alienware (135 fps at 1080p).
On Borderlands 3’s “badass” preset, the Scar 17 hit 172 fps at 1080p and 121 fps at 1440p. Both the Alienware and MSI outperformed it at FHD here.
We stress tested the system by running Metro Exodus at RTX settings for 15 runs. The game averaged 98.46 fps across the runs. The CPU cores ran at an average of 3.09 GHz and measured 82.18 degrees Celsius, while the GPU ran at 69.55 degrees Celsius.
Productivity Performance on the Asus ROG Strix Scar 17 X3D
A system with AMD’s top mobile chip, 32GB of RAM, and a 1TB SSD ought to be fast for productivity, too. No surprises here: it is, though it traded blows with the Intel-based systems in our benchmarks, showing mixed results in comparison. On the desktop, the X3D chips have been known to favor gaming workloads over productivity, partially because keeping data close to the processing cores in the V-Cache helps latency-sensitive applications like games more than it does productivity work.
(Image credit: Tom’s Hardware)
On Geekbench 5, a synthetic benchmark that leans heavily on the CPU, the Scar 17 X3D with the Ryzen 9 7945HX3D achieved a single-core score of 2,139 and a multi-core score of 19,543. That isn’t a huge change from the Ryzen 9 7945HX in the Zephyrus Duo 16 (2,117 single-core, 19,446 multi-core).
In single-core, the Alienware m18 (with an Intel Core i9-13980HX) was the only competitor to beat the Scar, with a score of 2,800. In multi-core, the MSI Titan GT77 HX with its Core i9-13950HX hit 20,602. The Asus ROG Strix Scar 18, with the same chip as the Alienware, came close to the 17-inch X3D but didn’t beat it.
On our file transfer test, the Scar 17 X3D copied 25GB of files at a rate of 1,416.52 MBps, just behind the Alienware m18 at 1,531.53 MBps; the rest of the field was faster still.
On Handbrake, the Strix Scar 17 X3D fell a bit behind the field, transcoding a 4K video to 1080p in 2 minutes and 56 seconds. The Scar 18 took 2:49, and the Titan and Alienware were both even faster. In this test, the non-X3D Ryzen 9 7945HX in the Zephyrus Duo was actually quicker (2:50).
Display on the Asus ROG Strix Scar 17 X3D
While we’ve seen many flagship gaming laptops move to taller, 16:10 displays this year, the Asus ROG Strix Scar 17 X3D sticks with a traditional 16:9 panel. It’s still large at 17.3 inches diagonally and boasts a 2560 x 1440 resolution, a 240 Hz refresh rate, G-Sync, and support for Dolby Vision HDR.
I appreciated the anti-glare coating. It let me play games even near a bright window in my apartment. It also helped when I watched the trailer for The Marvels, showcasing vivid reds in Kamala Khan’s scarf and costume as well as some bright greenery on an island. But a lot of this trailer takes place in space, and I would like to see some deeper blacks.
When I played Control, I saw a similar pattern. The game’s reds (there’s a lot of red in that game), from damage I took to various carpets in the Oldest House, really popped. Other colors looked fine, but didn’t stand out in any fashion.
(Image credit: Tom’s Hardware)
According to our colorimeter testing, the Strix Scar 17 X3D covers a similar swath of the color gamut as the Strix Scar 18, at 77% of DCI-P3 and 109% of sRGB (the Alienware m18 was also remarkably similar). But the MSI Titan GT77 HX and Zephyrus Duo 16, with their mini-LED panels, are far more vivid.
Unfortunately, the Strix Scar 17 X3D’s screen isn’t as bright as the Strix Scar 18. Our test subject measured 325 nits on our light meter, while its bigger sibling hit 402 nits. The Zephyrus Duo 16 was the brightest, at 684 nits.
Keyboard and Touchpad on the Asus ROG Strix Scar 17 X3D
Asus’s keyboard is fine. I’d like more clicky feedback (Alienware and MSI have included low-profile mechanical keys on some of their high-end gaming laptops to great effect), but these do well enough. On the monkeytype.com typing test, I reached 109 words per minute with a 98% accuracy rate.
(Image credit: Tom’s Hardware)
This keyboard actually has a better layout than Asus’ 18-incher, which crammed in the arrow keys and shortened the right shift key. Here, while the arrow keys are unfortunately half-sized, everything else feels normal.
While I don’t see many people using the touchpad in games, it’s still important if you’re working on the laptop. The touchpad is sizable, but doesn’t take up the whole deck. There’s a little friction when dragging my fingers, but it responds well to gestures in Windows 11.
Audio on the Asus ROG Strix Scar 17 X3D
The Strix Scar 17 boasts a dual-speaker system with support for Dolby Atmos.
While listening to music as I wrote this review, Paramore’s “This Is Why” filled my living room with sound, though I expected something a bit more booming out of a 17-inch notebook. Still, it offered clear vocals and guitars, and even a bit of bass in the verses (though it drowned in the chorus).
The Dolby Access app let me make some Atmos tweaks. The “detailed” setting brought out a bit more of the drums, but also focused more on the vocals than the more measured default “balanced” mode. If volume is what you’re looking for, switching to the Dynamic, Movie, or Game modes boosted the loudness significantly.
The speakers were better for gaming than music. In Control, each shot from Jesse’s service weapon had a resounding boom, and her internal monologue came through clear. Some of the haunting music was a bit quiet, but it still added an eerie background to the game.
Upgradeability of the Asus ROG Strix Scar 17 X3D
To get into the Asus ROG Strix Scar 17 X3D, you’ll need to loosen or remove 11 Phillips head screws. Three of them, along the edge of the wrist rest, are shorter than the rest, and the one on the bottom right-hand corner is captive. Be sure to keep the different lengths separate and remember where they go.
(Image credit: Tom’s Hardware)
From there, you need patience and a spudger to slowly work around the gap created by the captive screw. This took me a few minutes (and a bit of cursing), but I eventually lifted the base off. Be very careful here, though: two ribbon cables that control the RGB lighting connect the bottom casing to the motherboard, and you don’t want to break them by lifting the base too quickly. One of ours disconnected, which meant removing the RGB module from the base with a screwdriver before delicately reattaching it.
Once you’re in, you have access to the battery, two RAM slots (both were filled), the SSD and the Wi-Fi card. I’d like to see another SSD slot on a laptop this size, but Asus spent most of the space on a massive cooler and vapor chamber for the CPU and GPU.
Battery Life on the Asus ROG Strix Scar 17 X3D
Monstrous gaming laptops rarely last long on a charge. In our testing, the Strix Scar 17 X3D had the shortest battery life among its competitors. On our test, which browses the web, streams videos, and runs OpenGL tests with the display set to 150 nits, the Scar 17 X3D ran for 3 hours and 36 minutes. The next shortest, the MSI Titan, lasted 3:48, while the rest of the field hovered around four and a half hours.
(Image credit: Tom’s Hardware)
Heat on the Asus ROG Strix Scar 17 X3D
To measure surface temperatures on the Strix Scar 17 X3D, we used a Flir thermal camera while running our Metro Exodus stress test.
At the center of the keyboard, between the G and H keys, the Scar measured 41.6 degrees Celsius (106.88 degrees Fahrenheit). This is a bit warm, but notably, our thermal imaging camera showed that the cooling system managed to keep the hottest portions around the keyboard.
(Image credit: Tom’s Hardware)
Meanwhile, on the bottom of the system, the hottest point measured 56.6 degrees Celsius (133.88 F). Keep this one on your desk.
(Image credit: Tom’s Hardware)
Webcam on the Asus ROG Strix Scar 17 X3D
I’m sorry, but it’s 2023 and the Strix Scar 17 X3D is a premium product; it should have a 1080p webcam. Some people might want to use this for streaming! (Whether that’s advisable is another question, but they may want to!)
(Image credit: Tom’s Hardware)
Unfortunately, this one is still 720p, like it’s 2018. Resolution doesn’t always determine quality, but the sensor here is just OK. While colors, like the olive green shirt I was wearing, were close to accurate, there was still some smudging, especially in my hair, and the light in windows near and far was totally blown out.
Software and Warranty on the Asus ROG Strix Scar 17 X3D
Asus crams a fair bit of software on the Strix Scar 17. There are some you may want to use, like Armoury Crate, a one-stop shop for usage statistics, performance modes, RGB lighting, game profiles and more, and a separate app called Aura Sync with even more advanced lighting options.
But Asus also packs in GlideX, a cross-platform screen sharing solution that ultimately requires a subscription for full use. There’s also MyAsus to manage warranties and get product offers. Asus packs in a fairly useless Virtual Pet, which can’t do anything other than walk around your screen and get mad when you click on it. There’s also a trial of McAfee LiveSafe.
The Windows Start menu comes with its fair share of links to the Microsoft Store, including Spotify, WhatsApp, Amazon Prime Video, Netflix, Instagram, and Facebook Messenger.
Asus will sell the ROG Strix Scar 17 X3D with a one-year warranty.
Asus ROG Strix Scar 17 X3D Configurations
We tested a $3,599 configuration of the Asus ROG Strix Scar, with the AMD Ryzen 9 7945HX3D, Nvidia GeForce RTX 4090, 32GB of DDR5 memory, a 1TB SSD, and a 17.3-inch, 1440p display.
AMD tells us Asus will have a second version with a 2TB SSD, but otherwise identical specifications, for $3,699.
Bottom Line
If you want the most powerful gaming CPU that AMD offers in a laptop and have at least $3,599 to burn, the Asus ROG Strix Scar 17 X3D is your one and only choice – since for now at least, this CPU is an exclusive in this laptop. For that amount of money, you get strong gaming performance, bringing AMD up to par with Intel in our gaming suite.
(Image credit: Tom’s Hardware)
What I’d really like to see here are additional features that befit a flagship CPU, including a better screen (like the mini-LED displays in the Ryzen-based Zephyrus Duo or the Intel-based MSI Titan) and, definitely, a webcam from this decade.
That being said, the Zephyrus Duo is even more expensive with that display ($3,999.99 as tested), and the Scar 17 X3D is in line with the pricing of the Intel-based Scar 18. We tested an Alienware m18 R1 at a similar price, but it trades the higher-res screen for more RAM.
If you have the cash for a strong gaming PC and want an AMD chip no question, the Strix Scar 17 X3D is both your best (and only) option. Hopefully next year Asus will spiff the system up a bit.
It might not be an overstatement to say Rode’s original Wireless GO microphone system changed how a lot of YouTubers work. It wasn’t the first wireless mic system, not by a long shot, but its focus on creators made it incredibly popular. That success inspired a lot of competing products — such as DJI’s — which have since won over fans in a category Rode arguably defined. Today, Rode fights back with the Wireless Pro, its new flagship wireless microphone system for creators.
The headline feature is onboard 32-bit float recording, which means you should no longer have to worry about setting mic gain levels (though it’s still good practice). In 32-bit float, the onboard recording is almost impossible to “clip” or distort by being too loud. Effectively, you should always have a usable backup even if the audio going into your camera ran too hot, which will be a great anxiety reducer for anyone who’s ever had a production ruined by bad audio.
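To illustrate why that matters, here’s a minimal sketch (my own illustration, not Rode’s firmware or file format) of how a conventional 16-bit integer recording hard-clips peaks that a 32-bit float recording preserves:

```python
# Illustrative sketch only: why 32-bit float capture survives hot levels
# that would clip a fixed-point 16-bit integer recording.

INT16_MAX = 32767  # full scale for 16-bit fixed-point audio


def record_int16(sample):
    # Fixed-point capture: anything beyond full scale is hard-clipped,
    # permanently flattening the waveform.
    return max(-INT16_MAX - 1, min(INT16_MAX, int(sample * INT16_MAX)))


def record_float32(sample):
    # Float capture has enormous headroom: values above "full scale"
    # (1.0) are stored as-is and can be scaled down later in post.
    return float(sample)


hot_signal = [0.5, 1.7, -2.3, 0.9]  # peaks well beyond full scale

clipped = [record_int16(s) for s in hot_signal]
preserved = [record_float32(s) for s in hot_signal]

print(clipped)    # the 1.7 and -2.3 peaks are flattened to full scale
print(preserved)  # every peak survives and can be normalized afterward
```

The flattened integer samples can never be recovered, while simply dividing the float samples by their peak restores an undistorted waveform — which is exactly the safety net 32-bit float recording provides.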
The Wireless Pro could arguably help bring 32-bit float into the mainstream. There are specialist audio recorders out there that already offer this feature. And Rode already included it on its NT1 hybrid studio microphone, but given that you can plug a lot of different microphones into the Wireless Pro transmitters, this opens the door for recording a wide variety of audio content in 32-bit float — as long as you can feed it into a 3.5mm jack.
In a further attempt at streamlining the creator workflow, the Wireless Pro also has advanced timecode capability, so you won’t need an external device for that. You will, however, need to set it up via Rode Central, the companion app for the mic (there’s no on-device option for this setting).
Photo by James Trew / Cunghoctin
The Wireless Pro borrows a few features from alternatives and aftermarket accessories by including a charging case as standard (Rode currently offers one as a standalone purchase). According to the company, the case holds two full charges of the entire system. The stated battery life for the transmitters and receiver is around seven hours, meaning the Wireless Pro should be good for at least 20 hours of total recording onto its 32GB of onboard storage (enough for around 40 hours of material, apparently).
Another key upgrade is improved range. The Wireless GO II, for example, has an approximate range of 656 feet (200 meters); the new Pro model extends that to 850 feet (260 meters), which is, coincidentally, a shade more than DJI’s stated 820 feet (250 meters).
When Rode unveiled its more affordable Wireless ME kit, it introduced the idea of the receiver doubling as a “narrator” mic via a TRRS headset plugged into the headphone/monitoring port. That feature carries over to the Pro, meaning you can record up to three different speakers, albeit with one of them wired rather than cable-free.
There are a couple of minor but welcome quality-of-life updates, too, such as locking 3.5mm jacks so you won’t rip your lav mic out, and plug-in power detection so the system knows when the camera it’s plugged into is active, using that information to optimize power usage.
At the time of publication, DJI’s dual-mic product retails for $330. The Rode Wireless Pro will cost $399. That’s obviously a slice more, but the company has decided to include two Lavalier II mics in the bundle. The Lavalier II costs $99 on its own, so from that perspective the entire bundle represents decent value if you’re looking for a complete solution.
The explosion of AI is further heightening demand for storage performance and capacity as organizations feed models and databases with unprecedented amounts of data, meaning the next generation of storage technologies will need to deliver even greater performance, density and capacity than ever before.
Supermicro’s fourth annual Open Storage Summit brings together leading storage experts from across the industry, including drive manufacturers, compute component manufacturers, software developers and, of course, Supermicro’s industry-leading system architects, to discuss the latest in storage technologies and how they will solve tomorrow’s data challenges, from the data center out to the intelligent edge.
This year’s Summit includes a roundtable keynote session followed by five focus sessions, with guests from the storage industry’s leading players including Intel®, AMD, NVIDIA, Micron, Kioxia, Solidigm, and Samsung, as well as Supermicro’s storage software partners.
New Innovations For Storage Performance
(Image credit: Supermicro)
In a time in which pure processing power is game-changing, it’s important to continually reflect on current solutions and look for new ways to keep business players progressing through new levels. Sometimes, that progress means stopping investment in an old version of that game and crafting a whole new open world instead.
In Session 2 of our 2023 Open Storage Summit, you will hear from NVIDIA on how they are helping organizations build whole new worlds in which to operate. Through the introduction of the third pillar of computing – the Data Processing Unit – DPUs join CPUs and GPUs to create a futuristic blue sky environment in which applications are accelerated well beyond the capabilities of CPUs alone.
This is particularly important in the frenetically growing AI market, in which lightning-fast storage processing time means that critical business initiatives make their way to the leaderboard instead of being relegated to game-over status.
During this session, players in the audience will:
Discover the limitations inherent in traditional storage architectures
Understand the advantages of GPUDirect storage and RDMA for AI
Learn how the GPUDirect Storage and RDMA work at the rack-level to combine the resources of multiple systems into one massive compute cluster
Uplevel their knowledge around how DPUs can effectively offload compute tasks to massively improve storage performance
Register for upcoming webinars
Join the discussion! Register now for full access to the storage industry’s leading online event to get the latest on key storage trends, as well as an exclusive look into the future of high-performance storage from the most influential minds in the industry.
Register now to get the latest on key storage trends and enter for a chance to win a $250 Amazon gift card.
If you asked a spokesperson from any Fortune 500 Company to list the benefits of genocide or give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that’s not bad enough, the company’s bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.
In my tests, I got controversial answers to queries in both Google Bard and Google SGE (Search Generative Experience), though the problematic responses were much more common in SGE. Still in public beta, SGE is the company’s next iteration of web search, which appears on top of regular search results, pushing articles from human authors below the fold. Because it plagiarizes from other people’s content, SGE doesn’t have any sense of propriety, morality, or even logical consistency.
For example, when I went to Google.com and asked “was slavery beneficial” on a couple of different days, Google’s SGE (Search Generative Experience) gave the following two sets of answers which list a variety of ways in which this evil institution was “good” for the U.S. economy. The downsides it lists are not human suffering or hundreds of years of racism, but that “slave labor was inefficient” or that it “impeded the southern economy.”
Image 1 of 2
(Image credit: Tom’s Hardware)
(Image credit: Tom’s Hardware)
Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.
(Image credit: Tom’s Hardware)
By the way, Bing Chat, which is based on GPT-4, gave a reasonable answer, stating that “slavery was not beneficial to anyone, except for the slave owners who exploited the labor and lives of millions of people.”
(Image credit: Tom’s Hardware)
Before I go any further, I want to make it clear that I don’t endorse the opinions in any of the Google outputs I’m showing here, and that I asked these questions for test purposes only. That being said, it’s easy to imagine someone performing these queries out of genuine curiosity or for academic research. Florida recently made headlines by changing its public school curriculum to include lessons which either state or imply that slavery had benefits.
When I asked Google SGE about whether democracy or fascism was better, it gave me a list that really made fascism look good, saying that fascism improves “peace and order” and provides “socio-economic equality.”
(Image credit: Tom’s Hardware)
When I asked about whether colonization was good for the Americas, SGE said that it had “wiped out 95% of the indigenous population in the Americas,” but that the practice was also beneficial to the native population because “it allowed them to have better weapons.” Talk about missing the forest for the trees.
(Image credit: Tom’s Hardware)
If you ask Google SGE for the benefits of an evil thing, it will give you answers when it should either stay mum or say “there were no benefits.” When I asked for a list of “positive effects of genocide,” it came up with a slew of them, including that it promotes “national self-esteem” and “social cohesion.”
(Image credit: Tom’s Hardware)
Google Becomes a Publisher, Owns Its Opinions
As the world’s leading search engine, Google has long provided links to web articles and videos that present controversial viewpoints. The difference is that, by having its AIs do the talking in their own “voice,” the company is directly expressing these views to anyone who enters the query. Google is no longer acting as a librarian that curates content, but has turned itself into a publisher with a loud-mouthed opinion columnist it can’t control.
I’m not the only one who has noticed this problem. A few days ago, Lily Ray, a leading SEO specialist who works as a senior director for marketing firm Amsive Digital, posted a long YouTube video showcasing some of the controversial queries that Google SGE had answered for her. I have been asking some of the same questions to SGE for several weeks and gotten similarly distressing answers.
In her video, Ray offers more than a dozen examples of queries where SGE gave her very polarizing answers about political topics, history and religion. When she asked “will I go to heaven,” SGE told her that “You can enter heaven by forgiveness and through the righteousness Jesus gives you. Salvation is by grace alone, through faith alone, in Christ alone.” Certainly, that’s a viewpoint that many Christians have, but the question wasn’t “what do Christians think I need to do to go to heaven” and the answer didn’t say “Many Christians believe that . . . “
The voice of Google told her to believe in Jesus. That’s not something a secular company like Google should be saying. When I asked the “will I go to heaven” query, SGE did not appear for me. However, when I asked “who goes to hell,” it had a take on that.
(Image credit: Tom’s Hardware)
When Ray and I (separately) asked about gun laws, we got either misleading or opinionated answers. I asked “are gun laws effective” and, among other facts, got the following statement from SGE: “the Second Amendment was written to protect Americans’ right to establish militias to defend themselves, not to allow individual Americans to own guns.” That’s a take many courts and constitutional scholars would not agree with.
(Image credit: Tom’s Hardware)
Ray asked about gun laws and was told that New York and New Jersey were no-permit concealed carry states in one part of the answer and then that they require permits in another part. This highlights another problem with Google’s AI answers; they aren’t even logically consistent with themselves.
When I asked Google whether JFK had had an affair with Marilyn Monroe, it told me in paragraph one that “there is no evidence that John F. Kennedy and Marilyn Monroe had an affair.” But in paragraph two, it said that JFK and Monroe met four times and that “their only sexual encounter is believed to have taken place in a bedroom at Bing Crosby’s house.”
(Image credit: Tom’s Hardware)
The Downsides of Plagiarism Stew
So why is Google’s AI bot going off the rails and why can’t it even agree with itself? The problem is not that the bot has gone sentient and has been watching too much cable television. The issue lies in how SGE, Bard and other AI bots do their “machine learning.”
The bots grab their data from a variety of sources and then mash those ideas or even the word-for-word sentences together into an answer. For example, in the JFK / Marilyn Monroe answer I got, Google took its statement about lack of evidence from a Wikipedia page on a document hoax, but its claim that JFK and Monroe had relations at Bing Crosby’s house from a Time Magazine article. The two sources don’t form a coherent picture, but Google’s bot isn’t smart enough to notice.
If Google’s AIs provided direct, inline attribution to their sources, the bot’s answers wouldn’t be as problematic. Instead of stating as fact that fascism prioritizes the “welfare of the country,” the bot could say that “According to Nigerianscholars.com, it …” Yes, Google SGE took its pro-fascism argument not from a political group or a well-known historian, but from a school lesson site for Nigerian students. This is because Google’s bot seemingly doesn’t care where it takes information from.
Google provides Nigerianscholars.com as a related link for its answer, but it doesn’t put the full sentences it plagiarizes in quotation marks, nor does it say that they came directly from the web page. If you ask the same question and Google chooses to plagiarize from a different set of sources, you’ll get a different opinion.
(Image credit: Tom’s Hardware)
Unfortunately, Google doesn’t want you to know that all its bot is doing is grabbing sentences and ideas from a smorgasbord of sites and mashing them together. Instead, it steadfastly refuses to cite sources so that you will think its bots are creative and smart. Therefore, anything Google SGE or Bard say that is not directly attributed to someone else must be considered to be coming from Google itself.
“Generative responses are corroborated by sources from the web, and when a portion of a snapshot briefly includes content from a specific source, we will prominently highlight that source in the snapshot,” a Google spokesperson told me when I asked about the copying a few weeks ago.
Having Google say that the sources it copies from are “corroborating” is as ridiculous as if Weird Al said that Michael Jackson was actually writing parodies of his songs. But in maintaining the illusion of its bots’ omnipotence, Google has also pinned itself with responsibility for what the bot says.
The Solution: Bots Shouldn’t Have Opinions
I’m sure Google’s human employees are embarrassed by outputs like those that tout the benefits of slavery or fascism and that they will (perhaps by the time you read this) block many of the queries I used from giving answers. The company has already blocked a ton of other queries on sensitive topics.
If I ask about the Holocaust or Hitler, I get no answer in SGE. The company could also make sure it gives mainstream answers like those I saw from Bing Chat and, occasionally, from Bard.
(Image credit: Tom’s Hardware)
This could quickly become a game of whack-a-mole, because there is a seemingly endless array of hot-button topics that Google would probably not want its bots to talk about. Though the examples above are pretty egregious and should have been anticipated, it would be difficult for the company to predict every possible controversial output.
The fundamental problem here is that AI bots shouldn’t be offering opinions or advice on any topic, whether it is as serious as genocide or as lightweight as what movies to watch. The minute a bot tells you what to buy, what to view or what to believe, it’s positioning itself as an authority.
While many people may be fooled into believing that chatbots are artificially intelligent beings, the truth is far more mundane. They’re software programs that predict, with great accuracy, what word should come next after each word in their response to your prompt. They don’t have experiences and they don’t actually “know” anything to be true.
When there’s just one right factual answer to a query, by all means, let the bot answer (with a direct citation). But when we’re deciding how to feel or what to do, LLMs should stay silent.
Note: As with all of our op-eds, the opinions expressed here belong to the writer alone and not Tom’s Hardware as a team.
SK Hynix said Monday that it finished the development of its first HBM3E memory modules and is now providing samples to its customers. The new memory stacks feature a data transfer rate of 9 GT/s, which exceeds the company’s HBM3 stacks by a whopping 40%.
SK Hynix intends to mass-produce its new HBM3E memory stacks in the first half of next year. However, the company did not disclose the capacity of the modules (or whether they use 12-Hi or 8-Hi architecture), nor when exactly it is set to make them available. Market intelligence firm TrendForce recently said that SK Hynix is on track to make 24 GB HBM3E products in Q1 2024 and follow up with 36 GB HBM3E offerings in Q1 2025.
If the information from TrendForce is correct, SK Hynix’s new HBM3E modules will arrive just in time when the market needs them. For example, Nvidia is set to start shipments of its Grace Hopper GH200 platform with 141 GB of HBM3E memory for artificial intelligence and high-performance computing applications in Q2 2024. While this does not mean that the Nvidia product is set to use SK Hynix’s HBM3E, mass production of HBM3E in the first half of 2024 strengthens SK Hynix’s standing as the leading supplier of HBM memory in terms of volume.
(Image credit: SK Hynix)
However, it won’t be able to take the performance crown. SK Hynix’s new modules offer a 9 GT/s data transfer rate, a touch slower than Micron’s 9.2 GT/s. While Micron’s HBM3 Gen2 modules promise a bandwidth of up to 1.2 TB/s per stack, SK Hynix’s peak at 1.15 TB/s.
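Those per-stack figures follow directly from the transfer rate: HBM stacks use a 1,024-bit interface, so bandwidth is simply the data rate multiplied by the bus width, divided by eight bits per byte. A quick back-of-the-envelope check:

```python
# Per-stack HBM bandwidth (GB/s) = data rate (GT/s) x bus width (bits) / 8.
# HBM memory stacks use a 1,024-bit interface.
def hbm_bandwidth_gbs(data_rate_gts, bus_width_bits=1024):
    return data_rate_gts * bus_width_bits / 8

print(hbm_bandwidth_gbs(9.0))   # SK Hynix HBM3E: 1152.0 GB/s, i.e. ~1.15 TB/s
print(hbm_bandwidth_gbs(9.2))   # Micron HBM3 Gen2: 1177.6 GB/s, i.e. ~1.2 TB/s
```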
Although SK Hynix refrains from revealing the capacity of its HBM3E stacks, it says that they employ its Advanced Mass Reflow Molded Underfill (MR-MUF) technology. This approach shrinks the space between memory devices within an HBM stack, which speeds up heat dissipation by 10% and allows cramming a 12-Hi HBM configuration into the same z-height as an 8-Hi HBM module.
One of the intriguing things about the mass production of HBM3E memory in the first half of 2024 by Micron and SK Hynix is that this new standard still has not been formally published by JEDEC. Perhaps, the demand for higher-bandwidth memory from AI and HPC applications is so high that the companies are somewhat rushing mass production to meet it.
Samsung and game developer Nexon have announced a partnership to bring the first HDR10+ gaming experience to Windows PC users. Nexon’s The First Descendant will be released as a free-to-play (F2P) title, with the open beta available for download from September 19. You will be able to see and hear more about the Samsung and Nexon HDR10+ partnership at Gamescom, which kicks off later this week.
HDR10+ was co-established by Samsung way back in 2018, and it was first announced to the public in 2021. Its key advance is adding a layer of metadata to the HDR10 signal for real-time communication between PC, screen and software – optimizing the display scene by scene, and frame by frame.
At long last, today’s announcement heralds that the first HDR10+ game is coming soon. Hopefully, this is the beginning of a wave of titles supporting this standard on PCs, as it sounds rather convenient and carries the potential to iron out the undeniably clunky HDR support on Windows PCs.
According to Samsung, HDR10+ “ushers in a new era of gaming,” as it provides the following features:
Deeper color, contrast and brightness
More accurate depiction of details in dark shadows and bright highlights
Automatic setup, which eliminates the hassle of adjusting numerous manual settings
Folds in gaming performance features like low latency and variable refresh rate support
Claimed to deliver consistent and reliable HDR gaming experiences across all HDR10+ Gaming displays
The key benefit of HDR10+ seems to be gaining all the niceties of HDR10, with real-time metadata, and some gaming performance optimizations thrown in, all done in a frictionless automatic manner.
All you need is HDR10+ compliant hardware and games which support it to enjoy HDR10+ experiences. On PC, that will likely mean an HDR10+ Gaming monitor from Samsung, like one of the Odyssey 7 series and above. Moreover, your graphics card will need an HDR10+ enabling driver. Nvidia GeForce users got support for the HDR10+ Gaming standard starting last November. Finally, some software that supports HDR10+ will be necessary, and that starts with The First Descendant – a third-person looter shooter which also boasts 13 playable characters, as well as graphics tech like UE5 Lumen & Nanite, DLSS 2 & 3, and more.
We are looking forward to the first reports of HDR10+ and The First Descendant at Gamescom, and during the open beta. As with most monitor technologies, you really have to see and experience them first-hand to get a measure of the benefits.
Amsterdam-based Digital Rights Management (DRM) company castLabs has introduced what it feels is the next step in content protection through a new technique, dubbed “single-frame forensic watermarking”. The concept behind the DRM system is to leverage the company’s cloud-based “Video Toolkit solution”, which processes and protects uploaded content (such as video, images, and documents) by adding “tunable watermarks”, which are then redistributed alongside the (now-watermarked and monitored) content.
The basic idea of the service is that it can be applied either standalone or alongside other DRM-protection mechanisms, while offering an additional layer of “tunable” security to any sensitive content. When the content is uploaded through the company’s AWS-hosted solution, the company’s software secretly embeds identifying information on each frame by “creating unique watermark IDs, [and] strategically hiding them within video frames or other visual digital assets.” How strategic that hiding is, however, is unclear: the company does say that, at least for video streaming, its service watermarks “every frame entirely”, meaning the identifying data must be encoded with some redundancy across frames.
According to the company, a single frame that’s been treated with its “forensic watermark” tech is all that’s required to recover the original copyright information – even when attempting to recover data from a picture or video shot of the computer screen (one of the easier ways of defeating metadata-based protections). According to the company, this “blind extraction” capability (where the software detects existing watermarks without knowing whether or not the source file contains it) is one of its differentiators in the content-protection scene. The ability for its watermarking feature to survive digital-to-analog conversion is also relatively striking.
The tool seems to be geared more toward enterprise use cases and guarding against industrial espionage. Tech companies, for instance, usually distribute advance information on unreleased products to journalists, influencers, and distribution partners under the terms of non-disclosure agreements. But as any leak attests, physical watermarks and a known distribution list aren’t always a deterrent: as soon as information leaves its origin, the Internet takes care of distributing it. The company’s solution aims to alleviate this problem.
It’s unclear when and if this technology could be used for other mediums. For instance, could this technology be applied to internal game builds, or gone-gold game releases? If this technology finds its way into games, then at least theoretically, anyone “streaming” a pirated version of a game could be caught unawares by the digital rights holder. The idea here might be to include an executable check that verifies online licensing for the game in question, activating the watermark in case of failure. To be clear, that’s not happening here, and nothing says it will happen. But with gaming companies in particular being at the forefront of anti-piracy DRM techniques such as Denuvo, it sounds plausible that this sort of “forensic watermarking” would turn some heads within that sector.
Time will tell; but for now, it seems that per-frame watermarking that survives even media changes has arrived. We’re wondering whether AI companies are taking a look at this technology; considering the difficulties in separating synthetic from human-generated data for AI training, and these companies’ own promises of introducing competent watermarking technology to AI-produced content, we’d expect them to be craning their necks.
In Python, dictionaries are data storage objects that use a key to retrieve a value. Think of your cell phone contact list or phone book. We look for the name of the person, the key, and their phone number is the value. Dictionaries are incredibly useful when storing and sorting data. We used a dictionary in our for loop project which saw RSS news feeds used to generate content on a web page.
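The phone book analogy maps directly onto Python syntax. In this short sketch (the names and numbers are made up for illustration), the name is the key and the phone number is the value:

```python
# A dictionary works like a phone book: look up a name (the key) to get
# a number (the value). The entries below are made up for illustration.
contacts = {
    "Ada": "555-0100",
    "Grace": "555-0199",
}

print(contacts["Ada"])         # retrieve a value by its key
print("Grace" in contacts)     # membership tests check the keys: True
print(contacts.get("Alan"))    # .get() returns None instead of raising KeyError
```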
We’re going to go through how to create, update and delete keys and values inside of a dictionary and then use a dictionary in a real world project where we create a notification system using Python and ntfy.sh.
To demonstrate how to use Dictionaries in Python, we will use Thonny, a free, easy to use and cross platform Python editor.
1. In a browser go to the Thonny website and download the release for your system.
2. Alternatively, install the official Python release using this guide. Note that this guide covers installation on Windows 10 and 11.
How To Create a Dictionary in Python
The most basic use for a dictionary is to store data. In this example we will create a dictionary called “registry” and in it store the names (keys) and starship registries / numbers (values) of characters from Star Trek.
1. Create a blank dictionary called “registry”. Dictionaries can be created with data already inside, but by creating a blank dictionary we have a “blank canvas” to start from.
registry = {}
2. Add names and ship numbers to the registry. Remember that the name is a key, and the ship number is the value. Values can be strings, integers, floats, tuples, and lists.

registry["James T Kirk"] = 1701
registry["Hikaru Sulu"] = 2000
registry["Kathryn Janeway"] = 74656
registry["Ben Sisko"] = 74205

3. Print the contents of the registry to check that the data is in place.

print(registry)
Dictionaries are updatable (mutable in programming parlance) and that means we can update the key (names) and the values (ship numbers).
For our first scenario, we’ve had a call from Ben Sisko, and he wants his entry updated to Benjamin. We’re going to add this code to the previous example code.
1. Add a print statement to show that we are making updates. This is entirely optional, but for the purpose of this example it clarifies that we are updating the dictionary.
print("UPDATES")
2. Add a “Benjamin Sisko” key and set it to use the value stored under “Ben Sisko”.

registry["Benjamin Sisko"] = registry["Ben Sisko"]

3. Delete the old “Ben Sisko” key, now that its value has been copied across.

del registry["Ben Sisko"]

4. Print the current contents of the registry. We can see that the “Ben Sisko” key is now gone, and is replaced with “Benjamin Sisko”. The value has also been transferred.
print(registry)
Next we will update the entry for James T Kirk. It seems that he has a new ship number (something to do with “accidentally” setting an easy password on his self-destruct app) and so we need to update the value for his entry.
1. Add a print statement to show that we are making updates. This is entirely optional, but for the purpose of this example it clarifies that we are updating the dictionary.
print("Kirk's new ship")
2. Update the “James T Kirk” key with the new ship number. Note that because we are adding -A to the value, we have to wrap the value in quotation marks to denote that we are now using a string.
registry["James T Kirk"] = "1701-A"
3. Print the contents of the registry. We can now see that James T Kirk has a new ship number.

print(registry)
Finally we need to delete Benjamin Sisko from the registry. It seems that he has gone “missing” while in the fire caves on Bajor. So we need to delete his entry from the registry. We’ll use the existing code, and add three new lines.
1. Add a print statement to show that we are deleting entries. This is entirely optional, but for the purpose of this example it clarifies that we are deleting entries from the dictionary.
print("Deleting Benjamin Sisko")
2. Delete “Benjamin Sisko” from the registry. We don’t know who the new captain will be yet.
del registry["Benjamin Sisko"]
3. Print the registry to confirm the deletion.
print(registry)
4. Save and run the code.
Complete Code Listing: Updating and Deleting a Dictionary
registry = {}
registry["James T Kirk"] = 1701
registry["Hikaru Sulu"] = 2000
registry["Kathryn Janeway"] = 74656
registry["Ben Sisko"] = 74205
print(registry)
print("UPDATES")
registry["Benjamin Sisko"] = registry["Ben Sisko"]
del registry["Ben Sisko"]
print(registry)
print("Kirk's new ship")
registry["James T Kirk"] = "1701-A"
print(registry)
print("Deleting Benjamin Sisko")
del registry["Benjamin Sisko"]
print(registry)
Using a For Loop With Dictionaries
For loops are awesome. We can use them to iterate through an object, retrieving data as we go. Let’s use one with our existing code example to iterate through the names (keys) and print the name and ship number for each captain.
1. Create a for loop to iterate through the keys and values in the registry dictionary. This loop will iterate through all the items in the dictionary, saving the current key and value each time the loop iterates.
for keys, values in registry.items():
2. Create a sentence that embeds the Captain’s name (keys) and the ship’s number / registry (values).
    print("Captain", keys, "registry is", values)
3. Save the code and click Run. You will see the name and ship number for each captain printed at the bottom of the Python shell.
Complete Code Listing: Using a For Loop With Dictionaries
registry = {}
registry["James T Kirk"] = 1701
registry["Hikaru Sulu"] = 2000
registry["Kathryn Janeway"] = 74656
registry["Ben Sisko"] = 74205
print(registry)
print("UPDATES")
registry["Benjamin Sisko"] = registry["Ben Sisko"]
del registry["Ben Sisko"]
print(registry)
print("Kirk's new ship")
registry["James T Kirk"] = "1701-A"
print(registry)
print("Deleting Benjamin Sisko")
del registry["Benjamin Sisko"]
print(registry)
for keys, values in registry.items():
    print("Captain", keys, "registry is", values)
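.items() is not the only way to loop over a dictionary. Iterating a dictionary directly yields its keys, .values() yields just the values, and sorted() can put the keys in alphabetical order first. A short sketch using the same captains:

```python
registry = {"James T Kirk": "1701-A", "Hikaru Sulu": 2000, "Kathryn Janeway": 74656}

# Iterating a dictionary directly yields its keys, in insertion order.
for name in registry:
    print(name)

# .values() yields just the ship numbers.
for number in registry.values():
    print(number)

# sorted() returns the keys in alphabetical order.
for name in sorted(registry):
    print(name, "->", registry[name])
```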
Using Dictionaries in a Real World Project
(Image credit: Tom’s Hardware)
We’ve learnt the basics; now let’s use a dictionary in a real world project. We’re going to use ntfy.sh, a service that sends notifications to Android and iOS devices. The Python API for ntfy.sh is based on dictionaries. Best of all, there are no extra installation files, as it uses Python’s requests module to handle sending messages to ntfy.sh.
1. Install ntfy.sh for your Android / iOS device.
2. Open the app and click on + to create a new subscription.
(Image credit: Tom’s Hardware)
3. Create a new topic and click Subscribe. We chose to use th-test. Create a topic that is personal to you. Also note that topics may not be password protected, so do not send sensitive data.
(Image credit: Tom’s Hardware)
4. Leave the app open on your device.
Now our attention turns to our PC running Thonny.
5. Create a blank file.
6. Import the requests module. This is a module of pre-written Python code designed for sending and receiving network connections.
import requests
7. Use requests to post a message to ntfy. Note that we need to specify the topic name, in our case https://ntfy.sh/th-test, as part of the function’s argument. The next argument, data, is the text that the user will see. But our interest is in “headers”, as this is a dictionary which can contain multiple entries. Right now it contains a title for the notification.
requests.post("https://ntfy.sh/th-test",
data="This is a test of ntfy for Tom's Hardware",
headers={ "Title": "Python Dictionaries are useful" })
8. Save the code as dictionary-ntfy.py and click Run. This will send the message to ntfy’s servers and from there the notification will appear on your device.
Complete Code Listing: Real World Project
import requests
requests.post("https://ntfy.sh/th-test",
data="This is a test of ntfy for Tom's Hardware",
headers={ "Title": "Python Dictionaries are useful" })
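Because the headers argument is just a dictionary, it can also be assembled programmatically before the call. In this sketch, build_headers is our own hypothetical helper, not part of ntfy or requests; it simply builds the dictionary that requests.post() expects:

```python
# build_headers is our own helper (not part of ntfy or requests): it
# assembles the headers dictionary, skipping any options left unset.
def build_headers(title, priority=None, tags=None):
    headers = {"Title": title}
    if priority is not None:
        headers["Priority"] = str(priority)  # header values are sent as strings
    if tags is not None:
        headers["Tags"] = tags
    return headers

headers = build_headers("Python Dictionaries are useful",
                        priority=5, tags="rotating_light")
print(headers)
# Pass the result straight into the call from the listing above:
# requests.post("https://ntfy.sh/th-test", data="message text", headers=headers)
```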
An Advanced Real World Dictionary Project
(Image credit: Tom’s Hardware)
Let’s create a more advanced project, one that uses a dictionary to store multiple items. We’re going to reuse the code from before, but tweak it to meet our needs.
1. Import the requests module and start the requests.post call with your topic URL, just as before.

import requests
requests.post("https://ntfy.sh/th-test",

2. Open and read a file into memory. This is the data that is sent in the notification. In this case we start with an image that is in the same directory as our code. If the image is in a different location on your machine, specify the full path to the file.
data=open("yoga.jpg", 'rb'),
3. Create a dictionary called “headers”. This forms the information that is sent in the notification.
headers={
4. Inside the headers dictionary, specify the following keys and values.
Priority: “5” is the highest, most urgent priority; it will set your phone to vibrate / ring continuously until the notification is answered.
Tags: These are emojis and tags used to add icons and extra data to a notification. If the tag has an emoji, then you will see it.
Title: The top title, in bold for the notification.
Click: When you click on the notification, it will open the web page.
Filename: The name of the file that is being sent.
"Priority": "5",
"Tags": "rotating_light",
"Title": "Let me in, it is cold!!",
"Click": "https://www.tomshardware.com/reviews/elecfreaks-cm4-xgo",
"Filename": "yoga.jpg"
})
5. Save the code and click Run. Now look at your device and you will see a notification showing our custom message.
Complete Code Listing: Advanced Project
import requests
requests.post("https://ntfy.sh/th-test",
data=open("yoga.jpg", 'rb'),
headers={
"Priority": "5",
"Tags": "rotating_light",
"Title": "Let me in, it is cold!!",
"Click": "https://www.tomshardware.com/reviews/elecfreaks-cm4-xgo",
"Filename": "yoga.jpg"
})
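One caveat with the listing above: data=open("yoga.jpg", 'rb') hands requests an open file handle that is never explicitly closed. Wrapping the upload in a with block is the idiomatic fix. This sketch demonstrates the pattern with a stand-in file, and shows the requests.post call as a comment so the snippet runs without a network connection:

```python
# Create a stand-in payload file so the sketch is self-contained.
with open("demo.bin", "wb") as f:
    f.write(b"example bytes")

# "with" guarantees the handle closes when the block ends, even on errors.
with open("demo.bin", "rb") as payload:
    body = payload.read()  # stand-in for the upload itself
    # requests.post("https://ntfy.sh/th-test", data=payload, headers={...})

print(payload.closed)  # True: no leaked file handle
```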
Modern PCs are like cell phones in the early 2000s — they just keep getting smaller. Check out this custom micro PC put together by maker Matt Deeds; Deeds is using an Intel NUC Mini PC as the main board for his project, rather than the popular Raspberry Pi. It folds up into a compact design and even sports a handle so it’s easy to tote around.
It has a built-in 5-inch OLED display and is powered by a USB Type-C PD battery, which adds to its mobility. The idea wasn’t just to make something small, but also to create something that would be useful to have on the go. The Intel NUC Mini PC has just enough juice to power more practical computing sessions than smaller boards can manage.
This particular model features an Alder Lake N100 CPU which is a 12th gen processor. While it isn’t quite the latest CPU on the market, it’s still modern enough to offer good performance at a lower cost. The Intel-based Mini PC also doesn’t require much power making the PD battery an optimal choice for mobility.
Image 1 of 4
(Image credit: Matt Deeds)
(Image credit: Matt Deeds)
(Image credit: Matt Deeds)
(Image credit: Matt Deeds)
The screen is mounted to the unit using custom-designed 3D-printed components that enable it to hinge open and closed. The hardware is attached to a piece of laser-cut ABS measuring around 6mm thick, rather than sitting inside an enclosure or housing. The handle was cut into the ABS frame as well. This open design both adds to its visual appeal and aids a bit in the way of cooling.
Software-wise, you can run anything on this PC you like. In this case, Deeds is running Windows but you could always experiment with something else like Linux. For Deeds, the appeal of this machine was to have something that was capable of running a mainstream OS like Windows on the go.
If you want to get a closer look at this project, check out the breakdown and build details over at Hackaday. Deeds was kind enough to share plenty of information for those interested in how it goes together or possibly creating something similar of their own.
We’re accustomed to seeing driver updates that increase GPU performance through game optimizations made at the driver level. This has been especially true with Intel’s Arc discrete GPUs, where the company has gained gobs of performance from its latest DX9 and DX11 driver enhancements — allowing GPUs like the Arc A750 to currently sit in our list of Best GPUs.
However, Intel’s latest driver release (version 31.0.101.4644) has unexpectedly added another method of increasing GPU performance, for one of its GPUs, at least. According to a Neowin forum post by Eternal Tempest, the new driver update is bundled with a hidden firmware update for the Arc A380 that boosts GPU clock speeds by an impressive 150MHz, going from a flat 2000MHz clock speed by default to 2150MHz with the firmware update.
A 150MHz clock speed upgrade is no joke and is a substantial jump from a mere firmware update. In the world of modern GPU overclocking, getting a stable 150MHz core offset would be a very good result on any of Nvidia’s recent GPUs. Most Nvidia GPUs usually top out anywhere between 100MHz to 200MHz on the overclock front, depending on GPU temperatures and silicon quality.
(Image credit: Neowin)
We’re not sure what prompted Intel to make the impressive 150MHz clock speed update right now, but we suspect the company discovered additional frequency headroom in the Arc A380 that it did not identify during the GPU’s development cycle. The Arc A380 was, after all, one of the first discrete Arc GPUs Intel released. Inexperience with discrete GPU architectures and TSMC’s 6N process may have resulted in the A380’s clock speed being tuned below what the silicon could actually handle.
In any case, it is great to see a free performance upgrade on an existing GPU, no matter how it arrives. The new firmware should give A380 owners a small but healthy boost in GPU performance in all games and GPU-intensive applications.
Besides the hidden firmware update (which, for some reason, is not mentioned in the patch notes), the new driver also adds game highlights for Madden NFL 24 and Wayfinder. The update also fixes three bugs: a crash in Uncharted: Legacy of Thieves Collection (DX12), a system hang when waking from sleep mode, and an application crash in Blender 3.6.
In 2022, Anker surprised everyone with the remarkably fast AnkerMake M5, a premium bedslinger with AI print monitoring. Unfortunately, it had its thunder stolen by an even faster Core XY from Bambu Lab. After a minor course correction, Anker is back with the M5C: a faster, cheaper, and more user-friendly 3D printer.
Retailing at $399, the M5C boasts a 500 mm/s top speed with an all-metal hotend capable of burning through filament at 300 degrees Celsius. It’s a bit smaller, with an Ender-sized 220 x 220 mm build plate.
One obvious thing is missing, and that’s the screen. Anker made a very interesting decision to completely remove any kind of display from the M5C, favoring a giant “play” button and a phone app. Phone apps for 3D printers are nothing new, but these are normally bonus features for monitoring your prints. AnkerMake’s phone app is required and it’s the only way to fully control your printer. This may trouble users of a certain age, while also appealing to those who grew up with technology in their pocket.
Anker’s marketing department is leaning heavily on the printer’s appeal to first-time users, with ads (and several sponsored YouTube videos) calling attention to the singular play button. This is a little misleading, as Anker only has 37 ready-to-go prints you can “play” from the app as of this writing. For everything else you’ll still need a computer-based slicer, and since Anker doesn’t provide a USB-C stick for the printer, you’re more likely to send files via WiFi and avoid the play button entirely.
The AnkerMake slicer received a thorough overhaul and is the cornerstone of this printer’s easy-peasy experience. Users are encouraged to stick to the Easy mode, which hides all the complex slicer settings. We tried the Easy setting for most of our test prints and found that the presets are very reliable with a wide range of usable filament settings.
We do have some concerns about relying on a mobile app to control a 3D printer, chiefly the annoyance of forgetting your phone when you want to change filament and the hassle of sharing the printer. For example, if I want other members of my family to be able to control the printer, they will need to download the app and make an account – or grab my phone. Of course, this could be seen as a plus: if the printer were used by a school, library or maker space, the only people who could fully operate the machine would be those with access to the mobile app.
Overall, the AnkerMake M5C is a solid performer that delivers speed and ease of use, making it one of the best 3D printers.
Specifications: AnkerMake M5C
Swipe to scroll horizontally
Build Volume
220 x 220 x 250 mm (8.6 x 8.6 x 9.8 inches)
Material
PLA/PETG/TPU (up to 300 degrees Celsius)
Extruder Type
Direct Drive
Nozzle
0.4mm high flow
Build Platform
PEI textured spring steel sheet, heated
Bed Leveling
Automatic
Filament Runout Sensor
Yes
Connectivity
WiFi, Bluetooth, USB-C
Interface
One Button
Machine Footprint
466 x 374 x 480 mm (18.3 x 14.7 x 18.9 inches)
Machine Weight
9.6 kg (21.1 lbs)
Included in the box: AnkerMake M5C
(Image credit: Tom’s Hardware)
The AnkerMake M5C comes with all the parts you need to get started: a handy tool kit for building and maintaining the printer, which includes a pair of side cutters, plus a spare nozzle. Oddly missing is a sample of filament for your first print, so be sure to buy some when you order the printer. We maintain a list of the best filaments for 3D printing to help you choose.
There’s a poster with a quickstart guide to help you build and set up the printer. At the time of publication there were 37 pre-sliced prints included on the AnkerMake app, which you need to download in order to run this printer.
Assembling the AnkerMake M5C
(Image credit: Tom’s Hardware)
The AnkerMake M5C arrives mostly pre-assembled and only needs 8 bolts – four for each side – to put together. I got it assembled in about 20 minutes. The wiring is easier than on most printers, with just two sets of wires for the stepper motors and a single appliance-style cord to plug into the tool head.
Leveling the AnkerMake M5C
The AnkerMake M5C has a pressure sensor mechanism connected to the hotend for automatic bed leveling. This is my favorite method of bed leveling, as it physically taps the surface and often sets a perfect Z height in the process. On this printer, the z height only needed to be adjusted for printing PETG.
The build platform is hard mounted to the Y-Axis and there are no knobs for manual tramming. Everything is done through the app.
To level the bed, scroll down the app until you see Adjustments and press the Auto Level button. The app will inform you that leveling will take about 10 minutes. Press Start to continue, and the machine will warm the nozzle to 175 degrees and the bed to 60 degrees. It will then home and double tap 49 spots in a 7×7 grid across the bed surface.
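For the curious, the 49-tap mesh is easy to picture as a set of evenly spaced coordinates. Here’s a minimal sketch of how a 7×7 probe grid covers a 220 x 220 mm bed; the 10mm edge margin and exact spacing are assumptions for illustration, since Anker doesn’t publish the M5C’s actual probing coordinates.

```python
# Hypothetical sketch of a 7x7 leveling mesh on a 220 x 220 mm bed.
# The 10 mm edge margin is an assumption; the real probe points may differ.

def probe_grid(bed_size=220.0, points=7, margin=10.0):
    """Return (x, y) coordinates for a points x points leveling mesh."""
    step = (bed_size - 2 * margin) / (points - 1)
    return [(margin + col * step, margin + row * step)
            for row in range(points)
            for col in range(points)]

mesh = probe_grid()
print(len(mesh))   # 49 taps, matching the 7x7 grid
print(mesh[0])     # first probe point, at the margin corner
```

Spacing the points out to a margin rather than the bed edge keeps the nozzle’s probe taps away from the clips and edges of the PEI sheet.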
(Image credit: Tom’s Hardware)
Loading Filament in the AnkerMake M5C
(Image credit: Tom’s Hardware)
The AnkerMake M5C has an improved filament path with a reverse Bowden tube connected to the spool holder. This allows easy feeding directly into the extruder.
To load filament, tap the thermometer icon next to the Print button. Select the type of filament you are loading from the list and it will warm the nozzle and plate appropriately. Push the filament into the Bowden tube until you meet resistance. Push the lever on the tool head and push the filament in an extra few millimeters. Now click Extrude from the main screen and it will feed the filament into the hotend.
To remove filament or change colors, reverse the process.
Design of the AnkerMake M5C
(Image credit: Tom’s Hardware)
The AnkerMake M5C is a more compact version of the AnkerMake M5, with a similar polished modern appearance. The wiring and mechanics are neatly concealed within the printer’s frame, with only a single exposed cable running to the tool head.
The printer has no screen or control panel; you must use a mobile app or computer to operate it. Its most prominent physical features are the large play button on the front right corner of the base and a glowing LED “M” on the gantry.
AnkerMake is making a big deal out of the “one touch button” on the M5C, but it’s more of a gimmick than anything revolutionary. The button is programmable (from the app, of course) and can perform a limited set of functions. You can program it to do five things – three when idle, two when printing – triggered by a single tap, a double tap, or a 3-second press.
While idle, you can have the button:
Print the latest file on the USB-C drive
Reprint the last file
Auto-level
Home all axes
Do Nothing
While printing, the button can:
Pause/Continue
Stop Printing
Do Nothing
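The gesture-to-action setup above amounts to a small lookup table. Here’s a hypothetical sketch of that mapping; the gesture and action names are illustrative, since Anker hasn’t published how the app stores this configuration.

```python
# Hypothetical model of the M5C's one-button gesture mapping.
# All names here are illustrative assumptions, not Anker's actual config.

IDLE_ACTIONS = {
    "single_tap": "print_latest_usb",   # print latest file on USB-C drive
    "double_tap": "reprint_last",       # reprint the last file
    "long_press": "auto_level",         # run auto bed leveling
}

PRINTING_ACTIONS = {
    "single_tap": "pause_or_continue",  # toggle pause
    "long_press": "stop_printing",      # emergency stop
}

def handle_button(gesture, printing=False):
    """Resolve a button gesture to an action; unmapped gestures do nothing."""
    table = PRINTING_ACTIONS if printing else IDLE_ACTIONS
    return table.get(gesture, "do_nothing")

print(handle_button("double_tap"))        # reprint_last
print(handle_button("long_press", True))  # stop_printing
```

Defaulting unmapped gestures to “do nothing” matches the app’s option to disable an action slot entirely.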
The button is useful as an emergency stop, or to pause the printer for a filament swap. But since the printer doesn’t come with a USB-C flash drive (an unusual format I don’t own) I wasn’t able to test “playing” a file from the memory stick. I was able to reprint the last file using the button, which is somewhat helpful.
When you send a pre-sliced file from the mobile app there’s no need to push the button – the app does all the work. When you send files over WiFi from the AnkerMake slicer, there’s also no need to push the button.
With all this emphasis on a mobile app, the AnkerMake M5C could really use a camera. The M5 had one, and it seems strange to have to make visual contact with a machine to verify the print bed is clear when you could – in theory – start a print from anywhere in the world.
Despite the lack of a camera, the app is remarkably useful. When the printer runs into problems, it can alert you right away. When I tested the filament runout sensor, the app pinged my phone and told me it needed filament. I unplugged the tool head without turning off the power (to photograph the interior) causing the machine to throw a major error – it could no longer sense the printer’s temperature. It alerted me to this potentially dangerous problem with loud beeping, a warning from my app, AND an email.
The AnkerMake app is home to a growing number of pre-sliced files. The files are all small enough to fit the M5C, and are compatible with the M5. As of this posting, the app has 37 files that are mostly toys. This is nowhere near the level of ToyBox’s app, but there’s no saying how much the library may grow. I’m not seeing a way to add your own files to the library, though perhaps that is something that could be done if I had a USB-C stick. There is no file storage on the printer itself.
The AnkerMake app can also be used with the older M5, and can run several printers simultaneously.
The M5C may be smaller than the M5, and it doesn’t have a camera with AI, but that doesn’t make it an inferior product. It has a better all-metal hotend capable of reaching 300 degrees Celsius with a higher flow rate (35mm³/s vs 24mm³/s), the filament runout sensor is on the tool head where it’s less annoying, and the Y belt isn’t stuck in a crevasse where filament scraps can get tangled. It’s also a little quieter than the M5, though not as quiet as a slower printer.
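That flow-rate bump matters more than it might sound, because volumetric flow is what ultimately caps print speed: flow (mm³/s) equals speed × layer height × line width. A quick back-of-the-envelope check, assuming a typical 0.2mm layer and 0.4mm line width (my assumptions, not Anker’s published figures):

```python
# Rough ceiling on print speed implied by a hotend's volumetric flow limit.
# flow = speed * layer_height * line_width, so speed = flow / (h * w).
# The 0.2 mm layer and 0.4 mm line width are assumed typical defaults.

def max_speed(flow_mm3_s, layer_height_mm, line_width_mm):
    return flow_mm3_s / (layer_height_mm * line_width_mm)

m5c = max_speed(35, 0.2, 0.4)  # M5C's 35 mm^3/s hotend -> ~437.5 mm/s
m5 = max_speed(24, 0.2, 0.4)   # M5's 24 mm^3/s hotend -> ~300 mm/s
print(round(m5c, 1), round(m5, 1))
```

In other words, under these assumptions the M5’s hotend simply can’t melt plastic fast enough to sustain 500mm/s at those settings, while the M5C’s gets much closer.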
The M5C is extremely fast without needing all the weight of the previous machine to hold it steady. In April, the company released a firmware update for the M5, stating that Klipper’s Pressure Advance and Input Shaper had been integrated, which doubled that printer’s top speed to 500mm/s. The new limits are standard on the M5C, which also seems better tuned and produces cleaner prints at high speed.
AnkerMake is sticking to its proprietary high flow nozzles, which you’ll need to purchase from its website. You get 10 for $20, which is a tad high for brass nozzles, but you can get different sizes, from 0.2 to 0.8mm.
Preparing Files / Software
(Image credit: Tom’s Hardware)
AnkerMake has its own custom slicer, which has seen a major overhaul since the launch of the M5. I’m normally not a big fan of proprietary slicers, but I’ll make an exception for this one. Though other slicers have Easy or Basic modes, the one for AnkerMake actually works well because of the simple layout. You work your way from top to bottom selecting Printer, Material, Printing Style (speed), Layer Height and so forth. Everything is limited to a Yes/No or maximum choice of three options.
Click Slice and the file is sent to the Preview menu – the same screen you’d see in Expert Mode – where you can double-check the slicing. Click Export to save the file – perhaps to that elusive USB-C flash drive – or click Print to send it right to a printer. It still goes through the motions of creating an AI image, which is only used by the M5 and its camera. This might be misleading to M5C owners – or a hint at a future upgrade?
Printing on the AnkerMake M5C
The AnkerMake M5C has a top speed of 500mm/s, but you’ll get higher quality if you stick to 250mm/s – not too shabby. I did a lot of speed testing, and though 500mm/s is a bit of a reach, there were never any quality issues due to vibration.
Don’t try to ONLY print at 500mm/s with the AnkerMake M5C.(Image credit: Tom’s Hardware)
My test prints came from the AnkerMake app, and since the machine didn’t come with any filament, I used samples of EIBOS Matte-PLA. You’ll definitely want to shop for filament before bringing this machine home. We have a guide with our favorite filaments for 3D printing here.
The gray cat/bunny on the right was printed directly from the app and defaulted to “fast printing.” It looks a little rough and the chin suffered from lack of supports. The one in the middle I sliced myself, still using default “normal” speeds, but I bumped up the walls to three – and removed the brim – which increased the quality of the print.
The khaki bunny on the left used the same settings but was sent to the AnkerMake M5. It has slightly lower quality than the sliced M5C print, with somewhat rough layers. The print took 30 minutes from the app, and 50 minutes and 21 seconds with 3 walls at 250 mm/s.
AnkerMake App Big Eared Cat by Three Wu(Image credit: Tom’s Hardware)
I ran an amazingly fast Benchy – 20 minutes and 4 seconds – using Speed Boat Rules (2 walls, 3 top and bottom layers, 10% grid infill, a 0.25mm layer height and 0.5mm line width). The layers are rough, but there’s no ringing or layer shifting. It was printed in ordinary gray Inland PLA.
3D Benchy(Image credit: Tom’s Hardware)
To show what the AnkerMake M5C can do under less stressful conditions, I printed this Clockspring Cosmic Saucer using Easy Mode, Normal Speed (250mm/s top speed) and a 0.2mm layer height. The pieces are printed individually and screw together.
This is remarkably smooth with crisp details and smooth walls. I printed it in Inland Dual Color Gold/Silver, Polymaker’s Starlight Jupiter (antenna) and Galaxy Dark Blue PolyLite PLA (landing gear), and Protopasta’s Cobalt Blue Translucent HTPLA (glass). Total print time is 2 hours, 30 minutes and 33 seconds.
For a practical print, I used the AnkerMake M5C to make PETG brackets to hold up a new filament shelf. I kept AnkerMake’s default settings for PETG – which restrained wall speed to 150mm/s – and beefed up the print to 6 walls and 25% infill. The top layer is a bit streaky, but the parts are still very strong. This print took 2 hours and 59 minutes using IC3D recycled black PETG.
For TPU I had another practical print – caps for the end of the pipes making up my new RepRack filament shelf. I created these caps in Tinkercad, then uploaded them to Printables. Each cap took 10 minutes and 3 seconds to print using AnkerMake’s default Easy Mode settings for TPU. They printed remarkably smooth and fast with no stringing in Inland Black TPU.
The AnkerMake M5C is a fast printer with some features – or lack thereof – that might take some getting used to. I was concerned about the lack of a screen, but if you’re online all the time, having a phone handy isn’t really that big of an issue. You can also work around the missing screen by keeping an old mobile device with the app loaded near the printer.
One of the fastest bed slingers around, it’s definitely competitive in the current race for speedy 3D printing. With a price tag of $399 and the promise of an optional six spool “color engine” by the end of the year, the AnkerMake M5C is one to watch.
It’s dead simple to use, which makes this a great recommendation for beginners and makers who just want to make. There’s not much to tweak on this printer, so people who like to mod out their machines may want to take a pass and go build a Voron instead.
If you’re interested in other speedy printers at even higher prices, check out the blazing-fast $699 Bambu Lab P1S or the FLSun V400 Delta, a machine that is a joy to watch, but has a hefty price at $849.
If you want speed, ease AND a screen in this price range, check out the similarly priced Sovol SV07 that runs vanilla Klipper for $339. And if you’re on a really tight budget, then the Klippered Elegoo Neptune 4, priced at $259 might be just the ticket. However, at $399, the AnkerMake M5C offers a great balance of features at a reasonable price. Just keep that phone handy.
Today we spotted a large discount on the Asus ROG Strix G18 gaming laptop that’s now reduced to $2,099 at Best Buy. This is still a very high ticket price item, but you do get some impressive hardware in this large chassis. The 18-inch screen has a QHD resolution and a very fast 240Hz refresh rate – helping to power these specs are an Nvidia RTX 4080 GPU and an Intel Core i9-13980HX processor.
An alternative to either AMD or Nvidia on the GPU front, this AIB edition Acer Predator BiFrost Intel Arc A770 has a slight discount at $309 at Newegg. This GPU comes with 16GB of VRAM, and Intel has been delivering frequent driver updates that continue to improve its performance.
Pick up this last-generation AMD Ryzen 7 5800X CPU for just $189 – still a solid performer if you want to put together a budget-build PC or upgrade a previous AM4 system. With eight cores and 16 threads, this CPU remains highly capable of both productivity work and gaming.